diff --git a/.vscode/settings.json b/.vscode/settings.json
index f85eede..b34a6f3 100644
--- a/.vscode/settings.json
+++ b/.vscode/settings.json
@@ -25,5 +25,7 @@
         "titleBar.inactiveBackground": "#21573299",
         "titleBar.inactiveForeground": "#e7e7e799"
     },
-    "peacock.color": "#215732"
+    "peacock.color": "#215732",
+    "editor.snippetSuggestions": "bottom",
+    "emmet.showSuggestionsAsSnippets": true,
 }
\ No newline at end of file
diff --git a/docs/alphatex/introduction.mdx b/docs/alphatex/introduction.mdx
index c9810a7..53dd31c 100644
--- a/docs/alphatex/introduction.mdx
+++ b/docs/alphatex/introduction.mdx
@@ -6,8 +6,8 @@ import { AlphaTexSample } from '@site/src/components/AlphaTexSample';
 
 In this section you find all details about how to write music notation using AlphaTex.
 AlphaTex is a text format for writing music notation for AlphaTab. AlphaTex loading
-can be enabled by specifying `data-tex="true"` on the container element.
-AlphaTab will load the tex code from the element contents and parse it.
+can be enabled by setting the [`tex`](/docs/reference/settings/core/tex) option or by loading it via the [`tex()`](/docs/reference/api/tex) method on the API.
+AlphaTab will load the tex code from the element contents and parse it. You can also load it from a file, just like the other supported formats.
 
 AlphaTex supports most of the features alphaTab supports overall. If you find anything missing you would like to see,
 feel free to [initiate a Discussion on GitHub](https://github.com/CoderLine/alphaTab/discussions/new) so we can find a good solution together.
@@ -30,3 +30,23 @@ Here is an example score fully rendered using alphaTex.
     15.1.8 :16 14.1{tu 3} 15.1{tu 3} 14.1{tu 3} :8 17.2 15.1 14.1 :16 12.1{tu 3} 14.1{tu 3} 12.1{tu 3} :8 15.2 14.2 |
     12.2 14.3 12.3 15.2 :32 14.2{h} 15.2{h} 14.2{h} 15.2{h} 14.2{h} 15.2{h} 14.2{h} 15.2{h} 14.2{h} 15.2{h} 14.2{h} 15.2{h} 14.2{h} 15.2{h} 14.2{h} 15.2{h}
 `}
+
+
+## General Song Structure
+
+An alphaTex song has the following general structure. C-style comments are supported via `// Single Line` and `/* Multi Line */`.
+
+
+```title=General File Structure
+/* Song Metadata */
+.
+/* Song Contents */
+.
+/* Sync Points */
+```
+
+The Song Metadata and Sync Points sections are optional, but the separating dots are mandatory whenever the respective section has content.
+
+* Song Metadata: This section contains general information about the song, like the title.
+* Song Contents: This section defines the whole song contents with all the tracks, staves, bars, beats and notes that alphaTab supports. Bars are separated by `|` symbols.
+* Sync Points: alphaTab can be synchronized with external media like audio backing tracks or videos. To have the correct cursor display and highlighting, songs have to be synchronized. This section defines such markers, as shown in the example below.
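+
+For illustration, here is a minimal sketch that fills all three sections. The concrete values (title, tempo, notes and the millisecond offsets) are made up:
+
+```title=Minimal Example
+// Song Metadata
+\title "My Song"
+\tempo 90
+.
+// Song Contents
+1.1*4 | 2.1*4
+.
+// Sync Points
+\sync 0 0 0
+\sync 1 0 2700
+```
+
+As described above, the Song Metadata and Sync Points sections (together with their separating dots) can be left out entirely when they are not needed.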
\ No newline at end of file
diff --git a/docs/alphatex/percussion.mdx b/docs/alphatex/percussion.mdx
index 417e313..875c5c2 100644
--- a/docs/alphatex/percussion.mdx
+++ b/docs/alphatex/percussion.mdx
@@ -1,11 +1,11 @@
 ---
 title: Percussion
-since: 1.4.0-alpha.1026
+since: 1.4.0
 ---
 
 import { SinceBadge } from '@site/src/components/SinceBadge';
 
-
+
 import { AlphaTexSample } from '@site/src/components/AlphaTexSample';
diff --git a/docs/alphatex/sync-points.mdx b/docs/alphatex/sync-points.mdx
new file mode 100644
index 0000000..97a5bfe
--- /dev/null
+++ b/docs/alphatex/sync-points.mdx
@@ -0,0 +1,34 @@
+---
+title: Sync Points
+---
+
+
+alphaTex supports specifying sync points for the [synchronization with external media](/docs/guides/audio-video-sync).
+
+Sync points are specified as a flat list at the end of the song contents, separated by a dot `.`.
+As we consider it unlikely that authors write this information manually, the sync points are kept separate from the rest of the song.
+This way, tools like the [Media Sync Editor](/docs/playground/) on the Playground can be used to synchronize songs and
+the sync info can be copy-pasted after the main song.
+
+The supported sync point formats are:
+
+* `\sync BarIndex Occurence MillisecondOffset`
+* `\sync BarIndex Occurence MillisecondOffset RatioPosition`
+
+Where:
+
+* `BarIndex` is the numeric (0-based) index of the bar for which the sync point applies.
+* `Occurence` is the numeric (0-based) index of the bar repetition. With repeats or jumps, bars might be played multiple times; this value allows specifying sync points for subsequent plays of a bar.
+* `MillisecondOffset` is the timestamp in milliseconds within the external audio.
+* `RatioPosition` is the relative offset within the bar at which the sync point is placed (0 if not provided).
+
+The `BarIndex`, `Occurence` and `RatioPosition` values define the absolute position within the music sheet.
+The `MillisecondOffset` defines the absolute position within the external media.
+
+With this information known, alphaTab can synchronize the external media with the music sheet.
+
+The sample below uses an audio backing track with inconsistent tempos. The sync points compensate for the tempo differences so that the cursor is placed correctly.
+
+import { AlphaTexSyncPointSample } from '@site/src/components/AlphaTexSyncPointSample';
+
+
\ No newline at end of file
diff --git a/docs/getting-started/configuration-web.mdx b/docs/getting-started/configuration-web.mdx
index 74ef36d..6717c11 100644
--- a/docs/getting-started/configuration-web.mdx
+++ b/docs/getting-started/configuration-web.mdx
@@ -13,9 +13,8 @@ Simply create a div container where you want alphaTab to be located on your
 page to the available width of the div when using the page layout. If you prefer a fixed layout
 simply set a fixed width on the div via CSS and no resizes to alphaTab will happen either.
 
-If jQuery is detected on the page you can use the jQuery plugin to interact with alphaTab. Otherwise alphaTab is initailized using a special `API`
-object. The main namespace `alphaTab` contains every class and enum exposed by the API. The main api is the `alphaTab.AlphaTabApi`
-class:
+The main namespace `alphaTab` and its sub-namespaces like `model` or `midi` contain all types and functionality provided by alphaTab. The main API is the `alphaTab.AlphaTabApi`
+class which can be used to initialize and interface with alphaTab:
 
 ```html
 <div
@@ -27,78 +26,20 @@ const api = new alphaTab.AlphaTabApi(element);
 ```
 
 ## Settings
 
-There are 2 main ways to initialize alphaTab: either via a settings object or via data attributes.
-Depending on the technologies used in your project either the direct code initialization or the data attributes might be easier to use.
-
-The data attributes might be more suitable for server side rendering technologies where settings are provided from a backend infrastructure
-during page rendering. When printing the main alphaTab div element to the DOM you can pass on any settings you might want to have.
-
-When using client side frontend frameworks like Angular, React or even plain JavaScript it might be more suitable to initialize alphaTab
-via a settings object.
-
-Both systems can be combined while the data attributes will overrule the JSON settings.
-The full list of settings can be found in the [API Reference](/docs/reference/settings).
-
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
-
-
 The settings object is passed to the constructor of the API object as second parameter:
 
 ```js
 const api = new alphaTab.AlphaTabApi(element, {
     // any settings go here
 });
-```
-
-
-
-AlphaTab is initialized via the `$.alphaTab()` plugin. The first parameter is the settings object and the API object will be returned.
-
-```js
-const api = $('#alphaTab').alphaTab();
 ```
-
-
-
-Data Attributes will only allow configuration, you still need to manually initailize alphaTab with one of the other variants.
-But the settings parameter can be simply left out.
-
-```html
-<div
-```
-```js
-const api = new alphaTab.AlphaTabApi(element);
-```
-
-
-
 
 ## Events
 
-Events of alphaTab can be either subscribed on the API object directly, or via the DOM element to which alphaTab is attached.
+Events of alphaTab are subscribed on the API object.
 Please refer to the [API Reference](/docs/reference/api) to find a full list of events available.
 
-
-
-
 Each event has an `.on(handler)` and `.off(handler)` function to subscribe or unsubscribe.
 
 ```js
 api.scoreLoaded.on( (score) => {
     // do something with the score
 });
 ```
-
-
-
 ## API
 
-The main interaction with alphaTab happens through the API object or via jQuery plugin.
+The main interaction with alphaTab happens through the API object.
 Simply use any [available API](/docs/reference/api) to get/set details or trigger actions.
 
-
-
 ```js
 const api = new alphaTab.AlphaTabApi(element);
 api.tex('\title "Hello AlphaTab" . 1.1*4')
-```
-
-
-
-
-```js
-$(element).alphaTab('tex', '\title "Hello AlphaTab" . 1.1*4')
-```
-
-
-
-
\ No newline at end of file
+```
\ No newline at end of file
diff --git a/docs/guides/audio-export.mdx b/docs/guides/audio-export.mdx
new file mode 100644
index 0000000..fb654ed
--- /dev/null
+++ b/docs/guides/audio-export.mdx
@@ -0,0 +1,71 @@
+---
+title: Audio Export
+since: 1.6.0
+---
+import { SinceBadge } from '@site/src/components/SinceBadge';
+
+
+
+This guide shows how to use alphaTab to generate raw audio samples for export purposes. The export feature serves the following use cases:
+
+1. Allow users to download an audio file of the song.
+   If the audio is additionally passed to an audio codec like MP3 or OGG Vorbis, users can save the audio of the music sheet to disk.
+   This is a quite common feature offered to users.
+
+2. Use the raw audio for synchronization with external systems.
+   Your app might have its own mechanisms to provide media playback, additional custom backing tracks, or you might want to split up
+   the individual audio tracks to play them on separate output devices. By pre-computing the audio samples from the synthesizer you can
+   build an external system which combines the alphaTab audio with any custom components.
+
+The external system can then be combined with the [Audio & Video Sync](/docs/guides/audio-video-sync) feature to still have an interactive music sheet that correctly shows what's being played.
+
+
+> [!NOTE]
+> The audio export can be used regardless of the current mode the alphaTab player is in. This allows exporting audio even if an external audio backing track or video is used.
+> Just be sure to pass in the required soundfont in this case. If a synthesizer is already active, the exporter can reuse the already loaded soundfont.
+
+## How to use this?
+
+The audio exporter follows an asynchronous pull pattern:
+
+* _async_ because the exporter uses `Promises` (`Task` for C#, `Deferred` for Kotlin) to provide a clean way of requesting audio data without fighting with callbacks or events.
+* _pull_ because you request the next chunk of audio to be generated and pull the audio into your consumer code.
+
+To export the audio you follow three main steps:
+
+1. You start a new exporter with [`await api.exportAudio(...)`](/docs/reference/api/exportaudio.mdx).
+2. You call [`exporter.render()`](/docs/reference/types/synth/iaudioexporter/render.mdx) to produce a chunk of audio which you can then process further (repeated until the end is reached).
+3. You clean up the exporter via [`exporter.destroy()`](/docs/reference/types/synth/iaudioexporter/destroy.mdx). The exporter also implements `Disposable` (`IDisposable` for C#, `AutoCloseable` for Kotlin) which allows easy cleanup via language features if supported.
+
+> [!WARNING]
+> The raw audio samples for a whole song can consume quite a huge amount of memory. A calculation example:
+>
+> * 4 bytes per sample (32-bit float samples)
+> * 2 audio channels (left and right for stereo sound)
+> * 44100 samples per second
+>
+> A 1 minute song already needs ~21MB of memory (`60s * 4bytes * 2channels * 44100samples/s`), multiply accordingly.
+>
+> To keep the memory pressure low, you might send the chunks into a 3rd party library encoding the audio in a smaller format (e.g. MP3 or OGG Vorbis).
+
+### Available options
+
+The [`AudioExportOptions`](/docs/reference/types/synth/audioexportoptions/index.mdx) allow customizing various aspects of the exported audio:
+
+* [`soundFonts`](/docs/reference/types/synth/audioexportoptions/soundfonts.mdx) can be used to customize the soundfonts used during export.
+* [`sampleRate`](/docs/reference/types/synth/audioexportoptions/samplerate.mdx) can be used to customize the sample rate of the exported audio.
+* [`useSyncPoints`](/docs/reference/types/synth/audioexportoptions/usesyncpoints.mdx) controls whether the sync points of the currently loaded song are applied during audio generation.
+* [`masterVolume`](/docs/reference/types/synth/audioexportoptions/mastervolume.mdx) controls the master volume of the generated audio.
+* [`metronomeVolume`](/docs/reference/types/synth/audioexportoptions/metronomevolume.mdx) controls the volume of the metronome ticks.
+  (keep in mind that using `useSyncPoints` changes the audio duration; the metronome is aligned with the music notes, not with the synthesized audio)
+* [`playbackRange`](/docs/reference/types/synth/audioexportoptions/playbackrange.mdx) controls the audio range which is exported.
+* [`trackVolume`](/docs/reference/types/synth/audioexportoptions/trackvolume.mdx) controls the volume of every track (as a percentage of the already configured absolute volume).
+* [`trackTranspositionPitches`](/docs/reference/types/synth/audioexportoptions/tracktranspositionpitches.mdx) controls an additional transposition pitch for the tracks.
+
+## Example
+
+This example exports the audio and creates a [WAV File](https://en.wikipedia.org/wiki/WAV) out of the samples. WAV files contain the raw samples; we just need
+to write the correct file header. In this demo we then create a Blob URL in the browser to set the WAV file as the source of an `<audio>` element.
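+
+A minimal sketch of such an export could look as follows. The exact chunk shape (assumed here to expose the interleaved stereo samples as a `Float32Array` via `chunk.samples`), the `render(500)` millisecond argument, its empty result at the end of the song and the `alphaTab.synth.AudioExportOptions` constructor are assumptions for illustration; check the linked [`exportAudio`](/docs/reference/api/exportaudio.mdx) and [`IAudioExporter`](/docs/reference/types/synth/iaudioexporter/render.mdx) references for the exact API.
+
+```js
+async function exportWav(api) {
+  // Configure the export (see "Available options" above).
+  const options = new alphaTab.synth.AudioExportOptions();
+  options.masterVolume = 1;
+  options.sampleRate = 44100;
+
+  // Pull the audio chunk by chunk to keep the memory pressure low.
+  const exporter = await api.exportAudio(options);
+  const chunks = [];
+  let totalSamples = 0;
+  try {
+    while (true) {
+      const chunk = await exporter.render(500); // request ~500ms of audio
+      if (!chunk) {
+        break; // assumed: no chunk means the end of the song was reached
+      }
+      chunks.push(chunk.samples);
+      totalSamples += chunk.samples.length;
+    }
+  } finally {
+    exporter.destroy(); // always clean up the exporter
+  }
+
+  return encodeWav(chunks, totalSamples, options.sampleRate, 2);
+}
+
+// Wraps 32-bit float samples into a 16-bit PCM WAV blob (44 byte header + data).
+function encodeWav(buffers, totalSamples, sampleRate, channels) {
+  const bytesPerSample = 2;
+  const dataSize = totalSamples * bytesPerSample;
+  const buffer = new ArrayBuffer(44 + dataSize);
+  const view = new DataView(buffer);
+  const writeString = (offset, text) => {
+    for (let i = 0; i < text.length; i++) {
+      view.setUint8(offset + i, text.charCodeAt(i));
+    }
+  };
+
+  writeString(0, 'RIFF');
+  view.setUint32(4, 36 + dataSize, true);
+  writeString(8, 'WAVE');
+  writeString(12, 'fmt ');
+  view.setUint32(16, 16, true); // fmt chunk size
+  view.setUint16(20, 1, true); // PCM
+  view.setUint16(22, channels, true);
+  view.setUint32(24, sampleRate, true);
+  view.setUint32(28, sampleRate * channels * bytesPerSample, true); // byte rate
+  view.setUint16(32, channels * bytesPerSample, true); // block align
+  view.setUint16(34, 16, true); // bits per sample
+  writeString(36, 'data');
+  view.setUint32(40, dataSize, true);
+
+  // Convert the float samples (-1..1) to 16-bit integers.
+  let offset = 44;
+  for (const samples of buffers) {
+    for (let i = 0; i < samples.length; i++) {
+      const s = Math.max(-1, Math.min(1, samples[i]));
+      view.setInt16(offset, s < 0 ? s * 0x8000 : s * 0x7fff, true);
+      offset += 2;
+    }
+  }
+
+  return new Blob([buffer], { type: 'audio/wav' });
+}
+
+// Usage: turn the blob into an object URL and feed it to an <audio> element.
+// exportWav(api).then(blob => {
+//   document.querySelector('audio').src = URL.createObjectURL(blob);
+// });
+```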