Getting started
Once installed, the atmoky WebSDK can be imported as an ES6 module:
import { Manager } from "@atmokyaudio/websdk";
The SDK is split into two parts: the WebAssembly rendering backend, which lives in an AudioContext, and the developer-facing library, which provides easy-to-use abstractions that take care of communication with the backend.
Basic Setup
When it comes to the atmoky WebSDK, the Manager is your friend! It will take care of preparing an AudioContext instance so the rendering backend can be created in its scope. Creating the Renderer requires you to specify the maximum number of sources you want to use in this session.
import { Manager, Renderer } from "@atmokyaudio/websdk";
let renderer: Renderer;
let manager = new Manager();
let context = new AudioContext();
manager.prepareContext(context).then(() => {
  let numSources = 10;
  renderer = manager.createRenderer(numSources);
  // connect to the browser's audio output
  renderer.connect(context.destination, 0, 0);
  console.log("Setup complete!");
});
The Renderer acts as an AudioNode which you can connect to the destination node or any other AudioNode of the AudioContext (e.g. a compressor/limiter).
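For example, the renderer's output can be routed through a compressor before it reaches the speakers. This sketch uses the standard Web Audio DynamicsCompressorNode and assumes the `context` and `renderer` variables from the setup code above; the parameter values are illustrative only.

```typescript
// Sketch: insert a limiter between the renderer and the output.
// Assumes `context` and `renderer` from the setup code above.
const limiter = context.createDynamicsCompressor();
limiter.threshold.value = -6; // start reducing gain at -6 dBFS
limiter.knee.value = 0;       // hard knee
limiter.ratio.value = 20;     // high ratio, limiter-like behavior

renderer.connect(limiter, 0, 0);
limiter.connect(context.destination);
```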
The maximum number of sources can't be changed during the lifetime of the renderer. If you want the processor to handle more inputs, you need to create a new instance.
The memory overhead for each additional source is quite small compared to the main processor, so you can set the number higher initially and leave inputs unused until you need them.
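If you do hit the limit, the only way to grow is a fresh renderer. A minimal sketch, assuming the `manager`, `context`, and `renderer` from the setup above, and assuming the old renderer can be detached with `disconnect()` like a regular AudioNode:

```typescript
// Sketch: replace the renderer with one that has more inputs.
// Assumes `manager`, `context`, and `renderer` from the setup above;
// disconnect() is assumed to behave as on a regular AudioNode.
renderer.disconnect();                 // detach the old renderer
renderer = manager.createRenderer(32); // new instance with 32 inputs
renderer.connect(context.destination, 0, 0);
```

Note that source handles were created by the old renderer, so they would presumably have to be recreated on the new instance.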
Create source handles and connect input signals
The Source class is a handle for an audio object which you can position in space and assign input signals to. You can create a source by calling createSource on the renderer:
let source = renderer.createSource();
source.setInput(/* AudioNode */);
source.setPosition(5, 3, 0);
Calling setInput will connect the source to the input of the renderer by default. You can prevent this by passing an additional false argument and connecting it manually later on.
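For instance, you might want to prepare a source silently and only wire it into the renderer once playback should actually start. The sketch below assumes the `context` and `renderer` from the setup above; the exact manual connect call is an assumption.

```typescript
// Sketch: attach an input signal without the automatic connection.
// Assumes `context` and `renderer` from the setup code above.
const osc = context.createOscillator();
osc.start();

const deferredSource = renderer.createSource();
deferredSource.setInput(osc, false); // false: skip automatic connection
deferredSource.setPosition(-5, 3, 0);

// ... later, when the source should become audible:
deferredSource.connect(); // assumption: manual connect call on the source
```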
The code above also sets the position of the source to (5, 3, 0), front left of the listener. See the Coordinate System article for more information about the coordinate system used.
Only the number of connected sources will affect the CPU usage; unused inputs won't use rendering resources. You can disconnect a source by calling disconnect on it.
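A source can therefore be parked cheaply while it is silent. A minimal sketch, assuming the `source` from above, some input AudioNode `inputNode`, and that `setInput` re-establishes the connection:

```typescript
// Sketch: temporarily take a source out of the rendering.
// Assumes `source` from above and an input AudioNode `inputNode`.
source.disconnect();        // frees rendering resources for this input
// ... later, bring it back:
source.setInput(inputNode); // assumption: setInput reconnects the source
```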
Moving the listener
The Renderer also holds a Listener instance, which can be used to adjust the position and orientation of the listener.
renderer.listener.setPosition(0, -1, 0);
renderer.listener.setRotation(0.2, 0, 0);
This code moves the listener slightly to the right and rotates it a bit around the z-axis.
Sources will be automatically spatialized relative to the listener, and their gains will be adjusted depending on their distance to the listener. For more on distance attenuation, see Going Further.
Adjusting rendering parameters
The renderer also exposes two parameter groups: Reverb and Externalizer. They can be accessed using renderer.reverb and renderer.externalizer, respectively.
renderer.reverb.amount.value = 90;
renderer.externalizer.amount.value = 60;
renderer.externalizer.character.value = 70;
This code sets the reverb level to 90%, the externalizer amount to 60%, and the character to a value of 70. See Going Further for a detailed description of the parameters.
Deleting a source
In case you don't need a source any more and want to make its renderer input available again, you can call removeSource on the renderer, passing the source to remove, or call delete directly on the source.
renderer.removeSource(source);
// or alternatively:
source.delete();