Version: 0.3.0

Basic Setup

In the following, we build a basic setup that can be shared by the provider-specific integrations you'll find on the next pages.

The example will include:

  • the setup of the audio graph
  • the mapping between user-IDs and the corresponding Source instances
  • a routine which updates the positions of the participants
  • the communication-provider-specific integrations which incorporate the remote audio streams into the audio rendering graph (see next pages)

Setting up the renderer

We set up the renderer like we would for any other use case: by creating the Manager, preparing an AudioContext, and finally instantiating the Renderer with our desired number of audio sources and attenuation curves. Once the renderer has been created, voice calls can be started.

```typescript
import { Manager, Renderer, Source } from "@atmokyaudio/websdk";

let renderer: Renderer;
const manager = new Manager();
const context = new AudioContext();

manager.prepareContext(context).then(() => {
    const NUM_SOURCES = 20;
    const NUM_ATTENUATION_CURVES = 1;
    renderer = manager.createRenderer(NUM_SOURCES, NUM_ATTENUATION_CURVES);
    renderer.connect(context.destination, 0, 0);
    // ready to start calls
});
```
note

Don't forget to resume the context (context.resume()) through a user interaction.
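One way to do this is to resume the context on the first click of a UI element. The helper below is a minimal sketch; the button element is a hypothetical part of your application's UI, not provided by the SDK.

```typescript
// Sketch: resume a suspended AudioContext on the first user gesture.
// Browsers keep a freshly created AudioContext suspended until the user
// interacts with the page; "button" is any clickable element of your UI.
function resumeOnGesture(context: AudioContext, button: HTMLElement) {
    button.addEventListener(
        "click",
        () => {
            if (context.state === "suspended") {
                context.resume();
            }
        },
        { once: true } // the handler only needs to fire once
    );
}
```

For example, you could attach this to your app's "Join call" button before starting the call.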

Source Map

In general, the participants can be uniquely identified using a user-id. In this example we assume that the user-id is simply a string, but it can be of any type.

In order to bind a participant to an audio source, we create a Map which maps the participant's user-id to a Source instance.

```typescript
type UserID = string; // can be any type, actually

const sourcesMap = new Map<UserID, Source>();
```

Adding and removing audio sources

We define two functions: one for adding an audio source when a participant joins the room, and one for removing it again once the participant has closed the connection. The addParticipantSource function takes the participant's user-id and the remote audio track which shall be used as the audio feed for the source. The removeParticipantSource function disconnects the source from the renderer and removes it from the map.

```typescript
function addParticipantSource(userID: UserID, track: MediaStreamTrack) {
    const source = renderer.createSource();
    source.name = userID; // will show up in developer tools
    source.setInput(track);
    // set some parameters
    source.setPosition(1.0, 0.0, 0.0); // x y z coordinates e.g. from your game logic
    source.setReverbSendDecibels(-10);
    // add to map
    sourcesMap.set(userID, source);
}

function removeParticipantSource(userID: UserID) {
    const source = sourcesMap.get(userID);
    source?.delete();
    // remove from map
    sourcesMap.delete(userID);
}
```

Synchronization of participant positions

In general, the acoustic spatialization of the participants should be aligned with their visual representation, or at least with the auditory scene you want to create.

Depending on the use case, a position update can be triggered differently.

Examples

Imagine the video streams of the participants are placed on a flat grid in the browser window, as most video chat services do. Once the layout changes, e.g. when a new participant joins or the user resizes the browser window, you might want to recalculate each participant's position and update the acoustic one accordingly, so that users hear the others from their video positions.
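Such a layout-to-position mapping could be sketched as follows. Note that this is hypothetical application logic, not part of the SDK: the grid dimensions, the size of the acoustic "screen", and the coordinate convention (x to the right, y up, negative z in front of the listener) are all assumptions you would adapt to your own scene.

```typescript
type GridPosition = { x: number; y: number; z: number };

// Map a participant's tile index in a cols x rows video grid to a
// position on a virtual plane in front of the listener.
// width/height are the assumed extents of that plane in meters.
function tileToPosition(
    index: number,
    cols: number,
    rows: number,
    width = 4,
    height = 2
): GridPosition {
    const col = index % cols;
    const row = Math.floor(index / cols);
    // center each tile and shift the grid so it is centered on the listener's view axis
    const x = ((col + 0.5) / cols) * width - width / 2;
    const y = height / 2 - ((row + 0.5) / rows) * height;
    const z = -2; // fixed distance in front of the listener (assumed convention)
    return { x, y, z };
}
```

On every layout change you would then call the position-update routine from the next section, e.g. `onPositionUpdate(userId, tileToPosition(i, cols, rows))` for each visible participant.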

Position Update Implementation

The following code assumes that onPositionUpdate is called whenever the application wants to adjust the position of a participant. The call provides the userID of the participant and the new position. The routine then looks up the Source instance of that participant and sets the position.

```typescript
type Position = { x: number; y: number; z: number };

function onPositionUpdate(userId: UserID, position: Position) {
    const source = sourcesMap.get(userId);
    source?.setPosition(position.x, position.y, position.z);
}
```

Full code example

```typescript
import { Manager, Renderer, Source } from "@atmokyaudio/websdk";

let renderer: Renderer;
const manager = new Manager();
const context = new AudioContext();

type UserID = string; // can be any type, actually
const sourcesMap = new Map<UserID, Source>();

manager.prepareContext(context).then(() => {
    const NUM_SOURCES = 20;
    const NUM_ATTENUATION_CURVES = 1;
    renderer = manager.createRenderer(NUM_SOURCES, NUM_ATTENUATION_CURVES);
    renderer.connect(context.destination, 0, 0);
    // ready to start calls
});

function addParticipantSource(userID: UserID, track: MediaStreamTrack) {
    const source = renderer.createSource();
    source.name = userID; // will show up in developer tools
    source.setInput(track);
    // set some parameters
    source.setPosition(1.0, 0.0, 0.0); // x y z coordinates e.g. from your game logic
    source.setReverbSendDecibels(-10);
    // add to map
    sourcesMap.set(userID, source);
}

function removeParticipantSource(userID: UserID) {
    const source = sourcesMap.get(userID);
    source?.delete();
    // remove from map
    sourcesMap.delete(userID);
}

type Position = { x: number; y: number; z: number };

function onPositionUpdate(userId: UserID, position: Position) {
    const source = sourcesMap.get(userId);
    source?.setPosition(position.x, position.y, position.z);
}
```