
AudioScene Builder (BETA)


This feature is still in BETA, meaning the API might change. The documentation is not complete yet, but will be updated soon!

With the AudioScene class you can turn code like this:

```ts
import { Manager, Renderer, Source } from "@atmokyaudio/websdk";

let renderer: Renderer;
let source: Source;
let manager = new Manager();
let context = new AudioContext();

manager.prepareContext(context).then(() => {
    const NUM_SOURCES = 10;
    renderer = manager.createRenderer(NUM_SOURCES);
    renderer.connect(context.destination, 0, 0);

    source = renderer.createSource();
    source.setInput(context.createOscillator());
    source.setPosition(3, 2, 1);
});
```

into this:

```ts
import { AudioScene } from "@atmokyaudio/websdk";

let scene = new AudioScene("config.json");
```

All the scene information is stored in a JSON file like this:

```json
{
    "sources": [
        {
            "name": "Shaker",
            "file": "../audio/shaker.mp3",
            "type": "AudioFile",
            "position": {
                "x": 10,
                "y": 3,
                "z": 0.5
            },
            "reverbSendDecibels": -6,
            "shouldStartPlaying": true,
            "looping": true
        }
    ],
    "attenuationCurves": [
        {
            "name": "MainCurve",
            "ax": 0.1,
            "ay": 0,
            "bx": 1,
            "by": -5,
            "cx": 20,
            "cy": -50,
            "maxDistance": 60
        }
    ],
    "listener": {
        "position": {
            "x": 0,
            "y": -1,
            "z": 0.8
        },
        "orientation": {
            "yaw": 0.3,
            "pitch": -0.2,
            "roll": 0.0
        }
    }
}
```

Scene File

The scene description can come from a JSON file like the example given above, but can also be provided via a JavaScript object.
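For instance, the JSON scene above can be expressed directly as an object and handed to the constructor. This is a sketch whose values simply mirror the earlier example:

```typescript
// Scene description as a plain object, equivalent to the JSON file above.
// All values are illustrative; adjust paths and positions for your scene.
const sceneConfig = {
    sources: [
        {
            name: "Shaker",
            file: "../audio/shaker.mp3",
            type: "AudioFile",
            position: { x: 10, y: 3, z: 0.5 },
            reverbSendDecibels: -6,
            shouldStartPlaying: true,
            looping: true
        }
    ],
    attenuationCurves: [
        { name: "MainCurve", ax: 0.1, ay: 0, bx: 1, by: -5, cx: 20, cy: -50, maxDistance: 60 }
    ],
    listener: {
        position: { x: 0, y: -1, z: 0.8 },
        orientation: { yaw: 0.3, pitch: -0.2, roll: 0.0 }
    }
};

// The object can be passed to the constructor in place of a file path:
// const scene = new AudioScene(sceneConfig);
```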

The idea is that the scene description can be exported from almost any programming language. We will also provide DAW audio plug-ins that sound designers can use to create audio scenes. They can audition the sources within the scene with the same rendering as in the browser, and export the audio and scene file once they are satisfied.


Please find the API for the AudioScene builder below. Once the interface is stable, it will be added to the API reference.

```ts
enum SourceType {
    AudioFile = "AudioFile",
    Empty = "EmptyInput"
}

declare type SourceDescription = {
    name: string;
    type: SourceType | string;
    file?: string;
    shouldStartPlaying?: boolean;
    looping?: boolean;
    position: {
        x: number;
        y: number;
        z: number;
    };
    reverbSendDecibels?: number;
    gainDecibels?: number;
    occlusion?: number;
    attenuationCurveIndex?: number;
};

declare type AttenuationCurveDescription = {
    name: string;
    ax: number;
    ay: number;
    bx: number;
    by: number;
    cx: number;
    cy: number;
    maxDistance: number;
};

declare type ListenerConfig = {
    position?: {
        x: number;
        y: number;
        z: number;
    };
    orientation?: {
        yaw: number;
        pitch: number;
        roll: number;
    };
};

declare type AudioSceneConfig = {
    sources: SourceDescription[];
    attenuationCurves: AttenuationCurveDescription[];
    listener?: ListenerConfig;
};

declare class AudioScene {
    context: BaseAudioContext;
    renderer: Renderer;
    sources: Map<string, Source>;
    setupComplete: Promise<void>;
    constructor(config: AudioSceneConfig | string, options?: AudioContextOptions, context?: BaseAudioContext);
    startContext(): void;
    private setupScene;
    private static fetchAudioFile;
}
```
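As a sketch of how these pieces fit together in the browser (the scene path and source name are placeholders taken from the examples above, and this assumes `AudioScene` is exported from `@atmokyaudio/websdk`):

```typescript
import { AudioScene } from "@atmokyaudio/websdk";

// Load the scene description from a JSON file (path is illustrative).
const scene = new AudioScene("config.json");

// setupComplete resolves once the renderer and all sources have been created.
scene.setupComplete.then(() => {
    // Browsers require a user gesture before audio can play, so call
    // startContext() from e.g. a click handler to resume the AudioContext.
    scene.startContext();

    // Sources are accessible by the names given in the scene description.
    const shaker = scene.sources.get("Shaker");
    shaker?.setPosition(5, 2, 1);
});
```

Since this sketch depends on a running browser `AudioContext`, it is meant as orientation for the class members above rather than a standalone script.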