This repository provides four MCP servers that compile, render, or play Faust DSP code:

- `faust_server.py`: C++ compile pipeline (Faust CLI + g++).
- `faust_server_daw.py`: DawDreamer offline render pipeline.
- `faust_node_server.py`: real-time playback via node-web-audio-api + Faust WASM.
- `faust_browser_server.py`: browser-only runtime proxy + static server.
For background on the Model Context Protocol, see the official MCP documentation.
- `faust_server.py`: MCP server entrypoint (FastMCP) and tool implementation.
- `faust_server_daw.py`: DawDreamer-based MCP server (no C++ compile step).
- `faust_node_server.py`: Real-time MCP server using node-web-audio-api + Faust WASM.
- `faust_browser_server.py`: Browser-only runtime proxy + static server.
- `faust_node_worker.mjs`: Node worker that hosts the real-time DSP graph.
- `analysis_arch.cpp`: Faust C++ architecture used to generate analysis data.
- `t1.dsp`, `t2.dsp`, `noise.dsp`, `probe.dsp`: Example Faust DSP programs.
- `sse_client_example.py`: SSE client example.
- `stdio_client_example.py`: stdio client example.
- `smoke_test.py`: Basic stdio smoke test for both offline servers.
- `Makefile`: Common run/test targets.
- `requirements.txt`: Client-side Python dependencies.
The MCP server variants share a common client interface but differ in how they compile/render Faust DSP code; the diagram below shows three of them.
```mermaid
flowchart LR
    LLM[LLM / MCP Client] -->|SSE or stdio| MCP[MCP Server]
    subgraph Server3["S3: faust_node_server.py"]
        S3["MCP tool calls"] --> PY[Python MCP server]
        PY -->|stdin/stdout JSON| NODE[faust_node_worker.mjs]
        NODE --> NWA[node-web-audio-api + faustwasm]
        NODE --> UI["Optional UI server: faust UI or fallback"]
    end
    subgraph Server2["S2: faust_server_daw.py"]
        S2["MCP tool call"] --> DD["DawDreamer + Faust DSP"]
        DD --> JSON2[Analysis JSON + features]
    end
    subgraph Server1["S1: faust_server.py"]
        S1["MCP tool call"] --> CLI["faust CLI + analysis_arch.cpp"]
        CLI --> BIN[Native C++ binary]
        BIN --> JSON1[Analysis JSON]
    end
```
Notes:
- SSE is the recommended transport for web clients; stdio is useful for local CLI tools.
- The real-time server returns parameter metadata and current values, not offline analysis.
- Real-time tools: `compile_and_start`, `check_syntax`, `get_params`, `set_param`, `set_param_values`, `get_param`, `get_param_values`, `get_audio_metrics`, `save_wasm_module`, `load_wasm_module`, `get_midi_inputs`, `get_midi_status`, `select_midi_input`, `stop`, `destroy`.
- Offline tools: `compile_and_analyze`.
- DawDreamer and real-time servers accept optional `input_source` (`none`, `sine`, `noise`, `file`), `input_freq` (Hz), and `input_file` (path) to inject test inputs.
```sh
make setup
make smoke-test DSP=t1.dsp
```

Real-time setup:

```sh
make setup-rt
```

MIDI input (Node backend):

- Use `get_midi_inputs` to list available inputs.
- Use `get_midi_status` to confirm the selected device and see the last MIDI message.
- Use `select_midi_input` with `index` or `name` to choose a single active device.
- Selection is session-only (no persistence across restarts).

Faust UI setup (optional):

```sh
make setup-ui
```

Cleanup:

```sh
make clean
```

Environment variables:

- `MCP_HOST` (default: `127.0.0.1`)
- `MCP_PORT` (default: `8000`)
- `MCP_TRANSPORT` (default: `sse`)
  - Supported values: `sse`, `streamable-http`, `stdio`
- `MCP_MOUNT_PATH` (optional, SSE only)
- `TMPDIR` (recommended): temp folder used by the compiler toolchain
- Accept a Faust DSP string via the `compile_and_analyze` tool.
- Write it to a temporary `process.dsp` file.
- Compile the Faust DSP to C++ using `analysis_arch.cpp`.
- Compile the generated C++ into a native binary (C++11+).
- Run the binary to produce JSON analysis output.
- Return the JSON result to the MCP client.
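The step sequence above can be sketched as data. This hypothetical helper only builds the commands the pipeline would run (the `build_pipeline` name and the exact flag layout are illustrative, not the server's actual code):

```python
from pathlib import Path

def build_pipeline(workdir, arch="analysis_arch.cpp"):
    """Return the command sequence for the Faust -> C++ -> binary -> JSON pipeline."""
    work = Path(workdir)
    dsp = work / "process.dsp"      # temporary DSP file written from the tool input
    cpp = work / "process.cpp"      # C++ generated by the Faust compiler
    binary = work / "process_bin"   # native analysis binary
    return [
        ["faust", "-a", arch, "-o", str(cpp), str(dsp)],      # Faust -> C++
        ["g++", "-std=c++11", str(cpp), "-o", str(binary)],   # C++ -> binary
        [str(binary)],                                        # run: prints JSON analysis
    ]

steps = build_pipeline("/tmp/faust-mcp")
```

Each entry could then be passed to `subprocess.run`, checking the return code at every stage before moving on.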
Requirements:

- Python 3.10+
- Faust CLI available in `PATH` (`faust`)
- C++ compiler (`g++`) with C++11+ support
- Python package `mcp`
```sh
MCP_TRANSPORT=sse MCP_HOST=127.0.0.1 MCP_PORT=8000 \
TMPDIR=/path/to/tmp \
python3 faust_server.py
```

Default SSE endpoint: `http://127.0.0.1:8000/sse`

Stdio:

```sh
MCP_TRANSPORT=stdio python3 faust_server.py
```

For `faust_server.py`, set `TMPDIR` to a writable path if compilation fails:

```sh
MCP_TRANSPORT=stdio TMPDIR=/tmp/faust-mcp-test python3 faust_server.py
```

Input:

- `faust_code` (string): the DSP source code

Output: JSON string with:

- `status`
- `max_amplitude`
- `rms`
- `is_silent`
- `waveform_ascii`
- `num_outputs`
- `channels` (array of per-output metrics)
The analysis is performed by `analysis_arch.cpp` and returns a JSON payload with
these fields:

- `status`: hard-coded to `"success"` when the binary completes.
- `max_amplitude`: maximum absolute value of the mono mix over the full render. The mono mix is the average of all output channels per sample.
- `rms`: root-mean-square of the mono mix over the full render.
- `is_silent`: `true` when `max_amplitude < 0.0001`, otherwise `false`.
- `waveform_ascii`: a 60-character ASCII summary of the mono mix. Each character represents a chunk of the rendered buffer and is chosen by peak magnitude: `_` for near-silence (< 0.01), `#` for > 0.5, `=` for > 0.2, and `-` otherwise.
- `num_outputs`: number of output channels produced by the DSP.
- `channels`: array of per-output objects with:
  - `index` (0-based output index)
  - `max_amplitude`
  - `rms`
  - `is_silent`
  - `waveform_ascii`
Render details:
- Sample rate: 44100 Hz
- Duration: 2 seconds (88200 samples)
- Processing block size: 256 frames
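As an illustration, the documented metrics can be reproduced in a few lines of Python. This is a sketch of the formulas described above, not the actual C++ implementation:

```python
import math

def analyze(mix):
    """Compute max_amplitude, rms, is_silent, and waveform_ascii for a mono mix."""
    max_amp = max((abs(s) for s in mix), default=0.0)
    rms = math.sqrt(sum(s * s for s in mix) / len(mix)) if mix else 0.0
    chunks = 60  # waveform_ascii is always 60 characters
    size = max(1, len(mix) // chunks)
    chars = []
    for i in range(chunks):
        peak = max((abs(s) for s in mix[i * size:(i + 1) * size]), default=0.0)
        if peak < 0.01:
            chars.append("_")   # near-silence
        elif peak > 0.5:
            chars.append("#")
        elif peak > 0.2:
            chars.append("=")
        else:
            chars.append("-")
    return {
        "max_amplitude": max_amp,
        "rms": rms,
        "is_silent": max_amp < 0.0001,
        "waveform_ascii": "".join(chars),
    }

# A full-scale 1 kHz sine at the documented render settings (44100 Hz, 2 s):
sine = [math.sin(2 * math.pi * 1000 * n / 44100) for n in range(88200)]
result = analyze(sine)
```

For a full-scale sine this yields `max_amplitude` close to 1.0, `rms` close to 0.707, and a `waveform_ascii` made entirely of `#` characters.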
This variant uses DawDreamer to compile
and render Faust DSP directly in Python, so you do not need to generate and compile C++ code.
It renders offline audio and returns the same analysis metrics plus a dawdreamer info block and
DawDreamer-only features.
- Python 3.10+
- `dawDreamer` (import name can be `dawDreamer` or `dawdreamer`)
- `numpy` for spectral features (otherwise `spectral_available` is `false`)

Install:

```sh
python3 -m pip install dawDreamer
```

Environment variables:

- `DD_SAMPLE_RATE`, `DD_BLOCK_SIZE`, `DD_RENDER_SECONDS` for rendering
- `DD_FFT_SIZE`, `DD_FFT_HOP`, `DD_ROLLOFF` for spectral analysis
`compile_and_analyze` accepts optional `input_source` (`none`, `sine`, `noise`, `file`),
`input_freq` (Hz for sine, default 1000), and `input_file` (path for `file`) to inject
test inputs for effects. For DawDreamer, `input_file` must be a local WAV path
and requires `numpy` for decoding.
```sh
MCP_TRANSPORT=sse MCP_HOST=127.0.0.1 MCP_PORT=8000 \
DD_SAMPLE_RATE=44100 DD_BLOCK_SIZE=256 DD_RENDER_SECONDS=2.0 \
python3 faust_server_daw.py
```

Makefile targets:

```sh
make run-daw
make client-daw DSP=t1.dsp
```

`make client-daw DSP=...` runs the SSE client against the DawDreamer server using
that DSP file. You can also use:

```sh
make client-sse DSP=t1.dsp
```

`compile_and_analyze` with a test input source (DawDreamer):

```sh
python3 sse_client_example.py --url http://127.0.0.1:8000/sse \
  --tool compile_and_analyze --dsp t1.dsp --input-source noise
```

Additional output fields (relative to the C++ server):

- `features` (global time + spectral features)
- Per-channel `features`
- `dawdreamer` object with render settings and version info
Example output (truncated):
```json
{
  "status": "success",
  "max_amplitude": 0.990577,
  "rms": 0.49998,
  "is_silent": false,
  "waveform_ascii": "############################################################",
  "num_outputs": 2,
  "features": {
    "dc_offset": 0.0001,
    "zero_crossing_rate": 0.022,
    "crest_factor": 1.98,
    "clipping_ratio": 0.0,
    "spectral_centroid": 1000.0,
    "spectral_bandwidth": 120.0,
    "spectral_rolloff": 1500.0,
    "spectral_flatness": 0.12,
    "spectral_flux": 0.04,
    "spectral_frame_size": 2048,
    "spectral_hop_size": 1024,
    "spectral_rolloff_ratio": 0.85,
    "spectral_available": true
  },
  "channels": [
    {
      "index": 0,
      "max_amplitude": 1.0,
      "rms": 0.707111,
      "is_silent": false,
      "waveform_ascii": "############################################################",
      "features": {
        "dc_offset": 0.0001,
        "zero_crossing_rate": 0.022,
        "crest_factor": 1.98,
        "clipping_ratio": 0.0,
        "spectral_centroid": 1000.0,
        "spectral_bandwidth": 120.0,
        "spectral_rolloff": 1500.0,
        "spectral_flatness": 0.12,
        "spectral_flux": 0.04,
        "spectral_frame_size": 2048,
        "spectral_hop_size": 1024,
        "spectral_rolloff_ratio": 0.85,
        "spectral_available": true
      }
    }
  ],
  "dawdreamer": {
    "version": "0.7.0",
    "sample_rate": 44100,
    "block_size": 256,
    "render_seconds": 2.0,
    "num_channels": 2
  }
}
```

This variant compiles Faust DSP code to WebAudio on the fly and plays it in real time
using the node-web-audio-api runtime. It returns parameter metadata extracted from
the Faust JSON so an LLM can control the running DSP (no offline analysis metrics).
node-web-audio-api is an open-source Node.js implementation of the Web Audio API that provides AudioContext/AudioWorklet support outside the browser, backed by native audio I/O.
- Node.js
- `node-web-audio-api` checkout at `WEBAUDIO_ROOT` (submodule: `external/node-web-audio-api`)
- `@grame/faustwasm` installed in that checkout
- Optional: `@julusian/midi` backend for Node-side MIDI input (submodule: `external/node-midi`)
- Optional: `@shren/faust-ui` installed in `ui/` for the UI bridge
Environment variables:
- `WEBAUDIO_ROOT`: node-web-audio-api path (default `external/node-web-audio-api`)
- `FAUST_UI_PORT`: enable UI server on this port (optional)
- `FAUST_UI_ROOT`: path to a built `faust-ui` bundle (optional, overrides auto-detect)
- `FAUST_WORKER_PATH`: absolute path to `faust_node_worker.mjs` (optional)
Submodule setup (one-time):

```sh
git submodule update --init --recursive
cd external/node-web-audio-api
npm install
npm run build
```

MIDI backend setup (optional):

```sh
git submodule update --init --recursive external/node-midi
cd external/node-midi
npm install
npm run build:ts
```

Notes:

- The real-time server runs one DSP at a time; `compile_and_start` replaces it.
- Parameter paths come from Faust JSON, not `RT_NAME`. Use `make rt-get-params`.
- `npm run build` generates `node-web-audio-api.build-release.node` and should be re-run if you update the submodule or switch branches.
- If you get no sound, check OS audio permissions and the default output device.
- Set `FAUST_MIDI_DEBUG=1` to log raw MIDI messages and note-on/off counters (stderr).
```sh
WEBAUDIO_ROOT=external/node-web-audio-api \
MCP_TRANSPORT=sse MCP_HOST=127.0.0.1 MCP_PORT=8000 \
python3 faust_node_server.py
```

Set `FAUST_UI_PORT` to start a small HTTP UI server (fallback sliders). If you
have the `@shren/faust-ui` package installed (in `ui/`), the server will auto-load
it. You can also point `FAUST_UI_ROOT` to a custom bundle directory so the page
can load `/faust-ui/index.js` and use it instead of the fallback UI.
The UI page (ui/rt-node-ui.html) connects to the running DSP over a lightweight
HTTP JSON API hosted by the Node worker (faust_node_worker.mjs) using
Node's built-in http server. This is separate from the MCP transport: MCP
still runs over SSE/stdio between the Python server and the client, while the UI
talks directly to the worker via HTTP.
Endpoints used by the UI:
- `GET /status`: current DSP name and running status
- `GET /json`: full Faust JSON for the current DSP
- `GET /params`: cached parameter metadata
- `GET /param-values`: current parameter values (polled)
- `POST /param`: set a parameter value `{ path, value }`
- `GET /audio-metrics`: scope/spectrum/probe data (polled)
- `WS /ws`: optional metrics stream (scope/spectrum/probes) for the UI
- `GET /faust-ui/*`: static assets for `@shren/faust-ui` (optional)
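A UI-side poll over `/param-values` might look like the following sketch. The fetcher is injected so the loop can be exercised without a running worker, and the `values` payload shape is an assumption for illustration, not the worker's documented schema:

```python
def sync_params(fetch_json, cache):
    """Poll /param-values and return the paths whose values changed.

    fetch_json is any callable that takes a path like "/param-values" and
    returns the decoded JSON payload (e.g. urllib against the worker's port).
    """
    payload = fetch_json("/param-values")
    changed = []
    for entry in payload.get("values", []):
        path, value = entry["path"], entry["value"]
        if cache.get(path) != value:
            cache[path] = value  # remember the latest value seen
            changed.append(path)
    return changed

# Stub fetcher standing in for HTTP calls to the Node worker.
def fake_fetch(path):
    return {"values": [{"path": "/freq", "value": 440},
                       {"path": "/gain", "value": 0.2}]}

cache = {"/freq": 220}
changed = sync_params(fake_fetch, cache)
```

Only the changed paths need to be re-rendered in the UI, which keeps the polling loop cheap.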
The page polls /json and /status to detect DSP changes, and polls
/param-values on a short interval to keep the UI in sync with parameter
updates coming from MCP (set_param). It also polls /audio-metrics to render
scope/spectrum data and build the probe scope history buffer. When /ws is
available, the UI switches to WebSocket streaming and falls back to polling if
the WS connection is unavailable.
For polyphonic DSPs, the UI also shows the current count of active voices
just below the MIDI device selector.
Probe scopes are derived from get_audio_metrics().probes values. The UI
selects a probe ID and plots a rolling history of those values (no extra DSP
analysis is required).
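The rolling probe history can be kept in a plain fixed-length buffer; a sketch (the buffer length and payload handling are illustrative):

```python
from collections import deque

HISTORY_LEN = 256  # illustrative scope width in samples

def update_probe_history(history, metrics, probe_id):
    """Append the selected probe's latest value from a get_audio_metrics payload."""
    for probe in metrics.get("probes", []):
        if probe["id"] == probe_id:
            history.append(probe["value"])  # deque(maxlen=...) drops oldest values
            return probe["value"]
    return None  # selected probe not present in this frame

history = deque(maxlen=HISTORY_LEN)
for v in (0.1, 0.4, 0.57):
    update_probe_history(history, {"probes": [{"id": 0, "value": v}]}, probe_id=0)
```

Using `deque(maxlen=...)` means old samples fall off automatically once the scope is full.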
For details on the WebSocket analysis stream, see docs/ws-metrics.md.
```sh
WEBAUDIO_ROOT=external/node-web-audio-api \
FAUST_UI_PORT=8787 FAUST_UI_ROOT=/path/to/faust-ui/dist/esm \
MCP_TRANSPORT=sse MCP_HOST=127.0.0.1 MCP_PORT=8000 \
python3 faust_node_server.py
```

If you want Claude Code over stdio and the UI at the same time, run the server
in stdio mode with `FAUST_UI_PORT` set:

```sh
WEBAUDIO_ROOT=external/node-web-audio-api \
FAUST_UI_PORT=8787 FAUST_UI_ROOT=/path/to/faust-ui/dist/esm \
MCP_TRANSPORT=stdio \
python3 faust_node_server.py
```

Then open: `http://127.0.0.1:8787/`
This runtime keeps DSP compilation + audio + UI entirely in the browser. A small
Python proxy (faust_browser_server.py) serves static assets and optionally
exposes MCP tools over SSE/stdio via a long-polling bridge.
```sh
make setup-browser-ui
```

Recommended:

```sh
make run-browser-ui
```

Or run the server directly:

```sh
MCP_TRANSPORT=sse MCP_HOST=127.0.0.1 MCP_PORT=8000 \
python3 faust_browser_server.py
```

Open the UI in a browser: `http://127.0.0.1:8010/`

Notes:

- The browser must be real (WebAudio + Web MIDI); headless is not supported.
- Tool calls from MCP clients are forwarded to the browser via `/bridge/*`.
- `make test-browser-api` runs the full MCP tool sequence (requires a browser tab open).
- SVG diagrams are rendered in the browser UI (DSP Diagram panel) and support in-SVG navigation.
You can prefill DSP code via query param:
http://127.0.0.1:8010/?dsp=process%3Dos.osc(440)%3B
Add a new MCP server entry in Claude Desktop’s config (stdio transport):
```json
{
  "mcpServers": {
    "faust-browser": {
      "command": "python3",
      "args": [
        "/path/to/faust-mcp/faust_browser_server.py"
      ],
      "env": {
        "MCP_TRANSPORT": "stdio",
        "BROWSER_UI_HOST": "127.0.0.1",
        "BROWSER_UI_PORT": "8010",
        "BROWSER_UI_ROOT": "/path/to/faust-mcp",
        "BROWSER_UI_INDEX": "ui/rt-browser-ui.html"
      }
    }
  }
}
```

Then:
- Open `http://127.0.0.1:8010/` in a browser and click Unlock Audio (or call `unlock_audio` from an MCP client in a user gesture).
- In Claude, choose the `faust-browser` server and call tools.
If you want both runtimes available, add two entries:
```json
{
  "mcpServers": {
    "faust-node": {
      "command": "python3",
      "args": [
        "/path/to/faust-mcp/faust_node_server.py"
      ],
      "env": {
        "MCP_TRANSPORT": "stdio",
        "WEBAUDIO_ROOT": "/path/to/faust-mcp/external/node-web-audio-api",
        "FAUST_UI_PORT": "8787",
        "FAUST_WORKER_PATH": "/path/to/faust-mcp/faust_node_worker.mjs"
      }
    },
    "faust-browser": {
      "command": "python3",
      "args": [
        "/path/to/faust-mcp/faust_browser_server.py"
      ],
      "env": {
        "MCP_TRANSPORT": "stdio",
        "BROWSER_UI_HOST": "127.0.0.1",
        "BROWSER_UI_PORT": "8010",
        "BROWSER_UI_ROOT": "/path/to/faust-mcp",
        "BROWSER_UI_INDEX": "ui/rt-browser-ui.html"
      }
    }
  }
}
```

Available tools:

- `compile_and_start(faust_code, name?, latency_hint?, input_source?, input_freq?, input_file?, hide_meters?)`
- `compile(faust_code, name?, latency_hint?, input_source?, input_freq?, input_file?, hide_meters?)`
- `start()`
- `unlock_audio(latency_hint?)` (browser-only)
- `check_syntax(faust_code, name?)`
- `get_status()`
- `get_params()`
- `get_dsp_json()`
- `get_param(path)`
- `get_param_values()`
- `get_audio_metrics(include_scope?, include_spectrum?, per_channel?, fft_size?, smoothing?, min_db?, max_db?, edge_threshold?, log_bins?)`
- `save_wasm_module()`
- `load_wasm_module(wasm_base64?, wasm_path?, dsp_json?, dsp_json_path?, effect_wasm_base64?, effect_wasm_path?, effect_dsp_json?, effect_dsp_json_path?, name?, latency_hint?)`
- `get_midi_inputs()`
- `get_midi_status()`
- `select_midi_input(index?, name?)`
- `set_param_values(values)`
- `set_param(path, value)`
- `stop()` (suspend audio, keep DSP/UI)
- `destroy()` (stop audio and clear DSP/UI state)
`load_wasm_module` accepts either inline content (`wasm_base64` + `dsp_json`) or file paths
(`wasm_path` + `dsp_json_path`). Effect modules follow the same pattern with
`effect_wasm_base64`/`effect_dsp_json` or `effect_wasm_path`/`effect_dsp_json_path`.
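As a sketch, an MCP client could build the inline-content form of these arguments as follows. The `build_load_args` helper and the choice to JSON-encode the metadata are illustrative; only the parameter names come from the tool signature above:

```python
import base64
import json

def build_load_args(wasm_bytes, dsp_json, name=None):
    """Build load_wasm_module arguments using the inline-content form."""
    args = {
        # Raw module bytes are base64-encoded for transport.
        "wasm_base64": base64.b64encode(wasm_bytes).decode("ascii"),
        "dsp_json": json.dumps(dsp_json),
    }
    if name is not None:
        args["name"] = name
    return args

# Tiny stand-in payloads; a real call would read a compiled .wasm file.
args = build_load_args(b"\x00asm", {"name": "osc1"}, name="osc1")
```

The path-based form (`wasm_path` + `dsp_json_path`) avoids the encoding step entirely when the files live on the server's machine.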
Example (paths):
```json
{
  "wasm_path": "/path/to/dsp.wasm",
  "dsp_json_path": "/path/to/dsp.json"
}
```

`get_audio_metrics()` returns RMS/peak metering derived from bargraphs that are
automatically injected by the real-time server when it wraps your Faust DSP
code. The wrapper adds output meters for each channel and a mono mix meter
(Mix Peak/Mix RMS). Output meters are always injected; input meters are only
injected when input_source is sine, noise, or file. get_audio_metrics()
returns input meters under input.channels and output meters under output.
If you want to hide the meters in compatible UIs, pass hide_meters=true to
compile_and_start.
hasNaN is reported for the output mix and each output channel (inputs omit it).
Metering/probe bargraphs are added via attach, so they do not change the
DSP audio I/O count. The compiled DSP keeps the same number of inputs/outputs;
only UI bargraphs are appended for metering/probing.
When a polyphonic DSP defines effect, the wrapper re-exports it at top level
so faustwasm can apply it post-mix. In that mode, mix meters are attached via
an in-place tap so output arity stays unchanged.
When a bargraph includes [unit:dB] metadata, get_audio_metrics() converts its
value to linear amplitude before returning it. The conversion is
linear = 10^(dB/20), so you can apply linear thresholds like rms < 0.001 for
silence detection or peak > 1.0 for clipping heuristics. Bargraphs without the
[unit:dB] tag are returned as-is.
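The conversion is a one-liner; a sketch:

```python
def db_to_linear(db):
    """Convert a [unit:dB] bargraph value to linear amplitude: 10^(dB/20)."""
    return 10 ** (db / 20)

# -60 dB is well below an `rms < 0.001` silence threshold; 0 dB is unity gain.
quiet = db_to_linear(-60)   # ~0.001
unity = db_to_linear(0)     # 1.0
```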
If a bargraph includes [probe:N] metadata (with N as an integer), its value
is added to the probes array in get_audio_metrics() as { id, value }. This
lets MCP clients inject extra metering probes into the DSP graph and retrieve
them alongside the standard mix/channel meters.
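The `[probe:N]` tag matching can be sketched with a regex. This is an illustrative reconstruction: the parameter entries with `label`/`value` keys are an assumed shape, not the server's actual data model:

```python
import re

PROBE_TAG = re.compile(r"\[probe:(\d+)\]")

def extract_probes(params):
    """Map [probe:N] bargraph entries to { id, value } records."""
    probes = []
    for param in params:
        match = PROBE_TAG.search(param["label"])
        if match:
            probes.append({"id": int(match.group(1)), "value": param["value"]})
    return probes

params = [
    {"label": "Probe RMS 0[probe:0][unit:dB]", "value": -12.5},
    {"label": "Mix Peak", "value": 0.8},  # no probe tag, skipped
]
probes = extract_probes(params)
```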
Optional scope/spectrum capture:
- `include_scope`: include time-domain samples aligned to a rising edge.
- `include_spectrum`: include FFT bins (dB) and frequency axis.
- `per_channel`: include per-channel scope/spectrum arrays.
- `fft_size`, `smoothing`, `min_db`, `max_db`, `edge_threshold`, `log_bins`: analyser tuning.

Makefile helpers:

- `make rt-get-audio-metrics-scope`
- `make rt-get-audio-metrics-spectrum`
- `make rt-get-audio-metrics-full`
- `make rt-get-audio-metrics-full-per-channel`
- `make rt-ws-metrics`
Real-time tool responses include `schema_version` (currently `faust-mcp-rt/1`).
This allows MCP clients to detect changes and adapt safely.
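A minimal client-side check might look like this sketch, assuming exact-match versioning (the policy is illustrative, not prescribed by the server):

```python
def check_schema(payload, expected="faust-mcp-rt/1"):
    """Return True when the response declares the schema this client understands."""
    return payload.get("schema_version") == expected

ok = check_schema({"schema_version": "faust-mcp-rt/1", "running": True})
```

A more lenient client could instead parse the `faust-mcp-rt/<N>` suffix and accept any revision it knows how to handle.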
Structured errors are returned as:
```json
{
  "error": {
    "schema_version": "faust-mcp-rt/1",
    "code": "compile_failed",
    "message": "Faust compilation failed",
    "details": { "stage": "poly" }
  }
}
```

`get_status()` returns a snapshot of the current DSP runtime state:
```json
{
  "schema_version": "faust-mcp-rt/1",
  "running": true,
  "name": "faust-rt",
  "poly_nvoices": 8,
  "midi_enabled": true,
  "midi_active_notes": 3
}
```

`get_midi_status()` returns the Node MIDI backend state for the current session,
including the selected input (if any) and the most recent MIDI message.
```json
{
  "status": "ok",
  "available": true,
  "selected": { "index": 1, "name": "MidiKeys" },
  "last_message": { "data": [176, 7, 64], "timestamp": 1736367076123 }
}
```

Example `get_audio_metrics()` response:

```json
{
  "input": {
    "channels": [
      { "rms": 0.2, "peak": 0.42 }
    ]
  },
  "output": {
    "mix": { "rms": 0.23, "peak": 0.45, "hasNaN": false },
    "channels": [
      { "rms": 0.2, "peak": 0.42, "hasNaN": false },
      { "rms": 0.25, "peak": 0.48, "hasNaN": false }
    ]
  },
  "probes": [
    { "id": 0, "value": 0.57 }
  ]
}
```

`latency_hint` accepts `interactive` (default) or `playback` for `compile`,
`compile_and_start`, `unlock_audio`, and `load_wasm_module`.
`input_source` accepts `none` (default), `sine`, `noise`, or `file`. `input_freq`
sets the sine frequency in Hz (default 1000). `input_file` sets the path for a
soundfile input when `input_source=file`. `hide_meters` (default `false`) appends
`[hidden:1]` to the meter bargraph labels so compatible UIs can hide them.
Example (hide meter bargraphs):

```sh
make rt-compile DSP=t2.dsp HIDE_METERS=1
```

Stdio client example:

```sh
python3 stdio_client_example.py --server faust_node_server.py \
  --tool compile_and_start --dsp t2.dsp --hide-meters
```

For the real-time server (faustwasm), soundfiles must be served over HTTP/HTTPS.
For the browser runtime, you can load soundfiles either via an HTTP/HTTPS URL
(Faust soundfile() path) or via a relative path that resolves against the UI
origin (WebAudio fetch + decodeAudioData path).
To test local files, start a simple server in the repo root:

```sh
python3 -m http.server 9000
```

The real-time server runs a Node worker process and talks to it over stdin/stdout:
- `faust_node_server.py` starts the worker with `node faust_node_worker.mjs` (override with `FAUST_WORKER_PATH`) and passes `WEBAUDIO_ROOT` in the environment.
- The worker reads JSON lines like: `{ "id": 1, "method": "compile_and_start", "params": {...} }`
- It responds with: `{ "id": 1, "result": {...} }` or `{ "id": 1, "error": "..." }`
- The Python server forwards MCP tool calls to the worker and returns the result.
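The JSON-lines framing above can be sketched without spawning the actual worker. The `make_request`/`parse_response` helpers are illustrative; the `{id, method, params}` and `{id, result|error}` shapes come from the protocol description:

```python
import json

def make_request(req_id, method, params):
    """Encode one JSON-lines request for the worker's stdin."""
    return json.dumps({"id": req_id, "method": method, "params": params}) + "\n"

def parse_response(line):
    """Decode one worker stdout line into (id, result, error)."""
    msg = json.loads(line)
    return msg["id"], msg.get("result"), msg.get("error")

line = make_request(1, "set_param", {"path": "/freq", "value": 440})
req_id, result, error = parse_response('{"id": 1, "result": {"ok": true}}')
```

The `id` field is what lets the Python side match responses to in-flight requests when calls overlap.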
t1.dsp:

```
import("stdfaust.lib");
cutoff = hslider("cutoff[Hz]", 1200, 50, 8000, 1);
drive = hslider("drive[dB]", 0, -24, 24, 0.1) : ba.db2linear;
process = _ * drive : fi.lowpass(2, cutoff);
```

t2.dsp:

```
import("stdfaust.lib");
freq1 = hslider("freq1[Hz]", 500, 50, 2000, 1);
freq2 = hslider("freq2[Hz]", 600, 50, 2000, 1);
gain = hslider("gain[dB]", -6, -60, 6, 0.1) : ba.db2linear;
process = os.osc(freq1) * gain, os.osc(freq2) * gain;
```

noise.dsp:

```
import("stdfaust.lib");
gain = hslider("gain[dB]", -6, -60, 6, 0.1) : ba.db2linear;
process = no.noise * gain;
```

organ_poly.dsp (polyphonic organ):

```
import("stdfaust.lib");
declare options "[midi:on][nvoices:8]";
process = os.osc(440) <: _,_;
```

poly_fx.dsp (polyphonic voices + global effect):

```
import("stdfaust.lib");
declare options "[midi:on][nvoices:8]";
process = os.osc(440) <: _,_;
effect = _,_ : + : fi.lowpass(2, 8000) : ef.reverb_mono(0.3, 0.5, 0.5, 1) <: _,_;
```

probe.dsp:

```
import("stdfaust.lib");
probe_rms_db(id, hide, x) = x <: attach(x, an.rms_envelope_rect(0.1)
    : max(0.00001) : ba.linear2db
    : hbargraph("Probe RMS%2id[probe:%id][unit:dB][hidden:%hide]", -60, 0));
probe_rms_lin(id, hide, x) = x <: attach(x, an.rms_envelope_rect(0.1)
    : hbargraph("Probe RMS%2id[probe:%id][hidden:%hide]", 0, 1));
probe_peak_db(id, hide, x) = x <: attach(x, an.peak_envelope(0.1)
    : max(0.00001) : ba.linear2db
    : hbargraph("Probe Peak%2id[probe:%id][unit:dB][hidden:%hide]", -60, 0));
probe_peak_lin(id, hide, x) = x <: attach(x, an.peak_envelope(0.1)
    : hbargraph("Probe Peak%2id[probe:%id][hidden:%hide]", 0, 1));
freq = hslider("freq", 440, 20, 2000, 1);
gain = hslider("gain", 0.5, 0, 1, 0.01);
gate = button("gate");
// Pipeline with probes at each stage
osc = os.sawtooth(freq) : probe_rms_db(0, 0);
shaped = osc * en.adsr(0.01, 0.1, 0.7, 0.3, gate) : probe_rms_db(1, 0);
output = shaped * gain : probe_rms_db(2, 0);
process = output <: _,_;
```

To inspect the Faust code that the real-time server generates (metering + test inputs),
use `scripts/emit_wrapped_dsp.mjs`:

```sh
node scripts/emit_wrapped_dsp.mjs --dsp poly_fx.dsp --out poly_fx_wrapped.dsp
```

Example flow for Claude Code or another MCP-capable LLM:
- Start the desired server (`faust_server.py`, `faust_server_daw.py`, or `faust_node_server.py`).
- The LLM connects over SSE or stdio and lists available tools.
- The LLM sends DSP code to `compile_and_analyze` (offline servers) or `compile_and_start` (real-time).
- The server returns analysis metrics (offline) or parameter metadata (real-time).
- The LLM uses `set_param`/`set_param_values` to adjust controls, and `get_param`/`get_param_values` to read back state.
Minimal real-time loop (conceptual):

```
compile_and_start(faust_code="...", name="osc1")
get_param_values()
set_param(path="/freq", value=440)
set_param_values(values=[{"path": "/gain", "value": 0.2}, {"path": "/cutoff", "value": 1200}])
```
To list available tools from a local SSE server, use:

```sh
python3 scripts/list_tools.py
python3 scripts/list_tools.py --details
```

Local file server for soundfile inputs:

```sh
python3 -m http.server 9000
```

Browser runtime soundfile inputs (two ways):

```sh
make run-browser-ui
# 1) HTTP/HTTPS URL (Faust soundfile() path).
make rt-compile DSP=t1.dsp INPUT_SOURCE=file INPUT_FILE=http://127.0.0.1:8010/tests/assets/tango.wav
# 2) Relative path (resolved against the UI origin, uses WebAudio fetch/decode).
make rt-compile DSP=t1.dsp INPUT_SOURCE=file INPUT_FILE=tests/assets/tango.wav
```

Offline analysis with a sine test input (DawDreamer):
```sh
make run-daw
make client-daw DSP=t1.dsp INPUT_SOURCE=sine INPUT_FREQ=1000
```

Offline analysis with a soundfile test input (DawDreamer, local path):

```sh
make run-daw
make client-daw DSP=t1.dsp INPUT_SOURCE=file INPUT_FILE=tests/assets/sine.wav
```

If you see `addSoundfile : soundfile for sound cannot be created`, make sure the
path points to a local WAV file (HTTP URLs are for the real-time server).
Real-time compile with noise test input:

```sh
make run-rt
make rt-compile DSP=t1.dsp RT_NAME=fx INPUT_SOURCE=noise
```

Real-time compile with a soundfile test input (HTTP URL):

```sh
make run-node-ui
make rt-compile DSP=t1.dsp RT_NAME=fx INPUT_SOURCE=file INPUT_FILE=http://127.0.0.1:9000/tests/assets/sine.wav
```

Basic SSE client run:

```sh
python3 sse_client_example.py --url http://127.0.0.1:8000/sse --dsp t1.dsp
```

`compile_and_start` example:

```sh
python3 sse_client_example.py --url http://127.0.0.1:8000/sse \
  --tool compile_and_start --dsp t1.dsp --name osc1 --latency interactive
```

With a test input source:
```sh
python3 sse_client_example.py --url http://127.0.0.1:8000/sse \
  --tool compile_and_start --dsp t1.dsp --name osc1 --latency interactive \
  --input-source sine --input-freq 1000
```

With a file test input:

```sh
python3 sse_client_example.py --url http://127.0.0.1:8000/sse \
  --tool compile_and_start --dsp t1.dsp --name osc1 --latency interactive \
  --input-source file --input-file http://127.0.0.1:9000/tests/assets/sine.wav
```

`tests/assets/sine.wav` is a mono 1 kHz test file included in this repo.
The helper script scripts/test_full_api.sh runs all SSE tools in one go and
optionally checks the UI HTTP server endpoints.
```sh
# Requires the real-time server to be running (see make run-node-ui).
scripts/test_full_api.sh
```

To skip UI checks (port 8787), use:

```sh
SKIP_UI=1 scripts/test_full_api.sh
```

You can override URLs and DSP selection with environment variables:
```sh
MCP_URL=http://127.0.0.1:8000/sse \
MCP_HTTP_BASE=http://127.0.0.1:8000 \
UI_HTTP_BASE=http://127.0.0.1:8787 \
DSP=t2.dsp NAME=faust-rt \
scripts/test_full_api.sh
```

`scripts/ci_batch_audio.py` compiles a batch of DSPs, collects audio metrics, and
flags silence/clipping/NaN issues. It also reports probe values when available,
so you can validate RMS/peak/probe signals across a DSP library.

```sh
# Requires the real-time server to be running (see make run-node-ui).
scripts/ci_batch_audio.py --glob "*.dsp" --input-source sine --input-freq 1000
```

You can adjust thresholds and warmup time:

```sh
scripts/ci_batch_audio.py --silence-threshold 0.001 --clip-threshold 1.0 --warmup-ms 400
```

Require probes to be present (fail otherwise):

```sh
scripts/ci_batch_audio.py --require-probes
```

This repo ships a few helper scripts under `scripts/`:
- `scripts/ci_batch_audio.py`: Batch compile DSPs, collect `get_audio_metrics()`, and flag silence/clipping/NaN. Supports optional probe checks (`--require-probes`).
- `scripts/emit_wrapped_dsp.mjs`: Emit the Faust DSP code after MCP wrapping (useful to debug input/meters/effect wrapping).
- `scripts/list_tools.py`: List MCP tools exposed by a running server (use `--details` for schema and params).
- `scripts/test_full_api.sh`: End-to-end SSE tool exercise plus optional UI endpoint checks.
- `scripts/test_ws_metrics.py`: Connect to `/ws`, subscribe, and wait for a metrics frame.
- `scripts/verify_sse.py`: Lightweight SSE connectivity check for CI/health probes.
Quick examples:

```sh
python3 scripts/list_tools.py --details
node scripts/emit_wrapped_dsp.mjs --dsp t2.dsp
scripts/verify_sse.py --url http://127.0.0.1:8000/sse
scripts/test_ws_metrics.py --url ws://127.0.0.1:8787/ws --include-scope --include-spectrum
```

```sh
# Defaults to faust_node_server.py; pass --server to target other servers.
python3 stdio_client_example.py --dsp t1.dsp
```

Real-time over stdio:

```sh
MCP_TRANSPORT=stdio python3 faust_node_server.py
```

```sh
python3 stdio_client_example.py --server faust_node_server.py \
  --tool compile_and_start --dsp t1.dsp --name fx --latency interactive \
  --input-source noise
```

For an SSE-like loop in stdio (compile multiple DSPs in one session), use:

```sh
FAUST_UI_PORT=8787 python3 stdio_rt_session.py --dsp t1.dsp --dsp t2.dsp
```

These commands exercise the common server/client combinations on a local machine:
```sh
# C++ server (stdio)
python3 stdio_client_example.py --server faust_server.py --dsp t1.dsp --tmpdir /tmp/faust-mcp-test

# DawDreamer server (stdio, file input)
python3 stdio_client_example.py --server faust_server_daw.py --dsp t1.dsp \
  --input-source file --input-file tests/assets/sine.wav

# Real-time server (SSE)
WEBAUDIO_ROOT=external/node-web-audio-api MCP_PORT=8000 \
  python3 faust_node_server.py
python3 sse_client_example.py --url http://127.0.0.1:8000/sse \
  --tool compile_and_start --dsp t1.dsp --name fx --latency interactive \
  --input-source noise
```

Make targets for the real-time server:

```sh
make run-rt
make run-node-ui
make run-rt-stdio
make run-rt-stdio-ui
make run-rt-stdio-session
make rt-compile DSP=t1.dsp RT_NAME=osc1
make rt-get-params
make rt-get-param RT_PARAM_PATH=/freq
make rt-get-audio-metrics
make rt-set-param RT_PARAM_PATH=/freq RT_PARAM_VALUE=440
make rt-stop
make rt-destroy
make stop-rt
```

| Server | Transport | Client | Works |
|---|---|---|---|
| `faust_server.py` | SSE | `sse_client_example.py` / `make client-sse` | Yes |
| `faust_server.py` | stdio | `stdio_client_example.py` / `make client-stdio` | Yes |
| `faust_server_daw.py` | SSE | `sse_client_example.py` / `make client-daw` | Yes |
| `faust_server_daw.py` | stdio | `stdio_client_example.py` | Yes |
| `faust_node_server.py` | SSE | `sse_client_example.py` | Yes |
| `faust_node_server.py` | stdio | `stdio_client_example.py` | Yes |
Edit ~/.config/Claude/claude_desktop_config.json and add:
```json
{
  "mcpServers": {
    "faust": {
      "type": "sse",
      "url": "http://127.0.0.1:8000/sse"
    }
  }
}
```

If you use stdio with Claude Desktop, set a working directory (if supported)
or pass FAUST_WORKER_PATH so the server can locate the Node worker:
```json
{
  "mcpServers": {
    "faust": {
      "command": "python3",
      "args": ["/path/to/faust-mcp/faust_node_server.py"],
      "cwd": "/path/to/faust-mcp",
      "env": {
        "MCP_TRANSPORT": "stdio",
        "WEBAUDIO_ROOT": "/path/to/faust-mcp/external/node-web-audio-api",
        "FAUST_UI_PORT": "8787",
        "FAUST_WORKER_PATH": "/path/to/faust-mcp/faust_node_worker.mjs"
      }
    }
  }
}
```

If your MCP client reads a `servers.json` file, add a stdio server entry:
```json
{
  "servers": {
    "faust": {
      "command": "python3",
      "args": ["/path/to/faust-mcp/faust_server.py"],
      "env": {
        "MCP_TRANSPORT": "stdio",
        "TMPDIR": "/path/to/faust-mcp/tmp"
      }
    }
  }
}
```

Troubleshooting:

- If the compiler cannot create temp files, set `TMPDIR` to a writable location.
- Ensure the `tmp/` directory exists if you use `TMPDIR=./tmp` (create it once with `mkdir -p tmp`).
- If the server cannot bind to `127.0.0.1:8000`, either stop the process using that port or change `MCP_PORT` to another value.

