These samples demonstrate how to use the Foundry Local JavaScript SDK (`foundry-local-sdk`) with Node.js.
Prerequisites:

- Node.js (v18 or later recommended)
Samples:

| Sample | Description |
|---|---|
| native-chat-completions | Initialize the SDK, download a model, and run non-streaming and streaming chat completions. |
| audio-transcription-example | Transcribe audio files using the Whisper model with streaming output. |
| chat-and-audio-foundry-local | Unified sample demonstrating both chat and audio transcription in one application. |
| electron-chat-application | Full-featured Electron desktop chat app with voice transcription and model management. |
| copilot-sdk-foundry-local | GitHub Copilot SDK integration with Foundry Local for agentic AI workflows. |
| langchain-integration-example | LangChain.js integration for building text generation chains. |
| tool-calling-foundry-local | Tool calling with custom function definitions and streaming responses. |
| web-server-example | Start a local OpenAI-compatible web server and call it with the OpenAI SDK. |
| tutorial-chat-assistant | Build an interactive multi-turn chat assistant (tutorial). |
| tutorial-document-summarizer | Summarize documents with AI (tutorial). |
| tutorial-tool-calling | Create a tool-calling assistant (tutorial). |
| tutorial-voice-to-text | Transcribe and summarize audio (tutorial). |
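As an illustration of the flow the tool-calling-foundry-local sample covers, the sketch below defines a tool in the OpenAI function-calling schema and dispatches a tool call locally. The tool name (`get_local_time`) and the simulated tool call are hypothetical, not taken from the sample; in a real run, the `tool_calls` array comes back from the model via the OpenAI-compatible chat completions endpoint.

```javascript
// Tool definition in the OpenAI function-calling schema; this is what gets
// passed as the `tools` parameter of a chat completions request.
const tools = [
  {
    type: "function",
    function: {
      name: "get_local_time",
      description: "Return the current local time as an ISO string.",
      parameters: { type: "object", properties: {}, required: [] },
    },
  },
];

// Local implementations, keyed by tool name.
const toolImpls = {
  get_local_time: () => new Date().toISOString(),
};

// Execute one tool call from a model response and build the role:"tool"
// follow-up message that would be sent back to the model.
function runToolCall(toolCall) {
  const fn = toolImpls[toolCall.function.name];
  if (!fn) throw new Error(`Unknown tool: ${toolCall.function.name}`);
  const args = JSON.parse(toolCall.function.arguments || "{}");
  return {
    role: "tool",
    tool_call_id: toolCall.id,
    content: String(fn(args)),
  };
}

// Simulated tool call, shaped like one the model would return.
const simulated = {
  id: "call_1",
  function: { name: "get_local_time", arguments: "{}" },
};
console.log(runToolCall(simulated).role); // prints "tool"
```

The samples stream responses as well; the dispatch step itself is the same once the accumulated tool call arrives.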
Getting started:

- Clone the repository:

  ```bash
  git clone https://github.com/microsoft/Foundry-Local.git
  cd Foundry-Local/samples/js
  ```

- Navigate to a sample and install dependencies:

  ```bash
  cd native-chat-completions
  npm install
  ```

- Run the sample:

  ```bash
  npm start
  ```
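Once a sample is running, the core pattern is the one native-chat-completions demonstrates: initialize the SDK, then talk to the model through the OpenAI-compatible endpoint. A minimal sketch, assuming the documented `FoundryLocalManager` surface (`init`, `endpoint`, `apiKey`) and a hypothetical model alias; it is not the sample's exact code:

```javascript
// Sketch only: "phi-3.5-mini" is an assumed alias, and the dynamic imports
// let the script degrade gracefully when the SDK or service is unavailable.
async function chatOnce(prompt) {
  const { FoundryLocalManager } = await import("foundry-local-sdk");
  const { default: OpenAI } = await import("openai");

  // Starts the Foundry Local service if needed and downloads/loads the model.
  const manager = new FoundryLocalManager();
  const modelInfo = await manager.init("phi-3.5-mini");

  // Foundry Local exposes an OpenAI-compatible endpoint, so the OpenAI SDK
  // works against it directly.
  const client = new OpenAI({ baseURL: manager.endpoint, apiKey: manager.apiKey });
  const res = await client.chat.completions.create({
    model: modelInfo.id,
    messages: [{ role: "user", content: prompt }],
  });
  return res.choices[0].message.content;
}

chatOnce("Say hello in one sentence.")
  .then(console.log)
  .catch((err) => console.error("Foundry Local not available:", err.message));
```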
> **Tip:** Each sample's `package.json` includes `foundry-local-sdk` as a dependency and `foundry-local-sdk-winml` as an optional dependency. On Windows, the WinML variant installs automatically for broader hardware acceleration; on macOS and Linux, the standard SDK is used. Just run `npm install`; platform detection is handled for you.
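For reference, the dependency shape described above looks roughly like this in a sample's `package.json` (the `*` version ranges are placeholders, not the samples' actual pins):

```json
{
  "dependencies": {
    "foundry-local-sdk": "*"
  },
  "optionalDependencies": {
    "foundry-local-sdk-winml": "*"
  }
}
```

Because `foundry-local-sdk-winml` is listed under `optionalDependencies`, npm skips it without failing on platforms where it cannot install, which is what makes the cross-platform `npm install` work.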