A local-first, scientific AI agent for bioacoustics with a chat UI and inline widgets. Use LyreBot to analyze your audio recordings with BirdNET and ask questions about the results in natural language.
This is more than a chat interface - LyreBot uses an agent-based architecture where the LLM plans analysis steps, Python executes them, and the results are the source of truth. You can attach audio files or folders, and ask questions like:
- "What species were detected?"
- "Show me the species distribution"
- "Filter for common species in Germany"
- "Generate a spectrogram for the first blackbird detection"
- "Let me hear a detection of the robin"
AI-powered bioacoustics tools can play a critical role in conservation by automating the analysis of large-scale audio datasets. Despite the availability of many open tools with graphical interfaces (e.g., the BirdNET Analyzer), end-to-end processing pipelines often remain complex and difficult to use. Agents such as LyreBot add an additional layer of abstraction by orchestrating these tools and can lower the barrier to entry, making advanced bioacoustic analysis more accessible to researchers and conservation practitioners without extensive technical expertise.
Read more about the motivation and vision for this project in our companion memo: Agentic AI and the Next Phase of Tool-Assisted Conservation.
This project is a proof-of-concept and research prototype that is still very rough around the edges. However, we are releasing it to the public to gather feedback and understand how such a tool can best serve the bioacoustics community.
Note: No data is sent to any external servers except for the LLM API calls (Anthropic). All audio analysis is done locally on your machine. See Architecture & Workflow for details.
- Features
- Interface Preview
- Prerequisites
- Installation
- Running the Application
- Configuration
- Usage
- Architecture & Workflow
- Available Tools
- Building for Production
- Project Structure
- Troubleshooting
- Contributing
- License
- Chat Interface: Natural language queries about bird detections
- BirdNET Integration: Analyze audio files locally using BirdNET
- Real-time Streaming: WebSocket-based updates show analysis progress as it happens
- Inline Widgets: Tables, plots (bar, box, scatter, histogram), spectrograms, audio playback, and downloads
- Markdown Responses: Rich formatted responses with proper styling
- CSV Export: Download detection results for use in spreadsheets or research
- Dark/Light Mode: Toggle between themes in settings
- Local-First: Your data stays on your machine
- Agent-Based: The LLM plans, Python executes, and the results are the source of truth
- Node.js 18+ and npm
- Python 3.10+
- Rust (for Tauri) - Install Rust
- Anthropic API Key - Get one here
```bash
git clone https://github.com/birdnet-team/lyrebot.git && cd lyrebot && ./setup.sh
```

Windows users should follow the Manual Setup below. Make sure you have:
- Python 3.10+ (check "Add to PATH" during install)
- Node.js 18+
- Rust (for Tauri desktop app, optional for web-only use)
```bash
git clone https://github.com/birdnet-team/lyrebot.git
cd lyrebot
```

```bash
cd backend

# Create virtual environment
python3 -m venv .venv        # On Windows: python -m venv .venv
source .venv/bin/activate    # On Windows: .venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt
```

```bash
cd ../app

# Install dependencies
npm install
```

```bash
./run-all.sh
```

Then open http://localhost:1420 in your browser.
Terminal 1 - Start the backend:

```bash
cd backend
source .venv/bin/activate    # On Windows: .venv\Scripts\activate
python run.py
```

Terminal 2 - Start the frontend:

```bash
cd app
npm run dev
```

Then open http://localhost:1420 in your browser.
```bash
cd app
npm run tauri dev
```

This will:
- Start the Vite dev server
- Build and launch the Tauri desktop application
- Attempt to auto-start the Python backend
- Click the gear icon in the top-right corner
- Enter your Anthropic API Key
- Set the Allowed Data Root (directory where your audio files are)
- Toggle Dark/Light mode as preferred
- Click Save Settings
You can also configure via environment variables:
```bash
# In backend/.env
ANTHROPIC_API_KEY=your-key-here
ALLOWED_DATA_ROOT=/path/to/your/audio/files
```

- Click the paperclip icon to attach files
- Select audio files (WAV, MP3, FLAC, OGG, M4A) or a folder
- Ask a question like:
- "Analyze these recordings"
- "What species were detected?"
- "Show me the species distribution"
- "Generate a spectrogram for the first blackbird detection"
- "Export results to CSV"
- "Let me hear a detection of the robin"
You can also ask general bioacoustics questions without attachments:
- "Tell me about blue tits"
- "What does confidence mean in BirdNET?"
- "Explain spectrograms"
LyreBot operates as a closed-loop scientific agent. Instead of just "chatting" about birds, it uses a multi-step orchestration process to ensure that all answers are backed by verifiable data analysis performed locally on your machine.
```
┌────────────────┐      ┌───────────────────────────────────┐      ┌─────────────┐
│  Desktop App   │      │          FastAPI Backend          │      │ Local Data  │
│  (React/Tauri) │ <~~> │ - Agent Loop (LLM Planner/Writer) │ <──> │  - DuckDB   │
│  UI & Widgets  │  WS  │ - Scientific Tools (BirdNET/SQL)  │      │  - Audio    │
└────────────────┘      └───────────────────────────────────┘      └─────────────┘
```
- Planner: The LLM (Anthropic Claude) receives your query and decides which tools it needs to use (e.g., "Run BirdNET analysis on this folder", "Query the database for robin detections").
- Execute: The Python backend carries out these tasks locally, streaming real-time progress updates over WebSocket. It runs the BirdNET model, stores results in a local DuckDB database, and uses scientific libraries to process audio or create plots.
- Writer: The LLM takes the raw data from the tools and translates it into a human-readable scientific summary, attaching interactive widgets (plots, tables, audio clips) directly to the chat bubble.
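The Planner → Execute → Writer loop above can be sketched in a few lines of Python. Note that the tool function, registry, and return shapes below are illustrative assumptions, not LyreBot's actual internals (those live in `backend/app/llm.py` and `backend/app/tools.py`):

```python
# Minimal sketch of a plan/execute/write agent loop.
# The tool and its return shape are illustrative stand-ins.

def query_results(species: str, limit: int = 10) -> list[dict]:
    """Stand-in for a real tool that would query the local DuckDB database."""
    detections = [
        {"species": "Common Blackbird", "confidence": 0.91},
        {"species": "European Robin", "confidence": 0.87},
    ]
    return [d for d in detections if d["species"] == species][:limit]

# Registry mapping tool names from the LLM's plan to local Python functions.
TOOLS = {"query_results": query_results}

def run_agent_loop(plan: dict) -> dict:
    """Execute each planned tool call locally; the collected results, not the
    LLM's guesses, become the evidence the Writer step summarizes."""
    evidence = []
    for step in plan["steps"]:
        tool = TOOLS[step["tool"]]
        evidence.append({"tool": step["tool"], "result": tool(**step["arguments"])})
    return {"assistant_text": plan["assistant_text"], "evidence": evidence}

plan = {
    "steps": [{"tool": "query_results",
               "arguments": {"species": "Common Blackbird", "limit": 5}}],
    "assistant_text": "Checking blackbird detections.",
}
result = run_agent_loop(plan)
```

The key design point is that the LLM only ever names tools and arguments; all execution happens in local Python, so every claim in the final answer can be traced back to a tool result.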
LyreBot uses a structured JSON exchange to bridge the gap between natural language and scientific execution. Here is a simplified look at the data flow:
When you ask a question, the LLM generates a structured plan of tool calls:
```json
{
  "steps": [
    {
      "tool": "query_results",
      "arguments": { "species": "Common Blackbird", "limit": 5 }
    },
    {
      "tool": "get_summary",
      "arguments": { "run_id": "run_20240204_1200" }
    }
  ],
  "assistant_text": "I'll check the detections for blackbirds and summarize the overall run stats."
}
```

After the backend executes the tools, it sends back a response containing both the natural language answer and the UI components:
```json
{
  "response": {
    "response_text": "I found 5 detections of the Common Blackbird.",
    "widgets": [
      {
        "type": "plot",
        "title": "Species Distribution",
        "series": [{ "name": "Detections", "x": ["Robin", "Blackbird"], "y": [12, 5] }]
      }
    ]
  }
}
```

This ensures the UI can render rich scientific components regardless of which LLM is used, keeping the "intelligence" in the planning and the "rendering" in the frontend.
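A response envelope in this shape can be assembled from raw tool output with a small helper. The field names mirror the example JSON above, but the helper functions themselves are a hypothetical sketch, not code from the LyreBot backend:

```python
# Hypothetical helpers that wrap raw tool results in the response
# envelope shown above (response_text + widgets).

def make_plot_widget(title: str, counts: dict[str, int]) -> dict:
    """Convert a species -> detection-count mapping into the plot
    widget shape the frontend renders."""
    return {
        "type": "plot",
        "title": title,
        "series": [{
            "name": "Detections",
            "x": list(counts.keys()),
            "y": list(counts.values()),
        }],
    }

def make_response(text: str, widgets: list[dict]) -> dict:
    """Build the top-level envelope consumed by the chat UI."""
    return {"response": {"response_text": text, "widgets": widgets}}

counts = {"Robin": 12, "Blackbird": 5}
payload = make_response(
    "I found 5 detections of the Common Blackbird.",
    [make_plot_widget("Species Distribution", counts)],
)
```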
The LLM has access to these tools for analysis:
| Tool | Description |
|---|---|
| `get_species_at_location` | Get expected species for a GPS location (for filtering) |
| `register_dataset` | Register audio files for analysis |
| `create_run` | Create a new analysis run with settings |
| `run_analysis` | Execute BirdNET analysis |
| `job_status` | Check analysis progress |
| `get_summary` | Get aggregated analysis statistics |
| `query_results` | Query specific detection results |
| `get_plot_data` | Get data for visualizations (bar charts, box plots, scatter plots, histograms, timelines) |
| `get_spectrogram` | Generate spectrogram images |
| `get_audio_clip` | Extract audio segments for playback |
| `generate_report` | Create markdown reports |
| `export_csv` | Export detections to CSV for download |
The backend runs as a standalone Python process. For distribution, consider using PyInstaller:
```bash
cd backend
pip install pyinstaller
pyinstaller --onefile run.py
```

```bash
cd app
npm run tauri build
```

The built application will be in app/src-tauri/target/release/.
```
lyrebot/
├── app/                      # Tauri + React frontend
│   ├── src/
│   │   ├── components/       # React components
│   │   ├── App.tsx           # Main application
│   │   ├── api.ts            # Backend API client
│   │   └── types.ts          # TypeScript types
│   ├── src-tauri/            # Tauri configuration
│   └── package.json
│
├── backend/                  # FastAPI backend
│   ├── app/
│   │   ├── main.py           # FastAPI app
│   │   ├── models.py         # Pydantic models
│   │   ├── config.py         # Configuration
│   │   ├── database.py       # DuckDB operations
│   │   ├── birdnet.py        # BirdNET wrapper
│   │   ├── tools.py          # LLM tools
│   │   └── llm.py            # Agent loop
│   └── requirements.txt
│
└── README.md
```
- Ensure Python 3.10+ is installed
- Check if port 8765 is available
- Verify all dependencies are installed
- Make sure the backend is running on http://127.0.0.1:8765
- Check the browser console for CORS errors
- Try restarting both frontend and backend
- Ensure audio files are in a supported format (WAV, MP3, FLAC, OGG, M4A)
- Check file paths are under the allowed data root
- Try reinstalling dependencies:

  ```bash
  pip install -r requirements.txt
  ```
- Verify the key is correctly pasted in Settings
- Ensure there are no extra spaces
- Check your Anthropic account has available credits
Contributions are welcome! Please feel free to submit issues and pull requests.
Areas for improvement include:
- adding more analysis tools
- adding support for other LLM providers
- adding a local LLM option
- improving the UI/UX
This project was largely co-developed with AI tools, so don't be afraid to leverage them in your contributions as well!
If you open a pull request, please make sure it addresses a specific issue or feature request rather than bundling many changes at once.
- Source Code: The source code for this project is licensed under the MIT License.
- Models: The models used in this project are licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0).
Please ensure you review and adhere to the specific license terms provided with each model.
Please note that educational and research purposes are considered non-commercial use, so BirdNET models may be used freely in those contexts.
Our work in the K. Lisa Yang Center for Conservation Bioacoustics is made possible by the generosity of K. Lisa Yang to advance innovative conservation technologies to inspire and inform the conservation of wildlife and habitats.
The development of BirdNET is supported by the German Federal Ministry of Research, Technology and Space (FKZ 01|S22072), the German Federal Ministry for the Environment, Climate Action, Nature Conservation and Nuclear Safety (FKZ 67KI31040E), the German Federal Ministry of Economic Affairs and Energy (FKZ 16KN095550), the Deutsche Bundesstiftung Umwelt (project 39263/01) and the European Social Fund.
BirdNET is a joint effort of partners from academia and industry. Without these partnerships, this project would not have been possible. Thank you!


