A Discord bot powered by an LLM, ported from Python to Elixir using the Nostrum library.
- AI Chat: Mention the bot in any message to get an AI-powered response
- Message Context: Bot reads channel history to maintain conversation context
- Image Support: Attach images to your messages for AI analysis
- Slash Commands: `/help` and `/info` commands for bot information
- Rate Limiting: Per-user rate limiting to prevent spam
- Typing Indicators: Shows typing while processing requests
- Extensible LLM Providers: Easy to add new LLM backends
```
lib/
├── claude.ex                 # Main module with public API
├── claude/
│   ├── application.ex        # OTP Application & Supervisor
│   ├── config.ex             # Configuration (files & env vars)
│   ├── llm.ex                # LLM facade module
│   ├── llm/
│   │   ├── provider.ex       # Provider behaviour
│   │   └── providers/
│   │       └── openai.ex     # OpenAI-compatible implementation
│   ├── message_consumer.ex   # Nostrum event handler
│   ├── message_handler.ex    # Message processing logic
│   ├── rate_limiter.ex       # Per-user rate limiting (non-blocking)
│   ├── user_cache.ex         # ETS-backed user cache
│   ├── member_cache.ex       # ETS-backed member cache
│   ├── utils.ex              # Helper functions
│   └── commands/
│       ├── command.ex        # Command behaviour
│       ├── registry.ex       # Command registration & routing
│       ├── info.ex           # /info command
│       └── help.ex           # /help command
```
Edit `config/config.exs` to configure the bot:

```elixir
config :claude,
  discord: %{
    token: "YOUR_DISCORD_BOT_TOKEN"
  },
  llm: %{
    provider: Claude.LLM.Providers.OpenAI,
    model: "gpt-4",
    llm_api_key: "YOUR_LLM_API_KEY",
    llm_base_url: "https://api.openai.com/v1",
    max_tokens: 1024,
    max_context_messages: 50,
    rate_limit_ms: 2_000
  }
```

Alternatively, you can create `config/dev.exs` or `config/production.exs` to set options specifically for those runtime environments.
You can also configure the bot using environment variables (these take priority over config files):
```shell
export CLAUDE_DISCORD_TOKEN="your-discord-token"
export CLAUDE_LLM_API_KEY="your-llm-api-key"
export CLAUDE_LLM_BASE_URL="https://api.openai.com/v1"
export CLAUDE_MODEL="gpt-4"
export CLAUDE_MAX_TOKENS="1024"
export CLAUDE_MAX_CONTEXT_MESSAGES="50"
export CLAUDE_RATE_LIMIT_MS="2000"
```

This is useful for deployments where you don't want to commit secrets to config files.
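The override order can be sketched as: check the environment variable first, then fall back to the application config. The helper below is a hypothetical illustration of that lookup, not the actual `Claude.Config` code:

```elixir
# Sketch of env-var-over-config resolution (hypothetical helper module;
# the real Claude.Config implementation may differ in structure).
defmodule ConfigExample do
  @doc """
  Prefer the environment variable; fall back to the value stored under
  the :llm map in the :claude application config, then to a default.
  """
  def get(env_var, key, default \\ nil) do
    case System.get_env(env_var) do
      nil ->
        :claude
        |> Application.get_env(:llm, %{})
        |> Map.get(key, default)

      value ->
        value
    end
  end
end

# Usage: CLAUDE_MODEL, when set, wins over `config :claude, llm: %{model: ...}`.
ConfigExample.get("CLAUDE_MODEL", :model, "gpt-4")
```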
| Option | Description | Default |
|---|---|---|
| `provider` | LLM provider module | `Claude.LLM.Providers.OpenAI` |
| `model` | Model name for chat completions | `"doubao-seed-1-6-thinking-250615"` |
| `llm_api_key` | API key for the LLM provider | Required |
| `llm_base_url` | Base URL for OpenAI-compatible API | `"https://aihubmix.com/v1"` |
| `max_tokens` | Maximum tokens in the LLM response | `1024` |
| `max_context_messages` | Messages to include in context | `50` |
| `rate_limit_ms` | Milliseconds between messages per user | `2000` |
- Install dependencies:

  ```shell
  mix deps.get
  ```

- Configure the bot: edit `config/config.exs` with your Discord token and LLM API key.

- Run the bot:

  ```shell
  mix run --no-halt
  ```

  Or in IEx for development:

  ```shell
  iex -S mix
  ```
Mention the bot in any message:
```
@Claude what's the weather like today?
```
The bot will:
- Read recent channel history for context
- Show a typing indicator
- Call the LLM API
- Reply with the AI response
Send this message to reset context:
```
[DO NOT COUNT PAST THIS MESSAGE]
```
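One way such a reset marker can be applied when building context is to keep only the messages sent after the most recent marker. A minimal sketch (the marker string comes from above; the module and function names are hypothetical, not the actual implementation):

```elixir
# Hypothetical context-trimming helper; not the actual message_handler code.
defmodule ContextExample do
  @reset_marker "[DO NOT COUNT PAST THIS MESSAGE]"

  @doc """
  Given messages ordered newest-first (as Discord channel history is
  typically fetched), keep only those sent after the latest reset marker.
  """
  def trim_context(messages) do
    Enum.take_while(messages, fn msg -> msg.content != @reset_marker end)
  end
end

ContextExample.trim_context([
  %{content: "latest question"},
  %{content: "[DO NOT COUNT PAST THIS MESSAGE]"},
  %{content: "old conversation"}
])
# keeps only the messages before the marker, i.e. [%{content: "latest question"}]
```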
- `/help` - Show usage information
- `/info` - Display bot configuration and status
Implement the `Claude.LLM.Provider` behaviour:

```elixir
defmodule Claude.LLM.Providers.MyProvider do
  @behaviour Claude.LLM.Provider

  @impl true
  def name, do: "My Provider"

  @impl true
  def validate_config do
    # Return :ok or {:error, reason}
    :ok
  end

  @impl true
  def chat_completion(_messages, _options) do
    # Return {:ok, response_text} or {:error, reason}
    {:ok, "Hello!"}
  end

  @impl true
  def supports_images?, do: true
end
```

Then configure it:
```elixir
config :claude, :llm, %{
  provider: Claude.LLM.Providers.MyProvider,
  # ... other options
}
```

```
Claude.Supervisor
├── Claude.RateLimiter (GenServer)
├── Claude.UserCache (ETS)
├── Claude.MemberCache (ETS)
└── Nostrum.Bot
    └── Claude.MessageConsumer
```
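The tree above maps onto a standard OTP supervision setup. A hypothetical sketch of what `lib/claude/application.ex` might look like follows; the child names come from the tree, but the exact `Nostrum.Bot` options (and the `Claude.Config.discord_token/0` helper) vary by Nostrum version and are assumptions, not the actual source:

```elixir
defmodule Claude.Application do
  use Application

  @impl true
  def start(_type, _args) do
    children = [
      # Per-user rate-limiting state (GenServer)
      Claude.RateLimiter,
      # ETS-backed caches
      Claude.UserCache,
      Claude.MemberCache,
      # Nostrum supervises the gateway connection and dispatches
      # events to Claude.MessageConsumer. Option keys are assumed here.
      {Nostrum.Bot,
       %{
         consumer: Claude.MessageConsumer,
         intents: [:guilds, :guild_messages, :message_content],
         wrapped_token: fn -> Claude.Config.discord_token() end
       }}
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: Claude.Supervisor)
  end
end
```

With `:one_for_one`, a crash in any cache or the rate limiter restarts only that child, leaving the gateway connection untouched.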
Run the test suite with `mix test` and generate documentation with `mix docs`.

Planned:

- `/image` command with fal.ai integration
- File-based logging with rotation
- Guild-specific configuration
- Conversation memory persistence
AGPLv3