
feat: add MiniMax provider support for voice agent LLM #15590

Open

octo-patch wants to merge 4 commits into NVIDIA-NeMo:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

  • Add MiniMaxService LLM provider class that connects to MiniMax's OpenAI-compatible API
  • Support MiniMax-M2.7 (default) and MiniMax-M2.7-highspeed models
  • Add minimax as a new backend option in get_llm_service_from_config
  • Add example config llm_configs/minimax.yaml for voice agent setup
  • Add unit tests for MiniMaxService and the factory function

Changes

nemo/agents/voice_agent/pipecat/services/nemo/llm.py

  • New MiniMaxService(OpenAILLMService) class (see the sketch after this list):
    • Reads API key from MINIMAX_API_KEY environment variable (or explicit api_key)
    • Default base URL: https://api.minimax.io/v1 (overseas endpoint)
    • Temperature must be in range (0.0, 1.0] — MiniMax does not support temperature=0
  • Updated get_llm_service_from_config to handle type: minimax
  • Updated assertion to include "minimax" in supported backends
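
A minimal sketch of what the behavior described above could look like, assuming Pipecat's OpenAILLMService base class (the import path matches the one patched in the PR's tests); the exact constructor signature, defaults, and how temperature is forwarded in the actual PR may differ:

import os

from pipecat.services.openai.llm import OpenAILLMService


class MiniMaxService(OpenAILLMService):
    """Sketch of an LLM service targeting MiniMax's OpenAI-compatible endpoint."""

    def __init__(
        self,
        *,
        api_key: str | None = None,
        base_url: str = "https://api.minimax.io/v1",
        model: str = "MiniMax-M2.7",
        temperature: float = 1.0,
        **kwargs,
    ):
        # Fall back to the MINIMAX_API_KEY environment variable when no explicit key is given.
        api_key = api_key or os.environ.get("MINIMAX_API_KEY")
        if not api_key:
            raise ValueError("MiniMax API key required: pass api_key or set MINIMAX_API_KEY.")

        # MiniMax rejects temperature=0, so enforce the (0.0, 1.0] range up front.
        if not 0.0 < temperature <= 1.0:
            raise ValueError("temperature must be in (0.0, 1.0] for MiniMax models.")

        # How temperature is forwarded depends on the base class and is omitted here.
        super().__init__(api_key=api_key, base_url=base_url, model=model, **kwargs)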

examples/voice_agent/server/server_configs/llm_configs/minimax.yaml

New config file for using MiniMax as the voice agent LLM backend.

examples/voice_agent/tests/test_minimax_llm.py

Unit tests covering instantiation, env-var key resolution, missing key error, default URLs, supported models, and factory function routing.
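
As an illustration of the missing-key case, a test along these lines would fit the description above (the module path is inferred from the file layout, and ValueError is an assumption about the exact exception type):

from unittest.mock import patch

import pytest

from nemo.agents.voice_agent.pipecat.services.nemo.llm import MiniMaxService


def test_missing_api_key_raises(monkeypatch):
    # Without MINIMAX_API_KEY in the environment and no explicit api_key,
    # constructing the service should fail with a clear error.
    monkeypatch.delenv("MINIMAX_API_KEY", raising=False)
    with patch("pipecat.services.openai.llm.OpenAILLMService.__init__", return_value=None):
        with pytest.raises(ValueError):
            MiniMaxService()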

API Reference: https://platform.minimax.io/docs/api-reference/text-openai-api

Usage

Set the MINIMAX_API_KEY environment variable, then configure the voice agent server:

llm:
  type: minimax
  model: "MiniMax-M2.7"
  model_config: "./server_configs/llm_configs/minimax.yaml"
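
For orientation, a rough sketch of driving the factory from Python follows; get_llm_service_from_config appears in the PR, but OmegaConf, the exact config schema, and whether the factory takes the full server config or only the llm sub-config are assumptions:

import os

from omegaconf import OmegaConf

from nemo.agents.voice_agent.pipecat.services.nemo.llm import get_llm_service_from_config

# The service reads the key from the environment, so export it before starting the server.
assert os.environ.get("MINIMAX_API_KEY"), "export MINIMAX_API_KEY first"

cfg = OmegaConf.create(
    {
        "type": "minimax",
        "model": "MiniMax-M2.7",
        "model_config": "./server_configs/llm_configs/minimax.yaml",
    }
)

# type: minimax routes to MiniMaxService inside the factory.
llm_service = get_llm_service_from_config(cfg)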

@pzelasko requested a review from stevehuang52 on April 21, 2026 12:20
@pzelasko
Collaborator

@stevehuang52 can you take a look?

Review comment on nemo/agents/voice_agent/pipecat/services/nemo/llm.py:

    API documentation: https://platform.minimax.io/docs/api-reference/text-openai-api
    """

    SUPPORTED_MODELS = [

Collaborator

Does it work with MiniMax-M2.5 or other versions? We can probably drop this hardcoded SUPPORTED_MODELS if all MiniMax models support the same API.

Review comment on examples/voice_agent/tests/test_minimax_llm.py:

            }
        )
        with patch("pipecat.services.openai.llm.OpenAILLMService.__init__", return_value=None) as mock_init:
            svc = get_llm_service_from_config(cfg)
        call_kwargs = mock_init.call_args[1]
        assert call_kwargs.get("base_url") == custom_url

    def test_supported_models_list(self):

Collaborator

We can probably drop this test since the list of supported models may change frequently over time?

@svcnvidia-nemo-ci added the waiting-on-customer (Waiting on the original author to respond) label Apr 21, 2026
@pzelasko removed the Run CICD label Apr 28, 2026

Labels

community-request, waiting-on-customer (Waiting on the original author to respond)
