feat: add MiniMax Chat node with M2.7 as default model #12924
Changes from all commits: acf25ee, c01e019, 1ac73e8, 8f84fde
comfy_api_nodes/nodes_minimax.py:

```python
# @@ -5,6 +5,10 @@
from comfy_api.latest import IO, ComfyExtension
from comfy_api_nodes.apis.minimax import (
    MiniMaxChatMessage,
    MiniMaxChatModel,
    MiniMaxChatRequest,
    MiniMaxChatResponse,
    MinimaxFileRetrieveResponse,
    MiniMaxModel,
    MinimaxTaskResultResponse,
)

# @@ -437,6 +441,118 @@ async def execute(
        return IO.NodeOutput(await download_url_to_video_output(file_url))
```
```python
class MinimaxChatNode(IO.ComfyNode):
    """
    Node to generate text responses using MiniMax language models.
    """

    @classmethod
    def define_schema(cls) -> IO.Schema:
        return IO.Schema(
            node_id="MinimaxChatNode",
            display_name="MiniMax Chat",
            category="api node/text/MiniMax",
            description="Generate text responses using MiniMax language models (MiniMax-M2.7).",
            inputs=[
                IO.String.Input(
                    "prompt",
                    default="",
                    multiline=True,
                    tooltip="Text prompt for the model to respond to.",
                ),
                IO.Combo.Input(
                    "model",
                    options=MiniMaxChatModel,
                    default=MiniMaxChatModel.M2_7.value,
                    tooltip="The MiniMax model to use for text generation.",
                ),
                IO.String.Input(
                    "system_prompt",
                    multiline=True,
                    optional=True,
                    tooltip="Optional system instructions to guide the model's behavior.",
                ),
                IO.Int.Input(
                    "max_tokens",
                    default=4096,
                    min=1,
                    max=204800,
                    step=1,
                    tooltip="Maximum number of tokens to generate in the response.",
                    optional=True,
                ),
                IO.Float.Input(
                    "temperature",
                    default=0.7,
                    min=0.01,
                    max=1.0,
                    step=0.01,
                    tooltip="Controls randomness in the response. Higher values produce more creative output.",
                    optional=True,
                ),
            ],
            outputs=[
                IO.String.Output(),
            ],
            hidden=[
                IO.Hidden.auth_token_comfy_org,
                IO.Hidden.api_key_comfy_org,
                IO.Hidden.unique_id,
            ],
            is_api_node=True,
            price_badge=IO.PriceBadge(
                depends_on=IO.PriceBadgeDepends(widgets=["model"]),
                expr="""
                (
                    $m := widgets.model;
                    $contains($m, "highspeed") ? {
                        "type": "list_usd",
                        "usd": [0.00004, 0.0002],
                        "format": { "approximate": true, "separator": "-", "suffix": " per 1K tokens" }
                    }
                    : {
                        "type": "list_usd",
                        "usd": [0.0001, 0.0006],
                        "format": { "approximate": true, "separator": "-", "suffix": " per 1K tokens" }
                    }
                )
                """,
            ),
        )

    @classmethod
    async def execute(
        cls,
        prompt: str,
        model: str = MiniMaxChatModel.M2_7.value,
        system_prompt: Optional[str] = None,
        max_tokens: int = 4096,
        temperature: float = 0.7,
    ) -> IO.NodeOutput:
        validate_string(prompt, field_name="prompt")

        messages: list[MiniMaxChatMessage] = []
        if system_prompt:
            messages.append(MiniMaxChatMessage(role="system", content=system_prompt))
        messages.append(MiniMaxChatMessage(role="user", content=prompt))

        response = await sync_op(
            cls,
            ApiEndpoint(path="/proxy/minimax/chat/completions", method="POST"),
            response_model=MiniMaxChatResponse,
            data=MiniMaxChatRequest(
                model=MiniMaxChatModel(model).value,
                messages=messages,
                max_tokens=max_tokens,
                temperature=temperature,
            ),
        )

        if response.choices:
            return IO.NodeOutput(response.choices[0].message.content)
```
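The badge `expr` is JSONata evaluated by the frontend; its branching logic amounts to the following Python paraphrase (an illustration only, not the actual badge evaluator):

```python
def price_badge(model: str) -> dict:
    """Python paraphrase of the node's JSONata price-badge expression:
    'highspeed' models get one USD range, all other models the other."""
    usd = [0.00004, 0.0002] if "highspeed" in model else [0.0001, 0.0006]
    return {
        "type": "list_usd",
        "usd": usd,
        "format": {"approximate": True, "separator": "-", "suffix": " per 1K tokens"},
    }

badge = price_badge("MiniMax-M2.7-highspeed")
print(badge["usd"])  # [4e-05, 0.0002]
```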
|
Review comment on lines +539 to +552 of comfy_api_nodes/nodes_minimax.py (repository: Comfy-Org/ComfyUI):

Set `reasoning_split` so model reasoning is not embedded in the returned content.

By default, MiniMax's OpenAI-compatible chat completions endpoint embeds the model's reasoning directly in `choices[0].message.content`, wrapped in `<think>...</think>` tags. Passing `reasoning_split: true` in the request body returns the reasoning separately in `choices[0].message.reasoning_content`, leaving `content` as the final answer only. As written, the node returns `message.content` verbatim, so users would see the raw `<think>` block, and `MiniMaxChatRequest` does not currently define this field.

Suggested fix:

```diff
 # comfy_api_nodes/apis/minimax.py
 class MiniMaxChatRequest(BaseModel):
     model: str = Field(..., description='ID of the model to use.')
     messages: list[MiniMaxChatMessage] = Field(..., description='A list of messages comprising the conversation.')
     max_tokens: Optional[int] = Field(None, description='The maximum number of tokens to generate.')
     temperature: Optional[float] = Field(
         None,
         description='Sampling temperature. Must be between 0 (exclusive) and 1 (inclusive).',
         gt=0.0,
         le=1.0,
     )
+    reasoning_split: Optional[bool] = Field(
+        None,
+        description='Return reasoning separately from content for the OpenAI-compatible MiniMax API.',
+    )
```

```diff
 # comfy_api_nodes/nodes_minimax.py
         data=MiniMaxChatRequest(
             model=MiniMaxChatModel(model).value,
             messages=messages,
             max_tokens=max_tokens,
             temperature=temperature,
+            reasoning_split=True,
         ),
```
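Until `reasoning_split` is wired through, a defensive client-side strip of the embedded reasoning block is one possible stopgap (a sketch; `strip_think_tags` is a hypothetical helper, not part of this PR):

```python
import re

def strip_think_tags(content: str) -> str:
    """Remove the <think>...</think> reasoning block that MiniMax's
    OpenAI-compatible endpoint embeds in message content by default."""
    # re.DOTALL lets the reasoning span multiple lines; non-greedy match
    # stops at the first closing tag
    return re.sub(r"<think>.*?</think>", "", content, flags=re.DOTALL).strip()

print(strip_think_tags("<think>chain of thought...</think>The answer is 42."))
# → The answer is 42.
```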
```python
        raise RuntimeError("No response generated by MiniMax model.")


class MinimaxExtension(ComfyExtension):
    @override
    async def get_node_list(self) -> list[type[IO.ComfyNode]]:
# @@ -445,6 +561,7 @@ async def get_node_list(self) -> list[type[IO.ComfyNode]]:
            MinimaxImageToVideoNode,
            # MinimaxSubjectToVideoNode,
            MinimaxHailuoVideoNode,
            MinimaxChatNode,
        ]
```
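The message-assembly step in `execute` (optional system message first, then the user prompt) can be sketched with plain dicts standing in for the `MiniMaxChatMessage` model (a simplification for illustration; the real node uses Pydantic models):

```python
from typing import Optional

def build_messages(prompt: str, system_prompt: Optional[str] = None) -> list[dict]:
    """Assemble OpenAI-compatible chat messages the way the node does:
    an optional system message, followed by the user prompt."""
    messages: list[dict] = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": prompt})
    return messages

print(build_messages("Hello", system_prompt="Be terse"))
# → [{'role': 'system', 'content': 'Be terse'}, {'role': 'user', 'content': 'Hello'}]
```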
New test file (@@ -0,0 +1,27 @@):

```python
import pytest
from comfy_api_nodes.apis.minimax import MiniMaxChatModel


class TestMiniMaxChatModel:
    def test_m27_in_model_list(self):
        """MiniMax-M2.7 should be available in the chat model enum."""
        assert MiniMaxChatModel.M2_7.value == 'MiniMax-M2.7'

    def test_m27_highspeed_in_model_list(self):
        """MiniMax-M2.7-highspeed should be available in the chat model enum."""
        assert MiniMaxChatModel.M2_7_highspeed.value == 'MiniMax-M2.7-highspeed'

    def test_m27_is_first_in_enum(self):
        """M2.7 should appear before older models in the enum."""
        members = list(MiniMaxChatModel)
        assert members[0] == MiniMaxChatModel.M2_7
        assert members[1] == MiniMaxChatModel.M2_7_highspeed

    def test_legacy_models_still_available(self):
        """Previous M2.5 models should still be available."""
        assert MiniMaxChatModel.M2_5.value == 'MiniMax-M2.5'
        assert MiniMaxChatModel.M2_5_highspeed.value == 'MiniMax-M2.5-highspeed'

    def test_total_model_count(self):
        """Should have 4 chat models total (M2.7, M2.7-highspeed, M2.5, M2.5-highspeed)."""
        assert len(MiniMaxChatModel) == 4
```
Review comment on the price badge expression:

Update price badge values to match current MiniMax API pricing.

The per-1K token amounts are materially below official rates. MiniMax's current pay-as-you-go pricing is $0.30/$1.20 per 1M tokens (input/output) for M2.5 and $0.61/$2.44 per 1M tokens for M2.5-highspeed, which translates to roughly $0.0003/$0.0012 and $0.0006/$0.0024 per 1K tokens respectively.

Suggested patch:

```diff
 (
     $m := widgets.model;
     $contains($m, "highspeed") ? {
         "type": "list_usd",
-        "usd": [0.00004, 0.0002],
+        "usd": [0.0006, 0.0024],
         "format": { "approximate": true, "separator": "-", "suffix": " per 1K tokens" }
     }
     : {
         "type": "list_usd",
-        "usd": [0.0001, 0.0006],
+        "usd": [0.0003, 0.0012],
         "format": { "approximate": true, "separator": "-", "suffix": " per 1K tokens" }
     }
 )
```
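The per-1M to per-1K conversion behind the suggested numbers is simple arithmetic (using the pay-as-you-go rates quoted in the review, which may change):

```python
def per_1k(usd_per_million: float) -> float:
    """Convert a USD price per 1M tokens to the per-1K figure the badge uses."""
    return round(usd_per_million / 1000, 6)

# M2.5: $0.30 input / $1.20 output per 1M tokens
print(per_1k(0.30), per_1k(1.20))  # 0.0003 0.0012
# M2.5-highspeed: $0.61 input / $2.44 output per 1M tokens
# (the suggested badge rounds these to 0.0006 / 0.0024)
print(per_1k(0.61), per_1k(2.44))  # 0.00061 0.00244
```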