
[NemoClaw][MacOS][Brev] [tag v0.0.1] Base URL example is misleading for Other OpenAI-compatible endpoint #1171

@caroline-xuan

Description


[Description]
[tag v0.0.1] In the nemoclaw onboard wizard, with a correct OpenRouter API key already specified:

[3/7] Configuring inference (NIM)
──────────────────────────────────────────────────

Inference options:
1) NVIDIA Endpoints (recommended)
2) OpenAI
3) Other OpenAI-compatible endpoint
4) Anthropic
5) Other Anthropic-compatible endpoint
6) Google Gemini
7) Local Ollama (localhost:11434)

Choose [1]: 3
OpenAI-compatible base URL (e.g., https://openrouter.ai/api/v1): https://openrouter.ai/api/v1
Other OpenAI-compatible endpoint model []: qwen/qwen3-next-80b-a3b-instruct:free
Other OpenAI-compatible endpoint endpoint validation failed.
Responses API: HTTP 429: Provider returned error | Chat Completions API: HTTP 429: Provider returned error
Please enter a different Other OpenAI-compatible endpoint model name.

[Environment]

| Item | Version / detail |
| --- | --- |
| Device | macOS |
| Node.js | v22.x |
| OpenShell CLI | 0.0.16 |
| NemoClaw | v0.1.0 |
| OpenClaw | 2026.3.11 (29dc654) |

[Steps to Reproduce]

1. Run nemoclaw onboard and select 3) Other OpenAI-compatible endpoint for inference.
2. At the prompt "OpenAI-compatible base URL (e.g., https://openrouter.ai/api/v1):", enter https://openrouter.ai/api/v1 exactly as the hint suggests, along with a valid API key.
3. At the prompt "Other OpenAI-compatible endpoint model []:", enter qwen/qwen3-next-80b-a3b-instruct:free.


[Expected Result]
The base URL hint should match what the wizard actually accepts: scheme plus hostname only (https://openrouter.ai), without the /api/v1 path.
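Alternatively, the wizard could accept both forms by normalizing the base URL before probing, stripping a trailing /api/v1 or /v1 segment. This is a hypothetical sketch of such normalization; the function name and behavior are assumptions, not NemoClaw's actual implementation:

```python
def normalize_base_url(url: str) -> str:
    """Strip a trailing /api/v1 or /v1 segment (hypothetical sketch)."""
    url = url.rstrip("/")
    for suffix in ("/api/v1", "/v1"):
        if url.endswith(suffix):
            return url[: -len(suffix)]
    return url

# Both user inputs would then resolve to the same host-only base:
print(normalize_base_url("https://openrouter.ai/api/v1"))  # https://openrouter.ai
print(normalize_base_url("https://openrouter.ai"))         # https://openrouter.ai
```

With this in place, the hint shown in the prompt and the value the wizard stores could no longer disagree.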

[Actual Result]

Choose [1]: 3
OpenAI-compatible base URL (e.g., https://openrouter.ai/api/v1): https://openrouter.ai/api/v1
Other OpenAI-compatible endpoint model []: qwen/qwen3-next-80b-a3b-instruct:free
Other OpenAI-compatible endpoint endpoint validation failed.
Responses API: HTTP 429: Provider returned error | Chat Completions API: HTTP 429: Provider returned error
Please enter a different Other OpenAI-compatible endpoint model name.

Other OpenAI-compatible endpoint model []:

# start over and fill the URL without /api/v1
nemoclaw onboard

NemoClaw Onboarding

[1/7] Preflight checks
──────────────────────────────────────────────────
✓ Docker is running
✓ Container runtime: colima
✓ openshell CLI: openshell 0.0.16
✓ Port 8080 already owned by healthy NemoClaw runtime (OpenShell gateway)
✓ Port 18789 already owned by healthy NemoClaw runtime (NemoClaw dashboard)
✓ Apple GPU detected: Apple M4 Pro (20 cores), 49152 MB unified memory
ⓘ NIM requires NVIDIA GPU — will use cloud inference
Reusing healthy NemoClaw gateway.

[3/7] Configuring inference (NIM)
──────────────────────────────────────────────────

Inference options:
1) NVIDIA Endpoints (recommended)
2) OpenAI
3) Other OpenAI-compatible endpoint
4) Anthropic
5) Other Anthropic-compatible endpoint
6) Google Gemini
7) Local Ollama (localhost:11434)

Choose [1]: 3
OpenAI-compatible base URL (e.g., https://openrouter.ai/api/v1): https://openrouter.ai
Other OpenAI-compatible endpoint model []: qwen/qwen3-next-80b-a3b-instruct:free
Responses API available — OpenClaw will use openai-responses.
Using Other OpenAI-compatible endpoint with model: qwen/qwen3-next-80b-a3b-instruct:free

[4/7] Setting up inference provider
──────────────────────────────────────────────────
✓ Active gateway set to 'nemoclaw'
✓ Created provider compatible-endpoint
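Separately, the HTTP 429 responses in the failing run come from the provider (rate limiting, common on OpenRouter :free models), so a validation failure may be transient even with a correct base URL. A hypothetical retry-with-backoff sketch the validator could apply; probe_with_backoff and its signature are assumptions, not NemoClaw code:

```python
import time

def probe_with_backoff(probe, retries=3, base_delay=1.0):
    """Call `probe` (returns an HTTP status code), retrying on 429.

    Hypothetical sketch: doubles the wait between attempts and gives up
    after `retries` tries, returning the last status seen.
    """
    status = probe()
    for attempt in range(retries - 1):
        if status != 429:
            break
        time.sleep(base_delay * (2 ** attempt))
        status = probe()
    return status

# Example with a fake probe that rate-limits once, then succeeds:
responses = iter([429, 200])
print(probe_with_backoff(lambda: next(responses), base_delay=0.0))  # 200
```

Retrying on 429 would let the wizard distinguish "wrong URL/model" from "temporarily rate-limited", rather than immediately asking for a different model name.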



[NVB#6035277]

Metadata

Labels:
- Getting Started: setup, installation, or onboarding issues
- NV QA: bugs found by the NVIDIA QA Team
- Platform: macOS: support for macOS
- UAT: issues flagged for User Acceptance Testing
- bug: something isn't working
- documentation: improvements or additions to documentation