
feat: add multi-provider AI support (Groq, OpenAI, Anthropic, Google)#35

Draft
dhairyashiil wants to merge 1 commit into main from devin/1773340910-multi-provider-ai

Conversation

@dhairyashiil
Member

feat: add multi-provider AI support (Groq, OpenAI, Anthropic, Google)

Summary

Replaces the hardcoded Groq provider in apps/chat with a dynamic, env-driven provider selector. Two env vars now control AI behavior:

  • AI_PROVIDER — groq | openai | anthropic | google (default: groq)
  • AI_MODEL — optional model override (each provider has a sensible default)

The Vercel AI SDK's provider-agnostic LanguageModel interface means only lib/ai-provider.ts needs to know which provider is active — the rest of the codebase (streamText, tools, etc.) is unchanged.
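
The registry-plus-switch shape described above can be sketched as follows. This is illustrative only: the real lib/ai-provider.ts wires in the @ai-sdk/* factories, which are stubbed here so the sketch runs without API keys, and only the Groq default model is stated in this PR — the other defaults are placeholders, not the actual values.

```typescript
type Provider = "groq" | "openai" | "anthropic" | "google";

interface ModelStub {
  provider: Provider;
  modelId: string;
}

// Stub factory standing in for createGroq()(id), createAnthropic()(id), etc.
const stubFactory =
  (provider: Provider) =>
  (modelId: string): ModelStub => ({ provider, modelId });

const PROVIDER_CONFIG: Record<
  Provider,
  { defaultModel: string; factory: (id: string) => ModelStub }
> = {
  groq: { defaultModel: "llama-3.3-70b-versatile", factory: stubFactory("groq") },
  openai: { defaultModel: "openai-default-placeholder", factory: stubFactory("openai") },
  anthropic: { defaultModel: "anthropic-default-placeholder", factory: stubFactory("anthropic") },
  google: { defaultModel: "google-default-placeholder", factory: stubFactory("google") },
};

function getModel(provider: Provider, override?: string): ModelStub {
  // AI_MODEL override wins; otherwise fall back to the provider default.
  const modelId = override ?? PROVIDER_CONFIG[provider].defaultModel;
  switch (provider) {
    case "groq":
    case "openai":
    case "anthropic":
    case "google":
      return PROVIDER_CONFIG[provider].factory(modelId);
    default: {
      // Exhaustiveness check: fails to compile if a Provider variant is missed.
      const unreachable: never = provider;
      throw new Error(`Unsupported AI_PROVIDER: ${unreachable}`);
    }
  }
}
```

Because getModel() is the only place that branches on the provider, adding a fifth provider is a one-entry change to the registry plus a new case label.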

Files changed (6):

  • apps/chat/package.json: Added @ai-sdk/anthropic and @ai-sdk/google
  • apps/chat/lib/ai-provider.ts: Rewrote with PROVIDER_CONFIG registry + getModel() exhaustive switch
  • apps/chat/lib/agent.ts: Broadened isAIToolCallError / isAIRateLimitError for all providers
  • apps/chat/lib/env.ts: Validates AI_PROVIDER value and that the selected provider's API key is present
  • apps/chat/.env.example: Updated AI/LLM section with all provider options
  • apps/chat/README.md: Updated env var table and architecture description
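
The env.ts validation described above could look roughly like this. The env-var names per provider (GROQ_API_KEY, etc.) are assumptions following common SDK conventions — the PR does not list them:

```typescript
const VALID_PROVIDERS = ["groq", "openai", "anthropic", "google"] as const;
type Provider = (typeof VALID_PROVIDERS)[number];

// Assumed per-provider key names; adjust to whatever the app actually reads.
const REQUIRED_KEY: Record<Provider, string> = {
  groq: "GROQ_API_KEY",
  openai: "OPENAI_API_KEY",
  anthropic: "ANTHROPIC_API_KEY",
  google: "GOOGLE_GENERATIVE_AI_API_KEY",
};

function validateAIEnv(env: Record<string, string | undefined>): Provider {
  const provider = env.AI_PROVIDER ?? "groq"; // default per the PR summary
  if (!(VALID_PROVIDERS as readonly string[]).includes(provider)) {
    throw new Error(
      `AI_PROVIDER must be one of: ${VALID_PROVIDERS.join(", ")} (got "${provider}")`,
    );
  }
  const keyName = REQUIRED_KEY[provider as Provider];
  if (!env[keyName]) {
    throw new Error(`${keyName} must be set when AI_PROVIDER=${provider}`);
  }
  return provider as Provider;
}
```

Failing fast at startup like this surfaces a missing key immediately, instead of as a mid-conversation provider error.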

Review & Testing Checklist for Human

  • Default model changed: The Groq default model changed from openai/gpt-oss-120b to llama-3.3-70b-versatile. Verify this is the intended default for existing deployments, or whether they should set AI_MODEL=openai/gpt-oss-120b explicitly.
  • "function_call" pattern in isAIToolCallError: This substring is fairly generic. Confirm it won't false-positive on non-error messages that mention "function_call" in passing.
  • Rate limit detection loosened: isAIRateLimitError now returns true for any 429 status code, whereas previously it also required "retry" in the error message. Verify this won't cause unintended error handling behavior.
  • Test with at least one non-Groq provider (e.g. set AI_PROVIDER=openai with a real API key) to confirm end-to-end streaming works through the agent.
  • Version pinning: New deps use ^ ranges (^3.0.58, ^3.0.43) while existing @ai-sdk/groq and @ai-sdk/openai are pinned exactly. Confirm this inconsistency is acceptable.
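
For the rate-limit concern above, a minimal sketch of the loosened predicate — the helper name is from the PR, but the extraction logic here is assumed for illustration:

```typescript
function isAIRateLimitError(err: unknown): boolean {
  const toMsg = (v: unknown): string =>
    v instanceof Error ? v.message.toLowerCase() : "";
  const statusOf = (v: unknown): number | undefined =>
    typeof v === "object" && v !== null
      ? (v as { statusCode?: number }).statusCode
      : undefined;

  const cause = err instanceof Error ? (err as { cause?: unknown }).cause : undefined;
  const hasRateLimit =
    toMsg(err).includes("rate limit") || toMsg(cause).includes("rate limit");
  const status429 = statusOf(err) === 429 || statusOf(cause) === 429;

  // Pre-PR behavior additionally required "retry" in the message alongside
  // a 429; the new version accepts any 429, which is the checklist concern:
  // a 429 from an unrelated upstream would now be reported as an AI rate limit.
  return hasRateLimit || status429;
}
```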

Notes

  • Typecheck passes (bun run typecheck:chat). Pre-existing biome lint errors in packages/cli/ are unrelated to this PR.
  • No runtime testing was performed (no API keys available in the dev environment).
  • Link to Devin session
  • Requested by: @dhairyashiil

Replace the hardcoded Groq provider with a dynamic, env-driven provider
selector supporting Groq, OpenAI, Anthropic, and Google Gemini, controlled
by AI_PROVIDER and AI_MODEL env vars.

- Rewrite lib/ai-provider.ts with provider registry and getModel() switch
- Install @ai-sdk/anthropic and @ai-sdk/google packages
- Generalize error helpers in lib/agent.ts for all providers
- Add AI_PROVIDER validation in lib/env.ts
- Update .env.example and README.md documentation
@devin-ai-integration
Contributor

🤖 Devin AI Engineer

I'll be helping with this pull request! Here's what you should know:

✅ I will automatically:

  • Address comments on this PR that start with 'DevinAI' or '@devin'.
  • Look at CI failures and help fix them

Note: I can only respond to comments from users who have write access to this repository.

⚙️ Control Options:

  • Disable automatic comment and CI monitoring

@vercel

vercel bot commented Mar 12, 2026

Deployment failed with the following error:

You don't have permission to create a Preview Deployment for this Vercel project: companion-chat.

View Documentation: https://vercel.com/docs/accounts/team-members-and-roles

@dhairyashiil dhairyashiil marked this pull request as ready for review March 12, 2026 19:27
@dhairyashiil dhairyashiil marked this pull request as draft March 12, 2026 19:28

@cubic-dev-ai cubic-dev-ai bot left a comment


1 issue found across 7 files

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="apps/chat/lib/agent.ts">

<violation number="1" location="apps/chat/lib/agent.ts:888">
P2: Treating any 429 as an AI token limit can misclassify unrelated 429 errors and return an incorrect user message.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

```diff
   (err as { statusCode?: number }).statusCode === 429 ||
   (cause as { statusCode?: number } | undefined)?.statusCode === 429;
-  return hasRateLimit || (status429 && (msg.includes("retry") || causeMsg.includes("retry")));
+  return hasRateLimit || status429;
```

@cubic-dev-ai cubic-dev-ai bot Mar 12, 2026


P2: Treating any 429 as an AI token limit can misclassify unrelated 429 errors and return an incorrect user message.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At apps/chat/lib/agent.ts, line 888:

<comment>Treating any 429 as an AI token limit can misclassify unrelated 429 errors and return an incorrect user message.</comment>

<file context>

```diff
@@ -858,30 +858,34 @@ export function isAIToolCallError(err: unknown): boolean {
     (err as { statusCode?: number }).statusCode === 429 ||
     (cause as { statusCode?: number } | undefined)?.statusCode === 429;
-  return hasRateLimit || (status429 && (msg.includes("retry") || causeMsg.includes("retry")));
+  return hasRateLimit || status429;
 }
```

</file context>
