A Next.js chatbot starter with InsForge auth, database, storage, and optional Vercel AI Gateway support.
Features · Demo · Quick Launch · Run locally · Vercel AI Gateway · Deploy to Vercel · First Try
## Features

- Next.js App Router
- Streaming chat UI with persisted history and file attachments
- InsForge auth, database, storage, and AI
- Optional routing through Vercel AI Gateway
- Multi-provider model selection with `provider/modelId` identifiers
- shadcn/ui components
- Styling with Tailwind CSS
## Demo

Live demo: demochatbot.insforge.site
The starter includes a simple first-try chat experience, persisted history, file uploads, authentication, and optional routing through the Vercel AI Gateway.
## Quick Launch

If you want the fastest path, use the InsForge CLI and follow the prompts:

```bash
npx @insforge/cli create
```

From there:
- Choose the chatbot template
- Create or connect your InsForge project
- Let the CLI set up the project files
- Choose to deploy with InsForge automatically from the guided flow
Use the local setup below if you want to inspect the repo, edit environment variables manually, or control the setup step by step.

## Run locally

1. Clone the repository and move into the chatbot template:

   ```bash
   git clone https://github.com/InsForge/insforge-templates.git
   cd insforge-templates/chatbot
   ```
2. Install dependencies:

   ```bash
   npm install
   ```
3. Go to the InsForge dashboard, create a project, and click Connect → CLI to get the link command:

   ```bash
   npx @insforge/cli link --project-id <your-project-id>
   ```
4. Copy the example environment file:

   ```bash
   cp .env.example .env.local
   ```
5. Fill in the required values (find these in the InsForge dashboard under Connect → API Keys):

   ```bash
   NEXT_PUBLIC_INSFORGE_URL=https://your-project.region.insforge.app
   NEXT_PUBLIC_INSFORGE_ANON_KEY=your-public-anon-key
   NEXT_PUBLIC_APP_URL=http://localhost:3000
   ```
6. Apply the included schema and seed data to your InsForge project. You can either ask your agent using this prompt:

   ```text
   help me create table and seed data from migrations/db_init.sql
   ```

   Or run the command directly:

   ```bash
   npx @insforge/cli db import migrations/db_init.sql
   ```
   This migration creates the chat tables and also inserts the `chat-attachments` storage bucket record used by file uploads.

7. Start the dev server:

   ```bash
   npm run dev
   ```
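A local run assumes all three `NEXT_PUBLIC_` values from step 5 are set. As a minimal sketch (the `loadInsforgeConfig` helper below is illustrative, not part of the template, which reads `process.env` directly), a fail-fast check could look like this:

```typescript
// Illustrative only: validate the three required environment variables at
// startup and fail with a clear message, instead of surfacing a confusing
// runtime error deeper in the app.
interface InsforgeConfig {
  url: string;
  anonKey: string;
  appUrl: string;
}

function loadInsforgeConfig(env: Record<string, string | undefined>): InsforgeConfig {
  const required = [
    "NEXT_PUBLIC_INSFORGE_URL",
    "NEXT_PUBLIC_INSFORGE_ANON_KEY",
    "NEXT_PUBLIC_APP_URL",
  ] as const;

  // Collect every missing variable so the error names all of them at once.
  const missing = required.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(", ")}`);
  }

  return {
    url: env.NEXT_PUBLIC_INSFORGE_URL!,
    anonKey: env.NEXT_PUBLIC_INSFORGE_ANON_KEY!,
    appUrl: env.NEXT_PUBLIC_APP_URL!,
  };
}
```

Calling a check like `loadInsforgeConfig(process.env)` in a shared config module surfaces a missing key at boot rather than mid-request.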
## Vercel AI Gateway

To route AI requests through the Vercel AI Gateway instead of InsForge AI:
1. Enable the provider in `.env.local`:

   ```bash
   AI_PROVIDER=vercel
   AI_GATEWAY_API_KEY=your-gateway-api-key
   ```
   On Vercel deployments, `AI_GATEWAY_API_KEY` is optional: the gateway authenticates automatically via OIDC.

2. Add provider credentials for the models you want to use. To get started, `OPENAI_API_KEY` is enough for `openai/*` models:

   ```bash
   OPENAI_API_KEY=sk-...
   ```
If you want to use other providers from the model picker, add the matching provider key in your environment or configure it in Vercel AI Gateway settings. Selecting a model whose provider key is missing will return a clear error in the chat UI — there is no silent fallback to another provider.
Alternatively, if deploying on Vercel, you can configure provider keys in the Vercel dashboard under AI Gateway settings instead of using environment variables.
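The no-silent-fallback behavior above can be sketched as a lookup that fails loudly when a provider key is absent. The provider-to-variable mapping and the `resolveProviderKey` helper below are illustrative assumptions, not the template's actual code:

```typescript
// Illustrative sketch: map the provider prefix of a `provider/modelId`
// string to the environment variable holding its API key, and throw a
// descriptive error when that key is missing instead of silently
// falling back to another provider.
const PROVIDER_KEY_VARS: Record<string, string> = {
  openai: "OPENAI_API_KEY",
  anthropic: "ANTHROPIC_API_KEY",       // assumed variable name
  google: "GOOGLE_GENERATIVE_AI_API_KEY", // assumed variable name
};

function resolveProviderKey(
  modelId: string,
  env: Record<string, string | undefined>,
): string {
  const provider = modelId.split("/")[0];
  const varName = PROVIDER_KEY_VARS[provider];
  if (!varName) {
    throw new Error(`Unknown provider in model id: ${modelId}`);
  }
  const key = env[varName];
  if (!key) {
    // Surface the missing key as a clear error for the chat UI to display.
    throw new Error(`Missing ${varName} for model ${modelId}`);
  }
  return key;
}
```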
Current limitations:

- PDF file parsing (`fileParser`) is InsForge-only. PDFs are forwarded as base64 file parts, but results depend on the model. A toast warning is shown when this applies.
- InsForge is still required for auth, database, storage, and file uploads regardless of the AI provider.
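Forwarding a PDF as a base64 file part, as described in the limitation above, roughly amounts to the following. The part shape loosely follows common AI SDK message conventions and is an assumption, not the template's exact code:

```typescript
// Illustrative sketch: wrap raw PDF bytes as a base64 "file" part so it
// can be forwarded to the model. Whether the model can actually read the
// PDF depends on the model itself.
interface FilePart {
  type: "file";
  mediaType: string;
  data: string; // base64-encoded file contents
}

function toPdfFilePart(bytes: Uint8Array): FilePart {
  return {
    type: "file",
    mediaType: "application/pdf",
    data: Buffer.from(bytes).toString("base64"),
  };
}
```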
## Deploy to Vercel

After cloning the repo and running the starter locally, you can deploy it on Vercel:
- Set `NEXT_PUBLIC_INSFORGE_URL`
- Set `NEXT_PUBLIC_INSFORGE_ANON_KEY`
- Deploy the project
- In Vercel, open your project, go to `Settings` → `Environment Variables`, and set `NEXT_PUBLIC_APP_URL` to your deployed app URL
- Redeploy the project
- In the InsForge dashboard, open `Authentication` → `General` → `Allowed Redirect URLs`, then add your deployed callback URL (for example `https://your-project.vercel.app/auth/callback`)
## First Try

The empty chat state starts with a welcome heading and three beginner-friendly starter prompts ("Explain a concept", "Improve my writing", "Brainstorm next steps") so you can try the template quickly on localhost or a cloud preview. Selecting a starter prompt fills the input first, so you can adjust it before sending. The prompts are defined in `components/chat-empty-state.tsx` and are easy to replace with your own product's use cases.
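Replacing the starter prompts amounts to editing a small array. As a sketch only (the names and shape below are illustrative; check `components/chat-empty-state.tsx` for the actual structure):

```typescript
// Illustrative sketch of product-specific starter prompts; the real array
// lives in components/chat-empty-state.tsx and may differ in shape.
interface StarterPrompt {
  title: string;  // short label shown on the empty-state card
  prompt: string; // text placed into the input when selected
}

const STARTER_PROMPTS: StarterPrompt[] = [
  { title: "Explain a concept", prompt: "Explain how vector embeddings work, simply." },
  { title: "Improve my writing", prompt: "Rewrite this paragraph to be clearer: " },
  { title: "Brainstorm next steps", prompt: "Help me brainstorm next steps for my project." },
];

// Selecting a prompt only fills the input, so the user can edit before sending.
function fillInput(selected: StarterPrompt): string {
  return selected.prompt;
}
```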
