go-htmx-tailwind-llamacpp-minimal-chatbot

a go + htmx + tailwind based chatbot with as few dependencies as possible

Setup and Usage Guide

Project Structure

.
├── main.go          # Backend server
├── index.html       # Frontend UI
└── go.mod           # Go module file

Quick Start

Prerequisites

  • Go 1.21 or later installed
  • A llama.cpp server running at http://192.168.31.84:1337 (the address is configurable in main.go; see Customization below)

Running the Application

  1. Create a new directory for your project:

mkdir llama-chatbot
cd llama-chatbot

  2. Create the three files above (main.go, index.html, go.mod)

  3. Run the server:

go run main.go

  4. Open your browser and navigate to:

http://localhost:8080

How It Works

Backend (main.go)

  • Zero external dependencies - Uses only Go's standard library
  • HTTP Server: Runs on port 8080
  • Routes:
    • GET / - Serves the HTML frontend
    • POST /send - Handles chat messages, calls llama.cpp API, returns HTML fragments for HTMX
  • Conversation History: Maintains in-memory message history (resets on server restart)
  • OpenAI API Compatible: Sends requests to the llama.cpp server's /v1/chat/completions endpoint (see the sketch below)
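
Putting those pieces together, a minimal sketch of the backend is shown below. Type names, CSS classes, and error handling are illustrative and may differ from the actual main.go:

package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "html"
    "log"
    "net/http"
)

// Illustrative types mirroring the OpenAI chat schema.
type Message struct {
    Role    string `json:"role"`
    Content string `json:"content"`
}

type ChatRequest struct {
    Model    string    `json:"model"`
    Messages []Message `json:"messages"`
}

type ChatResponse struct {
    Choices []struct {
        Message Message `json:"message"`
    } `json:"choices"`
}

const llamaAPIURL = "http://192.168.31.84:1337" // llama.cpp server address

var history []Message // in-memory history; resets on restart

func handleSend(w http.ResponseWriter, r *http.Request) {
    log.Printf("%s %s", r.Method, r.URL.Path)
    history = append(history, Message{Role: "user", Content: r.FormValue("message")})

    body, _ := json.Marshal(ChatRequest{Model: "llama", Messages: history})
    resp, err := http.Post(llamaAPIURL+"/v1/chat/completions", "application/json", bytes.NewReader(body))
    if err != nil {
        http.Error(w, "llama.cpp unreachable: "+err.Error(), http.StatusBadGateway)
        return
    }
    defer resp.Body.Close()

    var chat ChatResponse
    if err := json.NewDecoder(resp.Body).Decode(&chat); err != nil || len(chat.Choices) == 0 {
        http.Error(w, "unexpected API response", http.StatusBadGateway)
        return
    }
    ai := chat.Choices[0].Message
    history = append(history, ai)

    // Return an HTML fragment; HTMX swaps it into the messages container.
    fmt.Fprintf(w, `<div class="p-2 rounded bg-gray-100">%s</div>`, html.EscapeString(ai.Content))
}

func main() {
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        http.ServeFile(w, r, "index.html")
    })
    http.HandleFunc("/send", handleSend)
    log.Println("Server starting on http://localhost:8080")
    log.Fatal(http.ListenAndServe(":8080", nil))
}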

Frontend (index.html)

  • HTMX: Handles form submission and DOM updates without page reload
  • Tailwind CSS: Used via CDN for styling (no build step needed)
  • User Flow:
    1. User types message and clicks Send
    2. HTMX POSTs form data to /send endpoint
    3. Backend processes message and calls llama.cpp
    4. Response HTML is returned and inserted into the DOM
    5. Input field is cleared automatically (the sketch after this list shows the HTMX attributes involved)
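
For reference, the HTMX attributes that drive this flow could look roughly like the following (shown as a Go raw string so the examples in this guide stay in one language; the markup and classes in the real index.html may differ):

// Sketch of the chat form in index.html. hx-post submits the form to
// /send, hx-target/hx-swap append the returned fragment to #messages,
// and hx-on::after-request resets the form, clearing the input (step 5).
const chatFormSketch = `
<form hx-post="/send" hx-target="#messages" hx-swap="beforeend"
      hx-on::after-request="this.reset()">
  <input type="text" name="message" placeholder="Type a message...">
  <button type="submit">Send</button>
</form>
<div id="messages"></div>`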

Key Features

  • Minimal Dependencies: Only Go stdlib + HTML/CSS/JS from CDN
  • No Build Step: HTML/CSS/JS work directly from CDN
  • Conversation Context: Full message history sent to llama.cpp for context-aware responses
  • Auto-scrolling: Messages container scrolls to the latest message
  • Error Handling: Errors are caught and displayed to the user (one possible pattern is sketched after this list)
  • Responsive Design: Works on mobile and desktop
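
On the error-handling point, one pattern that fits HTMX (a sketch, not necessarily the repo's exact approach) is to render failures as ordinary HTML fragments so they show up inline in the chat:

// Illustrative helper: return the error as a chat bubble so HTMX
// swaps it into the conversation like any other message.
// Requires "fmt", "html", and "net/http".
func writeErrorFragment(w http.ResponseWriter, msg string) {
    fmt.Fprintf(w, `<div class="p-2 text-red-600">Error: %s</div>`,
        html.EscapeString(msg))
}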

Customization

Change the llama.cpp Server Address

Edit the constant at the top of main.go:

const (
    llama_api_url = "http://192.168.31.84:1337"  // Change this
    port          = ":8080"
)

Change the Model Name

In main.go, modify the ChatRequest struct population:

reqBody := ChatRequest{
    Model:    "llama",  // Change this
    Messages: messages,
}

Change the Port

Edit the constant in main.go:

const (
    port = ":8080"  // Change this
)

Customize Styling

Edit the Tailwind classes in index.html. All of the classes used are standard utilities covered in the Tailwind CSS documentation.

Debugging

The server logs every HTTP request, so the console output already provides a basic trace:

2025/11/14 10:00:00 Server starting on http://localhost:8080
2025/11/14 10:00:05 GET /
2025/11/14 10:00:10 POST /send

If you need to debug API responses, add this to the handleSend function after the response is received:

log.Printf("API Response: %+v", aiMessage)

Performance Notes

  • In-memory Storage: Conversation history is stored in memory. This means:

    • It persists across multiple messages in the same session
    • It resets when the server restarts
    • It's not shared across multiple users (single-user chat)
  • No Database: No external database is needed; everything runs in memory

  • HTTP Client Reuse: Consider creating a single reusable http.Client for better performance in production (a sketch follows this list)
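
As a sketch of that last point (the variable name and timeout value are assumptions):

// A single package-level client, declared near the top of main.go,
// reuses TCP connections across requests and bounds how long a
// llama.cpp call may take. Requires "net/http" and "time".
var llamaClient = &http.Client{
    Timeout: 120 * time.Second, // tune to your model's generation speed
}

Requests in handleSend would then go through llamaClient (e.g. llamaClient.Post(...)) instead of the default client used by http.Post.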

Next Steps

To extend this chatbot:

  1. Add Message Persistence: Store conversations in SQLite (pure-Go, CGo-free SQLite drivers exist)
  2. Multi-user Support: Add session management
  3. Streaming Responses: Use SSE (Server-Sent Events) for real-time streaming; a rough sketch follows this list
  4. File Upload: Allow users to upload and process documents
  5. Custom System Prompts: Add a UI to set system prompts before each conversation
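
To illustrate item 3, here is a rough sketch of the SSE mechanics using only Go's standard library. A real implementation would relay tokens from llama.cpp's streaming API; the stand-in token list, handler name, and timing below are assumptions:

// Hypothetical /stream endpoint demonstrating SSE plumbing only.
// Requires "fmt", "net/http", and "time".
func handleStream(w http.ResponseWriter, r *http.Request) {
    flusher, ok := w.(http.Flusher)
    if !ok {
        http.Error(w, "streaming unsupported", http.StatusInternalServerError)
        return
    }
    w.Header().Set("Content-Type", "text/event-stream")
    w.Header().Set("Cache-Control", "no-cache")

    // Stand-in for tokens arriving from llama.cpp's streaming response.
    for _, tok := range []string{"streamed ", "one ", "token ", "at ", "a ", "time"} {
        fmt.Fprintf(w, "data: %s\n\n", tok) // one SSE event per chunk
        flusher.Flush()                     // push the event immediately
        time.Sleep(100 * time.Millisecond)  // simulate generation delay
    }
}

On the frontend, htmx's SSE extension (hx-ext="sse") or a small script using the browser's EventSource API can append each event to the messages container.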
