Translatica is a production-ready AI-powered English → Spanish literary translation system designed to preserve tone, context, and narrative style. It leverages a LoRA-fine-tuned transformer (PEFT) to deliver high-quality translations with low inference cost, supported by a modular NLP pipeline, BLEU-based evaluation, and a clean full-stack web interface.
The system is Dockerized and deployment-ready, and can be scaled as a SaaS product for publishers and content platforms, demonstrating strong expertise in model optimization, end-to-end system design, and business-oriented AI engineering.
Demo video: demo.mp4
| Component | Technology |
|---|---|
| Training | PyTorch, Transformers, PEFT, LoRA, Datasets |
| Inference | FastAPI, Uvicorn, Pydantic |
| Model | Helsinki-NLP/opus-mt-en-es (LoRA fine-tuned) |
| Frontend | React, Vite, Tailwind CSS |
| Testing | Pytest (63 tests, 92% coverage) |
| CI/CD | GitHub Actions (Lint → Test → Docker Build) |
| Deployment | Docker, Render |
Translatica follows a modular monolithic architecture that clearly separates training, inference, API, and frontend layers while maintaining simple deployment and strong production readiness.
┌──────────────────────────┐
│       Frontend UI        │
│  React + Vite + TS (UI)  │
└─────────────┬────────────┘
              │ HTTP Requests
              ▼
┌──────────────────────────┐
│      FastAPI Server      │
│   Routing + Validation   │
└─────────────┬────────────┘
              │
              ▼
┌──────────────────────────┐
│   Translation Service    │
│  Preprocess → Inference  │
└─────────────┬────────────┘
              │
              ▼
┌──────────────────────────┐
│      Model Manager       │
│  Load LoRA + Tokenizer   │
└─────────────┬────────────┘
              │
              ▼
┌────────────────────────────────────┐
│ LoRA Fine-Tuned Transformer Model  │
│ Helsinki-NLP/opus-mt-en-es (PEFT)  │
└────────────────────────────────────┘
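The layering above can be sketched in plain Python. This is an illustrative stand-in, not the project's actual classes: the real wiring lives in backend/app/, and the class and method names here are assumptions made for the sketch.

```python
# Illustrative sketch of the request flow: service layer delegates to a
# model manager, which stands in for the LoRA model + tokenizer.

class ModelManager:
    """Stand-in for the component that loads the LoRA adapter and tokenizer."""

    def generate(self, text: str) -> str:
        # The real implementation would run the fine-tuned transformer here;
        # this placeholder just tags the input.
        return f"<es>{text}"


class TranslationService:
    """Business-logic layer: preprocess, then delegate inference."""

    def __init__(self, model: ModelManager):
        self.model = model

    def translate(self, text: str) -> str:
        cleaned = text.strip()               # preprocess step
        return self.model.generate(cleaned)  # inference step


service = TranslationService(ModelManager())
print(service.translate("  Hello, world!  "))  # <es>Hello, world!
```

The FastAPI route handlers would sit one layer above this, validating the request body before calling the service.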
Before you begin, ensure you have the following installed:
- Python 3.11+
- Node.js 18+ & npm
- Git
The easiest way to run the full application (Frontend + Backend) is with the unified runner.

1. Clone the repository:

   ```bash
   git clone https://github.com/Md-Emon-Hasan/Translatica.git
   cd Translatica
   ```

2. Set up the backend:

   ```bash
   # Create a virtual environment
   python -m venv venv

   # Activate it
   # Windows:
   venv\Scripts\activate
   # Mac/Linux:
   # source venv/bin/activate

   # Install dependencies
   pip install -r backend/requirements.txt
   ```

3. Set up the frontend:

   ```bash
   cd frontend
   npm install
   cd ..
   ```

4. Run the application:

   ```bash
   # Make sure the venv is active
   python run.py
   ```

Once running:

- Frontend UI: http://localhost:5173
- Backend API: http://localhost:8000
Translatica/
│
├── .github/ # GitHub Configuration
│ └── workflows/
│ └── main.yml # CI/CD Pipeline Configuration
│
├── backend/ # Backend Service (FastAPI & Training)
│ ├── app/ # Main Application Package
│ │ ├── api/ # API Request Handlers
│ │ │ ├── __init__.py
│ │ │ └── routes.py # Endpoint Definitions
│ │ ├── core/ # Core Infrastructure
│ │ │ ├── __init__.py
│ │ │ ├── config.py # Application Settings
│ │ │ ├── database.py # Database Connection Logic
│ │ │ └── model.py # ML Model Loading & Management
│ │ ├── models/ # Data Models
│ │ │ ├── __init__.py
│ │ │ └── translation.py # Database Schema Models
│ │ ├── services/ # Business Logic Layer
│ │ │ ├── __init__.py
│ │ │ └── translation.py # Translation Processing Service
│ │ ├── utils/ # Utility Functions
│ │ │ ├── __init__.py
│ │ │ └── logger.py # Logging Configuration
│ │ ├── __init__.py
│ │ └── main.py # FastAPI Application Entry Point
│ ├── data/ # Persistent Data Storage
│ │ └── translations.db # SQLite Database File
│ ├── fine-tuned-model/ # Trained Model Artifacts
│ │ ├── fine-tuned-model/ # Model Weights and Config
│ │ │ ├── adapter_config.json
│ │ │ ├── adapter_model.safetensors
│ │ │ └── README.md
│ │ └── fine-tuned-tokenizer/ # Tokenizer Assets
│ │ ├── source.spm
│ │ ├── special_tokens_map.json
│ │ ├── target.spm
│ │ ├── tokenizer_config.json
│ │ └── vocab.json
│ ├── logs/ # Application Logs
│ │ └── app.log
│ ├── notebook/ # Jupyter Notebooks
│ │ └── Experiment.ipynb # Training Experiments
│ ├── tests/ # Test Suite
│ │ ├── __init__.py
│ │ ├── conftest.py # Test Fixtures
│ │ ├── test_api.py # API Endpoint Tests
│ │ ├── test_config.py # Config Tests
│ │ ├── test_main.py # App Initialization Tests
│ │ ├── test_model.py # Model Manager Tests
│ │ ├── test_services.py # Service Layer Tests
│ │ ├── test_training_data.py # Training Data Tests
│ │ ├── test_training_logger.py # Training Logger Tests
│ │ ├── test_training_model.py # Training Model Tests
│ │ ├── test_training_train.py # Training Script Tests
│ │ └── test_training_trainer.py # Trainer Tests
│ ├── training/ # Model Training Source
│ │ ├── __init__.py
│ │ ├── data.py # Dataset Loading & Processing
│ │ ├── logger.py # Training Logger Config
│ │ ├── model.py # Training Model Configuration
│ │ ├── run_train.py # Training Execution Script
│ │ ├── train.py # Main Training Logic
│ │ └── trainer.py # Trainer Setup
│ ├── Dockerfile # Backend Docker Configuration
│ ├── pyproject.toml # Python Project Configuration
│ ├── requirements.txt # Python Dependencies
│ └── run.py # Backend-specific Runner
│
├── frontend/ # Frontend Service (React + Vite)
│ ├── public/ # Public Static Assets
│ │ └── vite.svg
│ ├── src/ # Frontend Source Code
│ │ ├── assets/ # Assets
│ │ │ ├── css/
│ │ │ │ └── index.css # Global Styles
│ │ │ ├── images/
│ │ │ └── react.svg
│ │ ├── components/ # React Components
│ │ │ ├── layout/ # Layout Components
│ │ │ │ ├── Footer.tsx
│ │ │ │ ├── Header.tsx
│ │ │ │ └── MainLayout.tsx
│ │ │ └── ui/ # UI Components
│ │ │ └── Features.tsx
│ │ ├── features/ # Feature Modules
│ │ │ └── translator/
│ │ │ └── TranslatorCard.tsx # Main Translation Widget
│ │ ├── hooks/ # Custom React Hooks
│ │ │ └── useParticles.tsx # Background Animation Hook
│ │ ├── services/ # API Services
│ │ │ └── api.ts # Backend API Client
│ │ ├── types/ # TypeScript Types
│ │ ├── utils/ # Frontend Utilities
│ │ ├── App.css # App-specific Styles
│ │ ├── App.tsx # Root Component
│ │ ├── main.tsx # Frontend Entry Point
│ │ └── vite-env.d.ts # Vite Type Definitions
│ ├── .gitignore
│ ├── Dockerfile # Frontend Docker
│ ├── eslint.config.js
│ ├── index.html
│ ├── package-lock.json
│ ├── package.json
│ ├── postcss.config.js
│ ├── tailwind.config.js # Tailwind CSS Configuration
│ ├── tsconfig.app.json
│ ├── tsconfig.json
│ ├── tsconfig.node.json
│ └── vite.config.ts
│
├── .gitignore # Git Ignore Rules
├── app.png # Application Screenshot
├── docker-compose.yml # Docker Compose Configuration
├── LICENSE # Project License
├── README.md # Project Documentation
├── render.yml # Render Deployment Configuration
└── run.py # Unified Application Launcher
If you prefer to run services individually for debugging:
Backend:

```bash
cd backend
# Ensure venv is active
python -m uvicorn app.main:app --reload
```

Frontend:

```bash
cd frontend
npm run dev
```

| Method | Endpoint | Description |
|---|---|---|
| GET | / | Web UI |
| POST | /translate | Translate text |
| GET | /health | Health check |
| GET | /docs | Swagger UI |
Once running, access the automatic API docs:
- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
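For programmatic access, the /translate endpoint can be called from any HTTP client. Here is a minimal stdlib-only Python sketch; note that the request field name `text` (and the shape of the response) is an assumption based on the endpoint's purpose, so check the Swagger UI at /docs for the actual schema.

```python
import json
import urllib.request

API_URL = "http://localhost:8000/translate"


def build_payload(text: str) -> dict:
    # Request body for POST /translate; the "text" key is an assumed
    # field name -- verify against the /docs schema.
    return {"text": text}


def translate(text: str, url: str = API_URL) -> dict:
    """POST text to the running backend and return the parsed JSON response."""
    data = json.dumps(build_payload(text)).encode("utf-8")
    req = urllib.request.Request(
        url,
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))


print(build_payload("The old house stood silent."))  # {'text': 'The old house stood silent.'}
```

With the backend running locally, `translate("The old house stood silent.")` would return the JSON body produced by the FastAPI server.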
To fine-tune the translation model:
```bash
# Standard training
python -m backend.training.train

# Custom parameters
python -m backend.training.train \
    --model-checkpoint "Helsinki-NLP/opus-mt-en-es" \
    --output-dir "./fine-tuned-model" \
    --num-epochs 3 \
    --batch-size 16
```

| Parameter | Default |
|---|---|
| Base Model | Helsinki-NLP/opus-mt-en-es |
| Dataset | opus_books (en-es) |
| LoRA Rank | 8 |
| LoRA Alpha | 32 |
| Target Modules | ["q_proj", "v_proj"] |
| Trainable Params | ~0.38% |
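The ~0.38% trainable figure follows from how LoRA works: each targeted weight matrix gets two small low-rank factors (d × r and r × d) while the base weights stay frozen. A back-of-envelope check, assuming typical Marian dimensions for opus-mt-en-es (d_model = 512, 6 encoder layers with self-attention, 6 decoder layers with self- and cross-attention, ~77M total parameters — these numbers are assumptions, not stated in this README):

```python
# Back-of-envelope estimate of LoRA's trainable-parameter fraction.
d_model = 512
rank = 8                              # LoRA rank r
attn_blocks = 6 + 6 * 2               # encoder self-attn + decoder self- and cross-attn
adapted_matrices = attn_blocks * 2    # q_proj and v_proj in each block

# Each adapted matrix contributes two low-rank factors: (d x r) and (r x d).
lora_params = adapted_matrices * 2 * d_model * rank
total_params = 77_000_000             # assumed approximate size of opus-mt-en-es

print(lora_params)                                      # 294912
print(round(100 * lora_params / total_params, 2))       # 0.38
```

Under these assumptions the adapter adds roughly 295K trainable parameters, about 0.38% of the model, matching the table above.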
Run the full backend test suite:
```bash
cd backend
pytest tests/ -v --cov=app --cov=training --cov-report=term-missing
```

Current coverage: ~92% (63 tests passed)
Run the complete stack with Docker Compose:
```bash
# Build and start
docker-compose up --build

# Run in background
docker-compose up -d
```

Logs are stored in the logs/ directory:

- app.log: Application logs
- training.log: Training logs
Md Emon Hasan Email: emon.mlengineer@gmail.com | GitHub | LinkedIn | WhatsApp
MIT License - see LICENSE
