AgentJet (AJet) is a cutting-edge, user-friendly training framework that optimizes agents and workflows (built with the OpenAI SDK, AgentScope, LangChain, or plain HTTP requests) by fine-tuning the language model weights behind them.
Simply provide your agent workflow, training dataset, and reward function, and AgentJet will tune your agents toward their best performance!
Let's begin with the simplest example: a math agent with a tool call.
- First, please check out the installation guide to set up the training environment.
- Then, tune your first model with the minimal example below:
ajet --conf tutorial/example_math_agent/math_agent.yaml --backbone='verl'
# use --backbone='trinity' to switch to the Trinity training engine,
# or --backbone='debug' to debug with only vLLM
We aim to build an easy-to-learn agent tuner that unlocks more possibilities for agent developers:
- Easy and Friendly. AgentJet tunes the models behind your agent workflows, optimizing your agents for top performance with minimal effort.
- Rich Tutorial Library. AgentJet provides a rich library of examples as tutorials.
- Efficient and Scalable. AgentJet uses [verl] as the default backbone (`--backbone=verl`). We also support Trinity as an alternative backbone (`--backbone=trinity`), accelerating your tuning process via fully asynchronous RFT.
- Flexible and Fast. AgentJet supports multi-agent workflows and adopts a context-merging technique, accelerating training by 1.5x to 10x when the workflow involves multi-turn (or multi-agent) conversations.
- Reliable and Reproducible. Our team tracks framework performance across multiple tasks, major git versions, and training backbones (under construction; data is still being gathered).
For advanced researchers, AgentJet also provides high-resolution logging and debugging solutions:
- High-Resolution Logging: AgentJet allows users to save and inspect token-level rollout details, recording token IDs, token loss masks, and even token logprobs to facilitate workflow development and agent diagnostics (a hypothetical record layout is sketched after this list).
- Fast Debugging: AgentJet also provides the `--backbone=debug` option for the best debugging experience, shortening your wait after code changes from minutes to seconds and enabling breakpoint debugging in IDEs.
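For instance, a saved rollout record could resemble the sketch below. This is a minimal illustration; the field names (`token_ids`, `loss_mask`, `logprobs`) are assumptions, not AgentJet's actual schema.

```python
# Hypothetical token-level rollout record; field names are illustrative
# assumptions, not AgentJet's actual schema.
rollout_record = {
    "token_ids": [9906, 11, 1917, 0],         # prompt + generated token ids
    "loss_mask": [0, 0, 1, 1],                # 1 = token contributes to the loss
    "logprobs":  [None, None, -0.12, -1.37],  # logprobs of generated tokens
}

# Example diagnostic: list only the tokens that are trained on.
trained_ids = [t for t, m in zip(rollout_record["token_ids"],
                                 rollout_record["loss_mask"]) if m]
print(trained_ids)  # -> [1917, 0]
```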
- Click here to read the installation guide.
You can start training your first agent with a single command using a pre-configured YAML file. Take the Math agent as an example:
ajet --conf tutorial/example_math_agent/math_agent.yaml
Explore our rich library of examples to kickstart your journey:
- 🔢 Training a math agent that can write python code.
- 📱 Creating an AppWorld agent using AgentScope and training it.
- 🐺 Developing Werewolves RPG agents and training them.
- 👩🏻⚕️ Learning to ask questions like a doctor.
- 🎴 Writing a countdown game using AgentScope and solving it.
- 🚶 Solving a frozen lake walking puzzle using AgentJet.
AgentJet makes agent fine-tuning straightforward by separating the developer interface from the internal execution logic.
To optimize an agent, you provide three core inputs, sketched in code after this list:
- Trainable Workflow: Define your agent logic by inheriting the Workflow class, supporting both simple agent setups and advanced multi-agent collaborations.
- Task Reader: Load training tasks from JSONL files, HuggingFace datasets, interactive environments, or auto-generate them from documents.
- Task Judger: Evaluates agent outputs and assigns rewards to guide training.
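As a rough sketch of how these three inputs fit together (the `Workflow` base class is AgentJet's; every method name and signature below is an assumption for illustration, not the actual API):

```python
import json

# Minimal sketch with illustrative names; see the tutorials for
# AgentJet's real interfaces.
class MathWorkflow:  # in AgentJet, this would inherit the Workflow class
    """Trainable workflow: the agent logic whose model gets tuned."""
    def run(self, task: dict, llm) -> str:
        # `llm` stands in for the tunable model handle the framework injects
        return llm(f"Solve step by step: {task['question']}")

def read_tasks(path: str):
    """Task reader: load training tasks, e.g. from a JSONL file."""
    with open(path) as f:
        for line in f:
            yield json.loads(line)  # e.g. {"question": "...", "answer": "..."}

def judge(task: dict, output: str) -> float:
    """Task judger: score the agent's output to produce a training reward."""
    return 1.0 if task["answer"] in output else 0.0
```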
The internal system orchestrates several specialized modules to handle the complexities of RL training and agent interactions.
- Launcher: Manages background service processes (Ray, vLLM) and routes requests to the selected training backbone.
- Task Reader: Handles data ingestion, augmentation, and filtering.
- Task Rollout: Bridges LLM engines and manages the Gym environment lifecycle.
- Task Runner: Executes the Agent workflow and calculates rewards.
- Model Tuner: Forwards inference requests from the workflow to the LLM engine.
- Context Tracker: Monitors LLM calls and automatically merges shared-history timelines, improving training efficiency by 1.5x to 10x (a toy illustration follows this list).
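To convey the idea behind context merging (a conceptual toy, not AgentJet's implementation): consecutive LLM calls in a multi-turn workflow share a common history prefix, and merging them into one timeline avoids reprocessing that prefix.

```python
# Conceptual toy: two LLM calls sharing a history prefix are merged into one
# timeline, so only the new suffix of the second call needs fresh processing.
def shared_prefix_len(a: list[int], b: list[int]) -> int:
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

turn1 = [1, 2, 3, 4, 5]          # token ids of the first LLM call
turn2 = [1, 2, 3, 4, 5, 6, 7]    # second call extends the same history

k = shared_prefix_len(turn1, turn2)
print(f"merged {k} shared tokens; only {len(turn2) - k} are new")
```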
- Tutorials: From Installation to Tuning your first agent — the essential path for beginners.
- Core Components: Define your Trainable Workflow and manage Data and Reward.
- Examples: Check the Example Library above for real-world cases like the Math agent, the Werewolves game, and the Learning-to-Ask task.
- Deep Dive: Master advanced Configuration.
AgentJet is a constantly evolving project. We are planning to add the following features in the near future.
| Category | Feature | Status |
|---|---|---|
| Examples | Covering LangGraph and AutoGen frameworks | Done & Verifying |
| Examples | Add LoRA training examples | Todo |
| Infra | Cross-process Tuner wrapper to pass through process forking | Done & Verifying |
| Infra | Optimize configurations for long-context adaptation on smaller GPUs | In Progress |
| Capability | Prompt tuning | In Progress |
| Capability | Multi-modal training support | Todo |
| Capability | MARL Credit assignment | Todo |
| Capability | Training dataset generation from few-shot samples | Done & Verifying |
