Welcome to verl’s documentation!
verl is a flexible, efficient, and production-ready RL training framework designed for post-training of large language models (LLMs). It is an open-source implementation of the HybridFlow paper.
verl is flexible and easy to use with:
Easy extension of diverse RL algorithms: The hybrid programming model combines the strengths of single-controller and multi-controller paradigms to enable flexible representation and efficient execution of complex post-training dataflows, allowing users to build RL dataflows in a few lines of code.
Seamless integration of existing LLM infra with modular APIs: Decouples computation and data dependencies, enabling seamless integration with existing LLM frameworks such as PyTorch FSDP, Megatron-LM, vLLM, and SGLang. Moreover, users can easily extend to other LLM training and inference frameworks (see the example launch command after the feature lists below).
Flexible device mapping and parallelism: Supports various placement of models onto different sets of GPUs for efficient resource utilization and scalability across different cluster sizes.
Ready integration with popular HuggingFace models
verl is fast with:
State-of-the-art throughput: By seamlessly integrating existing SOTA LLM training and inference frameworks, verl achieves high generation and training throughput.
Efficient actor model resharding with 3D-HybridEngine: Eliminates memory redundancy and significantly reduces communication overhead during transitions between training and generation phases.
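For example, a typical run is launched with a single hydra-configured command. The sketch below follows the quickstart examples: the dataset and model paths are placeholders, and the exact config keys may differ across verl versions.

# placeholder data/model paths; any hydra config key can be overridden on the command line
python3 -m verl.trainer.main_ppo \
    algorithm.adv_estimator=grpo \
    data.train_files=$HOME/data/gsm8k/train.parquet \
    data.val_files=$HOME/data/gsm8k/test.parquet \
    actor_rollout_ref.model.path=Qwen/Qwen2.5-0.5B-Instruct \
    actor_rollout_ref.rollout.name=vllm \
    trainer.n_gpus_per_node=8 \
    trainer.nnodes=1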
Quickstart
Programming guide
Data Preparation
Configurations
Algorithms
- Proximal Policy Optimization (PPO)
- Group Relative Policy Optimization (GRPO)
- Recipe: Decoupled Clip and Dynamic Sampling Policy Optimization (DAPO)
- Recipe: Self-Play Fine-Tuning (SPIN)
- Recipe: Self-Play Preference Optimization (SPPO)
- Recipe: Entropy Mechanism
- 📃Evaluation
- On-Policy RL with Optimal Reward Baseline (OPO)
- Algorithm Baselines
PPO Trainer and Workers
Performance Tuning Guide
Adding new models
Advanced Features
Hardware Support
API References
FAQ
- Frequently Asked Questions
- Ray related
- Distributed training
- Install related
- Illegal memory access
- Checkpoints
- Triton compile_module_from_src error
- What is the meaning of train batch size, mini batch size, and micro batch size?
- How to generate ray timeline to analyse performance of a training job?
- How to set proxy only for wandb?
Development Notes
Contribution
verl is free software; you can redistribute it and/or modify it under the terms of the Apache License 2.0. Join us on GitHub, Slack, and WeChat for discussions.
Contributions from the community are welcome! Please check out our project roadmap and good first issues to see where you can contribute.
Code Linting and Formatting
We use pre-commit to help improve code quality. To initialize pre-commit, run:
pip install pre-commit
pre-commit install
To resolve CI errors locally, you can also run pre-commit manually:
pre-commit run
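By default, pre-commit run only checks the files staged for commit. To check the entire repository (a standard pre-commit option, not specific to verl), run:
pre-commit run --all-files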
Adding CI tests
If possible, please add CI test(s) for your new feature:
1. Find the most relevant workflow yml file, which usually corresponds to a hydra default config (e.g. ppo_trainer, ppo_megatron_trainer, sft_trainer, etc.).
2. Add related path patterns to the paths section if not already included (see the sketch after this list).
3. Minimize the workload of the test script(s) (see existing scripts for examples).
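For reference, the paths filter lives under the workflow's trigger in the GitHub Actions yml. The snippet below is purely illustrative; the workflow file name and path patterns are hypothetical, not copied from an actual verl workflow:

# .github/workflows/<your_workflow>.yml (hypothetical)
on:
  pull_request:
    paths:
      - "verl/**/*.py"                           # run when framework code changes
      - "examples/ppo_trainer/**"                # run when the related example changes
      - ".github/workflows/<your_workflow>.yml"  # run when the workflow itself changes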
We are HIRING! Send us an email if you are interested in internship/FTE opportunities in MLSys/LLM reasoning/multimodal alignment.