Each module can be benchmarked with the `--bench` flag. It will run the module in isolation for a few steps and log performance benchmark results in a rich table to the console.
### SFT

Benchmark on the default fake data configuration.
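As a sketch, assuming an `sft` entrypoint run via `uv run` and a debug config at `configs/debug/sft.toml` (both placeholder names, not taken from the docs above), the invocation might look like:

```bash
# Placeholder entrypoint and config path; only the --bench flag is documented above.
uv run sft @ configs/debug/sft.toml --bench
```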
### RL Trainer

Benchmark on a fake data loader.
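A similarly hypothetical invocation for the RL trainer (entrypoint and config path are placeholders to adapt to your setup):

```bash
# Placeholder command; substitute your actual trainer entrypoint and config file.
uv run trainer @ configs/debug/train.toml --bench
```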
### Inference

To benchmark the inference engine in isolation, start the inference server with the correct configuration file and run the orchestrator with the `--bench` flag.
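A sketch of the two-step setup, assuming `inference` and `orchestrator` entrypoints and debug configs (all placeholder names); the server runs normally and only the orchestrator receives the `--bench` flag:

```bash
# Terminal 1: start the inference server with the configuration matching the benchmark.
uv run inference @ configs/debug/infer.toml

# Terminal 2: point the orchestrator at the running server and benchmark it.
uv run orchestrator @ configs/debug/orch.toml --bench
```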
### Trainer + Inference
To benchmark the full RL training loop, add the `--bench` flag to your RL entrypoint. This will benchmark the RL trainer against fake data and the inference engine against real data from the orchestrator.
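For instance, with a hypothetical `rl` entrypoint and debug config (substitute your actual command):

```bash
# Placeholder command; substitute your actual RL entrypoint and config file.
# --bench benchmarks the trainer on fake data and the inference engine on real
# rollouts produced by the orchestrator.
uv run rl @ configs/debug/rl.toml --bench
```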