---
library_name: stable-baselines3
tags:
- reinforcement-learning
- stable-baselines3
- deep-reinforcement-learning
- fluidgym
- active-flow-control
- fluid-dynamics
- simulation
- CylinderJet3D-easy-v0
model-index:
- name: PPO-CylinderJet3D-easy-v0
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FluidGym-CylinderJet3D-easy-v0
      type: fluidgym
    metrics:
    - type: mean_reward
      value: -0.18
      name: mean_reward
---

# PPO on CylinderJet3D-easy-v0 (FluidGym)

This repository is part of the **FluidGym** benchmark results. It contains trained Stable Baselines3 PPO agents for the specialized **CylinderJet3D-easy-v0** environment, one per training seed.

## Evaluation Results

### Global Performance (Aggregated across 3 seeds)

**Mean Reward:** -0.18 ± 0.02

### Per-Seed Statistics

| Run | Mean Reward | Std Dev |
| --- | --- | --- |
| Seed 0 | -0.18 | 0.25 |
| Seed 1 | -0.15 | 0.27 |
| Seed 2 | -0.20 | 0.25 |

## About FluidGym

FluidGym is a benchmark for reinforcement learning in active flow control.

## Usage

Each seed is contained in its own subdirectory. You can load a model with:

```python
from stable_baselines3 import PPO

# Load the seed-0 checkpoint.
model = PPO.load("0/ckpt_latest.zip")
```

**Important:** The models were trained with `fluidgym==0.0.2`. To use them with newer versions of FluidGym, wrap the environment in a `FlattenObservation` wrapper as shown below:

```python
import fluidgym
from fluidgym.wrappers import FlattenObservation
from stable_baselines3 import PPO

# Recreate the environment and flatten its observations to match
# the observation space the agents were trained on.
env = fluidgym.make("CylinderJet3D-easy-v0")
env = FlattenObservation(env)

model = PPO.load("path_to_model/ckpt_latest.zip")

# Standard Gymnasium-style rollout step.
obs, info = env.reset(seed=42)
action, _ = model.predict(obs, deterministic=True)
obs, reward, terminated, truncated, info = env.step(action)
```

## References

* [Plug-and-Play Benchmarking of Reinforcement Learning Algorithms for Large-Scale Flow Control](http://arxiv.org/abs/2601.15015)
* [FluidGym GitHub Repository](https://github.com/safe-autonomous-systems/fluidgym)
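
## Reproducing the Evaluation

To sanity-check a checkpoint against the per-seed numbers above, Stable Baselines3's `evaluate_policy` helper can be used. The sketch below is a minimal example rather than the exact benchmark protocol: the episode count (`n_eval_episodes=10`) and the use of deterministic actions are assumptions, so the resulting numbers may differ slightly from the reported statistics.

```python
import fluidgym
from fluidgym.wrappers import FlattenObservation
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Rebuild the environment exactly as in the usage example above.
env = fluidgym.make("CylinderJet3D-easy-v0")
env = FlattenObservation(env)

# Load the seed-0 checkpoint; repeat with "1/" and "2/" to aggregate
# across seeds as in the table above.
model = PPO.load("0/ckpt_latest.zip")

# evaluate_policy rolls out the policy for n_eval_episodes episodes and
# returns the mean and standard deviation of the episodic reward.
# (Episode count and determinism here are assumptions, not the
# benchmark's evaluation settings.)
mean_reward, std_reward = evaluate_policy(
    model, env, n_eval_episodes=10, deterministic=True
)
print(f"Seed 0: {mean_reward:.2f} +/- {std_reward:.2f}")
```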