# SAC on TCFLarge3D-both-medium-v0 (FluidGym)

This repository is part of the FluidGym benchmark results. It contains SAC agents trained with Stable Baselines3 on the TCFLarge3D-both-medium-v0 environment, one agent per seed.

## Evaluation Results

### Global Performance (Aggregated across 5 seeds)

Mean Reward: 0.09 ± 0.06

### Per-Seed Statistics

| Run    | Mean Reward | Std Dev |
|--------|-------------|---------|
| Seed 0 | 0.00        | 0.02    |
| Seed 1 | 0.08        | 0.03    |
| Seed 2 | 0.07        | 0.03    |
| Seed 3 | 0.13        | 0.04    |
| Seed 4 | 0.18        | 0.05    |
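The aggregate score above can be reproduced from the per-seed means. A minimal sketch, assuming the reported ± value is the population standard deviation across the five seed means:

```python
# Per-seed mean rewards from the table above
seed_means = [0.00, 0.08, 0.07, 0.13, 0.18]

mean = sum(seed_means) / len(seed_means)
# Population standard deviation across the seed means
std = (sum((m - mean) ** 2 for m in seed_means) / len(seed_means)) ** 0.5

print(f"{mean:.2f} \u00b1 {std:.2f}")  # → 0.09 ± 0.06
```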

## About FluidGym

FluidGym is a benchmark for reinforcement learning in active flow control.

## Usage

Each seed's agent is stored in its own subdirectory (`0/` through `4/`). You can load a model using:

```python
from stable_baselines3 import SAC

# Load the checkpoint trained with seed 0
model = SAC.load("0/ckpt_latest.zip")
```

**Important:** The models were trained with `fluidgym==0.0.2`. To use them with newer versions of FluidGym, wrap the environment in a `FlattenObservation` wrapper as shown below:

```python
import fluidgym
from fluidgym.wrappers import FlattenObservation
from stable_baselines3 import SAC

# Flattening restores the observation layout the models were trained on
env = fluidgym.make("TCFLarge3D-both-medium-v0")
env = FlattenObservation(env)
model = SAC.load("path_to_model/ckpt_latest.zip")

obs, info = env.reset(seed=42)

# Take a single control step with the deterministic policy
action, _ = model.predict(obs, deterministic=True)
obs, reward, terminated, truncated, info = env.step(action)
```
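To turn the single step above into a full evaluation, the rollout can be wrapped in a loop that averages undiscounted episode returns. This is a sketch, not part of the released code: `evaluate` is a hypothetical helper name, and it assumes any model/env pair matching the Gymnasium-style `reset`/`step` and SB3 `predict` signatures used above:

```python
def evaluate(model, env, n_episodes=5, seed=42):
    """Return the mean undiscounted episode return over n_episodes.

    Hypothetical helper; assumes a Gymnasium-style env (5-tuple step)
    and an SB3-style model exposing predict().
    """
    returns = []
    for ep in range(n_episodes):
        obs, info = env.reset(seed=seed + ep)
        done, ep_return = False, 0.0
        while not done:
            action, _ = model.predict(obs, deterministic=True)
            obs, reward, terminated, truncated, info = env.step(action)
            ep_return += reward
            done = terminated or truncated
        returns.append(ep_return)
    return sum(returns) / len(returns)
```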

