A multi-agent reinforcement learning (MARL) environment, trained with Proximal Policy Optimization (PPO). The learning agents, Predators (red) and Prey (blue), expend energy as they move around the grid and replenish it by eating. Prey eat Grass (green), and Predators eat Prey when they end up on the same grid cell. For simplicity, in the base case the agents obtain 100% of the energy from the eaten Prey or Grass; in the real world this fraction is much smaller, since ecological efficiency is only around 10% in most cases. Predators die of starvation when their energy reaches zero; Prey die either of starvation or when eaten by a Predator. Agents reproduce asexually when eating raises their energy above a certain threshold. The learning agents learn to choose movement actions, based on their partial observations of the environment (the transparent red and blue squares respectively), so as to maximize cumulative reward.
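The per-step energy bookkeeping described above can be sketched as follows. All numeric parameters (move cost, reproduction threshold, energy split on reproduction) are illustrative assumptions, not the environment's actual values:

```python
# Minimal sketch of the energy rules: pay a move cost, absorb eaten
# energy, then resolve starvation or reproduction. Parameter values
# are assumed for illustration only.
from dataclasses import dataclass

MOVE_COST = 0.1              # energy spent per movement action (assumed)
REPRODUCE_THRESHOLD = 10.0   # energy level that triggers reproduction (assumed)

@dataclass
class Agent:
    kind: str                # "predator" or "prey"
    energy: float

def step_energy(agent: Agent, energy_eaten: float) -> str:
    """Apply one environment step to a single agent's energy."""
    # Base case: 100% transfer efficiency from the eaten Prey or Grass.
    agent.energy += energy_eaten - MOVE_COST
    if agent.energy <= 0:
        return "starved"
    if agent.energy >= REPRODUCE_THRESHOLD:
        agent.energy /= 2    # parent splits energy with offspring (assumed rule)
        return "reproduced"
    return "alive"
```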
This algorithm is an example of how elaborate behaviors can emerge from simple rules in agent-based models. In the example above, learning agents are rewarded only for reproduction. Maximizing this reward nevertheless gives rise to emergent behaviors such as: 1) Predators hunting Prey, 2) Predators hovering around Grass to catch Prey, and 3) Prey trying to escape Predators. These emergent behaviors lead to more complex dynamics at the ecosystem level: over time, the trained agents display a classic Lotka–Volterra pattern. This learned outcome is not obtained with a random policy:
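The Lotka–Volterra pattern mentioned above can be reproduced with the classic two-equation predator–prey model. The coefficients and initial populations below are arbitrary illustrative values, unrelated to the trained environment:

```python
# Forward-Euler integration of the classic Lotka-Volterra equations:
#   dx/dt = alpha*x - beta*x*y    (prey)
#   dy/dt = delta*x*y - gamma*y   (predators)
# Coefficients and initial populations are illustrative only.
alpha, beta, delta, gamma = 1.0, 0.1, 0.075, 1.5
dt, steps = 0.001, 20_000

x, y = 10.0, 5.0              # initial prey and predator populations
prey, predators = [x], [y]
for _ in range(steps):
    dx = (alpha * x - beta * x * y) * dt
    dy = (delta * x * y - gamma * y) * dt
    x, y = x + dx, y + dy
    prey.append(x)
    predators.append(y)
# Populations cycle out of phase: prey peak first, predators follow.
```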
More emergent behavior and findings are described on our website.
Editor used: Visual Studio Code 1.93.1 on Linux Mint 21.3 Cinnamon
- Clone the repository:
git clone https://github.com/doesburg11/PredPreyGrass.git
- Open Visual Studio Code and press:
ctrl+shift+p
- Type and choose: "Python: Create Environment..."
- Choose environment: Conda
- Choose interpreter: Python 3.11.7
- Open a new terminal
- Install dependencies:
pip install -r requirements.txt
- If encountering "ERROR: Failed building wheel for box2d-py," run:
conda install swig
and
pip install box2d box2d-kengz
- Alternative 1:
pip install wheel setuptools pip --upgrade
pip install swig
pip install gymnasium[box2d]
- Alternative 2: a workaround is to copy Box2d files from assets/box2d to the site-packages directory.
- If facing "libGL error: failed to load driver: swrast," execute:
conda install -c conda-forge gcc=12.1.0
In Visual Studio Code run:
pettingzoo/predpreygrass/random_policy.py
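The random-policy script presumably drives the environment with the standard PettingZoo AEC interaction loop. The sketch below reproduces that loop shape against a tiny stand-in environment so it runs without the package installed; the stub class and its step counts are assumptions, not the real env:

```python
# Sketch of the canonical PettingZoo AEC loop (agent_iter / last / step)
# run against a minimal stand-in environment.
import random

class StubAECEnv:
    """Stand-in exposing only the slice of the AEC API the loop uses."""
    def __init__(self, max_steps=8):
        self.agents = ["predator_0", "prey_0"]
        self.max_steps = max_steps
        self.t = 0
    def reset(self, seed=None):
        random.seed(seed)
        self.t = 0
    def agent_iter(self):
        while self.t < self.max_steps:
            yield self.agents[self.t % len(self.agents)]
    def last(self):
        termination = self.t >= self.max_steps - 1
        return None, 0.0, termination, False, {}
    def step(self, action):
        self.t += 1

env = StubAECEnv()
env.reset(seed=42)
actions_taken = 0
for agent in env.agent_iter():
    obs, reward, termination, truncation, info = env.last()
    if termination or truncation:
        action = None                  # PettingZoo convention: done agents step None
    else:
        action = random.randrange(4)   # 4 movement actions (assumed)
    env.step(action)
    actions_taken += 1
```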
Adjust parameters accordingly in:
predpreygrass/envs/_predpreygrass_v0/config/config_predpreygrass.py
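The config module bundles the tunable environment parameters in one place. Every name and value below is a hypothetical placeholder to show the kind of settings involved, not the file's actual keys; open `config_predpreygrass.py` for the real ones:

```python
# Hypothetical illustration of a flat config module; none of these
# names are guaranteed to match the actual file.
x_grid_size = 25                 # grid width (assumed)
y_grid_size = 25                 # grid height (assumed)
n_initial_predators = 6          # starting Predator population (assumed)
n_initial_prey = 8               # starting Prey population (assumed)
reproduction_threshold = 10.0    # energy level that triggers reproduction (assumed)
```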
In Visual Studio Code run:
predpreygrass/optimizationspredpreygrass/train_predpreygrass_v0_ppo.py
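The training script uses PPO, whose core is the clipped surrogate objective. A minimal scalar sketch in pure Python (no RL framework), for a single (state, action) sample:

```python
# PPO clipped surrogate loss for one sample. PPO maximizes
# min(r*A, clip(r, 1-eps, 1+eps)*A) where r is the probability
# ratio and A the advantage; the loss is its negation.
import math

def ppo_clip_loss(logp_new: float, logp_old: float, advantage: float,
                  eps: float = 0.2) -> float:
    ratio = math.exp(logp_new - logp_old)          # pi_new(a|s) / pi_old(a|s)
    clipped = max(1.0 - eps, min(1.0 + eps, ratio))
    return -min(ratio * advantage, clipped * advantage)
```

The clipping keeps each policy update close to the data-collecting policy; eps=0.2 is the common default.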
To evaluate and visualize after training follow instructions in:
predpreygrass/optimizationspredpreygrass/evaluate_from_file.py
This is the benchmark configuration used in the GIF above.
- Terry, J., Black, B., Grammel, N., Jayakumar, M., Hari, A., Sullivan, R., Santos, L. S., Dieffendahl, C., Horsch, C., Perez-Vicente, R., et al. PettingZoo: Gym for Multi-Agent Reinforcement Learning. 2021. The ultimate go-to for multi-agent reinforcement learning deployment.
- Paper Collection of Multi-Agent Reinforcement Learning (MARL)