Hello,

The Ray example was super helpful in getting things up and running. However, when I tried to configure the PPOTrainer to use one policy per agent, the wrapper provided by VMAS could not be used as is.

My configuration:

The error:

PS: I'm not 100% sure whether this is a feature request or a misuse on my side, as I was trying to give each agent its own policy rather than share the policy model across all agents.
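For reference, a per-agent-policy setup with the old `PPOTrainer` API would normally look something like the sketch below. The env name, agent count, and policy ids are illustrative and not taken from this report; it only shows the kind of `multiagent` config being attempted here.

```python
from ray.rllib.agents.ppo import PPOTrainer   # old Ray API, matching "PPOTrainer"
from ray.rllib.policy.policy import PolicySpec

n_agents = 4  # illustrative, not the scenario's actual agent count

config = {
    "env": "my_vmas_env",   # hypothetical registered env name
    "framework": "torch",
    "multiagent": {
        # One distinct policy per agent instead of a single shared policy.
        "policies": {f"agent_{i}": PolicySpec() for i in range(n_agents)},
        # Map every agent id to the policy of the same name
        # (assumes the env reports agent ids like "agent_0", "agent_1", ...).
        "policy_mapping_fn": lambda agent_id, *args, **kwargs: agent_id,
    },
}

trainer = PPOTrainer(config=config)
```

A configuration along these lines presupposes that the environment speaks RLlib's multi-agent interface (agent-id keyed dicts), which is exactly where the VMAS wrapper gets in the way, as explained in the reply below.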
Yes, so unfortunately by default VMAS is not compatible with the multi-agent interface of RLlib, because RLlib does not allow subclassing both VectorEnv and MultiAgentEnv (a genius choice, I know).
So I went with subclassing only VectorEnv.
If you want to see how we use VMAS multi-agent scenarios in RLlib, with the option to share policies and critics or not, see https://github.com/proroklab/HetGPPO
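If per-agent policies are the priority and giving up RLlib-side vectorization is acceptable, one possible workaround is to wrap a single (`num_envs=1`) VMAS environment in RLlib's `MultiAgentEnv` yourself. The sketch below is only an illustration of that idea against the old (pre-gymnasium) RLlib API: it assumes `vmas.make_env`, an `n_agents` attribute, and per-agent lists of tensors for observations and rewards, and it is neither the wrapper shipped with VMAS nor the approach used in HetGPPO.

```python
import torch
import vmas
from ray.rllib.env.multi_agent_env import MultiAgentEnv


class VmasMultiAgentEnv(MultiAgentEnv):
    """Illustrative (unvectorized) MultiAgentEnv wrapper around VMAS."""

    def __init__(self, scenario="balance", max_steps=200):
        super().__init__()
        self._env = vmas.make_env(
            scenario=scenario,
            num_envs=1,              # no vectorization on the VMAS side
            continuous_actions=True,
            max_steps=max_steps,
        )
        self._ids = [f"agent_{i}" for i in range(self._env.n_agents)]
        # A real wrapper would also expose per-agent observation_space /
        # action_space here so RLlib can build the policies.

    def reset(self):
        obs = self._env.reset()  # list with one tensor per agent
        return {aid: o[0].cpu().numpy() for aid, o in zip(self._ids, obs)}

    def step(self, action_dict):
        # Rebuild the per-agent action list (shape [num_envs, act_dim]) VMAS expects.
        actions = [
            torch.as_tensor(action_dict[aid], dtype=torch.float32).unsqueeze(0)
            for aid in self._ids
        ]
        obs, rews, dones, _info = self._env.step(actions)
        obs_d = {aid: o[0].cpu().numpy() for aid, o in zip(self._ids, obs)}
        rew_d = {aid: float(r[0]) for aid, r in zip(self._ids, rews)}
        done_d = {"__all__": bool(dones[0])}
        return obs_d, rew_d, done_d, {}
```

With an env like this registered via `ray.tune.register_env`, the per-agent `policies` / `policy_mapping_fn` config sketched above applies directly, at the cost of losing VMAS's batched simulation inside a single RLlib worker.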