wrapper problem under simple_speaker_listener environment #120

Open

CaptainYin opened this issue Dec 3, 2024 · 1 comment
When using the simple_speaker_listener environment, the observation dimensions of the speaker and the listener are not the same, so wrapping the environment with env_wrappers.py fails.
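For illustration only (not code from this repo), here is a minimal sketch of why a rectangular `np.array(obs)` call breaks when per-agent observation sizes differ; the 3- and 11-element vectors below are just stand-ins for the speaker and listener observations:

```python
import numpy as np

# Hypothetical per-agent observations of different lengths, as in
# simple_speaker_listener (the exact sizes here are only illustrative).
speaker_obs = np.zeros(3)
listener_obs = np.zeros(11)

# env.reset() returns one list of per-agent observations per rollout thread,
# i.e. a nested structure of shape (n_rollout_threads=1, n_agents=2).
obs = [[speaker_obs, listener_obs]]

try:
    np.array(obs)  # cannot be stacked into a rectangular array (ragged)
except ValueError as e:
    print(e)  # "... inhomogeneous shape after 2 dimensions ..." on recent NumPy
```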

Executing this shell script:
```sh
#!/bin/sh
env="MPE"
scenario="simple_speaker_listener"
#"simple_speaker_listener" "simple_spread"
num_landmarks=3
num_agents=2
algo="rmaddpg"
exp="debug"
seed_max=1

echo "env is ${env}, scenario is ${scenario}, algo is ${algo}, exp is ${exp}, max seed is ${seed_max}"

for seed in $(seq ${seed_max}); do
echo "seed is ${seed}:"
CUDA_VISIBLE_DEVICES=0 python3 train/train_mpe.py --env_name ${env} --n_rollout_threads 1 --algorithm_name ${algo} --experiment_name ${exp} --scenario_name ${scenario} --num_agents ${num_agents} --num_landmarks ${num_landmarks} --seed ${seed} --episode_length 25 --actor_train_interval_step 1 --tau 0.005 --lr 7e-4 --num_env_steps 10000000 --use_reward_normalization --share_policy
echo "training is done!"
done
```
Output:

```
warm up...
Traceback (most recent call last):
  File "/project/off-policy-release/offpolicy/scripts/train/train_mpe.py", line 193, in <module>
    main(sys.argv[1:])
  File "/project/off-policy-release/offpolicy/scripts/train/train_mpe.py", line 176, in main
    runner = Runner(config=config)
  File "/project/off-policy-release/offpolicy/runner/rnn/mpe_runner.py", line 15, in __init__
    self.warmup(num_warmup_episodes)
  File "/project/off-policy-release/offpolicy/runner/rnn/base_runner.py", line 221, in warmup
    env_info = self.collecter(explore=True, training_episode=False, warmup=True)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/project/off-policy-release/offpolicy/runner/rnn/mpe_runner.py", line 145, in separated_collect_rollout
    obs = env.reset()
  File "/project/off-policy-release/offpolicy/envs/env_wrappers.py", line 439, in reset
    return np.array(obs)
ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 2 dimensions. The detected shape was (1, 2) + inhomogeneous part.
training is done!
```
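One possible direction (a sketch only, not the repo's official fix) is to let the wrapper fall back to an object array when the per-agent observations cannot be stacked rectangularly. The helper below is hypothetical and not part of env_wrappers.py; downstream code that assumes a fixed observation dimension would still need to handle the ragged case (for example by padding observations to the largest size).

```python
import numpy as np

def stack_obs(obs):
    """Stack (n_threads, n_agents) observations; fall back to an object array
    when agents have different observation sizes (hypothetical helper)."""
    try:
        return np.array(obs)
    except ValueError:
        stacked = np.empty((len(obs), len(obs[0])), dtype=object)
        for i, per_env in enumerate(obs):
            for j, agent_obs in enumerate(per_env):
                stacked[i, j] = np.asarray(agent_obs)
        return stacked
```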
