I've noticed that the replay buffers in `replay_buffers` pre-allocate memory according to their `max_length`. In RL, `max_length` is commonly set to a large value (e.g. 1 million), so the buffer can end up consuming far more memory than expected before a single transition is collected. I think it would be worth mentioning this in the documentation: https://www.tensorflow.org/agents/api_docs/python/tf_agents/replay_buffers/replay_buffer/ReplayBuffer
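A rough back-of-the-envelope sketch of the concern, assuming Atari-style `uint8` observations of shape `(84, 84, 4)` (the shapes and helper function here are illustrative, not part of the TF-Agents API):

```python
import numpy as np

def buffer_bytes(max_length, item_shape, dtype):
    """Approximate size of a buffer that pre-allocates max_length items."""
    item_bytes = int(np.prod(item_shape)) * np.dtype(dtype).itemsize
    return max_length * item_bytes

# With max_length = 1 million and (84, 84, 4) uint8 frames, the
# observation storage alone is pre-allocated at roughly 26 GiB,
# regardless of how many transitions have actually been added.
size = buffer_bytes(1_000_000, (84, 84, 4), np.uint8)
print(f"{size / 2**30:.1f} GiB")
```

Because the allocation happens up front, shrinking `max_length` (or storing smaller/compressed observations) is the only way to reduce the footprint.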