The current RL agent uses a very simple GNN + MLP architecture. From the supremacy of ResNet to the advent of autoregressive Transformers, recent papers in both natural language processing and image processing have shown the benefits of extremely complex architectures, sometimes with more than a billion parameters.
To what extent do these conclusions apply to SeaPearl? Why do we need so little CPU computing power to achieve good results? It would be very interesting to study, for a given CP problem, the performance of different deep-NN architectures, and more broadly the impact of different scaling parameters: the input data size, the model size, and the available computing power.
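For reference, the "GNN + MLP" pattern mentioned above can be sketched in a few lines. This is a minimal, illustrative numpy version (not SeaPearl's actual agent, which is written in Julia): one mean-aggregation message-passing layer over the instance graph, followed by a small MLP head that maps a node embedding to Q-values. All shapes and names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def gnn_layer(H, A, W):
    """One mean-aggregation message-passing step: each node averages the
    features of its neighbours (plus itself), then applies a linear map + ReLU."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)  # neighbourhood sizes
    H_agg = (A_hat @ H) / deg               # mean over each neighbourhood
    return np.maximum(H_agg @ W, 0.0)       # linear projection + ReLU

def mlp_head(h, W1, b1, W2, b2):
    """Two-layer MLP mapping one node embedding to Q-values over actions."""
    z = np.maximum(h @ W1 + b1, 0.0)
    return z @ W2 + b2

# Toy instance graph: 4 nodes (variables/constraints), 8 features per node.
n_nodes, d_in, d_hid, n_actions = 4, 8, 16, 3
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)   # adjacency matrix
H = rng.normal(size=(n_nodes, d_in))        # initial node features

# Randomly initialised weights (training is out of scope here).
W_gnn = rng.normal(size=(d_in, d_hid))
W1, b1 = rng.normal(size=(d_hid, d_hid)), np.zeros(d_hid)
W2, b2 = rng.normal(size=(d_hid, n_actions)), np.zeros(n_actions)

H = gnn_layer(H, A, W_gnn)           # node embeddings after one message pass
q = mlp_head(H[0], W1, b1, W2, b2)   # Q-values for the node being branched on
print(q.shape)                       # (3,)
```

Scaling experiments would then vary the embedding width, the number of message-passing rounds, and the depth of the head, and measure solving performance per unit of compute.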