First of all, thanks for the book and the video course. The motivation behind multilevel models is clear: partial pooling is an "adaptive compromise" between no pooling and complete pooling. In lecture 12 (https://speakerdeck.com/rmcelreath/statistical-rethinking-2022-lecture-12?slide=40) you show the "gain" from partial pooling using cross-validation. But the cross-validation score of partial pooling is very similar to that of complete pooling.
For this particular example, are there any other (more convincing?) arguments, besides cross-validation, for using partial pooling over complete pooling?
Is it possible to construct a "simple" example in which we observe a more "dramatic" U-shaped cross-validation curve?
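In case a concrete illustration helps frame the question: below is a minimal simulation sketch (not from the book or lecture code; all names and parameter values are my own assumptions) suggesting that the gap between complete pooling and partial pooling widens when clusters are strongly heterogeneous and each cluster has few observations. It uses a simple empirical-Bayes shrinkage estimator as a stand-in for a fully Bayesian multilevel fit.

```python
# Illustrative sketch: many clusters, few observations each, and large
# between-cluster variation make complete pooling perform much worse than
# partial pooling out of sample. Parameter values are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(1)

J, n = 60, 5                  # many clusters, few observations per cluster
tau, sigma = 2.0, 1.0         # between-cluster sd large relative to noise sd

alpha = rng.normal(0.0, tau, size=J)                      # true cluster means
y_train = alpha[:, None] + rng.normal(0.0, sigma, size=(J, n))
y_test  = alpha[:, None] + rng.normal(0.0, sigma, size=(J, n))

ybar = y_train.mean(axis=1)                               # no-pooling estimates
grand = y_train.mean()                                    # complete-pooling estimate

# Partial pooling via empirical-Bayes shrinkage toward the grand mean
tau2_hat = max(ybar.var(ddof=1) - sigma**2 / n, 1e-6)     # estimated between-cluster variance
w = tau2_hat / (tau2_hat + sigma**2 / n)                  # shrinkage weight
partial = grand + w * (ybar - grand)

def test_mse(est):
    """Out-of-sample mean squared prediction error."""
    return np.mean((y_test - est[:, None]) ** 2)

print("no pooling      :", test_mse(ybar))
print("complete pooling:", test_mse(np.full(J, grand)))
print("partial pooling :", test_mse(partial))
```

With these settings complete pooling does much worse than the other two; shrinking tau toward zero (or increasing n) moves the three scores back together, which may be why the lecture example looks less dramatic.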
Thanks in advance.