Merge pull request #4 from m2lines/Jansen_link
Jansen link
asross authored Jun 3, 2022
2 parents ed20a44 + 06ed8f9 commit 4db3c6a
45 changes: 3 additions & 42 deletions notebooks/online_metrics.ipynb
@@ -2,7 +2,6 @@
  "cells": [
  {
  "cell_type": "markdown",
- "id": "09a96a8e-1b3f-4e04-b55a-5b350e42bf43",
  "metadata": {},
  "source": [
  "# Online Metrics\n",
@@ -13,7 +12,6 @@
  {
  "cell_type": "code",
  "execution_count": 1,
- "id": "5120a32e-6e75-48af-b415-44dfbf5bbb52",
  "metadata": {},
  "outputs": [],
  "source": [
@@ -28,7 +26,6 @@
  },
  {
  "cell_type": "markdown",
- "id": "a657f112-f1ae-4a9f-a472-438a419205d0",
  "metadata": {},
  "source": [
  "## Load datasets\n",
@@ -41,7 +38,6 @@
  {
  "cell_type": "code",
  "execution_count": 2,
- "id": "c50d6f55-0617-4627-b880-f052ae37d2dd",
  "metadata": {},
  "outputs": [],
  "source": [
@@ -56,15 +52,14 @@
  },
  {
  "cell_type": "markdown",
- "id": "a6846d73-938a-4b8e-97db-ff5ebbdd3700",
  "metadata": {},
  "source": [
  "## Define parameterizations\n",
  "\n",
  "Here we'll define three kinds of parameterizations:\n",
  "\n",
  "1. The hybrid symbolic model we learned in the paper (with the exact set of weights we fitted)\n",
- "1. A backscatter parameterization based on [Jansen et al. 2015](https://doi.org/10.1016/j.ocemod.2015.05.007) and adapted by [Pavel Perezhogin](https://github.com/Pperezhogin)\n",
+ "1. A backscatter parameterization based on [Jansen et al. 2015](https://www.sciencedirect.com/science/article/pii/S1463500315001341) and adapted by [Pavel Perezhogin](https://github.com/Pperezhogin)\n",
  "1. A classic <a href='https://doi.org/10.1175/1520-0493(1963)091%3C0099:GCEWTP%3E2.3.CO;2'>Smagorinsky parameterization</a> (which is just the dissipation portion of the backscatter model).\n",
  "\n",
  "Note that the latter two parameterizations are [already available in `pyqg`](https://pyqg.readthedocs.io/en/latest/api.html#pyqg.parameterizations.Smagorinsky). (We import them here with a tiny bit of adaptation so we can automatically run a simulation and save intermediate snapshots.)"
@@ -73,7 +68,6 @@
  {
  "cell_type": "code",
  "execution_count": 3,
- "id": "195b1bb2-afd0-4d5b-874a-50fb8e50b78a",
  "metadata": {},
  "outputs": [],
  "source": [
@@ -125,7 +119,6 @@
  },
  {
  "cell_type": "markdown",
- "id": "083d7778-ccb6-45a4-afbf-66ae932f243d",
  "metadata": {},
  "source": [
  "We'll also consider a fully convolutional neural network parameterization (see the paper for more details), but we'll load some simulations rather than running them to save time.\n",
@@ -138,7 +131,6 @@
  {
  "cell_type": "code",
  "execution_count": 4,
- "id": "e287a522-2844-4d2d-8f4c-2631a5d64968",
  "metadata": {},
  "outputs": [],
  "source": [
@@ -149,7 +141,6 @@
  {
  "cell_type": "code",
  "execution_count": 6,
- "id": "a7291673-5459-4a8d-9c21-c8a6fd00953b",
  "metadata": {},
  "outputs": [
  {
@@ -175,7 +166,6 @@
  {
  "cell_type": "code",
  "execution_count": 5,
- "id": "91387819-3670-41dc-b121-fa451abc150f",
  "metadata": {},
  "outputs": [
  {
@@ -203,7 +193,6 @@
  {
  "cell_type": "code",
  "execution_count": 7,
- "id": "97a89c69-7a08-4217-8395-afa1c022435e",
  "metadata": {
  "tags": []
  },
@@ -231,7 +220,6 @@
  {
  "cell_type": "code",
  "execution_count": 30,
- "id": "5e5a512f-01bb-4fb9-b6de-453537c874a3",
  "metadata": {},
  "outputs": [],
  "source": [
@@ -242,7 +230,6 @@
  },
  {
  "cell_type": "markdown",
- "id": "10c96543-e3e3-4f68-be57-334205a58cc4",
  "metadata": {},
  "source": [
  "## Compute similarity metrics\n",
@@ -267,7 +254,6 @@
  {
  "cell_type": "code",
  "execution_count": 8,
- "id": "4bf1227c-c72b-45bd-820c-bfb52b08d15f",
  "metadata": {},
  "outputs": [
  {
@@ -305,7 +291,6 @@
  },
  {
  "cell_type": "markdown",
- "id": "72a7e996-4e22-4555-8d7d-90ec66d70cb5",
  "metadata": {},
  "source": [
  "The Smagorinsky parameterization has negative similarity scores for most of our metrics. That means it's bringing simulation quantities further from high-res.\n",
@@ -316,7 +301,6 @@
  {
  "cell_type": "code",
  "execution_count": 9,
- "id": "0be2dbf9-1df2-4bb4-910b-dbef62806364",
  "metadata": {},
  "outputs": [
  {
@@ -354,7 +338,6 @@
  },
  {
  "cell_type": "markdown",
- "id": "2e3a3dc6-2545-44ff-81bb-f852319ee114",
  "metadata": {},
  "source": [
  "These are quite good.\n",
@@ -365,7 +348,6 @@
  {
  "cell_type": "code",
  "execution_count": 10,
- "id": "44ac97f6-96dc-4312-a01c-a8b309ec595f",
  "metadata": {},
  "outputs": [
  {
@@ -403,7 +385,6 @@
  },
  {
  "cell_type": "markdown",
- "id": "e45ad0d8-01a8-4237-9f90-b256f125333e",
  "metadata": {},
  "source": [
  "Also quite good, and somewhat hard to distinguish from the backscatter model.\n",
@@ -414,7 +395,6 @@
  {
  "cell_type": "code",
  "execution_count": 31,
- "id": "db2a4fe0-66e8-402e-bc51-36016528c215",
  "metadata": {},
  "outputs": [
  {
@@ -452,7 +432,6 @@
  },
  {
  "cell_type": "markdown",
- "id": "3dbad53f-5886-425d-a6a6-b214d82089d1",
  "metadata": {},
  "source": [
  "Once again, very good, and with higher worst-case scores. It's essentially matching everything.\n",
@@ -463,7 +442,6 @@
  {
  "cell_type": "code",
  "execution_count": 37,
- "id": "6c857a14-4797-4556-aa17-bf6590f3fe88",
  "metadata": {},
  "outputs": [
  {
@@ -499,7 +477,6 @@
  },
  {
  "cell_type": "markdown",
- "id": "b06d00a1-a874-4049-bfd0-7ae0160e416c",
  "metadata": {},
  "source": [
  "We can see that three of our four parameterizations are helping essentially across the board, though the FCNN is the most consistent. Interestingly, neither the backscatter model nor the hybrid symbolic model lessens distributional differences in upper-layer enstrophy.\n",
@@ -512,7 +489,6 @@
  {
  "cell_type": "code",
  "execution_count": 12,
- "id": "6144a329-3144-44df-8fa6-3a27ed65d0aa",
  "metadata": {},
  "outputs": [],
  "source": [
@@ -523,7 +499,6 @@
  {
  "cell_type": "code",
  "execution_count": 13,
- "id": "2577ce09-b0be-4d5a-a5c5-6df445ea195d",
  "metadata": {},
  "outputs": [],
  "source": [
@@ -537,7 +512,6 @@
  {
  "cell_type": "code",
  "execution_count": 14,
- "id": "46fb27b8-9d98-4cde-9c32-862ea5e38c37",
  "metadata": {},
  "outputs": [
  {
@@ -563,7 +537,6 @@
  {
  "cell_type": "code",
  "execution_count": 15,
- "id": "764f8e89-baf5-46b7-ac0d-4bf4699f4717",
  "metadata": {},
  "outputs": [
  {
@@ -589,7 +562,6 @@
  {
  "cell_type": "code",
  "execution_count": 16,
- "id": "57c60943-e524-48e3-93a3-6a33af76994b",
  "metadata": {},
  "outputs": [
  {
@@ -615,7 +587,6 @@
  {
  "cell_type": "code",
  "execution_count": 38,
- "id": "60113988-d0bd-4589-8d6d-5b59292ffce5",
  "metadata": {},
  "outputs": [],
  "source": [
@@ -625,7 +596,6 @@
  {
  "cell_type": "code",
  "execution_count": 17,
- "id": "1823478a-b8de-455c-a3a6-2ed78f9cd55b",
  "metadata": {},
  "outputs": [
  {
@@ -663,7 +633,6 @@
  },
  {
  "cell_type": "markdown",
- "id": "6600dc6d-aa29-462c-82b6-3b0a5aede365",
  "metadata": {},
  "source": [
  "The Smagorinsky parameterization still gets negative similarity scores, though they aren't any _more_ negative than before."
@@ -672,7 +641,6 @@
  {
  "cell_type": "code",
  "execution_count": 18,
- "id": "773e3506-0bd1-4e60-a768-4d3c38c2990f",
  "metadata": {},
  "outputs": [
  {
@@ -710,7 +678,6 @@
  },
  {
  "cell_type": "markdown",
- "id": "cac766e9-a223-4baa-ac86-a1437a4214ea",
  "metadata": {},
  "source": [
  "The backscatter model gets marginally positive similarity scores in this case (it's not super consistent across re-runs), though it also has a number of fairly negative scores. Overall, it's not generalizing exactly the way we would hope. "
@@ -719,7 +686,6 @@
  {
  "cell_type": "code",
  "execution_count": 19,
- "id": "407c8fbc-d533-421d-9738-0f87b544c80c",
  "metadata": {},
  "outputs": [
  {
@@ -757,7 +723,6 @@
  },
  {
  "cell_type": "markdown",
- "id": "6fbb9958-42cc-4c53-921a-ccce405d6907",
  "metadata": {},
  "source": [
  "The symbolic regression model has much more consistently positive scores, indicating better generalization ability!"
@@ -766,7 +731,6 @@
  {
  "cell_type": "code",
  "execution_count": 39,
- "id": "3a5d286a-0280-4f71-a774-f70f9bbb9f8d",
  "metadata": {},
  "outputs": [
  {
@@ -804,7 +768,6 @@
  },
  {
  "cell_type": "markdown",
- "id": "494d9da6-d099-4eca-9a8e-077f53d2377b",
  "metadata": {},
  "source": [
  "The neural network gets very negative similarity scores, indicating that it's not succeeding in transferring. The lower-layer enstrophy in particular gets a similarity score of -34. When plotting, we'll need to manually set the $y$ limits, or else the plot will be unreadable:"
@@ -813,7 +776,6 @@
  {
  "cell_type": "code",
  "execution_count": 42,
- "id": "d9942a09-d941-463a-8294-615a2d7f838b",
  "metadata": {},
  "outputs": [
  {
@@ -850,7 +812,6 @@
  },
  {
  "cell_type": "markdown",
- "id": "96979017-d1c0-4879-8467-8c85fd10efef",
  "metadata": {},
  "source": [
  "The symbolic regression model does best by most of the metrics, though the backscatter model does well on many, and interestingly the FCNN again gets closest on the upper-layer enstrophy. This is consistent with a result we see in [the FCNN notebook](./neural_networks.ipynb), where its upper-layer $R^2$s remain high when transferring to the jet configuration while its lower-layer $R^2$s plummet.\n",
@@ -861,7 +822,7 @@
  ],
  "metadata": {
  "kernelspec": {
- "display_name": "Python 3 (ipykernel)",
+ "display_name": "Python 3",
  "language": "python",
  "name": "python3"
  },
@@ -875,7 +836,7 @@
  "name": "python",
  "nbconvert_exporter": "python",
  "pygments_lexer": "ipython3",
- "version": "3.9.5"
+ "version": "3.8.6"
  }
  },
  "nbformat": 4,