Review Papers
=============
Steven K. Baum
v0.1, 2014-07-14
:doctype: book
:toc:
:icons:
:numbered!:
1943
----
*Stochastic Problems in Physics and Astronomy* - S. Chandrasekhar
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.15.1[+https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.15.1+]
1973
----
*Planetary fluid dynamics* - J. G. Charney
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://link.springer.com/chapter/10.1007/978-94-010-2599-7_2[+https://link.springer.com/chapter/10.1007/978-94-010-2599-7_2+]
The term “planetary” will be applied to fluid motions on the earth whose space and time scales are so large that the earth’s rotation may not be ignored. Such motions have properties which are not to be found in nonrotating systems. For example, the action of external forces such as gravity invariably brings into being Coriolis forces which in turn produce circulatory motions. If a stone is thrown into an infinite resting ocean, the gravitational oscillations engendered will radiate their energy to infinity and leave the ocean finally undisturbed; if the stone is thrown into an infinite rotating ocean, some of the energy of the gravitational oscillations will be converted by the action of the Coriolis forces into rotational motions, and these will persist until they are dissipated by viscosity.
*Space and Time Meteorological Data Analysis and Initialization* - P. Morel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://link.springer.com/chapter/10.1007/978-94-010-2599-7_5[+https://link.springer.com/chapter/10.1007/978-94-010-2599-7_5+]
The guideline of this Summer School on Dynamic Meteorology was to explore the problem of predicting the general circulation of the atmosphere from one specified state of motion at some initial time t0. The previous speakers, particularly Professor PHILLIPS, showed that this initial value problem is approximately solved by replacing the continuous meteorological fields by a finite set of discrete values and integrating numerically the corresponding finite difference equations. Alternatively, one may choose to expand the continuous fields in series of orthogonal functions truncated at some finite order and solve numerically a set of algebraic interaction equations. In any case, the forecasting procedure starts from a set of initial values inferred from observations of the real atmosphere. The purpose of these talks is to review the methods used to infer these field values from the available experimental data. Several such methods, each claiming to be “optimal”, have been proposed and some actually implemented in operational practice. It should be understood, as the discussion progresses from straightforward to sophisticated approaches, that the best method from a mathematical or physical standpoint may not be economical of computer processing time. But one must also keep in mind that the data assimilation and analysis process is an essential link between the world observing system and global extended range forecasting.
*Boundary Layers in Planetary Atmospheres* - A. S. Monin
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://link.springer.com/chapter/10.1007/978-94-010-2599-7_4[+https://link.springer.com/chapter/10.1007/978-94-010-2599-7_4+]
In large-scale air currents, near the surface of a planet, the combined action of the pressure gradient, turbulent friction and Coriolis force results in the formation of the atmospheric boundary layer. Unlike most boundary layers dealt with in aerodynamical engineering, the atmospheric boundary layer is characterized by the influence of the Coriolis force (i.e., the planet’s rotation) and the density stratification of air (affecting turbulence through buoyancy forces). Thus the atmospheric boundary layer is a turbulent boundary layer in a rotating stratified fluid.
*Principles of Large Scale Numerical Weather Prediction* - N. A. Phillips
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://link.springer.com/chapter/10.1007/978-94-010-2599-7_1[+https://link.springer.com/chapter/10.1007/978-94-010-2599-7_1+]
The motion of the atmosphere, when treated from the viewpoint of continuum mechanics, is governed by Newton’s law of motion relating the acceleration to the forces, the thermodynamic law relating the rate of change of internal energy to the rate of heating, the principle of conservation of mass, the thermodynamic state relations, detailed mathematical formulations of the forces and rates of heating, and, finally, appropriate conditions at the boundaries of the particular mathematical model of the atmosphere being considered. Implied in this statement already is a recognition that we limit our attention to an approximate model of the real “atmosphere”. For example, at very high altitudes the mean free path and time interval between molecular collisions become large enough that the quasi-equilibrium assumptions of continuum mechanics and thermodynamics break down. As other examples of the limitation of our meteorological viewpoint we may cite on the one hand our neglect of the interaction between the “atmosphere” and its extension to interplanetary space, and on the other hand our ignoring of the exchange of dry air across the air-ground and air-ocean interface, which we shall treat as impervious to the flow of dry air. These somewhat trite examples are cited only to show immediately that we admit to a simplified atmospheric model; in fact, however, the practical analysis of large-scale atmospheric motions at the present state of hydrodynamical theory forces approximations upon dynamical meteorologists which have a much greater effect on the accuracy of the results than do the somewhat esoteric examples mentioned above.
1977
----
*West Antarctic ice streams*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/RG015i001p00001[+https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/RG015i001p00001+]
Solar heat is the acknowledged driving force for climatic change. However, ice sheets are also capable of causing climatic change. This property of ice sheets derives from the facts that ice and rock are crystalline whereas the oceans and atmosphere are fluids and that ice sheets are massive enough to depress the earth's crust well below sea level. These features allow time constants for glacial flow and isostatic compensation to be much larger than those for ocean and atmospheric circulation and therefore somewhat independent of the solar variations that control this circulation. This review examines the nature of dynamic processes in ice streams that give ice sheets their degree of independent behavior and emphasizes the consequences of viscoplastic instability inherent in anisotropic polycrystalline solids such as glacial ice. Viscoplastic instability and subglacial topography are responsible for the formation of ice streams near ice sheet margins grounded below sea level. As a result the West Antarctic marine ice sheet is inherently unstable and can be rapidly carved away by calving bays which migrate up surging ice streams. Analyses of tidal flexure along floating ice stream margins, stress and velocity fields in ice streams, and ice stream boundary conditions are presented and used to interpret ERTS 1 photomosaics for West Antarctica in terms of characteristic ice sheet crevasse patterns that can be used to monitor ice stream surges and to study calving bay dynamics.
1979
----
*Geostrophic Turbulence* - P. Rhines
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://www.annualreviews.org/doi/10.1146/annurev.fl.11.010179.002153[+https://www.annualreviews.org/doi/10.1146/annurev.fl.11.010179.002153+]
1980
----
*Two-dimensional turbulence* - R. H. Kraichnan et al.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://iopscience.iop.org/article/10.1088/0034-4885/43/5/001/meta[+https://iopscience.iop.org/article/10.1088/0034-4885/43/5/001/meta+]
The theory of turbulence in two dimensions is reviewed and unified and a number of hydrodynamic and plasma applications are surveyed. The topics treated include the basic dynamical equations, equilibrium statistical mechanics of continuous and discrete vorticity distributions, turbulent cascades, predictability theory, turbulence on a rotating sphere, turbulent diffusion, two-dimensional magnetohydrodynamics, and superfluidity in thin films.
*The Milankovitch astronomical theory of paleoclimates: A modern review*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://www.sciencedirect.com/science/article/pii/0083665680900264[+https://www.sciencedirect.com/science/article/pii/0083665680900264+]
1982
----
*On the reconstruction of pleistocene ice sheets: A review*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://www.sciencedirect.com/science/article/pii/0277379182900178[+https://www.sciencedirect.com/science/article/pii/0277379182900178+]
Pleistocene ice sheets can be reconstructed through three separate approaches: (1) Evidence based on glacial geological studies, such as erratic trains, till composition, crossing striations and exposures of multiple tills/nonglacial sediments. (2) Reconstructions based on glaciological theory and observations. These can be either two- or three-dimensional models; they can be constrained by ‘known’ ice margins at specific times; or they can be ‘open-ended’ with the history of growth and retreat controlled by parameters resting entirely within the model. (3) Glacial isostatic rebound after deglaciation provides a measure of the distribution of mass (ice) across a region. A ‘best fit’ ice sheet model can be developed that closely approximates a series of relative sea level curves within an area of a former ice sheet; in addition, the model should also provide a reasonable sea level fit to relative sea level curves at sites well removed from glaciation.
This paper reviews some of the results of a variety of ice sheet reconstructions and concentrates on the various attempts to reconstruct the ice sheets of the last (Wisconsin, Weichselian, Würm, Devensian) glaciation. Evidence from glacial geology suggests flow patterns at variance with simple, single-domed ice sheets over North America and Europe. In addition, reconstruction of ice sheets from glacial isostatic sea level data suggests that the ice sheets were significantly thinner than estimates based on 18 ka equilibrium ice sheets (cf. Denton and Hughes, 1981). The review indicates it is important to differentiate between ice divides, which control the directions of glacial flow, and areas of maximum ice thickness, which control the glacial isostatic rebound of the crust upon deglaciation. Recent studies from the Laurentide Ice Sheet region indicate that the center of mass was not over Hudson Bay; that a major ice divide lay east of Hudson Bay so that flow across the Hudson Bay and James Bay lowlands was from the northeast; that Hudson Bay was probably open to marine invasions two or three times during the Wisconsin Glaciation; and that the Laurentide Ice Sheet was thinner than an equilibrium reconstruction would suggest.
1986
----
*Eddies, Waves, Circulation, and Mixing: Statistical Geofluid Mechanics* - G. Holloway
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://www.annualreviews.org/doi/10.1146/annurev.fl.18.010186.000515[+https://www.annualreviews.org/doi/10.1146/annurev.fl.18.010186.000515+]
1988
----
*Milankovitch Theory and climate*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://agupubs.onlinelibrary.wiley.com/doi/10.1029/RG026i004p00624[+https://agupubs.onlinelibrary.wiley.com/doi/10.1029/RG026i004p00624+]
Among the longest astrophysical and astronomical cycles that might influence climate (and even among all forcing mechanisms external to the climatic system itself), only those involving variations in the elements of the Earth's orbit have been found to be significantly related to the long-term climatic data deduced from the geological record. The aim of the astronomical theory of paleoclimates, a particular version of which is due to Milankovitch, is to study this relationship between insolation and climate at the global scale. It comprises four different parts: the orbital elements, the insolation, the climate model, and the geological data.
In the nineteenth century, Croll and Pilgrim stressed the importance of severe winters as a cause of ice ages. Later, mainly during the first half of the twentieth century, Köppen, Spitaler, and Milankovitch regarded mild winters and cool summers as favoring glaciation. After Köppen and Wegener related the Milankovitch new radiation curve to Penck and Brückner's subdivision of the Quaternary, there was a long‐lasting debate on whether or not such changes in the insolation can explain the Quaternary glacial‐interglacial cycles. In the 1970s, with the improvements in dating, in acquiring, and in interpreting the geological data, with the advent of computers, and with the development of astronomical and climate models, the Milankovitch theory revived.
Over the last 5 years it overcame most of the geological, astronomical, and climatological difficulties. The accuracy of the long‐term variations of the astronomical elements and of the insolation values and the stability of their spectra have been analyzed by comparing seven different astronomical solutions and four different time spans (0–0.8 million years before present (Myr B.P.), 0.8–1.6 Myr B.P., 1.6–2.4 Myr B.P., and 2.4–3.2 Myr B.P.). For accuracy in the time domain, improvements are necessary for periods earlier than 2 Myr B.P. As for the stability of the frequencies, the fundamental periods (around 40, 23, and 19 kyr) do not deteriorate with time over the last 5 Myr, but their relative importance for each insolation and each astronomical parameter is a function of the period considered.
Spectral analysis of paleoclimatic records has provided substantial evidence that, at least near the obliquity and precession frequencies, a considerable fraction of the climatic variance is driven in some way by insolation changes forced by changes in the Earth's orbit. Not only are the fundamental astronomical and climatic frequencies alike, but also the climatic series are phase‐locked and strongly coherent with orbital variations. Provided that monthly insolation (i.e., a detailed seasonal cycle) is considered for the different latitudes, their long‐term deviations can be as large as 13% of the long‐term average, and sometimes considerable changes between extreme values can occur in less than 10,000 years.
Models of different categories of complexity, from conceptual ones to three‐dimensional atmospheric general circulation models and two‐dimensional time‐dependent models of the whole climate system, have now been astronomically forced in order to test the physical reality of the astronomical theory. The output of most recent modeling efforts compares favorably with data of the past 400,000 years. Accordingly, the model predictions for the next 100,000 years are used as a basis for forecasting how climate would evolve when forced by orbital variations in the absence of anthropogenic disturbance. The long‐term cooling trend which began some 6,000 years ago will continue for the next 5,000 years; this first temperature minimum will be followed by an amelioration at around 15 kyr A.P. (after present), by a cold interval centered at 23 kyr A.P., and by a major glaciation at around 60 kyr A.P.
1989
----
*Kolmogorov: Life and Creative Activities*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://projecteuclid.org/euclid.aop/1176991251[+https://projecteuclid.org/euclid.aop/1176991251+]
1995
----
*Dynamics of Jovian Atmospheres* - T. E. Dowling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://www.annualreviews.org/doi/abs/10.1146/annurev.fl.27.010195.001453[+https://www.annualreviews.org/doi/abs/10.1146/annurev.fl.27.010195.001453+]
2000
----
*Volcanic eruptions and climate*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://agupubs.onlinelibrary.wiley.com/doi/10.1029/1998RG000054[+https://agupubs.onlinelibrary.wiley.com/doi/10.1029/1998RG000054+]
Volcanic eruptions are an important natural cause of climate change on many timescales. A new capability to predict the climatic response to a large tropical eruption for the succeeding 2 years will prove valuable to society. In addition, to detect and attribute anthropogenic influences on climate, including effects of greenhouse gases, aerosols, and ozone‐depleting chemicals, it is crucial to quantify the natural fluctuations so as to separate them from anthropogenic fluctuations in the climate record. Studying the responses of climate to volcanic eruptions also helps us to better understand important radiative and dynamical processes that respond in the climate system to both natural and anthropogenic forcings. Furthermore, modeling the effects of volcanic eruptions helps us to improve climate models that are needed to study anthropogenic effects.
Large volcanic eruptions inject sulfur gases into the stratosphere, which convert to sulfate aerosols with an e‐folding residence time of about 1 year. Large ash particles fall out much quicker. The radiative and chemical effects of this aerosol cloud produce responses in the climate system. By scattering some solar radiation back to space, the aerosols cool the surface, but by absorbing both solar and terrestrial radiation, the aerosol layer heats the stratosphere. For a tropical eruption this heating is larger in the tropics than in the high latitudes, producing an enhanced pole‐to‐equator temperature gradient, especially in winter. In the Northern Hemisphere winter this enhanced gradient produces a stronger polar vortex, and this stronger jet stream produces a characteristic stationary wave pattern of tropospheric circulation, resulting in winter warming of Northern Hemisphere continents. This indirect advective effect on temperature is stronger than the radiative cooling effect that dominates at lower latitudes and in the summer.
The volcanic aerosols also serve as surfaces for heterogeneous chemical reactions that destroy stratospheric ozone, which lowers ultraviolet absorption and reduces the radiative heating in the lower stratosphere, but the net effect is still heating. Because this chemical effect depends on the presence of anthropogenic chlorine, it has only become important in recent decades. For a few days after an eruption the amplitude of the diurnal cycle of surface air temperature is reduced under the cloud. On a much longer timescale, volcanic effects played a large role in interdecadal climate change of the Little Ice Age. There is no perfect index of past volcanism, but more ice cores from Greenland and Antarctica will improve the record. There is no evidence that volcanic eruptions produce El Niño events, but the climatic effects of El Niño and volcanic eruptions must be separated to understand the climatic response to each.
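The roughly 1-year e-folding residence time for stratospheric sulfate quoted above implies a simple exponential decay of the aerosol loading. A minimal Python sketch (initial mass and units arbitrary, chosen purely for illustration):

```python
import math

def aerosol_loading(m0, t_years, tau_years=1.0):
    """Exponential decay of stratospheric sulfate aerosol mass.

    m0: initial aerosol mass after the eruption (arbitrary units)
    t_years: time since the eruption
    tau_years: e-folding residence time (~1 year per the review)
    """
    return m0 * math.exp(-t_years / tau_years)

# After one e-folding time about 37% of the loading remains;
# after two years, about 13.5%.
print(round(aerosol_loading(1.0, 1.0), 3))  # 0.368
print(round(aerosol_loading(1.0, 2.0), 3))  # 0.135
```

The rapid falloff is consistent with the review's point that the climatic response to a large tropical eruption is concentrated in the succeeding 2 years.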
2001
----
*Hydrodynamical Modeling Of Oceanic Vortices* - X. Carton
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://link.springer.com/article/10.1023/A%3A1013779219578[+https://link.springer.com/article/10.1023/A%3A1013779219578+]
Mesoscale coherent vortices are numerous in the ocean. Though they possess various structures in temperature and salinity, they are all long-lived, fairly intense and mostly circular. The physical variable which best describes the rotation and the density anomaly associated with coherent vortices is potential vorticity. It is diagnostically related to velocity and pressure when the vortex is stationary. Stationary vortices can be monopolar (circular or elliptical) or multipolar; their stability analysis shows that transitions between the various stationary shapes are possible when they become unstable. But stable vortices can also undergo unsteady evolutions when perturbed by environmental effects, like large-scale shear or strain fields, β-effect or topography. Changes in vortex shapes can also result from vortex interactions, such as the pairing, merger or vertical alignment of two vortices, which depend on their relative polarities and depths. Such interactions transfer energy and enstrophy between scales, and are essential in two-dimensional and in geostrophic turbulence. Finally, in relation with the observations, we describe a few mechanisms of vortex generation.
*Arnol'd Nonlinear Stability Theorems and their Application to the Atmosphere and Oceans*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://link.springer.com/article/10.1023/A%3A1014229917728[+https://link.springer.com/article/10.1023/A%3A1014229917728+]
Within the context of atmospheric and oceanic fluid dynamics, the problems of nonlinear stability and instability, particularly the Arnol'd second type nonlinear stability, are surveyed. The stability criteria obtained by means of the energy-Casimir and energy-Lagrange methods are presented for a variety of models, and the estimates for various generalized perturbation energy and enstrophy are given. Potential applications of these criteria are shown in the estimation of bounds on the perturbation energy and enstrophy, in the diagnostic study of the persistence or breakdown of jet flows in the middle and high latitudes, and in the verification of the validity of the tangent linear model in both atmospheric dynamics and oceanography. Some further research results are also highlighted.
*Ice Sheet And Satellite Altimetry*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://link.springer.com/article/10.1023/A%3A1010765923021[+https://link.springer.com/article/10.1023/A%3A1010765923021+]
Since 1991, the altimeters of the European ERS satellites have allowed observation of 80% of the Antarctic ice sheet and the whole Greenland ice sheet: they thus offer for the first time a unique vision of the polar ice caps. Indeed, surface topography is an essential measurement thanks to its capacity to highlight the physical processes which control the surface shape, or to test models. Moreover, the altimeter is also a radar, which makes it possible to estimate snow surface or subsurface characteristics, such as surface roughness induced by the strong katabatic wind or ice grain size. The polar ice caps may not be in a stationary state: they continue to respond to the climatic warming at the beginning of the Holocene, some 18,000 years ago, and possibly are starting to react to present climatic warming. The altimeter offers a unique means of estimating variations in volume and thus the contribution of the polar ice caps to present sea level change.
*North Atlantic oscillation - Concepts and studies*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://link.springer.com/article/10.1023%2FA%3A1014217317898[+https://link.springer.com/article/10.1023%2FA%3A1014217317898+]
This paper aims to provide a comprehensive review of previous studies and concepts concerning the North Atlantic Oscillation. The North Atlantic Oscillation (NAO) and its recent homologue, the Arctic Oscillation/Northern Hemisphere annular mode (AO/NAM), are the most prominent modes of variability in the Northern Hemisphere winter climate. The NAO teleconnection is characterised by a meridional displacement of atmospheric mass over the North Atlantic area. Its state is usually expressed by the standardised air pressure difference between the Azores High and the Iceland Low. This NAO index is a measure of the strength of the westerly flow (positive with strong westerlies, and vice versa). Together with the El Niño/Southern Oscillation (ENSO) phenomenon, the NAO is a major source of seasonal to interdecadal variability in the global atmosphere. On interannual and shorter time scales, the NAO dynamics can be explained as a purely internal mode of variability of the atmospheric circulation. Interdecadal variability may be influenced, however, by ocean and sea-ice processes.
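The NAO index described above (the standardised Azores-minus-Iceland pressure difference) is straightforward to compute. A minimal sketch, where the winter-mean station pressures are made up purely for illustration:

```python
import statistics

def nao_index(azores_slp, iceland_slp):
    """Standardised Azores-minus-Iceland sea-level pressure difference.

    Inputs are equal-length sequences of (e.g. winter-mean) SLP values
    in hPa. The returned series has zero mean and unit standard
    deviation; positive values indicate strong westerly flow.
    """
    diff = [a - b for a, b in zip(azores_slp, iceland_slp)]
    mu = statistics.mean(diff)
    sigma = statistics.stdev(diff)
    return [(d - mu) / sigma for d in diff]

# Hypothetical winter-mean pressures (hPa) for four winters:
azores = [1022.0, 1018.0, 1025.0, 1015.0]
iceland = [995.0, 1002.0, 990.0, 1005.0]
print([round(v, 2) for v in nao_index(azores, iceland)])
```

Operational indices differ in detail (station choice, normalisation period, or a principal-component definition), but all follow this standardised-difference pattern.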
*The dynamics of ocean heat transport variability*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2000RG000084[+https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2000RG000084+]
The north-south heat transport is the prime manifestation of the ocean's role in global climate, but understanding of its variability has been fragmentary owing to uncertainties in observational analyses, limitations in models, and the lack of a convincing mechanism. We review the dynamics of global ocean heat transport variability, with an emphasis on timescales from monthly to interannual. We synthesize relatively simple dynamical ideas and show that together they explain heat transport variability in a state-of-the-art, high-resolution ocean general circulation model. Globally, the cross-equatorial seasonal heat transport fluctuations are close to ±3 × 10^15 W, the same amplitude as the cross-equatorial seasonal atmospheric energy transport. The variability is concentrated within 20° of the equator and dominated by the annual cycle.
The majority of the variability is due to wind-induced current fluctuations in which the time-varying wind drives Ekman layer mass transports that are compensated by depth-independent return flows. The temperature difference between the mass transports gives rise to the time-dependent heat transport. It is found that in the heat budget the divergence of the time-varying heat transport is largely balanced by changes in heat storage. Despite the Ekman transport's strong impact on the time-dependent heat transport, the largely depth-independent character of its associated meridional overturning stream function means that it does not affect estimates of the time-mean heat transport made by one-time hydrographic surveys. Away from the tropics the heat transport variability associated with the depth-independent gyre and depth-dependent circulations is much weaker than the Ekman variability. The non-Ekman contributions can amount to a 0.2–0.4 × 10^15 W standard deviation in the heat transport estimated from a one-time hydrographic survey.
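The Ekman mechanism described above lends itself to a back-of-the-envelope estimate: a zonal wind stress tau_x drives a meridional Ekman mass transport tau_x/f per metre of longitude, and the temperature contrast between the Ekman layer and its depth-independent return flow carries the heat. A sketch in Python, with every numerical input hypothetical and chosen only to illustrate the scaling:

```python
import math

OMEGA = 7.292e-5   # Earth's rotation rate (s^-1)
CP = 4.0e3         # specific heat of seawater (J kg^-1 K^-1), rounded

def ekman_heat_transport(tau_x, delta_t, width_m, lat_deg):
    """Scale estimate of meridional Ekman heat transport across a basin.

    tau_x: zonal wind stress (N m^-2); delta_t: temperature difference (K)
    between the Ekman layer and the return flow; width_m: basin width (m).
    Returns H = CP * (tau_x / f) * delta_t * width, in watts.
    """
    f = 2.0 * OMEGA * math.sin(math.radians(lat_deg))  # Coriolis parameter
    mass_per_metre = tau_x / f        # Ekman mass transport, kg s^-1 m^-1
    return CP * mass_per_metre * delta_t * width_m

# Illustrative tropical numbers: 0.05 N m^-2 stress, 10 K contrast,
# a 5000 km wide basin, at 15 degrees latitude.
print(f"{ekman_heat_transport(0.05, 10.0, 5.0e6, 15.0):.2e} W")
```

With these illustrative numbers the estimate comes out at a few times 10^14 W, a plausible fraction of the ±3 × 10^15 W seasonal fluctuation quoted above; the estimate grows toward the equator as f shrinks, consistent with the variability being concentrated within 20° of the equator.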
*Glacial cycles: Toward a new paradigm*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2000RG000091[+https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2000RG000091+]
The largest environmental changes in the recent geological history of the Earth are undoubtedly the successions of glacial and interglacial times. It has been clearly demonstrated that changes in the orbital parameters of our planet have a crucial role in these cycles. Nevertheless, several problems in the classical astronomical theory of paleoclimate have indeed been identified: (1) The main cyclicity in the paleoclimatic record is close to 100,000 years, but there are no significant orbitally induced changes in the radiative forcing of the Earth in this frequency range (the “100-kyr problem”); (2) the most prominent glacial-interglacial transition occurs at a time of minimal orbital variations (the “stage 11 problem”); and (3) at ∼0.8 Ma a change from a 41-kyr dominant periodicity to a 100-kyr periodicity occurred without major changes in orbital forcing or in the Earth's configuration (the “late Pleistocene transition problem”).
Additionally, the traditional view states that the climate system changes slowly and continuously together with the slow evolution of the large continental ice sheets, whereas recent high‐resolution data from ice and marine sediment cores do not support such a gradual scenario. Most of the temperature rise at the last termination occurred over a few decades in the Northern Hemisphere, indicating a major and abrupt reorganization of the ocean‐atmosphere system. Similarly, huge iceberg discharges during glacial times, known as Heinrich events, clearly demonstrate that ice sheet changes may also be sometimes quite abrupt. In light of these recent paleoclimatic data the Earth climate system appears much more unstable and seems to jump abruptly between different quasi steady states. Using the concept of thresholds, this new paradigm can be easily integrated into classical astronomical theory and compared with recent observational evidence. If the ice sheet changes are, by definition, the central phenomenon of glacial‐interglacial cycles, other components of the climate system (atmospheric CO2 concentration, Southern Ocean productivity, or global deep‐ocean circulation) may play an even more fundamental role in these climatic cycles.
*The quasi‐biennial oscillation*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://agupubs.onlinelibrary.wiley.com/doi/10.1029/1999RG000073[+https://agupubs.onlinelibrary.wiley.com/doi/10.1029/1999RG000073+]
The quasi‐biennial oscillation (QBO) dominates the variability of the equatorial stratosphere (∼16–50 km) and is easily seen as downward propagating easterly and westerly wind regimes, with a variable period averaging approximately 28 months. From a fluid dynamical perspective, the QBO is a fascinating example of a coherent, oscillating mean flow that is driven by propagating waves with periods unrelated to that of the resulting oscillation. Although the QBO is a tropical phenomenon, it affects the stratospheric flow from pole to pole by modulating the effects of extratropical waves. Indeed, study of the QBO is inseparable from the study of atmospheric wave motions that drive it and are modulated by it. The QBO affects variability in the mesosphere near 85 km by selectively filtering waves that propagate upward through the equatorial stratosphere, and may also affect the strength of Atlantic hurricanes.
The effects of the QBO are not confined to atmospheric dynamics. Chemical constituents, such as ozone, water vapor, and methane, are affected by circulation changes induced by the QBO. There are also substantial QBO signals in many of the shorter‐lived chemical constituents. Through modulation of extratropical wave propagation, the QBO has an effect on the breakdown of the wintertime stratospheric polar vortices and the severity of high‐latitude ozone depletion. The polar vortex in the stratosphere affects surface weather patterns, providing a mechanism for the QBO to have an effect at the Earth's surface. As more data sources (e.g., wind and temperature measurements from both ground‐based systems and satellites) become available, the effects of the QBO can be more precisely assessed.
This review covers the current state of knowledge of the tropical QBO, its extratropical dynamical effects, chemical constituent transport, and effects of the QBO in the troposphere (∼0–16 km) and mesosphere (∼50–100 km). It is intended to provide a broad overview of the QBO and its effects to researchers outside the field, as well as a source of information and references for specialists. The history of research on the QBO is discussed only briefly, and the reader is referred to several historical review papers. The basic theory of the QBO is summarized, and tutorial references are provided.
*Origin and Evolution of the Great Lakes*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://www.sciencedirect.com/science/article/pii/S038013300170665X[+https://www.sciencedirect.com/science/article/pii/S038013300170665X+]
This paper presents a synthesis of traditional and recently published work regarding the origin and evolution of the Great Lakes. It differs from previously published reviews by focusing on three topics critical to the development of the Great Lakes: the glaciation of the Great Lakes watershed during the late Cenozoic, the evolution of the Great Lakes since the last glacial maximum, and the record of lake levels and coastal erosion in modern times.
The Great Lakes are a product of glacial scour and were partially or totally covered by glacier ice at least six times since 0.78 Ma. During retreat of the last ice sheet, large proglacial lakes developed in the Great Lakes watershed. Their levels and areas varied considerably as the oscillating ice margin opened and closed outlets at differing elevations and locations; they were also significantly affected by channel downcutting, crustal rebound, and catastrophic inflows from other large glacial lakes.
Today, lake level changes of about 1/3 m annually, and up to 2 m over 10- to 20-year periods, are mainly climatically driven. Various engineering works provide limited control on lake levels for some but not all of the Great Lakes. Although not as pronounced as former changes, these subtle variations in lake level have had a significant effect on shoreline erosion, which is often a major concern of coastal residents.
2002
----
*Monte Carlo Methods in Geophysical Inverse Problems*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2000RG000089[+https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2000RG000089+]
Monte Carlo inversion techniques were first used by Earth scientists more than 30 years ago. Since that time they have been applied to a wide range of problems, from the inversion of free oscillation data for whole Earth seismic structure to studies at the meter‐scale lengths encountered in exploration seismology. This paper traces the development and application of Monte Carlo methods for inverse problems in the Earth sciences and in particular geophysics. The major developments in theory and application are traced from the earliest work of the Russian school and the pioneering studies in the west by Press [1968] to modern importance sampling and ensemble inference methods. The paper is divided into two parts. The first is a literature review, and the second is a summary of Monte Carlo techniques that are currently popular in geophysics. These include simulated annealing, genetic algorithms, and other importance sampling approaches. The objective is to act as both an introduction for newcomers to the field and a comprehensive reference source for researchers already familiar with Monte Carlo inversion. It is our hope that the paper will serve as a timely summary of an expanding and versatile methodology and also encourage applications to new areas of the Earth sciences.
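To give newcomers a feel for the importance sampling methods surveyed here, below is a minimal simulated annealing sketch for a toy two-parameter inversion. The linear forward model, step size, and cooling schedule are invented for illustration and are not taken from the paper.

```python
import math
import random

def misfit(m, data):
    """Sum-of-squares data misfit for a toy linear forward model d = m0 + m1*x."""
    return sum((m[0] + m[1] * x - d) ** 2 for x, d in data)

def simulated_annealing(data, steps=20000, t0=1.0, step=0.1, seed=0):
    """Metropolis-style annealing: a random walk in model space that always
    accepts downhill moves and accepts uphill moves with a probability that
    shrinks as the temperature is lowered."""
    rng = random.Random(seed)
    m = [0.0, 0.0]                              # starting model
    e = misfit(m, data)
    m_best, e_best = list(m), e
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-6       # linear cooling schedule
        cand = [p + rng.gauss(0.0, step) for p in m]
        e_cand = misfit(cand, data)
        if e_cand < e or rng.random() < math.exp((e - e_cand) / t):
            m, e = cand, e_cand
            if e < e_best:
                m_best, e_best = list(m), e
    return m_best, e_best

# synthetic data generated from the "true" model (2.0, 0.5)
data = [(x, 2.0 + 0.5 * x) for x in range(10)]
m_best, e_best = simulated_annealing(data)
```

At high temperature the walk explores broadly; as the temperature drops it behaves like a greedy descent. Managing that trade-off between exploration and refinement is the common thread of the importance sampling schemes reviewed in the paper.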
*Array Seismology: Methods and Applications*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2000RG000100[+https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2000RG000100+]
Since their development in the 1960s, seismic arrays have given a new impulse to seismology. Recordings from many uniform seismometers in a well‐defined, closely spaced configuration produce high‐quality and homogeneous data sets, which can be used to study the Earth's structure in great detail. Apart from an improvement of the signal‐to‐noise ratio due to the simple summation of the individual array recordings, seismological arrays can be used in many different ways to study the fine‐scale structure of the Earth's interior. They have helped to study such different structures as the interior of volcanos, continental crust and lithosphere, global variations of seismic velocities in the mantle, the core‐mantle boundary and the structure of the inner core. For this purpose many different, specialized array techniques have been developed and applied to an increasing number of high‐quality array data sets.
Most array methods use the ability of seismic arrays to measure the vector velocity of an incident wave front, i.e., slowness and back azimuth. This information can be used to distinguish between different seismic phases, separate waves from different seismic events and improve the signal‐to‐noise ratio by stacking with respect to the varying slowness of different phases. The vector velocity information of scattered or reflected phases can be used to determine the region of the Earth from whence the seismic energy comes and with what structures it interacted. Therefore seismic arrays are perfectly suited to study the small‐scale structure and variations of the material properties of the Earth.
In this review we will give an introduction to various array techniques which have been developed since the 1960s. For each of these array techniques we give the basic mathematical equations and show examples of applications. The advantages and disadvantages and the appropriate applications and restrictions of the techniques will also be discussed. The main methods discussed are the beam‐forming method, which forms the basis for several other methods, different slant stacking techniques, and frequency–wave number analysis. Finally, some methods used in exploration geophysics that have been adopted for global seismology are introduced. This is followed by a description of temporary and permanent arrays installed in the past, as well as existing arrays and seismic networks. We highlight their purposes and discuss briefly the advantages and disadvantages of different array configurations.
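The delay-and-sum idea behind beam forming can be conveyed in a few lines. The sketch below aligns the traces of a toy line array using a trial slowness and stacks them; the station geometry, sampling interval, and Gaussian pulse are invented for the example.

```python
import math

def beamform(traces, offsets, slowness, dt):
    """Delay-and-sum beam: shift each trace by the plane-wave moveout
    t_i = slowness * offset_i, then average the aligned traces."""
    n = len(traces[0])
    beam = [0.0] * n
    for trace, x in zip(traces, offsets):
        shift = int(round(slowness * x / dt))   # moveout in samples
        for j in range(n):
            k = j + shift
            if 0 <= k < n:
                beam[j] += trace[k]
    return [v / len(traces) for v in beam]

# toy data: a Gaussian pulse sweeping across a 4-station line array
dt, slowness = 0.01, 0.1                        # s, s/km
offsets = [0.0, 1.0, 2.0, 3.0]                  # station positions in km
def pulse(t0, n=200):
    return [math.exp(-(((i * dt) - t0) / 0.02) ** 2) for i in range(n)]
traces = [pulse(0.5 + slowness * x) for x in offsets]

beam = beamform(traces, offsets, slowness, dt)  # correct slowness: coherent stack
blur = beamform(traces, offsets, -slowness, dt) # wrong slowness: smeared stack
```

Scanning the trial slowness (and, for a two-dimensional array, the back azimuth) for the maximum stack amplitude is how an array measures the vector velocity of an incident wave front.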
*A Millennium of Geomagnetism*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2000RG000097[+https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2000RG000097+]
The history of geomagnetism began around the year 1000 with the discovery in China of the magnetic compass. Methodical studies of the Earth's field started in 1600 with William Gilbert's De Magnete [Gilbert, 1600] and continued with the work of (among others) Edmond Halley, Charles Augustin de Coulomb, Carl Friedrich Gauss, and Edward Sabine. The discovery of electromagnetism by Hans Christian Oersted and André‐Marie Ampère led Michael Faraday to the notion of fluid dynamos, and the observation of sunspot magnetism by George Ellery Hale led Sir Joseph Larmor in 1919 to the idea that such dynamos could sustain themselves naturally in convecting conducting fluids. From that came modern dynamo theory, of both the solar and terrestrial magnetic fields. Paleomagnetic studies revealed that the Earth's dipole had undergone reversals in the distant past, and these became the critical evidence in establishing plate tectonics. Finally, the recent availability of scientific spacecraft has demonstrated the intricacy of the Earth's distant magnetic field, as well as the existence of magnetic fields associated with other planets and with satellites in our solar system.
*Advanced Spectral Methods for Climatic Time Series*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2000RG000092[+https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2000RG000092+]
The analysis of univariate or multivariate time series provides crucial information to describe, understand, and predict climatic variability. The discovery and implementation of a number of novel methods for extracting useful information from time series has recently revitalized this classical field of study. Considerable progress has also been made in interpreting the information so obtained in terms of dynamical systems theory. In this review we describe the connections between time series analysis and nonlinear dynamics, discuss signal‐to‐noise enhancement, and present some of the novel methods for spectral analysis. The various steps, as well as the advantages and disadvantages of these methods, are illustrated by their application to an important climatic time series, the Southern Oscillation Index. This index captures major features of interannual climate variability and is used extensively in its prediction. Regional and global sea surface temperature data sets are used to illustrate multivariate spectral methods. Open questions and further prospects conclude the review.
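As background for readers outside the field, the baseline that these advanced spectral methods refine is the classical periodogram. Below is a minimal sketch applied to an invented monthly toy series (standing in for an index like the SOI); the series and its 12-month cycle are assumptions for the example.

```python
import cmath
import math

def periodogram(x):
    """Classical periodogram: squared DFT magnitude normalized by the series
    length. Advanced estimators (e.g., multitaper, SSA-based methods) reduce
    the variance and leakage of this raw estimate."""
    n = len(x)
    mean = sum(x) / n
    x = [v - mean for v in x]           # remove the sample mean first
    p = []
    for k in range(n // 2 + 1):         # frequency index k -> k/n cycles per sample
        c = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        p.append(abs(c) ** 2 / n)
    return p

# toy monthly series with a pure 12-month cycle over 10 years
n = 120
series = [math.sin(2 * math.pi * t / 12) for t in range(n)]
p = periodogram(series)
peak = max(range(len(p)), key=lambda k: p[k])   # expected at k = n/12 = 10
```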
*Evolution of networks*
~~~~~~~~~~~~~~~~~~~~~~~
https://www.tandfonline.com/doi/abs/10.1080/00018730110112519[+https://www.tandfonline.com/doi/abs/10.1080/00018730110112519+]
We review the recent rapid progress in the statistical physics of evolving networks. Interest has focused mainly on the structural properties of complex networks in communications, biology, social sciences and economics. A number of giant artificial networks of this kind have recently been created, which opens a wide field for the study of their topology, evolution, and the complex processes which occur in them. Such networks possess a rich set of scaling properties. A number of them are scale-free and show striking resilience against random breakdowns. In spite of the large sizes of these networks, the distances between most of their vertices are short - a feature known as the 'small-world' effect. We discuss how growing networks self-organize into scale-free structures, and investigate the role of the mechanism of preferential linking. We consider the topological and structural properties of evolving networks, and percolation and disease spread on these networks. We present a number of models demonstrating the main features of evolving networks and discuss current approaches for their simulation and analytical study. Applications of the general results to particular networks in nature are discussed. We demonstrate the generic connections of the network growth processes with the general problems of non-equilibrium physics, econophysics, evolutionary biology, and so on.
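The mechanism of preferential linking discussed here is easy to simulate. The sketch below grows a network in which each new vertex attaches to existing vertices with probability proportional to their degree (the Barabási–Albert rule); the network size and seed graph are arbitrary choices for the example.

```python
import random

def grow_preferential(n, m, seed=0):
    """Grow an n-vertex network; each new vertex adds m edges to distinct
    existing vertices chosen with probability proportional to their degree."""
    rng = random.Random(seed)
    edges = [(0, 1)]            # seed graph: a single edge
    targets = [0, 1]            # each vertex appears once per unit of degree
    for v in range(2, n):
        chosen = set()
        while len(chosen) < min(m, v):
            chosen.add(rng.choice(targets))     # degree-proportional sampling
        for u in chosen:
            edges.append((v, u))
            targets += [v, u]
    return edges

edges = grow_preferential(1000, 2)
degree = {}
for a, b in edges:
    degree[a] = degree.get(a, 0) + 1
    degree[b] = degree.get(b, 0) + 1
```

The resulting degree distribution is heavy-tailed: the mean degree stays near 2m, while the earliest vertices accumulate degrees an order of magnitude larger, the "rich get richer" signature of scale-free growth.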
*Statistical mechanics of complex networks*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.74.47[+https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.74.47+]
Complex networks describe a wide range of systems in nature and society. Frequently cited examples include the cell, a network of chemicals linked by chemical reactions, and the Internet, a network of routers and computers connected by physical links. While traditionally these systems have been modeled as random graphs, it is increasingly recognized that the topology and evolution of real networks are governed by robust organizing principles. This article reviews the recent advances in the field of complex networks, focusing on the statistical mechanics of network topology and dynamics. After reviewing the empirical data that motivated the recent interest in networks, the authors discuss the main models and analytical tools, covering random graphs, small-world and scale-free networks, the emerging theory of evolving networks, and the interplay between topology and the network’s robustness against failures and attacks.
2003
----
*Sea breeze: Structure, forecasting, and impacts* - S. T. K. Miller et al
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2003RG000124[+https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2003RG000124+]
The sea breeze system (SBS) occurs at coastal locations throughout the world and consists of many spatially and temporally nested phenomena. Cool marine air propagates inland when a cross‐shore mesoscale (2–2000 km) pressure gradient is created by daytime differential heating. The circulation is also characterized by rising currents at the sea breeze front and diffuse sinking currents well out to sea and is usually closed by seaward flow aloft. Coastal impacts include relief from oppressive hot weather, development of thunderstorms, and changes in air quality. This paper provides a review of SBS research extending back 2500 years but focuses primarily on recent discoveries. We address SBS forcing mechanisms, structure and related phenomena, life cycle, forecasting, and impacts on air quality.
*The Earth's Climate in the Next Hundred Thousand years (100 kyr)* - A. Berger et al
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://link.springer.com/article/10.1023/A%3A1023233702670[+https://link.springer.com/article/10.1023/A%3A1023233702670+]
One of the most striking features of the Quaternary paleoclimate records remains the so-called 100-kyr cycle which is undoubtedly linked to the future of our climate. Such a 100-kyr cycle is indeed characterised by long glacial periods followed by a short interglacial (∼10–15 kyr long). As we are now in an interglacial, the Holocene, the previous one (the Eemian, which corresponds quite well to Marine Isotope Stage 5e, peaking at ∼125 kyr before present, BP) was assumed to be a good analogue for our present-day climate. In addition, as the Holocene is 10 kyr long, paleoclimatologists were naturally inclined to predict that we are quite close to the next ice age. Simulations using the 2-D climate model of Louvain-la-Neuve show, however, that the current interglacial will most probably last much longer than any previous one. It is suggested here that this is related to the shape of the Earth's orbit around the Sun, which will be almost circular over the next tens of thousands of years. As this is primarily related to the 400-kyr cycle of eccentricity, the best and closest analogue for such a forcing is definitely Marine Isotopic Stage 11 (MIS-11), some 400 kyr ago, not MIS-5e. Because the CO2 concentration in the atmosphere also plays an important role in shaping long-term climatic variations – especially its phase with respect to insolation – a detailed reconstruction of this previous interglacial from deep sea and ice records is urgently needed. Such a study is particularly important in the context of the already exceptional present-day CO2 concentrations (unprecedented over the past million years) and, even more so, because of even larger values predicted to occur during the 21st century due to human activities.
*The Structure and Function of Complex Networks*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://epubs.siam.org/doi/10.1137/S003614450342480[+https://epubs.siam.org/doi/10.1137/S003614450342480+]
Inspired by empirical studies of networked systems such as the Internet, social networks, and biological networks, researchers have in recent years developed a variety of techniques and models to help us understand or predict the behavior of these systems. Here we review developments in this field, including such concepts as the small-world effect, degree distributions, clustering, network correlations, random graph models, models of network growth and preferential attachment, and dynamical processes taking place on networks.
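Two of the quantities reviewed, the clustering coefficient and the mean shortest-path length behind the small-world effect, can be computed directly from an adjacency list. The sketch below does so for a tiny invented graph.

```python
from collections import deque

def clustering(adj, v):
    """Local clustering coefficient: the fraction of pairs of v's
    neighbours that are themselves connected."""
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i, a in enumerate(nbrs)
                  for b in nbrs[i + 1:] if b in adj[a])
    return 2.0 * links / (k * (k - 1))

def mean_path_length(adj):
    """Average shortest-path distance over all reachable vertex pairs,
    via a breadth-first search from every vertex."""
    total, pairs = 0, 0
    for s in adj:
        dist, queue = {s: 0}, deque([s])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

# tiny example: a triangle (0, 1, 2) with a pendant vertex 3 attached to 2
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
```

Real small-world networks combine high clustering (like a regular lattice) with a short mean path length (like a random graph); measuring both on an empirical adjacency list is the standard first diagnostic.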
*Ice streams as the arteries of an ice sheet: their mechanics, stability and significance*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://www.sciencedirect.com/science/article/pii/S0012825202001307[+https://www.sciencedirect.com/science/article/pii/S0012825202001307+]
Ice streams are corridors of fast ice flow (ca. 0.8 km/year) within an ice sheet and are responsible for discharging the majority of the ice and sediment within them. Consequently, like the arteries in our body, their behaviour and stability are essential to the well-being of an ice sheet. Ice streams may either be constrained by topography (topographic ice streams) or by areas of slow moving ice (pure ice streams). The latter show spatial and temporal patterns of variability that may indicate a potential for instability and are therefore of particular interest. Today, pure ice streams are largely restricted to the Siple Coast of Antarctica and these ice streams have been extensively investigated over the last 20 years. This paper provides an introduction to this substantial body of research and describes the morphology, dynamics, and temporal behaviour of these contemporary ice streams, before exploring the basal conditions that exist beneath them and the mechanisms that drive the fast flow within them. The paper concludes by reviewing the potential of ice streams as unstable elements within ice sheets that may impact on the Earth's dynamic system.
2004
----
*Polynya Dynamics: a Review of Observations and Modeling* - M. A. Morales Maqueda et al
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2002RG000116[+https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2002RG000116+]
Polynyas are nonlinear‐shaped openings within the ice cover, ranging in size from 10 to 10⁵ km². Polynyas play an important climatic role. First, winter polynyas tend to warm the atmosphere, thus affecting atmospheric mesoscale motions. Second, ocean surface cooling and brine rejection during sea ice growth in polynyas lead to vertical mixing and convection, contributing to the transformation of intermediate and deep waters in the global ocean and the maintenance of the oceanic overturning circulation. Since 1990, there has been an upsurge in polynya observations and theoretical models for polynya formation and their impact on the biogeochemistry of the polar seas. This article reviews polynya research carried out in the last 2 decades, focusing on presenting a state‐of‐the‐art picture of the physical interactions between polynyas and the atmosphere‐sea ice‐ocean system. Observational and modeling studies, the surface heat budget, and water mass transformation within these features are addressed.
*Nonlinear multivariate and time series analysis by neural network methods* - W. Hsieh
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2002RG000112[+https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2002RG000112+]
Methods in multivariate statistical analysis are essential for working with large amounts of geophysical data, data from observational arrays, from satellites, or from numerical model output. In classical multivariate statistical analysis, there is a hierarchy of methods, starting with linear regression at the base, followed by principal component analysis (PCA) and finally canonical correlation analysis (CCA). A multivariate time series method, the singular spectrum analysis (SSA), has been a fruitful extension of the PCA technique. The common drawback of these classical methods is that only linear structures can be correctly extracted from the data. Since the late 1980s, neural network methods have become popular for performing nonlinear regression and classification. More recently, neural network methods have been extended to perform nonlinear PCA (NLPCA), nonlinear CCA (NLCCA), and nonlinear SSA (NLSSA). This paper presents a unified view of the NLPCA, NLCCA, and NLSSA techniques and their applications to various data sets of the atmosphere and the ocean (especially for the El Niño‐Southern Oscillation and the stratospheric quasi‐biennial oscillation). These data sets reveal that the linear methods are often too simplistic to describe real‐world systems, with a tendency to scatter a single oscillatory phenomenon into numerous unphysical modes or higher harmonics, which can be largely alleviated in the new nonlinear paradigm.
*Nonlinear Shallow Water Theories for Coastal Waves* - E. Barthelemy
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://link.springer.com/article/10.1007/s10712-003-1281-7[+https://link.springer.com/article/10.1007/s10712-003-1281-7+]
Ocean waves entering the near-shore zone undergo nonlinear and dispersive processes. This paper reviews nonlinear models, focusing on the so-called Serre equations. Techniques to overcome their limitations with respect to the phase speed are presented. Nonlinear behaviours are compared with theoretical results concerning the properties of Stokes waves. In addition, the models are tested against experiments concerning periodic wave transformation over a bar topography and of the shoaling of solitary waves on a beach.
*Theory of Basin Scale Dynamics of a Stratified Rotating Fluid* - L. R. M. Maas
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://link.springer.com/article/10.1007/s10712-004-1277-y[+https://link.springer.com/article/10.1007/s10712-004-1277-y+]
The dynamics of a stratified fluid contained in a rotating rectangular box is described in terms of the evolution of the lowest moments of its density and momentum fields. The first moment of the density field also gives the position of the fluid’s centre-of-mass. The resulting low-order model allows for fast assessment both of adopted parameterisations, as well as of particular values of parameters. In the ideal fluid limit (neglect of viscous and diffusive effects), in the absence of wind, the equations have a Hamiltonian structure that is integrable (non-integrable) in the absence (presence) of differential heating. In a non-rotating convective regime, dynamically rich behaviour and strong dependence on the single (lumped) parameter are established. For small values of this parameter, in a self-similar regime, further reduction to an explicit map is discussed in an Appendix. Introducing rotation in a nearly geostrophic regime leads through a Hopf bifurcation to a limit cycle, and under the influence of wind and salt to multiple equilibria and chaos, respectively.
*From Classical To Statistical Ocean Dynamics* - G. Holloway
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://link.springer.com/article/10.1007/s10712-004-1272-3[+https://link.springer.com/article/10.1007/s10712-004-1272-3+]
Traditional ocean modeling treats fields resolved on the model grid according to the classical dynamics of continua. Variability on smaller scales is included through sundry “eddy viscosities”, “mixing coefficients” and other schemes. In this paper we develop an alternative approach based on statistical dynamics. First, we recognize that we treat probabilities of flows, not the flows themselves. Modeled dependent variables are the moments (expectations) of the probabilities of possible flows. Second, we address the challenge to obtain the equations of motion for the moments of probable flows rather than the (traditional) equations for explicit flows. For linear terms and on larger resolved scales, the statistical equations agree with classical dynamics where traditional modeling works well.
Differences arise where traditional modeling would relegate unresolved motion to “eddy viscosity”, etc. Instead, changes of entropy (⟨−log P⟩, averaged over the probability distribution of possible flows) with respect to the modeled moments act as forcings upon those moments. In this way we obtain a consistent framework for specifying the terms which, traditionally, represent subgridscale effects. Although these statistical equations are close to the classical equations in many ways, important differences are also evident; here, two phenomena are described where the results differ. We consider eddies interacting with bottom topography. It is seen that traditional “eddy viscosity” and/or “topographic drag”, which would reduce large scale flows toward rest, are wrong. The second law of thermodynamics is violated; the “arrow of time” is running backwards! From statistical dynamics, approximate corrections are obtained, yielding a practical improvement to the fidelity of ocean models. Another phenomenon occurs at much smaller scales in the turbulent mixing of heat and salt. Even when both heat and salt are stably stratifying, their rates of turbulent transfer should differ. This suggests a further model improvement.
*Stochastic Models of Quasigeostrophic Turbulence* - T. Delsole
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://link.springer.com/article/10.1023/B%3AGEOP.0000028160.75549.0d[+https://link.springer.com/article/10.1023/B%3AGEOP.0000028160.75549.0d+]
Atmospheric and oceanic eddies are believed to be manifestations of quasigeostrophic turbulence — turbulence that occurs in rapidly rotating, vertically stratified fluid systems. The heat, momentum, and water transport by these eddies constitute a significant component of the climate balance, without which climate change cannot be understood. A major, unsolved problem is whether the turbulent eddy fluxes can be parameterized in terms of the large-scale, background flow. In the past, stochastic models have been used quite extensively to investigate quasigeostrophic turbulence in the case in which the eddy statistics are isotropic and homogeneous. Unfortunately, these models ignore the background shear which is absolutely essential to maintaining the eddies in the presence of dissipation. Recent attempts to extend stochastic models to shear flows have shown significant skill in predicting the structure of the eddy fluxes in arbitrary, three-dimensionally varying flows. This paper provides an accessible introduction to these models. The topics reviewed include quasigeostrophic turbulence and two-dimensional turbulence, non-modal and optimal perturbations, mathematical theory of stochastic models, stochastic model simulations with realistic background states, and recent closure theories. A list of unsolved problems concludes this review.
*Heinrich events: Massive late Pleistocene detritus layers of the North Atlantic and their global climate imprint*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2003RG000128[+https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2003RG000128+]
Millennial climate oscillations of the glacial interval are interrupted by extreme events, the so‐called Heinrich events of the North Atlantic. Their near‐global footprint is a testament to coherent interactions among Earth's atmosphere, oceans, and cryosphere on millennial timescales. Heinrich detritus appears to have been derived from the region around Hudson Strait. It was deposited over approximately 500 ± 250 years. Several mechanisms have been proposed for the origin of the layers: binge‐purge cycle of the Laurentide ice sheet, jökulhlaup activity from a Hudson Bay lake, and an ice shelf buildup/collapse fed by Hudson Strait. To determine the origin of the Heinrich events, I recommend (1) further studies of the timing and duration of the events, (2) further sedimentology study near the Hudson Strait, and (3) greater spatial and temporal resolution studies of the layers as well as their precursory intervals. Studies of previous glacial intervals may also provide important constraints.
2005
----
*Madden‐Julian Oscillation* - C. Zhang
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2004RG000158[+https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2004RG000158+]
The Madden‐Julian Oscillation (MJO) is the dominant component of the intraseasonal (30–90 days) variability in the tropical atmosphere. It consists of large‐scale coupled patterns in atmospheric circulation and deep convection, with coherent signals in many other variables, all propagating eastward slowly (∼5 m s−1) through the portion of the Indian and Pacific oceans where the sea surface is warm. It constantly interacts with the underlying ocean and influences many weather and climate systems. The past decade has witnessed an expeditious progress in the study of the MJO: Its large‐scale and multiscale structures are better described, its scale interaction is recognized, its broad influences on tropical and extratropical weather and climate are increasingly appreciated, and its mechanisms for disturbing the ocean are further comprehended. Yet we are facing great difficulties in accurately simulating and predicting the MJO using sophisticated global weather forecast and climate models, and we are unable to explain such difficulties based on existing theories of the MJO. It is fair to say that the MJO remains an unmet challenge to our understanding of the tropical atmosphere and to our ability to simulate and predict its variability. This review, motivated by both the acceleration and gaps in our knowledge of the MJO, intends to synthesize what we currently know and what we do not know on selected topics: its observed basic characteristics, mechanisms, numerical modeling, air‐sea interaction, and influences on the El Niño and Southern Oscillation.
2006
----
*Complex networks: Structure and dynamics*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://www.sciencedirect.com/science/article/pii/S037015730500462X?via%3Dihub[+https://www.sciencedirect.com/science/article/pii/S037015730500462X?via%3Dihub+]
Coupled biological and chemical systems, neural networks, social interacting species, the Internet and the World Wide Web, are only a few examples of systems composed of a large number of highly interconnected dynamical units. The first approach to capture the global properties of such systems is to model them as graphs whose nodes represent the dynamical units, and whose links stand for the interactions between them. On the one hand, scientists have to cope with structural issues, such as characterizing the topology of a complex wiring architecture, revealing the unifying principles that are at the basis of real networks, and developing models to mimic the growth of a network and reproduce its structural properties. On the other hand, many relevant questions arise when studying complex networks’ dynamics, such as learning how a large ensemble of dynamical systems that interact through a complex wiring topology can behave collectively. We review the major concepts and results recently achieved in the study of the structure and dynamics of complex networks, and summarize the relevant applications of these ideas in many different disciplines, ranging from nonlinear science to biology, from statistical mechanics to medicine and engineering.
*Orbital changes and climate*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://www.sciencedirect.com/science/article/pii/S0277379106002691[+https://www.sciencedirect.com/science/article/pii/S0277379106002691+]
At the 41,000-year period of orbital tilt, summer insolation forces a lagged response in northern ice sheets. This delayed ice signal is rapidly transferred to nearby northern oceans and landmasses by atmospheric dynamics. These ice-driven responses lead to late-phased changes in atmospheric CO2 that provide positive feedback to the ice sheets and also project ‘late’ 41-K forcing across the tropics and the Southern Hemisphere. Responses in austral regions are also influenced by a fast response to summer insolation forcing at high southern latitudes.
At the 22,000-year precession period, northern summer insolation again forces a lagged ice-sheet response, but with muted transfers to proximal regions and no subsequent effect on atmospheric CO2. Most 22,000-year greenhouse-gas responses have the ‘early’ phase of July insolation. July forcing of monsoonal and boreal wetlands explains the early CH4 response. The slightly later 22-K CO2 response originates in the southern hemisphere. The early 22-K CH4 and CO2 responses add to insolation forcing of the ice sheets.
The dominant 100,000-year response of ice sheets is not externally forced, nor does it result from internal resonance. Internal forcing appears to play at most a minor role. The origin of this signal lies mainly in internal feedbacks (CO2 and ice albedo) that drive the gradual build-up of large ice sheets and then their rapid destruction. Ice melting during terminations is initiated by uniquely coincident forcing from insolation and greenhouse gases at the periods of tilt and precession.
2007
----
*Heterogeneous Multiscale Methods: A Review* - W. E, B. Engquist et al
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
http://www.global-sci.com/intro/article_detail.html?journal=cicp&article_id=7911[+http://www.global-sci.com/intro/article_detail.html?journal=cicp&article_id=7911+]
This paper gives a systematic introduction to HMM, the heterogeneous multiscale methods, including the fundamental design principles behind the HMM philosophy and the main obstacles that have to be overcome when using HMM for a particular problem. This is illustrated by examples from several application areas, including complex fluids, micro-fluidics, solids, interface problems, stochastic problems, and statistically self-similar problems. Emphasis is given to the technical tools, such as the various constrained molecular dynamics, that have been developed, in order to apply HMM to these problems. Examples of mathematical results on the error analysis of HMM are presented. The review ends with a discussion on some of the problems that have to be solved in order to make HMM a more powerful tool.
*On the driving processes of the Atlantic meridional overturning circulation* - A. Griesel et al
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Because of its relevance for the global climate the Atlantic meridional overturning circulation (AMOC) has been a major research focus for many years. Yet the question of which physical mechanisms ultimately drive the AMOC, in the sense of providing its energy supply, remains a matter of controversy. Here we review both observational data and model results concerning the two main candidates: vertical mixing processes in the ocean's interior and wind‐induced Ekman upwelling in the Southern Ocean. In distinction to the energy source we also discuss the role of surface heat and freshwater fluxes, which influence the volume transport of the meridional overturning circulation and shape its spatial circulation pattern without actually supplying energy to the overturning itself in steady state. We conclude that both wind‐driven upwelling and vertical mixing are likely contributing to driving the observed circulation. To quantify their respective contributions, future research needs to address some open questions, which we outline.
*The study of Earth's magnetism (1269–1950): A foundation by Peregrinus and subsequent development of geomagnetism and paleomagnetism* - V. Courtillot
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2006RG000198[+https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2006RG000198+]
This paper summarizes the histories of geomagnetism and paleomagnetism (1269–1950). The role of Peregrinus is emphasized. In the sixteenth century a debate on local versus global departures of the field from that of an axial dipole pitted Gilbert against Le Nautonier. Regular measurements were undertaken in the seventeenth century. At the turn of the nineteenth century, de Lamanon, de Rossel, and von Humboldt discovered the decrease of intensity as one approaches the equator. Around 1850, three figures of rock magnetism were Fournet (remanent and induced magnetizations), Delesse (remagnetization in a direction opposite to the original), and Melloni (direction of lava magnetization acquired at time of cooling). Around 1900, Brunhes discovered magnetic reversals. In the 1920s, Chevallier produced the first magnetostratigraphy and hypothesized that poles had undergone enormous displacements. Matuyama showed that the Earth's field had reversed before the Pleistocene. Our review ends in the 1940s, when exponential development of geomagnetism and paleomagnetism starts.
*Neural network emulations for complex multidimensional geophysical mappings*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2006RG000200[+https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2006RG000200+]
A group of geophysical applications, which, from the mathematical point of view, can be formulated as complex, multidimensional, nonlinear mappings and which, in terms of the neural network (NN) technique, utilize a particular type of NN, the multilayer perceptron (MLP), is reviewed in this paper. This type of NN application covers the majority of NN applications developed in geosciences, such as satellite remote sensing, meteorology, oceanography, numerical weather prediction, and climate studies. The major properties of the mappings and MLP NNs are formulated and discussed. Three particular groups of NN applications are presented in this paper as illustrations: atmospheric and oceanic satellite remote sensing applications, NN emulations of model physics for developing atmospheric and oceanic hybrid numerical models, and NN emulations of the dependencies between model variables for application in data assimilation systems.
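To make the "NN emulation of a mapping" idea concrete, here is a deliberately small sketch. It is not code from the paper: the target mapping is a made-up stand-in for an expensive piece of model physics, and for brevity the output layer is fitted by least squares over random tanh features instead of full end-to-end MLP training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for an expensive model-physics mapping
def physics(x):
    return np.tanh(x) + 0.5 * x

x_train = np.linspace(-3.0, 3.0, 400)
y_train = physics(x_train)

# Random tanh hidden layer + least-squares output layer: a shortcut
# sketch of MLP-style emulation (the review's MLPs are trained end to end)
H = 32
W = rng.normal(0.0, 1.0, H)        # hidden weights (illustrative scales)
b = rng.uniform(-2.0, 2.0, H)      # hidden biases

def features(x):
    return np.tanh(np.outer(x, W) + b)      # (n_samples, H)

Phi = features(x_train)
coef, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)

def emulate(x):
    return features(x) @ coef

mse = float(np.mean((emulate(x_train) - y_train) ** 2))
```

Once fitted, `emulate` replaces calls to `physics` at a fraction of the cost, which is exactly the trade the hybrid-model applications above exploit.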
*Angular momentum in the global atmospheric circulation* - J. Egger et al
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2006RG000213[+https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2006RG000213+]
Angular momentum is a variable of central importance to the dynamics of the atmosphere both regionally and globally. Moreover, the angular momentum equations yield a precise description of the dynamic interaction of the atmosphere with the oceans and the solid Earth via various torques as exerted by friction, pressure against the mountains and the nonspherical shape of the Earth, and by gravity. This review presents recent work with respect to observations and the theory of atmospheric angular momentum of large‐scale motions. It is mainly the recent availability of consistent global data sets spanning decades that sparked renewed interest in angular momentum. In particular, relatively reliable estimates of the torques are now available. In addition, a fairly wide range of theoretical aspects of the role of angular momentum in atmospheric large‐scale dynamics is covered.
*Predictability: Recent insights from information theory* - T. DelSole
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2006RG000202[+https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2006RG000202+]
This paper summarizes a framework for investigating predictability based on information theory. This framework connects and unifies a wide variety of statistical methods traditionally used in predictability analysis, including linear regression, canonical correlation analysis, singular value decomposition, discriminant analysis, and data assimilation. Central to this framework is a procedure called predictable component analysis (PrCA). PrCA optimally decomposes variables by predictability, just as principal component analysis optimally decomposes variables by variance. For normal distributions the same predictable components are obtained whether one optimizes predictive information, the dispersion part of relative entropy, mutual information, Mahalanobis error, average signal to noise ratio, normalized mean square error, or anomaly correlation. For joint normal distributions, PrCA is equivalent to canonical correlation analysis between forecast and observations. The regression operator that maps observations to forecasts plays an important role in this framework, with the left singular vectors of this operator being the predictable components and the singular values being the canonical correlations. This correspondence between predictable components and singular vectors occurs only if the singular vectors are computed using Mahalanobis norms, a result that sheds light on the role of norms in predictability. In linear stochastic models the forcing that minimizes predictability is the one that renders the “whitened” dynamical operator normal. This condition for minimum predictability is invariant to linear transformation and is equivalent to detailed balance. The framework also inspires some new approaches to accounting for deficiencies of forecast models and estimating distributions from finite samples.
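The SVD-of-the-whitened-regression-operator picture above can be sketched numerically. The construction below is our own minimal version (function name, data shapes, and the toy test data are ours, not DelSole's): whiten predictor and predictand samples, take the SVD of the whitened cross-covariance, and read the singular values as estimated canonical correlations.

```python
import numpy as np

def predictable_components(X, Y):
    """Toy sketch for jointly (near-)normal samples X, Y of shape
    (n_samples, n_features): whiten both, then SVD the whitened
    cross-covariance. Singular values estimate canonical correlations;
    right singular vectors give predictable components in whitened-Y
    space. Names and conventions here are ours."""
    n = len(X)
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)

    def inv_sqrt_cov(A):
        C = A.T @ A / (n - 1)
        w, V = np.linalg.eigh(C)
        return V @ np.diag(w ** -0.5) @ V.T    # C^(-1/2)

    Wx, Wy = inv_sqrt_cov(Xc), inv_sqrt_cov(Yc)
    Cxy = Xc.T @ Yc / (n - 1)
    U, s, Vt = np.linalg.svd(Wx @ Cxy @ Wy)
    return s, Vt.T
```

The whitening step is where the Mahalanobis norms enter: without it, the singular vectors would not coincide with the predictable components.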
*Characterization of complex networks: A survey of measurements*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://www.tandfonline.com/doi/abs/10.1080/00018730601170527[+https://www.tandfonline.com/doi/abs/10.1080/00018730601170527+]
Each complex network (or class of networks) presents specific topological features which characterize its connectivity and highly influence the dynamics of processes executed on the network. The analysis, discrimination, and synthesis of complex networks therefore rely on the use of measurements capable of expressing the most relevant topological features. This article presents a survey of such measurements. It includes general considerations about complex network characterization, a brief review of the principal models, and the presentation of the main existing measurements. Important related issues covered in this work comprise the representation of the evolution of complex networks in terms of trajectories in several measurement spaces, the analysis of the correlations between some of the most traditional measurements, perturbation analysis, as well as the use of multivariate statistics for feature selection and network classification. Depending on the network and the analysis task one has in mind, a specific set of features may be chosen. It is hoped that the present survey will help the proper application and interpretation of measurements.
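A few of the most traditional measurements discussed above, degree, clustering coefficient, and average shortest-path length, can be computed directly from an adjacency structure. The helpers below are a minimal sketch of ours (dict-of-sets adjacency, unweighted undirected graph), not code from the survey.

```python
from collections import deque

def degrees(adj):
    """Node degrees; adj maps each node to a set of neighbours."""
    return {v: len(ns) for v, ns in adj.items()}

def clustering(adj, v):
    """Local clustering coefficient: fraction of neighbour pairs
    that are themselves linked."""
    ns = list(adj[v])
    k = len(ns)
    if k < 2:
        return 0.0
    links = sum(1 for a in ns for b in ns if a < b and b in adj[a])
    return 2 * links / (k * (k - 1))

def avg_path_length(adj):
    """Mean shortest-path length via BFS from every node
    (assumes a connected graph)."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs
```

On a triangle with a pendant node, for example, the triangle vertices have clustering 1 or 1/3 while the pendant has 0, which already illustrates how such measurements discriminate local wiring.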
*Atmospheric bridge, oceanic tunnel, and global climatic teleconnections*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2005RG000172[+https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2005RG000172+]
We review teleconnections within the atmosphere and ocean, their dynamics and their role in coupled climate variability. We concentrate on teleconnections in the latitudinal direction, notably tropical‐extratropical and interhemispheric interactions, and discuss the timescales of several teleconnection processes. The tropical impact on extratropical climate is accomplished mainly through the atmosphere. In particular, tropical Pacific sea surface temperature anomalies impact extratropical climate variability through stationary atmospheric waves and their interactions with midlatitude storm tracks. Changes in the extratropics can also impact the tropical climate through upper ocean subtropical cells at decadal and longer timescales. On the global scale the tropics and subtropics interact through the atmospheric Hadley circulation and the oceanic subtropical cell. The thermohaline circulation can provide an effective oceanic teleconnection for interhemispheric climate interactions.
2008
----
*Lectures on Probability, Entropy, and Statistical Physics* - Ariel Caticha
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://arxiv.org/abs/0808.0012[+https://arxiv.org/abs/0808.0012+]
These lectures deal with the problem of inductive inference, that is, the problem of reasoning under conditions of incomplete information. Is there a general method for handling uncertainty? Or, at least, are there rules that could in principle be followed by an ideally rational mind when discussing scientific matters? What makes one statement more plausible than another? How much more plausible? And then, when new information is acquired how do we change our minds? Or, to put it differently, are there rules for learning? Are there rules for processing information that are objective and consistent? Are they unique? And, come to think of it, what, after all, is information? It is clear that data contains or conveys information, but what does this precisely mean? Can information be conveyed in other ways? Is information physical? Can we measure amounts of information? Do we need to? Our goal is to develop the main tools for inductive inference--probability and entropy--from a thoroughly Bayesian point of view and to illustrate their use in physics with examples borrowed from the foundations of classical statistical physics.
*Geophysical and astrophysical fluid dynamics beyond the traditional approximation* - T. Gerkema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2006RG000220[+https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2006RG000220+]
In studies on geophysical fluid dynamics, it is common practice to take the Coriolis force only partially into account by neglecting the components proportional to the cosine of latitude, the so‐called traditional approximation (TA). This review deals with the consequences of abandoning the TA, based on evidence from numerical and theoretical studies and laboratory and field experiments. The phenomena most affected by the TA include mesoscale flows (Ekman spirals, deep convection, and equatorial jets) and internal waves. Abandoning the TA produces a tilt in convective plumes, produces a dependence on wind direction in Ekman spirals, and gives rise to a plethora of changes in internal wave behavior in weakly stratified layers, such as the existence of trapped short low‐frequency waves, and a poleward extension of their habitat. In the astrophysical context of stars and gas giant planets, the TA affects the rate of tidal dissipation and also the patterns of thermal convection.
*Synchronization in complex networks*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://www.sciencedirect.com/science/article/pii/S0370157308003384?via%3Dihub[+https://www.sciencedirect.com/science/article/pii/S0370157308003384?via%3Dihub+]
Synchronization processes in populations of locally interacting elements are the focus of intense research in physical, biological, chemical, technological and social systems. The many efforts devoted to understanding synchronization phenomena in natural systems now take advantage of the recent theory of complex networks. In this review, we report the advances in the comprehension of synchronization phenomena when oscillating elements are constrained to interact in a complex network topology. We also take an overview of the new emergent features coming out from the interplay between the structure and the function of the underlying patterns of connections. Extensive numerical work as well as analytical approaches to the problem are presented. Finally, we review several applications of synchronization in complex networks to different disciplines: biological systems and neuroscience, engineering and computer science, and economy and social sciences.
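A standard entry point to the phenomena reviewed here is the Kuramoto model placed on a graph. The sketch below is our own, with illustrative parameters: it integrates the phase equations dθ_i/dt = ω_i + K Σ_{j∈N(i)} sin(θ_j − θ_i) by explicit Euler and tracks the usual order parameter.

```python
import math

def kuramoto_step(theta, omega, adj, K, dt):
    """One explicit-Euler step of the Kuramoto model on a graph;
    adj maps node index -> list of neighbour indices."""
    new = []
    for i, th in enumerate(theta):
        coupling = sum(math.sin(theta[j] - th) for j in adj[i])
        new.append(th + dt * (omega[i] + K * coupling))
    return new

def order_parameter(theta):
    """|r| = 1 means full phase synchrony, near 0 means incoherence."""
    re = sum(math.cos(t) for t in theta) / len(theta)
    im = sum(math.sin(t) for t in theta) / len(theta)
    return math.hypot(re, im)
```

Running this on different topologies (all-to-all, scale-free, small-world) with heterogeneous frequencies is the basic numerical experiment behind much of the literature surveyed.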
*Classifications of Atmospheric Circulation Patterns*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://nyaspubs.onlinelibrary.wiley.com/doi/full/10.1196/annals.1446.019[+https://nyaspubs.onlinelibrary.wiley.com/doi/full/10.1196/annals.1446.019+]
We review recent advances in classifications of circulation patterns as a specific research area within synoptic climatology. The review starts with a general description of goals of classification and the historical development in the field. We put circulation classifications into a broader context within climatology and systematize the varied methodologies and approaches. We characterize three basic groups of classifications: subjective (also called manual), mixed (hybrid), and objective (computer‐assisted, automated). The roles of cluster analysis and principal component analysis in the classification process are clarified. Several recent methodological developments in circulation classifications are identified and briefly described: the introduction of nonlinear methods, objectivization of subjective catalogs, efforts to optimize classifications, the need for intercomparisons of classifications, and the progress toward an optimum, if possible unified, classification method. Among the recent tendencies in the applications of circulation classifications, we mention a more extensive use in climate studies of past, present, and future climates, innovative applications in ensemble forecasting, an increasing variety of synoptic–climatological investigations, and extensions above the troposphere. After introducing the international activity within the field of circulation classifications, the COST733 Action, we briefly describe outputs of the inventory of classifications in Europe, which was carried out within the Action. Approaches to the evaluation of classifications and their mutual comparisons are also reviewed. A considerable part of the review is devoted to three examples of applications of circulation classifications: in historical climatology, in analyses of recent climate variations, and in analyses of outputs from global climate models.
*Towards the Probabilistic Earth-System Model*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://arxiv.org/abs/0812.1074[+https://arxiv.org/abs/0812.1074+]
Multi-model ensembles provide a pragmatic approach to the representation of model uncertainty in climate prediction. However, such representations are inherently ad hoc, and, as shown, probability distributions of climate variables based on current-generation multi-model ensembles are not accurate. Results from seasonal re-forecast studies suggest that climate model ensembles based on stochastic-dynamic parametrisation are beginning to outperform multi-model ensembles, and have the potential to become significantly more skilful than multi-model ensembles.

The case is made for stochastic representations of model uncertainty in future-generation climate prediction models. Firstly, a guiding characteristic of the scientific method is an ability to characterise and predict uncertainty; individual climate models are not currently able to do this. Secondly, through the effects of noise-induced rectification, stochastic-dynamic parametrisation may provide a (poor man's) surrogate to high resolution. Thirdly, stochastic-dynamic parametrisations may be able to take advantage of the inherent stochasticity of electron flow through certain types of low-energy computer chips, currently under development.
These arguments have particular resonance for next-generation Earth-System models, which purport to be comprehensive numerical representations of climate, and where integrations at high resolution may be unaffordable.
2009
----
*Introduction to Bioinformatics* - Sabu M. Thampi
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://arxiv.org/abs/0911.4230[+https://arxiv.org/abs/0911.4230+]
Bioinformatics is a new discipline that addresses the need to manage and interpret the data massively generated by genomic research over the past decade. This discipline represents the convergence of genomics, biotechnology and information technology, and encompasses analysis and interpretation of data, modeling of biological phenomena, and development of algorithms and statistics. This article presents an introduction to bioinformatics.
*Fast Numerical Methods for Stochastic Computations: A Review* - D. Xiu
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
http://www.global-sci.com/intro/article_detail.html?journal=cicp&article_id=7732[+http://www.global-sci.com/intro/article_detail.html?journal=cicp&article_id=7732+]
This paper presents a review of the current state-of-the-art of numerical methods for stochastic computations. The focus is on efficient high-order methods suitable for practical applications, with a particular emphasis on those based on generalized polynomial chaos (gPC) methodology. The framework of gPC is reviewed, along with its Galerkin and collocation approaches for solving stochastic equations. Properties of these methods are summarized by using results from literature. This paper also attempts to present the gPC based methods in a unified framework based on an extension of the classical spectral methods into multi-dimensional random spaces.
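As a minimal illustration of the non-intrusive (collocation) side of this framework, moments of a smooth function of a standard normal input can be formed by Gauss-Hermite quadrature. The setup and names below are ours, not the paper's.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def collocation_moments(u, n_nodes=20):
    """Stochastic collocation sketch for a scalar output u(Z) with
    Z ~ N(0, 1): evaluate u at probabilists' Gauss-Hermite nodes and
    form mean and variance by quadrature. Toy setup, names ours."""
    z, w = hermegauss(n_nodes)   # nodes/weights for weight exp(-z^2 / 2)
    w = w / w.sum()              # normalise to a probability measure
    vals = u(z)
    mean = np.dot(w, vals)
    var = np.dot(w, (vals - mean) ** 2)
    return mean, var
```

For u(z) = e^z the exact answers are E[u] = e^(1/2) and Var[u] = e^2 − e, so the quadrature result can be checked directly; the Galerkin route instead projects the governing equations onto the gPC basis.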
*The “chessboard” classification scheme of mineral deposits: Mineralogy and geology from aluminum to zirconium*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://www.sciencedirect.com/science/article/pii/S0012825209001688[+https://www.sciencedirect.com/science/article/pii/S0012825209001688+]
Economic geology is a mixtum compositum of all geoscientific disciplines focused on one goal, finding new mineral deposits and enhancing their exploitation. The keystones of this mixtum compositum are geology and mineralogy whose studies are centered around the emplacement of the ore body and the development of its minerals and rocks. In the present study, mineralogy and geology act as x- and y-coordinates of a classification chart of mineral resources called the “chessboard” (or “spreadsheet”) classification scheme. Magmatic and sedimentary lithologies together with tectonic structures (1-D/pipes, 2-D/veins) are plotted along the x-axis in the header of the spreadsheet diagram, representing the columns in this chart diagram. 63 commodity groups, encompassing minerals and elements, are plotted along the y-axis, forming the lines of the spreadsheet. These commodities are subjected to a tripartite subdivision into ore minerals, industrial minerals/rocks and gemstones/ornamental stones.
Further information on the various types of mineral deposits, as to the major ore and gangue minerals, the current models and the mode of formation or when and in which geodynamic setting these deposits mainly formed throughout the geological past may be obtained from the text by simply using the code of each deposit in the chart. This code can be created by combining the commodity (lines) shown by numbers plus lower caps with the host rocks or structure (columns) given by capital letters.
Each commodity has a small preface on the mineralogy and chemistry and ends up with an outlook into its final use and the supply situation of the raw material on a global basis, which may be updated by the user through a direct link to databases available on the internet. In this case the study has been linked to the commodity database of the US Geological Survey. The internal subdivision of each commodity section corresponds to the common host rock lithologies (magmatic, sedimentary, and metamorphic) and structures. Cross sections and images illustrate the common ore types of each commodity. Ore takes priority over the mineral. The minerals and host rocks are listed by their chemical and mineralogical compositions, respectively, separated from the text but supplemented with cross-references to the columns and lines, where they prevalently occur.
*Statistical mechanics of money, wealth, and income*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.81.1703[+https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.81.1703+]
This Colloquium reviews statistical models for money, wealth, and income distributions developed in the econophysics literature since the late 1990s. By analogy with the Boltzmann-Gibbs distribution of energy in physics, it is shown that the probability distribution of money is exponential for certain classes of models with interacting economic agents. Alternative scenarios are also reviewed. Data analysis of the empirical distributions of wealth and income reveals a two-class distribution. The majority of the population belongs to the lower class, characterized by the exponential (“thermal”) distribution, whereas a small fraction of the population in the upper class is characterized by the power-law (“superthermal”) distribution. The lower part is very stable, stationary in time, whereas the upper part is highly dynamical and out of equilibrium.
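The exponential ("thermal") bulk can be reproduced with a few lines of simulation. The rule below, random re-splitting of a meeting pair's combined money, is one simple conserved-exchange model of the kind the Colloquium reviews; the function name and parameters are illustrative choices of ours.

```python
import random

def simulate_money(n_agents=500, total_money=500.0, n_steps=200_000, seed=0):
    """Kinetic exchange toy model: at each step two random agents meet
    and re-split their combined money uniformly at random. Total money
    is conserved, by analogy with energy in Boltzmann-Gibbs statistics."""
    rng = random.Random(seed)
    money = [total_money / n_agents] * n_agents
    for _ in range(n_steps):
        i = rng.randrange(n_agents)
        j = rng.randrange(n_agents)
        if i == j:
            continue
        pot = money[i] + money[j]
        money[i] = rng.random() * pot   # random share to agent i
        money[j] = pot - money[i]       # remainder to agent j
    return money
```

Starting from perfect equality, the stationary state is right-skewed with an exponential bulk: roughly 1 − e⁻¹ ≈ 63% of agents end up below the mean, the "majority in the lower class" feature described above.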
2010
----
*The origin and early radiation of dinosaurs*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://www.sciencedirect.com/science/article/pii/S0012825210000401[+https://www.sciencedirect.com/science/article/pii/S0012825210000401+]
Dinosaurs were remarkably successful during the Mesozoic and one subgroup, birds, remains an important component of modern ecosystems. Although the extinction of non-avian dinosaurs at the end of the Cretaceous has been the subject of intense debate, comparatively little attention has been given to the origin and early evolution of dinosaurs during the Late Triassic and Early Jurassic, one of the most important evolutionary radiations in earth history. Our understanding of this keystone event has dramatically changed over the past 25 years, thanks to an influx of new fossil discoveries, reinterpretations of long-ignored specimens, and quantitative macroevolutionary analyses that synthesize anatomical and geological data. Here we provide an overview of the first 50 million years of dinosaur history, with a focus on the large-scale patterns that characterize the ascent of dinosaurs from a small, almost marginal group of reptiles in the Late Triassic to the preeminent terrestrial vertebrates of the Jurassic and Cretaceous.
We provide both a biological and geological background for early dinosaur history. Dinosaurs are deeply nested among the archosaurian reptiles, diagnosed by only a small number of characters, and are subdivided into a number of major lineages. The first unequivocal dinosaurs are known from the late Carnian of South America, but the presence of their sister group in the Middle Triassic implies that dinosaurs possibly originated much earlier. The three major dinosaur lineages, theropods, sauropodomorphs, and ornithischians, are all known from the Triassic, when continents were joined into the supercontinent Pangaea and global climates were hot and arid. Although many researchers have long suggested that dinosaurs outcompeted other reptile groups during the Triassic, we argue that the ascent of dinosaurs was more of a matter of contingency and opportunism. Dinosaurs were overshadowed in most Late Triassic ecosystems by crocodile-line archosaurs and showed no signs of outcompeting their rivals. Instead, the rise of dinosaurs was a two-stage process, as dinosaurs expanded in taxonomic diversity, morphological disparity, and absolute faunal abundance only after the extinction of most crocodile-line reptiles and other groups.
*The principles of cryostratigraphy*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://www.sciencedirect.com/science/article/pii/S0012825210000413[+https://www.sciencedirect.com/science/article/pii/S0012825210000413+]
Cryostratigraphy adopts concepts from both Russian geocryology and modern sedimentology. Structures formed by the amount and distribution of ice within sediment and rock are termed cryostructures. Typically, layered cryostructures are indicative of syngenetic permafrost while reticulate and irregular cryostructures are indicative of epigenetic permafrost. ‘Cryofacies’ can be defined according to patterns of sediment characterized by distinct ice lenses and layers, volumetric ice content and ice-crystal size. Cryofacies can be subdivided according to cryostructure. Where a number of cryofacies form a distinctive cryostratigraphic unit, these are termed a ‘cryofacies assemblage’. The recognition, if present, of (i) thaw unconformities, (ii) other ice bodies such as vein ice (ice wedges), aggradational ice and thermokarst-cave (‘pool’) ice, and (iii) ice, sand and gravelly pseudomorphs is also important in determining the nature of the freezing process, the conditions under which frozen sediment accumulates, and the history of permafrost.
*The largest volcanic eruptions on Earth*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://www.sciencedirect.com/science/article/pii/S0012825210000814[+https://www.sciencedirect.com/science/article/pii/S0012825210000814+]
Large igneous provinces (LIPs) are sites of the most frequently recurring, largest volume basaltic and silicic eruptions in Earth history. These large-volume (> 1000 km3 dense rock equivalent) and large-magnitude (> M8) eruptions produce areally extensive (104–105 km2) basaltic lava flow fields and silicic ignimbrites that are the main building blocks of LIPs. Available information on the largest eruptive units is primarily from the Columbia River and Deccan provinces for the dimensions of flood basalt eruptions, and the Paraná–Etendeka and Afro-Arabian provinces for the silicic ignimbrite eruptions. In addition, three large-volume (675–2000 km3) silicic lava flows have also been mapped out in the Proterozoic Gawler Range province (Australia), an interpreted LIP remnant. Magma volumes of > 1000 km3 have also been emplaced as high-level basaltic and rhyolitic sills in LIPs.
The data sets indicate comparable eruption magnitudes between the basaltic and silicic eruptions, but due to considerable volumes residing as co-ignimbrite ash deposits, the current volume constraints for the silicic ignimbrite eruptions may be considerably underestimated. Magma composition thus appears to be no barrier to the volume of magma emitted during an individual eruption. Despite this general similarity in magnitude, flood basaltic and silicic eruptions are very different in terms of eruption style, duration, intensity, vent configuration, and emplacement style. Flood basaltic eruptions are dominantly effusive and Hawaiian–Strombolian in style, with magma discharge rates of ~ 106–108 kg s−1 and eruption durations estimated at years to tens of years that emplace dominantly compound pahoehoe lava flow fields. Effusive and fissural eruptions have also emplaced some large-volume silicic lavas, but discharge rates are unknown, and may be up to an order of magnitude greater than those of flood basalt lava eruptions for emplacement to be on realistic time scales (< 10 years).
Most silicic eruptions, however, are moderately to highly explosive, producing co-current pyroclastic fountains (rarely Plinian) with discharge rates of 109–1011 kg s−1 that emplace welded to rheomorphic ignimbrites. At present, durations for the large-magnitude silicic eruptions are unconstrained; at discharge rates of 109 kg s−1, equivalent to the peak of the 1991 Mt Pinatubo eruption, the largest silicic eruptions would take many months to evacuate > 5000 km3 of magma. The generally simple deposit structure is more suggestive of short-duration (hours to days) and high intensity (~ 1011 kg s−1) eruptions, perhaps with hiatuses in some cases. These extreme discharge rates would be facilitated by multiple point, fissure and/or ring fracture venting of magma. Eruption frequencies are much elevated for large-magnitude eruptions of both magma types during LIP-forming episodes.
However, in basalt-dominated provinces (continental and ocean basin flood basalt provinces, oceanic plateaus, volcanic rifted margins), large-magnitude (> M8) basaltic eruptions have much shorter recurrence intervals of 10³–10⁴ years, whereas similar-magnitude silicic eruptions may have recurrence intervals of up to 10⁵ years. The Paraná–Etendeka province was the site of at least nine > M8 silicic eruptions over an ~ 1 Myr period at ~ 132 Ma; a similar eruption frequency, although with fewer silicic eruptions, is also observed for the Afro-Arabian Province. The huge volumes of basaltic and silicic magma erupted in quick succession during LIP events raise several unresolved issues in terms of the locus of magma generation and storage (if any) in the crust prior to eruption, and the paths and rates of ascent from magma reservoirs to the surface.
Available data indicate four end-member magma petrogenetic pathways in LIPs: 1) flood basalt magmas with primitive, mantle-dominated geochemical signatures (often high-Ti basalt magma types) that were either transferred directly from melting regions in the upper mantle to fissure vents at the surface, or resided temporarily in reservoirs in the upper mantle or in mafic underplate, thereby preventing extensive crustal contamination or crystallisation; 2) flood basalt magmas (often low-Ti types) that have undergone storage at lower ± upper crustal depths resulting in crustal assimilation, crystallisation, and degassing; 3) generation of high-temperature anhydrous, crystal-poor silicic magmas (e.g., Paraná–Etendeka quartz latites) by large-scale AFC processes involving lower crustal granulite melting and/or basaltic underplate remelting; and 4) rejuvenation of upper-crustal batholiths (mainly near-solidus crystal mush) by shallow intrusion and underplating by mafic magma, providing thermal and volatile input to produce large volumes of crystal-rich (30–50%) dacitic to rhyolitic magma and, for ignimbrite-producing eruptions, well-defined calderas up to 80 km in diameter (e.g., the Fish Canyon Tuff model); this pathway characterises some silicic eruptions in silicic LIPs.
*Complex networks as a unified framework for descriptive analysis and predictive modeling in climate science*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://onlinelibrary.wiley.com/doi/full/10.1002/sam.10100[+https://onlinelibrary.wiley.com/doi/full/10.1002/sam.10100+]
The analysis of climate data has relied heavily on hypothesis‐driven statistical methods, while projections of future climate are based primarily on physics‐based computational models. However, in recent years a wealth of new datasets has become available. Therefore, we take a more data‐centric approach and propose a unified framework for studying climate, with an aim toward characterizing observed phenomena as well as discovering new knowledge in climate science. Specifically, we posit that complex networks are well suited for both descriptive analysis and predictive modeling tasks. We show that the structural properties of ‘climate networks’ have useful interpretation within the domain. Further, we extract clusters from these networks and demonstrate their predictive power as climate indices. Our experimental results establish that the network clusters are statistically significantly better predictors than clusters derived using a more traditional clustering approach. Using complex networks as data representation thus enables the unique opportunity for descriptive and predictive modeling to inform each other.
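The construction the abstract describes (grid points as nodes, strong statistical association as edges, network clusters as candidate climate indices) can be sketched in a few lines. Everything below, the synthetic series, the 0.5 correlation threshold, and connected components standing in for "clusters", is an illustrative assumption, not the paper's actual pipeline:

```python
import numpy as np

# Minimal 'climate network' sketch: nodes are grid points, edges link points
# whose anomaly series correlate strongly, clusters become candidate indices.
rng = np.random.default_rng(0)
t = np.arange(240)                       # 20 years of monthly steps
signal_a = np.sin(2 * np.pi * t / 48)    # one low-frequency climate mode
signal_b = np.cos(2 * np.pi * t / 60)    # a second, independent mode

# 6 grid points: three follow mode A, three follow mode B (plus noise)
series = np.array(
    [signal_a + 0.3 * rng.standard_normal(t.size) for _ in range(3)] +
    [signal_b + 0.3 * rng.standard_normal(t.size) for _ in range(3)]
)

corr = np.corrcoef(series)               # pairwise Pearson correlation
adj = (np.abs(corr) > 0.5) & ~np.eye(len(series), dtype=bool)

def components(adj):
    """Connected components of an undirected adjacency matrix (DFS)."""
    n, seen, comps = len(adj), set(), []
    for s in range(n):
        if s in seen:
            continue
        comp, stack = set(), [s]
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(np.flatnonzero(adj[v]))
        seen |= comp
        comps.append(sorted(map(int, comp)))
    return comps

clusters = components(adj)
print(clusters)  # the two dynamical modes are recovered as two clusters
# A cluster's mean series can then serve as a predictive climate index:
index_a = series[clusters[0]].mean(axis=0)
```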
*Track and vertex reconstruction: From classical to adaptive methods*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.82.1419[+https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.82.1419+]
This paper reviews classical and adaptive methods of track and vertex reconstruction in particle physics experiments. Adaptive methods have been developed to meet the experimental challenges at high-energy colliders, in particular, the CERN Large Hadron Collider. They can be characterized by the obliteration of the traditional boundaries between pattern recognition and statistical estimation, by the competition between different hypotheses about what constitutes a track or a vertex, and by a high level of flexibility and robustness achieved with a minimum of assumptions about the data. The theoretical background of some of the adaptive methods is described, and it is shown that there is a close connection between the two main branches of adaptive methods: neural networks and deformable templates, on the one hand, and robust stochastic filters with annealing, on the other hand. As both classical and adaptive methods of track and vertex reconstruction presuppose precise knowledge of the positions of the sensitive detector elements, the paper includes an overview of detector alignment methods and a survey of the alignment strategies employed by past and current experiments.
2011
----
*Self-oscillation* - Alejandro Jenkins
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://arxiv.org/abs/1109.6640[+https://arxiv.org/abs/1109.6640+]
Physicists are very familiar with forced and parametric resonance, but usually not with self-oscillation, a property of certain dynamical systems that gives rise to a great variety of vibrations, both useful and destructive. In a self-oscillator, the driving force is controlled by the oscillation itself so that it acts in phase with the velocity, causing a negative damping that feeds energy into the vibration: no external rate needs to be adjusted to the resonant frequency. The famous collapse of the Tacoma Narrows bridge in 1940, often attributed by introductory physics texts to forced resonance, was actually a self-oscillation, as was the swaying of the London Millennium Footbridge in 2000. Clocks are self-oscillators, as are bowed and wind musical instruments. The heart is a "relaxation oscillator," i.e., a non-sinusoidal self-oscillator whose period is determined by sudden, nonlinear switching at thresholds. We review the general criterion that determines whether a linear system can self-oscillate. We then describe the limiting cycles of the simplest nonlinear self-oscillators, as well as the ability of two or more coupled self-oscillators to become spontaneously synchronized ("entrained"). We characterize the operation of motors as self-oscillation and prove a theorem about their limit efficiency, of which Carnot's theorem for heat engines appears as a special case. We briefly discuss how self-oscillation applies to servomechanisms, Cepheid variable stars, lasers, and the macroeconomic business cycle, among other applications. Our emphasis throughout is on the energetics of self-oscillation, often neglected by the literature on nonlinear dynamical systems.
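The negative-damping mechanism described here is captured by the textbook self-oscillator, the van der Pol equation x'' - mu*(1 - x^2)*x' + x = 0; a minimal numerical sketch (parameters and step size chosen only for illustration) showing that small and large initial disturbances converge onto the same limit cycle:

```python
# Van der Pol oscillator: for small |x| the damping term is negative (it
# feeds energy into the vibration, as in a self-oscillator); for large |x|
# it is positive, so any disturbance settles onto a limit cycle of
# amplitude ~2 regardless of initial conditions.
def van_der_pol_amplitude(x0, mu=0.5, dt=0.001, steps=200_000):
    x, v = x0, 0.0
    peak = 0.0
    for i in range(steps):                 # semi-implicit Euler integration
        a = mu * (1 - x * x) * v - x
        v += a * dt
        x += v * dt
        if i > steps // 2:                 # record amplitude after transients
            peak = max(peak, abs(x))
    return peak

amp_small = van_der_pol_amplitude(0.01)    # tiny disturbance grows...
amp_large = van_der_pol_amplitude(5.0)     # ...large one decays...
print(round(amp_small, 1), round(amp_large, 1))  # ...to the same amplitude
```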
*Supercontinents, mantle dynamics and plate tectonics: A perspective based on conceptual vs. numerical models*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://www.sciencedirect.com/science/article/pii/S0012825210001753[+https://www.sciencedirect.com/science/article/pii/S0012825210001753+]
The periodic assembly and dispersal of supercontinents through the history of the Earth had considerable impact on mantle dynamics and surface processes. Here we synthesize some of the conceptual models on supercontinent amalgamation and disruption and combine them with recent information from numerical studies to provide a unified approach to understanding the Wilson cycle and the supercontinent cycle. Plate tectonic models predict that superdownwelling along multiple subduction zones might provide an effective mechanism to pull together dispersed continental fragments into a closely packed assembly. The recycled subducted material that accumulates at the mantle transition zone and sinks down to the core–mantle boundary (CMB) provides the potential fuel for the generation of plumes and superplumes which ultimately fragment the supercontinent. Geological evidence related to the disruption of two major supercontinents (Columbia and Gondwana) attests to the involvement of plumes.
The re-assembly of dispersed continental fragments after the breakup of a supercontinent occurs through complex processes involving ‘introversion’, ‘extroversion’ or a combination of both, with the closure of the intervening ocean occurring through Pacific-type or Atlantic-type processes. The timescales of the assembly and dispersal of supercontinents have varied through Earth history, and appear to be closely linked with the processes and duration of superplume genesis. The widely held view that the volume of continental crust has increased over time has been challenged in recent works, and current models propose that plate tectonics creates and destroys Earth's continental crust, with more crust being destroyed than created.
The creation–destruction balance changes over a supercontinent cycle, with higher crustal growth through magmatic influx during supercontinent break-up, as compared to the tectonic erosion and sediment-trapped subduction at convergent margins associated with supercontinent assembly, which erode the continental crust. Ongoing subduction erosion also occurs at the leading edges of dispersing plates, which contributes further to crustal destruction, although this is only a temporary process. Previous numerical studies of mantle convection have suggested a significant feedback between mantle convection and continental drift. The assembly of supercontinents induces a temperature increase beneath the supercontinent due to the thermal insulating effect. Such thermal insulation leads to a planetary-scale reorganization of mantle flow and results in the longest-wavelength thermal heterogeneity in the mantle, i.e., degree-one convection in three-dimensional spherical geometry.
The formation of degree-one convection seems to be integral to the emergence of periodic supercontinent cycles. The rifting and breakup of supercontinental assemblies may be caused by either tensional stress due to the thermal insulating effect, or large-scale partial melting resulting from the flow reorganization and consequent temperature increase beneath the supercontinent. Supercontinent breakup has also been correlated with the temperature increase due to upwelling plumes originating from the deeper lower mantle or CMB as a return flow of plate subduction occurring at supercontinental margins. Active mantle plumes from the CMB may disrupt the regularity of supercontinent cycles. Two end-member scenarios can be envisaged for the mantle convection cycle. One is that mantle convection with dispersing continental blocks has a short-wavelength structure, or a structure close to degree-two as in the present Earth, and when a supercontinent forms, mantle convection evolves into a degree-one structure. The other is that mantle convection with dispersing continental blocks has a degree-one structure, and when a supercontinent forms, mantle convection evolves into a degree-two structure. In the case of the former model, it would take a longer time to form a supercontinent, because continental blocks would be trapped by different downwellings, thus inhibiting collision.
Although most numerical studies have assumed the continent/supercontinent to be a rigid or nondeformable body, mainly because of numerical limitations as well as for model simplification, a more recent numerical study allows the modeling of mobile, deformable continents, including oceanic plates, and successfully reproduces continental drift similar to the processes and timescales envisaged in the Wilson cycle.
*Relative sea-level fall since the last interglacial stage: Are coasts uplifting worldwide?*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://www.sciencedirect.com/science/article/pii/S0012825211000705[+https://www.sciencedirect.com/science/article/pii/S0012825211000705+]
The growing interest in quantification of vertical ground motion stems from the need to understand in detail how the Earth's crust behaves, for both scientific and social reasons. However, only recently has the refinement of dating techniques made possible the use of paleoshorelines as reliable tools for tectonic studies. Although there are many local studies of Quaternary vertical motions of coastlines, we know of no comprehensive worldwide synthesis. Here we provide a compilation of 890 records of paleoshoreline sequences, with particular emphasis on the last interglacial stage (Marine Isotopic Stage [MIS] 5e, ~ 122 ka). The quality of dating MIS 5e makes it a reliable marker to evaluate vertical ground motion rates during the late Quaternary on a global scale.
The results show that most coastal segments have risen relative to sea level, with a mean uplift rate higher than 0.2 mm/yr, i.e. more than four times faster than the estimated eustatic drop in sea level. The results also reveal that the uplift rate is faster on average for active margins than for passive margins. Neither dynamic topography nor glacio-hydro-isostasy can explain sustained uplift of all continental margins, as revealed by the wide distribution of uplifted sequences of paleoshorelines. Instead, we suggest that only plate-tectonic processes reconcile all observations of Quaternary coastal uplift. We propose that long-term continental accretion has led to compression of continental plates and uplift of their margins. This study therefore concludes that plate-tectonic processes affect all margins and emphasizes that the notion of a stable platform is unrealistic. These results seriously challenge the evaluation of past sea levels from the fossil shoreline record.
*Permafrost*
~~~~~~~~~~~~
https://www.sciencedirect.com/science/article/pii/S0012825211000894[+https://www.sciencedirect.com/science/article/pii/S0012825211000894+]
Since its introduction, the definition of permafrost has rarely been discussed or reviewed. Recent decades have brought a series of significant, often interdisciplinary works on the periglacial zone and permafrost, as well as their relation to other components of the environment, especially glaciers. They show that, despite its unequivocal definition, the term has lost its sharpness and explicitness with regard to some aspects of research. The article presents the current state of understanding of the permafrost phenomenon, addressing the use of the term permafrost (which denotes a physical state, not a material thing), the processes it undergoes (exclusively aggradation and degradation), and the possibility of its occurrence in the glacial and periglacial environments of geographical space, where it covers over a quarter of the land area on Earth.
*Intermittent search strategies*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.83.81[+https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.83.81+]
This review examines intermittent target search strategies, which combine phases of slow motion, allowing the searcher to detect the target, and phases of fast motion during which targets cannot be detected. It is first shown that intermittent search strategies are actually widely observed at various scales. At the macroscopic scale, this is, for example, the case of animals looking for food; at the microscopic scale, intermittent transport patterns are involved in a reaction pathway of DNA-binding proteins as well as in intracellular transport. Second, generic stochastic models are introduced, which show that intermittent strategies are efficient strategies that enable the minimization of search time. This suggests that the intrinsic efficiency of intermittent search strategies could justify their frequent observation in nature. Last, beyond these modeling aspects, it is proposed that intermittent strategies could also be used in a broader context to design and accelerate search processes.
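The core claim, that alternating detection-capable slow phases with blind fast relocations can minimize the mean search time, can be illustrated with a toy simulation. The ring size, phase lengths and jump length below are arbitrary choices for illustration, not the models of the review:

```python
import random

# Toy intermittent search: a searcher on a ring of N sites looks for one
# hidden target. In the slow phase it random-walks one site per step and
# can detect the target; in the fast phase it leaps many sites but is blind.
def search_time(n_sites, slow_steps, fast_steps, jump, rng):
    target = rng.randrange(n_sites)
    pos, t = 0, 0
    while True:
        for _ in range(slow_steps):          # slow, detection-capable phase
            if pos == target:
                return t
            pos = (pos + rng.choice((-1, 1))) % n_sites
            t += 1
        for _ in range(fast_steps):          # fast, blind relocation phase
            pos = (pos + jump) % n_sites
            t += 1

def mean_time(slow_steps, fast_steps, jump, trials=300, n_sites=200, seed=1):
    rng = random.Random(seed)
    return sum(search_time(n_sites, slow_steps, fast_steps, jump, rng)
               for _ in range(trials)) / trials

pure_slow = mean_time(slow_steps=10**9, fast_steps=0, jump=0)
intermittent = mean_time(slow_steps=10, fast_steps=1, jump=37)
print(intermittent < pure_slow)  # True: alternating phases find it faster
```

The diffusive searcher oversamples the same sites; the blind relocations reset it to fresh territory, which is exactly the efficiency gain the stochastic models quantify.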
*Comparison of astrophysical and terrestrial frequency standards*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.83.1[+https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.83.1+]
The stability of pulse arrival times from pulsars and white dwarfs has been reanalyzed using several analysis tools for measuring the noise characteristics of sampled time and frequency data. The best terrestrial artificial clocks are shown to substantially exceed the performance of astronomical sources as timekeepers in terms of accuracy (as defined by cesium primary frequency standards) and stability. This superiority in stability can be directly demonstrated over time periods of up to 2 years, for which high-quality data exist for both. Beyond 2 years there is a deficiency of data for clock-to-clock comparisons, and terrestrial and astronomical clocks show equal performance, being equally limited by the quality of the reference time scales used to make the comparisons. Nonetheless, the detailed accuracy evaluations of modern terrestrial clocks imply that these new clocks are likely to have a stability better than any astronomical source up to comparison times of at least hundreds of years. This article is intended to provide a correct appreciation of the relative merits of natural and artificial clocks. The use of natural clocks as tests of physics under the most extreme conditions is entirely appropriate; however, the contention that these natural clocks, particularly white dwarfs, can compete as timekeepers against devices constructed by mankind is shown to be doubtful.
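Clock-stability comparisons of this kind rest on tools such as the Allan deviation. A minimal sketch on synthetic white frequency noise (an illustrative noise model, not the paper's data), showing the characteristic improvement proportional to 1/sqrt(tau) with averaging time:

```python
import numpy as np

# Non-overlapping Allan deviation: sigma_y(tau)^2 = <(ybar_{k+1}-ybar_k)^2>/2,
# where ybar_k are averages of fractional-frequency data over intervals tau.
# For white frequency noise it falls as tau**-0.5, the behaviour used when
# ranking clock stabilities.
def allan_deviation(y, m):
    """Allan deviation at averaging factor m (tau = m * tau0)."""
    n = len(y) // m
    ybar = y[:n * m].reshape(n, m).mean(axis=1)
    return np.sqrt(0.5 * np.mean(np.diff(ybar) ** 2))

rng = np.random.default_rng(42)
y = rng.standard_normal(100_000)        # white frequency noise, sigma = 1

adev_1 = allan_deviation(y, 1)
adev_100 = allan_deviation(y, 100)
print(round(adev_1, 2))                 # ~1.0 at tau0
print(round(adev_100 / adev_1, 2))      # ~0.1: the 1/sqrt(100) law
```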
*Bayesian inference in physics*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.83.943[+https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.83.943+]
Bayesian inference provides a consistent method for the extraction of information from physics experiments even in ill-conditioned circumstances. The approach provides a unified rationale for data analysis, which both justifies many of the commonly used analysis procedures and reveals some of the implicit underlying assumptions. This review summarizes the general ideas of Bayesian probability theory with emphasis on its application to the evaluation of experimental data. As case studies for Bayesian parameter estimation techniques, examples ranging from extra-solar planet detection to the deconvolution of apparatus functions for improving energy resolution, and change-point estimation in time series, are discussed. Special attention is paid to the numerical techniques suited for Bayesian analysis, with a focus on recent developments of Markov chain Monte Carlo algorithms for high-dimensional integration problems. Bayesian model comparison, the quantitative ranking of models for the explanation of a given data set, is illustrated with examples collected from cosmology, mass spectroscopy, and surface physics, covering problems such as background subtraction and automated outlier detection. Additionally, Bayesian inference techniques for the design and optimization of future experiments are introduced. Experiments, instead of being merely passive recording devices, can now be designed to adapt to measured data and to change the measurement strategy on the fly to maximize the information gained from an experiment. The key concepts and numerical tools that provide the means of designing such inference chains, and the crucial aspects of data fusion, are summarized, and some of the expected implications are highlighted.
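A minimal example of the Metropolis flavour of MCMC the review emphasizes, applied to a toy parameter-estimation problem (Gaussian data with known sigma and a flat prior; all numbers are synthetic and illustrative):

```python
import math
import random

# Infer the mean mu of Gaussian data with known sigma under a flat prior,
# so the posterior is N(sample mean, sigma^2/n). Metropolis sampling then
# reproduces the analytic answer.
random.seed(0)
sigma = 2.0
data = [random.gauss(5.0, sigma) for _ in range(200)]   # true mu = 5
n, xbar = len(data), sum(data) / len(data)

def log_posterior(mu):
    # Gaussian likelihood reduces to sufficient statistics (n, xbar)
    return -n * (mu - xbar) ** 2 / (2 * sigma ** 2)

samples, mu = [], 0.0
for step in range(20_000):
    prop = mu + random.gauss(0.0, 0.5)                  # symmetric proposal
    if math.log(random.random()) < log_posterior(prop) - log_posterior(mu):
        mu = prop                                       # Metropolis accept
    if step >= 5_000:                                   # discard burn-in
        samples.append(mu)

post_mean = sum(samples) / len(samples)
print(round(post_mean, 2), round(xbar, 2))  # posterior mean tracks the data
```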
*Physical basis of radiation protection in space travel*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.83.1245[+https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.83.1245+]
The health risks of space radiation are arguably the most serious challenge to space exploration, possibly preventing these missions due to safety concerns or increasing their costs to amounts beyond what would be acceptable. Radiation in space is substantially different from that on Earth: high-energy (E) and charge (Z) particles (HZE) provide the main contribution to the equivalent dose in deep space, whereas γ rays and low-energy α particles are major contributors on Earth. This difference causes a high uncertainty on the estimated radiation health risk (including cancer and noncancer effects), and makes protection extremely difficult. In fact, shielding is very difficult in space: the very high energy of the cosmic rays and the severe mass constraints in spaceflight represent a serious hindrance to effective shielding. Here the physical basis of space radiation protection is described, including the most recent achievements in space radiation transport codes and shielding approaches. Although deterministic and Monte Carlo transport codes can now describe well the interaction of cosmic rays with matter, more accurate double-differential nuclear cross sections are needed to improve the codes. Models of energy deposition in biological molecules and related effects should also be developed to achieve accurate risk models for long-term exploratory missions. Passive shielding can be effective for solar particle events; however, it is limited for galactic cosmic rays (GCR). Active shielding would have to overcome challenging technical hurdles to protect against GCR. Thus, improved risk assessment and genetic and biomedical approaches are a more likely solution to GCR radiation protection issues.
2012
----
*Tsunami hazard and exposure on the global scale*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://www.sciencedirect.com/science/article/pii/S0012825211001619[+https://www.sciencedirect.com/science/article/pii/S0012825211001619+]
In the aftermath of the 2004 Indian Ocean tsunami, a large increase in tsunami hazard and risk mapping activity has been observed. Most of these are site-specific studies with detailed modelling of the run-up locally. However, fewer studies exist on the regional and global scale; as a result, tsunamis have been omitted from previous global studies comparing different natural hazards. Here, we present a first global tsunami hazard and population exposure study. A key topic is the development of a simple and robust method for obtaining reasonable estimates of the maximum water level during tsunami inundation. This method is mainly based on plane-wave linear hydrostatic transect simulations, and validation against results from a standard run-up model is given. The global hazard study is scenario based, focusing on tsunamis caused by megathrust earthquakes only, as the largest events will often contribute more to the risk than the smaller events. Tsunamis caused by non-seismic sources are omitted. Hazard maps are produced by conducting a number of tsunami scenario simulations supplemented with findings from the literature. The maps are further used to quantify the number of people exposed to tsunamis using the Landscan population data set. Because of the large geographical extent, the tsunami hazard assessment focuses on overall trends.
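Simple transect-style estimates of nearshore amplification often rest on linear long-wave shoaling. As a hedged illustration only (Green's law, a closed-form stand-in, not the paper's actual plane-wave linear hydrostatic transect scheme):

```python
# Green's law for linear long-wave shoaling: as a wave travels from depth h1
# to shallower depth h2, its amplitude grows by (h1 / h2) ** 0.25. Rules of
# this kind underlie quick transect-based estimates of maximum nearshore
# water level; the numbers below are illustrative.
def shoaled_amplitude(a_offshore_m, h_offshore_m, h_coast_m):
    return a_offshore_m * (h_offshore_m / h_coast_m) ** 0.25

# A 1 m wave over 4000 m ocean depth reaching the 10 m isobath:
print(round(shoaled_amplitude(1.0, 4000.0, 10.0), 1))  # 4.5 m
```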
*Global continental and ocean basin reconstructions since 200 Ma*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://www.sciencedirect.com/science/article/pii/S0012825212000311[+https://www.sciencedirect.com/science/article/pii/S0012825212000311+]
Global plate motion models provide a spatial and temporal framework for geological data and have been effective tools for exploring processes occurring at the earth's surface. However, published models either have insufficient temporal coverage or fail to treat tectonic plates in a self-consistent manner. They usually consider the motions of selected features attached to tectonic plates, such as continents, but generally do not explicitly account for the continuous evolution of plate boundaries through time. In order to explore the coupling between the surface and mantle, plate models are required that extend over at least a few hundred million years and treat plates as dynamic features with dynamically evolving plate boundaries.
We have constructed a new type of global plate motion model consisting of a set of continuously-closing topological plate polygons with associated plate boundaries and plate velocities since the break-up of the supercontinent Pangea. Our model is underpinned by plate motions derived from reconstructing the seafloor-spreading history of the ocean basins and motions of the continents and utilizes a hybrid absolute reference frame, based on a moving hotspot model for the last 100 Ma, and a true-polar wander corrected paleomagnetic model for 200 to 100 Ma. Detailed regional geological and geophysical observations constrain plate boundary inception or cessation, and time-dependent geometry. Although our plate model is primarily designed as a reference model for a new generation of geodynamic studies by providing the surface boundary conditions for the deep earth, it is also useful for studies in disparate fields when a framework is needed for analyzing and interpreting spatio-temporal data.
*Singular vectors in atmospheric sciences: A review*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://www.sciencedirect.com/science/article/pii/S0012825212000657[+https://www.sciencedirect.com/science/article/pii/S0012825212000657+]
During the last decade, singular vectors (SVs) have received considerable attention in the research and operational communities, especially due to their use in ensemble forecasting and the targeting of observations. SVs represent the orthogonal set of perturbations that, according to linear theory, will grow fastest over a finite-time interval with respect to a specific metric. Hence, the study of SVs gives information about the dynamics and structure of rapidly growing, finite-time instabilities, representing an important step toward a better understanding of perturbation evolution in the atmosphere. This paper reviews the SV formulation and gives a brief overview of recent applications of SVs in atmospheric sciences. Particular attention is given to the sensitivity of SVs to different choices, such as the optimization time interval, norm, horizontal resolution and tangent linear model, which lead to different initial structures and evolutions.
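With the Euclidean norm, the SVs of a tangent-linear propagator M over the optimization interval are simply the right singular vectors of M, and the growth factors are the singular values. A toy sketch (the 2x2 propagator is illustrative):

```python
import numpy as np

# For a tangent-linear propagator M mapping initial perturbations to final
# ones, the fastest-growing perturbation (leading SV) is the right singular
# vector with the largest singular value; that singular value is the growth
# factor over the optimization interval.
M = np.array([[3.0, 1.0],
              [0.0, 0.5]])

U, s, Vt = np.linalg.svd(M)
leading_sv = Vt[0]          # optimal initial perturbation (unit norm)
growth = s[0]               # amplification over the interval

# Non-normal dynamics: the SV grows faster than the largest eigenvalue (3)
# of this triangular M would suggest.
print(round(growth, 2))                          # 3.17 > 3
print(round(np.linalg.norm(M @ leading_sv), 2))  # equals the growth factor
```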
*Phanerozoic polar wander, palaeogeography and dynamics*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://www.sciencedirect.com/science/article/pii/S0012825212000797[+https://www.sciencedirect.com/science/article/pii/S0012825212000797+]
A significant number of new palaeomagnetic poles have become available since the last compilation (assembled in 2005, published in 2008), indicating that a new and significantly expanded set of tables with palaeomagnetic results would be valuable, with results coming from the Gondwana cratonic elements, Laurentia, Baltica/Europe, and Siberia. Following the Silurian Caledonian Orogeny, Laurentia's and Baltica's Apparent Polar Wander Paths (APWPs) can be merged into a Laurussia path, followed in turn by a merger of the Laurussia and Siberia data from latest Permian time onward into a combined Laurasian path. Meanwhile, after about 320 Ma, Gondwana's and Laurussia/Laurasia's paths can be combined into what comes steadily closer to the ideal of a Global Apparent Polar Wander Path (GAPWaP) for late Palaeozoic and younger times. Tests for True Polar Wander (TPW) episodes are now feasible since Pangaea fusion, and we identify four important episodes of Mesozoic TPW between 250 and 100 Ma. TPW rates are on the order of 0.45–0.8°/Myr, but cumulative TPW is nearly zero since the Late Carboniferous. With the exception of a few intervals where data are truly scarce (e.g., 390–340 Ma), the palaeomagnetic database is robust and allows us to make a series of new palaeogeographic reconstructions from the Late Cambrian to the Palaeogene.
*Storytelling in Earth sciences: The eight basic plots*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://www.sciencedirect.com/science/article/pii/S0012825212001249[+https://www.sciencedirect.com/science/article/pii/S0012825212001249+]
Reporting results and promoting ideas in science in general, and Earth science in particular, is treated here as storytelling. Just as in literature and drama, storytelling in Earth science is characterized by a small number of basic plots. Though the list is not exhaustive, and acknowledging that multiple or hybrid plots and subplots are possible in a single piece, eight standard plots are identified, and examples provided: cause-and-effect, genesis, emergence, destruction, metamorphosis, convergence, divergence, and oscillation. The plots of Earth science stories are not those of literary traditions, nor those of persuasion or moral philosophy, and deserve separate consideration. Earth science plots do not conform to those of storytelling more generally, implying that Earth scientists may have fundamentally different motivations than other storytellers, and that the basic plots of Earth science derive from the characteristics and behaviors of Earth systems. In some cases a preference or affinity for different plots results in fundamentally different interpretations and conclusions from the same evidence. In other situations, exploration of additional plots could help resolve scientific controversies. Thus explicit acknowledgement of plots can yield direct scientific benefits. Consideration of plots and storytelling devices may also assist in the interpretation of published work, and can help scientists improve their own storytelling.
*Overturning in the North Atlantic*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://www.annualreviews.org/doi/abs/10.1146/annurev-marine-120710-100740[+https://www.annualreviews.org/doi/abs/10.1146/annurev-marine-120710-100740+]
The global overturning of ocean waters involves the equatorward transport of cold, deep waters and the poleward transport of warm, near-surface waters. Such movement creates a net poleward transport of heat that, in partnership with the atmosphere, establishes the global and regional climates. Although oceanographers have long assumed that a reduction in deep water formation at high latitudes in the North Atlantic translates into a slowing of the ocean's overturning, and hence a change in Earth's climate, observational and modeling studies over the past decade have called this assumed linkage into question. The observational basis for linking water mass formation with the ocean's meridional overturning is reviewed herein. Understanding this linkage is crucial to efforts aimed at predicting the consequences of the warming and freshening of high-latitude surface waters for the climate system.
*Ice structures, patterns, and processes: A view across the icefields*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.84.885[+https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.84.885+]
From the frontiers of research on ice dynamics in its broadest sense, this review surveys the structures of ice, the patterns or morphologies it may assume, and the physical and chemical processes in which it is involved. Open questions in the various fields of ice research in nature are highlighted, ranging from terrestrial and oceanic ice on Earth, to ice in the atmosphere, to ice on other Solar System bodies and in interstellar space.
*Statistical physics of fracture, friction, and earthquakes*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.84.839[+https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.84.839+]
The present status of research and understanding regarding the dynamics and the statistical properties of earthquakes is reviewed, mainly from a statistical physical viewpoint. Emphasis is put both on the physics of friction and fracture, which provides a microscopic basis for our understanding of an earthquake instability, and on the statistical physical modelling of earthquakes, which provides macroscopic aspects of such phenomena. Recent numerical results from several representative models are reviewed, with attention to both their critical and their characteristic properties. Some of the relevant notions and related issues are highlighted, including the origin of power laws often observed in statistical properties of earthquakes, apparently contrasting features of characteristic earthquakes or asperities, the nature of precursory phenomena and nucleation processes, and the origin of slow earthquakes, etc.
*Hamiltonian complexity*
~~~~~~~~~~~~~~~~~~~~~~~~
https://iopscience.iop.org/article/10.1088/0034-4885/75/2/022001[+https://iopscience.iop.org/article/10.1088/0034-4885/75/2/022001+]
In recent years we have seen the birth of a new field known as Hamiltonian complexity lying at the crossroads between computer science and theoretical physics. Hamiltonian complexity is directly concerned with the question: how hard is it to simulate a physical system? Here I review the foundational results, guiding problems, and future directions of this emergent field.
*The physics of wind-blown sand and dust*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://iopscience.iop.org/article/10.1088/0034-4885/75/10/106901[+https://iopscience.iop.org/article/10.1088/0034-4885/75/10/106901+]
The transport of sand and dust by wind is a potent erosional force, creates sand dunes and ripples, and loads the atmosphere with suspended dust aerosols. This paper presents an extensive review of the physics of wind-blown sand and dust on Earth and Mars. Specifically, we review the physics of aeolian saltation, the formation and development of sand dunes and ripples, the physics of dust aerosol emission, the weather phenomena that trigger dust storms, and the lifting of dust by dust devils and other small-scale vortices. We also discuss the physics of wind-blown sand and dune formation on Venus and Titan.
2013
----
*A Survey on Array Storage, Query Languages, and Systems* - Florin Rusu et al
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://arxiv.org/abs/1302.0103[+https://arxiv.org/abs/1302.0103+]
Since scientific investigation is one of the most important providers of massive amounts of ordered data, there is a renewed interest in array data processing in the context of Big Data. To the best of our knowledge, a unified resource that summarizes and analyzes array processing research over its long existence is currently missing. In this survey, we provide a guide for past, present, and future research in array processing. The survey is organized along three main topics. Array storage discusses all the aspects related to array partitioning into chunks. The identification of a reduced set of array operators to form the foundation for an array query language is analyzed across multiple such proposals. Lastly, we survey real systems for array processing. The result is a thorough survey on array data storage and processing that should be consulted by anyone interested in this research topic, independent of experience level. The survey is not complete though. We greatly appreciate pointers towards any work we might have forgotten to mention.
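Array partitioning into chunks, the survey's first topic, reduces in the regular (aligned) case to simple integer arithmetic. A toy sketch, assuming a regular chunking scheme; the function names are ours, not from any surveyed system:

```python
def chunk_of(index, chunk_shape):
    """Regular (aligned) chunking: map an n-d cell index to the coordinates
    of the chunk that stores it."""
    return tuple(i // c for i, c in zip(index, chunk_shape))

def cells_of(chunk, chunk_shape):
    """Inverse view: the half-open index ranges [lo, hi) covered by a chunk."""
    return tuple((k * c, (k + 1) * c) for k, c in zip(chunk, chunk_shape))

# Cell (7, 12) of an array chunked into 4x5 tiles lives in chunk (1, 2),
# which covers rows [4, 8) and columns [10, 15).
print(chunk_of((7, 12), (4, 5)))  # (1, 2)
print(cells_of((1, 2), (4, 5)))   # ((4, 8), (10, 15))
```

Much of the storage discussion in the survey concerns what this sketch hides: non-aligned and overlapping chunks, chunk ordering on disk, and compression.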
*Category Theory for Scientists* - David I. Spivak
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://arxiv.org/abs/1302.6946[+https://arxiv.org/abs/1302.6946+]
There are many books designed to introduce category theory to either a mathematical audience or a computer science audience. In this book, our audience is the broader scientific community. We attempt to show that category theory can be applied throughout the sciences as a framework for modeling phenomena and communicating results. In order to target the scientific audience, this book is example-based rather than proof-based. For example, monoids are framed in terms of agents acting on objects, sheaves are introduced with primary examples coming from geography, and colored operads are discussed in terms of their ability to model self-similarity. A new version with solutions to exercises will be available through MIT Press.
*The volcanic response to deglaciation*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://www.sciencedirect.com/science/article/pii/S0012825213000664[+https://www.sciencedirect.com/science/article/pii/S0012825213000664+]
Several lines of evidence have previously been used to suggest that ice retreat after the last glacial maximum (LGM) resulted in regionally-increased levels of volcanic activity. It has been proposed that this increase in volcanism was globally significant, forming a substantial component of the post-glacial rise in atmospheric CO2, and thereby contributing to climatic warming. However, as yet there has been no detailed investigation of activity in glaciated volcanic arcs following the LGM. Arc volcanism accounts for 90% of present-day subaerial volcanic eruptions. It is therefore important to constrain the impact of deglaciation on arc volcanoes, to understand fully the nature and magnitude of global-scale relationships between volcanism and glaciation.
The first part of this paper examines the post-glacial explosive eruption history of the Andean southern volcanic zone (SVZ), a typical arc system, with additional data from the Kamchatka and Cascade arcs. In all cases, eruption rates in the early post-glacial period do not exceed those at later times at a statistically significant level. In part, the recognition and quantification of what may be small (i.e. less than a factor of two) increases in eruption rate is hindered by the size of our datasets. These datasets are limited to eruptions larger than 0.1 km3, because deviations from power-law magnitude–frequency relationships indicate strong relative under-sampling at smaller eruption volumes. In the southern SVZ, where ice unloading was greatest, eruption frequency in the early post-glacial period is approximately twice that of the mid post-glacial period (although frequency increases again in the late post-glacial). A comparable pattern occurs in Kamchatka, but is not observed in the Cascade arc. The early post-glacial period also coincides with a small number of very large explosive eruptions from the most active volcanoes in the southern and central SVZ, consistent with enhanced ponding of magma during glaciation and release upon deglaciation.
In comparison to non-arc settings, evidence of post-glacial increases in rates of arc volcanism is weak, and there is no need to invoke significantly increased melt production upon ice unloading, as occurred in areas such as Iceland. Non-arc volcanoes may therefore account for a relatively higher proportion of global volcanic emissions in the early post-glacial period than is suggested by the relative contributions of arc and non-arc settings at the present day.
The second part of this paper critically examines global eruption records, in an effort to constrain global-scale changes in volcanic output since the LGM. Accurate interpretation of these records relies on correcting both temporal and spatial variability in eruption recording. In particular, very low recording rates, which also vary spatially by over two orders of magnitude, prevent precise, and possibly even accurate, quantitative analysis. For example, if we assume record completeness for the past century, the recorded fraction of eruptions (volcanic explosivity index ≥ 2) from some low-latitude regions, such as Indonesia, is approximately 1 in 20,000 (0.005%) for the period 5–20 ka. There is a need for more regional-scale studies of past volcanism in such regions, where current data are extremely sparse. We attempt to correct for recording biases, and suggest a maximum two-fold (but potentially much less) increase in global eruption rates, relative to the present day, between 13 and 7 ka. Although volcanism may have been an important source of CO2 in the early Holocene, it is unlikely to have been a dominant control on changes in atmospheric CO2 after the LGM.
*Biostratigraphy: Interpretations of Oppel's zones*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://www.sciencedirect.com/science/article/pii/S0012825213001463[+https://www.sciencedirect.com/science/article/pii/S0012825213001463+]
Zones like those of Oppel and Hedberg's Oppel-Zone are commonly interpreted as rock units delimited temporally. A more restricted view is that they are rock units empirically defined by bioevents that occur in the same order in all sections. Methods used by Oppel and definitions proposed by Hedberg are reviewed to assess their adequacy for definition of biostratigraphic units and their ability to support temporal inferences. Although they are usually interpreted as chronostratigraphic units, Oppel defined his zones in stratigraphic space, without temporal reference. In contrast, Hedberg required that bioevents for his Oppel-Zone should be approximately isochronous across their distribution but provided no operational way to identify such bioevents. Neither author clearly indicated how boundaries should be defined. Recourse to a principle of biosynchroneity to support inferences that stratigraphically ordered bioevents are temporal markers conflicts with knowledge of the biogeographies of modern taxa. Evolutionary theory explains why some bioevents occur in the same stratigraphic order but does not support the inference that they are isochronous events. Since its inception biostratigraphy has focused on ordered classifications, like those of Oppel. Stratigraphic codes should allow for a complementary category of biofacies zones that reflect depositional environments and are not constrained to occur in a particular order.
*Virtual Geographic Environments (VGEs): A New Generation of Geographic Analysis Tool*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://www.sciencedirect.com/science/article/pii/S001282521300127X#bi0005[+https://www.sciencedirect.com/science/article/pii/S001282521300127X#bi0005+]
Virtual Geographic Environments (VGEs) are proposed as a new generation of geographic analysis tool to contribute to human understanding of the geographic world and assist in solving geographic problems at a deeper level. The development of VGEs is focused on meeting the three scientific requirements of Geographic Information Science (GIScience) — multi-dimensional visualization, dynamic phenomenon simulation, and public participation. To provide a clearer image that improves user understanding of VGEs and to contribute to future scientific development, this article reviews several aspects of VGEs. First, the evolutionary process from maps to previous GISystems and then to VGEs is illustrated, with a particular focus on the reasons VGEs were created. Then, extended from the conceptual framework and the components of a complete VGE, three use cases are identified that together encompass the current state of VGEs at different application levels: 1) a tool for geo-object-based multi-dimensional spatial analysis and multi-channel interaction, 2) a platform for geo-process-based simulation of dynamic geographic phenomena, and 3) a workspace for multi-participant-based collaborative geographic experiments. Based on the above analysis, the differences between VGEs and other similar platforms are discussed to draw their clear boundaries. Finally, a short summary of the limitations of current VGEs is given, and future directions are proposed to facilitate ongoing progress toward forming a comprehensive version of VGEs.
*Gemstones and geosciences in space and time: Digital maps to the “Chessboard classification scheme of mineral deposits”*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://www.sciencedirect.com/science/article/pii/S0012825213001220[+https://www.sciencedirect.com/science/article/pii/S0012825213001220+]
The gemstones, covering the spectrum from jeweler's to showcase quality, are presented in a tripartite subdivision (by country, geology and geomorphology) realized in 99 digital maps with more than 2600 mineralized sites. The various maps were designed based on the “Chessboard classification scheme of mineral deposits” proposed by Dill (2010a, 2010b) to reveal the interrelations between gemstone deposits and mineral deposits of other commodities, and to direct our thoughts to potential new target areas for exploration. A total of 33 categories was used for these digital maps: chromium, nickel, titanium, iron, manganese, copper, tin–tungsten, beryllium, lithium, zinc, calcium, boron, fluorine, strontium, phosphorus, zirconium, silica, feldspar, feldspathoids, zeolite, amphibole (tiger's eye), olivine, pyroxenoid, garnet, epidote, sillimanite–andalusite, corundum–spinel–diaspore, diamond, vermiculite–pagodite, prehnite, sepiolite, jet, and amber.
Besides the political base map (gems by country), the mineral deposits are drawn on a geological map illustrating the main lithologies, stratigraphic units and tectonic structure, to unravel the evolution of primary gemstone deposits in time and space. The geomorphological map shows the control of climate and of subaerial and submarine hydrography on the deposition of secondary gemstone deposits. The digital maps are designed to be plotted on paper at different scales, and to be upgraded for interactive use and linked to gemological databases.
*High-Frequency Radar Observations of Ocean Surface Currents*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://www.annualreviews.org/doi/abs/10.1146/annurev-marine-121211-172315[+https://www.annualreviews.org/doi/abs/10.1146/annurev-marine-121211-172315+]
This article reviews the discovery, development, and use of high-frequency (HF) radio wave backscatter in oceanography. HF radars, as the instruments are commonly called, remotely measure ocean surface currents by exploiting a Bragg resonant backscatter phenomenon. Electromagnetic waves in the HF band (3–30 MHz) have wavelengths that are commensurate with wind-driven gravity waves on the ocean surface; the ocean waves whose wavelengths are exactly half as long as those of the broadcast radio waves are responsible for the resonant backscatter. Networks of HF radar systems are capable of mapping surface currents hourly out to ranges approaching 200 km with a horizontal resolution of a few kilometers. Such information has many uses, including search and rescue support and oil-spill mitigation in real time and larval population connectivity assessment when viewed over many years. Today, HF radar networks form the backbone of many ocean observing systems, and the data are assimilated into ocean circulation models.
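The Bragg condition stated above (the resonant ocean wavelength is exactly half the broadcast radio wavelength) makes for a quick back-of-the-envelope calculation. A sketch, which additionally assumes the deep-water gravity-wave dispersion relation ω² = gk (not spelled out in the abstract):

```python
import math

C = 3.0e8   # speed of light, m/s
G = 9.81    # gravitational acceleration, m/s^2

def bragg_ocean_wavelength(radar_freq_hz):
    """Ocean wavelength resonant with a given HF radar frequency:
    exactly half the broadcast radio wavelength."""
    radio_wavelength = C / radar_freq_hz
    return radio_wavelength / 2.0

def bragg_wave_frequency(radar_freq_hz):
    """Frequency (Hz) of the resonant ocean wave, assuming the deep-water
    dispersion relation omega**2 = g * k."""
    lam = bragg_ocean_wavelength(radar_freq_hz)
    k = 2.0 * math.pi / lam
    return math.sqrt(G * k) / (2.0 * math.pi)

# A 12 MHz radar (mid HF band) resonates with 12.5 m ocean waves.
print(bragg_ocean_wavelength(12e6))  # 12.5
```

Across the 3–30 MHz band this gives resonant ocean wavelengths of roughly 5–50 m, i.e. ordinary wind-driven gravity waves, which is why the technique works on the open ocean.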
*Lunar laser ranging: the millimeter challenge*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://iopscience.iop.org/article/10.1088/0034-4885/76/7/076901[+https://iopscience.iop.org/article/10.1088/0034-4885/76/7/076901+]
Lunar laser ranging has provided many of the best tests of gravitation since the first Apollo astronauts landed on the Moon. The march to higher precision continues to this day, now entering the millimeter regime, and promising continued improvement in scientific results. This review introduces key aspects of the technique, details the motivations, observables, and results for a variety of science objectives, summarizes the current state of the art, highlights new developments in the field, describes the modeling challenges, and looks to the future of the enterprise.
*Geodetic imaging with airborne LiDAR: the Earth's surface revealed*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://iopscience.iop.org/article/10.1088/0034-4885/76/8/086801[+https://iopscience.iop.org/article/10.1088/0034-4885/76/8/086801+]
The past decade has seen an explosive increase in the number of peer reviewed papers reporting new scientific findings in geomorphology (including fans, channels, floodplains and landscape evolution), geologic mapping, tectonics and faulting, coastal processes, lava flows, hydrology (especially snow and runoff routing), glaciers and geo-archaeology. A common genesis of such findings is often newly available decimeter resolution 'bare Earth' geodetic images, derived from airborne laser swath mapping, a.k.a. airborne LiDAR, observations. In this paper we trace nearly a half century of advances in geodetic science made possible by space age technology, such as the invention of short-pulse-length high-pulse-rate lasers, solid state inertial measurement units, chip-based high speed electronics and the GPS satellite navigation system, that today make it possible to map hundreds of square kilometers of terrain in hours, even in areas covered with dense vegetation or shallow water. To illustrate the impact of the LiDAR observations we present examples of geodetic images that are not only stunning to the eye, but help researchers to develop quantitative models explaining how terrain evolved to its present form, and how it will likely change with time. Airborne LiDAR technology continues to develop quickly, promising ever more scientific discoveries in the years ahead.
*Synthetic biological networks*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://iopscience.iop.org/article/10.1088/0034-4885/76/9/096602[+https://iopscience.iop.org/article/10.1088/0034-4885/76/9/096602+]
Despite the obvious relationship and overlap between physics and biology, the former is blessed with many insightful laws, while such laws are sadly absent in the latter. Here we aim to discuss how the rise of a more recent field known as synthetic biology may allow us to more directly test hypotheses regarding the possible design principles of natural biological networks and systems. In particular, this review focuses on synthetic gene regulatory networks engineered to perform specific functions or exhibit particular dynamic behaviors. Advances in synthetic biology may set the stage to uncover the relationship of potential biological principles to those developed in physics.
2014
----
*Control: A Perspective* - Karl J. Åström et al
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://www.sciencedirect.com/science/article/pii/S0005109813005037[+https://www.sciencedirect.com/science/article/pii/S0005109813005037+]
Feedback is an ancient idea, but feedback control is a young field. Nature long ago discovered feedback since it is essential for homeostasis and life. It was the key for harnessing power in the industrial revolution and is today found everywhere around us. Its development as a field involved contributions from engineers, mathematicians, economists and physicists. It is the first systems discipline; it represented a paradigm shift because it cut across the traditional engineering disciplines of aeronautical, chemical, civil, electrical and mechanical engineering, as well as economics and operations research. The scope of control makes it the quintessential multidisciplinary field. Its complex story of evolution is fascinating, and a perspective on its growth is presented in this paper. The interplay of industry, applications, technology, theory and research is discussed.
*Synchronization in complex networks of phase oscillators: A survey* - Florian Dörfler et al
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://www.sciencedirect.com/science/article/pii/S0005109814001423[+https://www.sciencedirect.com/science/article/pii/S0005109814001423+]
The emergence of synchronization in a network of coupled oscillators is a fascinating subject of multidisciplinary research. This survey reviews the vast literature on the theory and the applications of complex oscillator networks. We focus on phase oscillator models that are widespread in real-world synchronization phenomena, that generalize the celebrated Kuramoto model, and that feature a rich phenomenology. We review the history and the countless applications of this model throughout science and engineering. We justify the importance of the widespread coupled oscillator model as a locally canonical model and describe some selected applications relevant to control scientists, including vehicle coordination, electric power networks, and clock synchronization. We introduce the reader to several synchronization notions and performance estimates. We propose analysis approaches to phase and frequency synchronization, phase balancing, pattern formation, and partial synchronization. We present the sharpest known results about synchronization in networks of homogeneous and heterogeneous oscillators, with complete or sparse interconnection topologies, and in finite-dimensional and infinite-dimensional settings. We conclude by summarizing the limitations of existing analysis methods and by highlighting some directions for future research.
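The celebrated Kuramoto model that these networks generalize is compact enough to simulate directly: dθᵢ/dt = ωᵢ + (K/N) Σⱼ sin(θⱼ − θᵢ). A minimal sketch with all-to-all coupling and forward Euler stepping; the parameter choices are ours, for illustration only:

```python
import math
import random

def kuramoto_step(theta, omega, K, dt):
    """One Euler step of the Kuramoto model:
    dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)."""
    n = len(theta)
    new = []
    for i in range(n):
        coupling = sum(math.sin(theta[j] - theta[i]) for j in range(n))
        new.append(theta[i] + dt * (omega[i] + (K / n) * coupling))
    return new

def order_parameter(theta):
    """Magnitude r of (1/N) sum_j exp(i*theta_j); r = 1 means full phase lock."""
    n = len(theta)
    re = sum(math.cos(t) for t in theta) / n
    im = sum(math.sin(t) for t in theta) / n
    return math.hypot(re, im)

random.seed(0)
n = 50
theta = [random.uniform(-math.pi, math.pi) for _ in range(n)]  # random phases
omega = [random.gauss(0.0, 0.1) for _ in range(n)]             # natural frequencies

for _ in range(2000):  # coupling well above critical: phases lock
    theta = kuramoto_step(theta, omega, K=2.0, dt=0.01)
print(order_parameter(theta))  # close to 1
```

With K below the critical coupling the same experiment leaves r near its incoherent value of order 1/√N, which is the transition the survey's analysis results make precise.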
*The role of palaeogeography in the Phanerozoic history of atmospheric CO2 and climate*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://www.sciencedirect.com/science/article/pii/S0012825213001918[+https://www.sciencedirect.com/science/article/pii/S0012825213001918+]
A role for palaeogeography in the geological evolution of the global carbon cycle has been suspected since the development of the first global geochemical models in the early 1980s. Palaeogeography was rapidly recognized as a key factor controlling the long-term evolution of atmospheric CO2 through its capability to modulate the efficiency of silicate weathering. First, the role of the latitudinal position of the continents was emphasized: an average low-latitude position promotes CO2 consumption by silicate weathering, and is theoretically associated with low-CO2 periods.
As model complexity increased and the hydrological cycle was considered explicitly, the importance of the continentality factor was recognized: periods of supercontinent assembly coincide with high pCO2 values, owing to the development of arid conditions which weaken the efficiency of silicate weathering. These fundamental feedbacks between climate, the carbon cycle and tectonics were discovered by pioneering modelling studies and opened new views on the history of Earth's climate. Today, some of the key features of the Phanerozoic climate can be explained by: (1) continental drift; (2) small continental blocks moving to tropical belts; and (3) modulation of the climate sensitivity to CO2 by palaeogeographic changes. These results emphasize the need for careful process-based modelling of the water cycle and of the climate response to continental drift.
*Arctic Ocean glacial history*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://www.sciencedirect.com/science/article/pii/S0277379113002989[+https://www.sciencedirect.com/science/article/pii/S0277379113002989+]
While there are numerous hypotheses concerning glacial–interglacial environmental and climatic regime shifts in the Arctic Ocean, a holistic view of the Northern Hemisphere's late Quaternary ice-sheet extent and its impact on ocean and sea-ice dynamics remains to be established. Here we aim to provide a step in this direction by presenting an overview of Arctic Ocean glacial history, based on the present state-of-the-art knowledge gained from field work and chronological studies, and with a specific focus on ice-sheet extent and environmental conditions during the Last Glacial Maximum (LGM).
The maximum Quaternary extension of ice sheets is discussed and compared to the LGM. We bring together recent results from the circum-Arctic continental margins and the deep central basin: the extent of ice sheets and ice streams bordering the Arctic Ocean, as well as evidence for ice shelves extending into the central deep basin. Discrepancies between new results and published LGM ice-sheet reconstructions in the high Arctic are highlighted and outstanding questions are identified. Finally, we address the ability to simulate the Arctic Ocean ice sheet complexes and their dynamics, including ice streams and ice shelves, using presently available ice-sheet models. Our review shows that while we are able to firmly reject some of the earlier hypotheses formulated to describe Arctic Ocean glacial conditions, we still lack information from key areas to compile the holistic Arctic Ocean glacial history.
*Interglacials, Milankovitch Cycles, Solar Activity, and Carbon Dioxide*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://www.hindawi.com/journals/jcli/2014/345482/[+https://www.hindawi.com/journals/jcli/2014/345482/+]
The existing understanding of interglacial periods is that they are initiated by Milankovitch cycles enhanced by rising atmospheric carbon dioxide concentrations. During interglacials, global temperature is also believed to be primarily controlled by carbon dioxide concentrations, modulated by internal processes such as the Pacific Decadal Oscillation and the North Atlantic Oscillation. Recent work challenges the fundamental basis of these conceptions.
*Fractional calculus view of complexity: A tutorial*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.86.1169[+https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.86.1169+]
The fractional calculus has been part of the mathematics and science literature for 310 years. However, it is only in the past decade or so that it has drawn the attention of mainstream science as a way to describe the dynamics of complex phenomena with long-term memory, spatial heterogeneity, and nonstationary, nonergodic statistics. The most recent application encompasses complex networks, which require new ways of thinking about the world. Part of the new cognition is provided by the fractional calculus description of temporal and topological complexity. Consequently, this Colloquium is not so much a tutorial on the mathematics of the fractional calculus as it is an exploration of how complex phenomena in the physical, social, and life sciences that have eluded traditional mathematical modeling become less mysterious when certain historical assumptions such as differentiability are discarded and the ordinary calculus is replaced with the fractional calculus. Exemplars considered include the fractional differential equations describing the dynamics of viscoelastic materials, turbulence, foraging, and phase transitions in complex social networks.
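For readers unfamiliar with the object itself, one standard definition (stated here for orientation, not taken from the Colloquium) is the Caputo fractional derivative of order α ∈ (0, 1):

```latex
{}^{C}\!D_t^{\alpha} f(t) \;=\; \frac{1}{\Gamma(1-\alpha)} \int_0^t \frac{f'(\tau)}{(t-\tau)^{\alpha}}\, d\tau .
```

The kernel (t − τ)^{−α} weights the entire past history of f, which is how long-term memory enters; the fractional relaxation equation {}^{C}D_t^{α} f = −λ f is solved by the Mittag-Leffler function E_α(−λ t^{α}), interpolating between ordinary exponential decay at α = 1 and an inverse power law for α < 1.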
*Noise in biology*
~~~~~~~~~~~~~~~~~~
https://iopscience.iop.org/article/10.1088/0034-4885/77/2/026601[+https://iopscience.iop.org/article/10.1088/0034-4885/77/2/026601+]
Noise permeates biology on all levels, from the most basic molecular, sub-cellular processes to the dynamics of tissues, organs, organisms and populations. The functional roles of noise in biological processes can vary greatly. Along with standard, entropy-increasing effects of producing random mutations, diversifying phenotypes in isogenic populations, limiting information capacity of signaling relays, it occasionally plays more surprising constructive roles by accelerating the pace of evolution, providing selective advantage in dynamic environments, enhancing intracellular transport of biomolecules and increasing information capacity of signaling pathways. This short review covers the recent progress in understanding mechanisms and effects of fluctuations in biological systems of different scales and the basic approaches to their mathematical modeling.
*Tensegrity, cellular biophysics, and the mechanics of living systems*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://iopscience.iop.org/article/10.1088/0034-4885/77/4/046603[+https://iopscience.iop.org/article/10.1088/0034-4885/77/4/046603+]
The recent convergence between physics and biology has led many physicists to enter the fields of cell and developmental biology. One of the most exciting areas of interest has been the emerging field of mechanobiology that centers on how cells control their mechanical properties, and how physical forces regulate cellular biochemical responses, a process that is known as mechanotransduction. In this article, we review the central role that tensegrity (tensional integrity) architecture, which depends on tensile prestress for its mechanical stability, plays in biology. We describe how tensional prestress is a critical governor of cell mechanics and function, and how use of tensegrity by cells contributes to mechanotransduction. Theoretical tensegrity models are also described that predict both quantitative and qualitative behaviors of living cells, and these theoretical descriptions are placed in context of other physical models of the cell. In addition, we describe how tensegrity is used at multiple size scales in the hierarchy of life—from individual molecules to whole living organisms—to both stabilize three-dimensional form and to channel forces from the macroscale to the nanoscale, thereby facilitating mechanochemical conversion at the molecular level.
*How to deal with petabytes of data: the LHC Grid project*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://iopscience.iop.org/article/10.1088/0034-4885/77/6/065902[+https://iopscience.iop.org/article/10.1088/0034-4885/77/6/065902+]
We review the Grid computing system developed by the international community to deal with the petabytes of data coming from the Large Hadron Collider at CERN in Geneva, with particular emphasis on the ATLAS experiment and the UK Grid project, GridPP. Although these developments were started over a decade ago, this article explains their continued relevance as part of the 'Big Data' problem and how the Grid has been a forerunner of today's cloud computing.
*The three-body problem*
~~~~~~~~~~~~~~~~~~~~~~~~
https://iopscience.iop.org/article/10.1088/0034-4885/77/6/065901[+https://iopscience.iop.org/article/10.1088/0034-4885/77/6/065901+]
The three-body problem, which describes three masses interacting through Newtonian gravity without any restrictions imposed on the initial positions and velocities of these masses, has attracted the attention of many scientists for more than 300 years. In this paper, we present a review of the three-body problem in the context of both historical and modern developments. We describe the general and restricted (circular and elliptic) three-body problems, different analytical and numerical methods of finding solutions, methods for performing stability analysis and searching for periodic orbits and resonances. We apply the results to some interesting problems of celestial mechanics. We also provide a brief presentation of the general and restricted relativistic three-body problems, and discuss their astronomical applications.
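The unrestricted Newtonian problem the review begins with is easy to integrate numerically, if not to solve in closed form. A minimal planar sketch using a leapfrog (kick-drift-kick) scheme; the units (G = 1) and initial conditions are our own illustrative choices:

```python
import math

G = 1.0  # gravitational constant in simulation units

def accelerations(pos, masses):
    """Newtonian pairwise gravitational accelerations in the plane."""
    n = len(pos)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            acc[i][0] += G * masses[j] * dx / r3
            acc[i][1] += G * masses[j] * dy / r3
    return acc

def leapfrog(pos, vel, masses, dt, steps):
    """Kick-drift-kick leapfrog: time-reversible and nearly energy-conserving."""
    acc = accelerations(pos, masses)
    for _ in range(steps):
        vel = [[v[0] + 0.5 * dt * a[0], v[1] + 0.5 * dt * a[1]] for v, a in zip(vel, acc)]
        pos = [[p[0] + dt * v[0], p[1] + dt * v[1]] for p, v in zip(pos, vel)]
        acc = accelerations(pos, masses)
        vel = [[v[0] + 0.5 * dt * a[0], v[1] + 0.5 * dt * a[1]] for v, a in zip(vel, acc)]
    return pos, vel

def energy(pos, vel, masses):
    """Total energy: kinetic plus pairwise gravitational potential."""
    kin = sum(0.5 * m * (v[0] ** 2 + v[1] ** 2) for m, v in zip(masses, vel))
    pot = 0.0
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            pot -= G * masses[i] * masses[j] / math.dist(pos[i], pos[j])
    return kin + pot

# Three equal masses at the vertices of a triangle, given a mild tangential spin.
masses = [1.0, 1.0, 1.0]
angles = (0.0, 2 * math.pi / 3, 4 * math.pi / 3)
pos = [[math.cos(a), math.sin(a)] for a in angles]
vel = [[-0.6 * math.sin(a), 0.6 * math.cos(a)] for a in angles]

e0 = energy(pos, vel, masses)
pos, vel = leapfrog(pos, vel, masses, dt=1e-3, steps=5000)
print(abs(energy(pos, vel, masses) - e0))  # small energy drift
```

This symmetric setup is a (homographic) Lagrange-type configuration, so it stays regular; generic initial conditions instead exhibit the close encounters and chaotic escapes that make the general problem so hard, and that motivate the regularization and stability-analysis methods the review surveys.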
*Physics and financial economics (1776–2014): puzzles, Ising and agent-based models*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://iopscience.iop.org/article/10.1088/0034-4885/77/6/062001[+https://iopscience.iop.org/article/10.1088/0034-4885/77/6/062001+]
This short review presents a selected history of the mutual fertilization between physics and economics—from Isaac Newton and Adam Smith to the present. The fundamentally different perspectives embraced in theories developed in financial economics compared with physics are dissected with the examples of the volatility smile and of the excess volatility puzzle. The role of the Ising model of phase transitions to model social and financial systems is reviewed, with the concepts of random utilities and the logit model as the analog of the Boltzmann factor in statistical physics. Recent extensions in terms of quantum decision theory are also covered. A wealth of models are discussed briefly that build on the Ising model and generalize it to account for the many stylized facts of financial markets. A summary of the relevance of the Ising model and its extensions is provided to account for financial bubbles and crashes. The review would be incomplete if it did not cover the dynamical field of agent-based models (ABMs), also known as computational economic models, of which the Ising-type models are just special ABM implementations. We formulate the 'Emerging Intelligence Market Hypothesis' to reconcile the pervasive presence of 'noise traders' with the near efficiency of financial markets. Finally, we note that evolutionary biology, more than physics, is now playing a growing role to inspire models of financial markets.
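The logit-Boltzmann analogy drawn in the review can be made explicit in a few lines: choice probabilities proportional to exp(βU) are formally Boltzmann weights, with β playing the role of inverse temperature. A small sketch (the utility values are arbitrary illustrations, not from the review):

```python
import math

def logit_choice(utilities, beta):
    """Discrete-choice (logit) probabilities: exp(beta*U_i) / sum_j exp(beta*U_j).
    Formally identical to Boltzmann weights, with beta as inverse temperature:
    beta -> 0 gives random choice, beta -> infinity deterministic choice."""
    weights = [math.exp(beta * u) for u in utilities]
    z = sum(weights)  # the partition function
    return [w / z for w in weights]

print(logit_choice([1.0, 0.0], beta=0.0))   # [0.5, 0.5]: pure noise
print(logit_choice([1.0, 0.0], beta=10.0))  # ~[1.0, 0.0]: near-deterministic
```

In the Ising-type market models the review covers, sweeping β through a critical value is what produces the phase transition between disordered ("noise trader") and ordered (herding) regimes.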
*The physics of anomalous ('rogue') ocean waves*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://iopscience.iop.org/article/10.1088/0034-4885/77/10/105901[+https://iopscience.iop.org/article/10.1088/0034-4885/77/10/105901+]
There is much speculation that the largest and steepest waves may need to be modelled with different physics to the majority of the waves on the open ocean. This review examines the various physical mechanisms which may play an important role in the dynamics of extreme waves. We examine the evidence for these mechanisms in numerical and physical wavetanks, and look at the evidence that such mechanisms might also exist in the real ocean.
*Non-equilibrium physics and evolution—adaptation, extinction, and ecology: a Key Issues review*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://iopscience.iop.org/article/10.1088/0034-4885/77/10/102602[+https://iopscience.iop.org/article/10.1088/0034-4885/77/10/102602+]
Evolutionary dynamics in nature constitute an immensely complex non-equilibrium process. We review the application of physical models of evolution, by focusing on adaptation, extinction, and ecology. In each case, we examine key concepts by working through examples. Adaptation is discussed in the context of bacterial evolution, with a view toward the relationship between growth rates, mutation rates, selection strength, and environmental changes. Extinction dynamics for an isolated population are reviewed, with emphasis on the relation between timescales of extinction, population size, and temporally correlated noise. Ecological models are discussed by focusing on the effect of spatial interspecies interactions on diversity. Connections between physical processes—such as diffusion, turbulence, and localization—and evolutionary phenomena are highlighted.
*Stochastic Climate Theory and Modelling*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://arxiv.org/abs/1409.0423[+https://arxiv.org/abs/1409.0423+]
Stochastic methods are a crucial area in contemporary climate research and are increasingly being used in comprehensive weather and climate prediction models as well as reduced order climate models. Stochastic methods are used as subgrid-scale parameterizations as well as for model error representation, uncertainty quantification, data assimilation and ensemble prediction. The need to use stochastic approaches in weather and climate models arises because we still cannot resolve all necessary processes and scales in comprehensive numerical weather and climate prediction models. In many practical applications one is mainly interested in the largest and potentially predictable scales and not necessarily in the small and fast scales. For instance, reduced order models can simulate and predict large scale modes. Statistical mechanics and dynamical systems theory suggest that in reduced order models the impact of unresolved degrees of freedom can be represented by suitable combinations of deterministic and stochastic components and non-Markovian (memory) terms. Stochastic approaches in numerical weather and climate prediction models also lead to the reduction of model biases. Hence, there is a clear need for systematic stochastic approaches in weather and climate modelling. In this review we present evidence for stochastic effects in laboratory experiments. Then we provide an overview of stochastic climate theory from an applied mathematics perspective. We also survey the current use of stochastic methods in comprehensive weather and climate prediction models and show that stochastic parameterizations have the potential to remedy many of the current biases in these comprehensive models.
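The "deterministic plus stochastic" representation of unresolved scales described above appears in its simplest form in a Hasselmann-type model: fast "weather" fluctuations act as white noise on a slow climate variable that is damped deterministically. A minimal Euler–Maruyama sketch (the parameter values are illustrative assumptions, not taken from the review):

```python
import math
import random

def simulate_hasselmann(n_steps=200_000, dt=0.01, lam=1.0, sigma=0.5, seed=1):
    """Euler-Maruyama integration of dx = -lam * x dt + sigma dW: the
    simplest stochastic climate model, where fast 'weather' forcing is
    represented by white noise and slow feedbacks by linear damping."""
    rng = random.Random(seed)
    x, xs = 0.0, []
    sqdt = math.sqrt(dt)
    for _ in range(n_steps):
        x += -lam * x * dt + sigma * sqdt * rng.gauss(0.0, 1.0)
        xs.append(x)
    return xs

xs = simulate_hasselmann()
```

Despite purely random forcing, the trajectory settles into a red-noise stationary state with variance sigma²/(2·lam), the basic mechanism by which unresolved fast scales imprint low-frequency variability on slow variables.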
*Saving Human Lives: What Complexity Science and Information Systems can Contribute*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://arxiv.org/abs/1402.7011[+https://arxiv.org/abs/1402.7011+]
We discuss models and data of crowd disasters, crime, terrorism, war and disease spreading to show that conventional recipes, such as deterrence strategies, are often not effective and sufficient to contain them. Many common approaches do not provide a good picture of the actual system behavior, because they neglect feedback loops, instabilities and cascade effects. The complex and often counter-intuitive behavior of social systems and their macro-level collective dynamics can be better understood by means of complexity science. We highlight that a suitable system design and management can help to stop undesirable cascade effects and to enable favorable kinds of self-organization in the system. In such a way, complexity science can help to save human lives.
*The Matthew effect in empirical data*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://arxiv.org/abs/1408.5124[+https://arxiv.org/abs/1408.5124+]
The Matthew effect describes the phenomenon that in societies the rich tend to get richer and the powerful even more powerful. It is closely related to the concept of preferential attachment in network science, where the more connected nodes are destined to acquire many more links in the future than the auxiliary nodes. Cumulative advantage and success-breeds-success also both describe the fact that advantage tends to beget further advantage. The concept is behind the many power laws and scaling behaviour in empirical data, and it is at the heart of self-organization across social and natural sciences. Here we review the methodology for measuring preferential attachment in empirical data, as well as the observations of the Matthew effect in patterns of scientific collaboration, socio-technical and biological networks, the propagation of citations, the emergence of scientific progress and impact, career longevity, the evolution of common English words and phrases, as well as in education and brain development. We also discuss whether the Matthew effect is due to chance or optimisation, for example related to homophily in social systems or efficacy in technological systems, and we outline possible directions for future research.
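Preferential attachment, the network-science face of the Matthew effect, is easy to simulate: each new node links to existing nodes with probability proportional to their current degree, so early, well-connected nodes keep pulling ahead. A minimal sketch (the network size and parameters are illustrative):

```python
import random

def grow_network(n_nodes=2000, m=2, seed=42):
    """Grow a network by preferential attachment (Barabasi-Albert style):
    each new node attaches m links, choosing targets with probability
    proportional to their current degree."""
    rng = random.Random(seed)
    # Start from a small complete core of m + 1 nodes.
    degree = {i: m for i in range(m + 1)}
    # Stub list: each node appears once per unit of degree, so a uniform
    # draw from it is a degree-proportional draw.
    stubs = [i for i in range(m + 1) for _ in range(m)]
    for new in range(m + 1, n_nodes):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(stubs))
        degree[new] = m
        for t in targets:
            degree[t] += 1
            stubs.extend([new, t])
    return degree

degrees = grow_network()
```

The resulting degree distribution is heavily skewed: a few early hubs accumulate degrees far above the mean, the signature "rich get richer" pattern.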
2015
----
*Theory of Machines through the 20th century* - J. S. Rao
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://www.sciencedirect.com/science/article/pii/S0094114X14002006[+https://www.sciencedirect.com/science/article/pii/S0094114X14002006+]
This paper is written in honor of Professor Dr. F. R. Erskine Crossley, Professor Emeritus, Architect and Founder Member of the International Federation for Promotion of Theory of Mechanisms and Machines (IFToMM) and its first Vice President, on his reaching 100 years of age on 21st July 2015 after glorious service to the kinematics and kinetics community of the world. This paper deals with two parts, viz.: 1. my association with him in the 1960s during the formation of IFToMM, and 2. kinematics and kinetics, the way this subject developed in the second half of the 20th century and is practiced today; a subject matter very close to Professor Crossley's heart.
*Statistical inference for dynamical systems: A review* - K. McGoff et al
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://projecteuclid.org/euclid.ssu/1447165229[+https://projecteuclid.org/euclid.ssu/1447165229+]
The topic of statistical inference for dynamical systems has been studied widely across several fields. In this survey we focus on methods related to parameter estimation for nonlinear dynamical systems. Our objective is to place results across distinct disciplines in a common setting and highlight opportunities for further research.
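As a toy instance of parameter estimation for a nonlinear dynamical system, consider fitting the growth parameter r of a logistic map from an observed trajectory; because the map is linear in r, least squares gives a closed-form estimate. (The choice of map and all values here are illustrative assumptions, not an example taken from the survey.)

```python
def fit_logistic_r(xs):
    """Least-squares estimate of r in x_{n+1} = r * x_n * (1 - x_n)
    from an observed trajectory: r_hat = sum(y * g) / sum(g * g),
    where g_n = x_n * (1 - x_n) and y_n = x_{n+1}."""
    num = den = 0.0
    for x_now, x_next in zip(xs, xs[1:]):
        g = x_now * (1.0 - x_now)
        num += x_next * g
        den += g * g
    return num / den

# Generate a chaotic trajectory with known r, then recover it.
r_true, x = 3.9, 0.2
traj = []
for _ in range(500):
    traj.append(x)
    x = r_true * x * (1.0 - x)
r_hat = fit_logistic_r(traj)
```

With observation noise the estimate degrades, and the methods surveyed in the review (filtering, Bayesian, and likelihood-based approaches) become necessary; this sketch only shows the noiseless regression structure.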
*The vector algebra war: a historical perspective*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://arxiv.org/abs/1509.00501[+https://arxiv.org/abs/1509.00501+]
There are a wide variety of different vector formalisms currently utilized in engineering and physics. For example, Gibbs' three-vectors, Minkowski four-vectors, complex spinors in quantum mechanics, quaternions used to describe rigid body rotations and vectors defined in Clifford geometric algebra. With such a range of vector formalisms in use, it thus appears that there is as yet no general agreement on a vector formalism suitable for science as a whole. This is surprising in that one of the primary goals of nineteenth century science was to suitably describe vectors in three-dimensional space. This situation has also had the unfortunate consequence of fragmenting knowledge across many disciplines, and requiring a significant amount of time and effort in learning the various formalisms. We thus historically review the development of our various vector systems and conclude that Clifford's multivectors best fulfil the goal of describing vectorial quantities in three dimensions and providing a unified vector system for science.
*Graph theory in the geosciences*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://www.sciencedirect.com/science/article/pii/S0012825215000239[+https://www.sciencedirect.com/science/article/pii/S0012825215000239+]
Graph theory has long been used in quantitative geography and landscape ecology and has been applied in Earth and atmospheric sciences for several decades. Recently, however, there have been increased, and more sophisticated, applications of graph theory concepts and methods in geosciences, principally in three areas: spatially explicit modeling, small-world networks, and structural models of Earth surface systems. This paper reviews the contrasting goals and methods inherent in these approaches, but focuses on the common elements, to develop a synthetic view of graph theory in the geosciences.
Techniques applied in geosciences are mainly of three types: connectivity measures of entire networks; metrics of various aspects of the importance or influence of particular nodes, links, or regions of the network; and indicators of system dynamics based on graph adjacency matrices. Geoscience applications of graph theory can be grouped in five general categories: (1) Quantification of complex network properties such as connectivity, centrality, and clustering; (2) Tests for evidence of particular types of structures that have implications for system behavior, such as small-world or scale-free networks; (3) Testing dynamical system properties, e.g., complexity, coherence, stability, synchronization, and vulnerability; (4) Identification of dynamics from historical records or time series; and (5) spatial analysis. Recent and future expansion of graph theory in geosciences is related to general growth of network-based approaches. However, several factors make graph theory especially well suited to the geosciences: Inherent complexity, exploration of very large data sets, focus on spatial fluxes and interactions, and increasing attention to state transitions are all amenable to analysis using graph theory approaches.
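Two of the measures named above, whole-network connectivity and local clustering, can be computed directly from adjacency lists. A minimal sketch on a hypothetical five-node network (the graph itself is illustrative):

```python
from collections import deque

# Toy undirected network as adjacency sets; node names are illustrative.
graph = {
    "A": {"B", "C"}, "B": {"A", "C", "D"},
    "C": {"A", "B"}, "D": {"B", "E"}, "E": {"D"},
}

def is_connected(g):
    """Whole-network connectivity check via breadth-first search."""
    start = next(iter(g))
    seen, queue = {start}, deque([start])
    while queue:
        for nbr in g[queue.popleft()]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return len(seen) == len(g)

def clustering(g, node):
    """Local clustering coefficient: the fraction of a node's
    neighbour pairs that are themselves linked."""
    nbrs = list(g[node])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in g[nbrs[i]])
    return 2.0 * links / (k * (k - 1))
```

Geoscience applications replace the toy nodes with, e.g., stream junctions or climate grid cells, but the metrics are computed the same way.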
*Ice streams in the Laurentide Ice Sheet*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://www.sciencedirect.com/science/article/pii/S0012825215000203[+https://www.sciencedirect.com/science/article/pii/S0012825215000203+]
This paper presents a comprehensive review and synthesis of ice streams in the Laurentide Ice Sheet (LIS) based on a new mapping inventory that includes previously hypothesised ice streams and includes a concerted effort to search for others from across the entire ice sheet bed. The inventory includes 117 ice streams, which have been identified based on a variety of evidence including their bedform imprint, large-scale geomorphology/topography, till properties, and ice rafted debris in ocean sediment records. Despite uncertainty in identifying ice streams in hard bedrock areas, it is unlikely that any major ice streams have been missed. During the Last Glacial Maximum, Laurentide ice streams formed a drainage pattern that bears close resemblance to the present day velocity patterns in modern ice sheets. Large ice streams had extensive onset zones and were fed by multiple tributaries and, where ice drained through regions of high relief, the spacing of ice streams shows a degree of spatial self-organisation which has hitherto not been recognised. Topography exerted a primary control on the location of ice streams, but there were large areas along the western and southern margin of the ice sheet where the bed was composed of weaker sedimentary bedrock, and where networks of ice streams switched direction repeatedly and probably over short time scales.
As the ice sheet retreated onto its low relief interior, several ice streams show no correspondence with topography or underlying geology, perhaps facilitated by localised build-up of pressurised subglacial meltwater. They differed from most other ice stream tracks in having much lower length-to-width ratios and have no modern analogues. There have been very few attempts to date the initiation and cessation of ice streams, but it is clear that ice streams switched on and off during deglaciation, rather than maintaining the same trajectory as the ice margin retreated. We provide a first order estimate of changes in ice stream activity during deglaciation and show that around 30% of the margin was drained by ice streams at the LGM (similar to that for present day Antarctic ice sheets), but this decreases to 15% and 12% at 12 cal ka BP and 10 cal ka BP, respectively. The extent to which these changes in the ice stream drainage network represent a simple and predictable readjustment to a changing mass balance driven by climate, or internal ice dynamical feedbacks unrelated to climate (or both) is largely unknown and represents a key area for future work to address.
*A review of non-stationarities in climate variability of the last century*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://www.sciencedirect.com/science/article/pii/S0012825215000756[+https://www.sciencedirect.com/science/article/pii/S0012825215000756+]
Non-stationarities are inherent characteristics of the climate system and can be observed on different temporal and spatial scales. Non-stationarities in large-scale modes of climate variability of the Northern Hemisphere and the associated changes in the surface climate like modifications of the temperature and precipitation patterns are analysed. As major modes of climate variability the El Niño–Southern Oscillation phenomenon, the Pacific–North American pattern and the North Atlantic Oscillation are selected. The North Atlantic–European area is taken as an example to highlight non-stationarities in the atmospheric circulation and their impact on regional climate. The mechanisms and consequences of circulation-climate non-stationarities are discussed.
*Theoretical foundation of cyclostationary EOF analysis for geophysical and climatic variables*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://www.sciencedirect.com/science/article/pii/S0012825215300040[+https://www.sciencedirect.com/science/article/pii/S0012825215300040+]
Natural variability is an essential component of observations of all geophysical and climate variables. In principal component analysis (PCA), also called empirical orthogonal function (EOF) analysis, a set of orthogonal eigenfunctions is found from a spatial covariance function. These empirical basis functions often lend useful insights into physical processes in the data and serve as a useful tool for developing statistical methods. The underlying assumption in PCA is the stationarity of the data analyzed; that is, the covariance function does not depend on the origin of time. The stationarity assumption is often not justifiable for geophysical and climate variables even after removing such cyclic components as the diurnal cycle or the annual cycle. As a result, physical and statistical inferences based on EOFs can be misleading.
Some geophysical and climatic variables exhibit periodically time-dependent covariance statistics. Such a dataset is said to be periodically correlated or cyclostationary. A proper recognition of the time-dependent response characteristics is vital in accurately extracting physically meaningful modes and their space–time evolutions from data. This also has important implications in finding physically consistent evolutions and teleconnection patterns and in spectral analysis of variability—important goals in many climate and geophysical studies. In this study, the conceptual foundation of cyclostationary EOF (CSEOF) analysis is examined as an alternative to regular EOF analysis or other eigenanalysis techniques based on the stationarity assumption. Comparative examples and illustrations are given to elucidate the conceptual difference between the CSEOF technique and other techniques and the entailing ramification in physical and statistical inferences based on computational eigenfunctions.
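Conventional (stationary) EOF analysis, against which the CSEOF method is contrasted, amounts to an eigendecomposition of the spatial covariance matrix of anomalies. A minimal numpy sketch on synthetic data built from two known spatial patterns (all sizes, patterns, and amplitudes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: n_t time samples of a field at n_x spatial points,
# built from two fixed spatial patterns with independent time series.
n_t, n_x = 500, 40
x = np.linspace(0, np.pi, n_x)
pattern1, pattern2 = np.sin(x), np.sin(2 * x)
data = (np.outer(rng.standard_normal(n_t) * 3.0, pattern1)
        + np.outer(rng.standard_normal(n_t), pattern2)
        + 0.1 * rng.standard_normal((n_t, n_x)))

# Conventional EOF analysis: eigenvectors of the (assumed stationary)
# spatial covariance matrix, ordered by explained variance.
anom = data - data.mean(axis=0)
cov = anom.T @ anom / (n_t - 1)      # n_x by n_x covariance matrix
evals, evecs = np.linalg.eigh(cov)
order = np.argsort(evals)[::-1]
evals, eofs = evals[order], evecs[:, order]
pcs = anom @ eofs                    # principal-component time series
explained = evals / evals.sum()
```

Here the stationarity assumption holds by construction, so the leading EOF recovers the dominant imposed pattern; the point of CSEOF analysis is that when the covariance statistics are periodically time-dependent, fixed spatial eigenvectors of this kind mix physically distinct modes.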
*GRACE, time-varying gravity, Earth system dynamics and climate change*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://iopscience.iop.org/article/10.1088/0034-4885/77/11/116801[+https://iopscience.iop.org/article/10.1088/0034-4885/77/11/116801+]
Continuous observations of temporal variations in the Earth's gravity field have recently become available at an unprecedented resolution of a few hundreds of kilometers. The gravity field is a product of the Earth's mass distribution, and these data—provided by the satellites of the Gravity Recovery And Climate Experiment (GRACE)—can be used to study the exchange of mass both within the Earth and at its surface. Since the launch of the mission in 2002, GRACE data have evolved from an experimental measurement needing validation against ground truth into a respected tool for Earth scientists that places a firm bound on total mass change, and are now an important tool to help unravel the complex dynamics of the Earth system and climate change. In this review, we present the mission concept and its theoretical background, discuss the data and give an overview of the major advances GRACE has provided in Earth science, with a focus on hydrology, solid Earth sciences, glaciology and oceanography.
*Greenland ice sheet mass balance: a review*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://iopscience.iop.org/article/10.1088/0034-4885/78/4/046801[+https://iopscience.iop.org/article/10.1088/0034-4885/78/4/046801+]
Over the past quarter of a century the Arctic has warmed more than any other region on Earth, causing a profound impact on the Greenland ice sheet (GrIS) and its contribution to the rise in global sea level. The loss of ice can be partitioned into processes related to surface mass balance and to ice discharge, which are forced by internal or external (atmospheric/oceanic/basal) fluctuations. Regardless of the measurement method, observations over the last two decades show an increase in ice loss rate, associated with speeding up of glaciers and enhanced melting. However, both ice discharge and melt-induced mass losses exhibit rapid short-term fluctuations that, when extrapolated into the future, could yield erroneous long-term trends. In this paper we review the GrIS mass loss over more than a century by combining satellite altimetry, airborne altimetry, interferometry, aerial photographs and gravimetry data sets together with modelling studies. We revisit the mass loss of different sectors and show that they manifest quite different sensitivities to atmospheric and oceanic forcing. In addition, we discuss recent progress in constructing coupled ice-ocean-atmosphere models required to project realistic future sea-level changes.
*The physics of Martian weather and climate: a review*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://iopscience.iop.org/article/10.1088/0034-4885/78/12/125901[+https://iopscience.iop.org/article/10.1088/0034-4885/78/12/125901+]
The planet Mars hosts an atmosphere that is perhaps the closest in terms of its meteorology and climate to that of the Earth. But Mars differs from Earth in its greater distance from the Sun, its smaller size, its lack of liquid oceans and its thinner atmosphere, composed mainly of CO2. These factors give Mars a rather different climate to that of the Earth. In this article we review various aspects of the martian climate system from a physicist's viewpoint, focusing on the processes that control the martian environment and comparing these with corresponding processes on Earth. These include the radiative and thermodynamical processes that determine the surface temperature and vertical structure of the atmosphere, the fluid dynamics of its atmospheric motions, and the key cycles of mineral dust and volatile transport. In many ways, the climate of Mars is as complicated and diverse as that of the Earth, with complex nonlinear feedbacks that affect its response to variations in external forcing. Recent work has shown that the martian climate is anything but static, but is almost certainly in a continual state of transient response to slowly varying insolation associated with cyclic variations in its orbit and rotation. We conclude with a discussion of the physical processes underlying these long-term climate variations on Mars, and an overview of some of the most intriguing outstanding problems that should be a focus for future observational and theoretical studies.
*Soft matter food physics—the physics of food and cooking*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://iopscience.iop.org/article/10.1088/0034-4885/78/12/124602[+https://iopscience.iop.org/article/10.1088/0034-4885/78/12/124602+]
This review discusses the (soft matter) physics of food. Although food is generally not considered as a typical model system for fundamental (soft matter) physics, a number of basic principles can be found in the interplay between the basic components of foods, water, oil/fat, proteins and carbohydrates. The review starts with the introduction and behavior of food-relevant molecules and discusses food-relevant properties and applications from their fundamental (multiscale) behavior. Typical food aspects from 'hard matter systems', such as chocolates or crystalline fats, to 'soft matter' in emulsions, dough, pasta and meat are covered and can be explained on a molecular basis. An important conclusion is the point that the macroscopic properties and the perception are defined by the molecular interplay on all length and time scales.
*On Holo-Hilbert spectral analysis: a full informational spectral representation for nonlinear and non-stationary data*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://royalsocietypublishing.org/doi/10.1098/rsta.2015.0206[+https://royalsocietypublishing.org/doi/10.1098/rsta.2015.0206+]
The Holo-Hilbert spectral analysis (HHSA) method is introduced to cure the deficiencies of traditional spectral analysis and to give a full informational representation of nonlinear and non-stationary data. It uses a nested empirical mode decomposition and Hilbert–Huang transform (HHT) approach to identify intrinsic amplitude and frequency modulations often present in nonlinear systems. Comparisons are first made with traditional spectrum analysis, which usually achieved its results through convolutional integral transforms based on additive expansions of an a priori determined basis, mostly under linear and stationary assumptions. Thus, for non-stationary processes, the best one could do historically was to use the time–frequency representations, in which the amplitude (or energy density) variation is still represented in terms of time. For nonlinear processes, the data can have both amplitude and frequency modulations (intra-mode and inter-mode) generated by two different mechanisms: linear additive or nonlinear multiplicative processes. As all existing spectral analysis methods are based on additive expansions, either a priori or adaptive, none of them could possibly represent the multiplicative processes. While the earlier adaptive HHT spectral analysis approach could accommodate the intra-wave nonlinearity quite remarkably, it remained that any inter-wave nonlinear multiplicative mechanisms that include cross-scale coupling and phase-lock modulations were left untreated. To resolve the multiplicative processes issue, additional dimensions in the spectrum result are needed to account for the variations in both the amplitude and frequency modulations simultaneously. HHSA accommodates all the processes: additive and multiplicative, intra-mode and inter-mode, stationary and non-stationary, linear and nonlinear interactions. The Holo prefix in HHSA denotes a multiple dimensional representation with both additive and multiplicative capabilities.
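The core difficulty HHSA addresses, that additive expansions smear a multiplicative process across several components, can be seen in a two-line experiment: an amplitude-modulated carrier is a single physical process, yet its Fourier spectrum shows three separate lines, the carrier plus two sidebands at the carrier frequency plus and minus the modulation frequency. A minimal numpy sketch with illustrative frequencies:

```python
import numpy as np

# A multiplicatively modulated signal: a fast carrier whose amplitude is
# modulated by a slow envelope (frequencies in Hz over a 1 s record).
n, dt = 4096, 1.0 / 4096
t = np.arange(n) * dt
f_carrier, f_env = 200.0, 10.0
signal = ((1.0 + 0.5 * np.cos(2 * np.pi * f_env * t))
          * np.cos(2 * np.pi * f_carrier * t))

# An additive (Fourier) expansion represents this single process as
# three separate lines: the carrier and two sidebands at f_c +/- f_env.
spec = np.abs(np.fft.rfft(signal)) / n
peaks = sorted(int(i) for i in np.argsort(spec)[::-1][:3])
```

No additive basis can reassemble the three lines into the one envelope-times-carrier process; recovering the modulation itself requires an extra spectral dimension, which is what the holo-spectrum provides.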