Lightning in Alpine regions is associated with events such as thunderstorms,
extreme precipitation, high wind gusts, flash floods, and debris flows.
We present a statistical approach to predict lightning counts based on
numerical weather predictions. Lightning counts are considered on a grid
with 18 km mesh size. Skilful prediction is obtained for a forecast horizon
of 5 days over complex terrain.
This work combines current temperature observations with climate models to project future temperatures, e.g., 100 years into the future. We accomplish this by modeling temperature as a smooth function of time, both in the seasonal variation and in the annual trend. These smooth functions are estimated at multiple quantiles, all of which are projected into the future. We hope that this work can serve as a template for projecting other climate variables into the future.
A new study indicates that heatwaves in India will become more frequent and last longer with global warming. Its results were derived from a large number of global climate models, and the calculations differed from previous studies in the way they included advanced statistical theory. The projected changes in the Indian heatwaves will have a negative consequence for wheat crops in India.
This paper presents the analysis of data from small-scale laboratory experimental smouldering fires that were digitally video-recorded. The video images of these fires bear a resemblance to remotely sensed images of wildfires and provide an opportunity to fit and assess a spatial model for fire spread that attempts to account for uncertainty in fire growth. We found that the fitting method is feasible, and the spatial model provides a suitable mathematical description of the fire spread process.
As an example of how to robustly determine climate model uncertainty, the paper describes an experiment that perturbs the initial ocean-temperature conditions of a climate model. A total of 30 perturbed simulations are used (via an emulator) to estimate spatial uncertainties for temperature and precipitation fields. We also examine (using maximum covariance analysis) how ocean temperatures affect air temperatures and precipitation over land and the importance of feedback processes.
Snowfall forecasts are important for a range of economic sectors as well as for the safety of people and infrastructure, especially in mountainous regions. This work presents a novel statistical approach to provide accurate forecasts for fresh snow amounts and the probability of snowfall combining data from various sources. The results demonstrate that the new approach is able to provide reliable high-resolution hourly snowfall forecasts for the eastern European Alps up to 3 days ahead.
Low-visibility conditions reduce the flight capacity of airports and can lead to delays and supplemental costs for airlines and airports. In this study, the forecasting skill and most important model predictors of airport-relevant low visibility are investigated for multiple flight planning horizons with different statistical models.
Accurate wind forecasts are of great importance for decision-making processes in today's society. This work presents a novel probabilistic post-processing method for wind vector forecasts employing a bivariate Gaussian response distribution. To capture a possible mismatch between the predicted and observed wind direction caused by location-specific properties, the approach incorporates a smooth rotation of the wind direction conditional on the season and the forecasted ensemble wind direction.
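As an illustration of the bivariate-Gaussian idea, fitting the response distribution to an ensemble of wind components amounts to estimating a mean vector and covariance matrix. This is a minimal sketch only: the function names are hypothetical, and the paper's smooth seasonal rotation of wind direction is not shown.

```python
import math

def fit_bivariate_gaussian(u, v):
    """Fit the mean vector and covariance matrix of a bivariate
    Gaussian to ensemble wind components (u: zonal, v: meridional)."""
    n = len(u)
    mu_u, mu_v = sum(u) / n, sum(v) / n
    var_u = sum((x - mu_u) ** 2 for x in u) / (n - 1)
    var_v = sum((y - mu_v) ** 2 for y in v) / (n - 1)
    cov = sum((x - mu_u) * (y - mu_v) for x, y in zip(u, v)) / (n - 1)
    return (mu_u, mu_v), ((var_u, cov), (cov, var_v))

def mean_wind_direction(mu_u, mu_v):
    """Meteorological direction (degrees, the direction the wind
    blows FROM) of the fitted mean wind vector."""
    return math.degrees(math.atan2(-mu_u, -mu_v)) % 360
```

A calibrated forecast then reads probabilities off the fitted distribution; the published method additionally conditions a rotation of the direction on the season and the predicted ensemble wind direction.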
This article presents a method for improving probabilistic air temperature forecasts, particularly at Alpine sites. Using a nonsymmetric forecast distribution, the probabilistic forecast quality can be improved with respect to the common symmetric Gaussian distribution used. Furthermore, a long-term training approach of 3 years is presented to ensure the stability of the regression coefficients. The research was based on a PhD project on building an automated forecast system for northern Italy.
Climate data contain spurious breaks, e.g., by relocation of stations, which makes it difficult to infer the secular temperature trend. Homogenization algorithms use the difference time series of neighboring stations to detect and eliminate this spurious break signal. For low signal-to-noise ratios, i.e., large distances between stations, the correct break positions may not only remain undetected, but segmentations explaining mainly the noise can be erroneously assessed as significant and true.
In this work, we combine the information from a complex and a simple atmospheric model to efficiently build a statistical representation (an emulator) of the complex model and to study the relationship between them. Thanks to the improved efficiency, this process is now feasible for complex models, which are slow and costly to run. The constructed emulator provides approximations of the model output, allowing various analyses to be performed without the need to run the complex model again.
We present new probabilistic estimates of model parameters in the MIT Earth System Model using more recent data and an updated method. Model output is compared to observed climate change to determine which sets of model parameters best simulate the past. In response to increasing surface temperatures and accelerated heat storage in the ocean, our estimates of climate sensitivity and ocean diffusivity are higher. Using a new interpolation algorithm results in smoother probability distributions.
Extreme temperature and precipitation events in Australia have caused significant socio-economic and environmental impacts. Determining the factors contributing to these extremes is an active area of research. This paper describes a set of studies that have examined the causes of extreme climate events in recent years in Australia. Ideally, this review will be useful for the application of these extreme event attribution approaches to climate and weather extremes occurring elsewhere.
This paper investigates the influence of atmospheric rivers on spatial coherence of extreme precipitation under a changing climate. We use our TECA software developed for detecting atmospheric river events and apply statistical techniques based on extreme value theory to characterize the spatial dependence structure between precipitation extremes within the events. The results show that extreme rainfall caused by atmospheric river events is less spatially correlated under the warming scenario.
Millions of people worldwide are at risk of coastal flooding, and this number will increase as the climate continues to change. This study analyzes how climate change affects future flood hazards. A new model that uses multiple climate variables for flood hazard is developed. For the case study of Norfolk, Virginia, the model predicts 23 cm higher flood levels relative to previous work. This work shows the importance of accounting for climate change in effectively managing coastal risks.
The attribution of classes of extreme events, such as heavy precipitation or heatwaves, relies on the estimate of small probabilities (with and without climate change). Such events are connected to the large-scale atmospheric circulation. This paper links such probabilities with properties of the atmospheric circulation by using a Bayesian decomposition. We test this decomposition on a case of extreme precipitation in the UK, in January 2014.
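The decomposition rests on the law of total probability: the event probability is a mixture of conditional probabilities over circulation states. A toy numerical sketch follows; the regime names and all numbers are hypothetical, chosen only to show the arithmetic.

```python
# Law-of-total-probability decomposition of an extreme-event probability
# over atmospheric circulation regimes.  Regime names and probabilities
# below are hypothetical, for illustration only.
p_circ = {"zonal": 0.6, "blocked": 0.4}          # P(circulation regime)
p_ext_given = {"zonal": 0.001, "blocked": 0.02}  # P(extreme | regime)

# P(extreme) = sum over regimes of P(extreme | regime) * P(regime)
p_extreme = sum(p_circ[c] * p_ext_given[c] for c in p_circ)
print(p_extreme)  # about 0.0086
```

Comparing such mixtures with and without climate change separates how much of a probability shift comes from changed circulation frequencies versus changed conditional intensities.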
We show that ostensibly empirical methods of analyzing trends in the global mean temperature record, which appear to de-emphasize assumptions, can nevertheless produce misleading inferences about trends and associated uncertainty. We illustrate how a simple but physically motivated trend model can provide better-fitting and more broadly applicable results, and show the importance of adequately characterizing internal variability for estimating trend uncertainty.
There is considerable demand for accurate air quality information in human health analyses. The sparsity of ground monitoring stations across the US motivates the need for advanced statistical models to predict air quality metrics. We propose a statistical model that jointly models ground-monitoring station data and satellite-obtained data allowing for temporal and spatial misalignment, missingness, and spatially and temporally varying correlation to enhance prediction of particulate matter.
We assess the mean temperature effect of global and regional climate model combinations for the North American Regional Climate Change Assessment Program using varying classes of linear regression models, including possible interaction effects. We use both pointwise and simultaneous inference procedures to identify regions where global and regional climate model effects differ. We conclusively show that accounting for multiple comparisons is important for making proper inference.
We present a statistical framework for the reconstruction of historic temperature patterns from sparse, irregular data collected from observer stations. A common statistical technique for climate reconstruction uses modern-era data to define a set of temperature patterns from which the historic spatial patterns are estimated. We present a framework for exploring different assumptions about the sets of patterns used in the reconstruction while providing statistically rigorous estimates.
A deep learning convolutional neural network (DL-FRONT) was around 90 % successful in determining the locations of weather fronts over North America when compared against front locations determined manually by forecasters. DL-FRONT detects fronts using maps of air pressure, temperature, humidity, and wind from historical observations or climate models. DL-FRONT makes it possible to do science that was previously impractical because manual front identification would take too much time.
Several climate models are evaluated under current climate conditions to determine how well they are able to capture frequencies of severe-storm environments (conditions conducive for the formation of hail storms, tornadoes, etc.). They are found to underpredict the spatial extent of high-frequency areas (such as tornado alley), as well as underpredict the frequencies in the areas.
Several multi-site stochastic generators of zonal and meridional components of wind are proposed in this paper. Various questions are explored, such as the modeling of the regime in a multi-site context, the extraction of relevant clusterings from extra variables or from the local wind data, and the link between weather types extracted from wind data and large-scale weather regimes. We also discuss the relative advantages of hidden and observed regime-switching models.
Trends in gridded temperature data are commonly assessed independently for each grid cell, ignoring spatial coherencies. This may severely affect the interpretation of the results. This article proposes a space–time model for temperatures that allows for joint assessments of the trend across locations. In a case study of summer season trends in Europe, it is found that the region with a significant trend under spatial coherency is vastly different from that under independent assessments.
Visibility is estimated for the 21st century using global and regional climate model output. A baseline decrease in visibility in the Arctic (10 %) is more notable than in the North Atlantic (< 5 %). We develop an adjustment that yields greater consistency among models and explore the justification of our ad hoc adjustment toward ship observations during the historical period. Baseline estimates are found to be sensitive to the representation of temperature and humidity.
Climate influences on hurricane intensification are investigated by averaging hourly intensification rates over the period 1975–2014 in 8° by 8° latitude–longitude grid cells. The statistical effects of hurricane intensity, sea-surface temperature (SST), El Niño–Southern Oscillation (ENSO), the North Atlantic Oscillation (NAO), and the Madden–Julian Oscillation (MJO) are quantified. Intensity, SST, and NAO had a positive effect on intensification rates. The NAO effect should be further studied.
Here, we propose a classification methodology of various space-time atmospheric datasets into discrete air mass groups homogeneous in temperature and humidity from a probabilistic point of view: both the classification process and the data are probabilistic. Unlike conventional classification algorithms, this methodology provides the probability of belonging to each class as well as the corresponding uncertainty, which can be used in various applications.
Between atmospheric temperatures of 0 and −38 °C, clouds contain ice crystals, super-cooled liquid droplets, or a mixture of both, impacting how they influence the atmospheric energy budget and challenging our ability to simulate climate change. Better cloud-phase measurements are needed to improve simulations. We demonstrate how a Bayesian method to identify cloud phase can improve on currently used methods by including information from multiple measurements and probability estimates.
This paper proposes a new generalisation of the block bootstrap methodology, which allows for any positive real number as expected block size. We use this bootstrap to determine the p values of a homogeneity test for copulas. The methods are applied to a temperature data set; we found significant changes in the dependence structure between the standardised temperature values of pairs of observation points within the Carpathian Basin.
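One standard construction with a real-valued mean block length is the stationary bootstrap of Politis and Romano, in which block lengths are geometric with the desired expectation. The sketch below is an illustration of that idea only (it requires a mean block length of at least 1), not the paper's exact generalisation.

```python
import random

def stationary_bootstrap(series, expected_block_size, rng=None):
    """Circular stationary bootstrap (Politis & Romano 1994): blocks
    have geometrically distributed lengths with mean
    `expected_block_size`, which may be any real number >= 1."""
    if expected_block_size < 1:
        raise ValueError("expected block size must be >= 1")
    rng = rng or random.Random()
    n = len(series)
    p = 1.0 / expected_block_size   # probability of starting a new block
    out = []
    i = rng.randrange(n)
    for _ in range(n):
        out.append(series[i])
        if rng.random() < p:
            i = rng.randrange(n)    # jump to a new random block start
        else:
            i = (i + 1) % n         # continue the current block, circularly
    return out
```

Resampled series of this kind can supply the null distribution of a test statistic, from which bootstrap p values are read off.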
In this paper, we introduce a method for expressing the agreement between climate model output time series and time series of observational data as a probability value. Our metric is an estimate of the probability that one would obtain two time series as similar as the ones under consideration, if the climate model and the observed series actually shared the same underlying climate signal.
Sea surface temperature (SST) is a key component of global climate models, particularly in the tropical Pacific Ocean where El Niño and La Niña events have worldwide implications. In our paper, we analyse monthly SSTs in the Niño 3.4 region and find a transformation that removes a spatial mean-variance dependence for each month. For 10 out of 12 months of the year, the transformed monthly time series gave forecasts that were as accurate as, or more accurate than, those from the untransformed time series.
Event attribution studies can now be performed at short notice. We document a protocol developed by the World Weather Attribution group. It includes choices of which events to analyse, the event definition, observational analysis, model evaluation, multi-model multi-method attribution, hazard synthesis, vulnerability and exposure analysis, and communication procedures. The protocol will be useful for future event attribution studies and as a basis for an operational attribution service.
We present a method for estimating intrinsic model error in a model of the California Current System. The estimated model error covariance matrix is used in the weak-constraint formulation of the Regional Ocean Modeling System four-dimensional variational data assimilation system, and the circulation estimates computed in this way show demonstrable improvement over those computed in the strong-constraint formulation, where intrinsic model error is not taken into account.
Two-dimensional wavelet transformations can be used to analyse the local structure of predicted and observed precipitation fields and allow for a forecast verification which focuses on the spatial correlation structure alone. This paper applies the novel concept to real numerical weather predictions and radar observations. Systematic similarities and differences between nature and simulation can be detected, localized in space and attributed to particular weather situations.
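To illustrate the kind of local decomposition involved, here is a minimal pure-Python sketch of one level of a 2D Haar wavelet transform, the simplest wavelet; the paper's choice of wavelet and its verification statistics may differ.

```python
def haar2d_level(field):
    """One level of a 2D Haar wavelet transform of a 2r x 2c field,
    returning approximation plus horizontal, vertical, and diagonal
    detail coefficients: the local-structure information on which
    scale-dependent verification of precipitation fields is based."""
    rows, cols = len(field), len(field[0])
    approx, hdet, vdet, ddet = [], [], [], []
    for i in range(0, rows, 2):
        a_row, h_row, v_row, d_row = [], [], [], []
        for j in range(0, cols, 2):
            a, b = field[i][j], field[i][j + 1]
            c, d = field[i + 1][j], field[i + 1][j + 1]
            a_row.append((a + b + c + d) / 2)  # local mean (approximation)
            h_row.append((a - b + c - d) / 2)  # horizontal detail
            v_row.append((a + b - c - d) / 2)  # vertical detail
            d_row.append((a - b - c + d) / 2)  # diagonal detail
        approx.append(a_row); hdet.append(h_row)
        vdet.append(v_row); ddet.append(d_row)
    return approx, hdet, vdet, ddet
```

Applying the transform recursively to the approximation yields coefficients at successively coarser scales, whose spatial correlation structure can then be compared between forecast and radar fields.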
Uncertainties in land model projections are important to understand in order to build confidence in Earth system modeling. In this paper, we introduce a framework for estimating uncertain land model parameters with machine learning. This method increases the computational efficiency of this process relative to traditional hand tuning approaches and provides objective methods to assess the results. We further identify key processes and parameters that are important for accurate land modeling.
State-of-the-art statistical methods are applied to postprocess an ensemble of numerical forecasts for vertical profiles of air temperature. These profiles are important tools in weather forecasting as they show the stratification and the static stability of the atmosphere. Flexible regression models combined with the multi-dimensionality of the data lead to better calibration and representation of uncertainty of the vertical profiles.
In climate change attribution studies, one often seeks to maximize a signal-to-noise ratio, where the signal is the anthropogenic response and the noise is climate variability. A solution commonly used in detection and attribution (D&A) studies thus far consists of projecting the signal on the subspace spanned by the leading eigenvectors of climate variability. Here I show that this approach is vastly suboptimal – in fact, it leads instead to maximizing the noise-to-signal ratio. I then describe an improved solution.
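The projection step in question can be sketched in a few lines. This is a generic illustration with a hypothetical function name and a toy basis, assuming the leading eigenvectors form an orthonormal set; the improved solution the abstract describes is not shown here.

```python
def project_onto(signal, basis):
    """Project `signal` onto the subspace spanned by the orthonormal
    `basis` vectors (e.g. leading eigenvectors/EOFs of climate
    variability).  Pure-Python sketch; names are illustrative."""
    out = [0.0] * len(signal)
    for b in basis:
        coef = sum(s * e for s, e in zip(signal, b))  # <signal, b>
        for i, e in enumerate(b):
            out[i] += coef * e
    return out

# Toy example: project a 2-D "signal" onto the first coordinate axis.
projected = project_onto([3.0, 4.0], [[1.0, 0.0]])
```

Because the basis spans the directions of largest climate variability, the retained component is precisely the part of the signal lying where the noise is strongest, which is the suboptimality the abstract points out.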
Extremes in weather can have lasting effects on human health and resource consumption. Studying the recurrence of these events on a regional scale can improve response times and provide insight into a changing climate. We introduce a set of clustering tools that allow for regional clustering of weather recordings from stations across Germany. We use these clusters to form regional models of summer temperature extremes and find an increase in the mean from 1960 to 2018.
Scientists often are confronted with the question of whether two time series are statistically distinguishable. This paper proposes a test for answering this question. The basic idea is to fit each time series to a time series model and then test whether the parameters in that model are equal. If a difference is detected, then new ways of visualizing those differences are proposed, including a clustering technique and a method based on optimal initial conditions.
Very short-term forecasting, called nowcasting, is used to monitor storms that pose a significant threat to people and infrastructure. These threats include lightning strikes, hail, heavy precipitation, strong winds, and possible tornadoes. This paper proposes a fast approach to nowcasting lightning threats using simple statistical methods. The proposed model produces fast nowcasts that are more accurate than those of a competing, computationally expensive approach.
Evaluation of modern high-resolution global climate models often does not account for the geographic location of the underlying weather station data. In this paper, we quantify the impact of geographic sampling on the relative performance of climate model representations of precipitation extremes over the United States. We find that properly accounting for the geographic sampling of weather stations can significantly change the assessment of model performance.
We have developed a new statistical method to describe how a severe weather event, such as a heatwave, may have been influenced by climate change. Our method incorporates both observations and data from various climate models to reflect climate model uncertainty. Our results show that both the probability and the intensity of the French July 2019 heatwave have increased significantly in response to human influence. We find that this heatwave might not have been possible without climate change.
We have developed a novel and fast statistical method for diagnosing effective radiative forcing (ERF), a measure of the net effect of greenhouse gas emissions on Earth's energy budget. Our method works by inverting a recursive digital filter energy balance representation of global climate models and has been successfully validated using simulated data from UK Met Office climate models. We have estimated time series of historical ERF by applying our method to the global temperature record.