Scientists Launch Hurricane-Tracking Satellites

NASA today launched a new kind of weather observation system that will provide information to help better monitor and forecast tropical cyclones around the world. The eight-microsatellite constellation of observatories was the brainchild of a group of scientists from the University of Michigan.  UM Rosenstiel School Professor Sharan Majumdar and Dr. Robert Atlas, Director of NOAA’s Atlantic Oceanographic and Meteorological Laboratory (AOML), were tasked with assembling and guiding a team of researchers to conduct data impact studies on hurricane model analyses and predictions.


After three days of delays, the Cyclone Global Navigation Satellite System (CYGNSS) was carried aloft from Cape Canaveral Air Force Station in Florida aboard Orbital ATK’s Stargazer L-1011 aircraft, with the spacecraft riding inside a three-stage Pegasus XL rocket, on Thursday, December 15.  At approximately 40,000 feet over the western Atlantic Ocean, the Pegasus rocket was released from the aircraft at 8:38 a.m. EST and ignited in mid-air to carry all eight CYGNSS spacecraft into orbit around Earth.

Once in orbit, CYGNSS will make frequent and accurate measurements of ocean surface winds throughout the lifecycle of tropical storms and hurricanes. The constellation of eight observatories will measure surface winds in and near a hurricane’s inner core, including regions beneath the eyewall and intense inner rainbands that previously could not be measured from space because of the heavy precipitation.

“The University of Miami and NOAA AOML team has demonstrated the potential for CYGNSS data to improve numerical analyses and predictions of the surface wind structure in tropical cyclones.  We expect that the investment in new microsatellite technologies such as CYGNSS will pave the way for better predictions of tropical cyclone impacts to benefit society around the globe,” said Majumdar.

Majumdar and colleagues wrote about the scientific motivation and the primary science goal of the mission, which is to better understand how and why winds in hurricanes intensify, in a March 2016 article in the Bulletin of the American Meteorological Society.

The local CYGNSS research team included Sharan Majumdar and Brian McNoldy from the UM Rosenstiel School, Robert Atlas from NOAA AOML, and Bachir Annane, Javier Delgado and Lisa Bucci (also a UM graduate student) from the UM Rosenstiel School’s Cooperative Institute for Marine and Atmospheric Science (CIMAS).  They have been working with simulated CYGNSS data since early 2013 to demonstrate and maximize the data’s impact in hurricane forecast models through the use of an OSSE, or Observing System Simulation Experiment, summarized by McNoldy in a NASA blog post.

Watch CYGNSS overview animation

Watch the launch!

Learn more about the hurricane-probing mission on NASA’s website.

–UM Rosenstiel School Communications Office

Water, Water, Everywhere: Sea Level Rise in Miami

Like many low-lying coastal cities around the world, Miami is threatened by rising seas.  Whether the cause is mostly anthropogenic or natural, the end result is indisputable: sea level is rising.  It is not a political issue, nor does it matter whether anyone believes in it.

Tidal flooding on the corner of Dade Blvd and Purdy Ave in Miami Beach in 2010. (Steve Rothaus, Miami Herald)

The mean sea level has risen noticeably in the Miami and Miami Beach areas just in the past decade.  Flooding events are getting more frequent, and some areas flood during particularly high tides now: no rain or storm surge necessary.  Perhaps most alarming is that the rate of sea level rise is accelerating.

Diving Into Data

Certified measurements of sea level have been taken at the University of Miami’s Rosenstiel School on Virginia Key since 1996 (Virginia Key is a small island just south of Miami Beach and east of downtown Miami)[1].  Simple linear trends drawn through annual averages of all high tides, low tides, and the mean sea level are shown below, and all three lines are about 4.2 inches (11 cm) higher in 2015 than they were in 1996.

Annual averages of high tide, low tide, and mean sea level, with linear trend lines drawn through them. The trend line slopes for each time series are labeled. [This chart was updated in Jan 2016 to include verified data through the end of 2015.]

Zooming in to daily data, let’s look at two representative months (nothing unique about them): May 1996 and May 2014.  Tidal predictions are calculated to high precision using dozens of known astronomical factors, but do not account for non-astronomical factors such as weather or sea level rise.  In 1996, the observed water levels were typically close to the predicted values… sometimes slightly higher, sometimes slightly lower due to meteorological influences.  In May 2014, however, there was still variability, but the tides were always higher than predicted — the baseline, or mean sea level, has increased.

Predicted (blue) and observed (green) high/low water heights at Virginia Key, May1-May 31. (NOAA/NOS)


For the following chart, only the daily high water mark (the higher of the two high tides) for 20 years is plotted.  The water levels at high tides are the most relevant because that is when flooding events are more prone to occur.  For reference, the average seasonal cycle is shown by the thick black line and is calculated using a 31-day running mean of all 20 years of daily data.  The daily high tide values are plotted with a thin blue line, and the thin red line is a smoothed version of the blue line (91-day running mean).  The highest water marks are annotated… they had historically been associated with the passage of hurricanes, until September 2015 when a very high water level was reached without a storm nearby.

[This chart was updated in Jan 2016 to include verified data through the end of 2015.]
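The smoothing described above is straightforward to reproduce. The sketch below is illustrative only: the data are a synthetic stand-in, not the actual Virginia Key record, and the function name is mine.

```python
import numpy as np

def running_mean(x, window):
    """Centered running mean; only windows fully inside the series are kept."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

# Synthetic stand-in for 20 years of daily high-tide data (inches):
# a seasonal cycle, a slow ~0.22 in/yr rise, and day-to-day weather noise.
rng = np.random.default_rng(0)
days = np.arange(20 * 365)
high_tide = (2.0 * np.sin(2 * np.pi * days / 365.25)
             + 0.22 * days / 365.25
             + rng.normal(0.0, 1.0, days.size))

seasonal_smooth = running_mean(high_tide, 31)  # 31-day mean (black-line analogue)
heavier_smooth = running_mean(high_tide, 91)   # 91-day mean (red-line analogue)
```

The 91-day mean suppresses most of the weather-driven scatter while leaving the seasonal cycle and the long-term trend visible.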

As alluded to in the introduction, sea level is not just rising here; the rate of rise is accelerating.  If the seasonal cycle (black line in the figure above) is subtracted from the data, as well as the mean of all of the data, a series of trendlines can be generated (see figure below).  Removing the dominant annual and semi-annual cycles from the time series leaves only daily variability, miscellaneous cycles, and trends.  The data are color-coded by arbitrary 5-year periods (red is 2011-2015, green is 2006-2010, blue is 2001-2005, and purple is 1996-2000). The trendlines are drawn through the past 5 years (red), 10 years (green), 15 years (blue), and 20 years (purple).  There is plenty of daily and intra-annual variability of course, but what stands out is the increasing slope of the linear trend in more recent periods.  Over the past 20 years, the average high tide has increased by 0.22 inches/year, which agrees very closely with the trend shown in the first chart using annual averages (0.21 inches/year).  However, notice that the trends over shorter and shorter periods become increasingly rapid.

Be advised that simple linear trends of noisy time series are not reliable for extrapolating very far into the future, nor are the trend values reliable for shorter time periods.  Longer data records allow for greater confidence in a linear trend, but cannot account for accelerating rates.

[This chart was updated in Jan 2016 and includes verified data through the end of 2015.]
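The nested-trend construction (subtract the seasonal cycle and the overall mean, then fit lines over the most recent 5, 10, 15, and 20 years) can be sketched in a few lines. The series here is an idealized accelerating curve plus noise, not the real tide-gauge record:

```python
import numpy as np

# Idealized daily anomaly series: a quadratic (accelerating) rise plus noise,
# standing in for the deseasonalized Virginia Key data.
rng = np.random.default_rng(1)
yrs = np.arange(20 * 365) / 365.25          # time in years, 0-20
anomaly = 0.01 * yrs**2 + rng.normal(0.0, 1.0, yrs.size)
anomaly -= anomaly.mean()                   # remove the overall mean

# Fit linear trends over the most recent 5, 10, 15, and 20 years.
slopes = {}
for span in (5, 10, 15, 20):
    tail = slice(-int(span * 365), None)
    slopes[span] = np.polyfit(yrs[tail], anomaly[tail], 1)[0]  # inches/year
    print(f"past {span:2d} years: {slopes[span]:+.2f} in/yr")
```

Because the underlying series accelerates, the shorter, more recent windows yield steeper slopes, mirroring the behavior in the figure.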

Clearly, the 0.92 inches/year rate for the past five years is too high, but what would a representative recent trend be?  If we use data from the first chart (annual averages) and, rather than relying on a trend through just the past five years, use the past five five-year periods from each of the three time series (2011-2015, 2010-2014, 2009-2013, 2008-2012, 2007-2011), then we average out some of the interannual variability and eliminate the dependence on specific endpoints.  The average trend of the three time series (high tide, low tide, and mean sea level) over the past five five-year periods comes out to 0.36 inches/year, nearly double that of the full twenty-year record.

Linear trends of annual average data over the past five five-year periods. [This table was added in Jan 2016 to include verified data through the end of 2015.]
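The five-window averaging is simple to reproduce. In this sketch the annual series is an idealized quadratic rise with illustrative numbers, not the actual station data:

```python
import numpy as np

years = np.arange(1996, 2016)               # 20 annual averages
t = years - years[0]
msl = 0.01 * t**2                           # idealized accelerating rise, inches

def five_year_trend(end_year):
    """Linear trend (in/yr) over the 5-year window ending in end_year."""
    mask = (years > end_year - 5) & (years <= end_year)
    return np.polyfit(years[mask], msl[mask], 1)[0]

recent_rate = np.mean([five_year_trend(y) for y in range(2011, 2016)])
full_rate = np.polyfit(years, msl, 1)[0]
print(f"average of five 5-year trends: {recent_rate:.2f} in/yr")  # 0.30
print(f"full 20-year trend:            {full_rate:.2f} in/yr")    # 0.19
```

Even in this idealized series, the average of the five most recent five-year trends (0.30 in/yr) runs well above the full-record trend (0.19 in/yr), the same effect described in the text.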


The Miami metropolitan region has the greatest amount of exposed financial assets and 4th-largest population vulnerable to sea level rise in the world.  The only other cities with a higher combined (financial assets and population) risk are Hong Kong and Calcutta [2].

Using a sea level rise projection of 3 feet by 2100 from the 5th IPCC Report [3] and elevation/inundation data, the map below shows the resulting inundation.  The areas shaded in blue would be flooded during routine high tides, and very easily flooded by rain during lower tides.  Perhaps the projection is too aggressive, but maybe not… we simply do not know with high confidence what sea level will do in the coming century.  But we do know that it is rising and showing no sign of slowing down.

Map showing areas of inundation by three feet of sea level rise, which is projected to occur by 2100. (NOAA)


An Attack from Below

In addition to surface flooding, there is trouble brewing below the surface too.  That trouble is called saltwater intrusion, and it is already taking place in coastal communities across south Florida. Saltwater intrusion occurs when saltwater from the ocean or bay advances farther into the porous limestone aquifer.  That aquifer also happens to supply about 90% of south Florida’s drinking water.  Municipal wells pump fresh water up from the aquifer for residential and agricultural use, but some cities have already had to shut down wells because the water being pumped up was brackish (for example, Hallandale Beach has already closed 6 of its 8 wells due to saltwater contamination[4]).

Schematic drawing of saltwater intrusion. Sea level rise, water use, and rainfall all control the severity of the intrusion.


The wedge of salt water advances and retreats naturally during the dry and rainy seasons, but the combination of fresh water extraction and sea level rise is drawing that wedge closer to land laterally and vertically.

In other words, the water table rises as sea level rises, so with higher sea level, the saltwater exerts more pressure on the fresh water in the aquifer, shoving the fresh water further away from the coast and upward toward the surface.

Map of the Miami area, where colors indicate the depth to the water table. A lot of area is covered by 0-4 feet, including all of Miami Beach. (Dr. Keren Bolter)


An Ever-Changing Climate

To gain perspective on the distant future, we should examine the distant past.  Sea level has been rising for about 20,000 years, since the last glacial maximum.  There were periods of gradual rise, and periods of rapid rise (likely due to catastrophic collapse of ice sheets and massive interior lakes emptying into the ocean). During a brief period about 14,000 years ago, “Meltwater Pulse 1A”, sea level rose over 20 times faster than the present rate. Globally, sea level has already risen about 400 feet, and is still rising.

Observed global sea level over the past 20,000 years... since the last glacial maximum. (Robert Rohde, Berkeley Earth).


With that sea level rise came drastically changing coastlines.  Coastlines advance and retreat by dozens and even hundreds of miles as ice ages come and go (think of it like really slow, extreme tides).  If geologic history is a guide, we could still have up to 100 feet of sea level rise to go… eventually.  During interglacial eras, the ocean has covered areas that are quite far from the coastline today.

Florida's coastline through the ages. (Florida Geological Survey)


As environmental author Rachel Carson stated, “to understand the living present, and promise of the future, it is necessary to remember the past”.

What Comes Next?

In the next 20 years, what should we reasonably expect in southeast Florida?  Extrapolating the various observed trends out to 2034 gives a median additional rise of around 5 inches, with a realistic range of 3-7 inches.

Year by year, flooding due to heavy rain, storm surge, and high tides will become more frequent and more severe.  Water tables will continue to rise, and saltwater intrusion will continue to contaminate fresh water supplies.

This is not an issue that will simply go away.  Even without any additional anthropogenic contributions, sea level will continue to rise, perhaps for thousands of years.  But anthropogenic contributions are speeding up the process, giving us less time to react and plan.

Coastal cities were built relatively recently, without any knowledge of or regard for rising seas and evolving coastlines.  As sea level rises, coastlines will retreat inward. Sea level rise is a very serious issue for civilization, but getting everyone to take it seriously is a challenge.  As Dutch urban planner Steven Slabbers said, “Sea level rise is a … storm surge in slow motion that never creates a sense of crisis”.  It will take some creative, expensive, and aggressive planning to be able to adapt in the coming decades and centuries.


Special thanks to Dr. Keren Bolter at Florida Atlantic University and Dr. Shimon Wdowinski at University of Miami for their inspiration and assistance.





Hurricane Warning: Consume Rainbow Spaghetti with Caution

Most of the United States is well aware of the dangers of “drinking the Kool-Aid” when it is time to form an opinion on a particular subject. However, the dangers of “eating the rainbow spaghetti” have not yet permeated the consciousness of the general public when it comes to interpreting the forecasts of hurricanes and tropical storms (tropical cyclones, or TCs). The spaghetti plot or spaghetti diagram is a visualization tool that shows the predicted paths (tracks) or wind speeds (intensities) from numerous different TC models. Each potential TC track and intensity is shaded a different color; hence the appearance that the graphic is filled with rainbow spaghetti.

Examples of spaghetti diagrams for track and intensity from Tropical Storm Arthur 2014. (NCAR)


If used correctly, the spaghetti diagram can be a valuable forecasting tool. Viewing all of the potential tracks and intensities of the most realistic TC models helps scientists to understand how each model’s formulation (parameterizations, data assimilation schemes, etc.) can lead to different predicted outcomes. Additionally, the agreement or lack of agreement (commonly referred to as spread) between the models is often related to the confidence one should place in a particular forecast. If the models’ tracks and intensities are grouped together, it is often an indication that the hurricane’s future is more predictable. As a result, the spaghetti diagram can be used as a supplement to the National Hurricane Center’s (NHC) official track and intensity forecast.

When a tropical depression, tropical storm, or hurricane is present in the Atlantic or Eastern Pacific Ocean, the NHC issues an official intensity and track forecast. The intensity forecast is reported as a predicted wind speed but there are no details regarding the uncertainty in the forecast. Instead, ambitious users could look over the error statistics from past years to provide an expectation for the errors of the current storm. However, historical trends are not always the best guide for the intensity errors in individual storms, and errors often vary significantly depending on the situation. The ability to look at a spaghetti diagram and diagnose the spread of the models’ forecasts is helpful for anticipating the reliability of a particular hurricane’s intensity forecast.

(Top Panel) Spaghetti diagram for Tropical Storm Debby at 0600 UTC (2 am EST) on June 24, 2012.  (Bottom Panel) NHC official forecast track cone for Tropical Storm Debby at the same time as the spaghetti diagram. Figures courtesy of NCAR and NOAA.


Spaghetti diagrams provide a similar advantage for track forecasts. Unlike intensity forecasts, NHC’s track forecasts provide some basic uncertainty information by surrounding the predicted storm path with a forecast cone. Before each hurricane season begins, the size of the forecast cone for the year is calculated based on the NHC official forecast track errors for all storms over the past five years. The same cone is used for the whole hurricane season, no matter how confident the NHC is (see “Forecast Cone Refresher”). By evaluating the spaghetti diagram alongside the forecast cone, it is possible to foresee the situations where the cone is more reliable than others.
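In practice, NHC sizes the cone so that roughly two-thirds of the official track errors from the previous five seasons fall inside it at each lead time. A toy version of that calculation follows; the error samples are random placeholders, not real verification statistics:

```python
import numpy as np

rng = np.random.default_rng(2)
lead_times = [12, 24, 48, 72, 96, 120]      # forecast hours

# Placeholder pools of historical track errors (nautical miles) at each lead
# time, growing with lead time the way real forecast errors do.
errors = {h: rng.gamma(shape=2.0, scale=10.0 * h / 24.0, size=500)
          for h in lead_times}

# Cone radius at each lead time = the ~66.7th percentile of past errors.
radii = {h: np.percentile(errors[h], 200.0 / 3.0) for h in lead_times}
for h in lead_times:
    print(f"{h:3d} h cone radius: {radii[h]:6.1f} n mi")
```

The same radii are then used for every storm all season long, which is why the cone cannot reflect the situational confidence that the spaghetti spread reveals.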

The 2012 track forecasts of Tropical Storm Debby are a perfect example of how useful the spaghetti diagram can be. While the NHC forecast cone was showing a developing tropical storm moving westward off the Louisiana coast, half of the model tracks were directed eastward into the panhandle of Florida. Debby eventually migrated eastward and made landfall as a weak tropical storm north of Tampa Bay, Florida. The spaghetti diagram helped reveal that this particular forecast cone was less reliable than normal and that there was a possibility the storm could travel in a completely different direction than the cone suggested.

Still, the spaghetti diagram quickly loses value if evaluated by an uninformed eye. With all the cryptic model abbreviations that accompany the diagram, it is hard for the average person to develop any intuition about which models normally perform better than others. Along with the NHC official forecast (shown as OFCI on the spaghetti diagrams), there are four main types of models that are typically included in spaghetti diagrams: trajectory/statistical, statistical-dynamical, dynamical, and consensus. All of these models arrive at their predictions using different methodologies.  The consensus aids are not independent; they are simply averages of other models.  Some of the models you see on spaghetti plots are outlined in the table below, and a more complete list is available here.

A selection of some of the model guidance routinely available to hurricane forecasters. Highlighted sections include very simple trajectory or statistical models (blue), skillful but still relatively simple statistical-dynamical schemes (green),  dynamical models (red), and averages of certain model combinations (tan).


Most spaghetti diagrams for track forecasts will include the models: “BAMS”, “BAMM”, and “BAMD”. These track models are called trajectory models and are much simpler than full dynamical or statistical-dynamical models. Trajectory models use data from dynamical models to estimate the winds at different layers of the atmosphere that are steering the TC but they do not account for the TC interacting with the surrounding atmosphere. Due to this major simplification, trajectory models should rarely be taken seriously but are included on the plots for reference. Averaged over the past five years, these models have track errors that are almost double those of the best-performing model for a particular forecast time.

Statistical models produce track and intensity forecasts that are based solely on climatology and persistence. In other words, these models create a forecast for a TC using information on how past TCs behaved during similar times of the year at comparable locations and intensities (climatology) while also taking into account the recent movement and intensity change of the TC (persistence). Statistical models do not use any information about the atmospheric environment of the TC. As a result, statistical models are outperformed considerably by dynamical, statistical-dynamical, and consensus forecasts and should only be used as benchmarks of skill against the more complex and accurate models. The main track and intensity statistical models included on spaghetti diagrams are respectively CLP5 and SHF5. An even simpler statistical track “model” that is included on some spaghetti diagrams is XTRP (an extrapolation of the future direction of a hurricane solely based on its motion over the past 12 hours).
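XTRP’s logic can be written in a few lines. This is a toy, flat-earth sketch of the idea, not the operational code, and the positions are made up:

```python
def xtrp(lat_12h_ago, lon_12h_ago, lat_now, lon_now, lead_hours):
    """Extrapolate the storm's motion over the past 12 hours in a straight
    line (toy flat-earth version of XTRP's persistence-only forecast)."""
    lat_rate = (lat_now - lat_12h_ago) / 12.0   # degrees per hour
    lon_rate = (lon_now - lon_12h_ago) / 12.0
    return (lat_now + lat_rate * lead_hours,
            lon_now + lon_rate * lead_hours)

# A storm that moved from 24.0N 80.0W to 25.0N 81.5W over the past 12 h:
for lead in (24, 48, 72):
    lat, lon = xtrp(24.0, -80.0, 25.0, -81.5, lead)
    print(f"{lead:2d} h: {lat:.1f}N {abs(lon):.1f}W")
```

A forecast this simple ignores steering currents entirely, which is exactly why XTRP serves only as a no-skill baseline.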

Statistical-dynamical models are similar to statistical models except that they also use output from the dynamical models on the environmental conditions surrounding the TC and storm-specific details to predict intensity change. The statistical-dynamical models commonly shown on intensity spaghetti diagrams are SHIP, DSHP, and LGEM. SHIP and DSHP are identical except DSHP accounts for the intensity decay of TCs over land and is therefore more accurate than SHIP. LGEM is the best performing of the three models. Both LGEM and DSHP are similar in skill to the dynamical models. These models are not capable of predicting rapid changes in intensity, nor are they meant to forecast intensity of weak disturbances.

Dynamical models make track and intensity forecasts by solving the equations that describe the evolution of the atmosphere. There are two main reasons why different dynamical models produce track and intensity forecasts that always differ even though they share a common goal of reproducing the physical processes of the atmosphere. First, even with the growing network of scientific instruments scattered across the globe and space, models have an imperfect picture of the current conditions in the atmosphere. This uncertainty in the current state of the atmosphere cannot be remedied; we do not have the resources to blanket every piece of the Earth and sky with instruments and measure all the necessary atmospheric parameters simultaneously. Additionally, all instruments have inherent measurement errors. Each model uniquely uses the imperfect and sometimes sparse observations available to arrive at slightly different starting points for its forecast. Second, even using the most cutting-edge computer systems in the world, the equations that govern the atmosphere cannot be solved for every inch of the atmosphere; it would take too long. Models have to solve equations on a 3-dimensional grid that spans the surface of the Earth and extends upward around 10 miles. Thus, even the finest resolution operational hurricane models have grid points horizontally separated by nearly 2 miles.

Scientists know that this level of detail is not sufficient; there are important physical processes happening within the grid boxes that affect the TC’s evolution. To prevent the weather that is happening at your friend’s house two miles away from being used to describe the weather at your house, modelers often use different “parameterizations”. This fancy word boils down to a variety of approximations used to extrapolate weather at larger scales (at the grid points) to smaller scales (within the grid points). The different dynamical models use a variety of grid sizes and parameterizations to capture some of the TC’s small-scale processes, but these approximations ultimately lead to the models developing the TC in different ways.

The simplest dynamical model shown on spaghetti diagrams is the LBAR model, which is only a track model. Analogously to the trajectory models, the approximations used for LBAR lead to large errors and over the long-term, it is one of the worst performing models. The rest of the dynamical models depicted on spaghetti diagrams perform at a higher level. Most spaghetti diagrams include the “early models” or “early-version” of these dynamical models because they are available to NHC during the forecast cycle. These track and intensity dynamical models often include the GFDI, HWFI, and AVNI/GFSI. These models are called interpolated models (that’s the “I” on the end) because they are adjusted versions of “late models”; the previous run’s forecast is interpolated to the current time because the current run is not available yet.

The fourth class of guidance included on spaghetti diagrams is the consensus model, which is actually not a model at all. Consensus forecasts are a combination of forecasts from a collection of models, usually obtained by averaging them together. For the spaghetti diagrams of intensity forecasts, the consensus models typically included are ICON and IVCN. The consensus models for track forecasts that are normally shown are TCON, TVCE (also known as TVCN), and AEMI.
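The consensus idea itself is just averaging. A minimal sketch with made-up 48-hour positions (the member values are illustrative, not real guidance):

```python
import numpy as np

# Hypothetical 48-h track positions (lat, lon) from three early dynamical models.
members = {
    "GFSI": (27.0, -84.0),
    "HWFI": (27.6, -83.1),
    "GFDI": (26.7, -83.7),
}

# A simple (unweighted) consensus is the mean member position.
positions = np.array(list(members.values()))
consensus_lat, consensus_lon = positions.mean(axis=0)
print(f"consensus: {consensus_lat:.1f}N {abs(consensus_lon):.1f}W")

# The spread among members is a rough confidence signal: tight grouping
# suggests a more predictable track.
spread = positions.std(axis=0)
```

Operational consensus aids such as TVCN are more careful about which members qualify on a given cycle, but averaging is the core of the method.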

The dynamical, statistical-dynamical, consensus models, and NHC official forecast all perform at a similar level for track and intensity forecasts, while the trajectory and statistical models have significantly higher errors. Yet when someone sees one of these inferior models deviating from the rest and steering a strong hurricane into their backyard, the natural intuition is to panic. In these situations, it is important to remember which are the more skillful models.

Still, among the skillful models, some perform a little better on average than the others but there is currently no way to foresee the dominant model(s) for a particular scenario. In fact, models will seemingly have good days and bad days, good months and bad months, and even good years and bad years. That is why an informed rainbow spaghetti consumer should not focus too much on an individual noodle but instead use all of the noodles as a side dish to NHC’s forecast cone. So when staring down an approaching hurricane this season, feel free to grab a colorful bowl of spaghetti, just remember to consume with care.

– Kieran Bhatia (PhD candidate in the Department of Atmospheric Sciences)

WHARF Mooring Deployment

Graduate students in the Meteorology and Physical Oceanography department at RSMAS deploy a sub-surface mooring in the Straits of Florida to measure the surface wave field.

So you want to go fishing this weekend to catch a nice big tuna to grill on the BBQ.  What’s the weather like?  You don’t want to get seasick! That means you don’t want the wave heights to be too large, nor the wave period too long (Did you know?…seasickness intensifies with an increasing period of oscillation[1]).

If you want to know what is happening in the waters offshore of Miami, you can take a look at the National Weather Service (NWS) website.  Every day the NWS provides forecasts of the wind and wave conditions over the Straits of Florida, which are used by commercial fishermen, shipping companies and recreational boaters.  The wave forecasts are based on model predictions. However, this region is highly dynamic due to the presence of the fast-flowing Florida Current (named the Gulf Stream further north); this current interacts with the wave field and represents a challenge to wave forecasting models.

Dr. Nick Shay leads the Upper Ocean Dynamics Laboratory at the UM Rosenstiel School of Marine and Atmospheric Science, which has operated shore-based high frequency (HF) radar systems for over a decade. These radars remotely measure near-real-time surface currents across the Straits of Florida with high accuracy[2].  HF radar can also measure the wave field over the surface of the Straits of Florida, a capability that has attracted increasing interest in recent years because operational real-time observations of the wave field can help improve the model forecasts.








But first, the accuracy of the HF radar wave measurements must be evaluated using in-situ observations of the wave field.  This is why the WHARF experiment was conceived.

Mr. Matthew Archer, a PhD student working in Dr. Shay’s lab, is the recipient of a prestigious award to deploy an acoustic wave and current profiler (AWAC).  The AWAC is built by Nortek for long-term deployment in the ocean, to measure the surface wave field and ocean currents.  This instrument will gather data over a 4-month period, during the transition from spring to summer, to measure the in-situ wave heights and currents during different weather conditions.

On April 22nd, the mooring was successfully deployed offshore of Miami Beach, which gave the students experience of working at sea.  The AWAC was attached to a buoy that was moored to the ocean bottom with an anchor – in our case, a train wheel! The instrument, which is moored in 300 m of water, floats 40 m below the surface, facing upward to measure the surface waves and currents far offshore, within the Florida Current.

Using this in-situ dataset, the radar system can be calibrated to make sure that the wave data are accurate.  The radar provides data every 20 minutes, which will be available in near-real time on the lab website. The results of the WHARF project will provide valuable information that can be used in the further development of the NWS marine forecasts, benefiting shipping and navigation as well as the construction and management of sustainable coastal developments.  It will also give UM Rosenstiel scientists data to investigate the relationship between strong currents and the surface wave field, a topic which is not fully understood.

The project was made possible by funding from SECOORA (Southeast Coastal Ocean Observing Regional Association).


[1] Cheung, B. and A. Nakashima, 2006. A review on the effects of frequency of oscillation on motion sickness. Technical Report DRDC-TR-2006-229, Defence Research and Development Canada, Toronto.

[2] Parks, A. B., L. K. Shay, W. E. Johns, J. Martinez-Pedraja, and K.-W. Gurgel, 2009. HF radar observations of small-scale surface current variability in the Straits of Florida. J. Geophys. Res., 114, C08002, doi:10.1029/2008JC005025.



The MPO Best Paper Award Goes To…

UM Rosenstiel School Ph.D. student Katinka Bellomo received the Best Paper Award from the Division of Meteorology and Physical Oceanography (MPO) for her research paper recently published in the American Meteorological Society’s Journal of Climate.

“Receiving the MPO best paper award is a huge personal satisfaction,” said Katinka. “This is the first paper of my dissertation and of my life.”

Addu Atoll lagoon at sunset

The paper, titled “Observational and Model Estimates of Cloud Amount Feedback over the Indian and Pacific Oceans,” addressed the largest uncertainty in climate models – cloud feedback – by examining observations of cloud cover taken from ships and satellites from 1954 to 2005. The results of this paper represent the first observational long-term estimate of cloud feedback.

In response to greenhouse gas forcing, the Earth would naturally cool off by emitting more radiation back into space. However, feedback mechanisms, such as those from clouds, can increase or reduce this cooling rate.

“I am satisfied that the paper shows how to handle the uncertainties in observations and provides a methodology to estimate cloud feedbacks from these observations,” said Katinka.

Congrats Katinka!

What Caused the Rapid Intensification of Super Typhoon Haiyan?

Typhoon Haiyan at peak intensity on November 7, 2013. Credit: NASA


During the AMS Hurricane and Tropical Meteorology meeting in San Diego last week, Rosenstiel School professor Nick Shay presented research on the role ocean warming played in the rapid intensification of last year’s devastating Super Typhoon Haiyan in Southeast Asia.

Shay’s study suggests that temperature fluctuations from semi-diurnal internal tides need to be analyzed to fully understand the causes of rapid intensification as the storm went over the warm pool of water in the western Pacific prior to landfall in the Philippines.

Using temperature, salinity and current data collected as Haiyan made a direct hit over Japan’s Triton buoy (formerly a NOAA TAO buoy), along with satellite-derived data from the SPORTS climatology model (Systematically merged Pacific Ocean Temperature and Salinity, developed by Rosenstiel School graduate student Claire McCaskill), Shay’s research team examined ocean warming conditions prior to Haiyan at the thermocline, a distinct ocean temperature layer that is known to fluctuate seasonally due to tides and currents.


Infrared satellite loop of Typhoon Haiyan in the Philippines. Credit: NOAA


Internal tides are known to create large temperature fluctuations. Shay suggests that the upper ocean heat content was important in the rapid intensification of Haiyan similar to what is observed in the Atlantic Ocean basin. While the semidiurnal tides were amplified in the warming thermocline in this regime, they have to be removed from the data to accurately evaluate questions related to the roles climate change and oceanic warming played in the storm’s intensification.