A certain Republican Senator from Oklahoma who shall remain nameless was once quoted as suggesting that the US should withdraw from the World Meteorological Organization and stop providing its weather data to competing nations like China. A professor of mine from Stony Brook was among a large group of scientists called down to Capitol Hill in 1998 to discuss the upcoming Kyoto Summit on Climate Change when these remarks were uttered in frustration (this particular Senator was angry because China – now the world’s largest consumer of fossil fuels – had pointed the finger at the US as the leading source of global warming and wagged that finger at us for not participating in Kyoto, while they themselves had no intention of complying with emission reduction standards either). He remembers chuckling at the Senator’s ignorance of the importance of WMO data to scientific research. Of course, the Senator was not ignorant – he was just venting – but many are quick to assume that if you wear an R on your lapel, you’ve got the scientific knowledge of a three-year-old.
I will have other comments about climate science in future articles – this, however, is not a global warming rant. That story is presented to give you an indication of the unfriendly relationship between our government and the scientists in charge of weather data and information. This is a rant about the intrusion of politics into the science of short-term weather prediction. Today, there is a fairly strong tropical cyclone sitting off the Florida coast heading north toward the Carolinas. Like dozens of tropical storms in the historical record (which now dates back to 1851 with the application of some clever reconstructions of ship reports and surface analyses hand-drawn by members of the old-time US Weather Bureau), Tropical Storm Nicole is not an ideal case. Most of the thunderstorms with it are well east of the surface low pressure system and it is interacting with a coastal front (currently producing HUGE rainfalls in the Appalachians). In fact, yesterday at this time, it looked very much like it does now (this is being written Thursday afternoon, September 30th)…a ragged surface circulation, a much better defined mid-level circulation (several thousand feet above our heads), and a lot of thunderstorms to its east. Yesterday it was a named storm and Tropical Storm Warnings were posted for South Florida and most of Cuba. Today, the National Hurricane Center has no interest in the storm because yesterday evening, they declared that the storm had “dissipated.”
I would like for someone to explain to me exactly how a storm that hasn’t changed at all in satellite appearance, potential impacts on the US coastline, or overall intensity has “dissipated.” In meteorological terms, dissipation implies that the storm can no longer be tracked…this is clearly not the case here. Nor is it the case that the system no longer meets the criteria of a tropical cyclone (the central criteria are that the storm must be warmer aloft over the circulation than its surroundings, it must have a complete surface low pressure system that closes around it in all directions, and it must be born over waters warm enough to support development on their own, with no influence from the mid-latitude storm track). There are three scientifically accepted methods for determining whether a storm is tropical or extratropical: the Dvorak intensity estimates from satellite observations (which currently classify this storm as a 50 mph tropical cyclone); Dr. Bob Hart’s Cyclone Phase Analysis project (you can see that analysis here if you care about the details of this storm), which tries to get a mathematical measure of a storm’s directional symmetry and of the temperature of its core; and hurricane hunter aircraft measurements (which the NHC has refused to commission today, saying the area is of no interest). No objective criterion calls this a non-entity.
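The point of those criteria is that the tropical/extratropical call is an objective, checkable test, not a judgment call. A toy sketch of that checklist in code – and to be clear, every threshold and name below is my own invented simplification, not the actual Dvorak technique or Dr. Hart’s phase-space parameters:

```python
# Toy sketch of the tropical-vs-extratropical checklist described above.
# All thresholds and names here are illustrative inventions, not the
# actual Dvorak or Cyclone Phase Space criteria.

def is_tropical(warm_core, thermal_symmetry, sst_f):
    """warm_core: is the core warmer aloft than its surroundings?
    thermal_symmetry: 0.0 (frontal, asymmetric) to 1.0 (fully symmetric),
                      a stand-in for a closed, symmetric surface low.
    sst_f: sea surface temperature (F) beneath the storm."""
    WARM_SST_F = 79        # rough SST needed for self-sustaining development
    SYMMETRY_MIN = 0.7     # illustrative cutoff for a symmetric circulation
    return warm_core and thermal_symmetry >= SYMMETRY_MIN and sst_f >= WARM_SST_F

print(is_tropical(True, 0.85, 82))   # symmetric warm core over warm water -> True
print(is_tropical(True, 0.40, 82))   # frontal, asymmetric system -> False
```

The real phase-space analysis computes these quantities from gridded model fields, but the principle is the same: you measure, you apply the criteria, and the answer falls out.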
I’ll tell you exactly why this storm was downgraded. If they had not discontinued tracking Nicole, they’d have had to issue tropical storm warnings for a large portion of the East Coast. With those warnings would come an onslaught of calls from reporters, political figures, and FEMA coordinators demanding to know what to expect and what needed to be done to prepare. This very thing happened when Hurricane Earl took a sideswiping run at the coast…the NHC issued watches and warnings for everyone from Wilmington, NC to Eastport, ME…most of which did not verify (there were minimal hurricane conditions at Cape Hatteras and some minimal Tropical Storm conditions over Cape Cod; the rest of the coast was largely spared). There’s only one thing worse than issuing warnings and watches and fielding all those pressure-filled calls…and that’s issuing warnings, fielding the calls, and being wrong. THOSE calls – the ones that begin “where the hell is this tropical storm you forecasted?!”…those calls are very…very painful. Every time a tropical storm warning is issued, the estimated cost of preparation for the event is roughly $80,000 per mile of coast affected. Every time a hurricane warning is issued, the cost is roughly $0.15-$0.5 MILLION per mile depending on whether evacuation orders are issued. The average warning bar covers 400 miles of coastline…you do the math. Yes…it really is that expensive. When you bust, you’ve made the government spend a bunch of money it doesn’t have, attend a bunch of time-sucking threat-assessment and preparedness meetings, and put a bunch of emergency managers on all-day stand-by for no good reason (or so the view is from the government).
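To actually “do the math” on those figures (using the ballpark per-mile estimates above, not official numbers):

```python
# Back-of-the-envelope warning costs using the rough per-mile estimates
# quoted above (these are ballpark figures, not official data).

TS_COST_PER_MILE = 80_000           # tropical storm warning prep, $/mile
HURR_COST_PER_MILE_LOW = 150_000    # hurricane warning, no evacuations, $/mile
HURR_COST_PER_MILE_HIGH = 500_000   # hurricane warning, with evacuations, $/mile
AVG_WARNING_MILES = 400             # typical length of a warning bar

ts_total = TS_COST_PER_MILE * AVG_WARNING_MILES
hurr_low = HURR_COST_PER_MILE_LOW * AVG_WARNING_MILES
hurr_high = HURR_COST_PER_MILE_HIGH * AVG_WARNING_MILES

print(f"Tropical storm warning: ${ts_total / 1e6:.0f} million")                    # $32 million
print(f"Hurricane warning: ${hurr_low / 1e6:.0f}-${hurr_high / 1e6:.0f} million")  # $60-$200 million
```

Roughly $32 million for an average tropical storm warning, and anywhere from $60 to $200 million for a hurricane warning. That is the bill a forecaster hands the government every time a warning bar goes up.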
Of course, if the top priority were safeguarding the lives of our citizens, that cost would be considered a necessary evil…we do these things because tropical cyclones are inherently hard to predict, and if we don’t prepare and are wrong, people die. But at the government level, lives sit on a ledger against government resources, and the anger when that money is spent without the warnings verifying is enormous. The consequence is that NHC forecasters are so afraid of being wrong that as soon as it becomes possible for them to pass the buck to another agency (in this case, the National Weather Service), they do it and don’t look back. For the same reason, the NHC is frequently caught naming storms in the middle of nowhere that shouldn’t be named and retroactively changing its analysis to buff up its seasonal forecast accuracy. Because guess what…if they predict 14 named storms and only 9 verify, they get angry calls too. Last year, for example, they predicted 11 named storms and by late October had only SIX…so they named a storm that supposedly developed at FIFTY DEGREES NORTH LATITUDE…IN OCTOBER!…OVER 65 F WATER!!…and then killed it 12 hours later…just to get the verified storm count closer to their forecast.
None of this happens in the private sector. Private sector weather forecasters do not benefit from cheating…if they forecast 15 storms and their clients think there were only 6…they’re not going to get business next season whether they claim there were 8 or 12 storms. No amount of false-naming or sudden downgrading protects you from accountability for your forecast if you are not viewed as the “official record.” I know this to be precisely what is happening at the NHC and all other NOAA branches because they routinely make pronouncements like “The National Hurricane Center is the authority on hurricane prediction. There needs to be one unified voice for communicating risks and issuing warnings or public confusion may result.” Translation: we are your government…we know more about hurricanes than anyone else…we should be the only ones allowed to make hurricane forecasts and our records should be held as the official facts.
Of course, if you knew anything about how forecast offices are actually run, you would laugh at the idea that the government should be the only source of information about the weather. I recently observed with some frustration to my adviser here at Stony Brook that the National Weather Service seemed unable to adjust their forecast to evolving reality even when it was obvious that things weren’t working out as earlier predicted. During the July 3-8 heat wave, their forecast high temperatures over Long Island were consistently way…way too low. If I can spend five minutes looking at the model forecast on July 1st and say “ah crap…here comes a huge heat wave” based on nothing more than large-scale pattern recognition…and I’m far from a perfect forecaster, mind you…then why can’t the NWS do so much as adjust their numbers up after busting 10 degrees too low on the 4th? It’s obviously a horrendous heat wave…conditions have not changed…and yet they’re still calling for cooling 24 and 48 hours later. This happens because NWS forecasters know it’s easier to blame a computer model for forecast errors than to diverge significantly from the model output statistics (MOS), take responsibility, and then have to explain yourself if you’re wrong. It also happens because the NWS now runs two-man forecast shifts (and only 3 shifts instead of 4)…and there is no one to protect the scientists from having to answer ten calls an hour from reporters when the weather becomes extreme. They literally have two hours to make a forecast, and that is not enough time to do more than look at the models and the current data and tweak MOS a bit.
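The kind of adjustment I’m asking for is not exotic. A minimal sketch – my own illustration, not an actual NWS or MOS procedure – would be to nudge the raw MOS number by the average of the last few cycles’ verified errors:

```python
# Hypothetical sketch of a running bias correction: shift raw MOS guidance
# by the mean of recent forecast errors (observed minus forecast).
# My own illustration, not an actual NWS or MOS procedure.

def bias_corrected(mos_high_f, recent_errors_f):
    """recent_errors_f: (observed - forecast) highs from recent cycles."""
    if not recent_errors_f:
        return mos_high_f          # no history yet: trust the raw guidance
    bias = sum(recent_errors_f) / len(recent_errors_f)
    return mos_high_f + bias

# During the heat wave, MOS has verified 8-10 F too cool three days running:
errors = [8, 10, 9]
print(bias_corrected(88, errors))  # raw MOS of 88 F becomes 97.0 F
```

Five minutes of arithmetic on your own recent busts, and the forecast stops calling a historic heat wave an ordinary warm day.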
That is the NATURE of government work…anyone who works for a government agency knows that the people there really want to do good work and that the vast majority of their problems come from the extreme inefficiency of government projects (and that inefficiency comes from the fact that the government is constantly besieged with media intrusion and administrative bureaucracy that borders on farcical in scale). There are lots of great…great scientists employed by the National Weather Service. The data shows that despite the rapid improvement in numerical weather prediction skill, human forecasters still consistently out-forecast the models despite the lack of time to do in-depth forecasting. That’s remarkable. Unfortunately, that forecast skill VANISHES when we need it the most. When there’s a huge tropical rain- and wind-maker heading for Cape Hatteras and NYC, the Hurricane Center passes the buck and the NWS has to field hundreds and hundreds of calls demanding information while scrambling to hoist eleven different watches and warnings that would ALL BE COVERED by a Tropical Storm Warning…if you think scientists can make a good forecast under those conditions…think again. They’re parroting the models and curled up in a fetal position at their AWIPS terminals right about now, praying the models aren’t way off. Case in point…if you go to http://www.nws.noaa.gov right now, there’s a headline stating that TS Nicole has dissipated (which is just horribly misleading to anyone who does not understand basic weather satellite data and cannot therefore see that there’s still a huge storm out there).
What’s worse, when you click on, say, Central Long Island and look at the point-and-click forecast, it’s calling for winds tonight of 22-26 mph gusting as high as 41 (hilarious how precise they’re implicitly claiming to be, and also hilarious how way…way underdone those wind forecasts are)…do you think Joe Public is going to read the High Wind Warning statement that says winds could gust to 65 mph tonight…or will he just look at the point-and-click and go “gee…that doesn’t sound so bad?”
I am not saying we shouldn’t be doing government forecasting…the government is tasked with protecting its people, and part of that is forecasting the weather. And the government actually does admirably well at forecasting the weather given the constraints it operates under and the chaotic nature of our atmosphere. I am, however, saying three things need to happen to improve our reaction to very inclement weather.
1) We need to get better at communicating the uncertainty in weather forecasts as well as communicating the possible dangers of any weather disaster to the public. If there’s a heat wave coming…but your model guidance says it may not be all that severe…and you post a forecast high of 91…people are going to assume it’s just another warm summer day…if there’s any chance that it actually gets to 98 (i.e., a heat advisory ends up being needed)…the public should know that chance exists. Point-and-click forecasts should NOT be deterministic…you shouldn’t click on Long Island and see a high of exactly 74 for tomorrow and 62 for Saturday…that’s madness. The point-and-click should be zone-based and the forecasts should express uncertainty (ranges should be given).
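As a sketch of what a range-based point-and-click product could look like (the ensemble values and the output format here are hypothetical):

```python
# Sketch of a range-based point forecast: report a central estimate plus the
# spread across (hypothetical) ensemble members instead of one deterministic
# number, so the user can see the chance of an extreme outcome.

import statistics

def forecast_range(ensemble_highs_f):
    members = sorted(ensemble_highs_f)
    median = statistics.median(members)
    return f"High near {median:.0f}, range {members[0]:.0f}-{members[-1]:.0f}"

# Hypothetical ensemble highs for a marginal heat-wave day:
print(forecast_range([89, 91, 91, 93, 95, 98]))  # High near 92, range 89-98
```

A reader who sees “range 89-98” knows a heat advisory is on the table; a reader who sees “91” does not.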
2) We have GOT to devote more financial resources to advancing all of the sciences. I would not pay for that with extra tax money…I would divert spending away from corporate bailouts and wastes like the National Endowment for the Arts (sorry…but art should pay for itself…demand for it certainly won’t vanish without government aid). We need staffers around to field questions from reporters and play blocker for the scientists, and we need more than two people on shift at any given time to actually make a forecast collaboratively.
3) Everyone…non-scientists and scientists alike…needs to demand accountability from the government when it comes to maintaining accurate weather records. As it stands right now, playing with the hurricane numbers every…single…year is completely acceptable behavior, and storms are named or not named to fit the previous forecasts at least twice a month during the season. PAY ATTENTION to your public authorities on weather prediction…don’t let them get away with claiming their weather stations are properly sited when in fact they’re between a highway and an airport runway. Don’t let the NHC get away with changing the facts to suit its mood. We all have to do our part to demand that the scientific branches of the government remain dedicated to the truth and not to reducing paperwork or fitting a long-term agenda (e.g. what happens re: global warming).
In the meantime, at times like these, I’ll get my extreme weather forecasts from the private sector, thank you very much.