Site Migration

This may be the last post you see on the old site.  We are currently migrating to a much better Wordpress platform (the new Typepad software changes have really irritated me, and it was time to move on).  You may experience some site problems and broken links for a day or two, but soon all will be well (I hope).  You will know that you are on the new site because the right-hand sidebar will be blue, not black.

Climate Model Validation

I am sorry that posting has been light, but I am currently working to migrate this site to Wordpress from hosted Typepad.  This is a real hassle, as described at my other blog, where I just completed a successful migration.  I hope to have this blog moved over this weekend.

In the meantime, I thought my readers might need some help understanding James Hansen's recent comments that flat world temperatures over the last 10 years and substantially cooler temperatures in 2008 were entirely consistent with the climate models that forecast 0.2-0.3C (or more) of warming for this decade.  Most other natural sciences are stuck in the old and outdated practice of questioning forecasts when actual observational data diverges from the forecast by several standard deviations.  Not so modern, enlightened, consensus-based climate science.  Below is my graphical representation of how climate scientists evaluate their models in light of new data.


Global Warming Is Caused by Computers

In particular, a few computers at NASA's Goddard Institute seem to be having a disproportionate effect on global warming.  Anthony Watts takes a cut at an analysis I have tried myself several times, comparing raw USHCN temperature data to the final adjusted values delivered from that data by the NASA computers.  My attempt at this compared the USHCN adjusted to raw for the entire US:


Anthony Watts does this analysis from USHCN raw all the way through to the GISS adjusted number (the USHCN adjusts the raw numbers, and then the GISS adds its own adjustments on top of these).  The result:  100%+ of the 20th century global warming signal comes from the adjustments.  There is actually a cooling signal in the raw data:


Now, I really, really don't want to be misinterpreted on this, so a few notes are necessary:

  1. Many of the adjustments are quite necessary, such as time of observation adjustments, adjustments for changing equipment, and adjustments for changing site locations and/or urbanization.  However, all of these adjustments are educated guesses.  Some, like the time of observation adjustment, probably are decent guesses.  Some, like site location adjustments, are terrible (as demonstrated at

    The point is that finding a temperature change signal over time with current technologies is a measurement subject to a lot of noise.  We are looking for a signal on the order of magnitude of 0.5C where adjustments to individual raw instrument values might be 2-3C.  It is a very low signal-to-noise environment, and one that is inherently subject to biases  (researchers who expect to find a lot of warming will, not surprisingly, adjust a lot of measurements higher).
  2. Warming has occurred in the 20th century.  The exact number is unclear, but we have much better data via satellites now that have shown a warming trend since 1979, though that trend is lower than the one that results from surface temperature measurements with all these discretionary adjustments.

Steve Chu: "Climate More Sensitive Than We Thought"

The quote in the title comes from Obama's nominee to become energy secretary, Steven Chu.  Specifically,

Chu's views on climate change would be among the most forceful ever held by a cabinet member. In an interview with The Post last year, he said that the cost of electricity was "anomalously low" in the United States, that a cap-and-trade approach to limiting greenhouse gases "is an absolutely non-partisan issue," and that scientists had come to "realize that the climate is much more sensitive than we thought."

I will leave aside why hard scientists typically make bad government officials (short answer:  they have a tendency towards hubris in their belief in a technocrat's ability to optimize complex systems.  If one thinks one can assign a 95% probability that a specific hurricane is due to man-made CO2, against the backdrop of the unimaginable chaos of the Earth's climate, then one will often have similar overconfidence in regulating the economy and/or individual behavior).

However, I want to briefly touch on his "more sensitive" comment.

Using assumptions from the last IPCC report, we can disaggregate climate forecasts into two components:  the amount of warming from CO2 alone, and the multiplication of this warming by feedbacks in the climate.  As I have pointed out before, even by the IPCC's assumptions, most of the warming comes not from CO2 alone, but from assumed quite large positive feedbacks.


This is based on the formula used by the IPCC (which may or may not be exaggerated)

T = F(C2) - F(C1), where F(c) = ln(1 + 1.2c + 0.005c² + 0.0000014c³)

Plotting this formula, we get the blue no-feedback line above (which leads to about a degree of warming over the next century).  We then apply the standard feedback formula of Multiplier = 1/(1 - feedback%) to get the other lines with feedback.  It requires a very high 60% positive feedback number to get a 3C per century rise, close to the IPCC base forecast, and a nutty 87% feedback to get temperature rises as high as 10C, which have been quoted breathlessly in the press.  It is amazing to me that any natural scientist can blithely accept such feedback numbers as making any sense at all, particularly since every other long-term stable natural process is dominated by negative rather than positive feedback.
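For anyone who wants to check the arithmetic, here is a minimal sketch in Python.  The 280 ppm pre-industrial level and the 560 ppm doubling are my illustrative assumptions, not numbers from the post's chart:

```python
import math

def forcing(c):
    # the simplified expression quoted above (c in ppm of CO2)
    return math.log(1 + 1.2 * c + 0.005 * c**2 + 0.0000014 * c**3)

def warming(c_start, c_end, feedback=0.0):
    # no-feedback warming from the formula, scaled by the standard
    # feedback multiplier 1/(1 - f)
    return (forcing(c_end) - forcing(c_start)) / (1 - feedback)

# assumed pre-industrial ~280 ppm, doubling to 560 ppm
no_fb = warming(280, 560)        # about 1.2C with zero feedback
fb_60 = warming(280, 560, 0.60)  # about 3C, near the IPCC base case
fb_87 = warming(280, 560, 0.87)  # about 9C, the high-end forecasts
```

Run with those assumptions, the zero-feedback doubling comes out just under 1.2C, and it takes the 60% and 87% feedback multipliers to reach the 3C and ~10C headline numbers.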

Saying that the climate is "more sensitive than we thought" means, essentially, that Mr. Chu and others are assuming higher and higher levels of positive feedback.  But even the lower feedback numbers are almost impossible to justify given past experience.  If we project these sensitivity numbers backwards, we see:


The higher forecasts for the future imply that we should have seen 2-4C of warming over the last century, which we clearly have not.  Even if all the past warming of the last century is attributable to man's CO2  (a highly unlikely assumption) past history only really justifies the zero feedback case  (yes, I know about damping and time delays and masking and all that -- but these adjustments don't come close to closing the gap).
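The backcast can be sketched with the same formula.  The 280 ppm to roughly 385 ppm range for the CO2 rise to date is my assumption for illustration, not a figure from the post:

```python
import math

def forcing(c):
    # the simplified no-feedback expression from the post (c in ppm of CO2)
    return math.log(1 + 1.2 * c + 0.005 * c**2 + 0.0000014 * c**3)

# assumed CO2 rise to date: roughly 280 ppm pre-industrial to ~385 ppm
no_fb_past = forcing(385) - forcing(280)   # about 0.5C with zero feedback
implied_60 = no_fb_past / (1 - 0.60)       # about 1.3C implied past warming
implied_87 = no_fb_past / (1 - 0.87)       # about 4C implied past warming
```

Under these assumptions the zero-feedback case implies about half a degree of past warming, roughly in line with observation, while the high-feedback cases imply several degrees that never showed up.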

In fact, there is good evidence that, at most, man's CO2 is responsible for about half the past warming, or 0.3-0.4C.  But if that is the case, as the Reference Frame put it:

The authors looked at 750 years worth of the local ice core, especially the oxygen isotope. They claim to have found a very strong correlation between the concentration of this isotope (i.e. temperature) on one side and the known solar activity in the epoch 1250-1850. Their data seem to be precise enough to determine the lag, about 10-30 years. It takes some time for the climate to respond to the solar changes.

It seems that they also have data to claim that the correlation gets less precise after 1850. They attribute the deviation to CO2 and by comparing the magnitude of the forcings, they conclude that "Our results are in agreement with studies based on NH temperature reconstructions [Scafetta et al., 2007] revealing that only up to approximately 50% of the observed global warming in the last 100 years can be explained by the Sun."...

Note that if 0.3 °C or 0.4 °C of warming in the 20th century was due to the increasing CO2 levels, the climate sensitivity is decisively smaller than 1 °C. At any rate, the expected 21st century warming due to CO2 would be another 0.3-0.4 °C (the effect of newer CO2 molecules is slowing down for higher concentrations), and this time, if the solar activity contributes with the opposite sign, these two effects could cancel.

Not surprisingly, then, given enough time to measure against them, alarmist climate forecasts, such as James Hansen's below, tend to over-estimate actual warming.  Which is probably why the IPCC throws out its forecasts and redoes them every 5 years, so no one can call them on their failures (click to enlarge chart below).


Because, at the end of the day, for whatever reason, warming has slowed or stopped over the last 10 years, even as CO2 concentrations have increased faster than ever in the modern period.  So it is hard to say what physical evidence one can have that temperature sensitivity to CO2 is increasing.


New Climate Video - RCRC Climate Debate

I have finally been able to publish a video of my presentation at the climate debate held by the Regional Council of Rural Counties last September.  The entire video is about an hour long.  As usual, I am offering several ways to view it.  First, it has been posted on YouTube but had to be broken into seven parts.  The playlist of all seven parts is below:

The playlist link is here:  RCRC Climate Debate (Skeptic's Side)

Unfortunately, YouTube crushes the resolution so many of the charts are hard to read.  You can download the full resolution windows media version (about 96MB) as long as my bandwidth holds out by right-clicking and downloading from this link:  Download RCRC Climate Debate (wmv)

Also, you can stream higher resolution version of this film (and all my other climate films) at this site.  The resolution is not as good as the downloadable version but is much better than YouTube.  Again, bandwidth pending.

Finally, you can download the actual powerpoint presentation shown in this video here or you can view the presentation online here.

In the future, all of my videos and presentations will be available via the links just under the banner for this site.

Linear Regression Doesn't Work if the Underlying Process is not Linear

Normally, I would have classified the basic premise of Craig Loehle's recent paper, as summarized at Climate Audit, as a blinding glimpse of the obvious.  Unfortunately, the climate science world is in desperate need of a few BGOs, so the paper is timely.  I believe his premise can be summarized as follows:

  1. Many historical temperature reconstructions, like Mann's hockey stick, use linear regressions to translate tree ring widths into past temperatures
  2. Linear regressions don't work when the underlying relationship, here between tree rings and temperature, is not linear.

The relationship between tree ring growth and temperature is almost certainly non-linear.  For example, tree ring growth does not go up forever, linearly, with temperature.  A tree that grows 3mm in a year at 80F and 4mm at 95F is almost certainly not going to grow 6mm at 125F. 

However, most any curve, over a sufficiently narrow range, can be treated as linear for the purposes of most analyses.  The question here is, given the relationship between tree ring growth and temperatures, do historical temperatures fall into such a linear region?  I think it is increasingly obvious the answer is "no," for several reasons:

  1. There is simply not very good, consistent data on the behavior of tree ring growth with temperature from folks like botanists rather than climate scientists.  There is absolutely no evidence as to whether we can treat ring widths as linear with temperature over a normal range of summer temperatures.
  2. To some extent, folks like Mann (author of the hockey stick) are assuming their conclusion.  They are using tree ring analysis to try to prove the hypothesis that historic temperatures stayed in a narrow band (vs. current temperatures that are, they claim, shooting out of that band).  But to prove this, they must assume that temperatures historically remained in a narrow band that is the linear range of tree ring growth.  Essentially, they have to assume their conclusion to reach their conclusion.
  3. There is strong evidence that tree rings are not very good, linear measurements of temperature due to the divergence issue.  In short -- Mann's hockey stick is only hockey stick shaped if one grafts the surface temperature record onto the tree ring history.  Using only tree ring data through the last few decades shows no hockey stick.  Tree rings are not following current rises in temperatures, and so it is likely they underestimate past rises in temperature.  Much more here.

  4. Loehle pursues several hypotheticals, and demonstrates that a non-linear relationship of tree rings to temperature would explain the divergence problem and would make the hockey stick a completely incorrect reconstruction.

Deconstructing the Hockey Stick

Will there ever be a time when sane people are not having to deconstruct yet another repackaging of Mann's hockey stick, like some endless whack-a-mole game?  Mann is back with a new hockey stick and, blow me away with surprise, it looks a heck of a lot like the old hockey stick:


Willis Eschenbach, writing at Climate Audit, outlines a new statistical approach he claims can help determine the signal-to-noise ratio in such a multi-proxy average, and in turn determine which proxies are contributing the most to the final outcome. 

His approach and findings seem interesting, but I need to withhold judgment and let the statistical geeks tear it apart.  I am always suspicious of algorithms that purport to sort or screen samples in or out of a sample set.

However, his climate-related finding can be accepted without necessarily agreeing with the methodology that got there.  He claims his methodology shows that two sets of proxies -- the Tiljander sediments and the Southwestern US Pines (mainly the bristlecones) -- drive the hockey stick shape.  This is reminiscent of Steve McIntyre's finding years ago that just a few proxies in the original MBH 1999 drove most of the hockey stick form.  Interestingly, these two series are the very ones that have received the most independent criticism for their methodology and ability to act as a proxy.  In particular, the Tiljander Lake sediment data is out and out corrupted, and it is incredible that it could get past a peer review process (just reinforcing my feeling that peer review passes shoddy work that reinforces the profession's prejudices and stands in the way of quality work by mavericks challenging the consensus).

Anyway, with these proxies removed, less than a quarter of the total, the hockey stick disappears.


Update:  If you still have any confidence at all in climate scientists, I urge you to read this discussion of the Tiljander sediments.  Mann managed to make two enormous mistakes.  One, he used a series that the authors of the series very specifically caution has been disturbed and is not a valid proxy for the last 200-300 years.  And two, he inverts the whole series!  Instead of showing it decreasing in the last 200 years (again due to the corruption the authors warned about), he shows it upside down, increasing in the last 200 years, which then helps him build his hockey stick on absolutely false data.

One might argue that this is just the indictment of one scientist, but everyone in the profession seems to rally around and defend this one scientist, and the flaws listed above have been public for a while and absolutely no one seems interested in demanding Mann correct his numbers.  In fact, most climate scientists spend their time shooting the messenger (Steve McIntyre).

Uh Oh. I Think I Am On NASA's S-List

This screen shot was sent by a reader, who titled the email "you have hit the big time."  I suppose I have, or at least I have really ticked off James Hansen and Gavin Schmidt at NASA.  It appears that this site has been added to the list of sites blocked by the NASA servers as ostensibly being sexually explicit.  Well, I guess we have caught the GISS with their pants down a few times....


As usual, you may click on the image for the full-size version.  Thanks to a reader, who asked only that I hide his/her IP address.

Update: From the archives:

The top climate scientist at NASA says the Bush administration has tried to stop him from speaking out since he gave a lecture last month calling for prompt reductions in emissions of greenhouse gases linked to global warming.

The scientist, James E. Hansen, longtime director of the agency's Goddard Institute for Space Studies, said in an interview that officials at NASA headquarters had ordered the public affairs staff to review his coming lectures, papers, postings on the Goddard Web site and requests for interviews from journalists.

Dr. Hansen said he would ignore the restrictions. "They feel their job is to be this censor of information going out to the public," he said.

OK, I kind of mostly don't think there is anything sinister here.  Coyote's Law tells us that this is much more likely to be incompetence rather than evil intent.  But it would be interesting to see how Dr. Hansen would react if, say, the RealClimate site had been similarly filtered.  Anyone want to bet he would have thrown a conspiracy-laden hissy fit?

Minor Site Redesign

I am doing a bit of site redesign as my CSS skills improve.  All of this is a prelude to my pending attempt to move this entire beast over to Wordpress, a goal mainly thwarted right now by trying to preserve all the permalinks at the same addresses.

Anyway, I have a new page with all my published books and Powerpoint presentations here.  I have a page collecting all my videos here.   Since YouTube crunches all the videos to a resolution too small to really read my charts well, I have also set up a streaming video site with full resolution videos here.  All of these sites are easily reachable by the new menu bar across the top of the site.

Polar Amplification

Climate models generally say that surface warming on the Earth from greenhouse gasses should be greater at the poles than at the tropics.  This is called "polar amplification."  I don't know if the models originally said this, or if it was observed that the poles were warming more so it was thereafter built into the models, but that's what they say now.  This amplification is due in part to how climate forcings around the globe interact with each other, and in part due to hypothesized positive feedback effects at the poles.  These feedback effects generally center around increases in ice melts and shrinking of sea ice extents, which causes less radiative energy to be reflected back into space and also provides less insulation of the cooler atmosphere from the warmer ocean.

In response to polar amplification, skeptics have often shot back that there seems to be a problem here, as while the North Pole is clearly warming, it can be argued the South Pole is cooling and has seen some record high sea ice extents at the exact same time the North Pole has hit record low sea ice extents.

Climate scientists now argue that by "polar amplification" they really only meant the North Pole.  The South Pole is different, say some scientists (and several commenters on this blog), because the larger ocean extent in the Southern Hemisphere has always made it less susceptible to temperature variations.  The latter is true enough, though I am not sure it is at all relevant to this issue.  In fact, per this data from Cryosphere Today, the seasonal change in sea ice area is larger in the Antarctic than the Arctic, which might argue that the south should see more sea ice extent.  Anyway, even the realclimate folks have never doubted it applied to the Antarctic; they just say it is slow to appear.

Anyway, I won't go into the whole Antarctic thing more (except maybe in a postscript) but I do want to ask a question about Arctic amplification.  If the amplification comes in large part due to decreased albedo and more open ocean surface, doesn't that mean most of the effect should be visible in summer and fall?  This would particularly be our expectation when we recognize that most of the recent anomaly in sea ice extent in the Arctic has been in summer.  I will repeat this chart just to remind you:



You can see that July-August-September are the biggest anomaly periods.  I took the UAH temperature data for the Arctic, and did something to it I had not seen before -- I split it up into seasons.  Actually, I split it up into quarters, but these come within 8 days or so of matching the seasons.  Here is what I found (I used 5-year moving averages because the data is so volatile it was hard to eyeball a trend; I also set each of the 4 seasonal anomalies individually to zero using 1979-1989 as the base period).


I see no seasonal trend here.  In fact, winter and spring have the highest anomalies vs. the base period, but the differences are so small currently as to be insignificant.  If polar amplification were occurring and were the explanation for the North Pole warming more than the rest of the Earth (by far) over the last 30 years, shouldn't I see it in the seasonal data?  I am honestly curious, and would like comments.
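For anyone who wants to replicate this, the bookkeeping looks roughly like the sketch below.  A synthetic series with a uniform trend stands in for the UAH Arctic anomalies, just to exercise the code:

```python
from statistics import mean

# Synthetic stand-in for a monthly anomaly series: (year, month, anomaly).
# A flat uniform trend across all months, purely to exercise the bookkeeping.
data = [(y, m, 0.02 * (y - 1979)) for y in range(1979, 2009) for m in range(1, 13)]

# Quarters that approximately match the seasons (December is taken from the
# same calendar year here for simplicity)
SEASONS = {"DJF": (12, 1, 2), "MAM": (3, 4, 5), "JJA": (6, 7, 8), "SON": (9, 10, 11)}

def seasonal_means(records, months):
    # average the three months of one season within each calendar year
    out = {}
    for y in sorted({yy for yy, _, _ in records}):
        out[y] = mean(a for yy, m, a in records if yy == y and m in months)
    return out

def rebase(series, base_years):
    # zero the series on its base-period mean
    base = mean(series[y] for y in base_years)
    return {y: v - base for y, v in series.items()}

def moving_avg(series, window=5):
    # trailing moving average, keyed by the last year of each window
    years = sorted(series)
    return {years[i + window - 1]: mean(series[y] for y in years[i:i + window])
            for i in range(len(years) - window + 1)}

base = range(1979, 1990)
winter = moving_avg(rebase(seasonal_means(data, SEASONS["DJF"]), base))
summer = moving_avg(rebase(seasonal_means(data, SEASONS["JJA"]), base))
```

With real data, polar amplification driven by summer ice loss would show up as the summer series pulling away from the winter one; with this uniform synthetic trend the two come out identical.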

Postscript:  Gavin Schmidt (who else) and Eric Steig have an old article in RealClimate if you want to read their Antarctic apologia.  It is kind of a funny article, if one asks himself "how many of the statements they make discounting Antarctic cooling are identical to the ones skeptics use in reverse?"  Here are a couple of gems:

It is important to recognize that the widely-cited “Antarctic cooling” appears, from the limited data available, to be restricted only to the last two decades

Given that this was written in 2004, he means restricted to 1984-2004.  Unlike global warming?  By the way, he would see it for much longer than 20 years if these NASA scientists were not so hostile to space technologies (i.e., satellite measurement).


It gets better.  They argue:

Additionally, there is some observational evidence that atmospheric dynamical changes may explain the recent cooling over parts of Antarctica.

Thompson and Solomon (2002) showed that the Southern Annular Mode (a pattern of variability that affects the westerly winds around Antarctica) had been in a more positive phase (stronger winds) in recent years, and that this acts as a barrier, preventing warmer air from reaching the continent.

Interestingly, these same guys now completely ignore the same type of finding when it is applied to North Pole warming.  Of course, this finding was made by a group entirely hostile to folks like Schmidt at NASA.  It comes from... NASA:

A new NASA-led study found a 23-percent loss in the extent of the Arctic's thick, year-round sea ice cover during the past two winters. This drastic reduction of perennial winter sea ice is the primary cause of this summer's fastest-ever sea ice retreat on record and subsequent smallest-ever extent of total Arctic coverage. ...

Nghiem said the rapid decline in winter perennial ice the past two years was caused by unusual winds. "Unusual atmospheric conditions set up wind patterns that compressed the sea ice, loaded it into the Transpolar Drift Stream and then sped its flow out of the Arctic," he said. When that sea ice reached lower latitudes, it rapidly melted in the warmer waters

I think I am going to put this into every presentation I give.  They say:

First, short term observations should be interpreted with caution: we need more data from the Antarctic, over longer time periods, to say with certainly what the long term trend is. Second, regional change is not the same as global mean change.

Couldn't agree more.  Practice what you preach, though.  Y'all are the same guys raising a fuss over warming on the Antarctic Peninsula and the Larsen Ice Shelf, less than 2% of Antarctica, which in turn is only a small part of the globe.

I will give them the last word, from 2004:

 In short, we fully expect Antarctica to warm up in the future.

Of course, if they get the last word, I get the last chart (again from those dreaded satellites - wouldn't life be so much better at NASA without satellites?)


Update:  I ran the same seasonal analysis for many different areas of the world.  The one area where I got a strong seasonal difference that made sense was for the Northern land areas above the tropics.


This is roughly what one would predict from CO2 global warming (or other natural forcings, by the way).  The most warming is in the winter, when reduced snow cover area reduces albedo and so provides positive feedback, and when cold, dry night air is thought to be more sensitive to such forcings. 

For those confused -- the ocean sea ice anomaly is mainly in the summer, the land snow/ice extent anomaly will appear mostly in the winter.

Black Carbon and Arctic Ice

My company runs a snow play area north of Flagstaff, Arizona.  One of the problems with this location is that the main sledding runs are on a black cinder hill.  Covered in snow, this is irrelevant.  But once the smallest hole opens up to reveal the black cinders underneath, the hole opens and spreads like crazy.  The low albedo cinders absorb heat much faster than reflective white snow, and then spread that heat into the snow and melts it.

Anthony Watt does an experiment with ash and snow in his backyard, and the effects are dramatic.

Even tiny amounts of soot pollution can induce high amounts of melting.  There is little or no ash at upper right.  Small amounts of ash in the lower and left areas of the photo cause significant melting at the two-hour mark in the demonstration.

I won't steal his thunder by taking his pictures, but you should look at them -- as the saying goes, they are worth a thousand words.

We know that Chinese coal plants pump out a lot of black carbon soot that travels around the world and deposits itself over much of the northern hemisphere.  We can be pretty sure a lot of this carbon ends up on the Arctic ice cap, and as such contributes to an acceleration of melting.

I've tried to do a thought experiment to think about what we would expect to see if this soot were driving a measurable percentage of Arctic ice melt.  It seems fairly certain that the soot would have limited effects during the season when new snow is falling.  Even a thin layer of new snow on top of deposited carbon would help mitigate its albedo-reducing effect.  So we would expect winter ice to look about like it has in the past, but summer ice, after the last snowfalls, to melt more rapidly than in the past.  Once the seasons cool off again, when new ice is forming fresh without carbon deposits and snow again begins to fall, we would expect a catch-up effect where sea ice might increase very rapidly to return to winter norms.

Here is the Arctic ice chart from the last several years:


Certainly consistent with our thought experiment, but not proof by any means.  The last 2 years have shown very low summer ice conditions, but mostly normal/average winter extent.  One way we might get some insights into cause and effect is to look at temperatures.  If the last 2 years have had the lowest summer sea ice extents in 30 years, did they have the highest temperatures?


Not really, but it may be that past warming has had a lag effect via ocean temperatures.

The point is that I am not opposed to the idea that there can be anthropogenic effects on the climate, and it looks like black carbon deposits might have a real negative impact on sea ice.  If that were the case, this is really good news.  It is a LOT easier and cheaper to mitigate black carbon from combustion (something we have mostly but not completely done in the US) than it is to mitigate CO2 (which is a fundamental combustion product).

Don't Count Those Skeptics Out

From Mark Skousen in "The Making of Modern Economics"

Ironically, by the time of the thirteenth edition [of Paul Samuelson's popular economics textbook], right before the Berlin Wall was torn down, Samuelson and Nordhaus confidently declared, "The Soviet economy is proof that, contrary to what many skeptics believed [a reference to Mises and Hayek], a socialist command economy can function and even thrive."  From this online excerpt.


Your One-Stop Climate Panic Resource

Absolutely classic video -- a must see:

From Marc Morano via Tom Nelson:

This 9 ½ minute video brilliantly and accurately (it is not a spoof!) shows the absurdity of today’s man-made global warming fear campaign. It appears to have been produced by a group called Conservative Cavalry. They really did their homework and put together quite a show. This video should be shown in classrooms across the country and in newsrooms!

The video is based on the website “A complete list of things caused by global warming.”

The website is run by Dr. John Brignell, a UK Emeritus Engineering Professor at the University of Southampton who held the Chair in Industrial Instrumentation there.

This Just In, From Climate Expert Barack Obama

Via Tom Nelson:

"Few challenges facing America -- and the world – are more urgent than combating climate change," he says in the video. "The science is beyond dispute and the facts are clear. Sea levels are rising. Coastlines are shrinking. We’ve seen record drought, spreading famine, and storms that are growing stronger with each passing hurricane season. Climate change and our dependence on foreign oil, if left unaddressed, will continue to weaken our economy and threaten our national security.

From Ryan Maue of FSU comes Accumulated Cyclone Energy, the best single metric of the strength of hurricane seasons:


Coming soon, Obama tells that story about this guy he knows who swears his grandmother tried to dry her cat by putting it in the microwave.


NOAA Adjustments

Anthony Watts has an interesting blink comparison between the current version of history from the GISS and their version of history in 1999.  It is amazing that all of the manual adjustments they add to the raw data constantly have the effect of increasing historical warming.  By continuing to adjust recent temperatures up, and older temperatures down, they are implying that current measurement points have a cooling bias vs. several decades ago.  REALLY?  This makes absolutely no sense given what we now know via Anthony Watts's efforts to document station installation details at

I created a blink comparison a while back that was related but slightly different.  I created a blink comparison to show the effect of NOAA manual adjustments to the raw temperature data. 


My point was not that all these adjustments were unnecessary (the time of observation adjustment is required, though I have always felt it to be exaggerated).  But all of the adjustments are upwards, even those for station quality.  The net effect is that there is no global warming signal in the US, at least in the raw data.  The global warming signal emerges entirely from the manual adjustments.  Which causes one to wonder about the signal-to-noise ratio here.  And increases the urgency to get more scrutiny on these adjustments.

It only goes through 2000, because I only had the adjustment numbers through 2000.  I will see if I can update this.

On Quality Control of Critical Data Sets

A few weeks ago, Gavin Schmidt of NASA came out with a fairly petulant response to critics who found an error in NASA's GISS temperature database.  Most of us spent little time criticizing this particular error, but instead criticized Schmidt's unhealthy distaste for criticism and the general sloppiness and lack of transparency in the NOAA and GISS temperature adjustment and averaging process.

I don't want to re-plow old ground, but I can't resist highlighting one irony.  Here is Gavin Schmidt in his recent post on RealClimate:

It is clear that many of the temperature watchers are doing so in order to show that the IPCC-class models are wrong in their projections. However, the direct approach of downloading those models, running them and looking for flaws is clearly either too onerous or too boring.

He is criticizing skeptics for not digging into the code of the individual climate models, and focusing only on how their output forecasts hold out (a silly criticism I dealt with here).  But this is EXACTLY what folks like Steve McIntyre have been trying to do for years with the NOAA, GHCN, and GISS temperature metric code.  Finding nothing about the output that makes sense given the raw data, they have asked to examine the source code.  And they have met with resistance at every turn by, among others, Gavin Schmidt.  As an example, here is what Steve gets typically when he tries to do exactly as Schmidt asks:

I'd also like to report that over a year ago, I wrote to GHCN asking for a copy of their adjustment code:

I’m interested in experimenting with your Station History Adjustment algorithm and would like to ensure that I can replicate an actual case before thinking about the interesting statistical issues.  Methodological descriptions in academic articles are usually very time-consuming to try to replicate, if indeed they can be replicated at all. Usually it’s a lot faster to look at source code in order to clarify the many little decisions that need to be made in this sort of enterprise. In econometrics, it’s standard practice to archive code at the time of publication of an article – a practice that I’ve (by and large unsuccessfully) tried to encourage in climate science, but which may interest you. Would it be possible to send me the code for the existing and the forthcoming Station History adjustments. I’m interested in both USHCN and GHCN if possible.

To which I received the following reply from a GHCN employee:

You make an interesting point about archiving code, and you might be encouraged to hear that Configuration Management is an increasingly high priority here. Regarding your request — I'm not in a position to distribute any of the code because I have not personally written any homogeneity adjustment software. I also don't know if there are any "rules" about distributing code, simply because it's never come up with me before.

I never did receive any code from them.

Here, by the way, is a statement from the NOAA web site about the GHCN data:

Both historical and near-real-time GHCN data undergo rigorous quality assurance reviews. These reviews include preprocessing checks on source data, time series checks that identify spurious changes in the mean and variance, spatial comparisons that verify the accuracy of the climatological mean and the seasonal cycle, and neighbor checks that identify outliers from both a serial and a spatial perspective.

But we will never know, because they will not share the code developed at taxpayer expense by government employees to produce official data.

A year or so ago, after intense pressure and the revelation of another mistake (again found by the McIntyre/Watts online communities), the GISS did finally release some of their code.  Here is what was found:

Here are some more notes and scripts in which I've made considerable progress on GISS Step 2. As noted on many occasions, the code is a demented mess - you'd never know that NASA actually has software policies (e.g. here or here). I guess that Hansen and associates regard themselves as being above the law. At this point, I haven't even begun to approach analysis of whether the code accomplishes its underlying objective. There are innumerable decoding issues - John Goetz, an experienced programmer, compared it to descending into the hell described in a Stephen King novel. I compared it to the meaningless toy in the PPM children's song - it goes zip when it moves, bop when it stops and whirr when it's standing still. The endless machinations with binary files may have been necessary with Commodore 64s, but are totally pointless in 2008.

Because of the hapless programming, it takes a long time and considerable patience to figure out what happens when you press any particular button. The frustrating thing is that none of the operations are particularly complicated.

So Schmidt's encouragement that skeptics should go dig into the code was a) obviously not meant to be applied to his code, and b) roughly equivalent to a mom answering her kids' complaint that they were bored and had nothing to do with "you can clean your rooms" -- something that looks good in the paper trail but is not really meant to be taken seriously.  As I said before:

I am sure Schmidt would love us all to go off on some wild goose chase in the innards of a few climate models and relent on comparing the output of those models against actual temperatures.

Responses to Gavin Schmidt, Part 2

OK, we continue to the final paragraph of Gavin Schmidt's post admitting a minor error in the October GISS numbers, and then proceeding to say that all the folks who pointed out the error are biased and unhelpful, in spite of the fact (or maybe because of the fact) that they found this error.

As I reviewed in part 1, most of the letter was just sort of petulant bad grace.  But this paragraph was worrisome, and I want to deal with it in more depth:

Which brings me to my last point, the role of models. It is clear that many of the temperature watchers are doing so in order to show that the IPCC-class models are wrong in their projections. However, the direct approach of downloading those models, running them and looking for flaws is clearly either too onerous or too boring. Even downloading the output (from here or here) is eschewed in favour of firing off Freedom of Information Act requests for data already publicly available - very odd. For another example, despite a few comments about the lack of sufficient comments in the GISS ModelE code (a complaint I also often make), I am unaware of anyone actually independently finding any errors in the publicly available Feb 2004 version (and I know there are a few). Instead, the anti-model crowd focuses on the minor issues that crop up every now and again in real-time data processing hoping that, by proxy, they'll find a problem with the models.

I say good luck to them. They'll need it.

Since when has direct comparison of forecast models against observation and measurement been the wrong way to validate or invalidate the forecast or model?  I am sure there were lots of guys who went through the Principia Mathematica and tore apart the math and equations to make sure they balanced, but most of the validation consisted of making observations of celestial bodies to see if their motion fit the predicted results.  When Einstein said time would pass at a different rate in a gravity well, scientists took atomic clocks up in high-altitude airplanes to see if his predictions matched measured results.  And physicists can play with models and equations all day, but nothing they do with the math will be as powerful as finding a Higgs boson at the LHC.
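The "old and outdated" validation practice mocked in the site header can be written down in a few lines: compare the observed trend to the forecast trend and ask how many of the forecast's own standard deviations separate them.  The figures below are illustrative placeholders, not actual published IPCC or GISS values.

```python
# Sketch of forecast validation: is the observation consistent with the
# forecast, given the forecast's own stated uncertainty?  All numbers
# are hypothetical, for illustration only.

def sigmas_from_forecast(forecast, sigma, observed):
    """How many standard deviations separate observation from forecast?"""
    return abs(observed - forecast) / sigma

# Hypothetical case: a model forecasts +0.25 C/decade with a stated
# one-sigma uncertainty of 0.05, while observations show +0.05 C/decade.
z = sigmas_from_forecast(forecast=0.25, sigma=0.05, observed=0.05)
print("observation is %.1f sigma from the forecast" % z)

if z > 2:
    print("flag the forecast for re-examination")
```

Under these made-up numbers the observation sits four standard deviations from the forecast, which in most natural sciences would trigger a re-examination of the model rather than of the observation.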

Look, unlike some of the commenters Schmidt quoted, there is no reason to distrust a guy because his staff made a data error.  But I think there is a big freaking reason to distrust someone who gets huffy that people are using actual data measurements to test his prediction models.

There is probably a reason for Schmidt to be sensitive here.  We know that Hansen's 1988 forecasts don't validate at all against actual data from the last 20 years (below uses the Hansen A case from his Congressional testimony, the case which most closely matches actual CO2 production since the speech).


More recent forecasts obviously have had less time to validate.  Many outsiders have found that current temperatures fall outside the predicted range of the IPCC forecasts, and those who have found temperatures within the error bars of the forecasts have generally done so by combining large error bars, white noise, and various smoothing approaches to just eke actual temperatures into the outer molecular layers of the bottom edge of the forecast band.

As to the rest, I am not sure Schmidt knows who has and has not poked around in the innards of the models -- has he studied all the referrer logs for their web sites?  But to some extent this is beside the point.  Those of us who have a lot of modeling experience in complex systems (my experience is in both econometrics and in mechanical control systems) distrust models and would not get any warm fuzzies from poking around in their innards.  Every modeler of chaotic systems knows that it is perfectly possible to string together all sorts of logically sound and reasonable assumptions and algorithms, only to find that the whole mass of them combined spits out a meaningless mess.  Besides, there are, what, 60 of these things?  More?  I could spend six months ripping the guts out of one of them only to have Schmidt then say, well, there are 59 others; that one does not really affect anything.  I mean, can't you just see it -- it would be entirely equivalent to the reaction every time an error or problematic measurement station is found in the GISS data set.  I am sure Schmidt would love us all to go off on some wild goose chase in the innards of a few climate models and relent on comparing the output of those models against actual temperatures.

No, I am perfectly happy to accept the IPCC's summary of these models and test this unified prediction against history.  I am sure that no matter what the temperature is this month, some model somewhere in the world came close.  But how does that help, unless it is the same model that is right month after month?  Then I might get excited that someone was on to something.  But merely saying that current temperatures fall into a range some model predicts just says that there is a lot of disagreement among the models, which in turn raises my doubts about them.

The last sentence of Schmidt's paragraph is just plain wrong.  I have never seen anyone who is out there really digging into this stuff (and not just tossing in comments) claim that errors in the GISS temperature anomaly number imply the models are wrong, except of course to the extent that the models are calibrated to an incorrect number.  Most everyone who looks at this stuff skeptically understands that the issues with the GISS temperature metric are very different from the issues with the models.

In a nutshell, skeptics are concerned with the GISS temperature numbers because of the signal-to-noise problem, and because of skepticism that the GISS has really hit on algorithms that can, blind to station configuration, correct for biases and errors in the data.  I have always felt that rather than eliminate biases, the gridcell approach simply spreads them around like peanut butter.
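The "peanut butter" image can be illustrated with a toy calculation (the station values are invented): averaging stations within a grid cell does not remove a biased station's contribution, it just dilutes it across the cell's anomaly.

```python
# Sketch: simple gridcell averaging dilutes, but does not remove, a
# station bias.  All station values are hypothetical.

def cell_anomaly(stations):
    """Naive grid cell anomaly: unweighted mean of member stations."""
    return sum(stations) / len(stations)

true_anomaly = 0.1     # what every station "should" read, degrees C
urban_bias = 2.0       # one station sits in an uncorrected heat island

stations = [true_anomaly] * 4 + [true_anomaly + urban_bias]
error = cell_anomaly(stations) - true_anomaly

print("cell anomaly error from one biased station: %.2f C" % error)
```

With one biased station in five, a +2.0C station bias becomes a +0.4C error in the cell anomaly -- smaller, but smeared over the whole cell, and of the same order as the warming signal being sought.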

My concern with the climate models is completely different.  I won't go into them all, but they include:

  • the inherent impossibility of modeling such a chaotic system
  • scientists assume CO2 drives temperatures, so the models they build unsurprisingly result in CO2 driving temperature
  • modelers assume WAY too much positive feedback.  No reasonable person, if they step back from it, should really be able to assume so much positive feedback in a long-term stable system
  • When projected backwards, modelers' assumptions imply far more warming than we have experienced, and it takes heroic assumptions, tweaks, and plugs to make the models back-cast reasonably well.
  • It's insane to ignore changes in solar output, and/or to assume that the sun over the last 40 years has been in a declining cycle
  • Many models, by their own admission, omit critical natural cycles like ENSO/PDO.
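The positive-feedback point in the list above is simple arithmetic.  In the standard linear-feedback formulation, a no-feedback warming dT0 is amplified to dT0 / (1 - f) for a net feedback fraction f; the values below are illustrative, not taken from any particular model.

```python
# Sketch of the linear feedback arithmetic behind the bullet above:
# a no-feedback warming dT0 becomes dT0 / (1 - f) with net feedback
# fraction f.  High-sensitivity results implicitly assume f well above
# 0.5, which is a lot to assume for a long-term stable system.

def with_feedback(dT0, f):
    assert f < 1.0, "f >= 1 implies a runaway system"
    return dT0 / (1.0 - f)

dT0 = 1.0   # illustrative no-feedback warming, degrees C
for f in (0.0, 0.5, 0.75):
    print("f = %.2f -> %.1f C" % (f, with_feedback(dT0, f)))
```

Note how steep the curve is: f = 0.5 doubles the base warming and f = 0.75 quadruples it, which is why the feedback assumption, not the direct CO2 effect, carries most of the weight in high-end forecasts.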

By the way, my simple hypothesis to describe past and future warming is here.

As a final note, the last little dig on Steve McIntyre (the bit about FOIA requests) is really low.  First, it is amazing to me that, like Hogwarts students who can't say the word Voldemort, the GISS folks just can't bring themselves to mention his name.  Second, Steve has indeed filed a number of FOIA requests on Michael Mann, the GISS, and others.  Each time he has a pretty good paper trail of folks denying him data (Here is the most recent for the Santer data). Almost every time, the data he is denied is taxpayer funded research, often by public employees, or is data that the publication rules of a particular journal require to be made public.  And remember the source for this -- this is coming from the GISS, which resisted McIntyre's calls for years to release their code  (publicly funded code of a government organization programmed by government employees to produce an official US statistic) for the GISS grid cell rollup of the station data, releasing the code only last year after McIntyre demonstrated an error in the code based on inspection of the inputs and outputs.

At the end of the day, Hansen and Schmidt are public employees who like having access to government budgets and the instant credibility the NASA letterhead provides them, but don't like the public scrutiny that goes with it.  Suck it up guys.  And as to your quest to rid yourself of these skeptic gadflies, I will quote your condescending words back to you:  Good Luck.  You'll need it.

Sorry Dr. Schmidt, But I am Not Feeling Guilty Yet (Part 1)

By accident, I have been drawn into a discussion surrounding a fairly innocent mistake made by NASA's GISS in their October global temperature numbers.  It began for me when I compared the GISS and UAH satellite numbers for October and saw an incredible divergence.  For years these two data sets have shown a growing gap, but by tiny increments.  But in October they really went in opposite directions.  I used this occasion to call on the climate community to make a legitimate effort at validating and reconciling the GISS and satellite data sets.

Within a day of my making this post, several folks started noticing some oddities in the GISS October data, and eventually the hypothesis emerged that the high number was the result of reusing September numbers for certain locations in the October data set.  Oh, OK.  A fairly innocent and probably understandable mistake, and far more minor than the more systematic error a similar group of skeptics (particularly Steve McIntyre, the man whose name the GISS cannot speak) found in the GISS data set a while back.  The only amazing thing to me was not the mistake, but the fact that there were laymen out there on their own time who figured out the error so quickly after the data release.  I wish there were a team of folks following me around, fixing material errors in my analysis before I ran too far with it.

So Gavin Schmidt of NASA comes out a day or two later and says, yep, they screwed up.  End of story, right?  Except Dr. Schmidt chose his blog post about the error to lash out at skeptics.  This is so utterly human -- in the light of day, most will admit it is a bad idea to lash out at your detractors in the same instant you admit they caught you in an error (however minor).  But it is such a human need to try to recover and soothe one's own ego at exactly that moment.  And thus we get Gavin Schmidt's post, which I would like to highlight a bit below.

He begins with a couple of paragraphs on the error itself.  I will skip these, but you are welcome to check them out at the original.  Nothing about the error seems in any way outside the category of "mistakes happen."  Had the post ended with something like "Many thanks to the volunteers who so quickly helped us find this problem," I would not even be posting.  But, as you can guess, this is not how it ends.

It's clearly true that the more eyes there are looking, the faster errors get noticed and fixed. The cottage industry that has sprung up to examine the daily sea ice numbers or the monthly analyses of surface and satellite temperatures, has certainly increased the number of eyes and that is generally for the good. Whether it's a discovery of an odd shift in the annual cycle in the UAH MSU-LT data, or this flub in the GHCN data, or the USHCN/GHCN merge issue last year, the extra attention has led to improvements in many products. Nothing of any consequence has changed in terms of our understanding of climate change, but a few more i's have been dotted and t's crossed.

Uh, OK, but it is a bit unfair to characterize the "cottage industry" looking over Hansen's and Schmidt's shoulders as only working out at the third decimal place.  Skeptics have pointed out what they consider to be fundamental issues in some of the GISS's analytical approaches, including its methods for statistically compensating for biases and discontinuities in the measurement data it rolls up into a global temperature anomaly.  A fairly large body of amateur and professional work exists questioning the NOAA and GISS methodologies, which often result in manual adjustments to the raw data larger in magnitude than the underlying warming signal being measured.  I personally think there is a good case to be made that the GISS approach is not sufficient to handle this low signal-to-noise data, and that the GISS has descended into "see no evil, hear no evil" mode in ignoring the station survey approach being led by Anthony Watts.  Just because Schmidt does not agree doesn't mean that the cause of climate science is not being advanced.

The bottom line, as I pointed out in my original post, is that the GISS anomaly and the satellite-measured anomaly are steadily diverging.  Given some of the inherent biases and problems of surface temperature measurement, and NASA's commitment to space technology as well as its traditional GISS metric, it's amazing to me that Schmidt and Hansen are effectively punting instead of doing any serious work to reconcile the two metrics.  So it is not surprising that others, including us lowly amateurs, rush into the vacuum left by Schmidt.

By the way, this is the second time in about a year that the GISS has admitted an error in its data set but petulantly refused to mention the name of the person who helped find it.

But unlike in other fields of citizen-science (astronomy or phenology spring to mind), the motivation for the temperature observers is heavily weighted towards wanting to find something wrong. As we discussed last year, there is a strong yearning among some to want to wake up tomorrow and find that the globe hasn't been warming, that the sea ice hasn't melted, that the glaciers have not receded and that indeed, CO2 is not a greenhouse gas. Thus when mistakes occur (and with science being a human endeavour, they always will) the exuberance of the response can be breathtaking - and quite telling.

I am going to make an admission here that Dr. Schmidt clearly thinks is evil:  Yes, I want to wake up tomorrow to proof that the climate is not changing catastrophically.  I desperately hope Schmidt is overestimating future anthropogenic global warming.  Here is something to consider.  Take two different positions:

  1. I hope global warming theory is correct and the world faces stark tradeoffs between environmental devastation and continued economic growth and modern prosperity
  2. I hope global warming theory is over-stated and that these tradeoffs are not as stark.

Which is more moral?  Why do I have to apologize for being in camp #2?  Why isn't it equally "telling" that Dr. Schmidt apparently puts himself in camp #1?

Of course, we skeptics would say the same of Schmidt.  Just as we would like to find a cooler number, we believe he wants to find a warmer number.  Right or wrong, most of us see a pattern in the fact that the GISS seems to constantly find ways to adjust the numbers to show a larger historic warming, but requires a nudge from outsiders to recognize when its numbers are too high.  The fairest way to put it is that one group expects to see lower numbers and so tends to put more scrutiny on the high numbers, and the other does the opposite.

Really, I don't think that Dr. Schmidt is a very good student of the history of science when he argues that this is somehow unique to or an aberration in modern climate science.  Science has often depended on rivalries to ensure that skepticism is applied to both positive and negative results of any experiment.  From phlogiston to plate tectonics, from evolution to string theory, there is really nothing new in the dynamic he describes.

A few examples from the comments at Watt's blog will suffice to give you a flavour of the conspiratorial thinking: "I believe they had two sets of data: One would be released if Republicans won, and another if Democrats won.", "could this be a sneaky way to set up the BO presidency with an urgent need to regulate CO2?", "There are a great many of us who will under no circumstance allow the oppression of government rule to pervade over our freedom—-PERIOD!!!!!!" (exclamation marks reduced enormously), "these people are blinded by their own bias", "this sort of scientific fraud", "Climate science on the warmer side has degenerated to competitive lying", etc… (To be fair, there were people who made sensible comments as well).

Dr. Schmidt, I am a pretty smart person.  I have lots of little diplomas on my wall with technical degrees from Ivy League universities.  And you know what - I am sometimes blinded by my own biases.  I consider myself a better thinker, a better scientist, and a better decision-maker because I recognize that fact.  The only person who I would worry about being biased is the one who swears that he is not.

By the way, I thought the little game of mining the comments section of Internet blogs to discredit the proprietor went out of vogue years ago, or at least has been relegated to the more extreme political  blogs like Kos or LGF.  Do you really think I could not spend about 12 seconds poking around environmentally-oriented web sites and find stuff just as unfair, extreme, or poorly thought out?

The amount of simply made up stuff is also impressive - the GISS press release declaring the October the 'warmest ever'? Imaginary (GISS only puts out press releases on the temperature analysis at the end of the year). The headlines trumpeting this result? Non-existent. One clearly sees the relief that finally the grand conspiracy has been rumbled, that the mainstream media will get its comeuppance, and that surely now, the powers that be will listen to those voices that had been crying in the wilderness.

I am not quite sure what he is referring to here.  I will repeat what I wrote.  I said "The media generally uses the GISS data, so expect stories in the next day or so trumpeting 'Hottest October Ever.'"  I leave it to readers to decide if they find my supposition unwarranted.  However, I encourage the reader to consider the 556,000 Google results, many of them media stories, that come up in a search for the words "hottest month ever."  Also, while the GISS may not issue monthly press releases for this type of thing, the NOAA and the British Met Office clearly do, and James Hansen has made many verbal statements of this sort in the past.

By the way, keep in mind that Dr. Schmidt likes to play Clinton-like games with words.  I recall one episode last year when he said that climate models did not use the temperature station data, so they cannot be tainted with any biases found in the stations.  Literally true, I guess, because the models use gridded cell data.  However, this gridded cell data is built up from the station data using a series of correction and smoothing algorithms that many find suspect.  Keep this in mind when parsing Dr. Schmidt.

Alas! none of this will come to pass. In this case, someone's programming error will be fixed and nothing will change except for the reporting of a single month's anomaly. No heads will roll, no congressional investigations will be launched, no politicians (with one possible exception) will take note. This will undoubtedly be disappointing to many, but they should comfort themselves with the thought that the chances of this error happening again has now been diminished. Which is good, right?

I'm narrowly fine with the outcome.  Certainly no heads should roll over a minor data error, and I am not aware that anyone like Watts or McIntyre suggested they should.  However, the GISS should be embarrassed that it has not addressed and been more open about the issues in its grid cell correction/smoothing algorithms, and it really owes us an explanation of why no one there is even trying to reconcile the growing differences with the satellite data.

In contrast to this molehill, there is an excellent story about how the scientific community really deals with serious mismatches between theory, models and data. That piece concerns the 'ocean cooling' story that was all the rage a year or two ago. An initial analysis of a new data source (the Argo float network) had revealed a dramatic short term cooling of the oceans over only 3 years. The problem was that this didn't match the sea level data, nor theoretical expectations. Nonetheless, the paper was published (somewhat undermining claims that the peer-review system is irretrievably biased) to great acclaim in sections of the blogosphere, and to more muted puzzlement elsewhere. With the community's attention focused on this issue, it wasn't however long before problems turned up in the Argo floats themselves, but also in some of the other measurement devices - particularly XBTs. It took a couple of years for these things to fully work themselves out, but the most recent analyses show far fewer of the artifacts that had plagued the ocean heat content analyses in the past. A classic example in fact, of science moving forward on the back of apparent mismatches. Unfortunately, the resolution ended up favoring the models over the initial data reports, and so the whole story is horribly disappointing to some.

OK, fine, I have no problem with this.  However (and I am sure Schmidt would deny this to his grave), he is FAR more supportive of open inspection of measurement sources that disagree with his hypothesis (e.g. Argo, UAH) than he is tolerant of scrutiny of his own methods.  Heck, until last year, he wouldn't even release most of his algorithms and code for the grid cell analysis that goes into the GISS metric, despite the fact that he is a government employee and the work is paid for with public funds.  If he is so confident, I would love to see him throw open the whole GISS measurement process to an outside audit.  We would ask the UAH and RSS guys to do the same.  Here is my prediction, and if I am wrong I will apologize to Dr. Schmidt, but I am almost positive that while the UAH folks would say yes, the GISS would say no.  The result, as he says, would likely be telling.

Which brings me to my last point, the role of models. It is clear that many of the temperature watchers are doing so in order to show that the IPCC-class models are wrong in their projections. However, the direct approach of downloading those models, running them and looking for flaws is clearly either too onerous or too boring. Even downloading the output (from here or here) is eschewed in favour of firing off Freedom of Information Act requests for data already publicly available - very odd. For another example, despite a few comments about the lack of sufficient comments in the GISS ModelE code (a complaint I also often make), I am unaware of anyone actually independently finding any errors in the publicly available Feb 2004 version (and I know there are a few). Instead, the anti-model crowd focuses on the minor issues that crop up every now and again in real-time data processing hoping that, by proxy, they'll find a problem with the models.

I say good luck to them. They'll need it.

Red Alert!  Red Alert!  Up to this point, the article was just petulant and bombastic.  But here, Schmidt becomes outright dangerous, suggesting a scientific process that is utterly without merit.  But I want to take some time on this, so I will pull this out into a second post I will label part 2.

Global Warming.... Accelerating?


Via here 

This is Getting Absurd

Update:  The gross divergence in October data reported below between the various metrics is explained by an error, as reported at the bottom.  The basic premise of the post, that real scientific work should go into challenging these measurement approaches and choosing the best data set, remains.

The October global temperature data highlights for me that it is time for scientists to quit wasting time screwing around with questions of whether global warming will cause more kidney stones, and address an absolutely fundamental question:  Just what is the freaking temperature?

Currently we are approaching the prospect of spending hundreds of billions of dollars, or more, to combat global warming, and we don't even know its magnitude or real trend, because the major temperature indices we possess are giving very different readings.  To oversimplify a bit, there are two competing methodologies that are giving two different answers.  NASA's GISS uses a melding of surface thermometer readings around the world to create a global temperature anomaly.  And the UAH uses satellites to measure temperatures of the lower or near-surface troposphere.  Each thinks it has the better methodology (with, oddly, NASA fighting against the space technology).  But they are giving us different answers.

For October, the GISS metric is showing the hottest October on record, nearly 0.8C hotter than it was in 1978, 30 years ago (from here).


However, the satellites are showing no such thing: a much cooler October, and a far smaller warming trend over the last 30 years (from here)


So which is right?  Well, the situation is not helped by the fact that the GISS metric is run by James Hansen, considered by skeptics to be a leading alarmist, and the UAH is run by John Christy, considered by alarmists to be an arch-skeptic.  The media generally uses the GISS data, so expect stories in the next day or so trumpeting "Hottest October Ever," which the Obama administration will wave around as justification for massive economic interventions.  But by satellite it will only be the 10th or so hottest in the last 30, and probably cooler than most other readings this century.

It is really a very frustrating situation.  It is as if two groups in the 17th century had two very different sets of observations of planetary motions that resulted in two different theories of gravity, and no one bothered to reconcile them.

It's amazing to me that the scientific community doesn't try to take this on.  If the NOAA wanted to do something useful other than just creating disaster pr0n, it could actually hold a conference on the topic, with critical reviews of each approach.  Why not have Christy and Hansen take turns in front of the group and defend their approaches like a doctoral thesis?  Nothing can replace surface temperature measurement before 1978, because we do not have satellite data before then.  But even so, discussion of earlier periods is important given the issues with NOAA and GISS manual adjustments to the data.

Though I favor the UAH satellite data (and prefer a UAH - Hadley CRUT3 splice for a longer time history), I'll try to present as neutrally as possible the pros and cons of each approach.

GISS Surface Temperature Record

+  Measures actual surface temperatures

+  Uses technologies that are time-tested and generally well-understood

+  Can provide a 100+ year history

- Subject to surface biases, including urban heat bias.  Arguments rage as to the size and correctability of these biases

- Coverage can range from dense to extremely spotty, with as little as 20 km and as much as 1,000 km between measurement sites

- Changing technologies and techniques, both at sea and on land, have introduced step-change biases

- Diversity of locations, management, and technology makes it hard to correct for individual biases

- Manual adjustments to the data to correct errors and biases are often as large as or larger than the magnitude of the signal (i.e. global warming) being measured.  Further, this adjustment process has historically been shrouded in secrecy and not subject to much peer review

- Most daily averages are based on the average of the high and low temperature, not a true integrated average
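The last point above is easy to demonstrate with a toy diurnal cycle (the 24 hourly readings below are invented): for an asymmetric day, the (Tmax + Tmin) / 2 convention can differ substantially from the true time-integrated mean.

```python
# Sketch of the min/max issue: for an asymmetric diurnal cycle, the
# (Tmax + Tmin) / 2 convention differs from the true time-integrated
# daily mean.  The 24 hourly readings are invented: a long cool night
# and a short afternoon warm spike.

hourly = [10.0] * 18 + [14.0, 18.0, 20.0, 18.0, 14.0, 12.0]

midrange = (max(hourly) + min(hourly)) / 2   # what the record stores
true_mean = sum(hourly) / len(hourly)        # what we actually want

print("min/max average: %.1f C" % midrange)
print("integrated mean: %.1f C" % true_mean)
```

Here the stored min/max average (15.0 C) overstates the integrated mean (11.5 C) by 3.5 degrees -- a large structural bias for any station whose diurnal shape changes over time.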

UAH Satellite Temperature Record

+  Not subject to surface biases or location biases

+  Good global coverage

+  Single technology and measurement point such that discovered biases or errors are easier to correct

-  Only 30 years of history

-  Still building confidence in the technology

-  Coverage of individual locations not continuous - dependent on satellite passes.

-  Not measuring the actual surface temperature, but the lower troposphere (debate continues as to whether these are effectively the same).

-  Single point of failure - system not robust to the failure of a single instrument.

-  I am not sure how much the UAH algorithms have been reviewed and tested by outsiders.

Update:  Well, this is interesting.  Apparently the reason October was so different between the two metrics was that one of the two sources made a mistake that substantially altered reported temperatures.  And the loser is ... the GISS, which apparently used the wrong Russian data for October, artificially inflating temperatures.  So long, "hottest October ever," though don't hold your breath for a front-page media retraction.

Sign This Guy Up for the IPCC!

via FailBlog:


Another Urban Heat Island Example

I do not claim that urban heat island effects are the only cause of measured surface warming -- after all, satellites are largely immune to UHI and have measured a (small) warming trend since they began measuring temperature in 1979. 

But I do think that the alarmist efforts to argue that UHI has no substantial, uncorrectable effect on surface temperature measurement are just crazy.  Even if one tries to correct for it, the magnitude can be so substantial (up to 10 degrees F or more) that even a small error in correcting for the effect yields big errors in trying to detect an underlying warming signal.

Just as a quick example, let's say the urban heat island effect in a city can be up to 10 degrees F.  And let's say by some miracle you came up with a reliable approach that corrected for 95% of this effect (and believe me, no one has an approach this good).  There would still be a 0.5F warming bias or error from the UHI effect, an amount roughly the same order of magnitude as the underlying warming signal we are trying to detect (or falsify).
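A back-of-the-envelope sketch of that arithmetic (both inputs are the illustrative figures from the paragraph above, not measured values):

```python
# Residual bias left by an imperfect urban heat island (UHI) correction.
# Both inputs are the illustrative numbers from the text.
uhi_effect_f = 10.0           # assumed peak UHI effect, degrees F
correction_efficiency = 0.95  # assumed (very optimistic) correction

residual_bias_f = uhi_effect_f * (1 - correction_efficiency)
print(round(residual_bias_f, 2))   # 0.5 degrees F of residual bias
```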

When my son and I ran a couple of transects of the Phoenix area around 10PM one winter evening, we found the city center to be 7 to 10 degrees F warmer than the outlying rural areas.  Anthony Watts did a similar experiment this week in Reno  (the similarity is not surprising, since he suggested the experiment to me in the first place).  He too found about a 10 degree F variation.  This experiment was a follow-on to this very complete post showing the range of issues with surface temperature measurement, via one example in Reno.

By the way, in the latter article he had this interesting chart with the potential upward bias added by an instrumentation switch at many weather stations:


This kind of thing happens in the instrumentation world, and is why numbers have to be adjusted from the raw data  (though these adjustments, even if done well, add error, as described above).  What has many skeptics scratching their heads is that despite this upward bias in the instrumentation switch, and the upward bias from many measurement points being near growing urban areas, the GISS and NOAA actually have an increasingly positive adjustment factor for the last couple of decades, not a negative one  (net of red, yellow, and purple lines here).   In other words, the GISS and NOAA adjustment factors imply that there is a net growing cooling bias in the surface temperature record in the last couple of decades that needs to be corrected.  This makes little sense to anyone whose main interest is not pumping up the official numbers to try to validate past catastrophic forecasts.

Update:  The NOAA's adjustment numbers imply a net cooling bias in station locations, but they do include a UHI correction component.  That correction is about 0.05C (0.09F).  This implies the average urban heat island effect on measurement points over the last 50 years is roughly 1/100th of the UHI effect we measured in Reno and Phoenix.  This seems really low, especially once one is familiar with the "body of work" of NOAA measurement stations as surveyed at Anthony's site.

Sun, PDO, and CO2

For those who have not seen it, Roy Spencer has a new paper on the PDO, clouds and temperature history.

I have never explicitly stated this, but my sense is that medium to long scale 20th century temperature trends can be explained mostly through three drivers:

1.  A cyclical variation driven by multi-decade oceanic cycles like the Pacific Decadal Oscillation (PDO):


2.  Changes in solar output, either directly as increased heating or indirectly via a variety of theories on things like cosmic rays and cloud formation:


3.  A long term trend of up to +0.05C per decade that may include a CO2-warming component. 

I am willing to posit a CO2 impact, net of feedbacks, of perhaps 0.5-1.0C over the next century.  This may appear low, but it is the only scale of number reasonably supported by history; any higher number would imply historical warming far greater than we have actually observed.  And even a number this high runs into the following problem:  there was probably a trend of about this magnitude emerging from the little ice age 200+ years ago and extending into the 20th century.  You can see it in the glacier numbers below:  (source)


Those who want to assign the post-1950 temperature trend, once the sun and the PDO are removed, to CO2 need to explain what was causing the nearly identical trend from 1800-1950, and why that effect conveniently switched off at the exact moment man's CO2 took over.  In the context of the glacier chart:  what was causing the glaciers to retreat in 1880, and why is that effect not the one at work today?

Global Warming ... Accelerating?

A week or two ago a "study" by the World Wildlife Fund got a lot of play in the media.  Its key conclusion:

The report says that the 2007 report from the Intergovernmental Panel on Climate Change (IPCC) - a study of global warming by 4,000 scientists from more than 150 countries which alerted the world to the possible consequences of global warming - is now out of date.

WWF's report, Climate Change: Faster, stronger, sooner, has updated all the scientific data and concluded that global warming is accelerating far beyond the IPCC's forecasts.

As an example it says the first tipping point may have already been reached in the Arctic where sea ice is disappearing up to 30 years ahead of IPCC predictions and may be gone completely within five years - something that hasn't occurred for 1m years. This could result in rapid and abrupt climate change rather than the gradual changes forecast by the IPCC.

This is not at all an uncommon meme.  If one searches "global warming accelerating" on Google, one gets 1,100,000 hits.  The #1 hit says:

Global warming is accelerating three times more quickly than feared, a series of startling, authoritative studies has revealed.

I actually believe there is a small upward temperature trend due to CO2, on the order of 0.05 - 0.1C per decade.  But it is staggering to me that so many people can insist, with a straight face, that warming is "accelerating" or, crazier, that it is "worse than forecast."

Let's take the acceleration first.  Here is the recent temperature trend from the UAH satellite data (all the smoothed lines you will see are 36-month moving averages centered on the middle month).
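For readers curious how such a smooth is computed, here is a minimal sketch of a centered moving average (the data below is synthetic; the actual charts use UAH monthly anomalies):

```python
# Sketch of the smoothing described above: a 36-month moving average
# centered on the middle month. With an even window there is no exact
# middle point, so this centers the window as nearly as possible.
def centered_moving_average(series, window=36):
    """Return centered moving averages; None where the window doesn't fit."""
    half = window // 2
    out = []
    for i in range(len(series)):
        lo, hi = i - half, i + half
        if lo < 0 or hi > len(series):
            out.append(None)  # not enough data on one side of this month
        else:
            out.append(sum(series[lo:hi]) / window)
    return out

anomalies = [0.1 * (i % 12) for i in range(120)]  # fake monthly anomalies
smoothed = centered_moving_average(anomalies)
```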


It is possible to argue that there is a warming trend here, but nevertheless it is impossible to see "acceleration," particularly since 2001.  There is an implication in the article that the acceleration has occurred since the last couple of IPCC reports, so let's zoom in on the period since the 3rd and 4th IPCC reports:


No acceleration.  Not even any warming  (for eight years!  Where is that story in the press?)

But how about the proposition that temperatures are rising faster than forecasts?  This is patently absurd.  We can go back to just about every IPCC and alarmist projection and show that temperatures are well below forecast, but let's use James Hansen's 1988 forecasts to Congress, because they give us 20 years of data to work with (actual data is a blend of Hadley CRUT3 and UAH satellite, as discussed here).


I always get folks who insist that I am making a mistake by using the Hansen A scenario because Hansen at the time described it as extreme.  But in fact, world CO2 production has been even greater than the Hansen A scenario.  Hansen A underestimates the inputs, and still grossly overestimates the output.  The only real discussion one can find on the IPCC forecasts is whether one can argue the actuals are barely poking their nose up into the low end of the forecast confidence intervals or not.

The one piece of evidence most of these folks making the "accelerating" argument use is sea ice extent at the North Pole.  The media has been full of stories about disappearing sea ice, and in fact in 2007 the North Pole had the lowest sea ice extent in the last 30 years, though coincidentally in the same year the South Pole had the highest sea ice extent in 30 years.  But there is a logical fallacy here.  The fact that the statement "global warming causes sea ice to retreat" is true does not mean the statement "sea ice retreat means the globe is warming" has to be true.  And in fact, as we see from the data above, it is not.  It is amazing to me that in the conflict between "thermometers" and "sea ice extent at one pole" as measures of global temperature, sea ice extent seems to trump thermometer readings.  Particularly when this sea ice signal only exists at one of the two poles.

There is no question that the Arctic has warmed more than the rest of the planet.  In fact, much of the rise in global averages is driven by the Arctic  (and all of it is driven by the northern hemisphere above the tropics -- the rest of the world has no warming signal over the last 30 years).  Below we can see the satellite measurement of the temperature anomaly in the Arctic:


A one degree rise over a couple of decades is indeed substantial.  In fact, though, during the last couple of years, we have actually seen either flat temperatures or, perhaps, a cooling trend.  Here is a closeup:


So it might be that we should look for other explanations of unusually large sea ice retreats in the summers of 2007 and 2008.  It has been suggested by NASA that winds and ocean currents are in part to blame, and by others that black carbon deposits on the ice from Chinese coal plants may also be increasing summer melt.

Whatever the case, there are a lot of good reasons to believe we are not seeing an "acceleration" in global warming, and a lot of very good reasons to believe we are not reaching a "tipping point."  A tipping point implies that we have entered a regime where the climate is dominated by runaway positive feedback.  I have addressed this topic many times and will not rehash it here, but in short, all of the catastrophe in climate models comes not from the assumption that CO2 is a greenhouse gas (which by itself tends to yield modest warming in models) but from the assumption that the Earth's climate is dominated by substantial positive feedbacks.  I discuss the entire topic of positive feedbacks and climate forecasts in the video below:

Update - if we add glaciers here in addition to sea ice, we see the same slow retreating trend.  However, the trend goes back 200 years!  That's 150 years longer than man has been producing substantial CO2 emissions.  (source)


New Typepad Editor Bugged

I am working on several new posts, but the new Typepad editor is really buggy.  For some reason, Typepad put this particular blog (but not my others) on the new editor, probably as an involuntary beta.  The new editor is much, much slower, and has fatal bugs that make use of images in posts virtually impossible.

This is actually a problem with online applications I had not considered before.  When I heard iTunes 8 was initially bugged or learned to hate Vista, I would just avoid making the "upgrade."  But with online services, I have no choice but to accept the new version, even if I consider it worse (as is so often the case nowadays in software).

Arctic and Greenland Ice

Arctic Sea ice and Greenland glaciers have been on a slow retreating trend for decades, perhaps centuries (at least since the little ice age).  This should not be surprising.  First, glaciers all around the world have been steadily retreating since 1800:


Also, the Arctic has been the hot spot of the world over the last 30 years or so:


Increasing far more than global averages:


So the question is not necessarily why Arctic sea ice continues to retreat - this appears to be part of a long-term trend that in fact pre-dates things like, say, man's production of substantial amounts of CO2.  The more worrisome question is, why does this retreat seem to have accelerated over the past several years:


It's hard to fully correlate recent activity with Arctic temperatures.  In fact, in the last three or four years (see above) we have seen decreasing Arctic temperatures, not increasing ones.  But nevertheless, this ice picture is often used as exhibit #1 to prove anthropogenic warming.  "The tipping point is near," cry supporters of the theory that Earth's climate, unlike nearly every other long-term stable natural system, is dominated by positive feedback (ignoring anecdotal evidence that the Arctic experienced similar melting in the 1930s).

Well, last year, there were some preliminary findings from NASA suggesting that the unusually low ice pack in 2008 may have been due to shifting wind patterns.  Now, Anthony Watts points us to two new studies, both of which conclude that something other than global warming and CO2 may be behind recent ice pack trends in the Arctic:

Observations over the past decades show a rapid acceleration of several outlet glaciers in Greenland and Antarctica. One of the largest changes is a sudden switch of Jakobshavn Isbræ, a large outlet glacier feeding a deep-ocean fjord on Greenland’s west coast, from slow thickening to rapid thinning in 1997, associated with a doubling in glacier velocity. Suggested explanations for the speed-up of Jakobshavn Isbræ include increased lubrication of the ice-bedrock interface as more meltwater has drained to the glacier bed during recent warmer summers and weakening and break-up of the floating ice tongue that buttressed the glacier. Here we present hydrographic data that show a sudden increase in subsurface ocean temperature in 1997 along the entire west coast of Greenland, suggesting that the changes in Jakobshavn Isbræ were instead triggered by the arrival of relatively warm water originating from the Irminger Sea near Iceland. We trace these oceanic changes back to changes in the atmospheric circulation in the North Atlantic region. We conclude that the prediction of future rapid dynamic responses of other outlet glaciers to climate change will require an improved understanding of the effect of changes in regional ocean and atmosphere circulation on the delivery of warm subsurface waters to the periphery of the ice sheets.

When Computer Models Are Treated Like Reality

On April 28, 2004, the SEC made a significant change in policy in the regulation of large investment banks.  On that day, they "decided to allow the five largest US investment banks to substitute advanced mathematical risk models for traditional capital requirements."  Al Gore has touted the "success" of such models as a reason to feel confident that computer models can accurately predict long-term climate trends.

But it turns out, as everyone is discovering this week, that computer models are not reality.  In fact, computer models are extraordinarily sensitive to their inputs, and small changes in their inputs, or the narrowing of models to ignore certain factors, can make them worse than useless.  Computer models are also very easy to force to a preferred conclusion.

Why Kyoto Used 1990 as a Base Year

I have made this point several times, but the 1990 reference date for Kyoto was not just picked randomly.  In fact, on its face, it was a bit odd for a treaty negotiated in 1997 to use a 1990 base year.  But 1990 allowed signatory countries to claim credit for a lot of improvements in CO2 output that had nothing to do with the treaty.  For example, in Germany, 1990 was after unification but before wildly inefficient East German factories had been shut down.  In England, 1990 was just before a concerted effort to substitute North Sea gas and nuclear power for Midlands coal.  In France and Japan, 1990 was the beginning of a period of slow economic growth (and, as an added special bonus, punished the US because it was the beginning of strong economic growth here).

Here is further proof:

In an odd twist on market economics, Europe's ex-communist states are starting to exploit a new market. Thanks to the Kyoto climate-change agreement, they can, in effect, now make money off the pollution their onetime central planners were willing to tolerate as the price for rapid industrialization and universal employment.

Ukraine, Hungary, the Czech Republic and other countries of the region not exactly renowned for clean air have made or are close to signing deals to sell the rights to emit greenhouse gases, and their main customer is environmentally friendly Japan.

This carbon windfall dropped into Central and East Europe's lap because the Kyoto Protocol sets 1990 as the reference year for future reductions in greenhouse gas emissions. The socialist states at that time were producing gargantuan amounts of CO2 and other gases implicated in global warming from unfiltered coal-fired power plants and factories; when those unprofitable industries withered, countless thousands of workers went on the dole — but the air got cleaner. In the coming years, in line with European Union mandates, would-be members gradually adopted better environmental policies. It's the difference between the often unspeakably bad air of 1990 and the comparatively clean air of today that allows them to sell "carbon credits" potentially worth billions of euros.

In effect, signatory countries are still making their Kyoto goals with actions that had nothing to do with Kyoto, in this case the modernization and/or shut down of communist-era industry.  This continues the charade that a) Europe is actually making real progress on CO2 emissions, which it is not and b) emissions reductions are cheap.

Update:  Before the treaty, but for which the treaty supporters claim credit by selecting 1990 as the base year, signatory countries had large CO2 reductions due to the forces at work detailed above:

CO2 Emissions Changes, 1990-1995

EU: -2.2%
Former Communist states: -26.1%
Germany: -10.7%
UK: -6.9%
Japan: +7.2%
US: +6.4%

Since the treaty was actually signed, from 1997 to 2005, countries that ratified it had emissions rise 21%.  When the treaty was signed in 1997, the signatories knew they had this pool of 1990-1995 emissions reductions to draw on to claim victory.  To this day, this is the only improvement they can show: improvement that occurred before the treaty, through steps mostly unrelated to CO2 abatement.

Meet the Future




Hat Tip to a reader.  I am envisioning a "Team America:  World Police" sequel.

Make Sure "Climate Change" is in Your Grant Application

The best way to get grant money nowadays is to try to draw from a torrent of global warming money.  I would say that the first rule of grant application writing today is "include climate change in your study."

Examples, sent by a reader:

From the Correlation Not Equal Causation Department


More here.  Maybe we can get Mann to calculate some correlation statistics for this.

RCRC Climate Presentation

I made a 30-minute presentation to the California Regional Council of Rural Counties yesterday.   The audience was mainly county supervisors and other officials from about 30 rural counties.  The presentation was the skeptical counterpoint to a presentation by Joe Nation, who among other accomplishments was an author of AB32, the California global warming abatement law.  Download RCRC_Global_Warming_Presentation_update_Sept-25-2008.ppt .  Some of the charts may not be self-explanatory, so I am working on a YouTube video with my speech overlaid on the slides.

It was an interesting experience for me because the audience was hugely sympathetic to my pitch, but frustrated because, for them, it was beside the point:  AB32 already committed them to drastic and expensive action.  The only policy recommendation I made in my speech was to lament the obsession with cap-and-trade and make a plea for a carbon tax.  The discussion afterward pretty much made my point for me, with every member lamenting the absurdities emerging in the CARB regulation process.  Even Mr. Nation admitted that CARB is setting up programs that preferentially regulate those with the least political muscle, and pushing policies that make no sense in any kind of cost-benefit analysis for fighting CO2.  Mr. Nation said that when he was in the legislature, he tried a carbon tax first but could not get it out of committee, even a small one that would have raised gas taxes about 5 cents.  It seems politicians have no problem enacting huge taxes (which is what AB32 does) as long as those taxes are not called a tax and are hidden from the view of the general public (at least until prices start to rise and businesses start to exit the state).

I thought Mr. Nation did a perfectly reasonable job, and I agreed with much of what he presented.  I differed only, of course, in the amount of past warming I was willing to ascribe to CO2 and the amount of future warming from CO2 that we might expect.  However, this was the first time I have ever seen a global warming catastrophist state explicitly that CO2 alone causes only a bit of future warming, and that most comes from positive feedbacks multiplying the greenhouse effect.  Kudos to him for highlighting this, and it certainly fed into my pitch well.

The one area where I thought he made an explicit factual mistake in his presentation was in evaluating Hansen's forecast to Congress in 1988.  He argued that one shouldn't judge Hansen by his "A" scenario (which is WAY off) because Hansen said at the time that it was based on unrealistically high assumptions.  But in Hansen's appendix, he says that the A scenario is based on 1.5% a year future growth in CO2 output.  In fact, the world has grown CO2 output by 1.75% a year over the last 20 years (source), so if anything the A scenario's assumption was low.  The B and C scenarios should be treated as totally irrelevant.  This is a mistake I think Lucia made at the Blackboard, considering B and C at all.  These scenarios differ in their CO2 forecasts, not the model parameters, so the scenario closest to actual CO2 output should be chosen and the rest are irrelevant.  By the way, here is my chart.  As I did with many of my charts, I like to counterpoint the data against media reports (the box in the upper left).  This helps later in the discussion when the disconnect people have between what I have said and what they have heard inevitably crops up.
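The compounding arithmetic behind that comparison is easy to check (the growth rates are the figures quoted above):

```python
# Compare compound CO2-output growth under Hansen's Scenario A assumption
# (1.5%/yr) with the roughly 1.75%/yr growth the text says was actually
# observed over the 20 years in question.
years = 20
scenario_a_growth = 1.015 ** years    # Scenario A assumption
observed_growth = 1.0175 ** years     # approximate actual growth

# Observed CO2 output grew faster than the Scenario A assumption.
print(f"Scenario A: {scenario_a_growth:.3f}x, observed: {observed_growth:.3f}x")
```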


I was pleased that Russ Steele of NC Media Watch was there to say hi and observe the proceedings.  Thanks Russ -- I enjoy your blog and am sorry that I did not recognize you in my pre-presentation stress. 

Update:  Russ has a more complete roundup of the discussion

Update 2:  The actuals in the chart above are UAH satellite numbers, with the anomaly shifted up about 0.1C to match zero values with the Hansen forecast data.

Computer Models

Al Gore has argued that computer models can be trusted to make long-term forecasts, because Wall Street has been using such models for years.  From the New York Times:

In fact, most Wall Street computer models radically underestimated the risk of the complex mortgage securities, they said. That is partly because the level of financial distress is “the equivalent of the 100-year flood,” in the words of Leslie Rahl, the president of Capital Market Risk Advisors, a consulting firm.

But she and others say there is more to it: The people who ran the financial firms chose to program their risk-management systems with overly optimistic assumptions and to feed them oversimplified data. This kept them from sounding the alarm early enough.

Top bankers couldn’t simply ignore the computer models, because after the last round of big financial losses, regulators now require them to monitor their risk positions. Indeed, if the models say a firm’s risk has increased, the firm must either reduce its bets or set aside more capital as a cushion in case things go wrong.

In other words, the computer is supposed to monitor the temperature of the party and drain the punch bowl as things get hot. And just as drunken revelers may want to put the thermostat in the freezer, Wall Street executives had lots of incentives to make sure their risk systems didn’t see much risk.

“There was a willful designing of the systems to measure the risks in a certain way that would not necessarily pick up all the right risks,” said Gregg Berman, the co-head of the risk-management group at RiskMetrics, a software company spun out of JPMorgan. “They wanted to keep their capital base as stable as possible so that the limits they imposed on their trading desks and portfolio managers would be stable.”

Tweaking model assumptions to get the answer you want from them?  Unheard of!

We Can't Think of Anything Else It Could Be

I am still reading the new Douglas and Christy paper, so I won't comment on it yet, but you can see Anthony Watts thoughts here.

However, in reading Anthony's site this morning, I was struck by a quote in another one of his posts.  For a while, I have been telling folks that the main argument behind anthropogenic global warming is "we have looked at everything else, and we can't think of what else it could be other than man."  Lacking positive correlation between CO2 and major shifts in temperature  (particularly when ice core evidence collapsed under the weight of the 800 year lag), scientists instead argue that they have gone through a long checklist (sun, clouds, volcanoes, etc) and have convinced themselves none of these others have caused late 20th century warming, so it must be man -- that's all that is left.

Here is an example, from Anthony's site:

Bill Chameides, dean of Duke University’s Nicholas School of the Environment and Earth Sciences, said Spencer’s arguments are what magicians call “ignoratio elenchi” or logical fallacy. “We’ve looked at every possible form of heat, including clouds, and the only source of heat is greenhouse gases,” he said, adding it’s insulting that Spencer would suggest scientists are paid to come to this conclusion. “Scientists make their reputation on debunking theories.”

Well, a number of folks would beg to differ that scientists have truly eliminated every other possible cause, particularly Mr. Sun (more than really eliminating these effects, they seem to be seeking excuses to ignore them).  In fact, climate modelers have admitted of late that they did not even include the Pacific Decadal Oscillation in their models until recently.  So much for thinking of everything.

But if Mr. Chameides wants to talk in terms of logical fallacies, I will as well:  just because scientists cannot imagine another cause does not mean that another cause does not exist.  Imagine the first astrophysicists to discover pulsars saying, "well, we can't think of anything else that would cause this phenomenon, so it must be space aliens."  Well, come to think of it, some people did say that.  But it turned out to be absurd, and after some decades of effort, we think we now understand pulsars.  It is a bizarre form of arrogance to assume that it is not possible, at our current level of climate knowledge, that there is some factor we don't even know about.

Long Postscript:  I am working on a powerpoint presentation for next week on anthropogenic global warming, but here are two charts from that presentation that get at the "we can't think of anything other than man that might be causing late 20th century warming."  The first is the correlation between 20th century temperature and the PDO cycle  (temperature numbers are Hadley CRUT3 and UAH combined as described here).   By the way, there seems to be some argument over exactly where and how often to call the turns in the PDO early in the 20th century -- I have used one frequent estimate, but others exist.

The second interesting analysis is a sunspot number chart.  To highlight recent increases in activity, I have overlaid on the monthly International sunspot numbers (light blue) a 9.8-year moving average (in black), 9.8 years being an average cycle length.  In the chart below, the choice of 50 as a reference sunspot number is arbitrary, but it serves to visually demonstrate the increase in solar activity over the last 50 years.


The average monthly sunspot number from 1900-1949 was 48.  The average monthly number from 1950-1999 was 73.1, an increase of 52%.
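As a quick check of that percentage, using only the two half-century averages quoted above:

```python
# Percent increase between the two half-century average monthly sunspot
# numbers quoted in the text (48 for 1900-1949, 73.1 for 1950-1999).
avg_1900_1949 = 48.0
avg_1950_1999 = 73.1

increase = (avg_1950_1999 - avg_1900_1949) / avg_1900_1949
print(f"{increase:.0%}")   # 52%
```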

Some of this increase is real, but some may be a measurement bias related to the ability to better detect smaller spots.  Anyone have any sources on how large this latter effect might be?  We are talking about an enormous percentage increase in the last half of the century, so my guess is that it is not all due to this bias.

Retreating Glaciers

One of the panicky claims of global warming catastrophists is that some sort of "unprecedented" melting and retreat of glaciers is occurring, tied to anthropogenic global warming.  I have seen anecdotal evidence for a while that this melting of glaciers began long before the 1950-present "anthropogenic" era, but I had not seen anything systematic on the topic until I discovered this study by J. Oerlemans as published in Science in 2005.  Download Oerlemans 2005 as pdf.  His results look like this (click to enlarge):


His data for the last decade are a little squirrelly, because the data sets he uses are slow to update, but the overall picture is pretty clear:  a fairly steady 150+ year history of retreat, with the only change in slope being a flattening, rather than an acceleration, of the curve.  Here are a few individual glaciers he highlights:


One is again left in a quandary:  if recent glacial retreats are due to anthropogenic warming, then what caused the retreats before 1950 or so?  And, whatever caused those earlier retreats, what made this natural effect "switch off" at the exact instant that anthropogenic effects took over?

Update:  Here is a piece of anecdotal evidence to match:  a map from Alaska Geographic of the retreat of the glaciers at Glacier Bay


Light Posting

Sorry for the light posting.  I have not lost interest, I have just been extremely busy.  Relevant to climate, I am working on a 30-minute presentation for a climate debate I am participating in soon at Lake Tahoe.  Once that is done, the material I have developed for it should drive a number of new posts.

Comment Policy

Since it has come up a couple of times in the last few days, here is a reminder of the comment policy on this blog:

1)  I do not edit, moderate or delete comments, except for outright spam.  The reasons for this are many.  First, I don't have time.  Second, I don't have the inclination.  Third, I take zero editorial responsibility for what is in the comments.  The comments are an open public forum I offer as a public service.  Even light moderation or isolated bans would break this bright-line rule and might lead to some confusion as to whether I implicitly support a particular comment because I didn't delete it.  So I don't touch anything.

2)  I encourage everyone who agrees with me to remain civil, rational, open-minded, and professional in the comments.  Everyone else is encouraged to discredit his or her own opinions by making as much of an ass of him or herself as they choose. Some of my commenters seem particularly adept at the latter.

3) Commenter names are entirely arbitrary.  It is amazing that I have to remind folks of this nowadays, but if you see a commenter named "Al Gore," you should be entirely suspicious as to the person's true identity (though of course Al would be welcome to hang out here).  It's not like I check everyone's ID.

Sucking the Oxygen Out of the Environmental Movement

I have written on a number of occasions that, years from now, folks who would like to see meaningful reductions in man's negative impacts on the environment are going to look back on the global warming charade as a disaster for their movement -- not just in terms of credibility, but in terms of lost focus on real, meaningful improvements.

China is a great example.  Like London in the 19th century or Pittsburgh in the early 20th, China's air quality is a mess.  Real steps need to be taken to clean up the air, for the health and safety of its residents.  The Olympics might have been a venue for people around the world to apply pressure to China to clean up its act.

But, in fact, there is little real pressure from outside for China to clean up the soot, unburned hydrocarbons, NO2, SO2 and other such pollutants from its vehicles and coal plants.  That is because all the pressure, all the attention, is on China's CO2 production.  But there is nothing China can do to slow down CO2 growth without killing its economy and probably destabilizing its government in the process.  So, it gives the world a big FU to such admonitions. 

Which is a shame.  Unlike for CO2 abatement, there are real technologies that are proven to be economic that can abate the worst of China's pollution problems.  Had we instead been spending our moral capital pressuring China to take such steps, there might be real progress. 

This is all the more true as we learn that some of the problems we ascribe to CO2 may in fact be more linked to soot from Chinese industry.  John Goetz recently linked one such story:

Smog, soot and other particles like the kind often seen hanging over Beijing add to global warming and may raise summer temperatures in the American heartland by three degrees in about 50 years, says a new federal science report released Thursday.

These overlooked, shorter-term pollutants — mostly from burning wood and kerosene and from driving trucks and cars — cause more localized warming than once thought, the authors of the report say.  They contend there should be a greater effort to attack this type of pollution for faster results.  For decades, scientists have concentrated on carbon dioxide, the most damaging greenhouse gas because it lingers in the atmosphere for decades.  Past studies have barely paid attention to global warming pollution that stays in the air merely for days.

This is consistent with other recent work hypothesizing that the increase in Arctic sea ice melting rates may be as much due to Chinese black carbon falling on the ice (and thereby decreasing its albedo and increasing solar heating) as to rising global temperatures.  This makes sense to me, and may help explain why melting of Arctic sea ice was nearly as great as last year's record, despite much lower Arctic temperatures (see below) over the last year.


Lipstick on a Pig

Apparently, Michael Mann is yet again attempting a repackaging of his hockey stick work.  The question is, has he re-worked his methodologies to overcome the many statistical issues third parties have had with his work, or is this more like AirTran changing its name from ValuJet to escape association in people's mind with its 1996 plane crash?

Well, Steve McIntyre is on the case, and from first glance, the new Mann work seems to be the same old mish-mash of cherry-picked proxies, bizarre statistical methods, and manual tweaking of key proxies to make them look the way Mann wants them to look.  One thing I had never done was look at all the component proxies of the temperature reconstructions all in one place.  At the link above, Steve has all the longer ones in an animated GIF.  It is really striking how a) almost none of them have a hockey stick shape and b) even the few that do have HS shapes typically show the warming trend beginning in 1800, not in the late 19th century CO2 period. 

If you would like to eyeball all 1209 of the proxies Mann begins with (before he starts cherry picking), they are linked here.  I really encourage you to click through to one of the five animations, just to get  a feel for it.  As someone who has done a lot of data analysis, it is just staggering that he can get a hockey stick out of these and claim that it is in some way statistically significant.  It is roughly equivalent to watching every one of your baseball team's games, seeing them lose each one, and then being told that they have the best record in the league.  It makes no sense.

The cherry-picking is just staggering, though you have to read the McIntyre articles as a sort of 2-3 year serial to really get the feel of it.  However, this post gives one a feel of how Mann puts a thin statistical-sounding veneer to cover his cherry-picking, but at the end of the day, he has basically invented a process that takes about a thousand proxy series and kicks out all but the 484 that will generate a hockey stick.
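This screening problem is easy to demonstrate with a toy experiment (my own illustrative sketch, not Mann's actual procedure, and all the numbers are made up): generate a pile of pure-noise "proxies," keep only the ones that happen to correlate with a rising calibration-period target, and the average of the survivors comes out as a hockey stick.

```python
import random

def corr(a, b):
    # Pearson correlation coefficient
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def random_walk(n, step=0.1):
    # a pure-noise "proxy" with no climate signal whatsoever
    x, out = 0.0, []
    for _ in range(n):
        x += random.gauss(0, step)
        out.append(x)
    return out

random.seed(0)
years, cal = 200, 50                     # 200 "years"; the last 50 are the calibration period
target = [i / cal for i in range(cal)]   # rising "instrumental" record during calibration

proxies = [random_walk(years) for _ in range(1000)]

# the screening step: keep only proxies that correlate with the rising target
kept = [p for p in proxies if corr(p[-cal:], target) > 0.5]

composite = [sum(p[i] for p in kept) / len(kept) for i in range(years)]
early = sum(composite[:100]) / 100   # pre-calibration average: flat "shaft"
late = sum(composite[-20:]) / 20     # calibration-era average: the "blade"
print(len(kept), round(early, 2), round(late, 2))
```

None of these series contains any signal at all; the blade is manufactured entirely by the selection step.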

Update:  William Briggs finds other problems with Mann's new analysis:

The various black lines are the actual data! The red-line is a 10-year running mean smoother! I will call the black data the real data, and I will call the smoothed data the fictional data. Mann used a “low pass filter” different than the running mean to produce his fictional data, but a smoother is a smoother and what I’m about to say changes not one whit depending on what smoother you use.

Now I’m going to tell you the great truth of time series analysis. Ready? Unless the data is measured with error, you never, ever, for no reason, under no threat, SMOOTH the series! And if for some bizarre reason you do smooth it, you absolutely on pain of death do NOT use the smoothed series as input for other analyses! If the data is measured with error, you might attempt to model it (which means smooth it) in an attempt to estimate the measurement error, but even in these rare cases you have to have an outside (the learned word is “exogenous”) estimate of that error, that is, one not based on your current data.

If, in a moment of insanity, you do smooth time series data and you do use it as input to other analyses, you dramatically increase the probability of fooling yourself! This is because smoothing induces spurious signals—signals that look real to other analytical methods. No matter what you will be too certain of your final results! Mann et al. first dramatically smoothed their series, then analyzed them separately. Regardless of whether their thesis is true—whether there really is a dramatic increase in temperature lately—it is guaranteed that they are now too certain of their conclusion.

and further:

The corollary to this truth is the data in a time series analysis is the data. This tautology is there to make you think. The data is the data! The data is not some model of it. The real, actual data is the real, actual data. There is no secret, hidden “underlying process” that you can tease out with some statistical method, and which will show you the “genuine data”. We already know the data and there it is. We do not smooth it to tell us what it “really is” because we already know what it “really is.”
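Briggs' warning is easy to verify numerically.  The sketch below (my own toy example, with made-up white-noise data) smooths two completely independent series with a 10-point running mean, and shows that the typical correlation between them inflates several-fold:

```python
import random

def corr(a, b):
    # Pearson correlation coefficient
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def running_mean(xs, w):
    # the same sort of smoother Briggs describes
    return [sum(xs[i:i + w]) / w for i in range(len(xs) - w + 1)]

random.seed(0)
trials, n, w = 200, 120, 10
raw_total, smooth_total = 0.0, 0.0
for _ in range(trials):
    a = [random.gauss(0, 1) for _ in range(n)]   # two completely independent series
    b = [random.gauss(0, 1) for _ in range(n)]
    raw_total += abs(corr(a, b))
    smooth_total += abs(corr(running_mean(a, w), running_mean(b, w)))

print(round(raw_total / trials, 3), round(smooth_total / trials, 3))
```

The raw series are, correctly, nearly uncorrelated; the smoothed versions of the same series routinely show "signals" that any downstream analysis would take as real.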

Update:  I presume it is obvious, but the commenter "mcIntyre" has no relation that I know of to the "mcintyre" quoted and referred to in the post.  As a reminder of my comment policy, 1) I don't ban or delete anything other than outright spam and 2) I strongly encourage everyone who agrees with me to remain measured and civil in your tone -- everyone else is welcome to make as big of an ass out of him or herself as they wish.

By the way, to the commenter named "mcintyre,"  I have never ever seen the other McIntyre (quoted in this post) argue that CO2 does not act as a greenhouse gas.  He spends most of his time arguing that the statistical methods used in certain historic temperature reconstructions (e.g. Mann's hockey stick but also 20th century instrument rollups like the GISS global temperature anomaly) are flawed.  I have read his blog for 3 years now and can honestly say I don't know what his position on the magnitude of future anthropogenic warming is.  Mr. McIntyre is apparently not alone -- Ian Jolliffe holds the opinion that the reputation of climate science is being hurt by the statistical sloppiness in certain corners of dendro-climatology.

Be Cool and Prevent Cooling by Reducing Global Warming

I seldom highlight stories like this, because I am more interested in the science than the propaganda of climate issues, but I just couldn't resist this story.  Via Tom Nelson, from Belleville News Democrat:

Speakers at the public forum also addressed how global warming may directly affect metro-east residents, citing this summer's flooding and unusually cool weather, and future impact on agriculture.

"These are symptoms of global warming," said Kathy Andria, who represented the American Bottoms Conservancy at the forum.

Andria said she will help organize Belleville residents to work towards being a greener city and become designated by the Sierra Club as a Cool City.

So we want to fight global warming which is causing unusually cool weather.  If we are successful, then we will be labelled a Cool City so that we won't have all this cool weather because we will have stopped warming.

Seriously, has there ever been a more transparent "heads I win, tails you lose" argument than saying that unusually cool weather is evidence of global warming?

100 Months to the Tipping Point

Wow -- it turns out that after hundreds of millions or even billions of years of remaining stable, the world climate will, due to (at most) a few tenths of degrees of man-made warming and an increase of a trace gas composition in the atmosphere by about 0.01%, go past its tipping point or point of no return and run away to catastrophe.  I sure wish there was a prediction market where I could bet against this.  See this end of the world website here (HT to a reader). 

Given a bit more time, I will try to take on in depth the underlying article behind this site.  But for now, suffice it to say that the underlying hypothesis is that the world's climate is dominated by positive feedback, a hypothesis that, if true, would set climate apart from nearly every other natural process that we know of.  In fact, the only major natural process I can think of that is dominated by positive feedback and tipping points is nuclear fission.  Here are many articles on how catastrophic forecasts assume large positive feedbacks and why this assumption is unlikely.

Global Warming "Fingerprint"

Many climate scientists say they see a "fingerprint" in recent temperature increases that they claim is distinctive and makes current temperature increases different from past "natural" temperature increases. 

So, to see if we are all as smart as the climate scientists, here are two 51-year periods from the 20th century global temperature record as provided by the Hadley CRUT3.  Both are scaled the same (each line on the y-axis is 0.2C, each x-axis division is 5 years) -- in fact, both are clips from the exact same image.  So, which is the anthropogenic warming and which is the natural? 


One clip is from 1895 to 1946 (the "natural" period) and one is from 1957 to present (the supposedly anthropogenic period). 

If you have stared at these charts long enough, the el Nino year of 1998 has a distinctive shape that I recognize, but otherwise these graphs look surprisingly similar.  If you are still not sure, you can find out which is which here.

Measuring Climate Sensitivity

As I am sure most of my readers know, most climate models do not reach catastrophic temperature forecasts from CO2 effects alone.  In these models, small to moderate warming by CO2 is multiplied many fold by assumed positive feedbacks in the climate system.  I have done some simple historical analyses that have demonstrated that this assumption of massive positive feedback is not supported historically.

However, many climate alarmists feel they have good evidence of strong positive feedbacks in the climate system.  Roy Spencer has done a good job of simplifying his recent paper on feedback analysis in this article.  He looks at satellite data from past years and concludes:

We see that the data do tend to cluster along an imaginary line, and the slope of that line is 4.5 Watts per sq. meter per deg. C. This would indicate low climate sensitivity, and if applied to future global warming would suggest only about 0.8 deg. C of warming by 2100.

But he then addresses the more interesting issue of reconciling this finding with other past studies of the same phenomenon:

Now, it would be nice if we could just stop here and say we have evidence of an insensitive climate system, and proclaim that global warming won't be a problem. Unfortunately, for reasons that still remain a little obscure, the experts who do this kind of work claim we must average the data on three-monthly time scales or longer in order to get a meaningful climate sensitivity for the long time scales involved in global warming (many years).

One should always beware of a result where the raw data yield one answer but averaged data yield another.  Data averaging tends to do funny things that can mask physical processes, and this appears to be no exception.  He creates a model of the process, and finds that such averaging always biases the feedback result higher:

Significantly, note that the feedback parameter line fitted to these data is virtually horizontal, with almost zero slope. Strictly speaking that would represent a borderline-unstable climate system. The same results were found no matter how deep the model ocean was assumed to be, or how frequently or infrequently the radiative forcing (cloud changes) occurred, or what the specified feedback was. What this means is that cloud variability in the climate system always causes temperature changes that "look like" a sensitive climate system, no matter what the true sensitivity is.

In short, each time he plugged low feedback into the model, the data that emerged mimicked that of a high feedback system, with patterns very similar to what researchers have seen in past feedback studies of actual temperature data. 
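Spencer's result can be mimicked with a toy energy-balance model (my own hedged sketch; the parameter values are arbitrary illustrations, not Spencer's).  We specify a strong negative feedback of 4.5 W/m2 per deg C, drive the system with red-noise "cloud" forcing, and then diagnose feedback the conventional way, by regressing radiative flux against temperature:

```python
import random

random.seed(1)
lam_true = 4.5     # specified feedback parameter, W/m2 per deg C (a low-sensitivity value)
heat_cap = 10.0    # assumed ocean mixed-layer heat capacity, arbitrary units
dt = 0.1

T, f = 0.0, 0.0
temps, fluxes = [], []
for _ in range(50000):
    f = 0.95 * f + random.gauss(0, 1)   # red-noise internal radiative forcing ("clouds")
    N = f - lam_true * T                # net radiative imbalance a satellite would see
    temps.append(T)
    fluxes.append(N)
    T += N * dt / heat_cap              # temperature integrates the imbalance

# diagnose feedback the conventional way: regress flux against temperature
n = len(temps)
mT, mN = sum(temps) / n, sum(fluxes) / n
slope = (sum((t - mT) * (x - mN) for t, x in zip(temps, fluxes))
         / sum((t - mT) ** 2 for t in temps))
lam_diagnosed = -slope
print(f"true feedback {lam_true}, diagnosed {lam_diagnosed:.2f}")
```

The diagnosed slope comes out near zero, "looking like" a borderline-unstable, highly sensitive climate even though the specified feedback was strongly stabilizing -- exactly the behavior Spencer describes for internally-generated cloud variability.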

Interestingly, the pattern is sort of a circular wandering pattern, shown below:

I will have to think about it a while -- I am not sure if it is a real or spurious comparison, but the path followed by his model system is surprisingly close to that in the negative feedback system I modeled in my climate video, that of a ball in the bottom of a bowl given a nudge (about 3 minutes in).

No Trend in Drought or Floods

It is often said by warming alarmists a) that global warming will increase both extremes of droughts and floods and b) that we already see these conditions accelerating (e.g., with California droughts and this year's midwestern floods).  The recent NOAA/NASA draft CCSP climate change report I commented on last week said

Temperature and precipitation have increased over recent decades, along with some extreme weather events such as heat waves and heavy downpours...

Widespread increases in heavy precipitation events have occurred, even in places where total amounts have decreased. These changes are associated with the fact that warmer air holds more water vapor evaporating from the world’s oceans and land surface. Increases in drought are not uniform, and some regions have seen increases in the occurrences of both droughts and floods

The Antiplanner, in an article on firefighting, shares data from the National Climatic Data Center that I had never seen before.  It is the monthly estimate of the percent of US land area subject to extremes of wet or dry weather.  First, the dry weather:


Then the wet weather:


There is no trend here, and certainly no acceleration** of a trend, merely what is obviously a cyclical phenomenon.   

** I am constantly amazed at the ability of alarmists to deduce the second derivative of a natural phenomenon (e.g., an acceleration in a rate of change) from single data points (e.g., 2008 flooding in the Midwest).

Update:  Since the claim is an increase in total extreme weather, to be fair I also looked at the history of the two data sets above combined:


There is a slight trend here, on the order of a 2-3 percentage point increase per century.  I am fairly certain this does not clear the margin of error. 
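For what it's worth, here is one simple way to check whether a fitted trend clears its margin of error.  This runs on hypothetical made-up data standing in for the NCDC series, since I don't have the actual numbers in machine-readable form:

```python
import random

def trend_with_error(ys):
    # ordinary least-squares slope and its standard error
    n = len(ys)
    xs = list(range(n))
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    intercept = my - slope * mx
    resid = [y - (intercept + slope * x) for x, y in zip(xs, ys)]
    se = (sum(r * r for r in resid) / (n - 2) / sxx) ** 0.5
    return slope, se

random.seed(42)
# hypothetical stand-in: 110 years of "percent of area" data, pure noise around 20%
ys = [20 + random.gauss(0, 5) for _ in range(110)]
slope, se = trend_with_error(ys)
# rough 95% criterion (ignores autocorrelation, which would widen the margin further)
clears_margin = abs(slope) > 2 * se
print(round(slope * 100, 2), "per century; margin", round(2 * se * 100, 2), clears_margin)
```

The point of the exercise: a trend of a couple of points per century has to be compared against roughly twice its standard error before anyone gets to claim it is real, let alone accelerating.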

Because, You Know, All We Skeptics Are Fighting Against Settled Science

I saw Al's climate sci-fi movie, but I didn't read the book.  Via Tom Nelson, Robert Johnston has a refutation of some of Al's claims in his book.  This one caught my eye because it is a topic with which I am pretty familiar.  Gore writes:

"People who want to deny global warming because it's easier than dealing with it try to argue that what scientists are really observing is just the 'urban heat island' effect... This is simply wrong. Temperature measurements are generally taken in parks, which are actually cool areas within the urban heat islands... Most scientific research shows that 'urban heat islands' have a negligible effect..." (p. 318)

I can't believe we let Al Gore lecture us on science.  A few responses:

  • I don't think most skeptics deny that some warming has occurred in the 20th century.  Satellite measurement, which is not subject to urban heat island biases, has shown several tenths of a degree C of warming since the late 1970's.  However, skeptics do argue that surface temperature networks tend to overestimate the 20th century warming signal due in part to urban biases (not to mention over-zealous addition of fudge factors by the alarmists running the data gathering).  Of course, we also dispute that "most" of this warming is due to anthropogenic CO2.
  • The statement that most temperature measurements are taken in parks is so wrong as to be absurd.  As Anthony Watts' climate station survey process has shown, the vast majority of stations are actually located near buildings (a predictable result of siting and cable-length limitations of the most commonly used sensors).  You don't have to take my word for it, just scan the pictures yourself at random.  I have had a lot of fun participating in this project.  Here, by the way, is the Tucson station I surveyed.  As you can see, the station is definitely located in a park[ing lot].


  • We skeptics are often called "deniers" for not accepting that the theory of catastrophic man-made global warming is settled science.  But if you want to see real denialism in the face of facts, one only has to look at the alarmist's absurd position that, as Al Gore puts it, "urban heat islands have a negligible effect."  The fact is that urban heat islands are well-known to science, and can cause the center of cities to be as high as 5-8C hotter than the outlying rural areas.  It turns out that this is so horribly difficult to understand and prove that ... my 14-year-old son did it for a science project.  Here is the results of one of our data runs across town  (details described in the article).


  • Defenders of the surface temperature record will sometimes argue that they have successfully corrected for urban biases (leading to the cognitive dissonance of their saying that the biases have no effect and that they have fully corrected for them).  But here is the problem:  without detailed siting information, and surveys like that run by my son, it is impossible to make these corrections anything but guesses (ironically, many of the folks making this argument have opposed Anthony Watts' survey process and continue to maintain that they can make better adjustments blind than with actual data on station siting).  At most, the total warming signal we are trying to identify over the last century is about a degree F.  But as you can see above, we found a 6 degree urban heat effect on the first night of our study, and a 9 degree urban effect on our second night.  You can see that not only does the magnitude of this heat island effect swamp the signal we are trying to measure, even the variability or uncertainty in assessing the urban bias is several times larger than the warming signal. 

Update:  Here is a new study debunking Gore's claim that man-made global warming was melting the Kilimanjaro ice cap.  This claim never made much sense, since even if temperatures were to warm by several degrees, they would still remain well below freezing all year long.

Climate Tourism

While driving between some of the campgrounds we run in Inyo and Mono County, California, I stumbled across the White Mountain bristlecone pine forest.  I just couldn't resist checking it out.  Of course, it threw me off my schedule for an hour or so, but it's not the first time that bristlecones have been a source of divergence ;=)

PS-  I had a crappy rental car, but if you have a sports car and are near Highway 168 east of Big Pine, CA, you should definitely give the road a test drive.  It would be a real hoot with the right car.

Comments on NOAA USP Draft

As promised, here are my comments on the USP Global Climate Change draft.  I simply did not have the time to plow through the entire NOAA/NASA CCSP climate change report, so I focused on the 28-page section labeled Global Climate Change.  Even then, I was time-crunched, so most of my comments are cut-and-pastes from my blog, and many lack complete citations.  I would feel bad about that, except the USP report itself is very clearly a haphazard cut-and-paste from various sources and many of its comments and charts totally lack citations and sources (I challenge you to try to figure out even simple things, like where the 20th century temperature data on certain charts came from).

Backcasting with Computer Climate Models

I found the chart below in the chapter Global Climate Change of the NOAA/NASA CCSP climate change report. (I discuss this report more here). I thought it was illustrative of some interesting issues:


The Perfect Backcast

What they are doing is what I call "backcasting," that is, taking a predictive model and running it backwards to see how well it performs against historical data.  This is a perfectly normal thing to do.

And wow, what a fit.  I don't have the data to do any statistical tests, but just by eye, the red model output line does an amazing job at predicting history.  I have done a lot of modeling and forecasting in my life.  However, I have never, ever backcast any model and gotten results this good.  I mean it is absolutely amazing.

Of course, one can come up with many models that backcast perfectly but have zero predictive power:

A recent item of this ilk maintains that the results of the last game played at home by the NFL's Washington Redskins (a football team based in the national capital, Washington, D.C.) before the U.S. presidential elections has accurately foretold the winner of the last fifteen of those political contests, going back to 1944. If the Redskins win their last home game before the election, the party that occupies the White House continues to hold it; if the Redskins lose that last home game, the challenging party's candidate unseats the incumbent president. While we don't presume there is anything more than a random correlation between these factors, it is the case that the pattern held true even longer than claimed, stretching back over seventeen presidential elections since 1936.
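The Redskins story is a textbook case of data dredging, which is easy to reproduce: search through enough random "indicators" and some of them will backcast any history perfectly.  A toy sketch, with coin flips standing in for elections:

```python
import random

random.seed(0)
past = [random.randrange(2) for _ in range(10)]   # ten past "election" outcomes, pure coin flips

# dredge 50,000 random binary "indicators" for ones that backcast all ten correctly
indicators = [[random.randrange(2) for _ in range(10)] for _ in range(50000)]
perfect = [ind for ind in indicators if ind == past]

print(len(perfect), "indicators backcast history perfectly")
```

On average about 50 of the 50,000 match all ten outcomes, and every one of those "perfect" backcasters predicts the next election with exactly coin-flip skill.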

And in fact, our confidence in the climate models based on their near-perfect back-casting should be tempered by the fact that when the models first were run backwards, they were terrible at predicting history.  Only a sustained effort to tweak and adjust and plug them has resulted in this tight fit  (we will return to the subject of plugging in a minute).

In fact, it is fairly easy to demonstrate that the models are far better at predicting history than they are at predicting the future.  Like the Washington Redskins algorithm, which failed in 2004 after backcasting so well, climate models have done a terrible job in predicting the first 10-20 years of the future.  This is the reason that neither this nor any other global warming alarmist report ever shows a chart grading how model forecasts have performed against actual data:  their record has been terrible.  After all, we have climate model forecast data going all the way back to the late 1980's -- surely 20+ years is enough to test their performance.

Below are the model forecasts that James Hansen, whose fingerprints are all over this report, presented to Congress in 1988 (in yellow, orange, and red), with a comparison to the actual temperature record (in blue).  (source)


Here is the detail from the right side:


You can see the forecasts began diverging from reality even as early as 1985.  By the way, don't get too encouraged by the yellow line appearing to be fairly close -- the Hansen C case in yellow was similar to the IPCC B1 case, which hypothesizes strong international CO2 abatement programs that have not come about.  Based on actual CO2 production, the world is tracking, from a CO2 standpoint, between the orange and red lines.  However, temperature is nowhere near the predicted values.

So the climate models are perfect at predicting history, but begin diverging immediately as we move into the future.  That is probably why the IPCC resets its forecasts every 5 years, so it can hit the reset button on this divergence.  As an interesting parallel, tree-ring temperature reconstructions have very similar divergence issues when carried forward into recent decades.

What the Hell happened in 1955?

Looking again at the backcast chart at the top of this article, peek at the blue line.  This is what the models predict to have been the world temperature without man-made forcings.  The blue line is supposed to represent the climate absent man.  But here is the question I have been asking ever since I first started studying global warming, and no one has been able to answer:  What changed in the Earth's climate in 1955?  Because, as you can see, climate forecasters are telling us the world would have reversed a strong natural warming trend and started cooling substantially in 1955 if it had not been for anthropogenic effects.

This has always been an issue with man-made global warming theory.  Climate scientists admit the world warmed from 1800 through 1955, and that most of this warming was natural.  But somehow, this natural force driving warming switched off, conveniently in the exact same year when anthropogenic effects supposedly took hold.  A skeptical mind might ask why current warming is not just the same natural trend as warming up to 1955, particularly since no one can say with any confidence why the world warmed up to 1955 and why this warming switched off and reversed after that.

Well, let's see if we can figure it out.  The sun, despite constant efforts by alarmists to portray it as climatically meaningless, is a pretty powerful force.  Did the sun change in 1955? (click to enlarge)


Well, it does not look like the sun turned off.  In fact, it appears that just the opposite was happening -- the sun hit a peak around 1955 and has remained at this elevated level throughout the current supposedly anthropogenic period.

OK, well maybe it was the Pacific Decadal Oscillation?  The PDO goes through warm and cold phases, and its shifts can have large effects on temperatures in the Northern Hemisphere.


Hmm, doesn't seem to be the PDO.  The PDO turned downwards 10 years before 1955.  And besides, if the line turned down in 1955 due to the PDO, it should have turned back up in the 1980's as the PDO went to its warm phase again. 

So what is it that happened in 1955?  I can tell you:  Nothing. 

Let me digress for a minute, and explain an ugly modeling and forecasting concept called a "plug".  It is not unusual, when one is building a model from certain inputs (say, a financial model built from interest rates and housing starts or whatever), that the net result, while seemingly logical, does not come out where one thinks it should.  While few will ever admit it, I have been inside the modeling sausage factory for enough years to know that it is common to add plug figures to force a model to reach the answer one thinks it should be reaching -- this is particularly common after backcasting a model.

I can't prove it, any more than this report can prove the statement that man is responsible for most of the world's warming in the last 50 years.  But I am certain in my heart that the blue line in the backcasting chart is a plug.  As I mentioned earlier, modelers had terrible success at first matching history with their forecasting models.  In particular, because their models showed such high sensitivity of temperature to CO2 (this sensitivity has to be high to get catastrophic forecasts) they greatly over-predicted history. 

Here is an example.  The graph below shows the relationship between CO2 and temperature for a number of sensitivity levels  (the shape of the curve was based on the IPCC formula and the process for creating this graph was described here).


The purple lines represent the IPCC forecasts from the fourth assessment, and when converted to Fahrenheit from Celsius approximately match the forecasts on page 28 of this report.  The red and orange lines represent more drastic forecasts that have received serious consideration.  This graph is itself a simple model, and we can actually backcast with it as well, looking at what these forecasts imply for temperature over the last 100-150 years, when CO2 has increased from 270 ppm to about 385 ppm.


The forecasts all begin at zero at the pre-industrial concentration of 270 ppm.  The green dotted line is the approximate concentration of CO2 today.  The green 0.3-0.6C arrows show the reasonable range of CO2-induced warming to date.  As one can see, the IPCC forecasts, when cast backwards, grossly overstate past warming.  For example, the IPCC high case predicts that we should have seen over 2C of warming due to CO2 since pre-industrial times, not 0.3C or even 0.6C.
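This backcast can be reproduced in a few lines, assuming the standard logarithmic relationship between CO2 concentration and equilibrium warming (a simplification of the IPCC formula the graph above was built from):

```python
import math

def co2_warming(sensitivity_per_doubling, c_now=385.0, c_pre=270.0):
    # equilibrium warming implied by the standard logarithmic CO2 relationship
    return sensitivity_per_doubling * math.log(c_now / c_pre) / math.log(2)

for s in (1.2, 2.0, 3.0, 4.5):   # illustrative sensitivities, deg C per CO2 doubling
    print(f"{s} C/doubling  ->  {co2_warming(s):.2f} C since pre-industrial")
```

At the IPCC high-end sensitivity of 4.5C per doubling, the implied warming-to-date comes out above 2C; to match an observed 0.3-0.6C of CO2-induced warming, sensitivity has to be much nearer 1C per doubling.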

Now, the modelers worked on this problem.  One big tweak was to assign an improbably high cooling effect to sulfate aerosols.  Since a lot of these aerosols were produced in the late 20th century, this brought their backcasts closer to actuals.  (I say improbably because aerosols are short-lived and cover a very limited area of the globe.  If they cover, say, only 10% of the globe, then their cooling effect must be 1C in their area of effect to have even a small 0.1C effect on the global average.)

Even after these tweaks, the backcasts were still coming out too high.  So, to make the forecasts work, they asked themselves, what would global temperatures have to have done without CO2 to make our models work?  The answer is that if the world naturally were to have cooled in the latter half of the 20th century, then that cooling could offset over-prediction of temperatures in the models and produce the historic result.  So that is what they did.  Instead of starting with natural forcings we understand, and then trying to explain the rest  (one, but only one, bit of which would be CO2), modelers start with the assumption that CO2 is driving temperatures at high sensitivities, and natural forcings are whatever they need to be to make the backcasts match history.
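The "natural forcings are whatever they need to be" logic is just residual arithmetic, which a short sketch (with assumed illustrative numbers, not anyone's published figures) makes plain:

```python
# residual ("plug") arithmetic: whatever warming the CO2 term over-predicts,
# the "natural" curve must absorb (all numbers here are illustrative assumptions)
observed_warming = 0.6     # assumed observed warming over the period, deg C
model_co2_warming = 2.0    # assumed high-sensitivity CO2-only backcast, deg C

natural_plug = observed_warming - model_co2_warming
print(f"implied 'natural' contribution: {natural_plug:.1f} C")
```

In other words, the higher the assumed CO2 sensitivity, the more natural cooling the blue line must be made to show, whether or not any known natural driver actually cooled.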

By the way, if you object to this portrayal, and I will admit I was not in the room to confirm that this is what the modelers were doing, you can refute it very simply.  Just tell me what substantial natural driver of climate, larger in impact than the sun or the PDO, reversed itself in 1955.

A Final Irony

I could go on all day making observations on this chart, but I would be surprised if many readers have slogged this far.  So I will end with one irony.  The climate modelers are all patting themselves on the back for their backcasts matching history so well.  But the fact is that much of this historical temperature record is fraught with errors.  Just as one example, measured temperatures went through several large up and down shifts in the 40's and 50's solely because ships were switching how they took sea surface temperatures (engine inlet sampling tends to yield higher temperatures than bucket sampling).  Additionally, most surface temperature readings are taken in cities that have experienced rapid industrial growth, increasing urban heat biases in the measurements.  In effect, they have plugged and tweaked their way to the wrong target numbers!  Since the GISS and other measurement bodies are constantly revising past temperature numbers with new correction algorithms, it will be interesting to see if the climate models magically revise themselves and backcast perfectly to the new numbers as well.

Another Climate Report Written Backwards

I simply do not have the time to plow through the entire NOAA/NASA CCSP climate change report, so I focused on the 28-page section labeled Global Climate Change.

I will post my comments when they are done, but suffice it to say that this is yet another report written backwards, with the guts of the report written by politicians trying to push an agenda.  This is an incredibly shallow document, more shallow even than the IPCC report and possibly even than the IPCC summary for policy makers.  Call it the NASA summary for the mentally retarded. 

The report is a full-force sales piece for catastrophic global warming.  Not once in the entire chapter I read was there a hint of doubt or uncertainty.  Topics for which scientists have but the flimsiest of understandings, for example feedback effects, are treated with the certainty of Newtonian mechanics.  Any bit of conflicting evidence -- whether it be the fact that oceans were rising before the industrial era, or that tropospheric temperatures are not higher than surface temperatures as predicted, or that large parts of Antarctica are gaining ice -- is blissfully omitted. 

Many of the most important propositions in the report are stated without proof or citation.  Bill Kovacs wrote the other day that of the 21 papers that were cited, only 8 are available to the public prior to the August 14 deadline for public comment.  Just like with the IPCC, the summary is written months ahead of the science.  Much of the report seems to be cut-and-pasted from other sources  (you can tell because graphs are reproduced exactly as they appear in other reports, such as the IPCC fourth assessment).  In many cases, the data between these various charts do not agree (for example, their charts have three or four different versions of 20th century global temperatures, none of which are either sourced or consistent). 

And, of course, the hockey stick, the Freddy Krueger of scientific analysis, is brought back yet again from the dead.

Let me give you just one taste of the quality science here.  Here is a precipitation chart they put in on page 28:


This is like those before-and-after photo games.  Can you see the sleight of hand?  Look at the legend for the green historic line.  It says that it is based on "Simulations."  This means that someone has hypothesized a relationship between temperature and precipitation (the precipitation line in this chart is tellingly nearly identical in pattern and slope to the "human + natural" temperature model output as shown at the top of page 26) and built that relationship into a model.  So the green line is a result of a) a model projecting temperature backward and b) the model taking that temperature and, based on a series of assumptions that temperature drives heavy precipitation events, generating this graph of heavy precipitation events.

Now, look at the caption.  It calls the green line "observed...changes in the heaviest 5 percent of precipitation events."  I am sorry, but model output and observations are not the same thing.  Further, note the circularity of the argument.  Models built on the assumption that temperature increases cause an increase in these events is used as proof that temperature increases these events. 

By the way, look at the error band on the green line.  For some reason, we have near perfect knowledge for worldwide precipitation events in the 1960's, but are less certain about the 1990's.

Practically A Summary of this Blog

In a letter of support for Lord Monckton's recent paper in the Newsletter of the American Physical Society, APS member Roger Cohen summarized his disagreements with the IPCC position on global warming in what could easily have been the table of contents for this blog:

I retired four years ago, and at the time of my retirement I was well convinced, as were most technically trained people, that the IPCC’s case for Anthropogenic Global Warming (AGW) is very tight. However, upon taking the time to get into the details of the science, I was appalled at how flimsy the case really is. I was also appalled at the behavior of many of those who helped produce the IPCC reports and by many of those who promote it. In particular I am referring to the arrogance; the activities aimed at shutting down debate; the outright fabrications; the mindless defense of bogus science, and the politicization of the IPCC process and the science process itself.

At this point there is little doubt that the IPCC position is seriously flawed in its central position that humanity is responsible for most of the observed warming of the last third of the 20th century, and in its projections for effects in the 21st century. Here are five key reasons for this:

  1. The recorded temperature rise is neither exceptional nor persistent. For example, the earth has not warmed since around 1997 and may in fact be in a cooling trend. Also, in particular, the Arctic and contiguous 48 states are at about the same temperature as they were in the 1930s. Also in particular the rate of global warming in the early 20th century was as great as the last third of the century, and no one seriously ascribes the early century increase to greenhouse gas emissions.
  2. Predictions of climate models are demonstrably too high, indicating a significant overestimate of the climate sensitivity (the response of the earth to increases in the incident radiation caused by atmospheric greenhouse gases). This is because the models, upon which the IPCC relies for their future projections, err in their calculations of key feedback and driving forces in the climate system.
  3. Natural effects have been and continue to be important contributors to variations in the earth’s climate, especially solar variability and decadal and multidecadal ocean cycles.
  4. The recorded land-based temperature increase data are significantly exaggerated due to widespread errors in data gathering and inadequately corrected contamination by human activity.
  5. The multitude of environmental and ecological effects blamed on climate change to date is either exaggerated or nonexistent. Examples are claims of more frequent and ferocious storms, accelerated melting of terrestrial icecaps, Mount Kilimanjaro’s glacier, polar bear populations, and expansive mosquito-borne diseases. All of these and many others have been claimed and ascribed to global warming and by extension to human activity, and all are bogus or highly exaggerated.

via Anthony Watts

A Quick Thought on "Peer Review"

One of the weird aspects of climate science is the over-emphasis on peer review as the ne plus ultra guarantor of believable results.  This is absurd.  At best, peer review is a screen for whether a study is worthy of occupying limited publication space, not for whether it is correct.  Peer review, again at best, focuses on whether a study has some minimum level of rigor and coherence and whether it offers up findings that are new or somehow advance the ball on an important topic. 

In "big boy sciences" like physics, study findings are not considered vetted simply because they are peer-reviewed.  They are vetted only after numerous other scientists have been able to replicate the results, or have at least failed to tear the original results down.  Often, this vetting process is undertaken by people who may even be openly hostile to the original study group.  For some reason, climate scientists cry foul when this occurs in their profession, but mathematicians and physicists accept it, because they know that findings need to be able to survive the scrutiny of enemies, not just of friends.  To this end, an important part of peer review is to make sure the publication of the study includes all the detail on methodology and data that others might need to replicate the results  (which is something climate reviewers are particularly bad at).

In fact, there are good arguments to be made that strong peer review may even be counter-productive to scientific advancement.  The reason is that peer review, by the nature of human beings and the incentives they tend to have, is often inherently conservative.  Studies that produce results the community expects often receive only cursory scrutiny doled out by insiders chummy with the authors.  Studies that show wildly unexpected results sometimes have trouble getting published at all.

Postscript:  As I read this, it strikes me that one way to describe climate science is that it acts like a social science, such as sociology or gender studies, rather than like a physical science.  I will have to think about this -- it would be an interesting hypothesis to expand on in more depth.  Some quick parallels of why I think it is more like a social science:

  • Bad statistical methodology  (a hallmark, unfortunately, of much of social science)
  • Emphasis on peer review over replication
  • Reliance on computer models rather than observation
  • Belief there is a "right" answer for society with subsequent bias to study results towards that answer  (example, and another example)

Climate Alarmists and Individual Rights

I am not sure this even needs comment:  (HT:  Maggies Farm)

I’m preparing a paper for an upcoming conference on this, so please comment if you can! Thanks. Many people have urged for there to be some legal or moral consequence for denying climate change. This urge generally comes from a number of places. Foremost is the belief that the science of anthropogenic climate change is proven beyond reasonable doubt and that climate change is an ethical issue. Those quotes from Mahorasy’s blog are interesting. I’ll include one here:

Perhaps there is a case for making climate change denial an offence. It is a crime against humanity, after all. –Margo Kingston, 21 November 2005

The urge also comes from frustration with a ‘denial’ lobby: the furthest and more extreme talkers on the subject who call global warming a ‘hoax’ (following James Inhofe’s now infamous quote). Of course there would be frustration with this position–a ‘hoax’ is purposeful and immoral. And those who either conduct the science or trust the science do not enjoy being told they are perpetrating a ‘hoax’, generating a myth, or committing a fraud....

I’m an advocate for something stronger. Call it regulation, law, or influence. Whatever name we give it, it should not be seen as regulation vs. freedom, but as a balancing of different freedoms. In the same way that to enjoy the freedom of a car you need insurance to protect the freedom of other drivers and pedestrians; in the same way that you enjoy the freedom to publish your views, you need a regulatory code to ensure the freedoms of those who can either disagree with or disprove your views. Either way. While I dislike Brendan O’Neill and know he’s wrong, I can’t stop him. But we need a body with teeth to be able to say, “actually Brendan, you can’t publish that unless you can prove it.” A body which can also say to me, and to James Hansen, and to the IPCC, the same....

What do you think? Perhaps a starting point is a draft point in the codes for governing how the media represent climate change, and a method for enforcing that code. And that code needs to extend out to cover new media, including blogs. And perhaps taking a lesson from the Obama campaign’s micro-response strategy: a team empowered with responding to complaints specifically dealing with online inaccuracy, to which all press and blogs have to respond. And so whatever Jennifer Mahorasy, or Wattsupwiththat, or Tom Nelson, or Climate Sceptic, or OnEarth, or La Marguerite, or the Sans Pretence, or DeSmog Blog, or Monckton or me, say, then we’re all bound by the same freedoms of publishing.

He asked for comments.  I really did not have much energy to refute something so wrong-headed, but I left a few thoughts:

Wow, as proprietor of, I am sure flattered to be listed as one of the first up against the wall come the great green-fascist revolution.  I found it particularly ironic that you linked my post skewering a climate alarmist for claiming that heavier objects fall faster than lighter objects.  Gee, I thought the fact that objects of different masses fall at the same rate had been "settled science" since the late 1500s.

But I don't think you need a lecture on science, you need a lecture on civics.  Everyone always wants free speech for themselves.  The tough part is to support free speech for others, even if they are horribly, terribly wrong-headed.  That is the miracle of the first amendment, that we have stuck by this principle for over 200 years.

You see, technocrats like yourself are always assuming the perfect government official with perfect knowledge and perfect incentives to administer your little censorship body.  But the fact is, such groups are populated with real people, and eventually, the odds are they will be populated by knaves.  And even if folks are well-intentioned, incentives kill such government efforts every time.  What if, for example, your speech regulation bureaucrats felt that their job security depended on a continued climate crisis, and evidence of no crisis might cause their job to go away?  Would they really be unbiased with such an incentive?

Here is a parallel example to consider.  It strikes me that the laws of economics are better understood than the activity of greenhouse gasses.  I wonder if the author would support limits on speech for supporters of things like minimum wages and trade protectionism that economists routinely say make no sense in the science of economics.  Should Barack Obama be enjoined from discussing his gasoline rebate plan because most all economists say that it won't work the way he says?  There is an economist consensus, should that be enough to silence Obama?

5% Chance? No Freaking Way

Via William Briggs, Paul Krugman is quoting a study that says there is a 5% chance man's CO2 will raise temperatures 10C and a 1% chance man will raise global temperatures by 20C.  The study he quotes gets these results by applying various statistical tests to the outcomes from the IPCC climate models.

I am calling Bullshit.

There are any number of problems with the Weitzman study that is the basis for these numbers, but I will address just two.

The more uncertain the models, the more certain the need for action?

The first problem is in looking at the tail end (e.g. the last 1 or 5 percent) of a distribution of outcomes for which we don't really know the mean and certainly don't know the standard deviation.  In fact, the very uncertainty in the modeling and the lack of understanding of the values of the most basic assumptions in the models create an enormous standard deviation.  As a result, the confidence intervals are going to be huge, such that almost every imaginable value may fall within them. 

In most sciences, outsiders would use the fact of these very wide confidence intervals to deride the findings, arguing that the models were close to meaningless and they would be reluctant to make policy decisions based on these iffy findings.  Weitzman, however, uses this ridiculously wide range of potential projections and total lack of certainty to increase the pressure to take policy steps based on the models, by cleverly taking advantage of the absurdly wide confidence intervals to argue that the tail way out there to the right spells catastrophe.  By this argument, the worse the models and the more potential errors that exist, then the wider the distribution of outcomes and therefore the greater the risk and need for government action.  The less we understand anthropogenic warming, the more vital it is that we take immediate, economy-destroying action to combat it.  Following this argument to its limit, the risks we know nothing about are the ones we need to spend the absolute most money on.  By this logic, the space aliens we know nothing about out there pose an astronomical threat that justifies immediate application of 100% of the world's GDP to space defenses. 

My second argument is simpler:  Looking at the data, there is just no freaking way. 

In the charts below, I have given climate alarmists every break.  I have used the most drastic CO2 forecast (A2) from the IPCC fourth assessment, and run the numbers for a peak concentration around 800ppm.  I have used the IPCC's own formula for the effect of CO2 on temperatures without feedback  (Temperature Increase = F(C2) - F(C1), where F(c) = ln(1 + 1.2c + 0.005c^2 + 0.0000014c^3) and c is the concentration in ppm).  Note that skeptics believe that both the 800ppm assumption and the IPCC formula above overstate warming and CO2 buildup, but as you will see, it is not going to matter.

The other formula we need is the feedback formula.  Feedback multiplies the temperature increase from CO2 alone by a factor F, such that F=1/(1-f), where f is the percentage of the original forcing that shows up as first order feedback gain (or damping if negative).
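To make the arithmetic concrete, here is the same pair of formulas sketched in Python.  The function names (`forcing`, `warming`) are just labels for this sketch, and the results are only as good as the quoted IPCC approximation:

```python
import math

def forcing(c):
    """IPCC no-feedback formula quoted above:
    F(c) = ln(1 + 1.2c + 0.005c^2 + 0.0000014c^3), with c in ppm."""
    return math.log(1 + 1.2 * c + 0.005 * c**2 + 0.0000014 * c**3)

def warming(c1, c2, f=0.0):
    """Temperature increase (C) going from concentration c1 to c2,
    scaled by the feedback multiplier F = 1/(1-f)."""
    return (forcing(c2) - forcing(c1)) / (1 - f)

# No-feedback warming from pre-industrial 280ppm to the A2 peak of 800ppm
print(round(warming(280, 800), 2))         # ~1.86C
# Same rise with a 60% feedback fraction (a 2.5x multiplier)
print(round(warming(280, 800, f=0.6), 2))  # ~4.65C
```

Note that the 60% case does nothing more than multiply the no-feedback number by 2.5; all the drama in the high-end forecasts is in that divisor.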

The graph below shows various cases of temperature increase vs. CO2 concentration, based on different assumptions about the physics of the climate system.  All are indexed to equal zero at the pre-industrial CO2 concentration of about 280ppm.

So, the blue line below is the temperature increase vs. CO2 concentration without feedback, using the IPCC formula mentioned above.  The pink is the same formula but with 60% positive feedback (1/[1-.6] = a 2.5 multiplier), and is approximately equal to the IPCC mean for case A2.  The purple line is with 75% positive feedback, and corresponds to the IPCC high-side temperature increase for case A2.  The orange and red lines represent higher positive feedbacks, and correspond to the 10C 5% case and 20C 1% case in Weitzman's article.  Some of this is simplified, but in all important respects it is by-the-book based on IPCC assumptions.


OK, so what does this tell us?  Well, we can do something interesting with this chart.   We have actually moved part-way to the right on this chart, as CO2 today is now at 385ppm, up from the pre-industrial 280ppm.  As you can see, I have drawn this on the chart below.  We have also seen some temperature increase from CO2, though no one really knows what the increase due to CO2 has been vs. the increase due to the sun or other factors.  But the number really can't be much higher than 0.6C, which is about the total warming we have recorded in the last century, and may more likely be closer to 0.3C.  I have drawn these two values on the chart below as well.


Again, there is some uncertainty in a key number (e.g. the amount of historic warming due to CO2) but you can see that it really doesn't matter.  For any conceivable range of past temperature increases due to the CO2 increase from 280-385 ppm, the numbers are nowhere near, not even within an order of magnitude of, what one would expect to have seen if the assumptions behind the other lines were correct.  For example, if we were really heading to a 10C increase at 800ppm, we would have expected temperatures to have risen in the last 100 years by about 4C, which NO ONE thinks is even remotely the case.  And if there is zero chance historic warming from man-made CO2 is anywhere near 4C, then there is zero (not 5%, not 1%) chance future warming will hit 10C or 20C.
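Readers who want to check this without the chart can do it with the forcing formula quoted above.  Below is a rough Python sanity check; the variable names are mine.  Reading straight from the formula gives an implied historic warming near 3C rather than the roughly 4C read off the chart, but either number is several times anything actually observed:

```python
import math

def forcing(c):
    # IPCC no-feedback forcing approximation quoted earlier in the post
    return math.log(1 + 1.2 * c + 0.005 * c**2 + 0.0000014 * c**3)

no_fb = forcing(800) - forcing(280)   # ~1.86C without feedback, 280 -> 800ppm

# Feedback fraction f needed to stretch that into a 10C rise at 800ppm:
# 10 = no_fb / (1 - f)  =>  f = 1 - no_fb / 10
f_10C = 1 - no_fb / 10
print(round(f_10C, 2))                # ~0.81, i.e. over 80% positive feedback

# Warming that same feedback says we should ALREADY have seen at 385ppm
implied = (forcing(385) - forcing(280)) / (1 - f_10C)
print(round(implied, 1))              # ~2.8C, versus at most ~0.6C observed
```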

In fact, experience to date seems to imply that warming has run below even the no-feedback case.  This should not surprise anyone in the physical sciences.  A warming line on this chart below the no-feedback line would imply negative feedback, or damping, in the climate system, and most long-term stable physical systems are dominated by just such negative feedback.  It is hard to find many natural processes, except perhaps nuclear fission, that are driven by positive feedbacks as high as one must assume to get the 10C and 20C warming cases.  In short, these cases are absurd, and we should be looking closely at whether even the IPCC mean case is overstated as well.

What climate alarmists will argue is that these curves are not continuous.  They believe that there is some point out there where the feedback fraction goes above 100%, and thus the gain goes infinite, and the temperature runs away suddenly.  The best example is fissionable material being relatively inert until it reaches critical mass, when a runaway nuclear fission reaction occurs. 
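In terms of the feedback formula used above, this runaway case is the point where the feedback fraction f reaches 100% and the gain 1/(1-f) diverges.  A quick illustration (the sample f values are arbitrary):

```python
# Gain F = 1/(1-f) blows up as the feedback fraction f approaches 100%
for f in (0.5, 0.75, 0.9, 0.99, 0.999):
    print(f, "->", round(1 / (1 - f), 1))
# 0.5 -> 2.0, 0.75 -> 4.0, 0.9 -> 10.0, 0.99 -> 100.0, 0.999 -> 1000.0
```

Below f = 1 the response is large but finite; at f = 1 and beyond the formula no longer describes a stable system at all, which is what a tipping point would mean.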

I hope all reasonable people see the problem with this.  The earth, on any number of occasions, has been hotter and/or had higher CO2 concentrations, and there is no evidence of this tipping point effect ever having occurred.  In fact, climate alarmists like Michael Mann contradict themselves by arguing (in the infamous hockey stick chart) that temperatures absent mankind have been incredibly stable for thousands of years, despite numerous forcings like volcanoes and the Maunder Minimum.  Systems this stable cannot reasonably be dominated by high positive feedbacks, much less tipping points and runaway processes.

Postscript:  I have simplified away lag effects and masking effects, like aerosol cooling.  Lag effects of 10-15 years barely change this analysis at all.  And aerosol cooling, given its limited area of effect (cooling aerosols are short-lived and so are geographically limited to areas downwind of industrial regions), is unlikely to be masking more than a tenth or two of a degree of warming, if any.  The video below addresses all these issues in more depth, and provides more step-by-step descriptions of how the charts above were created.

Update:  Lucia Liljegren of the Blackboard has created a distribution of the warming forecasts from numerous climate models and model runs used by the IPCC, with "weather noise" similar to what we have seen over the last few decades overlaid on the model mean 2C/century trend.  The conclusion is that our experience in the last century is unlikely to be solely due to weather noise masking the long-term trend.  It looks like even the IPCC models, which are well below the 10C or 20C warming forecasts discussed above, may themselves be too high.  (click for larger version)


Weitzman was looking at a different type of distribution, but it is still interesting to observe that while alarmists worry about what might happen out to the right at the 95% or 99% confidence intervals of the models, the world seems to be operating way over to the left.

It's CO2, Because We Can't Think of Anything Else it Could Be

For a while, I have written about the bizarre assumption made by climate scientists.  They cannot prove or show any good link historically between CO2 and warming.  What they instead do is show that they can't explain some of the warming by understood processes, so they assume that any warming they cannot explain is from CO2.   Don't believe me?

Researchers are trying to understand how much of the melting is due to the extreme natural variability in the northern polar climate system and how much is due to global warming caused by humans. The Arctic Oscillation climate pattern, which plays a big part in the weather patterns in the northern hemisphere, has been in "positive" mode in recent decades bringing higher temperatures to the Arctic.

Dr Igor Polyakov, an oceanographer from the International Arctic Research Centre in Fairbanks, Alaska, explained that natural variability as well as global warming is crucial to understanding the ice melt. "A combination of these two forces led to what we observe now and we should not ignore either forces" he said.

The consensus among scientists is that while the natural variability in the Arctic is an important contributor to climate change there, the climate models cannot explain the rapid loss of sea ice without including "human-induced" global warming. This means human activity such as burning fossil fuels and land clearing which are releasing greenhouse gases in the atmosphere.

"There have been numerous models run that have looked at that and basically they can't reproduce the ice loss we've had with natural variability," said Dr Perovich. "You have to add a carbon dioxide warming component to it."

In other words, any warming scientists can't explain is chalked up to, without proof mind you, CO2.  Why?  Well, perhaps because it is CO2 that gets the funding, so CO2 it is.  To show you how dangerous this assumption is, I note that this study apparently did not consider the effect of man-made soot from inefficient coal and oil combustion (e.g. from China).  Soot lands on the ice, lowers its albedo, and causes it to melt a lot faster.  Several recent studies have hypothesized that this alternate anthropogenic effect (with a very different solution set from CO2 abatement) may explain much of recent Arctic ice loss. 

Here is a big fat clue for climate scientists:  It is not part of the scientific method to confidently ascribe your pet theory (and source of funding) to every phenomenon you cannot explain.  Or, maybe climate scientists are on to something.  Why does gravity seem to work instantaneously at long distances?  CO2!  What causes cancer cells to turn on and grow out of control?  CO2!  Hey, it's easy.  All of our scientific dilemmas are instantly solved.

More on "the Splice"

I have written that it is sometimes necessary to splice data gathered from different sources, such as when I suggested splicing satellite temperature measurements onto surface temperature records.

When I did so, I cautioned that there can be issues with such splices.  In particular, one needs to be very, very careful not to make too much of an inflection in the slope of the data that occurs right at the splice.  Reasonable scientific minds would wonder if that inflection point was an artifact of the changes in data source and measurement technology, rather than in the underlying phenomenon being measured.  Of course, climate scientists are not reasonable, and so they declare catastrophic anthropogenic global warming to be settled science based on an inflection in temperature data right at a data source splice (between tree rings and thermometers).  More here.

Absolutely Priceless Example of How Poor Alarmists' Science Can Be

This is absolutely amazing.  I was checking out this article in the Ithaca Journal called "Climate Change 101: Positive Feedback Cycles" based on a pointer from Tom Nelson.

The Journal is right to focus on feedback.  As I have written on numerous occasions, the base effect of CO2, even in the IPCC projections, is minimal.  Only by assuming unbelievably high positive feedback numbers do the IPCC and other climate modelers get catastrophic warming forecasts.  Such an assumption is hard to swallow - very few (like, zero) long-term stable natural processes (like climate) are dominated by high positive feedbacks (the IPCC forecasts assume 67-80% feedback factors, leading to forecasts 3x to 5x higher than the no-feedback case). 
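As a quick check on those numbers, the 3x to 5x multipliers follow directly from the standard feedback gain formula F = 1/(1-f); the function name `gain` below is just my label:

```python
def gain(f):
    """Feedback gain: the multiplier applied to the no-feedback warming."""
    return 1 / (1 - f)

print(round(gain(0.67), 1))  # ~3.0 (67% feedback roughly triples base warming)
print(round(gain(0.80), 1))  # 5.0 (80% feedback multiplies it five-fold)
```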

So I guess I have to give kudos to an alarmist article that actually attempts to take on the feedback issue, the most critical, and shakiest, of the climate model assumptions. 

But all their credibility falls apart from the first paragraph.  They begin:

Our world is full of positive feedback cycles, and so is our society. Popular children's books like “If You Give a Mouse a Cookie” by Laura Numeroff are excellent examples. In Numeroff's tale, a mouse asks for a cookie, leading it to ask for a glass of milk, and so on, till finally it asks for another cookie.

Oh my God, they go to a children's book to prove positive feedback?  If I had gone this route, I probably would have played the "sorcerer's apprentice" card from Fantasia.  Anyway, they do soon get into real physics in the next paragraph.  Sort of.

Here's an example everyone in Ithaca can relate to: the snowball. If you make a small snowball and set it on the top of a hill, what happens? 1) It begins rolling, and 2) it collects snow as it rolls. When it collects snow, the snowball becomes heavier, which causes gravity to pull on it with more force, making the snowball roll faster down the hill. This causes more snow to collect on the snowball faster, etc., etc. Get the picture? That is a positive feedback cycle.

OMG, my head is hurting.  Is there a single entry-level physics student who doesn't know this is wrong?  The speed of a ball rolling downhill (wind resistance ignored) is absolutely unaffected by its weight.  A 10 pound ball would reach the bottom at the same moment as a 100 pound ball.  Do I really need to be lectured by someone who does not understand even the most basic Newtonian physics?  (I would have to think about what increasing diameter would do to a ball rolling downhill and its speed -- but the author's argument is about weight, not size, so this is irrelevant.)
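For anyone who wants the textbook version of why mass drops out: a uniform sphere rolling without slipping down an incline accelerates at a = g*sin(theta) / (1 + I/(m*r^2)), and since the moment of inertia of a solid sphere is I = (2/5)*m*r^2, both the mass and the radius cancel.  A small sketch (the function name and parameters are my own, and air resistance is ignored, as above):

```python
import math

def rolling_accel(mass_kg, angle_deg, g=9.81):
    """Acceleration of a uniform solid sphere rolling without slipping:
    a = g*sin(theta) / (1 + I/(m*r^2)), with I/(m*r^2) = 2/5.
    The mass argument exists only to show that it never enters the result."""
    return g * math.sin(math.radians(angle_deg)) / (1 + 2 / 5)

# A "10 pound" ball and a "100 pound" ball on the same 30-degree hill:
print(rolling_accel(4.5, 30) == rolling_accel(45, 30))  # True
```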

Do you really need any more?  This guy has already disqualified himself from lecturing to us about physical processes.  But lets get a bit more:

And what happens to the snowball? Eventually the hill flattens and the ball comes to a stop. But if the hill continued forever, the snowball would reach some critical threshold. It would become too big to hold itself together at the raging speed it was traveling down the hill and it would fall apart. Before the snowball formed, it was at equilibrium with its surroundings, and after it falls apart, it may again reach an equilibrium, but the journey is fast-paced and unpredictable.

Two problems:  1) In nature, "hills" are never infinitely long.  And on any hill that was infinitely long, everything with minimal starting energy would have reached the bottom long before now, 12 billion years or so into the history of the universe.  2)  Climate is a long-term, quite stable process.  It oscillates some, but never runs away.  Temperatures in the past have already been many degrees higher and lower than they are today.  If a degree or so is all it takes to start the climate snowball running down the infinite hill, then the climate should have already run down this hill in the past, but it never has.  That is because long-term stable natural processes are generally dominated by negative, not positive, feedback. [ed: fixed this, had it backwards]

The author goes on to discuss a couple of well-known possible positive feedback factors - increases in water vapor and ice albedo.  But he completely fails to mention well-understood negative feedback factors, including cloud formation.  In fact, though most climate models assume positive feedback from the net of water processes (water vapor increase and cloud formation), the IPCC admits we don't even know the net sign of these factors.  And most recent published work on feedback factors has demonstrated that climate does not seem to be dominated by positive feedbacks.

It goes without saying that an author who begins with a children's book and a flawed physics example can't take much credit for being scientific.  But perhaps his worst failing of all is discussing a process that has countervailing forces but failing to even mention the half of those forces that don't support his case.  It's not science, it's propaganda.

CO2 Limits Most Harmful to Low-Income Minorities

The Environmental Justice and Climate Change Initiative has issued a report claiming that rising temperatures, supposedly driven by CO2, will hurt American blacks the most.

Blacks are more likely to be hurt by global warming than other Americans, according to a report issued Thursday.

The report was authored by the Environmental Justice and Climate Change Initiative, a climate justice advocacy group, and Redefining Progress, a nonprofit policy institute. It detailed various aspects of climate change, such as air pollution and rising temperatures, which it said disproportionately affect blacks, minorities and low-income communities in terms of poor health and economic loss.

“Right now we have an opportunity to see climate change in a different light; to see it for what it is, a human rights issue on a dangerous collision course of race and class,” said Nia Robinson, director of the Environmental Justice and Climate Change Initiative. “While it’s an issue that affects all of us, like many other social justice issues, it is disproportionately affecting African-Americans, other people of color, low-income people and indigenous communities.”

Heat-related deaths among blacks occur at a 150 to 200 percent greater rate than for non-Hispanic whites, the report said. It also reported that asthma, which has a strong correlation to air pollution, affects blacks at a 36 percent higher rate of incidence than whites.

Existing disparities between low-income communities and wealthier ones, such as high unemployment rates, are exacerbated by such negative effects of climate change as storms and floods, the report said.

Hmm, no mention of reductions in cold-related deaths, which in a typical year outnumber heat-related deaths.  But a more serious issue is the CO2-abatement measures this advocacy group supports.  These abatement efforts could easily increase gas prices by as much as $20 a gallon, along with similar increases in electricity and natural gas prices.  In addition, strong CO2 abatement programs are likely to knock a percent or two off economic growth rates and, if ethanol is still a preferred tactic, will likely substantially raise food prices as well.  I am no expert, but I would say that rising gas, electricity, and food prices and falling economic growth are all likely to hit low-income minorities pretty hard.

CO2 abatement is a wealthy person's cause.  The poor of America and the world at large will be demonstrably worse off in a world that is cooler but poorer.

No Detectable Hurricane Trend

Hurricanes offer a difficult data set to work with.  Since there are so few, even small numerical changes year over year can lead to substantial percentage changes.  Also, random variations in landfall can change at least media perceptions of hurricane frequency.  That is why I have argued for a while that metrics like total cyclonic energy are better for looking at hurricane trends.  And, as you can see below, there has been no positive trend over the last 15 years or so:


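For readers unfamiliar with energy-based metrics, here is a sketch of the standard accumulated cyclone energy (ACE) calculation, which sums the squares of six-hourly maximum sustained winds (in knots) over the season, counting only observations at tropical-storm strength or above.  The storm data below are invented purely for illustration:

```python
# Sketch of the accumulated cyclone energy (ACE) calculation.
# ACE = 1e-4 * sum of squared six-hourly max sustained winds (knots),
# counting only observations of 35 kt (tropical-storm strength) or more.
# The wind observations below are hypothetical, for illustration only.

def ace(storms, threshold_kt=35):
    total = 0.0
    for winds in storms:                 # each storm: list of 6-hourly max winds (kt)
        total += sum(v * v for v in winds if v >= threshold_kt)
    return total * 1e-4                  # conventional 10^-4 scaling

season = [
    [30, 40, 55, 70, 65, 45, 30],        # a moderate hurricane
    [35, 45, 50, 40, 30],                # a short-lived tropical storm
]
print(round(ace(season), 2))
```

Because ACE weights both intensity and duration, a season with a few long-lived intense storms can score higher than one with many weak, short-lived storms, which is why it is less noisy than simple storm counts.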
The Australian National Climate Center confirmed these findings:

Concern about the enhanced greenhouse effect affecting TC frequency and intensity has grown over recent decades. Recently, trends in global TC activity for the period 1970 to 2004 have been examined by Webster et al. [2005]. They concluded that "no global trend has yet emerged in the total number of tropical storms and hurricanes." …  For the 1981/82 to 2005/06 TC seasons, there are no apparent trends in the total numbers and cyclone days of TCs, nor in numbers and cyclone days of severe TCs with minimum central pressure of 970 hPa or lower.

Media Rorschach Test

This will come as no surprise to folks who try to follow climate science through the media, but a recent study sheds some interesting light on how the media report science based on their preconceived notions, and not on the science itself.  Alex Tabarrok discusses media reporting on the relative math skills of men and women.  The politically correct view is that there are no differences, so it seems that was how the new science was going to be reported, whether the data matched or not:

For the past week or so the newspapers have been trumpeting a new study showing no difference in average math ability between males and females.  Few people who have looked at the data thought that there were big differences in average ability but many media reports also said that the study showed no differences in high ability.

The LA Times, for example, wrote:

The study also undermined the assumption -- infamously espoused by former Harvard University President Lawrence H. Summers in 2005 -- that boys are more likely than girls to be math geniuses.

Scientific American said:

So the team checked out the most gifted children. Again, no difference. From any angle, girls measured up to boys. Still, there’s a lack of women in the highest levels of professional math, engineering and physics. Some have said that’s because of an innate difference in math ability. But the new research shows that that explanation just doesn’t add up.

The Chronicle of Higher Education said:

The research team also studied if there were gender discrepancies at the highest levels of mathematical ability and how well boys and girls resolved complex problems. Again they found no significant differences.

All of these reports and many more like them are false.  In fact, consistent with many earlier studies (JSTOR), what this study found was that the ratio of male to female variance in ability was greater than one and significant; in other words, we can expect more math geniuses, and more dullards, among males than among females.  I quote from the study (VR is variance ratio):

Greater male variance is indicated by VR > 1.0. All VRs, by state and grade, are >1.0 [range 1.11 to 1.21].

Notice that the greater male variance is observable in the earliest data, grade 2.  (In addition, higher male VRs have been noted for over a century.)  Now the study authors clearly wanted to downplay this finding, so they wrote things like "our analyses show greater male variability, although the discrepancy in variances is not large."  Which is true in some sense, but the point is that small differences in variance can make for big differences in outcome at the top.  The authors acknowledge this with the following:

If a particular specialty required mathematical skills at the 99th percentile, and the gender ratio is 2.0, we would expect 67% men in the occupation and 33% women. Yet today, for example, Ph.D. programs in engineering average only about 15% women.
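That arithmetic is easy to check.  Assuming, purely for illustration, normal score distributions with equal means, a female standard deviation normalized to 1, and a variance ratio in the middle of the study's reported 1.11-1.21 range, the male:female ratio in the upper tail grows the further out you go:

```python
# Back-of-envelope tail ratio under assumed normal distributions with
# equal means.  VR = 1.16 sits in the middle of the study's reported
# 1.11-1.21 range; all numbers here are illustrative, not from the study.
from math import erf, sqrt

def tail(z):
    """P(Z > z) for a standard normal variable."""
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

vr = 1.16                      # male/female variance ratio
male_sd = sqrt(vr)             # female sd normalized to 1
for pct, z in [("99th", 2.326), ("99.9th", 3.090)]:
    ratio = tail(z / male_sd) / tail(z)
    print(f"above the female {pct} percentile: {ratio:.2f} males per female")
```

With VR = 1.16 the ratio comes out around 1.5:1 at the 99th percentile and roughly 2:1 at the 99.9th, which is exactly the "small difference in variance, big difference at the top" point the authors concede.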

Both the WSJ and economist Mark Perry get it right.

Climate: The First Post-Modernist Science?

When I was in college, we mechanical engineers had little but disdain for practitioners of the various social sciences, who seemed more focused on advancing political ideologies than conducting quality science.  Apparently, denizens of these softer sciences have become convinced that the lack of objectivity or objective research that plagues their fields is par for the course in the hard sciences as well.  MaxedOutMamma describes this post-modernist view of science:

If some reader is not familiar with the full-bodied modern explications of post-modernism, the story of the Dartmouth professor who decided to sue her students will serve as an introduction. Here is her version of the problem with her students. Here is an article she wrote about working as a post-doc researcher at Dartmouth Medical School, which may give a hint as to why her students were so, ah, unwilling to assent to her view of the world:

In graduate school, I was inculcated in the tenets of a field known as science studies, which teaches that scientific knowledge has suspect access to truth and that science is motivated by politics and human interest. This is known as social constructivism and is the reigning mantra in science studies, which considers historical and sociological understandings of science. From the vantage point of social constructivism, scientific facts are not discovered but rather created within a social framework. In other words, scientific facts do not correspond to a natural reality but conform to a social construct.

As a practicing scientist, I feel these views need to be qualified in the context of literary inquiry. My mentor, Chris Lowrey, is an extraordinary physician-scientist whose vision of science is pragmatic and positivist. My experience in his lab has shown me that the practice of science is at least partly motivated by the scientific method, though with some qualifications.

Through my experience in the laboratory, I have found that postmodernism offers a constructive critique of science in ways that social constructivism cannot, due to postmodernism's emphasis on openly addressing the presupposed moral aims of science. In other words, I find that while an individual ethic of motivation exists, and indeed guides the conduct of laboratory routine, I have also observed that a moral framework—one in which the social implications of science and technology are addressed—is clearly absent in scientific settings. Yet I believe such a framework is necessary. Postmodernism maintains that it is within the rhetorical apparatus of science—how scientists talk about their work—that these moral aims of science may be accomplished.

For those of you who cling to scientific method, this is pretty bizarre stuff. But she, and many others, are dead serious about it. If a research finding could harm a class of persons, the theory is that scientists should change the way they talk about that finding. Since scientific method is a way of building a body of knowledge based on skeptical testing, replication, and publication, this is a problem.

The tight framework of scientific method mandates figuring out what would disprove the theory being tested and then looking for the disproof. The thought process that spawned the scientific revolution was inherently skeptical, which is why disciples of scientific method say that no theory can be definitively and absolutely proved, but only disproved (falsified). Hypotheses are elevated to the status of theories largely as a result of continued failures to disprove the theory and continued conformity of experimentation and observation with the theory, and such efforts should be conducted by diverse parties.

Needless to say postmodernist schools of thought and scientific method are almost polar opposites.

Reading this, I start to come to the conclusion that climate scientists are attempting to make climate the first post-modernist physical science.  It would certainly explain why climate falls so far short of being a "big-boy science" like physics, where replicating results is more important than casual review of publications by a cherry-picked group of peers.  It also explains this quote from National Center for Atmospheric Research (NCAR) climate researcher and global warming action promoter, Stephen Schneider:

We have to offer up scary scenarios, make simplified, dramatic statements, and make little mention of any doubts we have. Each of us has to decide what the right balance is between being effective and being honest.

Additionally, it goes a long way to explaining why Steve McIntyre gets this response when he requests the data he needs to try to replicate certain climate studies (and here):

    We have 25 or so years invested in the work. Why should I make the data available to you, when your aim is to try and find something wrong with it. There is IPR to consider.

Roy Spencer Congressional Testimony

I am a bit late on this, but Roy Spencer raises a number of good issues here in his testimony to Congress.  In particular, he focuses on just how much climate alarmists' assumption of strong positive feedback drives their catastrophic forecasts.  Put in more realistic, better-justified feedback assumptions, and the catastrophe goes away.
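The dependence on the feedback assumption follows from the standard zero-dimensional relationship: equilibrium warming equals forcing divided by the feedback parameter.  A back-of-the-envelope sketch, using the commonly cited ~3.7 W/m² forcing for a doubling of CO2 (the feedback parameter values are illustrative, chosen only to bracket the debate):

```python
# Equilibrium warming for a CO2 doubling under different assumed
# net feedback parameters (lambda, in W/m^2 per deg K):
#   dT = dF / lambda
# 3.7 W/m^2 is the commonly cited forcing for doubled CO2; the
# lambda values below are illustrative, not from any one study.
forcing = 3.7  # W/m^2, doubled CO2

for label, lam in [
    ("strong net positive feedback (model-like)", 1.2),
    ("no net feedback (Planck response)", 3.3),
    ("net negative feedback (Spencer-like)", 6.0),
]:
    print(f"{label}: {forcing / lam:.1f} deg C")
```

The same forcing yields roughly 3 degrees of warming under the strong-positive-feedback assumption but only about 1 degree with no net feedback, and well under a degree with net negative feedback, which is why the feedback assumption, not the CO2 physics, does almost all the work in the catastrophic forecasts.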

Testimony of Roy W. Spencer before the
Senate Environment and Public Works Committee on 22 July 2008

A printable PDF of this testimony can be found here

I would like to thank Senator Boxer and members of the Committee for allowing me to discuss my experiences as a NASA employee engaged in global warming research, as well as to provide my current views on the state of the science of global warming and climate change.

I have a PhD in Meteorology from the University of Wisconsin-Madison, and have been involved in global warming research for close to twenty years. I have numerous peer reviewed scientific articles dealing with the measurement and interpretation of climate variability and climate change. I am also the U.S. Science Team Leader for the AMSR-E instrument flying on NASA’s Aqua satellite.

1. White House Involvement in the Reporting of Agency Employees’ Work

On the subject of the Administration’s involvement in policy-relevant scientific work performed by government employees in the EPA, NASA, and other agencies, I can provide some perspective based upon my previous experiences as a NASA employee. For example, during the Clinton-Gore Administration I was told what I could and could not say during congressional testimony. Since it was well known that I am skeptical of the view that mankind’s greenhouse gas emissions are mostly responsible for global warming, I assumed that this advice was to help protect Vice President Gore’s agenda on the subject.

This did not particularly bother me, though, since I knew that as an employee of an Executive Branch agency my ultimate boss resided in the White House. To the extent that my work had policy relevance, it seemed entirely appropriate to me that the privilege of working for NASA included a responsibility to abide by direction given by my superiors.

But I eventually tired of the restrictions I had to abide by as a government employee, and in the fall of 2001 I resigned from NASA and accepted my current position as a Principal Research Scientist at the University of Alabama in Huntsville. Despite my resignation from NASA, I continue to serve as Team Leader on the AMSR-E instrument flying on the NASA Aqua satellite, and maintain a good working relationship with other government researchers.

2. Global Warming Science: The Latest Research
Regarding the currently popular theory that mankind is responsible for global warming, I am very pleased to deliver good news from the front lines of climate change research. Our latest research results, which I am about to describe, could have an enormous impact on policy decisions regarding greenhouse gas emissions.
Despite decades of persistent uncertainty over how sensitive the climate system is to increasing concentrations of carbon dioxide from the burning of fossil fuels, we now have new satellite evidence which strongly suggests that the climate system is much less sensitive than is claimed by the U.N.’s Intergovernmental Panel on Climate Change (IPCC).

Another way of saying this is that the real climate system appears to be dominated by “negative feedbacks” — instead of the “positive feedbacks” which are displayed by all twenty computerized climate models utilized by the IPCC. (Feedback parameters larger than 3.3 Watts per square meter per degree Kelvin (Wm-2K-1) indicate negative feedback, while feedback parameters smaller than 3.3 indicate positive feedback.)

If true, an insensitive climate system would mean that we have little to worry about in the way of manmade global warming and associated climate change. And, as we will see, it would also mean that the warming we have experienced in the last 100 years is mostly natural. Of course, if climate change is mostly natural then it is largely out of our control, and is likely to end — if it has not ended already, since satellite-measured global temperatures have not warmed for at least seven years now.

2.1 Theoretical evidence that climate sensitivity has been overestimated
The support for my claim of low climate sensitivity (net negative feedback) for our climate system is two-fold. First, we have a new research article [1], in press at the Journal of Climate, which uses a simple climate model to show that previous estimates of the sensitivity of the climate system from satellite data were biased toward the high side by the neglect of natural cloud variability. It turns out that the failure to account for natural, chaotic cloud variability generated internal to the climate system will always lead to the illusion of a climate system which appears more sensitive than it really is.

Significantly, prior to its acceptance for publication, this paper was reviewed by two leading IPCC climate model experts - Piers Forster and Isaac Held– both of whom agreed that we have raised a legitimate issue. Piers Forster, an IPCC report lead author and a leading expert on the estimation of climate sensitivity, even admitted in his review of our paper that other climate modelers need to be made aware of this important issue.

To be fair, in a follow-up communication Piers Forster stated to me his belief that the net effect of the new understanding on climate sensitivity estimates would likely be small. But as we shall see, the latest evidence now suggests otherwise.

2.2 Observational evidence that climate sensitivity has been overestimated
The second line of evidence in support of an insensitive climate system comes from the satellite data themselves. While our work in-press established the existence of an observational bias in estimates of climate sensitivity, it did not address just how large that bias might be.

But in the last several weeks, we have stumbled upon clear and convincing observational evidence of particularly strong negative feedback (low climate sensitivity) from our latest and best satellite instruments. That evidence includes our development of two new methods for extracting the feedback signal from either observational or climate model data, a goal which has been called the “holy grail” of climate research.
The first method separates the true signature of feedback, wherein radiative flux variations are highly correlated to the temperature changes which cause them, from internally-generated radiative forcings, which are uncorrelated to the temperature variations which result from them. It is the latter signal which has been ignored in all previous studies, the neglect of which biases feedback diagnoses in the direction of positive feedback (high climate sensitivity).
Based upon global oceanic climate variations measured by a variety of NASA and NOAA satellites during the period 2000 through 2005 we have found a signature of climate sensitivity so low that it would reduce future global warming projections to below 1 deg. C by the year 2100. As can be seen in Fig. 1, that estimate from satellite data is much less sensitive (a larger diagnosed feedback) than even the least sensitive of the 20 climate models which the IPCC summarizes in its report. It is also consistent with our previously published analysis of feedbacks associated with tropical intraseasonal oscillations [3].

Fig. 1. Frequency distributions of feedback parameters (regression slopes) computed from three-month low-pass filtered time series of temperature (from channel 5 of the AMSU instrument flying on the NOAA-15 satellite) and top-of-atmosphere radiative flux variations for 6 years of global oceanic satellite data measured by the CERES instrument flying on NASA’s Terra satellite; and from a 60 year integration of the NCAR-CCSM3.0 climate model forced by 1% per year CO2 increase. Peaks in the frequency distributions indicate the dominant feedback operating. This NCAR model is the least sensitive (greatest feedback parameter value) of all 20 IPCC models.
A second method for extracting the true feedback signal takes advantage of the fact that during natural climate variability, there are varying levels of internally-generated radiative forcings (which are uncorrelated to temperature), versus non-radiative forcings (which are highly correlated to temperature). If the feedbacks estimated for different periods of time involve different levels of correlation, then the “true” feedback can be estimated by extrapolating those results to 100% correlation. This can be seen in Fig. 2, which shows that even previously published [4] estimates of positive feedback are, in reality, supportive of negative feedback (feedback parameters greater than 3.3 Wm-2K-1).

Fig. 2. Re-analysis of the satellite-based feedback parameter estimates of Forster and Gregory (2006) showing that they are consistent with negative feedback rather than positive feedback (low climate sensitivity rather than high climate sensitivity).

2.3 Why do climate models produce so much global warming?
The results just presented beg the following question: If the satellite data indicate an insensitive climate system, why do the climate models suggest just the opposite? I believe the answer is due to a misinterpretation of cloud behavior by climate modelers.

The cloud behaviors programmed into climate models (cloud “parameterizations”) are based upon researchers’ interpretation of cause and effect in the real climate system [5]. When cloud variations in the real climate system have been measured, it has been assumed that the cloud changes were the result of certain processes, which are ultimately tied to surface temperature changes. But since other, chaotic, internally generated mechanisms can also be the cause of cloud changes, the neglect of those processes leads to cloud parameterizations which are inherently biased toward high climate sensitivity.

The reason why the bias occurs only in the direction of high climate sensitivity is this: While surface warming could conceivably cause cloud changes which lead to either positive or negative cloud feedback, causation in the opposite direction (cloud changes causing surface warming) can only work in one direction, which then “looks like” positive feedback. For example, decreasing low cloud cover can only produce warming, not cooling, and when that process is observed in the real climate system and assumed to be a feedback, it will always suggest a positive feedback.
2.4 So, what has caused global warming over the last century?
One necessary result of low climate sensitivity is that the radiative forcing from greenhouse gas emissions in the last century is not nearly enough to explain the upward trend of 0.7 deg. C in the last 100 years. This raises the question of whether there are natural processes at work which have caused most of that warming.
On this issue, it can be shown with a simple climate model that small cloud fluctuations assumed to occur with two modes of natural climate variability — the El Nino/La Nina phenomenon (Southern Oscillation), and the Pacific Decadal Oscillation — can explain 70% of the warming trend since 1900, as well as the nature of that trend: warming until the 1940s, no warming until the 1970s, and resumed warming since then. These results are shown in Fig. 3.

Fig. 3. A simple climate model forced with cloud cover variations assumed to be proportional to a linear combination of the Southern Oscillation Index (SOI) and Pacific Decadal Oscillation (PDO) index. The heat flux anomalies in (a), which then result in the modeled temperature response in (b), are assumed to be distributed over the top 27% of the global ocean (1,000 meters), and weak negative feedback has been assumed (4 W m-2 K-1).

While this is not necessarily being presented as the only explanation for most of the warming in the last century, it does illustrate that there are potential explanations for recent warming other than just manmade greenhouse gas emissions. Significantly, this is an issue on which the IPCC has remained almost entirely silent. There has been virtually no published work on the possible role of internal climate variations in the warming of the last century.

3. Policy Implications
Obviously, what I am claiming today is of great importance to the global warming debate and related policy decisions, and it will surely be controversial. These results are not totally unprecedented, though, as other recently published research [6] has also led to the conclusion that the real climate system does not exhibit net positive feedback.

While it will take some time for the research community to digest this new information, it must be mentioned that new research contradicting the latest IPCC report is entirely consistent with the normal course of scientific progress. I predict that in the coming years, there will be a growing realization among the global warming research community that most of the climate change we have observed is natural, and that mankind’s role is relatively minor.

While other researchers need to further explore and validate my claims, I am heartened by the fact that my recent presentation of these results to an audience of approximately 40 weather and climate researchers at the University of Colorado in Boulder last week (on July 17, 2008) led to no substantial objections to either the data I presented or my interpretation of those data.

And, curiously, despite its importance to climate modeling activities, no one from Dr. Kevin Trenberth’s facility, the National Center for Atmospheric Research (NCAR), bothered to drive four miles down the road to attend my seminar, even though it was advertised at NCAR.

I hope that the Committee realizes that, if true, these new results mean that humanity will be largely spared the negative consequences of human-induced climate change. This would be good news that should be celebrated — not attacked and maligned.

And given that virtually no research into possible natural explanations for global warming has been performed, it is time for scientific objectivity and integrity to be restored to the field of global warming research. This Committee could, at a minimum, make a statement that encourages that goal.

1. Spencer, R.W., and W.D. Braswell, 2008: Potential biases in cloud feedback diagnosis:
A simple model demonstration. J. Climate, in press.
2. Allen, M.R., and D.J. Frame, 2007: Call off the quest. Science, 318, 582.
3. Spencer, R.W., W. D. Braswell, J. R. Christy, and J. Hnilo, 2007: Cloud and radiation
budget changes associated with tropical intraseasonal oscillations. Geophys. Res.
Lett., 34, L15707, doi:10.1029/2007GL029698.
4. Forster, P. M., and J. M. Gregory, 2006: The climate sensitivity and its components
diagnosed from Earth Radiation Budget data. J. Climate, 19, 39-52.
5. Stephens, G. L., 2005: Clouds feedbacks in the climate system: A critical review. J.
Climate, 18, 237-273.
6. Schwartz, S. E., 2007: Heat capacity, time constant, and sensitivity of the Earth’s
climate system. J. Geophys. Res., 112, D24S05, doi:10.1029/2007JD008746.

Global Warming Is Caused by Everything Our Interest Group Opposed Before It Came Along As An Issue

Many leftish groups have for years had a curious opposition to advertising.  Ralph Nader and his PIRG groups always made it a particular issue.  This always struck me as inherently insulting, as the "logic" behind their opposition to advertising is that people are all dumb, unthinking, programmable robots who launch off and buy whatever they see advertised on TV.

The global warming hysteria kind of sucks all the oxygen out of every other goofy leftish issue out there, so now it's necessary to link your leftish cause to global warming.  So it is no surprise to find out that advertising apparently causes global warming:

AUSTRALIAN television advertising is producing as much as 57 tonnes of carbon dioxide per hour, and thirty second ad breaks are among the worst offenders, according to audit figures from pitch consultants TrinityP3.

Carbon emissions are particularly strong during high-rating programs such as the final episodes of the Ten Network’s Biggest Loser, which produced 2135kgs per 30 second ad, So You Think You Can Dance at 2061kg for every 30 seconds, closely followed by the Seven News 6pm news at 1689kg and Border Security at 1802kg.

TrinityP3 managing director Darren Woolley said emissions are calculated by measuring a broadcasters’ power consumption and that of a consumer watching an ad on television in their home, B&T Magazine reports.

“We look at the number of households and the number of TVs, and then the proportion of TVs that are plasma, LCD or traditional, and calculate energy consumption based on those factors,” Woolley said.

TrinityP3 is formalising a standard carbon footprint measurement of advertising, which it claims will be the first of its kind.

“Most companies have been obliged to think through their strategies on reducing carbon emissions and they need to remember that their marketing strategies do have an environmental impact that needs to be included. This is not something that is easily able to be measured,” Mr Woolley said.

“Reality television is interesting as the more viewers and voters that tune in, the higher the carbon footprint. The more people vote, the more it adds to the CO2 in the atmosphere.

Note that, oddly, the 54 minutes an hour of regular programming is OK; it's only the 6 minutes of advertising that has a carbon footprint.  That's OK, though, because I am going to start turning off the TV during advertisements and go out and sit in my idling SUV and listen to my commercial-free satellite radio instead.

Some Day Climate May Be A Big-Boy Science

In big-boy science, people who run an experiment and arrive at meaningful findings will publish not only those findings but the data and methodology they used to reach them.  They do that because in most sciences a conclusion is not really considered robust until multiple independent parties have replicated it, and no one can replicate a finding without knowing exactly how it was reached.  Physicists don't run around talking about peer review as the be-all and end-all of scientific validation.  Instead of relying on peers to read over an article looking for mistakes, they go out and see if they can replicate the results.  It is expected that others in the profession will try to replicate, or even tear down, a controversial new finding.  Such a process is why we aren't all running around talking about the cold fusion "consensus" based on "peer-reviewed science."  It would simply be bizarre for someone in physics to argue that his findings were beyond question simply because they had been peer reviewed by a cherry-picked review group, and to refuse to publish his data or detailed methodology.

Some day climate science may be all grown up, but right now it's far from it.

1990: A Year Selected Very Carefully

Most of you will know that the Kyoto Treaty adopted CO2 reduction goals referenced to a base year of 1990.  But what you might not know is exactly how that year was selected.  Why would a treaty negotiated and signed in the latter half of the 1990s adopt 1990 as a base year, rather than, say, 1995 or 2000?  Or even 1980?

Closely linked to this question of base year selection for the treaty is a sort of cognitive dissonance that is occurring in reports about compliance of the signatories with the treaty.  Some seem to report substantial progress by European countries in reducing emissions, while others report that nearly everyone is going to miss the goals by a lot and that lately, the US has been doing better than signatory countries in terms of CO2 emissions.

To answer this, let's put ourselves back in about 1997, as the Kyoto Treaty was being hammered out.  Here is what the negotiators knew at that time:

  • Both Japan and Europe had been mired in a recession since about 1990, cutting economic growth and reducing emissions growth.  The US economy had been booming.  From 1990-1995, US average real GDP growth was 2.5%, while Japan and Europe were both around 1.4% per year (source xls). 
  • The Berlin Wall fell in 1989, and Germany began unifying with East Germany in 1990.  In 1990, all that old, polluting, inefficient Soviet/Communist-era industry was still running, pumping out incredible amounts of CO2 per unit produced.  By 1995, much of that industry had been shut down, though even to this day Germany continues to reap year-over-year efficiency improvements as it restructures old Soviet-era industry, transportation infrastructure, etc.
  • The UK in the late 1980s had embarked on a huge campaign to replace Midlands coal with natural gas from the North Sea.  From 1990-1995, for reasons having nothing to do with CO2, the British substituted a lot of lower-CO2 gas combustion for higher-CO2 coal combustion.

Remember, negotiators knew all of this in 1997.  All the above experience netted out to this CO2 data, which was in the negotiators' pockets at Kyoto (from here):

CO2 Emissions Changes, 1990-1995

  • EU: -2.2%
  • Former Communist countries: -26.1%
  • Germany: -10.7%
  • UK: -6.9%
  • Japan: +7.2%
  • US: +6.4%

In the above, the categories are not mutually exclusive.  Germany and UK are also in the EU numbers, and Germany is included in the former communist number as well.  Note that all numbers exclude offsets and credits.

As you can see, led by the collapse of the former communist economies and the shuttering of inefficient Soviet industries, in addition to the substitution of British gas for coal, the European negotiators knew they had tremendous CO2 reductions already in their pocket, IF 1990 was chosen as a base year.  They could begin Kyoto already looking like heroes, despite the fact that the reductions from 1990-1997 were almost all due to economic and political happenings unrelated to CO2 abatement programs.

Even signatory Japan was ticked off about the 1990 date, arguing that it benefited the European countries but was pegged years after Japan had made most of its improvements in energy efficiency:

Jun Arima, lead negotiator for Japan's energy ministry, said the 1990 baseline for CO2 cuts agreed at Kyoto was arranged for the convenience of the UK and Germany. ...

Mr Arima said: "The base year of 1990 was very advantageous to European countries. In the UK, you had already experienced the 'dash for gas' from coal - then in Germany they merged Eastern Germany where tremendous restructuring occurred.

"The bulk of CO2 reductions in the EU is attributable to reductions in UK and Germany."

His other complaint was that the 1990 baseline ruled inadmissible the huge gains in energy efficiency Japan had made in the 1980s in response the 1970s oil shocks.

"Japan achieved very high level of energy efficiency in the 1980s so that means the additional reduction from 1990 will mean tremendous extra cost for Japan compared with other countries that can easily achieve more energy efficiency."

So 1990 was chosen by the European negotiators as the best possible date for their countries to look good and, as an added bonus, as a very good date to try to make the US look bad.  That is why, whenever you see a press release from the EU about carbon dioxide abatement, you will see them trumpet their results since 1990.  Any other baseline year would make them look worse.

One might reasonably argue that anything that occurred before the signing of the treaty in 1997 is accidental or unrelated, and that it is more interesting to see what has happened since governments had explicit programs in place to reduce CO2.  This is what you will see:

Just let me remind you of some salutary statistics. Between 1997 and 2004, carbon dioxide emissions rose as follows:

Emissions worldwide increased 18.0%;

Emissions from countries that ratified the protocol increased 21.1%;

Emissions from non-ratifiers of the protocol increased 10.0%;

Emissions from the US (a non-ratifier) increased 6.6%;

A lot more CO2 data here.

Postscript:  One would expect that, absent changes in government regulations, the US has probably continued to do better than Europe on this metric over the last several years.  The reason is that increases in wholesale gas prices raise US retail gas prices by a higher percentage than they raise European retail prices.  This is because fixed-amount taxes make up a much higher portion of European gas prices than of American ones.  While it does not necessarily follow from this, it is not illogical to assume that recent increases in oil and gas prices have had a greater effect on US demand than on European demand, particularly since, with historically lower energy prices, the US has not made many of the low-hanging efficiency investments that have already been made in Europe.

Climate Re-Education Program

A reader sent me a heads-up to an article in the Bulletin of the American Meteorological Society ($, abstract here) titled "Climate Change Education and the Ecological Footprint".  The authors express concern that non-science students don't sufficiently understand global warming and its causes, and want to initiate a re-education program in schools to get people thinking the "right" way.

So, do climate scientists want to focus on better educating kids in details of the carbon cycle?  In the complexities in sorting out causes of warming between natural and man-made effects?  In difficulties with climate modeling?  In the huge role that feedback plays in climate forecasts?

Actually, no.  Interestingly, the curriculum advocated in the Journal of American Meteorology has very little to do with meteorology or climate science.  What they are advocating is a social engineering course structured around the concept of "ecological footprint."  The course, as far as I can tell, has more in common with this online kids game where kids find out what age they should be allowed to live to based on their ecological footprint.

Like the Planet Slayer game above, the approach seems to be built around a quiz (kind of slow and tedious to get through).  Like Planet Slayer, most of the questions are lifestyle questions - do you eat meat, do you buy food from more than 200 miles away, how big is your house, do you fly a lot, etc.  If you answer that yes, you have a good diet and a nice house and travel a bit and own a car, then you are indeed destroying the planet.

I could go nuts on a rant about propaganda in government monopoly schools, but I want to make a different point [feel free to insert rant of choice here].  The amazing thing to me is that none of this has the first thing to do with meteorology or climate science.  If there were any science at all in this ecological footprint stuff, it would have to be economics.  What does meteorology have to say about the carrying capacity of the earth?  Zero.  What does climate science have to say about the balance between the benefits of air travel and the cost of the incremental warming that might result from that air travel?  Zero. 

Take one example - food miles.  I live in Phoenix.  The cost to grow crops around here (since most of the agricultural water has to be brought in from hundreds of miles away) is high.  The cost is also high because even irrigated, the soil is not as productive for many crops as it is in, say, Iowa, so crops require more labor, more fertilizer, and more land for the same amount of yield.  I could make a really good argument that an ear of corn trucked in from Iowa probably uses fewer resources than an ear of corn grown within 200 miles of where I live.  Agree or disagree, this is a tricky economics question that requires fairly sophisticated analysis to answer.  How is teaching kids that "food grown within 200 miles helps save the planet" advancing the cause of climate science?  What does meteorology have to say about this question?

I am sorry I don't have more excerpts, but I am lazy and I have to retype them by hand.  But this is too priceless to miss:

Responding to the statement "Buying bottled water instead of drinking water from a faucet contributes to global warming" only 21% of all [San Jose State University] Meteorology 112 students answered correctly.  In the EF student group, this improved to a 53% correct response....  For the statement, "Eating a vegetarian diet can reduce global warming," the initial correct response by all Meteorology 112 students was 14%, while the EF group improved to 80%.

Oh my god, every time you drink bottled water you are adding 0.0000000000000000000000000001C to the world temperature.  How much global warming do I prevent if I paint flowers on my VW van?  We are teaching college meteorology students this kind of stuff?  The gulf between this and my freshman physics class is so wide, I can't even get my head around it.  This is a college science class?

In fact, the authors admit that their curriculum is an explicit rejection of science education, bringing the odd plea in a scientific journal that science students should be taught less science:

Critics of conventional environmental education propose that curriculum focused solely on science without personal and social connections may not be the most effective educational model for moving toward social change.

I think it is a pretty good sign that a particular branch of science has a problem when it is focused more on "social change" than on getting the science right, and when its leading journal focuses on education studies rather than science.

If I were a global warming believer, this program would piss me off.  Think about it.  Teaching kids this kind of stuff and then sending them out to argue with knowledgeable skeptics is like teaching a bunch of soldiers only karate and judo and then sending them into a modern firefight.  They are going to get slaughtered. 

Hockey Stick: RIP

I have posted many times on the numerous problems with the historic temperature reconstructions that were used in Mann's now-famous "hockey stick."   I don't have any problems with scientists trying to recreate history from fragmentary evidence, but I do have a problem when they overestimate the certainty of their findings or enter the analysis trying to reach a particular outcome.   Just as an archaeologist must admit there is only so much that can be inferred from a single Roman coin found in the dirt, we must accept the limit to how good trees are as thermometers.  The problem with tree rings (the primary source for Mann's hockey stick) is that they vary in width for any number of reasons, only one of which is temperature.

One of the issues scientists are facing with tree ring analyses is called "divergence."  Basically, when tree rings are measured, they have "data" in the form of rings and ring widths going back as much as 1000 years (if you pick the right tree!)  This data must be scaled -- a ring width variation of .02mm must be scaled in some way so that it translates to a temperature variation.  What scientists do is take the last few decades of tree rings, for which we have simultaneous surface temperature recordings, and scale the two data sets against each other.  Then they can use this scale when going backwards to convert ring widths to temperatures.
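The scaling step described above amounts to a simple linear calibration.  Here is a minimal sketch of the idea in Python -- the numbers are invented for illustration, and real reconstructions use far more elaborate statistical machinery (standardization, principal components, multiple proxies), but the core move is the same: fit ring width against measured temperature over the instrumental overlap, then apply that fit backwards in time.

```python
def fit_scaling(ring_widths, temps):
    """Ordinary least-squares fit of temperature against ring width
    over the calibration period: temp ~ a * width + b."""
    n = len(ring_widths)
    mean_w = sum(ring_widths) / n
    mean_t = sum(temps) / n
    cov = sum((w - mean_w) * (t - mean_t)
              for w, t in zip(ring_widths, temps))
    var = sum((w - mean_w) ** 2 for w in ring_widths)
    a = cov / var
    b = mean_t - a * mean_w
    return a, b

def reconstruct(ring_widths, a, b):
    """Apply the calibration-period scaling to the full pre-instrument
    ring-width record to produce a temperature reconstruction."""
    return [a * w + b for w in ring_widths]

# Invented example: five calibration years where width and measured
# temperature happen to line up perfectly.
a, b = fit_scaling([1.0, 1.2, 1.4, 1.6, 1.8],
                   [10.0, 10.5, 11.0, 11.5, 12.0])
```

The "divergence" problem, in these terms, is what happens when newly collected widths run through the same (a, b) no longer track the thermometers.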

But a funny thing happened on the way to the Nobel Prize ceremony.  It turns out that if you go back to the same trees 10 years later and gather updated samples, the ring widths, based on the scaling factors derived previously, do not match well with what we know current temperatures to be. 

The initial reaction from Mann and his peers was to try to save their analysis by arguing that there was some other modern anthropogenic effect that was throwing off the scaling for current temperatures (though no one could name what such an effect might be).  Upon further reflection, though, scientists are starting to wonder whether tree rings have much predictive power at all.  Even Keith Briffa, the man brought into the fourth IPCC to try to save the hockey stick after Mann was discredited, has recently expressed concerns:

There exists very large potential for over-calibration in multiple regressions and in spatial reconstructions, due to numerous chronology predictors (lag variables or networks of chronologies – even when using PC regression techniques). Frequently, the much vaunted ‘verification’ of tree-ring regression equations is of limited rigour, and tells us virtually nothing about the validity of long-timescale climate estimates or those that represent extrapolations beyond the range of calibrated variability.

Using smoothed data from multiple source regions, it is all too easy to calibrate large scale (NH) temperature trends, perhaps by chance alone.

But this is what really got me the other day.  Steve McIntyre (who else) has a post that analyzes each of the tree ring series in the latest Mann hockey stick.  Apparently, each series has a calibration period, where the scaling is set, and a verification period, an additional period for which we have measured temperature data to verify the scaling.  A couple of points were obvious as he stepped through each series:

  1. Each series individually has terrible predictive ability.  Each was able to be scaled, but each has so much noise that in many cases standard t-tests can't even be run, and when they are, confidence intervals are huge.  For example, the series NOAMER PC1 (the series McIntyre showed years ago dominates the hockey stick) predicts that the mean temperature value in the verification period should be between -1C and -16C.  For a mean temperature, this is an unbelievably wide range.  To give a sense of scale, that is a 27F range, roughly equivalent to the difference in average annual temperatures between Phoenix and Minneapolis!  A temperature forecast with error bars that could encompass both Phoenix and Minneapolis is not very useful.
  2. Even with the huge confidence intervals above, the series above does not verify!  (the verification value is -.19).  In fact, only one out of the numerous data series individually verifies, and even that one was manually fudged to make it work.
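For concreteness, one common verification score in this literature is the "reduction of error" (RE) statistic, which compares the reconstruction's errors over the verification period against a naive benchmark: just predicting the calibration-period mean every year.  A negative value -- like the -0.19 cited above -- means the reconstruction does worse than the naive benchmark.  A minimal sketch (exact formulations vary by author):

```python
def reduction_of_error(predicted, observed, calib_mean):
    """RE = 1 - SSE(reconstruction) / SSE(constant calibration mean),
    computed over the verification period.  RE = 1 is a perfect fit;
    RE <= 0 means the reconstruction adds nothing over the benchmark."""
    sse_model = sum((p - o) ** 2 for p, o in zip(predicted, observed))
    sse_bench = sum((calib_mean - o) ** 2 for o in observed)
    return 1.0 - sse_model / sse_bench

# A reconstruction that merely parrots the calibration mean scores 0;
# one that tracks the observations perfectly scores 1.
obs = [0.1, -0.2, 0.3, 0.0]
```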

Steve McIntyre is a very careful and fair person, so he allows that even if none of the series individually verify or have much predictive power, they might when combined.  I am not a statistician, so I will leave that to him to think about, but I know my response -- if all of the series are of low value individually, their value is not going to increase when combined.  They may accidentally, en masse, hit some verification value, but we should accept that as an accident, not as some sort of true signal emerging from the data. 

Nice Animation

This is a pretty cool animation of world weather patterns over the last 24 hours.  There are more here, at Anthony Watts' weather information company.

Why Does NASA Oppose Satellites? A Modest Proposal For A Better Data Set

One of the ironies of climate science is that perhaps the most prominent opponent of satellite measurement of global temperature is James Hansen, head of ... wait for it ... the Goddard Institute for Space Studies at NASA!  As odd as it may seem, while we have updated our technology for measuring atmospheric components like CO2, and have switched from surface measurement to satellites to monitor sea ice, Hansen and his crew at the space agency are fighting a rearguard action to defend surface temperature measurement against the intrusion of space technology.

For those new to the topic, the ability to measure global temperatures by satellite has only existed since about 1979, and is admittedly still being refined and made more accurate.  However, it has a number of substantial advantages over surface temperature measurement:

  • It is immune to biases related to the positioning of surface temperature stations, particularly the temperature creep over time for stations in growing urban areas.
  • It is relatively immune to the problems of discontinuities as surface temperature locations are moved.
  • It has much better geographic coverage, lacking the immense holes that exist in the surface temperature network.

Anthony Watts has done a fabulous job of documenting the issues with the surface temperature measurement network in the US, which one must remember is the best in the world.  Here is an example of the problems in the network.  Another problem, one that Mr. Hansen and his crew are particularly guilty of, is making a number of poorly documented adjustments in the laboratory to historical temperature data, adjustments that have the result of increasing apparent warming.  These adjustments, which imply that surface temperature measurements are net biased on the low side, make zero sense given the surveys and our intuition about urban heat biases.

What really got me thinking about this topic was this post by John Goetz the other day taking us step by step through the GISS methodology for "adjusting" historical temperature records  (By the way, this third-party verification of Mr. Hansen's methodology is only possible because pressure from folks like Steve McIntyre forced NASA to finally release their methodology for others to critique).  There is no good way to excerpt the post, except to say that when it's done, one is left with a strong sense that the net result is not really meaningful in any way.  Sure, each step in the process might have some sort of logic behind it, but the end result is such a mess that it's impossible to believe the resulting data have any relevance to any physical reality.  I argued the same thing here with this Tucson example.

Satellites do have disadvantages, though I think these are minor compared to their advantages  (Most skeptics believe Mr. Hansen prefers the surface temperature record because of, not in spite of, its biases, as it is believed Mr. Hansen wants to use a data set that shows the maximum possible warming signal.  This is also consistent with the fact that Mr. Hansen's historical adjustments tend to be opposite what most would intuit, adding to rather than offsetting urban biases).  Satellite disadvantages include:

  • They take readings of individual locations fewer times in a day than a surface temperature station might, but since most surface temperature records only use two temperatures a day (the high and low, which are averaged), this is mitigated somewhat.
  • They are less robust -- a single failure in a satellite can prevent measuring the entire globe, where a single point failure in the surface temperature network is nearly meaningless.
  • We have less history in using these records, so there may be problems we don't know about yet.
  • We only have history back to 1979, so it's not useful for very long-term trend analysis.
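On the first point above: most surface records reduce a whole day to (high + low) / 2, and since real daily temperature curves are not symmetric, that convention can differ from the true daily mean.  A quick synthetic illustration (all 24 hourly values below are invented):

```python
# 24 invented hourly temperatures (C) for one day: a fast morning
# warm-up and a slow evening cool-down, i.e. an asymmetric profile.
hourly = [10, 9, 9, 8, 8, 8, 9, 11, 14, 17, 19, 21,
          22, 23, 23, 22, 21, 19, 17, 15, 14, 13, 12, 11]

true_mean = sum(hourly) / len(hourly)          # mean of all readings
minmax_mean = (max(hourly) + min(hourly)) / 2  # the (high+low)/2 convention

# For this made-up day, the two-reading convention overstates the
# daily mean by roughly 0.7 C.
```

The satellite's fewer-but-regular samples and the surface network's two-a-day convention are thus both approximations, just different ones.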

This last point I want to address.  As I mentioned above, almost every climate variable we measure has a technological discontinuity in it.  Even temperature measurement has one between thermometers and more modern electronic sensors.  As an example, below is a NOAA chart on CO2 that shows such a data source splice:


I have zero influence in the climate field, but I would nevertheless propose that we begin to make the same data source splice with temperature.  It is as pointless to continue relying on surface temperature measurements as our primary metric of global warming as it is to rely on ship observations for sea ice extent. 

Here is the data set I have begun to use (Download crut3_uah_splice.xls).  It is a splice of the Hadley CRUT3 historic database with the UAH satellite database for historic temperature anomalies.  Because the two use different base periods to zero out their anomalies, I had to reset the UAH anomaly to match CRUT3.  I used the first 60 months of UAH data and set the UAH average anomaly for this period equal to the CRUT3 average for the same period.  This added exactly 0.1C to each UAH anomaly.  The result is shown below (click for larger view).


Below is the detail of the 60-month period where the two data sets were normalized and the splice occurs.  The normalization turned out to be a simple addition of 0.1C to the entire UAH anomaly data set.  By visual inspection, the splice looks pretty good.


One always needs to be careful when splicing two data sets together.  In fact, in the climate field I have warned of the problem of finding an inflection point in the data right at a data source splice.  But in this case, I think the splice is clean and reasonable, and consistent in philosophy to, say, the splice in historic CO2 data sources.
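The splice procedure described above is mechanically simple.  Here is a sketch in Python, using made-up monthly values rather than the actual CRUT3 and UAH files -- the only real steps are computing the offset over the first 60 months of overlap and then switching series at the satellite start date:

```python
def splice(surface, satellite, overlap=60):
    """Offset the satellite anomalies so their mean over the first
    `overlap` months equals the surface-record mean over those same
    months, then switch from the surface series to the adjusted
    satellite series at the satellite start date."""
    # Both inputs are lists of (yyyymm, anomaly) pairs in month order,
    # and the satellite record is assumed to start inside the surface
    # record.
    sat_start = satellite[0][0]
    surf_overlap = [a for m, a in surface if m >= sat_start][:overlap]
    sat_overlap = [a for m, a in satellite[:overlap]]
    offset = (sum(surf_overlap) - sum(sat_overlap)) / overlap
    spliced = [(m, a) for m, a in surface if m < sat_start]
    spliced += [(m, a + offset) for m, a in satellite]
    return spliced, offset

# Toy example: the satellite series runs a constant 0.1 C below the
# surface series over the overlap, so the computed offset is +0.1 --
# mirroring the 0.1C adjustment described in the text.
surface = [(m, 0.2) for m in range(1, 121)]
satellite = [(m, 0.1) for m in range(61, 181)]
spliced, offset = splice(surface, satellite)
```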

A Reminder

As we know, alarmists have adopted the term "climate change" over "global warming," in large part because, since the climate is always changing for all manner of reasons, one can always find, well, climate change.   This allows alarmists in the media to point to any bit of weather in the tails of the normal distribution and blame those events on man-made climate change.

But here is a reminder for those who may be uncomfortable with their own grasp of climate science (don't feel bad, the media goes out of its way not to explain things very well).  There is no mechanism that has been proven, or even credibly identified, for increasing levels of CO2 in the atmosphere to "change the climate" or cause extreme weather without first causing warming.  In other words, the only possible causality is CO2 --> warming --> changing weather patterns.  If we don't see the warming, we don't see the changing weather patterns. 

I feel the need to say this, because alarmists (including Gore) have adopted the tactic of saying that climate change is accelerating, or that they see the signs of accelerating climate change everywhere.  But for the last 10 years, we have not seen any warming.

So if climate change is in fact somehow "accelerating," then it cannot possibly be due to CO2.  I believe that they are trying to create the impression that somehow CO2 is directly causing extreme weather, which it does not, under any mechanism anyone has ever suggested.   

Antarctic Sea Ice

I have written a number of times that alarmists like Al Gore focus their cameras and attention on small portions of the Antarctic Peninsula where sea ice has been shrinking  (actually, it turns out Al Gore did not use actual camera footage but special effects footage from the disaster movie The Day After Tomorrow).  I have argued that this is disingenuous, because the Antarctic Peninsula is not representative of climate trends in the rest of Antarctica, much less a good representative of climate trends across the whole globe.  This map reinforces my point, showing in red where sea ice has increased, and in blue where it has decreased  (this is a little counter-intuitive, as we expect anomaly maps to show red as hotter and blue as colder).


The Cost of the Insurance Policy Matters

Supporters of the precautionary principle argue that even if it is uncertain that we will face a global warming catastrophe from producing CO2, we should insure against it by abating CO2 just in case.  "You buy insurance on your house, don't you," they often ask.  Sure, I answer, except when the cost of the insurance is more than the cost of the house.

In a speech yesterday here in Washington, Al Gore challenged the United States to "produce every kilowatt of electricity through wind, sun, and other Earth-friendly energy sources within 10 years. This goal is achievable, affordable, and transformative." (Well, the goal is at least one of those things.) Gore compared the zero-carbon effort to the Apollo program. And the comparison would be economically apt if, rather than putting a man on the moon—which costs about $100 billion in today's dollars—President Kennedy's goal had been to build a massive lunar colony, complete with a casino where the Rat Pack could perform.

Gore's fantastic—in the truest sense of the word—proposal is almost unfathomably pricey and makes sense only if you think that not doing so almost immediately would result in an uninhabitable planet. ...

This isn't the first time Gore has made a proposal with jaw-dropping economic consequences. Environmental economist William Nordhaus ran the numbers on Gore's idea to reduce carbon emissions by 90 percent by 2050. Nordhaus found that while such a plan would indeed reduce the maximum increase in global temperatures to between 1.3 and 1.6 degrees Celsius, it did so "at very high cost" of between $17 trillion and $22 trillion over the long term, as opposed to doing nothing. (Again, just for comparative purposes, the entire global economy is about $50 trillion.)

I think everyone's numbers are low, because they don't include the cost of storage (technology unknown) or alternative capacity when it is a) dark and/or b) not windy.

A while back I took on Gore's suggestion that all of America's electricity needs could be met with current solar technology on a 90 mile x 90 mile tract of solar.  Forgetting the fact that Al's environmental friends would never allow us to cover 8100 square miles of the desert in silicon, I got a total installation cost of $21 trillion.  And that did not include the electrical distribution systems necessary for the whole country to take power from this one spot, nor any kind of storage technology for using electricity at night  (it was hard to cost one out when no technology exists for storing America's total energy needs for 12 hours).  Suffice it to say that a full solution with storage and distribution would easily cost north of $30 trillion.

This Too Shall Pass (By Popular Demand)

In perhaps the largest batch of email I have ever gotten on one subject, readers are demanding more coverage of the effect of trace atmospheric gasses on kidney function.  So here you go:

In early July, when a former government employee accused Dick Cheney's office of deleting from congressional testimony key statements about the impact of climate change on public health, White House staff countered that the science just wasn't strong enough to include. Not two weeks later, however, things already look different. University of Texas researchers have laid out some of the most compelling science to date linking climate change with adverse public-health effects: scientists predict a steady rise in the U.S. incidence of kidney stones — a medical condition largely brought on by dehydration — as the planet continues to warm.

I am certainly ready to believe that this is "the most compelling science to date" vis a vis the negative effects of global warming, though I thought perhaps the study about global warming increasing acne was right up there as well.

Here are 48,900 other things that "global warming will cause."  More from Lubos Motl.  And here is the big list of global warming catastrophe claims.

Update:  I am not sure I would have even bothered, but Ryan M actually dives into the "science" of the kidney stone finding.

Working on New Videos

Sorry posting has been light, but I am working on starting a new series of videos.  At some point I want to update the old ones, but right now I want to experiment with some new approaches -- the old ones are pretty good, but are basically just powerpoint slides with some narration.  If you have not seen the previous videos, you may find them as follows:

  • The 6-part, one hour version is here
  • The 10-minute version, which is probably the best balance of time vs. material covered, is here.
  • The short 3-minute version I created for a contest (I won 2nd place) is here.

Combined, they have over 40,000 views.

Another Dim Bulb Leading Global Warming Efforts

Rep. Edward Markey (D-Mass.) is chairman of the House (Select) Energy Independence and Global Warming Committee.  He sure seems to know his stuff, huh:

A top Democrat told high school students gathered at the U.S. Capitol Thursday that climate change caused Hurricane Katrina and the conflict in Darfur, which led to the “black hawk down” battle between U.S. troops and Somali rebels....

“In Somalia back in 1993, climate change, according to 11 three- and four-star generals, resulted in a drought which led to famine,” said Markey.

“That famine translated to international aid we sent in to Somalia, which then led to the U.S. having to send in forces to separate all the groups that were fighting over the aid, which led to Black Hawk Down. There was this scene where we have all of our American troops under fire because they have been put into the middle of this terrible situation,” he added.


Yes, It's Another Antarctic Ice Post

From a reader, comes yet another article claiming micro-climate variations on the Antarctic Peninsula are indicative of global warming.

New evidence has emerged that a large plate of floating ice shelf attached to Antarctica is breaking up, in a troubling sign of global warming, the European Space Agency (ESA) said on Thursday.

Images taken by its Envisat remote-sensing satellite show that Wilkins Ice Shelf is "hanging by its last thread" to Charcot Island, one of the plate's key anchors to the Antarctic peninsula, ESA said in a press release.

"Since the connection to the island... helps stabilise the ice shelf, it is likely the breakup of the bridge will put the remainder of the ice shelf at risk," it said.

Wilkins Ice Shelf had been stable for most of the last century, covering around 16,000 square kilometres (6,000 square miles), or about the size of Northern Ireland, before it began to retreat in the 1990s.

No, no, no.  The Antarctic Peninsula's climate is not indicative of the rest of Antarctica or the rest of the Southern Hemisphere, much less of the globe.  Here, one more time, is the missing context:

    1. The Antarctic Peninsula is a very small area that has very clearly been warming substantially over the last few decades, but it represents only 2% of Antarctica.

    2. The rest of Antarctica has seen flat and even declining temperatures, as has the entire southern hemisphere.  In fact, the Antarctic Peninsula is a very small area that is anomalous within the entire Southern Hemisphere, which makes it incredible that it so often is used as indicative of a global climate trend.


    3. Antarctic sea ice extent is actually at the highest levels observed since we started watching it via satellite around 1979.  Ice may be shrinking around the Peninsula, but it is growing on net over the whole continent.
    4. We have no clue how ice shelves behave over time spans longer than the 100 years we have watched them.  It may well be they go through long-term natural growth and collapse cycles.

Much more here.

In Search of Honesty

Both major presidential candidates have endorsed CO2 abatement targets for the US, with Obama advocating the most stringent -- the "80 by 50" target, by which the US would reduce CO2 emissions by 80% over the next 40 years.

Given that they support such targets, the candidates' public positions on gasoline prices should be something like this:

Yeah, I know that $4 gas is painful.  But do you know what?  Gas prices are going to have to go a LOT higher for us to achieve the CO2 abatement targets I am proposing, so suck it up.  Just to give you a sense of scale, the Europeans pay nearly twice as much as we do for gas, and even at those levels, they are orders of magnitude short of the CO2 abatement I have committed us to achieve.  Since late 2006, gas prices in this country have doubled, and demand has fallen by perhaps 5%.  That will probably improve over time as people buy new cars and change behaviors, but it may well require gasoline prices north of $20 a gallon before we meet the CO2 goal I have adopted.  So get ready.
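To put rough numbers on that hypothetical speech: the "prices doubled, demand fell about 5%" figures imply a very small short-run price elasticity, and under a constant-elasticity demand curve (a simplifying assumption -- as the speech itself notes, elasticity improves over time as people buy new cars and change behaviors) even modest demand cuts require enormous price increases:

```python
import math

# Implied short-run elasticity from the figures in the text:
# Q2/Q1 = 0.95 when P2/P1 = 2, with Q2/Q1 = (P2/P1) ** e.
elasticity = math.log(0.95) / math.log(2.0)   # roughly -0.074

def price_multiple_for_cut(demand_ratio, e):
    """Price multiple needed to shrink demand to `demand_ratio`
    of its current level, holding elasticity constant."""
    return demand_ratio ** (1.0 / e)

# At this elasticity, even a 20% cut in gasoline demand implies
# roughly a 20x price increase -- $80+ gas from a $4 starting point.
mult = price_multiple_for_cut(0.80, elasticity)
```

The point is not the exact multiple, which depends entirely on the constant-elasticity assumption, but the order of magnitude: the gap between today's prices and the prices consistent with an 80% abatement target is enormous.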

You have heard Obama and McCain say this?  Yeah, neither have I.  At least Obama was consistent enough not to adopt McCain's gas tax holiday idea.  But it's time for some honesty here, not that I really expect it. 

We need to start being a lot clearer about the real costs of CO2 abatement and stop this mindless "precautionary principle" jargon that presupposes that there are no costs to CO2 abatement.  When proponents of the precautionary principle say "Well, CO2 abatement is like insurance -- you buy insurance on your house, don't you," I answer, "Not if the insurance costs more than the cost to replace the house, I don't."

Climate Blogs That Don't Necessarily Accept "The Consensus"

Via Tom Nelson and Climate Debate Daily

William M. Briggs
Climate Audit
Climate Change Facts
Climate Change Fraud
Climate Police
Climate Resistance
Climate Scam
Climate Science
CO2 Science
Friends of Science
Global Climate Scam
Global Warming Heretic
Global Warming Hoax
Global Warming Skeptic
Greenie Watch
Bruce Hall
Warwick Hughes
Lucia Liljegren
Jennifer Marohasy
Warren Meyer
Maurizio Morabito
Luboš Motl
Tom Nelson
Newsbusters climate
Planet Gore
Roger Pielke Sr.
Fred Singer
David Stockwell
Philip Stott
Anthony Watts
World Climate Report

Map of Pain Created by CO2 Abatement Efforts

Government treaties and legislation will of necessity increase the cost of energy substantially.  It will also indirectly increase the cost of food and other staples, as fertilizer, equipment, and transportation costs rise.  This is not to mention the substantial rise in food costs that will continue as long as governments continue their misguided efforts to promote and subsidize food-based ethanol as a global warming solution. 

I found the map below in another context at economist Mark Perry's site.  It shows the percentage of the average person's income that is spent on food, fuel, and drink, with low percentages in green and high percentages in red.  However, this could easily be a map of the pain created by CO2 abatement efforts, with the most pain felt in red and the least in green.  In fact, this map actually underestimates the pain in the yellow-to-red areas, as it does not factor in the lost development potential, and thus lost future income, from CO2 abatement efforts.


Update on food prices:

Biofuels have caused a 75 per cent increase in world food prices, a new report suggests.

The rise is far greater than previous estimates including a US Government claim that plant-derived fuels contribute less than three per cent to food price hikes.

According to reports last night, a confidential World Bank document indicates the true extent of the effect of biofuels on prices at a crucial time in the world's negotiations on biofuel policy.

Rising food prices have been blamed for pushing 100 million people beneath the poverty line. The confidential report, based on a detailed economic analysis of the effect of biofuels, will put pressure on the American and European governments, which have turned to biofuels in attempts to reduce the greenhouse gases associated with fossil fuels and to reduce their reliance on oil imports.

The report says: "Without the increase in biofuels, global wheat and maize stocks would not have declined appreciably and price increases due to other factors would have been moderate."

Extrapolating From One Data Point

Years ago, when I was studying engineering in college, I had a professor who used to "joke"  (remember, these are engineers, so the bar for the word "joke" is really low) that when he wanted to prove something, it was a real benefit to have only one data point.  That way, he said, you could plot a trend in any direction with any slope you wanted through the point.  Once you had two or three or more data points, your flexibility was ruined.

I am reminded of this in many global warming articles in the press today.  Here is one that caught my eye today on Tom Nelson's blog.  There is nothing unusual about it, it just is the last one I saw:

Byers said he has decided to run because he wants to be able to look at his children in 20 or 30 years and be able to say that he took action to try to address important challenges facing humanity. He cited climate change as a “huge” concern, noting that this was driven home during a trip he took to the Arctic three weeks ago.

“The thing that was most striking was how the speed of climate change is accelerating—how it’s much worse than anyone really wants to believe,” Byers said. “To give you a sense of this, we flew over Cumberland Sound, which is a very large bay on the east coast of Baffin Island. This was three weeks ago; there was no ice.”

Do you see the single data point?  Cumberland Sound, three weeks ago, had no ice.  Incredibly, from this single data point, he not only comes up with a first derivative (the world is warming) but actually extracts the second derivative as well (the change is accelerating).  Wow!

We see this in other forms all the time:

  • We had a lot of flooding in the Midwest this year
  • There were a lot of tornadoes this year
  • Hurricane Katrina was really bad
  • The Northwest Passage was navigable last year
  • An ice shelf collapsed in Antarctica
  • We set a record high today in such-and-such city

I often criticize such claims for their lack of any proof of causality  (for example, linking this year's floods and tornadoes to global warming when it is a cooler year than most of the last 20 seems a real stretch). 

But all of these stories share another common problem - they typically are used by the writer to make a statement about the pace and direction of change (and even the acceleration of this change), something that is absolutely scientifically impossible to do from a single data point.  As it turns out, we often have flooding in the Midwest.  Neither tornadoes nor hurricanes have shown any increasing trend over the past decades.  The Northwest Passage has been navigable a number of years in the last century.  During the time of the ice shelf collapse panic, Antarctica was actually setting 30-year record highs for sea ice extent.  And, by simple math, every city on average should set a new 100-year high temperature record every 100 days, and this is even before considering the urban heat island effect's upward bias on city temperature measurement.
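The record-every-100-days arithmetic is easy to check: if each calendar day's temperature is an independent draw from a stable climate, the chance that this year's reading beats all 100 prior years is 1/101, so a city with 365 calendar days expects about 3.6 new daily highs per year, or one roughly every 100 days.  A quick sketch, under that i.i.d. assumption (no trend, no urban heat island bias):

```python
import random

# With 100 prior years of i.i.d. daily temperatures, the chance that
# this year's value for a given calendar day beats them all is 1/101.
years, days = 100, 365
records_per_year = days / (years + 1)
print(records_per_year)            # ~3.6 new daily record highs per year
print(days / records_per_year)     # ~101 days between records, on average

# Monte Carlo check of the 1/101 figure:
random.seed(0)
trials = hits = 0
for _ in range(20000):
    trials += 1
    history = [random.random() for _ in range(years)]
    if random.random() > max(history):   # this year beats every prior year
        hits += 1
print(hits / trials)   # should land near 1/101, about 0.0099
```

In other words, a scattering of "record high today in such-and-such city" stories is exactly what a perfectly stable climate would produce.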

Postscript:  Gee, I really hate to add a second data point to the discussion, but from Cryosphere Today, here is a comparison of the Arctic sea ice extent today and exactly 20 years ago (click for a larger view).


The arrow points to Cumberland Sound.  I will not dispute Mr. Byers's personal observations, except to say that whatever condition it is in today, there seems to have been even less ice there 20 years ago.

To be fair, sea ice extent in the Arctic is down about a million square kilometers today vs. where it was decades ago (though I struggle to see it in these maps), while the Antarctic is up about a million, so the net world anomaly is about zero right now. 

The Date You Should Die

A while back I wrote about a disgusting little online game sponsored by the Australian government via the ABC.  It appears that this game is being promoted in the public schools as well:

Professor Schpinkee's “date one should die” exercise is meant to be a “fun” experience for primary students of public schools associated with the Australian Sustainability Schools Initiative. According to a 2007 Schools Environment newsletter, written by the government sustainability officer in New South Wales and sent to schools in this program, teachers are encouraged to lead children to the Australian Broadcasting Corporation's Planet Slayer website and use Professor Schpinkee's Greenhouse Calculator. The newsletter refers to the calculator as a “great game for kids.”

My original post has screenshots and more description.  Via Tom Nelson

Another Assessment of Hansen's Predictions

The Blackboard has done a bit more work to produce a better assessment of Hansen's forecast to Congress 20 years ago on global warming than I did in this quick and dirty post here.  To give Hansen every possible chance, the author has evaluated Hansen's forecast against Hansen's preferred data sets, the surface temperature measurements of the Hadley Center and his own GISS.  (I will leave to other posts the irony of a scientist at the Goddard Institute of Space Studies at NASA preferring clunky surface temperature measurements over satellite measurements; suffice it to say the surface measurements are biased upwards, and so give Hansen a better shot at being correct with his catastrophic warming forecasts.)

Here is the result of their analysis:


All three forecasts are high. 

Don't be too encouraged by Hansen's predictive power when you observe that the yellow line is not too far off.  The yellow line represented a case with a radical effort to reduce CO2, something we have not seen.  Note that these are not different cases for different climate sensitivities to CO2 -- my reading of Hansen's notes is that they all use the same sensitivity, just with different CO2 production forecasts as inputs.  In fact, based on our actual CO2 output over the last 20 years, we should use a case between the orange and the red lines to evaluate Hansen's predictive ability.

Great Moments In Alarmism

Apparently a number of papers are "commemorating" today the 20th anniversary of James Hansen's speech before Congress warning of catastrophic man-made global warming.  So let's indeed commemorate it.  Here is the chart from the appendices of Hansen's speech showing his predictions for man-made global warming:


I have helpfully added in red the actual temperature history, as measured by satellite, over the last 20 years (scale-shifted to match the base anomaly in Hansen's graph).  Yes, 2008 has been far colder than 1988.  We have seen no warming trend in the last 10 years, and temperatures have undershot every one of Hansen's forecasts.  He thought the world would be a degree C warmer in 20 years, and it is not.  Of course, today, he says the world will warm a degree in the next 20 years -- the apocalypse never goes away, it just recedes into the future.
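Scale-shifting two anomaly series onto a common baseline, as described above, just means subtracting each series' mean over a shared reference window so both read zero at the same point.  A minimal sketch, with made-up numbers (these are not the actual GISS or satellite values):

```python
# Align two temperature-anomaly series that were published against
# different base periods by re-zeroing both on a common reference window.
# All numbers below are invented for illustration.

def rebaseline(series, ref_start, ref_end):
    """Shift an anomaly series so its mean over [ref_start, ref_end] is zero."""
    ref = [v for yr, v in series.items() if ref_start <= yr <= ref_end]
    offset = sum(ref) / len(ref)
    return {yr: v - offset for yr, v in series.items()}

satellite = {1988: 0.10, 1998: 0.45, 2008: -0.05}   # hypothetical anomalies
forecast  = {1988: 0.30, 1998: 0.65, 2008: 0.15}    # hypothetical, other base

sat_aligned = rebaseline(satellite, 1988, 1988)
fc_aligned  = rebaseline(forecast, 1988, 1988)
# Both series now read 0.0 in 1988, so later years compare like for like.
```

Without this step, a chart overlay silently rewards or penalizes a forecast by however far apart the two base periods happen to sit.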

This may explain why Hansen's GISS surface temperature measurements are so much higher than everyone else's, and keep getting artificially adjusted upwards:  Hansen put himself way out on a limb, and now is using the resources of the GISS to try to create warming in the metrics where none exist to validate his forecasts of Apocalypse. 

By the way, if you want more insight into the "science" led by James Hansen, check out this post from Steve McIntyre on his trying to independently reproduce the GISS temperature aggregation methodology. 

Here are some more notes and scripts in which I’ve made considerable progress on GISS Step 2. As noted on many occasions, the code is a demented mess - you’d never know that NASA actually has software policies (e.g. here or here). I guess that Hansen and associates regard themselves as being above the law. At this point, I haven’t even begun to approach analysis of whether the code accomplishes its underlying objective. There are innumerable decoding issues - John Goetz, an experienced programmer, compared it to descending into the hell described in a Stephen King novel. I compared it to the meaningless toy in the PPM children’s song - it goes zip when it moves, bop when it stops and whirr when it’s standing still. The endless machinations with binary files may have been necessary with Commodore 64s, but are totally pointless in 2008.

Because of the hapless programming, it takes a long time and considerable patience to figure out what happens when you press any particular button. The frustrating thing is that none of the operations are particularly complicated.

Hansen, despite being paid by US Taxpayers and despite all regulations on government science, refused for years to even release this code for inspection by outsiders and to this day resists helping anyone trying to reproduce his mysterious methodologies.

Which in some ways is all irrelevant anyway, since surface temperature measurement is flawed for so many reasons (location biases, urban heat islands, historical discontinuities, incomplete coverage) that satellite temperature measurement makes far more sense, which is why I used it above.  Of course, there is one person who fights hard against use of this satellite methodology.  Ironically, the person fighting the use of space technology is ... James Hansen, of the Goddard Institute for Space Studies at NASA!  In our next episode, the head of the FCC will be actively fighting for using the telegraph over radio and TV.

Today's Exercise

Using this chart from the NOAA:


Explain how midwestern flooding in 2008 is due to global warming.  For those who wish to argue that global temperatures, not just US temperatures, matter because the world is one big interrelated climate system, you may use this chart of global temperatures instead in your explanation:


For extra credit, also blame the 2008 spike in tornadoes on global warming.  Thanks to Anthony Watts for the charts.

Savonarola Apparently Working for NASA

In 1497, Savonarola tried to end the Italian Renaissance in a massive pyre of books and artwork (the Bonfire of the Vanities).  The Renaissance was about inquiry and optimism, neither of which had much appeal to  Savonarola, who thought he had all the answers he needed in his apocalyptic vision of man.  For him, how the world worked, and particularly the coming apocalypse, was "settled science" and any questioning of his world view was not only superfluous, it was evil.

Fortunately, while the Enlightenment was perhaps delayed (as much by the French King and the Holy Roman Emperor as by Savonarola), man's questing nature was not to be denied.

But now, the spirit of Savonarola has returned, in the guise of James Hansen, a man who incredibly calls himself a scientist.  Mr. Hansen has decided that he is the secular Savonarola, complete with apocalyptic predictions and a righteousness that allows no dissent:

“James Hansen, one of the world’s leading climate scientists, will today call for the chief executives of large fossil fuel companies to be put on trial for high crimes against humanity and nature, accusing them of actively spreading doubt about global warming in the same way that tobacco companies blurred the links between smoking and cancer.

Hansen will use the symbolically charged 20th anniversary of his groundbreaking speech to the US Congress - in which he was among the first to sound the alarm over the reality of global warming - to argue that radical steps need to be taken immediately if the “perfect storm” of irreversible climate change is not to become inevitable.

Speaking before Congress again, he will accuse the chief executive officers of companies such as ExxonMobil and Peabody Energy of being fully aware of the disinformation about climate change they are spreading.”

It will be interesting to see if any champions of free speech on the left can work up the energy to criticize Hansen here.  What we have is a government official threatening prosecution and jail time for Americans who exercise their free speech rights.  GWB, rightly, would never get a pass on this.  Why does Hansen?

The Power of Government Schools

What does a good government technocrat do when the public does not support his expensive vision?  Why, he uses the power of the government education monopoly to do a little indoctrination.  This is the summary of the climate education bill as proposed by Barack Obama:

Climate Change Education Act - Requires the Director of the National Science Foundation to establish a Climate Change Education Program to: (1) broaden the understanding of climate change, possible long and short-term consequences, and potential solutions; (2) apply the latest scientific and technological discoveries to provide learning opportunities to people; and (3) emphasize actionable information to help people understand and to promote implementation of new technologies, programs, and incentives related to energy conservation, renewable energy, and greenhouse gas reduction.
Requires such Program to include: (1) a national information campaign to disseminate information on and promote implementation of the new technologies, programs, and incentives; and (2) a competitive grant program to provide grants to states, municipalities, educational institutions, and other organizations to create materials relevant to climate change and climate science, develop climate science kindergarten through grade 12 curriculum and supplementary educational materials, or publish climate change and climate science information.

This helps to explain why Obama opposes school choice -- because he sees the government schools not just as an education establishment, but as a re-education tool.

CBS Walks Away From Story Claiming Global Warming is Increasing Earthquakes

That fabled multiple-levels-of-editorial-review is at work again at CBS, this time with the story headlined "Seismic Activity 5 Times More Energetic Than 20 Years Ago Because Of Global Warming."  Now, only a journalism major who had assiduously avoided taking any science and math classes in his/her life could possibly have found this reasonable.  Half-degree changes in atmospheric temperatures are hardly likely to affect seismic activity (the subject of the article, Dr. Tom Chalko, has also written that global warming might make the Earth explode).  Had CBS actually approached any other scientist in the world, in any specialization, for some kind of comment on the article, they would likely have been told that it made no sense.  But, of course, MSM editorial policy is not to ask for dissenting views in global warming alarmism articles.

Well, it appears CBS has walked away from the story without comment.  Anthony Watts has the whole story, including screen caps of the original article. 

This is Just Pathetic

I could probably fill this blog with examples of fact-challenged alarmism, but this one is so easy to debunk it is just staggering.  I am going to make the dangerous assumption that the WWF is not just outright lying.  If that is true, this is a great example of how popular perception and hysteria substitute for facts and observations.  The WWF is just so convinced this is going on that no one even bothers to check to see if it is true.  First the story, from here, via Tom Nelson:

According to a recent report, endangered migratory whales will have reduced feeding areas due to the shrinkage of Antarctic sea ice from global warming.

The Worldwide Fund for Nature (WWF) said this could threaten the species. The report, “Ice Breaker – Pushing the boundaries for Whales” says whales will soon have to travel up to 310 miles further south in search of food because the ice will retreat up to 30 percent in some areas.

The study also says the whales’ food supply will be further reduced because of the balance between cold sea ice and warmer sea water which causes an up swelling of nutrients that could further contract.

WWF officer Heather Sohl said, "Essentially, what we are seeing is that ice-associated whales such as the Antarctic minke whale will face dramatic changes to their habitat over little more than the lifespan of an individual whale."

OK, two problems with this.  First, even the IPCC predicts Antarctic sea ice to grow, not shrink, even under a strong global warming case.  Note that the Antarctic's contribution is below zero, actually producing a sea level drop and mitigating Greenland's melting.

And, there is that problem of reality intruding, because in fact Antarctica has been hitting 30-year highs for sea ice extent over the past year:


I will leave it to y'all in the comments to decide if they are outright lying or if they are just ignorant.

We are so Confident of our Position that We Refuse to Tolerate Debate

Via Tom Nelson, this guy is certainly a fine example of enlightened scientific discourse:

Climate "skepticism" is not a morally defensible position. The debate is over, and it's been over for quite some time, especially on this blog.

We will delete comments which deny the absolutely overwhelming scientific consensus on climate change, just as we would delete comments which questioned the reality of the Holocaust or the equal mental capacities and worth of human beings of different ethnic groups. Such "debates" are merely the morally indefensible trying to cover itself in the cloth of intellectual tolerance.

Wow.  It is amazing that the discussion of how trace atmospheric gases might affect global temperature, and whether the climatic reaction to this is one of positive or negative feedback, has become a moral rather than a scientific question. 

Though this may be obvious to readers, it's worth repeating once in a while the chain of reasoning that must all hold true for dramatic government action to be justified in reducing CO2.  That chain is roughly as follows:

  1. Can the presence of CO2 be shown in a lab to increase absorption of incoming radiation?
  2. If so, can trace amounts (370ppm) of CO2 in the Earth's atmosphere be enough to absorb meaningful amounts of radiation and if so, how much?
  3. If CO2 in the atmosphere tends to provide a heating effect, do feedback effects (e.g. water vapor) tend to amplify (positive feedback) or damp (negative feedback) the resulting temperature change?
  4. What would the effects of the temperature changes be, both negative AND positive?  Undoubtedly some things would be worse, while others, like longer growing seasons, would be better.
  5. How are other natural effects, such as the sun, changing the climate and global temperatures, and how large are these effects compared to man's?
  6. If the effects in #4 are net negative, and they are large enough even to be recognizable against the backdrop of the natural variations in #5, do they outweigh the substantial costs -- in terms of increased poverty, slowed development, lost wealth, etc. -- of substantial CO2 abatement?

The answer to #1 is yes, it is settled science. 

The answer to #2 is probably yes, though the amount is in some doubt, but everyone (even the IPCC) agrees it is probably less than a degree per century. 

Most of the warming in forecasts (2/3 or more in the IPCC cases) comes from positive feedback in #3, but we really know nothing here, except that most systems are driven by negative feedback.  In other words, this is so unsettled we don't even know the sign of the effect.  (Video here)
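The feedback question in #3 can be made concrete with the standard gain relation: if the no-feedback warming is ΔT₀ and the net feedback fraction is f, the resulting warming is ΔT = ΔT₀ / (1 − f).  A small sketch (the 1 degree no-feedback figure echoes #2 above; the f values are purely illustrative):

```python
def warming_with_feedback(delta_t0, f):
    """Equilibrium warming given no-feedback warming delta_t0 (deg C)
    and net feedback fraction f (f < 1; positive amplifies, negative damps)."""
    if f >= 1:
        raise ValueError("f >= 1 implies a runaway system")
    return delta_t0 / (1 - f)

base = 1.0   # deg C per CO2 doubling with no feedback -- roughly the figure above
for f in (-0.5, 0.0, 0.5, 0.75):   # illustrative feedback fractions only
    print(f, round(warming_with_feedback(base, f), 2))
# f = 0.75 quadruples the base warming; f = -0.5 damps it to about 0.67 deg C.
```

This is why the sign and size of f matter so much more than the settled science in #1 and #2: the same 1 degree of base warming becomes anything from two-thirds of a degree to several degrees depending on a number we do not know.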

#4 is the focus of a lot of really, really bad science.  The funding mechanism at universities has forced many people to try to come up with a global warming angle for their area of interest, so it causes a lot of people to posit bad things without much proof.  If you want to study grape growing in Monterey County, you are much more likely to get funded if you say you want to study "the negative effects of global warming on grape growing in Monterey County."  Serious science is starting to debunk many of the most catastrophic claims, and history tells us that the world has thrived in periods of warmer climates.  Even the IPCC, for example, projects only minimal sea level rise over the next century as increases in Antarctic ice offset melting in Greenland.  (more here)

We are beginning to understand that natural variability is pretty high in #5.  Alarmists might be called "sun variability deniers," as they refuse to admit that Mr. Sun might have substantial effects on the Earth.  They are kind of in a hole, though:  they are trying to simultaneously claim in #3 that the climate is dominated by positive feedback, while at the same time claiming in #5 that the climate without man is really, really stable.  These two in tandem make no sense. 

And in #6, nobody knows the answer, but a few serious looks at the problem have shown that aggressive CO2 abatement programs could have catastrophic effects on world poverty.  Which is ironic, since the best correlation with severe weather death rates in the world is not CO2 level but wealth and poverty reduction.  No matter how many storms there are, as poverty has declined in a given region, so have severe weather deaths, even while CO2 has been increasing.  So one could easily argue that CO2 abatement programs will increase rather than decrease severe weather deaths.

So this is the trick people like this blogger use.  They point to good science in #1 and partially in #2 to claim the whole chain of reasoning is "settled science," when in fact there are gaping holes in our knowledge of 3-4-5-6.

As a note, I have never deleted a comment on this site (except for obvious spam), despite many that disagree strongly with my position.

Unusual Climatic Stability

Newsbusters found this in a 1993 NY Times article:

The scientists said their data showed that significantly warmer periods and significantly colder periods had occurred during the last interval between glacial epochs, about 115,000 to 135,000 years ago. They said they could not tell whether that meant similar changes were in store. Their findings were reported today in two papers in the journal Nature. [...]

The new studies found that the average global temperature can change as much as 18 degrees Fahrenheit in a couple of decades during interglacial periods, [Dr. J. W. C. White of the Institute of Arctic and Alpine Research of the University of Colorado] said. The current average global temperature is 59 degrees Fahrenheit. ...

At one point between the last two glacial epochs, the climate melted enough polar ice to raise sea levels some 30 feet. As noted by a member of the drilling team, Dr. David A. Peel of the British Antarctic Survey, it was so warm in England that hippopotamuses wallowed in the Thames and lions roamed its banks....

In his commentary, Dr. White wrote: "We humans have built a remarkable socioeconomic system during perhaps the only time when it could be built, when climate was sufficiently stable to allow us to develop the agricultural infrastructure required to maintain an advanced society. We don't know why we have been so blessed, but even without human intervention, the climate system is capable of stunning variability.

Why They Changed the Name to Climate Change from Global Warming

From the Center for American Progress Action Fund via Maggie's Farm:

This tragic, deadly, and destructive weather -- not to mention the droughts in Georgia, California, Kansas, North Carolina, Florida, Tennessee, North Dakota, and elsewhere across the country -- are consistent with the changes scientists predicted would come with global warming. Gov. Chet Culver (D-IA) called the three weeks of storms that gave rise to the floods in his state "historic in proportion," saying "very few people could anticipate or prepare for that type of event." Culver is, unfortunately, wrong. As far back as 1995, analysis by the National Climatic Data Center showed that the United States "had suffered a statistically significant increase in a variety of extreme weather events." In 2007, the U.N. Intergovernmental Panel on Climate Change (IPCC) concluded that it is "very likely" that man-made global warming will bring an "increase in frequency of hot extremes, heat waves and heavy precipitation." The Nobel Prize-winning panel of thousands of scientists and government officials also found, "Altered frequencies and intensities of extreme weather, together with sea level rise, are expected to have mostly adverse effects on natural and human systems." In 2002, scientists said that "increased precipitation, an expected outcome of climate change, may cause losses of US corn production to double over the next 30 years -- additional damage that could cost agriculture $3 billion per year." Scientists have also found that the "West will see devastating droughts as global warming reduces the amount of mountain snow and causes the snow that does fall to melt earlier in the year."

Beyond the fact that these folks could profitably learn about a writing concept called a "paragraph break,"  this analysis is hilariously bad.  The key fact not mentioned is that the first five months of 2008 have been the coldest in decades, both in the US and worldwide, and have been far colder than 2007, which saw much milder weather and fewer tornadoes this time of year (more here).  In fact one could easily, but probably incorrectly since it is such a short period of time, posit that warming would reduce tornadoes, since this year's cold weather has increased them so much.

Because we have not seen any global warming trend over the last 10 years, alarmists have switched to "climate change" as their bogeyman.  In particular, they argue that global warming will increase severe weather frequency.  There is a lot of evidence that this statement is incorrect, but lets accept it for a minute.  Their theory still requires an intermediate step of warming.  There is no mechanism anyone has ever described where increasing CO2 directly yields increases in severe weather without passing through warming first. 

But this is exactly what they are trying to claim, at least with the masses:  They are in effect claiming that somehow CO2 causes severe weather directly.  But this is simply impossible.  If the world has been colder this year, then severe weather, if it results from temperature change at all, is resulting from the cold weather, not warming.

In fact, the article goes on to imply that crop problems this year are due to man-made effects -- that somehow global warming is causing these failures.  But crop problems this year are almost entirely due to cold spring temperatures and late frosts.  You have really got to be a master PR spinner to convert frost and cold issues into a global warming problem.

The whole thing is pretty funny.   More on tornadoes and warming here.

Update:  I could post a zillion of these, but here is one example of what is ailing crops:

Wheat, durum and barley crops are currently one to two weeks behind normal due to cold weather so far this spring, with temperatures 3° to 5°C below normal.

"A continuation of cool weather could lead to delayed development and increased risk of frost damage this fall," said Bruce Burnett, the CWB's director of weather and market analysis, in the board's release Thursday.

Update #2:  US Tornado fatalities graphed for the last 100 years:


Who'd Have Believed It? A Natural Process Dominated By Negative Feedback!

Frequent readers will know that I have often criticized climate scientists for assuming, without strong evidence, that climate is dominated by positive feedback.  Such an assumption about a long-term stable system implies that climate is relatively unique among natural processes, and is a real head-scratcher when advocated by folks like Michael Mann, who simultaneously claims that past temperatures were stable within very narrow ranges.  (Stability and positive feedback are two great tastes that do not go great together.)

Well, it seems that those of us who were offended by the notion of a long-term stable natural process being dominated by positive feedback may have been right after all (via Tom Nelson):

Cirrus clouds are performing a disappearing act which is taking scientists by surprise.

In the global warming debate, it is assumed that temperature rises will lead to more rainfall, which in turn will see an increase in high-altitude cloud cover that will trap infrared heat.

But research on tropical climate systems has found the opposite is happening, with cirrus clouds thinning as the air warms, leading to rapid cooling as infrared heat escapes from the atmosphere to outer space.

The Cost of the Insurance Policy Matters

I once found myself in a debate with someone advocating the precautionary principle, that we should abate all CO2 production "just in case" it might cause a catastrophe.  The person I was debating with said, trying to be reasonable, "you buy car insurance, right?"

I answered, "Yes, but I wouldn't buy insurance on my car if the insurance itself cost more than my car."  The point is that in most forecasts, the cost of CO2 abatement with current technologies tends to outweigh even some of the more dramatic catastrophic costs of warming, particularly since the best defense against climate disaster is wealth, not less CO2.  Here is another example:

Climatologist Patrick Michaels thinks it would have virtually no effect on the climate, an additional 0.013 degrees (Celsius) of "prevented" warming. That's another little bitty fact that will never see the light of day on most press reports. Instead what we'll get is the usual hot air, except this time it has the price tag of 660 hurricanes.

Update:  "the sexiness has gone out of the movement...AGW was fun ... as long as nobody lost an eye."  Or as long as no Senator had to put his name on a $4 a gallon gas tax (about what they have in Europe, an amount that is still way insufficient to force compliance with Kyoto goals).

Creating Global Warming in the Laboratory

The topic of creating global warming at the computer workstation with poorly-justified "corrections" of past temperature records is one with which my readers should be familiar.  Some older posts on the topic are here and here and here.

The Register updates this topic using March 2008 temperature measurements from various sources.  They show that in addition to the USHCN adjustments we discussed here, the GISS overlays another 0.15C of warming through further adjustments. 


Nearly every measurement bias that you can imagine that changes over time tends to be an upward / warming bias, particularly the urban heat island effect my son and I measured here.  So what is all this cooling bias that these guys are correcting for?  Or are they just changing the numbers by fiat to match their faulty models and expensive policy goals?
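For anyone who wants to try the raw-vs-adjusted comparison themselves, the arithmetic is just a difference of two series: subtract raw from adjusted and look at how that difference drifts over time.  A minimal sketch with invented numbers (the real USHCN/GISS files and their formats are not shown here):

```python
# Difference a raw and an adjusted annual temperature series to expose
# the net adjustment applied over time. All values are hypothetical.
raw      = {1900: 11.2, 1950: 11.3, 2000: 11.5}   # deg C, invented
adjusted = {1900: 10.9, 1950: 11.2, 2000: 11.6}   # deg C, invented

# The adjustment signal: what the processing added or subtracted each year.
adjustment = {yr: round(adjusted[yr] - raw[yr], 2) for yr in raw}
print(adjustment)   # roughly {1900: -0.3, 1950: -0.1, 2000: 0.1}

# Net century trend introduced by adjustments alone:
net = round(adjustment[2000] - adjustment[1900], 2)
print(net)          # about +0.4 deg C of warming from adjustments
```

If a series of adjustments cools the early decades and warms the recent ones, the difference curve slopes upward, and that slope is warming that exists only in the processing, not in the thermometers.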

Update:  Another great example is here, with faulty computer assumptions on ocean temperature recording substantially screwing up the temperature history record.

Polar Bears and Combustion

The biggest danger to polar bears may not be combustion, but incomplete combustion.  Inefficient or incomplete combustion can lead to carbon particles or dense hydrocarbons going up the smokestack (or exhaust pipe).  We commonly call this soot.  It is one reason white marble buildings in cities look so dingy, and it is a pollution problem we have done a lot with in the US but is way down the priority scale in places like China.

It turns out, though, that soot may have more to do with melting ice and rising arctic temperatures than CO2, and this is actually good news:

“Belching from smokestacks, tailpipes and even forest fires, soot—or black carbon—can quickly sully any snow on which it happens to land. In the atmosphere, such aerosols can significantly cool the planet by scattering incoming radiation or helping form clouds that deflect incoming light. But on snow—even at concentrations below five parts per billion—such dark carbon triggers melting, and may be responsible for as much as 94 percent of Arctic warming.

“Impurities cause the snow to darken and absorb more sunlight,” says Charlie Zender, a climate physicist at the University of California, Irvine. “A surprisingly large temperature response is caused by a surprisingly small amount of impurities in snow in polar regions.”

Zender, physicist Mark Flanner and other colleagues built a model to examine how soot impacts temperature in the Arctic and Antarctic regions. Temperatures in the northern polar region have already risen by 1.6 degrees Celsius (2.88 degrees Fahrenheit) since the dawn of the Industrial Revolution. The researchers incorporated information on soot produced by burning fossil fuels, wood and other biofuels, along with that naturally produced by forest fires and then checked their model predictions against global measurements of soot levels in polar snow from Sweden to Alaska to Russia and in Antarctica as well as in nonpolar areas such as the Tibetan Plateau....

Whereas forest fires contribute to the problem—the effect noticeably worsens in years with widespread boreal wildfires—roughly 80 percent of polar soot can be traced to human burning, adding as much as 0.054 watt of energy per square meter of Arctic land, according to the research published this week in the Journal of Geophysical Research. When the snow melts, it exposes dark land below it, further accelerating regional warming. “Black carbon in snow causes about three times the temperature change as carbon dioxide in the atmosphere,” Zender says. “The climate is more responsive to this than [to] anything else we know.”

If correct, this is an incredibly powerful finding, for a couple of reasons.  First, over the last 30 years since we have had good satellite temperature measurements, the vast majority of the warming has been in the Arctic, with temperatures flat to down in the tropics and the Antarctic.  This has never made much sense in the context of greenhouse warming theory (though its proponents have tied themselves into pretzels trying to explain it), since global warming theory (as embodied in the last IPCC report) holds that the largest temperature gains should be in the lower troposphere over the tropics, and offers no reason why the warming in the Arctic should be orders of magnitude larger than in the Antarctic. 

But this soot theory turns it all around.  By this theory, the warming of the Arctic partially results from the loss of ice, rather than the other way around.  And no one would deny that the Arctic should have much more soot than the Antarctic, since Northern Hemisphere industrial output dwarfs that of the Southern Hemisphere (and almost all soot stays in the hemisphere in which it was created).  This would help explain the differential vs. the tropics (soot has less effect on warming when it falls on a rain forest than on snow) as well as the differential between the Arctic and the Antarctic.

But the theory is powerful for another reason:  It would be MUCH easier to engage in a global effort to reduce soot substantially.  While CO2 is a necessary by-product of combustion, soot is not.  Better furnace design and exhaust gas scrubbing, as well as some gasoline reformulations and internal combustion tweaks, would make an enormous dent in soot production, an effort I would gladly support.

Postscript:  You may actually have heard of black carbon in the context of global warming.  Over the last decade, when climate alarmists began running their catastrophic warming models backwards, they found they vastly over-predicted past warming.  To save their models (god forbid anyone would rethink the theory) they cast about for potential man-made cooling effects that might be masking or offsetting man-made warming.  In this context, they settled on sulfur dioxide aerosols and black carbon as cooling agents (which they are, at least to some extent).  Not having a good theory on how much cooling these cause, they could assign arbitrarily large numbers to them, in effect making them the "plug" to get their models to fit history.

With a bit more research, scientists are beginning to admit the cooling effect can't be that great.  The reason is that unlike CO2, black carbon and aerosols break down and come to earth (as soot and acid rain) relatively quickly, so they have only limited, local effects in the areas in which they are produced.  At most, a third of the world's land area, or about 8% of the entire earth's surface, had any significant concentration of these in the atmosphere.  To have a global cooling effect of 0.5-1.0C (which is what they needed, at a minimum, to make their models work running backwards) would imply aerosols were cooling these selected areas of effect by 6-12 degrees Celsius, which is totally improbable.  Besides, almost all of these aerosols are in the northern hemisphere, but it has been the southern hemisphere that has been cooler. 
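
The arithmetic behind that implied local cooling is easy to check: if a global-average offset comes entirely from a small fraction of the earth's surface, the local effect is the global figure divided by that fraction.  A few lines of Python (using the 8% coverage and 0.5-1.0C figures from the paragraph above) make the point:

```python
# Back-of-the-envelope check: a global-average cooling offset produced
# entirely within a small fraction of the Earth's surface implies a
# local cooling equal to the global figure divided by that fraction.

coverage = 0.08  # fraction of Earth's surface with significant aerosol concentrations

for global_offset_c in (0.5, 1.0):
    local_cooling_c = global_offset_c / coverage
    print(f"{global_offset_c}C global offset -> {local_cooling_c:.1f}C local cooling")
```

This yields roughly 6C at the low end and 12.5C at the high end, matching the 6-12 degree range above.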

You Make the Call

A half degree cooler or $45 trillion poorer?  You make the call.  And remember, these are the cost numbers from climate alarmists, so they are very likely way too low. 

The press is so used to the politically correct language of victimization, that they don't even think about it before applying it.  As a result, global warming alarmists get a pass on claiming to be helping the poor by fighting global warming.

But this is absurd.  The poor don't care about polar bears or bad snow at the ski resort or hurricanes hitting their weekend beach house.  They care about agriculture, which has always been improved by warmer weather and longer growing seasons, and development, which relies on the profligate expenditure of every hydrocarbon they can get their hands on.  Can anyone really argue that a half degree warmer world is harder on the poor than a $45 trillion increase in energy costs?

A Pretty Good Skeptics Starter-Article

Though regular readers may find little new here, this is a pretty good starter-article for skeptics still on training wheels.

Global Warming Videos

All American Blogger is linking a good number of online videos skeptical of catastrophic man-made global warming theory.

Part 1:  Documentaries

Part 2:  Lectures

Part 3:  Debates

Part 4: Shorts

And, of course, there are my videos.

Only Skeptics Are Driven By Money

Or not:

Noel Sheppard's got the goods on Al Gore.

For years, NewsBusters has contended that Nobel Laureate Al Gore is spreading global warming hysteria to benefit his own wallet.

On Wednesday, despite claims by one of Gore's representatives two months ago, it was revealed that his Generation Investment Management private equity fund has taken a 9.5 percent stake in a company that has one of the largest carbon credit portfolios in the world.

Can the IRS please tell me how Al Gore can get away with having the nonprofit Alliance for Climate Protection do all the PR heavy lifting for his for-profit investments?

Note that carbon credits are a zero-value asset unless by government fiat they are declared to have some value.  In the absence of global warming legislation and cap-and-trade schemes, this company's portfolio is worth nothing.  Only the lobbying by Gore and his "non-profits" can make it have value. 

Interestingly, this carbon credit portfolio also has zero value under alternative CO2 reduction approaches, such as a carbon tax.  Under a carbon tax, there is much less opportunity for rent-seeking by powerful people like Gore, and carbon credit portfolios are worthless.  Notably, Gore proposed a US carbon tax 15 years or so ago.  My guess is that he would no longer support a carbon tax, as it would bankrupt many of his investments.

My Favorite Headline of the Day

Tom Nelson really sums up much of the global warming movement in one blog post headline:

Once again, the best way to avoid global warming catastrophe is to do whatever some special interest group already wanted done anyway

Specifically he was referring to this:

“Arguably, the best way to reduce global warming in our lifetimes is to reduce or eliminate our consumption of animal products..."

Price / Value of Solar

I have a big four thousand square foot roof in one of the greatest solar sites in the world (6 equivalent hours of full sun a day) that is just begging for solar panels.  Except that even with substantial government subsidies, the payback numbers are awful.  Lynne Kiesling reports that this may be about to change:

There are a couple of very interesting recent solar developments that have substantial economic implications. First, the blue sky stuff: courtesy of Slashdot, a team of researchers in the Netherlands have demonstrated avalanche effects in semiconductors that can be used in solar cells (here's the original article). Avalanche effects mean that instead of having a 1:1 relationship between a photon and an electron, in which 1 photon releases 1 electron, it's physically possible in these nano-scale semiconducting materials to have 2:1 or even 3:1 -- 2 or 3 electrons released per photon in the material. This means twofold or threefold increase in the possible energy intensity of the solar cell material. These nanocrystals are even inexpensive to manufacture. How cool is that?

What are the economic implications of this new material and new knowledge? The low energy intensity of solar cells has been a factor in making solar a less cost-effective means of generating electricity than fossil fuels, which are extremely energy intensive. This avalanche effect can mean smaller, more energy intensive solar cells, which changes the cost structure for solar. I think it will certainly shift the long-run average cost curve downward, which creates an opportunity for solar retailers to reduce prices. A lower solar retail price shifts the price ratio between solar power and all other electricity power sources. For example, the price ratio between solar-generated and coal-generated electricity would shift such that at the margin, consumers would substitute out of coal-powered electricity and into solar-powered electricity. If I were better at generating the isoquant and indifference curve graphs electronically, I'd show it here graphically ... but the logic is straightforward.

Unfortunately, we have been hearing this for years.  I price solar out on my home just about once per year, and the numbers have not changed for a while.  Here's hoping....
