The surprisingly weak case for global warming

I welcome your thoughts on this post, but please read through to the end before commenting. Also, you’ll find the related code (in R) at the end. For those new to this blog, you may be taken aback (though hopefully not bored or shocked!) by how I expose my full process and reasoning. This is intentional and, I strongly believe, much more honest than presenting results without reference to how many different approaches were taken, or how many models were fit, before everything got tidied up into one neat, definitive finding.

Fast summaries

TL;DR (scientific version): Based solely on year-over-year changes in surface temperatures, the net increase since 1881 is fully explainable as a non-independent random walk with no trend.
TL;DR (simple version): Statistician does a test, fails to find evidence of global warming.

Introduction and definitions

As so often happens to terms which have entered the political debate, “global warming” has become infused with additional meanings and implications that go well beyond the literal statement: “the earth is getting warmer.” Anytime someone begins a discussion of global warming (henceforth GW) without a precise definition of what they mean, you should assume their thinking is muddled or their goal is to bamboozle. Here’s my own breakdown of GW into nine related claims:

  1. The earth has been getting warmer.
  2. This warming is part of a long term (secular) trend.
  3. Warming will be extreme enough to radically change the earth’s environment.
  4. The changes will be, on balance, highly negative.
  5. The most significant cause of this change is carbon emissions from human beings.
  6. Human beings have the ability to significantly reverse this trend.
  7. Massive, multilateral cuts to emissions are a realistic possibility.
  8. Such massive cuts are unlikely to cause unintended consequences more severe than the warming itself.
  9. Emissions cuts are better than alternative strategies, including technological fixes (e.g. iron fertilization), or waiting until scientific advances make better technological fixes likely.

Note that not all proponents of GW believe all nine of these assertions.

The data and the test (for GW1)

The only claims I’m going to evaluate are GW1 and GW2. For data, I’m using surface temperature information from NASA. I’m only considering the yearly average temperature, computed as the mean of the four seasonal averages listed in the data. The first full year of (seasonal) data is 1881; the last year is 2011 (for this data, years begin in December and end in November).

According to NASA’s data, in 1881 the average yearly surface temperature was 13.76°C. Last year the same average was 14.52°C, or 0.76°C higher (standard deviation on the yearly changes is 0.11°C). None of the most recent ten years have been colder than any of the first ten years. Taking the data at face value (i.e. ignoring claims that it hasn’t been properly adjusted for urban heat islands or that it has been manipulated), the evidence for GW1 is indisputable: The earth has been getting warmer.

Usually, though, what people mean by GW is more than just GW1; they mean GW2 as well, since without GW2 none of the other claims are tenable, and the entire discussion might be reduced to a conversation like this:

“I looked up the temperature record this afternoon, and noticed that the earth is now three quarters of a degree warmer than it was in the time of my great great great grandfather.”
“Why, I do believe you are correct, and wasn’t he the one who assassinated James A. Garfield?”
“No, no, no. He’s the one who forced Sitting Bull to surrender in Saskatchewan.”

Testing GW2

Do the data compel us to view GW as part of a trend and not just background noise? To evaluate this claim, I’ll be taking a standard hypothesis testing approach, starting with the null hypothesis that year-over-year (YoY) temperature changes represent an undirected random walk. Under this hypothesis, the YoY changes are modeled as independent draws from a distribution with mean zero. The final temperature represents the sum of 130 of these YoY changes. To obtain my sampling distribution, I’ve calculated the 130 YoY changes in the data, then subtracted the mean from each one. This way, I’m left with a distribution with the same variance as in the original data. YoY jumps in temperature will be just as spread apart as before, but with the whole distribution shifted over until its expected value becomes zero. Note that I’m not assuming a theoretical distributional form (e.g. Normality); all of the data I’m working with is empirical.

My test will be to see if, by sampling 130 times (with replacement!) from this distribution of mean zero, we can nonetheless replicate a net change in global temperatures that’s just as extreme as the one in the original data. Specifically, our p-value will be the fraction of times our Monte Carlo simulation yields a temperature change of greater than 0.76°C or less than -0.76°C. Note that mathematically, this is the same test as drawing from the original data, unaltered, then checking how often the sum of changes resulted in a net temperature change of less than 0 or more than 1.52°C.
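
To make the procedure concrete, here’s a condensed sketch of the test (the full, commented version, including plotting, is in the code at the end of the post; theData is the NASA table loaded there, with yearly means in hundredths of a degree, and netChange is computed equivalently there via the year 1 and year 131 averages):

rawChanges = diff(theData$means)        # the 130 YoY changes
changes = rawChanges - mean(rawChanges) # centered so the expected change is zero
netChange = sum(rawChanges)             # observed net change, about 76 hundredths

trials = 10^6
finalResults = replicate(trials, sum(sample(changes, 130, replace=T)))

# p-value: fraction of simulated 130-year walks at least as extreme as observed
mean(finalResults > netChange | finalResults < -netChange)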

I have not set a “critical” p-value in advance for rejecting the null hypothesis, as I find this approach to be severely limiting and just as damaging to science as J-Lo is to film. Instead, I’ll comment on the implied strength of the evidence in qualitative terms.

Initial results

The initial results are shown graphically at the beginning of this post (I’ll wait while you scroll back up). As you can see, a large percentage of the samples gave a more extreme temperature change than what was actually observed (shown in red). Of the 1000 trials visualized, 56% produced results more extreme than the original data after 130 years’ worth of changes. I ran the simulation again with millions of trials (turn off plotting if you’re going to try this!); the true p-value for this experiment is approximately 0.55.

For those unfamiliar with how p-values work, this means that, assuming temperature changes are randomly plucked out of a bundle of numbers centered at zero (i.e. no trend exists), we would still see equally dramatic changes in temperature 55% of the time. Under even the most generous interpretation of the p-value, we have no reason to reject the null hypothesis. In other words, this test finds zero evidence of a global warming trend.

Testing assumptions Part 1

But wait! We still haven’t tested our assumptions. First, are the YoY changes independent? Here’s a scatterplot showing the change in temperature one year versus the change in temperature the next year:

Looks like there’s a negative correlation. A quick linear regression gives a p-value of 0.00846; it’s highly unlikely that the correlation we see (-0.32) is mere chance. One more test worth running is the ACF, or autocorrelation function. Here’s the plot R gives us:

Evidence for a negative correlation between consecutive YoY changes is very strong, and there’s some evidence for a negative correlation between YoY changes which are 2 years apart as well.

Before I explain how to incorporate this information into a revised Monte Carlo simulation, what does a negative correlation mean in this context? It tells us that if the earth’s temperature rises by more than average in one year, it’s likely to fall (or rise less than average) the following year, and vice versa. The bigger the jump one way, the larger the jump the other way next year (note this is not a case of regression to the mean; these are changes in temperature, not absolute temperatures. Update: This interpretation depends on your assumptions. Specifically, if you begin by assuming a trend exists, you could see this as regression to the mean. Note, however, that if you start with noise, then draw a moving average, this will induce regression to the mean along your “trendline”). If anything, this is evidence that the earth has some kind of built-in balancing mechanism for global temperature changes, but as a non-climatologist all I can say is that the data are compatible with such a mechanism; I have no idea if this makes sense physically.

Correcting for correlation

What effect will factoring in this negative correlation have on our simulation? My initial guess is that it will cause the total temperature change after 130 years to be much smaller than under the pure random walk model, since changes one year are likely to be balanced out by changes next year in the opposite direction. This would, in turn, suggest that the observed 0.76°C change over the past 130 years is much less likely to happen without a trend.

The most straightforward way to incorporate this correlation into our simulation is to sample YoY changes in 2-year increments. Instead of 130 individual changes, we take 65 draws from our set of centered changes, and for each draw we use that year’s change along with the change from the year that immediately follows it. Here’s what the plot looks like for 1000 trials.

After doing 100,000 trials with 2-year increments, we get a p-value of 0.48. Not much change, and still far from being significant. Sampling 3 years at a time brings our p-value down to 0.39. Note that as we grab longer and longer consecutive chains at once, the p-value has to approach 0 (asymptotically) because we are more and more likely to end up with the original 130-year sequence of (centered) changes, or a sequence which is very similar. For example, increasing our chain from one YoY change to three reduces the number of samplings from 130^130 to approximately 43^43 – still a huge number, but many orders of magnitude less (Fun problem: calculate exactly how many fewer orders of magnitude. Hint: If it takes you more than a few minutes, you’re doing it wrong).
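
For anyone who wants to check their answer to the fun problem, the arithmetic is a one-liner (taking the counts 130^130 and 43^43 above at face value):

# Difference in orders of magnitude between the two sample spaces
130*log10(130) - 43*log10(43) # roughly 275 - 70 = 205 orders of magnitude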

Correcting for correlation Part 2 (A better way?)

To be more certain of the results, I ran the simulation in a second way. First I sampled 130 of the changes at random, then I threw out any samplings where the correlation coefficient was greater than -0.32. This left me with the subset of random samplings whose coefficients were less than -0.32. I then tested these samplings to see the fraction that gave results as extreme as our original data.

Compared to the chained approach above, I consider this to be a more “honest” way to sample an empirical distribution, given the constraint of a (maximum) correlation threshold. I base this on E.T. Jaynes’ demonstration that, in the face of ignorance as to how a particular statistic was generated, the best approach is to maximize the (informational) entropy. The resulting solution is the most likely result you would get if you sampled from the full space (uniformly), then limited your results to those which match your criteria. Intuitively, this approach says: Of all the ways to arrive at a correlation of -0.32 or less, which are the most likely to occur?
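
In code, this amounts to simple rejection sampling. Here’s a sketch (the helper function is my own illustration; the post’s actual code, at the end, inlines this same loop):

# Rejection sampling: draw 130 changes, keep the sampling only if its
# lag-1 correlation is at least as negative as the observed -0.32
sampleWithCor = function(changes, threshold = -0.32) {
	repeat {
		jumps = sample(changes, 130, replace=T)
		if (cor(jumps[1:129], jumps[2:130]) <= threshold) return(jumps)
	}
}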

For a more thorough discussion of maximum entropy approaches, see Chapter 11 of Jaynes’ book “Probability Theory” or his “Papers on Probability” (1979). Note that this is complicated, mind-blowing stuff (it was for me, anyway). I strongly recommend taking the time to understand it, but don’t bother unless you have at least an intermediate-level understanding of math and probability.

Here’s what the plot looks like subject to the correlation constraint:

If it looks similar to the other plots in terms of results, that’s because it is. Empirical p-value from 1000 trials? 0.55. Because generating samples with the required correlation coefficients took so long, these were the only trials I performed. However, the results after 1000 trials are very similar to those for 100,000 or a million trials, and with a p-value this high there’s no realistic chance of getting a statistically significant result with more trials (though feel free to try for yourself using the R code and your cluster of computers running Hadoop). In sum, the maximum entropy approach, just like the naive random walk simulation and the consecutive-year simulations, gives us no reason to doubt our default explanation of GW2 – that it is the result of random, undirected changes over time.

One more assumption to test

Another assumption in our model is that the YoY changes have constant variance over time (homoscedasticity). Here’s the plot of the (raw, uncentered) YoY changes:

It appears that the variance might be increasing over time, but just looking at the plot isn’t conclusive. To be sure, I took the absolute value of the changes and ran a simple regression on them. The result? Variance is increasing (p-value 0.00267), though at a rate that’s barely perceptible; the estimated absolute increase in magnitude of the YoY changes is 0.046. That figure is in hundredths of a degree Celsius, so our linear model gives a rate of increase in variability of just 4.6 ten-thousandths of a degree per year. Over the course of 130 years, that equates to an increase of six hundredths of a degree Celsius (margin of error of 3.9 hundredths at two standard deviations). This strikes me as a minuscule amount, though relative to the size of the YoY changes themselves it’s non-trivial.
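
Spelling out that conversion (plain arithmetic on the regression estimate quoted above):

# Slope of 0.046 hundredths of a degree per year, converted and projected
0.046 / 100       # 4.6 ten-thousandths of a degree per year
0.046 * 130 / 100 # about 0.06 degrees C over the full 130 years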

Does this increase in volatility invalidate our simulation? I don’t think so. Any model which took into account this increase in volatility (while still being centered) would be more likely to produce extreme results under the null hypothesis of undirected change. In other words, the bigger the yearly temperature changes, the more likely a random sampling of those changes will lead us far away from our 13.8°C starting point in 1881, with most of the variation coming towards the end. If we look at the data, this is exactly what happens. During the first 63 years of data the temperature increases by 42 hundredths of a degree, then drops 40 hundredths in just 12 years, then rises 80 hundredths within 25 years of that; the temperature roller coaster is becoming more extreme over time, as variability increases.

Beyond falsifiability

Philosopher Karl Popper insisted that for a theory to be scientific, it must be falsifiable. That is, there must exist the possibility of evidence to refute the theory, if the theory is incorrect. But falsifiability, by itself, is too low a bar for a theory to gain acceptance. Popper argued that there were gradations and that “the amount of empirical information conveyed by a theory, or its empirical content, increases with its degree of falsifiability” (emphasis in original).

Put in my words, the easier it is to disprove a theory, the more valuable the theory. (Incorrect) theories are easy to disprove if they give narrow prediction bands, are testable in a reasonable amount of time using current technology and measurement tools, and if they predict something novel or unexpected (given our existing theories).

Perhaps you have already begun to evaluate the GW claims in terms of these criteria. I won’t do a full assay of how the GW theories measure up, but I will note that we’ve had several long periods (10 years or more) with no increase in global temperatures, so any theory of GW3 or GW5 will have to be broad enough to encompass decades of non-warming, which in turn makes the theory much harder to disprove. We are in one of those sideways periods right now. That may be ending, but if it doesn’t, how many more years of non-warming would we need for scientists to abandon the theory?

I should point out that a poor or a weak theory isn’t the same as an incorrect theory. It’s conceivable that the earth is in a long-term warming trend (GW2) and that this warming has a man-made component (GW5), but that this will be a slow process with plenty of backsliding, visible only over hundreds or thousands of years. The problem we face is that GW3 and beyond are extreme claims, often made to bolster support for extreme changes in how we live. Does it make sense to base extreme claims on difficult-to-falsify theories backed up by evidence as weak as the global temperature data?

Invoking Pascal’s Wager

Many of the arguments in favor of radical changes to how we live go like this: Even if the case for extreme man-made temperature change is weak, the consequences could be catastrophic. Therefore, it’s worth spending a huge amount of money to head off a potential disaster. In this form, the argument reminds me of Pascal’s Wager, named after Blaise Pascal, a 17th-century mathematician and co-founder of modern probability theory. Pascal argued that you should “wager” in favor of the existence of God and live life accordingly: If you are right, the outcome is infinitely good, whereas if you are wrong and there is no God, the most you will have lost is a lifetime of pleasure.

Before writing this post, I Googled to see if others had made this same connection. I found many discussions of the similarities, including this excellent article by Jim Manzi at The American Scene. Manzi points out problems with applying Pascal’s Wager, including the difficulty in defining a stopping point for spending resources to prevent the event. If a 20°C increase in temperature is possible, and given that such an increase would be devastating to billions of people, then we should be willing to spend a nearly unlimited amount to avert even a tiny chance of such an increase. The math works like this: Amount we should be willing to spend = probability of 20°C increase (say 0.00001) * harm such an increase would do (a godzilla dollars). The end result is bigger than the GDP of the planet.
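
Here’s that calculation with explicitly made-up inputs (both large numbers below are placeholders for the sake of illustration, not estimates):

probIncrease = 0.00001 # assumed probability of a 20 degree C increase
harm = 1e30            # "a godzilla dollars" -- an arbitrary, enormous stand-in
worldGDP = 8e13        # rough order of magnitude of global GDP, in dollars

probIncrease * harm            # 1e25 dollars of expected harm
probIncrease * harm > worldGDP # TRUE: the "justified" spend dwarfs world GDP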

Of course, catastrophic GW isn’t the only potential threat that can have Pascal’s Wager applied to it. We also face annihilation from asteroids, nuclear war, and new diseases. Which of these holds the trump card to claim all of our resources? Obviously we need some other approach besides throwing all our money at the problem with the scariest Black Swan potential.

There’s another problem with using Pascal’s Wager style arguments, one I rarely see discussed: proponents fail to consider the possibility that, in radically altering how we live, we might invite some other Black Swan to the table. In his original argument, Pascal the Jansenist (a sub-sect of Christianity) doesn’t take into account the possibility that God is a Muslim and would be more upset by Pascal’s professed Christianity than He would be with someone who led a secular lifestyle. Note that these two probabilities – that God is a Muslim who hates Christians more than atheists, or that God is a Christian who hates atheists – are incommensurable! There’s no rational way to weigh them and pick the safer bet.

What possible Black Swans do we invite by forcing people to live at the same per-capita energy-consumption level as our forefathers in the time of James A. Garfield?

Before moving on, I should make clear that humans should, in general, be very wary of inviting Black Swans to visit. This goes for all experimentation we do at the sub-atomic level, including work done at the LHC (sorry!), and for our attempts to contact aliens (as Stephen Hawking has pointed out, there’s no certainty that the creatures we attract will have our best interests in mind). So, unless we can point to strong, clear, tangible benefits from these activities, they should be stopped immediately.

Beware the anthropic principle

Strictly speaking, the anthropic principle states that no matter how low the odds are that any given planet will house complex organisms, one can’t conclude that the existence of life on our planet is a miracle. Essentially, if we didn’t exist, we wouldn’t be around to “notice” the lack of life. The chance that we should happen to live on a planet with complex organisms is 1, because it has to be.

More broadly, the anthropic principle is related to our tendency to notice extreme results, then assume these extremes must indicate something more than the noise inherent in random variation. For example, if we gathered together 1000 monkeys to predict coin tosses, it’s likely that at least one of them will predict the first 10 flips correctly. Is this one a genius, a psychic, an uber-monkey? No. We just noticed that one monkey because its record stood out.
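
The arithmetic behind “it’s likely”: each monkey has a 1 in 2^10 chance of a perfect record, so with 1000 monkeys we get

# Probability that at least one of 1000 monkeys calls 10 fair flips correctly
1 - (1 - 0.5^10)^1000 # about 0.62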

Here’s another, potentially lucrative, most likely illegal, definitely immoral use of the anthropic principle. Send out a million email messages. In half of them, predict that a particular stock will go up the next day, in the other half predict it will go down. The next day, send another round of predictions to just those emails that got the correct prediction the first time. Continue sending predictions to only those recipients who received the correct guesses. After a dozen days, you’ll have a list of people who’ve seen you make 12 straight correct predictions. Tell these people to buy a stock you want to pump and dump. Chances are good they’ll bite, since from their perspective you look like a stock-picking genius.
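
A quick check of the numbers in that scheme:

# Each round halves the pool of recipients who have seen only correct calls
10^6 / 2^12 # about 244 people left after 12 straight "correct" predictions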

What does this have to do with GW? It means that we have to disentangle our natural tendency to latch on to apparent patterns from the possibility that this particular pattern is real, and not just an artifact of our bias towards noticing unlikely events under null hypotheses.

Biases, ignorance, and the brief life, death, and afterlife of a pet theory

While the increase in volatility seen in the temperature data complicates our analysis of the data, it gives me hope for a pet theory about climate change which I’d buried last year (where does one bury a pet theory?). The theory (for which I share credit with my wife and several glasses of wine) is that the true change in our climate should best be described as Distributed Season Shifting, or DSS. In short, DSS states that we are now more likely to have unseasonably warm days during the colder months, and unseasonably cold days during the warmer months. Our seasons are shifting, but in a chaotic, distributed way. We built this theory after noticing a “weirdening” of our weather here in Toronto. Unfortunately (for the theory), no matter how badly I tortured the local temperature data, I couldn’t get it to confess to DSS.

However, maybe I was looking at too small a sample of data. The observed increase in volatility of global YoY changes might also be reflected in higher volatility within the year, but the effects may be so small that no single town’s data is enough to overcome the high level of “normal” volatility within seasonal weather patterns.

My tendency to look for confirmation of DSS in weather data is a bias. Do I have any other biases when it comes to GW? If anything, as the owner of a recreational property located north of our northern city, I have a vested interest in a warmer earth. Both personally (hotter weather = more swimming) and financially, GW2 and 3 would be beneficial. In a Machiavellian sense, this might give me an incentive to downplay GW2 and beyond, with the hope that our failure to act now will make GW3 inevitable. On the other hand, I also have an incentive to increase the perception of GW2, since I will someday be selling my place to a buyer who will base her bid on how many months of summer fun she expects to have in years to come.

Whatever impact my property ownership and failed theory have on this data analysis, I am blissfully free of one biasing factor shared by all working climatologists: the pressure to conform to peer consensus. Don’t underestimate the power of this force! It affects everything from what gets published to who gets tenure. While in the long run scientific evidence wins out, the short run isn’t always so short: For several decades the medical establishment pushed the health benefits of a low fat, high carb diet. Alternative views are only now getting attention, despite hundreds of millions of dollars spent on research which failed to back up the consensus claims.

Is the overall evidence for GW2–9 as weak as the evidence used to promote high carb diets? I have no idea. Beyond the global data I’m examining here, and my failed attempt to “discover” DSS in Toronto’s temperature data, I’m coming from a position of nearly complete ignorance: I haven’t read the journal articles, I don’t understand the chemistry, and I’ve never seen Al Gore’s movie.

Final analysis and caveats

Chances are, if you already had strong opinions about the nine faces of GW before reading this article, you won’t have changed your opinion much. In particular, if a deep understanding of the science has convinced you that GW is a long term, man-made trend, you can point out that I haven’t disproven your view. You could also argue the limitations of testing the data using the data, though I find this more defensible than testing the data with a model created to fit the data.

Regardless of your prior thinking, I hope you recognize that my analysis shows that YoY temperature data, by itself, provides no evidence for GW2 and beyond. Also, because of the relatively long periods of non-warming within the context of an overall rise in global temperature, any correct theory of GW must include backsliding within its confidence intervals for predictions, making it a weaker theory.

What did my analysis show for sure? Clearly, temperatures have risen since the 1880s. Also, volatility in temperature changes has increased. That, by itself, has huge implications for our lives, and tempts me to do more research on DSS (what do you call a pet theory that’s risen from the dead?). I’ve also become intrigued with the idea that our climate (at large) has mechanisms to balance out changes in temperature. In terms of GW2 itself, my analysis has not convinced me that it’s all a myth. If we label random variation “noise” and call trend a “signal,” I’ve shown that yearly temperature changes are compatible with an explanation of pure noise. I haven’t shown that no signal exists.

Thanks for reading all the way through! Here’s the code:

Code in R

theData = read.table("/path/to/theData/FromNASA/cleanedForR.txt", header=T) 
 
# There has to be a more elegant way to do this
theData$means = rowMeans(aggregate(theData[,c("DJF","MAM","JJA","SON")], by=list(theData$Year), FUN="mean")[,2:5])
 
# Get a single vector of Year over Year changes
rawChanges = diff(theData$means, 1)
 
# SD on yearly changes
sd(rawChanges)
 
# Subtract off the mean, so that the distribution now has an expectation of zero
changes = rawChanges - mean(rawChanges)
 
# Find the total range, 1881 to 2011
(theData$means[131] - theData$means[1])/100
 
# Year 1 average, year 131 average, difference between them in hundredths
y1a = theData$means[1]/100 + 14
y131a = theData$means[131]/100 + 14
netChange = (y131a - y1a)*100 
 
# First simulation, with plotting
plot.ts(cumsum(c(0,rawChanges)), col="red", ylim=c(-300,300), lwd=3, xlab="Year", ylab="Temperature anomaly in hundredths of a degree Celsius")
 
trials = 1000
finalResults = rep(0,trials)
 
for(i in 1:trials) {
	jumps = sample(changes, 130, replace=T)
 
	# Add lines to plot for this, note the "alpha" term for transparency
	lines(cumsum(c(0,jumps)), col=rgb(0, 0, 1, alpha = .1))
 
	finalResults[i] = sum(jumps)
 
}
 
# Re-plot red line again on top, so it's visible again
lines(cumsum(c(0,rawChanges)), col="red", ylim=c(-300,300), lwd=3) 
 
# Find the fraction of trials that were more extreme than the original data
( length(finalResults[finalResults>netChange]) + length(finalResults[finalResults<(-netChange)]) ) / trials

# Many more simulations, minus plotting
trials = 10^6
finalResults = rep(0,trials)

for(i in 1:trials) {
	jumps = sample(changes, 130, replace=T)

	finalResults[i] = sum(jumps)
}

# Find the fraction of trials that were more extreme than the original data
( length(finalResults[finalResults>netChange]) + length(finalResults[finalResults<(-netChange)]) ) / trials

# Looking at the correlation between YoY changes
x = changes[seq(1,129,2)]
y = changes[seq(2,130,2)]
plot(x,y,col="blue", pch=20, xlab="YoY change in year i (hundredths of a degree)", ylab="YoY change in year i+1 (hundredths of a degree)")
summary(lm(x~y))
cor(x,y)
acf(changes)

# Try sampling in 2-year increments
plot.ts(cumsum(c(0,rawChanges)), col="red", ylim=c(-300,300), lwd=3, xlab="Year", ylab="Temperature anomaly in hundredths of a degree Celsius")

trials = 1000
finalResults = rep(0,trials)

for(i in 1:trials) {
	indexes = sample(1:129,65,replace=T)

	# Interlace consecutive years, to maintain the order of the jumps
	jumps = as.vector(rbind(changes[indexes],changes[(indexes+1)]))

	lines(cumsum(c(0,jumps)), col=rgb(0, 0, 1, alpha = .1))

	finalResults[i] = sum(jumps)
}

# Re-plot red line again on top, so it's visible again
lines(cumsum(c(0,rawChanges)), col="red", ylim=c(-300,300), lwd=3)

# Find the fraction of trials that were more extreme than the original data
( length(finalResults[finalResults>netChange]) + length(finalResults[finalResults<(-netChange)]) ) / trials

# Try sampling in 3-year increments
trials = 100000
finalResults = rep(0,trials)

for(i in 1:trials) {
	indexes = sample(1:128,43,replace=T)

	# Interlace consecutive years, to maintain the order of the jumps
	jumps = as.vector(rbind(changes[indexes],changes[(indexes+1)],changes[(indexes+2)]))

	# Grab one final YoY change to fill out the 130
	jumps = c(jumps, sample(changes, 1))

	finalResults[i] = sum(jumps)
}

# Find the fraction of trials that were more extreme than the original data
( length(finalResults[finalResults>netChange]) + length(finalResults[finalResults<(-netChange)]) ) / trials

# The maxEnt method for conditional sampling
lines(cumsum(c(0,rawChanges)), col="red", ylim=c(-300,300), lwd=3)

trials = 1000
finalResults = rep(0,trials)

for(i in 1:trials) {
	theCor = 0
	while(theCor > -.32) {
		jumps = sample(changes, 130, replace=T)
		theCor = cor(jumps[1:129],jumps[2:130])
	}
 
	# Add lines to plot for this
	lines(cumsum(c(0,jumps)), col=rgb(0, 0, 1, alpha = .1))
 
	finalResults[i] = sum(jumps)
 
}
 
# Re-plot red line again on top, so it's visible again
lines(cumsum(c(0,rawChanges)), col="red", ylim=c(-300,300), lwd=3) 
 
( length(finalResults[finalResults>74]) + length(finalResults[finalResults<(-74)]) ) / trials
 
# Plot of YoY changes over time
plot(rawChanges,pch=20,col="blue", xlab="Year", ylab="YoY change (in hundredths of a degree)")
 
# Is there a trend?
absRawChanges = abs(rawChanges)
pts = 1:130
summary(lm(absRawChanges~pts))


Comments

  1. Matt:
    I applaud your taking a fresh look at Global Warming and going to the raw statistics. The results are of course surprising given the scientific consensus that there is evidence of global warming.

    This concerned me, so I too went to the NASA data that you linked to and conducted a Bayesian moving average time series analysis. The goal was to determine the distribution in the trend rate of annual changes in temperature. I allowed for the trend rate to change over time and also allowed for the standard deviation in the trend rate to change. The model that I used is:
    [Temp in year t] = [temp in year t-1] + c + b(t + (n-1)/2) + theta * (change in temperature in year t-1)

    where:

    c= annual trend in change in temperature (the Trend Factor)

    b= annual change in Trend Factor

    theta = regression to the mean factor

    t=1,…, 131

    I also assume that the change in temperature has a standard deviation that can change over time (parameter d).

    Based on my analysis, the value, c, the Trend Factor, has a 97% chance of being greater than zero. This suggests that the NASA data is strongly supportive of Global Warming.

    The full results of the analysis and the code that I used can be found here:

    In summary, using the same data that you used, but adopting a Bayesian approach, I get to very different results. Perhaps this is due to the way you model correlations, where the correlation between years gets “broken” after two or three years.

  2. “To be as clear as possible, many of the GW claims are highly extreme, in terms of estimated temperature changes (all of the extreme predictions made 10 years ago have failed) and how we should change our lives. Do the data provide strong enough evidence to support this?”

    Well, extreme predictions usually fail. But maybe you’d like to indicate which extreme predictions you are talking about? Is it the IPCC on atmospheric temperatures? Or sea level rise? And by saying “in the last 10 years”, aren’t you falling into the usual trap of drawing conclusions from not enough data?

    “At this point, none of the proponents of the consensus views on GW have been willing to state their degree of belief in the different GW claims.”

    Quite a few scientists have given likely bounds on climate sensitivity, and this is the key claim in AGW. The “skeptics” like to pretend it is 0.7C (or even 0C) for a doubling of CO2. The scientists generally go for around 2 – 3C for a doubling of CO2. There’s also some distinction between the transient response (which only includes “quick” feedbacks like humidity) and the equilibrium response that includes the long term feedbacks like changing albedo with loss of ice.

    “None have pointed to an alternative data set that’s more convincing (or any other data set). None have posited an alternative method that doesn’t start with an assumption of a trend then require that it be disproved.”

    But it is an almost trivial exercise to show that the assumption of no trend in the temperature data is wrong. However, if you were given the data in isolation, you would only conclude that it had increased – you would have no idea about its future behaviour. But the data is not in isolation; it is accompanied by a convincing physical rationale for a rising temperature. Put the two together, and it’s very hard to see anything other than a continued increase in temperatures.

    Of course you should feel free to look for other explanations, but “skeptics” have been doing that for quite a few years now, and they have yet to come up with anything even remotely as convincing as AGW. What is worse, many of their attempts contradict each other, and seem mainly to exist to mislead people, rather than having any substance to them. It’s almost as though the “skeptics” know that they are wrong, but are trying to convince the general public that they aren’t.

  3. Now this is just getting silly. A ridiculous model, whose implications about long-term climate stability are utterly disastrous, ‘can’t be ruled out’ as a competitor with GW2 because it allows large departures from initial ‘temperatures’ without any basis in physics, and this is insisted on as a serious objection, despite the fundamental untenability of this model being pointed out in multiple comments?

    Then, to pile Pelion on Ossa, the author complains that there are deep problems of falsifiability, ‘Pascal’s wager arguments’ and the new suggestion that greater year to year variance could explain the strong trend in global temperatures over the second half of the 1881 to present record.

    Falsifiability is not a problem here: show that CO2 isn’t a greenhouse gas, or that the evidence for rapidly climbing CO2 levels is somehow misleading, or that the basic radiative physics is wrong, or that measures of change in outgoing LW radiation are wrong, or that … the list is nearly endless but I don’t see any chance of the challenge being taken up seriously here, where predictive models are dismissed as unsupported because non-physical tinker-toy models are “alternatives” we should consider.

    Pascal’s wager starts out from the assumption that we have no evidence at all for Xianity, and argues we should believe it anyway. No one is using that kind of argument to support taking action to reduce GHG emissions. Instead, real evidence, from the basic physics to measurements confirming the impact of GHGs on downwelling LW radiation to the agreement of serious physical models that substantial warming is a very real risk, is what motivates acting now to avoid the worst-case outcomes of continued, unrestrained emissions.

    Finally, if you have a credible model instead of a statistical tinker toy that gives a substantial probability to the observed trend given only increased variance, let’s see it.

  4. Matt Asher: “The advantage of my ignorance of the science is that I can offer an independent analysis of the data. ”

    OK, Matt, read that. Now, read that again. If you do not understand the context of the data–and that includes the physical system that produced it–you cannot hope to develop a meaningful analysis.

    You also misunderstand the point of the graphic supplied by Alfonso. It is that even with the stakes as high as they are, only 0.17% of articles in climate change question anthropogenic causation of the current warming epoch. If there were any controversy over the role of CO2 in climate, you would expect that to be far higher.

    As to falsifiability, climate models have a very good record of successful prediction. See, for example:
    http://bartonpaullevenson.com/ModelsReliable.html

    Finally, as a Toronto graduate, might I commend to you the work by Jim Prall:

    http://www.eecg.utoronto.ca/~prall/climate/

    • That would be a great response iff (if and only if) climatologists’ predictions turn out to be true.

      The easiest predictions to test against reality are those made at IPCC’s first assessment report (FAR), the report that got the ball rolling. Let’s take the predictions from the FAR and compare to the data.

      It can be found here: http://www.ipcc.ch/ipccreports/1992%20IPCC%20Supplement/IPCC_1990_and_1992_Assessments/English/ipcc_90_92_assessments_far_overview.pdf (this is the document that the IPCC saw fit to present to “policymakers” and clearly intends as a guide to any decisions)

      The claims made here now have accrued ~ 22 years of data (and given they’re 100-year-out predictions, even 22 years seems far too little data).

      Predictions:
      “An average rate of increase of global mean temperature during the next century of about 0.3°C per decade (with an uncertainty range of 0.2–0.5°C per decade)” (uncertainty range is generally meant to mean 95%)
      Second prediction – not applicable, since action was not taken; CO2 usage has grown as assumed for the “business as usual” scenario.
      “an average rate of global mean sea-level rise of about 6 cm per decade over the next century (with an uncertainty range of 3–10 cm per decade)”

      Ok now let’s connect the actual data.

      A) temperature change:
      Temperature rise per decade for 1990-2012 interval:
      (data from http://www.wolframalpha.com/input/?i=global+climate+studies+from+1990+to+2012 )
      14.2 to 14.25 or 0.05 degrees. Split over the decades we get 0.1 and -0.05 degrees.

      ALL these values fall outside of the 95% certainty interval that is presented as the scientific consensus.

      Now I have had this course “philosophy of science” that I distinctly recall teaching me that if predictions don’t pan out, the theory they’re based on is flawed. Granted I hated the course, but still.

      B) sea level rise
      data from http://en.wikipedia.org/wiki/File:Trends_in_global_average_absolute_sea_level,_1870-2008_(US_EPA).png

      I read this graph as slightly under 1 inch per decade, or ~2.5 cm per decade.

      Again this value is outside of the 95% interval that the IPCC’s scientific consensus gave.

      So, frankly, the way I see your comment is “people who live in glass houses shouldn’t throw stones”. Maybe the statistics are wrong. But the scientific consensus is wrong, by scientific rules (wrong prediction made -> your theory is wrong).

  5. “Is it possible that, on a small enough scale, the changes in direction of your boat are essentially random?”

    And, if true, you’d then conclude that it’s possible that the engine, right there in front of your eyes, doesn’t exist, and that the boat quite likely will never cross the atlantic?

    CO2 forcing is real. If you think you can disprove this basic physical fact with statistical analysis, I invite you to stare into the business end of a CO2 laser, hit the “on” switch, and report back afterwards …

    Meanwhile, I have a perpetual motion machine for sale. Make me an offer …

  6. This post – which wasn’t even making that strong of a statement, just saying that a random walk can’t be ruled out – has ignited the true believers.

    Apparently, infidels (like Mr. Asher) must be purged if they do not accept chapter and verse.

    Well done.

    • Scharfy, a random walk can certainly be ruled out on physical grounds unless you want to throw out conservation of energy. Several other posters have also pointed out a variety of other problems with the analysis – including that it violates its own assumptions. Are the concepts of conservation of energy and self-consistency too advanced for you?

  7. Interesting exercise and all, but the problem is that the data you’re looking at are a summary statistic of a huge physical system that follows physical laws like any other. This isn’t some particle bouncing around in a dish where we’re trying to figure out if it’s moving randomly or has a center point it’s attracted to. We KNOW there are physical laws that create a baseline expected temperature, and that if all the inputs that affect temperature don’t change, we CAN’T be observing a random walk with paths leading to wide dispersion from the expected mean equally likely as other paths that don’t deviate from the expected as much.

    Think statistical mechanics. States that lead to temperatures far from the expected mean have a high free energy and are unstable and unlikely. Just taking the auto-correlation into account is a weak model for this fact.

    • Matt, I’m very much afraid the negative autocorrelations don’t tell you very much. I did the calculation based on the data and I get a value of -0.30. If you had nothing but random normal deviates to start with, you would expect a value of -0.50 for the correlation. Do the math. It is the correlation between x1-x2 and x2-x3.

      • Hi Larry,

        Yes the correlation is -0.30 or -0.32 depending on whether you compare the vector of data to itself offset by one:

        cor(changes[1:129],changes[2:130])

        or compare the odd and even entries of x.

        I’m not sure how you got such a high amount for the expected correlation. Random noise tends to give a much lower number, try running this a few times:

        x = rnorm(130)
        cor(x[1:129],x[2:130])

        Also, I saw the post on your blog, please note that I never claimed this method could be extended to an indefinite number of years. In general, models (or simulations) that are good over specific ranges are the norm for what we do, not the exception, no?

        If you did a study of some students and found that the scores they got on their test could be modeled well with a straight regression line, should you reject that model because extending it to a student who studies 50 hours would predict the nonsensical result of 120% on their test?

  8. Hmm, you ran your random walk model for 130 years… We have temperature data sets going back for millennia, or longer depending on what proxies you use.

    What would your random walk model show for the temperature evolution of the earth over 2000 years? 200K years? 2 B years?

    According to your simple statistics, shouldn’t the temperature have “walked off” one way or another by a pretty large amount by now?

    I think you have shown that no one expects the earth’s temperature to follow a random walk. Good job. Now go learn some of the science and try again.

  9. Hi Matt
    I have a few comments to your code.

    Why don’t you just use column 15 in the datafile you link to? That is the annual average from December to November, and it is easy to see that it is equal to the average of the seasonal numbers you use.

    But I would actually recommend that you use the January to December average from column 14, since then you will have data for 1880 also.

    To get reproducible results of simulation it is a good idea to set the seed of your random number generator. I added this line before the first simulation:
    set.seed(123)

    I then get the fraction of trials that were more extreme than the original data to be 53.7%. You should be able to reproduce that value if you set the same seed.

    Btw, a lot of your code is on the same line above, so that line is extremely long, makes it a bit confusing to rerun.

    • Hi SRJ,

      Thanks for your recommendations about the code! I’ll try to remember to set seeds and keep my line lengths reasonable. I wish R had an easy way to do multi-line comments.

      • Hi Matt
        Another question
        In the part of the code that does the Maxent stuff, why do you use the value 74 in this line:
        ( length(finalResults[finalResults>74]) + length(finalResults[finalResults<(-74)]) ) / trials

        Shouldn’t it be the netChange that you calculate earlier?

        • Sorry about that, that line was old code that hard-coded the temp change from an initial (slightly different calculation) I did. Use netChange (which is 75.75).

          ( length(finalResults[finalResults>netChange]) + length(finalResults[finalResults<(-netChange)]) ) / trials

  10. ad Scharfy: We’re making the point that this little exercise in statistics is not relevant to a serious discussion of climate change. If you think it is relevant, then let’s hear why. ‘True belief’ isn’t the issue – having a serious discussion is. Are you up for that?

  11. A couple of people have argued that it’s invalid to use a model for 130 years of data if that model becomes meaningless when extended to a million years. Note that if this is your argument, then you are saying that most regression analyses are invalid because they cannot be extended much beyond the existing data while still retaining meaning.

    Think for a moment, do you really believe that the same simulation must work for 130 years and a million? If so, then why do we need climate change models at all when we have perfectly good weather forecasts? Why not just extend these out for the next 10 years?

    A lot of the confusion stems from the thinking that I’m trying to “model the climate.” I’m not. I’m doing a simulation based on the real data, to see how extreme the observed results are relative to the yearly changes and what might happen if this empirical data represented a distribution of possible changes. If that’s not clear, please re-read the post.

    Note that I’ve asked a number of questions in the comments to try and understand where those who disagree are coming from, but I can’t even get basic quantitative estimates of how strongly you believe in the different GW claims (thanks to Joshua for providing a qualitative estimate). Nor has anyone given any indication that failed predictions or periods of non-warming matter (thanks to Anne R. for providing a list).

    One final note: there’s an inherent tradeoff between mistaking noise for signal, and missing an existing signal. Some of the complaints about the piece take the form of “If you look at it like this you can see the signal.” I don’t doubt that’s true, but you may also see something where there’s nothing (or only a very small signal). See my post about testing your model with fake data.

    • – Note that if this is your argument, than you are saying that most regression analyses are invalid because they cannot be extended much beyond the existing data while still retaining meaning.

      Every stat course I took that touched on regression also stated one of the required limitations: Thou shalt not estimate beyond the data. Worth remembering.

    • “…you are saying that most regression analyses are invalid because they cannot be extended much beyond the existing data while still retaining meaning.”

      As Robert notes, this is indeed a standard caveat in the very first lecture in regression analysis.

      Show me a person who extrapolates regression data beyond the range of the independent variable and I’ll show you someone who is misapplying statistics. Moreover, I’ll show you someone who is misapplying statistics in ignorance of the complexities of the physical world.

      For those who don’t understand the point, in the world of regression “as within” an independent variable does not equal “as beyond” an independent variable. Ignore this dictum at your peril.

  12. Irrelevant distractions here from Matt. Your exercise is pointless if the alternative view of the (recent) climate record it offers is not a real alternative. No one needs the temperature record itself to rule your random walk out– it’s ruled out physically: absent substantial changes in the energy flows involved, large excursions of temperature (secular trends like the one actually observed) are not possible. Your model ignores that constraint, so it’s not a candidate and not a serious competitor for GW2. So it doesn’t show that there’s a significant probability that the climate change we’ve observed is the product of some kind of random process and not evidence of a substantial change in the forces that drive the climate.

    And we know you’re not trying to model the climate. That’s the problem: what we’re talking about is the climate!

    You want a response re. your various hypotheses– there’s already a pretty good one above– but I’ll bite, too:

    1: yes 2: yes 3: yes (re. my understanding of ‘radical’: the rate at which we’re forcing the climate is extreme, and the consequences of a 4 degree Celsius rise would be very serious; with a bit of bad luck a mass extinction on the order of the Paleocene-Eocene seems possible) 4: yes 5: yes (well understood radiative physics by itself makes this likely) 6: yes 7: yes (a combination of energy efficiency and aggressive deployment of low carbon energy technologies – not as big an effort as a major war, with much better payoffs) 8: yes (most side effects are improvements on present practice: health benefits from reduced air pollution, higher population densities and less sprawl occupying good farmland, …) 9: yes (I thought, from 8, that you were worried about unintended consequences? Global engineering of the kinds proposed looks very risky to me – and requires a long term, continuing effort to sustain, so long as GHG levels remain elevated, i.e. for much longer than the median lifespan of any political order the world has ever seen).

    • “And we know you’re not trying to model the climate. That’s the problem: what we’re talking about is the climate!”

      Wow, are you a troll or just really dense? I’m only in second year (biostats) and I get it. He did a simulation of global temperature and found it was indistinguishable from random noise!

      PROTIP: “quantitative” means numerical. Since you don’t give your probability for the warming claims, you must be saying “yes, I have 100% confidence in them.” That’s not how science works! You don’t even care about the data or predictions. It’s like if a drug is tested vs placebo and the statistician finds no real difference, only natural variation, and then the company says “oh, our theory says it has to work, because we understand the chemistry and you don’t; we know there must be an effect; you need to look at the data with our theory.” They say go ahead, FDA, approve it. We are 100% sure it works, no doubt! You don’t even consider what a significant effect size is. Basic stuff!

  13. You’ve put the AGW case in point form, and I’ll say that I believe in 1 – 5, and hope that the remainder are true. But as an exercise, I’ve constructed what I believe the skeptic position to be:

    1) It’s not warming, and anyway, it’s been warmer in the past
    2) If it is, it’s natural, or it will only be small
    3) If it will be big, it will be good. Warm is better.
    4) CO2 is not to blame, it is plant food.
    5) CO2 levels are not going up, and they’ve been higher in the past.
    6) If CO2 levels are going up, it’s from undersea volcanoes
    7) If CO2 levels are going up, and it is our fault, there is nothing we can do about it, because civilisation would collapse if we stopped burning fossil fuel

  14. “Think for a moment, do you really believe that the same simulation must work for 130 years and a million?”

    Yes, of course. The physics haven’t changed. The major problem is getting good data on solar output at the time, etc. If major forcings can be pinned down, geographical location of the continent[s], etc, then yes, we’d expect a good simulation to work over any timeframe.

    “If so, then why do we need climate change models at all when we have perfectly good weather forecasts?”

    Well, weather forecasting models don’t have to deal with changing forcings due to decadal fluctuations in solar output, increased forcing due to increased GHGs, Milankovic cycles, etc. On the other hand, climate models don’t have to be as concerned with precise and fine-grained data on current conditions.

    You’re not making a lot of sense. And my perpetual motion machine is still for sale.

  15. “Think for a moment, do you really believe that the same simulation must work for 130 years and a million?”

    So your position is that paleoclimatologists who do use GCMs to model paleoclimate are on a fool’s errand? Because they do, you know.

    A moment in Google reveals that much work along these lines is being done; just one example:

    http://www.geo.arizona.edu/~rees/data-models.html

    Perhaps you’d like to suggest they model climate as a random walk instead?

  16. Take a good look at the plot and play with Matt’s code and it is clear that the random walk model does not do a good job of reflecting reality.

    Yes, some of us have looked at it over a million-year time period. Clearly it fails there. But the type of changes implied by the typical simulation over the 131-year time span imply much larger temperature variations than have been seen over that time span, at least since the end of the last ice age.

    In addition, the individual simulations imply much greater variability even within the 131-year plots. Change the variable trials to something like 10 and change the col argument in the lines command to something like
    col=rgb(0, 0, 1, alpha = .9) so that the individual simulations show up much better. It then becomes readily apparent that every single simulation shows much greater variability than does the red temperature line. This is a clear indication that the model does not reflect what is happening in the real world.

    Change trials to 1 and you can see the individual simulations. Run the code a hundred times if you want. When none of those simulations, which claim to represent what was going on with the real data, show stability in the temperatures equal to or better than the actual temperature record, that is clear evidence for the failure of the model.

    With the failure of the model the claims about the weak evidence for global warming cannot be substantiated.

    The problem Matt has is that he has proposed a model but has failed to justify why it is a good model. He has made no attempt to validate that the model reflects the real world. And then goes on to make claims about what the failed model says about global warming.

    • Larry, I found your comment to be the most disappointing:

      “The problem Matt has is that he has proposed a model but has failed to justify why it is a good model. He has made no attempt to validate that the model reflects the real world. And then goes on to make claims about what the failed model says about global warming.”

      Matt has proposed that the current trend in global temperature can be explained by random chance. Using this model would almost never predict the actual outcome because it is nearly impossible to predict chance. We cannot model the outcome of 100 coin flips, no matter how well we understand the coin or the mechanics of flipping it. But we can calculate that 51 heads and 49 tails does not a biased coin make.

  17. Matt – I actually think it is an interesting academic exercise to see if you can tell whether that data was randomly generated (without any understanding of the physics). There were a few arguments that I thought were interesting that I don’t think you have addressed:

    1) the ADF test that alfesin mentioned
    2) the Bayesian moving average time series analysis that Howard mentioned
    3) the Eduardo Zorita paper that I mentioned

    “It considers the likelihood that the observed recent clustering of warm record-breaking mean temperatures at global, regional and local scales may occur by chance in a stationary climate. Under two statistical null-hypotheses, autoregressive and long-memory, this probability turns out to be very low: for the global records lower than p = 0.001, and even lower for some regional records. The picture for the individual long station records is not as clear, as the number of recent record years is not as large as for the spatially averaged temperatures.”

  18. (I didn’t mean to post this as a reply to an earlier comment, but as a new comment. Sorry for the double post)

    1) Matt, now you are inventing new terminology. “[A] structured random process with no trend?” The term random walk is not only used in the context of independent increments. What you are simulating IS a random walk. The fact that the increments are correlated does not change its asymptotics (limsup = +infty, liminf = -infty), as you can guess from looking at your plot. And again, I am not just raising this objection because of the asymptotics- but because a random walk is the wrong kind of model to use and you are effectively cheating by using it.

    *** I challenge you to explain your reasoning for choosing a random walk for your simulation, instead of something like linear regression or splines.

    2) “As with all of my posts here (see the website motto or the manifesto page), I don’t so much model (models try to see if the data conforms to platonic distributional forms) as simulate. … [The] data … provides no evidence to reject this hypothesis.”

    What you are simulating from is a model- a random walk model with non-independent increments sampled from an empirical cdf. Your hypothesis is that the data are well-represented by this model, and you failed to reject that hypothesis. Now, leaving aside the point above that this is the wrong kind of model and that makes it unduly difficult for the data to cause rejection of the hypothesis, as others have pointed out we could still reject this hypothesis using a different statistic. The statistic you chose was just the temperature difference at the end of 130 years. That statistic is not a sufficient statistic for this model (by far- it loses a huge amount of information).

    *** I challenge you to consider some other statistics and provide p-values. For example, consider the integral of the square of the sample path, and report a p-value for the probability of observing a smaller square integral than the data.

    (As an aside: I think you undervalue models. Of course they are platonic, but they serve a purpose for which we don’t yet have anything better. Language serves to represent things so that we can reason about them and communicate, but nobody would mistake a definition of a chair in a dictionary for an actual chair. Similarly, careful modeling allows us to reason about things and understand them. And understanding is vitally important, it is the difference between a scientist who actually knows stuff about the real climate and a hobbyist blindly writing code and producing graphs that have no connection whatsoever to the thing he thinks he is analyzing)

    3) “At this point, none of the proponents of the consensus views on GW have been willing to state their degree of belief in the different GW claims.”

    Okay, I’ll bite harder this time:

    1-4 all greater than 99%. I would say 100*(1-epsilon)%, but I hesitate to be that certain about a topic I don’t know much about based solely on the expertise of others, and I guess the climate may be a sufficiently complex system so that it’s possible the experts have missed something.

    The remaining claims are stated in ways where I am highly certain about some aspect of them and uncertain about another. For 5, I would say greater than 95% that humans are an important cause, and I have no clue about whether they are the MOST important cause. For 6 I would say 100% that we can (hypothetically) stop the trend and maybe 80% or so that we could hypothetically reverse it (this is an off-the-cuff guess).

    For 7, I would say less than 20% if “realistic” is interpreted to mean “without drastic change from the current political system.” But this is entirely a failure of politics, not economics or science or will of the people.

    For 8, I guess less than 10% for unintended negative consequences being more damaging than warming itself. And I think this guess is conservative- not a tight upper bound.

    For 9, say greater than 80% (again just a ball-park).

    These are all very very subjective degree of belief statements and I would probably say different numbers if I were to answer again at another time.

    *** I challenge you to state your own degrees of belief in the list of GW claims or any other GW-skeptic claims that you prefer over them.

    4) “None have pointed to an alternative data set that’s more convincing (or any other data set). None have posited an alternative method that doesn’t start with an assumption of a trend then require that it be disproved. … None of the proponents have addressed the points I raised about degrees of falsifiability, biases (including our tendency to see patterns and the anthropic principle)”

    I guess you haven’t read all the comments or something (there are quite a few of them). Of course people have suggested other data, the berkeley link for one. And other models, like the ones I mentioned above, were also brought up before (regression). The fact that regression involves a mean function does not mean it is assuming a *nonzero* trend. The question is whether the regression function is increasing over time, and you could conceivably answer “no” if the data allows that.

    I specifically mentioned falsifiability, along with other commenters above. The theory of GW makes many predictions, all of which can be falsified. Or you could try to falsify any of the many intermediary scientific results that GW rests upon, such as the laws of thermodynamics; that would call into question all the models built using those laws. I also specifically mentioned biases, asking if anyone has ever demonstrated whether or not the scientific community is more likely to accept studies which confirm its existing theories or ones which disprove or establish other theories. My hypothesis is that there might actually be a bias toward constantly changing things, even unnecessarily, because of the “publish or perish” imperative.

    • – My hypothesis is that there might actually be a bias toward constantly changing things, even unnecessarily, because of the “publish or perish” imperative.

      Historically, more the other way. Three points.
      1) Einstein’s theories were not rapidly accepted, since they conflicted with Newton, and everyone else.
      2) Einstein resisted quantum mechanics, because he didn’t believe in the premise (God playing at dice, and all).
      3) Standard dictum: data doesn’t invalidate a theory, but another theory does.

      3) may sound odd to the youngsters, but it’s been an article of faith for hundreds of years. Science doesn’t toss a theory based on some contradictory data. A theory gets tossed when a proposed theory explains both the data explained by previous theory as well as data only explained by the proposed theory. This is why it’s the “theory” of evolution. Saying “God did it” for either evolution or climate change doesn’t cut it.

      Publish or perish has led, given the explosion in population which includes Ph.Ds, to a lot of dancing on pinheads. Fundamental theory disputes largely ignore all of that.

      • Your first point is not correct.
        According to Wikipedia, special relativity was widely accepted within 6 years of publication:
        Eventually, around 1911 most mathematicians and theoretical physicists accepted the results of special relativity. …

        And experimental confirmation of general relativity was coming in as early as 1919, only four years after Einstein’s final version in 1915.

        Your second point is irrelevant, a physicist is not physics, and so what if Einstein wouldn’t accept QM?

      • “1) Einstein’s theories were not rapidly accepted, since they conflicted with Newton, and everyone else.”

        On the contrary, within five years of publishing his paper on special relativity, “most mathematicians and theoretical physicists accepted the results of special relativity”. (http://en.wikipedia.org/wiki/History_of_special_relativity#Early_reception)

        Physicists knew Newton’s theories were wrong long before then due to experimental evidence and Einstein’s work was the culmination of much effort to resolve that problem.

        “2) Einstein resisted quantum mechanics, because he didn’t believe in the premise (God playing at dice, and all).”

        He did, however, receive the Nobel Prize for his 1905 paper on the photoelectric effect, which effectively *established* quantum theory. What he *resisted* was the premise that quantum theory is complete and that there are no local hidden variables that, if known, would explain away the apparent randomness. It wasn’t until 1981 that experiments based on John Bell’s 1964 theorem convincingly ruled out (local) hidden variables, both dates well after Einstein’s death. His “resistance” is therefore a lot more subtle than you suggest.

        “3) Standard dictum: data doesn’t invalidate a theory, but another theory does.”

        That depends on both the data and the theory in question. A well-established theory that has stood the test of time and made countless correct predictions isn’t going to be tossed overnight because of one experiment that contradicts it. Remember “cold fusion”, an experimental result that seemingly overturned known physics?

        Furthermore, even though a theory is known to be wrong — as both the Standard Model and Relativity are known to be — doesn’t mean the theory isn’t *useful*. We just have to know under what circumstances each is valid, and tread very carefully when dealing with situations where both apply (e.g. singularities).

  19. The conclusion that a statistical analysis based on a random walk fails to find evidence of global warming is correct, but it seems incomplete. How strong is this conclusion?

    What is missing is an experiment that applies the same method to data with a known trend and the same variance.

    For instance, if we measure the temperature in a thermostat that keeps it between T1 and T2, the temperatures will go up and down over time. Random walk permutations will go far beyond T1 and T2. If T1 and T2 are changing over time, how extreme would the trend have to be for this analysis to detect it? One can use synthetic data generated in the same R program with the same variance. I am afraid that for all but a few extreme trends this analysis will come to the same conclusion: no evidence of a trend. But again, such an experiment is missing.
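    A minimal sketch of that missing experiment (not Matt’s code; the 0.11 figure is the standard deviation of the yearly changes quoted in the post, and the trend size is a hypothetical choice):

    # Generate synthetic 131-year records with a known trend and the observed
    # spread of year-over-year changes, then apply a resampling test of the same
    # flavour (recentre the changes, bootstrap the net change) and see how often
    # the trend is detected.
    set.seed(42)
    n.years   <- 131
    sd.change <- 0.11    # sd of yearly changes, from the post
    trend     <- 0.005   # hypothetical true warming per year (deg C)
    trials    <- 1000

    p.values <- replicate(200, {
      jumps  <- rnorm(n.years - 1, mean = trend, sd = sd.change)
      actual <- sum(jumps)                      # net change in the synthetic record
      null.jumps <- jumps - mean(jumps)         # strip the trend for the null
      sims <- replicate(trials, sum(sample(null.jumps, replace = TRUE)))
      mean(abs(sims) >= abs(actual))            # two-sided p-value
    })
    mean(p.values < 0.05)                       # rough power at this trend size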

  20. John Rogers (December 6, 2012 at 6:26 am)

    “He did a simulation of global temperature and found it was indistinguishable from a random noise!”

    No he didn’t, John. He made 1000 runs of a random walk model that produced a large envelope of outputs, one of which (or a subgroup of which) could be selected as being coincidentally similar to the actual progression of surface temperature.

    Your “PROTIP” example is not a good analogy, though it does perhaps have an ironic relationship to Matt’s analysis. The FDA wouldn’t approve a drug based on lack of efficacy relative to a placebo. On the other hand (and we know this happens occasionally), a drug might be approved under circumstances where the company was aware of a problem with respect to side effects, or had given the pretence of efficacy by using trials with flawed designs. The key point in assessing drug efficacy is Hard Unbiased Information.

    The same applies to understanding the Earth’s temperature evolution. If we’re interested in attribution/causality, we obviously take into account the fact that the troposphere has warmed while the stratosphere has cooled (indicating an enhanced greenhouse effect), that the surface warming is associated with the vast and progressive increase in ocean heat content (i.e. not a “random walk” at all), and all of the other physical signatures that inform our understanding.

    No doubt one could also model the progression of a quantitative parameter of a person’s cancer (for example) as a “random walk” but you’d be unlikely to convince knowledgeable people that cancers are the products of the accumulation of random fluctuations of cell mass without causality.

  21. It seems to me that “average yearly surface temperature” is not just an abstract philosophical notion, but rather a physical property. Any physical property is dependent on other physical properties, and therefore cannot be analyzed in isolation.
    How does this “random walk” model of yours account for other related properties? Energy in the system comes to mind, as energy and temperature are very closely related. If the temperature increase is just a random walk, where did the required energy come from? Was it the energy going on a random walk, which was the actual basis of the temperature change? Is it possible for energy to randomly change?

  22. Matt,

    Thanks for the comments you posted over on my blog. I responded directly to those there, and if someone is interested in that exchange I trust they can find their way over there.

    But I wanted to expand on some of what has been said, and to say that I think the issue of going beyond the original 131 years is getting in the way of understanding the deficiencies in your model.

    Others have taken the original 131 year data set and applied a CADF test and found that it rejects a claim that the series is a random walk. I would think that would also make it clear that modeling using a random walk to see if the trend is “significant” would be ruled out.

    Let me take it from the other side. You have created 1000 simulations of the time series based on the random walk model, so you have a distribution of potential outcomes if in fact the random walk were a good model. So I ask the question: is it reasonable to conclude that the actual time series arose from the simulated distribution?

    In very simplistic terms: if I assumed that a variable was from a N(0,1) distribution (your simulation) and then obtained an observation x (the actual time series), can I conclude that x came from that distribution? If x = 1.2 then certainly it could have come from there, but if x = 1000 it is very unlikely to have come from there.

    Your set of simulations provides a base distribution of outcomes. As an evaluator, I want to focus on the variability of the time series. My actual measure is the difference between the maximum and the minimum temperature during the 131-year period.

    For the base data this is

    max(theData$means)-min(theData$means)

    which yields a value of 107.

    Then I calculate this same value for each of the 1000 simulations. To do this I add the following code to your program.

    set.seed(123)

    This way you can duplicate my numbers if you wish.

    Just prior to the for loop add the two lines:

    jump.max <- rep(0,trials)
    jump.min <- rep(0,trials)

    I use these variables to capture the spread of the temperature range for each simulation.

    Inside the for loop add the lines:

    running.sum <- cumsum(c(0,jumps))
    jump.max[i] <- max(running.sum)
    jump.min[i] <- min(running.sum)

    Then, outside the for loop, add:

    sum((jump.max - jump.min) > max(theData$means) - min(theData$means))

    This will give a count of the number of cases in the simulation where the difference between the maximum and minimum temperature is greater than 107, the range in the original data. The result I got is 963.

    That is, over 96% of the simulations had greater variability than the original time series. By this measure, that is pretty conclusive to me that the original time series does not follow the distribution that results from your model.

    Again, I am forced to reject the hypothesis that a random walk models the temperature over the last 131 years.

    • I see I mixed up my code a bit. Must have cut somewhere at the wrong time.

      In the for loop the code is:

      running.sum <- cumsum(c(0,jumps))
      jump.max[i] <- max(running.sum)
      jump.min[i] <- min(running.sum)

      and, outside the for loop:

      sum((jump.max - jump.min) > max(theData$means) - min(theData$means))

      It is this last line that gives the value of 963.

      • Hi Larry,

        I think the comment parser at wordpress messes up some of the code.

        Did you try adding your code to the MaxEnt version, which bakes the covariance back into the data? I just did this and the number of trials which exceeded the observed range was about 80%, still perhaps uncomfortably large, but much less concerning than 96%. I used set.seed(345) and 1000 trials for this.

        Your point about the range of the trials being similar to the original is interesting though and worth more consideration. Because the original data dips so little before heading up, the total range isn’t much above the final result (0.76). Thus, anytime the simulation yields a more extreme result, it’s likely to also have a larger range, no?
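        One quick way to check that intuition, as a sketch (only theData$means is assumed; the resampling is illustrative rather than the exact scheme in the post’s code):

        trials    <- 1000
        jumps     <- diff(theData$means)               # observed year-over-year changes
        endpoints <- numeric(trials)
        ranges    <- numeric(trials)
        for (i in 1:trials) {
          path <- cumsum(c(0, sample(jumps - mean(jumps), length(jumps), replace = TRUE)))
          endpoints[i] <- path[length(path)]           # final simulated change
          ranges[i]    <- max(path) - min(path)        # simulated max-min range
        }
        cor(abs(endpoints), ranges)                    # expect a clearly positive correlation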

        I must admit that the CADF details are new to me. At this point it’s clear that I shouldn’t have used the “random walk” term, since right away people start thinking AR and ARMA and Gaussian noise and all these other things and then get upset with the idea of applying this to climate, even over (relatively) short periods of time. Unfortunately, calling the method a “bootstrap of centered empirical temperature changes adjusted for observed data structures” probably wouldn’t have helped with comprehension.

        • Sorry should have begun by saying that I understood what you were doing with the code and implemented it. Here’s what I used:

          running.sum = cumsum(c(0,jumps))
          jump.range[i] = max(running.sum) - min(running.sum)

          inside the loop, with:

          jump.range <- rep(0,trials)

          outside the loop.

        • Matt, I did all my coding and testing on your first set of data before you played with the adjustments for the negative correlations.

          I had two reasons for that. First, you had said the adjustment did not make any fundamental differences in your conclusions. And second, I never saw the need for doing the adjustment, due to the problems I saw with the data model. I also expected a negative correlation. You are actually calculating the correlation between x(1)-x(2) and x(2)-x(3). If the observations are random then the correlation will be negative.

          Yes, the 80% number is not as concerning as the 96% figure I got. Keep in mind though that if there is a trend in the data then the range in the actual data is going to be higher than in a stationary climate situation. The way you build your simulations, there is no “real trend” except what a random walk tends to create. I view the 96% and the 80% as biased downward due to what I see as a real trend in the actual data set.

  23. Frustration. I know I cut and pasted correctly the second time. One last try. The jump.min code is the same as the jump.max code except that it uses the min function.

    The last line of code is outside of the for loop. It subtracts jump.min from jump.max to get the range and counts the number of times that value is greater than 107.

    Hope that works….

  24. Firstly, for an R post nice code and graphs and everything.

    However, for a GW post – shame on you. The title is incredibly misleading. You redeem yourself by pointing out that YoY global averages are meaningless, yet everything prior to that can be used by ‘GW deniers’ (read: big oil companies, governments not wanting to spend money etc. etc.) as ‘evidence’.

    You already state that you don’t have the required background to be claiming that there is only a weak basis for global warming, yet make that claim anyway – shame. How can you honestly expect to get a decent idea of what is going on using only an average temperature for the whole planet per year (“I’m only considering the yearly average temperature”)? That average is almost meaningless on its own, not to mention that the data from the early years is far less reliable.

    GW causes, among other things, more extremes in temperature, e.g. colder winters, warmer summers and more intense storms; these things are lost when taking an average.

    The scientific evidence (that you admit to not even knowing) is overwhelming; it is not a bunch of people following the pack, because that is not how science works. A real science article would reference the rising ocean levels, increasing CO2 levels and all the other evidence that shows global warming is real. Cherry-picking one small piece of a very large puzzle is not real science.

    In addition, your mention of Pascal’s wager is not apt here; in that wager what you lose is a lifetime of pleasure (a pretty big deal to most people). In this argument, if we act as though GW is real and man-made, and it turns out either not to be real or not man-made, then what do we actually lose? Nothing: money gets spent making the planet much greener, with less cancer-causing pollution, etc. To claim that would be a waste of time and money is incredibly short-sighted, criminal almost.

    • – How can you honestly expect to get a decent idea of what is going on using only an average temperature for the whole planet per year (“I’m only considering the yearly average temperature”)? That average is almost meaningless on its own, not to mention that the data from the early years is far less reliable.

      As the canard goes, “one foot in a lava stream, the other on a glacier; comfortable on average”.

    • The mention of Pascal’s Wager is valid, but lacks the broader context of the precautionary principle. This is the logic of the environmental movement. See my comment on this from Dec 5th.

      Your argument is flawed in that it assumes “doing something” has no costs and no risks (no black swans). You’re assuming “doing something” can only have good outcomes. Your position is that the environment trumps everything else, and you use the precautionary principle to smuggle in your desire for “doing something” (whatever that may be).

      • So, as you see a bad collision approaching, you would refuse to step on the brakes because you couldn’t be sure that wouldn’t somehow make it worse?

        We have (if you’re prepared to actually look at the evidence seriously) powerful evidence that our GHG emissions pose a terrible threat to the climate and oceans we depend on. And you think doing something about it is a bad idea because something or other just might go wrong? That’s some serious crazy in my book.

        • You’re assuming you know the climate as well as you know your automobile. You’re assuming you know what the “brakes” are and that they will only stop or slow you down, with no other effect. You’ve not even suggested what this “brake” would be, so it’s impossible to assess the risk or costs of “doing something.”

          • I made no such assumption– maybe you’re unfamiliar with the use of metaphor? There are plenty of obvious things to do, beginning with rapid, ongoing reductions in our use of fossil fuels. Energy efficiency (for vehicles, buildings, industry), development of non-fossil fuel based energy sources (already competitive in many applications) and quite possibly development of next-generation nuclear energy systems (thorium, for instance, is a safer fuel cycle, very difficult to divert to weapons production). All these things are well within current technology, and the easy ones are economically superior to current practice already (even ignoring the externalities associated with fossil fuels).

            Instead, we continue to insist on subsidizing fossil fuel development (a mature and very profitable industry with reserves large enough to commit us to 5 or 6 degrees C of warming, perhaps as early as the end of the century). Our agricultural systems won’t survive that in anything like their current form– and acidification of the oceans threatens them as a food source too.

            Impossible to assess? The only alternative to burning fossil fuels until they’re all gone is to jump down a dark well of mystery? Why not actually look at the plans and proposals out there? The ‘wedge’ strategy (combining conservation with multiple new energy sources over time); proposals to cut vehicle fuel consumption by 50% in the next 10-15 years (well within current technology); proposals for thorium fueled reactors; continued improvement in wind power systems… the notion that it’s all just unknowable is pure fear-mongering.

  25. Once again, you did not make any mistake by calling it a random walk. IT IS A RANDOM WALK (with correlated increments). Everything that is true generally about random walks (without assuming independence of increments) is true about your model.

    Also, the max-min statistic gets more information but still is not even close to being a sufficient statistic. Try the integrated square, or integrated absolute value. Just sum(x^2)/131 where x[i] = temp at time i.
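    A sketch of that statistic against the simulations (only theData$means is assumed, as elsewhere in the thread; the resampling is illustrative):

    sq.stat <- function(path) sum(path^2) / length(path)

    obs.path <- theData$means - theData$means[1]   # anchor the observed series at zero
    obs.stat <- sq.stat(obs.path)
    jumps    <- diff(theData$means)                # observed year-over-year changes

    trials <- 1000
    sim.stats <- replicate(trials, {
      sim <- cumsum(c(0, sample(jumps - mean(jumps), length(jumps), replace = TRUE)))
      sq.stat(sim)
    })

    mean(sim.stats < obs.stat)   # p-value for a smaller integrated square than the data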

    • Joshua,

      The sum of the squared deviations looks to me to be a better measure than the range statistic I used. It better captures what is going on for the entire time series.

  26. As noted, this analysis is of the land and sea surface records, which have been averaged together. This is the flaw of one foot in a lava stream, the other on a glacier where on average one is comfortable. But it’s actually worse than that.

    Averaging them together is like averaging the temperature of a balloon and a bowling ball. They have the same volume, but they’re not equal. The density of water is much greater than that of air. The temperature in my bathroom is not the average of my cold bath water and the air coming from my hair drier. Air requires much less energy to warm than water. The problem is assuming temperatures are equal measurements of energy.

    I have a distrust of the land-based temperature records. First, these records are affected by land management, urban development, tall buildings, etc. Knowing the “normal” temperature from these areas is unlikely. We also have an alternative state-of-the-art system (USCRN) unaffected by such factors, and it has shown that the commonly used land-based data you’ve used is significantly affected by a warming bias.

    Secondly, these land based records have a history of being quietly adjusted (at least I haven’t found an explanation). These adjustments would create a warming trend even if the source were random noise. The past has been made cooler, and the more recent years have been made warmer.

    Simply compare the land and the sea surface records from 1880 to the present. Presumably they should be similar. Yet you’ll find the land temperatures since 1980 have been warming significantly more than those over the sea. That alone causes me to question the reliability of land-based temperature data.

    I prefer to examine the sea surface temperatures in isolation. They are not affected by factors that occur on land. Plus, the air can move more freely there, making it more homogeneous. It’s less prone to variability. More importantly, the oceans are the planet’s main store of energy. The temperatures of the ocean waters and the air are not equal, and cannot be averaged together. Unfortunately we don’t have long-term records of the ocean’s temperature, but we do have air temperatures at the sea’s surface (which is at least something better than our land temperature records).

    I’m not a statistician and I cannot apply your code to the available data. The data is also provided in more detail. You can get monthly temperatures of the sea surface going back to 1880, and even for specific long/lat grids.

    In my amateurish analysis of sea surface temperature I find a warming trend that began in 1909 rising 0.68 degrees C in 35 years. Then temps dropped dramatically, for some inexplicable reason, in just four years erasing nearly half of that warming. Presumably this was natural and not due to AGW since the level of CO2 was not yet significant.

    To be generous, say the next warming trend begins at the coldest point in 1948 and continues to the present. There has been less warming over a longer period of time (50 years to climb 0.6 degrees C). In other words, the warming that began in 1909 was a naturally occurring phenomenon that exceeds what is claimed by AGW theory. The conclusion is that the current warming isn’t unusual, even with the significant increase of CO2 in recent years (1/3rd has been released since 1998).

    Hopefully someone with the statistical skills I lack can better analyze the data. It can be found here:

    http://climexp.knmi.nl/select.cgi?id=someone@somewhere&field=hadsst2

    http://www.metoffice.gov.uk/hadobs/hadisst/data/download.html
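    As a starting point for that comparison, a minimal sketch (the data frame sst with columns year and temp is hypothetical; the files at the links above need their own parsing first):

    trend.per.year <- function(d, from, to) {
      fit <- lm(temp ~ year, data = subset(d, year >= from & year <= to))
      unname(coef(fit)["year"])      # fitted warming per year over the window
    }
    trend.per.year(sst, 1909, 1944)  # early-20th-century warming episode
    trend.per.year(sst, 1948, 2011)  # later warming, for comparison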

  27. Quick question:
    Since you used the observed YoY changes over the whole dataset to set your parameters, aren’t you basically just re-creating the dataset with some stochasticity? You’re essentially saying that the observed trend is no different from a randomized version of the past 130 years.
    You should probably use the first 50 years or so to set your parameters and then see if the rest of the dataset deviates from that model.
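    A minimal sketch of that out-of-sample check (only theData$means is assumed, as in the rest of the thread; the resampling scheme is illustrative):

    train.jumps <- diff(theData$means[1:50])            # yearly changes, first 50 years
    test.change <- theData$means[131] - theData$means[50]

    trials <- 1000
    sims <- replicate(trials,
                      sum(sample(train.jumps - mean(train.jumps),
                                 131 - 50, replace = TRUE)))

    mean(abs(sims) >= abs(test.change))                 # two-sided p-value under the null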

  28. The analyses you present would make some sort of sense for a purely observational study of an unknown phenomenon. I’m an epidemiologist and I do this sort of analysis all the time.

    In my spare time I do a bit of demography, where the ‘physics’ of the system are quite well understood. People are born, they migrate, and they die. All my work on demography is based on this well understood model.

    This is, I gather, how people who know the subject analyse climatology data. Please understand that you can no more analyse climate data in splendid isolation than I can analyse demographic data while ignoring the processes that lead to my populations.

    Anthony
