Climategate Part Three: What Does the Data Really Show?

My last couple of posts (here and here) have focused on some of the details that came out of the Climategate scandal.  Specifically, I’ve presented evidence from the leaked emails and source code showing that:

  1. Various questionable techniques and datasets were being used in the CRU’s models.
  2. Source code was written in such a way as to specifically weight and outright change data to achieve desired effects.
  3. There was an active conspiracy to prevent the raw temperature data from being released to outsiders.  This included, but was not limited to, intentionally deleting content after Freedom of Information Act requests had been initiated.
  4. Ethically questionable steps were taken which did subvert the scientific process.  This included controlling which papers got published (only those sympathetic to ‘the cause’), undermining journals and editors who published papers critical of man-made global warming, and direct attacks against ‘skeptics’.

The Question Remains

But the question remains: Is there actually man-made global warming taking place?

Just because we know that scientists in the paleo-climate arena have been manipulating the data and public opinion does not mean we can conclude that global warming is a myth.

One thing is clear: In our search to answer this question we must be extremely wary of the type of data we’re willing to consider.  In particular we need to look out for:

  1. Any temperature data that is derived (i.e. proxy data like tree-rings and ice cores).  These proxies are provably affected by factors other than temperature.  Refer to part one of this series for a discussion on the tree-ring divergence problem.
  2. Any data that has been statistically manipulated.  We have already scratched the surface of how statistical analysis can be used to serve agendas.  This is especially true in the paleo-climate world where principal-component, regression, and other complex transformations are applied to data.  See A.W. Montford’s excellent book ‘The Hockey Stick Illusion: Climategate and the Corruption of Science’ for a detailed discussion of exactly the kinds of statistical games that get played in paleo-climate circles.

Raw Temperature Data

So what are we left with?  Well, the obvious thing that remains is the historical raw temperature data from various impartial meteorological institutions.  These raw datasets are readily obtained and have results stretching, in some cases, all the way back to around 1850.  These datasets are a good starting point to understand if, at least in the context of recent history, current temperatures are abnormal.

In all cases except where noted, the monthly mean temperature is used.  This is defined simply as the average of the maximum and minimum temperature at a location for the days of the given month.  This measure gives us a simple, reliable representation of the average temperature for that location for the target month.  My Excel spreadsheet with all data and charts can be downloaded here.

Also graphed (in red) is a 12 month moving average trend line.  This trend line represents the annual mean temperature and makes it much easier to pick out any overall increase or decrease in the series.
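For readers who want to reproduce these series outside Excel, the two calculations above can be sketched in a few lines of Python.  This is a minimal sketch with my own function names and hypothetical inputs, not code from the spreadsheet:

```python
def monthly_mean(daily_max, daily_min):
    # Monthly mean temperature: the average, over the days of the month,
    # of each day's (maximum + minimum) / 2.
    days = list(zip(daily_max, daily_min))
    return sum((hi + lo) / 2 for hi, lo in days) / len(days)

def moving_average(series, window=12):
    # Trailing 12-month moving average: each point is the mean of the
    # current month and the 11 months before it, smoothing out seasonality.
    return [sum(series[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(series))]
```

A trailing window is one common convention; centred windows are also seen in climate charts, but either way the point is to average out the seasonal cycle so the long-term drift is visible.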

First up, the temperatures from my hometown of Ottawa, Canada (courtesy of Environment Canada):


Next, let’s head over to the Pacific coast and Vancouver, Canada (courtesy of Environment Canada):


Unfortunately, the Environment Canada data doesn’t go back before the 1930s.  Let’s look at a random sampling of datasets which go back further, starting with New York, New York (courtesy of NOAA):


Note: The unit here is unknown.  This is the raw data from NOAA, and it is clearly not Fahrenheit.  Nonetheless, it does provide a nice trend line against instrument data stretching back into the 1800s.

Next, let’s head across the Atlantic and look at Oxford, England:


Finally let’s take a look at Melbourne, Australia (courtesy of the Australian Bureau of Meteorology):


Raw Temperature Data Conclusions

What, if anything, can we conclude from the raw temperature data?

Well, I think we can see that, in general, there has indeed been a slight increase in average temperatures over the last few decades.  By playing around with my charts and applying linear trend-lines, I quantify this as about one degree Celsius over the entire 160-year period back to the 1850s.  This is relatively consistent between locations and thus, I conclude, represents a small but real global warming over that period.
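The linear trend-line figure quoted above comes from Excel, but the same least-squares slope is easy to compute directly.  Here is a minimal sketch under my own naming, with illustrative data rather than the actual station series:

```python
def linear_trend_slope(temps):
    # Ordinary least-squares slope of temperature against time index:
    # slope = sum((x - x_mean) * (y - y_mean)) / sum((x - x_mean)^2)
    n = len(temps)
    x_mean = (n - 1) / 2          # mean of the indices 0, 1, ..., n-1
    y_mean = sum(temps) / n
    num = sum((i - x_mean) * (y - y_mean) for i, y in enumerate(temps))
    den = sum((i - x_mean) ** 2 for i in range(n))
    return num / den

def total_change(temps):
    # Warming implied by the fitted trend over the whole series:
    # slope (degrees per step) times the number of steps spanned.
    return linear_trend_slope(temps) * (len(temps) - 1)
```

For a monthly series the slope comes out in degrees per month; multiplying by 12 gives degrees per year, and multiplying by the full span gives a total-change figure of the kind discussed above.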

Take a Step Back

Now take a step back.  What have we actually shown?  Only that temperatures over the last couple of decades have increased slightly above the average for the previous 150 or so years.

From this limited data we can certainly not conclude that man-made carbon dioxide emissions are causing global warming.  In addition to the fact that we can in no way establish any type of causal relationship from such a limited study, there are other known factors that can skew temperatures.

For example, there’s an effect known as the urban heat island (UHI) effect.  UHI is a well-documented phenomenon whereby metropolitan areas tend to have higher temperatures than their surroundings because they are built from materials with high heat retention (concrete, for example).  Given that all my datasets are for urban centres that, over the span of the data, became more built up, we would expect the UHI effect to cause some of the observed temperature increase.

And let’s not forget the influence of the sun (which I will focus on in a future post), the Earth’s orbital variations (obliquity, precession, and the other Milankovitch cycles), volcanic activity, and so on.

The point is there are many things that can cause temperatures to fluctuate over time.  We can’t possibly conclude that the very small increase we’re currently observing is the result of man-made carbon emissions.

Are Current Temperatures Unprecedented?

The logical continuation of this discussion is to look at whether the current temperatures have any precedence in the relatively recent history of our planet.

Here we head into dangerous territory since we must go beyond the instrument record and look deeper into our planet’s past.  In doing so we must guard against relying solely on proxy data since we’ve seen how such data is easily misused.

With that caution in mind let’s look at an obvious candidate: The Medieval Warm Period.

The Medieval Warm Period and Little Ice Age

One of the many major criticisms of the infamous hockey stick chart from Climategate was the absence of the Medieval Warm Period (MWP).  This was a huge departure from the established consensus within the scientific community and the opinion of the IPCC (the UN’s Intergovernmental Panel on Climate Change) at the time.  Here’s how the IPCC conclusions evolved over the years to eliminate the previously accepted MWP:


As you can see, with the release of Michael Mann’s hockey stick, the MWP was expunged from the IPCC temperature record.  The effect is obvious.  With the MWP included current temperatures do not appear abnormal or unprecedented.  But, with the MWP removed, current temperatures appear much more concerning.

So which is correct?  Did the MWP really occur?

I think there are many obvious data points indicating that the MWP was a real event.  Its removal from the temperature record by Mann et al., and the subsequent adoption by the IPCC of this bastardization of climate history, is, in my mind, an amazing example of statistics and obfuscation being used for political and personal motives.

On the statistical side of things, the previously mentioned book by Montford goes into all the juicy details.  I would highly recommend this book for anyone who wants to really understand the game that Mann and others on ‘the hockey stick team’ played.  After reading this book there is no doubt in my mind that Mann’s hockey stick chart was based on completely flawed data and intentionally flawed analysis of that data to remove the MWP and exaggerate current temperature trends.  This is not opinion but proven fact.  There is simply no way to spin Mann’s techniques in a favourable light.  Indeed, understanding the machinations Mann had to go through to abolish the MWP from his chart is itself strong evidence that the MWP is real.

There is much evidence in the scientific literature that supports the existence of the MWP as well as the Little Ice Age (LIA) that followed.  A great summary of this evidence can be found here.

As a further point of evidence supporting the MWP and LIA, I would point to the Viking colonization of Greenland, Iceland and parts of Newfoundland:


We all learned about the Vikings in school.  It’s historical fact that the Vikings did establish colonies on Greenland and Iceland that coincided with the MWP only to abandon these colonies with the onset of colder temperatures during the LIA.  It’s also interesting to see in the above chart the divergence problem I talked about in my first Climategate post.

Given the vast body of evidence supporting the existence of a MWP and LIA, I do not believe the existence of these periods of temperature fluctuation can be refuted.  Thus, Mann and the current IPCC temperature reconstructions that do not show a pronounced MWP are, in my mind, highly suspect and must be based on flawed data, flawed analysis or a combination of both.

Pre-civilization Temperature Record

Let’s step back still further into the temperature record.  In doing so let’s remain mindful that we’re relying on temperature proxies.  But, that said, what’s the current understanding of our planet’s recent history?

Here’s a look back 800,000 years courtesy of NASA (using antarctic ice-core data):


Going back further into the temperature record:


I think we all remember something like this from our high-school textbooks.  It fits with what we all know intuitively – that it was much warmer during the age of the dinosaurs than today.

This chart also shows us that large scale temperature fluctuations are totally normal for our planet.  In fact, we’re currently at what would be considered the bottom of the long term temperature range (far below temperatures of the past).

Clearly the previous climate changes were not anthropogenic since we, Homo sapiens, weren’t around.  The inescapable conclusion must be that even in the absence of human influences, our planet’s temperatures are volatile.


The conclusion that temperatures have increased slightly over the last hundred years (about one degree Celsius) is supported by the raw temperature data.

The conclusions that this warming is unprecedented and man-made are not supported by the data.

We know that there were both a Medieval Warm Period and a Little Ice Age in the last millennium.  We know that the temperature record for the last million, or hundreds of millions, of years reveals cyclical temperature variations which obviously could not be man-made.  Temperatures during these periods regularly exceeded, by a large margin, anything we see today.

These conclusions do not give us the right to pollute our planet with impunity.  But they do call into question the idea that global warming is the most pressing danger mankind faces.  We have been misled and outright lied to.  The science has been corrupted.  We need to question the conclusion of global warming and cast a critical eye on the policies being put in place on the basis of perhaps the greatest con in human history.

Project Souvenir Part Two: Anything for National Security

In this post we’ll apply the analysis discussed in part one to the open-source intelligence available about Project Souvenir, with the goal of determining whether the evidence that it may be a false flag operation, carried out by elements within the national security establishment, warrants an official investigation.

Sting vs. Inside Job

Many will claim Project Souvenir is a sting operation and that such an operation is not a false flag.  But sting operations are just a few steps removed from entrapment and ultimately sit on the spectrum of inside jobs.  The security establishment likes to sucker people into shades of grey.  How about we don’t embark down the slippery slope that leads to full-on corruption?  There’s a reason countries like Sweden do not allow stings: they are fraught with ethical concerns.  In this author’s opinion, calling it a sting operation does nothing to dissuade me from the conclusion that it is ultimately an inside job / false flag operation.  It is too easy to coerce some dumb or drugged-up patsy into illegal activity and use them as a scapegoat or cover!

Cui bono? (To whose benefit?)

Whether staged or not, it is indisputable that the agencies and departments whose programs and funding will be legitimized and/or boosted from security breaches and terrorism have a clear benefit and thus a motive to potentially create, aid and puff up threats and incidents. With Project Souvenir we see the following entities and programs taking benefit:

  • RCMP
  • CSIS
  • Canada Border Services Agency (CBSA)
  • National Security Information Network
  • Integrated National Security Enforcement Team (INSET)
  • War on Terror
  • Domestic surveillance
  • In general, government authority

Following are five primary examples of grandstanding, posturing and directing of benefit from Project Souvenir by people in the government:

1. B.C. Premier Christy Clark took the opportunity to puff up the threat:

“My suspicion is they wanted to cause as much damage as they possibly could, because they want to be able to take control of our streets, our cities, our institutions, and we will not allow that.”

We also see general tugging at the heart strings of Canadian patriotism with Clark praising the work of police investigators saying that terrorists

“will not succeed in tearing down the values that make this country strong”.

Analysis: This reminds me of post-9/11, Bush-era War on Terror rhetoric, and it is certainly a continuation of the War on Terror narrative, to say the least.  They hate our freedoms and our democracy, everyone… let’s stand together in support of the great leadership of our “officials” and blindly trust those who operate under cover of secrecy, courtesy of public funding.  As well, we should not forget that John Nuttall and Amanda Korody are alleged culprits at this stage and have not been convicted!  I find it incredible that a high-ranking public servant can publicly speak as if the guilty verdict has already been delivered.  If Nuttall and Korody are not convicted, they should certainly launch lawsuits against the key talking heads who have clearly forgotten the common-law principle of innocent until proven guilty in a court of law.

2. Reporting in the Economist a day following the RCMP briefing said:

So far there has not been a large-scale attack on Canadian soil. But the latest incident might jolt federal authorities into reversing some planned cuts to the police and security services.

Analysis: More funding for the guys to conduct more stings.

3. As we saw earlier, Vic Toews called for applause of the national security joint task force (INSET), as follows:

The success of this operation was due to the close collaboration of our security and law enforcement agencies, including CSIS [Canadian Security Intelligence Service].  I would like to applaud the RCMP-led Integrated National Security Enforcement Teams — known as INSET — and all of the partners for their outstanding work on this investigation.”

Analysis: Legitimizing domestic operation of intelligence agencies, domestic surveillance, and the duty of national security operations.  It also paves the way for destructive militarization of law enforcement.

4. Repeatedly, the assertion was made in the RCMP briefing that the alleged bombing was inspired by al-Qaeda.  Yet while asserting al-Qaeda inspiration, they also noted there were no “international linkages” and that the two alleged culprits were “self-radicalized”.  However, don’t forget, it’s an al-Qaeda-inspired plot.  Doublethink, anyone?

Analysis: Force-fitting the War on Terror narrative and re-emphasizing the ever-present threat of al-Qaeda.

5. Finally, RCMP Assistant Commissioner James Malizia uses the crisis to promote a homeland security program, saying,

“It is very important that Canadians remain vigilant. We urge the public to bring any suspicious activities to the RCMP’s attention through our national security information network.”

Analysis: If you see something, say something… provided it fits our narrative.  This citizen spy atmosphere, as if taken from the Stasi or Nazi SS, must be a product of the “harmonized” homeland security initiatives that have been agreed to in secret between Canada and the U.S.  “Beyond the border vision” anyone?

Consider this motive for RCMP and Public Safety

Before we look further at stronger indicators of potential corruption, let’s examine a final point on cui bono: the RCMP’s stated target for the number of times it intends to disrupt “terrorist criminal activities”.  The RCMP effectively has a quota of six (6) terrorism disruptions it is expected to achieve in Canada in a specified period.  You can see it on page 19 of the RCMP’s Report on Plans and Priorities 2013-14.  You might ask: what if there aren’t incidents to disrupt?  Well, clearly, we are back to having a clear motive for staging incidents and/or entrapping dummies to achieve a certain benefit.  As described in the document, 6 is the “Number of disruptions, through law enforcement actions, to the ability of a group(s) and/or an individual(s) to carry out terrorist criminal activity, or other criminal activity, that may pose a threat to national security in Canada and abroad”.

Consider the public relations opportunity

Just another point to consider, the RCMP press briefing and B.C. Premier Christy Clark’s public announcements related to Project Souvenir occurred on July 2, 2013, a day after Canada Day, Canada’s national celebration of its (supposed) independence.  On the heels of our most patriotic day, Clark’s rhetoric has an increased appeal.

Prior knowledge

Let us now turn our focus to evidence of prior knowledge.  For this, we find obvious leads in the RCMP’s own admissions during their July 2nd press briefing where Project Souvenir was first made public.

“The suspects were committed to acts of violence and discussed a wide variety of targets and techniques,” said Assistant Commissioner Wayne Rideout.

RCMP Assistant Commissioner James Malizia had other indicators in his written statement:

“These charges are the result of an RCMP investigation named Project Souvenir, which was launched in February 2013, based on information received from the Canadian Security Intelligence Service.  These individuals were inspired by al-Qaeda ideology.  Our investigation demonstrated that this was a domestic threat without international linkages.  I want to reassure our citizens that at all times during the investigation our primary focus was the safety and protection of the public.  While the RCMP believes that this threat was real, at no time was the security of the public at risk.  These arrests are another example of the effectiveness of our Integrated National Security Enforcement Team who work tenaciously to prevent this plan from being carried out.  We detected the threat early and disrupted it.  On behalf of the RCMP, I want to express our appreciation to CSIS and all our partners for their tremendous support throughout this investigation.”

Specific details are not public at this time, but it is clear that Nuttall and Korody were under surveillance.  When did surveillance begin?  I think it is fair to say that continual surveillance most likely began on or sometime before February 2013, when the joint task force operation known as Project Souvenir was founded, six months prior to the arrests.  That’s a long time to have prior knowledge.

I also find it interesting that CSIS is the source on this entire investigation.  We just have to take the word of the guys who operate under total cover of secrecy and have a history of linkages to international secret societies, international criminal operations and have no real accountability to the public or elected representatives.  I’d love to investigate whether dark elements within or tied to CSIS promoted Nuttall and Korody out of some MK-ULTRA program and then tipped RCMP off to do their thing.  Evidence of handlers and mind control on Nuttall and Korody would support this working hypothesis.  (In part three of this series, we’ll look closer at Nuttall and Korody).

Let’s move on to see what, according to the RCMP’s narrative, they did with their prior knowledge.

Prior involvement

If there is evidence that folks from the intelligence community or law enforcement special operations units were involved in any aspect, from funding, training, aiding, cajoling, or otherwise participating, then you can start placing your bets on this being a false flag operation.

As reported by the Economist:

Investigators had apparently become close enough to the plot to render the bombs harmless.

This is somewhat alluded to in a statement by RCMP Assistant Commissioner James Malizia, as follows:

“At no time was the security of the public at risk,” said Malizia. “We detected the threat early and disrupted it.”

They disrupted it?  Well, skeptics of my analysis might say that disrupting it means they stopped it from happening when they presumably jumped out of the bushes to arrest the alleged bombers.  However, the RCMP confidently and clearly says that there was no security risk to the public at any time!  The only way for this to be true is if undercover agents (agents provocateurs) were engaged with the improvised explosive device (IED) in advance of the arrests.  Assistant Commissioner Wayne Rideout clarifies for us with a more definitive statement:

“These devices were completely under our control, they were inert, and at no time represented a threat to public safety.”

Incredible!  “Completely under our control…”.  For this to be true, an undercover officer would indeed have had to be hands-on with the IED, possibly supplying it in the first place, and close enough to Nuttall and Korody to gain their trust and participate in the plot.  How else could the RCMP be in complete control of the device and know it was inert?

During the July 2 press briefing, Mr. Rideout was asked about the techniques employed by undercover officers, specifically whether they had acted as collaborators in the plot.  Evasively, Rideout said:

“I’m not prepared to go into the details of how we were able to disrupt and assure public safety, but there are a vast number of resources and specialty groups that are available to us. We employed most of them.”

Wow.  Tenacious.  Professionals.  Specialists.  High-end and sophisticated.  That’s the public relations image being projected here, but any thinking individual smells a rat and now has more questions than answers.


Project Souvenir has all the markings of a false flag operation. We have strong evidence (of prior knowledge and involvement) telling us there is a deeper back story here. We also have multiple and clear motives for this to be staged.  The suspects for staging such an incident have a history of employing such tactics and they are also deeply entwined with international counterparts that do as well. And we also have to factor in that the entities involved have the ability to operate with the cover of national security. As history has proven, secrecy breeds corruption.

If there is evidence of drills for a similar incident being run by local law enforcement during the time of this incident, then we have all the tell-tale signs of a false flag operation.

With deceptive tactics such as false flag operations, patsies and provocateurs regularly used by law enforcement, the intelligence community and black ops, we must not be quick to agree with the conclusions we’re being given in press briefings or shallow news reporting. We must remain healthy skeptics of government and we must investigate agencies that have a history of deception and law breaking. At this stage, I think it is obvious that the official story from the RCMP briefing must be transparently investigated by an independent third party.

In part three, we will take a closer look at Nuttall and Korody, the alleged bombers, and explore the issue of them playing patsy to this “operation”.

Climategate Part Two: The Undermining of the Scientific Method

In my previous post I provided an overview of Climategate, looked at the role tree ring proxy data played in the controversy, and provided evidence from leaked emails and source code that the climate data was being intentionally and knowingly manipulated.

Today I want to answer the following question: Was there intent on the part of the CRU team to mislead the scientific community and the public at large?

To me there are a couple of areas we can focus on as we attempt to tease out the intent of the CRU:

  1. Did the CRU openly publish their raw temperature data and fully disclose the transformations applied to that data in order to generate their models?
  2. Did the CRU intentionally circumvent the peer-review scientific process?

Let’s start with the first question.

CRU Refusal to Release Raw Data

Evidence is clear that the CRU went to extraordinary lengths in their attempts to prevent the raw temperature data from being exposed.

Why?  Well, in the words of head CRU scientist Phil Jones:

I should warn you that some data we have we are not supposed to pass on to others. We can pass on the gridded data – which we do. Even if WMO agrees, I will still not pass on the data. We have 25 or so years invested in the work. Why should I make the data available to you, when your aim is to try and find something wrong with it
-Phil Jones email Feb. 21, 2005

This strikes at the core of the scientific process.  The whole point of peer-reviewed science is that other scientists can reproduce and verify your results.  This mechanism has been the bedrock of science going back a hundred years.  Phil Jones simply stands the logic on its head, indicating that he doesn’t want to release the details because he has a vested interest in his conclusions and doesn’t want to give anyone the chance to prove them wrong.

As for not being ‘supposed’ to pass on the data, there is some basis here.  Some of the data used by the CRU came from third-party governments and was subject to various restrictions.

However, a large amount of the data was provided by U.S. and European agencies and was not under such limitations.  Further, the CRU’s work was publicly funded, largely by the U.S. Department of Energy (DOE).  On what grounds could the CRU possibly refuse to release this data, given that taxpayers footed the bill for its collection and analysis?  All people were asking for was raw temperature sensor data.  What harm could releasing it to the public possibly do?

Now, it turns out that the CRU was technically not required to release even the publicly funded raw data.  The DOE rules, while strongly stressing that data be released when possible, did not strictly require it.  So, technically the CRU wasn’t breaking any laws but their refusal to publish data is certainly highly suspect.

CRU Obstruction of Freedom of Information Act Requests

As more people started to push harder for the raw data, the CRU team started to get concerned that Freedom of Information Act (FOIA) requests might be used to compel them to release the data:

[D]on’t leave stuff lying around on ftp sites — you never know who is trawling them. The two MMs have been after the CRU station data for years. If they ever hear there is a Freedom of Information Act now in the UK, I think I’ll delete the file rather than send to anyone. Does your similar act in the US force you to respond to enquiries within 20 days? – our does! […] Tom Wigley has sent me a worried email when he heard about it—thought people could ask him for his model code. He has retired officially from UEA so he can hide behind that.
-Phil Jones email Feb. 2, 2005

I’m getting hassled by a couple of people to release the CRU station temperature data. Don’t any of you three tell anybody that the UK has a Freedom of Information Act.
—Phil Jones email Feb. 21, 2005

Think I’ve managed to persuade UEA [the University of East Anglia] to ignore all further FOIA requests if the people have anything to do with Climate Audit.
-Phil Jones email Jun. 19, 2007

Can you delete any emails you may have had with Keith re AR4 [IPCC 4th Assessment on Climate Change, 2007]? Keith will do likewise. […] Can you email Gene and get him to do the same? […] We will be getting Caspar to do likewise.
-Phil Jones email May 29, 2008

John Mitchell did respond to a request from Holland. John had conveniently lost many emails, but he did reply with a few. Keith and Tim have moved all their emails from all the named people off their PCs and they are all on a memory stick.
-Phil Jones email Jun. 4, 2008

Finally, might I ask that you note and then erase this email. I have found that recent enquiries under the Freedom of Information Act, or Data Protection Act, can become considerable time sinks , or the basis of some inconvenient subsequent distractions.
-Keith Briffa email Oct. 9, 2008

When the FOI requests began here, the FOI person said we had to abide by the requests. It took a couple of half hour sessions – one at a screen, to convince them otherwise… About 2 months ago I deleted loads of  emails, so have very little – if anything at all.
-Phil Jones email Dec. 3, 2008

You might want to check with the IPCC Bureau. I’ve been told that IPCC is above national FOI Acts. One way to cover yourself and all those working in AR5 [IPCC 5th Assessment on Climate Change] would be to delete all e-mails at the end of the process. Hard to do, as not everybody will remember it.
-Phil Jones email May 12, 2009

For example Keith Briffa took home emails that were subject to FOI to ensure their safekeeping.
-CRU IT security team meeting notes Jul 14, 2010

It’s pretty clear from the above emails that Phil Jones and the CRU team were fairly determined to resist releasing their data or email records subject to FOIA requests.  This determination extended to physically deleting or moving emails after FOIA requests had been initiated.  This was blatantly illegal and a conspiracy in every sense of the word.

There is simply no way to spin these emails in a favourable light.  The CRU did intentionally and illegally obstruct the FOIA process.

CRU Refusal to Release Climate Models

My previous post went into some details around the actual model used by the CRU and some of the egregious problems with the code.  From even a cursory analysis of the code it’s clear that the CRU’s model was littered with fudge factors and blatant manipulations to force the data to produce the desired result.

Given this, it’s no surprise that the CRU also refused to release the details of their model:

I got this email from McIntyre a few days ago. As far as I’m concerned he has the data — sent ages ago. I’ll tell him this, but that’s all — no code. If I can find it, it is likely to be hundreds of lines of uncommented fortran ! I recall the program did a lot more than just average the series. I know why he can’t replicate the results early on — it is because there was a variance correction for fewer series.
-Phil Jones email Apr. 27, 2005

Giving them the algorithm would be giving in to the intimidation tactics that these people are engaged in
-Michael Mann, interview Wall Street Journal, 2005

Thus, any scientists attempting to reproduce the CRU’s results were predictably unsuccessful.  The CRU, cloaking both the data and model in secrecy, were able to get away with their inaccurate conclusions.

So why didn’t more scientists stand up and protest such an obvious obfuscation?  We turn our attention to this question next.

Where Were the Dissenters?

One valid question readers may have: why didn’t other scientists dig in and reject the global warming hypothesis if the data was being so badly manipulated?

They did.  Or, at least, they tried to.

The reality is that many climatologists rejected the CRU and IPCC conclusions on global warming and refused to jump on the bandwagon.  Such scientists were quickly labelled as ‘skeptics’ and subjected to concerted attacks aimed at discrediting and marginalizing them.  And it wasn’t just one or two wing-nuts – a significant number of climatologists spoke out against the distortions of the CRU team.

But it didn’t matter.  The mainstream media had already picked their side and relentlessly pushed the global warming meme.  Governments likewise began formulating policy, funding further research (awarded only to scientists on record as supporters of the anthropogenic global warming conclusion), and setting school curricula to brainwash the youth about the evils of carbon dioxide and unprecedented global warming.

But, like all good tyrants, the CRU knew that ALL dissent must be crushed without mercy.  There could be no room for questioning or independent thought.  After all, the CRU knew how vulnerable their rigged data and models were.

CRU Circumventing of the Scientific Process

The best defence is a good offence.  The CRU took this maxim to heart and aggressively attacked dissenters wherever they were found.

The main tactic was to prevent dissenting papers from being published in journals.  This was, for the most part, easily accomplished, with CRU members (or affiliates) having close relationships with the editors of all the major journals.  In addition, because CRU team members, Phil Jones in particular, were considered top experts in the field, they had the opportunity to peer-review many papers prior to publication.  Phil Jones could, and did, use this opportunity to blackball papers that disagreed with his conclusions and effectively block them from publication.

But, predictably, some journals had editors willing to stand up for scientific integrity and who refused to be brainwashed or intimidated by the climate change zealots.  These editors caused much consternation within the CRU team.  The CRU tactics used here ranged from attempting to blacklist entire journals to forcing editors to resign or be fired.  In this the CRU was highly successful.  Virtually all dissenting voices were quashed one way or another allowing the claim of a ‘scientific consensus’ on global warming.

To give you a flavour of the coordination that went on within the CRU team, here are extracts from some of the more damning emails.

…I think the skeptics will use this paper to their own ends and it will set paleo back a number of years if it goes unchallenged. I will be emailing the journal to tell them I’m having nothing more to do with it until they rid themselves of this troublesome editor. A CRU person is on the editorial board, but papers get dealt with by the editor assigned by Hans von Storch…
-Phil Jones email Mar. 11, 2003

In fact, Mike McCracken first pointed out this article to me, and he and I have discussed this a bit. I’ve cc’d Mike in on this as well, and I’ve included Peck too. I told Mike that I believed our only choice was to ignore this paper. They’ve already achieved what they wanted–the claim of a peer-reviewed paper. There is nothing we can do about that now, but the last thing we want to do is bring attention to this paper, which will be ignored by the community on the whole… It is pretty clear that thee skeptics here have staged a bit of a coup, even in the presence of a number of reasonable folks on the editorial board (Whetton, Goodess, …). My guess is that Von Storch is actually with them (frankly, he’s an odd individual, and I’m not sure he isn’t himself somewhat of a skeptic himself), and without Von Storch on their side, they would have a very forceful personality promoting their new vision. There have been several papers by Pat Michaels, as well as the Soon & Baliunas paper, that couldn’t get published in a reputable journal. This was the danger of always criticising the skeptics for not publishing in the “peer-reviewed literature”. Obviously, they found a solution to that–take over a journal! So what do we do about this? I think we have to stop considering “Climate Research” as a legitimate peer-reviewed journal. Perhaps we should encourage our colleagues in the climate research community to no longer submit to, or cite papers in, this journal. We would also need to consider what we tell or request of our more reasonable colleagues who currently sit on the editorial board… What do others think?
-Michael Mann email Mar. 11, 2003

Re CR [Climate Research Journal], I do not know the best way to handle the specifics of the editoring. Hans von Storch is partly to blame — he encourages the publication of crap science ‘in order to stimulate debate’. One approach is to go direct to the publishers and point out the fact that their journal is perceived as being a medium for disseminating misinformation under the guise of refereed work…
Note that I am copying this view only to Mike Hulme and Phil Jones. Mike’s idea to get editorial board members to resign will probably not work — must get rid of von Storch too, otherwise holes will eventually fill up with people like Legates, Balling, Lindzen, Michaels, Singer, etc. I have heard that the publishers are not happy with von Storch, so the above approach might remove that hurdle too.
-Tom Wigley email Apr. 24, 2003

I am really sorry but I have to nag about that review – Confidentially I now need a hard and if required extensive case for rejecting – to support Dave Stahle’s and really as soon as you can.
-Keith Briffa email Jun. 4, 2003

I would like to play with it in an effort to refute their claims. If published as is, this paper could really do some damage. It is also an ugly paper to review because it is rather mathematical, with a lot of Box-Jenkins stuff in it. It won’t be easy to dismiss out of hand as the math appears to be correct theoretically
-Edward Cook email Jun. 4, 2003

It seems to me that this “Kinne” character’s words are disingenuous, and he probably supports what De Freitas is trying to do. It seems clear we have to go above him. I think that the community should, as Mike H has previously suggested in this eventuality, terminate its involvement with this journal at all levels–reviewing, editing, and submitting, and leave it to wither way into oblivion and disrepute.
-Michael Mann email Jul. 3, 2003

In the meantime, I urge people to dissociate themselves from Climate Research. The residual ‘editorial’ (a word I use almost tongue in cheek) board is looking like a rogues’ gallery of skeptics. Those remaining who are credible scientists should resign.
-Tom Wigley email Aug. 19, 2003

Recently rejected two papers (one for JGR and for GRL) from people saying CRU has it wrong over Siberia. Went to town in both reviews, hopefully successfully. If either appears I will be very surprised.
-Phil Jones email March 31, 2004

I can’t see either of these papers being in the next IPCC report. Kevin and I will keep them out somehow – even if we have to redefine what the peer-review literature is!
-Phil Jones email Jul. 8, 2004

This is truly awful. GRL [Geophysical Research Letters journal] has gone downhill rapidly in recent years. I think the decline began before Saiers. I have had some unhelpful dealings with him recently with regard to a paper Sarah and I have on glaciers — it was well received by the referees, and so is in the publication pipeline. However, I got the impression that Saiers was trying to keep it from being published.
Proving bad behavior here is very difficult. If you think that Saiers is in the greenhouse skeptics camp, then, if we can find documentary evidence of this, we could go through official AGU [American Geophysical Union] channels to get him ousted [as editor for the Geophysical Research Letters journal] . Even this would be difficult.
-Tom Wigley email Jan. 20, 2005

The panel is solid. Gerry North should do a good job in chairing this, and the other members are all solid. Chrisy is the token skeptic, but there are many others to keep him in check…
-Michael Mann email Feb. 2, 2006

Its your prerogative to suggest alternates, and I think they’ll take your suggestions very seriously. My greatest fear is that McIntyre dominates the discussion. Its important that they hear from the legitimate scientists.
-Michael Mann email Feb. 13, 2006

I see you’re down for a meeting in London tomorrow and Friday.  I have been having something of a run in with a French scientist called Vincent Courtillot. He is making Edouard Bard’s life awful in French.  If you’re there on the Friday when Vincent is talking then tell him he’s just completely wrong. He will likely say the climate isn’t warming and even if it was it has little to do with greenhouse gases. So shouldn’t be difficult!!
I’m lecturing here in Norwich to students so can’t make it to London.  If you’re not there on the Friday, just make sure one or two reasonable scientists are aware that they have invited a bit of rogue!
-Phil Jones email Feb. 6, 2008

You’ll get one awful talk on the Friday from a Vincent Courtillot. If he lays into me, or says the world isn’t warming you have my permission to go and put the boot it[sic]. Shouldn’t be difficult.
Have emailed Jim as well.Vincent is a prat, but he’s a well connected prat – French Academy and all that.  He’s been making life a misery for Eduoard Bard.  I can’t make it – I’m just trying to help Eduoard!
-Phil Jones email Feb. 6, 2008

Dear folks, You may be interesting in this snippet of information about Pat Michaels. Perhaps the University of Wisconsin ought to open up a public comment period to decide whether Pat Michaels, PhD needs re-assessing?
-Tom Wigley email October 14, 2009

These emails illustrate the key tactics employed by the CRU in their campaign against the ‘skeptics’:

  1. Applying pressure to editorial boards to ‘rid themselves’ of ‘troublesome’ editors.
  2. Organizing the blacklisting of journals which refused to be intimidated and dared publish papers critical of anthropogenic global warming.
  3. Ensuring that lectures given by ‘deniers’ would be attended by friendly forces who could disrupt and dispute any claims that global warming wasn’t open and shut.
  4. Attempting to have the PhD’s of dissenters who dared question their conclusions revoked.  See here for the full story on that one.
  5. Leveraging their positions as ‘experts’ to review papers prior to publication and undermine any that disagreed with the global warming meme.
  6. Ensuring papers sympathetic to ‘the cause’ would be reviewed by friendly editors and peer-reviewers to ensure publication.

The last two tactics were especially effective.  What better way to prevent dissent than to quash papers before they are even published?  This is much more efficient than having to discredit the authors of papers that have already appeared.  And, if you additionally ensure that papers supporting your view always get out, you quickly skew the corpus of scientific literature in the direction of your conclusions.

Other scientists did recognize this attempt to establish a monopoly on the truth.  Raymond Bradley, head of the Geosciences department at the University of Massachusetts, voiced his objection as follows:

I have just returned from Finland and have now read all the correspondence regarding the Science perspectives article you asked Keith Briffa & Tim Osborn to write. I’ve sent Tim Osborn & Keith Briffa a few suggestions re their perspectives article. If you would like to see them, let me know.
I would like to diasassociate myself from Mike Mann’s view that “xxxxxxxxxxx” [x’d out in released email] and that they “xxxxxxxxxxxxx” [x’d out in released email]. I find this notion quite absurd. I have worked with the UEA group for 20+ years and have great respect for them and for their work. Of course, I don’t agree with everything they write, and we often have long (but cordial) arguments about what they think versus my views, but that is life. Indeed, I know that they have broad disagreements among themselves, so to refer to them as “the UEA group”, as though they all march in lock-step seems bizarre.
As for thinking that it is “Better that nothing appear, than something unnacceptable[sic] to us” … though we are the gatekeepers of all that is acceptable in the world of paleoclimatology seems amazingly arrogant. Science moves forward whether we agree with individiual[sic] articles or not…


The infamous hockey stick graph is still used today in an attempt to force draconian regulations on the entire world.  The graph is the product of faulty data and a brutally manipulative model.

The scientists involved knew the science was junk and covered it up.  Those scientists undertook a systematic and coordinated campaign to prevent their data and model methodologies from being made known.  This campaign extended to physically deleting emails in violation of the Freedom of Information Act.  This was completely unethical and also illegal.  It was done purely in the furtherance of their fraud.

But worst of all, these global warming advocates assaulted the core principles of the scientific method by attacking the peer-review process itself.  By pressuring journals and editors to publish papers supporting ‘the cause’ while blocking publication of ‘dissenting’ papers, proper scientific debate, the means by which we arrive at the truth, was quashed.

Project Souvenir Part One: Introduction to Critical Thinking in the Face of Government Hysteria

On July 2nd, 2013 the Royal Canadian Mounted Police (RCMP) held a press conference to announce their success in thwarting an alleged homegrown terrorist attack that, according to their report, was to target the provincial legislature in British Columbia on Canada Day with an improvised explosive device (IED) made of pressure cookers. In the RCMP’s press briefing, they revealed that the proactive arrest of two alleged culprits came as a result of a 6-month joint task force investigation named Project Souvenir.

Following the press briefing, Minister of Public Safety (at the time) Vic Toews hailed Project Souvenir as a successful “operation”. “The success of this operation was due to the close collaboration of our security and law enforcement agencies, including CSIS [Canadian Security Intelligence Service],” he said. “I would like to applaud the RCMP-led Integrated National Security Enforcement Teams — known as INSET — and all of the partners for their outstanding work on this investigation.” The “partners” in question were named as the RCMP, CSIS and the Canada Border Services Agency (CBSA), but it is likely that the Communications Security Establishment (CSE) (Canada’s NSA equivalent) was also involved.

From July 2nd to July 4th major news outlets like the CBC, BBC, the Economist and others published articles with consistent talking points, which were mostly parroted from the RCMP press briefing. Across the board, news reporting supported the RCMP statements without question. Mainstream news reporting also went on to mention the other recent alleged terrorist acts by Canadians that were intercepted by “authorities”, as if to paint the picture that this is a growing trend in Canada.

At this point, I suppose we should all be happy. Perhaps we should even start to get comfortable with the government monitoring our communications because that’s how this rising threat of terrorism is going to be combatted, right?

The official narrative leans us towards the conclusion that the federal government is doing its job; that the national security apparatus is efficient; that sting operations are just and a great tactic for public safety. I suppose any other line of thinking would be heretical and nothing more than a conspiracy theory.

Unfortunately, Project Souvenir stinks for several obvious reasons which will be discussed in this series. It stinks of entrapment; a type of false flag operation; a self-inflicted wound.

In this post, part one of a series, we will establish an understanding of a false flag operation and look at how security incidents in general might be analyzed to decipher the potential of a false flag. Follow-on posts in this series will provide analysis of Project Souvenir in more specific detail.

What is a False Flag?

Let’s take a quick look at the definition and characteristics of a false flag operation, and the reasons one might be staged.


Simply put, a false flag operation is an inside job designed to place blame on a certain entity for some gain. “False flag” is a military term that originated in naval warfare, and the tactic has long been used in both hot wars and peacetime clandestine operations.


There is a spectrum of false flags. At one end, involvement can be extremely minimal, such as taking actions certain to create conditions for another party to react in a certain way, thus indirectly generating an event to be used for gain. At the other end of the spectrum is direct involvement: carrying out an operation while posing as the entity to be blamed for it.


Rooted in Hegelian dialectic, a false flag operation is essentially a crisis to be leveraged for the realization of some goal. Undoubtedly, a real crisis can be leveraged for political gain; however, a staged or created crisis is useful just the same, provided the true perpetrators are not exposed and the event is not understood to be manufactured.

When a crisis occurs, with the pain fresh in the mind, people are malleable: more willing to sacrifice something in exchange for what they believe will be a solution to the crisis. Think about it; we’ve all been subject to false flags. Even historical crises that occurred before our birth can have an impact on our ideological perspectives and beliefs.

Historical Examples

There are dozens of examples of false flags in modern history carried out by American, British, German, Japanese and other governments. Operation Ajax has been discussed on this site. Operation Northwoods is another great example. The Gulf of Tonkin incident is another.

In Nazi Germany, the Nazi leadership employed false flag operations as a precursor to imposing a police state on the domestic population and as a means of building public support for military aggression. For instance, in 1939, German public support for war on Poland was manufactured by dressing prisoners from Nazi concentration camps in German soldier uniforms, having the Gestapo gun them down, and then running propaganda that portrayed them as German soldiers murdered by Polish aggressors. The episode is known as the Gleiwitz incident and is a great example of how the ruling class requires consent from the sheep-like public.

Common Indicators of a False Flag Operation

Following is a brief overview of the most common and obvious indicators that an event might be staged. Keep these indicators, and the questions that go with them, in mind the next time a security incident is presented to you.

  1. The official story increases government power – Does the incident work as a catalyst or justification for new programs? Is it being positioned for use as a reason to increase funding for a program or department? Does it legitimize programs that are unpopular, under attack and would represent a significant loss of government power if overturned?
  2. The news media carries a common meme, avoids asking tough questions – Are the major news outlets failing to look at key evidence or ask tough and obvious questions? Are there logical fallacies that if discussed would oppose the government’s official storyline? When the incident first occurred, were significant leads reported that never again received any coverage?
  3. There is evidence of cover up activities – Are whistleblowers speaking out? Are people close to the issue saying something stinks and then these same people go quiet? Are public requests for information ignored or stonewalled? Has key evidence been destroyed? Where real corruption has occurred and the truth starts to come out, things like evidence destruction, intimidation tactics and mysterious suicides or disappearances will usually be observed.
  4. There is evidence of prior knowledge – If the “authorities” knew certain details in advance then we have to ask: was it not stopped because of incompetence? Negligence? Evidence of prior knowledge can be an indicator that a stand-down occurred, which would mean complicity and thus an inside job at some level. Beyond a stand-down, prior knowledge can also indicate that certain internal elements were involved in carrying out the incident.
  5. Evidence of prior involvement – Includes any type of assistance, facilitation or interaction in the event. If this is determined, then an investigation must be carried out by an independent and transparent body. Were the patsies trained by those responding? Was the device controlled by those responding? Were the attackers transported by those responding?
  6. There were drills being conducted at the same time the incident occurred – Through compartmentalization, an inside job will typically be conducted under the cover of drills. In other words, useful idiot elements within the department or community are involved in executing what appear to be standard drills. During the exercise(s), corrupt elements participating in or alongside the drill will conduct or facilitate the real attack. Those carrying out the drills have no idea that they are effectively providing cover for a false flag operation.

The following diagram depicts the indicators that a false flag operation is likely:



We need to question the official narrative on Project Souvenir and think critically about the information that is provided to us by those who stand to gain.  Not every security incident is an insider conspiracy; however, more often than not such “operations” as Project Souvenir are corrupt and will be used for political gain.

In the next post we will apply our analysis against what is known about Project Souvenir to ask the question: Are elements within the Canadian government conducting a false flag operation?

Climategate Part One: Overview, Tree Rings and the Divergence Problem, Data Manipulation

What Was Climategate?

On November 19th, 2009, 160 MB of compressed data was leaked from the University of East Anglia’s Climatic Research Unit (CRU), a government-funded program.  Included were over a thousand emails and over three thousand documents, including the source code used to generate the CRU’s climate models from the raw data.  The leaked information did not include the raw temperature data itself.

In November of 2011 a second batch of over 5,000 CRU emails was posted to internet servers.  This is often referred to as Climategate II.

The CRU’s climate models are so important because they form the basis of the UN Climate Panel’s recommendations for implementing crippling carbon regulations on the developed economies of the world.  These regulations and taxes, if implemented, would result in massive increases in the price of everything from electricity and gasoline to virtually every manufactured good.  In short, life in general would get a whole lot more expensive for the average person if these recommendations were implemented.  As such, it’s vital to be confident that the science underlying the recommendations is sound.

Included in the Climategate data dumps was sufficient information to conclude the following:

  1. Various questionable techniques and datasets were being used in the CRU’s models.
  2. Source code was written in such a way as to specifically weight and outright change data to achieve desired effects.
  3. There was an active conspiracy to prevent the raw temperature data from being released to outsiders.  This included obstructing people issuing Freedom of Information Act requests for the data.  Remember that the CRU was funded by the taxpayers.
  4. Ethically questionable steps were taken which did appear to subvert the scientific process.  This included ensuring papers were peer-reviewed only by scientists known to be sympathetic to ‘the cause’.  Additionally, active steps were taken against anyone who dissented from the CRU’s opinion, including the editors of journals.

There’s way too much information to cover in one post so, for this one, let’s focus on the first two topics: the datasets used and how that data was manipulated in the code.

Tree Rings as Proxy Data

Before we get into the nitty-gritty, it’s important to understand the role tree ring data played in Climategate.

It goes without saying that accurate temperature data is not available for anything except the last hundred years or so.  For climate models to go further back in time, so-called proxy data is used.  Typically this involves tree rings but may include ice cores or other means of approximating historical temperatures.  With tree rings the general premise is that the wider the ring for a given year, the warmer the temperature that year.  Or, at least, that’s the theory.

The CRU used tree ring derived data to allow its models to graph hundreds and even thousands of years into the past to produce long-term historical graphs.  These graphs thus mixed historical tree ring proxy data with modern instrumentation data.

There’s an obvious question here: How can we be sure that tree ring data is truly an accurate proxy for instrument data?  And, if the answer is that tree ring data is not an accurate proxy, then clearly any models based on that assumption are undermined from the start.

The reality is that tree ring data can unequivocally NOT be trusted.  This fact is well known in climate circles.  It even has a name: The Divergence Problem.

The Divergence Problem

Wikipedia has a pretty good overview of the divergence problem but, in a nutshell, here are the bare bones:

When the instrumentation data (for which reliable data exists from about 1850 or so) is graphed against tree ring proxy data, there’s a major divergence that occurs starting around 1960.  It looks like this:


There are several theories about why this divergence has occurred but no consensus on the matter.  It is known that, in addition to temperature, moisture and CO2 also affect ring width.  It’s also likely that there are other variables affecting ring width that are not yet known.

The conclusion is obvious: It is provable that tree ring data is not a suitable proxy for temperature readings.  Any climate model based on tree ring data is flawed from the start.  Using historical tree rings to predict temperature doesn’t pass basic validation.
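To make that validation point concrete, here is a minimal sketch of how a proxy would normally be checked against the instrumental record. Everything in it is invented for illustration (synthetic numbers in Python, not real CRU data): a proxy that calibrates well against instruments before 1960 but diverges afterwards fails exactly this kind of check.

```python
import random

random.seed(42)

# Synthetic illustration only -- invented numbers, not real climate data.
# Instrument record: a mild warming trend from 1900 to 2000, plus noise.
years = list(range(1900, 2001))
instrument = [0.01 * (y - 1900) + random.gauss(0, 0.1) for y in years]

# Proxy tracks temperature closely until 1960, then diverges (declines).
proxy = [t + random.gauss(0, 0.05) if y < 1960
         else 0.6 - 0.005 * (y - 1960) + random.gauss(0, 0.05)
         for y, t in zip(years, instrument)]

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Split at 1960 and compare calibration quality in each period.
pre = [(i, p) for i, p, y in zip(instrument, proxy, years) if y < 1960]
post = [(i, p) for i, p, y in zip(instrument, proxy, years) if y >= 1960]

r_pre = correlation([i for i, _ in pre], [p for _, p in pre])
r_post = correlation([i for i, _ in post], [p for _, p in post])
print(f"pre-1960 correlation:  {r_pre:.2f}")   # strong
print(f"post-1960 correlation: {r_post:.2f}")  # weak or negative
```

A proxy whose correlation with the thermometers collapses in the very period where we can check it gives us no basis for trusting it in the centuries where we can’t.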

Making the Divergence Problem Disappear

The folks at the CRU were well aware of the divergence problem and its effect on their models.  After all, if only tree ring data was used, there didn’t appear to be any warming at all!  This was not the conclusion they were looking for.  Here’s email 3234 from the Climategate set, which lays out the problem they were facing (emphasis mine):

Despite assurances from Ed and Keith, I must admit that I still don’t get it. The NRC committee is looking at a number of issues, but the one that is most publicly noted is to determine whether, and with what confidence, we can say that recent temperatures have emerged from the band of natural variability over the last millennium or two. Millennial reconstructions with high time resolution are mostly tree-ring based, mostly northern hemisphere, and as I understand it, some are correlated to mean-annual temperatures and others to seasonal temperatures. The performance of the tree-ring paleothermometry is central. Taking the recent instrumental record and the tree-ring record and joining them yields a dramatic picture, with rather high confidence that recent times are anomalously warm. Taking strictly the tree-ring record and omitting the instrumental record yields a less-dramatic picture and a lower confidence that the recent temperatures are anomalous. When a big difference is evident between recent and a millennium ago, small errors don’t matter; the more similar they are, the more important become possible small issues regarding CO2 fertilization, nitrogen fertilization (or ozone inhibition on the other side…).

Unless the “divergence problem” can be confidently ascribed to some cause that was not active a millennium ago, then the comparison between tree rings from a millennium ago and instrumental records from the last decades does not seem to be justified, and the confidence level in the anomalous nature of the recent warmth is lowered. This is further complicated by the possible small influence of CO2 fertilization…. I was just looking at some of the recent Mann et al. papers, and at the Osborn and Briffa paper from this year. In that one, as nearly as I can tell, there are 14 long records, of which 2 extend to 2000, 8 end in the early to mid 1990s, 1 in the early to mid 1980s, 2 in the early to mid 1970s, and one in the late 1940s. That looks to be a pretty small data set by the time you get into the strongest part of the instrumental warming. If some of the records, or some other records such as Rosanne’s new ones, show “divergence”, then I believe it casts doubt on the use of joined tree-ring/instrumental records, and I don’t believe that I have yet heard why this interpretation is wrong…

Clearly they had a real problem here.  How could the divergence problem be resolved?  Their answer: Just change the numbers.

And that’s exactly what they did.  They changed the numbers.

I’ve just completed Mike’s Nature trick of adding in the real temps to each series for the last 20 years (ie from 1981 onwards) and from 1961 for Keith’s to hide the decline.
-Dr. Phil Jones, Director of CRU, Nov. 16, 1999

Also we have applied a completely artificial adjustment to the data after 1960, so they look closer to observed temperatures than the tree-ring data actually were.
-Dr. Tim Osborn, CRU, Email Dec. 20, 2006

Also, we set all post-1960 values to missing in the MXD data set (due to decline), and the method will infill these, estimating them from the real temperatures – another way of “correcting” for the decline, though may be not defensible!
-Dr. Tim Osborn, CRU, Email Oct 16, 2000

So all the scary hockey stick graphs used to terrorize the general public into believing we were on the precipice of impending doom were crap.  CRU scientists messed with the numbers by mixing two very different sources of data (instrument and tree ring proxy) when the two datasets were clearly incompatible due to the divergence problem.

Was There Intent to Deceive?

The question becomes whether the deceptive ‘trick’ to ‘hide the decline’ was done in good faith or not.  Were the scientists at the CRU really trying to pull the wool over the public’s eyes?

In order to answer that question with certainty we need to understand:

  1. The exact transformation applied to the data to yield the published results.
  2. Whether the CRU team respected the peer-review scientific process and openly disclosed the transformations they applied to the data, and the logic behind those transformations.  Stated another way, was there intent to obfuscate the truth?

Let’s turn our attention to the first question: How exactly was the data manipulated?

The Code

It all comes down to the software used to massage the raw data into the output that fed the climate models.  Fortunately, the Climategate release included the source code used to perform this transformation against the raw data.

I’m a software developer by trade and so I took a look myself at some of the Climategate code other researchers have called attention to.  It is as bad as they say.  What a mess.

First off, there are some absolutely crazy regressions and transformations being applied.  Many of these transformations might be legitimate.  I’m not a climate scientist, after all, and there could be very good reasons for some of the manipulation that gets performed.

That said, consistently throughout the code there is clear evidence of:

  1. Fudge factors being used to achieve desired results when proper transformations fail.
  2. The divergence problem being specifically addressed by code treating data from before 1960 differently from data after 1960.

I don’t want to dump a whole bunch of source code on you, so let me give you a selection of comments the programmers left embedded in the code.  These comments give you an idea of the extent to which the data was mucked with.  Most of these are taken from Jeffrey Small’s excellent ‘Climategate In Review’:

“Plots 24 yearly maps of calibrated (PCR-infilled or not) MXD reconstructions of growing season temperatures. Uses “corrected” MXD – but shouldn’t usually plot past 1960 because these will be artificially adjusted to look closer to the real temperatures.”

“stop in 1960 to avoid the decline”

“stop in 1940 to avoid the decline”

“but why does the sum-of-squares parameter OpTotSq go negative?!!”

“and already I have that familiar Twilight Zone sensation.”

“Oh yeah – there is no ‘supposed’, I can make it up. So I have :-)”

“As we can see, even I’m cocking it up!”

“recent decline in tree-ring density has been ARTIFICIALLY REMOVED”

“Apply a VERY ARTIFICAL correction for decline!!”

“artificially removed (i.e. corrected) the decline”

“we know the file starts at yr 440, but we want nothing till 1400”

“It’s botch after botch after botch.”

“Oh, GOD, if I could start this project again and actually argue the case for junking the inherited program suite.”

“As far as I can see, this renders the [weather] station counts totally meaningless.”

“So what’s going on? I don’t see how the ‘final’ precip file can have been produced from the ‘final’ precipitation database, even though the dates imply that. The obvious conclusion is that the precip file must have been produced before 23 Dec 2003, and then redated (to match others?) in Jan 04.”

“You can’t imagine what this has cost me — to actually allow the operator to assign false WMO [World Meteorological Organization] codes!! But what else is there in such situations? Especially when dealing with a ‘Master’ database of dubious provenance …”

“OH F— THIS. It’s Sunday evening, I’ve worked all weekend, and just when I thought it was done, I’m hitting yet another problem that’s based on the hopeless state of our databases.”

The above comments should give you a good flavour for how thoroughly the data was corrupted and manipulated.  But the absolute smoking gun is the code itself.  Let me give you one concrete example.  If you take nothing else away from this post, understand the implications of the following:

; Apply a VERY ARTIFICAL correction for decline!!
valadj=[0.,0.,0.,0.,0.,-0.1,-0.25,-0.3,0.,-0.1,0.3,0.8,1.2,1.7,2.5,2.6,2.6,2.6,2.6,2.6]*0.75 ; fudge factor
if n_elements(yrloc) ne n_elements(valadj) then message,'Oooops!'

This code specifies a series of adjustments applied to the input data in blocks of years.  As you can see from the series, the earliest blocks receive no adjustment at all.  A few blocks after that are adjusted down slightly.  Then, from the middle of the series onward, the data is adjusted up by progressively larger amounts.
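To see the effect, here is a sketch in Python (not the original IDL; the adjustment values and the 0.75 scaling are copied from the snippet above) applying the fudge factor to a perfectly flat input series:

```python
# Sketch (Python/numpy) of the effect of the "fudge factor" above:
# apply the leaked adjustment series to a perfectly flat input.
import numpy as np

# Adjustment values and 0.75 scaling copied from the leaked IDL snippet.
valadj = np.array([0., 0., 0., 0., 0., -0.1, -0.25, -0.3, 0., -0.1,
                   0.3, 0.8, 1.2, 1.7, 2.5, 2.6, 2.6, 2.6, 2.6, 2.6]) * 0.75

flat = np.zeros_like(valadj)  # a completely flat anomaly series
adjusted = flat + valadj

# Early blocks dip slightly negative; late blocks climb steadily upward.
print(adjusted.min(), adjusted.max())
```

The flat series comes out dipping slightly negative early on and rising to +1.95 at the end: a hockey stick manufactured entirely by the adjustment.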

The end result: If the raw input data was completely flat, you’d still wind up with a curve that looked like this:


And that, my friends, is how you take any data series and produce a scary looking ‘hockey stick’ chart.  By weighting early data negatively and later data increasingly positively, you produce a completely artificial upward slope exactly like the one the CRU produced to convince everyone that climate change was real.


I’ve shown how the use of tree ring proxy data lies at the heart of the infamous hockey stick graph used to convince everyone that the climate was undergoing unprecedented warming.  I’ve also shown how that tree ring data is provably not a good proxy for temperature.

Emails from the scientists at the CRU show they were fully aware of the divergence problem and took active steps to mask it in their models.  Ultimately these steps were expressed in the source code which was littered with special fudge factors and transformations to force the data to produce a desired result.

But the question still remains: Was the deception intentional?  Was there a conspiracy to hide the manipulations used and mislead the scientific community and the public at large?  The answer to this question rests on whether the CRU was open and transparent in its operation and embraced the peer-review scientific method.  This is what I’ll focus on in my next post.  Stay tuned…