Here's an article from Bore Patch that discusses the problems with the computer code behind the CRU models. There are only four data sets for global temperature change, and CRU effectively controls two of them. When it is reported that a vast number of scientists confirm AGW, remember that they are a) quoting each other and b) using the same data sets. If those data sets are corrupt, how can the conclusions built on them not be?
--begin:
The HARRY_READ_ME.txt file documents a multi-year effort by one of the CRU programming staff to figure out just how the climate model software worked. The only way to describe it is "hair-raising" (at least to someone in the software industry).
If you want to look for smoking guns, you can find them here. The file is thousands of lines long; if you look at the text starting on line 5434, you will find this:
Here, the expected 1990-2003 period is MISSING – so the correlations aren’t so hot! Yet
the WMO codes and station names /locations are identical (or close). What the hell is
supposed to happen here? Oh yeah – there is no ’supposed’, I can make it up. So I have
If an update station matches a ‘master’ station by WMO code, but the data is unpalatably
inconsistent, the operator is given three choices:
You have failed a match despite the WMO codes matching.
This must be resolved!! Please choose one:
1. Match them after all.
2. Leave the existing station alone, and discard the update.
3. Give existing station a false code, and make the update the new WMO station.
Enter 1,2 or 3:
You can’t imagine what this has cost me – to actually allow the operator to assign false
WMO codes!! But what else is there in such situations? Especially when dealing with a ‘Master’
database of dubious provenance (which, er, they all are and always will be).
What can we glean from this? Several things, none of them good for the reputation of the "science" of Anthropogenic Global Warming:
1. The climate change data sets are - by CRU's own admission - filled with decade-long gaps ("the expected 1990-2003 period is MISSING").
2. The climate data sets contain - by CRU's own admission - fabricated data ("I can make it up. So I have").
3. The data is inconsistent to the point of confusion ("the WMO codes and station names /locations are identical") and so - by CRU's own admission - a manual override process was added to the code, allowing the person running the code to make arbitrary changes to the data (this bit:
Please choose one:
1. Match them after all.
2. Leave the existing station alone, and discard the update.
3. Give existing station a false code, and make the update the new WMO station.
Enter 1,2 or 3)
4. (speculation here) These manual overrides are not logged anywhere, meaning that for any given output of the model, it is impossible to know what was manually changed during the run, or the impact of those changes on the output.
5. (speculation here) There is no mechanism to save these choices, so the next time the model is run it may (probably will) produce different output.
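The failure mode in points 4 and 5 can be sketched in a few lines. The code below is a hypothetical reconstruction, not CRU's actual code; every name and value in it is invented for illustration. It shows why an unlogged interactive override makes a run non-reproducible: the operator's choice shapes the resulting database, but no record of that choice survives.

```python
# Hypothetical sketch of the operator prompt described in HARRY_READ_ME.txt.
# This is NOT CRU's code; all names, structures, and values are invented.

def resolve_station(master, update, choice):
    """Apply the operator's choice for two stations whose WMO codes match
    but whose data is inconsistent. Nothing is logged: the decision exists
    only in the returned result, so a rerun with a different choice
    silently yields a different database."""
    if choice == 1:
        # 1. Match them after all: append the update to the master record.
        merged = dict(master)
        merged["data"] = master["data"] + update["data"]
        return [merged]
    elif choice == 2:
        # 2. Leave the existing station alone, and discard the update.
        return [master]
    elif choice == 3:
        # 3. Give the existing station a false (fabricated) code, and
        #    promote the update to be the new WMO station.
        falsified = dict(master)
        falsified["wmo"] = -master["wmo"]  # a code that exists nowhere else
        return [falsified, update]
    raise ValueError("Enter 1, 2 or 3")

master = {"wmo": 3772, "name": "HEATHROW", "data": [9.1, 9.4]}
update = {"wmo": 3772, "name": "HEATHROW", "data": [11.0, 10.7]}

# Two operators facing the same prompt can produce two different databases,
# and nothing in the output records which choice was made:
print(resolve_station(master, update, 2))  # update silently discarded
print(resolve_station(master, update, 3))  # two stations, one false code
```

The point of the sketch is that the divergence happens at the prompt, before any science starts: absent an audit log, the same input files plus a different keystroke equal a different "observational" record.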
I'm not sure quite what to call the process on display here, but "science" is not the appropriate word. What does "peer review" mean when, in all likelihood, nobody has reviewed the operator's changes? I would go so far as to say that any published paper relying on the output of this process should be assumed suspect until demonstrated otherwise.
Look, who are you going to believe, Al Gore or your lying eyes?