Before the advent of computers, meteorologists received data from regional weather stations and plotted a weather map.
The map was created by drawing lines called isobars, which connected points of equal barometric pressure (Figure 1):
I know from personal experience in my Air Force days that when these meteorologists gave a forecast, it was because they knew and understood the patterns, and they achieved reasonable accuracy. Forecast accuracy deteriorated after computer models took over the forecasts.
As recently as 2008, Tim Palmer, a climate modeler at the European Centre for Medium-Range Weather Forecasts in Reading, England, said in New Scientist:
I don’t want to undermine the IPCC, but the forecasts, especially for regional climate change, are immensely uncertain.
None of this is surprising. There is inadequate data on which to build the models, and many of the mechanisms that create the weather are inadequately understood.
With global climate models, it is worse because the Intergovernmental Panel on Climate Change (IPCC) leaves out major mechanisms. They acknowledged one of the limitations in the Third Assessment Report (TAR) when they wrote:
In climate research and modeling, we should recognize that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible.
Nobody provided that quote at the recent Paris Climate Conference. They were all led to believe the long-term predictions were possible and accurate.
Figure 2 shows the basic theoretical structure of a global climate model.
It is not a model like a miniature replica of a car. It is a mathematical model, with numbers and formulae representing data and mechanisms. Modelers mathematically divide the world into a series of cubes based on the latitude and longitude grid, with a vertical scale added.
They claim that the smaller the grid system, the more accurate the model, but this is inaccurate because it doesn’t matter how small the grid is if the data is not available – and it isn’t.
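The grid idea described above can be sketched in a few lines of code. The grid spacings and number of vertical levels here are hypothetical, chosen only to illustrate how quickly the number of cells grows as the grid shrinks, and how each cell still needs observational data:

```python
import math

def grid_cells(lat_step_deg, lon_step_deg, vertical_levels):
    """Count the cells in a simple latitude/longitude/height grid.

    An illustrative sketch only; real climate models use far more
    sophisticated discretizations. All parameter values are hypothetical.
    """
    n_lat = math.ceil(180 / lat_step_deg)   # bands from pole to pole
    n_lon = math.ceil(360 / lon_step_deg)   # slices around the globe
    return n_lat * n_lon * vertical_levels

# Halving the horizontal grid spacing quadruples the number of surface
# cells -- every one of which still needs data to initialize.
coarse = grid_cells(2.0, 2.0, 30)  # 486,000 cells
fine = grid_cells(1.0, 1.0, 30)    # 1,944,000 cells
print(coarse, fine, fine // coarse)
```

A finer grid multiplies the number of cells, but it cannot conjure up observations for cells where no measurements were ever taken.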
Oceans cover 70 percent and land 30 percent of the Earth’s surface. There is essentially no weather data for approximately 85 percent of the globe.
There is virtually none for the 70 percent that is ocean, and on the 30 percent that is land there are very few stations for the 19 percent that is mountains, the 20 percent that is desert, the 20 percent that is boreal forest, the 20 percent that is grassland, and the 6 percent that is tropical rain forest.
There is so little data that for many parts of the world they use a single station as representative of a 1200 km radius.
Figure 3 shows 1200 km radius circles drawn around Calgary and Toronto to illustrate the problem.
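As a rough check on what a 1200 km radius implies, one can compute the fraction of the Earth's surface that a single such circle covers, using the spherical-cap area formula. This is a back-of-envelope sketch; it says nothing about where stations are actually placed:

```python
import math

R_EARTH_KM = 6371.0  # mean Earth radius

def cap_fraction(radius_km):
    """Fraction of Earth's surface inside a circle of the given
    great-circle radius (spherical-cap area / area of the sphere)."""
    cap_area = 2 * math.pi * R_EARTH_KM ** 2 * (1 - math.cos(radius_km / R_EARTH_KM))
    sphere_area = 4 * math.pi * R_EARTH_KM ** 2
    return cap_area / sphere_area

# One station treated as representative of a 1200 km radius "covers"
# less than 1 percent of the Earth's surface.
print(f"{cap_fraction(1200.0):.3%}")
```

At under one percent of the surface per circle, well over a hundred such circles would be needed even to tile the globe without overlap, which underlines how sparse the station network is.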
Surface data is completely inadequate, but the situation is even worse in the vertical, where there is almost no data above the surface in either space or time.
Even the smallest grid size is so large that major weather systems such as thunderstorms and tornadoes cannot be included.
In addition, there are virtually no measures of atmospheric constituents such as water or dust particles.
The data for building computer climate models is inadequate and, therefore, the forecasts are always wrong.
To get the results they wanted, they created the data and selected the mechanisms. As IPCC reviewer Dr. David Wojick explained:
“The public is not well served by this constant drumbeat of false alarms fed by computer models manipulated by advocates.”
Professors Green and Armstrong note:
“We audited the forecasting processes described in Chapter 8 of the IPCC’s WG1 Report to assess the extent to which they complied with forecasting principles.

“The forecasting procedures that were described violated 72 principles. Many of the violations were, by themselves, critical. … The forecasts in the Report were not the outcome of scientific procedures. In effect, they were the opinions of scientists transformed by mathematics and obscured by complex writing.

“Research on forecasting has shown that experts’ predictions are not useful.”
However, there is one major reason they were able to fool most of the people: the unjustified adulation of computers and a widespread ignorance of their limitations. As Pierre Gallois explained:
If you put tomfoolery into a computer, nothing comes out but tomfoolery.
But this tomfoolery, having passed through a very expensive machine, is somehow ennobled, and no-one dares criticize it.
Proof that it is tomfoolery comes from the fact that every forecast the IPCC has made has been wrong.
They don’t advertise the failures. Instead, they attack and try to silence the few who do criticize.