George Rebane
[This is the addended version of my regular KVMR commentary which was broadcast on 4 April 2014.]
Recently our government has had some problems forecasting events critical to the conduct of America’s foreign policy. We seem to have missed the boat in calling Iran’s response to pressure over its nuclear weapons development, Syria’s reaction when confronted with our ‘red line’, Al Qaeda’s attack in Benghazi, Putin’s reaction to warnings about Crimea, and the list goes on.
Similar problems have been encountered for years by government and private sector economists and analysts in forecasting our economy’s response to various fiscal and monetary policies, and, of course, how people would react to new social programs offered by the feds. The bottom line is that economists, analysts, bureaucrats, and politicians are always surprised by reality when it happens.
The art of forecasting such events and responses has been a mainstay of our intelligence community, and a lot of effort has gone into developing new methods and processes to improve the accuracy of forecasts. Today this research involves lots of ordinary people who are neither forecasting nor intelligence specialists. The field is loosely described as soliciting ‘the wisdom of the crowd’.
It turns out that when you ask a lot of people to estimate some outcome, and then aggregate their answers, you wind up with a surprisingly good result. This was discovered about a century ago when Francis Galton analyzed the guesses of fairgoers in England who had been asked to estimate the weight of an ox. The average of the several hundred submitted estimates came within a pound of the correct weight of the well-over-half-ton animal. Since then this has piqued the interest of researchers, and today there are several organizations using ‘crowdsourcing’ to forecast events.
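To see why averaging works, consider a toy simulation (a minimal sketch in Python; the error model and all the numbers are my own illustrative assumptions, not Galton’s data). If individual guesses scatter around the truth without a shared bias, their errors largely cancel in the average:

```python
import random
import statistics

random.seed(42)

TRUE_WEIGHT = 1198  # pounds -- roughly the weight of Galton's famous ox

# Simulate several hundred fairgoers, each guessing with a large
# individual error (assumed unbiased and independent here).
guesses = [random.gauss(TRUE_WEIGHT, 100) for _ in range(800)]

typical_individual_error = statistics.mean(abs(g - TRUE_WEIGHT) for g in guesses)
crowd_error = abs(statistics.mean(guesses) - TRUE_WEIGHT)

print(f"typical individual error: {typical_individual_error:.0f} lb")
print(f"crowd-average error:      {crowd_error:.1f} lb")
```

The caveat, of course, is independence: when the guessers copy each other or share the same bias, the errors no longer cancel and the crowd is no wiser than its loudest member.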
A most interesting and productive effort, now ongoing for several years, is the Good Judgment Project at the University of Pennsylvania. The program is supported by IARPA, the Intelligence Advanced Research Projects Activity, through its ACE (Aggregative Contingent Estimation) Program. The Good Judgment people state that “the ACE Program aims ‘to dramatically enhance the accuracy, precision, and timeliness of forecasts for a broad range of event types, through the development of advanced techniques that elicit, weight, and combine the judgments of many intelligence analysts.’ The project is unclassified: our results will be published in traditional scholarly and scientific journals, and will be available to the general public.”
The really cool part of this forecasting effort is that it is done entirely online, involves important real-world issues and events, and that anyone can join the forecasting team. The part that I find exciting is that participants learn how to rate such future events by attaching probabilities to their occurrence. The part that IARPA finds most interesting is that ACE forecasts are frequently more accurate than those of their professional analysts. And the entire project is conducted openly, with a lot of feedback to the forecasters so that they may improve their performance.
Parallel to the ACE program is a large research effort devising and testing new methods of aggregating and scoring all the thousands of probabilistic predictions collected through the project’s website from plain folks like you and me. One of the added benefits that participants get is self-knowledge about their own outlook, about how they may be overconfident, or unreasonable, or suffer from one of the several kinks built into the human psyche that cause biases and, in turn, errors of judgment.
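For a flavor of how such probabilistic forecasts can be scored, here is a minimal sketch of the Brier score, a standard scoring rule for forecasts of this kind (the project’s actual aggregation and scoring machinery is considerably more elaborate):

```python
def brier_score(forecast_prob: float, occurred: bool) -> float:
    """Squared error between the forecast probability of a binary event
    and what actually happened (1 if it occurred, 0 if not).
    Lower is better: 0.0 is perfect, 0.25 is what constant 50/50
    fence-sitting earns, 1.0 is confidently wrong."""
    return (forecast_prob - (1.0 if occurred else 0.0)) ** 2

print(brier_score(0.70, True))   # 0.09 -- confident and right
print(brier_score(0.70, False))  # 0.49 -- confident and wrong
print(brier_score(0.50, True))   # 0.25 -- the fence-sitter's score
```

Averaged over many questions, a score like this rewards forecasters who are both well calibrated and willing to commit, which is exactly the kind of feedback that helps participants spot their own overconfidence.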
So if you want to find out more about the wisdom of crowds, and also become a member of a crowd that does important forecasting work, then go to goodjudgmentproject.com and sign up.
My name is Rebane, and I also expand on this and related themes on georgerebane.com where the transcript of this commentary is posted with relevant links, and where such issues are debated extensively. However my views are not necessarily shared by KVMR. Thank you for listening.
[Addendum – Several years back I wrote a piece entitled ‘Forecasting Follies’ (12 March 2006). It may have been posted on The Union’s website, I forget. But I ran across it in my files and thought it would provide some more perspective to the above commentary, so I’m including an edited version of it here.]
Economists make up one of the most over-trusted groups of forecasters, whose output is religiously followed by the press and the securities markets. Now admittedly, forecasting economic indicators and measurements is a hard job, but the entire process of dealing with such forecasts ranges somewhere between the ludicrous and the slapstick. Recall the vaudeville classics where the fool is punished again and again in a skit where he repeatedly gets suckered into the same situation and then pummeled by his antagonist. The fool repeats the same remedy expecting a different outcome, and the crowd roars as he gets his due.
If a meteorologist predicted the next day’s temperature and usually came within plus or minus ten degrees of the actual temperature but almost never hit it exactly, would you make a big bet that the next prediction would be spot on and then go into histrionics when it turned out two degrees too low? Well, that’s exactly what happens in the securities markets when an economic prediction of, say, the number of new jobs created in the next quarter, or the nation’s GDP growth rate, is later compared to the actually measured numbers. And the same circus goes on year after year with no sign that things will ever change.
People familiar with random processes and probability have a more measured response to such antics, based on arguments that generally go as follows. The economy is a complex dynamic system that has yet to be well understood by humans, i.e. accurately modeled mathematically. Many different models have been attempted that find favor with different economists. These work by accepting values for a set of economic, financial, demographic, and other parameters, and then turning the crank to get the desired output number. Each model itself contains many approximations of real-world processes thought to be relevant to the required prediction, say, a particular definition of the unemployment rate (there are many). Furthermore, the inputs themselves are not known exactly, because of the complexity and limited reliability of the measurement processes and, perhaps, because of conflicting definitions of the input parameters themselves. For example, if there are competing definitions of what constitutes the workforce, then whose definition are you going to use and how will you measure it?
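To make the fuzzy-inputs point concrete, here is a toy sketch (the model, the input distributions, and every number in it are invented for illustration, not drawn from any real economic model). Feed uncertain inputs into even a trivially simple model and the output is a distribution, not a single number:

```python
import random

random.seed(0)

def toy_jobs_model(labor_force, participation_rate, churn_rate):
    """A deliberately made-up 'model': quarterly new jobs as a simple
    product of three uncertain inputs. Real models are far more
    elaborate, but input uncertainty propagates the same way."""
    return labor_force * participation_rate * churn_rate

# Each input is only approximately known, so draw it from a distribution.
runs = []
for _ in range(10_000):
    labor_force = random.gauss(160e6, 2e6)    # size: measurement error
    participation = random.gauss(0.63, 0.01)  # competing definitions
    churn = random.gauss(0.005, 0.0004)       # modeling approximation
    runs.append(toy_jobs_model(labor_force, participation, churn))

runs.sort()
print(f"median forecast: {runs[5000]:,.0f} new jobs")
print(f"90% of runs fall between {runs[500]:,.0f} and {runs[9500]:,.0f}")
```

Reporting only the middle of such a spread, while discarding the spread itself, is exactly the sleight of hand the next paragraphs lampoon.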
Here you get the picture of fuzzy inputs being shoveled into a highly touted yet rickety number-crunching machine (“the model”) which spits out a number. This number is then wiped off, polished up a bit, and ushered out into the public spotlight, where it is introduced as the ‘latest consensus estimate’ or the ‘quarterly forecast from the Mumbledrum School of Business’ at some Carnegie-Cabbage University. Everyone then genuflects, and the number is placed squarely into the bullseye of a target and pinned to the wall with other such forecast numbers to await the arrows of reality.
Then for some unknown reason all the people place their bets, hold their breath, and expect reality to score bullseyes. Well, reality will do what it always does and fire arrows all over the place. Some even go into the targets, but almost never into the bullseyes. And then the pandemonium in the markets follows, as each bullseye is missed either high or low, or God knows where.
Now a more reasonable approach would be for the economists to never forecast a single number like 490,000, but instead announce a qualified range such as ‘I think there is a 90% chance that (it) will come in between 420,000 and 560,000’. This would be a more useful message to the markets about where and with what likelihood reality might strike.
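As a minimal sketch of how such a qualified forecast could be produced (the 42,500 error figure is an assumption chosen to reproduce the range above, and the normal-error model is itself an idealization):

```python
from statistics import NormalDist

def ninety_pct_interval(point_forecast, error_sd):
    """90% prediction interval around a point forecast, assuming the
    forecast error is roughly normal with the given standard deviation
    (estimated, say, from the forecaster's own past misses)."""
    z = NormalDist().inv_cdf(0.95)  # ~1.645 for a two-sided 90% interval
    return point_forecast - z * error_sd, point_forecast + z * error_sd

low, high = ninety_pct_interval(490_000, 42_500)
print(f"90% chance of between {low:,.0f} and {high:,.0f} new jobs")
# -> roughly 420,000 to 560,000
```

Note that the honest extra ingredient here is the error_sd, a forecaster’s own track record of misses; publishing that alongside the point estimate is the whole reform being proposed.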
The question that goes begging is: why don’t they do that? Why don’t the journalists require such reporting, and why don’t the so-called experts come out with such assessments? My answer is that the experts would be relieved to comply in a heartbeat, but the journalists are too dumb to understand the greater utility of such forecasts. And if the odd journalist is smart enough, s/he thinks the audience is too dumb to use such an answer. So single-number forecasts continue to be issued, with the inevitable overreactions to ‘the first quarter GDP number came in 0.5% under the analysts’ prediction and after early gains the markets closed off significantly.’ Stuff and nonsense.
For those readers who wish to take a deeper look at the follies and realities of forecasting and interpreting the causes of swings in financial/economic forecasts, I recommend the highly readable Fooled by Randomness by Nassim Nicholas Taleb, professor of risk engineering, founder of a financial research laboratory, and former trader on Wall Street.

