By 2011, following a series of high-profile mistakes, the US intelligence community (IC) had been forced into some soul-searching. After the first Gulf War in 1991, it found that Saddam Hussein had been much further along in nuclear weapons development than it had realised. Then 9/11 happened, and it later emerged that key agencies had missed opportunities to place several of the terrorists on watch lists, and that a number of suggested leads were not pursued. The community came under further criticism in the run-up to the second Gulf War in 2003 for overconfident pronouncements about Iraqi weapons of mass destruction, which ultimately proved to be completely wrong.
After that last debacle became apparent in the post-invasion years, a branch of the IC, the Intelligence Advanced Research Projects Activity (IARPA), agreed to fund a number of private researchers to run tournaments in which volunteers did their best to answer fairly tricky questions about future world events. Launched in 2011, the four-year contest required participants to provide forecasts on 500 questions – ranging from the future price of oil and the global financial outlook to the path of geopolitical rivalries. The winning team of superforecasters proved to be about 30% better at seeing the future than the intelligence community experts doing their best on exactly the same questions.
So, what was their secret? The victorious team was created by grading volunteers’ forecasting abilities and inviting the better-performing participants into a new round of questions; this was repeated until a final team of ‘winners’ emerged. An analysis of these individuals argued they shared some similar qualities: they were actively open-minded, seeking new data, updating their opinions when that data required it, and pursuing reliable expert advice to inform their own views. Above all, however, the research revealed that we can all get better at forecasting by tracking our successes and failures and adjusting accordingly. In other words, regularly rating ourselves – or, more importantly, rating the forecasters we regularly turn to – will enable us to identify those that may add value.
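Grading of this kind typically rests on a proper scoring rule; the best known is the Brier score, the mean squared error between a forecaster’s stated probabilities and what actually happened. A minimal sketch follows – the forecaster names and numbers are hypothetical, invented purely for illustration, not taken from the tournament:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes.

    0.0 is a perfect score; always answering 50% scores 0.25; lower is better.
    """
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Two hypothetical forecasters on the same five yes/no questions
outcomes  = [1, 0, 1, 1, 0]      # what actually happened
confident = [0.9, 0.2, 0.8, 0.7, 0.1]
hedger    = [0.5, 0.5, 0.5, 0.5, 0.5]

print(f"{brier_score(confident, outcomes):.3f}")  # prints 0.038
print(f"{brier_score(hedger, outcomes):.3f}")     # prints 0.250
```

Tracked over many questions, a score like this makes it possible to compare the forecasters one regularly turns to on a common footing.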
The illusion of knowledge
In a world in which we rate restaurants on TripAdvisor and the effectiveness of power tools on Amazon, it seems odd that we fail to rate the economists who provide the forecasts on which world-changing decisions are sometimes made.
The poor track record of economic forecasters is a topic that has occupied the highly respected investor Howard Marks for many years; he first penned one of his famous essays on the subject in February 1993. His irritation centres on the argument that forecasting economists have no choice but to base their judgements on models – be they complex or informal, mathematical or intuitive. In his most recent essay, titled ‘The illusion of knowledge’, he argues that all models consist of assumptions: “If A happens, then B will happen.” However, taking the US economy, with its 330m people, as his example, there are millions of consumers, alongside millions of workers, producers, intermediaries and government agents, all interacting together, with many people falling into more than one category. Predicting the behaviour of one individual is difficult; predicting the same for many millions of people – and their myriad interrelationships – in an economy of this size is incomprehensibly complicated.
Furthermore, the reliance on these assumptions becomes even more questionable when the statistical principles underpinning them are examined. In arguing his case, Marks’ essay presents two cases in point:
Stationarity: An assumption that the past is a statistical guide to the future, based on the idea that the big forces impacting a system do not change over time. If you want to know how tall to build a dam, look at the last 100 years of flood data and assume the next 100 years will be the same. Stationarity is a concept that works right up until the moment it does not.
In other words; things that have never happened before, happen all the time.
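The dam example can be made concrete with a small, entirely synthetic simulation: size the dam from a century of simulated flood data, then let the ‘big forces’ shift and count the breaches. Every number below is invented for illustration:

```python
import random

random.seed(0)

# Hypothetical data: 100 years of annual peak flood heights (metres)
# drawn from a stable climate regime.
history = [random.gauss(5.0, 1.0) for _ in range(100)]

# The stationarity assumption: size the dam for the worst year on record,
# plus a safety margin, and assume the next century looks like the last.
dam_height = max(history) + 0.5

# Then the regime changes -- the forces driving the system shift -- and the
# old record is no longer a guide to the future.
new_regime = [random.gauss(8.0, 1.5) for _ in range(100)]
breaches = sum(h > dam_height for h in new_regime)

print(f"Dam sized to {dam_height:.1f}m is breached in {breaches} of the next 100 years")
```

Under the old regime the dam is never breached by construction; once the distribution shifts, the ‘100-year’ design fails repeatedly – stationarity works right up until it does not.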
Cromwell’s rule: If something has a one-in-a-billion chance of being true, and you interact with billions of things during your lifetime, you are nearly assured to experience some astounding surprises.
In other words; always leave open the possibility of the unthinkable coming true.
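The arithmetic behind that one-in-a-billion claim is easy to check: across n independent encounters, the chance of at least one event of probability p is 1 − (1 − p)^n, which for p = one in a billion and n = one billion comes to roughly 63%, close to the limiting value 1 − 1/e. A quick sketch:

```python
import math

# The (stylised) numbers from the text: a one-in-a-billion event,
# and a billion independent encounters over a lifetime.
p = 1e-9
n = 1_000_000_000

# Probability of at least one 'miracle' occurring
at_least_one = 1 - (1 - p) ** n

print(f"{at_least_one:.1%}")            # prints 63.2%
print(f"{1 - math.exp(-1):.1%}")        # the 1 - 1/e limit, also 63.2%
```

So even vanishingly unlikely events become near-certainties given enough exposures – exactly why the unthinkable should never be ruled out.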
So where does all this leave us?
An article published by the FT at the end of 2023 reported that central bankers across major economies were rethinking their approach to economic forecasting after their high-profile failures to spot the recent inflationary pulse. The European Central Bank (ECB), the Federal Reserve (Fed), the Bank of England and other official forecasters failed to see how the end of Covid-19 lockdowns and an energy shock triggered by Russia’s invasion of Ukraine could pave the way for the worst inflationary spiral in a generation. After responding with aggressive rate rises, central banks have openly engaged in intensive post-mortems as they analyse the reasons for their failure. This radical candour is to be applauded, and Christine Lagarde, the ECB’s president, has implied the central bank needs to learn from its mistakes and ‘cannot just rely only on textbook cases and pure models’.
Similar messaging has also been emanating from the world’s biggest producer of economic forecasts, the Fed, home to more than 400 PhD economists. Its forward guidance programme has been heavily criticised – so much so that it has strained the central bank’s credibility – and Chair Powell seems to agree that providing estimates of where the Fed sees interest rates, economic growth and inflation at different points in the future should be scrapped.
Implications for portfolios
Notwithstanding everything written here, we are consumers of financial forecasts and – in common with our peers – we subscribe to publications prepared by economists and periodically invite them to join us for briefings. In recent years, we have developed a growing appreciation of the need to assess how often they are right; however, in the spirit of radical candour, we have yet to settle on a method to quantify their contributions to our investment returns. We are not alone in this: to reference Marks once more, the world seems incredibly short on information regarding the value added by macroeconomic forecasts, especially given the large number of people involved in the pursuit.
While recognising the pitfalls and near-impossibility of predicting future events, one could argue it has never been more important to think about what might happen next. We say this in the context of the impact of Artificial Intelligence (AI). The tech giants and beyond are set to spend over $1tn on AI capex in the coming years. Will this large spend ultimately pay off? Some commentators are increasingly sceptical, seeing only limited upside from AI over the next decade and questioning its potential to solve the complex problems that would justify the costs. Others are much more optimistic about AI’s economic value and its ability to generate returns beyond the current “picks and shovels” phase, even if the “killer application” has yet to emerge. And if such an application were to emerge, what could constrain AI’s growth? Perhaps a GPU chip shortage, or possibly insufficient electricity to meet requirements?*
These questions point towards an investment opportunity we have been exploring of late. US power infrastructure is not yet prepared for the coming surge in demand from AI and other sources, setting the stage for a challenging energy crunch in the coming years. How will this growing problem be solved? The market is suggesting it will be, given the ongoing march of ‘The Magnificent Seven’; however, entire sectors needed to provide part of the solution – most notably materials, energy and utilities – are currently getting very little credit, having lagged substantially.
While we do not have a crystal ball, it is not hard to imagine a world where the value ascribed to various industries could shift dramatically. We remain open-minded to this possibility and ready to adapt accordingly.
Russell Waite and Jon Proudfoot
Sources: The Illusion of Knowledge – Memos from Howard Marks, September 2022
Why economic forecasting has always been a flawed science – The Guardian, September 2017
Central banks rethink forecasting after failures on inflation – The FT, December 2023
*Gen AI: too much spend, too little benefit? – Top of Mind, Goldman Sachs Global Macro Research, June 2024
Affinity Private Wealth is a trading name for APW Investors Limited, which is regulated by the Jersey Financial Services Commission. Registered office 27 Esplanade, St Helier, Jersey JE4 9XJ.