Man versus Machine in Financial Markets

By Pascal vander Straeten March 27, 2023

Is it not interesting? Even though black-box quantitative financial models were singled out as one of the culprits of the 2007-2008 Global Financial Crisis, the flow of quants being hired into the financial services industry shows no sign of ebbing. Quite the contrary: Wall Street (and even Main Street) is bulking up its quant teams and relying more heavily on computer algorithms for all kinds of processes (front office, middle office, risk, back office) and investment decisions.

Many articles have argued against the heavy use of quant models in financial services decision-making, and even your humble servant once held that view, claiming that no machine is better than a man. Over time, however, I came to realize that we should embrace a hybrid "man-with-a-machine" approach, making the best of the tools we have and finding a smarter way to integrate these technologies. The postulate should be: no machine is better than a man, and no man is better than a machine.

Granted, the financial services industry is very different from other domains, especially academia, because financial market problems rarely carry a bare scientific truth from which a machine-learning algorithm can learn (similar events can be associated with very different outcomes). You need a massive volume of data to make algorithmic models work well, but that comes at the price of complexity and of ignorance of the underlying assumptions, and in the realm of financial markets, complexity and ignorance will, most of the time, kill you. It is therefore crucial to find the right equilibrium between complexity and simplicity. At some point, everyone in the industry makes the mistake of over-fitting models to data without realizing it.
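To make the over-fitting point concrete, here is a minimal sketch on deliberately synthetic data (the polynomial degrees, noise level, and train/test split are assumptions for illustration, not a real back-test): the more complex model wins in-sample and loses out-of-sample, which is exactly the trap described above.

```python
# A minimal sketch of over-fitting on toy data: the richer model fits the past
# better and the future worse. Degrees, noise level, and split are illustrative.
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)
t = np.arange(500.0)
prices = 100 + 0.05 * t + rng.normal(0, 2, size=t.size)   # noisy, roughly linear path

train, test = slice(0, 400), slice(400, 500)               # walk-forward style split

for degree in (1, 15):                                      # simple vs. over-parameterized
    model = Polynomial.fit(t[train], prices[train], degree)
    fit = model(t)
    rmse_in = np.sqrt(np.mean((fit[train] - prices[train]) ** 2))
    rmse_out = np.sqrt(np.mean((fit[test] - prices[test]) ** 2))
    print(f"degree {degree:2d}: in-sample RMSE {rmse_in:.2f}, out-of-sample RMSE {rmse_out:.2f}")
```

The richer model looks better on the data it was fitted to and noticeably worse on the data it has never seen, which is the hallmark of complexity bought at the expense of the underlying assumptions.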

Because of that, practitioners often underestimate the risk they are carrying on the books and get an unpleasant surprise. The future is never the same as the past, so it is crucial to craft your objective function clearly and to keep the underlying assumptions in mind: What types of risks is your organization willing to take? Is the algorithmic solution tailored to a particular kind of market environment, or is it meant to be used across many different types of markets? Often, organizations are not aware of their objective function until the market shows it to them.
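As a hedged illustration of what "crafting your objective function" can look like in code (the penalty weights, look-back, and function name are assumptions for the sketch, not a prescription), one might write the objective so that the risks the organization refuses to take are penalized explicitly rather than left implicit:

```python
# Illustrative objective function: reward average return, explicitly penalize
# volatility and worst drawdown. The weights encode what risks the organization
# will accept; they are assumptions for this sketch, not calibrated values.
import numpy as np

def objective(returns: np.ndarray,
              vol_penalty: float = 0.5,
              drawdown_penalty: float = 2.0) -> float:
    """Higher is better. `returns` is a series of periodic strategy returns."""
    equity = np.cumprod(1.0 + returns)
    drawdown = 1.0 - equity / np.maximum.accumulate(equity)   # peak-to-trough loss
    return (returns.mean()
            - vol_penalty * returns.std()
            - drawdown_penalty * drawdown.max())

rng = np.random.default_rng(1)
print(objective(rng.normal(0.0005, 0.01, size=252)))   # toy daily returns for one year
```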

Now, whether some like it or not, we will see more artificial intelligence in trading programs, given the gold rush in this space. The trouble with a gold rush, of course, is greed and hope, and organizations should be cautious about both. Fat tails are one underlying reason practitioners make such mistakes; serial correlation is another. This applies to financial markets and to Mother Nature alike. After a hundred-year earthquake, for example, the tectonic plates are likely to be weakened, making further earthquakes in the following years more probable: the initial shock puts pressure on highly stressed fault lines and triggers subsequent seismic events. So when a hundred-year earthquake occurs, the chances of another earthquake the year after are quite high, which is not what organizations typically expect.
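Both effects can be measured directly. The sketch below is a rough illustration rather than a calibrated model: it simulates a GARCH-style return series purely to have data that exhibits fat tails and volatility clustering, then reports excess kurtosis and the lag-1 autocorrelation of squared returns. On real market data you would pass in observed returns instead; the parameter values here are assumptions.

```python
# Sketch: measure fat tails (excess kurtosis) and serial correlation of squared
# returns (volatility clustering). The GARCH(1,1)-style simulator exists only to
# produce data with both features; substitute observed returns in practice.
import numpy as np

def simulate_garch(n=5000, omega=1e-6, alpha=0.1, beta=0.85, seed=2):
    rng = np.random.default_rng(seed)
    r = np.empty(n)
    var = omega / (1 - alpha - beta)            # start at the unconditional variance
    for i in range(n):
        r[i] = np.sqrt(var) * rng.standard_normal()
        var = omega + alpha * r[i] ** 2 + beta * var
    return r

def excess_kurtosis(x):
    z = (x - x.mean()) / x.std()
    return np.mean(z ** 4) - 3.0                # 0 for a normal distribution

def autocorr(x, lag=1):
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

r = simulate_garch()
print("excess kurtosis of returns:", round(excess_kurtosis(r), 2))        # > 0: fat tails
print("lag-1 autocorr of squared returns:", round(autocorr(r ** 2), 2))   # > 0: clustering
```

A model that assumes thin tails and independent draws will report neither number, which is precisely how the risk on the books ends up underestimated.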

And so it goes with the risk management of financial markets. For one thing, once a 10,000-year event occurs on a given day, it is much more likely to occur the next day as well. For another, a 10,000-year event truly transcends any single day: it may have started weeks earlier and will, in all likelihood, continue for several more months. To use another metaphor, a wet cycle in the weather can give you a hundred-year flood several years in a row; in a sense, though, it is a single event.

Now, it is clear that not all quant models are inherently wrong; rather, an iterative process is required to keep them useful. If we want to understand a financial crisis, we have to develop a storyboard and understand the limitations imposed by the hypotheses built into the model. Hence the importance of tweaking the narrative, or the model, as the storyline develops: models need to be like works of fiction, molding themselves to turns, unexpected shifts, and twists. So, while human error and emotion can be a blueprint for financial disaster (hence the growing importance of behavioral finance), putting blind faith in 100% computer-based strategies is not prudent either. Again, we need a hybrid man-with-machine model.

And there is nothing particularly magical about quantitative strategies. If anything, they are the opposite of magic. Models are designed, constraining hypotheses are built around them, data is crunched, and the findings result in a set of decisions, such as increasing, decreasing, or holding risk exposures. Perhaps it is a distortion, but the approach amounts to a safety-in-numbers method, with the bonus of removing any emotional response a human might inject into the process. The models, however, are only as good as the humans who build them, so a robust quantitative team keeps its models flexible to avoid derailment when unusual market events inevitably occur.
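One hedged way to picture that chain from crunched data to decision (the target, look-back window, and thresholds below are illustrative assumptions) is a simple volatility-target rule that maps a realized-volatility estimate into an increase, decrease, or hold decision on risk exposure:

```python
# Minimal sketch of a rules-based exposure decision: compare recent realized
# volatility to a target band and return increase / decrease / hold.
# All thresholds are illustrative assumptions, not a recommended calibration.
import numpy as np

def exposure_decision(returns: np.ndarray,
                      target_vol: float = 0.10,       # annualized volatility target
                      band: float = 0.2) -> str:
    realized = returns[-60:].std() * np.sqrt(252)     # last ~3 months, annualized
    if realized > target_vol * (1 + band):
        return "decrease"                             # running hotter than target
    if realized < target_vol * (1 - band):
        return "increase"                             # room to add risk
    return "hold"

rng = np.random.default_rng(3)
print(exposure_decision(rng.normal(0, 0.012, size=250)))
```

There is no magic in the rule; its value lies entirely in the hypotheses behind the target and the thresholds, which is why the humans who set them matter.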

Another option would be a blended approach that does not rely on back-testing or on factors that look good only on paper. Instead, we could build financial models derived from the decision parameters of some of the most successful risk managers and investors of all time, such as John Mack, Craig Broderick, Warren Buffett, or Peter Lynch, and then integrate those models into systematic portfolio and risk management approaches.
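As a hedged sketch of how such decision parameters could be encoded systematically (the cutoffs and field names below are assumptions for illustration, not any investor's actual criteria), a Lynch-style screen might combine a PEG ratio ceiling with a leverage cap:

```python
# Illustrative rules-based screen loosely inspired by Peter Lynch's well-known
# preference for low PEG ratios and modest leverage. Cutoffs and fields are
# assumptions for this sketch, not anyone's actual decision parameters.
from dataclasses import dataclass

@dataclass
class Fundamentals:
    ticker: str
    pe: float                  # price-to-earnings ratio
    earnings_growth: float     # expected annual EPS growth, in percent
    debt_to_equity: float

def passes_screen(f: Fundamentals) -> bool:
    peg = f.pe / f.earnings_growth if f.earnings_growth > 0 else float("inf")
    return peg < 1.0 and f.debt_to_equity < 0.8

universe = [
    Fundamentals("AAA", pe=12.0, earnings_growth=18.0, debt_to_equity=0.4),
    Fundamentals("BBB", pe=30.0, earnings_growth=10.0, debt_to_equity=0.3),
]
print([f.ticker for f in universe if passes_screen(f)])   # -> ['AAA']
```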

Granted, none of these proposed models will ever be foolproof, and even judged on long-term performance data, none of them will outperform the market all the time. What they do provide is a set of rules-based risk management and investing criteria that, if followed with consistency and discipline, stack the odds of success in your organization's favor.

Tragically, and as already mentioned, given the pace at which quantitative platforms are being embraced to suit the industry's ends, another sudden market crash led by automated trading is almost inevitable. To underline the severity of this concern, note that since the Global Financial Crisis of 2007-2008 a series of flash crashes has taken place, such as those of May 2010, April 2013, and June 2017, to name just a few. Feeding this concern is the fact that automated trading strategies are used not only by the hedge fund community but also by the fund industry giants, such as PIMCO, Franklin Templeton, BlackRock, JP Morgan Chase, T. Rowe Price, Capital Research & Management, Fidelity, and Vanguard.

Indeed, quantitative investment strategies have become the active portfolio manager's way of seeking higher returns to counter the growing amount of capital moving into passive strategies such as index funds. The hype for quant is illustrated by the growth of quant funds from approximately 10% of the market in 2007 to nearly 17% ten years later and 35% by 2020, as ever more fundamental hedge funds turn to quants, or to platforms that let non-technical portfolio managers run their strategies and ideas through algorithms.

As financial markets become even more interconnected and globalized, and as more funds and traders adopt automated trading strategies, the effects of such a crash will not be contained. They will be felt by all stakeholders, and thus across the broader financial markets as a whole. Compounding the problem, many risk managers and investors are not particularly savvy about these quant solutions, yet they enact quantitative strategies as if those were their own.

And then there are the big consulting firms, which lack sufficient industry knowledge because very few of their consultants have ever practiced as traders, portfolio managers, or risk managers in the financial services industry. Most are fresh graduates from so-called Ivy League colleges with a quant degree in their back pocket, but they have never experienced a true global financial crisis as a participant (that is, working in a bank, insurance company, asset management firm, and so on). So who are they to teach best market practice on the use of algorithms in, say, financial risk management and trading? As the saying goes, the best gamekeepers are poachers turned. Those big consulting firms would do better to hire more experienced bankers (just saying).

And to make things even worse, there are the regulators themselves. As markets become more complex and interconnected, regulators need to understand how markets are connected and how actions in one market may or may not affect another. At present, they seem to fail to grasp the lack of transparency in these markets.

So, what is the solution?

As someone whose first real awakening to the complacency of highly paid CEOs, rating agencies, and consultants came with the collapses of Bear Stearns and Lehman Brothers, I believe we need to learn from the past and take another in-depth look at the causes of the Global Financial Crisis and their takeaways.

As a result of an overzealous Washington that blindly (and in a populist way, to please the electorate on Main Street) enacted a string of very complex banking regulation reforms, much of Wall Street now lives under the assumption that less is more when it comes to oversight. A decade ago, the industry behaved as if it did not matter whether anyone understood the quality of the securitized mortgages packaged into CDOs, or how the CDS markets and the real economy were genuinely connected to each other. Today, it is as if no one cares that few understand how algorithms make trading and risk management decisions. As the financial markets and the assets within them become more complex and interconnected, fewer and fewer people understand them, even those in the weeds making trades and risk management decisions on a daily basis.

A sense of déjà vu, right?

An analogy from the world of structural engineering comes to mind. If a large group of people on the second floor of a house begins to dance and jump in sync, no matter how solid the flooring is, at some point it will start to warp and flex. When this up-and-down movement inevitably becomes too extreme for the material the flooring is made of to withstand, it will sag and collapse.

Market volatility over the past decade has exhibited a similarly exaggerated movement. With a growing number of funds crowding into quantitative investment strategies, not to mention the many new data analytics platforms that financial institutions integrate into their daily risk management activities, the day is not far off when, without proper regulatory and corporate remediation, the structure of the financial markets too may witness swings that are too extreme, ultimately bringing it to the point of breakdown. This is not to say this situation will necessarily take place; however, remedial action is required to mitigate these threats, reduce the probability of such a downturn, and further support the market.
