July 18, 2019

It has come to my attention that some of my readers are wondering if I'm too bearish on the stock market.

It's understandable in light of my recent articles describing a possible market melt-up. The melt-up is already here. It doesn't change my view that we are near the end of this magnificent bull market.

It's important to point out that my view of the stock market is not based on a hunch, or a feeling, or simply my gut reaction to geopolitical events. It's based on probability, specifically Bayesian Inference. This is a robust form of statistical analysis of possible future outcomes in an uncertain realm like the stock market.

So, this article will address my methodology for making market predictions. Warning: it gets down in the weeds of statistical analysis, but it also has a narrative that anyone can follow.

Bayesian Inference offers a rigorous approach to calculating probabilities based on new information. Today it’s everywhere, from Google search algos to Amazon shopping algos and self-driving cars. The U.S. Coast Guard uses Bayesian Inference to identify the most likely area to search for someone at sea who has called Mayday.

What if one does not know the exact probabilities of a future event happening but only has a rough estimate? This is where the subjective view comes into play.

Many investors put great emphasis on the estimates and simplified forecasts given by experts in the financial media. Bayesian Inference gives us the ability to produce new, defensible estimates for the more complicated questions introduced by the inevitable roadblocks of stock market forecasting.

Instead of guessing, we can use Bayesian Inference if we have the right information to start with.

What is Bayesian Inference?

Bayesian Inference is a type of statistical analysis. It’s a particular approach to applying probability to statistical problems. It provides us with mathematical tools to update our beliefs about random events in light of seeing new data or evidence about those events.

In particular, Bayesian Inference interprets probability as a measure of believability or confidence that an individual may possess about the occurrence of a particular event.

We may have a prior belief about an event, but our beliefs are likely to change when new evidence is brought to light. Bayesian Inference gives us a solid mathematical means of incorporating our prior beliefs, and newly discovered evidence, to produce new posterior beliefs.

How it works

In the Bayesian framework an individual would apply a probability of 0 (zero) when they have no confidence in an event occurring, while they would apply a probability of 1 (100%) when they are absolutely certain of an event occurring. If they assign a probability between 0 and 1, this allows for weighted confidence in the possible outcomes.

In order to carry out Bayesian Inference, we need to utilize Bayes' rule and interpret it correctly. Below, we derive Bayes' rule using the definition of conditional probability. Buckle your seatbelts, because this is going to get very wonky. If you don’t want to get into the weeds, skip this part.

Defining Bayes’ Rule

We begin by considering the definition of conditional probability, which gives us a rule for determining the probability of an event A, given the occurrence of another event B. An example question in this vein might be “What is the probability of rain occurring given that there are clouds in the sky?”

The mathematical definition of conditional probability is as follows:

P(A|B) = P(A∩B) / P(B)

This simply states that the probability of rain given that we have seen clouds is equal to the probability of rain and clouds occurring together, relative to the probability of seeing clouds at all.
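
To make this concrete with some made-up numbers: suppose the probability of rain and clouds occurring together is P(A∩B) = 0.18, and the probability of seeing clouds at all is P(B) = 0.50. Then the probability of rain given clouds is P(A|B) = 0.18 / 0.50 = 0.36.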

If we multiply both sides of this equation by P(B), we get:

P(B) P(A|B) = P(A∩B)

But we can simply make the same statement about P(B|A), which is akin to asking, “What is the probability of seeing clouds, given that it is raining?”:

P(B|A) = P(B∩A) / P(A)

Note that P(A∩B) = P(B∩A), and so by substituting the above and multiplying by P(A), we get:

P(A) P(B|A) = P(A∩B)

We are now able to set the two expressions for P(A∩B) equal to each other:

P(B) P(A|B) = P(A) P(B|A)

If we now divide both sides by P(B), we arrive at the celebrated Bayes’ rule:

P(A|B) = P(B|A) P(A) / P(B)
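
Continuing the made-up numbers from above: if the probability of clouds given rain is P(B|A) = 0.90 and the probability of rain is P(A) = 0.20, then Bayes’ rule gives P(A|B) = 0.90 × 0.20 / 0.50 = 0.36, the same answer we obtained directly from the joint probability.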

However, it will be helpful for later usage of Bayes’ rule to rewrite the denominator, P(B), on the right-hand side of the above relation in terms of P(B|A). We can actually write:

P(B) = ∑_{a∈A} P(B∩a)

This is possible because the events a ∈ A form an exhaustive partition of the sample space.

So, by substituting the definition of conditional probability, we get:

P(B) = ∑_{a∈A} P(B∩a) = ∑_{a∈A} P(B|a) P(a)

Finally, we can substitute this into Bayes’ rule from above to obtain an alternative version of Bayes’ rule, which is used heavily in Bayesian inference:

P(A|B) = P(B|A) P(A) / ∑_{a∈A} P(B|a) P(a)
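
To see this alternative form in action, here is a minimal Python sketch. The two hypotheses, their priors, and the likelihoods are invented purely for illustration; they do not come from an actual market model.

```python
# A minimal sketch of the alternative form of Bayes' rule.
# All numbers below are made up for illustration.

def bayes_posterior(priors, likelihoods):
    """Return P(a|B) for each hypothesis a, given P(a) and P(B|a)."""
    # Evidence: P(B) = sum over the partition of P(B|a) * P(a)
    evidence = sum(p * l for p, l in zip(priors, likelihoods))
    # Posterior: P(a|B) = P(B|a) * P(a) / P(B)
    return [p * l / evidence for p, l in zip(priors, likelihoods)]

# Two competing hypotheses about the market: "bull" and "bear".
priors = [0.7, 0.3]        # P(bull), P(bear) before the new data
likelihoods = [0.2, 0.6]   # P(sell-off | bull), P(sell-off | bear)

print(bayes_posterior(priors, likelihoods))
# [0.4375, 0.5625] -- the sell-off shifts belief toward the bear case
```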

Now that we have derived Bayes’ rule, we are able to apply it to statistical inference.

Bayes’ Rule for Bayesian Inference

P(θ|D) = P(D|θ) P(θ) / P(D)

Where, using the fairness θ of a coin, inferred from a record of flips D, as a running example (a short numerical sketch follows the list):

  • P(θ) is the prior. This is the strength of our belief in θ without considering the evidence D. In the coin example, it is our prior view on how fair the coin is.
  • P(θ|D) is the posterior. This is the refined strength of our belief in θ once the evidence D has been taken into account. After seeing, say, 4 heads out of 8 flips, this is our updated view on the fairness of the coin.
  • P(D|θ) is the likelihood. This is the probability of seeing the data D as generated by a model with parameter θ. If we knew the coin was fair, this would tell us the probability of seeing a given number of heads in a particular number of flips.
  • P(D) is the evidence. This is the probability of the data, determined by summing (or integrating) across all possible values of θ, weighted by how strongly we believe in those particular values. If we held several views of the coin's fairness (but didn't know for sure), this would tell us the probability of seeing a certain sequence of flips across all of those views.
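
Here is the numerical sketch promised above, written in Python. The grid of candidate θ values and the uniform prior are illustrative assumptions; the binomial likelihood follows from treating each flip as independent.

```python
# Update a discrete set of beliefs about a coin's fairness θ
# after seeing 4 heads in 8 flips. Grid and prior are assumptions.
from math import comb

thetas = [0.1, 0.3, 0.5, 0.7, 0.9]        # candidate values of θ
prior = [1 / len(thetas)] * len(thetas)   # P(θ): uniform prior

heads, flips = 4, 8
# P(D|θ): binomial likelihood of 4 heads in 8 flips for each θ
likelihood = [comb(flips, heads) * t**heads * (1 - t)**(flips - heads)
              for t in thetas]

# P(D): the evidence, summing P(D|θ) P(θ) over all candidate θ
evidence = sum(l * p for l, p in zip(likelihood, prior))

# P(θ|D): the posterior, via Bayes' rule
posterior = [l * p / evidence for l, p in zip(likelihood, prior)]

for t, post in zip(thetas, posterior):
    print(f"θ = {t}: posterior ≈ {post:.3f}")
# θ = 0.5 gets the most weight: 4 heads in 8 flips suggests a fair coin
```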

The goal of Bayesian Inference is to provide us with a rational and mathematically sound procedure for combining our prior beliefs with any evidence at hand in order to produce an updated posterior belief. What makes it such a valuable technique is that posterior beliefs can themselves be used as prior beliefs when new data arrive.

Hence Bayesian Inference allows us to continually adjust our beliefs under new data by repeatedly applying Bayes' rule.

Bayesian Inference gives us a rational procedure to go from an uncertain situation with limited information to a more certain situation with significant amounts of data.
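
As a final sketch, here is the same coin example run sequentially in Python: each batch of flips produces a posterior that becomes the prior for the next batch. The batches of data are invented for illustration.

```python
# Sequential Bayesian updating: yesterday's posterior is today's prior.
from math import comb

def update(prior, thetas, heads, flips):
    """One application of Bayes' rule over a discrete grid of θ."""
    likelihood = [comb(flips, heads) * t**heads * (1 - t)**(flips - heads)
                  for t in thetas]
    evidence = sum(l * p for l, p in zip(likelihood, prior))
    return [l * p / evidence for l, p in zip(likelihood, prior)]

thetas = [0.1, 0.3, 0.5, 0.7, 0.9]
belief = [0.2] * 5                    # start from a uniform prior

# Three invented batches of flip data, applied one after another
for heads, flips in [(4, 8), (7, 10), (6, 10)]:
    belief = update(belief, thetas, heads, flips)   # posterior -> prior

print([round(b, 3) for b in belief])
```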

Final thoughts

My two econometric models, one for the stock market and one for the economy, are available by subscription. Thanks to the work of Thomas Bayes, I have a high degree of confidence in the predictive value of my models.

About the author 

Erik Conley

Former head of equity trading, Northern Trust Bank, Chicago. Teacher, trainer, mentor, market historian, and perpetual student of all things related to the stock market and excellence in investing.

  1. There are assumptions, probabilities, and causality relationships in your models that are guesstimates. Applying sound mathematical and statistical principles, which work on accurate datasets, to unproven datasets is finding false comfort in a scientific approach to the social science of the markets. This seems like an exchange of error, in some or all of direction, magnitude, and timing, for a calculable process that leads to human comfort but no better investment outcomes.

  2. Thanks for your comments. I agree that my Bayesian models have assumptions, but that’s how Bayesian Inference works: start with a guesstimate and then refine it based on empirical data. Comfort is not part of the equation. Confidence is.

Comments are closed.

{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}