Related resource notes: Statistical Rethinking
Acknowledgements: Tanay Biradar and Shivansh Dave for feedback and discussion!
Disclaimer: This is an idea that I have heard many times before, especially from a Bayesian inference point of view. I am just re-writing this idea in my own words to better incorporate it.
Summary
Our prior beliefs strongly affect how well new data/evidence is incorporated into our beliefs. If we have strong prior beliefs, it is really hard to actually change our beliefs based on evidence. In this perspective, having a weak belief is not just important to get a better sense of the world based on evidence/facts; it also makes us much more adaptable to changes in the world.
Experiments
How can we test ideas about “prior beliefs” and how they influence our learning from evidence? I used ideas from the book Statistical Rethinking - in it, the author describes a really nice experiment involving a globe and uses it to illustrate Bayesian concepts.1
The experiment is simple - the goal is to find the proportion of water on earth (assuming we don’t know the answer). We start with a prior belief of what we think the proportion is. Then, we toss a globe into the air and catch it. We look at the topmost point of the globe, see whether it is land or water, and mark the toss as “L” or “W”. We repeat this many times, look at what our prior belief of the proportion predicts, and update it based on the new evidence (refer to this note if you want to jump in deeper).
Jargon used:
- prior or prior probability: prior belief.
- likelihood: probability of observing the evidence for a given candidate value of the proportion.
- posterior or posterior probability: updated prior belief based on evidence.
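To make this jargon concrete, here is a minimal sketch of one such update using grid approximation (the method used in the early chapters of Statistical Rethinking). It is written in Python with NumPy/SciPy, the numbers are made up for illustration, and it is not necessarily the exact code behind the figures below:

```python
# Minimal sketch of a single Bayesian update on a grid; the numbers are made up
# for illustration and this is not the exact code behind the figures below.
import numpy as np
from scipy import stats

p_grid = np.linspace(0, 1, 1000)                       # candidate proportions of water
prior = stats.norm.pdf(p_grid, loc=0.4, scale=0.2)     # prior belief: ~40%, fairly unsure
prior /= prior.sum()                                   # normalize over the grid

n_tosses, n_water = 9, 6                               # evidence: 6 "W" in 9 tosses
likelihood = stats.binom.pmf(n_water, n_tosses, p_grid)  # likelihood of that evidence

posterior = prior * likelihood                         # Bayes' rule: prior x likelihood...
posterior /= posterior.sum()                           # ...normalized into the posterior

print(p_grid[np.argmax(posterior)])                    # most plausible proportion after updating
```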
Experiment 1: Effect of weak vs strong prior
The experiment is simple - I toss the globe 1000 times and get a dataset. Then, using a weak and a strong prior, I see how much our estimate of the proportion (the posterior distribution) changes based on the dataset.
The weak prior I am using is a Gaussian distribution centered around 0.4 with a large standard deviation. That is, I start by saying that I think the proportion of water on earth is around 40%, but I am not really certain/confident about it. Based on the dataset, I update my prior beliefs.
Fig 1: Left: weak (Gaussian) prior; Right: Updated beliefs in 4 steps (250 tosses each)
Fig 2: Left: strong (Gaussian) prior centered around 0.4; Right: Updated beliefs in 4 steps (250 tosses each)
Code
With a weak prior, on every update (250 tosses), our belief (the posterior distribution) tightens around the proportion suggested by the data.
Now, if I start with a strong prior belief that the proportion of water is 0.4, then even 1000 tosses are not sufficient to move my beliefs by much. The belief (posterior probability) remains close to 0.4 and stays strong (it hasn’t gotten broader, which would indicate uncertainty).
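For reference, here is a rough sketch of how this comparison could be reproduced. The standard deviations and batch structure below are my own guesses, not the exact values behind Figs 1 and 2:

```python
# A rough, hypothetical re-creation of Experiment 1 (not the exact code behind
# Figs 1 and 2): 1000 simulated tosses processed in 4 batches of 250, starting
# from a weak and then a strong Gaussian prior centered at 0.4.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
p_grid = np.linspace(0, 1, 1000)
tosses = rng.random(1000) < 0.7                 # True = "W"; true proportion ~0.7

def run(prior_sd):
    belief = stats.norm.pdf(p_grid, 0.4, prior_sd)
    belief /= belief.sum()                      # start from the prior
    for batch in tosses.reshape(4, 250):        # 4 updates of 250 tosses each
        likelihood = stats.binom.pmf(batch.sum(), batch.size, p_grid)
        belief *= likelihood                    # posterior is prior x likelihood...
        belief /= belief.sum()                  # ...renormalized after each batch
        print(f"sd={prior_sd}: belief now peaks at p = {p_grid[np.argmax(belief)]:.2f}")

run(prior_sd=0.25)    # weak prior: the peak jumps to ~0.7 within the first batches
run(prior_sd=0.005)   # strong prior: the peak creeps only slightly away from 0.4
```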
So, with strong priors, one needs substantially more data to change prior beliefs. How much data do strong priors require, and how does it scale?
Experiment 2: Updating a weak and a strong prior
Before looking more into the data requirements for strong vs. weak priors, I first come up with definitions of a strong and a weak prior.
I define the confidence of the prior as the area under the curve within ±0.05 of the mean.2 If I set my initial mean to 0.4 (as in experiment 1), my confidence is the area under the curve between 0.35 and 0.45. The sharper the peak around the mean, the higher the confidence.
Fig 3: Defining confidence of prior beliefs using area under the curve around the mean.
Code
This is a made-up measure, but since the total area under a probability density curve is 1, the measure approaches 100% as the Gaussian gets narrower and narrower, so it roughly makes sense (as seen in the figure above).
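As a sketch, the measure could be computed like this (again with an assumed grid and assumed standard deviations, not necessarily the original code):

```python
# Sketch of the confidence measure: the share of the prior's mass inside a small
# window around its mean (0.4 +/- 0.05 here), for narrower and narrower Gaussians.
import numpy as np
from scipy import stats

p_grid = np.linspace(0, 1, 1000)
window = (p_grid > 0.35) & (p_grid < 0.45)     # mean +/- 0.05

for sd in [0.3, 0.1, 0.05, 0.02, 0.01]:        # progressively stronger priors
    prior = stats.norm.pdf(p_grid, 0.4, sd)
    prior /= prior.sum()                       # discrete probability over the grid
    print(f"sd={sd:.2f} -> confidence {prior[window].sum():.1%}")
```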
Fig 4: Updating a weak prior belief with data.
Code
Next, I looked at how the prior beliefs are updated with fresh data. Turns out, the weak priors and strong priors are updated in starkly different manners. With weak priors, a new peak emerges at the correct location and the old peak disappears (Fig 4, above). With strong priors, the peak never disappears - it just gets pushed towards the correct location (Fig 5, below).
Fig 5: Updating a strong prior belief with data.
Code
The reason for this difference3 is that updating beliefs involves multiplying the likelihood of the evidence with the prior belief. If the prior belief is very strong and is almost zero everywhere outside a narrow region (like in the previous figure), then multiplying by the evidence cannot create a new peak elsewhere. The only route for updating is slowly nudging this narrow peak towards the location pointed to by the evidence. And that is a really slow process.
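A toy numerical example of this point, with assumed numbers:

```python
# Toy illustration (assumed numbers): where the prior is essentially zero, the
# posterior stays essentially zero, no matter how strongly the likelihood peaks there.
import numpy as np
from scipy import stats

p_grid = np.linspace(0, 1, 1000)
strong_prior = stats.norm.pdf(p_grid, 0.4, 0.01)    # ~zero outside roughly 0.37-0.43
likelihood = stats.binom.pmf(175, 250, p_grid)      # evidence peaking near 0.7

posterior = strong_prior * likelihood
posterior /= posterior.sum()

i = np.argmin(np.abs(p_grid - 0.7))
print(posterior[i])                   # ~0: no new peak can form at 0.7
print(p_grid[np.argmax(posterior)])   # instead the old peak is nudged slightly past 0.4
```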
So how slow is the process?
Experiment 3: Evidence requirement for strong priors
To show this, I use the same definition as above to decide when the belief has reached the “correct” location: I count the data required for the updated belief (posterior probability) to have 99% of its area within ±0.05 of the true proportion (~0.7). This is a crude definition, as before,4 but sufficient to give some insight.
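Here is a hedged sketch of how such a count could be produced; the batch size and stopping rule are my assumptions, not necessarily what produced Fig 6:

```python
# Hedged sketch of Experiment 3: for priors of increasing confidence, count the
# tosses needed until 99% of the belief sits within 0.7 +/- 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
p_grid = np.linspace(0, 1, 1000)
target = (p_grid > 0.65) & (p_grid < 0.75)          # the "correct" window around 0.7

for sd in [0.2, 0.1, 0.05, 0.02]:                   # weaker -> stronger priors
    belief = stats.norm.pdf(p_grid, 0.4, sd)
    belief /= belief.sum()
    tosses = 0
    while belief[target].sum() < 0.99 and tosses < 200_000:
        batch = rng.random(100) < 0.7               # 100 tosses at a time, p(water) = 0.7
        likelihood = stats.binom.pmf(batch.sum(), batch.size, p_grid)
        belief *= likelihood                        # update and renormalize
        belief /= belief.sum()
        tosses += 100
    print(f"prior sd={sd}: ~{tosses} tosses to reach 99% of belief in the window")
```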
Plotting the number of tosses required against the confidence of the prior gives us this steep-looking curve:
Fig 6: Scaling of data requirement with strength of priors to obtain “correct” beliefs.
Code
This clearly shows that the stronger the prior belief, the more data we need to update it. The growth is quite fast, indicating that very strong priors, like the one in Fig 5, require huge amounts of data before the final belief accurately captures what the evidence represents.
Real-world implications
So what do the above experiments mean for us in the real world?
In the above experiments, I tested the effect of having a prior belief about how much water covers the earth, and how that belief is updated based on new evidence. If one assumes that our brain works in a Bayesian manner,5 then we hold prior beliefs that are updated based on new evidence, and these experiments give a sense of how prior beliefs shape our ability to learn from it.
Here are my takeaways:
- Having a weak prior belief, that is, admitting that I don’t know much about something, is actually good, as it allows us to form accurate impressions from a small amount of data. We can go from not knowing anything to confidently saying that the earth has about 70% water in ~1000 tosses (Fig 1).
- Having a strong prior belief makes us less amenable to change and requires a lot more data to change our mind. Being very confident that the earth only has ~40% water resulted in very little change after ~1000 tosses (Fig 2).
- This difference comes from the way learning happens under prior beliefs - weak prior beliefs are easily swayed by data, while strong prior beliefs only grudgingly move, and only with huge amounts of data (Figs 4 and 5).
- Finally, the amount of evidence required to change prior beliefs grows steeply with confidence in the belief.
Overall, my key realization from this exercise was that I need to be more cognizant of where my prior beliefs come from. If my prior beliefs are based on evidence I collected, it is okay to have strong priors, because the confidence in these beliefs directly reflects the evidence I have encountered. Examples include things I have experienced, researched, and thought about myself.
Even here, though, it is good to be aware that I hold a strong prior belief, so that I can relinquish it when required - for instance when things change so drastically that previous priors no longer apply (as with the current ML/AI revolution, which is moving so fast that many of my older priors no longer hold).
However, if my prior beliefs come from other sources, then I need to be very careful about the strength of those beliefs. If it is something I heard from social media or acquaintances, I should make sure that my prior remains weak. Examples include gossip, cultural stereotypes, etc., which should be quickly unlearned based on evidence (ideally they should not affect prior beliefs at all, but it is unreasonable to assume perfect rejection).
It is fine to accept prior beliefs from credible sources, but I need to be cognizant that I am accepting someone else’s beliefs under the assumption that they have done the hard work of updating them based on evidence. If not, I will fall into the same trap as believing conspiracy theories, which rely on accepting false beliefs with high confidence. Such beliefs cause ideas to stagnate by impeding learning from new evidence.
Another important realization for me is that in order to learn and change our minds, we need to weaken our priors. This is especially important when listening to other perspectives: for the duration of the conversation, I think we need to weaken our priors so that we do not reject incoming evidence outright, which would make a meaningful exchange impossible.
Finally, I think we need to be careful about using social media to update our beliefs, as it skews our perception of evidence by serving posts that match the beliefs we already hold. I feel this artificially boosts the confidence of our beliefs. I explore this more here.
Footnotes
1. Most of the code and plots here are adapted from McElreath, Statistical Rethinking (2020). ↩
2. Standard deviation would work too (and so would 1/std), and that is what I use in the code, but I feel this measure is more intuitive. ↩
3. I initially thought this might be an artifact of the grid approximation method I am using here, but I double-checked using quadratic approximation and the difference persists. ↩
4. Disclaimer: I am only on the 4th chapter of the Statistical Rethinking book, so there might be better ways of doing this. ↩
5. A reasonable assumption according to some neuroscientists, and unreasonable according to others. ↩