Within the frequentist “school of thought”, how are beliefs updated?


Edit: I realize my use of the word "hypothesis" is confusing, I do not mean specifically a null hypothesis. I mean a proposition that something is true.

From my limited understanding, Bayesian probabilities represent beliefs. A scientist may therefore assign a belief/probability to the statement that a hypothesis is true before conducting an experiment or study, and then through formal mathematical reasoning calculate an updated belief as a numerical value (probability) when the results of the study are available.
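The update described above can be sketched numerically. The following is a minimal illustration for a binary hypothesis H, with the prior and likelihood values invented purely for the example:

```python
# Sketch of a Bayesian belief update for a binary hypothesis H.
# All numbers below are invented for illustration.

def bayes_update(prior, p_data_given_h, p_data_given_not_h):
    """Return P(H | data) via Bayes' rule for a binary hypothesis."""
    numerator = p_data_given_h * prior
    evidence = numerator + p_data_given_not_h * (1.0 - prior)
    return numerator / evidence

prior = 0.30  # belief in H before the study
posterior = bayes_update(prior,
                         p_data_given_h=0.80,      # P(result | H)
                         p_data_given_not_h=0.20)  # P(result | not H)
print(round(posterior, 3))  # belief is higher after supportive data
```

Given those (made-up) likelihoods, the posterior works out to about 0.632, so the supportive result raises the belief from 0.30, exactly the kind of ordered "belief after study" > "belief before study" comparison described above.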

From a frequentist point of view, a probability is not a belief. Nevertheless, it is common to find phrases along the lines of "Our study strengthens the evidence that H is true". Given that a study has produced results that give "supporting evidence" to a hypothesis, it seems reasonable that a frequentist would have a "stronger belief" in this hypothesis. Regardless of whether the prior and posterior beliefs are represented by numbers (just not by probabilities), it undoubtedly seems that there ought to be an order between them, such as "belief after study" > "belief before study". But exactly how this updating of beliefs would happen, or how to convey how much more one believes a hypothesis is true after a study compared to before it, is unclear to me. Granted, I am quite ignorant of statistics.

Question: Within the frequentist school of thought, is there a formal / mathematical procedure for updating beliefs?

If there is no such procedure, it seems difficult to make sense of a scientist saying that a study strengthens the evidence that something is true, beyond a "more than" and "less than" perspective. The mapping from prior and new data to beliefs seems a lot more opaque to me from the frequentist perspective than from the Bayesian one. Sure, Bayesians have subjective priors, but given those priors, the data, and the chosen analysis, it seems very clear exactly how the beliefs are updated through Bayes' rule (although I know frequentists can use Bayes' rule too, just not for beliefs). On the other hand, I hardly think someone employing a Bayesian methodology would necessarily let an obtained posterior probability represent their exact belief about something, since there can be a lot to doubt, disagree with, or improve in a given analysis. I'm not trying to instill any debate between "Bayesian vs. frequentist"; I'm far too ignorant to have an opinion. Hopefully this question is not nonsensical; if it is, I apologize.

Best Answer

If you're representing beliefs coherently with numbers, you're Bayesian by definition. There are at least 46656 different kinds of Bayesian (a count due to I. J. Good), but "quantitatively updating beliefs" is the one thing that unites them; if you do that, you're in the Bayesian club. Also, if you want to update beliefs, you have to update using Bayes' rule; otherwise you'll be incoherent and get Dutch-booked. Kinda funny how the one true path to normative rationality still admits so many varieties though.
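The Dutch-book point can be made concrete with a toy example (invented here, not from the original post): an agent whose "probabilities" for an event and its complement sum to more than 1 can be sold two bets that together lose money no matter what happens.

```python
# Toy Dutch book against incoherent beliefs (illustrative numbers).
# The agent prices a bet on A at 0.6 and a bet on not-A at 0.6,
# violating P(A) + P(not A) = 1.
p_A, p_not_A = 0.6, 0.6  # incoherent: these sum to 1.2

stake = 1.0
# The agent regards paying p * stake for a bet that returns `stake`
# if it wins as fair, so a bookie sells them both bets.
cost = (p_A + p_not_A) * stake  # 1.2 paid up front
payout = stake                  # exactly one of the two bets wins
net = payout - cost
print(round(net, 2))            # negative: a sure loss either way
```

Whichever of A or not-A occurs, exactly one bet pays out, so the agent is down 0.2 with certainty; coherent probabilities (summing to 1) are exactly what rules this out.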

Even though Bayesians have a monopoly on 'belief' (by definition), they don't have a monopoly on "strength of evidence". There are other ways you can quantify that, motivating the kind of language given in your example. Deborah Mayo goes into this in detail in "Statistical Inference as Severe Testing". Her preferred option is "severity". In the severity framework you don't ever quantify your beliefs, but you do get to say "this claim has been severely tested" or "this claim has not been severely tested", and you can add to severity incrementally by applying multiple tests over time. That sure feels a lot like strengthening belief; you just don't get to use that exact word to describe it (because the Bayesians own the word 'belief' now). And it really is a different thing, so it's good to avoid the possible terminology collision: what you get from high severity is good error-control rates, not 'true(er) beliefs'. It behaves a lot like belief in the way it is open to continual updating, though! Being picky about not calling it 'belief' rests purely on the (important) technicality of not dealing in states of knowledge, which distinguishes it from the thing Bayesians do.
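As a rough illustration of how severity gives a graded, non-belief number, here is a sketch of the Mayo–Spanos severity calculation for a one-sided Normal test (H0: μ ≤ μ0 with known σ). The data values are invented, and this is only one simple case of the framework:

```python
# Sketch of a severity calculation (Mayo & Spanos style) for a
# one-sided Normal test with known sigma. Data values are invented.
from math import erf, sqrt

def phi(z):
    """Standard Normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def severity_mu_greater(xbar, mu1, sigma, n):
    """Severity with which observed mean `xbar` passes the claim
    mu > mu1: SEV = P(sample mean <= xbar ; mu = mu1)."""
    se = sigma / sqrt(n)
    return phi((xbar - mu1) / se)

# Invented example: n = 100, sigma = 10, observed sample mean 152.
print(round(severity_mu_greater(152, 150, 10, 100), 3))  # claim mu > 150
print(round(severity_mu_greater(152, 152, 10, 100), 3))  # claim mu > 152
```

With these numbers the claim "μ > 150" passes with severity about 0.977, while "μ > 152" only reaches 0.5: a graded assessment of how well-tested each claim is, grounded in error probabilities rather than in anyone's degree of belief.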

Mayo writes and links to plenty more on this. You might also enjoy "Bernoulli's Fallacy" by Aubrey Clayton: it's pretty accessible popsci but really cuts to the roots of this question, and it has been discussed in podcast form as well.
