# Solved – Long-run behavior in coin tossing experiment

I have read the following on this site, and I am not sure whether it is actually true (or whether I am misinterpreting it):

> Events that are random are not perfectly predictable, but they have long-term regularities that we can describe and quantify using probability. In contrast, haphazard events do not necessarily have long-term regularities. Moreover, we limit the term probability to events for which we can specify all possible outcomes. How a fair coin lands when it is tossed vigorously is a canonical example of a random event. One cannot predict perfectly whether the coin will land heads or tails; however, in repeated tosses, the fraction of times the coin lands heads will tend to settle down to a limit of 50%. The outcome of an individual toss is not perfectly predictable, but the long-run average behavior is predictable. Thus it is reasonable to consider the outcome of tossing a fair coin to be random.

Let's assume 100,000 coin-toss trials in total and a perfectly fair coin. Suppose we observe that in the first 1,000 trials, heads occurred 600 times and tails 400 times. The way I understand the paragraph above, it tells us that we could predict that at some point in the next 99,000 trials (perhaps over a relatively short stretch of trials), tails will start coming up more often than heads, so that the final result of the 100,000 tosses will be approximately 50,000 heads and 50,000 tails. Is that interpretation correct, or is such a predictability rule valid? If so, is there a way to show this with Bayesian methods?
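One way to probe the scenario numerically (a minimal simulation sketch I put together, not part of the quoted text; the seed and counts are illustrative assumptions) is to fix the observed 600/400 start and then simulate the remaining 99,000 tosses of a memoryless fair coin:

```python
import random

random.seed(42)

# Suppose the first 1,000 tosses gave 600 heads (the observed imbalance).
heads = 600
n = 1000

# Simulate the remaining 99,000 tosses of a fair coin.
# A fair coin has no memory: each toss lands heads with probability 0.5,
# regardless of the earlier surplus of heads.
for _ in range(99_000):
    heads += random.random() < 0.5
    n += 1

fraction = heads / n
print(f"final heads: {heads} of {n}, fraction = {fraction:.4f}")

# The *fraction* settles near 0.5 because the initial surplus of 100
# is diluted across 99,000 new tosses, not because tails "catch up":
# the expected final count is 600 + 0.5 * 99_000 = 50_100 heads,
# slightly above 50,000.
```

Running this repeatedly, the final fraction is reliably close to 0.5, while the absolute head count stays centered on 50,100 rather than drifting back to 50,000.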

(This is not a student exercise. I am a postdoc in Bioinformatics and, probably, a bit confused.)
