# Probability interpretations: Examples

[summary: Consider evaluating, in June of 2016, the question: "What is the probability of Hillary Clinton winning the 2016 US presidential election?"

- On the **propensity** view, Hillary has some fundamental chance of winning the election. To ask about the probability is to ask about this objective chance.
- On the **subjective** view, saying that Hillary has an 80% chance of winning the election summarizes our *knowledge about* the election, or, equivalently, our *state of uncertainty* given what we currently know.
- On the **frequentist** view, we cannot formally or rigorously say anything about the 2016 presidential election, because it only happens once.]

## Betting on one-time events

Consider evaluating, in June of 2016, the question: "What is the probability of Hillary Clinton winning the 2016 US presidential election?"

On the **propensity** view, Hillary has some fundamental chance of winning the election. To ask about the probability is to ask about this objective chance. If we see a prediction market in which prices move after each new poll — so that it says 60% one day, and 80% a week later — then clearly the prediction market isn't giving us very strong information about this objective chance, since it doesn't seem very likely that Clinton's *real* chance of winning is swinging so rapidly.

On the **frequentist** view, we cannot formally or rigorously say anything about the 2016 presidential election, because it only happens once. We can't *observe* a frequency with which Clinton wins presidential elections. A frequentist might concede that they would cheerfully buy for \$1 a ticket that pays \$20 if Clinton wins, considering this a favorable bet in an *informal* sense, while insisting that this sort of reasoning isn't rigorous enough to be suitable for publication in a science journal.

On the **subjective** view, saying that Hillary has an 80% chance of winning the election summarizes our *knowledge about* the election, or, equivalently, our *state of uncertainty* given what we currently know. It makes sense for the prediction market prices to change in response to new polls, because our current state of knowledge is changing.
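To make the link between subjective probabilities and betting decisions concrete, here is a minimal sketch (illustrative only; the function and its name are not from the original article, and the numbers come from the ticket described above):

```python
def ticket_expected_profit(p_win, price, payout):
    """Expected profit of a ticket that costs `price` and pays `payout` if the
    event happens, under subjective probability `p_win` that it happens."""
    return p_win * payout - price

# The $1 ticket that pays $20 if Clinton wins:
print(ticket_expected_profit(0.80, price=1, payout=20))  # 15.0: favorable at 80%
print(ticket_expected_profit(0.05, price=1, payout=20))  # 0.0: the break-even point
```

On the subjective view, the ticket is worth buying exactly when your probability of a Clinton win exceeds the 5% break-even point; the frequentist may feel the same pull, but denies that this reasoning is formal.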
## A coin with an unknown bias

Suppose we have a coin, weighted so that it lands heads somewhere between 0% and 100% of the time, but we don't know the coin's actual bias.

The coin is then flipped three times where we can see it. It comes up heads twice and tails once: HHT.

The coin is then flipped again, where nobody can see it yet. An honest and trustworthy experimenter lets you spin a wheel-of-gambling-odds,%note:The reason for spinning the wheel-of-gambling-odds is to reduce the worry that the experimenter might know more about the coin than you, and be offering you a deliberately rigged bet.% and the wheel lands on (2 : 1). The experimenter asks if you'd enter into a gamble where you win \$2 if the unseen coin flip is tails, and pay \$1 if the unseen coin flip is heads.

On a **propensity** view, the coin has some objective probability between 0 and 1 of coming up heads, but we just don't know what this probability is. Seeing HHT tells us that the coin isn't all-heads or all-tails, but we're still just guessing — we don't really know the answer, and can't say whether the bet is fair.

On a **frequentist** view, the coin would (if flipped repeatedly) produce some long-run frequency $f$ of heads that is between 0 and 1. If we kept flipping the coin long enough, the actual proportion $p$ of observed heads is guaranteed (with probability 1) to approach $f$ arbitrarily closely. We can't say that the *next* coin flip is guaranteed to be H or T, but we can make an objectively true statement that $p$ will eventually come within any given $\epsilon$ of $f$ if we continue to flip the coin long enough.

To decide whether or not to take the bet, a frequentist might try to apply an [unbiased_estimator unbiased estimator] to the data we have so far. An "unbiased estimator" is a rule for taking an observation and producing an estimate $e$ of $f$, such that the [4b5 expected value] of $e$ is $f$. In other words, a frequentist wants a rule such that, if the hidden bias of the coin were in fact to yield 75% heads, and we repeated many times the operation of flipping the coin a few times and then asking a new frequentist to estimate the coin's bias using this rule, the *average* value of the estimated bias would be 0.75. This is an objective property of the _estimation rule_. We can't hope for a rule that will always, in any particular case, yield the true $f$ from just a few coin flips; but we can have a rule whose *average* estimate provably equals $f$ when the experiment is repeated many times.

In this case, a simple unbiased estimator is to guess that the coin's bias $f$ is equal to the observed proportion of heads, or 2/3. In other words, if we repeat this experiment many, many times, and whenever we see $h$ heads in 3 tosses we guess that the coin's bias is $\frac{h}{3}$, then this rule is definitely an unbiased estimator. This estimator says that a bet of \$2 vs. \$1 is fair, meaning that it doesn't yield an expected profit, so we have no reason to take the bet.
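As an illustrative check on that claim (a hypothetical simulation, not from the original article), the following sketch repeats the three-flip experiment many times with a true bias of 0.75 and averages the resulting estimates:

```python
import random

def estimate_bias(flips):
    """The unbiased estimator: guess the observed proportion of heads."""
    return sum(flips) / len(flips)

def average_estimate(true_bias, n_flips=3, n_trials=100_000):
    """Average the estimator's output over many repetitions of the experiment."""
    total = 0.0
    for _ in range(n_trials):
        flips = [random.random() < true_bias for _ in range(n_flips)]  # True = heads
        total += estimate_bias(flips)
    return total / n_trials

print(average_estimate(0.75))  # ~0.75: the estimates average out to the true bias
```

Any single three-flip estimate can only be 0, 1/3, 2/3, or 1, so it is usually wrong; unbiasedness is a claim about the average over many repetitions. Under the 2/3 estimate, the offered gamble has expected profit $\frac{1}{3} \cdot 2 - \frac{2}{3} \cdot 1 = 0$ dollars, which is why the frequentist calls it fair.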
On a **subjectivist** view, we start out personally unsure of where the bias $f$ lies within the interval $[0, 1]$. Unless we have some knowledge or suspicion leading us to think otherwise, the coin is just as likely to have a bias between 33% and 34% as to have a bias between 66% and 67%; there's no reason to think it's more likely to lie in one range than the other.

Each coin flip we see is then [22x evidence] about the value of $f$, since a flip H happens with different probabilities under different values of $f$, and we update our beliefs about $f$ using [1zj Bayes' rule]. For example, H is twice as likely if $f = \frac{2}{3}$ as it is if $f = \frac{1}{3}$, so by [1zm Bayes' rule] we should now think $f$ is twice as likely to lie near $\frac{2}{3}$ as it is to lie near $\frac{1}{3}$.

When we start with a uniform [219 prior], observe multiple flips of a coin with an unknown bias, see $M$ heads and $N$ tails, and then try to estimate the odds of the next flip coming up heads, the result is [21c Laplace's Rule of Succession], which estimates odds of $(M + 1) : (N + 1)$ for heads vs. tails, i.e., a probability of $\frac{M + 1}{M + N + 2}$ that the next flip comes up heads.

In this case, after observing HHT, we estimate odds of 2 : 3 for tails vs. heads on the next flip. This makes a gamble that wins \$2 on tails and loses \$1 on heads profitable in expectation, so we take the bet.
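A minimal sketch of that calculation, using exact fractions (the function name is mine, for illustration):

```python
from fractions import Fraction

def laplace_p_heads(m_heads, n_tails):
    """Laplace's rule of succession: probability that the next flip is heads,
    after observing m_heads heads and n_tails tails, under a uniform prior."""
    return Fraction(m_heads + 1, m_heads + n_tails + 2)

p_heads = laplace_p_heads(2, 1)     # 3/5 after seeing HHT
p_tails = 1 - p_heads               # 2/5, i.e. odds of 2 : 3 for tails vs. heads
profit = p_tails * 2 - p_heads * 1  # win $2 on tails, lose $1 on heads
print(p_heads, p_tails, profit)     # 3/5 2/5 1/5: positive expectation, take the bet
```

The same answer falls out of updating the uniform prior flip by flip; Laplace's rule is just the posterior mean of the bias under that prior.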
Our choice of a [219 uniform prior] over $f$ was a little dubious — it's the obvious way to express total ignorance about the bias of the coin, but obviousness isn't everything. (For example, maybe we actually believe that a fair coin is more likely than a coin biased 50.0000023% towards heads.) However, all the reasoning after the choice of prior was rigorous according to the laws of [1bv probability theory], which is the [probability_coherence_theorems only method of manipulating quantified uncertainty] that obeys obvious-seeming rules about how subjective uncertainty should behave.

## Probability that the 98,765th decimal digit of $\pi$ is $0$

What is the probability that the 98,765th digit in the decimal expansion of $\pi$ is $0$?

The **propensity** and **frequentist** views regard as nonsense the notion that we could talk about the *probability* of a mathematical fact. Either the 98,765th decimal digit of $\pi$ is $0$ or it's not. If we're running *repeated* experiments with a random number generator, and looking at different digits of $\pi$, then it might make sense to say that the random number generator has a 10% probability of picking numbers whose corresponding decimal digit of $\pi$ is $0$. But if we're just picking a non-random number like 98,765, there's no sense in which we could say that the 98,765th digit of $\pi$ has a 10% propensity to be $0$, or that this digit is $0$ with 10% frequency in the long run.

The **subjectivist** considers probabilities to refer simply to their own uncertainty. So if a subjectivist has picked the number 98,765 without yet knowing the corresponding digit of $\pi$, and hasn't made any observation that they know to be entangled with the 98,765th digit of $\pi$, and they're pretty sure their friend hasn't yet looked up the 98,765th digit of $\pi$ either, and their friend offers a whimsical gamble that costs \$1 if the digit is non-zero and pays \$20 if the digit is zero, then the Bayesian takes the bet.
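The arithmetic behind taking that bet, as a brief illustrative sketch (the 1/10 figure is just the subjectivist's even spread of belief over the ten possible digits):

```python
# With no information about the digit, spread belief evenly over the ten
# possibilities, giving a subjective P(digit is 0) of 1/10.
p_zero = 1 / 10

# The whimsical gamble: pay $1 if the digit is non-zero, win $20 if it is zero.
expected_profit = p_zero * 20 - (1 - p_zero) * 1
print(expected_profit)  # 1.1: positive in expectation, so the subjectivist accepts
```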
Note that this demonstrates a difference between the subjectivist interpretation of "probability" and Bayesian probability theory. A perfect Bayesian reasoner that knows the rules of logic and the definition of $\pi$ must, by the axioms of probability theory, assign probability either 0 or 1 to the claim "the 98,765th digit of $\pi$ is a $0$" (depending on whether or not it is). This is one of the reasons why [bayes_intractable perfect Bayesian reasoning is intractable]. A subjectivist who is not a perfect Bayesian nevertheless claims that they are personally uncertain about the value of the 98,765th digit of $\pi$. Formalizing the rules of subjective probabilities about mathematical facts (in the way that [-1bv] formalized the rules for manipulating subjective probabilities about empirical facts, such as which way a coin came up) is an open problem; this is known as the problem of [-logical_uncertainty].