This page answers frequently asked questions about the [4zd] proposal for experimental science.

_(Note: This page is a personal [60p opinion page].)_

[toc:]

### What does this proposal entail?

Let's say you have a coin and you don't know whether it's biased or not. You flip it six times and it comes up HHHHHT.

To report a [p_value p-value], you have to first declare which experiment you were doing — were you flipping it six times no matter what you saw and counting the number of heads, or were you flipping it until it came up tails and seeing how long it took? Then you have to declare a "null hypothesis," such as "the coin is fair." Only then can you get a p-value, which in this case is either 0.11 (if you were going to toss the coin 6 times regardless) or 0.03 (if you were going to toss until it came up tails). The p-value of 0.11 means "if the null hypothesis _were_ true, then data with at least as many H values as the observed data would only occur 11% of the time, if the declared experiment were repeated many many times."

To report a [bayes_likelihood likelihood], you don't have to do any of that "declare your experiment" stuff, and you don't have to single out one special hypothesis. You just pick a whole bunch of hypotheses that seem plausible, such as the set of hypotheses $H_b$ = "the coin has a bias of $b$ towards heads" for $b$ between 0% and 100%. Then you look at the actual data, and report how likely that data is according to each hypothesis. In this example, that yields a graph which looks like this:

![L(e|H)](http://i.imgur.com/UwwxmCe.png)

This graph says that HHHHHT is about 1.56% likely under the hypothesis $H_{0.5}$ saying that the coin is fair, about 5.93% likely under the hypothesis $H_{0.75}$ that the coin comes up heads 75% of the time, and only 0.17% likely under the hypothesis $H_{0.3}$ that the coin comes up heads only 30% of the time.

That's all you have to do. You don't need to make any arbitrary choice about which experiment you were going to run. You don't need to ask yourself what you "would have seen" in other cases. You just look at the actual data, and report how likely each hypothesis in your hypothesis class said that data should be.
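
The computation behind that graph fits in a few lines. Here is a minimal sketch in Python (the 101-point grid of hypotheses is an arbitrary choice, made here purely for illustration):

```python
# Likelihood of the observed sequence HHHHHT under each hypothesis H_b,
# where H_b says "the coin comes up heads with probability b."
data = "HHHHHT"
heads = data.count("H")
tails = data.count("T")

def likelihood(b):
    """P(data | H_b): independent tosses, heads with probability b."""
    return b ** heads * (1 - b) ** tails

# A grid of hypotheses from b = 0.00 to b = 1.00.
curve = {i / 100: likelihood(i / 100) for i in range(101)}

print(curve[0.50])  # ~0.0156 (the fair-coin hypothesis)
print(curve[0.75])  # ~0.0593 (heads 75% of the time)
print(curve[0.30])  # ~0.0017 (heads only 30% of the time)

# The same table gives the likelihood ratio between any two hypotheses:
print(curve[0.75] / curve[0.50])  # ~3.8, i.e. 3.8 : 1 in favor of H_0.75
```
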
(If you want to compare how well the evidence supports one hypothesis or another, you just use the graph to get a [1rq likelihood ratio] between any two hypotheses. For example, this graph reports that the data HHHHHT supports the hypothesis $H_{0.75}$ over $H_{0.5}$ at odds of $\frac{0.0593}{0.0156}$ = 3.8 to 1.)

For more of an explanation, see [4zd].

### Why would reporting likelihoods be a good idea?

Experimental scientists reporting likelihoods instead of p-values would likely help address many problems facing modern science, including [https://en.wikipedia.org/wiki/Data_dredging p-hacking], [http://slatestarcodex.com/2014/04/28/the-control-group-is-out-of-control/ the vanishing effect size problem], and [https://en.wikipedia.org/wiki/Publication_bias publication bias].

It would also make it easier for scientists to combine the results from multiple studies, and it would make it much, much easier to conduct meta-analyses.

It would also make scientific statistics more intuitive and easier to understand.

### Likelihood functions are a Bayesian tool. Aren't Bayesian statistics subjective? Shouldn't science be objective?

Likelihood functions are purely objective. In fact, there's only one degree of freedom in a likelihood function, and that's the choice of hypothesis class. This choice is no more arbitrary than the choice of a "null hypothesis" in standard statistics, and indeed, it's significantly less arbitrary (you can pick a large class of hypotheses, rather than just one, and none of them needs to be singled out as subjectively "special").

This is in stark contrast with p-values, which require that you pick an "experimental design" in advance, or that you talk about what data you "could have seen" if the experiment turned out differently. Likelihood functions only depend on the hypothesis class that you're considering, and the data that you actually saw. (This is one of the reasons why likelihood functions would solve p-hacking.)

Likelihood functions are often used by Bayesian statisticians, and Bayesian statisticians do indeed use [4vr subjective probabilities], which has led some people to believe that reporting likelihood functions would somehow allow hated subjectivity to seep into the hallowed halls of science.

However, it's the [1rm priors] that are subjective in Bayesian statistics, not likelihood functions. In fact, according to the [1lz laws of probability theory], likelihood functions are precisely that-which-is-left-over when you factor out all subjective beliefs from an observation of evidence. In other words, probability theory tells us that likelihoods are the best summary there is for capturing the objective evidence that a piece of data provides (assuming your goal is to help make people's beliefs more accurate).

### How would reporting likelihoods solve p-hacking?

P-values depend on what experiment the experimenter says they had in mind. For example, if the data is HHHHHT and the experimenter says "I was planning to flip it six times and count the number of Hs," then the p-value (for the fair coin hypothesis) is 0.11, which is not "significant." If instead the experimenter says "I was planning to flip it until I got a T," then the p-value is 0.03, which _is_ "significant." Experimenters can ([http://amstat.tandfonline.com/doi/pdf/10.1080/00031305.2016.1154108 and do!]) misuse or abuse this degree of freedom to make their results appear more significant than they actually are. This is known as "p-hacking."
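
For concreteness, here is a sketch of where those two p-values come from. The tail conventions (counting outcomes at least as extreme as the observed data) are the usual ones, but treat the exact setup as illustrative:

```python
from math import comb

# Observed data: HHHHHT (five heads, then a tail on the sixth toss).
# Null hypothesis: the coin is fair, P(heads) = 0.5.

# Design 1: "flip exactly 6 times and count the heads."
# p-value = P(5 or more heads out of 6 | fair coin)
p_fixed_n = sum(comb(6, k) * 0.5 ** 6 for k in (5, 6))
print(p_fixed_n)  # 7/64 ~= 0.109 -> "not significant"

# Design 2: "flip until the first tail and count how many flips it took."
# p-value = P(the first tail arrives on toss 6 or later | fair coin)
#         = P(the first five tosses are all heads)
p_stop_at_tail = 0.5 ** 5
print(p_stop_at_tail)  # 1/32 ~= 0.031 -> "significant" at the 0.05 level

# Same data, same null hypothesis, two different p-values: the difference
# comes entirely from which experiment the experimenter declares.
```
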
In fact, when running complicated experiments, this can (and does!) happen to honest, well-meaning researchers. Some experimenters are dishonest, and many others simply lack the time and patience to understand the subtleties of good experimental design. We don't need to put that burden on experimenters. We don't need to use statistical tools that depend on which experiment the experimenter had in mind. We can instead report the likelihood that each hypothesis assigned to the actual data.

Likelihood functions don't have this "experiment" degree of freedom. They don't care what experiment you thought you were doing. They only care about the data you actually saw. To use likelihood functions correctly, all you have to do is look at stuff and then not lie about what you saw. Given the set of hypotheses you want to report likelihoods for, the likelihood function is completely determined by the data.

### But what if the experimenter tries to game the rules by choosing how much data to collect?

That's a problem if you're reporting p-values, but it's not a problem if you're reporting likelihood functions.

Let's say there's a coin that you think is fair, and that I think might be biased 55% towards heads. If you're right, then every toss is going to (in [-4b5]) provide more evidence for "fair" than "biased." But sometimes (rarely), even if the coin is fair, you will flip it and it will generate a sequence that supports the "bias" hypothesis more than the "fair" hypothesis.

How often will this happen? It depends on how exactly you ask the question. If you can flip the coin at most 300 times, then there's about a 1.4% chance that at some point the sequence generated will support the hypothesis "the coin is biased 55% towards heads" 20x more than it supports the hypothesis "the coin is fair." (You can verify this yourself, and tweak the parameters, using [https://gist.github.com/Soares/941bdb13233fd0838f1882d148c9ac14 this code].)

This is an objective fact about coin tosses. If you look at a sequence of Hs and Ts generated by a fair coin, then some tiny fraction of the time, after some number $n$ of flips, it will support the "biased 55% towards heads" hypothesis 20x more than it supports the "fair" hypothesis. This is true no matter how or why you decided to look at those $n$ coin flips. It's true if you were always planning to look at $n$ coin flips since the day you were born. It's true if each coin flip costs $1 to look at, so you decided to only look until the evidence supported one hypothesis at least 20x better than the other. It's true if you have a heavy personal desire to see the coin come up biased, and were planning to keep flipping until the evidence supports "bias" 20x more than it supports "fair." It doesn't _matter_ why you looked at the sequence of Hs and Ts. The amount by which it supports "biased" vs "fair" is objective. If the coin really is fair, then the more you flip it, the more the evidence will push towards "fair." It will only support "bias" a small unlucky fraction of the time, and that fraction is completely independent of your thoughts and intentions.

Likelihoods are objective. They don't depend on your state of mind.
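
Here is a simulation sketch of that 1.4% figure, in the spirit of the code linked above (the trial count and random-number details below are my own choices, not the original parameters):

```python
import random

def ever_hits_20_to_1(max_flips=300, threshold=20.0):
    """Flip a genuinely fair coin up to max_flips times and report whether the
    running likelihood ratio ever favors 'biased 55% towards heads' over
    'fair' by at least `threshold`."""
    ratio = 1.0
    for _ in range(max_flips):
        if random.random() < 0.5:      # heads: favors the bias hypothesis
            ratio *= 0.55 / 0.5
        else:                          # tails: favors the fair hypothesis
            ratio *= 0.45 / 0.5
        if ratio >= threshold:
            return True
    return False

trials = 100_000
hits = sum(ever_hits_20_to_1() for _ in range(trials))
print(hits / trials)  # roughly 0.014, matching the ~1.4% figure quoted above
```
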
P-values, on the other hand, run into some difficulties. A p-value is about a single hypothesis (such as "fair") in isolation. If the coin is fair, then [fair_coin_equally_likely all sequences of coin tosses are equally likely], so you need something more than the data in order to decide whether the data is "significant evidence" about fairness one way or the other. Which means you have to choose a "reference class" of ways the coin "could have come up." Which means you need to tell us which experiment you "intended" to run. And down the rabbit hole we go.

The p-value you report depends on how many coin tosses you say you were going to look at. If you lie about where you intended to stop, the p-value breaks. If you're out in the field collecting data, and the data just subconsciously begins to feel overwhelming, and so you stop collecting evidence (or if the data just subconsciously feels insufficient and so you collect more), then the p-value breaks. How badly do p-values break? If you can toss the coin at most 300 times, then by choosing when to stop looking, you can get a p < 0.05 significant result _21% of the time,_ and that's assuming you are required to look at at least 30 flips. If you're allowed to use small sample sizes, the number is more like 25%. You can verify this yourself, and tweak the parameters, using [https://gist.github.com/Soares/4955bb9268129476262b28e32b8ec979 this code].

It's no wonder that p-values are so often misused! To use p-values correctly, an experimenter has to meticulously report their intentions about the experimental design before collecting data, and then has to hold unfalteringly to that experimental design as the data comes in (even if it becomes clear that their experimental design was naive, and that there were crucial considerations that they failed to take into account). Using p-values correctly requires good intentions, constant vigilance, and inflexibility.

Contrast this with likelihood functions. Likelihood functions don't depend on your intentions. If you start collecting data until it looks overwhelming and then stop, that's great. If you start collecting data and it looks underwhelming so you keep collecting more, that's great too. Every new piece of data you do collect will support the true hypothesis more than any other hypothesis, in expectation — that's the whole point of collecting data. Likelihood functions don't depend upon your state of mind.

### What if the experimenter uses some other technique to bias the result?

They can't. Or, at least, it's a theorem of [-1bv] that they can't. This law is known as [-conservation_expected_evidence conservation of expected evidence], and it says that for any hypothesis $H$ and any piece of evidence $e$, $\mathbb P(H) = \mathbb P(H \mid e) \mathbb P(e) + \mathbb P(H \mid \lnot e) \mathbb P(\lnot e),$ where $\mathbb P$ stands for my personal subjective probabilities.

Imagine that I'm going to take your likelihood function $\mathcal L$ and blindly combine it with my personal beliefs using [1lz Bayes' rule]. The question is, can you use $\mathcal L$ to manipulate my beliefs? The answer is clearly "yes" if you're willing to lie about what data you saw. But what if you're honestly reporting all the data you _actually_ saw? Then can you manipulate my beliefs, perhaps by being strategic about what data you look at and how long you look at it?

Clearly, the answer to that question is "sort of."
If you have a fair coin, and you want to convince me it's biased, and you toss it 10 times, and it (by sheer luck) comes up HHHHHHHHHH, then that's a lot of evidence in favor of it being biased. But you can't use the "hope the coin comes up heads 10 times in a row by sheer luck" strategy to _reliably_ bias my beliefs; and if you try just flipping the coin 10 times and hoping to get lucky, then on average, you're going to produce data that convinces me that the coin is fair. The real question is, can you bias my beliefs _in expectation?_

If the answer is "yes," then there will be times when I should ignore $\mathcal L$ even if you honestly reported what you saw. If the answer is "no," then there will be no such times — for every $e$ that would shift my beliefs heavily towards $H$ (such that you could say "Aha! How naive! If you look at this data and see it is $e$, then you will believe $H$, just as I intended"), there is an equal and opposite chance of alternative data which would push my beliefs _away_ from $H.$ So, can you set up a data collection mechanism that pushes me towards $H$ in expectation?

And the answer to that question is _no,_ and this is a trivial theorem of probability theory. No matter what subjective belief state $\mathbb P$ I use, if you honestly report the objective likelihood $\mathcal L$ of the data you actually saw, and I update $\mathbb P$ by [1lz multiplying it by $\mathcal L$], there is no way (according to $\mathbb P$) for you to bias my probability of $H$ on average — no matter how strategically you decide which data to look at or how long to look. For more on this theorem and its implications, see [conservation_expected_evidence Conservation of Expected Evidence].

There's a difference between metrics that can't be exploited in theory and metrics that can't be exploited in practice, and if a malicious experimenter really wanted to abuse likelihood functions, they could probably find some clever method. (At the least, they can always lie and make things up.) However, p-values aren't even provably inexploitable — they're so easy to exploit that sometimes well-meaning, honest researchers exploit them _by accident_, and these exploits are already commonplace and harmful. When building better metrics, starting with ones that are provably inexploitable is a good idea.

### What if you pick the wrong hypothesis class?

If you don't report likelihoods for the hypotheses that someone cares about, then that person won't find your likelihood function very helpful. The same problem exists when you report p-values (what if you pick the wrong null and alternative hypotheses?). Likelihood functions make the problem a little better, by making it easy to report how well the data supports a wide variety of hypotheses (instead of just ~2), but at the end of the day, there's no substitute for the raw data.

Likelihoods are a summary of the data you saw. They're a useful summary, especially if you report likelihoods for a broad set of plausible hypotheses. They're a much better summary than many other alternatives, such as p-values. But they're still a summary, and there's just no substitute for the raw data.

### How does reporting likelihoods help prevent publication bias?

When you're reporting p-values, there's a stark difference between p-values that fail to reject the null hypothesis (which are deemed "insignificant") and p-values that reject the null hypothesis (which are deemed "significant"). This "significance" occurs at arbitrary thresholds (e.g.
p < 0.05), and significance is counted only in one direction (to be significant, you must reject the null hypothesis). Both these features contribute to publication bias: journals only want to accept experiments that claim "significance" and reject the null hypothesis.

When you're reporting [56s likelihood functions], a 20 : 1 [1rq ratio] is a 20 : 1 ratio is a 20 : 1 ratio. It doesn't matter if your likelihood function is peaked near "the coin is fair" or whether it's peaked near "the coin is biased 82% towards heads." If the ratio between the likelihood of one hypothesis and the likelihood of another hypothesis is 20 : 1, then the data provides the same strength of evidence either way. Likelihood functions don't single out one "null" hypothesis and incentivize people to only report data that pushes away from that null hypothesis; they just talk about the relationship between the data and _all_ the interesting hypotheses.

Furthermore, there's no arbitrary significance threshold for likelihood functions. If you don't have a ton of data, your likelihood function will be pretty spread out, but it won't be useless. If you find $5 : 1$ odds in favor of $H_1$ over $H_2$, and I independently find $6 : 1$ odds in favor of $H_1$ over $H_2$, and our friend independently finds $3 : 1$ odds in favor of $H_1$ over $H_2,$ then our studies as a whole constitute evidence that favors $H_1$ over $H_2$ by a factor of $90 : 1$ — hardly insignificant! With likelihood ratios (and no arbitrary "significance" cutoffs), progress can be made in small steps.

Of course, this wouldn't solve the problem of publication bias in full, not by a long shot. There would still be incentives to report cool and interesting results, and the scientific community might still ask for results to pass some sort of "significance" threshold before accepting them for publication. However, reporting likelihoods would be a good start.

### How does reporting likelihoods help address vanishing effect sizes?

In a field where an effect does not actually exist, we will often observe an initial study that finds a very large effect, followed by a number of attempts at replication that find smaller and smaller and smaller effects (until someone postulates that the effect doesn't exist, and does a meta-analysis to look for p-hacking and publication bias). This is known as the [https://en.wikipedia.org/wiki/Decline_effect decline effect]; see also [http://slatestarcodex.com/2014/04/28/the-control-group-is-out-of-control/ _The control group is out of control_].

The decline effect is possible in part because p-values look only at whether the evidence says we should "accept" or "reject" a special null hypothesis, without any consideration for what that evidence says about the alternative hypotheses. Let's say we have three studies, all of which reject the null hypothesis "the coin is fair." The first study rejects the null hypothesis with a 95% confidence interval of [0.7, 0.9] bias in favor of heads, but it was a small study and some of the experimenters were a bit sloppy. The second study is a bit bigger and a bit better organized, and rejects the null hypothesis with a 95% confidence interval of [0.53, 0.62]. The third study is high-powered, long-running, and rejects the null hypothesis with a 95% confidence interval of [0.503, 0.511].
It's easy to say "look, three separate studies rejected the null hypothesis!"\n\nBut if you look at the likelihood functions, you'll see that _something very fishy is going on_ — none of the studies actually agree with each other! The effect sizes are incompatible. Likelihood functions make this phenomenon easy to detect, because they tell you how much the data supports _all_ the relevant hypotheses (not just the null hypothesis). If you combine the three likelihood functions, you'll see that _none_ of the confidence intervals fare very well. Likelihood functions make it obvious when different studies contradict each other directly, which makes it much harder to summarize contradictory data down to "three studies rejected the null hypothesis".\n\n### What if I want to reject the null hypothesis without needing to have any particular alternative in mind?\n\nMaybe you don't want to report likelihoods for a large hypothesis class, because you are pretty sure you can't generate a hypothesis class that contains the correct hypothesis. "I don't want to have to make up a bunch of alternatives," you protest, "I just want to show that the null hypothesis is _wrong,_ in isolation."\n\nFortunately for you, that's possible using likelihood functions! The tool you're looking for is the notion of [227 strict confusion]. A hypothesis $H$ will tell you how low its likelihood is supposed to get, and if its likelihood goes a lot lower than that value, then you can be pretty confident that you've got the wrong hypothesis.\n\nFor example, let's say that your one and only hypothesis is $H_{0.9}$ = "the coin is biased 90% towards heads." Now let's say you flip the coin twenty times, and you see the sequence THTTHTTTHTTTTHTTTTTH. The [log_likelihood log-likelihood] that $H_{0.9}$ _expected_ to get on a sequence of 20 coin tosses was about -9.37 [evidence_bit bits],%%note: According to $H_{0.9},$ each coin toss carries $0.9 \\log_2(0.9) + 0.1 \\log_2(0.1) \\approx -0.469$ bits of evidence, so after 20 coin tosses, $H_{0.9}$ expects about $20 \\cdot 0.469 \\approx 9.37$ bits of [bayes_surprise surprise]. For more on why log likelihood is a convenient tool for measuring "evidence" and "surprise," see [1zh Bayes' rule: log odds form].%% for a likelihood score of about $2^{-9.37} \\approx$ $1.5 \\cdot 10^{-3},$ on average. The likelihood that $H_{0.9}$ actually gets on that sequence is -50.59 bits, for a likelihood score of about $5.9 \\cdot 10^{-16},$ which is _thirteen orders of magnitude less likely than expected._ You don't need to be clever enough to come up with an alternative hypothesis that explains the data in order to know that $H_{0.9}$ is not the right hypothesis for you.\n\nIn fact, likelihood functions make it easy to show that _lots_ of different hypotheses are strictly confused — you don't need to have a good hypothesis in your hypothesis class in order for reporting likelihood functions to be a useful service.\n\n### How does reporting likelihoods make it easier to combine multiple studies?\n\nWant to combine two studies that reported likelihood functions? Easy! Just multiply the likelihood functions together. If the first study reported 10 : 1 odds in favor of "fair coin" over "biased 55% towards heads," and the second study reported 12 : 1 odds in favor of "fair coin" over "biased 55% towards heads," then the combined studies support the "fair coin" hypothesis over the "biased 55% towards heads" hypothesis at a likelihood ratio of 120 : 1.\n\nIs it really that easy? Yes! 
Is it really that easy? Yes! That's one of the benefits of using a representation of evidence supported by a large edifice of [-1bv]: likelihood functions are trivially easy to compose. You have to ensure that the studies are independent first, because otherwise you'll double-count the data. (If the combined likelihood ratios get really extreme, you should be suspicious about whether they were actually independent.) This isn't exactly a new problem in experimental science; we can just add it to the list of reasons why replication studies had better be independent of the original study. Also, you can only multiply the likelihood functions together on places where they're both defined: if one study doesn't report the likelihood for a hypothesis that you care about, you might need access to the raw data in order to extend their likelihood function. But if the studies are independent and both report likelihood functions for the relevant hypotheses, then all you need to do is multiply.

(Don't try this with p-values. A p < 0.05 study and a p < 0.01 study don't combine into anything remotely like a p < 0.0005 study.)

### How does reporting likelihoods make it easier to conduct meta-analyses?

When studies report p-values, performing a meta-analysis is a complicated procedure that requires dozens of parameters to be finely tuned, and (lo and behold) bias somehow seeps in, and meta-analyses often find whatever the analyzer set out to find. When studies report likelihood functions, performing a meta-analysis is trivial and doesn't require you to tune a dozen parameters. Just multiply all the likelihood functions together.

If you want to be extra virtuous, you can check for anomalies, such as one likelihood function that's tightly peaked in a place that disagrees with all the other peaks. You can also check for [227 strict confusion], to get a sense for how likely it is that the correct hypothesis is contained within the hypothesis class that you considered. But mostly, all you've got to do is multiply the likelihood functions together.

### How does reporting likelihood functions make it easier to detect fishy studies?

With likelihood functions, it's much easier to find the studies that don't match up with each other — look for the likelihood function that has its peak in a different place than all the other peaks. That study deserves scrutiny: either those experimenters had something special going on in the background of their experiment, or something strange happened in their data collection and reporting process.

Furthermore, likelihoods combined with the notion of [227 strict confusion] make it easy to notice when something has gone seriously wrong. As per the above answers, you can combine multiple studies by multiplying their likelihood functions together. What happens if the combined likelihood function is super small everywhere? That means that either (a) some of the data is fishy, or (b) you haven't considered the right hypothesis yet.

When you _have_ considered the right hypothesis, it will have decently high likelihood under _all_ the data. There's only one real world underlying all our data, after all — it's not like different experimenters are measuring different underlying universes. If you multiply all the likelihood functions together and _all_ the hypotheses turn out looking wildly unlikely, then you've got some work to do — you haven't yet considered the right hypothesis.
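
To make the "fishy studies" check concrete, here is a sketch using three hypothetical studies whose coin-flip counts roughly mimic the shrinking confidence intervals from the decline-effect example above. Working in log-likelihoods (bits) avoids numerical underflow; the specific counts are made up for illustration:

```python
from math import log2

# H_b on a fine grid, excluding the degenerate endpoints b = 0 and b = 1.
hypotheses = [i / 1000 for i in range(1, 1000)]

def log_likelihood(heads, tails):
    """log2-likelihood function of a study with the given coin-flip counts."""
    return {b: heads * log2(b) + tails * log2(1 - b) for b in hypotheses}

# Three hypothetical studies whose effect sizes don't agree with one another.
studies = [
    log_likelihood(heads=48, tails=12),        # small study:  b-hat ~ 0.80
    log_likelihood(heads=265, tails=195),      # medium study: b-hat ~ 0.58
    log_likelihood(heads=30420, tails=29580),  # large study:  b-hat ~ 0.507
]

# Combining studies multiplies likelihoods, i.e., adds log-likelihoods.
combined = {b: sum(study[b] for study in studies) for b in hypotheses}

best_combined = max(combined.values())
best_separately = sum(max(study.values()) for study in studies)

# If all three studies were measuring the same coin, the best joint hypothesis
# would typically score within a few bits of the sum of each study's own best
# score.  With these counts the gap is on the order of twenty bits (a
# likelihood factor in the millions): no single bias fits all three datasets.
print(best_separately - best_combined, "bits of mismatch")
```
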
When reporting p-values, contradictory studies feel like the norm. Nobody even _tries_ to make all the studies fit together, as if they were all measuring the same world. With likelihood functions, we could actually aspire towards a world where scientific studies on the same topic are _all_ combined. A world where people try to find hypotheses that fit _all_ the data at once, and where a single study's data being out of place (and making all the hypotheses currently under consideration become [-227]) is a big glaring "look over here!" signal. A world where it feels like studies are _supposed_ to fit together, where if scientists haven't been able to find a hypothesis that explains all the raw data, then they know they have their work cut out for them.

Whatever the right hypothesis is, it will almost surely not be strictly confused under the actual data. Of course, when you come up with a completely new hypothesis (such as "the coin most of us have been using is fair but study #317 accidentally used a different coin") you're going to need access to the raw data of some of the previous studies in order to extend their likelihood functions and see how well they do on this new hypothesis. As always, there's just no substitute for raw data.

### Why would this make statistics easier to do and understand?

p < 0.05 does not mean "the null hypothesis is less than 5% likely" (though that's what young students of statistics often _want_ it to mean). What it actually means is "given a particular experimental design (e.g., toss the coin 100 times and count the heads) and the data (e.g., the sequence of 100 coin flips), if the null hypothesis _were_ true, then data at least as extreme as the observed data (as measured by my chosen statistic, e.g., the number of heads) would only occur 5% of the time, if we repeated this experiment over and over and over."

Why the complexity? Statistics is designed to keep subjective beliefs out of the hallowed halls of science. Your science paper shouldn't be able to conclude "and, therefore, I personally believe that the coin is very likely to be biased, and I'd bet on that at 20 : 1 odds." Still, much of this complexity is unnecessary. Likelihood functions achieve the same goal of objectivity, but without all the complexity.

[51n $\mathcal L_e(H)$] $< 0.05$ _also_ doesn't mean "$H$ is less than 5% likely"; it means "$H$ assigned less than 0.05 probability to $e$ happening." The student still needs to learn to keep "probability of $e$ given $H$" and "probability of $H$ given $e$" distinctly separate in their heads. However, likelihood functions do have a _simpler_ interpretation: $\mathcal L_e(H)$ is the probability of the actual data $e$ occurring if $H$ were in fact true. No need to talk about experimental design, no need to choose a summary statistic, no need to talk about what "would have happened." Just look at how much probability each hypothesis assigned to the actual data; that's your likelihood function.

If you're going to report p-values, you need to be meticulous in considering the complexities and subtleties of experiment design, on pain of creating p-values that are broken in non-obvious ways (thereby contributing to the [https://en.wikipedia.org/wiki/Replication_crisis replication crisis]). When reading results, you need to take the experimenter's intentions into account. None of this is necessary with likelihoods.

To understand $\mathcal L_e(H),$ all you need to know is how likely $e$ was according to $H.$ Done.
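
To make the distinction concrete, here's a small sketch using the HHHHHT example from the top of the page. The uniform prior at the end is an arbitrary illustrative assumption; it's the kind of thing a reader brings to the table, not something a study would report:

```python
# P(e | H) versus P(H | e), for the data e = HHHHHT.
hypotheses = [i / 100 for i in range(101)]                 # H_b for b = 0.00 .. 1.00
likelihood = {b: b ** 5 * (1 - b) ** 1 for b in hypotheses}

# The likelihood function is what a study reports: P(e | H_b) for each b.
print(likelihood[0.50])   # ~0.0156 -- how likely HHHHHT is *if* the coin is fair

# A posterior P(H_b | e) additionally requires a prior over the hypotheses.
# Here we throw in a uniform prior purely for illustration.
prior = {b: 1 / len(hypotheses) for b in hypotheses}
normalizer = sum(prior[b] * likelihood[b] for b in hypotheses)
posterior = {b: prior[b] * likelihood[b] / normalizer for b in hypotheses}

print(posterior[0.50])    # a different, prior-dependent number entirely
```
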
### Isn't this just one additional possible tool in the toolbox? Why switch entirely away from p-values?

This may all sound too good to be true. Can one simple change really solve that many problems in modern science?

First of all, you can be assured that reporting likelihoods instead of p-values would not "solve" all the problems above, and it would surely not solve all problems with modern experimental science. Open access to raw data, preregistration of studies, a culture that rewards replication, and many other ideas are also crucial ingredients to a scientific community that zeroes in on truth.

However, reporting likelihoods would help solve lots of different problems in modern experimental science. This may come as a surprise. Aren't likelihood functions just one more statistical technique, just another tool for the toolbox? Why should we think that one single tool can solve that many problems?

The reason lies in [-1bv]. According to the axioms of probability theory, there is only one good way to account for evidence when updating your beliefs, and that way is via likelihood functions. Any other method is subject to inconsistencies and pathologies, as per the [probability_coherence_theorems coherence theorems of probability theory].

If you're manipulating equations like $2 + 2 = 4,$ and you're using methods that may or may not let you throw in an extra 3 on the right hand side (depending on the arithmetician's state of mind), then it's no surprise that you'll occasionally get yourself into trouble and deduce that $2 + 2 = 7.$ The laws of arithmetic show that there is only one correct set of tools for manipulating equations if you want to avoid inconsistency.

Similarly, the laws of probability theory show that there is only one correct set of tools for manipulating _uncertainty_ if you want to avoid inconsistency. According to [1lz those rules], the right way to represent evidence is through likelihood functions.

These laws (and a solid understanding of them) are younger than the experimental science community, and the statistical tools of that community predate a modern understanding of probability theory. Thus, it makes a lot of sense that the existing literature uses different tools. However, now that humanity _does_ possess a solid understanding of probability theory, it should come as no surprise that many diverse pathologies in statistics can be cleaned up by switching to a policy of reporting likelihoods instead of p-values.

### If it's so great, why aren't we doing it already?

[1bv Probability theory] (and a solid understanding of all that it implies) is younger than the experimental science community, and the statistical tools of that community predate a modern understanding of probability theory. In particular, modern statistical tools were designed in an attempt to keep subjective reasoning out of the hallowed halls of science. You shouldn't be able to publish a scientific paper which concludes "and therefore, I personally believe that this coin is biased towards heads, and would bet on that at 20 : 1 odds." Those aren't the foundations upon which science can be built.

Likelihood functions are strongly associated with Bayesian statistics, and Bayesian statistical tools tend to manipulate subjective probabilities.
Thus, it wasn't entirely clear how to use tools such as likelihood functions without letting subjectivity bleed into science.

Nowadays, we have a better understanding of how to separate out subjective probabilities from objective claims, and it's known that likelihood functions don't carry any subjective baggage with them. In fact, they carry _less_ subjective baggage than p-values do: A likelihood function depends only on the data that you _actually saw,_ whereas p-values depend on your experimental design and your intentions.

There are good historical reasons why the existing scientific community is using p-values, but now that humanity _does_ possess a solid theoretical understanding of probability theory (and how to factor subjective probabilities out from objective claims), it's no surprise that a wide array of diverse problems in modern statistics can be cleaned up by reporting likelihoods instead of p-values.

### Has this ever been tried?

No. Not yet. To our knowledge, most scientists haven't even considered this proposal — and for good reason! There are a lot of big fish to fry when it comes to addressing the [https://en.wikipedia.org/wiki/Replication_crisis replication crisis], [https://en.wikipedia.org/wiki/Data_dredging p-hacking], [http://slatestarcodex.com/2014/04/28/the-control-group-is-out-of-control/ the problem of vanishing effect sizes], [https://en.wikipedia.org/wiki/Publication_bias publication bias], and other problems facing science today. The scientific community at large is huge, decentralized, and has a lot of inertia. Most activists who are trying to shift it already have their hands full advocating for very important policies such as open access journals and pre-registration of trials. So it makes sense that nobody's advocating hard for reporting likelihoods instead of p-values — yet.

Nevertheless, there are good reasons to believe that reporting likelihoods instead of p-values would help solve many of the issues in modern experimental science.