"I feel like this paragraph might be a little ne..."

https://arbital.com/p/9j1

by Eyal Roth Mar 27 2019 updated Mar 27 2019


Bayesian: Are we going there? I guess we're going there. My good Scientist, I mean that if you offered me either side of an even-money bet on whether Plum committed the murder, I'd bet that he didn't do it. But if you offered me a gamble that costs $1 if Professor Plum is innocent and pays out $5 if he's guilty, I'd cheerfully accept that gamble. We only ran the 2012 US Presidential Election one time, but that doesn't mean that on November 7th you should've refused a $10 bet that paid out $1000 if Obama won. In general when prediction markets and large liquid betting pools put 60% betting odds on somebody winning the presidency, that outcome tends to happen 60% of the time; they are well-calibrated for probabilities in that range. If they were systematically uncalibrated--if in general things happened 80% of the time when prediction markets said 60%--you could use that fact to pump money out of prediction markets. And your pumping out that money would adjust the prediction-market prices until they were well-calibrated. If things to which prediction markets assign 70% probability happen around 7 times out of 10, why insist for reasons of ideological purity that the probability statement is meaningless?
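For concreteness, here is a minimal sketch of the betting arithmetic in the quoted paragraph. The 30% probability of Plum's guilt is an assumed value chosen only for illustration (any probability between 1/6 and 1/2 is consistent with declining the even-money bet while accepting the $1-vs-$5 gamble); it is not a figure from the dialogue.

```python
def expected_value(p_guilty: float, payout_if_guilty: float, cost_if_innocent: float) -> float:
    """Expected dollar value of a gamble that pays out if Plum is guilty
    and costs money if he turns out to be innocent."""
    return p_guilty * payout_if_guilty - (1 - p_guilty) * cost_if_innocent

p = 0.30  # assumed probability that Plum committed the murder (illustrative)

# Even-money bet on "Plum did it": win $1 if guilty, lose $1 if innocent.
print(expected_value(p, 1, 1))  # -0.40 -> decline; bet the other side instead

# The $1-vs-$5 gamble: win $5 if guilty, lose $1 if innocent.
# Break-even is at p = 1/6, so p = 0.30 makes it worth taking.
print(expected_value(p, 5, 1))  # +0.80 -> cheerfully accept

# Calibration pumping: if events the market prices at 60% actually happen
# 80% of the time, a "yes" share bought at $0.60 that pays $1 on a win
# yields positive expected profit on every trade.
print(0.80 * 1.00 - 0.60)       # +0.20 expected profit per share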

I feel like this paragraph might be a little necessary for someone who hasn't read the Bayes' rule intro, but on the other hand it is a bit off-topic in this context and quite distracting, as it raises questions which are not part of this "discussion"; mainly, questions regarding how to approach "one-off" events.

Say, what if I can't quantify the outcome of my decision as neatly as in the case of a bet? What if I need to decide whether or not to send Miss Scarlet to prison based on these likelihoods?