"hm, do you actually need that discussion? In no..."

https://arbital.com/p/5s2

by Eric Bruylant Aug 5 2016


hm, do you actually need that discussion? In no case does an agent know in advance that their vote will decide the election, just that there is some (usually extraordinarily slim) chance that it will. A situation where all agents have the impossible piece of information (that the election is close enough that their vote can tip it, and, importantly, that their tipping won't be undone by others in identical positions) doesn't seem like the right situation to be looking at, and would unsurprisingly lead to crazy outputs. Sure, in retrospect all the agents can go "damn, I should've put massive effort into acquiring more votes" if the election turned out to be close enough that they could have tipped it in a way they expect would have large positive EV, but that seems like a correct and reasonable conclusion in hindsight, just not one which was foreseeable.

The EV calc feels like a system I could actually use to weigh up the pros and cons, by looking at statistics on the closeness of various elections and estimating the value of tipping with maybe a few tens of hours of research, whereas estimating the correlation between my voting habits and various possible reference classes of voter seems hopeless in practice%%note: without, perhaps, having enough data to reconstruct key parts of large numbers of people's decision processes and putting massive effort into classifying them, at which point you're not really running a process other people are likely to be running (unless you make your results publicly available, and things get recursive!)%%.
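For concreteness, here's a minimal sketch of the kind of EV calc I have in mind, in Python. All the numbers (the chance of being decisive, the value of tipping, the cost of voting) are placeholder assumptions, not estimates from real election statistics; the point is just the shape of the calculation.

```python
# Rough sketch of the EV calculation described above. All numbers are
# illustrative placeholders, not estimates derived from actual election data.

def expected_value_of_voting(p_decisive, value_of_tipping, cost_of_voting):
    """Expected value of casting a vote: the (usually tiny) chance that the
    vote is decisive, times the value of tipping the outcome, minus the cost
    of voting itself (time, effort, research)."""
    return p_decisive * value_of_tipping - cost_of_voting

if __name__ == "__main__":
    # Placeholder inputs: in practice, p_decisive would come from statistics
    # on how close comparable elections have been, and value_of_tipping from
    # an estimate of how much the outcome matters, in whatever units you use.
    p_decisive = 1e-7        # chance this one vote tips the election
    value_of_tipping = 1e10  # estimated value of the better outcome winning
    cost_of_voting = 50.0    # time and effort, in the same units

    ev = expected_value_of_voting(p_decisive, value_of_tipping, cost_of_voting)
    print(f"Expected value of voting: {ev:+.2f}")  # positive => worth voting
```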

Maybe explaining this is more of a detour than you want, though, since it's less interesting from a decision theory perspective?