{ localUrl: '../page/missing_weird.html', arbitalUrl: 'https://arbital.com/p/missing_weird', rawJsonUrl: '../raw/43g.json', likeableId: '2621', likeableType: 'page', myLikeValue: '0', likeCount: '2', dislikeCount: '0', likeScore: '2', individualLikes: [ 'AlexeiAndreev', 'EricRogstad' ], pageId: 'missing_weird', edit: '5', editSummary: '', prevEdit: '4', currentEdit: '5', wasPublished: 'true', type: 'wiki', title: 'Missing the weird alternative', clickbait: 'People might systematically overlook "make tiny molecular smileyfaces" as a way of "producing smiles", because our brains automatically search for high-utility-to-us ways of "producing smiles".', textLength: '10932', alias: 'missing_weird', externalUrl: '', sortChildrenBy: 'likes', hasVote: 'false', voteType: '', votesAnonymous: 'false', editCreatorId: 'EliezerYudkowsky', editCreatedAt: '2016-06-27 01:16:55', pageCreatorId: 'EliezerYudkowsky', pageCreatedAt: '2016-06-09 00:43:18', seeDomainId: '0', editDomainId: 'EliezerYudkowsky', submitToDomainId: '0', isAutosave: 'false', isSnapshot: 'false', isLiveEdit: 'true', isMinorEdit: 'false', indirectTeacher: 'false', todoCount: '0', isEditorComment: 'false', isApprovedComment: 'true', isResolved: 'false', snapshotText: '', anchorContext: '', anchorText: '', anchorOffset: '0', mergedInto: '', isDeleted: 'false', viewCount: '385', text: 'The "[47]" problem is alleged to be a foreseeable difficulty of coming up with a [3d9 good] goal for an [42g AGI] (part of the [2v alignment problem] for [2c advanced agents]). Roughly, an "unforeseen maximum" happens when somebody thinks that "produce smiles" would be a great goal for an AGI, because you can produce lots of smiles by making people happy, and making people happy is good. However, while it's true that making people happy by ordinary means will produce *some* smiles, what will produce even *more* smiles is administering regular doses of heroin or turning all matter within reach into tiny molecular smileyfaces.\n\n"Missing the weird alternative" is an attempt to [43h psychologize] about why people talking about AGI utility functions might make this kind of oversight systematically. To avoid [43k Bulverism], if you're not yet convinced that missing a weird alternative *would* be a dangerous oversight, please read [47] first or instead.\n\nIn what follows we'll use $U$ to denote a proposed utility function for an AGI, $V$ to denote our own [55 normative values], $\\pi_1$ to denote the high-$V$ policy that somebody thinks is the attainable maximum of $U,$ and $\\pi_0$ to denote what somebody else suggests is a higher-$U$ lower-$V$ alternative.\n\n# Alleged historical cases\n\nSome historical instances of AGI goal systems proposed in a publication or conference presentation, that have been argued to be "missing the weird alternative" are:\n\n- "Just program AIs to maximize their gains in compression of sensory data." Proposed by Juergen Schmidhuber, director of IDSIA, in a presentation at the 2009 Singularity Summit; see the entry on [47].\n - Claimed by Schmidhuber to motivate art and science.\n - Yudkowsky suggested that this would, e.g., motivate the AI to construct objects that encrypted streams of 1s or 0s, then revealed the encryption key to the AI.\n- Program an AI by showing it pictures/video of smiling faces to train (via supervised learning) which sensory events indicate good outcomes. Formally proposed twice, once by J. 
Storrs Hall in the book *Beyond AI,* once in an ACM paper by somebody who since exercised their [43l sovereign right to change their mind].\n - Claimed to motivate an AI to make people happy.\n - Suggested by Yudkowsky to motivate tiling the universe with tiny molecular smileyfaces.\n\nMany other instances of this alleged issue have allegedly been spotted in more informal discussion.\n\n# Psychologized reasons to miss a weird alternative\n\n[43h Psychologizing] some possible reasons why some people might systematically "miss the weird alternative", assuming that was actually happening:\n\n## Our brain doesn't bother searching V-bad parts of policy space\n\nArguendo: The human brain is built to implicitly search for high-$V$ ways to accomplish a goal. Or not actually high-$V$, but high-$W$ where $W$ is what we intuitively want, which [313 has something to do with] $V.$ "Tile the universe with tiny smiley-faces" is low-$W$ so doesn't get considered.\n\nArguendo, your brain is built to search for policies *it* prefers. If you were looking for a way to open a stuck jar, your brain wouldn't generate the option of detonating a stick of dynamite, because that would be a policy ranked very low in your preference-ordering. So what's the point of searching that part of the policy space?\n\nThis argument seems to [3tc prove too much] in that it suggests that a chess player would be unable to search for their opponent's most preferred moves, if human brains could only search for policies that were high inside their own preference ordering. But there could be an explicit perspective-taking operation required, and somebody modeling an AI they had warm feelings about might fail to fully take the AI's perspective; that is, they fail to carry out an explicit cognitive step needed to switch off the "only $W$-good policies" filter.\n\nWe might also have a *limited* native ability to take perspectives on goals not our own. I.e., without further training, our brain can readily imagine that a chess opponent wants us to lose, or imagine that an AI wants to kill us because it hates us, and consider "reasonable" policy options along those lines. But this expanded policy search still fails to consider policies along the lines of "turn everything into tiny smileyfaces" when asking for ways to produce smiles, because *nobody* in the ancestral environment would have wanted that option and so our brain has a hard time natively modeling it.\n\n## Our brain doesn't automatically search weird parts of policy space\n\nArguendo: The human brain doesn't search "weird" (generalization-violating) parts of the policy space without an explicit effort.\n\nThe potential issue here is that "tile the galaxy with tiny smileyfaces" or "build environmental objects that encrypt streams of 1s or 0s, then reveal secrets" would be *weird* in the sense of violating generalizations that usually hold about policies or consequences in human experience. Not generalizations like, "nobody wants smiles smaller than an inch", but rather, "most problems are not solved with tiny molecular things".\n\n[2w] would tend to push the maximum (attainable optimum) of $U$ in "weird" or "extreme" directions - e.g., the *most* smiles can be obtained by making them very small, if this variable is not otherwise constrained. So the unforeseen maxima might tend to violate implicit generalizations that usually govern most goals or policies and that our brains take for granted. 
Aka, the unforeseen maximum isn't considered/generated by the policy search, because it's weird.\n\n## Conflating the helpful with the optimal\n\nArguendo: Someone might simply get as far as "$\\pi_1$ increases $U$" and then stop there and conclude that a $U$-agent does $\\pi_1.$\n\nThat is, they might just not realize that the argument "an advanced agent optimizing $U$ will execute policy $\\pi_1$" requires "$\\pi_1$ is the best way to optimize $U$" and not just "ceteris paribus, doing $\\pi_1$ is better for $U$ than doing nothing". So they don't realize that establishing "a $U$-agent does $\\pi_1$" requires establishing that no other $\\pi_k$ produces higher expected $U$. So they just never search for a $\\pi_k$ like that.\n\nThey might also be implicitly modeling $U$-agents as only weakly optimizing $U$, and hence not seeing a $U$-agent as facing tradeoffs or opportunity costs; that is, they implicitly model a $U$-agent as having no desire to produce any more $U$ than $\\pi_1$ produces. Again psychologizing, it does sometimes seem like people try to mentally model a $U$-agent as "an agent that sorta wants to produce some $U$ as a hobby, so long as nothing more important comes along" rather than "an agent whose action-selection criterion entirely consists of doing whatever action is expected to lead to the highest $U$".\n\nThis would fit well with the alleged observation that people allegedly "overlooking the weird alternative" seem more like they failed to search at all, than like they conducted a search but couldn't think of anything.\n\n## Political persuasion instincts on convenient instrumental strategies\n\nIf the above hypothetical were true - that people just hadn't thought of the possibility of higher-$U$ $\\pi_k$ existing - then we'd expect them to quickly change their minds upon this being pointed out. Actually, it's been empirically observed that there seems to be a lot more resistance than this.\n\nOne possible force that could produce resistance to the observation "$\\pi_0$ produces more $U$" - over and above the null hypothesis of ordinary pushback in argument, admittedly sometimes a very powerful force on its own - might be a brain running in a mode of "persuade another agent to execute a strategy $\\pi$ which is convenient to me, by arguing to the agent that $\\pi$ best serves the agent's own goals". E.g., if you want to persuade your boss to give you a raise, you would be wise to argue "you should give me a raise because it will make this project more efficient" rather than "you should give me a raise because I like money". By the general schema of the political brain, we'd be very likely to have built-in support for searching for arguments that a policy $\\pi$ we just happen to like is a *great* way to achieve somebody else's goal $U.$\n\nThen on the same schema, a competing policy $\\pi_0$ which is *better* at achieving the other agent's $U$, but less convenient for us than $\\pi_1$, is an "enemy soldier" in the political debate. 
We'll automatically search for reasons why $\\pi_0$ is actually really bad for $U$ and $\\pi_1$ is actually really good, and feel an instinctive dislike of $\\pi_0.$ By the standard schema of the self-deceptive brain, we'd probably convince ourselves that $\\pi_0$ is really bad for $U$ and $\\pi_1$ is really best for $U.$ It would not be advantageous to our persuasion to go around noting to ourselves all the reasons that $\\pi_0$ is good for $U.$ And we definitely wouldn't start spontaneously searching for $\\pi_k$ that are $U$-better than $\\pi_1,$ once we'd already found some $\\pi_1$ that was very convenient to us.\n\n(For a general post on the "fear of third alternatives", see [here](http://lesswrong.com/lw/hu/the_third_alternative/). This essay also suggests that a good test for whether you might be suffering from "fear of third alternatives" is to ask yourself whether you instinctively dislike or automatically feel skeptical of any proposed other options for achieving the stated criterion.)\n\n## The [apple_pie_problem apple pie problem]\n\nSometimes people propose that the only utility function an AGI needs is $U$, where $U$ is something very good, like democracy or freedom or [apple_pie_problem apple pie].\n\nIn this case, perhaps it sounds like a good thing to say about $U$ that it is the only utility function an AGI needs; and refusing to agree with this is *not* praising $U$ as highly as possible, hence an enemy soldier against $U.$\n\nOr: The speaker may not realize that "$U$ is really quite amazingly fantastically good" is not the same proposition as "an agent that maximizes $U$ and nothing else is [3d9 beneficial]", so they treat contradictions of the second statement as though they contradicted the first.\n\nOr: Pointing out that $\\pi_0$ is high-$U$ but low-$V$ may sound like an argument against $U,$ rather than an observation that apple pie is not the only good. "A universe filled with nothing but apple pie has low value" is not the same statement as "apple pie is bad and should not be in our utility function".\n\nIf the "apple pie problem" is real, it seems likely to implicitly rely on or interact with some of the other alleged problems. 
For example, someone may not realize that their own complex values $W$ contain a number of implicit filters $F_1, F_2$ which act to filter out $V$-bad ways of achieving $U,$ because they themselves are implicitly searching only for high-$W$ ways of achieving $U.$', metaText: '', isTextLoaded: 'true', isSubscribedToDiscussion: 'false', isSubscribedToUser: 'false', isSubscribedAsMaintainer: 'false', discussionSubscriberCount: '1', maintainerCount: '1', userSubscriberCount: '0', lastVisit: '', hasDraft: 'false', votes: [], voteSummary: 'null', muVoteSummary: '0', voteScaling: '0', currentUserVote: '-2', voteCount: '0', lockedVoteType: '', maxEditEver: '0', redLinkCount: '0', lockedBy: '', lockedUntil: '', nextPageId: '', prevPageId: '', usedAsMastery: 'false', proposalEditNum: '0', permissions: { edit: { has: 'false', reason: 'You don't have domain permission to edit this page' }, proposeEdit: { has: 'true', reason: '' }, delete: { has: 'false', reason: 'You don't have domain permission to delete this page' }, comment: { has: 'false', reason: 'You can't comment in this domain because you are not a member' }, proposeComment: { has: 'true', reason: '' } }, summaries: {}, creatorIds: [ 'EliezerYudkowsky' ], childIds: [], parentIds: [ 'unforeseen_maximum' ], commentIds: [], questionIds: [], tagIds: [ 'psychologizing' ], relatedIds: [], markIds: [], explanations: [], learnMore: [], requirements: [], subjects: [], lenses: [], lensParentId: '', pathPages: [], learnMoreTaughtMap: {}, learnMoreCoveredMap: {}, learnMoreRequiredMap: {}, editHistory: {}, domainSubmissions: {}, answers: [], answerCount: '0', commentCount: '0', newCommentCount: '0', linkedMarkCount: '0', changeLogs: [ { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '14614', pageId: 'missing_weird', userId: 'EliezerYudkowsky', edit: '5', type: 'newEdit', createdAt: '2016-06-27 01:16:55', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '12120', pageId: 'missing_weird', userId: 'EliezerYudkowsky', edit: '4', type: 'newEdit', createdAt: '2016-06-09 00:58:28', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '12119', pageId: 'missing_weird', userId: 'EliezerYudkowsky', edit: '3', type: 'newEdit', createdAt: '2016-06-09 00:56:47', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '12118', pageId: 'missing_weird', userId: 'EliezerYudkowsky', edit: '2', type: 'newEdit', createdAt: '2016-06-09 00:47:23', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '12117', pageId: 'missing_weird', userId: 'EliezerYudkowsky', edit: '1', type: 'newEdit', createdAt: '2016-06-09 00:43:18', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '12071', pageId: 'missing_weird', userId: 'EliezerYudkowsky', edit: '1', type: 'newTag', 
createdAt: '2016-06-08 20:17:59', auxPageId: 'psychologizing', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '12070', pageId: 'missing_weird', userId: 'EliezerYudkowsky', edit: '1', type: 'newParent', createdAt: '2016-06-08 20:12:04', auxPageId: 'unforeseen_maximum', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '12069', pageId: 'missing_weird', userId: 'EliezerYudkowsky', edit: '0', type: 'deleteParent', createdAt: '2016-06-08 20:02:52', auxPageId: 'rationality', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '12055', pageId: 'missing_weird', userId: 'EliezerYudkowsky', edit: '1', type: 'newParent', createdAt: '2016-06-08 19:47:10', auxPageId: 'rationality', oldSettingsValue: '', newSettingsValue: '' } ], feedSubmissions: [], searchStrings: {}, hasChildren: 'false', hasParents: 'true', redAliases: {}, improvementTagIds: [], nonMetaTagIds: [], todos: [], slowDownMap: 'null', speedUpMap: 'null', arcPageIds: 'null', contentRequests: {} }