{ localUrl: '../page/instrumental_goals_equally_tractable.html', arbitalUrl: 'https://arbital.com/p/instrumental_goals_equally_tractable', rawJsonUrl: '../raw/8v4.json', likeableId: '4092', likeableType: 'page', myLikeValue: '0', likeCount: '1', dislikeCount: '0', likeScore: '1', individualLikes: [ 'EricRogstad' ], pageId: 'instrumental_goals_equally_tractable', edit: '2', editSummary: '', prevEdit: '1', currentEdit: '2', wasPublished: 'true', type: 'wiki', title: 'Instrumental goals are almost-equally as tractable as terminal goals', clickbait: 'Getting the milk from the refrigerator because you want to drink it, is not vastly harder than getting the milk from the refrigerator because you inherently desire it.', textLength: '8853', alias: 'instrumental_goals_equally_tractable', externalUrl: '', sortChildrenBy: 'likes', hasVote: 'false', voteType: '', votesAnonymous: 'false', editCreatorId: 'EliezerYudkowsky', editCreatedAt: '2017-11-26 22:43:10', pageCreatorId: 'EliezerYudkowsky', pageCreatedAt: '2017-11-26 22:35:31', seeDomainId: '0', editDomainId: 'EliezerYudkowsky', submitToDomainId: '0', isAutosave: 'false', isSnapshot: 'false', isLiveEdit: 'true', isMinorEdit: 'false', indirectTeacher: 'false', todoCount: '0', isEditorComment: 'false', isApprovedComment: 'false', isResolved: 'false', snapshotText: '', anchorContext: '', anchorText: '', anchorOffset: '0', mergedInto: '', isDeleted: 'false', viewCount: '88', text: '[summary: One counterargument to the [1y Orthogonality Thesis] asserts that agents with terminal preferences for goals like e.g. resource acquisition will always be much better at those goals than agents which merely try to acquire resources on the way to doing something else, like making paperclips. A reply is that any competent agent optimizing a utility function $U_0$ must have the ability to execute many subgoals and sub-subgoals $W_1, W_2, ... W_{25}$ that are all conditioned on arriving at the same future, and it is not especially easier to optimize $W_{25}$ if you promote $W_1$ to an unconditional terminal goal. E.g. it is not much harder to design a battery to power interstellar probes to gather resources to make paperclips than to design a battery to power interstellar probes to gather resources and hoard them.]\n\nOne counterargument to the Orthogonality Thesis asserts that agents with terminal preferences for goals like e.g. resource acquisition will always be much better at those goals than agents which merely try to acquire resources on the way to doing something else, like making paperclips. Therefore, by filtering on real-world competent agents, we filter out all agents which do not have terminal preferences for acquiring resources.\n\nA reply is that "figuring out how to do $W_4$ on the way to $W_3$, on the way to $W_2$, on the way to $W_1$, without that particular way of doing $W_4$ stomping on your ability to later achieve $W_2$" is such a ubiquitous idiom of cognition or supercognition that (a) any competent agent must already do that all the time, and (b) it doesn't seem like adding one more straightforward target $W_0$ to the end of the chain should usually result in greatly increased computational costs or greatly diminished ability to optimize $W_4$.\n\nE.g. 
contrast the necessary thoughts of a paperclip maximizer acquiring resources in order to turn them into paperclips, and an agent with a terminal goal of acquiring and hoarding resources.\n\nThe paperclip maximizer has a terminal utility function $U_0$ which counts the number of paperclips in the universe (or rather, paperclip-seconds in the universe's history). The paperclip maximizer then identifies a sequence of subgoals and sub-subgoals $W_1, W_2, W_3...W_N$ corresponding to increasingly fine-grained strategies for making paperclips, each of which is subject to the constraint that it doesn't stomp on the previous elements of the goal hierarchy. (For simplicity of exposition we temporarily pretend that each goal has only one subgoal rather than a family of conjunctive and disjunctive subgoals.)\n\nMore concretely, we can imagine that $W_1$ is "get matter under my control (in a way that doesn't stop me from making paperclips with it)". That is, if we consider the naive or unconditional description $W_1'$, "get matter under my 'control' (whether or not I can make paperclips with it)", we are here interested in a subset of states $W_1 \\subset W_1'$ such that $\\mathbb E[U_0|W_1]$ is high. Then $W_2$ might be "explore the universe to find matter (in such a way that it doesn't interfere with bringing that matter under control or turning it into paperclips)", $W_3$ might be "build interstellar probes (in such a way that ...)", and as we go further into the hierarchy we will find $W_{10}$ "gather all the materials for an interstellar probe in one place (in such a way that ...)", $W_{20}$ "lay the next 20 sets of rails for transporting the titanium cart", and $W_{25}$ "move the left controller upward".\n\nOf course by the time we're that deep in the hierarchy, any efficient planning algorithm is making some use of real independences where we can reason relatively myopically about how to lay train tracks without worrying very much about what the cart of titanium is being used for. (Provided that the strategies are constrained enough in domain to not include any strategies that stomp distant higher goals, e.g. the strategy "build an independent superintelligence that just wants to lay train tracks"; if the system were optimizing that broadly it would need to check distant consequences and condition on them.)\n\nThe reply would then be that, in general, any feat of superintelligence requires making a ton of big, medium-sized, and little strategies all converge on a single future state in virtue of all of those strategies having been selected sufficiently well to optimize the expectation $\\mathbb E[U|W_1,W_2,...]$ for some $U.$ A ton of little and medium-sized strategies must have all managed not to collide with each other or with larger big-picture considerations. If you can't do this much then you can't win a game of Go or build a factory or even walk across the room without your limbs tangling up.\n\nThen there doesn't seem to be any good reason to expect an agent which is instead directly optimizing the utility function $U_1$, "acquire and hoard resources", to do a very much better job of optimizing $W_{10}$ or $W_{25}.$ When $W_{25}$ already needs to be conditioned in such a way as to not stomp on all the higher goals $W_2, W_3, ...$, it just doesn't seem that much less constraining to target $U_1$ versus targeting $W_1$ conditional on $U_0.$ Most of the cognitive labor in the sequence does not seem like it should be going into checking for $U_0$ at the end instead of checking for $U_1$ at the end. 
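\n\nTo restate that comparison as a single schematic formula (a restatement of the setup above, not an additional claim): at each level $k$ of the hierarchy, both agents are solving the same selection problem over the same candidate set, and they differ only in which terminal utility appears inside the expectation. Writing $w$ for a candidate way of realizing the naive description $W_k'$, the paperclip maximizer picks roughly $W_k = \\underset{w \\subset W_k'}{\\operatorname{arg\\,max}} \\; \\mathbb E[U_0|W_1, ..., W_{k-1}, w],$ while the resource-hoarder picks $\\underset{w \\subset W_k'}{\\operatorname{arg\\,max}} \\; \\mathbb E[U_1|W_1, ..., W_{k-1}, w].$ The world model, the generation of candidate strategies $w$, and the consistency checks against $W_1, ..., W_{k-1}$ are shared between the two problems; only the scoring at the very end differs, which is why the computational costs should be nearly identical.\n\n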
That cognitive labor should instead be going into, e.g., figuring out how to make any kind of interstellar probe and figuring out how to build factories.\n\nIt has not historically been the case that the most computationally efficient way to play chess is to have competing agents inside the chess algorithm trying to optimize different unconditional utility functions and bidding on the right to make moves in order to pursue their own local goal of "protect the queen, regardless of other long-term consequences" or "control the center, regardless of other long-term consequences". What we are actually trying to get is the chess move such that, conditioning on that chess move and the sort of future chess moves we are likely to make, our chance of winning is the highest. The best modern chess algorithms do their best to factor in anything that affects long-range consequences whenever they know about those consequences. The best chess algorithms don't try to factor things into lots of colliding unconditional urges, because sometimes that's not how "the winning move" factors. You can extremely often do better by doing a deeper consequentialist search that conditions multiple elements of your strategy on longer-term consequences in a way that prevents your moves from stepping on each other. It's not very much of an exaggeration to say that this is why humans with brains that can imagine long-term consequences are smarter than, say, armadillos.\n\nSometimes there are subtleties we don't have the computing power to notice; we can't literally condition on the actual future. But "to make paperclips, acquire resources and use them to make paperclips" versus "to make paperclips, acquire resources regardless of whether they can be used to make paperclips" is not subtle. We'd expect a superintelligence that was [6s efficient relative to humans] to understand and correct at least those divergences between $W_1$ and $W_1'$ that a human could see, using at most the trivial amount of computing power represented by a human brain. To the extent that particular choices are being selected on over a domain that is likely to include choices with huge long-range consequences, one expends the computing power to check and condition on the long-range consequences; but a supermajority of choices shouldn't require checks of this sort; and even choices about how to design train tracks that do require longer-range checks are not going to be much more or less tractable depending on whether the distant top of the goal hierarchy is something like "make paperclips" or "hoard resources".\n\nEven supposing that there could be 5% more computational cost associated with checking instrumental strategies for stepping on "promote fun-theoretic eudaimonia", which might ubiquitously involve considerations like "make sure none of the computational processes you use to do this are themselves sentient", this doesn't mean you can't have competent agents that go ahead and spend 5% more computation. It's simply the correct choice to build subagents that expend 5% more computation to maintain coordination on achieving eudaimonia, rather than building subagents that expend 5% less computation to hoard resources and never give them back. It doesn't matter if the second kind of agent is less "costly" in some myopic sense; it is vastly less useful and indeed actively destructive. 
So nothing that is choosing so as to optimize its expectation of $U_0$ will build a subagent that generally optimizes its own expectation of $U_1.$', metaText: '', isTextLoaded: 'true', isSubscribedToDiscussion: 'false', isSubscribedToUser: 'false', isSubscribedAsMaintainer: 'false', discussionSubscriberCount: '1', maintainerCount: '1', userSubscriberCount: '0', lastVisit: '', hasDraft: 'false', votes: [], voteSummary: 'null', muVoteSummary: '0', voteScaling: '0', currentUserVote: '-2', voteCount: '0', lockedVoteType: '', maxEditEver: '0', redLinkCount: '0', lockedBy: '', lockedUntil: '', nextPageId: '', prevPageId: '', usedAsMastery: 'false', proposalEditNum: '0', permissions: { edit: { has: 'false', reason: 'You don't have domain permission to edit this page' }, proposeEdit: { has: 'true', reason: '' }, delete: { has: 'false', reason: 'You don't have domain permission to delete this page' }, comment: { has: 'false', reason: 'You can't comment in this domain because you are not a member' }, proposeComment: { has: 'true', reason: '' } }, summaries: {}, creatorIds: [ 'EliezerYudkowsky' ], childIds: [], parentIds: [ 'orthogonality' ], commentIds: [], questionIds: [], tagIds: [], relatedIds: [], markIds: [], explanations: [], learnMore: [], requirements: [ { id: '7750', parentId: 'timemachine_efficiency_metaphor', childId: 'instrumental_goals_equally_tractable', type: 'requirement', creatorId: 'EliezerYudkowsky', createdAt: '2017-11-26 22:36:46', level: '2', isStrong: 'false', everPublished: 'true' }, { id: '7751', parentId: 'orthogonality', childId: 'instrumental_goals_equally_tractable', type: 'requirement', creatorId: 'EliezerYudkowsky', createdAt: '2017-11-26 22:37:00', level: '1', isStrong: 'true', everPublished: 'true' }, { id: '7752', parentId: 'instrumental_convergence', childId: 'instrumental_goals_equally_tractable', type: 'requirement', creatorId: 'EliezerYudkowsky', createdAt: '2017-11-26 22:37:13', level: '1', isStrong: 'false', everPublished: 'true' }, { id: '7758', parentId: 'efficiency', childId: 'instrumental_goals_equally_tractable', type: 'requirement', creatorId: 'EliezerYudkowsky', createdAt: '2017-11-26 22:43:55', level: '2', isStrong: 'false', everPublished: 'true' } ], subjects: [ { id: '7754', parentId: 'paperclip_maximizer', childId: 'instrumental_goals_equally_tractable', type: 'subject', creatorId: 'EliezerYudkowsky', createdAt: '2017-11-26 22:37:53', level: '2', isStrong: 'false', everPublished: 'true' }, { id: '7755', parentId: 'orthogonality', childId: 'instrumental_goals_equally_tractable', type: 'subject', creatorId: 'EliezerYudkowsky', createdAt: '2017-11-26 22:38:11', level: '2', isStrong: 'false', everPublished: 'true' }, { id: '7756', parentId: 'instrumental_convergence', childId: 'instrumental_goals_equally_tractable', type: 'subject', creatorId: 'EliezerYudkowsky', createdAt: '2017-11-26 22:38:52', level: '2', isStrong: 'false', everPublished: 'true' } ], lenses: [], lensParentId: '', pathPages: [], learnMoreTaughtMap: {}, learnMoreCoveredMap: { '10h': [ '7ch' ], '1y': [ '1y' ] }, learnMoreRequiredMap: {}, editHistory: {}, domainSubmissions: {}, answers: [], answerCount: '0', commentCount: '0', newCommentCount: '0', linkedMarkCount: '0', changeLogs: [ { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '22897', pageId: 'instrumental_goals_equally_tractable', userId: 'EliezerYudkowsky', edit: '0', type: 'newRequirement', createdAt: '2017-11-26 22:44:02', auxPageId: 
'efficiency', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '22896', pageId: 'instrumental_goals_equally_tractable', userId: 'EliezerYudkowsky', edit: '2', type: 'newEdit', createdAt: '2017-11-26 22:43:10', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '22894', pageId: 'instrumental_goals_equally_tractable', userId: 'EliezerYudkowsky', edit: '0', type: 'newAlias', createdAt: '2017-11-26 22:39:31', auxPageId: '', oldSettingsValue: '8v4', newSettingsValue: 'instrumental_goals_equally_tractable' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '22895', pageId: 'instrumental_goals_equally_tractable', userId: 'EliezerYudkowsky', edit: '0', type: 'newEditGroup', createdAt: '2017-11-26 22:39:31', auxPageId: 'EliezerYudkowsky', oldSettingsValue: '123', newSettingsValue: '2' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '22893', pageId: 'instrumental_goals_equally_tractable', userId: 'EliezerYudkowsky', edit: '0', type: 'newParent', createdAt: '2017-11-26 22:39:08', auxPageId: 'orthogonality', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '22891', pageId: 'instrumental_goals_equally_tractable', userId: 'EliezerYudkowsky', edit: '0', type: 'newSubject', createdAt: '2017-11-26 22:38:53', auxPageId: 'instrumental_convergence', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '22889', pageId: 'instrumental_goals_equally_tractable', userId: 'EliezerYudkowsky', edit: '0', type: 'newSubject', createdAt: '2017-11-26 22:38:12', auxPageId: 'orthogonality', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '22887', pageId: 'instrumental_goals_equally_tractable', userId: 'EliezerYudkowsky', edit: '0', type: 'newSubject', createdAt: '2017-11-26 22:37:54', auxPageId: 'paperclip_maximizer', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '22885', pageId: 'instrumental_goals_equally_tractable', userId: 'EliezerYudkowsky', edit: '0', type: 'deleteRequirement', createdAt: '2017-11-26 22:37:45', auxPageId: 'paperclip_maximizer', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '22883', pageId: 'instrumental_goals_equally_tractable', userId: 'EliezerYudkowsky', edit: '0', type: 'newRequirement', createdAt: '2017-11-26 22:37:37', auxPageId: 'paperclip_maximizer', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '22882', pageId: 
'instrumental_goals_equally_tractable', userId: 'EliezerYudkowsky', edit: '0', type: 'newRequirement', createdAt: '2017-11-26 22:37:14', auxPageId: 'instrumental_convergence', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '22881', pageId: 'instrumental_goals_equally_tractable', userId: 'EliezerYudkowsky', edit: '0', type: 'newRequirement', createdAt: '2017-11-26 22:37:02', auxPageId: 'orthogonality', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '22880', pageId: 'instrumental_goals_equally_tractable', userId: 'EliezerYudkowsky', edit: '0', type: 'newRequirement', createdAt: '2017-11-26 22:36:47', auxPageId: 'timemachine_efficiency_metaphor', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '22879', pageId: 'instrumental_goals_equally_tractable', userId: 'EliezerYudkowsky', edit: '0', type: 'deleteSubject', createdAt: '2017-11-26 22:36:41', auxPageId: 'timemachine_efficiency_metaphor', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '22877', pageId: 'instrumental_goals_equally_tractable', userId: 'EliezerYudkowsky', edit: '0', type: 'newSubject', createdAt: '2017-11-26 22:36:35', auxPageId: 'timemachine_efficiency_metaphor', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '22875', pageId: 'instrumental_goals_equally_tractable', userId: 'EliezerYudkowsky', edit: '1', type: 'newEdit', createdAt: '2017-11-26 22:35:31', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' } ], feedSubmissions: [], searchStrings: {}, hasChildren: 'false', hasParents: 'true', redAliases: {}, improvementTagIds: [], nonMetaTagIds: [], todos: [], slowDownMap: 'null', speedUpMap: 'null', arcPageIds: 'null', contentRequests: {} }