(This is hard without threaded conversations. Responding to the "agree/disagree" from Eliezer.)

> The failure scenario that Paul visualizes for Orthogonality is something along the lines of, 'You can't have superintelligences that optimize any external factor, only things analogous to internal reinforcement.'

> The failure scenario that Paul visualizes for Orthogonality is something along the lines of, 'The problem of reflective stability is unsolvable in the limit and no efficient optimizer with a unitary goal can be computationally large or self-improving.'

I think there are a lot of plausible failure modes. The two failures you outline don't seem meaningfully distinct given our current understanding, and they seem to roughly describe what I'm imagining. Possible examples:

* Systems that simply want to reproduce and expand their own influence are at a fundamental advantage. To make this more concrete, imagine that powerful agents have lots of varied internal processes, and that constant effort is needed to prevent the proliferation of internal processes that are optimized for their own proliferation rather than for the pursuit of some overarching goal. Maybe this kind of effort is needed to obtain competent high-level behavior at all, but maybe if you have some simple values you can spend less effort and let your own internal character shift freely according to competitive pressures.
* What we were calling "sensory optimization" may be a core feature of some useful algorithms, and it may require a constant fraction of one's resources to repurpose that sensory optimization towards non-sensory ends. This might just be a different way of articulating the last bullet point. I think we could talk about the same thing in many different ways, and at this point we only have a vague understanding of what those scenarios would actually look like concretely.
* It turns out that at some fixed level of organization, the behavior of a system needs to reflect something about the goals of that system---there is no way to focus "generic" medium-level behavior towards an arbitrary goal that isn't already baked into that behavior. (The alternative, which seems almost necessary for the literal form of orthogonality, is that you can have arbitrarily large internal computations that are mostly independent of the agent's goals.)
This implies that systems with more complex goals need to do at least slightly more work to pursue those goals. For example, if a system devotes only 0.0000001% of its storage space/internal communication bandwidth to goal content, then that puts a clear lower bound on the scale at which the goals can inform behavior. Of course arbitrarily complex goals could probably be specified indirectly (e.g. "I want whatever is written in the envelope over there"), but if simple indirect representations are themselves larger than the representation of the simplest goals, this could still represent a real efficiency loss.
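As a rough, purely illustrative sketch of that lower-bound arithmetic (the capacity figure, the 1e-9 fraction, and the `max_goal_bits` helper are all hypothetical, not anything from the discussion above):

```python
def max_goal_bits(capacity_bits: float, goal_fraction: float) -> float:
    """Crude upper bound on how much goal content (direct or indirect)
    can inform behavior at a level of organization with this capacity."""
    return capacity_bits * goal_fraction

# Hypothetical numbers: a component handling ~10 GB of state/bandwidth
# that devotes 0.0000001% (1e-9) of it to goal content.
bound = max_goal_bits(capacity_bits=8e10, goal_fraction=1e-9)
print(bound)  # 80.0 bits -- any goal whose shortest (even indirect)
              # description is larger either can't steer behavior at this
              # scale or has to claim a bigger, costlier fraction
```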
> Paul is worried about something else / Eliezer has completely missed Paul's point.

I do think the more general point, that "we really don't know what's going on here," is probably more important than the particular possible counterexamples. Even if I had no plausible counterexamples in mind, I just wouldn't be especially confident.

I think the only robust argument in favor is that unbounded agents are probably orthogonal. But (1) that doesn't speak to efficiency, and (2) even that is a bit dicey, so I wouldn't go for 99% even on the weaker form of orthogonality that neglects efficiency.

> If you can get to 95% cognitive efficiency and 100% technological efficiency, then a human value optimizer ought to not be at an intergalactic-colonization disadvantage or a take-over-the-world-in-an-intelligence-explosion disadvantage and not even very much of a slow-takeoff disadvantage.

It sounds regrettable but certainly not catastrophic. Here is how I would think about this kind of thing (it's not something I've thought about much quantitatively, since it doesn't seem particularly action-relevant).

We might think that the speed of development or productivity of projects varies a lot randomly. So in the "race to take over the world" model (which I think is the best case for an inefficient project maximizing its share of the future), we'd want to think about what kind of probabilistic disadvantage a small productivity gap introduces.

As a simple toy model, you can imagine two projects; the one that does better will take over the world.

If you thought that productivity was log-normal with a standard deviation of a factor of 2, then a 5% productivity disadvantage corresponds to maybe a 48% chance of being the more productive project. Over longer periods the disadvantage becomes more pronounced, if the randomness averages out. Larger productivity variation decreases the impact of an efficiency loss, and smaller variation increases it. If there are more participants, then the impact of a productivity hit becomes significantly larger. If the good guys only have a small probability of losing anyway, then the cost is proportionally lower. And so on.
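A minimal sanity check of that 48% figure, under one reading of the toy model (two projects with independent log-normal productivity, a one-shot comparison, and "standard deviation of a factor of 2" taken as a standard deviation of ln 2 in log-productivity; that parameterization is an assumption on my part):

```python
from math import log, sqrt
from statistics import NormalDist

sigma = log(2)    # std dev of log-productivity: a "factor of 2" spread
gap = log(1.05)   # a 5% productivity disadvantage, in log terms

# The difference of two independent log-productivities is normal with
# standard deviation sigma * sqrt(2); the less efficient project comes
# out ahead when that noise outweighs its deterministic gap.
p_laggard_wins = NormalDist().cdf(-gap / (sigma * sqrt(2)))
print(f"{p_laggard_wins:.2f}")  # ~0.48
```

Shrinking `sigma` in this sketch pushes the probability further below 50%, which is the "larger or smaller variation" point above.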
Combining this with my other views, maybe one is looking at a cost of tenths of a percent. You would presumably hope to avoid this by having the world coordinate even a tiny bit (I thought about this a bit [here](https://medium.com/ai-control/technical-and-social-approaches-to-ai-safety-5e225ca30c46)). Overall I'll stick with regrettable but far from catastrophic.

(My bigger issue in practice with efficiency losses is similar to your view that people ought to have really high confidence: I think it is easy to make sloppy arguments that one approach to AI is 10% as effective as another when in fact it is 0.0001% as effective, and holding yourself to asymptotic equivalence is a more productive standard unless it turns out to be unrealizable.)