{ localUrl: '../page/2nd.html', arbitalUrl: 'https://arbital.com/p/2nd', rawJsonUrl: '../raw/2nd.json', likeableId: '1581', likeableType: 'page', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], pageId: '2nd', edit: '1', editSummary: '', prevEdit: '0', currentEdit: '1', wasPublished: 'true', type: 'comment', title: '"Presumably the advantage of..."', clickbait: '', textLength: '787', alias: '2nd', externalUrl: '', sortChildrenBy: 'recentFirst', hasVote: 'false', voteType: '', votesAnonymous: 'false', editCreatorId: 'PaulChristiano', editCreatedAt: '2016-03-16 16:39:08', pageCreatorId: 'PaulChristiano', pageCreatedAt: '2016-03-16 16:39:08', seeDomainId: '0', editDomainId: 'EliezerYudkowsky', submitToDomainId: '0', isAutosave: 'false', isSnapshot: 'false', isLiveEdit: 'true', isMinorEdit: 'false', indirectTeacher: 'false', todoCount: '0', isEditorComment: 'false', isApprovedComment: 'true', isResolved: 'false', snapshotText: '', anchorContext: 'The problem of conservatism is an extension of the supervised learning problem in which, given labeled examples, we try to generate further cases that are almost certainly positive examples of a concept, rather than demanding that we label all possible further examples correctly\\. Another way of looking at it is that, given labeled training data, we don't just want to learn a simple concept that fits the labeled data, we want to learn a simple small concept that fits the data \\- one that, subject to the constraint of labeling the training data correctly, predicts as few other positive examples as possible\\.', anchorText: 'we want to learn a simple small concept that fits the data \\- one that, subject to the constraint of labeling the training data correctly, predicts as few other positive examples as possible\\.', anchorOffset: '423', mergedInto: '', isDeleted: 'false', viewCount: '360', text: 'Presumably the advantage of this approach, rather than simply learning to imitate the human burrito-making process or even human burritos, is that it might be easier to do. Is that right?\n\nI think that's a valid goal, but I'm not sure how well "conservative generalizations" actually address the problem. 
Certainly it still leaves you at a significant disadvantage relative to a non-conservative agent, and it seems more natural to first consider direct approaches to making imitation effective (like bootstrapping + [meeting halfway](https://medium.com/ai-control/mimicry-maximization-and-meeting-halfway-c149dd23fc17)).\n\nOf course all of these approaches still involve a lot of extra work, so maybe the difference is our expectations about how different research angles will work out.', metaText: '', isTextLoaded: 'true', isSubscribedToDiscussion: 'false', isSubscribedToUser: 'false', isSubscribedAsMaintainer: 'false', discussionSubscriberCount: '1', maintainerCount: '1', userSubscriberCount: '0', lastVisit: '', hasDraft: 'false', votes: [], voteSummary: 'null', muVoteSummary: '0', voteScaling: '0', currentUserVote: '-2', voteCount: '0', lockedVoteType: '', maxEditEver: '0', redLinkCount: '0', lockedBy: '', lockedUntil: '', nextPageId: '', prevPageId: '', usedAsMastery: 'false', proposalEditNum: '0', permissions: { edit: { has: 'false', reason: 'You don't have domain permission to edit this page' }, proposeEdit: { has: 'true', reason: '' }, delete: { has: 'false', reason: 'You don't have domain permission to delete this page' }, comment: { has: 'false', reason: 'You can't comment in this domain because you are not a member' }, proposeComment: { has: 'true', reason: '' } }, summaries: {}, creatorIds: [ 'PaulChristiano' ], childIds: [], parentIds: [ 'taskagi_open_problems' ], commentIds: [], questionIds: [], tagIds: [], relatedIds: [], markIds: [], explanations: [], learnMore: [], requirements: [], subjects: [], lenses: [], lensParentId: '', pathPages: [], learnMoreTaughtMap: {}, learnMoreCoveredMap: {}, learnMoreRequiredMap: {}, editHistory: {}, domainSubmissions: {}, answers: [], answerCount: '0', commentCount: '0', newCommentCount: '0', linkedMarkCount: '0', changeLogs: [ { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '8630', pageId: '2nd', userId: 'PaulChristiano', edit: '1', type: 'newEdit', createdAt: '2016-03-16 16:39:08', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '8629', pageId: '2nd', userId: 'PaulChristiano', edit: '0', type: 'newParent', createdAt: '2016-03-16 16:33:24', auxPageId: 'taskagi_open_problems', oldSettingsValue: '', newSettingsValue: '' } ], feedSubmissions: [], searchStrings: {}, hasChildren: 'false', hasParents: 'true', redAliases: {}, improvementTagIds: [], nonMetaTagIds: [], todos: [], slowDownMap: 'null', speedUpMap: 'null', arcPageIds: 'null', contentRequests: {} }