Comment by BenjyForstadt on "Executable philosophy" (https://arbital.com/p/executable_philosophy), 2016-06-03:

I have a few complaints/questions:

1) "What is goodness made out of?" is not really a particularly active discussion in professional philosophy; I feel it was put in there just to make analytic philosophers look silly. And anyway, if one believes in naturalistic moral properties (the stuff that we value), then "What is goodness made out of?" just is the question "What is good?", which I think is a perfectly fine question. In that case, rephrasing it in terms of AI only makes philosophical discussions wordier and less accessible.

2) "Faced with any philosophically confusing issue, our task is to identify what cognitive algorithm humans are executing which feels from the inside like this sort of confusion, rather than, as in conventional philosophy, to try to clearly define terms and then weigh up all possible arguments for all 'positions'."

I don't see what the problem is with clearly defining terms and weighing the pros and cons of positions. Is conceptual analysis (http://philpapers.org/browse/conceptual-analysis) so problematic that it has no place in an improved version of philosophy? I think there are at least a few parallels between that project in philosophy and the sentiment expressed in https://arbital.com/p/3y6/, for example.

3) "Most 'philosophical issues' worth pursuing can and should be rephrased as subquestions of some primary question about how to design an Artificial Intelligence, even as a matter of philosophy qua philosophy."

What is "philosophy qua philosophy"?

"This imports the discipline of programming into philosophy. In particular, programmers learn that even if they have an inchoate sense of what a computer should do, when they actually try to write it out as code, they sometimes find that the code they have written fails (on visual inspection) to match up with their inchoate sense. Many ideas that sound sensible as English sentences are revealed as confused as soon as we try to write them out as code."

How would one translate questions like "Are there unverifiable truths?" or "Under what conditions does the parthood relation hold?" into AI-speak?
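To make that last question concrete, here is a rough Python sketch of what "writing it out as code" might look like for the unverifiable-truths question. All of the names here are hypothetical and mine, not the article's; the stubs are exactly where the translation stalls for me:

    # Toy attempt to cash out "Are there unverifiable truths?" as code.
    # Hypothetical names throughout; my illustration, not the article's.
    from typing import Iterable

    Proposition = str
    Observation = str

    def entails(evidence: Iterable[Observation], p: Proposition) -> bool:
        """Would an agent holding this evidence be compelled to accept p?
        Implementing this requires a full theory of confirmation. Stub."""
        raise NotImplementedError

    def is_verifiable(p: Proposition,
                      possible_evidence: Iterable[Iterable[Observation]]) -> bool:
        """p is verifiable iff some attainable body of evidence settles it.
        'Attainable' and 'settles' push the whole question into the stub."""
        return any(entails(e, p) for e in possible_evidence)

    def exists_unverifiable_truth(truths: Iterable[Proposition],
                                  possible_evidence: Iterable[Iterable[Observation]]) -> bool:
        # The original question, now one line of code, yet no more
        # answerable than it was as an English sentence.
        return any(not is_verifiable(t, possible_evidence) for t in truths)

Even granting the reformulation, the functions that would have to do the real work are precisely the ones nobody knows how to write, so it is not obvious to me what the translation into code has bought us here.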