{
  localUrl: '../page/reinforcement_learning_linguistic_convention.html',
  arbitalUrl: 'https://arbital.com/p/reinforcement_learning_linguistic_convention',
  rawJsonUrl: '../raw/1v2.json',
  likeableId: 'donor_coordination',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '0',
  dislikeCount: '0',
  likeScore: '0',
  individualLikes: [],
  pageId: 'reinforcement_learning_linguistic_convention',
  edit: '3',
  editSummary: '',
  prevEdit: '2',
  currentEdit: '3',
  wasPublished: 'true',
  type: 'wiki',
  title: 'Reinforcement learning and linguistic convention',
  clickbait: '',
  textLength: '3902',
  alias: 'reinforcement_learning_linguistic_convention',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'PaulChristiano',
  editCreatedAt: '2016-03-04 00:52:53',
  pageCreatorId: 'PaulChristiano',
  pageCreatedAt: '2016-02-03 00:42:48',
  seeDomainId: '0',
  editDomainId: '705',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '0',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '38',
  text: 'Existing machine learning techniques are most effective when we can provide concrete feedback — such as prediction accuracy or the score in a game. For the purpose of AI control, I think that this is an important and natural category, and I really need a term for it.\n\nI’ve been referring to this regime as “supervised” learning, but that’s an extremely unusual use of the term and probably a mistake. I plan to use the term “reinforcement learning” in the future.\n\nSutton and Barto, whom I will take as authorities, [define](https://webdocs.cs.ualberta.ca/~sutton/book/ebook/node7.html) reinforcement learning informally as:\n\n> learning what to do — how to map situations to actions — so as to maximize a numerical reward signal.\n\nIf we construe “actions” broadly, so as to include cognitive actions (like forming or updating a representation), then this is exactly what I want to talk about. It’s not clear whether Sutton and Barto mean to include supervised learning as a special case, but Barto at least [writes](http://www-anw.cs.umass.edu/pubs/2004/barto_d_04.pdf): “it is possible to convert any supervised learning task into a reinforcement learning task.”\n\nThis category is much broader than what people normally mean when they talk about “reinforcement learning.” It’s probably even broader than what Sutton and Barto have in mind. But still, I think “reinforcement learning” captures what I want relatively well, and is definitely better than any other existing term. So I’m going to run with it.\n\nFrom my perspective, the key (and almost only) assumption of reinforcement learning is that the objective-to-be-optimized comes in the form of numerical rewards that are **actually provided to the algorithm**. This is required by many modern methods (e.g. anything using gradient descent), even those that we don’t normally think of as RL.\n\n### Clarifications\n\n- The term “reinforcement learning” is often meant to exclude supervised learning and other learning problems that fit into a narrower framework. I definitely _don’t_ mean to use it in this narrower sense. I am using the term to refer to a very broad category that intentionally subsumes most modern machine learning.\n- Reinforcement learning can be coupled with reward engineering. For example, supervised learning fits into this definition of reinforcement learning, since we can use the data distribution and loss function to define rewards.\n- Reinforcement learning often refers to sequential decision problems, but I don’t mean to make this restriction. So far, I think these are generalizations that researchers in RL would agree with (though they’d likely consider sequential problems most interesting).\n- Reinforcement learning implies an interaction between an agent and an environment, but I don’t mean to make any assumptions about the nature of the environment. The “environment” is just whatever process computes the rewards and observations. It could be anything from a SAT checker, to a human reviewer, to a board game, to a rich and realistic environment.\n- Reinforcement learning often focuses on choosing actions, but I want to explicitly include cognitive actions. Some of these are clear fits — e.g. allocating memory effectively. Others don’t feel at all like “reinforcement learning” — e.g. learning to form sparse representations. But from a formal perspective, a representation is just another kind of output, and the reinforcement learning framework captures these cases as well.\n\n### Supervised learning\n\nSupervised learning seems like an important special case.\n\nFrom the perspective of AI control, I think the most important simplifying assumption is that the information provided to the algorithm does not depend on its behavior. So there is no need for explicit exploration, data can be reused, and so on.\n',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '1',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don\'t have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don\'t have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can\'t comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'PaulChristiano'
  ],
  childIds: [],
  parentIds: [
    'paul_ai_control'
  ],
  commentIds: [],
  questionIds: [],
  tagIds: [],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8269',
      pageId: 'reinforcement_learning_linguistic_convention',
      userId: 'JessicaChuan',
      edit: '3',
      type: 'newEdit',
      createdAt: '2016-03-04 00:52:53',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '7757',
      pageId: 'reinforcement_learning_linguistic_convention',
      userId: 'JessicaChuan',
      edit: '2',
      type: 'newEdit',
      createdAt: '2016-02-24 23:11:32',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6208',
      pageId: 'reinforcement_learning_linguistic_convention',
      userId: 'JessicaChuan',
      edit: '0',
      type: 'deleteChild',
      createdAt: '2016-02-03 00:46:59',
      auxPageId: 'heterogenous_objectives',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6206',
      pageId: 'reinforcement_learning_linguistic_convention',
      userId: 'JessicaChuan',
      edit: '1',
      type: 'newChild',
      createdAt: '2016-02-03 00:46:49',
      auxPageId: 'heterogenous_objectives',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6205',
      pageId: 'reinforcement_learning_linguistic_convention',
      userId: 'JessicaChuan',
      edit: '1',
      type: 'newEdit',
      createdAt: '2016-02-03 00:42:48',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6204',
      pageId: 'reinforcement_learning_linguistic_convention',
      userId: 'JessicaChuan',
      edit: '0',
      type: 'newParent',
      createdAt: '2016-02-03 00:41:29',
      auxPageId: 'paul_ai_control',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6202',
      pageId: 'reinforcement_learning_linguistic_convention',
      userId: 'JessicaChuan',
      edit: '0',
      type: 'deleteParent',
      createdAt: '2016-02-03 00:41:21',
      auxPageId: 'Scalable_ai_control',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6200',
      pageId: 'reinforcement_learning_linguistic_convention',
      userId: 'JessicaChuan',
      edit: '0',
      type: 'newParent',
      createdAt: '2016-02-03 00:41:10',
      auxPageId: 'Scalable_ai_control',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}