{
  localUrl: '../page/1s7.html',
  arbitalUrl: 'https://arbital.com/p/1s7',
  rawJsonUrl: '../raw/1s7.json',
  likeableId: '725',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '0',
  dislikeCount: '0',
  likeScore: '0',
  individualLikes: [],
  pageId: '1s7',
  edit: '1',
  editSummary: '',
  prevEdit: '0',
  currentEdit: '1',
  wasPublished: 'true',
  type: 'wiki',
  title: 'Necessary conditions for expertise: the P-I-F-T method',
  clickbait: 'A domain-general method to help you assess whether a person has the necessary conditions for expertise in a given domain.',
  textLength: '17844',
  alias: '1s7',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'TylerAlterman',
  editCreatedAt: '2016-01-29 08:38:44',
  pageCreatorId: 'TylerAlterman',
  pageCreatedAt: '2016-01-29 08:38:44',
  seeDomainId: '0',
  editDomainId: 'JamesBabcock',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '0',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '12',
  text: 'The claim: each of the four criteria below is a necessary (but not sufficient) condition for expertise. You may find them to be pretty obvious, though ask yourself: do I explicitly assess for these things while engaging experts? If not, you may run the risk of, e.g., hiring the wrong people or acting on faulty information.\n\nTo gain fluency in assessing each of the following conditions, I recommend the following process:\n\n 1. Write down a list of examples where it is critical for you to trust either your own or someone else's knowledge or skills. These are examples where either you are the expert or you are counting on an expert.\n 2. Think of ways in which you can apply the criteria below to the expert.\n\nFurthermore, each of the examples below will test your intuitions about expertise through a question at the end. Please try to answer each question before looking at the answer.\n\nP: *Processing* of relevant information\n----------------------------------------\n\nIs the person performing detailed mental operations upon relevant data, beyond the ones which non-experts perform? Is this the sort of processing which would plausibly yield expertise in the relevant domain?\n\n**Examples**\n\n{You want to invite a philosophy expert to speak at your conference.}\n\nBrad and Will encounter an argument. Brad, the analytical philosophy novice, assesses whether it feels intuitively true. Will, the analytical philosophy expert, not only assesses whether the argument feels intuitively true, but also assesses its conceptual clarity and hidden premises. Who do you think has the better marker of expertise? Why?\n\n--\n\nAnswer: Brad lacks a robust process; Will does not. Invite Will.\n\n--\n\n{You want to learn persuasion.}\n\nSteve and Hilary must persuade Rochelle the French bureaucrat to expedite their visa process. Both Steve and Hilary note that Rochelle is wearing delightful chartreuse earrings. Steve, the persuasion novice, runs the mental process of noting that Rochelle is a person, and that people generally respond well to friendliness. Hilary, the persuasion expert, notes that Rochelle's chartreuse earrings are the only flamboyant clothing items that anyone in the office is wearing. Based on a large sample size of processed experience now stored in her System 1, she guesses that the chartreuse earrings could signify that Rochelle wants to distinguish herself from her fellow bureaucrats - to show that she is not just another bureaucrat. Based on this, Hilary gambles on saying things which show that she appreciates Rochelle's uniqueness - e.g., "You're the friendliest-looking embassy employee I've ever met!" Who do you think has the better marker of expertise? Why?\n\n--\n\nSteve lacks a robust process; Hilary does not. Learn from Hilary.\n\n--\n\n{You are a foundation program officer deciding who to give a grant to.} \n\nMartha and Leanne are both published neuroscientists who study the lateral geniculate nucleus. You have already read their grant proposals, and they seem to be of comparable quality. Even though you are not an expert in geniculate nuclei, let alone lateral ones, you must choose one person to give a grant to. Thus, you must decide who is the potentially more revolutionary scientist. Both Martha and Leanne seem to have broad knowledge of most published work on the topic. They appear to be equally intelligent. 
However, Leanne takes the novel approach of mining dynamical models from physics and epidemiology that seem to describe similar phenomena in the lateral geniculate nucleus. She also spends a lot of time devising thought experiments and free-associating around tricky questions. These methods seem a bit unusual, but in your grant-making experience, you've found that - all else equal - scientists who use unusual methods tend to produce more innovative work. Who do you think has the better marker of expertise? Why?\n\n--\n\nYou bet on Leanne over Martha. In this case, both Martha and Leanne probably have robust processes. However, Martha lacks a process that yields revolutionary expertise; Leanne more likely has one. You probably made the right bet.\n\n--\n\n**Ways of assessing**\n\nI. Ask questions which will reveal the mental processes of experts, such as: \n \na. "Before you fix a computer, what's your general diagnostic process like?" (For a computer repair specialist)\n\nb. "The constructs of confidence and self-efficacy seem very similar. How do you tell the difference between them?" (For an expert in social psychology)\n\nc. "How does the quality of Richard Dawkins's work compare to that of other evolutionary biologists? Do you have any critiques?" (For an expert in evolutionary biology)\n\nd. "Let's say I want to stage a magic trick in an extremely crowded room. How would I do that? What about in a room with very loud music?" (For an expert party magician)\n\nII. Find out whether they've been part of a job, program, or mentorship that would have equipped them with special mental processes.\n\nI: *Interaction* with relevant information sources\n--------------------------------------------------\n\nIs the person regularly interfacing with relevant data? Is this the sort of data that an expert would plausibly engage? Note that merely *encountering* relevant sources is not sufficient. The expert needs to have *paid attention* to these sources, as the first example will illustrate. \n\n**Examples**\n\n{You want to hire a graphic designer.}\n\nBob and Maria are walking down a city street. Bob, a graphic design novice, pays no attention to the signs and advertisements along the side of the street, even though they are within his field of vision. Maria, an expert, pays *full* attention to these things. She notes the lack of spatial alignment amongst elements in the dry cleaner's sign. As she passes a Louis Vuitton ad at the bus stop, she ogles the beautiful ball serifs of the Bauer Bodoni bold italic typeface (incidentally, [my favorite font!](https://www.dropbox.com/s/ae11gnau3uk1f8j/Screenshot%202015-12-30%2002.37.28.png?0?dl=0)). Color schemes, geometry, and visual flows all jump out at her as objects unto themselves. Who do you think has the better marker of expertise? Why? \n\n--\n\nBob *encounters* relevant data, but does not *engage* it; Maria both encounters *and* engages relevant data. Hire Maria.\n\n--\n\n{You want to learn how to fundraise.} \n\nCassandra is a quantitative finance expert but a novice at fundraising. Jake is an expert at fundraising. Jake is constantly immersing himself in fundraising case studies, talking to other experts, and meeting with funders. Cassandra, on the other hand, interfaces with sources like mathematical models of markets. Who do you think has the better marker of expertise? Why?\n\n--\n\nCassandra does not interface with sufficiently relevant information sources; Jake does. 
Consult Jake.\n\n--\n\n{You want to improve the effectiveness of your team.} \n\nBrian is a Princeton academic who claims to be an expert in team effectiveness. The evidence: he has analyzed 1000 small family businesses and has been published multiple times in *Science*. Miranda does not claim to be an expert in team effectiveness, but several people have suggested that she might be. The evidence: she is the rare type of venture capitalist who founded a successful startup, ran a large company, and now sits on nonprofit boards and invests in companies of all sizes (and has a winning track record doing so). Who do you think has the better marker of expertise? Why?\n\n--\n\nUnless your organization is a small family business, Brian has probably not interfaced with relevant information sources. Miranda, on the other hand, has engaged a wide variety of organizations. There is a good chance that her ideas about team effectiveness are higher quality, since she will likely have abstracted organization-general lessons from a more diverse sample. This is a more difficult case than the other two, but if faced with a decision between the two, I would consult Miranda instead of Brian.\n\n--\n\n**Ways of assessing**\n\nI. Ask questions which will reveal what sorts of information they engage, such as: \n \na. "Tell me about individual cases in your management experience." (For a manager you might hire)\n\nb. "What sorts of things do you pay attention to when you're at an event?" (For an event director)\n\nc. "Roughly how many pieces do you edit in an average month?" (For an editor)\n\nd. "Which papers would you recommend reading to understand the cutting edge in hyperbolic geometry?" (For an expert in hyperbolic geometry)\n\nII. Find out whether they've been part of a job, program, or mentorship that would have given them strong samples of relevant information.\n\nIII. See how fluently they can generate examples of phenomena in the domain. The more examples they can generate, the better.\n\nF: *Feedback* with relevant metrics\n--------------------------------\n\nDoes the person have (or have they had) feedback loops that help them calibrate whether they are increasing their expertise or making accurate judgments?\n\nIn domains where reality does not give good feedback, they need to have a set of well-honed heuristics or proxy feedback methods to correct their output if their results are going to be reliably good (this goes for, e.g., philosophy, sociology, long-term prediction). In domains where reality *can* give good feedback, they don't necessarily need well-honed heuristics or proxy feedback methods (e.g., massage, auto repair, swordfighting, etc.). 
All else equal, superior feedback loops have the following attributes (idealized versions below; a toy scoring sketch appears at the end of this section):\n\n- Speed (you learn about discrepancies between current and desired output quickly after taking an action, so you can course-correct)\n- Frequency (the feedback loop happens frequently, giving you more samples to calibrate on)\n- Validity (the feedback loop is helping you get closer to the output you actually care about)\n- Reliability (the feedback loop consistently returns similar discrepancies in response to you taking similar actions)\n- Detail (the feedback loop gives you a large amount of information about the difference between current and desired output)\n- Saliency (the feedback loop delivers attentionally or motivationally salient feedback)\n\n**Examples**\n\n{You want to predict technology timelines.}\n\nJulie and Kate both claim to be experts in technological forecasting. When you ask Julie how she calibrates her predictions, she replies, "Mainly, I just have a sense for these sorts of things. But I also do things like monitor Google Trends, read lots of articles on technology, and ask lots of people what they think will happen. I've been doing this for 20 years." She then points to a number of successful predictions she's made. When you ask Kate the same question, she replies, "Well, in the short term, it's been shown that linear models of technological progress are the best, so I tend to use those to calibrate on the timespan of 1-3 years. If I make longer-term predictions, I try to tell as many stories as possible for how those predictions may be false. Then I try to make careful arguments that rule out these stories. Furthermore, I always check whether my predictions diverge substantially from those of other technological forecasters. If they do, I try to figure out why. I've also identified a number of technological forecasters who have consistently good track records, and I study their methods, evidence, and predictions carefully. Finally, whenever one of my predictions turns out to be false, I spend about a week figuring out whether there is any general principle to be learned to guard against being wrong in the future." Who do you think has the better marker of expertise? Why?\n\n--\n\nTechnological forecasting is a domain in which reality doesn't provide strong feedback, so you need proxy feedback. Julie does not have good proxy feedback methods, while Kate does. Barring special information about Julie, Kate's predictions are likely to be more reliable, all else equal.\n\n--\n\n{You want to choose a piano teacher.} \n\nBoth Ned and Megan are piano teachers. Of the two, Ned is a much better pianist, having won many awards and played at Carnegie Hall many times. You ask both Ned and Megan how they can tell whether their teaching is working for a given student. Ned replies that he simply looks at the outcomes: if a student practices under him for several years, they become much better. "Basically, I show them how to play scales and pieces well, and then I check in about once every other week to make sure they are practicing the drills I showed them." Megan replies with a detailed set of ways she can track a student's rate of progress and how she adjusts her teaching accordingly. "For example, I know whether a student has 'chunked' a given chord through the following method: I stand behind the piano and quickly turn around a piece of paper with a chord on it and time how many milliseconds it takes for a student to react and play the chord. 
Also, each week I ask them to honestly report on whether they feel as if the chord is still a series of notes or whether it feels more like 'one note.' The latter indicates that the chord has become a 'gestalt' in the student's mind. Another example: whenever a student makes an error while playing a piece, I mark the corresponding area in the sheet music. Eventually, I can tell what types of errors a student generally makes by analyzing the darkest areas on various pieces - the places with the most pen marks." Megan continues to tell you similar examples. Who do you think has the better marker of expertise? Why?\n\n--\n\nIn this case, while Ned may be the better pianist, he may not be the relative expert at *teaching* piano. It would seem he lacks relevant feedback loops to tell him whether he is successful at teaching. While he notes that his students improve over time, he is not entertaining the possibility that they would have improved over time even without his intervention.\n\n--\n\n{You want to hire a manager.} \n\nBoth Todd and Greg have applied for a manager position at your organization. You ask each of them about their process for monitoring the rate at which their teams are making progress on goals. Todd: "I have everyone on a system where I can monitor the number of [Pomodoros](https://en.wikipedia.org/wiki/Pomodoro_Technique) each person is completing. If certain team members are lagging behind in their number of Pomodoros, I give them a pep talk, after which the number tends to go back up." Greg: "I have each team member set daily subgoals. Then I look at two things: (a) whether these subgoals tend to align with the broader goals and (b) whether they are achieving the subgoals they set for themselves. If a team member is lagging behind in (a) or (b), I give them a pep talk, after which they tend to perform better."\n\n--\n\nIn this case, both Todd and Greg have decent feedback loops. However, Todd's feedback loop is more likely to fall victim to [Goodhart's law](https://en.wikipedia.org/wiki/Goodhart%27s_law). In other words, though his method might be high in reliability, the *measure* - Pomodoro-maximization - might accidentally become the *target*, even though the *intended target* is goal completion. Greg's feedback loop is higher in validity, in that it more tightly measures the target he actually cares about.\n\n--\n\n**Ways of assessing**\n\nI. Ask questions which will reveal the details of their feedback loops (and whether they have them), such as: \n \na. "Let's say I'm already a proficient coder, but I want to learn how to code at the level of a *master*. What sorts of problems might I practice on to move from proficiency to mastery? Are there any textbooks I should read?" (For a software engineer)\n\nb. "In what ways do people typically stumble when they try to improve at data analysis?" (For a data analyst)\n\nc. "How do you tell whether a marketing campaign is working?" (For a professional marketer)\n\nd. "Can you tell me a bit about how you learn?"\n\nII. Find out whether they've been part of a job, program, or mentorship that would have given them strong feedback loops.\n\nIII. Sometimes, people with tacit expertise will not be able to articulate their feedback loops. Analyze whether reality provides robust feedback in their domain. For example, a bike-rider might not be able to describe the feedback loops through which they learned bike-riding. However, reality automatically provides feedback in the domain by causing novice bike-riders to fall over, until they accumulate enough procedural knowledge to balance on two wheels.
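\n\nIf you think in code, here is a minimal sketch of how the attribute list above might be turned into a rough comparison rubric. The attribute names come straight from the list; the 1-5 scale, the equal weighting, and the `FeedbackLoop` class are illustrative assumptions of mine, not a validated instrument.\n\n```python\nfrom dataclasses import dataclass, fields\n\n@dataclass\nclass FeedbackLoop:\n    # Rate each attribute from the list above on a rough 1-5 scale.\n    # (The scale and the equal weighting below are illustrative, not validated.)\n    speed: int        # how quickly feedback arrives after an action\n    frequency: int    # how often the loop runs\n    validity: int     # how closely the loop tracks the output you actually care about\n    reliability: int  # how consistently similar actions yield similar feedback\n    detail: int       # how much information each round of feedback carries\n    saliency: int     # how hard the feedback is to ignore\n\n    def score(self) -> float:\n        # Crude overall quality: the unweighted average of the six ratings.\n        ratings = [getattr(self, f.name) for f in fields(self)]\n        return sum(ratings) / len(ratings)\n\n# Made-up ratings for Julie and Kate from the forecasting example:\njulie = FeedbackLoop(speed=2, frequency=3, validity=1, reliability=2, detail=2, saliency=2)\nkate = FeedbackLoop(speed=3, frequency=4, validity=4, reliability=3, detail=4, saliency=4)\nprint(julie.score())  # 2.0\nprint(kate.score())   # ~3.67\n```\n\nThe numbers do the same work as the verbal comparison in the Julie and Kate example; the point is only that each attribute can be assessed separately before you form an overall judgment.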
\n\nT: *Time* spent on the above\n--------------------------------\n\nThis one is the most straightforward of all the necessary conditions for expertise. (Thus, I won't go into much detail.) Simply: an expert needs to have spent enough time processing and interacting with the relevant data under robust feedback loops.\n\nAsk: Has this expert put a plausibly sufficient amount of time into learning or using the skill in order to gain expertise?\n\nFor some skills, like using a spoon, there is a short latency between beginnerhood and expertise. For others, like having well-calibrated political views, there is quite a long latency. Accordingly, you can probably trust the average claim about spoon-use and should be suspicious of the average claim about politics. \n\n\n----------\n\n\nThere you have it: the PIFT method for assessing basic conditions for expertise. Here is what the underlying model looks like:\n![The PIFT model](https://i.imgur.com/jzkcKv9.png?0)\n\nOne easy way to remember the method: "If the person claims to be an expert and is not, say, *'pift!'*"\n\nIf it seems that I have misidentified or failed to identify a necessary condition for expertise, please let me know!',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '1',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '2016-02-23 20:32:28',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don't have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don't have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can't comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'TylerAlterman'
  ],
  childIds: [],
  parentIds: [],
  commentIds: [],
  questionIds: [],
  tagIds: [],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '5911',
      pageId: '1s7',
      userId: 'TylerAlterman',
      edit: '1',
      type: 'newEdit',
      createdAt: '2016-01-29 08:38:44',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'false',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}