# By Tag

## Category theory

• Isomorphism A morphism between two objects which describes how they are "essentially equivalent" for the purposes of the theory under consideration. - Mark Chimes
• Morphism A morphism is the abstract representation of a relation between mathematical objects. Usually, it i… - Jaime Sevilla Molina

## Complexity of value

• Value-laden Cure cancer, but avoid any bad side effects? Categorizing "bad side effects" requires knowing what's "bad". If an agent needs to load complex human goals to evaluate something, it's "value-laden". - Eliezer Yudkowsky

## Context disaster

• Correlated coverage In which parts of AI alignment can we hope that getting many things right will mean the AI gets everything right? - Eliezer Yudkowsky
• Low impact The open problem of having an AI carry out tasks in ways that cause minimum side effects and change as little of the rest of the universe as possible. - Eliezer Yudkowsky

## Disambiguation

• Bit The term "bit" refers to different concepts in different fields. The common theme across all the us… - Nate Soares
• Whole number A term that can refer to three different sets of "numbers that are not fractions". - Joe Zeng

## Edge instantiation

• Low impact The open problem of having an AI carry out tasks in ways that cause minimum side effects and change as little of the rest of the universe as possible. - Eliezer Yudkowsky

## Example problem

• Blue oysters A probability problem about blue oysters. - Nate Soares
• Diseasitis 20% of patients have Diseasitis. 90% of sick patients and 30% of healthy patients turn a tongue depressor black. You turn a tongue depressor black. What's the chance you have Diseasitis? - Eliezer Yudkowsky
• Lattice: Examples Here are some additional examples of lattices. A f… - Kevin Clancy
• Sock-dresser search There's a 4/5 chance your socks are in one of your dresser's 8 drawers. You check 6 drawers at random. What's the probability they'll be in the next drawer you check? - Nate Soares
• Sparking widgets 10% of widgets are bad and 90% are good. 4% of good widgets emit sparks, and 12% of bad widgets emit… - Nate Soares
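Two of the problems above (Diseasitis and the sock-dresser search) state enough numbers to check directly with Bayes' rule. As a sketch (the function name and variable names are illustrative, not from the source pages):

```python
def posterior(prior, like_pos, like_neg):
    """P(hypothesis | evidence) via Bayes' rule."""
    joint_pos = prior * like_pos          # P(H) * P(E | H)
    joint_neg = (1 - prior) * like_neg    # P(~H) * P(E | ~H)
    return joint_pos / (joint_pos + joint_neg)

# Diseasitis: 20% prior; a blackened depressor is 90% likely if sick, 30% if healthy.
p_sick = posterior(0.20, 0.90, 0.30)
print(f"Diseasitis posterior: {p_sick:.4f}")   # 0.18 / 0.42, i.e. 3/7 ≈ 0.4286

# Sock-dresser: 4/5 prior the socks are in one of 8 drawers; 6 drawers checked and empty.
# Remaining probability mass: 1/5 (not in dresser) + 2 unchecked drawers at (4/5)/8 each.
p_next_drawer = (0.8 / 8) / (0.2 + 2 * (0.8 / 8))
print(f"Sock-dresser next drawer: {p_next_drawer:.4f}")  # 0.1 / 0.4 = 0.2500
```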

## Executable philosophy

• Rescuing the utility function If your utility function values 'heat', and then you discover to your horror that there's no ontologically basic heat, switch to valuing disordered kinetic energy. Likewise 'free will' or 'people'. - Eliezer Yudkowsky

## Goodness estimate biaser

• Edge instantiation When you ask the AI to make people happy, and it tiles the universe with the smallest objects that can be happy. - Eliezer Yudkowsky
• Goodhart's Curse The Optimizer's Curse meets Goodhart's Law. For example, if our values are V, and an AI's utility function U is a proxy for V, optimizing for high U seeks out 'errors'--that is, high values of U - V. - Eliezer Yudkowsky
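The Goodhart's Curse entry can be illustrated with a minimal simulation (a sketch under assumed toy conditions, not taken from the source pages): if the proxy is U = V + noise and we select the option scoring highest on U, the selected option's error U − V is systematically positive.

```python
import random

random.seed(0)

N = 100_000
# True value V of each option, plus independent proxy noise e, so U = V + e.
pairs = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]
scored = [(v + e, v) for v, e in pairs]   # (proxy score U, true value V)

u_best, v_at_best = max(scored)           # optimize the proxy: pick the highest-U option
# Selecting on U favors options whose noise term happened to be large,
# so at the optimum U typically overstates V (U - V > 0).
print(f"U - V at the proxy optimum: {u_best - v_at_best:.3f}")
```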

## Group isomorphism

• Isomorphism A morphism between two objects which describes how they are "essentially equivalent" for the purposes of the theory under consideration. - Mark Chimes

## Humean degree of freedom

• Value-laden Cure cancer, but avoid any bad side effects? Categorizing "bad side effects" requires knowing what's "bad". If an agent needs to load complex human goals to evaluate something, it's "value-laden". - Eliezer Yudkowsky

## Philosophy

• Executable philosophy Philosophical discourse aimed at producing a trustworthy answer or meta-answer, in limited time, which can be used in constructing an Artificial Intelligence. - Eliezer Yudkowsky

## Shutdown utility function

• Shutdown problem How to build an AGI that lets you shut it down, despite the obvious fact that this will interfere with whatever the AGI's goals are. - Eliezer Yudkowsky

## Thought experiment

• GalCom In the GalCom thought experiment, you live in the future, and make your money by living in the Dene… - Nate Soares

## Unforeseen maximum

• Low impact The open problem of having an AI carry out tasks in ways that cause minimum side effects and change as little of the rest of the universe as possible. - Eliezer Yudkowsky

## Utility indifference

• Shutdown problem How to build an AGI that lets you shut it down, despite the obvious fact that this will interfere with whatever the AGI's goals are. - Eliezer Yudkowsky