# Decimal notation

https://arbital.com/p/decimal_notation

by Michael Cohen Jun 24 2016 updated Jul 4 2016

The winning architecture for numerals

Seventeen is the number that represents as many things as there are x marks at the end of this sentence: xxxxxxxxxxxxxxxxx. Writing out numbers by saying "the number representing how many things there are in this pile:" gets unwieldy when the pile gets large. Thus, we represent numbers using the [numeral numerals] 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. Specifically, we write the number representing this many things: xxx as "3", the number representing this many things: xxxxxxxxxxx as "11", and the number seventeen as "17". This is called "decimal notation," because there are ten different symbols that we use. Numbers don't have to be written down in decimal notation; it's also possible to write them down in other notations, such as Binary notation. Some numbers can't even be written out in full in decimal notation; consider, for example, the number $e$, which, in decimal notation, starts out with the digits 2.71828… and just keeps going.

# How decimal notation works

How do you know that 17 is the number that represents the number of xs in this sequence: xxxxxxxxxxxxxxxxx? In practice, you know this because the rules of decimal notation were ingrained in you as a young child. But do you know those rules explicitly? Could you write out a series of rules for taking in some input symbols like '2', '4', and '6' and using those to figure out how many pebbles to add to a pile?

The answer, of course, is this many:

xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

But how do we perform that conversion in general?

In short, the number 246 represents $(2 \cdot 100) + (4 \cdot 10) + (6 \cdot 1),$ so as long as we know how to do addition and multiplication, and as long as we know what the basic numerals 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9 mean, and as long as we know how to get to [power powers] of 10 (1, 10, 100, 1000, …), then we can explicitly understand decimal notation.

(What do the basic numerals 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9 mean? By convention, they represent as many things as are in the following ten sequences of xs: , x, xx, xxx, xxxx, xxxxx, xxxxxx, xxxxxxx, xxxxxxxx, and xxxxxxxxx, respectively.)
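The rule described above can be written out as a short program. Here is a sketch in Python (the function name and digit table are illustrative, not from the article): read a decimal numeral from right to left, multiply each basic digit's value by the matching power of ten, and sum the results.

```python
# The ten basic numerals and the counts they stand for, per the
# convention described above (empty pile, x, xx, ..., xxxxxxxxx).
DIGIT_VALUES = {"0": 0, "1": 1, "2": 2, "3": 3, "4": 4,
                "5": 5, "6": 6, "7": 7, "8": 8, "9": 9}

def decimal_to_count(numeral: str) -> int:
    """Turn a decimal numeral like "246" into the count it represents."""
    total = 0
    power_of_ten = 1  # 1, 10, 100, ... as we move leftward
    for digit in reversed(numeral):
        total += DIGIT_VALUES[digit] * power_of_ten
        power_of_ten *= 10
    return total

print(decimal_to_count("246"))  # → 246, i.e. (2·100) + (4·10) + (6·1)
print(decimal_to_count("17"))   # → 17
```

Note that the program only ever uses addition, multiplication, the ten basic numerals, and powers of ten, exactly the ingredients listed above.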

This explanation assumes that you're already quite familiar with decimal notation. Explaining decimal notation from scratch to someone who doesn't already know it (which was a task people actually had to do back when half the world was using Roman numerals, a much less convenient system for representing numbers) is a fun task; to see what that looks like, refer to [+representing_numbers_from_scratch].

# Other common notations

The above text made use of [-unary_notation], which is a method of representing numbers by making a number of marks that correspond to the represented number. For example, in unary notation, 17 is written xxxxxxxxxxxxxxxxx (or |||||||||||||||||, or whatever; the actual marks don't matter). This is perhaps easier to understand, but writing out large numbers like 93846793284756 gets rather ungainly.
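Unary notation is simple enough that a sketch fits in two lines (the function name is illustrative): one mark per counted thing, with the choice of mark left arbitrary. The ungainliness shows up in the numeral's length, which equals the number itself, rather than growing with its number of digits.

```python
def to_unary(n: int, mark: str = "x") -> str:
    """Represent n in unary notation: one mark per counted thing."""
    return mark * n

print(to_unary(17))        # → xxxxxxxxxxxxxxxxx
print(to_unary(3, "|"))    # → |||
print(len(to_unary(17)))   # → 17; the numeral is as long as the number is large
```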

Historical notations include Roman numerals, which were a pretty bad way to represent numbers. (It took humanity quite some time to find good tools for representing numbers; the decimal notation that's been ingrained in your head since early childhood is the result of many centuries' worth of effort. It's much harder to invent good representations of numbers when you don't even have good tools for writing down and reasoning about numbers. Furthermore, the modern tools for representing numbers aren't necessarily ideal!)

Common notations in modern times (aside from decimal notation) include Binary notation (often used by computers) and [-hexadecimal_notation] (a useful format for humans reading binary notation). Binary notation and hexadecimal notation are very similar to decimal notation, with the difference that binary uses only two distinct symbols (instead of ten), and hexadecimal uses sixteen.
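As a quick illustration, here is the same number written in all three notations, using Python's built-in conversions (this example is mine, not the article's). The positional rule is identical in each case; only the number of symbols, and hence the base of the powers, changes.

```python
n = 246

# One number, three notations:
print(n)       # → 246  (decimal: powers of ten, ten symbols)
print(bin(n))  # → 0b11110110  (binary: powers of two, two symbols)
print(hex(n))  # → 0xf6  (hexadecimal: powers of sixteen, sixteen symbols)

# Converting back, by naming the base explicitly:
print(int("11110110", 2))  # → 246
print(int("f6", 16))       # → 246
```

Each hexadecimal digit stands for exactly four binary digits (since 16 = 2⁴), which is why hexadecimal works well as a compact shorthand for binary.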