by Nate Soares, May 29 2016 (updated May 29 2016)

[summary: An encoding $E(m)$ of a message $m$ is intradependent if the fact that $E(m)$ encodes $m$ can be deduced without looking at the whole encoding. For example, if you know you're going to see an 8-letter English word, and you see "aardv", you know that the message is "aardvark", even without looking at the last three letters.

Formally, intradependence is defined in terms of the mutual information shared by symbols in the encoding. If an encoding is intradependent, it's possible (in principle) to develop a more efficient encoding that is less intradependent.]

We call an [encoding] $E(m)$ of a message $m$ "intradependent" if the fact that $E(m)$ encodes $m$ can be deduced without looking at the whole encoding. For example, imagine that you know you're going to see an 8-letter English word, and you see the letters "aardv". You can then deduce that the message is "aardvark" without looking at the last three letters, because that's the only valid message that starts with "aardv".

(Seeing "aard" is not enough, because aardwolves also exist.)
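The deduction above can be sketched as a prefix lookup. This is a minimal illustration, assuming a toy vocabulary containing only the two 8-letter words mentioned in the text:

```python
# Toy vocabulary: assumed here to be the only two valid 8-letter
# messages starting with "aard" (as in the example above).
words = ["aardvark", "aardwolf"]

def consistent(prefix, vocabulary):
    """Return every word in the vocabulary that starts with the given prefix."""
    return [w for w in vocabulary if w.startswith(prefix)]

print(consistent("aard", words))   # two candidates remain
print(consistent("aardv", words))  # the message is pinned down
```

Once `consistent(prefix, words)` has exactly one element, the remaining letters carry no new information about which message was sent.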

In an intradependent encoding, some parts of the encoding carry information about the other parts. For example, once you've seen "aard", there are $26^4 = 456976$ possible combinations of the next four letters, but "aard" cuts the space down to just two possibilities — "vark" and "wolf". This means that the first four letters carry $\log_2(26^4) - 1 \approx 17.8$ bits of information about the last four. (The fifth letter carries one final bit of information, in the choice between "v" and "w". The last three letters carry no new information.)
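The arithmetic in the previous paragraph can be checked directly, under the text's assumption that "vark" and "wolf" are the only valid continuations of "aard":

```python
import math

# Bits needed to name four arbitrary letters, with no constraint:
unconstrained = math.log2(26 ** 4)  # about 18.8 bits

# After seeing "aard", only two continuations remain, so one bit is left:
remaining = math.log2(2)

# Information the first four letters carry about the last four:
print(unconstrained - remaining)  # about 17.8 bits
```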

Formally, the intradependence of an encoding is defined to be the sum, over each symbol in the encoding, of the mutual information between that symbol and all the preceding symbols. Given an intradependent encoding, it is possible in principle to design a more efficient encoding; see [+intradependent_compression].
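For a concrete instance of this definition, the sum $\sum_i I(X_i; X_{<i})$ can be computed directly for a small message distribution. This is a sketch, assuming (for illustration) a uniform distribution over the two words from the earlier example; the function and variable names are not from the original text:

```python
from collections import Counter
from math import log2

# Assumed toy distribution: uniform over two equal-length messages.
messages = ["aardvark", "aardwolf"]
p = {m: 1 / len(messages) for m in messages}

def entropy(dist):
    """Shannon entropy (in bits) of a probability distribution."""
    return -sum(q * log2(q) for q in dist.values() if q > 0)

def marginal(i):
    """Distribution of the i-th symbol."""
    d = Counter()
    for m, q in p.items():
        d[m[i]] += q
    return d

def prefix_dist(i):
    """Distribution of the first i symbols."""
    d = Counter()
    for m, q in p.items():
        d[m[:i]] += q
    return d

def joint(i):
    """Joint distribution of (first i symbols, i-th symbol)."""
    d = Counter()
    for m, q in p.items():
        d[(m[:i], m[i])] += q
    return d

# Intradependence: sum over positions i of I(X_i; X_<i),
# using I(X; Y) = H(X) + H(Y) - H(X, Y).
intradependence = sum(
    entropy(marginal(i)) + entropy(prefix_dist(i)) - entropy(joint(i))
    for i in range(len(messages[0]))
)
print(intradependence)  # 3.0 bits for this two-word distribution
```

The 3 bits come from positions six through eight: once the fifth letter ("v" or "w") is seen, each of the last three letters is fully determined, so each shares its full 1 bit of entropy with the preceding symbols.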