The development of Artificial General Intelligence as a scientific purpose for human life

by Jordan Bennett, Mar 30 2018 (updated Apr 4 2018)

Purpose here is not to be confused with teleological arguments, theism, deities, or other subjective endeavours. Instead, this thread refers to teleonomy, i.e. purpose in the realm of science/objectivity.



  1. Reasonably, evolution is optimising ways of contributing to the increase of entropy, as systems very slowly approach equilibrium (the universe’s hypothesized end state).

    • a) Within that process, work or activities done across several ranges of intelligent behaviour are reasonably ways of contributing to the increase of entropy. (See the sources listed at the end.)

    • b) As species became more and more intelligent, nature was reasonably finding better ways to contribute to increases of entropy. (Intelligent systems can be observed to be biased towards entropy maximization.)

    • c) Humans are slowly getting smarter, but even if we augment our intellect through CRISPR-like routines or implants, we will reasonably be limited by how many computational units (neurons, etc.) fit in our skulls.

    • d) AGI/ASI won’t be subject to the size of the human skull or human cognitive hardware. (The laws of physics/thermodynamics permit human-exceeding intelligence in non-biological form.)

    • e) Since AGI/ASI won’t face the limits that humans do, they are a subsequent step (though non-biological), particularly in the regime of contributing to better ways of increasing entropy, compared to humans.

  2. The above is why the purpose of the human species is reasonably to create AGI/ASI.


  1. There are many degrees of freedom, or many ways to contribute to entropy increase. These degrees of freedom form a "configuration space" or "system space" (the total set of possible actions or events), and in particular there are "paths" through the space that simply describe ways to contribute to entropy maximization.
  2. These "paths" are activities in nature, over some time scale "τ" and beyond.
  3. As such, following equation (2) below, intelligent agents reasonably generate particular "paths" (intelligent activities) that prioritize efficiency in entropy maximization, over more general paths that don't involve intelligence. In this way, intelligent agents are "biased": they occur in a particular region (do particular activities) of the "configuration space" or "system space", i.e. the total possible actions in nature.
  4. Observing equation (4) below, highly intelligent agents reasonably aren't merely biased toward doing distinct things (i.e. cognitive tasks, such as anything humans do in science and technology) compared to non-intelligent or less intelligent agents in nature; by extension, they are biased toward behaving in ways that are actually more effective at maximizing entropy production than non-intelligent or less intelligent agents are.
  5. As such, the total system space can be described with respect to a general function, relating how activities may generally increase entropy, afforded by the degrees of freedom in said space:

$$~$S_c(X,\tau) = -k_B \int_{x(t)} \Pr\big(x(t) \mid x(0)\big) \ln \Pr\big(x(t) \mid x(0)\big) \, \mathcal{D}x(t)$~$$

Figure 1: Equation (2), the causal path entropy.
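As a rough intuition pump for equation (2), the sketch below estimates a discrete analogue of the causal path entropy for a toy one-dimensional random walker, by sampling paths and histogramming where they end up. The model, function name, and parameters are illustrative assumptions of mine, not part of Wissner-Gross and Freer's actual derivation:

```python
import numpy as np

def causal_path_entropy(x0, n_paths=2000, horizon=50, step=1.0, bins=30, k_B=1.0, seed=0):
    """Monte Carlo estimate of a discrete analogue of S_c(X, tau) for a toy
    1-D random walker starting at x0: paths are bucketed by endpoint, a coarse
    proxy for the path distribution Pr(x(t) | x(0))."""
    rng = np.random.default_rng(seed)
    # Sample n_paths independent random-walk trajectories of length `horizon`.
    steps = rng.normal(0.0, step, size=(n_paths, horizon))
    endpoints = x0 + steps.sum(axis=1)
    # Histogram the endpoints to approximate the path-ending distribution.
    counts, _ = np.histogram(endpoints, bins=bins)
    p = counts[counts > 0] / n_paths
    # Discrete analogue of equation (2): S_c = -k_B * sum p ln p.
    return -k_B * np.sum(p * np.log(p))

print(causal_path_entropy(0.0))  # a positive entropy, bounded above by k_B * ln(bins)
```

The more of the configuration space an agent's paths can reach, the more spread out the endpoint histogram, and the larger this quantity becomes.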

  6. In general, agents reasonably approach more and more complicated macroscopic states (from smaller/earlier, less efficient entropy-maximization states called "microstates"), as activities occur that are "paths" in the total system space.
    • a) Highly intelligent agents likely behave in ways that engender unique paths (by doing cognitive tasks, compared to the simple tasks done by lesser intelligences or non-intelligent things), and by doing so they reach or consume more of the aforementioned macroscopic states than lesser intelligences and non-intelligences do.
    • b) In other words, highly intelligent agents likely access more of the total actions or configuration space or degrees of freedom in nature, the same degrees of freedom associated with entropy maximization.
    • c) In a reasonably similar way to equation (4) below, there is a "causal force" that likely constrains the degrees of freedom seen in the total configuration space (the total ways to increase entropy) in the form of humans, and this constrained sequence of intelligent or cognitive activities is the way in which such highly intelligent things are said to be biased to maximize entropy:

$$~$F(X_0,\tau) = T_c \nabla_X S_c(X,\tau) \big|_{X_0}$~$$

Figure 2: Equation (4), the causal entropic force.
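To make the "causal force" of equation (4) concrete, here is a hedged numerical sketch (mine, not Wissner-Gross and Freer's code): a 1-D random walker reflected at a wall at x = 0, with the force computed as T_c times a finite-difference gradient of the endpoint-distribution entropy. Near the wall fewer endpoint states are reachable, so the estimated force points away from it, loosely mirroring the particle-in-a-box behaviour described in "Causal Entropic Forces". All names and parameters here are illustrative assumptions:

```python
import numpy as np

# Fixed histogram edges so entropies at different starting points are comparable.
EDGES = np.linspace(0.0, 16.0, 49)

def path_entropy_near_wall(x0, n_paths=4000, horizon=10, step=0.5, seed=0):
    """Entropy of the endpoint distribution of a 1-D random walk reflected at
    a wall at x = 0: a toy, discrete stand-in for S_c(X, tau)."""
    rng = np.random.default_rng(seed)
    pos = np.full(n_paths, float(x0))
    for _ in range(horizon):
        pos = np.abs(pos + rng.normal(0.0, step, size=n_paths))  # reflect at 0
    counts, _ = np.histogram(pos, bins=EDGES)
    p = counts[counts > 0] / n_paths
    return -np.sum(p * np.log(p))

def causal_entropic_force(x0, T_c=1.0, eps=0.5):
    """Finite-difference analogue of F = T_c * grad_X S_c evaluated at X0."""
    return T_c * (path_entropy_near_wall(x0 + eps)
                  - path_entropy_near_wall(x0 - eps)) / (2.0 * eps)

# Close to the wall the estimated force pushes the walker toward open space;
# far from the wall it is roughly zero.
print(causal_entropic_force(1.0), causal_entropic_force(8.0))
```

Note that T_c simply scales the force here, which is one way to read the claim below that it parametrizes the agent's bias towards entropy maximization.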

  1. In the extension of equation (2) seen in equation (4) above, the notation "$~$T_c$~$" is likely a way to observe the various unique states that a highly intelligent agent may occupy over some time scale "$~$\tau$~$". (The technical way to say this is that "$~$T_c$~$ parametrizes the agent's bias towards entropy maximization".)

  2. Beyond human intelligence, AGI/ASI are yet further ways that shall reasonably permit more and more access to activities or "paths" that maximise entropy increase.

Consciousness, unconsciousness and entropy

Mateos et al. recently showed, using the Stirling approximation (where $~$N$~$ is the total number of possible pairs of channels, $~$p$~$ is the number of connected pairs of signals, and $~$C$~$ represents the combinations of connections between diverse signals prior to the Stirling approximation), that the further the mind is from deep sleep (i.e. the more awake it is), the larger the number of pairs of connected signals, the greater the information content, the larger the number of neuronal interactions, and thereafter the higher the values of entropy:

$$~$S = N \ln\left(\frac{N}{N-p}\right) - p \ln\left(\frac{p}{N-p}\right) \equiv \ln C$~$$

Figure 3: Stirling approximation applied to human EEG data.
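As a quick numerical check on figure 3, the snippet below (an illustrative sketch of mine, not code from Mateos et al.) compares the Stirling-approximation entropy S against the exact $~$\ln C = \ln \binom{N}{p}$~$ computed with the log-gamma function, and shows S growing as the number of connected pairs p rises toward N/2:

```python
from math import lgamma, log

def stirling_entropy(N, p):
    """S = N ln(N/(N-p)) - p ln(p/(N-p)): the Stirling approximation of ln C."""
    return N * log(N / (N - p)) - p * log(p / (N - p))

def exact_ln_C(N, p):
    """Exact ln of the binomial coefficient C(N, p), via the log-gamma function."""
    return lgamma(N + 1) - lgamma(p + 1) - lgamma(N - p + 1)

# More connected pairs (larger p, up to N/2) -> higher entropy, matching the
# "more awake, more connections, more entropy" reading of the EEG results.
N = 1000  # total possible channel pairs (an illustrative value, not EEG data)
for p in (50, 200, 450):
    print(p, round(stirling_entropy(N, p), 1), round(exact_ln_C(N, p), 1))
```

For N in the hundreds or more, the approximation tracks the exact value to within a couple of percent.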

Conclusively, one may contemplate the relation $~$C \in \{X\}$~$, where $~$C$~$ represents an ensemble or macrostate sequence via some distribution of entropy in human neuronal terms, as underlined by Mateos et al., while $~$\{X\}$~$ (with respect to figure 2, equation (4)) describes some macrostate partition that reasonably encompasses constrained path capability, enveloping entropy maximization, as underlined by Alex Wissner-Gross.

Furthermore, beyond the scope of humans (as indicated by $~$C$~$), one may additionally anticipate some measure of $~$\{X\}$~$ that subsumes higher degrees of entropy, via Artificial General Intelligence.



  1. Alex Wissner-Gross, Cameron Freer, “Causal Entropic Forces”, 2013.
  2. Jeremy England, “Dissipative adaptation in driven self-assembly”, 2015.
  3. Ramon Guevara Erra, Diego Martin Mateos et al., “Towards a statistical mechanics of consciousness: maximization of number of connections is associated with conscious awareness”, 2016.
  4. Wikipedia/Teleonomy. (Describes purpose in the context of objectivity/science rather than subjectivity/deities; teleonomy ought not to be confused with the teleological argument, which is a religious/subjective concept.)