9

My question is how the above phenomenon, as mentioned in the Numberphile video here, follows from physical laws in a semi-quantitative way (i.e., simply saying "statistical mechanics implies it" is not descriptive).

In general, I have seen information mentioned vaguely from time to time in statistical mechanics/thermal physics courses, but what it actually is remains elusive.

Note: The video does mention a formula for the maximum amount of information your head can store, $S_{\text{max}} = \frac{A}{4L^2}$, where $L = 1.616\times 10^{-35}\ \text{m}$ (the Planck length) and $A = 4\pi r^2$ is, I think, the surface area of a spherical head. But where this equation comes from is not clear.


4 Answers

31

Simply put, what's stated in the video is misleading to such an extent that it's fair to say it's wrong.

Ignoring problems of how you define the word 'imagining' (this is physics stack exchange, after all), a generous interpretation of what they're saying is that if you tried to, say, encode all the digits of Graham's number into atoms and compress all these atoms to fit inside your head so that they could be accessed by your neurons somehow, then it would collapse into a black hole. A cartoon description of black hole formation is that if you try to compress a large amount of mass-energy into a small space, then it will collapse into a black hole.

But Graham's number is an abstract concept, so it doesn't make sense to talk about it containing any mass-energy. The assumption that to access the digits of Graham's number you need to encode all the digits explicitly is not justified; there's no reason this is necessary. You could instead just run an algorithm that computes the digits. Basic algorithms for this are well known; they simply take longer to run the more digits you need.
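For example, here is a minimal sketch in Python (the helper name is mine; it assumes sympy for the totient function) that prints the last ten decimal digits of Graham's number without ever representing the number itself. It exploits the fact that Graham's number expands, at its top level, into an enormous power tower of 3s, whose trailing digits stabilize once the tower is tall enough:

```python
# Minimal sketch: trailing digits of Graham's number via modular arithmetic.
# Graham's number expands into a gigantic power tower of 3s, and the last d
# decimal digits of 3^3^3^... stop changing once the tower height exceeds ~d+2.
from sympy import totient  # Euler's totient function

def tower_mod(b, h, m):
    """b^b^...^b (a tower of h copies of b) modulo m.

    Uses the generalized Euler theorem, b**e = b**(e mod phi(m) + phi(m)) (mod m),
    which is valid here because the true exponents are astronomically large.
    """
    if m == 1:
        return 0
    if h == 1:
        return b % m
    t = int(totient(m))
    e = tower_mod(b, h - 1, t)   # the exponent, reduced mod phi(m)
    return pow(b, e + t, m)      # e + t >= phi(m) keeps the reduction valid

# A tower of height 12 is tall enough for the last 10 digits to have
# stabilized, so this prints the last 10 digits of Graham's number
# (it ends in ...387).
print(tower_mod(3, 12, 10**10))
```

This is the sense in which "accessing the digits" does not require storing them: any stretch of trailing digits is cheap to produce, even though the full decimal expansion could never fit in the observable universe.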

Remember that representing a number in terms of digits is really just as good as representing it any other way; it just depends on what you want to do with the number. It does not necessarily give any better understanding to represent a number as, say, 1024 versus representing it as $2^{10}$. After all, 1024 just means $1 \times 10^3 + 0 \times 10^2 + 2 \times 10^1 + 4 \times 10^0$.

EDIT: A commenter asked why we don't compute the smallest volume inside which you could represent Graham's number anyway. The problem is that this depends on a lot of assumptions; too many to give a concrete answer. Graham's number is not a random sequence of digits; it's a number with a known formula. So depending on how we define "represent", the amount of information required could range from 0 to "very large".

In general, though, if you want to store a certain amount of information, the smallest region you can store it in is governed by the Bekenstein bound.
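For reference, the usual statement of the bound: a system of energy $E$ that fits inside a sphere of radius $R$ can carry an entropy of at most

$$S \le \frac{2\pi k_B R E}{\hbar c},$$

so for a fixed energy budget, the admissible information shrinks along with the region.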

  • 5
    Nice answer, +1. Just wanted to add that if we don't ignore possible definitions of the word "imagining," then there are lots of sensible definitions that do not require encoding Graham's number into atomic bits and stuffing them into a human head. For example, one could read about it, or watch a video about it, or follow the definition in terms of abstract arrow notation, or read the argument that leads to the arrow notation definition... In fact, "storing atoms in a head" is a pretty weird definition of "imagine", as you point out. (This presenter is notorious for oversimplification.)
    – Andrew
    Commented Jun 30 at 20:07
  • 4
    I'm not sure that I really like this answer, since a function that outputs a number is not the same thing as the number itself. Sure, the function may not take up much space, but you haven't actually "imagined" Graham's number until you have run it. Perhaps instead of atoms you could write the output using photons, which might mean you get the universe's largest supernova instead of a black hole.
    – Turksarama
    Commented Jul 2 at 1:16
  • 2
    While this answer is true, I feel it kind of misses an opportunity. Presumably, there is actually a calculation you can do that says "if you try to store it in decimal representation it will turn into a black hole", and this answer comments on the meaning of that calculation without actually doing it, hence it sort of misses the point of the thing the question is asking about. (The calculation is described in @LawnmowerMan's answer, and note that it has nothing to do with atoms.)
    – N. Virgo
    Commented Jul 2 at 3:40
  • 5
    @Turksarama The decimal representation of a number is also just a representation - a function which can result in the number. "4 dozen" is also a representation - is it better or worse than decimal? Each representation is just a form of function which can be executed. There is no "one true" representation of a number, and there is no objectively better or worse one.
    – Falco
    Commented Jul 2 at 12:26
  • 2
    @Falco Actually, let's just represent the number in the same base as the number itself. Then we just need two digits to represent it: 10. However, to distinguish it from all the other numbers which are also 10, we need to know what the base is. Thus we require something that can distinguish between n states, where n is the number itself. We get a similar outcome if we use base 1, where this time we have n digits of 0. Now we have a continuum of encodings between these two extremes where the actual number of digits is in 2..n but the actual information content is the same.
    – Michael
    Commented Jul 2 at 13:02
25

Bekenstein Bound

The Numberphile mathematicians were clearly describing an explicit representation of the bits in Graham's number, rather than any symbolic one (obviously, just writing "G" on a piece of paper would count if you went that route). A more precise version of their claim would be something like: "Let $G$ be Graham's number. If you tried to fit $\log_2 G$ bits of entropy into a volume no larger than your head, it would collapse into a black hole."

John Baez helpfully explains that a gram of water at STP contains about $10^{24}$ bits of entropy. So, if you could perfectly manipulate every atom in a 1-gram clump of water, you could in principle explicitly represent a number with $10^{24}$ bits. Now, the amount of information entropy in a gram of water is a function of its degrees of freedom, and nobody claims that water is the ultimate information-dense substance. If you picked something else, you might be able to pack more bits into the same volume.
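As a rough sanity check on Baez's figure, here is a sketch that converts the textbook standard molar entropy of liquid water (about 70 J/(mol·K)) into bits:

```python
import math

# Order-of-magnitude check on "~10^24 bits of entropy per gram of water",
# using the tabulated standard molar entropy of liquid water at 25 C.
k_B = 1.380649e-23   # Boltzmann constant, J/K
S_molar = 69.95      # standard molar entropy of liquid water, J/(mol K)
M_water = 18.015     # molar mass of water, g/mol

bits_per_gram = (S_molar / M_water) / (k_B * math.log(2))
print(f"{bits_per_gram:.1e} bits per gram")  # ~4e23, consistent with ~10^24
```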

But that just means if we are very clever, there is no end to the amount of information we can pack into a sphere the size of our heads, right? Not quite. The information content of ordinary matter is bounded per unit of mass, since each fundamental particle only has so many degrees of freedom, and massive particles can only get so small. As you increase the mass density of your "storage medium", you eventually reach a point where it collapses into a black hole. Then the information content becomes proportional to the surface area of the BH. More precisely, it is believed to be 1/4 nat per Planck area (also per Baez), which works out to roughly 1/3 of a bit per "pixel". This limit is called the Bekenstein bound.

So, returning to the mathematics of the original question, we can pose it quite precisely. The human body has a total surface area of around 2 $m^2$, and the head comprises less than 10% of that, so it is fairly safe to say that the human head has no more than 1 $m^2$ of surface area. Thus, the Numberphile claim could be posed as: "$\log_2 G$ > entropy of a BH with a 1 $m^2$ event horizon". Hopefully, by studying the definition of Graham's number, you can convince yourself that this claim is trivially true.
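To put a number on the right-hand side, here is a minimal sketch in Python (the constants are standard values; the 1/4-nat-per-Planck-area rule is the one quoted above):

```python
import math

# Bekenstein-Hawking entropy of a black hole with a 1 m^2 event horizon:
# S = A / (4 * l_P^2) in nats, then converted to bits.
l_P = 1.616e-35          # Planck length, m
A = 1.0                  # horizon area, m^2

S_nats = A / (4 * l_P ** 2)
S_bits = S_nats / math.log(2)
print(f"{S_bits:.1e} bits")   # ~1.4e69 bits
```

So the claim amounts to showing $\log_2 G \gg 10^{69}$.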

EDIT

To get a sense of how big Graham's number is, I will provide some additional calculations to address Michael's comment. A black hole is the densest way to store information in our universe (so far as our current understanding of physics is concerned), but not all BHs are created equal. Because the surface area only grows with the square of the radius, while the volume grows with the cube, very large BHs are much less information-dense than very small ones. So, for instance, a single black hole with the purported mass of the observable universe would occupy much more volume than that same mass divided up into many smaller BHs. Clearly, the limit, then, must be the smallest possible BH, which would have a radius on the scale of the Planck length.

For convenience, let us assume these smallest black holes are cubes of side $l_P$ (the Planck length), which would mean they can encode about 1 bit of entropy each. We can fit about $10^{35}$ $l_P$ per meter, and about $10^{27}$ meters across the observable universe (approximating it as a large cube). So the most information we could squeeze into the observable universe is about $10^{62 \times 3} = 10^{186}$ bits. That's a lot of bits! Oh, and if we did manage to squeeze that much information into it, the entire universe would be at roughly big-bang temperatures, which would make it a perhaps less-than-pleasant place to be. So I think it's safe to say that this is a very hard upper bound on how many bits we can squeeze into our corner of the universe.

Now, the problem with Graham's number is that it uses Knuth's up-arrow notation, which is a little obscure even by mathematician standards, since it is not needed in many contexts. But it is simply a generalization of our familiar sequence addition -> multiplication -> exponentiation, etc.: each operation in the sequence is an iteration of the previous one, and that is all an "up arrow" is (notice how it extends exponentiation, in a sense). So a number like $g_1$ = 3^^^^3 (where you should imagine the carets as up-arrows, since Physics MathJax does not render them natively) is already so large that it far exceeds the number of bits in our puny universe. How large is it? Even 3^^^3, with one arrow fewer, is a power tower of 3s some 7.6 trillion levels high, and $g_1$ iterates that construction yet again. And that's just the first term in a sequence that ends with Graham's number! The sequence iterates these "power towers" up to $g_{64}$, which is so incomprehensibly large that it is pretty meaningless to give any comparisons to it.
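To see how quickly these towers blow past the $10^{186}$-bit estimate above, here is a short sketch working in logarithms, so nothing enormous is ever actually stored:

```python
import math

# How fast do power towers of 3s overtake the ~10^186-bit universe estimate?
# Work in log10 so we never store anything huge.
log10_3 = math.log10(3)

t3 = 3 ** 27                    # 3^^3 = 3^(3^3) = 7,625,597,484,987
log10_t4 = t3 * log10_3         # log10 of 3^^4 = 3^(3^^3)
print(f"3^^3 = {t3:,}")
print(f"3^^4 has about {log10_t4:.2e} digits")  # ~3.6e12 digits; 10^186 has only 187
```

A tower only four levels high already has trillions of digits, and $g_1$ is unimaginably far beyond that.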

  • 3
    $\begingroup$ "rather than any symbolic one" a representation using digits is a symbolic representation. $\endgroup$
    – A Nejati
    Commented Jul 2 at 7:46
  • 4
    @ANejati Graham's number, like every number, requires a certain number of bits to represent it, because the pigeonhole principle says that there are many other numbers of the same scale but identifiably different. The minimum number of bits to explicitly represent a number N is log_2(N). Thermodynamic entropy is related to Shannon entropy by virtue of the fact that thermodynamic states provide a universe of "bits" by which information can be represented. A thermodynamic system with T states can encode log_2(T) bits of information.
    Commented Jul 2 at 15:15
  • 2
    Again this is only true if you add extra assumptions, like the number or its digits being drawn from a distribution. In this case, we have the formula for Graham's number and it's very short.
    – A Nejati
    Commented Jul 2 at 17:49
  • 2
    @ANejati You're right that the Kolmogorov complexity of certain numbers is much smaller than that of other numbers nearby. But when we say "Imagine a number", most people don't mean "Imagine an algorithm that produces this number". So this boils down to an interpretation of the plain English in the video. It is obvious to me that they are referring to an explicit, random-access (meaning, produce any digit in O(1)) representation of the number. If your language community primarily traffics in algorithmic representations, I'd like to meet them.
    Commented Jul 2 at 18:43
  • 2
    Most people also don't mean "imagine all the digits of this number". Take pi for instance.
    – A Nejati
    Commented Jul 2 at 19:19
1

Without watching the video and going by your headline, my first guess would have been to use some black hole thermodynamics. If we trust a well-known internet black hole calculator, then a black hole of radius 1.6e-35 m has a mass of 1e-8 kg and a lifetime of 6e-41 s (approx. 1000 Planck times). It would have a mass-energy of about 1 GJ, which already sounds suspiciously high.

Using Landauer's principle at room temperature, a bit of information can be assigned an equivalent mass of 3e-38 kg. If we equate these masses, then the video would estimate the information content of the human brain as roughly 3e29 bits, if I am not mistaken. Typical estimates for the information stored in the human brain in the form of synaptic weights seem to be around 1 petabit, which only gets us to 1e15 bits. I seem to be missing some 15 orders of magnitude here, so I am not sure how they got to these numbers. Maybe somebody can plug them into the actual formulas for you, just in case the internet calculators and the Landauer bit-mass estimate are wrong.
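For anyone who wants to check this arithmetic, here is a minimal sketch reproducing the same back-of-envelope numbers (standard constants; the Schwarzschild relation $r_s = 2GM/c^2$ gives the mass directly, no calculator needed):

```python
import math

# Reproduce the numbers above: the mass of a black hole with Schwarzschild
# radius 1.6e-35 m, and the Landauer "mass of a bit" at room temperature,
# E = k_B * T * ln(2), divided by c^2.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
k_B = 1.381e-23      # Boltzmann constant, J/K
T = 300.0            # room temperature, K

r_s = 1.6e-35                            # Schwarzschild radius, m
M_bh = r_s * c ** 2 / (2 * G)            # ~1e-8 kg
m_bit = k_B * T * math.log(2) / c ** 2   # ~3e-38 kg

print(f"BH mass:  {M_bh:.1e} kg")
print(f"bit mass: {m_bit:.1e} kg")
print(f"bits:     {M_bh / m_bit:.1e}")   # ~3e29 bits
```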

    I'm not sure why you're comparing the brain with a black hole of size 1.6e-35 m
    – A Nejati
    Commented Jun 30 at 20:36
    @ANejati Maybe I should have mentioned that... the only length scale here is the L in your equation, and the surface area of a brain is on the order of 1 m^2, so it drops out in SI units. Those were the only physical numbers I had to play with, so I just went for it and plugged them into the black hole equations (through the calculator). The result is that it doesn't fit, by a wide margin, which is what I said at the end. It's not equating that length scale with a black hole length scale.
    Commented Jun 30 at 20:42
    This number is the length given in the note, but I also don't know why exactly that number was chosen.
    Commented Jun 30 at 20:42
  • 1
    The information content of the brain is the number of bits required to reconstruct a quantum clone of the brain, whereas the synaptic capacity of the brain is the number of bits that the brain can recall while operating as a brain. Obviously, brains need to do many things besides recalling bits (such as metabolizing glucose), which is why there is a massive difference in the two values you estimated. Matter is very dense in terms of Landauer bits, but most of those bits are not accessible to us as useful information.
    Commented Jul 1 at 14:40
    @LawnmowerMan I understand that. I was merely asking whether they are evaluating the brain purely in terms of its "useful" information content or not. It's fairly obvious that they are not. Like I said... I just took the numbers and plugged them in to invalidate "the naive hypothesis". I think that worked.
    Commented Jul 1 at 22:25
1

A simple answer which slightly expands on the other (excellent) answers here, coming at it from an information-theory perspective:

The Kolmogorov complexity of Graham's number is significantly smaller than the Bekenstein bound of our brains. This is why we have no problem conceptualizing infinitely recursive structures, despite their being literally infinitely larger than Graham's number. We don't need to expand Graham's number into a string of digits stored in our brain all at once just to "imagine" it.

The video in question plays on the fact that massive numbers are hard to reason with because they are so massive that we cannot fall back on anything in our experience to get an intuitive feeling about their size. That does not mean we cannot comprehend or reason about them, just that we must think about them in abstract terms (such as Knuth's up-arrow notation).
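To make the Kolmogorov-complexity point concrete, here is a complete definition of Graham's number in a few lines of Python. It is purely illustrative: calling graham() would never terminate in practice, but the point is that the description itself is tiny:

```python
def up(a, n, b):
    """Knuth's up-arrow: a followed by n up-arrows, then b.
    Purely illustrative; do not actually evaluate this for large inputs."""
    if n == 1:
        return a ** b        # one arrow is ordinary exponentiation
    if b == 0:
        return 1
    return up(a, n - 1, up(a, n, b - 1))

def graham():
    g = 4                    # g_1 = 3 with four up-arrows and a 3
    for _ in range(64):      # each term uses the previous term as its arrow count
        g = up(3, g, 3)
    return g                 # g_64 = Graham's number (in principle, never in practice)
```

A few lines of code pin the number down exactly; the difficulty is in evaluating it, not in specifying it.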

