Bekenstein Bound
The Numberphile mathematicians were clearly describing an explicit representation of the bits in Graham's number, rather than a symbolic one (obviously, just writing "G" on a piece of paper would count if you went that route). So their claim, stated more precisely, would be something like: "Let $G$ be Graham's number. If you tried to fit $\log_2(G)$ bits of entropy into a volume no larger than your head, it would collapse into a black hole."
John Baez helpfully explains that a gram of water at STP contains about $10^{24}$ bits of entropy. So, if you could perfectly manipulate every atom of water in a 1 gram clump, you could, in principle, explicitly represent a number with $10^{24}$ bits. Now, the amount of information entropy in a gram of water is a function of its degrees of freedom, and nobody claims that water is the ultimate information-dense substance. If you picked something else, you might be able to pack more bits into the same volume.
But does that mean that, if we are very clever, there is no end to the amount of information we can pack into a sphere the size of our heads? Not quite. The information content of ordinary matter is bounded per unit of mass, since each fundamental particle has only so many degrees of freedom, and massive particles can only get so small. But as you increase the mass density of your "storage medium", you reach a point where it collapses into a black hole. At that point the information content becomes proportional to the surface area of the BH. More precisely, it is believed to be 1/4 nat per Planck area (also per Baez), which works out to roughly a third of a bit per "pixel". This limit is called the Bekenstein Bound.
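For reference, the formula behind that figure is the Bekenstein–Hawking entropy, which in terms of the Planck length $l_P = \sqrt{\hbar G / c^3}$ reads

$$ S \;=\; \frac{k_B\,A}{4\,l_P^2}, $$

i.e. one quarter of a nat ($\approx 0.36$ bits) for every Planck area $l_P^2$ of horizon.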
So, returning to the mathematics of the original question, we can pose it quite precisely. The human body has a total surface area of around 2 $m^2$, and the head comprises less than 10% of that, so it is fairly safe to say that the human head has no more than 1 $m^2$ of surface area. Thus, the Numberphile claim can be posed as: "$\log_2(G)$ > entropy (in bits) of a BH with a 1 $m^2$ event horizon". Hopefully, by studying the definition of Graham's number, you can convince yourself that this claim is trivially true.
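To put a rough number on the right-hand side (taking $l_P \approx 1.6 \times 10^{-35}$ m):

$$ S_{1\,\mathrm{m^2}} \;\approx\; \frac{1\ \mathrm{m^2}}{4\,l_P^2} \;\approx\; 10^{69}\ \text{nats} \;\approx\; 1.4 \times 10^{69}\ \text{bits}, $$

so the claim amounts to $\log_2(G) > 10^{69}$ or so, which $G$ clears by an absurd margin.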
EDIT
To get a sense of how big Graham's number is, I will provide some additional calculations to address Michael's comment. Now, a black hole is the densest way to store information in our universe (so far as our current understanding of physics is concerned), but not all BHs are created equal. Because the surface area only grows with the square of the radius, while the volume grows with the cube, very large BHs are much less information-dense than very small ones. So, for instance, a single black hole of mass equal to the purported mass of the observable universe would be much larger in volume than if we divided that mass up into many smaller BHs. Clearly, the limit, then, must be the smallest possible BH, which would have a radius on the scale of the Planck length.
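To spell out that scaling: for a Schwarzschild black hole the radius grows linearly with the mass, so

$$ R \propto M \;\Rightarrow\; S \propto A \propto M^2, \qquad V \propto R^3 \propto M^3 \;\Rightarrow\; \frac{S}{V} \propto \frac{1}{M}, $$

i.e. the entropy per unit volume falls off as the black hole gets heavier.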
For convenience, let us assume these smallest black holes are cubes with sides of length $l_P$ (the Planck length), which would mean they can encode about 1 bit of entropy each. We can fit about $10^{35}$ Planck lengths per meter, and the observable universe is about $10^{27}$ meters across (approximating it as a large cube). So the most information we could squeeze into the observable universe is about $(10^{35+27})^3 = (10^{62})^3 = 10^{186}$ bits. That's a lot of bits! Oh, and if we did manage to squeeze that much information into it, the entire universe would be at roughly big-bang temperatures, which would make it a perhaps less-than-pleasant place to be. So I think it's safe to say that this is a very hard upper bound on how many bits we can squeeze into our corner of the universe.
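If you want to sanity-check that arithmetic, here is a quick sketch in Python using the same round-number approximations as above (exact values of the constants would shift the result by an order of magnitude or so, which is irrelevant at these scales):

```python
import math

# Back-of-the-envelope: how many Planck-sized "bit cells" fit in the observable universe?
# Round-number approximations, consistent with the estimate in the text.
planck_lengths_per_meter = 1e35   # ~1 / (1.6e-35 m), rounded
universe_width_meters = 1e27      # observable universe approximated as a cube this wide

cells_per_side = planck_lengths_per_meter * universe_width_meters  # ~10^62
total_cells = cells_per_side ** 3                                  # ~10^186 "bits"

print(f"Planck lengths per side:  ~10^{math.log10(cells_per_side):.0f}")
print(f"Total Planck-volume bits: ~10^{math.log10(total_cells):.0f}")
```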
Now, the problem with Graham's number is that it uses Knuth's up-arrow notation, which is a little obscure even by mathematician standards, since it is not needed in many contexts. But it is simply a generalization of our familiar sequence of addition -> multiplication -> exponentiation, etc. If you consider that each operation in the sequence is an iteration of the previous one, you get a basic sense of what an "up arrow" is (and should notice how it is intended to be an extension of exponentiation). So a number like $g_1$ = 3^^^^3 (where you should imagine the carets as up-arrows, since Physics MathJax does not render them natively) is already so large that it far exceeds the number of bits in our puny universe. How large is it? Even one arrow fewer, 3^^^3, is a "power tower" of 3s roughly 7.6 trillion levels high, and $g_1$ = 3^^^^3 iterates that tower-building operation 3^^^3 times. And that's just the first term in the sequence that ends with Graham's number! Each subsequent term uses the previous term as its number of arrows ($g_{n+1}$ = 3 with $g_n$ up-arrows, then 3), up to $g_{64}$, which is so incomprehensibly large that it is pretty meaningless to give any comparisons to it.
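If it helps to see the definition concretely, here is a minimal recursive sketch of the up-arrow operator in Python (the standard textbook definition, nothing specific to the Numberphile video); only the very smallest cases are actually computable, which is rather the point:

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow: a followed by n up-arrows, then b.

    n = 1 is ordinary exponentiation; each additional arrow
    iterates the previous operation.
    """
    if n == 1:
        return a ** b
    if b == 0:
        return 1  # conventional base case: a ^...^ 0 = 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))  # 3^3   = 27
print(up_arrow(3, 2, 3))  # 3^^3  = 3^(3^3) = 7625597484987
print(up_arrow(2, 3, 3))  # 2^^^3 = 2^^4    = 65536
# up_arrow(3, 3, 3) is a power tower of 3s about 7.6 trillion levels high
# (it will never finish), and g_1 = up_arrow(3, 4, 3) is vastly larger still.
```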