A very handy way of looking at the statistical mechanics of a system is through its so-called phase space (also called state space).
Consider a microstate $X = (q_1, \ldots, q_N, p_1, \ldots, p_N)$ of a classical system that consists of $N$ particles. Here, $q_i$ and $p_i$ denote the position and momentum degrees of freedom of the $i$-th particle. Naturally, the dimension of this phase space is $2N$ (for motion in one dimension; for particles in three dimensions it would be $6N$).
It is assumed that every microstate (a point in this phase space) is equally likely to be attained by the physical system. Each such specification of $X$ determines a macrostate, though many different microstates can correspond to the same macrostate. Now, if we wait long enough, the system's trajectory will pass arbitrarily close to every accessible microstate. Systems with this property are called ergodic.
So yes, these $N$ particles can assume any microstate, including one in which a brain pops out of nowhere (although with vanishing probability). However, some macrostates are compatible with far more microstates than others. Since each microstate is equally likely, the system is therefore more likely to be found in those macrostates. These more likely macrostates are precisely the ones that maximise entropy, simply because they contain more microstates and hence occupy a larger volume of the phase space.
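To make the counting concrete, here is a minimal sketch of a standard toy model (my own illustration, not part of the argument above): $N$ distinguishable particles, each sitting in either the left or right half of a box. A microstate assigns each particle a side ($2^N$ in total); a macrostate records only $k$, the number of particles on the left. The balanced macrostate wins simply because it contains the most microstates.

```python
from math import comb, log

N = 20  # number of particles in this toy model

# Number of microstates Omega(k) compatible with each macrostate k,
# and the corresponding Boltzmann entropy S = ln Omega (with k_B = 1).
omega = {k: comb(N, k) for k in range(N + 1)}
entropy = {k: log(w) for k, w in omega.items()}

# The most likely macrostate is the one with the largest phase-space
# "volume", i.e. the largest number of microstates.
most_likely = max(omega, key=omega.get)
print(most_likely)                   # the balanced macrostate, k = N/2 = 10
print(omega[most_likely], omega[0])  # 184756 microstates vs. just 1
```

The entropy-maximising macrostate ($k = 10$) is realised by $\binom{20}{10} = 184756$ microstates, while the "all particles on the right" macrostate ($k = 0$) is realised by exactly one, which is why we essentially never observe it.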
But...
Above, I assumed that all $N$ particles were free to move around (i.e., there were no constraints on their dynamics). Imposing constraints (depending on the details of the dynamics, e.g., internal forces, or conservation of total energy, which sets an upper limit on the $p$'s) reduces the accessible volume of the phase space and makes certain regions of the original phase space inaccessible. As a result, only those states that satisfy the imposed constraints can be reached. So no, not everything can form randomly.
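The shrinking of accessible phase-space volume can be estimated numerically. The sketch below (again my own illustration, with hypothetical values for the particle number and energy cap) samples momenta uniformly from an unconstrained box and measures what fraction also satisfies an energy constraint $\sum_i p_i^2/2 \le E_{\max}$:

```python
import random

random.seed(0)
N = 6             # number of 1D momentum degrees of freedom (unit mass)
E_MAX = 0.5       # hypothetical cap on the total kinetic energy
SAMPLES = 100_000

# Monte Carlo estimate: sample momenta uniformly from the unconstrained
# box [-1, 1]^N, then count how many samples also satisfy the energy
# constraint sum(p_i^2 / 2) <= E_MAX. The fraction approximates the
# ratio of the accessible phase-space volume to the unconstrained one.
inside = 0
for _ in range(SAMPLES):
    p = [random.uniform(-1, 1) for _ in range(N)]
    if sum(pi * pi for pi in p) / 2 <= E_MAX:
        inside += 1

fraction = inside / SAMPLES
print(fraction)  # well below 1: the constraint excludes most of the box
```

Here the accessible region is a 6-dimensional ball inside the box, so the fraction comes out around 0.08; adding more constraints (or more dimensions) shrinks it further, which is the quantitative sense in which constraints rule out most random configurations.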
For more, I refer you to this nice article.