Below the Curie temperature, the Ising model exhibits a first-order phase transition when the external field is taken as the control parameter: the order parameter jumps from, e.g., $-m_0$ to $+m_0$. At zero field, the free energy has two local minima of equal depth at $\pm m_0$, so the mean magnetization should be zero. I also guess that a statistical-mechanical calculation (the derivative of the log partition function with respect to the external field) would give zero.

Can we really observe this zero mean magnetization in reality? I am not sure whether the two phases $\pm m_0$ coexist on the transition line, as they do in the liquid–gas transition. If they do, we could observe a zero mean. If they do not, however, we would have to wait a very long time until a very rare statistical fluctuation switches the system between $+m_0$ and $-m_0$ in order to observe a zero mean. Which is the case in reality? Is there a theoretical argument that settles this?
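To illustrate the second scenario, here is a minimal sketch (my own toy simulation, assuming standard single-spin-flip Metropolis dynamics on a small 2D lattice at $h=0$ and $\beta>\beta_c$): the magnetization stays trapped near one of $\pm m_0$ for the whole run, so its time average is nowhere near zero.

```python
# Toy illustration (not a definitive calculation): single-spin-flip Metropolis
# dynamics for the 2D Ising model at h = 0 and beta > beta_c (~0.4407 in 2D).
# Started in the "-" phase, the magnetization stays close to -m_0 for the
# whole run: its time average is nowhere near 0.
import numpy as np

rng = np.random.default_rng(0)
L, beta, sweeps = 16, 0.6, 1000
spins = -np.ones((L, L), dtype=int)      # start deep in the "-" phase

def sweep(s):
    """One Monte Carlo sweep (L*L attempted single-spin flips)."""
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        # sum of the four nearest neighbours (periodic boundary conditions)
        nn = (s[(i + 1) % L, j] + s[(i - 1) % L, j]
              + s[i, (j + 1) % L] + s[i, (j - 1) % L])
        dE = 2 * s[i, j] * nn            # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i, j] *= -1

mags = [spins.mean()]
for _ in range(sweeps):
    sweep(spins)
    mags.append(spins.mean())

# For this system size, a spontaneous switch to the "+" phase is
# astronomically unlikely on any realistic time scale.
print("time-averaged magnetization:", np.mean(mags))
```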


1 Answer


All of this is extremely well understood (I mean, even at the mathematicians' level of rigor).

The pressure is not differentiable at $h=0$ when $\beta>\beta_c$; it does, however, possess left and right derivatives, corresponding to the two possible limits, $-m^*(\beta)$ and $+m^*(\beta)$, of the magnetization as $h$ approaches $0$ from negative and positive values, respectively.
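In formulas, writing $p(\beta,h) = \lim_{\Lambda\uparrow\mathbb{Z}^d} \frac{1}{\beta|\Lambda|}\log Z_{\Lambda;\beta,h}$ for the pressure (a convention, chosen here so that its derivative with respect to $h$ is exactly the magnetization), this means that, for $\beta>\beta_c$,
$$
\frac{\partial p}{\partial h}(\beta,0^+) = m^*(\beta) \;>\; 0 \;>\; -m^*(\beta) = \frac{\partial p}{\partial h}(\beta,0^-)\ .
$$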

Which value of the magnetization you observe in an infinite sample is an ill-posed question, since, at $h=0$, there are infinitely many different Gibbs states, so you have to say which one you consider. Among these states, two are most natural: the pure phases $\mu_\beta^+$ and $\mu_\beta^-$, which can be obtained by taking the thermodynamic limit with $+$ and $-$ boundary conditions, respectively. Under these two states, typical configurations have an empirical magnetization equal to $+m^*(\beta)$ and $-m^*(\beta)$, respectively.

Of course, you may consider the state $\tfrac{1}{2}(\mu_\beta^+ + \mu_\beta^-)$, which is the one that is obtained by taking the thermodynamic limit using, for example, free or periodic boundary conditions. Under this measure, the average magnetization is indeed $0$. However, the empirical magnetization in a typical configuration will again be $+m^*(\beta)$ or $-m^*(\beta)$, each possibility occurring with probability $1/2$. (This macroscopic indeterminacy is the result of this state not being a pure phase, but a mixture.)
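Schematically, writing $m_\Lambda(\sigma) = \frac{1}{|\Lambda|}\sum_{i\in\Lambda}\sigma_i$ for the empirical magnetization in a box $\Lambda$,
$$
\Bigl\langle \sigma_0 \Bigr\rangle_{\frac12(\mu_\beta^+ + \mu_\beta^-)} = \tfrac12\, m^*(\beta) + \tfrac12\,\bigl(-m^*(\beta)\bigr) = 0,
\qquad\text{whereas}\qquad
\lim_{\Lambda\uparrow\mathbb{Z}^d} m_\Lambda(\sigma) = \pm m^*(\beta)\ \text{with probability } \tfrac12 \text{ each}.
$$
The first quantity is an average over the state; the second is what you would actually measure in a single typical configuration.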

Finally, note that in dimension $3$ and more, at sufficiently low temperatures, there are Gibbs states describing the coexistence of the $+$ and $-$ phases, separated by a rigid interface. (In dimension $2$, the interface has unbounded fluctuations, so this does not happen.)

I have not discussed the dynamical features of these systems (at equilibrium). In infinite volume, a reversal of the magnetization never occurs.


Complement: Let me add a couple of words about what is meant above by "Gibbs states", as this is a fundamental notion that is often not discussed in classes. To describe a finite system (say, defined in a finite subset $\Lambda$ of $\mathbb{Z}^d$) at equilibrium at inverse temperature $\beta$, one uses a probability measure $\mu_{\Lambda;\beta,h}$ that associates to each configuration $\sigma$ in $\Lambda$ a probability proportional to the Boltzmann weight $\exp(-\beta H_{\Lambda;h}(\sigma))$. Note that, in such a case, one has to specify the boundary condition used. One common (and, as it turns out, very relevant) way of doing so is to fix a configuration of spins along the exterior boundary of $\Lambda$; the most important choices are the $+$ and $-$ boundary conditions, which correspond to fixing all these spins to the value $+1$, respectively $-1$. Other common boundary conditions are periodic (wrapping $\Lambda$ on a torus) and free (no interaction across the boundary). It is then useful to indicate explicitly the boundary condition used by writing, for example, $\mu_{\Lambda;\beta,h}^+$ for the $+$ boundary condition.
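To make this concrete, here is a minimal sketch (a toy illustration of the definition, assuming the nearest-neighbour Hamiltonian $H_{\Lambda;h=0}(\sigma) = -\sum_{\{i,j\}} \sigma_i\sigma_j$ and a $3\times 3$ box, small enough that all $2^9$ configurations can be enumerated exactly):

```python
# Toy illustration: build mu^+_{Lambda;beta,h=0} and mu^free_{Lambda;beta,h=0}
# for a 3x3 box Lambda in Z^2 by exact enumeration, and compare the mean
# magnetization under the two measures.
import itertools
import numpy as np

L, beta = 3, 0.6

def boltzmann_weight(config, plus_boundary):
    """exp(-beta * H_Lambda(sigma)) with H = -sum of sigma_i sigma_j over
    nearest-neighbour pairs (including pairs crossing the boundary, if any)."""
    s = np.array(config).reshape(L, L)
    energy = 0.0
    # nearest-neighbour bonds inside Lambda
    energy -= np.sum(s[:, :-1] * s[:, 1:]) + np.sum(s[:-1, :] * s[1:, :])
    if plus_boundary:
        # each site on the border of Lambda also interacts with the exterior
        # spins, all fixed to +1 (corners have 2 exterior neighbours, edges 1)
        for i in range(L):
            for j in range(L):
                n_ext = (i in (0, L - 1)) + (j in (0, L - 1))
                energy -= n_ext * s[i, j]
    return np.exp(-beta * energy)

for plus in (True, False):
    Z, mean_mag = 0.0, 0.0
    for config in itertools.product([-1, 1], repeat=L * L):
        w = boltzmann_weight(config, plus)
        Z += w
        mean_mag += w * np.mean(config)
    label = "+ boundary condition" if plus else "free boundary condition"
    print(f"{label}: <m> = {mean_mag / Z:.4f}")
```

Even in such a tiny box, the mean magnetization is strictly positive under the $+$ boundary condition, while it vanishes exactly under the free one (by the $\sigma\to-\sigma$ symmetry). The question of phase transition is whether the effect of the boundary condition survives as $\Lambda\uparrow\mathbb{Z}^d$, which is precisely what the next paragraph describes.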

Now, in order to have genuine phase transitions, it is necessary to take the thermodynamic limit, that is, to let $\Lambda \uparrow \mathbb{Z}^d$. It turns out that the system undergoes a first-order phase transition at $(\beta,h)$ if and only if this limiting procedure depends on the boundary condition chosen. In particular, if (and only if) $\beta>\beta_c$, $$ \lim_{\Lambda\uparrow\mathbb{Z}^d} \mu_{\Lambda;\beta,h=0}^+ \neq \lim_{\Lambda\uparrow\mathbb{Z}^d} \mu_{\Lambda;\beta,h=0}^-\ . $$ Each of these two limits describes a different phase, with positive, respectively negative, magnetization.

Roughly speaking, the Gibbs states are all the possible limits that can be reached using arbitrary boundary conditions. I have not explained what is precisely meant by these limits of probability measures, but I can add more information on this issue if needed. Let me also mention that one can characterize these Gibbs states directly in infinite volume (without taking limits of finite-volume systems) in various ways.

  • I can tell you much more about these matters, but I prefer to wait to see what information you're after. Note also that I only discussed infinite systems, since only those display a genuine phase transition. For finite systems, things are more complicated and everything becomes much more "fuzzy". Nevertheless, things are also well understood in this case. Commented Sep 20, 2017 at 18:46
  • Thank you for the answer. I have some questions. By "Gibbs states", do you mean the local minima of the Gibbs free energy? In the introductory stat mech course I took, the effects of boundary conditions were never mentioned. Could you give me some more information with which I can start studying? – lyrin Commented Sep 21, 2017 at 16:15
  • By Gibbs states, I mean the limits of finite-volume Gibbs measures, as the size of the domain on which they are defined diverges. Now, in finite volume, you have to say what happens at the system's boundary (hence the boundary condition). Often, periodic boundary conditions are used in introductory classes, but it is very important to consider other ones. Then, the main point is that, when $\beta>\beta_c$, different boundary conditions can lead to different limiting probability measures. Commented Sep 21, 2017 at 16:41
  • If you are interested in those issues, I would recommend having a look at a book I wrote with a colleague. It can be found here: unige.ch/math/folks/velenik/smbook/index.html . The most relevant part for your question is Chapter 3 (although Chapter 6 offers a deeper, but substantially more abstract, point of view on these matters). Commented Sep 21, 2017 at 16:43
