
I’ve just posted to the arXiv my paper “Finite time blowup for Lagrangian modifications of the three-dimensional Euler equation“. This paper is loosely in the spirit of other recent papers of mine in which I explore how close one can get to supercritical PDE of physical interest (such as the Euler and Navier-Stokes equations), while still being able to rigorously demonstrate finite time blowup for at least some choices of initial data. Here, the PDE we are trying to get close to is the incompressible inviscid Euler equations

\displaystyle \partial_t u + (u \cdot \nabla) u = - \nabla p

\displaystyle \nabla \cdot u = 0

in three spatial dimensions, where {u} is the velocity vector field and {p} is the pressure field. In vorticity form, and viewing the vorticity {\omega} as a {2}-form (rather than a vector), we can rewrite this system using the language of differential geometry as

\displaystyle \partial_t \omega + {\mathcal L}_u \omega = 0

\displaystyle u = \delta \tilde \eta^{-1} \Delta^{-1} \omega

where {{\mathcal L}_u} is the Lie derivative along {u}, {\delta} is the codifferential (the adjoint of the differential {d}, or equivalently the negative of the divergence operator) that sends {k+1}-vector fields to {k}-vector fields, {\Delta} is the Hodge Laplacian, and {\tilde \eta} is the identification of {k}-vector fields with {k}-forms induced by the Euclidean metric {\eta}. The equation {u = \delta \tilde \eta^{-1} \Delta^{-1} \omega} can be viewed as the Biot-Savart law recovering velocity from vorticity, expressed in the language of differential geometry.
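In the more familiar vector calculus notation (and assuming enough decay at infinity to invert the Laplacian), this is just the classical statement that a divergence-free velocity field can be recovered from its vorticity: writing {\omega = \nabla \times u} for the vorticity viewed as a vector field, one has {\nabla \times \omega = \nabla(\nabla \cdot u) - \Delta u = -\Delta u}, and hence

\displaystyle u = \nabla \times (-\Delta)^{-1} \omega.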

One can then generalise this system by replacing the operator {\tilde \eta^{-1} \Delta^{-1}} by a more general operator {A} from {2}-forms to {2}-vector fields, giving rise to what I call the generalised Euler equations

\displaystyle \partial_t \omega + {\mathcal L}_u \omega = 0

\displaystyle u = \delta A \omega.

For example, the surface quasi-geostrophic (SQG) equations can be written in this form, as discussed in this previous post. One can view {A \omega} (up to Hodge duality) as a vector potential for the velocity {u}, so it is natural to refer to {A} as a vector potential operator.

The generalised Euler equations carry much of the same geometric structure as the true Euler equations. For instance, the transport equation {\partial_t \omega + {\mathcal L}_u \omega = 0} is equivalent to the Kelvin circulation theorem, which in three dimensions also implies the transport of vortex streamlines and the conservation of helicity. If {A} is self-adjoint and positive definite, then the famous Euler-Poincaré interpretation of the true Euler equations as geodesic flow on an infinite dimensional Riemannian manifold of volume preserving diffeomorphisms (as discussed in this previous post) extends to the generalised Euler equations (with the operator {A} determining the new Riemannian metric to place on this manifold). In particular, the generalised Euler equations have a Lagrangian formulation, and so by Noether’s theorem we expect any continuous symmetry of the Lagrangian to lead to conserved quantities. Indeed, we have a conserved Hamiltonian {\frac{1}{2} \int \langle \omega, A \omega \rangle}, and any spatial symmetry of {A} leads to a conserved impulse (e.g. translation invariance leads to a conserved momentum, and rotation invariance leads to a conserved angular momentum). If {A} behaves like a pseudodifferential operator of order {-2} (as is the case with the true vector potential operator {\tilde \eta^{-1} \Delta^{-1}}), then it turns out that one can use energy methods to recover the same sort of classical local existence theory as for the true Euler equations (up to and including the famous Beale-Kato-Majda criterion for blowup).

The true Euler equations are suspected of admitting smooth localised solutions which blow up in finite time; there is now substantial numerical evidence for this blowup, but it has not been proven rigorously. The main purpose of this paper is to show that such finite time blowup can at least be established for certain generalised Euler equations that are somewhat close to the true Euler equations. This is similar in spirit to my previous paper on finite time blowup for averaged Navier-Stokes equations, with the main new feature here being that the modified equation continues to have a Lagrangian structure and a vorticity formulation, which was not the case with the averaged Navier-Stokes equation. On the other hand, the arguments here are not able to handle the presence of viscosity (basically because they rely crucially on the Kelvin circulation theorem, which is not available in the viscous case).

In fact, three different blowup constructions are presented (for three different choices of vector potential operator {A}). The first is a variant of one discussed previously on this blog, in which a “neck pinch” singularity for a vortex tube is created by using a non-self-adjoint vector potential operator, so that the velocity at the neck of the vortex tube is determined by the circulation of the vorticity somewhat further away from that neck; when combined with conservation of circulation, this is enough to guarantee finite time blowup. This is a relatively easy construction of finite time blowup, and has the advantage of being rather stable (any initial data flowing through a narrow tube with a large positive circulation will blow up in finite time). On the other hand, it is not so surprising that finite time blowup can occur in the non-self-adjoint case, as there is no conserved energy.

The second blowup construction is based on a connection between the two-dimensional SQG equation and the three-dimensional generalised Euler equations, discussed in this previous post. Namely, any solution to the former can be lifted to a “two and a half-dimensional” solution to the latter, in which the velocity and vorticity are translation-invariant in the vertical direction (but the velocity is still allowed to contain vertical components, so the flow is not completely horizontal). The same embedding also works to lift solutions to generalised SQG equations in two dimensions to solutions to generalised Euler equations in three dimensions. Conveniently, even if the vector potential operator for the generalised SQG equation fails to be self-adjoint, one can ensure that the three-dimensional vector potential operator is self-adjoint. Using this trick, together with a two-dimensional version of the first blowup construction, one can then construct a generalised Euler equation in three dimensions with a vector potential operator that is both self-adjoint and positive definite, and still admits solutions that blow up in finite time, though the blowup is now that of a vortex sheet creasing on a line, rather than a vortex tube pinching at a point.

This eliminates the main defect of the first blowup construction, but introduces two others. Firstly, the blowup is less stable, as it relies crucially on the initial data being translation-invariant in the vertical direction. Secondly, the solution is not spatially localised in the vertical direction (though it can be viewed as a compactly supported solution on the manifold {{\bf R}^2 \times {\bf R}/{\bf Z}}, rather than {{\bf R}^3}). The third and final blowup construction of the paper addresses this second defect, by replacing vertical translation symmetry with axial rotation symmetry around the vertical axis (basically, replacing Cartesian coordinates with cylindrical coordinates). It turns out that there is a more complicated way to embed two-dimensional generalised SQG equations into three-dimensional generalised Euler equations in which the solutions to the latter are now axially symmetric (but are allowed to “swirl” in the sense that the velocity field can have a non-zero angular component), while still keeping the vector potential operator self-adjoint and positive definite; the blowup is now that of a vortex ring creasing on a circle.

As with the previous papers in this series, these blowup constructions do not directly imply finite time blowup for the true Euler equations, but they do at least provide a barrier to establishing global regularity for these latter equations, in that one is forced to use some property of the true Euler equations that is not shared by these generalisations. They also suggest some possible blowup mechanisms for the true Euler equations (although unfortunately these mechanisms do not seem compatible with the addition of viscosity, so they do not seem to suggest a viable Navier-Stokes blowup mechanism).

In logic, there is a subtle but important distinction between the concept of mutual knowledge – information that everyone (or almost everyone) knows – and common knowledge, which is not only knowledge that (almost) everyone knows, but something that (almost) everyone knows that everyone else knows (and that everyone knows that everyone else knows that everyone else knows, and so forth).  A classic example arises from Hans Christian Andersen’s fable of the Emperor’s New Clothes: the fact that the emperor has no clothes is mutual knowledge, but not common knowledge, because everyone (save, eventually, for a small child) is refusing to acknowledge the emperor’s nakedness, thus perpetuating the charade that the emperor is actually wearing some incredibly expensive and special clothing that is only visible to a select few.  My own personal favourite example of the distinction comes from the blue-eyed islander puzzle, discussed previously here, here and here on the blog.  (By the way, I would ask that any commentary about that puzzle be directed to those blog posts, rather than to the current one.)

I believe that there is now a real-life instance of this situation in the US presidential election, regarding the following

Proposition 1.  The presumptive nominee of the Republican Party, Donald Trump, is not even remotely qualified to carry out the duties of the presidency of the United States of America.

Proposition 1 is a statement which I think is approaching the level of mutual knowledge amongst the US population (and probably a large proportion of people following US politics overseas): even many of Trump’s nominal supporters secretly suspect that this proposition is true, even if they are hesitant to say it out loud.  And there have been many prominent people, from both major parties, who have made the case for Proposition 1: for instance Mitt Romney, the Republican presidential nominee in 2012, did so back in March, and just a few days ago Hillary Clinton, the likely Democratic presidential nominee this year, did so in this speech:

I highly recommend watching the entirety of the (35 mins or so) speech, followed by the entirety of Trump’s rebuttal.

However, even if Proposition 1 is approaching the status of “mutual knowledge”, it does not yet seem to be close to the status of “common knowledge”: many may secretly believe that Trump cannot be considered a serious candidate for the US presidency, but must continue to entertain this possibility, because they feel that others around them, or in politics or the media, appear to be doing so.  Reconciling these views can require taking on some implausible hypotheses that are not otherwise supported by any evidence, such as the hypothesis that Trump’s displays of policy ignorance, pettiness, and other clearly unpresidential behaviour are merely “for show”, and that behind this facade there is actually a competent and qualified presidential candidate; much like the emperor’s new clothes, this alleged competence is supposedly only visible to a select few.  And so the charade continues.

I feel that it is time for the charade to end: Trump is unfit to be president, and everybody knows it.  But more people need to say so, openly.

Important note: I anticipate there will be any number of “tu quoque” responses, asserting for instance that Hillary Clinton is also unfit to be the US president.  I personally do not believe that to be the case (and certainly not to the extent that Trump exhibits), but in any event such an assertion has no logical bearing on the qualification of Trump for the presidency.  As such, any comments that are purely of this “tu quoque” nature, and which do not directly address the validity or epistemological status of Proposition 1, will be deleted as off-topic.  However, there is a legitimate case to be made that there is a fundamental weakness in the current mechanics of the US presidential election, particularly with the “first-past-the-post” voting system, in that (once the presidential primaries are concluded) a voter in the presidential election is effectively limited to choosing between just two viable choices, one from each of the two major parties, or else refusing to vote or making a largely symbolic protest vote. This weakness is particularly evident when at least one of these two major choices is demonstrably unfit for office, as per Proposition 1.  I think there is a serious case for debating the possibility of major electoral reform in the US (I am particularly partial to the Instant Runoff Voting system, used for instance in my home country of Australia, which allows for meaningful votes to third parties), and I would consider such a debate to be on-topic for this post.  But this is very much a longer term issue, as there is absolutely no chance that any such reform would be implemented by the time of the US elections in November (particularly given that any significant reform would almost certainly require, at minimum, a constitutional amendment).

 

Note: the following is a record of some whimsical mathematical thoughts and computations I had after doing some grading. It is likely that the sort of problems discussed here are in fact well studied in the appropriate literature; I would appreciate knowing of any links to such.

Suppose one assigns {N} true-false questions on an examination, with the answers randomised so that each question is equally likely to have “true” as the correct answer as “false”, with no correlation between different questions. Suppose that the students taking the examination must answer each question with exactly one of “true” or “false” (they are not allowed to skip any question). Then it is easy to see how to grade the exam: one can simply count how many questions each student answered correctly (i.e. each correct answer scores one point, and each incorrect answer scores zero points), and give that number {k} as the final grade of the examination. More generally, one could assign some score of {A} points to each correct answer and some score (possibly negative) of {B} points to each incorrect answer, giving a total grade of {A k + B(N-k)} points. As long as {A > B}, this grade is simply an affine rescaling of the simple grading scheme {k} and would serve just as well for the purpose of evaluating the students, as well as encouraging each student to answer the questions as correctly as possible.
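For instance (to pick a purely illustrative choice of constants), awarding {A=1} point for each correct answer and deducting a point for each incorrect one ({B=-1}) gives a total grade of

\displaystyle k - (N-k) = 2k - N,

which ranks the students in exactly the same order as the simple count {k}.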

In practice, though, a student will probably not know the answer to each individual question with absolute certainty. One can adopt a probabilistic model, where for a given student {S} and a given question {n}, the student {S} may think that the answer to question {n} is true with probability {p_{S,n}} and false with probability {1-p_{S,n}}, where {0 \leq p_{S,n} \leq 1} is some quantity that can be viewed as a measure of confidence {S} has in the answer (with {S} being confident that the answer is true if {p_{S,n}} is close to {1}, and confident that the answer is false if {p_{S,n}} is close to {0}); for simplicity let us assume that in {S}‘s probabilistic model, the answers to each question are independent random variables. Given this model, and assuming that the student {S} wishes to maximise his or her expected grade on the exam, it is an easy matter to see that the optimal strategy for {S} to take is to answer question {n} true if {p_{S,n} > 1/2} and false if {p_{S,n} < 1/2}. (If {p_{S,n}=1/2}, the student {S} can answer arbitrarily.)
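To spell this out: in the student’s subjective model, answering “true” to question {n} earns an expected {A p_{S,n} + B(1-p_{S,n})} points, while answering “false” earns an expected {A(1-p_{S,n}) + B p_{S,n}} points; the difference between the two is

\displaystyle (A-B)(2p_{S,n} - 1),

which is positive precisely when {p_{S,n} > 1/2} (recall that {A > B}).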

[Important note: here we are not using the term “confidence” in the technical sense used in statistics, but rather as an informal term for “subjective probability”.]

This is fine as far as it goes, but for the purposes of evaluating how well the student actually knows the material, it provides only a limited amount of information; in particular, we do not get to directly see the student’s subjective probabilities {p_{S,n}} for each question. If for instance {S} answered {7} out of {10} questions correctly, was it because he or she actually knew the right answer for seven of the questions, or was it because he or she was making educated guesses for the ten questions that turned out to be slightly better than random chance? There seems to be no way to discern this if the only input the student is allowed to provide for each question is the single binary choice of true/false.

But what if the student were able to give probabilistic answers to any given question? That is to say, instead of being forced to answer just “true” or “false” for a given question {n}, the student would be allowed to give answers such as “{60\%} confident that the answer is true” (and hence {40\%} confident that the answer is false). Such answers would give more insight as to how well the student actually knew the material; in particular, we would theoretically be able to actually see the student’s subjective probabilities {p_{S,n}}.

But now it becomes less clear what the right grading scheme is. Suppose for instance we wish to extend the simple grading scheme in which a correct answer given with {100\%} confidence is awarded one point. How many points should one award a correct answer given with {60\%} confidence? How about an incorrect answer given with {60\%} confidence (or equivalently, a correct answer given with {40\%} confidence)?

Mathematically, one could design a grading scheme by selecting some grading function {f: [0,1] \rightarrow {\bf R}} and then awarding a student {f(p)} points whenever they indicate the correct answer with a confidence of {p}. For instance, if the student was {60\%} confident that the answer was “true” (and hence {40\%} confident that the answer was “false”), then this grading scheme would award the student {f(0.6)} points if the correct answer actually was “true”, and {f(0.4)} points if the correct answer actually was “false”. One can then ask which functions {f} would be “best” for this scheme.

Intuitively, one would expect that {f} should be monotone increasing – one should be rewarded more for being correct with high confidence, than correct with low confidence. On the other hand, some sort of “partial credit” should still be assigned in the latter case. One obvious proposal is to just use a linear grading function {f(p) = p} – thus for instance a correct answer given with {60\%} confidence might be worth {0.6} points. But is this the “best” option?

To make the problem more mathematically precise, one needs an objective criterion with which to evaluate a given grading scheme. One criterion that one could use here is the avoidance of perverse incentives. If a grading scheme is designed badly, a student may end up overstating or understating his or her confidence in an answer in order to optimise the (expected) grade: the optimal level of confidence {q_{S,n}} for a student {S} to report on a question may differ from that student’s subjective confidence {p_{S,n}}. So one could ask to design a scheme so that {q_{S,n}} is always equal to {p_{S,n}}, so that the incentive is for the student to honestly report his or her confidence level in the answer.
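As a quick numerical illustration of such a perverse incentive (this is only a sketch, with function names made up for the occasion), one can check that the linear grading function {f(p) = p} proposed above rewards overstating one’s confidence: a student who is only {60\%} confident in “true” maximises his or her expected grade by reporting full confidence.

    import numpy as np

    def expected_grade(f, p, q):
        """Expected grade, in the student's subjective model, for reporting
        confidence q in "true" when the subjective probability of "true" is p."""
        return p * f(q) + (1 - p) * f(1 - q)

    linear = lambda q: q                 # the candidate grading function f(p) = p

    p = 0.6                              # student is 60% confident in "true"
    qs = np.linspace(0.0, 1.0, 1001)     # possible reported confidence levels
    best_q = qs[np.argmax(expected_grade(linear, p, qs))]
    print(best_q)                        # prints 1.0: honest reporting is not optimal here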

This requirement turns out to give a precise constraint on the grading function {f}. If a student {S} thinks that the answer to a question {n} is true with probability {p_{S,n}} and false with probability {1-p_{S,n}}, and enters an answer of “true” with confidence {q_{S,n}} (and thus “false” with confidence {1-q_{S,n}}), then the student would expect a grade of

\displaystyle p_{S,n} f( q_{S,n} ) + (1-p_{S,n}) f(1 - q_{S,n})

on average for this question. To maximise this expected grade (assuming differentiability of {f}, which is a reasonable hypothesis for a partial credit grading scheme), one performs the usual manoeuvre of differentiating in the independent variable {q_{S,n}} and setting the result to zero, thus obtaining

\displaystyle p_{S,n} f'( q_{S,n} ) - (1-p_{S,n}) f'(1 - q_{S,n}) = 0.

In order to avoid perverse incentives, the maximum should occur at {q_{S,n} = p_{S,n}}, thus we should have

\displaystyle p f'(p) - (1-p) f'(1-p) = 0

for all {0 \leq p \leq 1}. This suggests that the function {p \mapsto p f'(p)} should be constant. (Strictly speaking, it only gives the weaker constraint that {p \mapsto p f'(p)} is symmetric around {p=1/2}; but if one generalised the problem to allow for multiple-choice questions with more than two possible answers, with a grading scheme that depended only on the confidence assigned to the correct answer, the same analysis would in fact force {p f'(p)} to be constant in {p}; we leave this computation to the interested reader.) In other words, {f(p)} should be of the form {A \log p + B} for some {A,B}; by monotonicity we expect {A} to be positive. If we make the normalisation {f(1/2)=0} (so that no points are awarded for a {50-50} split in confidence between true and false) and {f(1)=1}, one arrives at the grading scheme

\displaystyle f(p) := \log_2(2p).
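(To spell out the last step: since {f(p) = A \log p + B}, the normalisation {f(1)=1} gives {B=1}, and then {f(1/2) = -A \log 2 + 1 = 0} gives {A = 1/\log 2}, so that {f(p) = \frac{\log p}{\log 2} + 1 = \log_2 p + \log_2 2 = \log_2(2p)}.)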

Thus, if a student believes that an answer is “true” with confidence {p} and “false” with confidence {1-p}, he or she will be awarded {\log_2(2p)} points when the correct answer is “true”, and {\log_2(2(1-p))} points if the correct answer is “false”. The following table gives some illustrative values for this scheme:

Confidence that answer is “true” | Points awarded if answer is “true” | Points awarded if answer is “false”
{0\%} | {-\infty} | {1.000}
{1\%} | {-5.644} | {0.9855}
{2\%} | {-4.644} | {0.9709}
{5\%} | {-3.322} | {0.9260}
{10\%} | {-2.322} | {0.8480}
{20\%} | {-1.322} | {0.6781}
{30\%} | {-0.737} | {0.4854}
{40\%} | {-0.322} | {0.2630}
{50\%} | {0.000} | {0.000}
{60\%} | {0.2630} | {-0.322}
{70\%} | {0.4854} | {-0.737}
{80\%} | {0.6781} | {-1.322}
{90\%} | {0.8480} | {-2.322}
{95\%} | {0.9260} | {-3.322}
{98\%} | {0.9709} | {-4.644}
{99\%} | {0.9855} | {-5.644}
{100\%} | {1.000} | {-\infty}

Note the large penalties for being extremely confident of an answer that ultimately turns out to be incorrect; in particular, answers of {100\%} confidence should be avoided unless one really is absolutely certain as to the correctness of one’s answer.
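For readers who want to experiment with this scheme, here is a short sketch in Python (the function name is of course just made up for illustration) which implements the grading function {f(p) = \log_2(2p)}, reproduces a few rows of the table above, and checks numerically that the expected grade is maximised by reporting one’s honest confidence:

    import numpy as np

    def grade(p):
        """Points awarded when the correct answer was assigned confidence p,
        i.e. f(p) = log_2(2p); this tends to -infinity as p tends to 0."""
        return np.log2(2 * p)

    # A few rows of the table: confidence in "true", points if the correct
    # answer turns out to be "true", and points if it turns out to be "false".
    for p in [0.5, 0.6, 0.8, 0.9, 0.99]:
        print(f"{p:5.0%}  {grade(p):+8.4f}  {grade(1 - p):+8.4f}")

    # No perverse incentives: if the subjective probability of "true" is p,
    # the expected grade p*f(q) + (1-p)*f(1-q) is maximised at q = p.
    p = 0.7
    qs = np.linspace(0.01, 0.99, 981)
    expected = p * grade(qs) + (1 - p) * grade(1 - qs)
    print(qs[np.argmax(expected)])    # approximately 0.7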

The total grade given under such a scheme to a student {S} who answers each question {n} to be “true” with confidence {p_{S,n}}, and “false” with confidence {1-p_{S,n}}, is

\displaystyle \sum_{n: \hbox{ ans is true}} \log_2(2 p_{S,n} ) + \sum_{n: \hbox{ ans is false}} \log_2(2(1-p_{S,n})).

This grade can also be written as

\displaystyle N + \frac{1}{\log 2} \log {\mathcal L}

where

\displaystyle {\mathcal L} := \prod_{n: \hbox{ ans is true}} p_{S,n} \times \prod_{n: \hbox{ ans is false}} (1-p_{S,n})

is the likelihood of the student {S}‘s subjective probability model, given the outcome of the correct answers. Thus the grade system here has another natural interpretation, as being an affine rescaling of the log-likelihood. The incentive is thus for the student to maximise the likelihood of his or her own subjective model, which aligns well with standard practices in statistics. From the perspective of Bayesian probability, the grade given to a student can then be viewed as a measurement (in logarithmic scale) of how much the posterior probability that the student’s model was correct has improved over the prior probability.
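To verify the identity relating the grade to the log-likelihood, write {r_{S,n}} for the confidence that {S} assigned to whichever answer to question {n} turned out to be correct (so {r_{S,n} = p_{S,n}} if the correct answer is “true”, and {r_{S,n} = 1-p_{S,n}} otherwise); then the grade is

\displaystyle \sum_{n=1}^N \log_2(2 r_{S,n}) = \sum_{n=1}^N \left(1 + \frac{\log r_{S,n}}{\log 2}\right) = N + \frac{1}{\log 2} \log \prod_{n=1}^N r_{S,n} = N + \frac{1}{\log 2} \log {\mathcal L}.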

One could propose using the above grading scheme to evaluate predictions of binary events, such as an upcoming election with only two viable candidates, to see in hindsight just how effective each predictor was in calling these events. One difficulty in doing so is that many predictions do not come with explicit probabilities attached to them, and attaching a default confidence level of {100\%} to any prediction made without any such qualification would result in an automatic grade of {-\infty} if even one of these predictions turned out to be incorrect. But perhaps if a predictor refuses to attach confidence levels to his or her predictions, one can assign some default level {p} of confidence to these predictions, and then (using some suitable set of predictions from this predictor as “training data”) find the value of {p} that maximises this predictor’s grade. This level can then be used going forward as the default level of confidence to apply to any future predictions from this predictor.
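As a crude sketch of this calibration step (with an entirely made-up data format: each past unqualified prediction is recorded as {1} if it turned out to be correct and {0} otherwise), the grade-maximising default confidence turns out to be just the predictor’s empirical success rate:

    import numpy as np

    def default_confidence(outcomes):
        """Given past unqualified predictions recorded as 1 (correct) or
        0 (incorrect), return the default confidence p maximising the total
        grade k*log_2(2p) + (n-k)*log_2(2(1-p)), where k is the number of
        correct predictions out of n.  Calculus gives the maximiser p = k/n,
        i.e. the empirical success rate."""
        outcomes = np.asarray(outcomes)
        return outcomes.sum() / len(outcomes)

    print(default_confidence([1, 1, 0, 1, 1, 0, 1, 1]))   # 0.75

This is of course just the maximum likelihood estimate for a Bernoulli success probability, consistent with the log-likelihood interpretation of the grade given above.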

The above grading scheme extends easily enough to multiple-choice questions. But one question I had trouble with was how to deal with uncertainty, in which the student does not know enough about a question to venture even a probability of it being true or false. Here, it is natural to allow a student to leave a question blank (i.e. to answer “I don’t know”); a more advanced option would be to allow the student to enter his or her confidence level as an interval range (e.g. “I am between {50\%} and {70\%} confident that the answer is “true””). But now I do not have a good proposal for a grading scheme; once there is uncertainty in the student’s subjective model, the problem of that student maximising his or her expected grade becomes ill-posed due to the “unknown unknowns”, and so the previous criterion of avoiding perverse incentives becomes far less useful.
