
On Thursday, UCLA hosted a “Fields Medalist Symposium”, in which four of the six University of California-affiliated Fields Medalists (Vaughan Jones (1990), Efim Zelmanov (1994), Richard Borcherds (1998), and myself (2006)) gave talks of varying levels of technical sophistication. (The other two are Michael Freedman (1986) and Stephen Smale (1966), who could not attend.) The slides for my own talk are available here.

The talks were in order of the year in which the medal was awarded: we began with Vaughan, who spoke on “Flatland: a great place to do algebra”, then Efim, who spoke on “Pro-finite groups”, Richard, who spoke on “What is a quantum field theory?”, and myself, on “Nilsequences and the primes.” The audience was quite mixed, ranging from mathematics faculty to undergraduates to alumni to curiosity seekers, and I seriously doubt that every audience member understood every talk, but there was something for everyone, and for me personally it was fantastic to see some perspectives from first-class mathematicians on some wonderful areas of mathematics outside of my own fields of expertise.

Disclaimer: the summaries below are reconstructed from my notes and from some hasty web research; I don’t vouch for 100% accuracy of the mathematical content, and would welcome corrections.

Read the rest of this entry »

My paper “Resonant decompositions and the I-method for the cubic nonlinear Schrodinger equation on {\Bbb R}^2“, with Jim Colliander, Mark Keel, Gigliola Staffilani, and Hideo Takaoka (aka the “I-team“), has just been uploaded to the arXiv, and submitted to DCDS-A. In this (long-delayed!) paper, we improve our previous result on the global well-posedness of the cubic non-linear defocusing Schrödinger equation

i u_t + \Delta u = |u|^2 u

in two spatial dimensions, thus u: {\Bbb R} \times {\Bbb R}^2 \to {\Bbb C}. In that paper we used the “first generation I-method” (centred around an almost conservation law for a mollified energy E(Iu)) to obtain global well-posedness in H^s({\Bbb R}^2) for s > 4/7 (improving on an earlier result of s > 3/5 by Bourgain). Here we use the “second generation I-method”, in which the mollified energy E(Iu) is adjusted by a correction term to damp out “non-resonant interactions” and thus lead to an improved almost conservation law, and ultimately to an improvement of the well-posedness range to s > 1/2. (The conjectured region is s \geq 0; below that, the solution becomes unstable and even local well-posedness is not known.) A similar result (but using Morawetz estimates instead of correction terms) has recently been established by Colliander-Grillakis-Tzirakis; this attains the superior range of s > 2/5, but in the focusing case it does not give global existence all the way up to the ground state due to a slight inefficiency in the Morawetz estimate approach. Our method is in fact rather robust and indicates that the “second-generation” I-method can be pushed further for a large class of dispersive PDE.
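For readers who have not seen the I-method before, a thumbnail sketch of the standard setup may help (this is generic background only; the correction term used in the paper is more elaborate than anything displayed here). The conserved energy for this equation is

E(u) = \int_{{\Bbb R}^2} \frac{1}{2} |\nabla u|^2 + \frac{1}{4} |u|^4\ dx,

which is finite only for solutions in H^1({\Bbb R}^2). For rougher data u_0 \in H^s({\Bbb R}^2) with s < 1, one introduces a smoothing operator I = I_N, the Fourier multiplier whose symbol equals 1 for frequencies |\xi| \leq N and (N/|\xi|)^{1-s} for |\xi| \geq 2N; then Iu lies in H^1, and the strategy is to show that the mollified energy E(Iu(t)) is “almost conserved”, in the sense that it grows only like a negative power of N on each fixed time interval, whereupon optimizing in N yields global well-posedness for all s above some threshold.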

Read the rest of this entry »

[This post is authored by Gil Kalai, who has kindly “guest blogged” this week’s “open problem of the week”. – T.]

This is a problem in discrete and convex geometry. It seeks to quantify the intuitively obvious fact that large convex bodies are so “fat” that they cannot avoid “detection” by a small number of observation points. More precisely, we fix a dimension d and make the following definition (introduced by Haussler and Welzl):

  • Definition: Let X \subset {\Bbb R}^d be a finite set of points, and let 0 < \epsilon < 1. We say that a finite set Y \subset {\Bbb R}^d is a weak \epsilon-net for X (with respect to convex bodies) if, whenever B is a convex body which is large in the sense that |B \cap X| > \epsilon |X|, then B contains at least one point of Y. (If Y is contained in X, we say that Y is a strong \epsilon-net for X with respect to convex bodies.)

For example, in one dimension, if X = \{1,\ldots,N\}, and Y = \{ \epsilon N, 2 \epsilon N, \ldots, k \epsilon N \} where k is the integer part of 1/\epsilon, then Y is a weak \epsilon-net for X with respect to convex bodies. Thus we see that even when the original set X is very large, one can create an \epsilon-net of size as small as O(1/\epsilon). Strong \epsilon-nets are of importance in computational learning theory, and are fairly well understood via Vapnik-Chervonenkis (or VC) theory; however, the theory of weak \epsilon-nets is still not completely satisfactory.
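As a quick sanity check, the following Python snippet (purely illustrative, and not part of the original post) verifies the one-dimensional construction by brute force, using the fact that in one dimension every convex body is an interval:

    import numpy as np

    N, eps = 200, 0.1
    X = np.arange(1, N + 1)                     # X = {1, ..., N}
    k = int(1 / eps)                            # integer part of 1/eps
    Y = eps * N * np.arange(1, k + 1)           # Y = {eps N, 2 eps N, ..., k eps N}

    # In one dimension a convex body is an interval [a, b], so it suffices
    # to test intervals with endpoints in X; [a, b] captures b - a + 1
    # points of X, and is "large" when this exceeds eps * N.
    ok = all(np.any((Y >= a) & (Y <= b))
             for a in X for b in range(a, N + 1)
             if (b - a + 1) > eps * N)
    print("Y is a weak eps-net for X:", ok)     # expect: True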

One can ask what happens in higher dimensions, for instance when X is a discrete cube X = \{1,\ldots,N\}^d. It is not too hard to cook up \epsilon-nets of size O_d(1/\epsilon^d) (by using tools such as Minkowski’s theorem), but in fact one can create \epsilon-nets of size as small as O( \frac{1}{\epsilon} \log \frac{1}{\epsilon} ) simply by taking a random subset of X of this cardinality and observing that, “up to errors of \epsilon“, the total number of essentially different ways a convex body can meet X grows at most polynomially in 1/\epsilon. (This is a very typical application of the probabilistic method; see the sketch below.) On the other hand, since X can contain roughly 1/\epsilon disjoint convex bodies, each of which contains at least an \epsilon fraction of the points in X, we see that no \epsilon-net can have size much smaller than 1/\epsilon.
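The random construction is easy to simulate. The following snippet (again purely illustrative, with an arbitrary constant in the sample size) draws a random Y of the advertised cardinality from the discrete cube with d = 2, and tests it against randomly generated disks, one convenient family of convex bodies:

    import numpy as np

    rng = np.random.default_rng(1)
    N, eps = 100, 0.05
    grid = np.array([(i, j) for i in range(1, N + 1)
                            for j in range(1, N + 1)])       # X = {1,...,N}^2
    size = int(20 / eps * np.log(1 / eps))   # C (1/eps) log(1/eps), arbitrary C = 20
    Y = grid[rng.choice(len(grid), size, replace=False)]

    # Monte Carlo check against random disks: every "large" disk
    # (capturing more than an eps fraction of X) should contain a point of Y.
    failures = 0
    for _ in range(2000):
        centre = rng.uniform(1, N, size=2)
        radius = rng.uniform(1, N / 2)
        inside = np.sum((grid - centre) ** 2, axis=1) <= radius ** 2
        if inside.sum() > eps * len(grid):
            hit = np.sum((Y - centre) ** 2, axis=1) <= radius ** 2
            failures += 0 if hit.any() else 1
    print("large disks missed by Y:", failures)              # expect: 0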

Now consider the situation in which X is an arbitrary finite set, rather than a discrete cube. More precisely, let f(\epsilon,d) be the least number such that every finite set X possesses at least one weak \epsilon-net (with respect to convex bodies) of cardinality at most f(\epsilon,d). (One can also replace the finite set X with an arbitrary probability measure; the two formulations are equivalent.) Informally, f is the least number of “guards” one needs to place to prevent a convex body from covering more than an \epsilon fraction of any given territory.

  • Problem 1: For fixed d, what is the correct rate of growth of f as \epsilon \to 0?

Read the rest of this entry »

This problem lies in the highly interconnected interface between algebraic combinatorics (esp. the combinatorics of Young tableaux and related objects, including honeycombs and puzzles), algebraic geometry (particularly classical and quantum intersection theory and geometric invariant theory), linear algebra (additive and multiplicative, real and tropical), and the representation theory (classical, quantum, crystal, etc.) of classical groups. (Another open problem in this subject is to find a succinct and descriptive name for the field.) I myself haven’t actively worked in this area for several years, but I still find it a fascinating and beautiful subject. (With respect to the dichotomy between structure and randomness, this subject lies deep within the “structure” end of the spectrum.)

As mentioned above, the problems in this area can be approached from a variety of quite diverse perspectives, but here I will focus on the linear algebra perspective, which is perhaps the most accessible. About nine years ago, Allen Knutson and I introduced a combinatorial gadget, called a honeycomb, which among other things controlled the relationship between the eigenvalues of two arbitrary Hermitian matrices A, B, and the eigenvalues of their sum A+B; this was not the first such gadget that achieved this purpose, but it was a particularly convenient one for studying this problem; in particular, it was used to resolve two conjectures in the subject, the saturation conjecture and the Horn conjecture. (These conjectures have since been proven by a variety of other methods.) There is a natural multiplicative version of these problems, which now relates the eigenvalues of two arbitrary unitary matrices U, V and the eigenvalues of their product UV; this led to the “quantum saturation” and “quantum Horn” conjectures, which were proven a couple of years ago. However, the quantum analogue of a “honeycomb” remains a mystery; this is the main topic of the current post.
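To make the linear algebra side of the problem concrete: the simplest of the eigenvalue inequalities that such gadgets control is the classical Weyl inequality \lambda_{i+j-1}(A+B) \leq \lambda_i(A) + \lambda_j(B) (eigenvalues arranged in decreasing order), which the following illustrative numpy snippet (not taken from any of the papers mentioned above) checks on random Hermitian matrices:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 6

    def random_hermitian(n):
        M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        return (M + M.conj().T) / 2

    A, B = random_hermitian(n), random_hermitian(n)

    # eigvalsh returns eigenvalues in increasing order; flip to decreasing
    lam = lambda M: np.linalg.eigvalsh(M)[::-1]
    a, b, c = lam(A), lam(B), lam(A + B)

    # Weyl: lambda_{i+j-1}(A+B) <= lambda_i(A) + lambda_j(B); with 0-based
    # indices this reads c[i+j] <= a[i] + b[j] whenever i + j <= n - 1.
    ok = all(c[i + j] <= a[i] + b[j] + 1e-12
             for i in range(n) for j in range(n) if i + j < n)
    print("Weyl inequalities hold:", ok)      # expect: True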

Read the rest of this entry »

For much of last week I was in Leiden, Holland, giving one of the Ostrowski prize lectures at the annual meeting of the Netherlands mathematical congress. My talk was not on the subject of the prize (arithmetic progressions in primes), as this was covered by a talk of Ben Green there, but rather on a certain “uniform uncertainty principle” in Fourier analysis, and its relation to compressed sensing; this is work which is joint with Emmanuel Candes and also partly with Justin Romberg.

Read the rest of this entry »

I’ve had a number of people ask me (especially in light of some recent publicity) exactly what “compressed sensing” means, and how a “single pixel camera” could possibly work (and how it might be advantageous over traditional cameras in certain circumstances). There is a large literature on the subject, but as the field is relatively recent, there does not yet appear to be a good non-technical introduction to the subject. So here’s my stab at the topic, which should hopefully be accessible to a non-mathematical audience.

For sake of concreteness I’ll primarily discuss the camera application, although compressed sensing is a more general measurement paradigm which is applicable to contexts other than imaging (e.g. astronomy, MRI, statistical selection, etc.), as I’ll briefly remark upon at the end of this post.
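For readers who want to experiment before reading on, the core reconstruction step can be played with in a few lines. The sketch below is purely illustrative, and certainly not the algorithm running in any actual camera: it recovers a sparse signal from a small number of random linear measurements by \ell^1 minimisation (basis pursuit), recast as a linear program.

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    n, m, k = 200, 60, 5                      # signal length, measurements, sparsity

    x_true = np.zeros(n)                      # a k-sparse "image"
    support = rng.choice(n, k, replace=False)
    x_true[support] = rng.standard_normal(k)

    A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
    b = A @ x_true                                 # the m numbers actually "sensed"

    # Basis pursuit: minimise ||x||_1 subject to Ax = b, written as a linear
    # program in the variables z = (x, u) with the constraints -u <= x <= u.
    c = np.concatenate([np.zeros(n), np.ones(n)])
    A_eq = np.hstack([A, np.zeros((m, n))])
    A_ub = np.vstack([np.hstack([np.eye(n), -np.eye(n)]),
                      np.hstack([-np.eye(n), -np.eye(n)])])
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * n), A_eq=A_eq, b_eq=b,
                  bounds=[(None, None)] * n + [(0, None)] * n)
    x_rec = res.x[:n]
    print("maximum recovery error:", np.abs(x_rec - x_true).max())

With these parameters the recovery is typically exact up to solver tolerance, even though the system Ax = b is badly underdetermined.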

Read the rest of this entry »

The main topic of this post has absolutely nothing to do with mathematics, except insofar as the well-known analogies between mathematics and music are concerned, but the recent article “Pearls before breakfast” in the Washington Post makes for fascinating reading. The article describes a very interesting experiment regarding the relationship between quality and context: what would happen if one took one of the finest and most renowned violinists in the world, and had him give a concert-quality performance while busking incognito at a busy subway station? The results… well, I won’t spoil them, but suffice it to say that the article is well worth the read.

One wonders whether one could conduct a similar experiment in mathematics. The main new difficulty, of course, is that good music can (in principle) be appreciated by any attentive and interested listener, but the same is much less true of good mathematics. But perhaps there is a substitute experiment which would also be revealing. For instance, in Feynman’s well-known book (in the chapter “Alfred Nobel’s other mistake”), one reads that, to avoid crowds of curiosity-seekers, the Nobel prize-winning physicist would sometimes give public lectures unannounced, as a last-minute substitute for the announced (and fictional) lecturer; but the published title and abstract (e.g. “On the structure of the proton”) would be absolutely accurate, and thus only draw in people who were interested in the subject matter and not in the lecturer himself. Indeed, the crowds were much reduced as a consequence.

[Update, April 25, 2008: The above article won the 2008 Pulitzer Prize for Feature Writing.]

[This lecture is also doubling as this week’s “open problem of the week”, as it (eventually) discusses the soliton resolution conjecture.]

In this third lecture, I will talk about how the dichotomy between structure and randomness pervades the study of two different types of partial differential equations (PDEs):

  • Parabolic PDE, such as the heat equation u_t = \Delta u, which turn out to play an important role in the modern study of geometric topology; and
  • Hamiltonian PDE, such as the Schrödinger equation u_t = i \Delta u, which are heuristically related (via Liouville’s theorem) to measure-preserving actions of the real line (or time axis) {\Bbb R}, somewhat in analogy to how combinatorial number theory and graph theory were related to measure-preserving actions of {\Bbb Z} and S_\infty respectively, as discussed in the previous lecture.

(In physics, one would also insert some physical constants, such as Planck’s constant \hbar, but for the discussion here it is convenient to normalise away all of these constants.)
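As a quick illustration of the contrast between the two classes (standard textbook material, included here for orientation rather than taken from the lecture): on the Fourier side, the solutions of the free heat and Schrödinger equations are given by the multipliers

\hat u(t,\xi) = e^{-t |\xi|^2} \hat u_0(\xi) \hbox{ and } \hat u(t,\xi) = e^{-it |\xi|^2} \hat u_0(\xi)

respectively. The heat multiplier damps the high frequencies exponentially fast, relentlessly creating structure (smoothness), whereas the Schrödinger multiplier has unit magnitude and merely rotates the phase of each frequency, destroying no information; the long-time behaviour of Hamiltonian PDE must therefore be understood through the interference of these rotating phases, rather than through any outright smoothing.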

Read the rest of this entry »

In this second lecture, I wish to talk about the dichotomy between structure and randomness as it manifests itself in four closely related areas of mathematics:

  • Combinatorial number theory, which seeks to find patterns in unstructured dense sets (or colourings) of integers;
  • Ergodic theory (or more specifically, multiple recurrence theory), which seeks to find patterns in positive-measure sets under the action of a discrete dynamical system on probability spaces (or more specifically, measure-preserving actions of the integers {\Bbb Z});
  • Graph theory, or more specifically the portion of this theory concerned with finding patterns in large unstructured dense graphs; and
  • Ergodic graph theory, a very new and undeveloped subject which, roughly speaking, seems to be concerned with the patterns within a measure-preserving action of the infinite permutation group S_\infty, which is one of several models we have available to study infinite “limits” of graphs.

The two “discrete” (or “finitary”, or “quantitative”) fields of combinatorial number theory and graph theory happen to be related to each other, basically by using the Cayley graph construction; I will give an example of this shortly. The two “continuous” (or “infinitary”, or “qualitative”) fields of ergodic theory and ergodic graph theory are at present only related on the level of analogy and informal intuition, but hopefully some more systematic connections between them will appear soon.
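Since the post is truncated here, a toy illustration of the Cayley graph construction may be helpful (the snippet is illustrative only, and is not the example given in the full post): given a set A of residues, form the graph on {\Bbb Z}/N{\Bbb Z} with connection set S = A \cup (-A); since (x-y) + (y-z) = (x-z), the triangles of this graph encode solutions to a + b = c with a, b, c \in S, converting an additive pattern into a graph-theoretic one.

    import numpy as np
    from itertools import combinations

    def cayley_graph(N, A):
        # Cayley graph on Z/NZ with symmetric connection set S = A u (-A):
        # join x to y whenever (x - y) mod N lies in S.
        S = {a % N for a in A} | {-a % N for a in A}
        S.discard(0)                          # no self-loops
        adj = np.zeros((N, N), dtype=bool)
        for x in range(N):
            for y in range(x + 1, N):
                adj[x, y] = adj[y, x] = (x - y) % N in S
        return adj, S

    N, A = 17, {1, 3, 4}
    adj, S = cayley_graph(N, A)

    # Each triangle {x, y, z} encodes a solution a + b = c with a, b, c in S,
    # since (x - y) + (y - z) = (x - z); additive patterns become subgraphs.
    triangles = [t for t in combinations(range(N), 3)
                 if adj[t[0], t[1]] and adj[t[1], t[2]] and adj[t[0], t[2]]]
    print(len(triangles), "triangles found")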

On the other hand, we have some very rigorous connections between combinatorial number theory and ergodic theory, and also (more recently) between graph theory and ergodic graph theory, basically by the procedure of viewing the infinitary continuous setting as a limit of the finitary discrete setting. These two connections go by the names of the Furstenberg correspondence principle and the graph correspondence principle respectively. These principles allow one to tap the power of the infinitary world (for instance, the ability to take limits and perform completions or closures of objects) in order to establish results in the finitary world, or at least to take the intuition gained in the infinitary world and transfer it to a finitary setting. Conversely, the finitary world provides an excellent model setting to refine one’s understanding of infinitary objects, for instance by establishing quantitative analogues of “soft” results obtained in an infinitary manner. I will remark here that this best-of-both-worlds approach, borrowing from both the finitary and infinitary traditions of mathematics, was absolutely necessary for Ben Green and me in order to establish our result on long arithmetic progressions in the primes. In particular, the infinitary setting is excellent for being able to rigorously define and study concepts (such as structure or randomness) which are much “fuzzier” and harder to pin down exactly in the finitary world.

Read the rest of this entry »

This week I am in Boston, giving this year’s Simons lectures at MIT together with David Donoho. (These lectures, incidentally, are endowed by Jim Simons, who was mentioned in some earlier discussion here.) While preparing these lectures, it occurred to me that I may as well post my lecture notes on this blog, since this medium is essentially just an asynchronous version of a traditional lecture series, and the hypertext capability is in some ways more convenient and informal than, say, \LaTeX slides.

I am giving three lectures, each expounding on some aspects of the theme “the dichotomy between structure and randomness”, which I also spoke about (and wrote about) for the ICM last August. This theme seems to pervade many of the areas of mathematics that I work in, and my lectures aim to explore how this theme manifests itself in several of these. In this, the first lecture, I describe the dichotomy as it appears in Fourier analysis and in number theory. (In the second, I discuss the dichotomy in ergodic theory and graph theory, while in the third, I discuss PDE.)

Read the rest of this entry »
