
Earlier this year, I gave a series of lectures at the Joint Mathematics Meetings in San Francisco. I am uploading here the slides for these talks:

I also have written a text version of the first talk, which has been submitted to the Notices of the American Mathematical Society.

My student, Jaume de Dios, has set up a web site to collect upcoming mathematics seminars from any institution that are open online.  (For instance, it has a talk that I will be giving in an hour.)   There is a form for adding further talks to the site; please feel free to contribute (or make other suggestions) in order to make the seminar list more useful.

UPDATE: Here are some other lists of mathematical seminars online:

Perhaps further links of this type could be added in the comments.  It would perhaps make sense to somehow unify these lists into a single one that can be updated through crowdsourcing.

EDIT: See also IPAM’s advice page on running virtual seminars.

At the most recent MSRI board of trustees meeting on Mar 7 (conducted online, naturally), Nicholas Jewell (a Professor of Biostatistics and Statistics at Berkeley, also affiliated with the Berkeley School of Public Health and the London School of Hygiene and Tropical Medicine), gave a presentation on the current coronavirus epidemic entitled “2019-2020 Novel Coronavirus outbreak: mathematics of epidemics, and what it can and cannot tell us”.  The presentation (updated with Mar 18 data), hosted by David Eisenbud (the director of MSRI), together with a question and answer session, is now on Youtube:

(I am on this board, but could not make it to this particular meeting; I caught up on the presentation later, and thought it would be of interest to several readers of this blog.)  While there is some mathematics in the presentation, it is relatively non-technical.

Last week, we had Peter Scholze give an interesting distinguished lecture series here at UCLA on “Prismatic Cohomology”, which is a new type of cohomology theory worked out by Scholze and Bhargav Bhatt. (Video of the talks will be available shortly; for now we have some notes taken by two notetakers in the audience on that web page.) My understanding of this (speaking as someone that is rather far removed from this area) is that it is progress towards the “motivic” dream of being able to define cohomology {H^i(X/\overline{A}, A)} for varieties {X} (or similar objects) defined over arbitrary commutative rings {\overline{A}}, and with coefficients in another arbitrary commutative ring {A}. Currently, we have various flavours of cohomology that only work for certain types of domain rings {\overline{A}} and coefficient rings {A}:

  • Singular cohomology, which roughly speaking works when the domain ring {\overline{A}} is a characteristic zero field such as {{\bf R}} or {{\bf C}}, but can allow for arbitrary coefficients {A};
  • de Rham cohomology, which roughly speaking works as long as the coefficient ring {A} is the same as the domain ring {\overline{A}} (or a homomorphic image thereof), as one can only talk about {A}-valued differential forms if the underlying space is also defined over {A};
  • {\ell}-adic cohomology, which is a remarkably powerful application of étale cohomology, but only works well when the coefficient ring {A = {\bf Z}_\ell} is localised around a prime {\ell} that is different from the characteristic {p} of the domain ring {\overline{A}}; and
  • Crystalline cohomology, in which the domain ring is a field {k} of some finite characteristic {p}, but the coefficient ring {A} can be a slight deformation of {k}, such as the ring of Witt vectors of {k}.

There are various relationships between the cohomology theories, for instance de Rham cohomology coincides with singular cohomology for smooth varieties in the limiting case {A=\overline{A} = {\bf R}}. The following picture Scholze drew in his first lecture captures these sorts of relationships nicely:

[photo of Scholze’s diagram relating the cohomology theories]

The new prismatic cohomology of Bhatt and Scholze unifies many of these cohomologies in the “neighbourhood” of the point {(p,p)} in the above diagram, in which the domain ring {\overline{A}} and the coefficient ring {A} are both thought of as being “close to characteristic {p}” in some sense, so that the dilates {pA, p\overline{A}} of these rings are either zero, or “small”. For instance, the {p}-adic ring {{\bf Z}_p} is technically of characteristic {0}, but {p {\bf Z}_p} is a “small” ideal of {{\bf Z}_p} (it consists of those elements of {{\bf Z}_p} of {p}-adic norm at most {1/p}), so one can think of {{\bf Z}_p} as being “close to characteristic {p}” in some sense. Scholze drew a “zoomed in” version of the previous diagram to informally describe the types of rings {A, \overline{A}} for which prismatic cohomology is effective:

[photo of Scholze’s “zoomed in” diagram near the point {(p,p)}]

To define prismatic cohomology rings {H^i_\Delta(X/\overline{A}, A)} one needs a “prism”: a ring homomorphism from {A} to {\overline{A}} equipped with a “Frobenius-like” endomorphism {\phi: A \to A} on {A} obeying some axioms. By tuning these homomorphisms one can recover existing cohomology theories like crystalline or de Rham cohomology as special cases of prismatic cohomology. These specialisations are analogous to how a prism splits white light into various individual colours, giving rise to the terminology “prismatic”, and depicted by this further diagram of Scholze:

[photo of Scholze’s prism diagram]

(And yes, Peter confirmed that he and Bhargav were inspired by the Dark Side of the Moon album cover in selecting the terminology.)

There was an abstract definition of prismatic cohomology (as being the essentially unique cohomology arising from prisms that obeyed certain natural axioms), but there was also a more concrete way to view them in terms of coordinates, as a “{q}-deformation” of de Rham cohomology. Whereas in de Rham cohomology one worked with derivative operators {d} that for instance applied to monomials {t^n} by the usual formula

\displaystyle d(t^n) = n t^{n-1} dt,

prismatic cohomology in coordinates can be computed using a “{q}-derivative” operator {d_q} that for instance applies to monomials {t^n} by the formula

\displaystyle d_q (t^n) = [n]_q t^{n-1} d_q t

where

\displaystyle [n]_q = \frac{q^n-1}{q-1} = 1 + q + \dots + q^{n-1}

is the “{q}-analogue” of {n} (a polynomial in {q} that equals {n} in the limit {q=1}). (The {q}-analogues become more complicated for more general forms than these.) In this more concrete setting, the fact that prismatic cohomology is independent of the choice of coordinates apparently becomes quite a non-trivial theorem.
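To make the {q}-analogue concrete, here is a minimal Python sketch (the helper names are my own, not notation from the lectures) checking that {[n]_q} recovers {n} in the limit {q \rightarrow 1}, so that the {q}-derivative of {t^n} degenerates to the usual derivative:

```python
from fractions import Fraction

def q_analogue(n, q):
    """[n]_q = 1 + q + ... + q^(n-1) = (q^n - 1)/(q - 1)."""
    return sum(q ** k for k in range(n))

def q_derivative_coeff(n, q):
    """Coefficient of t^(n-1) d_q t in d_q(t^n), namely [n]_q.

    Equivalently d_q f(t) = (f(qt) - f(t)) / (qt - t), which for f(t) = t^n
    gives (q^n - 1)/(q - 1) * t^(n-1).
    """
    return n if q == 1 else (q ** n - 1) / (q - 1)

print(q_analogue(5, 2))                 # [5]_2 = 1 + 2 + 4 + 8 + 16 = 31
q = 1 + Fraction(1, 10 ** 6)            # q close to 1
print(float(q_derivative_coeff(5, q)))  # close to the classical coefficient n = 5
```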

 

In July I will be spending a week at Park City, being one of the mini-course lecturers in the Graduate Summer School component of the Park City Summer Session on random matrices.  I have chosen to give some lectures on least singular values of random matrices, the circular law, and the Lindeberg exchange method in random matrix theory; this is a slightly different set of topics than I had initially advertised (which was instead about the Lindeberg exchange method and the local relaxation flow method), but after consulting with the other mini-course lecturers I felt that this would be a more complementary set of topics.  I have uploaded a draft of my lecture notes (some portion of which is derived from my monograph on the subject); as always, comments and corrections are welcome.

[Update, June 23: notes revised and reformatted to PCMI format. -T.]

 

[Update, Mar 19 2018: further revision. -T.]

Just a short post here to note that the cover story of this month’s Notices of the AMS, by John Friedlander, is about the recent work on bounded gaps between primes by Zhang, Maynard, our own Polymath project, and others.

I may as well take this opportunity to upload some slides of my own talks on this subject: here are my slides on small and large gaps between the primes that I gave at the “Latinos in the Mathematical Sciences” conference back in April, and here are my slides on the Polymath project for the Schock Prize symposium last October.  (I also gave an abridged version of the latter talk at an AAAS Symposium in February, as well as the Breakthrough Symposium from last November.)

Due to some requests, I’m uploading to my blog the slides for my recent talk in Segovia (for the birthday conference of Michael Cowling) on “Hilbert’s fifth problem and approximate groups”.  The slides cover essentially the same range of topics as this series of lecture notes, or this text of mine, though of course in considerably less detail, given that the slides are meant to be presented in an hour.

This is a blog version of a talk I recently gave at the IPAM workshop on “The Kakeya Problem, Restriction Problem, and Sum-product Theory”.

Note: the discussion here will be highly non-rigorous in nature, being extremely loose in particular with asymptotic notation and with the notion of dimension. Caveat emptor.

One of the most infamous unsolved problems at the intersection of geometric measure theory, incidence combinatorics, and real-variable harmonic analysis is the Kakeya set conjecture. We will focus on the following three-dimensional case of the conjecture, stated informally as follows:

Conjecture 1 (Kakeya conjecture) Let {E} be a subset of {{\bf R}^3} that contains a unit line segment in every direction. Then {\hbox{dim}(E) = 3}.

This conjecture is not precisely formulated here, because we have not specified exactly what type of set {E} is (e.g. measurable, Borel, compact, etc.) and what notion of dimension we are using. We will deliberately ignore these technical details in this post. It is slightly more convenient for us here to work with lines instead of unit line segments, so we work with the following slight variant of the conjecture (which is essentially equivalent):

Conjecture 2 (Kakeya conjecture, again) Let {{\cal L}} be a family of lines in {{\bf R}^3} that meet {B(0,1)} and contain a line in each direction. Let {E} be the union of the restriction {\ell \cap B(0,2)} to {B(0,2)} of every line {\ell} in {{\cal L}}. Then {\hbox{dim}(E) = 3}.

As the space of all directions in {{\bf R}^3} is two-dimensional, we thus see that {{\cal L}} is an (at least) two-dimensional subset of the four-dimensional space of lines in {{\bf R}^3} (actually, it lies in a compact subset of this space, since we have constrained the lines to meet {B(0,1)}). One could then ask if this is the only property of {{\cal L}} that is needed to establish the Kakeya conjecture, that is to say if any subset of {B(0,2)} which contains a two-dimensional family of lines (restricted to {B(0,2)}, and meeting {B(0,1)}) is necessarily three-dimensional. Here we have an easy counterexample, namely a plane in {B(0,2)} (passing through the origin), which contains a two-dimensional collection of lines. However, we can exclude this case by adding an additional axiom, leading to what one might call a “strong” Kakeya conjecture:

Conjecture 3 (Strong Kakeya conjecture) Let {{\cal L}} be a two-dimensional family of lines in {{\bf R}^3} that meet {B(0,1)}, and assume the Wolff axiom that no (affine) plane contains more than a one-dimensional family of lines in {{\cal L}}. Let {E} be the union of the restriction {\ell \cap B(0,2)} of every line {\ell} in {{\cal L}}. Then {\hbox{dim}(E) = 3}.

Actually, to make things work out we need a more quantitative version of the Wolff axiom in which we constrain the metric entropy (and not just dimension) of lines that lie close to a plane, rather than exactly on the plane. However, for the informal discussion here we will ignore these technical details. Families of lines that point in different directions will obey the Wolff axiom, but the converse is not true in general.

In 1995, Wolff established the important lower bound {\hbox{dim}(E) \geq 5/2} (for various notions of dimension, e.g. Hausdorff dimension) for sets {E} in Conjecture 3 (and hence also for the other forms of the Kakeya problem). However, there is a key obstruction to going beyond the {5/2} barrier, coming from the possible existence of half-dimensional (approximate) subfields of the reals {{\bf R}}. To explain this problem, it is easiest to first discuss the complex version of the strong Kakeya conjecture, in which all relevant (real) dimensions are doubled:

Conjecture 4 (Strong Kakeya conjecture over {{\bf C}}) Let {{\cal L}} be a four (real) dimensional family of complex lines in {{\bf C}^3} that meet the unit ball {B(0,1)} in {{\bf C}^3}, and assume the Wolff axiom that no four (real) dimensional (affine) subspace contains more than a two (real) dimensional family of complex lines in {{\cal L}}. Let {E} be the union of the restriction {\ell \cap B(0,2)} of every complex line {\ell} in {{\cal L}}. Then {E} has real dimension {6}.

The argument of Wolff can be adapted to the complex case to show that all sets {E} occurring in Conjecture 4 have real dimension at least {5}. Unfortunately, this is sharp, due to the following fundamental counterexample:

Proposition 5 (Heisenberg group counterexample) Let {H \subset {\bf C}^3} be the Heisenberg group

\displaystyle  H = \{ (z_1,z_2,z_3) \in {\bf C}^3: \hbox{Im}(z_1) = \hbox{Im}(z_2 \overline{z_3}) \}

and let {{\cal L}} be the family of complex lines

\displaystyle  \ell_{s,t,\alpha} := \{ (\overline{\alpha} z + t, z, sz + \alpha): z \in {\bf C} \}

with {s,t \in {\bf R}} and {\alpha \in {\bf C}}. Then {H} is a five (real) dimensional subset of {{\bf C}^3} that contains every line in the four (real) dimensional set {{\cal L}}; however each four real dimensional (affine) subspace contains at most a two (real) dimensional set of lines in {{\cal L}}. In particular, the strong Kakeya conjecture over the complex numbers is false.

This proposition is proven by a routine computation, which we omit here. The group structure on {H} is given by the group law

\displaystyle  (z_1,z_2,z_3) \cdot (w_1,w_2,w_3) = (z_1 + w_1 + z_2 \overline{w_3} - z_3 \overline{w_2}, z_2 +w_2, z_3+w_3),

giving {H} the structure of a {2}-step simply-connected nilpotent Lie group, isomorphic to the usual Heisenberg group over {{\bf R}^2}. Note that while the Heisenberg group is a counterexample to the complex strong Kakeya conjecture, it is not a counterexample to the complex form of the original Kakeya conjecture, because the complex lines {{\cal L}} in the Heisenberg counterexample do not point in distinct directions, but instead only point in a three (real) dimensional subset of the four (real) dimensional space of available directions for complex lines. For instance, one has the one real-dimensional family of parallel lines

\displaystyle  \ell_{0,t,0} = \{ (t, z, 0): z \in {\bf C}\}

with {t \in {\bf R}}; multiplying this family of lines on the right by a group element in {H} gives other families of parallel lines, which in fact sweep out all of {{\cal L}}.
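The “routine computation” behind Proposition 5 can at least be spot-checked numerically. Here is a minimal Python sketch (the helper names are mine) that verifies, at random sample points, that every line {\ell_{s,t,\alpha}} lies in {H}, and that {H} is closed under the group law stated above:

```python
import random

def in_H(p):
    """Membership test for H = {(z1,z2,z3): Im(z1) = Im(z2 * conj(z3))}."""
    z1, z2, z3 = p
    return abs(z1.imag - (z2 * z3.conjugate()).imag) < 1e-9

def line_point(s, t, alpha, z):
    """The point of the complex line ell_{s,t,alpha} with parameter z."""
    return (alpha.conjugate() * z + t, z, s * z + alpha)

def mul(p, q):
    """The group law on H stated in the post."""
    (z1, z2, z3), (w1, w2, w3) = p, q
    return (z1 + w1 + z2 * w3.conjugate() - z3 * w2.conjugate(),
            z2 + w2, z3 + w3)

random.seed(0)
rc = lambda: complex(random.uniform(-1, 1), random.uniform(-1, 1))
for _ in range(100):
    s, t = random.uniform(-1, 1), random.uniform(-1, 1)
    # every point of every line ell_{s,t,alpha} lies in H...
    assert in_H(line_point(s, t, rc(), rc()))
    # ...and H is closed under the group law
    assert in_H(mul(line_point(s, t, rc(), rc()), line_point(-t, s, rc(), rc())))
print("all checks pass")
```

(For instance, if {p = (\overline{\alpha} z + t, z, sz+\alpha)} then {\hbox{Im}(z_2 \overline{z_3}) = \hbox{Im}(z \cdot (s\overline{z} + \overline{\alpha})) = \hbox{Im}(\overline{\alpha} z) = \hbox{Im}(z_1)}, which is what the first assertion samples.)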

The Heisenberg counterexample ultimately arises from the “half-dimensional” (and hence degree two) subfield {{\bf R}} of {{\bf C}}, which induces an involution {z \mapsto \overline{z}} which can then be used to define the Heisenberg group {H} through the formula

\displaystyle  H = \{ (z_1,z_2,z_3) \in {\bf C}^3: z_1 - \overline{z_1} = z_2 \overline{z_3} - z_3 \overline{z_2} \}.

Analogous Heisenberg counterexamples can also be constructed if one works over finite fields {{\bf F}_{q^2}} that contain a “half-dimensional” subfield {{\bf F}_q}; we leave the details to the interested reader. Morally speaking, if {{\bf R}} in turn contained a subfield of dimension {1/2} (or even a subring or “approximate subring”), then one ought to be able to use this field to generate a counterexample to the strong Kakeya conjecture over the reals. Fortunately, such subfields do not exist; this was a conjecture of Erdős and Volkmann that was proven by Edgar and Miller, and more quantitatively by Bourgain (answering a question of Nets Katz and myself). However, this fact is not entirely trivial to prove, being a key example of the sum-product phenomenon.
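As a toy illustration of the sum-product phenomenon just mentioned (this is only the finite-set heuristic, not Bourgain’s discretised estimate; the code and names are my own sketch): an arithmetic progression has the smallest possible sumset but a large product set, while a geometric progression behaves in the opposite way, and the sum-product phenomenon asserts that no set can be small in both respects at once:

```python
def sums_and_products(A):
    """Count distinct pairwise sums and products of a finite list A."""
    sums = {a + b for a in A for b in A}
    prods = {a * b for a in A for b in A}
    return len(sums), len(prods)

N = 50
ap = list(range(1, N + 1))        # arithmetic progression: additively structured
gp = [2 ** k for k in range(N)]   # geometric progression: multiplicatively structured

s_ap, p_ap = sums_and_products(ap)
s_gp, p_gp = sums_and_products(gp)
print(s_ap, p_ap)   # sumset is as small as possible (2N-1 = 99); product set is much larger
print(s_gp, p_gp)   # product set is as small as possible (2N-1 = 99); sumset is much larger
```

An (approximate) subring of intermediate dimension would be a set that defeats both counts simultaneously; the sum-product theorems cited above rule this out.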

We thus see that to go beyond the {5/2} dimension bound of Wolff for the 3D Kakeya problem over the reals, one must do at least one of two things:

  • (a) Exploit the distinct directions of the lines in {{\mathcal L}} in a way that goes beyond the Wolff axiom; or
  • (b) Exploit the fact that {{\bf R}} does not contain half-dimensional subfields (or more generally, intermediate-dimensional approximate subrings).

(The situation is more complicated in higher dimensions, as there are more obstructions than the Heisenberg group; for instance, in four dimensions quadric surfaces are an important obstruction, as discussed in this paper of mine.)

Various partial or complete results on the Kakeya problem over various fields have been obtained through route (a) or route (b). For instance, in 2000, Nets Katz, Izabella Laba and myself used route (a) to improve Wolff’s lower bound of {5/2} for Kakeya sets very slightly to {5/2+10^{-10}} (for a weak notion of dimension, namely upper Minkowski dimension). In 2004, Bourgain, Katz, and myself established a sum-product estimate which (among other things) ruled out approximate intermediate-dimensional subrings of {{\bf F}_p}, and then pursued route (b) to obtain a corresponding improvement {5/2+\epsilon} to the Kakeya conjecture over finite fields of prime order. The analogous (discretised) sum-product estimate over the reals was established by Bourgain in 2003, which in principle would allow one to extend the result of Katz, Laba and myself to the strong Kakeya setting, but this has not been carried out in the literature. Finally, in 2009, Dvir used route (a) and introduced the polynomial method (as discussed previously here) to completely settle the Kakeya conjecture in finite fields.

Below the fold, I present a heuristic argument of Nets Katz and myself, which in principle would use route (b) to establish the full (strong) Kakeya conjecture. In broad terms, the strategy is as follows:

  1. Assume that the (strong) Kakeya conjecture fails, so that there are sets {E} of the form in Conjecture 3 of dimension {3-\sigma} for some {\sigma>0}. Assume that {E} is “optimal”, in the sense that {\sigma} is as large as possible.
  2. Use the optimality of {E} (and suitable non-isotropic rescalings) to establish strong forms of standard structural properties expected of such sets {E}, namely “stickiness”, “planiness”, “local graininess” and “global graininess” (we will roughly describe these properties below). Heuristically, these properties are constraining {E} to “behave like” a putative Heisenberg group counterexample.
  3. By playing all these structural properties off of each other, show that {E} can be parameterised locally by a one-dimensional set which generates a counterexample to Bourgain’s sum-product theorem. This contradiction establishes the Kakeya conjecture.

Nets and I have had an informal version of this argument for many years, but were never able to make a satisfactory theorem (or even a partial Kakeya result) out of it, because we could not rigorously establish anywhere near enough of the necessary structural properties (stickiness, planiness, etc.) on the optimal set {E} for a large number of reasons (one of which being that we did not have a good notion of dimension that did everything that we wished to demand of it). However, there is beginning to be movement in these directions (e.g. in this recent result of Guth using the polynomial method obtaining a weak version of local graininess on certain Kakeya sets). In view of this (and given that neither Nets nor I have been actively working in this direction for some time now, due to many other projects), we’ve decided to distribute these ideas more widely than before, and in particular on this blog.


(This is an extended blog post version of my talk “Ultraproducts as a Bridge Between Discrete and Continuous Analysis” that I gave at the Simons Institute for the Theory of Computing at the workshop “Neo-Classical methods in discrete analysis”. Some of the material here is drawn from previous blog posts, notably “Ultraproducts as a bridge between hard analysis and soft analysis” and “Ultralimit analysis and quantitative algebraic geometry”. The text here has substantially more details than the talk; one may wish to skip all of the proofs given here to obtain a closer approximation to the original talk.)

Discrete analysis, of course, is primarily interested in the study of discrete (or “finitary”) mathematical objects: integers, rational numbers (which can be viewed as ratios of integers), finite sets, finite graphs, finite or discrete metric spaces, and so forth. However, many powerful tools in mathematics (e.g. ergodic theory, measure theory, topological group theory, algebraic geometry, spectral theory, etc.) work best when applied to continuous (or “infinitary”) mathematical objects: real or complex numbers, manifolds, algebraic varieties, continuous topological or metric spaces, etc. In order to apply results and ideas from continuous mathematics to discrete settings, there are basically two approaches. One is to directly discretise the arguments used in continuous mathematics, which often requires one to keep careful track of all the bounds on various quantities of interest, particularly with regard to various error terms arising from discretisation which would otherwise have been negligible in the continuous setting. The other is to construct continuous objects as limits of sequences of discrete objects of interest, so that results from continuous mathematics may be applied (often as a “black box”) to the continuous limit, which then can be used to deduce consequences for the original discrete objects which are quantitative (though often ineffectively so). The latter approach is the focus of this current talk.

The following table gives some examples of a discrete theory and its continuous counterpart, together with a limiting procedure that might be used to pass from the former to the latter:

(Discrete) | (Continuous) | (Limit method)
Ramsey theory | Topological dynamics | Compactness
Density Ramsey theory | Ergodic theory | Furstenberg correspondence principle
Graph/hypergraph regularity | Measure theory | Graph limits
Polynomial regularity | Linear algebra | Ultralimits
Structural decompositions | Hilbert space geometry | Ultralimits
Fourier analysis | Spectral theory | Direct and inverse limits
Quantitative algebraic geometry | Algebraic geometry | Schemes
Discrete metric spaces | Continuous metric spaces | Gromov-Hausdorff limits
Approximate group theory | Topological group theory | Model theory

As the above table illustrates, there are a variety of different ways to form a limiting continuous object. Roughly speaking, one can divide limits into three categories:

  • Topological and metric limits. These notions of limits are commonly used by analysts. Here, one starts with a sequence (or perhaps a net) of objects {x_n} in a common space {X}, which one then endows with the structure of a topological space or a metric space, by defining a notion of distance between two points of the space, or a notion of open neighbourhoods or open sets in the space. Provided that the sequence or net is convergent, this produces a limit object {\lim_{n \rightarrow \infty} x_n}, which remains in the same space, and is “close” to many of the original objects {x_n} with respect to the given metric or topology.
  • Categorical limits. These notions of limits are commonly used by algebraists. Here, one starts with a sequence (or more generally, a diagram) of objects {x_n} in a category {X}, which are connected to each other by various morphisms. If the ambient category is well-behaved, one can then form the direct limit {\varinjlim x_n} or the inverse limit {\varprojlim x_n} of these objects, which is another object in the same category {X}, and is connected to the original objects {x_n} by various morphisms.
  • Logical limits. These notions of limits are commonly used by model theorists. Here, one starts with a sequence of objects {x_{\bf n}} or of spaces {X_{\bf n}}, each of which is (a component of) a model for a given (first-order) mathematical language (e.g. if one is working in the language of groups, {X_{\bf n}} might be groups and {x_{\bf n}} might be elements of these groups). By using devices such as the ultraproduct construction, or the compactness theorem in logic, one can then create a new object {\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}} or a new space {\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}}, which is still a model of the same language (e.g. if the spaces {X_{\bf n}} were all groups, then the limiting space {\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}} will also be a group), and is “close” to the original objects or spaces in the sense that any assertion (in the given language) that is true for the limiting object or space, will also be true for many of the original objects or spaces, and conversely. (For instance, if {\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}} is an abelian group, then the {X_{\bf n}} will also be abelian groups for many {{\bf n}}.)

The purpose of this talk is to highlight the third type of limit, and specifically the ultraproduct construction, as being a “universal” limiting procedure that can be used to replace most of the limits previously mentioned. Unlike the topological or metric limits, one does not need the original objects {x_{\bf n}} to all lie in a common space {X} in order to form an ultralimit {\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}}; they are permitted to lie in different spaces {X_{\bf n}}; this is more natural in many discrete contexts, e.g. when considering graphs on {{\bf n}} vertices in the limit when {{\bf n}} goes to infinity. Also, no convergence properties on the {x_{\bf n}} are required in order for the ultralimit to exist. Similarly, ultraproduct limits differ from categorical limits in that no morphisms between the various spaces {X_{\bf n}} involved are required in order to construct the ultraproduct.

With so few requirements on the objects {x_{\bf n}} or spaces {X_{\bf n}}, the ultraproduct construction is necessarily a very “soft” one. Nevertheless, the construction has two very useful properties which make it particularly suited for the purpose of extracting good continuous limit objects out of a sequence of discrete objects. First of all, there is Łoś’s theorem, which roughly speaking asserts that any first-order sentence which is asymptotically obeyed by the {x_{\bf n}}, will be exactly obeyed by the limit object {\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}}; in particular, one can often take a discrete sequence of “partial counterexamples” to some assertion, and produce a continuous “complete counterexample” to that same assertion via an ultraproduct construction; taking the contrapositives, one can often then establish a rigorous equivalence between a quantitative discrete statement and its qualitative continuous counterpart. Secondly, there is the countable saturation property that ultraproducts automatically enjoy, which is a property closely analogous to that of compactness in topological spaces, and can often be used to ensure that the continuous objects produced by ultraproduct methods are “complete” or “compact” in various senses, which is particularly useful in being able to upgrade qualitative (or “pointwise”) bounds to quantitative (or “uniform”) bounds, more or less “for free”, thus reducing significantly the burden of “epsilon management” (although the price one pays for this is that one needs to pay attention to which mathematical objects of study are “standard” and which are “nonstandard”).
To achieve this compactness or completeness, one sometimes has to restrict to the “bounded” portion of the ultraproduct, and it is often also convenient to quotient out the “infinitesimal” portion in order to complement these compactness properties with a matching “Hausdorff” property, thus creating familiar examples of continuous spaces, such as locally compact Hausdorff spaces.
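In symbols, the theorem of Łoś mentioned above (also known as the fundamental theorem of ultraproducts) asserts that for any first-order formula {\phi} in the given language, one has

\displaystyle  \prod_{{\bf n} \rightarrow \alpha} X_{\bf n} \models \phi( \lim_{{\bf n} \rightarrow \alpha} x_{\bf n} ) \iff \{ {\bf n}: X_{\bf n} \models \phi( x_{\bf n} ) \} \in \alpha,

where {\alpha} is viewed as an ultrafilter, so that membership in {\alpha} formalises the phrase “for many {{\bf n}}” used informally above.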

Ultraproducts are not the only logical limit in the model theorist’s toolbox, but they are one of the simplest to set up and use, and already suffice for many of the applications of logical limits outside of model theory. In this post, I will set out the basic theory of these ultraproducts, and illustrate how they can be used to pass between discrete and continuous theories in each of the examples listed in the above table.

Apart from the initial “one-time cost” of setting up the ultraproduct machinery, the main loss one incurs when using ultraproduct methods is that it becomes very difficult to extract explicit quantitative bounds from results that are proven by transferring qualitative continuous results to the discrete setting via ultraproducts. However, in many cases (particularly those involving regularity-type lemmas) the bounds are already of tower-exponential type or worse, and there is arguably not much to be lost by abandoning the explicit quantitative bounds altogether.


Last week I gave a talk at the Trinity Mathematical Society at Trinity College, Cambridge UK.  As the audience was primarily undergraduate, I gave a fairly non-technical talk on the universality phenomenon, based on this blog article of mine on the same topic.  It was a quite light and informal affair, and this is reflected in the talk slides (which, in particular, play up quite strongly the role of former students and Fellows of Trinity College in this story).   There was some interest in making these slides available publicly, so I have placed them on this site here.  (Note: copyright for the images in these slides has not been secured.)
