
I'm studying symmetric functions and I came across a doubt that may be a silly one, but I need some clarification.

In the course I'm following we introduced symmetric functions as formal series of monomials that are invariant under permutations of the (countably many) variable indices. So, this means that if $x_1x_4^2x_5^3$ is present in the sum that defines a symmetric function $f$, then $f$ must contain every monomial obtained by permuting the indices; for example, since some permutation of $\mathbb{N}$ sends $1 \mapsto 100$, $4 \mapsto 202$, $5 \mapsto 1$, the monomial $x_{100}x_{202}^2x_1^3$ must also be in the sum that defines $f$.

This links well with the definition we gave of the monomial symmetric functions associated with a partition of a positive integer, $\lambda \vdash n$. For example, $m_{(2,1)} = \sum_{i \neq j} x_ix_j^2$. Note that there is no correlation here between the number of variables and $n=3$, since in this setting the variables form a countably infinite set.
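
To convince myself of this, here is a small sketch I put together (sympy; the code and the name `m21` are just mine, not from the course) of $m_{(2,1)}$ truncated to the first $N$ variables; nothing in the definition ties $N$ to $n = 3$:

```python
# Sketch: m_{(2,1)} truncated to the first N variables, for any N.
from sympy import symbols, Add

def m21(N):
    x = symbols(f"x1:{N + 1}")  # x1, ..., xN
    return Add(*[x[i] * x[j]**2 for i in range(N) for j in range(N) if i != j])

print(m21(3))  # 6 terms in x1, x2, x3
print(m21(5))  # 20 terms: the "same" function, just with more variables showing
```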

Now, this carries over to the combinatorial definition of Schur functions: for a partition $\lambda \vdash n$, we define the Schur function $s_\lambda$ as the sum of the monomials $x^T$ over all SSYT $T$ of shape $\lambda$, where $x^T = \prod_i x_i^{\alpha_i}$ and $\alpha_i$ is the number of times $i$ appears in a cell of the tableau.

Now, there is no indication whatsoever that the cells must be filled with entries from some bounded set: for instance, if $\lambda = (2,1)$, I could consider the SSYT $T$ that has $1000, 2000$ in its first row and $3000$ in its second one. This would mean that $x^T = x_{1000}x_{2000}x_{3000}$, and this term must appear in the formal series of $s_\lambda$.
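
To see what the series looks like when the entries are artificially bounded, here is another sketch of mine (again sympy; the bound $m$ and the function name are just mine for illustration) that lists the SSYT of shape $(2,1)$ with entries at most $m$ and sums the corresponding monomials $x^T$:

```python
# Sketch: sum of x^T over SSYT of shape (2,1) with entries at most m.
# Shape (2,1): first row (a, b) with a <= b, second row (c) with a < c.
from sympy import symbols, Add, expand

def schur_21_truncated(m):
    x = symbols(f"x1:{m + 1}")
    terms = []
    for a in range(1, m + 1):
        for b in range(a, m + 1):          # rows weakly increase
            for c in range(a + 1, m + 1):  # columns strictly increase
                terms.append(x[a - 1] * x[b - 1] * x[c - 1])
    return expand(Add(*terms))

print(schur_21_truncated(3))  # 8 tableaux; matches the 3-variable expansion below
```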

Now, if we instead use the classical definition of Schur functions as a ratio of determinants, these are evaluated using a finite set of variables whose size depends on the partition I'm considering and on its number of parts. For example, using $\lambda=(2,1)$ as before we obtain (I'll skip the calculations) $s_\lambda = x_1^2x_2 + x_1x_2^2 + x_1^2x_3 + x_1x_3^2 + x_2x_3^2 + x_2^2x_3 + 2x_1x_2x_3$. This again is pretty clear to me. I have one doubt about it: since the Schur function is formally defined here as $s_\lambda = \frac{a_{\lambda + \delta}}{a_\delta}$, I am confused as to how the sum $\lambda + \delta$ is defined if $\delta$ is longer than $\lambda$. For example, if $\lambda=(3,1)$ and $\delta=(3,2,1)$, is the relation $\lambda + \delta = (6,3,1)$ true? Am I getting confused?
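
As a sanity check, here is one more sketch of mine (sympy; I'm assuming the convention $\delta = (k-1, k-2, \dots, 1, 0)$ in $k$ variables with $\lambda$ padded by zeros to length $k$, and the function name is just mine):

```python
# Sketch: the bialternant formula s_lambda = a_{lambda+delta} / a_delta in k variables.
from sympy import symbols, Matrix, cancel, expand

def schur_bialternant(lam, k):
    x = symbols(f"x1:{k + 1}")
    lam = list(lam) + [0] * (k - len(lam))   # pad lambda with zeros to length k
    delta = list(range(k - 1, -1, -1))       # delta = (k-1, k-2, ..., 1, 0)
    a = lambda mu: Matrix(k, k, lambda i, j: x[i] ** mu[j]).det()
    numer = a([p + d for p, d in zip(lam, delta)])  # a_{lambda + delta}
    return expand(cancel(numer / a(delta)))         # divide by the Vandermonde a_delta

print(schur_bialternant((2, 1), 3))  # the eight-term polynomial written above
print(schur_bialternant((3, 1), 4))  # exponents: (3,1,0,0) + (3,2,1,0) = (6,3,1,0)
```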

Now, back to the number of variables: when we say the two definitions are equivalent, I'm lost, since clearly $x^T = x_{1000}x_{2000}x_{3000}$ is not a term of the sum we obtained with the classical definition. How can they be equivalent? Maybe we should consider just the terms that only involve the first $n$ variables? For instance, in the $s_\lambda$ example above, should we drop from the combinatorial series all the terms involving variables with index greater than $3$? Aren't these then just polynomials as opposed to functions?

Thank you in advance.


1 Answer


I'm not sure I've understood your question in detail, but there's a naming convention in symmetric function theory where symmetric functions are named in a way that is independent of the number of variables, and that might be throwing you off.

To my mind the easiest way to have this discussion is using the elementary symmetric polynomials $e_i$. First we consider the elementary symmetric functions in countably many variables, namely

$$e_1 = x_1 + x_2 + x_3 + \dots $$ $$e_2 = x_1 x_2 + x_1 x_3 + x_2 x_3 + x_1 x_4 + x_2 x_4 + \dots $$ $$e_3 = x_1 x_2 x_3 + x_1 x_2 x_4 + x_1 x_3 x_4 + \dots $$

where we take the sum over all squarefree monomials of the appropriate degree in countably many variables $x_i$, in a suitable completion of the polynomial algebra $K[x_i]$. The ring of symmetric functions in countably many variables is the polynomial algebra $K[e_i]$ on the $e_i$, or possibly some completion of this allowing certain infinite sums; it doesn't matter for the purposes of this question.

Sometimes when the terminology "symmetric function" is being used rather than "symmetric polynomial" it is referring to an element of this ring, since strictly speaking these are not polynomials in the $x_i$. However, I don't know how common this convention is.

For any particular $N$, the symmetric polynomials in $N$ variables $x_1, \dots x_N$ are given by setting all the other variables in the above expressions equal to zero. This has the effect of setting $e_{N+1}, e_{N+2}, \dots $ equal to zero but does not affect the lower elementary symmetric polynomials; the result is the familiar ring $K[e_1, \dots e_N]$ of symmetric polynomials in $N$ variables.
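
Concretely, a minimal sketch of this specialization (sympy; just an illustration of the statement above):

```python
# Sketch: setting x4 = 0 sends each 4-variable e_i to its 3-variable counterpart
# and kills e_4, i.e. the "extra" elementary symmetric polynomial becomes zero.
from itertools import combinations
from sympy import symbols, Add, Mul, expand

def e(i, xs):
    # i-th elementary symmetric polynomial: sum of all squarefree degree-i monomials
    return Add(*[Mul(*c) for c in combinations(xs, i)])

x1, x2, x3, x4 = symbols("x1 x2 x3 x4")

print(expand(e(2, [x1, x2, x3, x4]).subs(x4, 0) - e(2, [x1, x2, x3])) == 0)  # True
print(e(4, [x1, x2, x3, x4]).subs(x4, 0))                                    # 0
```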

This sequence $K[e_1, \dots e_N]$ of symmetric polynomials in $N$ variables "converges" to the ring of symmetric functions in a suitable sense as $N \to \infty$ (more precisely we get a completion allowing certain infinite sums if we take the cofiltered (inverse) limit over the collection of quotient maps above obtained by setting variables equal to zero). Moreover this sequence also "stabilizes" in the sense that once $N \ge n$ then the polynomials $e_1, \dots e_n$ make sense and have the same relationships to each other regardless of how many more variables you add.

This is why we use notation for them that does not explicitly name the number of variables involved: we are giving ourselves the freedom to add more variables arbitrarily and still call the result the "same" symmetric function. This discussion happened in the $e_{\lambda}$ basis, but essentially the same holds for the other common ones: they all stabilize to a basis (in a suitable sense) of the full ring of symmetric functions. So we freely identify the symmetric polynomial $e_i \in K[e_1, \dots e_N]$, once $N \ge i$, with the symmetric function $e_i \in K[e_1, \dots ]$, and similarly for all the other symmetric functions.

  • First of all, this was extremely helpful, so thank you. So we can essentially work with as many variables as we want, and if we reduce the number of variables all the facts about symmetric functions still stand: for example the Schur functions being a basis, all the stuff regarding the automorphism $\omega$ and the scalar product that makes the Schur basis orthonormal, and so on... Can this relationship be explained, for example in the Schur function case, by saying that if we consider a partition $\lambda \vdash n$, we can fill the SSYT with numbers that are at most equal to $n$? Commented Jun 26 at 10:27
  • Also, is what I wrote regarding partitions of different lengths true? Is the sum evaluated in that way? Commented Jun 26 at 10:29
  • @Marco: it's been a while since I thought about Schur functions, but yes, I think the general pattern is that $s_{\lambda}$ makes sense as soon as the number of variables is at least $n$, but it can be anything larger (including countable). The determinant definition is not well-suited to seeing that there is something "stable" going on as the number of variables goes to infinity. And yes, the general pattern is that you pad the partition with zeroes as necessary. Commented Jun 26 at 16:06
