Café Math : Grassmannians

Hi, today is my birthday: I'm 32 years old. It's been more than a year since my last post on this blog. For this new post I would like to talk about grassmannian varieties and the Schubert cell decomposition of those varieties. Along the way we are going to talk about quantum binomial coefficients, Young diagrams and some related pretty combinatorics.

The subject of grassmannian varieties is the classification of the subspaces of a given dimension $k$ in a vector space of dimension $n$. In fancy language we say that the grassmannian variety $\mathrm{Gr}(k,n)$ is the moduli space of those subspaces. Topologically, if we take vector spaces over a topological field such as $\mathbb{R}$ or $\mathbb{C}$, this is a manifold of finite dimension, and since it can be defined by polynomial equations, it is even an algebraic variety.

Example 1. The projective $n$-space $\mathbb{P}^n$ is an example of grassmannian variety, as it is the moduli space of lines in a vector space of dimension $n+1$, that is, \begin{align} \mathbb{P}^n = \mathrm{Gr}(1,n+1) \end{align}

What follows isn't special to a particular field, so let's forget about it for a moment; just keep in mind that it could be any field, ranging from the usual $\mathbb{R}$ or $\mathbb{C}$ to finite fields like $\mathbb{F}_q$, $p$-adic fields and even the mysterious one element field $\mathbb{F}_1$ (more on that later). So let $V$ be a vector space of dimension $n$. We are to classify the subspaces of dimension $k$. The question is what precisely we call a subspace of dimension $k$. An idea that comes to mind in order to reify that notion is to consider a vector space $U$ of dimension $k$ together with an injective inclusion map $u : U \to V$. The problem with that idea is that what we are really interested in is the image of the inclusion map $u$, and there can be many inclusion maps $u' : U \to V$ with exactly the same image as $u$. Two such maps differ by an isomorphism $a$ of their source $U$: that is, there is an isomorphism $a : U \to U$ such that $u' = u \circ a$.

By the way, grassmannians are not in general plain projective spaces, far from it; instead they are subvarieties of projective spaces. This is because Plücker coordinates are related by special relations called Plücker relations. Indeed, it is not surprising that the partial determinants we've considered are constrained by relations. Those relations are in fact homogeneous polynomial relations, so the grassmannians are projective varieties.
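To make this concrete, here is a small sketch in Python (the helper names are mine). It computes the Plücker coordinates of a $2$-dimensional subspace of a $4$-dimensional space as the $2 \times 2$ minors of a $4 \times 2$ matrix, and checks the single Plücker relation that cuts $\mathrm{Gr}(2,4)$ out of $\mathbb{P}^5$, namely $p_{12}p_{34} - p_{13}p_{24} + p_{14}p_{23} = 0$.

```python
from itertools import combinations
import random

def plucker_coordinates(u, v):
    """The 2x2 minors p_{ij} of the 4x2 matrix with columns u and v."""
    return {(i, j): u[i] * v[j] - u[j] * v[i]
            for i, j in combinations(range(4), 2)}

# A 2-dimensional subspace of a 4-dimensional space, presented by
# two column vectors with random integer entries.
random.seed(0)
u = [random.randint(-5, 5) for _ in range(4)]
v = [random.randint(-5, 5) for _ in range(4)]
p = plucker_coordinates(u, v)

# The unique Plücker relation for Gr(2, 4), indices taken 0-based:
relation = (p[(0, 1)] * p[(2, 3)]
            - p[(0, 2)] * p[(1, 3)]
            + p[(0, 3)] * p[(1, 2)])
print(relation)  # 0, identically, whatever u and v are
```

The relation holds as a polynomial identity in the entries of $u$ and $v$, so it vanishes for every choice of subspace; conversely, a point of $\mathbb{P}^5$ satisfying it comes from a genuine $2$-plane.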

The matrix $A$ can be decomposed into a finite product of Gauss matrices.
Letting Gauss matrices act on the *right* of $S$ corresponds to elementary *column* operations. This is different from what we are used to doing in linear algebra when solving a linear system of equations, where we are told to do elementary *row* operations and not column operations.
This is because when solving a linear system like $MX = Y$ we let an invertible linear transformation act on the target space of the linear transformation $M$; this corresponds to the action of an invertible matrix $A$ on the *left* of $M$ and $Y$, giving an equivalent matrix equation $AMX = AY$.
\begin{align}
\begin{split}
MX = Y
\quad &\Rightarrow \quad
AMX= AY \\
&\Rightarrow \quad
A^{-1} A M X = A^{-1} A Y \\
&\Rightarrow \quad
MX = Y
\end{split}
\end{align}
To summarize, let's say that Gauss transformations acting on the *left* preserve the *source* space and correspond to elementary *row* operations, whereas Gauss transformations acting on the *right* preserve the *target* space and correspond to elementary *column* operations.
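As a quick sanity check, here is a minimal sketch in plain Python (names are mine) showing that a Gauss matrix acting on the left performs a row operation, while the very same matrix acting on the right performs a column operation.

```python
def matmul(A, B):
    """Naive matrix product of two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

M = [[1, 2],
     [3, 4]]

# Gauss matrix adding 5 times the second row (resp. the first column)
# when acting on the left (resp. on the right).
E = [[1, 5],
     [0, 1]]

print(matmul(E, M))  # [[16, 22], [3, 4]]  : row 1 += 5 * row 2
print(matmul(M, E))  # [[1, 7], [3, 19]]   : col 2 += 5 * col 1
```

Note that the left action touches only the rows (the target side), leaving the column structure alone, and the right action touches only the columns (the source side), exactly as stated above.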

So adapting what we know about Gaussian elimination from the context of row operations to that of column operations, we find that there is one and only one matrix in echelon form that is equivalent to our initial matrix by elementary column operations. By echelon form we mean *reduced column echelon form*, that is, matrices where each column has a pivot element whose coefficient is $1$, and where, for each pivot element, all the other entries in the same row and all the entries below it in the same column vanish.
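One possible sketch of this reduction in Python, using exact fractions: it computes a reduced column echelon form by row-reducing the transpose and transposing back. Be warned that this places the pivots from the top-left downward, which is one common convention; other conventions (such as reading the rows from the bottom up) give an equivalent normal form.

```python
from fractions import Fraction

def rref(M):
    """Reduced row echelon form of M (a list of rows), with exact fractions."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    pivot_row = 0
    for j in range(cols):
        # Find a row at or below pivot_row with a nonzero entry in column j.
        pr = next((i for i in range(pivot_row, rows) if M[i][j] != 0), None)
        if pr is None:
            continue
        M[pivot_row], M[pr] = M[pr], M[pivot_row]
        p = M[pivot_row][j]
        M[pivot_row] = [x / p for x in M[pivot_row]]  # normalize the pivot to 1
        for i in range(rows):
            if i != pivot_row and M[i][j] != 0:       # clear the rest of column j
                f = M[i][j]
                M[i] = [a - f * b for a, b in zip(M[i], M[pivot_row])]
        pivot_row += 1
        if pivot_row == rows:
            break
    return M

def column_echelon(S):
    """Reduced column echelon form: row-reduce the transpose, transpose back."""
    T = [list(r) for r in zip(*S)]
    return [list(r) for r in zip(*rref(T))]

S = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9],
     [1, 0, 1],
     [0, 1, 1]]
print(column_echelon(S))
```

Since the reduced form is unique, applying `column_echelon` a second time changes nothing, which is a convenient way to test the function.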

Example 2.

For example with $n=5$ and $k=3$ we get the following ten echelon forms, \begin{align} \begin{aligned} \left[ \begin{matrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{matrix} \right] && \left[ \begin{matrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ * & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{matrix} \right] && \left[ \begin{matrix} 0 & 0 & 1 \\ * & * & 0 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{matrix} \right] && \left[ \begin{matrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ * & 0 & 0 \\ * & 0 & 0 \\ 1 & 0 & 0 \end{matrix} \right] && \left[ \begin{matrix} * & * & * \\ 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \\ \end{matrix} \right] \\ \\ \left[ \begin{matrix} 0 & 0 & 1 \\ * & * & 0 \\ 0 & 1 & 0 \\ * & 0 & 0 \\ 1 & 0 & 0 \end{matrix} \right] &\quad& \left[ \begin{matrix} * & * & * \\ 0 & 0 & 1 \\ 0 & 1 & 0 \\ * & 0 & 0 \\ 1 & 0 & 0 \end{matrix} \right] &\quad& \left[ \begin{matrix} 0 & 0 & 1 \\ * & * & 0 \\ * & * & 0 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{matrix} \right] &\quad& \left[ \begin{matrix} * & * & * \\ 0 & 0 & 1 \\ * & * & 0 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{matrix} \right] &\quad& \left[ \begin{matrix} * & * & * \\ * & * & * \\ 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{matrix} \right] \end{aligned} \end{align} where the $*$ symbols represent arbitrary values.

We know from elementary algebra that every such inclusion matrix $S$ corresponds to one and only one echelon form, and that the set of all matrices sharing the same echelon form are exactly those related by an isomorphism $a$ of the source space $U$, and vice versa. This way we can think of the points of the Grassmann variety $\mathrm{Gr}(k,n)$ as matrices in echelon form with $n$ rows and $k$ columns.

Schubert cells of a grassmannian $\mathrm{Gr}(k,n)$ consist of the echelon forms having the same overall shape. They are affine spaces parameterized by the $*$ entries of the echelon forms.

Now, removing from an echelon matrix the rows containing a pivot element, together with every entry below those pivots, we see that the remaining coefficients arrange themselves in the shape of a Young diagram. Moreover, this Young diagram completely determines the shape of the echelon matrix, so that Schubert cells are indexed by Young diagrams.
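This correspondence can be sketched in a few lines of Python (names are mine). A Schubert cell of $\mathrm{Gr}(k,n)$ is determined by the set of pivot rows $1 \le r_1 < \dots < r_k \le n$ of its echelon forms; in one standard convention (the conjugate diagram is equally common) the associated Young diagram has rows $\lambda_i = r_i - i$, and the size of the diagram is exactly the number of free $*$ entries, i.e. the dimension of the cell.

```python
from itertools import combinations

n, k = 5, 3

# Enumerate the Schubert cells of Gr(3, 5) by their pivot rows.
cells = []
for pivots in combinations(range(1, n + 1), k):
    diagram = [r - i for i, r in enumerate(pivots, start=1)]
    diagram = [row for row in reversed(diagram) if row > 0]  # drop empty rows
    cells.append((pivots, diagram, sum(diagram)))

for pivots, diagram, dim in cells:
    print(pivots, diagram, dim)

# Sanity checks: C(5, 3) = 10 cells, and the big cell has dimension
# k * (n - k) = 6, the dimension of the grassmannian itself.
print(len(cells), max(dim for *_, dim in cells))
```

The list of cell dimensions obtained this way, $0,1,2,2,3,3,4,4,5,6$, matches the star counts of the ten echelon forms of Example 2.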

Example 3. Continuing the previous example, the corresponding ten Young diagrams are the following,

Recall that the ordinary binomial coefficients $\binom{n}{k}$ are computed using the famous Pascal triangle,
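As a reminder, here is a tiny Python sketch (function name is mine) building the triangle from the Pascal rule $\binom{n}{k} = \binom{n-1}{k-1} + \binom{n-1}{k}$.

```python
def pascal_triangle(depth):
    """First `depth` rows of Pascal's triangle, built row by row from the
    rule C(n, k) = C(n-1, k-1) + C(n-1, k)."""
    rows = [[1]]
    for _ in range(depth - 1):
        prev = rows[-1]
        rows.append([1] + [a + b for a, b in zip(prev, prev[1:])] + [1])
    return rows

for row in pascal_triangle(6):
    print(row)
# Last row: [1, 5, 10, 10, 5, 1], so C(5, 3) = 10, the number of
# Schubert cells of Gr(3, 5).
```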

The quantum binomial coefficients, also called Gauss binomial coefficients, have similar properties but correspond to a computation where the two moves $U$ and $V$ satisfy a different commutation relation, called $q$-commutativity: $VU = qUV$, where $q$ is a quantity commuting with both $U$ and $V$. So what does the binomial formula become in this context? For example let's compute, \begin{align*} (U+V)^3 &= U^3+U^2V+UVU+VU^2+UV^2+VUV+V^2U+V^3 \\ &= U^3 + U^2V+qU^2V+q^2U^2V+UV^2+qUV^2 +q^2UV^2 +V^3 \\ &=U^3 + (1+q+q^2)U^2V+(1+q+q^2)UV^2+V^3 \end{align*} We still get a binomial formula, discovered by Gauss, \begin{align} (U+V)^n = \sum_{k=0}^n {n \brack k}_q U^kV^{n-k} \end{align} The coefficients ${n \brack k}_q$ in this formula are called the quantum binomial coefficients, or Gauss binomial coefficients. They satisfy slightly deformed Pascal relations, \begin{align} {n \brack k}_q = {n-1 \brack k-1}_q+q^k{n-1 \brack k}_q \end{align} and they can be computed using a quantum version of the Pascal triangle,
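The deformed Pascal rule above can be turned directly into code. Here is a sketch in Python (names are mine) representing a polynomial in $q$ as its list of coefficients, index = power of $q$.

```python
def poly_add(p1, p2):
    """Add two polynomials given as coefficient lists."""
    n = max(len(p1), len(p2))
    p1 = p1 + [0] * (n - len(p1))
    p2 = p2 + [0] * (n - len(p2))
    return [a + b for a, b in zip(p1, p2)]

def q_shift(p, k):
    """Multiply a polynomial by q^k."""
    return [0] * k + p

def q_binomial(n, k):
    """Gaussian binomial [n, k]_q from the deformed Pascal rule
    [n, k] = [n-1, k-1] + q^k [n-1, k]."""
    if k < 0 or k > n:
        return [0]
    if k == 0 or k == n:
        return [1]
    return poly_add(q_binomial(n - 1, k - 1),
                    q_shift(q_binomial(n - 1, k), k))

print(q_binomial(5, 3))  # [1, 1, 2, 2, 2, 1, 1]
# i.e. 1 + q + 2q^2 + 2q^3 + 2q^4 + q^5 + q^6.
```

Setting $q = 1$ (summing the coefficients) recovers the ordinary binomial coefficient $\binom{5}{3} = 10$, and the exponents reproduce the dimensions of the Schubert cells of $\mathrm{Gr}(3,5)$.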

Alfredo de la Fuente posted 2013-08-31 19:31:18

Really illustrative example. Happy Birthday, by the way.

Samuel Vidal posted 2013-08-31 19:52:35

Thank you my friend ;-)

- - -
- - -

means $1$

O - -
- - -

means $q$

O O -
- - -

means $q^2$

O - -
O - -

means $q^2$ again, so $2 q^2$

O O -
O - -

O O O
- - -

means $2 q^3$

O O O
O - -

O O -
O O -

means $2 q^4$

O O O
O O -

O O O
O O O

means $q^5 + q^6$

Ok so ${5 \brack 3}_q = 1 + q + 2 q^2 + 2 q^3 + 2 q^4 + q^5 + q^6$