Café Math : Grassmannians

Grassmannians

Sat, 31 Aug 2013 15:34:32 GMT


Hi, today is my birthday, I'm 32 years old. It's been more than a year since my last post on this blog. For this new post I would like to talk about grassmannian varieties and the Schubert cell decomposition of those varieties. Along the way we are going to talk about quantum binomial coefficients, Young diagrams, and some pretty combinatorics relating them.

The subject of grassmannian varieties is the classification of the subspaces of a given dimension $k$ in a vector space of dimension $n$. In fancy language we say that the grassmannian variety $\mathrm{Gr}(k,n)$ is the moduli space of those subspaces. Topologically, if we take vector spaces over a topological field such as $\mathbb{R}$ or $\mathbb{C}$, this is a manifold of finite dimension, and since it can be defined by polynomial equations, it is even an algebraic variety.

Example 1. The projective $n$-space $\mathbb{P}^n$ is an example of a grassmannian variety, as it is the moduli space of lines in a vector space of dimension $n+1$, that is, \begin{align} \mathbb{P}^n = \mathrm{Gr}(1,n+1) \end{align}

What follows isn't specific to a particular field, so let's forget about it for a moment; just keep in mind that it could be any field, from the usual $\mathbb{R}$ or $\mathbb{C}$ to finite fields like $\mathbb{F}_q$, $p$-adic fields, and even the mysterious one-element field $\mathbb{F}_1$ (more on that later).

1. Grassmann Varieties of Subspaces

So let $V$ be a vector space of dimension $n$. We are to classify the subspaces of dimension $k$. The question is what precisely we call a subspace of dimension $k$. An idea that comes to mind in order to reify that notion is to consider a vector space $U$ of dimension $k$ together with an injective linear map $u : U \to V$, thought of as an inclusion. The problem with that idea is that what we are really interested in is the image of the inclusion map $u$, and there could be many inclusion maps $u' : U \to V$ with exactly the same image as $u$. Two such maps differ by an isomorphism $a$ of their source $U$; that is, between two such maps there is an isomorphism $a : U \to U$ such that $u' = u \circ a$.

Let's put that in matrix form. Let $A$, $S$ and $S'$ be the matrices of the respective linear maps $a$, $u$ and $u'$, and let $X$ and $Y$ be the coordinates of a vector of $U$ and $V$ respectively. Recall from elementary algebra that the columns of $S$ are the coordinates, in the basis $v_j$ of $V$, of the images of the corresponding elements of the basis $u_i$ of $U$. \begin{align} \left[ \begin{matrix} s_1^1 & \cdots & s_1^k \\ \vdots & & \vdots \\ \vdots & & \vdots \\ s_n^1 & \cdots & s_n^k \\ \end{matrix} \right] \left[ \begin{matrix} x^1 \\ \vdots \\ x^k \end{matrix} \right] = \left[ \begin{matrix} y^1 \\ \vdots \\ \vdots \\ y^n \end{matrix} \right] \end{align} Then the matrix of the inclusion $u'$ is simply the product $S' = SA$, \begin{align} S' = \left[ \begin{matrix} s_1^1 & \cdots & s_1^k \\ \vdots & & \vdots \\ \vdots & & \vdots \\ s_n^1 & \cdots & s_n^k \\ \end{matrix} \right] \left[ \begin{matrix} a_1^1 & \cdots & a_1^k \\ \vdots & & \vdots \\ a_k^1 & \cdots & a_k^k \\ \end{matrix} \right] \end{align}
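If you want to check this on a machine, here is a quick numpy sketch (the sizes $n=5$, $k=3$ and the random matrices are arbitrary choices, not anything canonical): it verifies that $S$ and $S' = SA$ have the same column span, i.e. describe the same subspace of $V$.

```python
import numpy as np

rng = np.random.default_rng(0)

n, k = 5, 3
S = rng.standard_normal((n, k))   # matrix of an injective map u : U -> V
A = rng.standard_normal((k, k))   # a generic k x k matrix is invertible
assert abs(np.linalg.det(A)) > 1e-9

S2 = S @ A                        # matrix of u' = u o a

# Same image: stacking the columns of S and S' side by side
# does not increase the rank beyond k.
assert np.linalg.matrix_rank(S) == k
assert np.linalg.matrix_rank(np.hstack([S, S2])) == k
print("S and S' = S A have the same column span")
```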

Plücker coordinates

Let's consider the maximal minors of the matrix $S$. They are obtained as the determinants of the square matrices of size $k$ one gets by keeping from the matrix $S$ only the rows indexed by a list of $k$ distinct integers. \begin{align} P_{j_1,...,j_k}(S) &= \left|\, \begin{matrix} s_{j_1}^1 & \cdots & s_{j_1}^k \\ \vdots & & \vdots \\ s_{j_k}^1 & \cdots & s_{j_k}^k \\ \end{matrix} \,\right| & (1\le j_1 < \dots < j_k \le n) \end{align} Those minors are called Plücker coordinates. They are unchanged by a matrix transform $A$ acting on the right as long as it has unit determinant, and get scaled by the determinant of $A$ otherwise. In other words, \begin{align} P_{j_1,...,j_k}(S') = P_{j_1,...,j_k}(S) \det(A) \end{align} The condition that $S$ is the matrix of an injective linear map tells us that at least one of those minors doesn't vanish. This way one can think of Plücker coordinates as homogeneous coordinates in a projective space.
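Again, a small numpy sketch to make this concrete (a random $5 \times 3$ example): it computes all the maximal minors and checks the scaling law $P(SA) = P(S)\det(A)$.

```python
import itertools
import numpy as np

def plucker(S):
    """Maximal minors P_{j1..jk}(S), indexed by increasing row subsets."""
    n, k = S.shape
    return {rows: np.linalg.det(S[list(rows), :])
            for rows in itertools.combinations(range(n), k)}

rng = np.random.default_rng(1)
S = rng.standard_normal((5, 3))
A = rng.standard_normal((3, 3))

P, P2, detA = plucker(S), plucker(S @ A), np.linalg.det(A)
for rows in P:
    assert np.isclose(P2[rows], P[rows] * detA)   # P(SA) = P(S) det(A)
print(f"all {len(P)} Plücker coordinates scale by det(A) = {detA:.4f}")
```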

By the way, grassmannians are not in general plain projective spaces, far from it; instead they are subvarieties of projective spaces. This is because Plücker coordinates satisfy special relations called Plücker relations. Indeed, it is not surprising that the partial determinants we've considered are constrained by relations. Those relations are in fact homogeneous polynomial relations, so the grassmannians are projective varieties.
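We won't write those relations down in general, but for the smallest interesting case $\mathrm{Gr}(2,4)$ there is a single quadratic one, $P_{12}P_{34} - P_{13}P_{24} + P_{14}P_{23} = 0$, and it's easy to check numerically; here is a sketch (random matrix, with $0$-based row indices in the code).

```python
from itertools import combinations
import numpy as np

rng = np.random.default_rng(2)
S = rng.standard_normal((4, 2))
# Plücker coordinates of a 4 x 2 matrix: the six 2 x 2 minors.
P = {rows: np.linalg.det(S[list(rows), :]) for rows in combinations(range(4), 2)}

# The single quadratic Plücker relation cutting out Gr(2,4) inside P^5.
lhs = P[(0, 1)] * P[(2, 3)] - P[(0, 2)] * P[(1, 3)] + P[(0, 3)] * P[(1, 2)]
print(abs(lhs) < 1e-12)   # True: the relation holds
```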

Schubert cell decomposition

The matrix $A$ can be decomposed into a finite product of Gauss matrices. Letting Gauss matrices act on the right of $S$ corresponds to elementary column operations. This is different from what we are used to doing in linear algebra when solving a linear system of equations: there, we were told to do elementary row operations, not column operations. This is because when solving a linear system like $MX = Y$ we let an invertible linear transformation act on the target space of the linear map $M$; this corresponds to the action of an invertible matrix $A$ on the left of $M$ and $Y$, giving an equivalent matrix equation $AMX = AY$. \begin{align} \begin{split} MX = Y \quad &\Rightarrow \quad AMX= AY \\ &\Rightarrow \quad A^{-1} A M X = A^{-1} A Y \\ &\Rightarrow \quad MX = Y \end{split} \end{align} To summarize, Gauss transformations acting on the left correspond to elementary row operations and preserve the solution set in the source space, whereas Gauss transformations acting on the right correspond to elementary column operations and preserve the image in the target space.

So adapting what we know about Gaussian elimination from the context of row operations to that of column operations, we find that there is one and only one matrix in echelon form that is equivalent to our initial matrix by elementary column operations. By echelon form we mean reduced column echelon form, that is, a matrix with a chosen pivot element of coefficient $1$ in each column, such that all the other entries in the row of a pivot, and all the entries below a pivot in its column, vanish.
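Computationally, the reduced column echelon form is just the transpose of the usual reduced row echelon form of the transpose. Here is a sketch using sympy's rref (note that sympy's pivot convention puts the pivot of the first column at the top, a mirror image of the matrices displayed in Example 2 below, but the cell decomposition is the same).

```python
from sympy import Matrix

def column_echelon(S):
    """Reduced column echelon form of S: transpose of the reduced
    row echelon form of S^T (column operations = row operations on S^T)."""
    R, pivots = S.T.rref()
    return R.T, pivots

# An arbitrary 5 x 3 inclusion matrix of full column rank 3.
S = Matrix([[1, 2, 0],
            [0, 1, 1],
            [2, 0, 1],
            [1, 1, 1],
            [3, 1, 0]])
E, pivots = column_echelon(S)
print(E)        # the unique column echelon representative of the image of S
print(pivots)   # the rows of S containing the pivots
```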

Example 2.

For example with $n=5$ and $k=3$ we get the following ten echelon forms, \begin{align} \begin{aligned} \left[ \begin{matrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{matrix} \right] && \left[ \begin{matrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ * & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{matrix} \right] && \left[ \begin{matrix} 0 & 0 & 1 \\ * & * & 0 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{matrix} \right] && \left[ \begin{matrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ * & 0 & 0 \\ * & 0 & 0 \\ 1 & 0 & 0 \end{matrix} \right] && \left[ \begin{matrix} * & * & * \\ 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \\ \end{matrix} \right] \\ \\ \left[ \begin{matrix} 0 & 0 & 1 \\ * & * & 0 \\ 0 & 1 & 0 \\ * & 0 & 0 \\ 1 & 0 & 0 \end{matrix} \right] &\quad& \left[ \begin{matrix} * & * & * \\ 0 & 0 & 1 \\ 0 & 1 & 0 \\ * & 0 & 0 \\ 1 & 0 & 0 \end{matrix} \right] &\quad& \left[ \begin{matrix} 0 & 0 & 1 \\ * & * & 0 \\ * & * & 0 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{matrix} \right] &\quad& \left[ \begin{matrix} * & * & * \\ 0 & 0 & 1 \\ * & * & 0 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{matrix} \right] &\quad& \left[ \begin{matrix} * & * & * \\ * & * & * \\ 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{matrix} \right] \end{aligned} \end{align} where the $*$ symbols represent arbitrary values.

We know from elementary algebra that every such inclusion matrix $S$ corresponds to one and only one echelon form, and that the matrices sharing the same echelon form are exactly those related by an isomorphism $a$ of the source space $U$. This way we can think of points of the grassmannian variety $\mathrm{Gr}(k,n)$ as matrices in echelon form with $n$ rows and $k$ columns.

Schubert cells of a grassmannian $\mathrm{Gr}(k,n)$ consist of the echelon forms having the same overall shape. They are affine spaces parameterized by the $*$ elements of the echelon forms.

Now, removing from an echelon matrix the rows containing a pivot element, together with every entry below a pivot, we see that the remaining coefficients arrange in the shape of a Young diagram. Moreover, this Young diagram completely determines the shape of the echelon matrix, so that Schubert cells are indexed by Young diagrams.

Example 3. Continuing the previous example, the corresponding ten Young diagrams are, in the same order, those of the partitions \begin{align} \varnothing,\quad (1),\quad (2),\quad (1,1),\quad (3),\quad (2,1),\quad (3,1),\quad (2,2),\quad (3,2),\quad (3,3) \end{align} that is, exactly the partitions whose diagram fits inside a rectangle with $n-k = 2$ rows and $k = 3$ columns.
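We can enumerate those diagrams programmatically. The sketch below lists the partitions fitting in the $(n-k) \times k$ box and, since a cell with diagram $\lambda$ is an affine space with $|\lambda|$ coordinates (hence $q^{|\lambda|}$ points over a finite field $\mathbb{F}_q$), it also collects the generating polynomial $\sum_\lambda q^{|\lambda|}$; keep the result in mind for the next section.

```python
from itertools import product
from collections import Counter

def box_partitions(rows, cols):
    """Partitions whose Young diagram fits in a rows x cols box
    (weakly decreasing tuples of length <= rows with parts <= cols)."""
    for parts in product(range(cols, -1, -1), repeat=rows):
        if all(a >= b for a, b in zip(parts, parts[1:])):
            yield tuple(p for p in parts if p > 0)

n, k = 5, 3
cells = list(box_partitions(n - k, k))
print(len(cells), "Schubert cells:", cells)

# Weight each cell by the number of its * entries, i.e. by |lambda|.
weights = Counter(sum(lam) for lam in cells)
print({f"q^{d}": c for d, c in sorted(weights.items())})
# -> coefficients 1,1,2,2,2,1,1: the polynomial 1+q+2q^2+2q^3+2q^4+q^5+q^6
```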

2. Quantum Binomial Coefficients

Recall that the ordinary binomial coefficients $\binom{n}{k}$ are computed using the famous Pascal triangle, \begin{align} \begin{matrix} &&&&1&&&& \\ &&&1&&1&&& \\ &&1&&2&&1&& \\ &1&&3&&3&&1& \\ 1&&4&&6&&4&&1 \end{matrix} \end{align}

Every entry of that triangle is the sum of the two entries above it. This leads to the Pascal relation, \begin{align} \binom{n}{k} = \binom{n-1}{k-1} + \binom{n-1}{k} \end{align} Combinatorially, each binomial coefficient in the lattice is the number of paths going down to it from the top $1$. For instance, the left $3$ in the fourth line of the triangle is the sum of three paths from the top, each weighted by $1$.

If you call $U$ the operation of going down to the left and $V$ the operation of going down to the right, each of these paths corresponds to a sequence of those operations. In a path going to the $k^\text{th}$ position in the $n^\text{th}$ row, the move $U$ appears $k$ times and the move $V$ appears $n-k$ times. This corresponds to a computation where the two moves $U$ and $V$ commute, $VU = UV$. In this context we have, \begin{align} (U + V)^3 = U^3+3\,U^2V+3\,UV^2 + V^3 \end{align} And more generally, \begin{align} (U+V)^n = \sum_{k=0}^n \binom{n}{k} U^kV^{n-k} \end{align}

The quantum binomial coefficients, also called Gaussian binomial coefficients, have similar properties but correspond to a computation where the two moves $U$ and $V$ satisfy a different commutation relation, called $q$-commutativity: $VU = qUV$, where $q$ is an operation commuting with both $U$ and $V$. So what does the binomial formula become in this context? For example let's compute, \begin{align*} (U+V)^3 &= U^3+U^2V+UVU+VU^2+UV^2+VUV+V^2U+V^3 \\ &= U^3 + U^2V+qU^2V+q^2U^2V+UV^2+qUV^2 +q^2UV^2 +V^3 \\ &=U^3 + (1+q+q^2)U^2V+(1+q+q^2)UV^2+V^3 \end{align*} We still get a binomial formula, discovered by Gauss, \begin{align} (U+V)^n = \sum_{k=0}^n {n \brack k}_q U^kV^{n-k} \end{align} The coefficients ${n \brack k}_q$ in this formula are called the quantum binomial coefficients or Gaussian binomial coefficients. They satisfy slightly deformed Pascal relations, \begin{align} {n \brack k}_q = {n-1 \brack k-1}_q+q^k{n-1 \brack k}_q \end{align} And they can be computed using a quantum version of the Pascal triangle, \begin{align} \begin{matrix} &&&&1&&&& \\ &&&1&&1&&& \\ &&1&&1+q&&1&& \\ &1&&1+q+q^2&&1+q+q^2&&1& \\ 1&&1+q+q^2+q^3&&1+q+2\,q^2+q^3+q^4&&1+q+q^2+q^3&&1 \end{matrix} \end{align}
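To make the deformed Pascal relation concrete, here is a short Python sketch (representing a polynomial in $q$ by its list of coefficients, an arbitrary choice): it builds the quantum triangle row by row.

```python
def qbinom(n, k):
    """Quantum binomial [n k]_q as a list of coefficients in q,
    computed with the deformed Pascal relation
    [n k] = [n-1 k-1] + q^k [n-1 k]."""
    if k < 0 or k > n:
        return [0]
    if k == 0 or k == n:
        return [1]
    left = qbinom(n - 1, k - 1)
    right = [0] * k + qbinom(n - 1, k)   # multiplying by q^k shifts coefficients
    out = [0] * max(len(left), len(right))
    for i, c in enumerate(left):
        out[i] += c
    for i, c in enumerate(right):
        out[i] += c
    return out

# First rows of the quantum Pascal triangle.
for n in range(5):
    print([qbinom(n, k) for k in range(n + 1)])

print(qbinom(5, 3))   # -> [1, 1, 2, 2, 2, 1, 1], i.e. 1+q+2q^2+2q^3+2q^4+q^5+q^6
```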

For instance, the computation of the quantum binomial coefficient \begin{align} {5 \brack 3}_q = 1+q+2\,q^2+2\,q^3+2\,q^4+q^5+q^6 \end{align} involves the part of the quantum triangle lying above the corresponding position, the result appearing at the bottom of the lattice.

Combinatorially, it corresponds to the same sum over paths from the top to the corresponding position in the lattice, but this time each path is weighted by the cost of deforming it into the leftmost path. An elementary deformation consists of a local flip $VU \to q\, UV$, deforming a right-pointing corner of the path into a left-pointing one. Its cost is a multiplication by $q$, which accounts for the $q$-commutativity rule.
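And here is that combinatorial description turned into a sketch: enumerate the words with $k$ letters $U$ and $n-k$ letters $V$, and weight each by $q$ raised to the number of $VU$ inversions, i.e. the number of elementary flips needed to reach $U^kV^{n-k}$.

```python
from itertools import combinations
from collections import Counter

def qbinom_paths(n, k):
    """[n k]_q as a sum over paths: each word with k U's and (n-k) V's
    is weighted by q^(number of VU inversions), the number of
    elementary flips VU -> qUV needed to reach the word U^k V^(n-k)."""
    weights = Counter()
    for positions in combinations(range(n), k):      # positions of the U moves
        word = ['U' if i in positions else 'V' for i in range(n)]
        inversions = sum(1 for i, j in combinations(range(n), 2)
                         if word[i] == 'V' and word[j] == 'U')
        weights[inversions] += 1
    return [weights[d] for d in range(max(weights) + 1)]

print(qbinom_paths(5, 3))        # -> [1, 1, 2, 2, 2, 1, 1], matching the triangle
# Setting q = 1 (summing the coefficients) recovers the ordinary binomial:
print(sum(qbinom_paths(5, 3)))   # -> 10 = C(5,3)
```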

3 Comments

Samuel VIDAL  posted 2013-08-31 18:20:22

Some quantum calculus at breakfast, just to impress your friends.

- - -
- - -

means $1$

O - -
- - -

means $q$

O O -
- - -

means $q^2$

O - -
O - -

means $q^2$ again so $2 q^2$

O O -
O - -

O O O
- - -

means $2 q^3$

O O O
O - -

O O -
O O -

means $2 q^4$

O O O
O O -

O O O
O O O

$q^5 + q^6$

Ok so ${5 \brack 3}_q = 1 + q + 2\,q^2 + 2\,q^3 + 2\,q^4 + q^5 + q^6$

Alfredo de la Fuente  posted 2013-08-31 19:31:18

Really illustrative example. Happy Birthday, by the way.

Samuel Vidal  posted 2013-08-31 19:52:35

Thank you my friend ;-)
