
Calculus III - Lecture 3

I cannot claim to have a summary of the whole lecture, which is why I switched the wording to ‘highlights’.

Lecture highlights:

  1. tensor algebra
  2. graded vector space
  3. determinants

M: Does abstract nonsense, then spends an entire minute explaining that an algebra has a bilinear operation and what a bilinear operation is.
B: Beuh

He literally defined notation in the second lecture and proceeded to ignore it.

M: This will probably not be used in this course.

M: This stuff is probably too difficult for this course.

Not much non-mathematical content this time. To make up for it, below is a half-complete summary of the lecture. I am just Chinese rooming (search “Chinese room”), and sometimes I don’t even understand my own notes, so I redraw them using commutative diagram software. Note that sometimes the result is not actually a commutative diagram, just a rough pictorial transcription of my notes.

All vector spaces below are finite dimensional.

A tensor product is a vector space $V_1 \otimes \cdots \otimes V_k$ together with a universal multilinear map $\iota$, as defined below:

where $Z$ is any vector space.
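
The diagram itself didn’t survive my transcription, but the universal property it encodes should just be the standard one: every multilinear map out of the product factors uniquely through $\iota$.

\[\forall\, \phi \in \mathrm{Map}^{\mathsf{ML}}\!\left( \times^k_{i=1} V_i , Z \right) \;\; \exists!\, \tilde{\phi} \in \mathrm{Hom}(V_1 \otimes \cdots \otimes V_k, Z) \;\; \text{such that} \;\; \phi = \tilde{\phi} \circ \iota.\]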

There is a proof, by general nonsense, that all such universal maps $u$ are naturally identified with one another; the maps below are all bilinear:

A tensor algebra is a vector space equipped with the (bilinear) tensor product $\otimes$ as its multiplication.

We have

\[T^0 V := \mathbb{F}\] \[T^1 V := V\] \[T^k V := \bigotimes_{i=1}^k V = V^{\otimes k}\]

We have a $\mathbb{Z}$-graded vector space

\[T^\bullet V := \bigoplus_{k \in \mathbb{Z}} T^k V.\]

Where does the negative grading come from? I have no idea; presumably the pieces $T^k V$ with $k < 0$ are just zero, so this is really a $\mathbb{Z}_{\geq 0}$-grading in disguise.

The $k$-th homogeneous piece is $T^k V$.

Example (the polynomial space):

\[\mathbb{R}[t] = \bigoplus_{k=0}^\infty \mathbb{R} t^k\]

We claim that the product on $T^\bullet V$ is associative. The proof is as follows (I have no idea what happened; my notes are too messy):
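
For what it’s worth, here is a one-line version of the standard argument (my reconstruction, not necessarily what was on the board): all three spaces below satisfy the universal property for trilinear maps out of $U \times V \times W$, so they are canonically identified, and on elementary tensors the identification does the obvious thing.

\[(U \otimes V) \otimes W \;\cong\; U \otimes V \otimes W \;\cong\; U \otimes (V \otimes W), \qquad (u \otimes v) \otimes w \;\longmapsto\; u \otimes v \otimes w \;\longmapsto\; u \otimes (v \otimes w).\]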

An ideal of a vector space is just a subspace, i.e. closed under linear combinations. An ideal of an algebra is additionally closed under multiplying by arbitrary elements of the algebra.

Let’s consider the symmetric tensor algebra $S^\bullet V$. This is also a $\mathbb{Z}$-graded algebra.

Modding $T^\bullet V$ out by the ideal generated by the elements $u \otimes v - v \otimes u$ forces $u\cdot v = v \cdot u$, where $\cdot$ denotes our symmetric tensor product.
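
Spelled out (again my reconstruction): writing $[x]$ for the class of $x \in T^\bullet V$ in the quotient,

\[u \cdot v - v \cdot u = [u \otimes v] - [v \otimes u] = [\, u \otimes v - v \otimes u \,] = 0,\]

since $u \otimes v - v \otimes u$ lies in the ideal we quotient by.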

Now we consider the anti-symmetric tensor, or exterior, algebra $\Lambda^\bullet V$, obtained the same way by modding out the ideal generated by the elements $v \otimes v$, which forces $u \wedge v = -v \wedge u$.

There are also the Heisenberg and Clifford algebras, which are quantizations (of $S^\bullet V$ and $\Lambda^\bullet V$ respectively).

The cross product satisfies the Jacobi identity.
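
For reference (the identity itself wasn’t in my notes), the Jacobi identity for the cross product on $\mathbb{R}^3$ reads

\[\mathbf{a} \times (\mathbf{b} \times \mathbf{c}) + \mathbf{b} \times (\mathbf{c} \times \mathbf{a}) + \mathbf{c} \times (\mathbf{a} \times \mathbf{b}) = \mathbf{0}.\]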

Now we consider Lie algebras. $\mathcal{U}_{\mathfrak{g}}$ is the universal enveloping algebra of the Lie algebra $\mathfrak{g}$, defined as a quotient of the tensor algebra $T^\bullet \mathfrak{g}$.
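
The quotient itself wasn’t legible in my notes, but the standard definition is by the ideal that turns commutators in the tensor algebra into Lie brackets:

\[\mathcal{U}_{\mathfrak{g}} := T^\bullet \mathfrak{g} \,/\, \left\langle\, x \otimes y - y \otimes x - [x, y] \;:\; x, y \in \mathfrak{g} \,\right\rangle.\]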

This breaks the $\mathbb{Z}$-grading because the generators of the ideal mix two grades: $x \otimes y - y \otimes x$ lives in degree $2$ while $[x, y]$ lives in degree $1$.

Similarly, many algebras we consider are quotients of the tensor algebras.

A basis of $U\otimes V$ is given by the $u_i \otimes v_j$, where $u_i \in \mathcal{B}_U$ and $v_j \in \mathcal{B}_V$ run over bases of $U$ and $V$. It has dimension $mn$, where $m = \dim U$ and $n = \dim V$.
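
As a sanity check (mine, not from the lecture; realising $u_i \otimes v_j$ as a Kronecker product of standard basis vectors is an assumption of the snippet), here is the count in coordinates:

```python
import numpy as np

# Count how many independent vectors the products u_i (x) v_j give.
m, n = 3, 4
basis_U = np.eye(m)   # rows are u_1, ..., u_m
basis_V = np.eye(n)   # rows are v_1, ..., v_n

# each u_i (x) v_j becomes a vector of length m*n via the Kronecker product
products = np.array([np.kron(u, v) for u in basis_U for v in basis_V])

print(products.shape)                   # (12, 12): m*n vectors in an (m*n)-dim space
print(np.linalg.matrix_rank(products))  # 12, so they really form a basis of U (x) V
```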

In the algebra $S^\bullet V$, the minimal spanning set of $S^k V$ is $\{ v_{i_1} \cdot v_{i_2} \cdots v_{i_k} : 1 \leq i_1 \leq \cdots \leq i_k \leq n \}$, which has a dimension that Prof. Meng was too lazy to calculate, and I am also too lazy to calculate.

We have a similar case for $\Lambda^k V$ (now with strictly increasing indices), which has dimension $\binom{n}{k}$. These dimensions can be read off from generating functions:

\[\sum_{k=0}^n (\dim \Lambda^k V)\, t^k = (1+t)^{\dim V},\] \[\sum_{k=0}^\infty (\dim S^k V)\, t^k = (1-t)^{- \dim V}.\]

The negative exponent means you have to expand $(1-t)^{-\dim V}$ into its power series to read off the coefficients.

Actually it is interesting how the second sum runs to $\infty$: the symmetric algebra is infinite dimensional, while the exterior algebra is not.
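
Here is a small counting check of both dimension formulas, entirely my own, with an arbitrarily chosen $n = 4$: weakly increasing index tuples for $S^k V$, strictly increasing ones for $\Lambda^k V$.

```python
from math import comb
from itertools import combinations, combinations_with_replacement

n = 4  # dim V, chosen arbitrarily for this check

for k in range(6):
    # basis of S^k V: weakly increasing index tuples i_1 <= ... <= i_k
    dim_sym = sum(1 for _ in combinations_with_replacement(range(n), k))
    # basis of Lambda^k V: strictly increasing index tuples i_1 < ... < i_k
    dim_ext = sum(1 for _ in combinations(range(n), k))
    assert dim_sym == comb(n + k - 1, k)   # coefficient of t^k in (1 - t)^(-n)
    assert dim_ext == comb(n, k)           # coefficient of t^k in (1 + t)^n
    print(k, dim_sym, dim_ext)
```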

We define the determinant of a vector space $V$ to be

\[\det V := \Lambda^{\dim V} V.\]

This is also called the determinant line of $V$ because it is one dimensional.

The wedge product below is nonzero because the $b_i$ form a basis and so are linearly independent by definition.

An orientation of a vector space is an equivalence class of ordered bases $\mathcal{B}_V$: fix an origin on its determinant line, and ask on which side of it the wedge product of a chosen basis lands.

$(v_1, \dots, v_n) \sim (u_1,\dots, u_n)$ if and only if $v_1 \wedge \cdots \wedge v_n = c\, u_1 \wedge \cdots \wedge u_n$ for some positive $c$.
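
Concretely, two ordered bases are equivalent exactly when the change-of-basis matrix between them has positive determinant. A tiny check (the function `same_orientation` and the example bases are mine):

```python
import numpy as np

def same_orientation(basis_a, basis_b):
    """Two ordered bases define the same orientation exactly when the
    change-of-basis matrix between them has positive determinant."""
    A = np.column_stack(basis_a)
    B = np.column_stack(basis_b)
    change_of_basis = np.linalg.solve(A, B)  # writes basis_b in terms of basis_a
    return np.linalg.det(change_of_basis) > 0

e1, e2, e3 = np.eye(3)
print(same_orientation([e1, e2, e3], [e2, e3, e1]))  # True: cyclic shift, det = +1
print(same_orientation([e1, e2, e3], [e2, e1, e3]))  # False: a swap flips the orientation
```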

We can also identify tensor products of dual spaces with multilinear maps into the field:

\[\bigotimes^n_{i=1} V_i^\ast \equiv \mathrm{Map}^{\mathsf{ML}} (\times^n_{i=1} V_i , \mathbb{F}).\]

We also have the below natural equivalences:

\[\mathrm{Hom}(U, \mathrm{Hom}(V, W)) \equiv \mathrm{Hom}(U\otimes V, W),\] \[\mathrm{End} (V) \equiv V^\ast \otimes V.\]

Get ready for the epic moment of this lecture.

So the trace of an endomorphism is actually the evaluation map: under $\mathrm{End}(V) \equiv V^\ast \otimes V$, the trace corresponds to $\xi \otimes v \mapsto \xi(v)$; equivalently, viewed as an element of $\mathrm{End}(V)^\ast \equiv \mathrm{End}(V)$, it is the identity $I$.
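
A quick numerical check of the evaluation-map description, my own, using the obvious coordinate decomposition of a matrix into dual basis vectors paired with its columns:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))

# Under End(V) = V* (x) V, the matrix A decomposes into rank-one pieces
# e_j^* (x) (j-th column of A). Applying the evaluation map xi (x) v -> xi(v)
# to each piece and summing should reproduce the trace.
evaluation = sum(np.eye(n)[j] @ A[:, j] for j in range(n))  # sum_j e_j^*(A e_j)

print(evaluation, np.trace(A))
assert np.isclose(evaluation, np.trace(A))
```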

This concludes the discussion about tensor algebras. There is a little treat for sticking with us.

Consider an $n\times n$ matrix $\mathbf{A}$. We have an expansion for a very important polynomial

\[\det (\mathbf{I} + t \mathbf{A}) = 1 + (\mathrm{tr}\, \mathbf{A}) t + \left( \frac{(\mathrm{tr}\, \mathbf{A})^2}{2!} - \frac{\mathrm{tr} (\mathbf{A}^2)}{2} \right)t^2 + \left( \frac{(\mathrm{tr}\,\mathbf{A})^3}{3!} - \frac{(\mathrm{tr} \, \mathbf{A}^2)\, \mathrm{tr}\, \mathbf{A}}{2} + \frac{\mathrm{tr}(\mathbf{A}^3)}{3} \right) t^3 + \cdots\]

This is computed with Feynman diagrams.

I actually don’t know why the constant term is $1$ and not $\det \mathbf{A}$. Maybe he assumed the determinant is one.

We will talk about affine spaces next lecture.
