\documentclass[12pt,final,notitlepage,onecolumn]{article}%
\usepackage[all,cmtip]{xy}
\usepackage{lscape}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{amsmath}
%TCIDATA{OutputFilter=latex2.dll}
%TCIDATA{Version=5.50.0.2960}
%TCIDATA{CSTFile=LaTeX article (bright).cst}
%TCIDATA{Created=Sat Mar 27 17:33:36 2004}
%TCIDATA{LastRevised=Wednesday, December 14, 2016 01:36:43}
%TCIDATA{SuppressPackageManagement}
%TCIDATA{}
%TCIDATA{}
%TCIDATA{}
%TCIDATA{BibliographyScheme=Manual}
%BeginMSIPreambleData
\providecommand{\U}[1]{\protect\rule{.1in}{.1in}}
%EndMSIPreambleData
\voffset=-2.5cm
\hoffset=-1.5cm
\setlength\textheight{24cm}
\setlength\textwidth{15cm}
\begin{document}
\begin{center}
\textbf{Classical Invariant Theory -- A Primer}\\[1ex]
\textit{Hanspeter Kraft and Claudio Procesi}\\[1ex]
Preliminary Version, July 1996\\[1ex]
\textbf{Errata and questions (by Darij Grinberg)\footnote{updated 13 December
2016}}
\bigskip
\end{center}
\section*{Section 1}
\begin{itemize}
\item \textbf{Page 2:} Between the definition and Exercise 4, you write: ``and
the \textit{stabilizer} of $w$ it the subgroup $G_{w}:=\left\{ g\in
G\ \mid\ gw=w\right\} $''. The word ``it'' should be ``is'' here.
\item \textbf{Page 4, Example 3:} In the formula%
\[
\sigma_{2}:=x_{1}x_{2}+x_{1}x_{3}+\cdots x_{n-1}x_{n},
\]
a plus sign is missing before $x_{n-1}x_{n}$.
\item \textbf{Page 5, first line:} The verb form of ``proof'' is ``prove'',
not ``proof''. This mistake is repeated several times throughout the text; if
you wish to correct it, the fastest way would be to search for ``proof'' in
your tex file.
\item \textbf{Page 6, Exercise 9:} The sentence ``The $V_{n}$ are the
classical \textit{binary forms} of degree $n$.'' is a bit confusing. I think
``The $V_{n}$ are the classical \textit{spaces of binary forms} of degree
$n$.'' would be better.
\item \textbf{Page 6, Exercise 11:} I think it would be better to introduce
$\gamma_{0},...,\gamma_{n}$ before introducing $f$ and $h$, since the point is
to find $\gamma_{0},...,\gamma_{n}$ which do not depend on $f$ and $h$.
\item \textbf{Page 7:} Replace \textquotedblleft is a restriction an
invariant\textquotedblright\ by \textquotedblleft is a restriction of an
invariant\textquotedblright.
\item \textbf{Page 9, Exercise 21:} I am surprised that you never come back to
this nice exercise! It generalizes to $\operatorname*{SL}_{n}$ for
arbitrary $n\in\mathbb{N}$ whenever $K$ is a field of characteristic $0$. The
coordinate ring of $\operatorname*{SL}\nolimits_{n}$ is $K\left[
\operatorname*{SL}_{n}\right] =K\left[ \operatorname*{M}_{n}\right]
\diagup\left( \det-1\right) $, and the invariant ring $K\left[
\operatorname*{SL}_{n}\right] ^{U_{n}}$ (where $U_{n}$ acts on
$\operatorname*{SL}\nolimits_{n}$ by left multiplication) is generated by the
$k\times k$ minors extracted from the last $k$ rows of the matrix for
$k=1,2,...,n-1$ (or $k=1,2,...,n$, which doesn't change anything). In order to
prove this, we can proceed as follows\footnote{The following outline of a
proof uses the results of Sections 5--7.}:
\begin{itemize}
\item Fix $n\in\mathbb{N}$. It is easy to see that $K\left[
\operatorname*{SL}_{n}\right] =K\left[ \operatorname*{M}_{n}\right]
\diagup\left( \det-1\right) $.
\item Recall the fact that a surjective $G$-linear homomorphism
$f:A\rightarrow B$ between two completely reducible representations $A$ and
$B$ of a group $G$ always restricts to a surjective homomorphism
$A^{G}\rightarrow B^{G}$. This can be generalized as follows: If $H$ is a
subgroup of some group $G$, and if $A$ and $B$ are two representations of $G$
such that $A$ is completely reducible (as a $G$-module), and if
$f:A\rightarrow B$ is a surjective $G$-linear homomorphism, then $f$ restricts
to a surjective homomorphism $A^{H}\rightarrow B^{H}$ of vector
spaces\footnote{\textit{Sketch of a proof.} Let $H$ be a subgroup of some
group $G$. Let $A$ and $B$ be two representations of $G$ such that $A$ is
completely reducible (as a $G$-module). Let $f:A\rightarrow B$ be a surjective
$G$-linear homomorphism. We need to prove that $f$ restricts to a surjective
homomorphism $A^{H}\rightarrow B^{H}$ of vector spaces. In other words, we
need to prove that $f\left( A^{H}\right) =B^{H}$ (since it is clear that $f$
restricts to a homomorphism $A^{H}\rightarrow B^{H}$ of vector spaces (since
$f\left( A^{H}\right) \subseteq\left( \underbrace{f\left( A\right)
}_{\subseteq B}\right) ^{H}\subseteq B^{H}$)). Clearly, $\operatorname*{Ker}%
f$ is a $G$-submodule of $A$ (since $f$ is a $G$-linear homomorphism). But the
$G$-module $A$ is completely reducible. In other words, every $G$-submodule of
$A$ is a direct summand of $A$. Thus, $\operatorname*{Ker}f$ is a direct
summand of $A$ (since $\operatorname*{Ker}f$ is a $G$-submodule of $A$). In
other words, there exists a $G$-submodule $A^{\prime}$ of $A$ such that
$A=A^{\prime}\oplus\operatorname*{Ker}f$. Consider this $A^{\prime}$. We have
$A=A^{\prime}\oplus\operatorname*{Ker}f=A^{\prime}+\operatorname*{Ker}f$ and
$A^{\prime}\cap\operatorname*{Ker}f=0$ (since $A^{\prime}\oplus
\operatorname*{Ker}f$ is an internal direct sum). Since $f$ is surjective, we
have $B=f\left( \underbrace{A}_{=A^{\prime}+\operatorname*{Ker}f}\right)
=f\left( A^{\prime}+\operatorname*{Ker}f\right) =f\left( A^{\prime}\right)
+\underbrace{f\left( \operatorname*{Ker}f\right) }_{=0}=f\left( A^{\prime
}\right) =\left( f\mid_{A^{\prime}}\right) \left( A^{\prime}\right) $,
and thus the map $f\mid_{A^{\prime}}:A^{\prime}\rightarrow B$ is surjective.
The map $f\mid_{A^{\prime}}$ is also injective (since $\operatorname*{Ker}%
\left( f\mid_{A^{\prime}}\right) =A^{\prime}\cap\operatorname*{Ker}f=0$) and
$G$-linear. Thus, $f\mid_{A^{\prime}}:A^{\prime}\rightarrow B$ is an
isomorphism of $G$-modules. Hence, $\left( f\mid_{A^{\prime}}\right) \left(
\left( A^{\prime}\right) ^{H}\right) =B^{H}$. Thus, $B^{H}=\left(
f\mid_{A^{\prime}}\right) \left( \left( A^{\prime}\right) ^{H}\right)
=f\left( \underbrace{\left( A^{\prime}\right) ^{H}}_{\subseteq A^{H}%
}\right) \subseteq f\left( A^{H}\right) $. Combined with $f\left(
A^{H}\right) \subseteq\left( \underbrace{f\left( A\right) }_{\subseteq
B}\right) ^{H}\subseteq B^{H}$, this yields $f\left( A^{H}\right) =B^{H}$,
qed.}. Let us call this generalized fact the \textquotedblleft extended Schmid
lemma\textquotedblright\ (since my use of this lemma is similar to the trick
used by Barbara Schmid in her proof of your Exercise 33).
\item Now, let $V$ be the $K$-vector space $K^{n}$, and let $p\in\mathbb{N}$
be arbitrary. Then, we can identify the $K$-vector space $V^{p}$ with the
$K$-vector space $K^{n\times p}$ of $n\times p$-matrices (by equating every
$p$-tuple $\left( v_{1},v_{2},\ldots,v_{p}\right) \in V^{p}$ with the
$n\times p$-matrix whose columns are $v_{1},v_{2},\ldots,v_{p}$). The group
$\operatorname*{GL}\nolimits_{p}$ thus acts on $V^{p}$ via right
multiplication.\footnote{This is the same as saying that the group
$\operatorname*{GL}\nolimits_{p}$ acts on $V^{p}$ via the identification
$V^{p}=\operatorname*{Hom}\left( K^{p},V\right) $.} Let us denote this
action by $\rightharpoonup$ (that is, we write $A\rightharpoonup B$ for the
image of a $B\in V^{p}$ under the action of an $A\in\operatorname*{GL}%
\nolimits_{p}$ with respect to this action).
\item We let $U_{p}^{-}$ denote the group of the lower triangular unipotent
matrices in $\operatorname*{GL}\nolimits_{p}$. The group $U_{p}^{-}$ is a
subgroup of $\operatorname*{GL}\nolimits_{p}$, and thus also acts on $V^{p}$
(by restricting the $\operatorname*{GL}\nolimits_{p}$-action $\rightharpoonup$
on $V^{p}$). We denote this latter action by $\rightharpoonup$ as well.
\item We define a group homomorphism $\zeta:U_{p}\rightarrow U_{p}^{-}$ by
setting $\zeta\left( A\right) =\left( A^{T}\right) ^{-1}$ for every $A\in
U_{p}$. This $\zeta$ allows us to transform the action $\rightharpoonup$ of
$U_{p}^{-}$ on $V^{p}$ into an action of $U_{p}$ on $V^{p}$ (by restriction);
let us denote this latter action by $\rightharpoondown$ (that is, we write
$A\rightharpoondown B$ for the image of a $B\in V^{p}$ under the action of an
$A\in U_{p}$ with respect to this latter action). Then,%
\[
A\rightharpoondown B=\underbrace{\zeta\left( A\right) }_{=\left(
A^{T}\right) ^{-1}}\rightharpoonup B=\left( A^{T}\right) ^{-1}%
\rightharpoonup B
\]
for any $A\in U_{p}$ and $B\in V^{p}$. If we identify $V^{p}$ with $K^{n\times
p}$ as explained above, then this simplifies to%
\begin{align}
A\rightharpoondown B & =\left( A^{T}\right) ^{-1}\rightharpoonup
B=B\left( \left( A^{T}\right) ^{-1}\right) ^{-1}\nonumber\\
& \ \ \ \ \ \ \ \ \ \ \left( \text{since the action }\rightharpoonup\text{
is given by right multiplication}\right) \nonumber\\
& =BA^{T} \label{p9.exe21.eil}%
\end{align}
for any $A\in U_{p}$ and $B\in V^{p}=K^{n\times p}$.
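As a quick sanity check of this formula (my own example, not from the text):
for $p=2$ and $A=\begin{pmatrix}
1 & t\\
0 & 1
\end{pmatrix}\in U_{2}$, a matrix $B\in K^{n\times2}$ with columns $b_{1}$
and $b_{2}$ satisfies%
\[
A\rightharpoondown B=BA^{T}=B\begin{pmatrix}
1 & 0\\
t & 1
\end{pmatrix},
\]
which is the matrix with columns $b_{1}+tb_{2}$ and $b_{2}$. Thus, the action
$\rightharpoondown$ adds multiples of later columns to earlier ones.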
\item Notice that $\zeta$ is a group isomorphism. Hence, the action
$\rightharpoonup$ of $U_{p}^{-}$ on $V^{p}$ and the action $\rightharpoondown$
of $U_{p}$ on $V^{p}$ are \textquotedblleft the same action modulo renaming
the group elements\textquotedblright. In particular, this shows that $K\left[
V^{p}\right] ^{U_{p}^{-}}=K\left[ V^{p}\right] ^{U_{p}}$ (where the action
$\rightharpoonup$ is used in defining $K\left[ V^{p}\right] ^{U_{p}^{-}}$,
and the action $\rightharpoondown$ is used in defining $K\left[ V^{p}\right]
^{U_{p}}$).
\item In 5.7 Corollary 1, we have shown that if $W$ is a $\operatorname*{GL}%
\nolimits_{n}$-module, then $W$ is simple if and only if $\dim W^{U_{n}}=1$. A
similar argument shows that if $W$ is a $\operatorname*{GL}\nolimits_{n}%
$-module, then $W$ is simple if and only if $\dim W^{U_{n}^{-}}=1$%
\ \ \ \ \footnote{To prove this, it is enough to make some straightforward
changes to the proofs of Proposition 5.7 and 5.7 Corollary 1 (that is, replace
\textquotedblleft$U_{n}$\textquotedblright, \textquotedblleft$ij$%
\textquotedblright, \textquotedblleft$\succ$\textquotedblright,
\textquotedblleft minimal\textquotedblright\ and \textquotedblleft upper
triangular\textquotedblright\ by \textquotedblleft$U_{n}^{-}$%
\textquotedblright, \textquotedblleft$ji$\textquotedblright,
\textquotedblleft$\prec$\textquotedblright, \textquotedblleft
maximal\textquotedblright\ and \textquotedblleft lower
triangular\textquotedblright, respectively).}. Using this fact, we can see
that%
\begin{equation}
\dim L_{\lambda}\left( p\right) ^{U_{p}^{-}}=1 \label{p9.exe21.5}%
\end{equation}
whenever $\lambda$ is a dominant weight of height $\leq p$. (The proof of
(\ref{p9.exe21.5}) is analogous to the proof of $\dim L_{\lambda}\left(
p\right) ^{U_{p}}=1$.)
\item Proposition 7.8 shows that the invariant ring $K\left[ V^{p}\right]
^{U_{p}}$, where the action of $U_{p}$ on $V^{p}$ is given by restricting the
action $\rightharpoonup$ of $\operatorname*{GL}\nolimits_{p}$ on $V^{p}$, is
generated by the $k\times k$-minors extracted from the first $k$ columns of
the matrix $X$ for $k=1,2,...,n$. Similarly we can show that the invariant
ring $K\left[ V^{p}\right] ^{U_{p}^{-}}$ is generated by the $k\times
k$-minors extracted from the \textbf{last} $k$ columns of the matrix $X$ for
$k=1,2,...,n$. (The proof is analogous to the proof of Proposition 7.8, but
now we need to use (\ref{p9.exe21.5}) instead of $\dim L_{\lambda}\left(
p\right) ^{U_{p}}=1$.)
\item Now, set $p=n$, so that $V^{p}=K^{n\times p}=K^{n\times n}$. Let
$\mathbf{i}:\operatorname*{M}\nolimits_{n}\rightarrow K^{n\times n}$ be the
$K$-vector space isomorphism which sends every matrix $A\in\operatorname*{M}%
\nolimits_{n}$ to its transpose $A^{T}\in K^{n\times n}$. Of course,
$\operatorname*{M}\nolimits_{n}$ and $K^{n\times n}$ are identical as sets,
but we regard them as endowed with two different $U_{n}$-module structures:
Namely, the $U_{n}$-module structure on $\operatorname*{M}\nolimits_{n}$ is
given by left multiplication, whereas the $U_{n}$-module structure on
$K^{n\times n}$ is the $U_{p}$-module structure $\rightharpoondown$ on $V^{p}$
defined above (this makes sense since $n=p$ and $K^{n\times n}=V^{p}$). Recall
that this latter structure satisfies (\ref{p9.exe21.eil}) for every $A\in
U_{n}$ and $B\in K^{n\times n}$.
\item It is straightforward to show that any $A\in U_{n}$ and $B\in
\operatorname*{M}\nolimits_{n}$ satisfy $\mathbf{i}\left( AB\right)
=A\rightharpoondown\mathbf{i}\left( B\right) $. In other words,
$\mathbf{i}:\operatorname*{M}\nolimits_{n}\rightarrow K^{n\times n}$ is a
$U_{n}$-module homomorphism with respect to the $U_{n}$-module structures on
$\operatorname*{M}\nolimits_{n}$ and on $K^{n\times n}$ that we have just
described. Thus, $\mathbf{i}:\operatorname*{M}\nolimits_{n}\rightarrow
K^{n\times n}$ is a $U_{n}$-module isomorphism with respect to these
structures (since $\mathbf{i}$ is a $K$-vector space isomorphism). Thus, it
induces a $U_{n}$-module isomorphism $K\left[ \mathbf{i}\right] :K\left[
K^{n\times n}\right] \rightarrow K\left[ \operatorname*{M}\nolimits_{n}%
\right] $ (which sends every polynomial map $q\in K\left[ K^{n\times
n}\right] $ to the composition $q\circ\mathbf{i}$). We have $K\left[
\operatorname*{M}\nolimits_{n}\right] ^{U_{n}}=\left( K\left[
\mathbf{i}\right] \right) \left( K\left[ K^{n\times n}\right] ^{U_{n}%
}\right) $ (since $K\left[ \mathbf{i}\right] $ is a $U_{n}$-module
isomorphism). But since $K^{n\times n}=V^{p}$ and $n=p$, we have%
\[
K\left[ K^{n\times n}\right] ^{U_{n}}=K\left[ V^{p}\right] ^{U_{p}%
}=K\left[ V^{p}\right] ^{U_{p}^{-}}%
\]
(where the action $\rightharpoonup$ is used in defining $K\left[
V^{p}\right] ^{U_{p}^{-}}$, and the action $\rightharpoondown$ is used in
defining $K\left[ V^{p}\right] ^{U_{p}}$). Thus,%
\[
K\left[ \operatorname*{M}\nolimits_{n}\right] ^{U_{n}}=\left( K\left[
\mathbf{i}\right] \right) \left( \underbrace{K\left[ K^{n\times n}\right]
^{U_{n}}}_{=K\left[ V^{p}\right] ^{U_{p}^{-}}}\right) =\left( K\left[
\mathbf{i}\right] \right) \left( K\left[ V^{p}\right] ^{U_{p}^{-}%
}\right) .
\]
Thus, the ring $K\left[ \operatorname*{M}\nolimits_{n}\right] ^{U_{n}}$ is
generated by the $k\times k$-minors extracted from the last $k$ \textbf{rows}
of the matrix $X$ for $k=1,2,...,n$ (because the ring $K\left[ V^{p}\right]
^{U_{p}^{-}}$ is generated by the $k\times k$-minors extracted from the last
$k$ \textbf{columns} of the matrix $X$ for $k=1,2,...,n$, and because
$\mathbf{i}$ is the map which sends every matrix to its transpose).
\item We have $K\left[ \operatorname*{SL}_{n}\right] =K\left[
\operatorname*{M}_{n}\right] \diagup\left( \det-1\right) $. That is, we
have a canonical surjection $K\left[ \operatorname*{M}_{n}\right]
\rightarrow K\left[ \operatorname*{SL}_{n}\right] $. This surjection is a
ring homomorphism and is $\operatorname*{SL}\nolimits_{n}$-linear (where
$\operatorname*{SL}\nolimits_{n}$ acts on both $\operatorname*{M}%
\nolimits_{n}$ and $\operatorname*{SL}\nolimits_{n}$ by left multiplication).
Let us denote this surjection by $f$. Applying the extended Schmid lemma to
$G=\operatorname*{SL}\nolimits_{n}$, $H=U_{n}$, $A=K\left[ \operatorname*{M}%
_{n}\right] $ and $B=K\left[ \operatorname*{SL}_{n}\right] $, we thus
conclude that $f$ restricts to a surjective homomorphism $K\left[
\operatorname*{M}_{n}\right] ^{U_{n}}\rightarrow K\left[ \operatorname*{SL}%
\nolimits_{n}\right] ^{U_{n}}$ (since we know that $K\left[
\operatorname*{M}_{n}\right] $ is completely reducible as an
$\operatorname*{SL}\nolimits_{n}$-module (according to 5.4 Proposition 2)).
Thus, $K\left[ \operatorname*{SL}\nolimits_{n}\right] ^{U_{n}}=f\left(
K\left[ \operatorname*{M}_{n}\right] ^{U_{n}}\right) $. Hence, the ring
$K\left[ \operatorname*{SL}\nolimits_{n}\right] ^{U_{n}}$ is generated by
the $k\times k$-minors extracted from the last $k$ rows of the matrix $X$ for
$k=1,2,...,n$. The $k\times k$-minors for $k=n$ in this generating set are
redundant (because there is only one of them -- namely, $\det X$ -- and it is
identically equal to $1$ (since we are in $K\left[ \operatorname*{SL}%
\nolimits_{n}\right] $)); thus, we conclude that the ring $K\left[
\operatorname*{SL}\nolimits_{n}\right] ^{U_{n}}$ is generated by the $k\times
k$-minors extracted from the last $k$ rows of the matrix $X$ for
$k=1,2,...,n-1$.
\end{itemize}
Thus, our proof is complete.
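As an illustration of the claim (my own example, not from the text): for
$n=2$, write the matrix $X$ as $X=\begin{pmatrix}
a & b\\
c & d
\end{pmatrix}$. The generating set then consists of the $1\times1$-minors
extracted from the last row, i.e., of $c$ and $d$; so the claim says that
$K\left[ \operatorname*{SL}_{2}\right] ^{U_{2}}=K\left[ c,d\right] $.
And indeed, $c$ and $d$ are $U_{2}$-invariant, since left multiplication by
$\begin{pmatrix}
1 & t\\
0 & 1
\end{pmatrix}\in U_{2}$ leaves the last row of $X$ unchanged.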
\item \textbf{Page 9, second line from the bottom:} You write:
\textquotedblleft and let $p:V_{1}\otimes V_{2}\rightarrow U$ be a linear
projection [...]\textquotedblright. The \textquotedblleft
linear\textquotedblright\ means \textquotedblleft$G$-linear\textquotedblright%
\ here, not \textquotedblleft$K$-linear\textquotedblright\ as I first thought.
This may be worth pointing out.
\item \textbf{Page 10, Example 3:} There are several $\operatorname*{GL}%
\nolimits_{n}$-module structures on $\operatorname*{M}\nolimits_{n}$. Here you
apparently mean the adjoint structure; better to state this explicitly?
\item \textbf{Page 13, Exercise 30:} Nothing wrong here, but I believe you can
strengthen the \textquotedblleft canonical\textquotedblright\ to
\textquotedblleft unique\textquotedblright\ here. \newline Besides, the
$K$-algebra $A$ need not be commutative. It can even be an arbitrary
$K$-vector space with a $K$-bilinear \textquotedblleft
multiplication\textquotedblright, such as a Lie algebra.
\item \textbf{Page 14, two lines below Exercise 31:} \textquotedblleft
characterstic\textquotedblright\ should be \textquotedblleft
characteristic\textquotedblright.
\item \textbf{Page 14, proof of Theorem 2:} You write: \textquotedblleft
Clearly, we have $p_{j}=\sum_{\left\vert \rho\right\vert =j}j_{\rho}\cdot
z_{1}^{\rho_{1}}z_{2}^{\rho_{2}}\cdots z_{n}^{\rho_{n}}$.\textquotedblright%
\ This should be $p_{j}=\sum_{\left\vert \rho\right\vert =j}\dbinom{j}%
{\rho_{1},\rho_{2},...,\rho_{n}}j_{\rho}\cdot z_{1}^{\rho_{1}}z_{2}^{\rho_{2}%
}\cdots z_{n}^{\rho_{n}}$, where $\dbinom{j}{\rho_{1},\rho_{2},...,\rho_{n}}$
denotes a multinomial coefficient. Fortunately, this multinomial coefficient
is positive, so it doesn't create any troubles in the proof (neither in the
$\operatorname*{char}K=0$ nor in the $\operatorname*{char}K>\left\vert
G\right\vert $ case).
\item \textbf{Page 15, first line:} Typo: \textquotedblleft
symmertic\textquotedblright\ should be \textquotedblleft
symmetric\textquotedblright.
\end{itemize}
\section*{Section 2}
\begin{itemize}
\item \textbf{Page 18, second line from the bottom:} ``bases'' is the plural
form of ``basis''; the singular form is ``basis'', not ``bases''. This
mistake is repeated a few more times in your text.
\item \textbf{Page 20, Exercise 4:} I am not sure about this one, but I
\textit{believe} that you mean ``geometrically diagonalizable matrices''
(i.e., matrices diagonalizable over the algebraic closure of $K$) when you say
``diagonalizable matrices'' here. Otherwise I really have no idea how to solve
the exercise with your hint. Fortunately, the power of this exercise is not
diminished by restricting it to geometrically diagonalizable matrices.
\item \textbf{Page 21, Example:} You write: ``Since $f$ is a polynomial
function on $\operatorname*{M}\nolimits_{2}^{\prime}\times\operatorname*{M}%
\nolimits_{2}^{\prime}$ and the given invariants are algebraically
independent, it follows that $f$ must be a polynomial function in these
invariants.'' I don't understand this step -- as far as I understand (from
\texttt{http://mathoverflow.net/questions/32427}), the problem of finding a
generating set for the quotient field of the invariant ring is much easier
than the problem of finding a generating set for the invariant ring itself,
and algebraic independence of the generating set isn't enough either.
\item \textbf{Page 22, Remark:} You refer to ``Chapter II'' -- is this some
sequel that is being planned for the text?
\item \textbf{Page 22, Remark:} Maybe it would be better to remind the reader
that $n$ means $\dim V$ here.
\end{itemize}
\section*{Section 3}
\begin{itemize}
\item \textbf{Page 24, Proof of Lemma:} Replace ``an'' by ``and'' in ``[...]
belong to the same orbit under $\mathcal{S}_{m}$ if an only if [...]''.
\item \textbf{Page 27, Decomposition Theorem, part (c):} In $\mathrm{M}%
_{\lambda}\otimes L_{\lambda}$, the $\mathrm{M}$ should be an italicized $M$.
\item \textbf{Page 27, proof of the Decomposition Theorem:} You write: ``For
the last statement it remains to show that the endomorphism ring of every
simple $\mathcal{S}_{m}$-modules $M_{\lambda}$ [...]''. There is a typo here
(``$\mathcal{S}_{m}$-modules'' should be ``$\mathcal{S}_{m}$-module'').
\item \textbf{Page 28, Remark:} You write that ``$M_{\lambda}=M_{\lambda
}^{\circ}\otimes_{\mathbb{Q}}K$ and $L_{\lambda}=L_{\lambda}^{\circ}%
\otimes_{\mathbb{Q}}K$, where $M_{\lambda}^{\circ}$ is a simple $\mathbb{Q}%
\left[ \mathcal{S}_{m}\right] $-module $L_{\lambda}^{\circ}$ a simple
$\operatorname*{GL}\left( \mathbb{Q}\right) $-module.'' First, there is an
``and'' missing in this sentence, but there is some more substantial problem:
What does $\operatorname*{GL}\left( \mathbb{Q}\right) $ mean? Probably you
want to say $\operatorname*{GL}\left( V^{\prime}\right) $ where $V^{\prime}$
is a $\mathbb{Q}$-vector space such that $V\cong V^{\prime}\otimes
_{\mathbb{Q}}K$. It seems to me that there is a better way to formulate this:
If $V$ is a $\mathbb{Q}$-vector space, then $L_{\lambda}\left( V\otimes
_{\mathbb{Q}}K\right) =L_{\lambda}\left( V\right) \otimes_{\mathbb{Q}}K$.
\end{itemize}
\section*{Section 4}
\begin{itemize}
\item \textbf{Page 32, the end of \S 4.2:} The fifth line of a five-line
computation says:%
\[
=f_{\sigma^{-1}}\left( v_{1}\otimes\cdots\otimes v_{m}\otimes\varphi
_{1}\otimes\cdots\otimes\varphi_{m}\right) .
\]
The $\sigma^{-1}$ should be a $\sigma$ here, unless I am mistaken.
\item \textbf{Page 32, the end of \S 4.2:} You write: ``Thus, $\alpha
\left\langle \mathcal{S}_{m}\right\rangle =\left\langle f_{\sigma}%
\ \mid\ \sigma\in\mathcal{S}_{p}\right\rangle $ and the claim follows.'' I
guess the $p$ here should be an $m$.
\item \textbf{Page 33, proof of the Claim:} You write: ``Hence, the dual map
$\widetilde{\beta}^{\ast}$ identifies the multilinear invariants of
$\operatorname*{End}\left( V\right) ^{m}$ with those of $V^{m}\otimes
V^{\ast m}$.'' Isn't the $\otimes$ symbol supposed to be a $\oplus$ symbol?
\item \textbf{Page 33, proof of the Claim:} The second line of a five-line
computation says:%
\[
=\operatorname*{Tr}\nolimits_{\sigma}\left( \beta\left( v_{1}\otimes
\varphi_{1}\right) \beta\left( v_{2}\otimes\varphi_{2}\right)
\cdots\right) .
\]
I think there should be commas between the $\beta$'s here:%
\[
=\operatorname*{Tr}\nolimits_{\sigma}\left( \beta\left( v_{1}\otimes
\varphi_{1}\right) ,\beta\left( v_{2}\otimes\varphi_{2}\right)
,\cdots\right) .
\]
\item \textbf{Page 33, proof of the Claim:} On the right hand side of the
formula%
\[
\prod_{i=1}^{m}\varphi_{i}\left( v_{\sigma\left( i\right) }\right)
=f_{\sigma}\left( v_{1}\otimes\cdots\otimes v_{m}\otimes\varphi_{1}%
\otimes\cdots\otimes\varphi_{m}\right) ,
\]
the $\sigma$ should be a $\sigma^{-1}$ this time.
\item \textbf{Page 35, Lemma:} In the formula%
\[
\mathcal{P}:K\left[ V_{1}\oplus\cdots\oplus V_{r}\right] _{\left(
d_{1},...,v_{r}\right) }\rightarrow K\left[ V_{1}^{d_{1}}\oplus\cdots\oplus
V_{r}^{d_{r}}\right] _{\operatorname*{multlin}},
\]
the index $\left( d_{1},...,v_{r}\right) $ should be $\left( d_{1}%
,...,d_{r}\right) $.
\item \textbf{Page 36, \S 4.7:} You write: ``The restitution of the invariant
$\operatorname*{Tr}\nolimits_{\sigma}$ is a product of functions of the form
$\operatorname*{Tr}\left( i_{1},...,i_{k}\right) $.'' What you call
$\operatorname*{Tr}\left( i_{1},...,i_{k}\right) $ here has originally been
denoted by $\operatorname*{Tr}\nolimits_{i_{1}...i_{k}}$ in \S 2.5.
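(If I am unraveling the notations correctly, this $\operatorname*{Tr}\left(
i_{1},...,i_{k}\right) $ is the function sending a tuple $\left(
A_{1},...,A_{m}\right) $ of matrices to $\operatorname*{Tr}\left(
A_{i_{1}}A_{i_{2}}\cdots A_{i_{k}}\right) $, where $\left( i_{1}%
,i_{2},...,i_{k}\right) $ is a cycle of $\sigma$. For example,
$\operatorname*{Tr}\left( 1,2\right) $ sends $\left( A_{1},A_{2}\right) $
to $\operatorname*{Tr}\left( A_{1}A_{2}\right) $.)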
\item \textbf{Page 37:} You write: ``Now it follows from the FFT for
$\operatorname*{GL}\left( V\right) $ (2.1) that $rd=s$ and that $H$ is a
scalar multiple of the invariant $\left( 1\mid1\right) ^{d}\left(
2\mid1\right) ^{d}\cdots\left( r\mid1\right) ^{d}$.'' I think $\left(
1\mid1\right) ^{d}\left( 2\mid1\right) ^{d}\cdots\left( r\mid1\right)
^{d}$ should be $\left( 1\mid1\right) ^{d}\left( 1\mid2\right) ^{d}%
\cdots\left( 1\mid r\right) ^{d}$ here.
\item \textbf{Page 37:} You write: ``On the other hand, starting with
$h=\varepsilon$ [...]''. I think that starting with $h=\varepsilon$ does not
help, as the polarization of a polynomial of degree $r$ (such as $h$) has
nothing to do with the polarization of a polynomial of degree $1$ (such as
$\varepsilon$). It would rather make sense to start with $h=\varepsilon^{r}$.
Am I missing something?
\end{itemize}
\section*{Section 5}
\begin{itemize}
\item \textbf{Page 38, Exercise 1:} A word ``be'' is missing in ``Let
$\rho:\operatorname*{GL}\left( V\right) \rightarrow\operatorname*{GL}%
\nolimits_{N}\left( K\right) $ an irreducible [...]''.
\item \textbf{Page 40, Exercise 5:} I would rather write%
\[
\lambda_{1}\wedge\cdots\wedge\lambda_{j}\mapsto\left( v_{1}\wedge\cdots\wedge
v_{j}\mapsto\sum_{\sigma\in\mathcal{S}_{j}}\operatorname*{sgn}\sigma
\ \lambda_{1}\left( v_{\sigma\left( 1\right) }\right) \cdots\lambda
_{j}\left( v_{\sigma\left( j\right) }\right) \right)
\]
instead of%
\[
\lambda_{1}\wedge\cdots\wedge\lambda_{j}:v_{1}\wedge\cdots\wedge v_{j}%
\mapsto\sum_{\sigma\in\mathcal{S}_{j}}\operatorname*{sgn}\sigma\ \lambda
_{1}\left( v_{\sigma\left( 1\right) }\right) \cdots\lambda_{j}\left(
v_{\sigma\left( j\right) }\right)
\]
here.
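(For example, for $j=2$, the map I am suggesting sends $\lambda_{1}%
\wedge\lambda_{2}$ to the linear form $v_{1}\wedge v_{2}\mapsto\lambda
_{1}\left( v_{1}\right) \lambda_{2}\left( v_{2}\right) -\lambda
_{1}\left( v_{2}\right) \lambda_{2}\left( v_{1}\right) =\det\left(
\left( \lambda_{i}\left( v_{j}\right) \right) _{1\leq i,j\leq2}\right)
$.)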
\item \textbf{Page 40, Exercise 6:} There are some mistakes here:
\begin{itemize}
\item The definition of $\mu$ has two typos: A $\wedge$ sign is missing in the
$e_{1}\wedge\cdots\wedge\widehat{e_{i}}\wedge\cdots e_{n}$ term, and (more
importantly) there is a factor of $\left( -1\right) ^{i}$ (or $\left(
-1\right) ^{i-1}$, depending on your preferences) missing before this term.
\item In Assertion (a), the equality \textquotedblleft$\mu\left(
g\omega\right) =\det g\cdot\mu\left( \omega\right) $\textquotedblright%
\ should be \textquotedblleft$\mu\left( g\omega\right) =\det g\cdot
g\mu\left( \omega\right) $\textquotedblright\ instead.
\end{itemize}
\item \textbf{Page 42, proof of Proposition:} You write: \textquotedblleft Let
$\rho:\operatorname*{GL}\left( V\right) \rightarrow\operatorname*{GL}\left(
W\right) $ be a polynomial representation and $\widetilde{\rho}%
:\operatorname*{End}\left( V\right) \rightarrow\operatorname*{End}\left(
W\right) $ its extension (Lemma 6.2(b)).\textquotedblright\ I think the Lemma
you are referring to is 5.2 (b), not 6.2(b).
\item \textbf{Page 42, proof of Proposition:} At the end of the proof, you
construct an embedding of $\operatorname*{S}\nolimits^{m}V$ into $V^{\otimes
m}$. This is an embedding only if $\operatorname*{char}K=0$ (or at least
$\operatorname*{char}K>m$). Is it possible that you assume
$\operatorname*{char}K=0$ in the Proposition? I am a bit confused here because
you explicitly require $\operatorname*{char}K=0$ in Corollary 1 but you don't
mention $\operatorname*{char}K$ in the Proposition.
\item \textbf{Page 42, Remark:} You write: \textquotedblleft Then we show that
every finite dimensional subrepresentation of $\operatorname*{End}\left(
V\right) $ is contained in a direct sum $\oplus_{i}V^{\otimes n_{i}}%
$.\textquotedblright\ You mean $K\left[ \operatorname*{End}\left( V\right)
\right] $ here, not $\operatorname*{End}\left( V\right) $.
\item \textbf{Page 43:} You write: \textquotedblleft Clearly, the two coincide
if and only if $G\subset\operatorname*{SL}\left( V\right) $%
.\textquotedblright\ I think that \textquotedblleft clearly\textquotedblright%
\ the opposite is the case: take $G=\left\{ s\in\operatorname*{GL}\left(
V\right) \ \mid\ \left( \det s\right) ^{2}=1\right\} $. Or is it me who
doesn't understand something here? I know that my counterexample is perverse
from an algebraic-geometric viewpoint (it is not even connected), and I am
wondering whether a simple additional condition rescues the assertion.
Otherwise it would probably be wiser to explicitly list the groups $G$ for
which you claim the assertion to hold.
\item \textbf{Page 45, first line:} There is a typo here: \textquotedblleft
representaiton\textquotedblright\ should be \textquotedblleft
representation\textquotedblright.
\item \textbf{Page 45:} In the last sentence before Exercise 13, you write:
\textquotedblleft Hence, every irreducible representation of
$\operatorname*{SL}\left( V\right) $ occurs in some $V^{\otimes m}$
[...]\textquotedblright. To be more precise, \textquotedblleft
irreducible\textquotedblright\ should be \textquotedblleft irreducible
polynomial\textquotedblright\ here.
\item \textbf{\S 5.5:} There is no mistake on your part here, but honestly I
would find it better if you explained once again that $K\left[ G\right]
$ means the ring of polynomial functions on $G$, while $KG$ means the group
ring of $G$ (the ring of formal linear combinations of elements of $G$).
Unfortunately, several people (one of them being myself) have the habit of
reading \textit{both} $K\left[ G\right] $ and $KG$ as the group ring of $G$,
which conflicts with your notation here.
\item \textbf{Page 46, Exercise 14:} What you call $\operatorname*{Map}$ here
was called $\operatorname*{Mor}$ one page above.
\item \textbf{Page 47:} You write: \textquotedblleft In other words,
$\chi=r_{1}\varepsilon_{1}+r_{2}\varepsilon_{2}+\cdots r_{n}\varepsilon_{n}%
\in\mathcal{X}\left( T_{n}\right) $ [...]\textquotedblright. There is a plus
sign missing (before $r_{n}\varepsilon_{n}$).
\item \textbf{Page 47:} You write: \textquotedblleft the eigenspaces
$W_{\lambda}$ are the corresponding \textit{weight space},\textquotedblright.
This should be a plural: \textquotedblleft weight space\textbf{s}%
\textquotedblright.
\item \textbf{Page 50, between Corollary 1 and Definition 2:} You refer to
\textquotedblleft3.3 Corollary 1\textquotedblright. I think you mean
\textquotedblleft5.3 Corollary 1\textquotedblright.
\item \textbf{Page 51, Example (2):} In the equation $p_{n}\varepsilon
_{1}+p_{n-1}\varepsilon_{2}+\cdots p_{1}\varepsilon_{n}=\sigma_{0}\lambda$,
there is a plus sign missing in front of the $p_{1}\varepsilon_{n}$ term. This
is the third time I am seeing this in your text -- maybe it has a meaning I
don't understand?
\item \textbf{Page 51, Exercise 22:} You write: \textquotedblleft(Cf. 3.3.
Exercise 4.)\textquotedblright\ Actually Exercise 4 is in \S 3.2.
\item \textbf{Page 52:} You write: \textquotedblleft Therefore, we get an
action of $\mathcal{S}_{n}$ on the character group $\mathcal{X}\left(
T_{n}\right) $ defined by $\sigma\left( \chi\left( t\right) \right)
:=\chi\left( \sigma^{-1}t\sigma\right) $ [...]\textquotedblright. The
$\sigma\left( \chi\left( t\right) \right) $ term should be $\left(
\sigma\left( \chi\right) \right) \left( t\right) $, apparently.
\item \textbf{Page 52, proof of Proposition 1:} There is a wrong reference in
\textquotedblleft where $\omega_{j}:=\varepsilon_{1}+\cdots+\varepsilon_{j}$
is the highest weight of $\wedge^{j}K^{n}$ (5.7 Example
(2)).\textquotedblright\ You want Example (1), not (2).
\item \textbf{Page 52, proof of Proposition 1:} You claim that the element $w$
\textquotedblleft is fixed under $U_{n}$ and has weight $\lambda^{\prime
}:=\sum_{i=1}^{n-1}p_{i}\omega_{i}$\textquotedblright. It seems to me that the
weight should rather be $\lambda^{\prime}:=\sum_{i=1}^{n-1}m_{i}\omega_{i}$.
\item \textbf{Page 52, proof of Proposition 1:} Another incorrect reference:
``It follows from Proposition 6.6'' should be ``It follows from Proposition 5.7''.
\item \textbf{Page 53, definition of ``height'':} Unless I have overlooked it,
there is no definition of $\operatorname*{ht}\lambda$ in your text. It would
be enough to say that $\operatorname*{ht}\lambda$ is an abbreviation for the
height of $\lambda$.
\item \textbf{Page 53, Exercise 25:} Are you sure that $n=k\cdot\left\vert
\lambda\right\vert $ and not $\left\vert \lambda\right\vert =nk$?
\item \textbf{Page 53, \S 5.9:} Two wrong references in the first paragraph
here
(before Proposition 1): ``Theorem 3.5'' should be ``Theorem 3.3'', and ``In
\S 7'' should be ``In \S 6''.
\item \textbf{Page 54, Remark 1:} You write: ``Moreover, embedding
$\operatorname*{GL}\nolimits_{n}\subset\operatorname*{GL}\nolimits_{n+1}$ we
get a canonical inclusion%
\[
L_{\lambda}\left( n\right) \subset L_{\lambda}\left( n+1\right)
=\left\langle L_{\lambda}\left( n\right) \right\rangle _{\operatorname*{GL}%
\nolimits_{n+1}},
\]
[...]''. How exactly do you get this inclusion? (My preferred way to get an
inclusion $L_{\lambda}\left( n\right) \subset L_{\lambda}\left( n+1\right)
$ is to use the functoriality of $L_{\lambda}$, but you don't introduce this
until Remark 2.)
\item \textbf{Page 54, Proof of Lemma:} You write: ``Clearly, this is the
regular representation, i.e., $\left( V^{\otimes n}\right) _{\det}%
\simeq\bigoplus_{\lambda}M_{\lambda}\otimes M_{\lambda}$.'' I think you are
silently using $M_{\lambda}^{\ast}\simeq M_{\lambda}$ here; maybe it would be
better to be more explicit about it.
\item \textbf{Page 55:} In the formula%
\[
V_{n}\left( \lambda\right) _{\det}\simeq M_{\lambda}\otimes L_{\lambda
}\left( n\right) _{\det}\simeq M_{\lambda}\otimes M_{\lambda},
\]
the $V_{n}\left( \lambda\right) $ should be $V_{\lambda}\left( n\right) $.
\item \textbf{Page 57, Exercise 2:} ``representation'' should be ``rational
representation'' here, I think.
\end{itemize}
\section*{Section 6}
\begin{itemize}
\item \textbf{Page 59, Exercise 4:} I believe this is wrong (as, e.g., the
example of $n=2$, $\lambda=\left( 2\right) $ and $r=1$ shows, in which your
definition yields $\lambda^{\vee}=\left( 0\right) =\varnothing$). There
might be several ways to fix it. The one that I know is the following: If
$\lambda$ is a partition of height $\leq n$, and if $m$ is an integer such
that $m\geq\lambda_{1}$, then
\[
s_{\lambda^{\vee}}\left( x_{1},x_{2},\ldots,x_{n}\right) =\left( x_{1}%
x_{2}\cdots x_{n}\right) ^{m}\cdot s_{\lambda}\left( x_{1}^{-1},x_{2}%
^{-1},\ldots,x_{n}^{-1}\right) ,
\]
where $\lambda^{\vee}$ is the partition $\left( m-\lambda_{n},m-\lambda
_{n-1},\ldots,m-\lambda_{1}\right) $.
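For instance, taking $n=2$, $\lambda=\left( 2,1\right) $ and $m=2$ (so that
$\lambda^{\vee}=\left( 2-1,2-2\right) =\left( 1\right) $), the proposed
identity becomes
\[
s_{\left( 1\right) }\left( x_{1},x_{2}\right) =\left( x_{1}x_{2}\right)
^{2}\cdot s_{\left( 2,1\right) }\left( x_{1}^{-1},x_{2}^{-1}\right) ,
\]
and indeed both sides equal $x_{1}+x_{2}$, since $s_{\left( 2,1\right)
}\left( x_{1},x_{2}\right) =x_{1}x_{2}\left( x_{1}+x_{2}\right) $.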
\item \textbf{Page 60, Cauchy's formula:} Replace $y_{n}$ by $y_{m}$ in
\textquotedblleft where both sides are considered as elements in the ring
$\mathbb{Z}\left[ \left[ x_{1},...,x_{n},y_{1},...,y_{n}\right] \right]
$\textquotedblright.
\item \textbf{Page 60, Proof of Cauchy's formula:} In the second line of this
proof, replace $x_{m+1}=...=x_{n}=0$ by $y_{m+1}=...=y_{n}=0$.
\item \textbf{Page 61:} In the middle of the page, the determinant%
\[
\det\left(
\begin{array}
[c]{cccc}%
1 & 0 & \cdots & 0\\
\dfrac{y_{1}}{1-x_{2}y_{1}} & \dfrac{y_{2}}{1-x_{2}y_{2}} & \cdots &
\dfrac{y_{n}}{1-x_{2}y_{n}}\\
\vdots & & & \vdots
\end{array}
\right)
\]
should either be%
\[
\det\left(
\begin{array}
[c]{cccc}%
1 & 1 & \cdots & 1\\
\dfrac{y_{1}}{1-x_{2}y_{1}} & \dfrac{y_{2}}{1-x_{2}y_{2}} & \cdots &
\dfrac{y_{n}}{1-x_{2}y_{n}}\\
\vdots & & & \vdots
\end{array}
\right)
\]
(the $0$'s have been replaced by $1$'s) or be%
\[
\det\left(
\begin{array}
[c]{cccc}%
1 & 0 & \cdots & 0\\
\dfrac{y_{1}}{1-x_{2}y_{1}} & \dfrac{y_{2}}{1-x_{2}y_{2}}-\dfrac{y_{1}%
}{1-x_{2}y_{1}} & \cdots & \dfrac{y_{n}}{1-x_{2}y_{n}}-\dfrac{y_{1}}%
{1-x_{2}y_{1}}\\
\vdots & & & \vdots
\end{array}
\right) .
\]
\item \textbf{Page 62, \S 6.3:} In the uppermost formula on page 62, the term
$\sum\limits_{\sigma\in\mathcal{S}_{n}}\operatorname*{sgn}\sigma\cdot
y_{\tau\sigma\left( 1\right) }^{\nu_{1}}\cdots y_{\tau\sigma\left(
n\right) }^{\nu_{n}}$ should be $\sum\limits_{\sigma\in\mathcal{S}_{n}%
}\operatorname*{sgn}\sigma\cdot y_{\sigma\tau\left( 1\right) }^{\nu_{1}%
}\cdots y_{\sigma\tau\left( n\right) }^{\nu_{n}}$, I think. (Of course,
$\sum\limits_{\sigma\in\mathcal{S}_{n}}\operatorname*{sgn}\sigma\cdot
y_{\tau\sigma\left( 1\right) }^{\nu_{1}}\cdots y_{\tau\sigma\left(
n\right) }^{\nu_{n}}$ is correct as well, but $\sum\limits_{\sigma
\in\mathcal{S}_{n}}\operatorname*{sgn}\sigma\cdot y_{\sigma\tau\left(
1\right) }^{\nu_{1}}\cdots y_{\sigma\tau\left( n\right) }^{\nu_{n}}$ is the
term you get by replacing the summation by a double summation as described in
your text.)
\item \textbf{Page 62, \S 6.4:} When you define power sums, it might be useful
to state your policy regarding $n_{0}$: is it undefined, is it defined as $1$,
is it defined as $n$ ? Unless you define it as $1$, the definition%
\[
n_{\mu}\left( x_{1},...,x_{n}\right) :=\prod_{i\geq1}n_{\mu_{i}}\left(
x_{1},...,x_{n}\right)
\]
should be
\[
n_{\mu}\left( x_{1},...,x_{n}\right) :=\prod_{i=1}^{k}n_{\mu_{i}}\left(
x_{1},...,x_{n}\right)
\]
where $k=\max\left\{ i\in\mathbb{N}\text{\ }\mid\ \mu_{i}\neq0\right\} $.
\item \textbf{Page 63, Proof of }$\operatorname*{Tr}\varphi=n_{\mu}\left(
x_{1},...,x_{n}\right) $\textbf{:} Here you write \textquotedblleft In fact,
the lines $K\left( e_{i_{1}}\otimes\cdots\otimes e_{i_{m}}\right) \subset
V^{\otimes m}$ are stable under $\varphi$.\textquotedblright\ It seems to me
that \textquotedblleft stable\textquotedblright\ is not the right word here;
they are permuted (i.e., mapped to each other) by $\varphi$.
\item \textbf{Page 63, proof of Lemma 1:} Three times on this page, you write
$\sum\limits_{\nu\geq0}$ while you actually mean $\sum\limits_{\nu\geq1}$.
\item \textbf{Page 63, proof of Lemma 1:} Here you write \textquotedblleft we
can calculate the term of degree $m$ [...]\textquotedblright. Actually you
mean the term of degree $2m$, at least as far as the total degree in all
variables together is concerned. Of course, you can also describe it as the
term of degree $m$ in the variables $x_{1}$, $x_{2}$, $...$, $x_{n}$.
\item \textbf{Page 64:} In the long calculation of $R_{m}$ (in the middle of
page 64), there is a minor typo: $\sum\limits_{\mu\in\mathcal{P}_{\mathcal{m}%
}}$ should be $\sum\limits_{\mu\in\mathcal{P}_{m}}$.
\item \textbf{Page 65, Lemma 2 (c):} You might want to add that you consider
$b_{\lambda}$ as a class function on $\mathcal{S}_{m}$ here (and not just as a
function on partitions of $m$).
\item \textbf{Page 65, Lemma 2 (c):} You write $S_{\lambda}:=S_{\lambda_{1}%
}\times\cdots\times S_{\lambda_{r}}$. Probably the $r$ means $n$ here.
\item \textbf{Page 65, Lemma 2:} It seems that you are using a normal italic
letter $S$ here for the symmetric groups, whereas you use a calligraphic
$\mathcal{S}$ in the rest of the text.
\item \textbf{Page 65, Proof of Lemma 2 (c):} You write: \textquotedblleft
This shows that $b_{\lambda}\left( \mu\right) $ is the number of
possibilities to decompose the set $M=\left\{ \mu_{1},\mu_{2},...,\mu
_{m}\right\} $ into $m$ disjoint subsets $M=M_{1}\cup M_{2}\cup...\cup M_{m}$
such that the sum of the $\mu_{j}$'s in $M_{i}$ is equal to $\lambda_{i}%
$.\textquotedblright\ There are three mistakes here: First, we don't want $m$
disjoint subsets $M=M_{1}\cup M_{2}\cup...\cup M_{m}$, but we want $n$
disjoint subsets $M=M_{1}\cup M_{2}\cup...\cup M_{n}$. Secondly, $\left\{
\mu_{1},\mu_{2},...,\mu_{m}\right\} $ should be $\left\{ \mu_{1},\mu
_{2},...,\mu_{\phi}\right\} $, where $\phi$ is the greatest integer
satisfying $\mu_{\phi}\neq0$ (in fact, we do need this, because the product
$\left( x_{1}^{\mu_{1}}+x_{2}^{\mu_{1}}+\cdots\right) \left( x_{1}^{\mu
_{2}}+x_{2}^{\mu_{2}}+\cdots\right) \cdots$ is supposed to end with this
$\left( x_{1}^{\mu_{\phi}}+x_{2}^{\mu_{\phi}}+\cdots\right) $ and not to go
on infinitely). Finally, we are not decomposing the set $M=\left\{ \mu
_{1},\mu_{2},...,\mu_{\phi}\right\} $, but rather the set $M=\left\{
1,2,...,\phi\right\} $ (in such a way that $\sum\limits_{j\in M_{i}}\mu
_{j}=\lambda_{i}$). The difference is that some $\mu_{j}$'s may be equal while
the corresponding $j$'s are not. Accordingly, \textquotedblleft the set
$M=\left\{ \mu_{1},...,\mu_{m}\right\} $ of the cycle lengths of $\sigma
$\textquotedblright\ should be \textquotedblleft the set $M=\left\{
1,2,...,\phi\right\} $ labeling the cycles of $\sigma$\textquotedblright.
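To illustrate the last point with a small example: for $\mu=\left(
1,1\right) $ and $\lambda=\left( 2\right) $ (so $\phi=2$), decomposing the
index set $M=\left\{ 1,2\right\} $ gives exactly one decomposition, namely
$M_{1}=\left\{ 1,2\right\} $ with
\[
\mu_{1}+\mu_{2}=1+1=2=\lambda_{1},
\]
in accordance with $b_{\lambda}\left( \mu\right) =1$; the set of cycle
lengths $\left\{ \mu_{1},\mu_{2}\right\} =\left\{ 1\right\} $, on the
other hand, has no subset whose elements sum to $2$.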
\item \textbf{Page 66, Proof of Theorem:} You write: \textquotedblleft Using
Lemma 2 (b) (and again (c)) we see that the sign must be $+1$ since
[...]\textquotedblright. This is slightly incomplete - in fact, you need to
know that $a_{\lambda}\neq-a_{\mu}$ for $\lambda\neq\mu$ here, because
otherwise it could \textquotedblleft cancel\textquotedblright\ against some
$a_{\mu}$ with $\mu>\lambda$.
\item \textbf{Page 66, Exercise 11:} Replace \textquotedblleft where $\ell
_{i}=\lambda_{i}+m-i$\textquotedblright\ by \textquotedblleft where $\ell
_{i}=\lambda_{i}+r-i$\textquotedblright.
\item \textbf{Page 66, Exercise 11:} Replace $\Delta\left( x_{1}+\cdots
+x_{r}\right) ^{r}$ by $\Delta\left( x_{1}+\cdots+x_{r}\right) ^{m}$ in the Hint.
\item \textbf{Page 66, Exercise 12:} After \textquotedblleft we associate a
\textit{hook} consisting of all boxes below or to the right hand side of
$B$\textquotedblright, you might include \textquotedblleft(including $B$
itself)\textquotedblright.
\item \textbf{Page 67, Example (2):} At the end of this example, $K^{n}\diagup
K\left( 1,1,...,1\right) $ should be $K^{m}\diagup K\left(
1,1,...,1\right) $.
\item \textbf{Page 67, Proof of Theorem:} You write: \textquotedblleft Since
the $a_{\lambda}$ form a $\mathbb{Z}$-basis of the class functions
[...]\textquotedblright. In fact they don't. They form a $\mathbb{Q}$-basis
only (but this is enough for the proof). Directly after that, $\widetilde{s}%
_{\lambda}\in\mathbb{Z}\left[ x_{1},...,x_{n}\right] $ should be replaced by
$\widetilde{s}_{\lambda}\in\mathbb{Q}\left[ x_{1},...,x_{n}\right] $.
\item \textbf{Page 67, Proof of Theorem:} In the formula%
\[
\chi\left( \sigma,x_{1},...,x_{n}\right) =n_{\mu}\left( x_{1}%
,...,x_{n}\right) ,
\]
there should be a semicolon instead of a comma after the $\sigma$ (at least,
this is how you introduced the $\chi$ notation).
\item \textbf{Page 69, proof of Corollary 2:} In the formula%
\[
\prod_{i=1...n,j=1...,m}\dfrac{1}{1-x_{i}y_{j}},
\]
the commas below the product sign are inconsistent.
\item \textbf{Page 69, proof of Corollary 2:} What do you mean by
\textquotedblleft Now we can argue as in the proof of the Theorem
above\textquotedblright? You only need to say that a representation is
uniquely determined by its character, or is there some other trick that you
are using here?
\item \textbf{Page 69, Corollary 3:} You have misspelt \textquotedblleft
is\textquotedblright\ as \textquotedblleft if\textquotedblright\ twice (in the
context \textquotedblleft where the sum if over all partitions
[...]\textquotedblright; this appears once after the $S^{m}$ formula and
once again after the $\bigwedge^{m}$ formula).
\item \textbf{Page 69, Exercise 13:} \textquotedblleft Show that
$\operatorname*{ht}\lambda$ is the smallest integer [...]\textquotedblright\ -
are you sure about it? I think the smallest such integer is $\lambda_{1}$. The
statement that \textquotedblleft$\det\nolimits^{\operatorname*{ht}\lambda
}L_{\lambda}\left( n\right) ^{\ast}\simeq L_{\lambda^{c}}\left( n\right)
$\textquotedblright\ should be replaced by \textquotedblleft$\det
\nolimits^{m}L_{\lambda}\left( n\right) ^{\ast}\cong L_{\lambda^{\vee}%
}\left( n\right) $ for any integer $m\geq\lambda_{1}$, where $\lambda^{\vee
}$ denotes the partition $\left( m-\lambda_{n},m-\lambda_{n-1},\ldots
,m-\lambda_{1}\right) $\textquotedblright.
\item \textbf{Page 70:} You write: \textquotedblleft We will first show that
there is an interesting relation between such multiplicities for the general
linear group and those for the symmetric group.\textquotedblright\ This is a
bit misleading; the multiplicities are of different types for the general
linear group and for the symmetric group. For the general linear group, you
take the interior tensor product between two representations of \textit{one
and the same} $\operatorname*{GL}\nolimits_{n}$. For the symmetric group, you
take the tensor product of a representation of $S_{a}$ with a representation
of $S_{b}$, generally for different $a$ and $b$. The question of decomposing
the interior tensor product of two representations of the \textit{same}
symmetric group $S_{n}$ is harder.
\item \textbf{Page 70:} \textquotedblleft\textsc{Pieri}'s
formula\textquotedblright\ is misspelt \textquotedblleft\textsc{Perie}'s
formula\textquotedblright\ here.
\end{itemize}
\section*{Section 7}
\begin{itemize}
\item \textbf{Page 74, Exercise 3:} Could not $N>0$ be weakened to $N\geq0$
here? This way the result would also encompass groups which don't have any
nonconstant invariant.
\item \textbf{Page 74, \S 7.2:} Before Corollary 1, you write: ``Part (a) of
the following corollary is \textsc{Weyl}'s Theorem A of the previous section
7.1.'' I don't see how the $K\left[ V^{p}\right] ^{G}=\left\langle K\left[
V^{n}\right] ^{G}\right\rangle _{\operatorname*{GL}\nolimits_{p}}$ part of
Weyl's Theorem A should directly follow from Corollary 1 (a). While Corollary
1 (a) clearly yields that $K\left[ V^{p}\right] ^{G}$ is \textit{generated}
by $\left\langle K\left[ V^{n}\right] ^{G}\right\rangle _{\operatorname*{GL}%
\nolimits_{p}}$, I don't see a direct reason why it \textit{equals}
$\left\langle K\left[ V^{n}\right] ^{G}\right\rangle _{\operatorname*{GL}%
\nolimits_{p}}$, i.e., why $\left\langle K\left[ V^{n}\right]
^{G}\right\rangle _{\operatorname*{GL}\nolimits_{p}}$ is a $K$-algebra (i.e.,
why it is closed under multiplication).
\item \textbf{Page 74, \S 7.2:} You use the notion of a ``multihomogeneous
subspace'' without defining it (you only defined a multihomogeneous component
some time before). I guess you mean a subspace which is the (direct) sum of
its multihomogeneous components?
\item \textbf{Page 74, Lemma.} You might add that the lemma also holds more
generally for every $T_{p}$-stable subspace $F\subset K\left[ V^{p}\right] $.
\item \textbf{Page 75, but also many more times throughout the text:} You use
the $\subset$ and $\subseteq$ signs as synonyms (both meaning ``subset'', not
``proper subset''). Find/Replace should do the job here.
\item \textbf{Page 76, Lemma:} I think the right hand side only makes sense
for $i\neq j$ (unless we are talking about a topological field such as
$\mathbb{R}$ or $\mathbb{C}$).
\item \textbf{Page 77, Proof of Lemma.} In the very first formula of page 77,
the left hand side should be $f_{\nu}\left( v_{1},...,v_{p}\right) $ rather
than $f_{\nu}\left( x_{1},...,x_{p}\right) $.
\item \textbf{Page 77, Example (c):} You write: ``assuming that $f=f\left(
v_{0}\right) $ is a function depending only on the first copy of $V^{d+1}$
''. This would be less ambiguous if worded ``[...] on the first copy of $V$ in
$V^{d+1}$ ''.
\item \textbf{Page 78, Proof of the Proposition:} After (3), you write:
``Clearly the sum is finite [...]''. It is only finite for $i\neq j$, and you
need an additional argument (actually, a reference to the lemma that a $T_{p}%
$-stable subspace is always multihomogeneous) to handle the case $i=j$.
\item \textbf{Page 79, \S 7.5:} The notion ``unimodular'' has never been
defined in the text. Is a linear group said to be unimodular if it is included
in $\operatorname*{SL}V$ ? In this case, what is the relation to the standard
definition of ``unimodular'' in Lie group theory?
\item \textbf{Page 79, \S 7.5:} Nitpicking: You write: ``the determinant of
the $n\times n$ matrix consisting of the column vectors $v_{1},...v_{n}$''. A
comma is missing before $v_{n}$ here.
\item \textbf{Page 79, Exercise 5:} More nitpicking: ``or again a
determinant'' should be ``or again $\pm$ a determinant'', since you are only
counting the $\left[ i_{1},...,i_{n}\right] $ with $i_{1}<\cdots<i_{n}\leq
p$. If I were you, I would replace the $\sum_{i_{1},...,i_{p}}$ sign by a
$\sum_{i_{1}<\cdots<i_{p}}$ sign.
\item You write: ``[...] of degree $>2$ can be expressed as a polynomial in
the traces $\operatorname*{Tr}\nolimits_{i}$ and $\operatorname*{Tr}%
\nolimits_{ij}$.'' But what about $\operatorname*{Tr}\nolimits_{ijk}$ ?
\end{itemize}
\section*{References}
\begin{itemize}
\item \textbf{[Der91]:} Typo: ``fomres''.
\item \textbf{[DiC70]:} ``Als Buch bei'' is German.
\item \textbf{[For87]:} This is not vol. 1287 but vol. 1278.
\item \textbf{[How95]:} This appears two times in the list of references.
\end{itemize}
%\section*{``Constructive Invariant Theory'' paper (by Harm Derksen and Hanspeter
%Kraft)}
%\begin{itemize}
%\item \textbf{Page 224:} The equation $\left[ i,j\right] :=\det\left(
%\begin{array}
%[c]{cc}%
%a_{i} & a_{j}\\
%b_{1} & b_{j}%
%\end{array}
%\right) $ has a typo (the $1$ should be an $i$).
%\item \textbf{Theorem 2.3:} The formulation of this theorem would be much
%clearer if a comma after ``for all $i$'' would be added.
%\item \textbf{Theorem 5.1:} ``finite groups'' should be ``finite group''.
%\end{itemize}
\end{document}