\documentclass[numbers=enddot,12pt,final,onecolumn,notitlepage]{scrartcl}%
\usepackage[headsepline,footsepline,manualmark]{scrlayer-scrpage}
\usepackage[all,cmtip]{xy}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{framed}
\usepackage{amsmath}
\usepackage{comment}
\usepackage{color}
\usepackage{hyperref}
\usepackage[sc]{mathpazo}
\usepackage[T1]{fontenc}
\usepackage{amsthm}
%TCIDATA{OutputFilter=latex2.dll}
%TCIDATA{Version=5.50.0.2960}
%TCIDATA{LastRevised=Monday, July 08, 2019 07:18:00}
%TCIDATA{SuppressPackageManagement}
%TCIDATA{<META NAME="GraphicsSave" CONTENT="32">}
%TCIDATA{<META NAME="SaveForMode" CONTENT="1">}
%TCIDATA{BibliographyScheme=Manual}
%BeginMSIPreambleData
\providecommand{\U}[1]{\protect\rule{.1in}{.1in}}
%EndMSIPreambleData
\theoremstyle{definition}
\newtheorem{theo}{Theorem}[section]
\newenvironment{theorem}[1][]
{\begin{theo}[#1]\begin{leftbar}}
{\end{leftbar}\end{theo}}
\newtheorem{lem}[theo]{Lemma}
\newenvironment{lemma}[1][]
{\begin{lem}[#1]\begin{leftbar}}
{\end{leftbar}\end{lem}}
\newtheorem{prop}[theo]{Proposition}
\newenvironment{proposition}[1][]
{\begin{prop}[#1]\begin{leftbar}}
{\end{leftbar}\end{prop}}
\newtheorem{defi}[theo]{Definition}
\newenvironment{definition}[1][]
{\begin{defi}[#1]\begin{leftbar}}
{\end{leftbar}\end{defi}}
\newtheorem{remk}[theo]{Remark}
\newenvironment{remark}[1][]
{\begin{remk}[#1]\begin{leftbar}}
{\end{leftbar}\end{remk}}
\newtheorem{coro}[theo]{Corollary}
\newenvironment{corollary}[1][]
{\begin{coro}[#1]\begin{leftbar}}
{\end{leftbar}\end{coro}}
\newtheorem{conv}[theo]{Convention}
\newenvironment{condition}[1][]
{\begin{conv}[#1]\begin{leftbar}}
{\end{leftbar}\end{conv}}
\newtheorem{quest}[theo]{Question}
\newenvironment{algorithm}[1][]
{\begin{quest}[#1]\begin{leftbar}}
{\end{leftbar}\end{quest}}
\newtheorem{warn}[theo]{Warning}
\newenvironment{conclusion}[1][]
{\begin{warn}[#1]\begin{leftbar}}
{\end{leftbar}\end{warn}}
\newtheorem{conj}[theo]{Conjecture}
\newenvironment{conjecture}[1][]
{\begin{conj}[#1]\begin{leftbar}}
{\end{leftbar}\end{conj}}
\newtheorem{exmp}[theo]{Example}
\newenvironment{example}[1][]
{\begin{exmp}[#1]\begin{leftbar}}
{\end{leftbar}\end{exmp}}
\iffalse
\newenvironment{proof}[1][Proof]{\noindent\textbf{#1.} }{\ \rule{0.5em}{0.5em}}
\fi
\newenvironment{verlong}{}{}
\newenvironment{vershort}{}{}
\newenvironment{noncompile}{}{}
\excludecomment{verlong}
\includecomment{vershort}
\excludecomment{noncompile}
\newcommand{\kk}{\mathbf{k}}
\newcommand{\id}{\operatorname{id}}
\newcommand{\ev}{\operatorname{ev}}
\newcommand{\Comp}{\operatorname{Comp}}
\newcommand{\bk}{\mathbf{k}}
\newcommand{\Nplus}{\mathbb{N}_{+}}
\newcommand{\NN}{\mathbb{N}}
\let\sumnonlimits\sum
\let\prodnonlimits\prod
\renewcommand{\sum}{\sumnonlimits\limits}
\renewcommand{\prod}{\prodnonlimits\limits}
\DeclareSymbolFont{bbold}{U}{bbold}{m}{n}
\DeclareSymbolFontAlphabet{\mathbbold}{bbold}
\setlength\textheight{22.5cm}
\setlength\textwidth{15cm}
\ihead{Errata to SymFuncs2017}
\ohead{\today}
\begin{document}

\begin{center}
\textbf{An involutive introduction to symmetric functions}

\textit{Mark Wildon}

\url{http://www.ma.rhul.ac.uk/~uvah099/Maths/Sym/SymFuncs2017.pdf}

version of 20 March 2019

\textbf{Errata and addenda by Darij Grinberg}

\bigskip
\end{center}

%\setcounter{section}{}


\section{Errata}

\begin{itemize}
\item \textbf{pages 1--2, Preface:} Something similar to your solution to
Question 21 appears in the proof of Theorem 6.3 of:

\qquad Anthony Mendes, Jeffrey Remmel,

\qquad\textit{Counting with Symmetric Functions},

\qquad Springer 2015.

Might be worth a brief comparison.

Also, the book

\qquad Nicholas A. Loehr,

\qquad\textit{Bijective Combinatorics},

\qquad CRC Press 2011

is worth mentioning: its Chapters 11 and 12 have a goal similar to that of
your notes (but differ in their coverage and proofs).

\item \textbf{page 4, \S 1.3:} \textquotedblleft acts as a group of linear
transformations of $\widehat{\mathbf{C}}\left[  x_{1},x_{2},\ldots\right]  $
by linear extension of $x_{i}\sigma=x_{i\sigma}$\textquotedblright%
\ $\rightarrow$ \textquotedblleft acts as a group of formally continuous
$\mathbf{C}$-algebra endomorphisms of $\widehat{\mathbf{C}}\left[  x_{1}%
,x_{2},\ldots\right]  $ by requiring $x_{i}\sigma=x_{i\sigma}$ (where a
$\mathbf{C}$-linear map from $\widehat{\mathbf{C}}\left[  x_{1},x_{2}%
,\ldots\right]  $ is said to be \textit{formally continuous} if it respects
not just finite $\mathbf{C}$-linear combinations, but also infinite ones as
long as they are well-defined)\textquotedblright.

I admit that invoking some kind of continuity appears a bit incongruous in a
combinatorics text, but I don't see an easy way to avoid it.

\item \textbf{page 5, Lemma 1.2:} It would be helpful to explain what size the
0-1 matrices are supposed to have. Namely, they either are $\infty\times
\infty$-matrices, or they are finite matrices, but in the latter case you
should say that two such matrices count as equal if they only differ in zero
rows at the bottom or zero columns on the right\footnote{i.e., the two 0-1
matrices $\left(
\begin{array}
[c]{cc}%
0 & 1\\
1 & 1
\end{array}
\right)  $ and $\left(
\begin{array}
[c]{cccc}%
0 & 1 & 0 & 0\\
1 & 1 & 0 & 0\\
0 & 0 & 0 & 0
\end{array}
\right)  $ are considered to be identical} (otherwise you'll overcount them).

\item \textbf{page 6, proof of Proposition 1.3:} Please explain how exactly
you regard $X$ as a matrix. (Per se, $X$ is just a family of complex numbers
indexed by pairs of partitions of $n$. To make it a matrix, you need to
totally order these partitions in a way that extends the dominance order.)

\item \textbf{page 6:} Under the example, it would be good to clarify that
expressions such as $X_{\nu\mu}^{-1}$ mean $\left(  X^{-1}\right)  _{\nu\mu}$
(and not $\left(  X_{\nu\mu}\right)  ^{-1}$).

\item \textbf{page 6:} Under the example, after \textquotedblleft we get
$\sum_{\mu\trianglerighteq\kappa}X_{\mu\kappa}^{-1}e_{\mu}=\operatorname*{mon}%
\nolimits_{\kappa^{\prime}}$\textquotedblright, add a period.

\item \textbf{page 7, \S 1.5:} In \textquotedblleft$H\left(  t\right)
=\prod_{i=1}^{\infty}\dfrac{1}{1-x_{i}t}\in\widehat{\mathbf{C}}\left[  \left[
t\right]  \right]  $\textquotedblright, replace \textquotedblleft%
$\widehat{\mathbf{C}}$\textquotedblright\ by \textquotedblleft%
$\widehat{\mathbf{C}}\left[  x_{1},x_{2},\ldots\right]  $\textquotedblright.

\item \textbf{page 8, \S 1.5:} Various bugs in the last sentence of \S 1.5:
First of all, it's Question 5, not \textquotedblleft Question
6\textquotedblright. Second, there is no \textquotedblleft Problem Sheet
1\textquotedblright\ in the notes, so I'd say \textquotedblleft in Section
6\textquotedblright\ instead. Finally, \textquotedblleft the matrix
$R$\textquotedblright\ is only defined in the question, so it makes no sense
to refer to this matrix here by its name.

\item \textbf{page 8, \S 1.6:} When defining $\widehat{\operatorname*{ev}}%
_{N}$, you should again require that it be a formally continuous $\mathbf{C}%
$-linear map.

\item \textbf{page 8, \S 1.6:} I'd replace \textquotedblleft As for $\Lambda$,
this ring is graded\textquotedblright\ by \textquotedblleft Just as $\Lambda$,
this ring is graded\textquotedblright. (Indeed, \textquotedblleft As for
$\Lambda$, this ring is graded\textquotedblright\ might be misread as
\textquotedblleft On the other hand, $\Lambda$ is graded\textquotedblright.)

\item \textbf{page 9:} In the definition of $q_{N}$, it is a bit inappropriate
to say that $q_{N}$ is \textquotedblleft defined by $x_{N}\mapsto0$ and
$x_{k}\mapsto x_{k}$ for $k<N$\textquotedblright, since $q_{N}$ is a map from
the \textbf{symmetric} polynomials and thus does not act on single variables
like $x_{k}$ alone. Instead, it is better to say that $q_{N}$ is the
restriction of the $\mathbf{C}$-algebra homomorphism $\mathbf{C}\left[
x_{1},\ldots,x_{N}\right]  \rightarrow\mathbf{C}\left[  x_{1},\ldots
,x_{N-1}\right]  $ that sends $x_{N}\mapsto0$ and $x_{k}\mapsto x_{k}$ for
$k<N$. (Once again, it needs to be required that it is a $\mathbf{C}$-algebra
homomorphism; otherwise, the definition is incomplete.)
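To illustrate the suggested definition, here is a small Python sketch (my own code, not from the notes): a polynomial is modelled as a dictionary mapping exponent tuples to coefficients, and the map $x_{N}\mapsto0$ is applied to the whole polynomial ring, which then restricts to symmetric polynomials.

```python
def q(poly):
    """The C-algebra map C[x_1,...,x_N] -> C[x_1,...,x_{N-1}] sending
    x_N to 0 and x_k to x_k for k < N.  A polynomial is a dict
    {exponent tuple: coefficient}; monomials divisible by x_N are
    killed, the remaining ones just drop their last (zero) exponent."""
    return {expts[:-1]: c for expts, c in poly.items() if expts[-1] == 0}

# e_2 in x1, x2, x3 is x1*x2 + x1*x3 + x2*x3:
e2_in_3_vars = {(1, 1, 0): 1, (1, 0, 1): 1, (0, 1, 1): 1}
print(q(e2_in_3_vars))   # {(1, 1): 1}, i.e., e_2 in x1, x2
```

The restriction of `q` to symmetric polynomials sends $e_{2}\left(x_{1},x_{2},x_{3}\right)$ to $e_{2}\left(x_{1},x_{2}\right)$, as the definition via an algebra homomorphism requires.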

\item \textbf{page 10, Example 1.7 (1):} \textquotedblleft has constant
degree\textquotedblright\ $\rightarrow$ \textquotedblleft has homogeneous
degree\textquotedblright.

\item \textbf{page 10, Example 1.7 (1):} When you say \textquotedblleft taking
fixed points does not commute with inverse limits\textquotedblright, I think
you really mean \textquotedblleft taking the completion does not commute with
inverse limits\textquotedblright.

\item \textbf{page 10, Example 1.7 (2):} I suspect \textquotedblleft%
$\widehat{\mathbf{C}}\left[  \left[  t\right]  \right]  $\textquotedblright%
\ should be \textquotedblleft$\mathbf{C}\left[  \left[  t\right]  \right]
$\textquotedblright\ or \textquotedblleft$\widehat{\mathbf{C}}\left[
x_{1},x_{2},\ldots\right]  \left[  \left[  t\right]  \right]  $%
\textquotedblright.

\item \textbf{page 10, Example 1.7 (3):} Add comma before \textquotedblleft
have many properties\textquotedblright.

\item \textbf{page 11, Lemma 1.8:} Maybe a short reminder on what
$\operatorname*{Sym}\nolimits^{n}\left(  Bt\right)  $ means would be nice
here. Also, you could rewrite $\operatorname*{Tr}\operatorname*{Sym}%
\nolimits^{n}\left(  Bt\right)  $ as $\left(  \operatorname*{Tr}%
\operatorname*{Sym}\nolimits^{n}B\right)  t^{n}$ in order to avoid talking of
linear algebra over the base ring $\mathbf{C}\left[  t\right]  $.
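Such a reminder might read as follows (this wording is my own suggestion, not taken from the notes):

```latex
% Suggested reminder (my wording):
If $B$ is an endomorphism of a vector space $V$ with basis
$v_1, \ldots, v_N$, then $\operatorname{Sym}^n (B)$ denotes the induced
endomorphism of the $n$-th symmetric power $\operatorname{Sym}^n V$, whose
basis is
$\left\{ v_{i_1} v_{i_2} \cdots v_{i_n}
         \ :\ 1 \leq i_1 \leq \cdots \leq i_n \leq N \right\}$.
In particular, if $B$ is triangular with diagonal entries
$\alpha_1, \ldots, \alpha_N$, then
\[
\operatorname{Tr} \operatorname{Sym}^n (B)
= \sum_{1 \leq i_1 \leq \cdots \leq i_n \leq N}
    \alpha_{i_1} \alpha_{i_2} \cdots \alpha_{i_n}
= h_n \left( \alpha_1, \ldots, \alpha_N \right) .
\]
```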

\item \textbf{page 11, Example 1.9:} At the end of $(\star)$, the period
should be a comma.

\item \textbf{page 12, Example 1.9:} I'd replace \textquotedblleft and then
$\left(  xz\right)  ^{m}$\textquotedblright\ by \textquotedblleft and then
$\left(  zx\right)  ^{m}$\textquotedblright.

\item \textbf{page 13, \S 1.10:} After \textquotedblleft Hence $p_{n}%
=nh_{n}-\sum_{k=1}^{n-1}p_{k}h_{n-k}$\textquotedblright, add \textquotedblleft
for each $n\in\mathbf{N}$\textquotedblright\ (not for $n=0$).
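As a sanity check (mine, not from the notes), the recursion $p_{n}=nh_{n}-\sum_{k=1}^{n-1}p_{k}h_{n-k}$ can be verified by evaluating both sides at sample values of finitely many variables:

```python
from itertools import combinations_with_replacement
from math import prod

xs = [2, 3, 5]   # evaluate the symmetric functions at sample values

def p(n):  # power sum p_n
    return sum(x**n for x in xs)

def h(n):  # complete homogeneous h_n: sum of all degree-n monomials
    return sum(prod(c) for c in combinations_with_replacement(xs, n))

for n in range(1, 6):
    assert p(n) == n*h(n) - sum(p(k)*h(n - k) for k in range(1, n))
print("Newton's identity holds for n = 1..5 at the sample point")
```

Of course this is only a spot check, not a proof; but it catches off-by-one errors in the range of $n$.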

\item \textbf{page 13, \S 1.11:} In \textquotedblleft the coefficient of
$p_{\alpha}t^{n}$ is $\prod_{k=1}^{n}1/\left(  k^{a_{i}}a_{i}!\right)
$\textquotedblright, replace both \textquotedblleft$i$\textquotedblright s by
\textquotedblleft$k$\textquotedblright s.

\item \textbf{page 15, \S 1.12:} \textquotedblleft let $\beta$ be a
composition of $\beta_{1}+2\beta_{2}+\cdots=b$\textquotedblright%
\ $\rightarrow$ \textquotedblleft let $\beta$ be a composition, set
$b=\beta_{1}+2\beta_{2}+\cdots$,\textquotedblright. (Note that $\beta
_{1}+2\beta_{2}+\cdots$ is not the size of $\beta$.)

\item \textbf{page 16, Definition 1.14:} \textquotedblleft be the set of
$\lambda$-tableaux with content $\alpha$\textquotedblright\ $\rightarrow$
\textquotedblleft be the set of semistandard $\lambda$-tableaux with content
$\alpha$\textquotedblright.

\item \textbf{page 18, Example 2.2:} Obvious as it may be to us
combinatorialists, it's probably necessary to mention that \textquotedblleft
paths\textquotedblright\ are supposed to only consist of steps $1$ unit to the
east and steps $1$ unit to the north (rather than arbitrary steps back and forth).
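With this convention, the number of such paths from $\left(0,0\right)$ to $\left(m,n\right)$ is $\binom{m+n}{m}$; a quick dynamic-programming sketch (my own illustration of the step convention, not code from the notes):

```python
from math import comb

def count_paths(m, n):
    """Number of lattice paths from (0,0) to (m,n) using only unit
    steps east and north, counted by dynamic programming."""
    grid = [[1] * (n + 1) for _ in range(m + 1)]   # one path along each axis
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            # a path arrives either from the west or from the south:
            grid[i][j] = grid[i - 1][j] + grid[i][j - 1]
    return grid[m][n]

assert count_paths(3, 2) == comb(5, 2)   # == 10
```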

\item \textbf{page 21:} It is worth pointing out that Theorem 2.1 immediately
yields a new proof of Proposition 1.16.

\item \textbf{page 21, proof of Corollary 2.3:} In \textquotedblleft$\left(
\lambda_{1}-1+c_{1},\ldots,\lambda_{k}-k+c_{k}\right)  $\textquotedblright,
replace each \textquotedblleft$k$\textquotedblright\ by \textquotedblleft%
$M$\textquotedblright.

\item \textbf{page 21, proof of Corollary 2.3:} In \textquotedblleft$\left\{
c_{1},\ldots,c_{k}\right\}  =\left\{  1,\ldots,k\right\}  $\textquotedblright,
replace each \textquotedblleft$k$\textquotedblright\ by \textquotedblleft%
$M$\textquotedblright.

\item \textbf{page 22, \S 3.1:} \textquotedblleft Given a $N$%
-strict\textquotedblright\ $\rightarrow$ \textquotedblleft Given an
$N$-strict\textquotedblright.

\item \textbf{page 22, \S 3.1:} In \textquotedblleft and $\Gamma_{n}$ has as a
basis $\left\{  a_{\lambda+\delta}:\lambda\vdash n,\ \ell\left(
\lambda\right)  \leq N\right\}  $\textquotedblright, replace \textquotedblleft%
$\lambda\vdash n$\textquotedblright\ by \textquotedblleft$\lambda\vdash
n-N\left(  N-1\right)  /2$\textquotedblright.

\item \textbf{page 23, \S 3.2:} \textquotedblleft starting in the SW corner
with coordinate $\left(  0,\lambda_{1}^{\prime}\right)  $ and ending at the NE
corner at coordinate $\left(  \lambda_{1},0\right)  $\textquotedblright%
\ $\rightarrow$ \textquotedblleft starting in the SW corner with coordinate
$\left(  \lambda_{1}^{\prime},0\right)  $ and ending at the NE corner at
coordinate $\left(  0,\lambda_{1}\right)  $\textquotedblright.

More importantly, these two \textquotedblleft corners\textquotedblright\ do
not actually belong to the rim. Instead, they are being used as a starting
line and a finish line. This should probably be explained somehow lest readers
get confused about whether row/column indexing starts at $0$ (it does not, but
your definitions of the corners suggest it does).

Maybe it would also be neat to see the actual walk drawn into the picture, as
a sequence of arrows?

\item \textbf{page 23, \S 3.2:} \textquotedblleft and for each step
right\textquotedblright\ $\rightarrow$ \textquotedblleft and for each step
upwards\textquotedblright.

\item \textbf{page 23, \S 3.2:} The word \textquotedblleft
represents\textquotedblright\ (as in \textquotedblleft an abacus represents a
partition\textquotedblright) probably needs to be defined. It's a bit
unexpected, since you have never explicitly said that you are planning to use
abaci to represent partitions.

\item \textbf{page 23, Definition 3.3:} After \textquotedblleft Fix an abacus
for $\lambda$ with exactly $N$ beads and no final gaps\textquotedblright, I'd
point out that this abacus is actually uniquely determined by $\lambda$ and
$N$.

\item \textbf{page 23, \S 3.2:} \textquotedblleft acts
transitively\textquotedblright\ $\rightarrow$ \textquotedblleft acts
transitively and freely\textquotedblright. (You use the freedom later.)

\item \textbf{page 24, \S 3.2:} In the computation of $a_{\left(  3,1\right)
+\left(  2,1,0\right)  }$, the \textquotedblleft$+x_{3}^{2}x_{1}^{5}-x_{1}%
^{2}x_{3}^{5}$\textquotedblright\ part should be \textquotedblleft$-x_{3}%
^{2}x_{1}^{5}+x_{1}^{2}x_{3}^{5}$\textquotedblright.

\item \textbf{page 24, Lemma 3.6:} This lemma should require $\ell\left(
\lambda\right)  \leq N$.

\item \textbf{page 24, \S 3.2:} Every $A\in\operatorname*{Abc}\left(
\lambda\right)  $ and $\sigma\in\operatorname*{Sym}\nolimits_{N}$ satisfy
$x^{A\sigma}=x^{A}\sigma$. This simple fact is used tacitly in the proof of
Lemma 3.6, so I'd suggest stating it somewhere before that proof.

\item \textbf{page 24, Theorem 3.7:} Here you probably want to require
$\ell\left(  \lambda\right)  \leq N$.

\item \textbf{page 24, \S 3.3:} In the example, replace \textquotedblleft%
$e_{r}$\textquotedblright\ by \textquotedblleft$e_{2}$\textquotedblright.

\item \textbf{page 25:} In the first case (\textquotedblleft If there are no
collisions\textquotedblright) of the definition of $J\left(  A,S\right)  $, I
briefly stumbled over the question of what to do if the first bead we want to
move right is already in the rightmost position. Thinking about the purpose of
the construction, I soon realized that in this case, the abacus is simply
extended by one gap to the right before moving the bead. This is probably
worth writing out.

\item \textbf{page 25:} In the proof of Claim (2), the sentence that begins
with \textquotedblleft Let $g=\prod_{k}x_{k}^{\alpha_{k}}$\textquotedblright%
\ contains two notational errors. First, the \textquotedblleft$k$\textquotedblright%
\ should be renamed into another letter, since $k$ already stands for the
label of the bead into which the bead labelled $j$ bumps. Second,
\textquotedblleft$\neq i,j$\textquotedblright\ should be \textquotedblleft%
$\neq j,k$\textquotedblright.

\item \textbf{page 26, Theorem 3.8:} Here you probably want to require
$\ell\left(  \lambda\right)  \leq N$.

\item \textbf{page 26, proof of Corollary 3.9:} It might be worth explaining
what a \textquotedblleft Young's Rule addition\textquotedblright\ is (i.e.,
adding boxes in such a way that no two boxes are added in the same column).

\item \textbf{page 26, proof of Corollary 3.9:} Replace \textquotedblleft the
added boxes\textquotedblright\ by \textquotedblleft the boxes added in the
$i$-th step\textquotedblright.

\item \textbf{page 27, \S 3.5, and many places below:} What you call
\textquotedblleft hook\textquotedblright\ is commonly called \textquotedblleft
rim hook\textquotedblright\ or \textquotedblleft ribbon\textquotedblright\ or
\textquotedblleft border strip\textquotedblright\ in the theory of symmetric
functions (e.g., in the books by Stanley and by Loehr). The notion of a
\textquotedblleft hook\textquotedblright\ means something different (a
\textquotedblleft hook\textquotedblright\ is a partition of the form $\left(
a,1,1,\ldots,1\right)  $). While these notions sometimes lead to the same
result (e.g., removing a hook is tantamount to removing a rim hook), often
enough they don't (e.g., when you say \textquotedblleft$\mu/\lambda$ is a
hook\textquotedblright, you always mean \textquotedblleft$\mu/\lambda$ is a
rim hook\textquotedblright, not \textquotedblleft$\mu/\lambda$ is a parallel
translate of a partition of the form $\left(  a,1,1,\ldots,1\right)
$\textquotedblright), so I would advise you to fix the notation by
find/replace to avoid teaching an unusual terminology that conflicts with the standard.

\item \textbf{page 27, \S 3.5:} Add comma after \textquotedblleft denoted
$\operatorname*{ht}\left(  \mu/\lambda\right)  $\textquotedblright.

\item \textbf{page 27, \S 3.5:} \textquotedblleft that it
meets\textquotedblright\ is somewhat inappropriate here, because
\textquotedblleft it\textquotedblright\ is a formal symbol $\mu/\lambda$ (you
have not defined skew diagrams yet) and has no \textquotedblleft physical
body\textquotedblright\ that could meet a row. Instead you might want to say
\textquotedblleft that have a nonempty intersection with the set $\left[
\mu\right]  \setminus\left[  \lambda\right]  $\textquotedblright.

\item \textbf{page 27, \S 3.5:} I'd add the remark that (for any partition
$\lambda$) we say that $\lambda/\lambda$ is a $0$-strip, and that its sign
$\operatorname*{sgn}\left(  \lambda/\lambda\right)  $ is defined to be $1$
(contrary to what the definition of sign would suggest). This convention is
important in making Corollary 3.13 work (keep in mind that $\alpha_{i}$ can be
$0$ in a composition $\alpha$).

\item \textbf{page 28, caption to Figure 1:} In \textquotedblleft$\circ
\bullet^{b}\circ\circ\bullet\circ\bullet\bullet\circ\circ\bullet
$\textquotedblright, either a \textquotedblleft$\bullet$\textquotedblright\ is
missing before the \textquotedblleft$^{b}$\textquotedblright, or the
\textquotedblleft$^{b}$\textquotedblright\ itself signifies a bead (which, if
true, should be pointed out).

\item \textbf{page 28, caption to Figure 1:} \textquotedblleft$\left(
6,5,5,5,4\right)  $\textquotedblright\ $\rightarrow$ \textquotedblleft$\left(
6,5,5,5,4,1\right)  $\textquotedblright.

\item \textbf{page 28:} I'd replace \textquotedblleft where $B$ represents
$\mu$\textquotedblright\ by \textquotedblleft where $B\in\operatorname*{Abc}%
\left(  \mu\right)  $\textquotedblright, since \textquotedblleft
represent\textquotedblright\ (besides being a bit of a weasel word) has so far
been used for unlabelled abaci only.

\item \textbf{page 29, proof of Theorem 3.11:} In \textquotedblleft%
$\tau=\sigma\left(  j,i_{1},\ldots i_{h}\right)  $\textquotedblright, add a
comma before \textquotedblleft$i_{h}$\textquotedblright.

\item \textbf{page 29, Definition 3.12:} Replace \textquotedblleft and let
$\left(  \alpha_{1},\ldots,\alpha_{k}\right)  \models n$\textquotedblright\ by
\textquotedblleft and let $\alpha=\left(  \alpha_{1},\ldots,\alpha_{t}\right)
\models n$\textquotedblright. (Notice that \textquotedblleft$k$%
\textquotedblright\ has become a \textquotedblleft$t$\textquotedblright\ since
you use the letter $t$ further down; but of course, you can just as well
replace all \textquotedblleft$t$\textquotedblright s by \textquotedblleft%
$k$\textquotedblright s instead.)

\item \textbf{page 30, proof of Corollary 3.13:} I'd mention here that you are
using Theorem 3.11 for all $r\in\mathbf{N}_{0}$, not just for $r\in\mathbf{N}%
$. (Of course, Theorem 3.11 for $r=0$ is obvious.)

\item \textbf{page 30, \S 3.6:} After \textquotedblleft just observe that
$P\left(  1,2,2,1\right)  =\left(  2,2,1,1\right)  $\textquotedblright, add
\textquotedblleft$=P\left(  1,1,2,2\right)  $\textquotedblright, in order to
clarify what this has to do with $2$-hooks.

\item \textbf{page 30, \S 3.6:} You say that \textquotedblleft$\left(
3,2,1\right)  $ has no $2$-hooks\textquotedblright, but this makes no sense
until you define what it means for a partition to have an $r$-hook. (You only
defined when $\mu/\lambda$ is an $r$-hook.) I suggest you say \textquotedblleft
there is no $2$-hook of the form $\left(  3,2,1\right)  /\kappa$%
\textquotedblright\ instead, or define this concept.

\item \textbf{page 30, \S 3.6:} \textquotedblleft see Question 17 and
19\textquotedblright: Is Question 19 really about applying Murnaghan-Nakayama?

\item \textbf{page 30, \S 4.1:} Add a comma after \textquotedblleft denoted
$\operatorname*{w}\left(  t\right)  $\textquotedblright.

\item \textbf{page 30, Definition 4.1:} \textquotedblleft when $k$ is read
from left to right\textquotedblright\ $\rightarrow$ \textquotedblleft when $w$
is read from left to right\textquotedblright.

\item \textbf{page 30, Definition 4.1:} \textquotedblleft when $k$ is read
right to left\textquotedblright\ $\rightarrow$ \textquotedblleft when $w$ is
read right to left\textquotedblright.

\item \textbf{page 30, Definition 4.1:} \textquotedblleft subword of unpaired
entries\textquotedblright\ $\rightarrow$ \textquotedblleft subword of
$k$-unpaired entries\textquotedblright.

\item \textbf{page 30, Definition 4.1:} I think an example illustrating the
concepts of \textquotedblleft excess\textquotedblright\ and \textquotedblleft
record\textquotedblright\ used in this definition would be helpful. For
example, in order to find the $1$-unpaired $1$s in $121321132$, we make the
following table:%
\[
\left(
\begin{array}
[c]{ccccccccc}%
1 & 2 & 1 & 3 & 2 & 1 & 1 & 3 & 2\\
1 & 0 & 1 & 1 & 0 & 1 & 2 & 2 & 1\\
\ast &  &  &  &  &  & \ast &  &
\end{array}
\right)  .
\]
The top row is the word $w=121321132$. The middle row shows, for each entry of
this word, the excess of $1$s over $2$s in the part of the word reaching up to
this entry (when the word is read from left to right). The bottom row has an
asterisk $\ast$ in each column where the excess achieves a new record; thus,
the $1$-unpaired $1$s in $w$ are exactly the entries which have a $\ast$ under
them. A similar table can be made for finding $1$-unpaired $2$s.
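Equivalently (and perhaps easier to automate), the $k$-unpaired entries can be found by the bracket-matching rule used later in the notes: treat each $k$ as a closing parenthesis and each $k+1$ as an opening one, and pair them by the usual rules of bracketing. A Python sketch of this (my own code; positions are $1$-based):

```python
def unpaired_positions(w, k):
    """Treat each k in w as ")" and each k+1 as "(", pair them by the
    usual bracket-matching rules, and return the 1-based positions of
    the unpaired k's and of the unpaired (k+1)'s."""
    stack = []          # positions of currently unmatched (k+1)'s
    unpaired_k = []
    for pos, letter in enumerate(w, start=1):
        if letter == k + 1:
            stack.append(pos)
        elif letter == k:
            if stack:
                stack.pop()       # this k pairs with the nearest open k+1
            else:
                unpaired_k.append(pos)
    return unpaired_k, stack      # leftovers on the stack are unpaired

w = [1, 2, 1, 3, 2, 1, 1, 3, 2]
print(unpaired_positions(w, 1))   # ([1, 7], [9])
```

On $w=121321132$ with $k=1$, this reports the $1$-unpaired $1$s in positions $1$ and $7$ (matching the asterisks in the table) and a $1$-unpaired $2$ in position $9$.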

\item \textbf{page 30, \S 4.1:} \textquotedblleft of the form $k+1vk$%
\textquotedblright\ $\rightarrow$ \textquotedblleft of the form $\left(
k+1\right)  vk$\textquotedblright.

\item \textbf{page 31, \S 4.1:} \textquotedblleft the subword $k+1vk$%
\textquotedblright\ $\rightarrow$ \textquotedblleft the subword $\left(
k+1\right)  vk$\textquotedblright.

\item \textbf{page 31, \S 4.1:} \textquotedblleft If $k=2$ the parenthesised
word is $45)((411)(($. The unpaired subword is $233$, in positions $3$, $10$
and $11$.\textquotedblright\ is wrong. The opening parenthesis in position $4$
is unpaired, too, and the unpaired subword is $2333$.

\item \textbf{page 31, proof of Lemma 4.2:} In \textquotedblleft all $\left(
k+1\right)  s$\textquotedblright, the \textquotedblleft$s$\textquotedblright%
\ should be a textmode \textquotedblleft s\textquotedblright.

\item \textbf{page 31, proof of Lemma 4.2:} You write: \textquotedblleft since
every $k+1$ to the left of position $i$ is paired, this new $k$ is
unpaired\textquotedblright. I believe this isn't so simple. Couldn't this new
$k$ grab a $k+1$ to its left that was previously paired with some other $k$ in
$w$, and thus mess up the pairing of parentheses?

Let me suggest two valid proofs of this claim (though I cannot say that either
of them is particularly readable).

I shall refer to the third sentence of Lemma 4.2 (\textquotedblleft Changing
the letters [...] entries of $w$\textquotedblright) as Lemma 4.2 (b).

\textit{First proof of Lemma 4.2 (b):} Let $w^{\prime}$ be the word obtained
from $w$ by the change indicated in Lemma 4.2 (b).

Regard the $k$s and $\left(  k+1\right)  $s in $w$ as closing and opening
parentheses, respectively. The paired $k$s and the paired $\left(  k+1\right)
$s then correspond to parentheses that are paired according to the usual rules
of bracketing. This pairing has the following property: Between any paired
parenthesis and its partner\footnote{The \textit{partner} of a paired
parenthesis is the other parenthesis that it is paired with.}, there are no
unpaired parentheses\footnote{In fact, any unpaired parenthesis between them
would have prevented them from getting paired with each other.}. Therefore,
any change to the unpaired parentheses in $w$ does not interfere with the
paired parentheses; in particular, it does not render their pairing
invalid\footnote{\textquotedblleft Invalid\textquotedblright\ would mean that
two parentheses that were paired to each other before the change could end up
not paired to each other after the change. This cannot happen, because there
were no unpaired entries between them (as we have just seen), and so none of
the letters between them have changed.}. In general, such a change might
introduce some new paired parentheses; however, the change indicated in Lemma
4.2 (b) cannot do this, because it replaces the unpaired subword $k^{c}\left(
k+1\right)  ^{d}$ by a subword of the form $k^{c^{\prime}}\left(  k+1\right)
^{d^{\prime}}$, which clearly creates no opportunity for further pairing.
Therefore, the paired parentheses in $w^{\prime}$ are exactly the paired
parentheses in $w$ (in particular, they occupy the same positions in
$w^{\prime}$ as in $w$); consequently, the $k$-unpaired entries of $w^{\prime
}$ are in the same positions as the $k$-unpaired entries of $w$. This proves
Lemma 4.2 (b).

\textit{Second proof of Lemma 4.2 (b):} We proceed by strong induction on the
length of the word. Thus, we fix our $w$, $k$, $c$, $d$, $c^{\prime}$ and
$d^{\prime}$, but we assume that Lemma 4.2 (b) is already proven for all words
shorter than $w$ in the place of $w$.

A word is said to be \textit{simple} if it has the form $\left(  k+1\right)
vk$, where $v$ is a word (possibly empty) containing neither of the letters
$k$ and $k+1$. (Of course, the letter $k$ is fixed here.) Let $w^{\prime}$ be
the word obtained from $w$ by the change indicated in Lemma 4.2 (b).

If the word $w$ contains no simple factor, then Lemma 4.2 (b) is obvious
(indeed, in this case, all $k$s and all $\left(  k+1\right)  $s are unpaired
in $w$, and the same holds for $w^{\prime}$). We thus assume that the word $w$
contains a simple factor. In this case, we choose some simple factor of $w$;
we denote this factor by $u$, and we let $p$ and $q$ be the positions (in $w$)
of its first and last letter. For any word $z$ having at least $q$ letters, we
let $\overline{z}$ be the word obtained from $z$ by removing the letters at
positions $p,p+1,\ldots,q$.

Now, the pairing of the $k$s and $\left(  k+1\right)  $s in $w$ (regarded as
closing and opening parentheses) has the following property: The $k+1$ in
position $p$ is paired with the $k$ in position $q$ (since there are no $k$s
and no $\left(  k+1\right)  $s between them), and the pairing of the remaining
$k$s and $\left(  k+1\right)  $s in $w$ is precisely the same as if the simple
factor $u$ (starting at position $p$ and ending at position $q$) was absent
(i.e., it is the same as for the word $\overline{w}$). Exactly the same holds
for the word $w^{\prime}$, because the simple factor $u$ is unaffected by the
change that transforms $w$ into $w^{\prime}$ (indeed, the change only modifies
unpaired letters, but there are no unpaired letters in $u$). Hence, in order
to prove Lemma 4.2 (b) for our word $w$, it suffices to prove Lemma 4.2 (b)
for the word $\overline{w}$ (since the word $\overline{w^{\prime}}$ is
obtained from $\overline{w}$ by the same change that transforms $w$ into
$w^{\prime}$). But this follows from the induction hypothesis, since the word
$\overline{w}$ is shorter than $w$. This concludes the proof of Lemma 4.2 (b).
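Both proofs hinge on the fact that rewriting the unpaired subword $k^{c}\left(k+1\right)^{d}$ into $k^{c^{\prime}}\left(k+1\right)^{d^{\prime}}$ leaves the paired positions untouched, and this is easy to check mechanically. A self-contained Python sketch (my own code) that verifies it on a sample word:

```python
def pairing(w, k):
    """Match each k (closing) with the nearest unmatched k+1 (opening)
    to its left; return (paired, unpaired) as sorted 1-based positions."""
    stack, unpaired, paired = [], [], []
    for pos, letter in enumerate(w, start=1):
        if letter == k + 1:
            stack.append(pos)
        elif letter == k:
            if stack:
                paired += [stack.pop(), pos]
            else:
                unpaired.append(pos)
    return sorted(paired), sorted(unpaired + stack)

def rewrite_unpaired(w, k, c_new):
    """Replace the unpaired subword k^c (k+1)^d of w by a subword
    k^c_new (k+1)^d_new of the same total length c + d."""
    _, unpaired = pairing(w, k)
    w2 = list(w)
    for i, pos in enumerate(unpaired):
        w2[pos - 1] = k if i < c_new else k + 1
    return w2

w = [1, 2, 1, 3, 2, 1, 1, 3, 2]           # unpaired subword for k=1: 1,1,2
paired_before, _ = pairing(w, 1)
for c_new in range(4):                    # every rewrite of a length-3 subword
    paired_after, _ = pairing(rewrite_unpaired(w, 1, c_new), 1)
    assert paired_after == paired_before  # the pairing is undisturbed
print("paired positions are invariant under the rewrite")
```

Naturally, one example is no substitute for the proofs above; but such a check is a useful guard against exactly the kind of subtlety (a new $k$ grabbing a previously paired $k+1$) raised at the start of this item.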

\item \textbf{page 32:} In the figure on top of the page, in the
southeasternmost diagram, the bottom row should be $2444$, not $2334$.

\item \textbf{page 32, Lemma 4.4:} Replace \textquotedblleft$\alpha^{\prime
}=\left(  \alpha_{1},\ldots,\alpha_{k+1},\alpha_{k},\ldots,\alpha_{N}\right)
$\textquotedblright\ by \textquotedblleft$\alpha^{\prime}=\left(  \alpha
_{1},\ldots,\alpha_{k-1},\alpha_{k+1},\alpha_{k},\alpha_{k+2},\ldots
,\alpha_{N}\right)  $\textquotedblright\ (to avoid misunderstanding
as\newline$\left(  \underbrace{\alpha_{1},\ldots,\alpha_{k+1}}_{k+1\text{
entries}},\underbrace{\alpha_{k},\ldots,\alpha_{N}}_{N-k+1\text{ entries}%
}\right)  $).

\item \textbf{page 32, Lemma 4.4:} The claim that \textquotedblleft$S_{k}%
E_{k}:\operatorname*{SSYT}\nolimits_{k+1}\left(  \mu,\alpha\right)
\rightarrow\operatorname*{SSYT}\nolimits_{k+1}\left(  \mu,\alpha^{\prime
}-\epsilon\left(  k\right)  \right)  $ is an involution\textquotedblright\ is
misstated: An involution must be a bijection from a set to itself, not to
another set. What you mean, of course, is that if you combine the maps
$S_{k}E_{k}:\operatorname*{SSYT}\nolimits_{k+1}\left(  \mu,\alpha\right)
\rightarrow\operatorname*{SSYT}\nolimits_{k+1}\left(  \mu,\alpha^{\prime
}-\epsilon\left(  k\right)  \right)  $ for all $\alpha$ into one large map
$S_{k}E_{k}:\operatorname*{SSYT}\nolimits_{k+1}\left(  \mu\right)
\rightarrow\operatorname*{SSYT}\nolimits_{k+1}\left(  \mu\right)  $, where
$\operatorname*{SSYT}\nolimits_{k+1}\left(  \mu\right)  =\bigsqcup_{\alpha
}\operatorname*{SSYT}\nolimits_{k+1}\left(  \mu,\alpha\right)  $, then this
large map $S_{k}E_{k}$ is an involution.

\item \textbf{page 32, proof of Lemma 4.4:} You write: \textquotedblleft If
$t^{\prime}$ is not semistandard then $t\left(  a-1,b\right)  =k$%
\textquotedblright. This requires proof. A priori, it is clear that if
$t^{\prime}$ is not semistandard, then either $t\left(  a-1,b\right)  =k$ or
$t\left(  a,b-1\right)  =k+1$ (or both). To obtain your claim, we need to rule
out that $t\left(  a,b-1\right)  =k+1$. Fortunately, this is easy: If we had
$t\left(  a,b-1\right)  =k+1$, then the letter $k+1$ of $\operatorname*{w}%
\left(  t\right)  $ corresponding to the entry $k+1$ in position $\left(
a,b-1\right)  $ of $t$ would be a $k$-unpaired $k+1$. (Indeed, the letter
immediately following it is a $k$-unpaired $k+1$; and there is a fact (easily
proven using Definition 4.1) that if a letter $p$ in a word $w$ is a $k+1$,
and the letter immediately following it is a $k$-unpaired $k+1$, then the
letter $p$ must also be a $k$-unpaired $k+1$.) But this would contradict the
fact that the leftmost $k$-unpaired $k+1$ in $\operatorname*{w}\left(
t\right)  $ is the letter corresponding to the entry $t\left(  a,b\right)  $
(which lies further right than the letter we are talking about).

For some reason, every argument I make about coplactic maps degenerates into a
run-on sentence like this...

\item \textbf{page 32, proof of Lemma 4.4:} I'd add a sentence somewhere in
this proof (after proving that $E_{k}$ and $F_{k}$ are well-defined) saying
something like \textquotedblleft For each $t\in\operatorname*{SSYT}%
\nolimits_{k}\left(  \mu,\alpha\right)  $, the tableau $S_{k}\left(  t\right)
$ is semistandard again (since it can be written either in the form $\left(
E_{k}\right)  ^{\ell}\left(  t\right)  $ or in the form $\left(  F_{k}\right)
^{\ell}\left(  t\right)  $ for some $\ell\in\mathbf{N}_{0}$), and belongs to
$\operatorname*{SSYT}\nolimits_{k}\left(  \mu,\alpha^{\prime}\right)  $ (since
the operation $S_{k}$ switches the number of unpaired $k$s with the number of
unpaired $\left(  k+1\right)  $s, whereas the numbers of paired $k$s and of
paired $\left(  k+1\right)  $s were equal to begin with).\textquotedblright.

\item \textbf{page 33, proof of Lemma 4.4:} You say \textquotedblleft$S_{k}$
and $S_{k}E_{k}$ are involutions\textquotedblright. Well, almost... In order
to be able to say that $S_{k}$ is an involution, you need to extend $S_{k}$ to
a map $\operatorname*{SSYT}\left(  \mu,\alpha\right)  \rightarrow
\operatorname*{SSYT}\left(  \mu,\alpha\right)  $ (rather than merely
$\operatorname*{SSYT}\nolimits_{k}\left(  \mu,\alpha\right)  \rightarrow
\operatorname*{SSYT}\nolimits_{k+1}\left(  \mu,\alpha\right)  $). Fortunately,
this is easy (just use the same definition as before).

\item \textbf{page 33, \S 4.3:} After \textquotedblleft the action is not
well-defined\textquotedblright, I'd add \textquotedblleft(at least not as a
right action)\textquotedblright.

\item \textbf{page 33, Definition 4.5:} Add \textquotedblleft
infinite\textquotedblright\ before \textquotedblleft integer
sequence\textquotedblright\ (your notion of \textquotedblleft
sequence\textquotedblright, per se, allows finite tuples just as well).

\item \textbf{page 34, proof of Lemma 4.7:} The wording \textquotedblleft
differ only in the positions of the $k$-unpaired entries of $\operatorname*{w}%
\left(  t\right)  $\textquotedblright\ is ambiguous: It can be interpreted
both as \textquotedblleft differ only in the positions $i_{1},i_{2}%
,\ldots,i_{s}$, where $i_{1},i_{2},\ldots,i_{s}$ are the positions of the
$k$-unpaired entries of $\operatorname*{w}\left(  t\right)  $%
\textquotedblright\ and as \textquotedblleft their only difference is where
the $k$-unpaired entries are placed\textquotedblright. I assume that you mean
the first interpretation.

\item \textbf{page 34, proof of Lemma 4.7:} \textquotedblleft Let the
rightmost $k+1$ in $\operatorname*{w}\left(  t\right)  $\textquotedblright%
\ $\rightarrow$ \textquotedblleft Let the rightmost $k$-unpaired $k+1$ in
$\operatorname*{w}\left(  t\right)  $\textquotedblright.

\item \textbf{page 34, proof of Lemma 4.7:} \textquotedblleft For (ii), if
$J\left(  t\right)  =t$\textquotedblright\ $\rightarrow$ \textquotedblleft For
(ii), if $t$ is latticed\textquotedblright.

\item \textbf{page 34, proof of Lemma 4.7:} \textquotedblleft$t\in
\operatorname*{SSYT}\left(  \mu,\lambda\right)  $ by Question
22\textquotedblright\ $\rightarrow$ \textquotedblleft$\sigma
=\operatorname*{id}\nolimits_{\operatorname*{Sym}\nolimits_{N}}$ by Question
22 (b) and therefore $t\in\operatorname*{SSYT}\left(  \mu,\lambda\right)
$\textquotedblright.

\item \textbf{page 34, proof of Lemma 4.7:} \textquotedblleft equal to
$i$\textquotedblright\ $\rightarrow$ \textquotedblleft are equal to
$i$\textquotedblright.

\item \textbf{page 34, proof of Lemma 4.7:} At the very end of this proof, it
wouldn't hurt to explicitly mention that $t$ is the unique element of
$\operatorname*{SSYT}\left(  \lambda,\lambda\right)  $ because $\left\vert
\operatorname*{SSYT}\left(  \lambda,\lambda\right)  \right\vert =K_{\lambda
\lambda}=1$ by Question 11 (c).
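
By the way, the fact that $\left\vert \operatorname*{SSYT}\left(
\lambda,\lambda\right)  \right\vert =1$ is easy to double-check by brute force
for small $\lambda$. Here is a minimal sketch in Python (the helper name
\texttt{ssyt\_count} is mine, not from the text):

```python
from itertools import product

def ssyt_count(shape, content):
    """Brute-force count of SSYT of the given shape and content:
    rows weakly increase, columns strictly increase, and the entry
    i occurs exactly content[i-1] times."""
    cells = [(r, c) for r, row_len in enumerate(shape) for c in range(row_len)]
    n = len(content)
    count = 0
    for filling in product(range(1, n + 1), repeat=len(cells)):
        if any(filling.count(i + 1) != content[i] for i in range(n)):
            continue
        t = dict(zip(cells, filling))
        if all(((r, c - 1) not in t or t[(r, c - 1)] <= v)
               and ((r - 1, c) not in t or t[(r - 1, c)] < v)
               for (r, c), v in t.items()):
            count += 1
    return count

# K_{lambda, lambda} = 1: the unique tableau fills row i with the entry i.
for lam in [(2, 1), (3, 1), (2, 2), (3, 2, 1)]:
    assert ssyt_count(lam, lam) == 1
```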

\item \textbf{page 34:} After \textquotedblleft$J$ has a unique fixed point in
$\mathcal{T}$\textquotedblright, add \textquotedblleft if $\mu=\lambda$, and
no fixed points if $\mu\neq\lambda$\textquotedblright.

Also, this is not quite obvious. In order to prove that an unlatticed tableau
$t\in\mathcal{T}$ cannot be a fixed point of $J$, you need to observe that the
content of $J\left(  t\right)  $ is different from the content of $t$ (because
Question 22 (c) shows that $\lambda\cdot\left(  \sigma\left(  k,k+1\right)
\right)  \neq\lambda\cdot\sigma$).

\item \textbf{page 34, Example 4.8:} I don't see how Question 22 shows that
\textquotedblleft there is no need to carry on once a negative entry
appears\textquotedblright. Is this a statement about the weak or the strong
Bruhat order? (I.e., in what exact way do we follow the arrows?)

\item \textbf{page 35, Example 4.8:} \textquotedblleft Young's Rule (Theorem
3.8)\textquotedblright\ $\rightarrow$ \textquotedblleft Young's Rule
(Corollary 3.9)\textquotedblright.

\item \textbf{page 35, Example 4.8:} A closing parenthesis is missing in
\textquotedblleft$\left\vert \operatorname*{SSYT}(\mu,\left(  2,0,7\right)
\right\vert $\textquotedblright. (One gets mindful of such things after a few
pages on coplactic maps.)

\item \textbf{page 35, Example 4.8:} \textquotedblleft the coefficient of
$\operatorname*{ev}\nolimits_{3}a_{\delta+\mu}$\textquotedblright%
\ $\rightarrow$ \textquotedblleft the coefficient of $a_{\delta+\mu}%
$\textquotedblright.

\item \textbf{page 36, proof of Theorem 4.10:} Replace \textquotedblleft%
$\sum_{\mu\vdash n}\operatorname*{SSYT}\left(  \mu,\lambda\cdot\sigma\right)
s_{\mu}$\textquotedblright\ by\newline\textquotedblleft$\sum_{\substack{\mu
\vdash n;\\\ell\left(  \mu\right)  \leq N}}\left\vert \operatorname*{SSYT}%
\left(  \mu,\lambda\cdot\sigma\right)  \right\vert a_{\mu+\delta}/a_{\delta}%
$\textquotedblright.

\item \textbf{page 36, proof of Theorem 4.10:} I'm not sure what you want to
say by \textquotedblleft(Recall that the results in \S \ 3 apply to the
antisymmetric Schur polynomials defined by $a_{\delta+\mu}/a_{\delta}%
$.)\textquotedblright. This sentence is neither necessary nor quite correct
(the polynomials $a_{\delta+\mu}/a_{\delta}$ are symmetric, not antisymmetric).

\item \textbf{page 36, proof of Theorem 4.10:} Replace \textquotedblleft%
$\sum_{\mu\vdash n}c_{\mu}a_{\delta+\mu}$\textquotedblright\ by
\textquotedblleft$\sum_{\substack{\mu\vdash n;\\\ell\left(  \mu\right)  \leq
N}}c_{\mu}a_{\delta+\mu}$\textquotedblright.

\item \textbf{page 36, proof of Theorem 4.10:} When speaking of
\textquotedblleft contributions to $c_{\mu}$\textquotedblright, you again
tacitly use the fact that the union $\bigcup_{\sigma\in\operatorname*{Sym}%
\nolimits_{N}}\operatorname*{SSYT}\left(  \mu,\lambda\cdot\sigma\right)  $ is
a disjoint union. This is a consequence of Question 22 (c), and should
probably be stated as such.

I would actually go so far as to add further detail: I'd define the
\textit{sign} $\operatorname*{sgn}\left(  t\right)  $ of a tableau
$t\in\mathcal{T}$ to be $\operatorname*{sgn}\sigma$, where $\sigma$ is the
unique permutation in $\operatorname*{Sym}\nolimits_{N}$ satisfying
$t\in\operatorname*{SSYT}\left(  \mu,\lambda\cdot\sigma\right)  $. (The
uniqueness here follows from Question 22 (c).) Then, I would rewrite the
definition of $c_{\mu}$ as $c_{\mu}=\sum_{t\in\mathcal{T}}\operatorname*{sgn}%
\left(  t\right)  $; this makes it clear that a sign-reversing involution
would help in simplifying $c_{\mu}$. And Lemma 4.7 (i) shows precisely that
the involution $J$ is sign-reversing on the unlatticed tableaux $t\in
\mathcal{T}$.

\item \textbf{page 36, proof of Theorem 4.10:} After \textquotedblleft is the
unique element of $\operatorname*{SSYT}\left(  \lambda,\lambda\right)
$\textquotedblright, add \textquotedblleft when $\mu=\lambda$, and does not
exist otherwise\textquotedblright.

\item \textbf{page 36, proof of Theorem 4.10:} I would replace
\textquotedblleft Therefore $c_{\mu}=0$ unless $\mu=\lambda$ and $c_{\lambda
}=1$\textquotedblright\ by \textquotedblleft Therefore $c_{\lambda}=1$ and
$c_{\mu}=0$ unless $\mu=\lambda$\textquotedblright\ for clarity.

\item \textbf{page 37, \S 4.6:} \textquotedblleft Murnaghan
Nakayama\textquotedblright\ $\rightarrow$ \textquotedblleft
Murnaghan--Nakayama\textquotedblright.

\item \textbf{page 37, \S 5:} Is your inner product bilinear or sesquilinear,
and in the latter case, which argument is it linear in? This is probably not
particularly important for what you do (although Lemma 5.1 at least needs
$\left\langle \cdot,\cdot\right\rangle $ to be linear in its first argument),
but it might help to clear up the confusion that readers might have.

\item \textbf{page 37, \S 5.1:} The period in \textquotedblleft
otherwise.\textquotedblright\ should be a comma.

\item \textbf{page 37, proof of Theorem 5.2:} I'd replace \textquotedblleft By
the combinatorial definition of $s_{\lambda}$\textquotedblright\ by
\textquotedblleft By (10)\textquotedblright.

\item \textbf{page 37, proof of Theorem 5.2:} After the first displayed
equation in this proof, add \textquotedblleft for all $\lambda\vdash
n$\textquotedblright.

\item \textbf{page 38, proof of Theorem 5.2:} After the second displayed
equation in this proof, add \textquotedblleft for all $\nu\vdash
n$\textquotedblright.

\item \textbf{page 38:} After \textquotedblleft and let $\pi^{\mu}\left(
\alpha\right)  $ be the number of ordered set partitions $\left(  P_{1}%
,\ldots,P_{k}\right)  $\textquotedblright, add \textquotedblleft of $\left\{
1,2,\ldots,n\right\}  $ with $\left\vert P_{i}\right\vert =\mu_{i}%
$\textquotedblright.

\item \textbf{page 38, proof of Theorem 5.3:} \textquotedblleft an $k\times n$
matrix\textquotedblright\ $\rightarrow$ \textquotedblleft a $k\times n$
matrix\textquotedblright\ (or do you pronounce \textquotedblleft%
$k$\textquotedblright\ differently in Britain?).

\item \textbf{page 38, proof of Theorem 5.3:} In Claim 1, it might be better
to replace \textquotedblleft$\dfrac{a_{j}!}{C_{1j}!\cdots C_{kj}!}%
$\textquotedblright\ by \textquotedblleft$\dbinom{a_{j}}{C_{1,j}%
,\ldots,C_{k,j}}$\textquotedblright\ (after perhaps reminding the reader of
the definition of multinomial coefficients). After all, you always write it as
a multinomial coefficient later on.

\item \textbf{page 38, proof of Theorem 5.3:} In Claim 1, \textquotedblleft
all packing\textquotedblright\ should be \textquotedblleft all $\alpha
$-packing\textquotedblright.

\item \textbf{page 38, proof of Theorem 5.3:} In the proof of Claim 2, the
second factor \textquotedblleft$\left(  x_{1}+\cdots+x_{N}\right)  ^{a_{2}}%
$\textquotedblright\ should be \textquotedblleft$\left(  x_{1}^{2}%
+\cdots+x_{N}^{2}\right)  ^{a_{2}}$\textquotedblright.

\item \textbf{page 38, proof of Theorem 5.3:} In the proof of Claim 2,
\textquotedblleft the product $\left(  x_{1}^{j}+\cdots+x_{N}\right)  ^{a_{j}%
}$\textquotedblright\ should be \textquotedblleft the product $\left(
x_{1}^{j}+\cdots+x_{N}^{j}\right)  ^{a_{j}}$\textquotedblright.

\item \textbf{page 38, proof of Theorem 5.3:} In the proof of Claim 2, replace
\textquotedblleft$\dbinom{a_{j}}{C_{1j}\ldots C_{kj}}$\textquotedblright\ by
\textquotedblleft$\dbinom{a_{j}}{C_{1j},\ldots,C_{Nj}}$\textquotedblright.
Yes, I have not just added commas but also replaced \textquotedblleft%
$k$\textquotedblright\ by \textquotedblleft$N$\textquotedblright\ since you
can't yet restrict yourself to $k\times n$ matrices.

\item \textbf{page 39, proof of Theorem 5.3:} I'd replace \textquotedblleft By
Claim 2 we have\textquotedblright\ by \textquotedblleft By Claim 2 and Theorem
5.2 we have\textquotedblright.

\item \textbf{page 39, proof of Theorem 5.3:} In (18), replace each
\textquotedblleft$\nu$\textquotedblright\ by \textquotedblleft$\alpha
$\textquotedblright, because it's called $\alpha$ both in Claim 2 and in Claim 4.

\item \textbf{page 40, proof of Lemma 5.4:} In the last computation of this
proof, you are tacitly using the identity $\left\langle s_{\lambda},h_{\mu
}\right\rangle =K_{\lambda\mu}$ (for any partitions $\lambda$ and $\mu$). This
is probably worth stating earlier on.

\item \textbf{page 40, Definition 5.5:} \textquotedblleft ring
homomorphism\textquotedblright\ $\rightarrow$ \textquotedblleft$\mathbb{C}%
$-algebra homomorphism\textquotedblright.

\item \textbf{page 40, \S 5.3:} After \textquotedblleft$\omega\left(
p_{n}\right)  =\left(  -1\right)  ^{n-1}p_{n}$\textquotedblright, add
\textquotedblleft for $n\in\mathbf{N}$\textquotedblright.

\item \textbf{page 41, proof of Lemma 5.6:} This proof seems to be built upon
the illusion that%
\[
\left\{  \lambda\vdash n\ \mid\ \lambda\trianglerighteq\mu^{\star}\right\}
=\left\{  \lambda\vdash n\ \mid\ \lambda\trianglerighteq\mu\right\}
\cup\left\{  \mu^{\star}\right\}
\]
(or else, I am not sure how you get the second displayed equality of the
proof). But this is false. What you probably want to do instead is forget
about $\mu^{\star}$, and just derive $\omega\left(  s_{\mu}\right)
=s_{\mu^{\prime}}$ after assuming that every $\lambda\vartriangleright\mu$
(not $\lambda\trianglerighteq\mu$) satisfies $\omega\left(  s_{\lambda
}\right)  =s_{\lambda^{\prime}}$. This strategy would also have less
notational ballast.

Notice that this is a strong induction, so the base case is not required.

(Notice also that the solution of Question 25 (a) is more or less the same
proof. Rather than sketching it twice, maybe it's worth showing it once in
more detail?)

\item \textbf{page 41, \S 5.3, Alternative proof:} After \textquotedblleft By
the Murnaghan--Nakayama rule\textquotedblright, I would add \textquotedblleft%
(Corollary 3.13, rewritten using Theorem 4.10)\textquotedblright.

\item \textbf{page 41, \S 5.3, Alternative proof:} \textquotedblleft weighted
sum\textquotedblright\ $\rightarrow$ \textquotedblleft sum\textquotedblright.
(You are summing the signs; you need no further weights here.)

\item \textbf{page 41, \S 5.3, Alternative proof:} \textquotedblleft
Hence\textquotedblright\ $\rightarrow$ \textquotedblleft Hence, if $\lambda$
is a partition of $n$, then Theorem 5.3 yields\textquotedblright.

\item \textbf{page 41, \S 5.3, Alternative proof:} I don't know how detailed
this all is supposed to be, but I feel like there are a lot of silent steps
here. In particular, it would help to point out (probably somewhere in \S 3)
how the abacus of a partition $\lambda$ is related to the abacus of its
conjugate $\lambda^{\prime}$; this is beautiful and explains why the
border-strip tableaux for $\lambda$ are in bijection with those of
$\lambda^{\prime}$.

\item \textbf{page 41, \S 5.4:} Have you ever defined what a skew-partition
is, and how its Young diagram is defined?

\item \textbf{page 41, \S 5.5:} \textquotedblleft Let $n\in\mathbf{N}%
$\textquotedblright\ $\rightarrow$ \textquotedblleft Let $n\in\mathbf{N}_{0}%
$\textquotedblright\ (since you later take the direct sum $\bigoplus
_{n\in\mathbf{N}_{0}}\operatorname*{Cl}\left(  \operatorname*{Sym}%
\nolimits_{n}\right)  $).

\item \textbf{page 41, \S 5.5:} \textquotedblleft indicator functions
$f_{\alpha}$\textquotedblright\ $\rightarrow$ \textquotedblleft indicator
functions $\mathbbold{1}_{\alpha}$\textquotedblright.

\item \textbf{page 41, \S 5.5, and 3 other places in the text:}
\textquotedblleft cycle type\textquotedblright\ $\rightarrow$
\textquotedblleft cycle-type\textquotedblright\ (in order to keep your
notations consistent).

\item \textbf{page 41, \S 5.5:} It can't possibly hurt to say somewhere that
the \textquotedblleft$\omega$-involution\textquotedblright\ means the
involution $\omega$.

\item \textbf{page 42, proof of Proposition 5.7:} In the last sentence,
\textquotedblleft the image of $s_{\lambda}$\textquotedblright\ should be
\textquotedblleft the image of $s_{\mu}$\textquotedblright. But more
importantly, I am not sure how you conclude that this irreducible constituent
is actually the image of $s_{\mu}$. (This is not hard to check -- e.g., there
is a standard trick that uses $\left\langle \chi^{\mu},\chi^{\mu}\right\rangle
=\left\langle s_{\mu},s_{\mu}\right\rangle =1$ to show that $\chi^{\mu}$ is
$\pm$ an irreducible character, and then we can use $\left\langle \chi^{\mu
},\pi^{\mu}\right\rangle =1>0$ to conclude that the $\pm$ is in fact a $+$.)

\item \textbf{page 42:} \textquotedblleft is the signed sum\textquotedblright%
\ $\rightarrow$ \textquotedblleft is the sum of the signs\textquotedblright.

\item \textbf{page 42:} \textquotedblleft and content $\alpha$%
\textquotedblright\ $\rightarrow$ \textquotedblleft and type $\alpha
$\textquotedblright.

\item \textbf{page 42:} Replace \textquotedblleft Let $\operatorname*{ch}%
:\Lambda\rightarrow\bigoplus_{n\in\mathbf{N}_{0}}\operatorname*{Cl}\left(
\operatorname*{Sym}\nolimits_{n}\right)  $\textquotedblright\ by
\textquotedblleft Let $\operatorname*{ch}:\bigoplus_{n\in\mathbf{N}_{0}%
}\operatorname*{Cl}\left(  \operatorname*{Sym}\nolimits_{n}\right)
\rightarrow\Lambda$\textquotedblright.

\item \textbf{page 42:} \textquotedblleft The right-hand
side\textquotedblright\ $\rightarrow$ \textquotedblleft The left-hand
side\textquotedblright.

\item \textbf{page 43, Theorem 5.8:} Replace \textquotedblleft%
$\operatorname*{ch}\left(  \phi\operatorname*{sgn}%
\nolimits_{\operatorname*{Sym}\nolimits_{n}}\right)  =\omega\left(
\phi\right)  $\textquotedblright\ by \textquotedblleft$\operatorname*{ch}%
\left(  \phi\operatorname*{sgn}\nolimits_{\operatorname*{Sym}\nolimits_{n}%
}\right)  =\omega\left(  \operatorname*{ch}\phi\right)  $\textquotedblright.

\item \textbf{page 43, proof of Theorem 5.8:} Replace \textquotedblleft%
$h_{\lambda}h_{\nu}$\textquotedblright\ by \textquotedblleft$h_{\lambda}%
h_{\mu}$\textquotedblright\ in the displayed equation.

\item \textbf{page 43, Remark 5.9:} In \textquotedblleft by $s_{\lambda
}\left(  x_{1},\ldots,x_{N}\right)  \rightarrow\chi^{\lambda}$%
\textquotedblright, the arrow $\rightarrow$ should be a $\mapsto$.

\item \textbf{page 43, proof of Corollary 5.10:} \textquotedblleft$\left(
\chi_{\mu}\times\chi_{\nu}\right)  $\textquotedblright\ $\rightarrow$
\textquotedblleft$\left(  \chi^{\mu}\times\chi^{\nu}\right)  $%
\textquotedblright.

\item \textbf{page 43, proof of Corollary 5.10:} \textquotedblleft%
$\uparrow_{S_{m}\times S_{n}}^{S_{m+n}}$\textquotedblright\ $\rightarrow$
\textquotedblleft$\uparrow_{\operatorname*{Sym}\nolimits_{m}\times
\operatorname*{Sym}\nolimits_{n}}^{\operatorname*{Sym}\nolimits_{m+n}}%
$\textquotedblright.

\item \textbf{page 43, proof of Corollary 5.10:} Strictly speaking, you have
not shown that all irreducible characters of $\operatorname*{Sym}%
\nolimits_{n}$ are of the form $\chi^{\lambda}$, so the proof is incomplete.
(I am not saying that this is difficult, but it needs a couple more lines.)

\item \textbf{page 44, Question 2:} Part (a) of this Question is false. A
counterexample follows by observing that the Young diagram $\left[  \left(
2,2,2,1\right)  \right]  $ can be obtained from $\left[  \left(  3,2,2\right)
\right]  $ by moving the single box $\left(  1,3\right)  $ into the first
available position below it, but $\left(  3,2,2\right)  $ is not a dominance
neighbor of $\left(  2,2,2,1\right)  $ (indeed, $\left(  3,2,2\right)
\vartriangleright\left(  3,2,1,1\right)  \vartriangleright\left(
2,2,2,1\right)  $).
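
For what it is worth, this chain is easy to verify mechanically. Here is a
minimal brute-force check of the dominance comparisons (the helper name
\texttt{dominates} is mine):

```python
def dominates(lam, mu):
    """True if the partition lam dominates mu (compare partial sums)."""
    m = max(len(lam), len(mu))
    lam = list(lam) + [0] * (m - len(lam))
    mu = list(mu) + [0] * (m - len(mu))
    s1 = s2 = 0
    for a, b in zip(lam, mu):
        s1 += a
        s2 += b
        if s1 < s2:
            return False
    return True

# (3,2,2) > (3,2,1,1) > (2,2,2,1) strictly, so (3,2,2) and (2,2,2,1)
# are not dominance neighbours, although the latter is obtained from
# the former by moving the single box (1,3) down.
chain = [(3, 2, 2), (3, 2, 1, 1), (2, 2, 2, 1)]
for upper, lower in zip(chain, chain[1:]):
    assert dominates(upper, lower) and not dominates(lower, upper)
```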

The proper condition for dominance neighbors is subtler. Fortunately, part (b)
of the Question can be easily solved without ever passing through these
slippery places. One such solution appears in the solution to Exercise 2.2.9 in

\qquad Darij Grinberg and Victor Reiner,

\qquad\textit{Hopf Algebras in Combinatorics},

\qquad version of 11 May 2018,

\qquad\url{http://www.cip.ifi.lmu.de/~grinberg/algebra/HopfComb-sols.pdf}
(also available at \href{https://arxiv.org/abs/1409.8356v5}{arXiv:1409.8356v5})

(beware that the numbering on my website might have changed by the time you're
reading this, but the numbering on
\href{https://arxiv.org/abs/1409.8356v5}{arXiv:1409.8356v5} will never change).

Incidentally, a generalization of your Question 2 appears in Propositions 1.1
and 1.2 of

\qquad C. De Concini, David Eisenbud, and C. Procesi,

\qquad\textit{\href{https://eudml.org/doc/142693}{Young Diagrams and
Determinantal Varieties}},

\qquad Inventiones math. 56 (1980), pp. 129--165.

\item \textbf{page 45, Question 4:} Replace \textquotedblleft Lemma
1.3\textquotedblright\ by \textquotedblleft Lemma 1.2\textquotedblright.

\item \textbf{page 45, Question 7:} In the first line of this exercise, in
\textquotedblleft$v\left(  \alpha\right)  =\left(  1,\ldots,1,\ldots
,n\ldots,n\right)  $\textquotedblright, a comma is missing after the first
\textquotedblleft$n$\textquotedblright.

\item \textbf{page 46, Question 7:} \textquotedblleft Work with symmetric
functions\textquotedblright\ $\rightarrow$ \textquotedblleft Work with
symmetric polynomials\textquotedblright.

\item \textbf{page 46, Question 7:} Here is an easier way to solve part (g)
(which also shows that you can replace \textquotedblleft$\ell\in\mathbf{N}%
$\textquotedblright\ by \textquotedblleft$\ell\in\mathbf{N}_{0}$%
\textquotedblright):

\textit{Step 1:} We observe that every $N\geq0$ satisfies%
\begin{equation}
\sum_{i=0}^{N}\dbinom{N}{i}d_{\left(  1^{i}\right)  }=N!. \label{p46.e7.g.1}%
\end{equation}
(This follows by noticing that $\dbinom{N}{i}d_{\left(  1^{i}\right)  }$ is
the number of permutations $\sigma\in\operatorname*{Sym}\nolimits_{N}$ that
have exactly $N-i$ fixed points: choose the $i$ non-fixed points, then
derange them.)
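
This identity is also easy to confirm numerically, using the standard
recursion $D_{m}=\left(  m-1\right)  \left(  D_{m-1}+D_{m-2}\right)  $ for the
derangement numbers $d_{\left(  1^{m}\right)  }=D_{m}$. A quick sketch (the
helper name \texttt{derangements} is mine):

```python
from math import comb, factorial

def derangements(m):
    """D_m = number of fixed-point-free permutations of m letters."""
    d = [1, 0]  # D_0 = 1, D_1 = 0
    for j in range(2, m + 1):
        d.append((j - 1) * (d[j - 1] + d[j - 2]))
    return d[m]

# binom(N, i) * D_i counts the permutations of Sym_N with exactly N - i
# fixed points (choose the i non-fixed points, then derange them);
# summing over i covers all of Sym_N.
for N in range(11):
    assert sum(comb(N, i) * derangements(i) for i in range(N + 1)) == factorial(N)
```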

\textit{Step 2:} Now, fix $n\in\mathbf{N}_{0}$. For each $\ell\in\left\{
0,1,\ldots,n\right\}  $, we set%
\begin{equation}
w_{\ell}=\left(  -1\right)  ^{\ell+1}\sum_{m=\ell+1}^{n}\dbinom{m-1}{\ell
}\dbinom{n}{m}d_{\left(  1^{n-m}\right)  }. \label{p46.e7.g.wl=}%
\end{equation}
Thus, our goal is to prove that%
\[
d_{\left(  1^{n}\right)  }=\dfrac{n!}{\ell!}d_{\left(  1^{\ell}\right)
}+w_{\ell}\ \ \ \ \ \ \ \ \ \ \text{for each }\ell\in\left\{  0,1,\ldots
,n\right\}  .
\]
This we shall prove by induction on $n-\ell$. The base case ($n-\ell=0$) is
obvious (since $w_{n}=0$). For the induction step, it suffices to prove that%
\begin{equation}
\dfrac{n!}{\ell!}d_{\left(  1^{\ell}\right)  }+w_{\ell}=\dfrac{n!}{\left(
\ell+1\right)  !}d_{\left(  1^{\ell+1}\right)  }+w_{\ell+1} \label{p46.e7.g.3}%
\end{equation}
for each $\ell\in\left\{  0,1,\ldots,n-1\right\}  $. Thus we shall focus on
proving (\ref{p46.e7.g.3}).

\textit{Step 3:} Fix $\ell\in\left\{  0,1,\ldots,n-1\right\}  $. The
definition of $w_{\ell+1}$ yields%
\begin{align*}
w_{\ell+1}  &  =\left(  -1\right)  ^{\ell+2}\sum_{m=\ell+2}^{n}\dbinom
{m-1}{\ell+1}\dbinom{n}{m}d_{\left(  1^{n-m}\right)  }\\
&  =\left(  -1\right)  ^{\ell+2}\sum_{m=\ell+1}^{n}\dbinom{m-1}{\ell+1}%
\dbinom{n}{m}d_{\left(  1^{n-m}\right)  }\\
&  \ \ \ \ \ \ \ \ \ \ \left(
\begin{array}
[c]{c}%
\text{here, we have extended the range of the sum by one}\\
\text{extra addend, which is zero}\\
\text{(since }\dbinom{m-1}{\ell+1}=0\text{ when }m=\ell+1\text{)}%
\end{array}
\right) \\
&  =-\left(  -1\right)  ^{\ell+1}\sum_{m=\ell+1}^{n}\dbinom{m-1}{\ell
+1}\dbinom{n}{m}d_{\left(  1^{n-m}\right)  }.
\end{align*}
Subtracting this equality from (\ref{p46.e7.g.wl=}), we find%
\begin{align*}
&  w_{\ell}-w_{\ell+1}\\
&  =\left(  -1\right)  ^{\ell+1}\sum_{m=\ell+1}^{n}\dbinom{m-1}{\ell}%
\dbinom{n}{m}d_{\left(  1^{n-m}\right)  }\\
&  \ \ \ \ \ \ \ \ \ \ -\left(  -\left(  -1\right)  ^{\ell+1}\sum_{m=\ell
+1}^{n}\dbinom{m-1}{\ell+1}\dbinom{n}{m}d_{\left(  1^{n-m}\right)  }\right) \\
&  =\left(  -1\right)  ^{\ell+1}\sum_{m=\ell+1}^{n}\underbrace{\left(
\dbinom{m-1}{\ell}+\dbinom{m-1}{\ell+1}\right)  }_{\substack{=\dbinom{m}%
{\ell+1}\\\text{(by the recursion of the}\\\text{binomial coefficients)}%
}}\dbinom{n}{m}d_{\left(  1^{n-m}\right)  }\\
&  =\left(  -1\right)  ^{\ell+1}\sum_{m=\ell+1}^{n}\underbrace{\dbinom{m}%
{\ell+1}\dbinom{n}{m}}_{\substack{=\dbinom{n}{\ell+1}\dbinom{n-\left(
\ell+1\right)  }{n-m}\\\text{(by straightforward manipulations)}}}d_{\left(
1^{n-m}\right)  }\\
&  =\left(  -1\right)  ^{\ell+1}\dbinom{n}{\ell+1}\underbrace{\sum_{m=\ell
+1}^{n}\dbinom{n-\left(  \ell+1\right)  }{n-m}d_{\left(  1^{n-m}\right)  }%
}_{\substack{=\sum_{i=0}^{n-\left(  \ell+1\right)  }\dbinom{n-\left(
\ell+1\right)  }{i}d_{\left(  1^{i}\right)  }\\\text{(here, we have
substituted }i\\\text{for }n-m\text{ in the sum)}}}\\
&  =\left(  -1\right)  ^{\ell+1}\dbinom{n}{\ell+1}\underbrace{\sum
_{i=0}^{n-\left(  \ell+1\right)  }\dbinom{n-\left(  \ell+1\right)  }%
{i}d_{\left(  1^{i}\right)  }}_{\substack{=\left(  n-\left(  \ell+1\right)
\right)  !\\\text{(by (\ref{p46.e7.g.1}) (applied to }N=n-\left(
\ell+1\right)  \text{))}}}\\
&  =\left(  -1\right)  ^{\ell+1}\underbrace{\dbinom{n}{\ell+1}\left(
n-\left(  \ell+1\right)  \right)  !}_{=\dfrac{n!}{\left(  \ell+1\right)  !}%
}=\left(  -1\right)  ^{\ell+1}\dfrac{n!}{\left(  \ell+1\right)  !}.
\end{align*}
Comparing this with%
\begin{align*}
&  \dfrac{n!}{\left(  \ell+1\right)  !}\underbrace{d_{\left(  1^{\ell
+1}\right)  }}_{\substack{=\left(  \ell+1\right)  d_{\left(  1^{\ell}\right)
}+\left(  -1\right)  ^{\ell+1}\\\text{(by the well-known recursion}\\\text{for
derangement numbers)}}}-\dfrac{n!}{\ell!}d_{\left(  1^{\ell}\right)  }\\
&  =\dfrac{n!}{\left(  \ell+1\right)  !}\left(  \left(  \ell+1\right)
d_{\left(  1^{\ell}\right)  }+\left(  -1\right)  ^{\ell+1}\right)  -\dfrac
{n!}{\ell!}d_{\left(  1^{\ell}\right)  }\\
&  =\underbrace{\dfrac{n!}{\left(  \ell+1\right)  !}\left(  \ell+1\right)
}_{=\dfrac{n!}{\ell!}}d_{\left(  1^{\ell}\right)  }+\dfrac{n!}{\left(
\ell+1\right)  !}\left(  -1\right)  ^{\ell+1}-\dfrac{n!}{\ell!}d_{\left(
1^{\ell}\right)  }\\
&  =\dfrac{n!}{\ell!}d_{\left(  1^{\ell}\right)  }+\dfrac{n!}{\left(
\ell+1\right)  !}\left(  -1\right)  ^{\ell+1}-\dfrac{n!}{\ell!}d_{\left(
1^{\ell}\right)  }=\dfrac{n!}{\left(  \ell+1\right)  !}\left(  -1\right)
^{\ell+1}=\left(  -1\right)  ^{\ell+1}\dfrac{n!}{\left(  \ell+1\right)  !},
\end{align*}
we obtain%
\[
w_{\ell}-w_{\ell+1}=\dfrac{n!}{\left(  \ell+1\right)  !}d_{\left(  1^{\ell
+1}\right)  }-\dfrac{n!}{\ell!}d_{\left(  1^{\ell}\right)  }.
\]
This is clearly equivalent to (\ref{p46.e7.g.3}). Thus, (\ref{p46.e7.g.3}) is
proven. This completes the induction step.
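
As a sanity check on the whole argument, the claimed identity $d_{\left(
1^{n}\right)  }=\dfrac{n!}{\ell!}d_{\left(  1^{\ell}\right)  }+w_{\ell}$ can
be verified numerically for small $n$ (helper names are mine, not from the
text):

```python
from math import comb, factorial

def derangements(m):
    """D_m = number of fixed-point-free permutations of m letters."""
    d = [1, 0]  # D_0 = 1, D_1 = 0
    for j in range(2, m + 1):
        d.append((j - 1) * (d[j - 1] + d[j - 2]))
    return d[m]

def w(n, ell):
    """The correction term w_ell defined in Step 2 above."""
    return (-1) ** (ell + 1) * sum(
        comb(m - 1, ell) * comb(n, m) * derangements(n - m)
        for m in range(ell + 1, n + 1)
    )

# d_{(1^n)} = n!/ell! * d_{(1^ell)} + w_ell for all 0 <= ell <= n.
for n in range(9):
    for ell in range(n + 1):
        assert derangements(n) == factorial(n) // factorial(ell) * derangements(ell) + w(n, ell)
```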

\item \textbf{pages 46 and 47, Question 8:} Replace every \textquotedblleft%
$r$\textquotedblright\ by \textquotedblleft$n$\textquotedblright\ here (since
part (a) of the question speaks of $\operatorname*{Sym}\nolimits_{n}$).

\item \textbf{page 47, Question 10:} \textquotedblleft ring
homomorphism\textquotedblright\ $\rightarrow$ \textquotedblleft$\mathbb{C}%
$-algebra homomorphism\textquotedblright.

\item \textbf{page 48, Question 12:} \textquotedblleft symmetric
function\textquotedblright\ $\rightarrow$ \textquotedblleft$\operatorname*{ev}%
\nolimits_{N}$ of the symmetric function\textquotedblright.

\item \textbf{page 48, Question 12:} \textquotedblleft the signed
weight\textquotedblright\ $\rightarrow$ \textquotedblleft the sum of the
signed weights\textquotedblright.

\item \textbf{page 48, Question 12:} \textquotedblleft tuples $\left(
P_{1},\ldots,P_{M}\right)  $\textquotedblright\ $\rightarrow$
\textquotedblleft tuples $\left(  P_{M},\ldots,P_{1}\right)  $%
\textquotedblright\ (remember that you started them with $P_{3}$ in Example 2.2).

\item \textbf{page 48, Question 13:} \textquotedblleft path triples $\left(
P_{1},P_{2},P_{3}\right)  $\textquotedblright\ $\rightarrow$ \textquotedblleft
path triples $\left(  P_{3},P_{2},P_{1}\right)  $\textquotedblright.

\item \textbf{page 48, Question 14:} Isn't it too early at this point to speak
of the $\omega$ involution, let alone use Lemma 5.6?

\item \textbf{page 49, Question 17:} I assume $w$ should be an arbitrary
element of $\mathbf{N}_{0}$?

\item \textbf{page 50, Question 22:} The \textquotedblleft Of
course\textquotedblright\ sentence in part (c) looks out of place here: what
coefficients, and why should we care about them? (Example 4.8, which it
probably is referring to, is far away.) Maybe it is better to say that
\textquotedblleft Thus, the union in Definition 4.6 is a union of disjoint
sets\textquotedblright\ instead.

\item \textbf{page 50, Question 23:} I know it's a stupid remark, but you have
never actually defined the notion of a \textquotedblleft coplactic
map\textquotedblright.

\item \textbf{page 50, Question 24:} \textquotedblleft and
let\textquotedblright\ $\rightarrow$ \textquotedblleft Let\textquotedblright.

\item \textbf{page 51, Question 25:} In part (a), the two equalities need to
be switched, since the first one follows from Pieri's rule and the second from
Young's (and you do say \textquotedblleft respectively\textquotedblright).

\item \textbf{page 52, Question 31:} \textquotedblleft of content $\nu
$\textquotedblright\ $\rightarrow$ \textquotedblleft of content $\mu
$\textquotedblright.

\item \textbf{page 53, solution to Question 3:} In the second displayed
equation, replace \textquotedblleft$p_{k}t^{k}$\textquotedblright\ by
\textquotedblleft$p_{k}t^{k-1}$\textquotedblright.

\item \textbf{page 53, solution to Question 3:} The left hand side of the last
displayed equality in this solution should be \textquotedblleft$tH^{\prime
}\left(  t\right)  E\left(  t\right)  $\textquotedblright, not
\textquotedblleft$tH^{\prime}\left(  t\right)  E\left(  -t\right)
$\textquotedblright.

\item \textbf{page 53, solution to Question 3:} The right hand side of the
last displayed equality in this solution should be \textquotedblleft%
$tQ^{\prime}\left(  t\right)  $\textquotedblright, not \textquotedblleft%
$tQ\left(  t\right)  $\textquotedblright.

\item \textbf{page 54, solution to Question 5:} \textquotedblleft for each
$k\in\mathbf{N}_{0}$\textquotedblright\ $\rightarrow$ \textquotedblleft for
each $k\in\mathbf{N}$\textquotedblright.

\item \textbf{page 54, solution to Question 5:} \textquotedblleft we have
$R_{\mu\left(  1^{n}\right)  }=1$\textquotedblright\ $\rightarrow$
\textquotedblleft we have $R_{\left(  1^{n}\right)  \mu}=1$\textquotedblright.

\item \textbf{page 54, solution to Question 7:} \textquotedblleft%
$d_{(1^{n-m)}}$\textquotedblright\ $\rightarrow$ \textquotedblleft$d_{\left(
1^{n-m}\right)  }$\textquotedblright\ (wrong placement of the closing parenthesis).

\item \textbf{page 55, solution to Question 8:} \textquotedblleft$S_{a_{i}}%
$\textquotedblright\ $\rightarrow$ \textquotedblleft$\operatorname*{Sym}%
\nolimits_{a_{i}}$\textquotedblright\ on the third line of the solution.

Actually, there is one more imprecision here: \textquotedblleft$\prod_{i}%
C_{i}\wr S_{a_{i}}$\textquotedblright\ is not a wreath product, but a direct
product of wreath products :)

\item \textbf{page 56, solution to Question 10:} There is no such thing as
\textquotedblleft Question 33(f)\textquotedblright. You probably mean
\textquotedblleft Question 3(f)\textquotedblright.

\item \textbf{page 56, solution to Question 10:} After \textquotedblleft%
$\omega\left(  p_{\lambda}\right)  =\left(  -1\right)  ^{n-\ell\left(
\lambda\right)  }$\textquotedblright, add \textquotedblleft$p_{\lambda}%
$\textquotedblright.

\item \textbf{page 56, solution to Question 10:} \textquotedblleft$\lambda
\in\operatorname*{Sym}\nolimits_{n}$\textquotedblright\ $\rightarrow$
\textquotedblleft$\sigma_{\lambda}\in\operatorname*{Sym}\nolimits_{n}%
$\textquotedblright.

\item \textbf{page 56, solution to Question 12:} \textquotedblleft the
Lindstr\"{o}m\textquotedblright\ $\rightarrow$ \textquotedblleft
Lindstr\"{o}m\textquotedblright.

\item \textbf{page 57, solution to Question 14:} Before the first sentence, it
would be helpful to inform the reader that you are again defining an
involution on $M$-tuples of paths as in Example 2.2, although now the
starting points and ending points are different and the weights, too, are
defined differently.

\item \textbf{page 57, solution to Question 14:} \textquotedblleft involution
defining\textquotedblright\ $\rightarrow$ \textquotedblleft involution
defined\textquotedblright.

\item \textbf{page 57, solution to Question 14:} \textquotedblleft%
\textit{intersect then, they}\textquotedblright\ $\rightarrow$
\textquotedblleft\textit{intersect, then they}\textquotedblright.

\item \textbf{page 57, solution to Question 14:} \textquotedblleft paths
$\left(  P_{1},\ldots,P_{M}\right)  $\textquotedblright\ $\rightarrow$
\textquotedblleft paths $\left(  P_{M},\ldots,P_{1}\right)  $%
\textquotedblright.

\item \textbf{page 57, solution to Question 14:} \textquotedblleft
tabloid\textquotedblright\ $\rightarrow$ \textquotedblleft
tableau\textquotedblright.

\item \textbf{page 57, solution to Question 14:} In \textquotedblleft
therefore $P_{i}$ and $P_{i+1}$ meet on or before step $a$\textquotedblright,
you probably mean \textquotedblleft right step $a$\textquotedblright\ when you
say \textquotedblleft step $a$\textquotedblright.

\item \textbf{page 57, solution to Question 15 (b):} It took me a while to
understand what you mean by \textquotedblleft$\theta^{n}$\textquotedblright.
(You mean the map $\operatorname*{Sym}\nolimits^{n}V\rightarrow\mathbf{C}$
sending each $v_{1}v_{2}\cdots v_{n}$ to $\theta\left(  v_{1}\right)
\cdots\theta\left(  v_{n}\right)  $.)

\item \textbf{page 58, solution to Question 15 (b):} Replace \textquotedblleft%
$g\left(  u,\ldots,u\right)  =\theta\left(  u\right)  ^{n}$\textquotedblright%
\ by \textquotedblleft$g\left(  u\right)  =f\left(  u,\ldots,u\right)
=\theta\left(  u\right)  ^{n}$\textquotedblright.

\item \textbf{page 58, Remark after the solution to Question 15:}
\textquotedblleft polynomial\textquotedblright\ $\rightarrow$
\textquotedblleft$n$-multilinear map\textquotedblright.

\item \textbf{page 58, solution to Question 19:} I am not personally fond of
references to talk slides, but let me make an exception here: Drew Armstrong's
slides from FPSAC 2017
(\url{http://www.math.miami.edu/~armstrong/Talks/RCC_FPSAC_17.pdf}) give
gorgeous picture proofs of all parts of Question 19 (as well as mentioning
further results).

\item \textbf{page 58, solution to Question 19:} In part (a), replace
\textquotedblleft$b=0$\textquotedblright\ by \textquotedblleft$a=0$%
\textquotedblright.

\item \textbf{page 59, solution to Question 21:} \textquotedblleft By Question
3(e)\textquotedblright\ $\rightarrow$ \textquotedblleft By Question
3(f)\textquotedblright.

\item \textbf{page 59, solution to Question 21:} After \textquotedblleft%
$\mu/\lambda$-tableaux\textquotedblright, insert \textquotedblleft with
entries from $\left\{  1,2\right\}  $\textquotedblright.

\item \textbf{page 59, solution to Question 21:} After condition (c), add the
condition \textquotedblleft(d) Each row and each column is weakly
increasing.\textquotedblright.

\item \textbf{page 59, solution to Question 21:} Maybe explain what
\textquotedblleft disjoint union\textquotedblright\ means (in
\textquotedblleft disjoint union of hooks\textquotedblright).

\item \textbf{page 59, solution to Question 21:} \textquotedblleft such that
$\left(  i+1,j\right)  \notin\left[  \nu/\lambda\right]  $ and $\left(
i,j-1\right)  \notin\left[  \nu/\lambda\right]  $\textquotedblright%
\ $\rightarrow$ \textquotedblleft such that $\left(  i-1,j\right)
\notin\left[  \nu/\lambda\right]  $ and $\left(  i,j+1\right)  \notin\left[
\nu/\lambda\right]  $\textquotedblright. (This is assuming I am understanding
you right: if I regard a rim hook as a snake crawling to the northwest, then
its terminal point is its head. Your examples suggest this, at least.)

\item \textbf{page 59, solution to Question 21, Example:} \textquotedblleft%
$\nu=\left(  3,2\right)  $\textquotedblright\ $\rightarrow$ \textquotedblleft%
$\mu=\left(  3,2\right)  $\textquotedblright.

\item \textbf{page 59, solution to Question 21, Example:} \textquotedblleft%
$\nu=\left(  3,2,2\right)  $\textquotedblright\ $\rightarrow$
\textquotedblleft$\mu=\left(  3,2,2\right)  $\textquotedblright.

\item \textbf{page 61, solution to Question 26:} I'd replace the
\textquotedblleft We have\textquotedblright\ at the beginning by
\textquotedblleft By Claim 3 in the proof of Theorem 5.3, we
have\textquotedblright.

\item \textbf{page 61, solution to Question 26:} Replace \textquotedblleft%
$\alpha\left(  1\right)  ,\ldots,\alpha\left(  k\right)  $\textquotedblright%
\ by \textquotedblleft$\beta\left(  1\right)  ,\ldots,\beta\left(  k\right)
$\textquotedblright.

\item \textbf{page 61, solution to Question 26:} Replace \textquotedblleft%
$\left\vert \alpha\left(  i\right)  \right\vert $\textquotedblright\ by
\textquotedblleft$\left\vert \beta\left(  i\right)  \right\vert $%
\textquotedblright.

\item \textbf{page 61, solution to Question 26:} In the second displayed
equality, replace each \textquotedblleft$b$\textquotedblright\ by a
\textquotedblleft$B$\textquotedblright, and also add commas into the
multinomial coefficients.

\item \textbf{page 61, solution to Question 27:} \textquotedblleft%
$\left\langle s_{\lambda/\nu},f\right\rangle =\left\langle s_{\nu}%
f,s_{\lambda}\right\rangle $\textquotedblright\ $\rightarrow$
\textquotedblleft$\left\langle s_{\lambda/\nu},f\right\rangle =\left\langle
s_{\lambda},s_{\nu}f\right\rangle $\textquotedblright. (This is important if
your form $\left\langle \cdot,\cdot\right\rangle $ is sesquilinear; but even
if it is bilinear, it is probably better to avoid moving $f$ from right to
left argument without purpose.)

\item \textbf{page 61, solution to Question 28:} This doesn't answer the
question about \textquotedblleft a version of the Murnaghan--Nakayama rule for
skew-Schur functions\textquotedblright, as no skew Schur functions appear here!

(I know only one version of Murnaghan--Nakayama for skew Schur functions, and
it involves skewing operators, which you haven't introduced. So maybe you
didn't mean to say \textquotedblleft skew-Schur\textquotedblright\ in the exercise.)

\item \textbf{page 62, solution to Question 29:} Replace every
\textquotedblleft$\mu$\textquotedblright\ by \textquotedblleft$\nu
$\textquotedblright\ here.

\item \textbf{page 62, solution to Question 29:} \textquotedblleft at $\left(
\lambda_{j\tau}+M-j\tau\right)  $ and $\left(  \lambda_{i\tau}+M-i\tau\right)
$\textquotedblright\ $\rightarrow$ \textquotedblleft at $\left(
\lambda_{j\tau}+M-j\tau,N\right)  $ and $\left(  \lambda_{i\tau}%
+M-i\tau,N\right)  $\textquotedblright.

\item \textbf{page 62, solution to Question 30:} I don't think you want to say
\textquotedblleft with $k$ minimal\textquotedblright\ here. As I understand it,
the $k$ in \textquotedblleft$k$-unlatticed\textquotedblright\ is chosen by
looking at which of the records is broken first when reading
$\operatorname*{w}\left(  t\right)  $ from right to left, not by minimality of
$k$.
\end{itemize}


\end{document}