\documentclass[12pt,final,notitlepage,onecolumn]{article}%
\usepackage[all,cmtip]{xy}
\usepackage{lscape}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{color}
\usepackage{comment}
\usepackage{amsmath}
\usepackage{hyperref}
%TCIDATA{OutputFilter=latex2.dll}
%TCIDATA{Version=5.50.0.2960}
%TCIDATA{CSTFile=LaTeX article (bright).cst}
%TCIDATA{Created=Sat Mar 27 17:33:36 2004}
%TCIDATA{LastRevised=Thursday, September 17, 2015 18:22:54}
%TCIDATA{SuppressPackageManagement}
%TCIDATA{}
%TCIDATA{}
%TCIDATA{}
%TCIDATA{BibliographyScheme=Manual}
%BeginMSIPreambleData
\providecommand{\U}[1]{\protect\rule{.1in}{.1in}}
%EndMSIPreambleData
\newenvironment{details}{\color{blue}}{}
\newenvironment{statement}{\begin{quote}}{\end{quote}}
\definecolor{violet}{RGB}{143,0,255}
\definecolor{forestgreen}{RGB}{34, 100, 34}
\voffset=-2.5cm
\hoffset=-2.5cm
\newenvironment{verlong}{}{}
\newenvironment{vershort}{}{}
\newenvironment{noncompile}{}{}
\excludecomment{verlong}
\includecomment{vershort}
\excludecomment{noncompile}
\setlength\textheight{24cm}
\setlength\textwidth{15.5cm}
\begin{document}
\begin{center}
\textbf{The Robinson-Schensted and Sch\"{u}tzenberger algorithms, an
elementary approach}

\textit{Marc A. A. van Leeuwen}

version of 30 January 2013

\textbf{Errata by Darij Grinberg - I}

[updated version of 15 August 2015]
\bigskip
\end{center}
The following text is an annotation to Marc A. A. van Leeuwen's paper
\textquotedblleft The Robinson-Schensted and Sch\"{u}tzenberger algorithms, an
elementary approach\textquotedblright\ in its version of 30 January 2013.
This annotation contains corrections of mistakes (or what I believe to be
mistakes) and additional comments (in particular, elaborations of some
arguments that I found insufficiently detailed in the paper). The latter are
printed in \textcolor{blue}{blue}.
Different comments are separated by horizontal lines, like this:
\rule{17cm}{0.3mm}
\textbf{Page 3:} Replace \textquotedblleft complementary the
approach\textquotedblright\ by \textquotedblleft complementary to the
approach\textquotedblright.
\rule{17cm}{0.3mm}
\textbf{Page 5, \S 1.1:} Replace \textquotedblleft a `differential poset' as
defined in [Stan3]\textquotedblright\ by \textquotedblleft a `$1$-differential
poset' as defined in [Stan3]\textquotedblright; and even this is not
completely correct, since a $1$-differential poset as defined in [Stan3] is
also required to have a global minimum and to be locally finite.
Also, replace \textquotedblleft by any differential poset\textquotedblright%
\ by \textquotedblleft by any $1$-differential poset\textquotedblright.
The same kind of inaccuracy appears in \S 3.1 on page 16.
\rule{17cm}{0.3mm}
\textbf{Page 5, \S 1.2:} I think it should be explained that a
\textquotedblleft saturated decreasing chain in $\mathcal{P}$%
\textquotedblright\ means a sequence $\left( \lambda_{\left[ 0\right]
},\lambda_{\left[ 1\right] },\ldots,\lambda_{\left[ k\right] }\right) $
of partitions such that $\lambda_{\left[ k\right] }=\left( 0\right) $ and
such that every $i\in\left\{ 0,1,\ldots,k-1\right\} $ satisfies
$\lambda_{\left[ i+1\right] }\in\left( \lambda_{\left[ i\right] }\right)
^{-}$ (or, equivalently, $\lambda_{\left[ i\right] }\in\left(
\lambda_{\left[ i+1\right] }\right) ^{+}$). (So, unlike in [Stan2], you are
requiring the chain to end at $\left( 0\right) $.) At least, this is what
you mean by \textquotedblleft saturated decreasing chain in $\mathcal{P}%
$\textquotedblright\ up to \S 4. (In \S 5, you do consider saturated
decreasing chains which do not end at $\left( 0\right) $.)
Similarly, if $\mathbf{P}$ is a poset which has a unique minimal element
$\widehat{0}$, then a \textquotedblleft saturated decreasing
chain\textquotedblright\ in $\mathbf{P}$ means a sequence $\left( p_{0}%
,p_{1},\ldots,p_{k}\right) $ of elements of $\mathbf{P}$ such that
$p_{k}=\widehat{0}$ and such that, for every $i\in\left\{ 0,1,\ldots
,k-1\right\} $, the element $p_{i+1}$ is covered by $p_{i}$ in $\mathbf{P}$
(this means that $p_{i+1}<p_{i}$, and that no $q\in\mathbf{P}$ satisfies
$p_{i+1}<q<p_{i}$).
\rule{17cm}{0.3mm}
\textbf{Page 15:} Replace \textquotedblleft presevers\textquotedblright\ by
\textquotedblleft preserves\textquotedblright.
\rule{17cm}{0.3mm}
\textbf{Page 15:} Replace \textquotedblleft the conditions $m>T$%
\textquotedblright\ by \textquotedblleft the condition $m>T$\textquotedblright.
\rule{17cm}{0.3mm}
\textbf{Page 15:} Replace \textquotedblleft do determine\textquotedblright\ by
\textquotedblleft to determine\textquotedblright.
\rule{17cm}{0.3mm}
\textbf{Page 16, \S 3.1:} Replace \textquotedblleft tableaux of equal
shape\textquotedblright\ by \textquotedblleft normalized tableaux of equal
shape $\in\mathcal{P}_{n}$\textquotedblright.
\rule{17cm}{0.3mm}
\textbf{Page 16, \S 3.1:} In \textquotedblleft another bijection
$RS^{t}:\mathbf{S}_{n}\rightarrow\bigcup_{\lambda\in\mathcal{P}}%
\mathcal{T}_{\lambda}\times\mathcal{T}_{\lambda}$\textquotedblright, replace
\textquotedblleft$\mathcal{P}$\textquotedblright\ by \textquotedblleft%
$\mathcal{P}_{n}$\textquotedblright.
\rule{17cm}{0.3mm}
\textbf{Page 17, proof of Theorem 3.2.1:} At the beginning of this proof, add
the following sentence: \textquotedblleft Set $\left( P,Q\right) =RS\left(
\sigma\right) $; we then need to show that $RS\left( \sigma^{-1}\right)
=\left( Q,P\right) $.\textquotedblright\ (Indeed, the way you formulate
Theorem 3.2.1, it is not clear a priori what $P$ and $Q$ are.)
\rule{17cm}{0.3mm}
\textbf{Page 17, proof of Theorem 3.2.1:} Replace \textquotedblleft the the
set\textquotedblright\ by \textquotedblleft the set\textquotedblright.
\rule{17cm}{0.3mm}
\begin{details}
\textbf{Page 17, proof of Theorem 3.2.1:} Here you write that
\textquotedblleft Any configuration $\left(
\begin{array}
[c]{cc}%
\lambda^{\left[ i-1,j-1\right] } & \lambda^{\left[ i-1,j\right] }\\
\lambda^{\left[ i,j-1\right] } & \lambda^{\left[ i,j\right] }%
\end{array}
\right) $ is of type $RS1$ if $j=\sigma_{i}$, and otherwise of type $RS0$,
$RS2$, or $RS3$\textquotedblright. This claim (which I will call the
\textit{growth-cell claim}) is correct, but in my opinion not trivial enough
to deserve no justification. Here is a sketch of how this is proven:
If $\lambda$ is a partition, then we treat any Young tableau of shape
$\lambda$ as a map from $Y\left( \lambda\right) $ to $\mathbf{Z}$. Thus, it
makes sense to speak (for example) of the restriction of a Young tableau to a
subset of $Y\left( \lambda\right) $ (because it makes sense to speak of the
restriction of a map to a subset of its domain). For every Young tableau $Z$
and every integer $j$, we let $Z\mid_{\leq j}$ denote the restriction of the
tableau $Z$ to the set of its squares whose entry does not exceed $j$.
(Clearly, this restriction is again a Young tableau.)
Now, we can state a general fact:
\begin{statement}
\textbf{Proposition 3.2.1a.} Let $T$ be a Young tableau. Let $m$ be an integer
which does not appear in $T$. Let $j$ be an integer. Then, the configuration
$\left(
\begin{array}
[c]{cc}%
\operatorname*{sh}\left( T\mid_{\leq j-1}\right) & \operatorname*{sh}\left(
T\mid_{\leq j}\right) \\
\operatorname*{sh}\left( \left( T\leftarrow m\right) \mid_{\leq j-1}\right)
& \operatorname*{sh}\left( \left( T\leftarrow m\right) \mid_{\leq
j}\right)
\end{array}
\right) $ is of type $RS1$ if $j=m$, and otherwise is of type $RS0$, $RS2$,
or $RS3$.
\end{statement}
The growth-cell claim follows from Proposition 3.2.1a (applied to $T=P_{i-1}$
and $m=\sigma_{i}$), because we have $P^{\left[ i,j\right] }=P_{i}\mid_{\leq
j}$ and $P_{i}=P_{i-1}\leftarrow\sigma_{i}$. Thus, it remains to prove
Proposition 3.2.1a.
The proof of Proposition 3.2.1a mainly rests on the following two results:
\begin{statement}
\textbf{Proposition 3.2.1b.} Let $T$ be a Young tableau. Let $m$ be an integer
which does not appear in $T$. Let $j$ be an integer.
\textbf{(a)} If $m\leq j$, then $\left( T\leftarrow m\right) \mid_{\leq
j}=\left( T\mid_{\leq j}\right) \leftarrow m$.
\textbf{(b)} If $m>j$, then $\left( T\leftarrow m\right) \mid_{\leq j}%
=T\mid_{\leq j}$.
\textbf{(c)} Let $\alpha$ and $\beta$ be two partitions such that
$\alpha=\operatorname*{sh}\left( \left( T\leftarrow m\right) \mid_{\leq
j}\right) $ and $\beta=\operatorname*{sh}\left( T\mid_{\leq j}\right) $.
Then, $\alpha\in\left\{ \beta\right\} \cup\beta^{+}$.
\textbf{Proposition 3.2.1c.} Let $T$ be a nonempty Young tableau. Let $m$ be
an integer which does not appear in $T$. Assume that $m\not > T$. Then, the
configuration $\left(
\begin{array}
[c]{cc}%
\operatorname*{sh}\left( T^{-}\right) & \operatorname*{sh}T\\
\operatorname*{sh}\left( \left( T\leftarrow m\right) ^{-}\right) &
\operatorname*{sh}\left( T\leftarrow m\right)
\end{array}
\right) $ is of type $RS2$ or $RS3$.
\end{statement}
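Proposition 3.2.1b also lends itself to a quick computational sanity check.
The following Python sketch is entirely mine and not part of the paper (the
encoding of a Young tableau as a list of its rows, as well as all function
names, are ad-hoc conventions); it implements the insertion $T\leftarrow m$
and the restriction $T\mid_{\leq j}$, and verifies all three parts of
Proposition 3.2.1b on a small example:

```python
# Ad-hoc sanity check for Proposition 3.2.1b (my own sketch, not the
# paper's code).  A Young tableau with distinct entries is encoded as a
# list of its rows, each row a strictly increasing list of integers.

def insert(tableau, m):
    """Schensted row insertion T <- m."""
    rows = [row[:] for row in tableau]
    for row in rows:
        bigger = [k for k, e in enumerate(row) if e > m]
        if not bigger:                           # m exceeds the whole row:
            row.append(m)                        # append it and stop
            return rows
        row[bigger[0]], m = m, row[bigger[0]]    # bump the displaced entry
    rows.append([m])                             # m starts a new row
    return rows

def restrict(tableau, j):
    """T|_{<=j}: keep only the squares whose entry is <= j."""
    rows = [[e for e in row if e <= j] for row in tableau]
    return [row for row in rows if row]

def shape(tableau):
    return [len(row) for row in tableau]

def adds_at_most_one_box(beta, alpha):
    """Check that the partition alpha lies in {beta} u beta^+ ."""
    if alpha == beta:
        return True
    padded = beta + [0] * (len(alpha) - len(beta))
    return (sum(alpha) - sum(beta) == 1
            and all(a >= b for a, b in zip(alpha, padded)))

T, m = [[1, 3, 5], [4, 8]], 2                    # m = 2 does not appear in T
for j in range(10):
    left = restrict(insert(T, m), j)
    if m <= j:                                   # part (a)
        assert left == insert(restrict(T, j), m)
    else:                                        # part (b)
        assert left == restrict(T, j)
    # part (c)
    assert adds_at_most_one_box(shape(restrict(T, j)), shape(left))
```
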
While I still would not call these results trivial, they are the kinds of
results I would feel comfortable leaving to the reader.\footnote{The main idea
of the proof of Proposition 3.2.1b \textbf{(a)} is the following: The entries
being displaced during the insertion procedure form an increasing sequence;
hence, the entries $\leq j$ are getting displaced before the entries $>j$.
Thus, the insertion procedure for $I\left( T,m\right) $ first emulates the
insertion procedure for $I\left( T\mid_{\leq j},m\right) $, and after that
only displaces entries that are $>j$ (whence the entries $\leq j$ remain
unmoved). This proves Proposition 3.2.1b \textbf{(a)}. Proposition 3.2.1b
\textbf{(b)} is proven the same way, except that the entries $\leq j$ are never
touched to begin with (since we are inserting an entry $>j$ right away). Part
\textbf{(c)} of Proposition 3.2.1b follows from parts \textbf{(a)} and
\textbf{(b)}.
\par
Proposition 3.2.1c is just a restatement of the analysis of the case $m\not >
T$ that you have made between Definition 3.1.1 and Definition 3.1.2.}
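The increasing-displacements fact used in this footnote can be watched in
action. Here is a small Python illustration (mine, not the paper's; the
list-of-rows encoding of tableaux is an ad-hoc convention):

```python
# Illustration (my own sketch, not the paper's code) of the fact driving
# the proof of Proposition 3.2.1b (a): the entries displaced during row
# insertion form a strictly increasing sequence, so all displacements of
# entries <= j happen before any entry > j is ever touched.

def insert_with_path(tableau, m):
    """Row-insert m into a list-of-rows tableau; also return the
    sequence of displaced (bumped) entries."""
    rows = [row[:] for row in tableau]
    displaced = []
    for row in rows:
        bigger = [k for k, e in enumerate(row) if e > m]
        if not bigger:
            row.append(m)                        # m lands at the row's end
            return rows, displaced
        displaced.append(row[bigger[0]])         # record the bumped entry
        row[bigger[0]], m = m, row[bigger[0]]
    rows.append([m])
    return rows, displaced

T = [[1, 3, 6, 9], [4, 7], [8]]
new_T, path = insert_with_path(T, 2)             # inserting m = 2
# the bumped entries in path increase strictly down the tableau
assert all(a < b for a, b in zip(path, path[1:]))
```
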
Finally, Proposition 3.2.1a can easily be derived from Proposition 3.2.1c
using Proposition 3.2.1b and (7):
\textit{Proof of Proposition 3.2.1a.} We need to prove the following two statements:
\textit{Statement P3.2.1.a.1:} If $j=m$, then the configuration $\left(
\begin{array}
[c]{cc}%
\operatorname*{sh}\left( T\mid_{\leq j-1}\right) & \operatorname*{sh}\left(
T\mid_{\leq j}\right) \\
\operatorname*{sh}\left( \left( T\leftarrow m\right) \mid_{\leq j-1}\right)
& \operatorname*{sh}\left( \left( T\leftarrow m\right) \mid_{\leq
j}\right)
\end{array}
\right) $ is of type $RS1$.
\textit{Statement P3.2.1.a.2:} If $j\neq m$, then the configuration $\left(
\begin{array}
[c]{cc}%
\operatorname*{sh}\left( T\mid_{\leq j-1}\right) & \operatorname*{sh}\left(
T\mid_{\leq j}\right) \\
\operatorname*{sh}\left( \left( T\leftarrow m\right) \mid_{\leq j-1}\right)
& \operatorname*{sh}\left( \left( T\leftarrow m\right) \mid_{\leq
j}\right)
\end{array}
\right) $ is of type $RS0$, $RS2$, or $RS3$.
\textit{Proof of Statement P3.2.1.a.1:} Assume that $j=m$. Thus, Proposition
3.2.1b \textbf{(a)} yields that $\left( T\leftarrow m\right) \mid_{\leq
j}=\left( T\mid_{\leq j}\right) \leftarrow m$.
Recall that the integer $m$ does not appear in $T$. Since $j=m$, this rewrites
as follows: The integer $j$ does not appear in $T$. Hence, $T\mid_{\leq
j-1}=T\mid_{\leq j}$.
On the other hand, $m=j>j-1$. Hence, Proposition 3.2.1b \textbf{(b)} (applied
to $j-1$ instead of $j$) yields that $\left( T\leftarrow m\right) \mid_{\leq
j-1}=T\mid_{\leq j-1}$.
Furthermore, $m=j>T\mid_{\leq j-1}=T\mid_{\leq j}$. Hence, (6) (applied to
$T\mid_{\leq j}$ instead of $T$) shows that $\operatorname*{ch}\left( \left(
T\mid_{\leq j}\right) \leftarrow m\right) =\rho^{+}\left(
\operatorname*{sh}\left( T\mid_{\leq j}\right) \right) :\operatorname*{ch}%
\left( T\mid_{\leq j}\right) $. Hence, $\operatorname*{sh}\left( \left(
T\mid_{\leq j}\right) \leftarrow m\right) =\rho^{+}\left(
\operatorname*{sh}\left( T\mid_{\leq j}\right) \right) $. Hence,%
\[
\operatorname*{sh}\left( \underbrace{\left( T\leftarrow m\right) \mid_{\leq
j}}_{=\left( T\mid_{\leq j}\right) \leftarrow m}\right) =\operatorname*{sh}%
\left( \left( T\mid_{\leq j}\right) \leftarrow m\right) =\rho^{+}\left(
\operatorname*{sh}\left( T\mid_{\leq j}\right) \right) .
\]
Now, set $\kappa=\operatorname*{sh}\left( T\mid_{\leq j-1}\right) $,
$\lambda=\operatorname*{sh}\left( T\mid_{\leq j}\right) $, $\mu
=\operatorname*{sh}\left( \left( T\leftarrow m\right) \mid_{\leq
j-1}\right) $ and $\nu=\operatorname*{sh}\left( \left( T\leftarrow
m\right) \mid_{\leq j}\right) $. Then,%
\[
\kappa=\operatorname*{sh}\left( \underbrace{T\mid_{\leq j-1}}_{=T\mid_{\leq
j}}\right) =\operatorname*{sh}\left( T\mid_{\leq j}\right) =\lambda
\]
and%
\[
\mu=\operatorname*{sh}\left( \underbrace{\left( T\leftarrow m\right)
\mid_{\leq j-1}}_{=T\mid_{\leq j-1}}\right) =\operatorname*{sh}\left(
T\mid_{\leq j-1}\right) =\kappa=\lambda.
\]
From $\lambda=\kappa$, we obtain $\lambda\in\left\{ \kappa\right\}
\subseteq\left\{ \kappa\right\} \cup\kappa^{+}$. From $\mu=\kappa$, we
obtain $\mu\in\left\{ \kappa\right\} \subseteq\left\{ \kappa\right\}
\cup\kappa^{+}$.
Furthermore, $\nu=\operatorname*{sh}\left( \left( T\leftarrow m\right)
\mid_{\leq j}\right) =\rho^{+}\left( \underbrace{\operatorname*{sh}\left(
T\mid_{\leq j}\right) }_{=\lambda}\right) =\rho^{+}\left( \lambda\right)
\in\lambda^{+}$, so that $\lambda\in\nu^{-}\subseteq\left\{ \nu\right\}
\cup\nu^{-}$. Hence, $\mu=\lambda\in\left\{ \nu\right\} \cup\nu^{-}$, so that
both $\lambda$ and $\mu$ belong to $\left\{ \nu\right\} \cup\nu^{-}$.
Combining $\lambda,\mu\in\left\{ \kappa\right\} \cup\kappa^{+}$ with
$\lambda,\mu\in\left\{ \nu\right\} \cup\nu^{-}$, we obtain $\lambda,\mu
\in\left( \left\{ \kappa\right\} \cup\kappa^{+}\right) \cap\left(
\left\{ \nu\right\} \cup\nu^{-}\right) $. Combined with $\kappa=\lambda
=\mu$ and $\nu=\rho^{+}\left( \lambda\right) $, this shows that $\left(
\begin{array}
[c]{cc}%
\kappa & \lambda\\
\mu & \nu
\end{array}
\right) $ is a configuration of type $RS1$. Since $\kappa=\operatorname*{sh}%
\left( T\mid_{\leq j-1}\right) $, $\lambda=\operatorname*{sh}\left(
T\mid_{\leq j}\right) $, $\mu=\operatorname*{sh}\left( \left( T\leftarrow
m\right) \mid_{\leq j-1}\right) $ and $\nu=\operatorname*{sh}\left( \left(
T\leftarrow m\right) \mid_{\leq j}\right) $, this is precisely Statement
P3.2.1.a.1. Thus, Statement P3.2.1.a.1 is proven.
\textit{Proof of Statement P3.2.1.a.2:} Assume that $j\neq m$. Let
$Q=T\mid_{\leq j}$. The integer $m$ does not appear in $Q$ (since $m$ does not
appear in $T$).
Set $\kappa=\operatorname*{sh}\left( T\mid_{\leq j-1}\right) $,
$\lambda=\operatorname*{sh}\left( T\mid_{\leq j}\right) $, $\mu
=\operatorname*{sh}\left( \left( T\leftarrow m\right) \mid_{\leq
j-1}\right) $ and $\nu=\operatorname*{sh}\left( \left( T\leftarrow
m\right) \mid_{\leq j}\right) $. Then, it is easy to see that $\lambda
\in\left\{ \kappa\right\} \cup\kappa^{+}$ and $\mu\in\left\{ \kappa
\right\} \cup\kappa^{+}$ (by Proposition 3.2.1b \textbf{(c)}, applied to
$j-1$, $\mu$ and $\kappa$ instead of $j$, $\alpha$ and $\beta$). Also, $\nu
\in\left\{ \mu\right\} \cup\mu^{+}$, and thus $\mu\in\left\{ \nu\right\}
\cup\nu^{-}$. Furthermore, $\nu\in\left\{ \lambda\right\} \cup\lambda^{+}$
(by Proposition 3.2.1b \textbf{(c)}, applied to $\nu$ and $\lambda$ instead of
$\alpha$ and $\beta$). Thus, $\lambda\in\left\{ \nu\right\} \cup\nu^{-}$.
Combining $\lambda,\mu\in\left\{ \kappa\right\} \cup\kappa^{+}$ with
$\lambda,\mu\in\left\{ \nu\right\} \cup\nu^{-}$, we obtain $\lambda,\mu
\in\left( \left\{ \kappa\right\} \cup\kappa^{+}\right) \cap\left(
\left\{ \nu\right\} \cup\nu^{-}\right) $.
Now, we shall show that
\begin{equation}
\text{the configuration }\left(
\begin{array}
[c]{cc}%
\kappa & \lambda\\
\mu & \nu
\end{array}
\right) \text{ is of type }RS0\text{, }RS2\text{, or }RS3\text{.}
\label{p17.proof-of-3.2.1.a2.cl}%
\end{equation}
\textit{Proof of (\ref{p17.proof-of-3.2.1.a2.cl}):} We are in one of the
following two cases:
\textit{Case 1:} The integer $j$ appears in $T$.
\textit{Case 2:} The integer $j$ does not appear in $T$.
Let us first consider Case 1. In this case, the integer $j$ appears in $T$.
Thus, $j$ is the highest entry of $T\mid_{\leq j}$. Hence, $T\mid_{\leq
j-1}=\left( \underbrace{T\mid_{\leq j}}_{=Q}\right) ^{-}=Q^{-}$. In
particular, $Q^{-}$ is well-defined. Hence, the tableau $Q$ is nonempty.
Also, the integer $j$ appears in $T\leftarrow m$ (since $j$ appears in $T$).
Thus, $j$ is the highest entry of $\left( T\leftarrow m\right) \mid_{\leq
j}$. Therefore, $\left( T\leftarrow m\right) \mid_{\leq j-1}=\left( \left(
T\leftarrow m\right) \mid_{\leq j}\right) ^{-}$.
Now, we must be in one of the following two subcases:
\textit{Subcase 1.1:} We have $m\leq j$.
\textit{Subcase 1.2:} We have $m>j$.
Let us first consider Subcase 1.1. In this Subcase, we have $m\leq j$.
Combined with $j\neq m$, this shows that $m<j$. The integer $j$ appears in
$Q$ (since $j$ appears in $T$ and satisfies $j\leq j$); hence, $m<j$ entails
$m\not > Q$. Moreover, Proposition 3.2.1b \textbf{(a)} yields $\left(
T\leftarrow m\right) \mid_{\leq j}=\left( T\mid_{\leq j}\right) \leftarrow
m=Q\leftarrow m$ (since $m\leq j$). Therefore, Proposition 3.2.1c (applied
to $Q$ instead of $T$) shows that the configuration $\left(
\begin{array}
[c]{cc}%
\operatorname*{sh}\left( Q^{-}\right) & \operatorname*{sh}Q\\
\operatorname*{sh}\left( \left( Q\leftarrow m\right) ^{-}\right) &
\operatorname*{sh}\left( Q\leftarrow m\right)
\end{array}
\right) $ is of type $RS2$ or $RS3$. Since $\operatorname*{sh}\left(
Q^{-}\right) =\kappa$, $\operatorname*{sh}Q=\lambda$, $\operatorname*{sh}%
\left( \left( Q\leftarrow m\right) ^{-}\right) =\mu$ and
$\operatorname*{sh}\left( Q\leftarrow m\right) =\nu$, this rewrites as
follows: The configuration $\left(
\begin{array}
[c]{cc}%
\kappa & \lambda\\
\mu & \nu
\end{array}
\right) $ is of type $RS2$ or $RS3$. Hence, (\ref{p17.proof-of-3.2.1.a2.cl})
is proven in Subcase 1.1.
Let us now consider Subcase 1.2. In this Subcase, we have $m>j$. Thus,
$m>j>j-1$.
Now, Proposition 3.2.1b \textbf{(b)} shows that $\left( T\leftarrow m\right)
\mid_{\leq j}=T\mid_{\leq j}$. Hence, $\nu=\operatorname*{sh}\left(
\underbrace{\left( T\leftarrow m\right) \mid_{\leq j}}_{=T\mid_{\leq j}%
}\right) =\operatorname*{sh}\left( T\mid_{\leq j}\right) =\lambda$. In
other words, $\lambda=\nu$. Also, Proposition 3.2.1b \textbf{(b)} (applied to
$j-1$ instead of $j$) shows that $\left( T\leftarrow m\right) \mid_{\leq
j-1}=T\mid_{\leq j-1}$ (since $m>j-1$). Hence, $\mu=\operatorname*{sh}\left(
\underbrace{\left( T\leftarrow m\right) \mid_{\leq j-1}}_{=T\mid_{\leq j-1}%
}\right) =\operatorname*{sh}\left( T\mid_{\leq j-1}\right) =\kappa$, so
that $\kappa=\mu$.
Now, we have $\kappa=\mu\wedge\lambda=\nu$. Hence, $\left( \kappa
=\lambda\wedge\mu=\nu\text{ or }\kappa=\mu\wedge\lambda=\nu\right) $.
Thus, we know that $\lambda,\mu\in\left( \left\{ \kappa\right\} \cup
\kappa^{+}\right) \cap\left( \left\{ \nu\right\} \cup\nu^{-}\right) $ and
$\left( \kappa=\lambda\wedge\mu=\nu\text{ or }\kappa=\mu\wedge\lambda
=\nu\right) $. In other words, the configuration $\left(
\begin{array}
[c]{cc}%
\kappa & \lambda\\
\mu & \nu
\end{array}
\right) $ is of type $RS0$. Hence, (\ref{p17.proof-of-3.2.1.a2.cl}) is proven
in Subcase 1.2.
We have thus proven (\ref{p17.proof-of-3.2.1.a2.cl}) in each of the two
Subcases 1.1 and 1.2. Therefore, (\ref{p17.proof-of-3.2.1.a2.cl}) always holds
in Case 1.
Let us now consider Case 2. In this Case, the integer $j$ does not appear in
$T$. Thus, $T\mid_{\leq j-1}=T\mid_{\leq j}$. Hence, $\kappa
=\operatorname*{sh}\left( \underbrace{T\mid_{\leq j-1}}_{=T\mid_{\leq j}%
}\right) =\operatorname*{sh}\left( T\mid_{\leq j}\right) =\lambda$.
But the integer $j$ does not appear in $T\leftarrow m$ (since $j$ neither
appears in $T$, nor equals $m$). Thus, $\left( T\leftarrow m\right)
\mid_{\leq j-1}=\left( T\leftarrow m\right) \mid_{\leq j}$, so that
$\mu=\operatorname*{sh}\left( \underbrace{\left( T\leftarrow m\right)
\mid_{\leq j-1}}_{=\left( T\leftarrow m\right) \mid_{\leq j}}\right)
=\operatorname*{sh}\left( \left( T\leftarrow m\right) \mid_{\leq j}\right)
=\nu$.
Now, we have $\kappa=\lambda\wedge\mu=\nu$. Hence, $\left( \kappa
=\lambda\wedge\mu=\nu\text{ or }\kappa=\mu\wedge\lambda=\nu\right) $.
Thus, we know that $\lambda,\mu\in\left( \left\{ \kappa\right\} \cup
\kappa^{+}\right) \cap\left( \left\{ \nu\right\} \cup\nu^{-}\right) $ and
$\left( \kappa=\lambda\wedge\mu=\nu\text{ or }\kappa=\mu\wedge\lambda
=\nu\right) $. In other words, the configuration $\left(
\begin{array}
[c]{cc}%
\kappa & \lambda\\
\mu & \nu
\end{array}
\right) $ is of type $RS0$. Hence, (\ref{p17.proof-of-3.2.1.a2.cl}) is proven
in Case 2.
We have thus proven (\ref{p17.proof-of-3.2.1.a2.cl}) in each of the two Cases
1 and 2. Thus, (\ref{p17.proof-of-3.2.1.a2.cl}) is proven.
Now, recall that $\kappa=\operatorname*{sh}\left( T\mid_{\leq j-1}\right) $,
$\lambda=\operatorname*{sh}\left( T\mid_{\leq j}\right) $, $\mu
=\operatorname*{sh}\left( \left( T\leftarrow m\right) \mid_{\leq
j-1}\right) $ and $\nu=\operatorname*{sh}\left( \left( T\leftarrow
m\right) \mid_{\leq j}\right) $. Hence, (\ref{p17.proof-of-3.2.1.a2.cl})
rewrites as follows: The configuration $\left(
\begin{array}
[c]{cc}%
\operatorname*{sh}\left( T\mid_{\leq j-1}\right) & \operatorname*{sh}\left(
T\mid_{\leq j}\right) \\
\operatorname*{sh}\left( \left( T\leftarrow m\right) \mid_{\leq j-1}\right)
& \operatorname*{sh}\left( \left( T\leftarrow m\right) \mid_{\leq
j}\right)
\end{array}
\right) $ is of type $RS0$, $RS2$, or $RS3$. This proves Statement P3.2.1.a.2.
Now, we have proven both Statement P3.2.1.a.1 and Statement P3.2.1.a.2.
Combining these two statements, we obtain Proposition 3.2.1a.
\end{details}
\rule{17cm}{0.3mm}
\textbf{Page 17, proof of Theorem 3.2.1:} Replace \textquotedblleft the final
case\textquotedblright\ by \textquotedblleft the case $RS0$\textquotedblright.
\rule{17cm}{0.3mm}
\begin{details}
\textbf{Page 17, proof of Theorem 3.2.1:} Here is the reason why
\textquotedblleft$\lambda^{\prime\left[ i,j\right] }=\lambda^{\left[
j,i\right] }$\textquotedblright\ holds:
We can prove $\lambda^{\prime\left[ i,j\right] }=\lambda^{\left[
j,i\right] }$ (for every $\left( i,j\right) \in\left\{ 0,1,\ldots
,n\right\} ^{2}$) by strong induction over $i+j$:
\begin{itemize}
\item In the case when one of $i$ and $j$ is $0$, it follows from
$\lambda^{\prime\left[ i,j\right] }=\left( 0\right) =\lambda^{\left[
j,i\right] }$.
\item In the other case, it can be verified as follows: Neither $i$ nor $j$ is
$0$; hence, $i$ and $j$ both belong to $\left\{ 1,2,\ldots,n\right\} $.
Thus, the growth-cell claim\footnote{This is the claim that \textquotedblleft
Any configuration $\left(
\begin{array}
[c]{cc}%
\lambda^{\left[ i-1,j-1\right] } & \lambda^{\left[ i-1,j\right] }\\
\lambda^{\left[ i,j-1\right] } & \lambda^{\left[ i,j\right] }%
\end{array}
\right) $ is of type $RS1$ if $j=\sigma_{i}$, and otherwise of type $RS0$,
$RS2$, or $RS3$\textquotedblright\ made in the proof of Theorem 3.2.1}
(applied to $j$ and $i$ instead of $i$ and $j$) shows that the configuration
$\left(
\begin{array}
[c]{cc}%
\lambda^{\left[ j-1,i-1\right] } & \lambda^{\left[ j-1,i\right] }\\
\lambda^{\left[ j,i-1\right] } & \lambda^{\left[ j,i\right] }%
\end{array}
\right) $ is of type $RS1$ if $i=\sigma_{j}$, and otherwise of type $RS0$,
$RS2$, or $RS3$. In other words,
\begin{equation}
\left(
\begin{array}
[c]{c}%
\text{the configuration }\left(
\begin{array}
[c]{cc}%
\lambda^{\left[ j-1,i-1\right] } & \lambda^{\left[ j,i-1\right] }\\
\lambda^{\left[ j-1,i\right] } & \lambda^{\left[ j,i\right] }%
\end{array}
\right) \text{ is of type }RS1\\
\text{if }i=\sigma_{j}\text{, and otherwise of type }RS0\text{, }RS2\text{, or
}RS3
\end{array}
\right) \label{p17.proof-of-3.2.1.alm1}%
\end{equation}
(because the statement that a configuration $\left(
\begin{array}
[c]{cc}%
\kappa & \lambda\\
\mu & \nu
\end{array}
\right) $ has any of the types $RS1$, $RS0$, $RS2$, and $RS3$ is symmetric in
$\lambda$ and $\mu$). But by the inductive hypothesis, we have $\lambda
^{\prime\left[ i-1,j\right] }=\lambda^{\left[ j,i-1\right] }$,
$\lambda^{\prime\left[ i,j-1\right] }=\lambda^{\left[ j-1,i\right] }$ and
$\lambda^{\prime\left[ i-1,j-1\right] }=\lambda^{\left[ j-1,i-1\right] }$.
On the other hand, the growth-cell claim (applied to $\sigma^{-1}$ and
$\lambda^{\prime\left[ u,v\right] }$ instead of $\sigma$ and $\lambda
^{\left[ u,v\right] }$) shows that the configuration $\left(
\begin{array}
[c]{cc}%
\lambda^{\prime\left[ i-1,j-1\right] } & \lambda^{\prime\left[
i-1,j\right] }\\
\lambda^{\prime\left[ i,j-1\right] } & \lambda^{\prime\left[ i,j\right] }%
\end{array}
\right) $ is of type $RS1$ if $j=\left( \sigma^{-1}\right) _{i}$, and
otherwise of type $RS0$, $RS2$, or $RS3$. In other words, the configuration
$\left(
\begin{array}
[c]{cc}%
\lambda^{\prime\left[ i-1,j-1\right] } & \lambda^{\prime\left[
i-1,j\right] }\\
\lambda^{\prime\left[ i,j-1\right] } & \lambda^{\prime\left[ i,j\right] }%
\end{array}
\right) $ is of type $RS1$ if $i=\sigma_{j}$, and otherwise of type $RS0$,
$RS2$, or $RS3$ (because $j=\left( \sigma^{-1}\right) _{i}$ is equivalent to
$i=\sigma_{j}$). Since $\lambda^{\prime\left[ i-1,j\right] }=\lambda
^{\left[ j,i-1\right] }$, $\lambda^{\prime\left[ i,j-1\right] }%
=\lambda^{\left[ j-1,i\right] }$ and $\lambda^{\prime\left[ i-1,j-1\right]
}=\lambda^{\left[ j-1,i-1\right] }$, this rewrites as follows:
\begin{equation}
\left(
\begin{array}
[c]{c}%
\text{the configuration }\left(
\begin{array}
[c]{cc}%
\lambda^{\left[ j-1,i-1\right] } & \lambda^{\left[ j,i-1\right] }\\
\lambda^{\left[ j-1,i\right] } & \lambda^{\prime\left[ i,j\right] }%
\end{array}
\right) \text{ is of type }RS1\\
\text{if }i=\sigma_{j}\text{, and otherwise of type }RS0\text{, }RS2\text{, or
}RS3
\end{array}
\right) . \label{p17.proof-of-3.2.1.alm2}%
\end{equation}
Comparing this with (\ref{p17.proof-of-3.2.1.alm1}), we conclude that
$\lambda^{\prime\left[ i,j\right] }=\lambda^{\left[ j,i\right] }$, because
of the observation that if a configuration $\left(
\begin{array}
[c]{cc}%
\kappa & \lambda\\
\mu & \nu
\end{array}
\right) $ has any of the types $RS1$, $RS2$, $RS3$, and $RS0$, then $\nu$ is
uniquely determined by $\kappa$, $\lambda$ and $\mu$, together with the
information of whether the type is $RS1$.
\end{itemize}
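The identity $\lambda^{\prime\left[ i,j\right] }=\lambda^{\left[ j,i\right]
}$ can also be confirmed experimentally. The following Python sketch (mine,
not the paper's; the list-of-rows encoding of tableaux and the function names
are ad-hoc conventions) computes the growth array $\lambda^{\left[
i,j\right] }=\operatorname*{sh}\left( P_{i}\mid_{\leq j}\right) $ of a
permutation directly from Schensted insertion and compares it with the growth
array of the inverse permutation:

```python
# Empirical check (my own sketch, not from the paper) of the identity
# lambda'^{[i,j]} = lambda^{[j,i]} discussed above, for one permutation.

def insert(tableau, m):
    """Schensted row insertion into a list-of-rows tableau."""
    rows = [row[:] for row in tableau]
    for row in rows:
        bigger = [k for k, e in enumerate(row) if e > m]
        if not bigger:
            row.append(m)
            return rows
        row[bigger[0]], m = m, row[bigger[0]]
    rows.append([m])
    return rows

def growth_array(perm):
    """The shapes lambda^{[i,j]} = sh(P_i |_{<=j}) for 0 <= i, j <= n,
    where P_i is the insertion tableau of perm[0..i-1]."""
    n = len(perm)
    arr = [[() for _ in range(n + 1)] for _ in range(n + 1)]
    tab = []
    for i in range(1, n + 1):
        tab = insert(tab, perm[i - 1])
        for j in range(n + 1):
            cut = [[e for e in row if e <= j] for row in tab]
            arr[i][j] = tuple(len(r) for r in cut if r)
    return arr

sigma = [3, 1, 4, 2]                  # one-line notation: sigma_i = sigma[i-1]
inverse = [0] * len(sigma)
for i, s in enumerate(sigma, 1):
    inverse[s - 1] = i                # inverse[j-1] = sigma^{-1}_j

A, B = growth_array(sigma), growth_array(inverse)
n = len(sigma)
assert all(B[i][j] == A[j][i] for i in range(n + 1) for j in range(n + 1))
```
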
\end{details}
\rule{17cm}{0.3mm}
\textbf{Page 17:} You write: \textquotedblleft For a rectangular region we get
the algorithm of [SaSt].\textquotedblright. It would be better to replace
\textquotedblleft\lbrack SaSt]\textquotedblright\ by \textquotedblleft\lbrack
SaSt, Theorem 5.1]\textquotedblright\ here, since only the case of partial
permutations (not matrix words) is obtained directly from your growth approach.
\rule{17cm}{0.3mm}
\textbf{Page 17:} You write: \textquotedblleft The moves occurring in the
original formulation of the insertion procedure are related to configurations
of types $RS1$ and $RS2$; for a description in terms of chains of partitions,
type $RS3$ has to be considered as well\textquotedblright. I have not been
able to find a correct way to make this statement precise. In my opinion,
$RS3$ \textquotedblleft belongs\textquotedblright\ to the Insertion procedure
no less than $RS1$ and $RS2$ do, and appears when a bumping-out is happening
\textquotedblleft deep inside the tableau\textquotedblright\ (i.e., far away
from the corners). What was your intended interpretation?
\rule{17cm}{0.3mm}
\textbf{Page 18, \S 3.2:} Replace \textquotedblleft then its own
number\textquotedblright\ by \textquotedblleft than its own
number\textquotedblright.
\rule{17cm}{0.3mm}
\textbf{Page 18, \S 3.2:} Replace \textquotedblleft the poset associated
to\textquotedblright\ by \textquotedblleft the partition associated
to\textquotedblright.
\rule{17cm}{0.3mm}
\textbf{Page 19, \S 4.1:} Replace \textquotedblleft\lbrack Sch\"{u}1] not
only\textquotedblright\ by \textquotedblleft\lbrack Sch\"{u}1] is not
only\textquotedblright.
\rule{17cm}{0.3mm}
\textbf{Page 20:} Replace \textquotedblleft configurations
type\textquotedblright\ by \textquotedblleft configurations of
type\textquotedblright.
\rule{17cm}{0.3mm}
\textbf{Page 20:} Replace \textquotedblleft$RS^{t}2$ and $RS^{t}3$ are
identical to $RS2$ and $RS3$, respectively\textquotedblright\ by
\textquotedblleft$RS^{t}0$ and $RS^{t}3$ are identical to $RS0$ and $RS3$,
respectively\textquotedblright.
\rule{17cm}{0.3mm}
\textbf{Page 20, proof of Theorem 4.1.1:} You write: \textquotedblleft For
$i+k=n+1$ we either have $j=\sigma_{i}=\sigma_{n+1-k}$, in which case
$\lambda^{\left[ i,j-1,k\right] }=\left( 0\right) $ and $\lambda^{\left[
i,j,k\right] }=\left( 1\right) $, or $j\neq\sigma_{i}=\sigma_{n+1-k}$, in
which case $\lambda^{\left[ i,j-1,k\right] }=\lambda^{\left[ i,j,k\right]
}$\textquotedblright. This raises the question of how to construct such a
family of $\lambda^{\left[ i,j,k\right] }$. The answer is simple, but (I
think) is worth writing out explicitly: We set $\lambda^{\left[ i,j,k\right]
}=%
\begin{cases}
\left( 0\right) , & \text{if }j<\sigma_{i};\\
\left( 1\right) , & \text{if }j\geq\sigma_{i}%
\end{cases}
$ for every $\left( i,j,k\right) $ satisfying $i+k=n+1$.
\rule{17cm}{0.3mm}
\textbf{Page 20, proof of Theorem 4.1.1:} Replace \textquotedblleft such
such\textquotedblright\ by \textquotedblleft such\textquotedblright.
\rule{17cm}{0.3mm}
\textbf{Page 20, proof of Theorem 4.1.1:} Replace \textquotedblleft by
indiction\textquotedblright\ by \textquotedblleft by
induction\textquotedblright, unless you wish to imply that the jury is still
out on the claim about the configurations.
\rule{17cm}{0.3mm}
\textbf{Page 25:} Replace \textquotedblleft the the
different\textquotedblright\ by \textquotedblleft to the
different\textquotedblright.
\rule{17cm}{0.3mm}
\textbf{Page 27, \S 5:} Replace \textquotedblleft to to\textquotedblright\ by
\textquotedblleft to\textquotedblright.
\rule{17cm}{0.3mm}
\textbf{Page 27, Definition 5.1:} Remove the \textquotedblleft
a\textquotedblright\ in \textquotedblleft be a skew tableaux\textquotedblright.
\rule{17cm}{0.3mm}
\textbf{Page 28:} I think it is worth pointing out that the notion of
\textquotedblleft one-step glissement\textquotedblright\ is equivalent to what
is commonly called a \textquotedblleft(jeu-de-taquin) slide\textquotedblright%
\ (e.g., in Fulton's \textquotedblleft Young tableaux\textquotedblright, with
the only difference that Fulton carries around the partitions $\lambda$ and
$\mu$ defining the skew shape $\lambda/\mu$ and not just the set $Y\left(
\lambda\right) \setminus Y\left( \mu\right) $). (That said, the proof of
this equivalence is not completely obvious.)
\rule{17cm}{0.3mm}
\textbf{Page 28, Lemma 5.3:} Typo ``their their''.
\rule{17cm}{0.3mm}
\textbf{Page 28, proof of Lemma 5.3:} In \textquotedblleft From the definition
of $R$\textquotedblright, replace \textquotedblleft$R$\textquotedblright\ by
\textquotedblleft$RS^{-1}$\textquotedblright.
\rule{17cm}{0.3mm}
\textbf{Page 29, proof of Theorem 5.4:} \textquotedblleft Then
existence\textquotedblright\ should be \textquotedblleft The
existence\textquotedblright.
\rule{17cm}{0.3mm}
\textbf{Page 29:} \textquotedblleft the square $s$ it the
corner\textquotedblright\ should be \textquotedblleft the square $s$ is the
corner\textquotedblright.
\rule{17cm}{0.3mm}
\textbf{Page 29:} Typo: ``be be''.
\rule{17cm}{0.3mm}
\textbf{Page 31, reference [Garf]:} \textquotedblleft
Algebra's\textquotedblright\ should be \textquotedblleft
Algebras\textquotedblright.
\rule{17cm}{0.3mm}
\end{document}