\documentclass[numbers=enddot,12pt,final,onecolumn,notitlepage]{scrartcl}%
\usepackage[headsepline,footsepline,manualmark]{scrlayer-scrpage}
\usepackage[all,cmtip]{xy}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{framed}
\usepackage{amsmath}
\usepackage{comment}
\usepackage{color}
\usepackage{hyperref}
\usepackage[sc]{mathpazo}
\usepackage[T1]{fontenc}
\usepackage{amsthm}
%TCIDATA{OutputFilter=latex2.dll}
%TCIDATA{Version=5.50.0.2960}
%TCIDATA{LastRevised=Friday, June 12, 2015 22:40:01}
%TCIDATA{SuppressPackageManagement}
%TCIDATA{}
%TCIDATA{}
%TCIDATA{BibliographyScheme=Manual}
%BeginMSIPreambleData
\providecommand{\U}[1]{\protect\rule{.1in}{.1in}}
%EndMSIPreambleData
\theoremstyle{definition}
\newtheorem{theo}{Theorem}[section]
\newenvironment{theorem}[1][]
{\begin{theo}[#1]\begin{leftbar}}
{\end{leftbar}\end{theo}}
\newtheorem{lem}[theo]{Lemma}
\newenvironment{lemma}[1][]
{\begin{lem}[#1]\begin{leftbar}}
{\end{leftbar}\end{lem}}
\newtheorem{prop}[theo]{Proposition}
\newenvironment{proposition}[1][]
{\begin{prop}[#1]\begin{leftbar}}
{\end{leftbar}\end{prop}}
\newtheorem{defi}[theo]{Definition}
\newenvironment{definition}[1][]
{\begin{defi}[#1]\begin{leftbar}}
{\end{leftbar}\end{defi}}
\newtheorem{remk}[theo]{Remark}
\newenvironment{remark}[1][]
{\begin{remk}[#1]\begin{leftbar}}
{\end{leftbar}\end{remk}}
\newtheorem{coro}[theo]{Corollary}
\newenvironment{corollary}[1][]
{\begin{coro}[#1]\begin{leftbar}}
{\end{leftbar}\end{coro}}
\newtheorem{conv}[theo]{Convention}
\newenvironment{condition}[1][]
{\begin{conv}[#1]\begin{leftbar}}
{\end{leftbar}\end{conv}}
\newtheorem{quest}[theo]{Question}
\newenvironment{algorithm}[1][]
{\begin{quest}[#1]\begin{leftbar}}
{\end{leftbar}\end{quest}}
\newtheorem{warn}[theo]{Warning}
\newenvironment{conclusion}[1][]
{\begin{warn}[#1]\begin{leftbar}}
{\end{leftbar}\end{warn}}
\newtheorem{conj}[theo]{Conjecture}
\newenvironment{conjecture}[1][]
{\begin{conj}[#1]\begin{leftbar}}
{\end{leftbar}\end{conj}}
\newtheorem{exmp}[theo]{Example}
\newenvironment{example}[1][]
{\begin{exmp}[#1]\begin{leftbar}}
{\end{leftbar}\end{exmp}}
\newenvironment{proof2}[1][]
{\begin{proof}[#1]}{\end{proof}}
\newenvironment{verlong}{}{}
\newenvironment{vershort}{}{}
\newenvironment{noncompile}{}{}
\excludecomment{verlong}
\includecomment{vershort}
\excludecomment{noncompile}
\newcommand{\kk}{\mathbf{k}}
\newcommand{\id}{\operatorname{id}}
\newcommand{\ev}{\operatorname{ev}}
\newcommand{\Comp}{\operatorname{Comp}}
\newcommand{\bk}{\mathbf{k}}
\newcommand{\Nplus}{\mathbb{N}_{+}}
\newcommand{\NN}{\mathbb{N}}
\let\sumnonlimits\sum
\let\prodnonlimits\prod
\renewcommand{\sum}{\sumnonlimits\limits}
\renewcommand{\prod}{\prodnonlimits\limits}
\setlength\textheight{22.5cm}
\setlength\textwidth{15cm}
\ihead{Errata to ``An Introduction to Hyperplane Arrangements''}
\ohead{12 June 2015}
\begin{document}
\begin{center}
\textbf{An Introduction to Hyperplane Arrangements}

\textit{Richard P. Stanley}

IAS/Park City Mathematics Series

Volume 14, 2004

version of February 26, 2006

\url{http://www.cis.upenn.edu/~cis610/sp06stanley.pdf}

---------------------------------------------------------------------------------------

\textbf{List of additional errata and questions - I}

(version 2)

Darij Grinberg, 11 June 2015

\bigskip
\end{center}
Page numbers refer to the page numbers at the top of the pages, not to the
page count of the PDF file.
\begin{itemize}
\item \textbf{page 2:} You speak of the \textquotedblleft usual dot
product\textquotedblright, but there is no \textquotedblleft usual dot
product\textquotedblright\ on a general finite-dimensional $K$-vector space.
If you want to work in this generality, you should use the dual space. (Or
else just identify $V$ with $K^{n}$ for this definition, or require the choice
of a nondegenerate symmetric bilinear form $V\times V\rightarrow K$ which is
to serve as a dot product. I personally find it easier to invoke the dual
space, because as soon as one introduces additional structures like a basis or
a bilinear form, it starts clouding further definitions. Besides, you use
linear forms in the next paragraph, even though in the next paragraph you
actually use a basis!)
Similarly, in the next paragraph, \textquotedblleft$x=\left( x_{1}%
,\ldots,x_{n}\right) $\textquotedblright\ assumes an isomorphism
$V\rightarrow K^{n}$ to be given. And after that, the notion of
\textquotedblleft normals\textquotedblright\ assumes a dot product.
\item \textbf{pages 2-3:} You write: \textquotedblleft Let $Y$ be a
complementary space in $K^{n}$ to the subspace $X$ spanned by the normals to
hyperplanes in $\mathcal{A}$. Define%
\[
W=\left\{ v\in V\ :\ v\cdot y=0\ \forall y\in Y\right\} .
\]
If $\operatorname*{char}\left( K\right) =0$ then we can simply take
$W=X$.\textquotedblright
I understand what you mean here, but it is not correctly explained. First,
even if $\operatorname*{char}\left( K\right) =0$, we cannot take $W=X$
unless $Y$ is \textit{the orthogonal complement} of $X$ (rather than just an
arbitrary complementary space in $K^{n}$ to the subspace $X$); I think you
should say \textquotedblleft we can simply take $Y=X^{\perp}$ and $W=X$%
\textquotedblright\ instead of \textquotedblleft we can simply take
$W=X$\textquotedblright, because otherwise your wording suggests that $Y$ can
be chosen arbitrarily here. Second, $\operatorname*{char}\left( K\right) =0$
does not guarantee that the orthogonal complement to $X$ is indeed a
complementary subspace to $X$ (for example, the orthogonal complement to the
subspace $\left\langle \left( 1,i\right) \right\rangle $ of $\mathbb{C}^{2}$
is \textit{not} a complementary subspace to $\left\langle \left( 1,i\right)
\right\rangle $, even though $\operatorname*{char}\mathbb{C}=0$). So to make
sure that we can actually take $W=X$, we need to require that $K$ is a
formally real field (which is a stronger assertion than $\operatorname*{char}%
\left( K\right) =0$).
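
In more detail, the computation behind this $\mathbb{C}^{2}$ example is the
following: The vector $\left( 1,i\right) $ satisfies
\[
\left( 1,i\right) \cdot\left( 1,i\right) =1+i^{2}=0,
\]
so that $\left\langle \left( 1,i\right) \right\rangle \subseteq\left\langle
\left( 1,i\right) \right\rangle ^{\perp}$; since $\left\langle \left(
1,i\right) \right\rangle ^{\perp}$ is $1$-dimensional, this yields
$\left\langle \left( 1,i\right) \right\rangle ^{\perp}=\left\langle \left(
1,i\right) \right\rangle $, and thus $\left\langle \left( 1,i\right)
\right\rangle +\left\langle \left( 1,i\right) \right\rangle ^{\perp}%
\neq\mathbb{C}^{2}$.
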
Once again, the whole situation simplifies if you use the dual space. Now, the
\textquotedblleft normals to hyperplanes in $\mathcal{A}$\textquotedblright%
\ become the \textquotedblleft linear forms defining the hyperplanes in
$\mathcal{A}$\textquotedblright, and they form a subspace $\widetilde{X}$ of
the dual space $V^{\ast}$. The orthogonal space of this $\widetilde{X}$ (= the
joint kernel of the linear forms defining the hyperplanes in $\mathcal{A}$) is
a subspace of $V$; we call it $\widetilde{Y}$. Then, the images of the
hyperplanes in $\mathcal{A}$ under the projection map $V\rightarrow
V/\widetilde{Y}$ are hyperplanes in $V/\widetilde{Y}$ (this is easy to
prove\footnote{All that needs to be checked is that every hyperplane
$H\in\mathcal{A}$ satisfies $\widetilde{Y}\subseteq H^{\prime}$, where
$H^{\prime}$ is the translate of $H$ that passes through the origin. The proof
is easy: From $\left( H^{\prime}\right) ^{\perp}\subseteq\widetilde{X}$, we
obtain $\widetilde{X}^{\perp}\subseteq\left( \left( H^{\prime}\right)
^{\perp}\right) ^{\perp}=H^{\prime}$, thus $\widetilde{Y}=\widetilde{X}%
^{\perp}\subseteq H^{\prime}$. Here, the $\perp$ sign is defined with reference to the
canonical pairing $V^{\ast}\times V\rightarrow K$, not to any non-canonical
bilinear form on $V$.} -- a lot easier than checking your equality (1)). The
arrangement they form in $V/\widetilde{Y}$ is isomorphic to your
$\mathcal{A}_{W}$, but defined canonically (thus eliminating the necessity of
checking that your $\mathcal{A}_{W}$ is independent of the choice of $W$ up to isomorphism).
\item \textbf{page 3:} When you write \textquotedblleft$H^{\prime}%
\in\mathcal{A}_{W}$ if and only if $H^{\prime}\oplus W^{\perp}\in\mathcal{A}%
$\textquotedblright, it would be good to point out that $H^{\prime}$ is
supposed to be a subspace of $W$. I would also replace the $\oplus$ sign by a
$+$ sign, since $\oplus$ has not been defined for \textit{affine} subspaces
(and its standard meaning that involves the intersection being $0$ is not
correct for affine subspaces).
\item \textbf{page 3:} You write: \textquotedblleft in characteristic $p$ this
type of reasoning fails\textquotedblright. Yes, but not only in characteristic
$p$. Also for $K=\mathbb{C}$, as I explained above.
\item \textbf{page 3:} You write: \textquotedblleft then $R\in\mathcal{R}%
\left( \mathcal{A}\right) $ if and only if $R\cap W\in\mathcal{R}\left(
\mathcal{A}_{W}\right) $\textquotedblright. This makes little sense as
stated ($R$ cannot range over arbitrary subsets of $\mathbb{R}^{n}$; what is
it supposed to be?). I assume
you mean that the map%
\begin{align*}
\mathcal{R}\left( \mathcal{A}\right) & \rightarrow\mathcal{R}\left(
\mathcal{A}_{W}\right) ,\\
R & \mapsto R\cap W
\end{align*}
is well-defined and bijective, with inverse%
\begin{align*}
\mathcal{R}\left( \mathcal{A}_{W}\right) & \rightarrow\mathcal{R}\left(
\mathcal{A}\right) ,\\
R & \mapsto R+W^{\perp}.
\end{align*}
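
Here is a small illustration of this bijection (my example, not from the
text): Let $\mathcal{A}$ consist of the single hyperplane $x_{1}=0$ in
$\mathbb{R}^{2}$. Then, $X=\left\langle e_{1}\right\rangle $, and we can take
$W=X$, so that $W^{\perp}=\left\langle e_{2}\right\rangle $. The two regions
of $\mathcal{A}$ are the open half-planes $\left\{ x_{1}>0\right\} $ and
$\left\{ x_{1}<0\right\} $; intersecting them with $W$ gives the two open
half-lines that are the regions of $\mathcal{A}_{W}$, and adding $W^{\perp}$
to these half-lines recovers the half-planes.
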
\item \textbf{page 4:} Trivial nitpick: In the definition of \textquotedblleft
general position\textquotedblright, the $H_{1},\ldots,H_{p}$ should be assumed
distinct in the formulas.
\item \textbf{page 4, second line of Example 1.2:} \textquotedblleft$L$
line\textquotedblright\ should be \textquotedblleft line $L$\textquotedblright.
\item \textbf{page 4, second line of Example 1.2:} \textquotedblleft%
$\mathcal{A}_{K}$\textquotedblright\ should be \textquotedblleft%
$\mathcal{A}_{k}$\textquotedblright.
\item \textbf{page 8:} You define saturated chains, but you do not define
maximal chains. The fact that you use the word \textquotedblleft
maximal\textquotedblright\ in the next sentence (\textquotedblleft if every
maximal chain of $P$ has length $n$\textquotedblright) creates the incorrect
impression that \textquotedblleft maximal\textquotedblright\ is a synonym for
\textquotedblleft saturated\textquotedblright.
\item \textbf{proof of Theorem 3.9:} Here is a detailed proof of Theorem 3.9:
Let $a>\widehat{0}$. In the M\"{o}bius algebra $A\left( L\right) $ (defined in
\S 2), we have $\sigma_{\widehat{0}}=\underbrace{\sum_{y\geq\widehat{0}}%
}_{=\sum_{y\in L}}\underbrace{\mu\left( \widehat{0},y\right) }_{=\mu\left(
y\right) }y=\sum_{y\in L}\mu\left( y\right) y=\sum_{x\in L}\mu\left(
x\right) x$ and thus%
\begin{align*}
\sum_{x\in L}\mu\left( x\right) \underbrace{x\vee a}_{=xa} & =\sum_{x\in
L}\mu\left( x\right) xa=\underbrace{\left( \sum_{x\in L}\mu\left(
x\right) x\right) }_{=\sigma_{\widehat{0}}}\underbrace{a}_{\substack{=\sum
_{y\geq a}\sigma_{y}\\\text{(by (9))}}}=\sigma_{\widehat{0}}\left(
\sum_{y\geq a}\sigma_{y}\right) \\
& =\sum_{y\geq a}\underbrace{\sigma_{\widehat{0}}\sigma_{y}}%
_{\substack{=\delta_{\widehat{0},y}\sigma_{\widehat{0}}\\\text{(by the second
sentence}\\\text{of Theorem 2.3)}}}=\sum_{y\geq a}\underbrace{\delta
_{\widehat{0},y}}_{\substack{=0\\\text{(since }y\geq a>\widehat{0}\\\text{and
thus }y\neq\widehat{0}\text{)}}}\sigma_{\widehat{0}}=0.
\end{align*}
Comparing the coefficients of $\widehat{1}$ on both sides of this equality yields
$\sum_{\substack{x\in L;\\x\vee a=\widehat{1}}}\mu\left( x\right) =0$.
Theorem 3.9 is proven.
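
As a quick sanity check of the statement of Theorem 3.9: if $L$ is the
lattice of subsets of $\left\{ 1,2\right\} $ and $a=\left\{ 1\right\} $,
then the elements $x$ satisfying $x\vee a=\widehat{1}$ are $\left\{
2\right\} $ and $\left\{ 1,2\right\} $, and indeed $\mu\left( \left\{
2\right\} \right) +\mu\left( \left\{ 1,2\right\} \right) =\left(
-1\right) +1=0$.
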
\item \textbf{page 38, proof of Theorem 3.10:} It took me a while to
understand why \textquotedblleft The sum on the right is
nonempty\textquotedblright. The simplest proof of this that I can find is the
following: Let $A$ be the set of all atoms of $M$. The map $\Psi$ defined by
(\ref{p36.Psi}) is injective (since we have found an inverse to it). Thus,
$\bigvee A\neq\bigvee\left( A\setminus\left\{ a\right\} \right) $. Hence,
$\bigvee\left( A\setminus\left\{ a\right\} \right) <\bigvee A$.
Semimodularity of $L$ easily shows that $\bigvee\left( A\setminus\left\{
a\right\} \right) \lessdot\bigvee A$. But $\bigvee A=\widehat{1}$ (which is
easy to prove using atomicity of $L$: the element $\widehat{1}$ must be a join
of \textit{some} set of atoms, and thus also of the set $A$ of all atoms), so
this becomes $\bigvee\left( A\setminus\left\{ a\right\} \right)
\lessdot\widehat{1}$. But $a\not \leq \bigvee\left( A\setminus\left\{ a\right\} \right) $
(since otherwise, we would have $\bigvee\left( A\setminus\left\{ a\right\}
\right) =\bigvee A$, contradicting $\bigvee A\neq\bigvee\left(
A\setminus\left\{ a\right\} \right) $). Thus, there exists an $x\in L$
satisfying $a\not \leq x\lessdot\widehat{1}$ (namely, $x=\bigvee\left( A\setminus
\left\{ a\right\} \right) $).
\item \textbf{page 38, (26):} Replace \textquotedblleft$M$\textquotedblright%
\ by \textquotedblleft$M_{\mathcal{A}}$\textquotedblright.
\item \textbf{page 41, \S 4.1:} Throughout this section, whenever you work
with $\operatorname*{BC}\left( M\right) $, you need to require $M$ to have
no loops. Otherwise, $\operatorname*{BC}\left( M\right) $ is the empty set
(since the empty set is a broken circuit), and thus not a simplicial complex.
\item \textbf{page 42, Lemma 4.4:} Replace \textquotedblleft$-c_{1}%
+c_{2}-c_{3}+\cdots$\textquotedblright\ by \textquotedblleft$c_{0}-c_{1}%
+c_{2}-c_{3}+\cdots$\textquotedblright, in order for the lemma to still be
valid when $P$ is the one-element poset.
\item \textbf{page 42, }\textsc{Note}\textbf{ after Lemma 4.4:} The equality
\textquotedblleft$\mu\left( \widehat{0},\widehat{1}\right) =\widetilde{\chi
}\left( \Delta\left( P^{\prime}\right) \right) $\textquotedblright%
\ requires that $P$ contain more than one element.
\item \textbf{page 43:} This is absolutely not an erratum, and not even a
suggestion, but I just felt like sharing another proof of Theorem 4.11 (though
the probability that it is not new to you is high).
\textit{Proof sketch for Theorem 4.11.} Let $K=\mathbb{Q}$. For every
$i\in\mathbb{P}$, define a function $\zeta_{i}:\operatorname*{Int}P\rightarrow
K$ by%
\[
\zeta_{i}\left[ x,y\right] =\left[ x\lessdot y\text{ and }\lambda\left(
x,y\right) =i\right] .
\]
Here, we are using the Iverson bracket notation (that is, $\left[
\mathcal{A}\right] =\left\{
\begin{array}
[c]{c}%
1,\text{ if }\mathcal{A}\text{ is true;}\\
0,\text{ if }\mathcal{A}\text{ is false}%
\end{array}
\right. $ for any logical statement $\mathcal{A}$).
Recall that the functions $\operatorname*{Int}P\rightarrow K$ form a
$K$-algebra $\mathcal{I}\left( P\right) $ (defined in \S 1.3, and called the
incidence algebra of $P$). So all of the $\zeta_{i}$ are elements of this
$K$-algebra $\mathcal{I}\left( P\right) $. These elements $\zeta_{i}$ are
locally nilpotent (since they send one-element intervals $\left[ x,x\right]
$ to $0$), and the infinite products $\cdots\left( 1-\zeta_{3}\right)
\left( 1-\zeta_{2}\right) \left( 1-\zeta_{1}\right) $ and $\left(
1-\zeta_{1}\right) ^{-1}\left( 1-\zeta_{2}\right) ^{-1}\left( 1-\zeta
_{3}\right) ^{-1}\cdots$ are well-defined. Here, $1$ stands for the unity of
the $K$-algebra $\mathcal{I}\left( P\right) $; this is its element $\delta$.
We have%
\begin{align*}
& \underbrace{\left( 1-\zeta_{1}\right) ^{-1}}_{=\sum_{m\in\mathbb{N}}%
\zeta_{1}^{m}}\underbrace{\left( 1-\zeta_{2}\right) ^{-1}}_{=\sum
_{m\in\mathbb{N}}\zeta_{2}^{m}}\underbrace{\left( 1-\zeta_{3}\right) ^{-1}%
}_{=\sum_{m\in\mathbb{N}}\zeta_{3}^{m}}\cdots\\
& =\left( \sum_{m\in\mathbb{N}}\zeta_{1}^{m}\right) \left( \sum
_{m\in\mathbb{N}}\zeta_{2}^{m}\right) \left( \sum_{m\in\mathbb{N}}\zeta
_{3}^{m}\right) \cdots\\
& =\sum_{\substack{\left( m_{1},m_{2},m_{3},\ldots\right) \text{ is
a}\\\text{weak composition}}}\zeta_{1}^{m_{1}}\zeta_{2}^{m_{2}}\zeta
_{3}^{m_{3}}\cdots=\sum_{1\leq a_{1}\leq a_{2}\leq\cdots\leq a_{k}}%
\zeta_{a_{1}}\zeta_{a_{2}}\cdots\zeta_{a_{k}}.
\end{align*}
Hence, every $x\leq y$ in $P$ satisfies%
\begin{align}
& \left( \left( 1-\zeta_{1}\right) ^{-1}\left( 1-\zeta_{2}\right)
^{-1}\left( 1-\zeta_{3}\right) ^{-1}\cdots\right) \left[ x,y\right]
\nonumber\\
& =\left( \sum_{1\leq a_{1}\leq a_{2}\leq\cdots\leq a_{k}}\zeta_{a_{1}}%
\zeta_{a_{2}}\cdots\zeta_{a_{k}}\right) \left[ x,y\right] \label{p43.5a}\\
& =\sum_{1\leq a_{1}\leq a_{2}\leq\cdots\leq a_{k}}\underbrace{\left(
\zeta_{a_{1}}\zeta_{a_{2}}\cdots\zeta_{a_{k}}\right) \left[ x,y\right]
}_{=\sum_{x=x_{0}\leq x_{1}\leq x_{2}\leq\cdots\leq x_{k}=y}\prod
_{i=1}^{k}\zeta_{a_{i}}\left[ x_{i-1},x_{i}\right] }\nonumber\\
& =\sum_{1\leq a_{1}\leq a_{2}\leq\cdots\leq a_{k}}\sum_{x=x_{0}\leq
x_{1}\leq x_{2}\leq\cdots\leq x_{k}=y}\prod_{i=1}^{k}\underbrace{\zeta
_{a_{i}}\left[ x_{i-1},x_{i}\right] }_{\substack{=\left[ x_{i-1}\lessdot
x_{i}\text{ and }\lambda\left( x_{i-1},x_{i}\right) =a_{i}\right]
\\\text{(by the definition of }\zeta_{a_{i}}\text{)}}}\nonumber\\
& =\sum_{1\leq a_{1}\leq a_{2}\leq\cdots\leq a_{k}}\sum_{x=x_{0}\leq
x_{1}\leq x_{2}\leq\cdots\leq x_{k}=y}\underbrace{\prod_{i=1}^{k}\left[
x_{i-1}\lessdot x_{i}\text{ and }\lambda\left( x_{i-1},x_{i}\right)
=a_{i}\right] }_{=\left[ x_{0}\lessdot x_{1}\lessdot\cdots\lessdot
x_{k}\text{ and each }i\text{ satisfies }\lambda\left( x_{i-1},x_{i}\right)
=a_{i}\right] }\nonumber\\
& =\sum_{1\leq a_{1}\leq a_{2}\leq\cdots\leq a_{k}}\underbrace{\sum
_{x=x_{0}\leq x_{1}\leq x_{2}\leq\cdots\leq x_{k}=y}\left[ x_{0}\lessdot
x_{1}\lessdot\cdots\lessdot x_{k}\text{ and each }i\text{ satisfies }%
\lambda\left( x_{i-1},x_{i}\right) =a_{i}\right] }_{=\sharp\left\{
x=x_{0}\lessdot x_{1}\lessdot\cdots\lessdot x_{k}=y\ :\ \text{each }i\text{
satisfies }\lambda\left( x_{i-1},x_{i}\right) =a_{i}\right\} }\nonumber\\
& =\sum_{1\leq a_{1}\leq a_{2}\leq\cdots\leq a_{k}}\sharp\left\{
x=x_{0}\lessdot x_{1}\lessdot\cdots\lessdot x_{k}=y\ :\ \text{each }i\text{
satisfies }\lambda\left( x_{i-1},x_{i}\right) =a_{i}\right\} \nonumber\\
& =\sharp\left\{ x=x_{0}\lessdot x_{1}\lessdot\cdots\lessdot x_{k}%
=y\ :\ \lambda\left( x_{0},x_{1}\right) \leq\lambda\left( x_{1}%
,x_{2}\right) \leq\cdots\leq\lambda\left( x_{k-1},x_{k}\right) \right\}
\label{p43.5b}\\
& =1\ \ \ \ \ \ \ \ \ \ \left( \text{by Definition 4.11}\right) \nonumber\\
& =\zeta\left[ x,y\right] .\nonumber
\end{align}
Hence, $\left( 1-\zeta_{1}\right) ^{-1}\left( 1-\zeta_{2}\right)
^{-1}\left( 1-\zeta_{3}\right) ^{-1}\cdots=\zeta$. Inverting both sides of
this equality, we obtain $\cdots\left( 1-\zeta_{3}\right) \left(
1-\zeta_{2}\right) \left( 1-\zeta_{1}\right) =\zeta^{-1}=\mu$. Thus,%
\[
\mu=\cdots\left( 1-\zeta_{3}\right) \left( 1-\zeta_{2}\right) \left(
1-\zeta_{1}\right) =\sum_{a_{1}>a_{2}>\cdots>a_{k}\geq1}\left( -1\right)
^{k}\zeta_{a_{1}}\zeta_{a_{2}}\cdots\zeta_{a_{k}}.
\]
Hence, every $x\leq y$ satisfies%
\begin{align*}
& \mu\left[ x,y\right] \\
& =\left( \sum_{a_{1}>a_{2}>\cdots>a_{k}\geq1}\left( -1\right) ^{k}%
\zeta_{a_{1}}\zeta_{a_{2}}\cdots\zeta_{a_{k}}\right) \left[ x,y\right] \\
& =\sum_{k\in\mathbb{N}}\left( -1\right) ^{k}\sharp\left\{ x=x_{0}\lessdot
x_{1}\lessdot\cdots\lessdot x_{k}=y\ :\ \lambda\left( x_{0},x_{1}\right)
>\lambda\left( x_{1},x_{2}\right) >\cdots>\lambda\left( x_{k-1}%
,x_{k}\right) \right\} \\
& \ \ \ \ \ \ \ \ \ \ \left( \text{similarly to how we got from
(\ref{p43.5a}) to (\ref{p43.5b})}\right) .
\end{align*}
The sum on the right hand side has only one (or, rather, at most one) nonzero
term, namely the one for $k=\operatorname*{rk}\left( x,y\right) $ (since $P$
is graded). Hence, this equality rewrites as%
\begin{align*}
& \mu\left[ x,y\right] \\
& =\left( -1\right) ^{\operatorname*{rk}\left( x,y\right) }\sharp\left\{
x=x_{0}\lessdot x_{1}\lessdot\cdots\lessdot x_{k}=y\ :\ \lambda\left(
x_{0},x_{1}\right) >\lambda\left( x_{1},x_{2}\right) >\cdots>\lambda\left(
x_{k-1},x_{k}\right) \right\} ,
\end{align*}
and we are done.
Now that I have written up this proof, I guess I understand why you didn't
want to do it...
\item \textbf{page 44, proof of Theorem 4.11:} In (27), replace
\textquotedblleft$n$\textquotedblright\ by \textquotedblleft$n-1$%
\textquotedblright.
\item \textbf{page 44, proof of Theorem 4.11:} Replace \textquotedblleft%
$\lambda\left( x_{i}\right) >\lambda\left( x_{i+1}\right) $%
\textquotedblright\ by \textquotedblleft$\lambda\left( x_{i-1},x_{i}\right)
>\lambda\left( x_{i},x_{i+1}\right) $\textquotedblright.
\item \textbf{page 44, proof of Theorem 4.11:} The case of $n=0$ should be
ruled out somewhere near the beginning of the proof, as there are arguments
that tacitly use $n>0$ throughout the proof. (Compare what I wrote about Lemma 4.4.)
\item \textbf{page 45, proof of Theorem 4.12:} Again, I think that assuming
that $M$ is simple is not worth the hassle. We already have the hypothesis
that $M$ has no loops (else, $\operatorname*{BC}\left( M\right) $ is not a
simplicial complex). Without the assumption that $M$ be simple, we can no
longer identify the atoms of $L\left( M\right) $ with the points of $M$. But
we still have a surjective map%
\begin{align*}
M & \rightarrow\left\{ \text{atoms of }L\left( M\right) \right\} ,\\
x_{i} & \mapsto\overline{\left\{ x_{i}\right\} },
\end{align*}
and your proof goes through if some of the $x_{i}$'s appearing in it are
replaced by the corresponding $\overline{\left\{ x_{i}\right\} }$'s.
\item \textbf{page 45, proof of Theorem 4.12:} In \textquotedblleft Figure 3
shows the lattice of flats of the matroid $M$ of Figure 1 with the edge
labeling (30)\textquotedblright, add \textquotedblleft the ordering
$\mathcal{O}$ and\textquotedblright\ after the \textquotedblleft
with\textquotedblright.
\item \textbf{page 45, proof of Theorem 4.12:} In \textquotedblleft Moreover,
there is a unique $y_{1}$ satisfying $x=x_{0}\lessdot y_{1}\leq y$ and
$\widetilde{\lambda}\left( x_{0},y_{1}\right) =j$, viz., $y_{1}=x_{0}\vee
x_{j}$. (Note that $y_{1}\gtrdot x_{0}$ by semimodularity.)\textquotedblright,
replace each of the four occurrences of \textquotedblleft$x_{0}%
$\textquotedblright\ by \textquotedblleft$x$\textquotedblright. Then, define
$y_{0}$ (not $x_{0}$) to mean $x$ (this notation is used in the next sentence).
\item \textbf{page 46, proof of Theorem 4.12:} In \textquotedblleft%
$\widetilde{\lambda}\left( y_{0},y_{1}\right) =j>\widetilde{\lambda}\left(
y_{1},y_{2}\right) $\textquotedblright, replace the \textquotedblleft%
$>$\textquotedblright\ sign by a \textquotedblleft$\geq$\textquotedblright\ sign.
\item \textbf{page 46, proof of Theorem 4.12:} In Claim 2, replace both
appearances of \textquotedblleft$\lambda\left( C\right) $\textquotedblright%
\ by \textquotedblleft$\widetilde{\lambda}\left( C\right) $%
\textquotedblright.
Also, it would be good to define what $\widetilde{\lambda}\left( C\right) $
means, and explain the abuse of notation. As far as I understand, you define
$\widetilde{\lambda}\left( C\right) $ as follows: If $C$ is a chain
$\widehat{0}=y_{0}\lessdot y_{1}\lessdot\cdots\lessdot y_{k}$, then you define
$\widetilde{\lambda}\left( C\right) $ to be the sequence $\left(
\widetilde{\lambda}\left( y_{0},y_{1}\right) ,\widetilde{\lambda}\left(
y_{1},y_{2}\right) ,\ldots,\widetilde{\lambda}\left( y_{k-1},y_{k}\right)
\right) $. Sometimes you denote the set of the entries of this sequence
(rather than this sequence itself) as $\widetilde{\lambda}\left( C\right) $.
Moreover, you identify this set with the set $\left\{ x_{\widetilde{\lambda
}\left( y_{0},y_{1}\right) },x_{\widetilde{\lambda}\left( y_{1}%
,y_{2}\right) },\ldots,x_{\widetilde{\lambda}\left( y_{k-1},y_{k}\right)
}\right\} \subseteq S$.
\item \textbf{page 46, proof of Theorem 4.12:} In Claim 2, replace
\textquotedblleft increasing chain\textquotedblright\ by \textquotedblleft
strictly increasing chain\textquotedblright.
\item \textbf{page 46, proof of Theorem 4.12:} In the proof of Claim 2,
replace \textquotedblleft$\lambda\left( C\right) $\textquotedblright\ by
\textquotedblleft$\widetilde{\lambda}\left( C\right) $\textquotedblright%
\ (in \textquotedblleft To prove the distinctness of the labels $\lambda
\left( C\right) $\textquotedblright).
\item \textbf{page 46, proof of Theorem 4.12:} In the proof of Claim 2,
replace \textquotedblleft$\widehat{0}:=y_{0}\lessdot y_{1}\lessdot
\cdots\lessdot y_{k}$\textquotedblright\ by \textquotedblleft$\widehat{0}%
=y_{0}\lessdot y_{1}\lessdot\cdots\lessdot y_{k}$\textquotedblright\ (no
colon, since you are not defining anything).
\item \textbf{page 46, proof of Theorem 4.12:} In the proof of Claim 2, you
write: \textquotedblleft Note that $C$ is saturated by
semimodularity\textquotedblright. This is only half of the story, because it
has to be checked that no $y_{i}$ equals $y_{i-1}$. This latter statement
follows from%
\begin{align*}
\operatorname*{rk}\left( y_{i}\right) & =\operatorname*{rk}\left(
\overline{\left\{ x_{a_{1}}\right\} }\vee\overline{\left\{ x_{a_{2}%
}\right\} }\vee\cdots\vee\overline{\left\{ x_{a_{i}}\right\} }\right)
=\operatorname*{rk}\overline{\left\{ x_{a_{1}},x_{a_{2}},\ldots,x_{a_{i}%
}\right\} }\\
& =\operatorname*{rk}\left\{ x_{a_{1}},x_{a_{2}},\ldots,x_{a_{i}}\right\}
=i\\
& \ \ \ \ \ \ \ \ \ \ \left(
\begin{array}
[c]{c}%
\text{since }\left\{ x_{a_{1}},x_{a_{2}},\ldots,x_{a_{i}}\right\} \text{ is
a subset of the independent set }T\text{,}\\
\text{and thus itself independent}%
\end{array}
\right) ,
\end{align*}
where we are tacitly using that the lattice $L\left( M\right) $ is graded by
the rank of flats in the matroid $M$.
\item \textbf{page 46, proof of Theorem 4.12:} In the proof of Claim 2, you
write: \textquotedblleft Thus%
\[
\operatorname*{rk}\left( T\right) =\operatorname*{rk}\left( T\cup\left\{
x_{j}\right\} \right) =i.
\]
Since $T$ is independent, $T\cup\left\{ x_{j}\right\} $ contains a circuit
$Q$ satisfying $x_{j}\in Q$, so $T$ contains a broken
circuit.\textquotedblright\ This is wrong in two places: first,
$\operatorname*{rk}\left( T\right) $ is not $i$, and second, $x_{j}$ might
not be larger than $\max T$. Let me suggest the following corrected argument:
\textquotedblleft Let $T_{i}$ be the subset $\left\{ x_{a_{1}},\ldots
,x_{a_{i}}\right\} $ of $T$; then, $T_{i}$ is independent (since $T$ is
independent). Moreover, $y_{i}=\bigvee_{t\in T_{i}}\overline{\left\{
t\right\} }=\overline{T_{i}}$. Hence, from $y_{i-1}\vee\overline{\left\{
x_{j}\right\} }=y_{i}$, we obtain $\overline{\left\{ x_{j}\right\} }\leq
y_{i}$, so $\overline{\left\{ x_{j}\right\} }\subseteq y_{i}=\overline
{T_{i}}$. Thus, the set $T_{i}\cup\left\{ x_{j}\right\} $ is dependent, and
thus contains a circuit $Q$ satisfying $x_{j}\in Q$ (since $T_{i}$ is
independent). Therefore, $T_{i}$ contains a broken circuit (namely,
$Q\setminus\left\{ x_{j}\right\} $, since $j>a_{i}>a_{i-1}>\cdots>a_{1}$).
Thus, $T$ contains a broken circuit (since $T_{i}\subseteq T$), which is
absurd.\textquotedblright
\item \textbf{page 47, Example 4.9 (c):} Replace \textquotedblleft and
$\operatorname*{rk}\left( y\right) =2$\textquotedblright\ by
\textquotedblleft with $\operatorname*{rk}\left( y\right) =2$%
\textquotedblright.
\item \textbf{page 47, Example 4.9 (e):} Replace \textquotedblleft%
$\mathbb{F}_{n}\left( q\right) $\textquotedblright\ by \textquotedblleft%
$\mathbb{F}_{q}^{n}$\textquotedblright.
\item \textbf{page 48, Example 4.9 (e):} Replace \textquotedblleft$L$ is a
modular geometric lattice\textquotedblright\ by \textquotedblleft$B_{n}\left(
q\right) $ is a modular geometric lattice\textquotedblright.
\item \textbf{page 48, Example 4.9 (e):} Replace \textquotedblleft every $x\in
L$ is modular\textquotedblright\ by \textquotedblleft every $x\in B_{n}\left(
q\right) $ is modular\textquotedblright.
\item \textbf{page 48, Example 4.9 (e), }\textsc{Note}\textbf{:} Replace
\textquotedblleft every two points\textquotedblright\ by \textquotedblleft
every two distinct points\textquotedblright. Similarly, replace
\textquotedblleft every two lines\textquotedblright\ by \textquotedblleft
every two distinct lines\textquotedblright.
\item \textbf{page 49, Example 4.9 (f):} In \textquotedblleft$\left\{
a,b,B_{1}-a,B_{2}-b,\ldots,B_{3},\ldots,B_{k}\right\} $\textquotedblright,
remove the first \textquotedblleft$\ldots$\textquotedblright.
\item \textbf{page 49, Theorem 4.13:} The \textquotedblleft of rank
$n$\textquotedblright\ is slightly ambiguous: does it refer to the lattice or
to $z$? (It is meant to refer to $L$, of course, rather unsurprisingly, but
I'd still split such a sentence into two if I were to write it.)
\item \textbf{page 49, Theorem 4.13:} If I am not mistaken, $\chi_{L}$ and
$\chi_{\left[ \widehat{0},z\right] }$ have never been defined: You defined
$\chi_{M}$ for matroids, but not $\chi_{L}$ for lattices. I guess it wouldn't
be wrong to address this on a more general level and define $\chi_{P}$ for
every finite graded poset $P$ which has a $\widehat{0}$ and a $\widehat{1}$,
by setting%
\[
\chi_{P}\left( t\right) =\sum_{x\in P}\mu\left( \widehat{0},x\right)
t^{\operatorname*{rk}\widehat{1}-\operatorname*{rk}x}.
\]
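
For example, with this definition, if $P$ is the lattice of subsets of
$\left\{ 1,2,\ldots,n\right\} $ (so that $\operatorname*{rk}x=\left\vert
x\right\vert $ and $\mu\left( \widehat{0},x\right) =\left( -1\right)
^{\left\vert x\right\vert }$), then%
\[
\chi_{P}\left( t\right) =\sum_{x\subseteq\left\{ 1,2,\ldots,n\right\} }%
\left( -1\right) ^{\left\vert x\right\vert }t^{n-\left\vert x\right\vert
}=\left( t-1\right) ^{n}.
\]
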
\item \textbf{page 50:} In the first paragraph of this page, \textquotedblleft
begins $x^{n}-ax^{n-1}+\cdots$\textquotedblright\ should be \textquotedblleft
begins $t^{n}-at^{n-1}+\cdots$\textquotedblright.
\item \textbf{page 52, proof of Theorem 4.13:} It took me a while to
understand what you mean by \textquotedblleft the product will be
preserved\textquotedblright. Your argument, set up more algebraically, seems
to be this: We define a $K$-module homomorphism $\alpha:K\left[
\widehat{0},z\right] \rightarrow K\left[ t\right] $ by $\alpha\left(
v\right) =t^{\operatorname*{rk}z-\operatorname*{rk}v}$ for every $v\in\left[
\widehat{0},z\right] $. We define a $K$-module homomorphism $\beta:K\left\{
w\in L\ \mid\ w\wedge z=\widehat{0}\right\} \rightarrow K\left[ t\right] $
by $\beta\left( y\right) =t^{n-\operatorname*{rk}y-\operatorname*{rk}z}$ for
every $y\in L$ satisfying $y\wedge z=\widehat{0}$. We define a $K$-module
homomorphism $\gamma:KL\rightarrow K\left[ t\right] $ by $\gamma\left(
x\right) =t^{n-\operatorname*{rk}x}$ for every $x\in L$. Then, you show
(using Claim 2) the equality%
\begin{equation}
\alpha\left( v\right) \beta\left( y\right) =\gamma\left( v\vee y\right)
\label{p52.3}%
\end{equation}
for every $v\in L$ and $y\in L$ satisfying $v\leq z$ and $y\wedge
z=\widehat{0}$. By linearity, the same equality thus holds for every $v\in
K\left[ \widehat{0},z\right] $ and $y\in K\left\{ w\in L\ \mid\ w\wedge
z=\widehat{0}\right\} $. Now, you apply the map $\gamma$ to both sides of
(33), and simplify the right hand side using (\ref{p52.3}).
\item \textbf{page 53, Definition 4.13:} Replace \textquotedblleft%
$L_{\mathcal{A}}$\textquotedblright\ by \textquotedblleft$L\left(
\mathcal{A}\right) $\textquotedblright.
\item \textbf{page 54, Example 4.11 (c):} In \textquotedblleft$B_{1}\subset
B_{2}\cdots\subset B_{n-1}$\textquotedblright, you forgot a \textquotedblleft%
$\subset$\textquotedblright\ sign.
\item \textbf{page 54, Example 4.11 (c):} \textquotedblleft The atoms covered
by $\pi_{i}$\textquotedblright\ should be \textquotedblleft The atoms $\leq
\pi_{i}$\textquotedblright.
\item \textbf{page 54, Example 4.11 (c):} On the last line of the page,
replace \textquotedblleft$\mathcal{B}_{n}\left( t\right) $\textquotedblright%
\ by \textquotedblleft$\mathcal{B}_{n}$\textquotedblright.
\item \textbf{page 55:} Again, \textquotedblleft$L_{\mathcal{A}}%
$\textquotedblright\ should be \textquotedblleft$L\left( \mathcal{A}\right)
$\textquotedblright\ (two lines above Theorem 4.14).
\item \textbf{page 61, proof of Proposition 5.13:} On line 2 of the proof,
replace \textquotedblleft$v_{i},a_{i}\in\mathbb{Z}^{n}$\textquotedblright\ by
\textquotedblleft$v_{i}\in\mathbb{Z}^{n}$ and $a_{i}\in\mathbb{Z}%
$\textquotedblright.
\item \textbf{page 62, proof of Proposition 5.13:} On the second line of the
page, you write \textquotedblleft if and only if at least
one\textquotedblright. I understand the \textquotedblleft only
if\textquotedblright. The \textquotedblleft if\textquotedblright\ might be
true, but is probably not easy to prove (the point is to rule out accidental
isomorphisms $L\left( \mathcal{A}\right) \cong L\left( \mathcal{A}%
_{p}\right) $ that could happen if hyperplanes becoming parallel
\textquotedblleft undo\textquotedblright\ the damage done by hyperplanes
becoming concurrent); either way it is a distraction from the proof.
\item \textbf{page 62, proof of Theorem 5.15:} Replace \textquotedblleft%
$F_{q}$\textquotedblright\ by \textquotedblleft$\mathbb{F}_{q}$%
\textquotedblright.
\end{itemize}
\end{document}