\documentclass[12pt,final,notitlepage,onecolumn,german]{article}%
\usepackage[all,cmtip]{xy}
\usepackage{lscape}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{graphicx}
\usepackage{amsmath}%
\setcounter{MaxMatrixCols}{30}
%TCIDATA{OutputFilter=latex2.dll}
%TCIDATA{Version=5.50.0.2960}
%TCIDATA{CSTFile=LaTeX article (bright).cst}
%TCIDATA{Created=Sat Mar 27 17:33:36 2004}
%TCIDATA{LastRevised=Thursday, November 10, 2011 23:59:06}
%TCIDATA{}
%TCIDATA{}
%TCIDATA{BibliographyScheme=Manual}
%BeginMSIPreambleData
\providecommand{\U}[1]{\protect\rule{.1in}{.1in}}
%EndMSIPreambleData
\newcommand{\id}{\operatorname*{id}}
\voffset=-2.5cm
\hoffset=-2.5cm
\setlength\textheight{24cm}
\setlength\textwidth{15.5cm}
\begin{document}
\begin{center}
\textbf{Chip-Firing and Rotor-Routing on Directed Graphs}

\textit{Alexander E. Holroyd, Lionel Levine, Karola M\'{e}sz\'{a}ros, Yuval
Peres, James Propp and David B. Wilson}

arXiv:0801.3306v3

\textbf{Errata, questions and addenda (Darij Grinberg)}

\bigskip
\end{center}
\begin{itemize}
\item \textbf{Page 3, definition of the Laplacian:} You write:%
\[
\Delta_{ij}=\begin{cases}
-a_{ij} & \text{for }i\neq j,\\
d_{i}-a_{ii} & \text{for }i=j
\end{cases}.
\]
It should be pointed out that $d_{i}$ is a shorthand notation for $d_{v_{i}}$.
\item \textbf{Page 3, definition of "fire":} You write: "An active vertex $v$
can \textbf{fire}, resulting in a new chip configuration $\sigma^{\prime}$
obtained by moving one chip along each of the $d_{v}$ edges emanating from
$v$". At this point it would help to explicitly state that chips moving to the
sink simply disappear (i.e., they are forgotten). Otherwise readers might think
that we keep track of how many chips went into the sink (a misunderstanding I
had at first) or that edges to the sink are not counted in the
degree of a vertex (this is also a misunderstanding).
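Both misunderstandings can be averted by a small sketch of the firing rule; the
graph encoding and the names below are my own, not from the paper:

```python
# A chip configuration maps each non-sink vertex to its chip count.
# The graph maps each non-sink vertex to the list of endpoints of its
# outgoing edges (with multiplicity); edges into the sink are listed too.

def fire(graph, config, v, sink="s"):
    """Fire the active vertex v: send one chip along each outgoing edge.
    Chips sent to the sink simply disappear (they are not tracked)."""
    out_edges = graph[v]
    assert config[v] >= len(out_edges), "v must be active to fire"
    new_config = dict(config)
    new_config[v] -= len(out_edges)   # d_v counts ALL out-edges, sink included
    for w in out_edges:
        if w != sink:                 # chips reaching the sink are forgotten
            new_config[w] += 1
    return new_config

# Example: path v1 -> v2 -> s.  Firing v1 moves one chip to v2;
# firing v2 destroys a chip.
graph = {"v1": ["v2"], "v2": ["s"]}
print(fire(graph, {"v1": 1, "v2": 0}, "v1"))  # {'v1': 0, 'v2': 1}
print(fire(graph, {"v1": 0, "v2": 1}, "v2"))  # {'v1': 0, 'v2': 0}
```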
\item \textbf{Page 3, Lemma 2.2:} In part 2 of the lemma, $\sigma_{m}$ should
be $\sigma_{m}^{\prime}$.
\item \textbf{Page 3, Lemma 2.2:} One very pedantic remark: The $n$ in Lemma
2.2 has nothing to do with the $n$ on the upper half of page 3. The $n$ in
Lemma 2.2 is the index of the last element of the sequence $\sigma_{0},$
$\sigma_{1},$ $...,$ $\sigma_{n}$, whereas the $n$ on the upper half of page 3
is the number of vertices of $G$. Some danger of confusion arises in the proof
of Lemma 2.2 (and probably in a few other places in the paper), where you are
speaking of a vertex $v_{n}$ which has nothing to do with the $v_{n}$ from the
upper half of page 3. Maybe you should mention the ambiguity of $n$ in some
footnote. (The same ambiguous use of $n$ appears in Lemma 3.9 and its proof.)
\item \textbf{Page 4, proof of Lemma 2.2:} You write: "Then $v_{i},v_{1}%
,v_{2},...,v_{i-1},v_{i+1},...,v_{n}$ is a permissible firing sequence". I
think it is permissible only if $i$ is defined as the \textit{first} index $k$
satisfying $v_{k}=v_{1}^{\prime}$ (and not just as some arbitrary index $k$
satisfying $v_{k}=v_{1}^{\prime}$). Of course, this is easy to fix (just
define $i$ this way).
\item \textbf{Page 5, proof of Lemma 2.4:} You write: "and $d_{v_{r-1}}$ such
chips will cause $v_{r-1}$ to fire once". Maybe this would be clearer if you
replace the word "cause" by "allow" (at least it would be clearer this way to
me), because until we know that each chip configuration stabilizes, we cannot
speak of "causing" here (in fact, one could imagine that we fire some subset
of vertices again and again, and never come to fire $v_{r-1}$ even though
$d_{v_{r-1}}$ chips have accumulated at $v_{r-1}$).
\item \textbf{Page 5:} What is the Theorem 4.8 you refer to when you write
"see Theorem 4.8 for a better bound"?
\item \textbf{Page 6, proof of Lemma 2.8:} I think this proof only shows that
the order of $\mathcal{S}\left( G\right) $ is the \textit{absolute value} of
the determinant of $\Delta^{\prime}\left( G\right) $. In fact, in order to
have the volume of any parallelepiped equal to the determinant of the matrix
from its edge-vectors, we need our volumes to be \textit{signed} - but the
index of a lattice is the \textit{unsigned} volume of its fundamental
domain.\newline What is needed to fix the proof is an argument why $\det
\Delta^{\prime}\left( G\right) \geq0$. I don't see how to prove this
combinatorially. We can prove this by a mix of algebra and continuity
arguments: The matrix $\Delta^{\prime}\left( G\right) $ is (weakly)
diagonally dominant, and the determinant of a (weakly) diagonally dominant
matrix is always $\geq0$ (this is a limiting case of the inequality proved
in\newline http://www.artofproblemsolving.com/Forum/viewtopic.php?t=176225 ).
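As an illustration (not a proof), the sign claim can be checked numerically on a
small example; the graph and the numpy computation are mine, chosen for
concreteness:

```python
import numpy as np

# Reduced Laplacian Delta'(G) of a small digraph with sink s:
# vertices 1, 2 and edges 1->2, 2->1, 1->s, 2->s.  Rows/columns index the
# non-sink vertices; diagonal entries are out-degrees, off-diagonal -a_ij.
delta_prime = np.array([[2, -1],
                        [-1, 2]])

# Delta'(G) is (weakly) diagonally dominant with nonnegative diagonal, so
# its determinant should be >= 0; here it is 3, which is also the number of
# oriented spanning trees rooted at the sink for this graph.
det = int(round(np.linalg.det(delta_prime)))
print(det)  # 3
assert det >= 0
```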
\item \textbf{Page 6, Definition 2.11:} At this point, I believe it would be
useful to make one rather trivial observation which you are tacitly using
several times in the subsequent proofs (or at least I believe you use
it).\newline There are two ways to interpret your definition of
"accessible":\newline\textit{First interpretation:} A chip configuration
$\sigma$ is \textbf{accessible} if from any other chip configuration it is
possible to obtain $\sigma$ by \textit{first} adding some chips and
\textit{then} selectively firing active vertices.\newline\textit{Second
interpretation:} A chip configuration $\sigma$ is \textbf{accessible} if from
any other chip configuration it is possible to obtain $\sigma$ by a sequence
of operations, each of which is either an addition of a chip or a firing of an
active vertex.\newline The observation is now that these two interpretations
are equivalent.\newline\textit{Proof.} If we \textit{first} fire an active
vertex in a chip configuration $\tau$, and \textit{then} add some chip, then
we could have just as well \textit{first} added the chip to the configuration
$\tau$ and \textit{then} fired the vertex (because adding the chip can never
make any active vertex non-active). Therefore, if we have a sequence of
operations, each of which is either an addition of a chip or a firing of an
active vertex, then we can move all the "addition-of-a-chip" operations to the
front and all of the "firing-of-an-active-vertex" operations to the end of
this sequence. Hence, the second interpretation is equivalent to the first interpretation.
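The key exchange step (a firing followed by a chip addition can be replaced by
the chip addition followed by the same firing) can be seen on a toy example; the
two-vertex graph and the names here are mine:

```python
# Check: firing an active vertex and adding a chip commute.
# Toy graph: v1 -> v2 -> s (chips sent to the sink s are forgotten).
graph = {"v1": ["v2"], "v2": ["s"]}

def fire(config, v):
    new = dict(config)
    new[v] -= len(graph[v])
    for w in graph[v]:
        if w != "s":
            new[w] += 1
    return new

def add_chip(config, w):
    new = dict(config)
    new[w] += 1
    return new

start = {"v1": 1, "v2": 0}  # v1 is active (1 chip >= out-degree 1)
fire_then_add = add_chip(fire(start, "v1"), "v2")
add_then_fire = fire(add_chip(start, "v2"), "v1")
print(fire_then_add == add_then_fire)  # True
```

Note that adding a chip can only make more vertices active, never fewer, which
is why the exchange is always permissible.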
\item \textbf{Page 7:} What is the Lemma 4.6 you refer to when you write
"cluster-firing and superstabilization as described in Definition 4.3 and
Lemma 4.6"?
\item \textbf{Page 7, proof of Lemma 2.13:} You write "$\beta\geq\left[
d_{\max}+\left( -m\right) \right] \left( \delta-\delta^{\circ}\right)
\geq d_{\max}\left( \delta-\delta^{\circ}\right) \geq d_{\max}$". I think
this is wrong, because $\beta=\alpha+\left[ d_{\max}+\left( -m\right)
\right] \left( \delta-\delta^{\circ}\right) $ does not yield $\beta
\geq\left[ d_{\max}+\left( -m\right) \right] \left( \delta-\delta^{\circ
}\right) $. What you mean is that%
\begin{align*}
\beta & =\alpha+\left[ d_{\max}+\left( -m\right) \right] \left(
\delta-\delta^{\circ}\right) =\alpha+\left( -m\right) \underbrace{\left(
\delta-\delta^{\circ}\right) }_{\geq1}+d_{\max}\left( \delta-\delta^{\circ
}\right) \\
& \geq\underbrace{\alpha+\left( -m\right) }_{\geq0}+d_{\max}\left(
\delta-\delta^{\circ}\right) \geq d_{\max}\underbrace{\left( \delta
-\delta^{\circ}\right) }_{\geq1}\geq d_{\max}.
\end{align*}
\item \textbf{Page 7, Lemma 2.14:} This lemma is nice, but I think that the
actual fact you repeatedly use (for example, when you write "by Lemma 2.14" in
the proof of Lemma 2.15) is the following strengthening:\newline\textbf{Lemma
2.14a:} Let $\epsilon=\left( 2\delta\right) -\left( 2\delta\right)
^{\circ}$, where $\delta$ is given by $\delta\left( v\right) =d_{v}$ as
before. If $\sigma$ is recurrent, and $k\geq0$ is an integer, then $\left(
\sigma+k\epsilon\right) ^{\circ}=\sigma$.\newline\textit{Proof.} Induct over
$k$. The induction base ($k=0$) is clear (since $\sigma$ is recurrent and thus
stable).\newline For the induction step, assume that $\left( \sigma
+k\epsilon\right) ^{\circ}=\sigma$, and try to prove that $\left(
\sigma+\left( k+1\right) \epsilon\right) ^{\circ}=\sigma$.\newline Indeed
there is a fact which I call the "partial stabilization lemma", and which
states that any two chip configurations $\alpha$ and $\beta$ satisfy $\left(
\alpha+\beta\right) ^{\circ}=\left( \alpha+\beta^{\circ}\right) ^{\circ}$.
Applied to $\alpha=\epsilon$ and $\beta=\sigma+k\epsilon$, this fact yields
$\left( \epsilon+\sigma+k\epsilon\right) ^{\circ}=\left( \epsilon+\left(
\sigma+k\epsilon\right) ^{\circ}\right) ^{\circ}$. Since $\left(
\sigma+k\epsilon\right) ^{\circ}=\sigma$ and $\epsilon+\sigma+k\epsilon
=\sigma+\left( k+1\right) \epsilon$, this rewrites as $\left(
\sigma+\left( k+1\right) \epsilon\right) ^{\circ}=\left( \epsilon
+\sigma\right) ^{\circ}$. Thus, $\left( \sigma+\left( k+1\right)
\epsilon\right) ^{\circ}=\left( \epsilon+\sigma\right) ^{\circ}=\left(
\sigma+\epsilon\right) ^{\circ}=\sigma$ (by Lemma 2.14), completing the
induction step. Lemma 2.14a is thus proven.
%\newline\newline\textit{Alternative proof of Lemma 2.14a.} Induct
%over $k$. The induction base ($k=0$) is clear (since $\sigma$ is recurrent and
%thus stable).\newline For the induction step, assume that $\left(
%\sigma+k\epsilon\right) ^{\circ}=\sigma$, and try to prove that $\left(
%\sigma+\left( k+1\right) \epsilon\right) ^{\circ}=\sigma$.\newline Indeed
%there is a fact (which is, strangely enough, not in your text) stating that
%any three chip configurations $\alpha$, $\beta$ and $\gamma$ satisfy $\left(
%\alpha+\left( \beta+\gamma\right) ^{\circ}\right) ^{\circ}=\left(
%\alpha+\beta+\gamma\right) ^{\circ}=\left( \left( \alpha+\beta\right)
%^{\circ}+\gamma\right) ^{\circ}$. Applied to $\alpha=\epsilon$, $\beta
%=\sigma$ and $\gamma=k\epsilon$, this yields $\left( \epsilon+\left(
%\sigma+k\epsilon\right) ^{\circ}\right) ^{\circ}=\left( \epsilon
%+\sigma+k\epsilon\right) ^{\circ}=\left( \left( \epsilon+\sigma\right)
%^{\circ}+k\epsilon\right) ^{\circ}$. Since $\left( \sigma+k\epsilon\right)
%^{\circ}=\sigma$ and $\epsilon+\sigma+k\epsilon=\sigma+\left( k+1\right)
%\epsilon$, this rewrites as $\left( \epsilon+\sigma\right) ^{\circ}=\left(
%\sigma+\left( k+1\right) \epsilon\right) ^{\circ}=\left( \left(
%\epsilon+\sigma\right) ^{\circ}+k\epsilon\right) ^{\circ}$. Thus, $\left(
%\sigma+\left( k+1\right) \epsilon\right) ^{\circ}=\left( \epsilon
%+\sigma\right) ^{\circ}=\left( \sigma+\epsilon\right) ^{\circ}=\sigma$ (by
%Lemma 2.14), completing the induction.
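The "partial stabilization lemma" invoked above can at least be checked
numerically; here is a minimal sketch on a hypothetical two-vertex graph (the
graph, the names and the naive stabilization loop are mine):

```python
# Numerical check of (alpha + beta)° = (alpha + beta°)° on a toy graph:
# vertices v1, v2 and sink s, with edges v1 -> v2, v1 -> s, v2 -> v1, v2 -> s.
graph = {"v1": ["v2", "s"], "v2": ["v1", "s"]}

def stabilize(config):
    """Repeatedly fire some active vertex until none is left.
    (Terminates here because chips leak into the sink.)"""
    config = dict(config)
    while True:
        active = [v for v in config if config[v] >= len(graph[v])]
        if not active:
            return config
        v = active[0]
        config[v] -= len(graph[v])
        for w in graph[v]:
            if w != "s":
                config[w] += 1

def add(a, b):
    return {v: a[v] + b[v] for v in a}

alpha = {"v1": 3, "v2": 0}
beta = {"v1": 2, "v2": 4}
lhs = stabilize(add(alpha, beta))              # (alpha + beta)°
rhs = stabilize(add(alpha, stabilize(beta)))   # (alpha + beta°)°
print(lhs == rhs)  # True
```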
\item \textbf{Page 7, Lemma 2.14:} I am just wondering: Why do you formulate
this lemma for $\epsilon=\left( 2\delta\right) -\left( 2\delta\right)
^{\circ}$ rather than for $\epsilon=\delta-\delta^{\circ}$ ? Unless I am
mistaken, the proof of Lemma 2.14 works just as well if the definition of
$\epsilon$ is replaced by $\epsilon=\delta-\delta^{\circ}$. The only
difference I can see between $\left( 2\delta\right) -\left( 2\delta\right)
^{\circ}$ and $\delta-\delta^{\circ}$ is that $\left( 2\delta\right)
-\left( 2\delta\right) ^{\circ}$ is accessible (while $\delta-\delta^{\circ
}$ is not necessarily - at least I don't see a proof that it is). But you
don't seem to ever use the accessibility of $\left( 2\delta\right) -\left(
2\delta\right) ^{\circ}$ ...
\item \textbf{Page 11, proof of Lemma 2.17:} You write: "If (4) holds, there
is a chip configuration $\alpha$ such that $\left( \sigma+\alpha\right)
^{\circ}=\sigma$ and $\alpha$ is nonzero on at least one vertex of each
strongly connected component of $G$". Here you seem to be using the
following (not completely trivial) fact:\newline\textbf{Lemma 2.14b:} If some
stable chip configuration $\sigma$ and some chip configurations $\alpha_{1}$,
$\alpha_{2}$, $...$, $\alpha_{k}$ satisfy $\left( \sigma+\alpha_{1}\right)
^{\circ}=\left( \sigma+\alpha_{2}\right) ^{\circ}=...=\left( \sigma
+\alpha_{k}\right) ^{\circ}=\sigma$, then $\left( \sigma+\alpha_{1}%
+\alpha_{2}+...+\alpha_{k}\right) ^{\circ}=\sigma$.\newline\textit{Proof.} We
will prove that every $i\in\left\{ 0,1,...,k\right\} $ satisfies $\left(
\sigma+\alpha_{1}+\alpha_{2}+...+\alpha_{i}\right) ^{\circ}=\sigma$.\newline
Indeed, let us prove this by induction over $i$: The induction base ($i=0$) is
clear (since $\sigma$ is stable).\newline For the induction step, let
$i\in\left\{ 1,2,...,k\right\} $. Assume that $\left( \sigma+\alpha
_{1}+\alpha_{2}+...+\alpha_{i-1}\right) ^{\circ}=\sigma$, and try to prove
that $\left( \sigma+\alpha_{1}+\alpha_{2}+...+\alpha_{i}\right) ^{\circ
}=\sigma$.\newline Applying the "partial stabilization lemma" (which I quoted
above in the proof of Lemma 2.14a) to $\alpha=\alpha_{i}$ and $\beta
=\sigma+\alpha_{1}+\alpha_{2}+...+\alpha_{i-1}$, we obtain $\left( \alpha
_{i}+\sigma+\alpha_{1}+\alpha_{2}+...+\alpha_{i-1}\right) ^{\circ}=\left(
\alpha_{i}+\left( \sigma+\alpha_{1}+\alpha_{2}+...+\alpha_{i-1}\right)
^{\circ}\right) ^{\circ}$. Since $\alpha_{i}+\sigma+\alpha_{1}+\alpha
_{2}+...+\alpha_{i-1}=\sigma+\alpha_{1}+\alpha_{2}+...+\alpha_{i}$ and
$\left( \sigma+\alpha_{1}+\alpha_{2}+...+\alpha_{i-1}\right) ^{\circ}%
=\sigma$, this rewrites as $\left( \sigma+\alpha_{1}+\alpha_{2}%
+...+\alpha_{i}\right) ^{\circ}=\left( \alpha_{i}+\sigma\right) ^{\circ}$.
Thus, $\left( \sigma+\alpha_{1}+\alpha_{2}+...+\alpha_{i}\right) ^{\circ
}=\left( \alpha_{i}+\sigma\right) ^{\circ}=\left( \sigma+\alpha_{i}\right)
^{\circ}=\sigma$. This completes the induction step.\newline We have thus
proven by induction that every $i\in\left\{ 0,1,...,k\right\} $ satisfies
$\left( \sigma+\alpha_{1}+\alpha_{2}+...+\alpha_{i}\right) ^{\circ}=\sigma$.
Applied to $i=k$, this yields $\left( \sigma+\alpha_{1}+\alpha_{2}%
+...+\alpha_{k}\right) ^{\circ}=\sigma$. We have thus proven Lemma 2.14b.
\item \textbf{Page 11, proof of Lemma 2.17:} You write: "Moreover, $\left(
\sigma+\beta\right) ^{\circ}=\sigma$." It took me a while to figure out why
this is so. Here is my proof:\newline Since $\beta$ is obtained from $k\alpha$
by selective firing, we see that $\sigma+\beta$ is obtained from
$\sigma+k\alpha$ by selective firing. Thus, $\left( \sigma+\beta\right)
^{\circ}=\left( \sigma+k\alpha\right) ^{\circ}$. Since Lemma 2.14b (applied
to $\alpha_{1}=\alpha$, $\alpha_{2}=\alpha$, $...$, $\alpha_{k}=\alpha$)
yields $\left( \sigma+k\alpha\right) ^{\circ}=\sigma$, we obtain $\left(
\sigma+\beta\right) ^{\circ}=\left( \sigma+k\alpha\right) ^{\circ}=\sigma
$.\newline So much for my proof of $\left( \sigma+\beta\right) ^{\circ
}=\sigma$. Is this argument more complicated than what you had in mind?
Because I find it strange that you leave this to the reader (after all, you
haven't even explicitly stated Lemma 2.14b).
\item \textbf{Page 11:} The formula%
\[
\sum_{w}\Delta_{v,w}^{\prime}f\left( w\right) =\sigma\left( v\right)
\]
should be%
\[
\sum_{w}\Delta_{v,w}^{\prime}f\left( w\right) \equiv\sigma\left( v\right)
\pmod{1}.
\]
\item \textbf{Page 13, proof of Lemma 3.4:} You write: "this same directed
occurs within $\rho$". This should be "this same directed cycle occurs within
$\rho$" (you forgot the word "cycle").
\item \textbf{Page 14, proof of Lemma 3.7:} You write: "and hence the edge
from $v_{i}$ to $v_{i+1}$ was traversed more recently than the edge from
$v_{i-1}$ to $v_{i}$". Since both of these edges might have been traversed
several times, I would find it clearer if you replaced this by "and hence the
last time the chip was at $v_{i+1}$ is later than the last time it was at
$v_{i}$" (or something similar - I don't think my sentence is good English,
but I assume you get what I mean).
\item \textbf{Page 14, proof of Lemma 3.9:} You write: "then some permutation
of $v$ agrees with $v^{\prime}$ in the first $i$ terms". I believe that, for
the induction to work as intended, this should be amended to "then some
\textit{legal} permutation of $v$ agrees with $v^{\prime}$ in the first $i$
terms", where a sequence of vertices of $G$ is said to be \textit{legal} if it
is possible to fire the vertices in the order of this sequence. Otherwise, it
is not clear why "the vertices $v_{1},v_{2},...,v_{i-1},v_{j},v_{i}%
,v_{i+1},...,v_{j-1},v_{j+1},...,v_{n}$ can be fired in that order" (since we
don't know whether the sequence $v$ itself was legal to begin with!).\newline
Alternatively, you could also leave the "then some permutation of $v$ agrees
with $v^{\prime}$ in the first $i$ terms" part as it is, but then "the
vertices $v_{1},v_{2},...,v_{i-1},v_{j},v_{i},v_{i+1},...,v_{j-1}%
,v_{j+1},...,v_{n}$ can be fired in that order" is not literally true, unless
you allow vertices to host a negative number of chips.
\item \textbf{Page 16, proof of Lemma 3.10:} You write: "where the induction
hypothesis states that every path leads to either the sink or to the chip". I
think it would be more precise to say that "where the induction hypothesis
states that every sufficiently long path leads either to the sink or through
the chip".
\item \textbf{Page 17, Definition 3.11:} The formula%
\[
\sigma\left( \rho\right) =\left( \prod_{v\in V\left( G\right) }%
E_{v}^{\sigma\left( v\right) }\right) \rho
\]
should be%
\[
\sigma\left( \rho\right) =\left( \prod_{v\in V\left( G\right)
\diagdown\left\{ \operatorname{sink}\right\} }E_{v}^{\sigma\left( v\right)
}\right) \rho.
\]
\item \textbf{Page 19, proof of Lemma 3.16:} You write: "Since $\rho$ is
reachable from $\rho^{\prime}$ ". But why is $\rho$ reachable from
$\rho^{\prime}$ ? This is again something that took me a while to figure out.
What you seem to be using here is the following lemma:\newline\textbf{Lemma
3.16a.} Let $G$ be a digraph, and let $\rho$ be a rotor configuration. Let
$\ell\in\mathbb{N}$. Let $\sigma_{1}$, $\sigma_{2}$, $...$, $\sigma_{\ell}$ be
chip configurations such that $\rho$ is reachable from $\sigma_{i}\left(
\rho\right) $ for every $i\in\left\{ 1,2,...,\ell\right\} $. Then, $\rho$
is reachable from $\left( \sigma_{1}+\sigma_{2}+...+\sigma_{\ell}\right)
\left( \rho\right) $.\newline\textit{Proof.} For every $i\in\left\{
1,2,...,\ell\right\} $, there exists some chip configuration $\tau_{i}$ such
that $\tau_{i}\left( \sigma_{i}\left( \rho\right) \right) =\rho$ (since
$\rho$ is reachable from $\sigma_{i}\left( \rho\right) $). Consider this
$\tau_{i}$. Then, $\left( \tau_{i}+\sigma_{i}\right) \left( \rho\right)
=\tau_{i}\left( \sigma_{i}\left( \rho\right) \right) =\rho$. Thus,
$\left( \tau_{\ell}+\sigma_{\ell}\right) \rho=\rho$, $\left( \tau_{\ell
-1}+\sigma_{\ell-1}\right) \rho=\rho$, $...$, $\left( \tau_{1}+\sigma
_{1}\right) \rho=\rho$. Now,%
\begin{align*}
& \left( \tau_{1}+\tau_{2}+...+\tau_{\ell}\right) \left( \left(
\sigma_{1}+\sigma_{2}+...+\sigma_{\ell}\right) \left( \rho\right) \right)
\\
& =\left( \underbrace{\left( \tau_{1}+\tau_{2}+...+\tau_{\ell}\right)
+\left( \sigma_{1}+\sigma_{2}+...+\sigma_{\ell}\right) }_{=\left( \tau
_{1}+\sigma_{1}\right) +\left( \tau_{2}+\sigma_{2}\right) +...+\left(
\tau_{\ell}+\sigma_{\ell}\right) }\right) \left( \rho\right) \\
& =\left( \left( \tau_{1}+\sigma_{1}\right) +\left( \tau_{2}+\sigma
_{2}\right) +...+\left( \tau_{\ell}+\sigma_{\ell}\right) \right) \left(
\rho\right) \\
& =\left( \tau_{1}+\sigma_{1}\right) \left( \left( \tau_{2}+\sigma
_{2}\right) \left( ...\left( \left( \tau_{\ell-1}+\sigma_{\ell-1}\right)
\underbrace{\left( \left( \tau_{\ell}+\sigma_{\ell}\right) \rho\right)
}_{=\rho}\right) \right) \right) \\
& =\left( \tau_{1}+\sigma_{1}\right) \left( \left( \tau_{2}+\sigma
_{2}\right) \left( ...\left( \left( \tau_{\ell-2}+\sigma_{\ell-2}\right)
\underbrace{\left( \left( \tau_{\ell-1}+\sigma_{\ell-1}\right) \rho\right)
}_{=\rho}\right) \right) \right) \\
& =...\\
& =\left( \tau_{1}+\sigma_{1}\right) \rho=\rho.
\end{align*}
Hence, $\rho$ is reachable from $\left( \sigma_{1}+\sigma_{2}+...+\sigma
_{\ell}\right) \left( \rho\right) $. This proves Lemma 3.16a.\newline You
apply Lemma 3.16a to $\sigma_{j}=\mathbf{1}_{v_{j}}$ (where $\mathbf{1}%
_{v_{j}}$ is the chip configuration consisting of a single chip at $v_{j}$),
thus obtaining that $\rho$ is reachable from $\left( \mathbf{1}_{v_{1}%
}+\mathbf{1}_{v_{2}}+...+\mathbf{1}_{v_{\ell}}\right) \left( \rho\right) $.
In other words, $\rho$ is reachable from $\left( \prod\limits_{i}E_{v_{i}%
}\right) \rho$. Then you apply Lemma 3.16a to $k$ and $\prod\limits_{i}%
E_{v_{i}}$ instead of $\ell$ and $\sigma_{j}$, thus concluding that $\rho$ is
reachable from $\left( \prod\limits_{i}E_{v_{i}}\right) ^{k}\rho$. Since
$\rho^{\prime}=\left( \prod\limits_{i}E_{v_{i}}\right) ^{k}\rho$, this
yields that $\rho$ is reachable from $\rho^{\prime}$.\newline Is this really
the argument you intended?
\item \textbf{Page 19, proof of Corollary 3.18:} You write: "The rows of
$\Delta^{\prime}\left( G\right) $ corresponding to vertices in $S$ sum to
zero". This is ambiguous. The sum of the rows of $\Delta^{\prime}\left(
G\right) $ corresponding to vertices in $S$ is \textit{not} the zero vector
(unless the indegree of every vertex of $S$ happens to be equal to its
outdegree). What you probably wanted to say is that each row of $\Delta
^{\prime}\left( G\right) $ corresponding to a vertex in $S$ has its entries
adding up to $0$, and its only entries are in the columns corresponding to
vertices in $S$. (This yields that if we WLOG assume that the columns
corresponding to the vertices in $S$ are the first $k$ columns of
$\Delta^{\prime}\left( G\right) $, and the rows corresponding to the vertices in
$S$ are the first $k$ rows of $\Delta^{\prime}\left( G\right) $, then
$\Delta^{\prime}\left( G\right) $ is a lower block-triangular matrix, and
its upper left part has determinant $0$, so that the whole matrix
$\Delta^{\prime}\left( G\right) $ must also have determinant $0$.)
\item \textbf{Page 20, proof of Proposition 3.21:} You write: "Since $h$ is a
harmonic function on $G$ ". I think that $h$ is harmonic on $V\left(
G\right) \diagdown Z$ only. Thus you should only consider edges $e=\left(
u,v\right) $ with $u\notin Z$. Can't we actually improve the inequality (1)
by summing only over such edges $e$ on the right hand side? I mean, any other
edge will neither ever be used by a random walk stopped on first hitting $Z$,
nor will it ever be switched by a rotor-router walk stopped on first hitting
$Z$.
\item \textbf{Page 21, proof of Proposition 3.21:} You write: "We assign
weight $\sum_{v}\operatorname*{wt}\left( \rho\left( v\right) \right) $ to
a rotor configuration $\rho$". I would replace $\rho$ by $\tau$ here, since
$\rho$ already stands for the (fixed!) initial rotor configuration (whose
weight is $0$ since $\operatorname*{wt}\left( \rho\left( v\right) \right)
=0$ for every vertex $v$).
\item \textbf{Page 21, the very first line of Section 4:} Replace $G=\left(
E,V\right) $ by $G=\left( V,E\right) $.
\item \textbf{Page 21, the fourth line of Section 4:} You claim that
"equivalently, $G$ has a sink and every other vertex has out-degree that is at
least as large as its in-degree". I don't think the word "equivalently" is
appropriate here. Surely, if $G$ is an Eulerian digraph with sink, then $G$
has a sink and every other vertex has out-degree that is at least as large as
its in-degree. But the converse doesn't hold: If $G$ has a sink and every
other vertex has out-degree that is at least as large as its in-degree, then
we can add some edges starting at the sink to ensure that every in-degree in
the resulting graph becomes equal to the corresponding out-degree, but nothing
guarantees us that the resulting graph is strongly connected.
\item \textbf{Page 21, the sixth line of Section 4:} You write: "Such a tour
exists if and only if $G$ is Eulerian." I am being really pedantic here, but
no - such a tour can also exist if $G$ consists of an Eulerian graph and some
isolated vertices ;)
\item \textbf{Page 21, proof of Lemma 4.1:} You write: "By the "(4)
$\Longrightarrow$ (1)" part of Lemma 2.17, if $\left( \sigma+\beta\right)
^{\circ}=\sigma$, then $\sigma$ is recurrent." Are you claiming that every
strongly connected component of $G$ contains some vertex $v$ such that
$\beta\left( v\right) >0$ ? This is wrong. Counterexample: Consider a
directed cycle on $4$ vertices. Remove one edge. You get an Eulerian digraph
with sink. Each of its strongly connected components consists of one vertex
only; in particular, the vertex opposite to the sink forms a strongly
connected component, but its $\beta$-value is $0$.\newline Here is how you can
fix this part of the proof:\newline We need something slightly stronger than
the "(4) $\Longrightarrow$ (1)" part of Lemma 2.17. We can get it by adding
another equivalent assertion to Lemma 2.17:\newline(5) There exists an
epidemic chip configuration $\alpha$ such that $\sigma$ is reachable from
$\sigma+\alpha$. Here, a chip configuration $\alpha$ is said to be
\textit{epidemic} if for every vertex $v$ of $G$, there exists a vertex $w$ of
$G$ such that $\alpha\left( w\right) >0$ and such that there exists a path
from $w$ to $v$.\newline The proof of (4) $\Longrightarrow$ (5) is obvious,
and the proof of (5) $\Longrightarrow$ (1) is the same as the proof of (4)
$\Longrightarrow$ (1) given in your paper.\newline Now, we claim that in the
situation of Lemma 4.1, the chip configuration $\beta$ is epidemic. In fact,
let $v$ be a vertex of $G$. By the definition of an Eulerian digraph with
sink, the digraph $G$ was obtained by taking an Eulerian digraph $G^{\prime}$
and deleting all the edges from one vertex $s$. Consider such a $G^{\prime}$.
Since $G^{\prime}$ is Eulerian, there exists an Eulerian tour of $G^{\prime}$.
Let us walk \textit{backwards} along this tour, starting at the vertex $v$,
until we reach the vertex $s$ for the first time (with one exception: if
$v=s$, then we walk backwards until we reach the vertex $s$ for the
\textit{second} time). Then, go a step forward again. Let $w$ be the vertex in
which we land. Then, $w$ is the endpoint of an edge from $s$, so that
$\beta\left( w\right) >0$, but on the other hand, there exists a path from
$w$ to $v$ in $G$ (namely, we can reach $v$ from $w$ by walking forward along
the Eulerian tour, without ever using an edge from $s$). This shows that
$\beta$ is epidemic, and we can apply the "(5) $\Longrightarrow$ (1)" part of
Lemma 2.17, qed.
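The counterexample above can also be seen in numbers. Writing the reduced
Laplacian of the 4-cycle with one edge removed (i.e., of the path $v_1
\rightarrow v_2\rightarrow v_3\rightarrow s$) and summing its rows gives
$\beta$, as in the proof of Lemma 4.1; the numpy encoding is mine:

```python
import numpy as np

# Directed 4-cycle with one edge removed: v1 -> v2 -> v3 -> s.
# Reduced Laplacian over the non-sink vertices v1, v2, v3:
delta_prime = np.array([[1, -1, 0],
                        [0, 1, -1],
                        [0, 0, 1]])

# beta is the sum of the rows of Delta' (as in the proof of Lemma 4.1).
beta = delta_prime.sum(axis=0)
print(beta.tolist())  # [1, 0, 0]
# v2, the vertex "opposite" the sink, is its own strongly connected
# component, yet beta(v2) = 0 -- so assertion (4) of Lemma 2.17 does not
# apply directly.
```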
\item \textbf{Page 21, proof of Lemma 4.1:} You write: "The rows of
$\Delta^{\prime}$ are linearly independent". Do you have a quick proof for
this seemingly obvious statement? The best proof I can come up with is
nontrivial: Since $G$ is an Eulerian digraph with sink, its sink is a global
sink, and thus it has an oriented spanning tree rooted at the sink. Therefore,
Corollary 3.18 yields that $\det\Delta^{\prime}\left( G\right) \geq1$, so
that $\Delta^{\prime}=\Delta^{\prime}\left( G\right) $ is nonsingular, i.e.,
the rows of $\Delta^{\prime}$ are linearly independent. But this looks
like an overkill argument...
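For what it's worth, the overkill argument is easy to see in action on the
example from the previous item (the 4-cycle with one edge removed); the matrix
and the numpy checks are mine:

```python
import numpy as np

# Eulerian digraph with sink obtained from a directed 4-cycle by deleting
# one edge: v1 -> v2 -> v3 -> s.  Reduced Laplacian over v1, v2, v3:
delta_prime = np.array([[1, -1, 0],
                        [0, 1, -1],
                        [0, 0, 1]])

# The sink is global and an oriented spanning tree rooted at it exists, so
# Corollary 3.18 predicts det Delta' >= 1; hence Delta' is nonsingular and
# its rows are linearly independent.
det = int(round(np.linalg.det(delta_prime)))
print(det)  # 1
assert det >= 1
assert np.linalg.matrix_rank(delta_prime) == 3  # rows linearly independent
```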
\item \textbf{Page 22, second line of the page:} You write: "Then $\sigma$ is
recurrent if and only if every non-sink vertex fires in the stabilization of
$\sigma+\beta$." I don't think this is true (take $\sigma$ to be a
configuration consisting of very large numbers), but we can make it true
either by replacing "fires" by "fires exactly once" or by requiring $\sigma$
to be stable. Here are my proofs of this:\newline\textit{Statement 1:} If
$\sigma$ is a chip configuration such that every non-sink vertex fires exactly
once in the stabilization of $\sigma+\beta$, then $\sigma$ is
recurrent.\newline\textit{Statement 2:} If $\sigma$ is a stable chip
configuration such that every non-sink vertex fires at least once in the
stabilization of $\sigma+\beta$, then $\sigma$ is recurrent.\newline%
\textit{Proof of Statement 1:} Let $\sigma$ be a chip configuration such that
every non-sink vertex fires exactly once in the stabilization of $\sigma
+\beta$. Then, $\left( \sigma+\beta\right) ^{\circ}=\sigma+\beta
-\sum\limits_{i=1}^{n-1}\Delta_{i}^{\prime}$ (using the notations of the proof
of Lemma 4.1). Since $\beta=\sum\limits_{i=1}^{n-1}\Delta_{i}^{\prime}$, this
yields $\left( \sigma+\beta\right) ^{\circ}=\sigma$, and thus $\sigma$ is
recurrent (this is shown just as in the proof of Lemma 4.1). This proves
Statement 1.\newline Before proving Statement 2, we show something
stronger:\newline\textit{Statement 3:} If $\sigma$ is a stable chip
configuration, then every non-sink vertex fires at most once in the
stabilization of $\sigma+\beta$.\newline\textit{Proof of Statement 3:} Assume
the contrary. Then, there exists a stable chip configuration $\sigma$ such
that some non-sink vertex fires more than once in the stabilization of
$\sigma+\beta$. Consider such a $\sigma$. Let $v_{1},v_{2},...,v_{n}$ be the
vertices fired in the stabilization of $\sigma+\beta$ (in this order). Then,
there exists some $j$ such that some $i