\documentclass[12pt,final,notitlepage,onecolumn]{article}%
\usepackage[all,cmtip]{xy}
\usepackage{lscape}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{color}
\usepackage{amsmath}
\usepackage{hyperref}
%BeginMSIPreambleData
\providecommand{\U}[1]{\protect\rule{.1in}{.1in}}
%EndMSIPreambleData
\newcommand{\id}{\operatorname*{id}}
\definecolor{violet}{RGB}{143,0,255}
\definecolor{forestgreen}{RGB}{34, 100, 34}
\voffset=-2.5cm
\hoffset=-2.5cm
\setlength\textheight{24cm}
\setlength\textwidth{15.5cm}
\begin{document}
\begin{center}
\textbf{A Course in Combinatorial Optimization}\\
\textit{Alexander Schrijver}\\
\url{http://homepages.cwi.nl/~lex/files/dict.pdf}\\
version March 23rd, 2017\\[2ex]
\textbf{Errata and comments (Darij Grinberg)}
\bigskip
\end{center}
I have read three chapters of these notes: 1, 2 and 10, as well as sporadic
pieces of other chapters. Most of my comments on chapters 1 and 2 are nitpicks
(these seem to have undergone some thorough error hunting already); some are
probably misunderstandings on my side. I hope the comments on chapter 10 are
more useful.
\section*{Chapter 1}
\begin{itemize}
\item \textbf{Proof of Theorem 1.3:} At the beginning of the proof, it
wouldn't harm to add something like this (hopefully you can express it more
succinctly): ``Note that, for every vertex $v\in V$, the value $f\left(
v\right) $ never increases during the course of the algorithm; it can only
stay constant or decrease. Besides, if $u\in V$ is any vertex, then the value
$f\left( u\right) $ does not change during the $u$-iteration (because we
cannot have $f\left( u\right) >f\left( u\right) +l\left( a\right) $ for
any arc $a$). (Here, for any vertex $p\in V$, the $p$\textit{-iteration} means
the iteration of (3) in which the vertex $p$ is chosen as $u$; this is the
iteration which results in the removal of $p$ from $U$.) Consequently, if
$u\in V$ is any vertex, then, immediately after the $u$-iteration, we have
$f\left( v\right) \leq f\left( u\right) +l\left( a\right) $ for every
$a=\left( u,v\right) \in A$.'' These statements are obvious, but they are
tacitly used below.
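To make the monotonicity claim concrete, here is a quick Python sketch of the algorithm from the proof (entirely my own code, not the notes' pseudocode; the names \texttt{f}, \texttt{l}, \texttt{U} mirror the notes' notation, everything else is made up). It records every value taken by each $f\left( v\right) $ and confirms on a small example that these values only ever decrease:

```python
# A sketch of the algorithm of Theorem 1.3 (Dijkstra); my own code, not the
# notes' pseudocode.  f[v] is the tentative label, l[(u, v)] the length of
# arc (u, v), U the set of not-yet-processed vertices.  We record every
# value that f(v) takes, to confirm that f(v) never increases.
import math

def dijkstra(V, A, l, s):
    f = {v: math.inf for v in V}
    f[s] = 0
    U = set(V)
    history = {v: [f[v]] for v in V}  # all values f(v) takes, in order
    while U:
        u = min(U, key=lambda v: f[v])  # the "u-iteration" for this u
        U.remove(u)
        for (a, v) in A:
            if a == u and f[u] + l[(a, v)] < f[v]:
                f[v] = f[u] + l[(a, v)]
                history[v].append(f[v])
        # note: f(u) itself is never changed during its own iteration
    return f, history

V = {1, 2, 3, 4}
A = {(1, 2), (1, 3), (2, 3), (2, 4), (3, 4)}
l = {(1, 2): 1, (1, 3): 4, (2, 3): 2, (2, 4): 5, (3, 4): 1}
f, history = dijkstra(V, A, l, 1)
# every history list is weakly decreasing, as claimed above
monotone = all(h == sorted(h, reverse=True) for h in history.values())
```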
\item \textbf{Proof of Theorem 1.3:} After ``If $i>0$, then (as $v_{i-1}\in
V\setminus U$)'', add ``the following inequality must have been valid directly
after the $v_{i-1}$-iteration''. In fact, it isn't a priori clear that
$f\left( v_{i-1}\right) \leq f\left( v_{i}\right) +l\left( v_{i-1}%
,v_{i}\right) $ is true immediately after the $u$-iteration, but it is
immediately clear that $f\left( v_{i-1}\right) \leq f\left( v_{i}\right)
+l\left( v_{i-1},v_{i}\right) $ is true immediately after the $v_{i-1}$-iteration.
After (4), add the following sentence: ``Since the value $f\left(
v_{i}\right) $ never increases during the course of the algorithm, this
yields that $f\left( v_{i}\right) \leq\operatorname*{dist}\left(
v_{i}\right) $ also holds immediately after the $u$-iteration.''
\item \textbf{Proof of Theorem 1.5:} In footnote $^{3}$, replace ``set'' by ``multiset''.
\item \textbf{Proof of Theorem 1.6:} Replace ``$2i+1\leq j\leq2i+2$'' by
``$2i\leq j\leq2i+1$''. Also, replace ``While $i>0$'' by ``While $i>1$'', and
replace ``$j:=\left\lfloor \dfrac{i-1}{2}\right\rfloor $'' by
``$j:=\left\lfloor \dfrac{i}{2}\right\rfloor $''. (The indexing of your heap
starts with $1$, not with $0$.)
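For concreteness, the corrected index arithmetic for a $1$-based binary heap (this little Python sketch is mine, not from the notes): the children of position $i$ are the positions $j$ with $2i\leq j\leq2i+1$, and the parent of position $i>1$ is $\left\lfloor i/2\right\rfloor $.

```python
# 1-based binary heap index arithmetic (my own illustration, matching the
# corrections above): children of position i are the positions j with
# 2i <= j <= 2i + 1, and the parent of position i > 1 is floor(i/2).
def children(i, n):
    """Child positions of position i in a heap stored at positions 1..n."""
    return [j for j in (2 * i, 2 * i + 1) if j <= n]

def parent(i):
    """Parent position of position i (only meaningful for i > 1)."""
    return i // 2

# consistency: i is the parent of exactly the positions children(i, n) lists
n = 10
consistent = all(parent(j) == i
                 for i in range(1, n + 1)
                 for j in children(i, n))
```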
\item \textbf{Proof of Theorem 1.8:} In (13), replace ``number of calls of
make root $=\operatorname*{decr}\left( f\left( u\right) \right)
+\operatorname*{decr}\left( T\right) $'' by ``number of calls of make root
$\leq\operatorname*{decr}\left( f\left( u\right) \right)
+\operatorname*{decr}\left( T\right) $''. In fact, while it is true that
every call of make root immediately succeeds either a decrease of some
$f\left( u\right) $ or a decrease of $T$, it can happen that $T$ decreases
without make root being called (namely, this happens if we delete a $u$
minimizing $f\left( u\right) $, but this $u$ happens to belong to $T$).
\item \textbf{Proof of Theorem 1.8:} You write: ``since we increase $T$ at
most once after we have decreased some $f\left( u\right) $''. I think this
could better be replaced by ``since we increase $T$ at most once after each
decrease of some $f\left( u\right) $''; otherwise it sounds as if, once
$f\left( u\right) $ has been decreased for the first time, $T$ can't
increase twice anymore.
\item \textbf{Proof of Theorem 1.8:} I don't understand the steps of (14).
Here is my estimation for the number of calls of repair, with some
explanations that hopefully make it clear what I am doing:
First, let me make two conventions:

-- When the set $F$ decreases by $k$ arcs (for $k>1$), we don't consider this
as one decrease of $F$, but as $k$ decreases of $F$.

-- When a $u$ minimizing $f\left( u\right) $ is being deleted, the root $u$
is lost but every child of $u$ becomes a root. We consider this as a decrease
of $R$ by $1$ root followed by an increase of $R$ by $k$ roots (where $k$ is
the number of children of $u$), but \textbf{not} as an increase of $R$ by
$k-1$ roots.
Here are the three possible ways in which the algorithm can call repair:

- The algorithm calls repair when a vertex $u$ minimizing $f\left( u\right)
$ and having at least one child is being deleted; in this case, repair is
being called $k$ times (where $k$ is the number of children of $u$). The set
$F$ decreases by $k$ arcs (namely, the arcs connecting the now-deleted vertex
$u$ to its children).\footnote{Also, the set $R$ (the set of roots) decreases
by $1$ root (the root $u$) and then increases by $k$ roots, but we don't care
about this.}

- The algorithm also can call repair from inside a repair subroutine. In this
case, repair is being called directly after a root has been grafted on another
root; thus, repair is being called directly after the set $R$ of roots has
decreased by $1$ root (because the root that has been grafted on another root
is no longer a root after this).

- The algorithm also can call repair from inside a make root subroutine. In
this case, repair is being called directly after an arc has been removed;
thus, repair is being called directly after the set $F$ of arcs has decreased
by $1$ arc.
There are no other circumstances under which repair can be called in the
algorithm. Thus, each calling of repair during the algorithm corresponds to a
decrease of $F$ or a decrease of $R$ (although this is not a 1-to-1
correspondence). Thus,%
\begin{align*}
& \left( \text{number of calls of repair}\right) \\
& \leq\underbrace{\operatorname*{decr}\left( R\right) }_{\leq
\operatorname*{incr}\left( R\right) +p}+\operatorname*{decr}\left( F\right)
\\
& \leq\operatorname*{incr}\left( R\right) +p+\operatorname*{decr}\left(
F\right) =\operatorname*{decr}\left( F\right) +p+\operatorname*{decr}%
\left( F\right) \\
& \ \ \ \ \ \ \ \ \ \ \left(
\begin{array}
[c]{c}%
\text{because every increase of the set }R\text{ of roots immediately follows
a decrease}\\
\text{of the set }F\text{ of arcs, and vice versa, so that }%
\operatorname*{incr}\left( R\right) =\operatorname*{decr}\left( F\right)
\end{array}
\right) \\
& =2\operatorname*{decr}\left( F\right) +p\\
& \leq2\left( n\left( 1+2\log p\right) +\left( \text{number of calls of
make root}\right) \right) +p\\
& \ \ \ \ \ \ \ \ \ \ \left(
\begin{array}
[c]{c}%
\text{because the set }F\text{ of arcs decreases by at most }1+2\log p\text{
elements every time we}\\
\text{delete a }u\text{ minimizing }f\left( u\right) \text{ (because we lose
an arc for every child of }u\text{, and}\\
\text{by Theorem 1.7 we know that }u\text{ has at most }1+2\log p\text{
children), and decreases}\\
\text{by at most }1\text{ element every time we call the make root subroutine,
and decreases}\\
\text{never else, so we have}\\
\operatorname*{decr}\left( F\right) \leq n\left( 1+2\log p\right) +\left(
\text{number of calls of make root}\right)
\end{array}
\right) \\
& =2n\left( 1+2\log p\right) +2\underbrace{\left( \text{number of calls of
make root}\right) }_{\substack{\leq2m+p\\\text{(by (13))}}}+p\\
& \leq2n\left( 1+2\log p\right) +2\left( 2m+p\right) +p.
\end{align*}
This is almost the same as what you get, except for an additional $2n$ addend.
Have I made any mistakes here?
Note that I don't think we have $\left( \text{number of calls of
repair}\right) =\operatorname*{decr}\left( R\right) +\operatorname*{decr}%
\left( F\right) $ all the time. In fact, when we delete a $u$ minimizing
$f\left( u\right) $, and this $u$ happens to have no children, then $R$
decreases (since we lose the root $u$) but we don't call repair.
\item \textbf{Proof of Theorem 1.8:} I don't see what you mean by ``deciding
calling make root or repair''. Don't you just want to say ``calling make root
or repair''?
(Anyway, why does a call of make root or repair take time $O\left( 1\right)
$ ? Can we really look up an arbitrary value of the function $b$ or $l$ in
$O\left( 1\right) $ time? And delete a vertex from a linked list in
$O\left( 1\right) $ time?)
\item \textbf{Application 1.4:} On page 16, replace ``If activity $j$ has been
done after activity $i$'' by ``If activity $j$ has to be done after activity
$i$''. Or am I misunderstanding you?
\item \textbf{Application 1.5:} In (22), what do you mean by ``the maximum
liter price in village $Y$'' ? Probably ``the maximum liter price in village
$Y$ over all price equilibria''?
\item \textbf{Exercise 1.3:} You use the word ``distance'' in two different
meanings in this exercise. (You use it to mean both the length of the direct
route and the length of the shortest path.)
\item \textbf{Proof of Theorem 1.12:} A point that you miss in the analysis
of the running time of the Dijkstra-Prim algorithm (and also of the Dijkstra
algorithm in the proof of Corollary 1.8a) is the running time required to
build the initial Fibonacci heap at the start of the algorithm. This isn't
very difficult\footnote{I think it is enough to start with a forest with no
edges, and then call repair over and over again until we get a Fibonacci
forest. We can't make use of the function $b$ yet, but we can keep a list of
roots $u$ with their values of $d^{\operatorname*{out}}\left( u\right) $,
sorted by the size of $d^{\operatorname*{out}}\left( u\right) $.} and
doesn't take very long\footnote{I think it takes somewhere between $O\left(
\left\vert V\right\vert \right) $ and $O\left( \left\vert V\right\vert
\log\left\vert V\right\vert \right) $ in Corollary 1.8a and somewhere between
$O\left( \left\vert V\right\vert \right) $ and $O\left( \left\vert
V\right\vert \log\left\vert V\right\vert \right) $ in Theorem 1.12, at least
if you start the Dijkstra-Prim algorithm with $U_{0}=\varnothing$ and all
$f\left( v\right) =\infty$ rather than with $U_{1}=\left\{ v_{1}\right\}
$}, so it doesn't change the end result, but I think it should be mentioned.
\item \textbf{Paragraph on page 20 directly above Corollary 1.12a.} Replace
``Choose an edge $e_{k+1}$'' by ``Choose an edge $e_{k+1}\notin\left\{
e_{1},e_{2},...,e_{k}\right\} $''. I know, this can hardly be misunderstood
either way, but for the sake of formal correctness I think the change should
be done.
\item \textbf{Exercise 1.9:} It might help to point out that ``forest'' means
``spanning forest'', and $\left\vert F\right\vert $ means the number of edges
of $F$.
\end{itemize}
\section*{Chapter 2}
\begin{itemize}
\item \textbf{Second line of \S 2.1:} Replace ``with any two points in $C$''
by ``with any two points $x,y$ in $C$''.
\item \textbf{Between (1) and (2):} Replace ``A basic property of closed
convex sets is'' by ``A basic property of closed convex sets $C$ is''.
\item \textbf{Proof of Theorem 2.2:} The Sufficiency part of this proof could
be made a bit easier to follow if you would replace ``there exist points $x$
and $y$ in $P$ such that $x\neq z\neq y$ and $z=\dfrac{1}{2}\left(
x+y\right) $'' by ``there exist points $x$ and $y$ in $P$ and a $\lambda$
with $0<\lambda<1$ such that $x\neq z$, $y\neq z$ and $z=\lambda x+\left(
1-\lambda\right) y$'', and replace ``$y-z=-\left( x-z\right) $'' by
``$y-z=-\dfrac{\lambda}{1-\lambda}\left( x-z\right) $ and $-\dfrac{\lambda
}{1-\lambda}<0$''. No reason to use midpoints here...
\item \textbf{Between the proof of Theorem 2.2 and Theorem 2.3:} Some absolute
nitpicking: When you write ``$A_{z}\neq A_{z^{\prime}}$'', you actually mean
``$\left\{ i\in\left\{ 1,2,...,m\right\} \ \mid\ a_{i}z=b_{i}\right\}
\neq\left\{ i\in\left\{ 1,2,...,m\right\} \ \mid\ a_{i}z^{\prime}%
=b_{i}\right\} $''. (Though this is actually equivalent to $A_{z}\neq
A_{z^{\prime}}$, but the equivalence is not completely trivial.) Also,
``collections of subrows of $A$'' should be ``sets of rows of $A$'' (since
``collection'' could be misunderstood to mean ``multiset'', and ``subrow'' is
probably a typo for ``row'').
\item \textbf{Proof of Theorem 2.4:} You write: ``To see the inclusion
$\supseteq$ in (25)''. You mean (27), not (25).
\item \textbf{Proof of Theorem 2.4:} Replace ``$c=\mu_{1}y_{1}+\cdots\mu
_{s}y_{s}$'' by ``$c=\mu_{1}y_{1}+\cdots+\mu_{s}y_{s}$''. (Actually, it would
be better to write ``$c=\sum\limits_{j=1}^{s}\mu_{j}y_{j}$'' instead of
``$c=\mu_{1}y_{1}+\cdots+\mu_{s}y_{s}$'' here, and similarly write
``$\sum\limits_{j=1}^{s}\mu_{j}=1$'' instead of ``$\mu_{1}+\cdots+\mu_{s}%
=1$''; in fact, you use summation signs in the formula (28) below.)
\item \textbf{Convex cones:} In (29), replace ``$\lambda_{1}x_{1}%
+\cdots\lambda_{t}x_{t}$'' by ``$\lambda_{1}x_{1}+\cdots+\lambda_{t}x_{t}$''.
\item \textbf{Exercise 2.12:} I don't see how to directly use the hint that
you give to this exercise. In my opinion, $\operatorname*{cone}X$ need not be closed.
Here is a sketch of my solution to Exercise 2.12:

\textit{Solution to Exercise 2.12 (sketched):}

$\Longrightarrow:$ Let $P=\left\{ x\in\mathbb{R}^{n}\ \mid\ Ax\leq b\right\}
$ be a polyhedron, with $A$ being an $m\times n$ matrix and $b$ being a vector
in $\mathbb{R}^{m}$. We want to write $P$ in the form $Q+C$ with $Q$ being a
polytope and $C$ being a finitely generated cone.
First of all, we can WLOG assume that $\operatorname*{Ker}A=0$%
.\ \ \ \ \footnote{\textit{Proof.} Assume that $\operatorname*{Ker}A\neq0$.
Let $V$ be a subspace of $\mathbb{R}^{n}$ which satisfies $\mathbb{R}%
^{n}=\left( \operatorname*{Ker}A\right) \oplus V$. Let $P^{\prime}=V\cap P$.
Then, $P^{\prime}$ is a polyhedron in $V$. Moreover, it is easy to see that
$P=P^{\prime}+\operatorname*{Ker}A$ (since every $p\in P$ and every
$w\in\operatorname*{Ker}A$ satisfy $p+w\in P$). By an induction, we can assume
that the $\Longrightarrow$ direction of Exercise 2.12 is already proven for
the polyhedron $P^{\prime}$ (because $P^{\prime}$ lies in a vector space of
smaller dimension than $\mathbb{R}^{n}$, so we can do an induction over the
dimension). In other words, we can write $P^{\prime}$ in the form $Q^{\prime
}+C^{\prime}$ with $Q^{\prime}$ being a polytope in $V$ and $C^{\prime}$ being
a finitely generated cone in $V$. Thus, $P=P^{\prime}+\operatorname*{Ker}A$
becomes $P=Q^{\prime}+C^{\prime}+\operatorname*{Ker}A$. Since $Q^{\prime}$ is
a polytope, while $C^{\prime}+\operatorname*{Ker}A$ is a finitely generated
cone (in fact, if we write $C^{\prime}$ as $\operatorname*{cone}\left\{
c_{1},c_{2},...,c_{i}\right\} $ for some points $c_{1}$, $c_{2}$, $...$,
$c_{i}$, and if we let $\left( w_{1},w_{2},...,w_{j}\right) $ be a basis of
the $\mathbb{R}$-vector space $\operatorname*{Ker}A$, then $C^{\prime
}+\operatorname*{Ker}A=\operatorname*{cone}\left\{ c_{1},c_{2},...,c_{i}%
,w_{1},w_{2},...,w_{j},-w_{1},-w_{2},...,-w_{j}\right\} $), this shows that
$P$ can be written in the form $Q+C$ with $Q$ being a polytope and $C$ being a
finitely generated cone, qed. In other words, the case when
$\operatorname*{Ker}A\neq0$ has been reduced to a smaller case.} Assume this.
Now, let $x_{1},x_{2},...,x_{t}$ be the vertices of $P$. We claim that%
\begin{equation}
P=\operatorname*{conv}.\operatorname*{hull}\left\{ x_{1},...,x_{t}\right\}
+\left\{ x\in\mathbb{R}^{n}\ \mid\ Ax\leq0\right\} . \label{2.12.pf.1}%
\end{equation}
Once (\ref{2.12.pf.1}) is proven, it will clearly follow that $P$ can be
written in the form $Q+C$ with $Q$ being a polytope and $C$ being a finitely
generated cone (because $\operatorname*{conv}.\operatorname*{hull}\left\{
x_{1},...,x_{t}\right\} $ is a polytope, and because $\left\{ x\in
\mathbb{R}^{n}\ \mid\ Ax\leq0\right\} $ is a finitely generated cone
(according to Exercise 2.11)). Thus, in order to prove the $\Longrightarrow$
direction of Exercise 2.12, we only need to establish (\ref{2.12.pf.1}).
To prove (\ref{2.12.pf.1}), we need to show that%
\begin{equation}
\text{if }z\in P\text{ then }z\in\operatorname*{conv}.\operatorname*{hull}%
\left\{ x_{1},...,x_{t}\right\} +\left\{ x\in\mathbb{R}^{n}\ \mid
\ Ax\leq0\right\} . \label{2.12.pf.2}%
\end{equation}
To show this, we proceed as in the proof of Theorem 2.3 (thus, by induction
over $n-\operatorname*{rank}\left( A_{z}\right) $), with the following
modification: The numbers%
\[
\mu_{0}:=\max\left\{ \mu\ \mid\ z+\mu c\in P\right\}
\ \ \ \ \ \ \ \ \ \ \text{and}\ \ \ \ \ \ \ \ \ \ \nu_{0}:=\max\left\{
\nu\ \mid\ z-\nu c\in P\right\}
\]
don't necessarily both exist anymore, since $P$ is no longer bounded. Thus, we
must be in one of the following cases:

\textit{Case 1:} Both numbers $\mu_{0}$ and $\nu_{0}$ exist. In this case,
proceed exactly as in the proof of Theorem 2.3.

\textit{Case 2:} The number $\mu_{0}$ exists, but the number $\nu_{0}$
doesn't. In this case, since $\nu_{0}$ doesn't exist, there exist arbitrarily
large $\nu\in\mathbb{R}$ satisfying $z-\nu c\in P$. In other words, there
exist arbitrarily large $\nu\in\mathbb{R}$ satisfying $A\left( z-\nu
c\right) \leq b$. This quickly yields that $Ac\geq0$, hence $A\left(
-c\right) \leq0$, and thus $-c\in\left\{ x\in\mathbb{R}^{n}\ \mid
\ Ax\leq0\right\} $, so that $-\mu_{0}c\in\left\{ x\in\mathbb{R}^{n}%
\ \mid\ Ax\leq0\right\} $ (since $\mu_{0}\geq0$). Now, let $x:=z+\mu_{0}c$.
Then, $\operatorname*{rank}\left( A_{x}\right) >\operatorname*{rank}\left(
A_{z}\right) $ (this is proven as in the proof of Theorem 2.3), thus
$x\in\operatorname*{conv}.\operatorname*{hull}\left\{ x_{1},...,x_{t}%
\right\} +\left\{ x\in\mathbb{R}^{n}\ \mid\ Ax\leq0\right\} $ (by the
induction hypothesis), and thus%
\begin{align*}
x+\left( -\mu_{0}c\right) & \in\operatorname*{conv}.\operatorname*{hull}%
\left\{ x_{1},...,x_{t}\right\} +\left\{ x\in\mathbb{R}^{n}\ \mid
\ Ax\leq0\right\} +\underbrace{\left( -\mu_{0}c\right) }_{\in\left\{
x\in\mathbb{R}^{n}\ \mid\ Ax\leq0\right\} }\\
& \subseteq\operatorname*{conv}.\operatorname*{hull}\left\{ x_{1}%
,...,x_{t}\right\} +\underbrace{\left\{ x\in\mathbb{R}^{n}\ \mid
\ Ax\leq0\right\} +\left\{ x\in\mathbb{R}^{n}\ \mid\ Ax\leq0\right\}
}_{\subseteq\left\{ x\in\mathbb{R}^{n}\ \mid\ Ax\leq0\right\} }\\
& \subseteq\operatorname*{conv}.\operatorname*{hull}\left\{ x_{1}%
,...,x_{t}\right\} +\left\{ x\in\mathbb{R}^{n}\ \mid\ Ax\leq0\right\} .
\end{align*}
Since $x+\left( -\mu_{0}c\right) =x-\mu_{0}c=z$ (since $x=z+\mu_{0}c$), this
rewrites as
\[
z\in\operatorname*{conv}.\operatorname*{hull}\left\{ x_{1},...,x_{t}\right\}
+\left\{ x\in\mathbb{R}^{n}\ \mid\ Ax\leq0\right\} ,
\]
and this completes our induction step.

\textit{Case 3:} The number $\nu_{0}$ exists, but the number $\mu_{0}$
doesn't. This case is similar to Case 2.

\textit{Case 4:} Neither of the numbers $\mu_{0}$ and $\nu_{0}$ exists. In
this case, since $\nu_{0}$ doesn't exist, there exist arbitrarily large
$\nu\in\mathbb{R}$ satisfying $z-\nu c\in P$. In other words, there exist
arbitrarily large $\nu\in\mathbb{R}$ satisfying $A\left( z-\nu c\right) \leq
b$. This quickly yields that $Ac\geq0$. Similarly, the non-existence of
$\mu_{0}$ yields $Ac\leq0$. From $Ac\geq0$ and $Ac\leq0$, it follows that
$Ac=0$, thus $c\in\operatorname*{Ker}A=0$, contradicting $c\neq0$. Hence, Case
4 cannot occur.
Altogether, in each case (except for Case 4, which cannot occur), we have shown
that $z\in\operatorname*{conv}.\operatorname*{hull}\left\{ x_{1}%
,...,x_{t}\right\} +\left\{ x\in\mathbb{R}^{n}\ \mid\ Ax\leq0\right\} $, so
the induction is complete. The $\Longrightarrow$ part of Exercise 2.12 is thus proven.
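As a quick sanity check of (\ref{2.12.pf.1}), here is a toy example of my own in Python: for the polyhedron $P=\left\{ \left( x,y\right) \in\mathbb{R}^{2}\ \mid\ x\geq0,\ y\geq0,\ x+y\geq1\right\} $ (whose vertices are $\left( 1,0\right) $ and $\left( 0,1\right) $, and whose cone $\left\{ x\ \mid\ Ax\leq0\right\} $ is the nonnegative quadrant), every sampled point of the form (convex combination of the vertices) $+$ (point of the cone) does lie in $P$; this is the easy inclusion in (\ref{2.12.pf.1}), the reverse one being what the induction establishes.

```python
# A toy check of the decomposition (my own example): P = {(x, y) in R^2 :
# x >= 0, y >= 0, x + y >= 1}, written as Ax <= b with
# A = [(-1, 0), (0, -1), (-1, -1)] and b = (0, 0, -1).
# Vertices of P: (1, 0) and (0, 1); the cone {x : Ax <= 0} is the
# nonnegative quadrant.  Every sampled point of the form
# (convex combination of the vertices) + (cone point) lies in P.
import random

A = [(-1, 0), (0, -1), (-1, -1)]
b = (0, 0, -1)

def in_P(p):
    return all(r[0] * p[0] + r[1] * p[1] <= bi + 1e-9
               for r, bi in zip(A, b))

random.seed(0)
all_inside = True
for _ in range(1000):
    t = random.random()                     # convex coefficient
    q = (t, 1 - t)                          # t*(1,0) + (1-t)*(0,1)
    cpt = (random.uniform(0, 5), random.uniform(0, 5))  # cone point
    if not in_P((q[0] + cpt[0], q[1] + cpt[1])):
        all_inside = False
```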
$\Longleftarrow:$ Let $P$ be a subset of $\mathbb{R}^{n}$ which can be written
in the form $P=Q+C$ with $Q$ being a polytope and $C$ being a finitely
generated cone. We want to prove that $P$ is a polyhedron.
Write $Q$ in the form $Q=\operatorname*{conv}.\operatorname*{hull}\left\{
q_{1},q_{2},...,q_{i}\right\} $. Write $C$ in the form
$C=\operatorname*{cone}\left\{ c_{1},c_{2},...,c_{j}\right\} $. For any
subset $S$ of $\mathbb{R}^{n}$ and any $\lambda\in\mathbb{R}$, let $\left(
\begin{array}
[c]{c}%
S\\
\lambda
\end{array}
\right) $ denote the subset $\left\{ \left(
\begin{array}
[c]{c}%
s\\
\lambda
\end{array}
\right) \in\mathbb{R}^{n+1}\ \mid\ s\in S\right\} $ of $\mathbb{R}^{n+1}$.
Note that $\left(
\begin{array}
[c]{c}%
Q\\
1
\end{array}
\right) $ is a convex set and $\left(
\begin{array}
[c]{c}%
C\\
0
\end{array}
\right) $ is a convex cone. It is now easy to see that%
\[
\operatorname*{cone}\left(
\begin{array}
[c]{c}%
Q\\
1
\end{array}
\right) +\left(
\begin{array}
[c]{c}%
C\\
0
\end{array}
\right) =\operatorname*{cone}\left\{ \left(
\begin{array}
[c]{c}%
q_{1}\\
1
\end{array}
\right) ,\left(
\begin{array}
[c]{c}%
q_{2}\\
1
\end{array}
\right) ,...,\left(
\begin{array}
[c]{c}%
q_{i}\\
1
\end{array}
\right) ,\left(
\begin{array}
[c]{c}%
c_{1}\\
0
\end{array}
\right) ,\left(
\begin{array}
[c]{c}%
c_{2}\\
0
\end{array}
\right) ,...,\left(
\begin{array}
[c]{c}%
c_{j}\\
0
\end{array}
\right) \right\} .
\]
\footnote{To make this completely correct, we have to modify the definition of
a cone so as to require that any cone contains $0$. The only difference that
this makes is that $\varnothing$ no longer becomes a cone with this modified
definition, so that $\operatorname*{cone}\varnothing$ is no longer
$\varnothing$ but rather $0$.} Thus, $\operatorname*{cone}\left(
\begin{array}
[c]{c}%
Q\\
1
\end{array}
\right) +\left(
\begin{array}
[c]{c}%
C\\
0
\end{array}
\right) $ is a finitely generated cone, hence (by Exercise 2.11) an
intersection of finitely many linear halfspaces. In particular, this yields
that $\operatorname*{cone}\left(
\begin{array}
[c]{c}%
Q\\
1
\end{array}
\right) +\left(
\begin{array}
[c]{c}%
C\\
0
\end{array}
\right) $ is a polyhedron. Thus,%
\[
\left\{ x\in\operatorname*{cone}\left(
\begin{array}
[c]{c}%
Q\\
1
\end{array}
\right) +\left(
\begin{array}
[c]{c}%
C\\
0
\end{array}
\right) \ \mid\ \left( \text{the }\left( n+1\right) \text{-th coordinate
of }x\text{ is }1\right) \right\}
\]
is also a polyhedron (because it is the intersection of the polyhedron
$\operatorname*{cone}\left(
\begin{array}
[c]{c}%
Q\\
1
\end{array}
\right) +\left(
\begin{array}
[c]{c}%
C\\
0
\end{array}
\right) $ with an affine hyperplane). But since it is easy to see that%
\[
\left\{ x\in\operatorname*{cone}\left(
\begin{array}
[c]{c}%
Q\\
1
\end{array}
\right) +\left(
\begin{array}
[c]{c}%
C\\
0
\end{array}
\right) \ \mid\ \left( \text{the }\left( n+1\right) \text{-th coordinate
of }x\text{ is }1\right) \right\} =\left(
\begin{array}
[c]{c}%
Q+C\\
1
\end{array}
\right) ,
\]
this yields that $\left(
\begin{array}
[c]{c}%
Q+C\\
1
\end{array}
\right) $ is a polyhedron. In other words, $Q+C$ is a polyhedron. Since
$Q+C=P$, this yields that $P$ is a polyhedron. This solves the $\Longleftarrow
$ direction of Exercise 2.12.
This was a nontrivial proof, and your hint helped me only in the
$\Longleftarrow$ direction, and even that wasn't a simple application of the
hint (since $\operatorname*{cone}\left(
\begin{array}
[c]{c}%
Q\\
1
\end{array}
\right) +\left(
\begin{array}
[c]{c}%
C\\
0
\end{array}
\right) $ is not the same as your $\operatorname*{cone}X$). Did I
misunderstand something?
\item \textbf{Exercise 2.13:} You could add (for the sake of clarity) a remark
that if $C$ is a convex cone, then $C^{\ast}=\left\{ y\in\mathbb{R}^{n}%
\ \mid\ x^{T}y\leq0\text{ for each }x\in C\right\} $.
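To illustrate this remark with a concrete example of my own: for $C=\operatorname*{cone}\left\{ \left( 1,0\right) ,\left( 0,1\right) \right\} $ (the nonnegative quadrant), the cone $C^{\ast}$ is the nonpositive quadrant; since $x^{T}y\leq0$ for all $x\in C$ is equivalent to $g^{T}y\leq0$ for every generator $g$ of $C$, membership in $C^{\ast}$ can be tested on the generators alone:

```python
# Polar cone illustration (my own example, not from the notes).  For a
# finitely generated cone C = cone{g_1, ..., g_k}, a vector y lies in
# C* = {y : x^T y <= 0 for all x in C}  iff  g^T y <= 0 for every
# generator g (since every x in C is a nonnegative combination of them).
generators = [(1, 0), (0, 1)]   # C = the nonnegative quadrant

def in_polar(y):
    return all(g[0] * y[0] + g[1] * y[1] <= 0 for g in generators)

# C* should be the nonpositive quadrant:
inside = in_polar((-2, -3))      # interior point of C*
on_boundary = in_polar((0, -1))  # boundary point of C*
outside = in_polar((1, -5))      # has a positive coordinate, not in C*
```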
\item \textbf{Exercises to \S 2.2:} I am somewhat surprised that you don't have
this exercise:
\textbf{2.14a.} Prove that:
(i) If $P$ is a polyhedron in $\mathbb{R}^{n}$ and $\phi$ is a linear map
$\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}$, then $\phi\left( P\right) $ is a polyhedron.
(ii) If $P$ is a polytope in $\mathbb{R}^{n}$ and $\phi$ is a linear map
$\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}$, then $\phi\left( P\right) $ is a polytope.
(iii) If $P$ is a polyhedron in $\mathbb{R}^{n}$ and $\phi$ is a linear map
$\mathbb{R}^{m}\rightarrow\mathbb{R}^{n}$, then $\phi^{-1}\left( P\right) $
is a polyhedron.
(iv) If $P$ is a polyhedron in $\mathbb{R}^{n}$, and $Q$ is a polyhedron in
$\mathbb{R}^{m}$, then $P\times Q$ is a polyhedron in $\mathbb{R}^{n}%
\times\mathbb{R}^{m}\cong\mathbb{R}^{n+m}$.
(v) If $P$ is a polytope in $\mathbb{R}^{n}$, and $Q$ is a polytope in
$\mathbb{R}^{m}$, then $P\times Q$ is a polytope in $\mathbb{R}^{n}%
\times\mathbb{R}^{m}\cong\mathbb{R}^{n+m}$.
(vi) If $P$ and $Q$ are two polyhedra in $\mathbb{R}^{n}$, then $P+Q$ and
$P\cap Q$ are polyhedra.
(vii) If $P$ and $Q$ are two polytopes in $\mathbb{R}^{n}$, then $P+Q$ and
$P\cap Q$ are polytopes.
(viii) If $P$ and $Q$ are two polyhedra in $\mathbb{R}^{n}$ and $\mathbb{R}%
^{m}$, respectively, then $\left\{ f\in\operatorname*{Hom}%
\nolimits_{\mathbb{R}}\left( \mathbb{R}^{n},\mathbb{R}^{m}\right)
\ \mid\ f\left( P\right) \subseteq Q\right\} $ is a polyhedron in
$\operatorname*{Hom}\nolimits_{\mathbb{R}}\left( \mathbb{R}^{n}%
,\mathbb{R}^{m}\right) \cong\mathbb{R}^{n\times m}$.
(ix) If $P$ and $Q$ are two polytopes in $\mathbb{R}^{n}$ and $\mathbb{R}^{m}%
$, respectively, then $\operatorname*{conv}.\operatorname*{hull}\left\{
p\otimes q\ \mid\ p\in P,\ q\in Q\right\} $ is a polytope in $\mathbb{R}%
^{n}\otimes_{\mathbb{R}}\mathbb{R}^{m}\cong\mathbb{R}^{n\times m}$.
(This exercise can be easily solved using Exercise 2.12. Note that the
analogue of part (ix) for polyhedra doesn't directly hold -- see
\newline\url{http://mathoverflow.net/questions/115677} .)
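For instance, part (vii) for $P+Q$ rests on the identity $P+Q=\operatorname*{conv}.\operatorname*{hull}\left\{ p_{i}+q_{j}\right\} $, where $p_{i}$ and $q_{j}$ range over the vertices of $P$ and of $Q$; here is a toy numeric check of this identity in $\mathbb{R}^{1}$ (my own, not from the notes):

```python
# Checking P + Q = conv.hull{p + q : p vertex of P, q vertex of Q} for two
# segments in R^1 (my own toy example, illustrating part (vii)).
import random

P = (0.0, 1.0)       # the segment [0, 1] = conv.hull{0, 1}
Q = (2.0, 5.0)       # the segment [2, 5] = conv.hull{2, 5}
sums = [p + q for p in P for q in Q]     # pairwise sums of vertices
hull = (min(sums), max(sums))            # their convex hull in R^1

# every sampled point of P + Q lies in that hull
random.seed(1)
ok = True
for _ in range(1000):
    p = random.uniform(*P)
    q = random.uniform(*Q)
    if not (hull[0] - 1e-9 <= p + q <= hull[1] + 1e-9):
        ok = False
```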
\item \textbf{Proof of Corollary 2.5b:} Replace ``system of linear
inequalities'' by ``system of linear equations''.
\item \textbf{Proof of Corollary 2.5b:} You write:
``According to Farkas' lemma this implies that there exists a vector $\left(
\begin{array}
[c]{c}%
z\\
\mu
\end{array}
\right) $ so that%
\[
\left(
\begin{array}
[c]{cc}%
A & b\\
0 & 1
\end{array}
\right) \left(
\begin{array}
[c]{c}%
z\\
\mu
\end{array}
\right) \geq\left(
\begin{array}
[c]{c}%
0\\
0
\end{array}
\right) \ \ \ \ \ \ \ \ \ \ \text{and}\ \ \ \ \ \ \ \ \ \ \left(
\begin{array}
[c]{cc}%
c^{T} & \delta
\end{array}
\right) \left(
\begin{array}
[c]{c}%
z\\
\mu
\end{array}
\right) <0.
\]
''
In my opinion, you are going a bit too fast here.\footnote{Here is how I would
argue (immediately after (38)):
\par
``In other words, the following system of linear equations%
\[
\left(
\begin{array}
[c]{cc}%
A & b\\
0 & 1
\end{array}
\right) ^{T}\left(
\begin{array}
[c]{c}%
y\\
\lambda
\end{array}
\right) =\left(
\begin{array}
[c]{c}%
c\\
\delta
\end{array}
\right)
\]
in the variables $y$ and $\lambda$ has no nonnegative solution $\left(
\begin{array}
[c]{c}%
y\\
\lambda
\end{array}
\right) $. According to Theorem 2.5, this implies that there exists a vector
$\left(
\begin{array}
[c]{c}%
z\\
\mu
\end{array}
\right) $ such that%
\[
\left(
\begin{array}
[c]{c}%
z\\
\mu
\end{array}
\right) ^{T}\left(
\begin{array}
[c]{cc}%
A & b\\
0 & 1
\end{array}
\right) ^{T}\geq\left(
\begin{array}
[c]{c}%
0\\
0
\end{array}
\right) \ \ \ \ \ \ \ \ \ \ \text{and}\ \ \ \ \ \ \ \ \ \ \left(
\begin{array}
[c]{c}%
z\\
\mu
\end{array}
\right) ^{T}\left(
\begin{array}
[c]{c}%
c\\
\delta
\end{array}
\right) <0.
\]
Consider this vector. Then, $\left(
\begin{array}
[c]{c}%
z\\
\mu
\end{array}
\right) ^{T}\left(
\begin{array}
[c]{cc}%
A & b\\
0 & 1
\end{array}
\right) ^{T}\geq\left(
\begin{array}
[c]{c}%
0\\
0
\end{array}
\right) $ rewrites as $\left(
\begin{array}
[c]{cc}%
A & b\\
0 & 1
\end{array}
\right) \left(
\begin{array}
[c]{c}%
z\\
\mu
\end{array}
\right) \geq\left(
\begin{array}
[c]{c}%
0\\
0
\end{array}
\right) $, so that $\mu\geq0$ and $Az+b\mu\geq0$. Also, $\left(
\begin{array}
[c]{c}%
z\\
\mu
\end{array}
\right) ^{T}\left(
\begin{array}
[c]{c}%
c\\
\delta
\end{array}
\right) <0$ rewrites as $\left(
\begin{array}
[c]{cc}%
c^{T} & \delta
\end{array}
\right) \left(
\begin{array}
[c]{c}%
z\\
\mu
\end{array}
\right) <0$.''}
\item \textbf{Geometric interpretation of the Duality theorem:} On page 35,
you write: ``Without loss of generality we may assume that the first $k$ rows
of $A$ belong to the matrix $A_{x^{\ast}}$.'' You are not very precise here.
What you really want to assume is that the first $k$ rows of $A$, \textbf{and
no others}, belong to the matrix $A_{x^{\ast}}$.
\item \textbf{Proof of Corollary 2.6a:} There is an alternative, simpler proof
of Corollary 2.6a, which proceeds by applying Theorem 2.6 to $n$, $m$,
$-A^{T}$, $-c$ and $-b$ instead of $m$, $n$, $A$, $b$ and $c$. I am absolutely
not suggesting that you should replace your proof of Corollary 2.6a by this
one, because your proof shows a deeper idea that is used in solving the
exercises, whereas my proof only shows the ``idea'' that $\min$ and $\max$ can
be interchanged at the cost of introducing some minus signs. But I wouldn't
consider it a bad thing if you would demonstrate the idea that you use for
proving Corollary 2.6a on Exercise 2.23 rather than leaving it as an exercise
for the reader. In fact, it seems to me that most of the time when you apply
linear programming duality to combinatorial proofs in your text, you are not
using Theorem 2.6, but rather you are using Exercise 2.23, so it seems that
Exercise 2.23 is the more relevant form of linear programming duality for
combinatorics, and deserves a better place than an exercise. But this is a
subjective opinion of mine.
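As an aside, here is a toy numeric instance of Theorem 2.6 (my own example): for the system $x_{1}\leq1$, $x_{2}\leq1$, $x_{1}+x_{2}\leq1.5$ with objective $c=\left( 1,1\right) ^{T}$, both $\max\left\{ c^{T}x\ \mid\ Ax\leq b\right\} $ and $\min\left\{ y^{T}b\ \mid\ y\geq0,\ y^{T}A=c^{T}\right\} $ equal $1.5$:

```python
# A toy instance of LP duality (my own example, not from the notes):
# max {c^T x : Ax <= b}  vs  min {y^T b : y >= 0, y^T A = c^T}.
import itertools

A = [(1, 0), (0, 1), (1, 1)]
b = (1, 1, 1.5)
c = (1, 1)

def feasible(x):
    return all(r[0] * x[0] + r[1] * x[1] <= bi + 1e-9
               for r, bi in zip(A, b))

# brute-force the primal maximum over a fine grid of [0, 1]^2
grid = [i / 100 for i in range(101)]
primal = max(c[0] * x1 + c[1] * x2
             for x1, x2 in itertools.product(grid, grid)
             if feasible((x1, x2)))

# y = (0, 0, 1) satisfies y >= 0 and y^T A = (0 + 1, 0 + 1) = c^T, so it
# is dual feasible; its value y^T b matches the primal optimum
y = (0.0, 0.0, 1.0)
dual = sum(yi * bi for yi, bi in zip(y, b))
```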
\end{itemize}
\section*{Chapter 4}
\begin{itemize}
\item \textbf{Proof of Theorem 4.1:} After \textquotedblleft there exists for
each $c\in U\cup\left\{ v\right\} $ a $c-T$ path\textquotedblright, add
\textquotedblleft$Q_{c}$\textquotedblright.
\item \textbf{Proof of Theorem 4.1:} Replace \textquotedblleft pairwise
disjoint\textquotedblright\ by \textquotedblleft pairwise
vertex-disjoint\textquotedblright.
\item \textbf{\S 4.1:} You write: \textquotedblleft They can be derived from
the directed case by replacing each undirected edge $uw$ by two opposite arcs
$\left( u,w\right) $ and $\left( w,u\right) $.\textquotedblright
This is not literally true -- at least not for the directed arc-disjoint
version of Menger's theorem (Corollary 4.1b). Indeed, the analogue of
Corollary 4.1b for undirected graphs does not follow directly from Corollary 4.1b. If
$G$ is an undirected graph, and if $D$ is the directed graph obtained from $G$
by replacing each undirected edge $uw$ by two opposite arcs $\left(
u,w\right) $ and $\left( w,u\right) $, then $k$ arc-disjoint paths in $D$
\textbf{need not} directly \textquotedblleft project down to\textquotedblright%
\ $k$ edge-disjoint paths in $G$, because it could happen that one of these
paths uses an arc $\left( u,w\right) $ while another uses the opposite arc
$\left( w,u\right) $. There might well be a fix for this problem, but it is
not obvious to me.
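To make the concern concrete, here is a small example of my own in which two arc-disjoint $s-t$ paths in $D$ use the two opposite arcs coming from one and the same edge of $G$, so that they do not project to edge-disjoint paths:

```python
# My own example illustrating the concern: G has the edges sa, ab, bt, sb,
# at; D replaces each edge uw by the two arcs (u, w) and (w, u).  The two
# s-t paths below are arc-disjoint in D, yet both use the edge ab of G.
edges_G = [{"s", "a"}, {"a", "b"}, {"b", "t"}, {"s", "b"}, {"a", "t"}]
path1 = [("s", "a"), ("a", "b"), ("b", "t")]
path2 = [("s", "b"), ("b", "a"), ("a", "t")]

# every arc used is indeed an arc of D (its underlying edge is in G):
valid = all(set(arc) in edges_G for arc in path1 + path2)
arc_disjoint = not (set(path1) & set(path2))          # disjoint in D
edges1 = {frozenset(arc) for arc in path1}            # project to edges of G
edges2 = {frozenset(arc) for arc in path2}
edge_disjoint = not (edges1 & edges2)                 # they share edge ab
```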
\item \textbf{\S 4.3:} Here you write: \textquotedblleft Moreover, for the
sake of exposition we assume that for no arc $\left( u,v\right) $, also
$\left( v,u\right) $ is an arc\textquotedblright.
I understand why you make this assumption: It is needed for the flow
augmenting algorithm\footnote{Indeed, if you do not make this assumption, then
it can happen that a single $i\in\left\{ 1,2,\ldots,k\right\} $ satisfies
\textbf{both} (14) (i) and (14) (ii) at the same time; in this case, the
definition of the $\sigma_{1},\sigma_{2},\ldots,\sigma_{k}$ in (14) becomes
ambiguous (because it gives two different (non-equivalent) definitions of
$\sigma_{i}$), and the definition (15) of the map $f^{\prime}:A\rightarrow
\mathbb{R}_{+}$ fails to guarantee that $f^{\prime}$ satisfies the flow
conservation law.}. However, let me point out an alternative way to make the
flow augmenting algorithm work, without making this assumption:
Consider Case 1 in the flow augmenting algorithm.
For each $i\in\left\{ 1,2,\ldots,k\right\} $, we color the arc $a_{i}\in
A_{f}$ either in red or in blue (not in both colors simultaneously), in such a
way that the following holds:
\begin{itemize}
\item If $a_{i}$ is colored red, then $a_{i}\in A$ and $c\left( a_{i}\right)
-f\left( a_{i}\right) >0$ holds.
\item If $a_{i}$ is colored blue, then $a_{i}^{-1}\in A$ and $f\left(
a_{i}^{-1}\right) >0$ holds.
\end{itemize}
(Such a coloring clearly exists, because for each $i\in\left\{ 1,2,\ldots
,k\right\} $, at least one of the statements $\left( a_{i}\in A\text{ and
}c\left( a_{i}\right) -f\left( a_{i}\right) >0\right) $ and $\left(
a_{i}^{-1}\in A\text{ and }f\left( a_{i}^{-1}\right) >0\right) $
holds\footnote{This is because $a_{i}\in A_{f}$.}. If only one of them holds,
then the color of $a_{i}$ is uniquely determined; if both of them hold, it can
be chosen at will. Either way, we must choose it once and consider it as fixed
from then on.)
Now, the definition of the numbers $\sigma_{1},\sigma_{2},\ldots,\sigma_{k}$
has to be modified as follows:%
\begin{align*}
\sigma_{i} & =%
\begin{cases}
c\left( a_{i}\right) -f\left( a_{i}\right) , & \text{if }a_{i}\text{ is
colored red;}\\
f\left( a_{i}^{-1}\right) , & \text{if }a_{i}\text{ is colored blue}%
\end{cases}
\\
& =%
\begin{cases}
\text{the }\sigma_{i}\text{ from (14) (i)}, & \text{if }a_{i}\text{ is colored
red;}\\
\text{the }\sigma_{i}\text{ from (14) (ii)}, & \text{if }a_{i}\text{ is
colored blue.}%
\end{cases}
\end{align*}
Furthermore, the definition of $f^{\prime}:A\rightarrow\mathbb{R}_{+}$ has to
be modified as follows:%
\[
f^{\prime}\left( a\right) :=%
\begin{cases}
f\left( a\right) +\alpha, & \text{if }a\text{ is an arc of }P\text{ colored
red;}\\
f\left( a\right) -\alpha, & \text{if }a^{-1}\text{ is an arc of }P\text{
colored blue;}\\
f\left( a\right) , & \text{otherwise.}%
\end{cases}
\]
for each $a\in A$.
It is not hard to see that this $f^{\prime}$ is indeed an $s-t$ flow under $c$.
This modification of the flow augmenting algorithm makes your assumption that
\textquotedblleft for no arc $\left( u,v\right) $, also $\left( v,u\right)
$ is an arc\textquotedblright\ unnecessary in \S 4.3. (However, this
assumption remains necessary for \S 4.6, where it is used (for example) in the
formula (35).)
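If it helps, the modified augmentation step can be sketched in code. The following is only an illustration in my own notation (the dict-based graph encoding and the function name \texttt{augment} are my choices, not anything from the notes); the toy digraph deliberately contains both $\left( a,b\right) $ and $\left( b,a\right) $, so that one residual arc satisfies both (14) (i) and (14) (ii) and the coloring is what disambiguates:

```python
# Sketch of the modified augmentation step: arcs are pairs (u, v); f and c
# are dicts mapping arcs of A to numbers; path_arcs are the residual arcs
# a_1, ..., a_k of P, each pre-colored 'red' or 'blue'.

def augment(f, c, path_arcs, colors):
    # the modified definition of sigma_1, ..., sigma_k
    sigmas = [c[a] - f[a] if col == 'red' else f[(a[1], a[0])]
              for a, col in zip(path_arcs, colors)]
    alpha = min(sigmas)
    # the modified definition (15) of f'
    fp = dict(f)
    for a, col in zip(path_arcs, colors):
        if col == 'red':
            fp[a] += alpha             # a is an arc of P colored red
        else:
            fp[(a[1], a[0])] -= alpha  # a^{-1} in A, a colored blue
    return fp

# toy digraph containing BOTH (a,b) and (b,a); f is the s-t flow s->b->a->t
c = {('s', 'a'): 2, ('s', 'b'): 1, ('a', 'b'): 1, ('b', 'a'): 1,
     ('a', 't'): 2, ('b', 't'): 1}
f = {('s', 'a'): 0, ('s', 'b'): 1, ('a', 'b'): 0, ('b', 'a'): 1,
     ('a', 't'): 1, ('b', 't'): 0}
# residual path s -> a -> b -> t; the residual arc (a,b) satisfies both
# (14)(i) and (14)(ii), so we must commit to one color -- here blue:
fp = augment(f, c, [('s', 'a'), ('a', 'b'), ('b', 't')],
             ['red', 'blue', 'red'])
```

One checks that \texttt{fp} is again an $s-t$ flow (of value increased by $\alpha=1$), and that the arc $\left( a,b\right) $ is left untouched while $f$ on $\left( b,a\right) $ is decreased, exactly as the blue case of the modified (15) prescribes.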
\item \textbf{Proof of Theorem 4.4:} There are some steps missing here
(although not too hard to fill in). You are claiming without proof that
$\left\vert \alpha\left( D_{f}\right) \right\vert \leq\left\vert
A\right\vert $ for each $f$. In order to prove this, we need the following lemma:
\textbf{Lemma 4.4b.} Let $D=\left( V,A\right) $ and $s,t\in V$. Then,
$\alpha\left( D\right) \cap\alpha\left( D\right) ^{-1}=\varnothing$.
This lemma is easy to check (you have essentially proven it inside the proof
of Proposition 3), but I think it should be stated explicitly. From this lemma
(applied to $D_{f}$ instead of $D$), we can conclude that $\alpha\left(
D_{f}\right) $ is a subset of $A\cup A^{-1}$ having at most half the size of
$A\cup A^{-1}$ (because it can never contain both $a$ and $a^{-1}$ for any
given $a\in A$); therefore its size is $\left\vert \alpha\left( D_{f}\right)
\right\vert \leq\dfrac{1}{2}\left\vert A\cup A^{-1}\right\vert =\left\vert
A\right\vert -\dfrac{1}{2}\left\vert A\cap A^{-1}\right\vert
\leq\left\vert A\right\vert $.
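The counting consequence of Lemma 4.4b can be confirmed exhaustively on small digraphs. The following throwaway Python check (my own encoding, not from the notes) verifies that any $S\subseteq A\cup A^{-1}$ with $S\cap S^{-1}=\varnothing$ satisfies $\left\vert S\right\vert \leq\frac{1}{2}\left\vert A\cup A^{-1}\right\vert \leq\left\vert A\right\vert $, for all loopless arc sets on $3$ vertices:

```python
from itertools import chain, combinations

def rev(a):
    return (a[1], a[0])

def subsets(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def max_antisym(A):
    """Largest |S| over S ⊆ A ∪ A^{-1} with S ∩ S^{-1} = ∅ (brute force)."""
    U = set(A) | {rev(a) for a in A}
    return max((len(S) for S in subsets(U)
                if not any(rev(a) in S for a in S)), default=0)

# all loopless arc sets A on 3 vertices
arcs = [(u, v) for u in range(3) for v in range(3) if u != v]
for A in subsets(arcs):
    A = set(A)
    U = A | {rev(a) for a in A}
    m = max_antisym(A)
    # S meets each pair {a, a^{-1}} at most once, so |S| = ½|U| ≤ |A|
    assert 2 * m == len(U) and m <= len(A)
```

Of course, this only checks the inequality for $\left\vert V\right\vert =3$; the pairing argument above proves it in general.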
\end{itemize}
\section*{Chapter 5}
\begin{itemize}
\item \textbf{Corollary 5.6c:} The formula (31) would become clearer if you
would replace ``$U\subseteq V$'' by ``$U\in\mathcal{P}_{\operatorname*{odd}%
}\left( V\right) $''.
\end{itemize}
\section*{Chapter 8}
\begin{itemize}
\item \textbf{\S 8.1:} On page 133, replace ``$LP$-problem'' by ``LP-problem''
(there is no reason to put ``LP'' in math mode here).
\item \textbf{Exercise 8.2:} This is correct, but I am wondering whether the
problem is as difficult as it seems to me. Below is an outline of my solution,
which you probably should ignore due to its exorbitant length and ugliness; I
have merely written it up to convince myself that it is true (but I cannot say
I am very much convinced). Exercise 8.2 is surrounded by some of the easiest
exercises I have seen in your notes, including Exercise 8.1, which is
completely trivial (all that one needs to do in that exercise is noticing that
$P$ is bounded and thus $P\cap\mathbb{Z}^{n}$ is finite, right?), so I am very
surprised that Exercise 8.2 did not give in to most of my attacks.
Before I sketch my solution, let me add a new exercise for Chapter 2:
\textbf{2.6a.} Let $n\in\mathbb{N}$. Let $A$ and $B$ be two subsets of
$\mathbb{R}^{n}$ such that $A+B\subseteq A$. Prove that
\[
\operatorname*{conv}.\operatorname*{hull}A+\operatorname*{cone}%
B=\operatorname*{conv}.\operatorname*{hull}A.
\]
(I'm not claiming this exercise is interesting for its own sake, but I'll use
it in my solution.) The solution to Exercise 2.6a is really straightforward.
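For what it's worth, here is the argument for Exercise 2.6a that I have in mind (only a sketch, in the notation above):

```latex
% Sketch of a solution to Exercise 2.6a.
% The inclusion ⊇ is clear: since 0 ∈ cone B, we have
%   conv.hull A = conv.hull A + 0 ⊆ conv.hull A + cone B.
% For ⊆: induction on A + B ⊆ A gives
%   a + n_1 b_1 + ... + n_m b_m ∈ A
% for all a ∈ A, b_1, ..., b_m ∈ B and n_1, ..., n_m ∈ ℕ.
% Now let x = p + c with p = Σ_i λ_i a_i ∈ conv.hull A and
% c = Σ_{j=1}^m μ_j b_j ∈ cone B (all μ_j ≥ 0). Fix a ∈ A and set
% s_j = μ_j − ⌊μ_j⌋ ∈ [0, 1). Expanding the product
%   Π_{j=1}^m ((1 − s_j) + s_j) = 1
% exhibits a + c as a convex combination of the points
%   a + Σ_j ⌊μ_j⌋ b_j + Σ_{j∈S} b_j      (S ⊆ {1, ..., m}),
% all of which lie in A; hence a + c ∈ conv.hull A. Therefore
%   x = Σ_i λ_i (a_i + c) ∈ conv.hull (conv.hull A) = conv.hull A.
```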
\textit{Solution to Exercise 8.2 (sketched):}
The matrix $A$ is rational. Thus, we can WLOG assume that $A$ is integral
(since otherwise, we can multiply both $A$ and $b$ by the (positive) common
denominator of the fractions which occur in $A$). Assume this.
We can WLOG assume that $b$ is integral (else, just replace $b$ by the
entrywise floor $\left\lfloor b\right\rfloor $, and nothing changes). Assume this.
We can rewrite the two directions of Exercise 2.12 as two lemmas:
\textit{Lemma U1:} Let $\mathfrak{P}$ be a polyhedron in $\mathbb{R}^{n}$.
Then, there exist a finite subset $Q$ of $\mathbb{R}^{n}$, an integer
$k\in\mathbb{N}$ and some vectors $c_{1}$, $c_{2}$, $...$, $c_{k}$ in
$\mathbb{R}^{n}$ such that $\mathfrak{P}=\operatorname*{conv}%
.\operatorname*{hull}Q+\operatorname*{cone}\left\{ c_{1},c_{2},...,c_{k}%
\right\} $.
\textit{Lemma U2:} Let $\mathfrak{Q}$ be a polytope in $\mathbb{R}^{n}$. Let
$\mathfrak{C}$ be a finitely generated convex cone in $\mathbb{R}^{n}$. Then,
$\mathfrak{Q}+\mathfrak{C}$ is a polyhedron in $\mathbb{R}^{n}$.
Now, let us define a couple of notions related to rational numbers.
\textbf{(a)} A subset $H$ of $\mathbb{Q}^{n}$ will be called a $\mathbb{Q}%
$\textit{-halfspace} if there exist a vector $c\in\mathbb{Q}^{n}$ with
$c\neq0$ and a $\delta\in\mathbb{Q}$ such that $H=\left\{ x\in\mathbb{Q}%
^{n}\ \mid\ c^{T}x\leq\delta\right\} $.
\textbf{(b)} A $\mathbb{Q}$\textit{-polyhedron} in $\mathbb{Q}^{n}$ will mean
an intersection of finitely many $\mathbb{Q}$-halfspaces in $\mathbb{Q}^{n}$.
\textbf{(c)} A subset $C$ of $\mathbb{Q}^{n}$ is called $\mathbb{Q}%
$\textit{-convex} if for all $x$ and $y$ in $C$ and any $\lambda\in\mathbb{Q}$
satisfying $0\leq\lambda\leq1$, the vector $\lambda x+\left( 1-\lambda
\right) y$ belongs to $C$.
\textbf{(d)} If $X$ is a subset of $\mathbb{Q}^{n}$, then the $\mathbb{Q}%
$\textit{-convex hull} of $X$ will mean the subset%
\begin{align*}
& \left\{ \lambda_{1}x_{1}+\lambda_{2}x_{2}+...+\lambda_{t}x_{t}\ \mid
\ t\in\mathbb{N},\ x_{1},x_{2},...,x_{t}\in X;\ \lambda_{1},\lambda
_{2},...,\lambda_{t}\in\mathbb{Q};\ \right. \\
&
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \left.
\lambda_{1},\lambda_{2},...,\lambda_{t}\geq0;\ \lambda_{1}+\lambda
_{2}+...+\lambda_{t}=1\right\}
\end{align*}
of $\mathbb{Q}^{n}$. We denote this $\mathbb{Q}$-convex hull by
$\operatorname*{conv}.\operatorname*{hull}\nolimits_{\mathbb{Q}}X$. It is the
smallest $\mathbb{Q}$-convex set which contains $X$ as a subset.
\textbf{(e)} A subset $C$ of $\mathbb{Q}^{n}$ is called a $\mathbb{Q}%
$\textit{-convex cone} if it satisfies $0\in C$ and for any $x\in C$ and $y\in
C$ and any $\lambda\in\mathbb{Q}$ and $\mu\in\mathbb{Q}$ such that
$\lambda\geq0$ and $\mu\geq0$, one has $\lambda x+\mu y\in C$.
\textbf{(f)} For any $X\subseteq\mathbb{Q}^{n}$, we define
$\operatorname*{cone}\nolimits_{\mathbb{Q}}X$ to be the subset%
\begin{align*}
& \left\{ \lambda_{1}x_{1}+\lambda_{2}x_{2}+...+\lambda_{t}x_{t}\ \mid
\ t\in\mathbb{N},\ x_{1},x_{2},...,x_{t}\in X;\ \lambda_{1},\lambda
_{2},...,\lambda_{t}\in\mathbb{Q};\ \right. \\
&
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \left.
\lambda_{1},\lambda_{2},...,\lambda_{t}\geq0\right\}
\end{align*}
of $\mathbb{Q}^{n}$. It is the smallest $\mathbb{Q}$-convex cone which
contains $X$ as a subset.
It is clear that these notions of a $\mathbb{Q}$-halfspace, a $\mathbb{Q}%
$-polyhedron, the $\mathbb{Q}$-convex hull $\operatorname*{conv}%
.\operatorname*{hull}\nolimits_{\mathbb{Q}}X$ etc. are defined exactly in the
same way as you defined the notions of a halfspace, a polyhedron, the convex
hull $\operatorname*{conv}.\operatorname*{hull}X$ etc. (except for minor
differences in the wording of the definitions) but using $\mathbb{Q}$ as the
ground field instead of $\mathbb{R}$.
You spent most of Chapter 2 discussing linear programming and polyhedra over
the field $\mathbb{R}$. However, most results from Chapter 2 (more precisely,
Theorem 2.2, Theorem 2.3, Corollary 2.3a, Theorem 2.4, Theorem 2.5, Corollary
2.5a, Corollary 2.5b, Lemma 2.1, Theorem 2.6 and Corollary 2.6a, as well as
most exercises) still hold when the underlying field $\mathbb{R}$ is replaced
by $\mathbb{Q}$ (or any other ordered field), as long as the notions used
therein (e. g., the notions of convexity, polytopes, polyhedra, hyperplanes,
etc.) are replaced by the corresponding notions which are based on
$\mathbb{Q}$ instead of $\mathbb{R}$ as the ground field (for example, the
notion of a halfspace must be replaced by the notion of a $\mathbb{Q}%
$-halfspace, the notion of $\operatorname*{conv}.\operatorname*{hull}X$ for a
set $X\subseteq\mathbb{R}^{n}$ must be replaced by the notion of
$\operatorname*{conv}.\operatorname*{hull}\nolimits_{\mathbb{Q}}X$ for a set
$X\subseteq\mathbb{Q}^{n}$, etc.).\footnote{That said, the proofs you give
don't necessarily work over $\mathbb{Q}$...} In particular, Lemma U1 (being a
consequence of Exercise 2.12) still holds when the underlying field
$\mathbb{R}$ is replaced by $\mathbb{Q}$, as long as the appropriate
replacements are done. In other words, the following lemma holds:
\textit{Lemma U3:} Let $\mathfrak{P}$ be a $\mathbb{Q}$-polyhedron in
$\mathbb{Q}^{n}$. Then, there exist a finite subset $Q$ of $\mathbb{Q}^{n}$,
an integer $k\in\mathbb{N}$ and some vectors $c_{1}$, $c_{2}$, $...$, $c_{k}$
in $\mathbb{Q}^{n}$ such that $\mathfrak{P}=\operatorname*{conv}%
.\operatorname*{hull}\nolimits_{\mathbb{Q}}Q+\operatorname*{cone}%
\nolimits_{\mathbb{Q}}\left\{ c_{1},c_{2},...,c_{k}\right\} $.
But now, let us get back to the solution of Exercise 8.2.
We know that $A$ and $b$ are integral. Thus,%
\[
\underbrace{P}_{=\left\{ x\ \mid\ Ax\leq b\right\} }\cap\mathbb{Q}%
^{n}=\left\{ x\ \mid\ Ax\leq b\right\} \cap\mathbb{Q}^{n}=\left\{
x\in\mathbb{Q}^{n}\ \mid\ Ax\leq b\right\}
\]
is a $\mathbb{Q}$-polyhedron in $\mathbb{Q}^{n}$. Denote this $\mathbb{Q}%
$-polyhedron by $P^{\prime}$. Clearly,%
\[
\underbrace{P^{\prime}}_{=P\cap\mathbb{Q}^{n}}\cap\mathbb{Z}^{n}%
=P\cap\underbrace{\mathbb{Q}^{n}\cap\mathbb{Z}^{n}}_{=\mathbb{Z}^{n}}%
=P\cap\mathbb{Z}^{n}.
\]
Applied to $\mathfrak{P}=P^{\prime}$, Lemma U3 yields that there exist a
finite subset $Q$ of $\mathbb{Q}^{n}$, an integer $k\in\mathbb{N}$ and some
vectors $c_{1}$, $c_{2}$, $...$, $c_{k}$ in $\mathbb{Q}^{n}$ such that
$P^{\prime}=\operatorname*{conv}.\operatorname*{hull}\nolimits_{\mathbb{Q}%
}Q+\operatorname*{cone}\nolimits_{\mathbb{Q}}\left\{ c_{1},c_{2}%
,...,c_{k}\right\} $. Consider this $Q$, this $k$ and these $c_{1}$, $c_{2}$,
$...$, $c_{k}$. WLOG assume that $c_{1}$, $c_{2}$, $...$, $c_{k}$ belong to
$\mathbb{Z}^{n}$ (because otherwise, we can multiply the vectors $c_{1}$,
$c_{2}$, $...$, $c_{k}$ by their common denominator).
Let $\mathfrak{C}=\operatorname*{cone}\nolimits_{\mathbb{Q}}\left\{
c_{1},c_{2},...,c_{k}\right\} $. Then, $\mathfrak{C}$ is a finitely generated
$\mathbb{Q}$-convex cone.
Let $\mathfrak{Q}=\operatorname*{conv}.\operatorname*{hull}%
\nolimits_{\mathbb{Q}}Q$. Then, $\mathfrak{Q}$ is a $\mathbb{Q}$-polytope.
We have%
\[
P^{\prime}=\underbrace{\operatorname*{conv}.\operatorname*{hull}%
\nolimits_{\mathbb{Q}}Q}_{=\mathfrak{Q}}+\underbrace{\operatorname*{cone}%
\nolimits_{\mathbb{Q}}\left\{ c_{1},c_{2},...,c_{k}\right\} }_{=\mathfrak{C}%
}=\mathfrak{Q}+\mathfrak{C}.
\]
Let $\mathfrak{K}$ be the subset $\sum\limits_{i=1}^{k}\operatorname*{conv}%
.\operatorname*{hull}\nolimits_{\mathbb{Q}}\left\{ 0,c_{i}\right\} $ of
$\mathbb{Q}^{n}$ (where the sum sign is based on the Minkowski sum of subsets
of $\mathbb{Q}^{n}$). Then, $\mathfrak{Q}+\mathfrak{K}=\mathfrak{Q}%
+\sum\limits_{i=1}^{k}\operatorname*{conv}.\operatorname*{hull}%
\nolimits_{\mathbb{Q}}\left\{ 0,c_{i}\right\} $ is a sum of finitely many
$\mathbb{Q}$-polytopes, hence $\mathfrak{Q}+\mathfrak{K}$ itself is a bounded
set. Thus, $\left( \mathfrak{Q}+\mathfrak{K}\right) \cap\mathbb{Z}^{n}$ is a
finite set.
From the definitions of $\mathfrak{K}$ and $\mathfrak{C}$, it follows readily
that $\mathfrak{K}\subseteq\mathfrak{C}$, so that%
\begin{equation}
\mathfrak{Q}+\underbrace{\mathfrak{K}}_{\subseteq\mathfrak{C}}\subseteq
\mathfrak{Q}+\mathfrak{C}=P^{\prime}. \label{8.2.pf.0}%
\end{equation}
We will now prove that
\begin{equation}
\operatorname*{conv}.\operatorname*{hull}\left( P^{\prime}\cap\mathbb{Z}%
^{n}\right) =\operatorname*{conv}.\operatorname*{hull}\left( \left(
\mathfrak{Q}+\mathfrak{K}\right) \cap\mathbb{Z}^{n}\right)
+\operatorname*{cone}\left\{ c_{1},c_{2},...,c_{k}\right\} .
\label{8.2.pf.1}%
\end{equation}
(Note that, in this equation, we are again working over the ground field
$\mathbb{R}$, not over $\mathbb{Q}$, even if $\mathfrak{Q}$ and $\mathfrak{K}$
are defined over $\mathbb{Q}$.)
\textit{Proof of (\ref{8.2.pf.1}):} Let $x\in P^{\prime}\cap\mathbb{Z}^{n}$ be
arbitrary. Then, $x\in P^{\prime}$ and $x\in\mathbb{Z}^{n}$. Since $x\in
P^{\prime}=\mathfrak{Q}+\mathfrak{C}$, there exist two elements $q\in
\mathfrak{Q}$ and $c\in\mathfrak{C}$ such that $x=q+c$. Consider these $q$ and
$c$.
Since $c\in\mathfrak{C}=\operatorname*{cone}\nolimits_{\mathbb{Q}}\left\{
c_{1},c_{2},...,c_{k}\right\} $, there exists a $k$-tuple $\left(
\lambda_{1},\lambda_{2},...,\lambda_{k}\right) $ of nonnegative elements of
$\mathbb{Q}$ such that $c=\sum\limits_{i=1}^{k}\lambda_{i}c_{i}$. Consider
this $k$-tuple $\left( \lambda_{1},\lambda_{2},...,\lambda_{k}\right) $.
We have
\begin{align*}
c & =\sum\limits_{i=1}^{k}\underbrace{\lambda_{i}}_{=\left\lfloor
\lambda_{i}\right\rfloor +\left( \lambda_{i}-\left\lfloor \lambda
_{i}\right\rfloor \right) }c_{i}=\sum\limits_{i=1}^{k}\left( \left\lfloor
\lambda_{i}\right\rfloor +\left( \lambda_{i}-\left\lfloor \lambda
_{i}\right\rfloor \right) \right) c_{i}\\
& =\sum\limits_{i=1}^{k}\left\lfloor \lambda_{i}\right\rfloor c_{i}%
+\sum\limits_{i=1}^{k}\underbrace{\left( \lambda_{i}-\left\lfloor \lambda
_{i}\right\rfloor \right) c_{i}}_{\substack{\in\operatorname*{conv}%
.\operatorname*{hull}\nolimits_{\mathbb{Q}}\left\{ 0,c_{i}\right\}
\\\text{(since }\lambda_{i}-\left\lfloor \lambda_{i}\right\rfloor \in\left[
0,1\right] \text{)}}}\\
& \in\sum\limits_{i=1}^{k}\left\lfloor \lambda_{i}\right\rfloor
c_{i}+\underbrace{\sum\limits_{i=1}^{k}\operatorname*{conv}%
.\operatorname*{hull}\nolimits_{\mathbb{Q}}\left\{ 0,c_{i}\right\}
}_{=\mathfrak{K}}=\sum\limits_{i=1}^{k}\left\lfloor \lambda_{i}\right\rfloor
c_{i}+\mathfrak{K},
\end{align*}
so that%
\[
x=\underbrace{q}_{\in\mathfrak{Q}}+\underbrace{c}_{\in\sum\limits_{i=1}%
^{k}\left\lfloor \lambda_{i}\right\rfloor c_{i}+\mathfrak{K}}\in
\mathfrak{Q}+\sum\limits_{i=1}^{k}\left\lfloor \lambda_{i}\right\rfloor
c_{i}+\mathfrak{K}=\sum\limits_{i=1}^{k}\left\lfloor \lambda_{i}\right\rfloor
c_{i}+\mathfrak{Q}+\mathfrak{K}.
\]
Hence,%
\[
x-\sum\limits_{i=1}^{k}\left\lfloor \lambda_{i}\right\rfloor c_{i}%
\in\mathfrak{Q}+\mathfrak{K}.
\]
Combined with%
\[
\underbrace{x}_{\in\mathbb{Z}^{n}}-\sum\limits_{i=1}^{k}%
\underbrace{\left\lfloor \lambda_{i}\right\rfloor }_{\in\mathbb{Z}%
}\underbrace{c_{i}}_{\in\mathbb{Z}^{n}}\in\mathbb{Z}^{n}-\sum\limits_{i=1}%
^{k}\mathbb{Z}\cdot\mathbb{Z}^{n}\subseteq\mathbb{Z}^{n},
\]
this yields%
\[
x-\sum\limits_{i=1}^{k}\left\lfloor \lambda_{i}\right\rfloor c_{i}\in\left(
\mathfrak{Q}+\mathfrak{K}\right) \cap\mathbb{Z}^{n}\subseteq
\operatorname*{conv}.\operatorname*{hull}\left( \left( \mathfrak{Q}%
+\mathfrak{K}\right) \cap\mathbb{Z}^{n}\right) .
\]
But every $i\in\left\{ 1,2,...,k\right\} $ satisfies $\lambda_{i}\geq0$ and
thus $\left\lfloor \lambda_{i}\right\rfloor \geq0$. Hence,
\[
\sum\limits_{i=1}^{k}\left\lfloor \lambda_{i}\right\rfloor c_{i}%
\in\operatorname*{cone}\left\{ c_{1},c_{2},...,c_{k}\right\} .
\]
Now,%
\begin{align*}
x & =\underbrace{\left( x-\sum\limits_{i=1}^{k}\left\lfloor \lambda
_{i}\right\rfloor c_{i}\right) }_{\in\operatorname*{conv}%
.\operatorname*{hull}\left( \left( \mathfrak{Q}+\mathfrak{K}\right)
\cap\mathbb{Z}^{n}\right) }+\underbrace{\sum\limits_{i=1}^{k}\left\lfloor
\lambda_{i}\right\rfloor c_{i}}_{\in\operatorname*{cone}\left\{ c_{1}%
,c_{2},...,c_{k}\right\} }\\
& \in\operatorname*{conv}.\operatorname*{hull}\left( \left( \mathfrak{Q}%
+\mathfrak{K}\right) \cap\mathbb{Z}^{n}\right) +\operatorname*{cone}\left\{
c_{1},c_{2},...,c_{k}\right\} .
\end{align*}
Now forget that we fixed $x$. We thus have proven that every $x\in P^{\prime
}\cap\mathbb{Z}^{n}$ satisfies $x\in\operatorname*{conv}.\operatorname*{hull}%
\left( \left( \mathfrak{Q}+\mathfrak{K}\right) \cap\mathbb{Z}^{n}\right)
+\operatorname*{cone}\left\{ c_{1},c_{2},...,c_{k}\right\} $. In other
words,%
\begin{equation}
P^{\prime}\cap\mathbb{Z}^{n}\subseteq\operatorname*{conv}.\operatorname*{hull}%
\left( \left( \mathfrak{Q}+\mathfrak{K}\right) \cap\mathbb{Z}^{n}\right)
+\operatorname*{cone}\left\{ c_{1},c_{2},...,c_{k}\right\} .
\label{8.2.pf.3}%
\end{equation}
Since the set $\operatorname*{conv}.\operatorname*{hull}\left( \left(
\mathfrak{Q}+\mathfrak{K}\right) \cap\mathbb{Z}^{n}\right)
+\operatorname*{cone}\left\{ c_{1},c_{2},...,c_{k}\right\} $ is convex, this
yields%
\begin{equation}
\operatorname*{conv}.\operatorname*{hull}\left( P^{\prime}\cap\mathbb{Z}%
^{n}\right) \subseteq\operatorname*{conv}.\operatorname*{hull}\left( \left(
\mathfrak{Q}+\mathfrak{K}\right) \cap\mathbb{Z}^{n}\right)
+\operatorname*{cone}\left\{ c_{1},c_{2},...,c_{k}\right\} .
\label{8.2.pf.sub}%
\end{equation}
We will now prove that%
\begin{equation}
\operatorname*{conv}.\operatorname*{hull}\left( \left( \mathfrak{Q}%
+\mathfrak{K}\right) \cap\mathbb{Z}^{n}\right) +\operatorname*{cone}\left\{
c_{1},c_{2},...,c_{k}\right\} \subseteq\operatorname*{conv}%
.\operatorname*{hull}\left( P^{\prime}\cap\mathbb{Z}^{n}\right) .
\label{8.2.pf.super}%
\end{equation}
Indeed, let us set $\mathfrak{A}=P^{\prime}\cap\mathbb{Z}^{n}$ and
$\mathfrak{B}=\left\{ c_{1},c_{2},...,c_{k}\right\} $. Then, it is very easy
to see that $\mathfrak{A}+\mathfrak{B}\subseteq\mathfrak{A}$%
\ \ \ \ \footnote{\textit{Proof.} Let $\xi\in\mathfrak{A}+\mathfrak{B}$. Then,
there exist $\alpha\in\mathfrak{A}$ and $\beta\in\mathfrak{B}$ such that
$\xi=\alpha+\beta$. Consider these $\alpha$ and $\beta$.
\par
Since $\alpha\in\mathfrak{A}=P^{\prime}\cap\mathbb{Z}^{n}\subseteq
\mathbb{Z}^{n}$ and $\beta\in\mathfrak{B}=\left\{ c_{1},c_{2},...,c_{k}%
\right\} \subseteq\mathbb{Z}^{n}$ (since $c_{1}$, $c_{2}$, $...$, $c_{k}$
belong to $\mathbb{Z}^{n}$), we have $\alpha+\beta\in\mathbb{Z}^{n}%
+\mathbb{Z}^{n}\subseteq\mathbb{Z}^{n}$. Thus, $\xi=\alpha+\beta\in
\mathbb{Z}^{n}$. On the other hand, from $\alpha\in\mathfrak{A}=P^{\prime}%
\cap\mathbb{Z}^{n}\subseteq P^{\prime}=\mathfrak{Q}+\mathfrak{C}$ and
$\beta\in\mathfrak{B}=\left\{ c_{1},c_{2},...,c_{k}\right\} \subseteq
\operatorname*{cone}\nolimits_{\mathbb{Q}}\left\{ c_{1},c_{2},...,c_{k}%
\right\} =\mathfrak{C}$, we obtain%
\[
\alpha+\beta\in\left( \mathfrak{Q}+\mathfrak{C}\right) +\mathfrak{C}%
=\mathfrak{Q}+\underbrace{\mathfrak{C}+\mathfrak{C}}_{\substack{\subseteq
\mathfrak{C}\\\text{(since }\mathfrak{C}\text{ is a cone)}}}\subseteq
\mathfrak{Q}+\mathfrak{C}=P^{\prime}.
\]
Thus, $\xi=\alpha+\beta\in P^{\prime}$. Combined with $\xi\in\mathbb{Z}^{n}$,
this yields $\xi\in P^{\prime}\cap\mathbb{Z}^{n}=\mathfrak{A}$.
\par
Now, forget that we fixed $\xi$. We thus have proven that every $\xi
\in\mathfrak{A}+\mathfrak{B}$ satisfies $\xi\in\mathfrak{A}$. In other words,
$\mathfrak{A}+\mathfrak{B}\subseteq\mathfrak{A}$, qed.}. Thus, Exercise 2.6a
(applied to $\mathfrak{A}$ and $\mathfrak{B}$ instead of $A$ and $B$) yields
that%
\[
\operatorname*{conv}.\operatorname*{hull}\mathfrak{A}+\operatorname*{cone}%
\mathfrak{B}=\operatorname*{conv}.\operatorname*{hull}\mathfrak{A}.
\]
Now,%
\begin{align*}
& \operatorname*{conv}.\operatorname*{hull}\left( \underbrace{\left(
\mathfrak{Q}+\mathfrak{K}\right) }_{\substack{\subseteq P^{\prime}\\\text{(by
(\ref{8.2.pf.0}))}}}\cap\mathbb{Z}^{n}\right) +\operatorname*{cone}%
\underbrace{\left\{ c_{1},c_{2},...,c_{k}\right\} }_{=\mathfrak{B}}\\
& \subseteq\operatorname*{conv}.\operatorname*{hull}\underbrace{\left(
P^{\prime}\cap\mathbb{Z}^{n}\right) }_{=\mathfrak{A}}+\operatorname*{cone}%
\mathfrak{B}=\operatorname*{conv}.\operatorname*{hull}\mathfrak{A}%
+\operatorname*{cone}\mathfrak{B}=\operatorname*{conv}.\operatorname*{hull}%
\underbrace{\mathfrak{A}}_{=P^{\prime}\cap\mathbb{Z}^{n}}\\
& =\operatorname*{conv}.\operatorname*{hull}\left( P^{\prime}\cap
\mathbb{Z}^{n}\right) .%
\end{align*}
This proves (\ref{8.2.pf.super}).
Combining (\ref{8.2.pf.super}) with (\ref{8.2.pf.sub}), we obtain%
\[
\operatorname*{conv}.\operatorname*{hull}\left( P^{\prime}\cap\mathbb{Z}%
^{n}\right) =\operatorname*{conv}.\operatorname*{hull}\left( \left(
\mathfrak{Q}+\mathfrak{K}\right) \cap\mathbb{Z}^{n}\right)
+\operatorname*{cone}\left\{ c_{1},c_{2},...,c_{k}\right\} .
\]
Thus, (\ref{8.2.pf.1}) is proven.
Since $P^{\prime}\cap\mathbb{Z}^{n}=P\cap\mathbb{Z}^{n}$, we can rewrite
(\ref{8.2.pf.1}) as
\begin{equation}
\operatorname*{conv}.\operatorname*{hull}\left( P\cap\mathbb{Z}^{n}\right)
=\operatorname*{conv}.\operatorname*{hull}\left( \left( \mathfrak{Q}%
+\mathfrak{K}\right) \cap\mathbb{Z}^{n}\right) +\operatorname*{cone}\left\{
c_{1},c_{2},...,c_{k}\right\} . \label{8.2.pf.1rewr}%
\end{equation}
Notice that $\operatorname*{conv}.\operatorname*{hull}\left( \left(
\mathfrak{Q}+\mathfrak{K}\right) \cap\mathbb{Z}^{n}\right) $ is a polytope
(since $\left( \mathfrak{Q}+\mathfrak{K}\right) \cap\mathbb{Z}^{n}$ is a
finite set) and that $\operatorname*{cone}\left\{ c_{1},c_{2},...,c_{k}%
\right\} $ is a finitely generated convex cone. Therefore, Lemma U2 (applied
to $\operatorname*{conv}.\operatorname*{hull}\left( \left( \mathfrak{Q}%
+\mathfrak{K}\right) \cap\mathbb{Z}^{n}\right) $ and $\operatorname*{cone}%
\left\{ c_{1},c_{2},...,c_{k}\right\} $ instead of $\mathfrak{Q}$ and
$\mathfrak{C}$) yields that $\operatorname*{conv}.\operatorname*{hull}\left(
\left( \mathfrak{Q}+\mathfrak{K}\right) \cap\mathbb{Z}^{n}\right)
+\operatorname*{cone}\left\{ c_{1},c_{2},...,c_{k}\right\} $ is a polyhedron
in $\mathbb{R}^{n}$. Since $\operatorname*{conv}.\operatorname*{hull}\left(
\left( \mathfrak{Q}+\mathfrak{K}\right) \cap\mathbb{Z}^{n}\right)
+\operatorname*{cone}\left\{ c_{1},c_{2},...,c_{k}\right\}
=\operatorname*{conv}.\operatorname*{hull}\left( P\cap\mathbb{Z}^{n}\right)
$ (by (\ref{8.2.pf.1rewr})), this rewrites as follows: $\operatorname*{conv}%
.\operatorname*{hull}\left( P\cap\mathbb{Z}^{n}\right) $ is a polyhedron in
$\mathbb{R}^{n}$. This solves Exercise 8.2.
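The floor/fractional-part decomposition at the heart of the proof of (\ref{8.2.pf.1}) is easy to sanity-check numerically. The following Python snippet (with toy data of my own choosing) splits $c=\sum_{i}\lambda_{i}c_{i}$ into the integral part $\sum_{i}\left\lfloor \lambda_{i}\right\rfloor c_{i}$ and a remainder whose coefficients lie in $\left[ 0,1\right) $, i.e., a point of $\mathfrak{K}$:

```python
from fractions import Fraction
from math import floor

# toy data: rational coefficients lambda_i and integral generators c_i
lambdas = [Fraction(7, 3), Fraction(5, 2), Fraction(0)]
cs = [(1, -2), (0, 3), (4, 1)]

# c = sum_i lambda_i c_i
c = tuple(sum(l * ci[d] for l, ci in zip(lambdas, cs)) for d in range(2))

# integral part: sum_i floor(lambda_i) c_i  (an integer vector)
int_part = tuple(sum(floor(l) * ci[d] for l, ci in zip(lambdas, cs))
                 for d in range(2))

# fractional part: coefficients lambda_i - floor(lambda_i) in [0, 1),
# so the remainder lies in K = sum_i conv.hull{0, c_i}
frac_coeffs = [l - floor(l) for l in lambdas]
frac_part = tuple(sum(s * ci[d] for s, ci in zip(frac_coeffs, cs))
                  for d in range(2))

assert all(0 <= s < 1 for s in frac_coeffs)
assert all(isinstance(v, int) for v in int_part)
assert tuple(i + f for i, f in zip(int_part, frac_part)) == c
```

This is of course no substitute for the proof; it merely illustrates the two summands in $c\in\sum_{i}\left\lfloor \lambda_{i}\right\rfloor c_{i}+\mathfrak{K}$.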
\item \textbf{Exercise 8.5:} Replace ``$P$'' by ``$P=\left\{ x\ \mid\ Ax\leq
b\right\} $''.
\item \textbf{Exercise 8.16:} In (43), add a plus sign before
\textquotedblleft$\lambda_{m}P_{m}$\textquotedblright.
\item \textbf{Exercise 8.16:} After \textquotedblleft A matrix $M$%
\textquotedblright, add \textquotedblleft with nonnegative
entries\textquotedblright.
\end{itemize}
\section*{Chapter 9}
\begin{itemize}
\item \textbf{\S 9.1:} In the condition (5), it wouldn't harm to state that
the demand of the arc $\left( s_{i},t_{i}\right) $ should be understood as
$d_{i}$. (You only ever refer to the $d_{1}$, $d_{2}$, $...$, $d_{k}$ as
'demands', never actually linking them to the arcs $\left( s_{1}%
,t_{1}\right) $, $\left( s_{2},t_{2}\right) $, $...$, $\left( s_{k}%
,t_{k}\right) $.)
\item \textbf{Proof of Theorem 9.1:} On page 153, you write: ``for any $x\in
R^{A}$''. I assume the ``$R$'' should be an ``$\mathbb{R}$'' here.
\item \textbf{Proof of Theorem 9.1:} On page 153, you write: ``This can be
seen by extending the undirected graph $G$ by adding two new vertices
$s^{\prime}$ and $t^{\prime}$ and four new edges $\left\{ s^{\prime}%
,s_{1}\right\} $, $\left\{ t_{1},t^{\prime}\right\} $ (both with capacity
$d_{1}$) and $\left\{ s^{\prime},s_{2}\right\} $, $\left\{ t_{2},t^{\prime
}\right\} $ (both with capacity $d_{2}$) as in Figure 9.3.'' I am not
convinced that it's really an undirected graph that you want here; first of
all, I don't remember you ever formulating the max-flow-min-cut theorem for
undirected graphs; moreover, I do think you want the edges $\left\{
s^{\prime},s_{1}\right\} $, $\left\{ t_{1},t^{\prime}\right\} $, $\left\{
s^{\prime},s_{2}\right\} $, $\left\{ t_{2},t^{\prime}\right\} $ to be
directed. So I would rather write:
``To see this, we define the \textit{directed double cover }$\widehat{G}$
\textit{of }$G$ as the directed graph obtained from $G$ by replacing every
edge $\left\{ v,w\right\} $ of $G$ by two arcs $\left( v,w\right) $ and
$\left( w,v\right) $. Extend this directed graph $\widehat{G}$ by adding two
new vertices $s^{\prime}$ and $t^{\prime}$ and four new arcs $\left(
s^{\prime},s_{1}\right) $, $\left( t_{1},t^{\prime}\right) $ (both with
capacity $d_{1}$) and $\left( s^{\prime},s_{2}\right) $, $\left(
t_{2},t^{\prime}\right) $ (both with capacity $d_{2}$) as in Figure 9.3.''
(Figure 9.3 now should have arrows on the four new arcs, and a $\widehat{G}$
in lieu of $G$.)
\item \textbf{Proof of Theorem 9.1:} On page 154, you write: ``To see this we
extend $G$ with vertices $s^{\prime\prime}$ and $t^{\prime\prime}$ and edges
$\left\{ s^{\prime\prime},s_{1}\right\} $, $\left\{ t_{1},t^{\prime\prime
}\right\} $ (both with capacity $d_{1}$) and $\left\{ s^{\prime\prime}%
,t_{2}\right\} $, $\left\{ s_{2},t^{\prime\prime}\right\} $ (both with
capacity $d_{2}$) (cf. Figure 9.4).''
Similarly to my previous comment on this proof, this should be replaced by:
``To see this we extend the directed graph $\widehat{G}$ by adding two
vertices $s^{\prime\prime}$ and $t^{\prime\prime}$ and four arcs $\left(
s^{\prime\prime},s_{1}\right) $, $\left( t_{1},t^{\prime\prime}\right) $
(both with capacity $d_{1}$) and $\left( s^{\prime\prime},t_{2}\right) $,
$\left( s_{2},t^{\prime\prime}\right) $ (both with capacity $d_{2}$) (cf.
Figure 9.4).''
(Figure 9.4 also needs arrows on the four new arcs, and a $\widehat{G}$ in
lieu of $G$.)
\item \textbf{Proof of Theorem 9.2:} On the last line of this proof, in
``$x_{2}=\dfrac{1}{2}\left( x^{\prime}-x\text{\textquotedblright}\right)
$'', the \textquotedblright\ should be a mathmode $^{\prime\prime}$.
\item \textbf{Proof of Theorem 9.3:} On the second line of page 158, replace
``$v_{i^{\prime},j_{i^{\prime}}}\neq s_{i^{\prime}}$'' by ``$v_{i^{\prime
},j_{i^{\prime}}}\neq t_{i^{\prime}}$''.
\item \textbf{\S 9.6:} On page 169, you write: ``Since for each fixed
$i=1,...,k$, a solution $x_{i}$ to (43) is an $s_{i}-t_{i}$ flow, we can
decompose $x_{i}$ as a nonnegative combination of $s_{i}-t_{i}$ paths.'' Why?
(I don't think a flow can be literally decomposed into paths; what I think can
be done is decomposing an $s_{i}-t_{i}$ flow as a nonnegative combination of
$s_{i}-t_{i}$ paths plus a nonnegative circulation. I guess you're throwing
out the circulation since it does not change the value of the flow nor does
subtracting it break the capacity constraints.)
\item \textbf{\S 9.6:} In (44) (i), replace ``$x_{j}\left( e\right) $'' by
``$x_{i}\left( e\right) $''.
\end{itemize}
\section*{Chapter 10}
\begin{itemize}
\item \textbf{Definition of ``basis'' on page 174:} You write: ``That is, for
any set $Z\in\mathcal{I}$ with $B\subseteq Z\subseteq Y$ one has $Z=B$''.
Maybe it would be better to clarify this, e. g., as follows: ``That is, $B$ is
a basis if and only if for any set $Z\in\mathcal{I}$ with $B\subseteq
Z\subseteq Y$ one has $Z=B$''.
\item \textbf{Description of the greedy algorithm on page 174:} You write:
``We stop if no $y$ satisfying (5)(i) exist, that is, if $\left\{
y_{1},...,y_{k}\right\} $ is a basis.'' I would add here: ``Then, we output
$\left\{ y_{1},...,y_{k}\right\} $.''
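With the output made explicit, the greedy algorithm reads as follows in code. This is only a sketch in my own notation (the independence-oracle interface \texttt{indep} is my choice, not from the notes):

```python
def greedy_max_weight_basis(X, indep, w):
    """Greedy algorithm on a matroid given by an independence oracle.

    X: ground set; indep(S) tests whether the frozenset S belongs to I;
    w: weight function on X. Following (5): repeatedly add a maximum-weight
    element y keeping the current set independent; once no such y exists,
    the current set is a basis, and we output it.
    """
    Y = set()
    while True:
        candidates = [y for y in X
                      if y not in Y and indep(frozenset(Y | {y}))]
        if not candidates:   # no y satisfies (5)(i): Y is a basis
            return Y
        Y.add(max(candidates, key=w))

# toy example: uniform matroid U_{2,4} (independent iff |S| <= 2)
X = {'a', 'b', 'c', 'd'}
w = {'a': 5, 'b': 1, 'c': 3, 'd': 2}
B = greedy_max_weight_basis(X, lambda S: len(S) <= 2, w.__getitem__)
```

Here the greedy run picks $a$ (weight $5$), then $c$ (weight $3$), and then stops and outputs $\left\{ a,c\right\} $, since no third element keeps the set independent.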
\item \textbf{Exercise 10.2:} You use the notion ``circuit'' here, but you
don't define it until \S 10.2.
\item \textbf{\S 10.2:} On page 176, replace ``minimimal'' by ``minimal''.
\item \textbf{\S 10.2:} On page 176, you write: ``Finally, $r$ arises in this
way if and only if
(9)\qquad(i) $r\left( \varnothing\right) =0$,
\ \ \ \qquad(ii) if $Z\subseteq Y\subseteq X$ then $r\left( Z\right) \leq
r\left( Y\right) $.''
This claim is wrong. Fortunately, you only use the ``only if'' direction,
which is correct indeed (and trivial). The ``if'' direction can be easily
refuted, e. g., by the counterexample $r\left( Y\right) =\max\left(
0,\left\vert Y\right\vert -1\right) $ (where $X$ is any finite set with at
least two elements). Fortunately, if we add the condition (10) to (9)(i) and
(9)(ii), \textbf{then} any $r$ satisfying these three conditions must indeed
arise by (8) from a nonempty down-monotone collection $\mathcal{I}$ of subsets
of $X$ (and, in fact, from a matroid $\left( X,\mathcal{I}\right) $). But
without (10), such an $\mathcal{I}$ need not exist.
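The counterexample can even be confirmed by brute force: already on a two-element ground set, no nonempty down-monotone collection $\mathcal{I}$ has $r\left( Y\right) =\max\left( 0,\left\vert Y\right\vert -1\right) $ as its rank function. (A small Python check, using my own encoding of (8) as $r\left( Y\right) =\max\left\{ \left\vert Z\right\vert \ \mid\ Z\subseteq Y,\ Z\in\mathcal{I}\right\} $:)

```python
from itertools import combinations

def subsets(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

X = (0, 1)
all_sub = subsets(X)   # the four subsets of X

def rank(fam, Y):
    # (8), as I read it: r(Y) = max{ |Z| : Z ⊆ Y, Z ∈ I }
    return max(len(Z) for Z in subsets(Y) if Z in fam)

# the counterexample: r(Y) = max(0, |Y| - 1); it satisfies (9)(i) and (9)(ii)
target = {Y: max(0, len(Y) - 1) for Y in all_sub}

found = False
for fam in subsets(all_sub):   # candidate collections I of subsets of X
    fam = set(fam)
    down_monotone = all(T in fam for S in fam for T in subsets(S))
    if fam and down_monotone:
        if all(rank(fam, Y) == target[Y] for Y in all_sub):
            found = True
assert not found   # no nonempty down-monotone I realizes this r via (8)
```

(Indeed, $r\left( \left\{ 0,1\right\} \right) =1$ would force a singleton into $\mathcal{I}$, which would then have rank $1$, not $0$.)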
\item \textbf{\S 10.2:} In the footnote $^{22}$ on page 176, it would be
better to replace ``no two of which are contained in each other'' by ``none of
which is contained in another'' (or else it sounds as if no two sets can be
\textbf{mutually} contained in each other \textbf{at the same time}, i. e., no
two sets can be equal).
\item \textbf{Proof of Theorem 10.2:} In the proof of the
(iii)$\Longrightarrow$(i) implication, replace ``$\left( B^{\prime}%
\setminus\left\{ y\right\} \right) \cup\left\{ x\right\} $'' by
``$\left( B^{\prime}\setminus\left\{ y\right\} \right) \cup\left\{
x\right\} \in\mathcal{B}$''.
\item \textbf{Proof of Theorem 10.2:} In the proof of the (iv)$\Longrightarrow
$(i) implication, replace ``choose $y\in F\setminus F^{\prime}$'' by ``choose
$x\in F\setminus F^{\prime}$''.
\item \textbf{Proof of Theorem 10.2:} In the proof of the (i)$\Longrightarrow
$(vi) implication, replace ``$F\subseteq F\subseteq Y\cup Z$'' by
``$F\subseteq F^{\prime}\subseteq Y\cup Z$''.
\item \textbf{Proof of Theorem 10.2:} In the proof of the (vi)$\Longrightarrow
$(i) implication, replace ``$x\in F^{\prime}\setminus F\cup U$'' by ``$x\in
F^{\prime}\setminus\left( F\cup U\right) $'' (for the sake of disambiguation
of the ambiguous expression $F^{\prime}\setminus F\cup U$).
\item \textbf{\S 10.2, definition of ``basis'':} On page 178, replace ``any in
$\mathcal{B}$ is called a \textit{basis}'' by ``any set in $\mathcal{B}$ is
called a \textit{basis}''.
\item \textbf{\S 10.2, example for contraction:} On page 179, you use the
notion of the ``cycle matroid'', but you only define these words in \S 10.3
(although the cycle matroid itself, without its name, already appears in \S 10.1).
\item \textbf{Exercise 10.4:} Here you use the notions of graphic and
cographic matroids; these notions are not defined until \S 10.3.
\item \textbf{Exercise 10.6:} I guess you can just as well add the following
exercise somewhere here:
If $\left( X,\mathcal{I}\right) $ and $\left( Y,\mathcal{J}\right) $ are
two matroids such that $X\cap Y=\varnothing$, then $\left( X\cup Y,\left\{
U\cup V\ \mid\ U\in\mathcal{I}\text{ and }V\in\mathcal{J}\right\} \right) $
is also a matroid.
(This, of course, is a particular case of Exercise 10.27 (iii).)
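For what it's worth, the proposed exercise is easy to sanity-check by brute force on small examples. Here is a Python sketch (the function names are mine; the two matroids chosen are the uniform matroids $U_{1,2}$ and $U_{2,3}$ on disjoint ground sets):

```python
from itertools import combinations

def is_matroid(indep):
    """Brute-force check of the independence axioms on a small set system."""
    indep = {frozenset(s) for s in indep}
    if not indep:                       # must be nonempty
        return False
    for s in indep:                     # down-monotone
        for x in s:
            if s - {x} not in indep:
                return False
    for s in indep:                     # exchange axiom
        for t in indep:
            if len(s) < len(t) and not any(s | {x} in indep for x in t - s):
                return False
    return True

def subsets(xs):
    return [frozenset(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

# Two small matroids on disjoint ground sets: U_{1,2} on {0,1}, U_{2,3} on {2,3,4}.
I1 = [s for s in subsets([0, 1]) if len(s) <= 1]
I2 = [s for s in subsets([2, 3, 4]) if len(s) <= 2]

# Their "direct sum", as in the proposed exercise.
Isum = [u | v for u in I1 for v in I2]

assert is_matroid(I1) and is_matroid(I2)
assert is_matroid(Isum)
```

(Of course, this only verifies one instance, not the general claim.)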
\item \textbf{Exercise 10.7:} You have never defined what a ``loop'' is. (A
\textit{loop} of a matroid $\left( X,\mathcal{I}\right) $ is an element
$x\in X$ such that $\left\{ x\right\} \notin\mathcal{I}$.)
\item \textbf{Exercise 10.7:} In part (iii) of this exercise, replace
``$Y\subseteq X$'' by ``$Y\subseteq X\setminus\left\{ x\right\} $''.
\item \textbf{Exercise 10.8:} This exercise has no (i) part but two (ii)
parts. (I assume the first (ii) part should be (i).)
\item \textbf{Exercises 10.7 and 10.8:} I am somewhat surprised about the order in
which you give these exercises. In my opinion, it is easiest to first solve
Exercise 10.8 (ii) (by repeated application of Theorem 10.3), then conclude
Exercise 10.8 (i) (since any subsets $B$, $Y$ and $U$ of $X$ such that
$B\subseteq Y\subseteq X$ and $U\subseteq X\setminus Y$ satisfy%
\[
r_{M}\left( U\cup Y\right) -r_{M}\left( Y\right) \leq r_{M}\left( U\cup
B\right) -r_{M}\left( B\right)
\]
(by (10), applied to $U\cup B$ and $Y$ instead of $Y$ and $Z$)), and then
derive Exercise 10.7.
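The displayed inequality can also be confirmed by brute force on a small matroid; here is a Python sketch (the code and names are mine) that checks it for all valid triples $\left( B,Y,U\right) $ in the cycle matroid of $K_{4}$:

```python
from itertools import combinations

EDGES = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]  # edges of K4

def rank(edge_subset):
    """Rank in the cycle matroid of K4: 4 minus #components of (V, edge_subset)."""
    parent = list(range(4))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    comps = 4
    for i in edge_subset:
        a, b = find(EDGES[i][0]), find(EDGES[i][1])
        if a != b:
            parent[a] = b
            comps -= 1
    return 4 - comps

all_subsets = [frozenset(c) for r in range(7) for c in combinations(range(6), r)]

# Check r(U ∪ Y) − r(Y) ≤ r(U ∪ B) − r(B) whenever B ⊆ Y and U ⊆ X \ Y.
for Y in all_subsets:
    for B in all_subsets:
        if not B <= Y:
            continue
        for U in all_subsets:
            if U & Y:
                continue
            assert rank(U | Y) - rank(Y) <= rank(U | B) - rank(B)
```

(This is just one matroid, but it exercises every choice of $B\subseteq Y$ and $U\subseteq X\setminus Y$.)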
\item \textbf{Exercise 10.9:} I would add to this exercise the condition that
``$Y\cap Z=\varnothing$'' (otherwise, the statement of the exercise doesn't
make much sense).
\item \textbf{\S 10.3, I:} I would add somewhere in the definition of graphic
matroids (or in Section 10.1) that the graph $G$ is allowed to have double
edges (otherwise, Exercise 10.11 would be wrong) and loops (else, Exercise
10.18 would sound strange :) ).
\item \textbf{\S 10.3, II:} Replace the period at the end of (19) by a comma.
\item \textbf{\S 10.3, II:} Replace ``where, for each graph $H$, let
$\kappa\left( H\right) $'' by ``where, for each graph $H$, we let
$\kappa\left( H\right) $''.
\item \textbf{\S 10.3, II:} Replace ``By definition, a subset $C$ of $E$ is a
circuit of $M^{\ast}\left( G\right) $ if'' by ``By definition, a subset $C$
of $E$ is a circuit of $M^{\ast}\left( G\right) $ if and only if''.
\item \textbf{\S 10.3, III:} On page 181, you write: ``Note that the rank
$r_{M}\left( Y\right) $ of any subset $Y$ of $X$ is equal to the rank of the
matrix formed by the columns indexed by $Y$.'' Either replace $M$ by $\left(
X,\mathcal{I}\right) $ here, or define $M$ to be $\left( X,\mathcal{I}%
\right) $.
\item \textbf{\S 10.3, IV:} On page 181, replace ``A set $Y=\left\{
y_{1},...,y_{n}\right\} $ is called a \textit{partial transversal}'' by ``A
set $Y=\left\{ y_{1},...,y_{n}\right\} $ (with $y_{1}$, $...$, $y_{n}$
pairwise distinct) is called a \textit{partial transversal}''.
\item \textbf{Proof of Theorem 10.5:} On page 182, replace ``Let $N$ and
$N^{\prime}$ denote the edges'' by ``Let $N$ and $N^{\prime}$ denote the sets
of edges''.
\item \textbf{Exercise 10.12:} I think ``$M=\left( V,\mathcal{I}\right) $''
should be ``$M=\left( X,\mathcal{I}\right) $'' here.
\item \textbf{Exercise 10.13:} Replace ``$F$'' by ``$E$'' (or by ``$\left(
V,E\right) $'') here.
\item \textbf{Proof of Lemma 10.1:} You write:
``By K\H{o}nig's matching theorem there exist a subset $S$ of $Y\setminus Z$
and a subset $S^{\prime}$ of $Z\setminus Y$ such that for each edge $\left\{
y,z\right\} $ of $H\left( M,Y\right) $ satisfying $z\in S^{\prime}$ one has
$y\in S$ and such that $\left\vert S\right\vert <\left\vert S^{\prime
}\right\vert $.''
There are a few things that I'd like to change about this sentence.
First, ``for each edge $\left\{ y,z\right\} $ of $H\left( M,Y\right) $''
should be ``for each edge $\left\{ y,z\right\} $ of $\widetilde{H}$'', where
$\widetilde{H}$ should be defined (probably before this sentence) as the
induced subgraph of $H\left( M,Y\right) $ on the vertex set $Y\bigtriangleup
Z$.
Second, while your claim indeed can be derived from K\H{o}nig's matching
theorem, it is easier to conclude it from Hall's marriage theorem. (Hall's
marriage theorem and K\H{o}nig's matching theorem are known to be equivalent,
but your claim is closer to Hall's than to K\H{o}nig's.)
Third, it would probably be better to explain which bipartite graph you are
applying K\H{o}nig's matching theorem (or Hall's marriage theorem) to. You are
applying it to the bipartite graph $\widetilde{H}$ that I defined above.
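(As an aside, the claim that a Hall-type condition is what is really being used here can itself be sanity-checked: the following Python sketch, with all names mine, verifies Hall's marriage theorem against an augmenting-path matching algorithm for all bipartite graphs with parts of size $3$.)

```python
from itertools import combinations, product

def has_matching_saturating_A(n_a, n_b, edges):
    """Augmenting-path (Kuhn-style) test for a matching saturating A."""
    match_b = [-1] * n_b
    def try_a(a, seen):
        for b in range(n_b):
            if (a, b) in edges and b not in seen:
                seen.add(b)
                if match_b[b] == -1 or try_a(match_b[b], seen):
                    match_b[b] = a
                    return True
        return False
    return all(try_a(a, set()) for a in range(n_a))

def hall_condition(n_a, n_b, edges):
    """|N(S)| >= |S| for every subset S of A."""
    for r in range(n_a + 1):
        for S in combinations(range(n_a), r):
            nbrs = {b for (a, b) in edges if a in S}
            if len(nbrs) < len(S):
                return False
    return True

# Hall's theorem, brute-forced over all bipartite graphs with |A| = |B| = 3.
pairs = list(product(range(3), repeat=2))
for bits in product([0, 1], repeat=9):
    edges = {p for p, keep in zip(pairs, bits) if keep}
    assert has_matching_saturating_A(3, 3, edges) == hall_condition(3, 3, edges)
```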
\item \textbf{Proof of Lemma 10.1:} Replace ``for some $x\notin S$'' by ``for
some $x\in\left( Y\setminus Z\right) \setminus S$'' (because just having
$x\notin S$ is not enough for what you want to do). Maybe it would also be
useful to explain why $U=\left( Y\setminus\left\{ x\right\} \right)
\cup\left\{ z\right\} $ for some $x\in\left( Y\setminus Z\right) \setminus
S$.\ \ \ \ \footnote{The explanation isn't very long:
\par
We have $T=\underbrace{\left( Y\cap Z\right) }_{\subseteq Y}\cup
\underbrace{S}_{\subseteq Y\setminus Z\subseteq Y}\cup\left\{ z\right\}
\subseteq\underbrace{Y\cup Y}_{=Y}\cup\left\{ z\right\} =Y\cup\left\{
z\right\} $ and therefore $U\subseteq\underbrace{T}_{\subseteq Y\cup\left\{
z\right\} }\cup Y\subseteq Y\cup\left\{ z\right\} \cup Y=Y\cup\left\{
z\right\} $. Since $\left\vert U\right\vert =\left\vert Y\right\vert $, this
yields that $U=\left( Y\cup\left\{ z\right\} \right) \setminus\left\{
t\right\} $ for some $t\in Y\cup\left\{ z\right\} $. We will now show that
$t\in\left( Y\setminus Z\right) \setminus S$ and that $U=\left(
Y\setminus\left\{ t\right\} \right) \cup\left\{ z\right\} $.
\par
Since $t\notin\left( Y\cup\left\{ z\right\} \right) \setminus\left\{
t\right\} =U$ and $T\subseteq U$, we have $t\notin T$. Combined with $t\in
Y\cup\left\{ z\right\} $, this yields
\begin{align*}
t & \in\left( Y\cup\left\{ z\right\} \right) \setminus T=\left(
Y\setminus T\right) \cup\underbrace{\left( \left\{ z\right\} \setminus
T\right) }_{\substack{=\varnothing\\\text{(since }z\in\left\{ z\right\}
\subseteq\left( Y\cap Z\right) \cup S\cup\left\{ z\right\} =T\text{)}%
}}=Y\setminus\underbrace{T}_{\substack{=\left( Y\cap Z\right) \cup
S\cup\left\{ z\right\} \\\supseteq\left( Y\cap Z\right) \cup S}}\subseteq
Y\setminus\left( \left( Y\cap Z\right) \cup S\right) \\
& =\underbrace{\left( Y\setminus\left( Y\cap Z\right) \right)
}_{=Y\setminus Z}\setminus S=\left( Y\setminus Z\right) \setminus S.
\end{align*}
Also, since $t\notin T$ but $z\in\left\{ z\right\} \subseteq\left( Y\cap
Z\right) \cup S\cup\left\{ z\right\} =T$, we have $t\neq z$, so that
$\left( Y\setminus\left\{ t\right\} \right) \cup\left\{ z\right\}
=\left( Y\cup\left\{ z\right\} \right) \setminus\left\{ t\right\} =U$.
\par
Thus, we have proven $t\in\left( Y\setminus Z\right) \setminus S$ and that
$U=\left( Y\setminus\left\{ t\right\} \right) \cup\left\{ z\right\} $.
Hence, $U=\left( Y\setminus\left\{ x\right\} \right) \cup\left\{
z\right\} $ for some $x\in\left( Y\setminus Z\right) \setminus S$ (namely,
for $x=t$), qed.
\par
I guess you can simplify this proof further.}
\item \textbf{Proof of Lemma 10.2:} You write:
``By the unicity of $N$ there exists an edge $\left\{ y,z\right\} \in N$,
with $y\in Y\setminus Z$ and $z\in Z\setminus Y$, with the property that there
is no $z^{\prime}\in Z\setminus Y$ such that $z^{\prime}\neq z$ and $\left\{
y,z^{\prime}\right\} $ is an edge of $H\left( M,Y\right) $.''
How do you prove this?\footnote{The only proof I see requires a lemma from
graph theory:
\par
\textbf{Lemma 10.2a.} Let $G$ be a bipartite graph with the two parts $A$ and
$B$ (this means that the sets $A$ and $B$ are disjoint, their union is the set
of all vertices of $G$, and every edge of $G$ connects a vertex in $A$ with a
vertex in $B$). Assume that $G$ has a perfect matching from $A$ to $B$. Then,
there exists a vertex $v\in A$ such that for every edge $e$ of $G$ satisfying
$v\in e$, there exists a perfect matching of $G$ which contains the edge $e$.
\par
Lemma 10.2a seems to be folklore, although I can't find a good reference for
it. Two proofs of Lemma 10.2a have been given at \newline%
\url{http://www.artofproblemsolving.com/Forum/viewtopic.php?f=42&t=289103} .
\par
We will need the following corollary of Lemma 10.2a:
\par
\textbf{Corollary 10.2b.} Let $G$ be a bipartite graph with the two parts $A$
and $B$ (this means that the sets $A$ and $B$ are disjoint, their union is the
set of all vertices of $G$, and every edge of $G$ connects a vertex in $A$
with a vertex in $B$). Assume that $G$ has a unique perfect matching from $A$
to $B$. Then, there exists a vertex $v\in A$ which has exactly one neighbor in
$G$.
\par
Corollary 10.2b follows very quickly from Lemma 10.2a.
\par
Now, let us prove that (in the proof of Lemma 10.2) there exists an edge
$\left\{ y,z\right\} \in N$, with $y\in Y\setminus Z$ and $z\in Z\setminus
Y$, with the property that there is no $z^{\prime}\in Z\setminus Y$ such that
$z^{\prime}\neq z$ and $\left\{ y,z^{\prime}\right\} $ is an edge of
$H\left( M,Y\right) $. In fact, let $\widetilde{H}$ be the induced subgraph
of $H\left( M,Y\right) $ on the vertex set $Y\bigtriangleup Z$. Then (by the
conditions of Lemma 10.2) the bipartite graph $\widetilde{H}$ has a unique
perfect matching. Hence, Corollary 10.2b (applied to $\widetilde{H}$,
$Y\setminus Z$ and $Z\setminus Y$) yields that there exists a vertex $v\in
Y\setminus Z$ which has exactly one neighbor in $\widetilde{H}$. Let $y$ be
this $v$. Then, $y$ has exactly one neighbor in $\widetilde{H}$. Since $N$ is
a perfect matching of $\widetilde{H}$ (by the conditions of Lemma 10.2), there
exists an edge $\left\{ y,z\right\} \in N$ with $z\in Z\setminus Y$. If there
were a $z^{\prime}\in Z\setminus Y$ such that $z^{\prime}\neq z$ and
$\left\{ y,z^{\prime}\right\} $ is an edge of $H\left( M,Y\right) $, then
$y$ would have at least two neighbors in $\widetilde{H}$ (namely, $z$ and
$z^{\prime}$), which would contradict the fact that $y$ has exactly one
neighbor in $\widetilde{H}$. Hence, there is no $z^{\prime}\in Z\setminus Y$
such that $z^{\prime}\neq z$ and $\left\{ y,z^{\prime}\right\} $ is an edge
of $H\left( M,Y\right) $.
\par
We thus have proven that there exists an edge $\left\{ y,z\right\} \in N$,
with $y\in Y\setminus Z$ and $z\in Z\setminus Y$, with the property that there
is no $z^{\prime}\in Z\setminus Y$ such that $z^{\prime}\neq z$ and $\left\{
y,z^{\prime}\right\} $ is an edge of $H\left( M,Y\right) $.}
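Corollary 10.2b, at least, is easy to check exhaustively on small graphs. Here is a Python brute-force sketch (names mine) verifying it for all bipartite graphs with parts of size $3$:

```python
from itertools import product, permutations

def check_unique_pm_has_degree_one(n):
    """Corollary 10.2b, brute-forced: for every bipartite graph with parts
    A = B = {0,...,n-1}, if the perfect matching is unique, then some vertex
    of A has exactly one neighbor."""
    pairs = list(product(range(n), repeat=2))          # all possible edges (a, b)
    for bits in product([0, 1], repeat=len(pairs)):
        edges = {p for p, keep in zip(pairs, bits) if keep}
        matchings = [perm for perm in permutations(range(n))
                     if all((a, perm[a]) in edges for a in range(n))]
        if len(matchings) == 1:
            degrees = [sum((a, b) in edges for b in range(n)) for a in range(n)]
            assert 1 in degrees
    return True

assert check_unique_pm_has_degree_one(3)
```

(Naturally this is no substitute for a proof; it merely rules out small counterexamples.)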
\item \textbf{Proof of Lemma 10.2:} You write:
``There exists an $S\in\mathcal{I}$ such that $Z^{\prime}\setminus\left\{
y\right\} \subseteq S\subseteq\left( Y\setminus\left\{ y\right\} \right)
\cup\left\{ z\right\} $ and $\left\vert S\right\vert =\left\vert
Y\right\vert $ (since $\left( Y\setminus\left\{ y\right\} \right) \cup
Z=\left( Y\setminus\left\{ y\right\} \right) \cup\left\{ z\right\} \cup
Z^{\prime}$ and since $\left( Y\setminus\left\{ y\right\} \right)
\cup\left\{ z\right\} $ belongs to $\mathcal{I}$).''
There seems to be a typo in this sentence (the equality $\left(
Y\setminus\left\{ y\right\} \right) \cup Z=\left( Y\setminus\left\{
y\right\} \right) \cup\left\{ z\right\} \cup Z^{\prime}$ does not hold,
since $y$ belongs to the right hand side but not to the left hand side), but
more importantly I find this sentence too compressed.\footnote{I would replace
this sentence by the following paragraph (which is probably too detailed):
\par
Since $\left\{ y,z\right\} $ is an edge of $H\left( M,Y\right) $, we have
$\left( Y\setminus\left\{ y\right\} \right) \cup\left\{ z\right\}
\in\mathcal{I}$. Combined with $Z^{\prime}\setminus\left\{ y\right\}
\in\mathcal{I}$ (since $Z^{\prime}\in\mathcal{I}$), this yields that there
exists an $S\in\mathcal{I}$ such that $Z^{\prime}\setminus\left\{ y\right\}
\subseteq S\subseteq\left( Z^{\prime}\setminus\left\{ y\right\} \right)
\cup\left( \left( Y\setminus\left\{ y\right\} \right) \cup\left\{
z\right\} \right) $ and $\left\vert S\right\vert =\left\vert \left(
Y\setminus\left\{ y\right\} \right) \cup\left\{ z\right\} \right\vert $
(by the axioms of a matroid). Consider this $S$. We have%
\begin{align*}
S & \subseteq\left( \underbrace{Z^{\prime}}_{=\left( Z\setminus\left\{
z\right\} \right) \cup\left\{ y\right\} }\setminus\left\{ y\right\}
\right) \cup\left( \left( Y\setminus\left\{ y\right\} \right)
\cup\left\{ z\right\} \right) =\underbrace{\left( \left( \left(
Z\setminus\left\{ z\right\} \right) \cup\left\{ y\right\} \right)
\setminus\left\{ y\right\} \right) }_{\subseteq Z\setminus\left\{
z\right\} \subseteq Z}\cup\left( \left( Y\setminus\left\{ y\right\}
\right) \cup\left\{ z\right\} \right) \\
& \subseteq Z\cup\left( Y\setminus\left\{ y\right\} \right) \cup\left\{
z\right\} =\left( Y\setminus\left\{ y\right\} \right) \cup
\underbrace{Z\cup\left\{ z\right\} }_{\substack{=Z\\\text{(since }z\in
Z\text{)}}}=\left( Y\setminus\left\{ y\right\} \right) \cup Z
\end{align*}
and $\left\vert S\right\vert =\left\vert \left( Y\setminus\left\{ y\right\}
\right) \cup\left\{ z\right\} \right\vert =\left( \left\vert Y\right\vert
-1\right) +1=\left\vert Y\right\vert $.}
\item \textbf{Proof of Lemma 10.2:} You write: ``Assuming $Z\notin\mathcal{I}%
$, we know $z\notin S$''. It took me quite a while to see why we have $z\notin
S$; I wish you would spend some more words explaining this.\footnote{For
what it's worth, here is my argument:
\par
We have $Z\neq S$ (since $Z\notin\mathcal{I}$ but $S\in\mathcal{I}$). Assume
that $z\in S$. Then, $\left\{ z\right\} \subseteq S$. On the other hand,
since $Z^{\prime}=\left( Z\setminus\left\{ z\right\} \right) \cup\left\{
y\right\} $, we have $Z^{\prime}\setminus\left\{ y\right\} =\left( \left(
Z\setminus\left\{ z\right\} \right) \cup\left\{ y\right\} \right)
\setminus\left\{ y\right\} =Z\setminus\left\{ z\right\} $ (since $y\notin
Z\setminus\left\{ z\right\} $), thus $\left( Z^{\prime}\setminus\left\{
y\right\} \right) \cup\left\{ z\right\} =\left( Z\setminus\left\{
z\right\} \right) \cup\left\{ z\right\} =Z$ (since $z\in Z$). Hence,
$Z=\left( Z^{\prime}\setminus\left\{ y\right\} \right) \cup\left\{
z\right\} \subseteq S$ (since $Z^{\prime}\setminus\left\{ y\right\}
\subseteq S$ and $\left\{ z\right\} \subseteq S$). Combined with $\left\vert
Z\right\vert =\left\vert Y\right\vert =\left\vert S\right\vert $, this yields
$Z=S$, contradicting $Z\neq S$. Hence, our assumption (that $z\in S$) was
wrong. Thus, $z\notin S$.}
\item \textbf{Proof of Lemma 10.2:} You write: ``and hence $r\left( \left(
Y\cup Z^{\prime}\right) \setminus\left\{ y\right\} \right) =\left\vert
Y\right\vert $''. Again, why is this true?
This time, I actually don't think it is true (although I don't have a
counterexample). What is definitely true is that $r\left( \left( Y\cup
Z^{\prime}\right) \setminus\left\{ y\right\} \right) \geq\left\vert
Y\right\vert $ (and this is all we need later). Actually, I wouldn't even
speak about $r\left( \left( Y\cup Z^{\prime}\right) \setminus\left\{
y\right\} \right) $, but instead just say that $S\subseteq\left( Y\cup
Z^{\prime}\right) \setminus\left\{ y\right\} $\ \ \ \ \footnote{because
combining $S\subseteq\left( Y\setminus\left\{ y\right\} \right) \cup Z$
and $z\notin S$, we get
\begin{align*}
S & \subseteq\left( \left( Y\setminus\left\{ y\right\} \right) \cup
Z\right) \setminus\left\{ z\right\} =\underbrace{\left( \left(
Y\setminus\left\{ y\right\} \right) \setminus\left\{ z\right\} \right)
}_{\subseteq Y\setminus\left\{ y\right\} }\cup\left( Z\setminus\left\{
z\right\} \right) \subseteq\left( Y\setminus\left\{ y\right\} \right)
\cup\underbrace{\left( Z\setminus\left\{ z\right\} \right) }%
_{\substack{=\left( \left( Z\setminus\left\{ z\right\} \right)
\cup\left\{ y\right\} \right) \setminus\left\{ y\right\} \\\text{(since
}y\notin Z\text{, thus }y\notin Z\setminus\left\{ z\right\} \text{)}}}\\
& =\left( Y\setminus\left\{ y\right\} \right) \cup\left(
\underbrace{\left( \left( Z\setminus\left\{ z\right\} \right)
\cup\left\{ y\right\} \right) }_{=Z^{\prime}}\setminus\left\{ y\right\}
\right) =\left( Y\setminus\left\{ y\right\} \right) \cup\left(
Z^{\prime}\setminus\left\{ y\right\} \right) =\left( Y\cup Z^{\prime
}\right) \setminus\left\{ y\right\}
\end{align*}
}. And this is all that we need to prove that there exists a $z^{\prime}\in
Z^{\prime}\setminus Y$ such that $\left( Y\setminus\left\{ y\right\}
\right) \cup\left\{ z^{\prime}\right\} $ belongs to $\mathcal{I}%
$.\ \ \ \ \footnote{In fact, the existence of this $z^{\prime}$ can now be
shown as follows: We have $Y\setminus\left\{ y\right\} \in\mathcal{I}$
(since $Y\in\mathcal{I}$) and $S\in\mathcal{I}$ and $\left\vert Y\setminus
\left\{ y\right\} \right\vert =\left\vert Y\right\vert -1<\left\vert
Y\right\vert =\left\vert S\right\vert $. Thus, there must exist a $w\in
S\setminus\left( Y\setminus\left\{ y\right\} \right) $ such that $\left(
Y\setminus\left\{ y\right\} \right) \cup\left\{ w\right\} $ belongs to
$\mathcal{I}$ (by the axioms of a matroid). Consider this $w$. We have%
\begin{align*}
w & \in\underbrace{S}_{\subseteq\left( Y\cup Z^{\prime}\right)
\setminus\left\{ y\right\} }\setminus\left( Y\setminus\left\{ y\right\}
\right) \subseteq\left( \left( Y\cup Z^{\prime}\right) \setminus\left\{
y\right\} \right) \setminus\left( Y\setminus\left\{ y\right\} \right) \\
& \subseteq\left( Y\cup Z^{\prime}\right) \setminus Y=Z^{\prime}\setminus
Y.
\end{align*}
Thus, there exists a $z^{\prime}\in Z^{\prime}\setminus Y$ such that $\left(
Y\setminus\left\{ y\right\} \right) \cup\left\{ z^{\prime}\right\} $
belongs to $\mathcal{I}$ (namely, $z^{\prime}=w$).}
\item \textbf{Exercise 10.20:} In part (i), replace ``to $X^{\prime}:=\left(
Z\setminus\delta\left( y\right) \right) \cup\left\{ y\right\} $ and
$X^{\prime\prime}:=\left( Z\setminus\delta\left( y\right) \right)
\cup\left( Y\setminus\left\{ y\right\} \right) $'' by ``to the sets
$X^{\prime}:=\left( Z\setminus\delta\left( y\right) \right) \cup\left\{
y\right\} $ and $X^{\prime\prime}:=\left( Z\setminus\delta\left( y\right)
\right) \cup\left( Y\setminus\left\{ y\right\} \right) $ in lieu of $Y$
and $Z$''.
\item \textbf{Example 10.5c:} It might be useful to say here that the
``underlying undirected graph'' is not to be understood as a graph in the
usual sense (e. g., as a pair $\left( V,E\right) $ with $E$ being a
collection of $2$-element sets of $V$), but it must be implemented as an
``edge-aware'' graph.
Here, by an \textit{edge-aware} graph, I mean a triple $\left(
V,E,\operatorname*{end}\right) $ with $V$ and $E$ being two sets, and
$\operatorname*{end}:E\rightarrow\mathcal{P}_{2}\left( V\right) $ being a
map. (If we allow loops, then $\operatorname*{end}$ should be a map
$E\rightarrow\mathcal{P}_{1}\left( V\right) \cup\mathcal{P}_{2}\left(
V\right) $ instead of a map $E\rightarrow\mathcal{P}_{2}\left( V\right) $.)
The elements of $V$ are called the \textit{vertices} of the edge-aware graph
$\left( V,E,\operatorname*{end}\right) $. The elements of $E$ are called the
\textit{edges} of the edge-aware graph $\left( V,E,\operatorname*{end}%
\right) $. If $e$ is an edge of an edge-aware graph $\left(
V,E,\operatorname*{end}\right) $, then the elements of $\operatorname*{end}e$
are called the \textit{endpoints} of $e$ and are said to \textit{lie} on $e$
(or \textit{belong} to $e$).
Given an edge-aware graph $\left( V,E,\operatorname*{end}\right) $, we can
canonically obtain a graph in the usual sense (namely, $\left(
V,\operatorname*{end}\left( E\right) \right) $), but by doing this, we are
losing some information (namely, we are losing the ``labeling'' of the edges
and their multiplicities). Even the multigraph $\left( V,\left(
\text{multiset image of }E\text{ under }\operatorname*{end}\right) \right) $
contains less information than the edge-aware graph $\left(
V,E,\operatorname*{end}\right) $ since equal elements in a multiset are indistinguishable.
So, when $D=\left( V,A\right) $ is a directed graph, I define the
\textit{underlying undirected graph} of $D$ as the edge-aware graph $\left(
V,A,\operatorname*{end}\right) $, where the map $\operatorname*{end}%
:A\rightarrow\mathcal{P}_{1}\left( V\right) \cup\mathcal{P}_{2}\left(
V\right) $ is defined by%
\[
\left( \operatorname*{end}\left( a\right) =\left\{ \text{the source and
the sink of }a\right\} \ \ \ \ \ \ \ \ \ \ \text{for every }a\in A\right) .
\]
This makes your example correct. If we were to define the underlying undirected
graph of $D$ as a graph, or even a multigraph, in the usual sense, then its
set (or multiset) of edges would not in general be $A$ (at best, it would be a
multiset with the same cardinality as $A$, but we cannot define a matroid on a multiset).
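In programming terms, an edge-aware graph is just what one would implement anyway: a labeled multigraph. Here is a minimal Python sketch (all names mine) of the underlying undirected edge-aware graph of a digraph, showing that parallel arcs survive as distinct edges:

```python
def underlying_undirected(V, A):
    """Underlying undirected edge-aware graph of a digraph (V, A).

    A is a dict mapping each arc label to its (source, sink) pair, so
    parallel arcs keep distinct labels and the edge set stays in bijection
    with A -- which is what allows a matroid structure to live on it."""
    end = {label: frozenset(pair) for label, pair in A.items()}
    return (V, set(A), end)

# A digraph with two parallel arcs from 0 to 1 and one arc 1 -> 2.
V = {0, 1, 2}
A = {"a1": (0, 1), "a2": (0, 1), "a3": (1, 2)}

V2, E2, end = underlying_undirected(V, A)
# Both parallel arcs survive as distinct edges with equal endpoint sets:
assert end["a1"] == end["a2"] == frozenset({0, 1})
assert len(E2) == 3
```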
\item \textbf{\S 10.5, directly after Example 10.5d:} You write: ``In this
section we describe an algorithm for finding a maximum-cardinality common
independent sets in two given matroids.'' Replace ``sets'' by ``set'' here.
\item \textbf{\S 10.5, Case 2 in the description of the algorithm:} Replace
``vertex vertex'' by ``vertex''.
\item \textbf{Proof of Theorem 10.6:} You write: ``and that, as $\left(
Y^{\prime}\setminus\left\{ y_{0}\right\} \right) \cap X_{1}=\varnothing$,
$r_{M_{1}}\left( \left( Y\cup Y^{\prime}\right) \setminus\left\{
y_{0}\right\} \right) =\left\vert Y\right\vert $''.
I think neither the claim that $\left( Y^{\prime}\setminus\left\{
y_{0}\right\} \right) \cap X_{1}=\varnothing$ nor the claim that $r_{M_{1}%
}\left( \left( Y\cup Y^{\prime}\right) \setminus\left\{ y_{0}\right\}
\right) =\left\vert Y\right\vert $ is very obvious.\footnote{Let me give my
proofs for these claims, though I can bet you have shorter ones:
\par
\textit{Proof of }$\left( Y^{\prime}\setminus\left\{ y_{0}\right\} \right)
\cap X_{1}=\varnothing$\textit{:} Let $\xi\in\left( Y^{\prime}\setminus
\left\{ y_{0}\right\} \right) \cap X_{1}$ be arbitrary. Then, $\xi\in
Y^{\prime}\setminus\left\{ y_{0}\right\} $ and $\xi\in X_{1}$. Clearly,%
\begin{align*}
\xi & \in Y^{\prime}\setminus\left\{ y_{0}\right\} \subseteq\left(
Y\cup\left\{ y_{0},y_{1},...,y_{m}\right\} \right) \setminus\left\{
y_{0}\right\} \\
& \ \ \ \ \ \ \ \ \ \ \left( \text{since (25) yields }Y^{\prime
}=\underbrace{\left( Y\setminus\left\{ z_{1},z_{2},...,z_{m}\right\}
\right) }_{\subseteq Y}\cup\left\{ y_{0},y_{1},...,y_{m}\right\} \subseteq
Y\cup\left\{ y_{0},y_{1},...,y_{m}\right\} \right) \\
& =\underbrace{\left( Y\setminus\left\{ y_{0}\right\} \right)
}_{\subseteq Y}\cup\underbrace{\left( \left\{ y_{0},y_{1},...,y_{m}\right\}
\setminus\left\{ y_{0}\right\} \right) }_{=\left\{ y_{1},y_{2}%
,...,y_{m}\right\} }\\
& \subseteq Y\cup\left\{ y_{1},y_{2},...,y_{m}\right\} .
\end{align*}
But $\xi\in X_{1}\subseteq X\setminus Y$, so that $\xi\notin Y$. Combining
$\xi\in Y\cup\left\{ y_{1},y_{2},...,y_{m}\right\} $ with $\xi\notin Y$, we
obtain%
\[
\xi\in\left( Y\cup\left\{ y_{1},y_{2},...,y_{m}\right\} \right) \setminus
Y\subseteq\left\{ y_{1},y_{2},...,y_{m}\right\} .
\]
Thus, there exists a $j\in\left\{ 1,2,...,m\right\} $ such that $\xi=y_{j}$.
Consider this $j$. Then, $y_{j}=\xi\in X_{1}$. Hence, by taking the segment of
the path $P$ beginning with the vertex $y_{j}$ and ending with the vertex
$y_{m}$ (the last vertex of $P$), we get a path from some vertex in $X_{1}$ to
some vertex in $X_{2}$ (since $y_{m}\in X_{2}$). But this new path is shorter
than $P$. This contradicts the fact that $P$ was chosen to be a shortest path
from some vertex in $X_{1}$ to some vertex in $X_{2}$.
\par
Now, forget that we fixed $\xi$. We thus have proven that every $\xi\in\left(
Y^{\prime}\setminus\left\{ y_{0}\right\} \right) \cap X_{1}$ leads to a
contradiction. In other words, there exists no $\xi\in\left( Y^{\prime
}\setminus\left\{ y_{0}\right\} \right) \cap X_{1}$. Thus, $\left(
Y^{\prime}\setminus\left\{ y_{0}\right\} \right) \cap X_{1}=\varnothing$,
qed.
\par
\textit{Proof of }$r_{M_{1}}\left( \left( Y\cup Y^{\prime}\right)
\setminus\left\{ y_{0}\right\} \right) =\left\vert Y\right\vert $\textit{:}
We have $y_{0}\notin Y$ (since $y_{0}\in X\setminus Y$). Since $Y\in
\mathcal{I}_{1}$ and $Y\subseteq\left( Y\cup Y^{\prime}\right)
\setminus\left\{ y_{0}\right\} $ (because $Y\subseteq Y\cup Y^{\prime}$ and
$y_{0}\notin Y$), we have $r_{M_{1}}\left( \left( Y\cup Y^{\prime}\right)
\setminus\left\{ y_{0}\right\} \right) \geq\left\vert Y\right\vert $.
\par
Assume now that $r_{M_{1}}\left( \left( Y\cup Y^{\prime}\right)
\setminus\left\{ y_{0}\right\} \right) >\left\vert Y\right\vert $. Since
$Y\in\mathcal{I}_{1}$ and $Y\subseteq\left( Y\cup Y^{\prime}\right)
\setminus\left\{ y_{0}\right\} $, this yields that there exists some
$\eta\in\left( \left( Y\cup Y^{\prime}\right) \setminus\left\{
y_{0}\right\} \right) \setminus Y$ such that $Y\cup\left\{ \eta\right\}
\in\mathcal{I}_{1}$. Consider this $\eta$.
\par
We have%
\[
\eta\in\left( \left( Y\cup Y^{\prime}\right) \setminus\left\{
y_{0}\right\} \right) \setminus Y=\underbrace{\left( \left( Y\cup
Y^{\prime}\right) \setminus Y\right) }_{\subseteq Y^{\prime}}\setminus
\left\{ y_{0}\right\} \subseteq Y^{\prime}\setminus\left\{ y_{0}\right\}
.
\]
On the other hand, $\eta\in\underbrace{\left( \left( Y\cup Y^{\prime
}\right) \setminus\left\{ y_{0}\right\} \right) }_{\subseteq X}\setminus
Y\subseteq X\setminus Y$ and $Y\cup\left\{ \eta\right\} \in\mathcal{I}_{1}$,
so that $\eta\in X_{1}$ (by the definition of $X_{1}$). Combining $\eta\in
Y^{\prime}\setminus\left\{ y_{0}\right\} $ with $\eta\in X_{1}$, we obtain
$\eta\in\left( Y^{\prime}\setminus\left\{ y_{0}\right\} \right) \cap
X_{1}=\varnothing$, which is absurd. Thus, we have obtained a contradiction.
This contradiction shows that our assumption (that $r_{M_{1}}\left( \left(
Y\cup Y^{\prime}\right) \setminus\left\{ y_{0}\right\} \right) >\left\vert
Y\right\vert $) was wrong. Hence, $r_{M_{1}}\left( \left( Y\cup Y^{\prime
}\right) \setminus\left\{ y_{0}\right\} \right) \leq\left\vert
Y\right\vert $. Combined with $r_{M_{1}}\left( \left( Y\cup Y^{\prime
}\right) \setminus\left\{ y_{0}\right\} \right) \geq\left\vert
Y\right\vert $, this yields $r_{M_{1}}\left( \left( Y\cup Y^{\prime}\right)
\setminus\left\{ y_{0}\right\} \right) =\left\vert Y\right\vert $, qed.}
\item \textbf{Proof of Theorem 10.7:} You write:
``In the second alternative, $\left( y,x\right) $ is an arc of $H\left(
M_{1},M_{2},Y\right) $ entering $U$. This contradicts the definition of $U$
(as $y\notin U$ and $x\in U$).''
I think this should rather be:
``In the second alternative, $\left( y,x\right) $ is an arc of $H\left(
M_{1},M_{2},Y\right) $ entering $U$ (as $y\notin U$ and $x\in U$). This
contradicts the definition of $U$.''
\item \textbf{Proof of Theorem 10.7:} You write:
``Similarly we have that $r_{M_{2}}\left( X\setminus U\right) =\left\vert
Y\setminus U\right\vert $.''
I wish to see some more details on what the ``Similarly'' here
means.\footnote{Here is what I think it means: We know that $U\supseteq X_{2}$
and $U\cap X_{1}=\varnothing$. In other words, $X\setminus U\supseteq X_{1}$
and $\left( X\setminus U\right) \cap X_{2}=\varnothing$. Hence, the same
argument that we used to prove (27) can be modified to become a proof of
$r_{M_{2}}\left( X\setminus U\right) =\left\vert Y\cap\left( X\setminus
U\right) \right\vert $. (The modifications necessary to obtain a proof of
$r_{M_{2}}\left( X\setminus U\right) =\left\vert Y\cap\left( X\setminus
U\right) \right\vert $ are very simple: Replace every appearance of $U$ by
$X\setminus U$; replace every appearance of $X_{1}$ by $X_{2}$; replace every
appearance of $M_{1}$ by $M_{2}$; replace every appearance of $\mathcal{I}%
_{1}$ by $\mathcal{I}_{2}$; replace every appearance of $\left( y,x\right) $
by $\left( x,y\right) $.)
\par
Thus, we have shown that $r_{M_{2}}\left( X\setminus U\right) =\left\vert
Y\cap\left( X\setminus U\right) \right\vert $. Since $Y\cap\left(
X\setminus U\right) =Y\setminus U$, this rewrites as $r_{M_{2}}\left(
X\setminus U\right) =\left\vert Y\setminus U\right\vert $.}
\item \textbf{Between the proof of Theorem 10.7 and Theorem 10.8:} You write:
``The algorithm clearly has polynomially bounded running time, since we can
construct the auxiliary directed graph $H\left( M_{1},M_{2},Y\right) $ and
find the path $P$ (if it exists), in polynomial time.'' The second comma in
this sentence makes no sense.
\item \textbf{Proof of Theorem 10.9:} Maybe worth noticing: When you say ``the
algorithm'' in this proof, you don't literally mean the cardinality common
independent set augmenting algorithm, but you mean the algorithm ``set
$Y:=\varnothing$, and apply repeatedly the cardinality common independent set
augmenting algorithm, replacing $Y$ by $Y^{\prime}$ after each finished
iteration, until the algorithm no longer outputs a $Y^{\prime}$''.
\item \textbf{Exercise 10.23:} For the sake of precision, it would be better
to replace ``be subsets'' by ``be sequences of subsets''.
\item \textbf{Exercise 10.31:} In part (i), replace ``$k\left( r_{M}\left(
X\right) -r_{M}\left( U\right) \right) \geq\left\vert X\setminus
U\right\vert $'' by ``$k\left( r_{M}\left( X\right) -r_{M}\left( U\right)
\right) \leq\left\vert X\setminus U\right\vert $''.
\item \textbf{Exercise 10.32:} Replace ``into $t$ classes'' by ``into $t$
nonempty classes''.
\item \textbf{Exercise 10.34:} Replace ``$Y_{1}\cup Z_{1}\cup Z_{2}$'' by
``$Y_{1}\cup Z_{2}$''. Also, why do you require $B$ and $B^{\prime}$ to be
disjoint? The problem would still be true if you didn't (and the solution
wouldn't be much harder: one can WLOG assume that $B\cap B^{\prime}%
=\varnothing$ by means of contracting $B\cap B^{\prime}$).
\item \textbf{Exercise 10.35:} First, I would prefer to see a definition of
what $b\left( U\right) $ means in this exercise. (Namely, $b\left(
U\right) :=\sum\limits_{u\in U}b\left( u\right) $ for every $U\subseteq V$.)
Second, I think this exercise needs the additional requirement that $b\left(
v\right) >0$ for every $v\in V$. (From a look at Exercise 4.5 I am pretty
sure that you define $\mathbb{Z}_{+}$ as $\left\{ 0,1,2,...\right\} $ rather
than as $\left\{ 1,2,3,...\right\} $, so this requirement is not a
tautology. By the way, what do you think about explicitly defining your
$\mathbb{Z}_{+}$ at the beginning of the notes?) I think a counterexample to
the claim of this exercise without the requirement that $b\left( v\right)
>0$ for every $v\in V$ would be the following pair $\left( G,b\right) $: The
graph $G$ consists of three vertices $x$, $y$ and $z$ with edges $xz$ and
$yz$, and the function $b$ is given by $b\left( x\right) =b\left( y\right)
=1$ and $b\left( z\right) =0$.
\item \textbf{\S 10.6:} On the third line from below on page 190, replace ``if
matroid'' by ``if matroids''.
\item \textbf{\S 10.6, description of the weighted common independent set
augmenting algorithm:} Two lines below (36), remove ``counting
multiplicities'', since a path does not traverse any vertex twice. Also, add
the following definition:
``The \textit{length} of a circuit $C$, denoted by $l\left( C\right) $, is
defined as $\sum\limits_{c\in VC}\sharp_{C}\left( c\right) $, where
$\sharp_{C}\left( c\right) $ denotes the number of edges belonging to the
circuit $C$ which are outgoing from $c$.''
Note that I wouldn't try to reformulate the definition of the length of a path
to make it also cover the case of a circuit, because it is not clear what the
multiplicities are for a circuit.
\item \textbf{Theorem 10.10:} Replace ``circuit'' by ``cycle''.
(I am assuming that the difference between a circuit and a cycle is that a
circuit may contain repeated vertices (except for its first and last vertex),
whereas a cycle cannot. In this case, I don't know whether Theorem 10.10 holds
for arbitrary circuits, but at least I am fairly sure that the proof you are
giving does not.
Actually, I am no longer sure that the meanings of the words ``circuit'' and
``cycle'' are the ones I believe. For example, Exercise 4.15 probably uses the
word ``circuit'' in the meaning of ``cycle with no repeated vertices''. Maybe
it would be better to disambiguate these two terms and define them somewhere
in Chapter 1.)
\item \textbf{Proof of Theorem 10.10:} On page 191, replace ``by Lemma 10.2 a
matching'' by ``by Lemma 10.2 a perfect matching''.
\item \textbf{Proof of Theorem 10.10:} On page 191, replace ``a directed
circuit $C_{1}$'' by ``a directed cycle $C_{1}$'', and replace ``into directed
circuits'' by ``into directed cycles''. [It might generally be good to handle
the distinction between circuits and cycles with the same care that you take
for paths vs. walks.]
\item \textbf{Proof of Theorem 10.10:} The ending of this proof is very hard
to understand and possibly wrong. First, it has typos (one should replace
every ``$l\left( VC_{j}\right) $'' by ``$l\left( C_{j}\right) $'', replace
every ``$l\left( VC\right) $'' by ``$l\left( C\right) $'', and replace
every ``proposition'' by ``theorem''). Second, I think there is too much going
on at the same time in this argument, and the claim that ``$l\left(
VC_{j}\right) \leq l\left( VC\right) $ for all $j\leq k$'' at the end might
not be true (I have not looked for a counterexample, but I don't see a reason
why it should hold).\footnote{I would replace the part of the proof that
begins with ``If, say $VC_{k}=VC$'' and ends with the end of the proof by the
following (again, my hope is that you can do much better):
\par
``From (37), we see that $l\left( C_{1}\right) +l\left( C_{2}\right)
+\cdots+l\left( C_{k}\right) =2l\left( C\right) $. If some $j\in\left\{
1,2,...,k\right\} $ such that $VC_{j}\neq VC$ satisfies $l\left(
C_{j}\right) <0$, then Theorem 10.10 is proven (by taking $C^{\prime}=C_{j}%
$). Hence, for the rest of this proof, we can WLOG assume that no such $j$
exists. Thus, every $j\in\left\{ 1,2,...,k\right\} $ such that
$VC_{j}\neq VC$ satisfies $l\left( C_{j}\right) \geq0$.
\par
Note that each of the cycles $C_{1}$, $C_{2}$, $...$, $C_{k}$ can be regarded
as a cycle in the directed graph $H\left( M_{1},M_{2},Y\right) $, because it
uses each of the doubled arcs in $N_{2}$ at most once. (But the resulting $k$
cycles in $H\left( M_{1},M_{2},Y\right) $ won't be arc-disjoint.)
\par
We know that $t$ (like every vertex in $VC$) is left by exactly two arcs of
$D$. Let $e$ and $f$ be these two arcs. Since the cycles $C_{1}$, $C_{2}$,
$...$, $C_{k}$ form a decomposition of $A$, each of the two arcs $e$ and $f$
belongs to one of the cycles $C_{1}$, $C_{2}$, $...$, $C_{k}$. That is, there
exist $u\in\left\{ 1,2,...,k\right\} $ and $v\in\left\{ 1,2,...,k\right\}
$ such that $e\in C_{u}$ and $f\in C_{v}$. Consider these $u$ and $v$. Since
$e$ and $f$ are two distinct arcs of $D$ leaving the vertex $t$, we have
$u\neq v$ (because no cycle may contain two distinct arcs leaving the same
vertex). Since $e\in C_{u}$ and $t\in e$, we have $t\in VC_{u}$.
\par
Recall that $VC_{j}=VC$ for at most one $j$. Hence, we must be in one of the
following two cases:
\par
\textit{Case A:} There exists exactly one $j$ satisfying $VC_{j}=VC$.
\par
\textit{Case B:} There exists no $j$ satisfying $VC_{j}=VC$.
\par
Let us consider Case A first. In this case, there exists exactly one $j$
satisfying $VC_{j}=VC$. Hence, we can WLOG assume that $VC_{k}=VC$, whereas
every $j\neq k$ satisfies $VC_{j}\neq VC$. Thus, every $j\neq k$ satisfies
$l\left( C_{j}\right) \geq0$ (since we know that every $j\in\left\{
1,2,...,k\right\} $ such that $VC_{j}\neq VC$ satisfies $l\left(
C_{j}\right) \geq0$). Consequently, every $j\neq k$ satisfies
\begin{align*}
l\left( C_{j}\right) & \leq l\left( C_{1}\right) +l\left( C_{2}\right)
+\cdots+l\left( C_{k-1}\right) =\underbrace{\left( l\left( C_{1}\right)
+l\left( C_{2}\right) +\cdots+l\left( C_{k}\right) \right) }_{=2l\left(
C\right) }-\underbrace{l\left( C_{k}\right) }_{\substack{=l\left(
C\right) \\\text{(since }VC_{k}=VC\text{ and since}\\C_{k}\text{ and }C\text{
are cycles)}}}\\
& =2l\left( C\right) -l\left( C\right) =l\left( C\right) .
\end{align*}
But since $u\neq v$, it is not possible that both $u$ and $v$ equal $k$.
Hence, at least one of the numbers $u$ and $v$ does not equal $k$. Let us WLOG
assume that $u\neq k$. Thus, $VC_{u}\neq VC$ (since every $j\neq k$ satisfies
$VC_{j}\neq VC$) and $l\left( C_{u}\right) \leq l\left( C\right) $ (since
every $j\neq k$ satisfies $l\left( C_{j}\right) \leq l\left( C\right) $)
and also $t\in VC_{u}$ (as we know). Thus, Theorem 10.10 is proven (by taking
$C^{\prime}=C_{u}$) in Case A.
\par
Now, let us consider Case B. In this case, there exists no $j$ satisfying
$VC_{j}=VC$. Hence, every $j\in\left\{ 1,2,...,k\right\} $ satisfies
$VC_{j}\neq VC$. Thus, every $j\in\left\{ 1,2,...,k\right\} $ satisfies
$l\left( C_{j}\right) \geq0$ (since we know that every $j\in\left\{
1,2,...,k\right\} $ such that $VC_{j}\neq VC$ satisfies $l\left(
C_{j}\right) \geq0$). As a consequence, using $u\neq v$ we obtain%
\[
l\left( C_{u}\right) +l\left( C_{v}\right) \leq l\left( C_{1}\right)
+l\left( C_{2}\right) +\cdots+l\left( C_{k}\right) =2l\left( C\right) .
\]
Hence, at least one of the two inequalities $l\left( C_{u}\right) \leq
l\left( C\right) $ and $l\left( C_{v}\right) \leq l\left( C\right) $
must hold (because otherwise we would have $l\left( C_{u}\right) >l\left(
C\right) $ and $l\left( C_{v}\right) >l\left( C\right) $, and therefore
$\underbrace{l\left( C_{u}\right) }_{>l\left( C\right) }%
+\underbrace{l\left( C_{v}\right) }_{>l\left( C\right) }>l\left(
C\right) +l\left( C\right) =2l\left( C\right) $, contradicting $l\left(
C_{u}\right) +l\left( C_{v}\right) \leq2l\left( C\right) $). Let us WLOG
assume that $l\left( C_{u}\right) \leq l\left( C\right) $ holds. Thus,
$VC_{u}\neq VC$ (since every $j\in\left\{ 1,2,...,k\right\} $ satisfies
$VC_{j}\neq VC$) and $l\left( C_{u}\right) \leq l\left( C\right) $ and
also $t\in VC_{u}$ (proven above). Thus, Theorem 10.10 is proven (by taking
$C^{\prime}=C_{u}$) in Case B.
\par
Hence, Theorem 10.10 is proven in both cases A and B. Since cases A and B
cover all possibilities, this completes the proof of Theorem 10.10.''}
\item \textbf{Proof of Theorem 10.11:} I think that for the sake of clarity,
it would be better to replace ``To see necessity, suppose $H\left(
M_{1},M_{2},Y\right) $ has a cycle $C$ of negative length'' by ``To see
necessity, suppose that $Y$ is extreme but $H\left( M_{1},M_{2},Y\right) $
has a cycle $C$ of negative length''. Similarly, I believe that it would be
better to replace ``To see sufficiency, consider a $Z\in\mathcal{I}_{1}%
\cap\mathcal{I}_{2}$ with $\left\vert Z\right\vert =\left\vert Y\right\vert
$'' by ``To see sufficiency, assume that $H\left( M_{1},M_{2},Y\right) $ has
no directed cycle of negative length, and consider a $Z\in\mathcal{I}_{1}%
\cap\mathcal{I}_{2}$ with $\left\vert Z\right\vert =\left\vert Y\right\vert
$''. Note that this might be just my personal problem. (When I see words like
``necessity'' and ``sufficiency'' in relation to an if-and-only-if assertion,
it is not clear to me what they mean: which of the two sides is meant to be
necessary/sufficient? But maybe there is a common convention for how to understand
this. I prefer subdividing proofs of equivalences into ``$\Longrightarrow:$''
and ``$\Longleftarrow:$'' parts, though.)
\item \textbf{Proof of Theorem 10.11:} Replace ``Proposition 10.10'' by
``Theorem 10.10''.
\item \textbf{Between Theorem 10.11 and Theorem 10.12:} You write: ``This
theorem implies that we can find in the algorithm a shortest path $P$ in
polynomial time (with the Bellman-Ford method).'' Here you are using the fact
that the Bellman-Ford method (as described in the last paragraph of page 13)
computes not just a minimum-length $s-t$ path, but actually a minimum-length
$s-t$ path which has a minimum number of arcs among all minimum-length $s-t$
paths. Maybe this claim could be added to \S 1.3 as an exercise?
Also, there is a further little difference between the Bellman-Ford method
presented in Chapter 1 and the algorithm you are using here: In Chapter 1, the
Bellman-Ford method was constructed for a directed graph with a length
function on arcs, while here you have a directed graph with a length function
on vertices. Fortunately, it is easy to reduce the latter situation to the
former (by adding a new vertex $s$ along with an arc from every vertex in
$X_{2}$ to $s$, and then defining the length of any arc to be the length of
its source, and searching for a shortest path from $X_{1}$ to $s$ with respect
to these arc lengths).
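In case it is of any use, the reduction just sketched can be written out in a
few lines of Python. This is only an illustration of the idea, not code from
the notes; all function names are mine, and the toy data would of course be
replaced by the actual exchange graph.

```python
# Sketch of the reduction from vertex lengths to arc lengths described above.
# All names here are illustrative, not taken from Schrijver's notes.

def bellman_ford(vertices, arcs, weight, sources):
    """Shortest distances from the vertex set `sources`,
    assuming there is no negative-weight cycle."""
    INF = float("inf")
    dist = {v: (0 if v in sources else INF) for v in vertices}
    for _ in range(len(vertices) - 1):
        for (u, v) in arcs:
            if dist[u] + weight[(u, v)] < dist[v]:
                dist[v] = dist[u] + weight[(u, v)]
    return dist

def shortest_vertex_length_path(vertices, arcs, l, X1, X2):
    """Minimum total vertex length of a directed path from X1 to X2.

    Reduction: add a new sink s; give every arc (u, v) the weight l(u)
    of its source; add an arc of weight l(x) from each x in X2 to s.
    A path from X1 to s then has arc weight equal to the vertex length
    of the corresponding X1--X2 path.
    """
    s = object()  # the new vertex s
    new_arcs = list(arcs) + [(x, s) for x in X2]
    weight = {(u, v): l[u] for (u, v) in new_arcs}
    dist = bellman_ford(list(vertices) + [s], new_arcs, weight, X1)
    return dist[s]
```

For instance, on the path $a\rightarrow b\rightarrow c$ with $X_{1}=\left\{
a\right\} $, $X_{2}=\left\{ c\right\} $ and vertex lengths $1,2,3$, the only
$X_{1}$--$X_{2}$ path has vertex length $1+2+3=6$, which is what the reduction
computes.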
\item \textbf{Proof of Theorem 10.12:} On page 192, replace ``adding arcs from
each vertex in $X_{1}$ to $t$, and from $t$ to each vertex in $X_{2}$'' by
``adding arcs from each vertex in $X_{2}$ to $t$, and from $t$ to each vertex
in $X_{1}$''.
\item \textbf{Proof of Theorem 10.12:} On page 193, replace ``So $Z=\left(
Y+t\right) \bigtriangleup VC$'' by ``Set $Z=\left( Y+t\right)
\bigtriangleup VC$''.
\item \textbf{Proof of Theorem 10.12:} On page 193, replace ``Proposition
10.10'' by ``Theorem 10.10''.
\item \textbf{Proof of Theorem 10.12:} At the end of this proof, add the
sentence: ``Since $Z=Y^{\prime}$, this yields that $Y^{\prime}$ is extreme.''
\item \textbf{Proof of Theorem 10.13:} Replace ``we have an extreme'' by ``we
have a maximum-weight''.
\item \textbf{Exercise 10.38:} Replace ``rooted tree'' by ``rooted tree (with
vertex set $V$ and arc set $\subseteq A$)''.
\item \textbf{Proof of Theorem 10.14:} On the second line of this proof,
replace ``$...w\left( y_{m}\right) $'' by ``$...\geq w\left( y_{m}\right)
$''.
\item \textbf{\S 10.7:} On page 195, you write: ``So system (41) is totally
dual integral.'' This is the first time the words ``totally dual integral''
appear in your text. Since you apparently never use total dual integrality, it
probably isn't necessary to define what it means, but it is still weird to
suddenly use this notion without ever introducing it.
\item \textbf{Proof of Theorem 10.15:} Here, you write: ``Since $B$ is a
maximum-weight basis, $H\left( M_{1},M_{2},B\right) $ has no directed
circuits of negative length.'' It might be useful to add ``(by Exercise 10.39)'' here.
\item \textbf{Proof of Theorem 10.15:} You write: ``Hence there exists a
function $\phi:X\rightarrow\mathbb{Z}$ so that $\phi\left( y\right)
-\phi\left( x\right) \leq l\left( y\right) $ for each arc $\left(
x,y\right) $ of $H\left( M_{1},M_{2},B\right) $.'' It took me a while to
understand why this is true; what you are using here is the following (easy
but not exactly trivial) fact:
\textbf{Lemma 10.15a.} Let $G=\left( V,D\right) $ be a directed graph. Let
$l:V\rightarrow\mathbb{Z}$ be a function. For every $x\in V$, we call
$l\left( x\right) $ the \textit{length} of $x$. The \textit{length} of a path $P$
in $G$, denoted by $l\left( P\right) $, will mean the sum of the lengths of
the vertices traversed by $P$, counting multiplicities. Assume that the graph
$G$ has no directed circuit of negative length. Then, there exists a function
$\phi:V\rightarrow\mathbb{Z}$ such that $\phi\left( y\right) -\phi\left(
x\right) \leq l\left( y\right) $ for each arc $\left( x,y\right) $ of $G$.
Couldn't this be a nice exercise for Chapter 1?
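For reference, the standard proof of Lemma 10.15a is algorithmic: start with
$\phi\equiv0$ and repeatedly relax violated constraints, i.e.\ run
Bellman-Ford where the arc $\left( x,y\right) $ carries weight $l\left(
y\right) $. A minimal Python sketch of this argument (the function name and
the two-vertex example in the test are mine):

```python
def potential(vertices, arcs, l):
    """Find phi with phi[y] - phi[x] <= l[y] for each arc (x, y).

    This is Bellman-Ford where the arc (x, y) carries weight l[y] and every
    vertex is a source (phi = 0 initially, as if each vertex were reached
    from a super-source by a zero-weight arc). A directed circuit of negative
    vertex length is exactly a negative-weight cycle for these arc weights,
    so the relaxation stabilizes after |V| - 1 passes precisely when no such
    circuit exists.
    """
    phi = {v: 0 for v in vertices}
    for _ in range(len(vertices) - 1):
        for (x, y) in arcs:
            if phi[y] > phi[x] + l[y]:
                phi[y] = phi[x] + l[y]
    # If the graph had a negative-length circuit, summing the constraints
    # around that circuit would give 0 <= (negative), so some constraint
    # would still be violated here.
    assert all(phi[y] - phi[x] <= l[y] for (x, y) in arcs)
    return phi
```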
\item \textbf{Proof of Theorem 10.15:} In (47), replace ``$\leq w\left(
x\right) $'' by ``$\leq w\left( y\right) $''.
\item \textbf{Proof of Theorem 10.16:} Between (49) and (50), you write: ``and
$M_{2}:=\left( X\cup U,\mathcal{J}_{2}\right) $''. The ``$M_{2}$'' here
should be an ``$M_{2}^{\prime}$''.
\item \textbf{Proof of Theorem 10.16:} Between (49) and (50), you write:
``Define $\widetilde{w}:X\rightarrow\mathbb{Z}$''. The ``$X$'' here should be
an ``$X\cup U$''.
\item \textbf{Proof of Theorem 10.16:} You write: ``In fact, $Y\cup W$ is a
maximum-weight common basis''. I would replace the words ``In fact'' by
``Moreover'' here, since ``In fact'' sounds as if you are proving the claim in
the preceding sentence.
\item \textbf{Proof of Theorem 10.16:} On page 196, replace ``$\widetilde{w}%
_{1},\widetilde{w}_{2}:X\rightarrow\mathbb{Z}$'' by ``$\widetilde{w}%
_{1},\widetilde{w}_{2}:X\cup U\rightarrow\mathbb{Z}$''.
\item \textbf{Proof of Theorem 10.16:} On the first line of page 197, replace
``subtracting'' by ``subtract''.
\item \textbf{\S 10.7:} Two lines above (51), you write: ``By Theorem 10.14
the intersection''. I think you want to cite Corollary 10.14a rather than
Theorem 10.14.
\item \textbf{Proof of Theorem 10.17:} Replace ``By Theorem 10.15'' by ``By
Theorem 10.16''.
\item \textbf{Proof of Theorem 10.17:} The proof of optimality is a bit of a
mess (you use some $Z$ which you haven't introduced, and it's not very clear
what is being done). Here is how I would correct it:
``Rename $Y$ as $Z$ (so that the letter $Y$ is free again and can be used as a
summation index). Denote by $\widetilde{z}$ the vector $\chi^{Z}\in
\mathbb{R}^{X}$. Since $Z$ is a maximum-weight independent set in $M_{1}$ with
respect to $w_{1}$, we know that $\widetilde{z}=\chi^{Z}$ is an optimum
solution of the problem (42) with respect to $M_{1},w_{1}$. As a consequence,
\begin{align*}
& w_{1}^{T}\widetilde{z}\\
& =\max\left\{ w_{1}^{T}z\ \mid\ z\left( x\right) \geq0\text{ for all
}x\in X\text{, and }z\left( Y\right) \leq r_{M_{1}}\left( Y\right) \text{
for all }Y\subseteq X\right\} \\
& =\min\left\{ \sum\limits_{Y\subseteq X}y_{Y}r_{M_{1}}\left( Y\right)
\ \mid\ y_{Y}\geq0\text{ for all }Y\subseteq X\text{, and }\sum
\limits_{Y\subseteq X,\ x\in Y}y_{Y}\geq w_{1}\left( x\right) \text{ for all
}x\in X\right\} \\
& =\sum\limits_{Y\subseteq X}y_{Y}^{\prime}r_{M_{1}}\left( Y\right)
\end{align*}
(since $y^{\prime}$ is an optimum solution for problem (43) with respect to
$M_{1},w_{1}$). Similarly, $w_{2}^{T}\widetilde{z}=\sum\limits_{Y\subseteq
X}y_{Y}^{\prime\prime}r_{M_{2}}\left( Y\right) $. Now, since $w=w_{1}+w_{2}%
$, we have%
\begin{align*}
w^{T}\widetilde{z} & =\underbrace{w_{1}^{T}\widetilde{z}}_{=\sum
\limits_{Y\subseteq X}y_{Y}^{\prime}r_{M_{1}}\left( Y\right) }%
+\underbrace{w_{2}^{T}\widetilde{z}}_{=\sum\limits_{Y\subseteq X}y_{Y}%
^{\prime\prime}r_{M_{2}}\left( Y\right) }=\sum\limits_{Y\subseteq X}%
y_{Y}^{\prime}r_{M_{1}}\left( Y\right) +\sum\limits_{Y\subseteq X}%
y_{Y}^{\prime\prime}r_{M_{2}}\left( Y\right) \\
& =\sum\limits_{Y\subseteq X}\left( y_{Y}^{\prime}r_{M_{1}}\left( Y\right)
+y_{Y}^{\prime\prime}r_{M_{2}}\left( Y\right) \right) .
\end{align*}
Thus, $\widetilde{z}$ is an integer optimum solution to (52), and $\left(
y_{Y}^{\prime},y_{Y}^{\prime\prime}\right) _{Y\subseteq X}$ is an integer
optimum solution to (53), qed.''
\end{itemize}
\end{document}