The notation used will not be exactly the same as Axler's; for instance, we will abbreviate some sets out of laziness.
Preliminaries
Axler's book does many things right. It jumps straight into the main protagonists of linear algebra, vector spaces, without boring the reader with the formal details of what a field is. This is all sensible for a first read, but it is useful to eventually learn what a field and a vector space actually are.
Field axioms
Definition. A field is a set $F$ of elements, including two special distinct elements $0$ and $1$, with two binary operators $+$ and $\cdot$, such that
- Commutativity: $a + b = b + a$ and $a \cdot b = b \cdot a$
- Associativity: $(a + b) + c = a + (b + c)$ and $(a \cdot b) \cdot c = a \cdot (b \cdot c)$
- Distributivity: $a \cdot (b + c) = a \cdot b + a \cdot c$
- Identity: $a + 0 = a$ and $a \cdot 1 = a$
- Inverse
  - For all $a \in F$, there exists an element $-a \in F$ such that $a + (-a) = 0$.
  - For all $a \in F$ with $a \neq 0$, there exists an element $a^{-1} \in F$ such that $a \cdot a^{-1} = 1$.

for all elements $a, b, c \in F$.
Even though the field is really the triple $(F, +, \cdot)$, we will simply refer to the field as $F$ for brevity. Typically we omit $\cdot$ when it is clear, e.g. we will write $a \cdot b$ as $ab$.
Examples of fields include $\mathbb{R}$ and $\mathbb{C}$, as well as the integers modulo a prime $p$, i.e. $\mathbb{Z}_p = \{0, 1, \ldots, p-1\}$. Note that $\mathbb{Z}$ is not a field, since no integer other than $\pm 1$ has a multiplicative inverse in $\mathbb{Z}$.
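As a quick sanity check on the modular-arithmetic example (the numbers below are mine, not Axler's), the primality condition really is needed:

```latex
% In Z_5 every non-zero element has a multiplicative inverse:
\[
1 \cdot 1 \equiv 1, \qquad 2 \cdot 3 \equiv 1, \qquad 4 \cdot 4 \equiv 1 \pmod{5}.
\]
% In Z_6, however, 2 has no inverse, since multiplying 2 by anything only
% produces even residues:
\[
2 \cdot \{0, 1, 2, 3, 4, 5\} \equiv \{0, 2, 4, 0, 2, 4\} \pmod{6},
\]
% so 2b is never congruent to 1 modulo 6, and Z_6 is not a field.
```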
A couple of results you should try to prove:
- Prove that if $a + b = a + c$, then $b = c$.
- Note from above that the additive inverse is unique, i.e. if $a + b = 0$ and $a + c = 0$, then $b = c$.
- Prove that if $a \cdot b = a \cdot c$, then either $a = 0$ or $b = c$.
- Note from above that the multiplicative inverse of any non-zero element is unique, i.e. if $a \cdot b = 1$ and $a \cdot c = 1$, then $b = c$.
- Prove that $0 \cdot a = 0$ for all $a \in F$.
Vector space axioms
Definition. A vector space consists of a set $V$ of vectors, including a special vector $0$, over a field $F$ of scalars, and two binary operators $+$ and $\cdot$ which satisfy
- Commutativity: $u + v = v + u$
- Associativity: $(u + v) + w = u + (v + w)$ and $(ab) \cdot v = a \cdot (b \cdot v)$
- Distributivity: $a \cdot (u + v) = a \cdot u + a \cdot v$ and $(a + b) \cdot v = a \cdot v + b \cdot v$

for all $u, v, w \in V$ and $a, b \in F$. (Strictly speaking, we also require that $0$ be an additive identity, that every vector have an additive inverse, and that $1 \cdot v = v$; Axler's definition lists these explicitly.)
Note that $+$ can either represent the binary operator $+$ on $V$ (adding two vectors) or the binary operator $+$ on $F$ (adding two scalars). Similarly, $\cdot$ can either represent the binary operator $\cdot$ between a scalar and a vector or the binary operator $\cdot$ between two scalars.
In the interest of conciseness, we will not explicitly differentiate the
two. You will have to infer.
Furthermore, we denote the vector space as $V$ for brevity, even though it really also involves a field and two binary operators.
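As a concrete illustration (my example, not Axler's), $\mathbb{R}^2$ over the field $\mathbb{R}$ with the usual componentwise operations satisfies all of the above:

```latex
% Componentwise operations on R^2 over the field R:
\[
(x_1, y_1) + (x_2, y_2) = (x_1 + x_2,\ y_1 + y_2),
\qquad
a \cdot (x, y) = (a x,\ a y),
\]
% with zero vector (0, 0); each vector space axiom reduces to the corresponding
% field axiom of R applied coordinate by coordinate.
```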
A result you should try to prove:
- Prove that $0 \cdot v = 0$ for every $v \in V$ (here the $0$ on the left is the zero scalar and the $0$ on the right is the zero vector).
Notes
Existence of direct complement for infinite-dimensional vector spaces
Given a finite-dimensional vector space $V$ and a subspace $U$ of $V$, there exists some subspace $W$ of $V$ such that $V = U \oplus W$. (We can prove this by explicitly extending a basis of $U$ to a basis of $V$ and letting $W$ be the span of the added vectors.)
However, what if $V$ is infinite-dimensional? Then our explicit extension of a basis doesn't work, because an infinite-dimensional vector space cannot have a finite basis.
In fact, the existence of this direct complement is guaranteed not by an explicit construction but by the Axiom of Choice. (This statement being true, in other words, is equivalent to AoC.)
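To see the finite-dimensional argument in action, here is a small instance with numbers of my own choosing:

```latex
% Extend a basis of U to a basis of R^3 and let W be the span of the new vectors:
\[
U = \operatorname{span}\{(1, 1, 0)\},
\qquad
(1, 1, 0),\ (0, 1, 0),\ (0, 0, 1) \ \text{is a basis of } \mathbb{R}^3,
\]
\[
W = \operatorname{span}\{(0, 1, 0),\ (0, 0, 1)\}
\quad\Longrightarrow\quad
\mathbb{R}^3 = U \oplus W.
\]
```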
Chapter 2
- Suppose that $v_1, \ldots, v_n$ forms a basis of a vector space $V$. Prove that
  - replacing any element $v_i$ with $\lambda v_i$, where $\lambda$ is a non-zero scalar, or
  - replacing any element $v_i$ with $v_i + v_j$, where $j \neq i$,

  creates another basis of $V$. (A concrete instance in $\mathbb{R}^2$ is worked out after this list.)
- Prove that every vector in a vector space has a unique representation in any basis. More concretely, given $v$ in a vector space $V$ and a basis $v_1, \ldots, v_n$ of $V$, show that there is only one choice of scalars $a_1, \ldots, a_n$ such that $v = a_1 v_1 + \cdots + a_n v_n$.
- Prove that if $U$ is a subspace of a finite-dimensional vector space $V$ and $\dim U = \dim V$, then $U = V$.
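A quick sanity check on the basis-replacement operations above, with numbers of my own choosing rather than anything from Axler:

```latex
% Start from the standard basis of R^2 and apply the two replacement operations:
\[
\{(1,0),\ (0,1)\}
\ \xrightarrow{\ v_1 \mapsto 3v_1\ }\
\{(3,0),\ (0,1)\}
\ \xrightarrow{\ v_1 \mapsto v_1 + v_2\ }\
\{(3,1),\ (0,1)\}.
\]
% Each set is still a basis: the matrices with these vectors as columns have
% determinants 1, 3, and 3, so all three are invertible.
```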
Chapter 3
- Suppose $T \in \mathcal{L}(V)$ is invertible. Show that if $v_1, \ldots, v_n$ is a basis of $V$, then so is $Tv_1, \ldots, Tv_n$. (A small numerical instance follows this list.)
- Prove that for any invertible linear map $T$, $\lambda T$ is also invertible for all non-zero scalars $\lambda$.
- Prove that for any non-invertible linear map $T$, $\lambda T$ is also non-invertible for all scalars $\lambda$.
- Suppose $T \in \mathcal{L}(V, W)$ is a linear map. Show that there exists some map $S \in \mathcal{L}(W, V)$ such that $TST = T$.
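To make the first exercise concrete, here is a small instance with an operator of my own choosing (not from the text):

```latex
% An invertible operator on R^2 and its action on the standard basis:
\[
\mathcal{M}(T) = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix},
\qquad
T e_1 = (1, 0), \quad T e_2 = (1, 1),
\]
% and (1,0), (1,1) is again a basis of R^2, since the matrix with these columns
% has determinant 1.
```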
Matrix exercises (feel free to skip)
- Let $P$ be an invertible operator on $V$. Show that the matrix of $P^{-1} T P$ with respect to a basis $B$ is the matrix of $T$ with respect to the basis $PB$. (Here “the basis $PB$” means the basis of vectors $Pb_1, \ldots, Pb_n$, where $b_1, \ldots, b_n$ is the basis $B$.)
- Show that the product of two square upper-triangular matrices $A$ and $B$ is an upper-triangular square matrix $C = AB$. Also, show that the $i$th entry on the diagonal of $C$ is the product of the $i$th entries on the diagonals of $A$ and $B$. (A $2 \times 2$ instance is shown below.)
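For instance, with numbers of my own choosing:

```latex
% Product of two 2x2 upper-triangular matrices:
\[
\begin{pmatrix} 2 & 5 \\ 0 & 3 \end{pmatrix}
\begin{pmatrix} 4 & 1 \\ 0 & 7 \end{pmatrix}
=
\begin{pmatrix} 8 & 37 \\ 0 & 21 \end{pmatrix}.
\]
% The product is again upper triangular, and the diagonal entries 8 = 2 * 4 and
% 21 = 3 * 7 are the products of the corresponding diagonal entries of the factors.
```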
Chapter 5
- Suppose that $U_1, \ldots, U_n$ is a collection of invariant subspaces of $V$ under a linear map $T$. Show that $U_1 + \cdots + U_n$ is also invariant under $T$.
- (CMU 21341 Final, Spring 2011) Let $V$ be a finite-dimensional vector space over $\mathbb{C}$. Suppose $S, T \in \mathcal{L}(V)$ are such that $ST = TS$. Let $\lambda$ be an eigenvalue of $T$. Show that there exists $v \in V$ and $\mu \in \mathbb{C}$, with $v \neq 0$, such that both $Tv = \lambda v$ and $Sv = \mu v$.
Chapter 6
- Prove the Extended Triangle Inequality: $\|v_1 + v_2 + \cdots + v_n\| \le \|v_1\| + \|v_2\| + \cdots + \|v_n\|$, with equality if and only if all the $v_i$ are non-negative scalar multiples of some common vector.
- Prove that for any inner product space $V$ over the real or complex numbers, $\|u + v\|^2 + \|u - v\|^2 = 2\|u\|^2 + 2\|v\|^2$ (where $u, v$ are elements of $V$).
Chapter 7
Commentary
Theorem 7.25 states that any normal operator on a real inner product space can be expressed as a block diagonal matrix with blocks of size $1 \times 1$ or $2 \times 2$ (and the $2 \times 2$ blocks are scalar multiples of rotation matrices).
This statement isn't terrible (knowing the explicit representation of a linear map is useful, I guess), but there's a much more natural way to state it. Return to the Spectral Theorem: normal (respectively self-adjoint) operators over $\mathbb{C}$ (respectively $\mathbb{R}$) have an orthonormal basis of eigenvectors. A (somewhat contrived) reformulation of the Spectral Theorem is that $V$ can be decomposed into orthogonal invariant subspaces of dimension $1$. And every operator on a subspace of dimension $1$ is automatically normal (over $\mathbb{R}$ it is even self-adjoint), so we can say $V$ can be decomposed into normal invariant orthogonal subspaces of dimension $1$ (i.e. orthogonal invariant subspaces on which $T$ restricts to a normal operator) iff $T$ is normal over $\mathbb{C}$ (respectively self-adjoint over $\mathbb{R}$).
So the equivalent reformulation of 7.25 would be: $V$ can be decomposed into normal invariant orthogonal subspaces of dimension $1$ or $2$ iff $T$ is normal over $\mathbb{R}$.
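For reference, the matrix shape that 7.25 describes looks roughly like the following (the block notation here is mine, not Axler's):

```latex
% Block diagonal form of a normal operator on a real inner product space, with
% respect to a suitable orthonormal basis: 1x1 blocks are real eigenvalues and
% each 2x2 block is a positive multiple of a rotation matrix.
\[
\begin{pmatrix}
\lambda_1 &        &     &        \\
          & \ddots &     &        \\
          &        & A_1 &        \\
          &        &     & \ddots
\end{pmatrix},
\qquad
A_j = r_j \begin{pmatrix} \cos\theta_j & -\sin\theta_j \\ \sin\theta_j & \cos\theta_j \end{pmatrix},
\quad r_j > 0.
\]
```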
- (Corollary to 7.6) Show that $\operatorname{null} T = \operatorname{null} T^*$ if $T$ is normal. Also, show that the converse does not hold.
- Show that
.
- (Generalization of the uniqueness of the polar decomposition) If $R$ and $S$ are positive operators such that $\|Rv\| = \|Sv\|$ for all $v$, show that $R = S$.
Chapter 8
Commentary
Let’s restate 8.5 and 8.9 in their full forms, which Axler alludes to
later in the chapter.
- (Higher-powered 8.5) There exists some non-negative integer $m$ such that $\operatorname{null} T^{k} \subsetneq \operatorname{null} T^{k+1}$ for $k < m$ and $\operatorname{null} T^{k} = \operatorname{null} T^{m}$ for $k \ge m$.
- (Higher-powered 8.9) There exists some non-negative integer $m$ such that $\operatorname{range} T^{k} \supsetneq \operatorname{range} T^{k+1}$ for $k < m$ and $\operatorname{range} T^{k} = \operatorname{range} T^{m}$ for $k \ge m$.
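A concrete operator that exhibits both behaviours, with an example of my own choosing:

```latex
% The nilpotent shift on R^3:
\[
T(x, y, z) = (y, z, 0).
\]
% Its null spaces strictly grow, and its ranges strictly shrink, until they both
% stabilize at m = 3:
\[
\{0\} \subsetneq \operatorname{null} T \subsetneq \operatorname{null} T^2 \subsetneq \operatorname{null} T^3 = \operatorname{null} T^4 = \cdots = \mathbb{R}^3,
\]
\[
\mathbb{R}^3 \supsetneq \operatorname{range} T \supsetneq \operatorname{range} T^2 \supsetneq \operatorname{range} T^3 = \operatorname{range} T^4 = \cdots = \{0\}.
\]
```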
- Consider a vector space $V$ and a linear map $T \in \mathcal{L}(V)$. Given two invariant subspaces $U$ and $W$ whose intersection consists only of the $0$ vector, show that $\operatorname{null} T|_{U \oplus W} = \operatorname{null} T|_{U} \oplus \operatorname{null} T|_{W}$ and $\operatorname{range} T|_{U \oplus W} = \operatorname{range} T|_{U} \oplus \operatorname{range} T|_{W}$.
- Suppose $T$ has eigenvalues $\lambda_1, \ldots, \lambda_m$ with multiplicities $d_1, \ldots, d_m$. Then show that $T^k$ has eigenvalues $\lambda_1^k, \ldots, \lambda_m^k$ with multiplicities $d_1, \ldots, d_m$, for all positive integers $k$ (if some of the $\lambda_i^k$ coincide, their multiplicities add; see the diagonal example below).
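A diagonal illustration of the last exercise, with my own numbers:

```latex
% Eigenvalues of T and of T^2 for a diagonal operator on C^3:
\[
T = \begin{pmatrix} 2 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & 3 \end{pmatrix},
\qquad
T^2 = \begin{pmatrix} 4 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 9 \end{pmatrix}.
\]
% T has eigenvalues 2, -2, 3, each with multiplicity 1, while T^2 has eigenvalues
% 4 and 9; the multiplicity of 4 is 2 because 2^2 and (-2)^2 coincide.
```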