Notes on Linear Algebra
A Note to Students
Concepts and computations are equally important
Keep notebooks for calculations and what you are learning
- Read and reread
- Linear algebra is a language that needs a lot of work
Study Guide
Construct links between ideas
Read the Study Guide after solving the problems
Advice
Keep track of which objects are which:
- Pay attention to case and font.
- Pay attention to subscripts and superscripts.
- Repeated letters in the same font and case are significant.
- Imagine shapes and pictures when reading.
Read the surrounding text carefully.
Chapter 1: Linear Equations in Linear Algebra
1.1 Systems of Linear Equations
Definitions
A linear equation in the variables $x_1, \dots, x_n$ is an equation that can be written in the form
$$a_1 x_1 + a_2 x_2 + \cdots + a_n x_n = b$$
where $b$ and the coefficients $a_1, \dots, a_n$ are real or complex numbers.
A system of linear equations (or a linear system) is a collection of one or more linear equations involving the same variables—say, $x_1, \dots, x_n$.
A solution of the system is a list $(s_1, s_2, \dots, s_n)$ of numbers that makes each equation a true statement when the values $s_1, \dots, s_n$ are substituted for $x_1, \dots, x_n$, respectively.
The set of all possible solutions is called the solution set of the linear system.
Two linear systems are called equivalent if they have the same solution set.
==A system of linear equations has==
- no solution, or
- exactly one solution, or
- infinitely many solutions.
A system of linear equations is said to be consistent if it has either one solution or infinitely many solutions; a system is inconsistent if it has no solution.
Matrix Notation
- The matrix containing the coefficients of the variables is called the coefficient matrix (or matrix of coefficients).
- The matrix consisting of the coefficient matrix together with the column of constants from the right sides of the equations is called the augmented matrix.
- An $m \times n$ matrix is a rectangular array of numbers with $m$ rows and $n$ columns.
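As a concrete sketch, here is a made-up system encoded in NumPy; the system itself is hypothetical, chosen only to show the two matrices:

```python
import numpy as np

# Hypothetical system:
#    x1 - 2*x2 +   x3 = 0
#         2*x2 - 8*x3 = 8
#  5*x1        - 5*x3 = 10
A = np.array([[1, -2, 1],
              [0, 2, -8],
              [5, 0, -5]])          # coefficient matrix (3 x 3)
b = np.array([[0], [8], [10]])      # column of constants
aug = np.hstack([A, b])             # augmented matrix (3 x 4)
print(aug.shape)                    # (3, 4): m = 3 rows, n = 4 columns
```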
Solving a Linear System
ELEMENTARY ROW OPERATIONS
- (Replacement) Replace one row by the sum of itself and a multiple of another row.
- (Interchange) Interchange two rows.
- (Scaling) Multiply all entries in a row by a nonzero constant.
Two matrices are called row equivalent if there is a sequence of elementary row operations that transforms one matrix into the other.
Row operations are reversible.
Statement:
If the augmented matrices of two linear systems are row equivalent, then the two systems have the same solution set.
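The three elementary row operations, and their reversibility, can be sketched on a made-up augmented matrix (NumPy row assignments; the matrix is hypothetical):

```python
import numpy as np

# Augmented matrix of a hypothetical system
M = np.array([[0., 1., -4., 8.],
              [2., -3., 2., 1.]])

M[[0, 1]] = M[[1, 0]]          # (Interchange) swap rows 1 and 2
M[0] = 0.5 * M[0]              # (Scaling) multiply row 1 by the nonzero constant 1/2

before = M.copy()
M[1] = M[1] + 4 * M[0]         # (Replacement) add 4 times row 1 to row 2
M[1] = M[1] - 4 * M[0]         # ...and undo it: row operations are reversible
assert np.allclose(M, before)
```

Because each operation can be undone by another operation of the same kind, row-equivalent augmented matrices describe systems with the same solution set.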
Existence and Uniqueness Question
TWO FUNDAMENTAL QUESTIONS ABOUT A LINEAR SYSTEM
- Is the system consistent; that is, does at least one solution exist?
- If a solution exists, is it the only one; that is, is the solution unique?
1.2 Row Reduction and Echelon Forms
DEFINITION
A rectangular matrix is in echelon form (or row echelon form) if it has the following three properties:
- All nonzero rows are above any rows of all zeros.
- Each leading entry of a row is in a column to the right of the leading entry of the row above it.
- All entries in a column below a leading entry are zeros.
If a matrix in echelon form satisfies the following additional conditions, then it is in reduced echelon form (or reduced row echelon form):
- The leading entry in each nonzero row is 1.
- Each leading 1 is the only nonzero entry in its column.
An echelon matrix (respectively, reduced echelon matrix) is one that is in echelon form (respectively, reduced echelon form).
THEOREM 1
Uniqueness of the Reduced Echelon Form
Each matrix is row equivalent to one and only one reduced echelon matrix.
If a matrix $A$ is row equivalent to an echelon matrix $U$, we call $U$ an echelon form (or row echelon form) of $A$; if $U$ is in reduced echelon form, we call $U$ the reduced echelon form of $A$.
Pivot Positions
DEFINITION
A pivot position in a matrix $A$ is a location in $A$ that corresponds to a leading $1$ in the reduced echelon form of $A$. A pivot column is a column of $A$ that contains a pivot position.
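SymPy can compute the reduced echelon form exactly and report the pivot columns directly. A small hypothetical matrix (its second column is twice its first, so one column fails to be a pivot column):

```python
from sympy import Matrix

# A small hypothetical matrix (second column = 2 x first column)
A = Matrix([[1, 2, 3],
            [2, 4, 7],
            [3, 6, 10]])

R, pivot_cols = A.rref()     # exact reduced echelon form over the rationals
print(pivot_cols)            # (0, 2): columns 1 and 3 contain pivot positions
```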
The Row Reduction Algorithm
Step 1
Begin with the leftmost nonzero column. This is a pivot column. The pivot position is at the top.
Step 2
Select a nonzero entry in the pivot column as a pivot. If necessary, interchange rows to move this entry into the pivot position.
Step 3
Use row replacement operations to create zeros in all positions below the pivot.
Step 4
Cover (or ignore) the row containing the pivot position and cover all rows, if any, above it. Apply steps 1–3 to the submatrix that remains. Repeat the process until there are no more nonzero rows to modify.
Step 5
Beginning with the rightmost pivot and working upward and to the left, create zeros above each pivot. If a pivot is not 1, make it 1 by a scaling operation.
The combination of steps 1–4 is called the forward phase of the row reduction algorithm. Step 5 is called the backward phase.
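The five steps above can be turned into a short program. This is a teaching sketch, not a library routine: `row_reduce` is a made-up name, it works in floating point with a tolerance for "zero," and step 2's pivot choice is the common largest-magnitude rule.

```python
import numpy as np

def row_reduce(M, tol=1e-12):
    """Reduce M to reduced echelon form following steps 1-5."""
    M = np.array(M, dtype=float)
    m, n = M.shape
    pivots = []
    row = 0
    # Forward phase (steps 1-4)
    for col in range(n):
        if row == m:
            break
        # Steps 1-2: pick a nonzero entry at or below `row` as the pivot
        r = row + int(np.argmax(np.abs(M[row:, col])))
        if abs(M[r, col]) < tol:
            continue                       # no pivot in this column
        M[[row, r]] = M[[r, row]]          # interchange into pivot position
        # Step 3: create zeros in all positions below the pivot
        for i in range(row + 1, m):
            M[i] -= (M[i, col] / M[row, col]) * M[row]
        pivots.append((row, col))
        row += 1                           # Step 4: move on to the submatrix
    # Backward phase (step 5): scale pivots to 1, create zeros above them
    for r, c in reversed(pivots):
        M[r] /= M[r, c]
        for i in range(r):
            M[i] -= M[i, c] * M[r]
    return M, [c for _, c in pivots]

R, pivot_cols = row_reduce([[0, 3, -6, 6, 4, -5],
                            [3, -7, 8, -5, 8, 9],
                            [3, -9, 12, -9, 6, 15]])
print(pivot_cols)   # [0, 1, 4]: pivot positions in columns 1, 2, and 5
```

By Theorem 1 the result is the same reduced echelon matrix regardless of which nonzero entry step 2 selects.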
Solutions of Linear Systems
Parametric Descriptions of Solution
Descriptions in which the basic variables are expressed in terms of the free variables, such as
$$x_1 = 5 + 4x_3, \qquad x_2 = 3 - 7x_3, \qquad x_3 \text{ free},$$
are parametric descriptions of solution sets: the free variables act as parameters.
Whenever a system is inconsistent, the solution set is empty, even when the system has free variables. In this case, the solution set has no parametric representation.
Back-Substitution
In computer algorithms for solving linear systems, the backward phase of row reduction is used in place of back-substitution.
Existence and Uniqueness Questions
THEOREM 2
Existence and Uniqueness Theorem
A linear system is consistent if and only if the rightmost column of the augmented matrix is not a pivot column—that is, if and only if an echelon form of the augmented matrix has no row of the form
$$\begin{bmatrix} 0 & \cdots & 0 & b \end{bmatrix} \quad \text{with } b \text{ nonzero.}$$
If a linear system is consistent, then the solution set contains either (i) a unique solution, when there are no free variables, or (ii) infinitely many solutions, when there is at least one free variable.
USING ROW REDUCTION TO SOLVE A LINEAR SYSTEM
1. Write the augmented matrix of the system.
2. Use the row reduction algorithm to obtain an equivalent augmented matrix in echelon form. Decide whether the system is consistent. If there is no solution, stop; otherwise, go to the next step.
3. Continue row reduction to obtain the reduced echelon form.
4. Write the system of equations corresponding to the matrix obtained in step 3.
5. Rewrite each nonzero equation from step 4 so that its one basic variable is expressed in terms of any free variables appearing in the equation.
Note: The symbol $\sim$ between matrices denotes row equivalence.
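SymPy's `linsolve` carries out this whole procedure on an augmented matrix and returns the solution with basic variables expressed in terms of the free ones. The augmented matrix below is a made-up example that is already in reduced echelon form:

```python
from sympy import Matrix, linsolve, symbols

x1, x2, x3 = symbols('x1 x2 x3')

# Hypothetical consistent system (reduced echelon form):
#   x1       - 5*x3 = 1
#        x2  +   x3 = 4      (x3 free)
aug = Matrix([[1, 0, -5, 1],
              [0, 1, 1, 4]])

sol = linsolve(aug, (x1, x2, x3))
print(sol)   # one solution tuple, parametrized by the free variable x3
```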
1.3 Vector Equations
Vectors in $\mathbb{R}^n$
A matrix with only one column is called a column vector, or simply a vector; for example,
$$\textbf{u} = \begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{bmatrix}.$$
The set of all vectors with $n$ entries is denoted by $\mathbb{R}^n$ (read “r-n”).
Two vectors in $\mathbb{R}^n$ are equal if and only if their corresponding entries are equal.
The vector whose entries are all zero is called the zero vector and is denoted by 0.
Linear Combinations
Given vectors $\textbf{v}_1, \textbf{v}_2, \dots, \textbf{v}_\textit{p}$ in $\mathbb{R} ^ n$ and given scalars $c_1, c_2, \dots, c_\textit{p}$, the vector $\textbf{y}$ defined by
$$\textbf{y} = c_1\textbf{v}_1 + c_2\textbf{v}_2 + \cdots + c_p\textbf{v}_\textit{p}$$
is called a linear combination of $\textbf{v}_1, \textbf{v}_2, \dots, \textbf{v}_\textit{p}$ with weights $c_1, c_2, \dots, c_\textit{p}$.
A vector equation
$$x_1\textbf{a}_1 + x_2\textbf{a}_2 + \cdots + x_n\textbf{a}_n = \textbf{b}$$
has the same solution set as the linear system whose augmented matrix is
$$\begin{bmatrix} \textbf{a}_1 & \textbf{a}_2 & \cdots & \textbf{a}_n & \textbf{b} \end{bmatrix}.$$
In particular, $\textbf{b}$ can be generated by a linear combination of $\textbf{a}_1, \dots, \textbf{a}_n$ if and only if there exists a solution to the linear system corresponding to this augmented matrix.
==Tips==: One of the key ideas in linear algebra is to study the set of all vectors that can be generated or written as a linear combination of a fixed set $\{\textbf{v}_1, \textbf{v}_2, \dots, \textbf{v}_\textit{p}\}$ of vectors.
DEFINITION
If $\textbf{v}_1, \dots, \textbf{v}_\textit{p}$ are in $\mathbb{R} ^ n$ , then the set of all linear combinations of $\textbf{v}_1, \textbf{v}_2, \dots, \textbf{v}_\textit{p}$ is denoted by Span $\{\textbf{v}_1, \dots, \textbf{v}_\textit{p}\}$ and is called the subset of $\mathbb{R} ^ n$ spanned (or generated) by $\textbf{v}_1, \dots, \textbf{v}_\textit{p}$. That is, Span $\{\textbf{v}_1, \dots, \textbf{v}_\textit{p}\}$ is the collection of all vectors that can be written in the form
$$c_1\textbf{v}_1 + c_2\textbf{v}_2 + \cdots + c_p\textbf{v}_\textit{p}$$
with $c_1, \dots, c_\textit{p}$ scalars.
Asking whether a vector $\textbf{b}$ is in Span $\{\textbf{v}_1, \dots, \textbf{v}_\textit{p}\}$ amounts to asking whether the vector equation
$$x_1\textbf{v}_1 + x_2\textbf{v}_2 + \cdots + x_p\textbf{v}_\textit{p} = \textbf{b}$$
has a solution, or, equivalently, asking whether the linear system with augmented matrix $\begin{bmatrix} \textbf{v}_1& \cdots& \textbf{v}_\textit{p}& \textbf{b} \end{bmatrix}$ has a solution.
The $\textbf{0}$ vector must be in Span $\{\textbf{v}_1, \dots, \textbf{v}_\textit{p}\}$.
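The span-membership test can be carried out mechanically: row reduce the augmented matrix and check whether the last column is a pivot column. The vectors below are a made-up example:

```python
from sympy import Matrix

v1 = Matrix([1, -2, -5])
v2 = Matrix([2, 5, 6])
b = Matrix([7, 4, -3])

# b is in Span{v1, v2} iff the system with augmented matrix
# [v1 v2 b] is consistent, i.e. the last column is not a pivot column.
aug = Matrix.hstack(v1, v2, b)
_, pivots = aug.rref()
in_span = (aug.cols - 1) not in pivots
print(in_span)   # True: here b = 3*v1 + 2*v2
```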
1.4 The Matrix Equation $Ax = b$
DEFINITION
If $A$ is an $m \times n$ matrix, with columns $\textbf{a}_1, \dots, \textbf{a}_n$, and if $\textbf{x}$ is in $\mathbb{R} ^ n$ , then the product of $A$ and $\textbf{x}$, denoted by $A\textbf{x}$, is the linear combination of the columns of $A$ using the corresponding entries in $\textbf{x}$ as weights; that is,
$$A\textbf{x} = \begin{bmatrix} \textbf{a}_1 & \textbf{a}_2 & \cdots & \textbf{a}_n \end{bmatrix} \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} = x_1\textbf{a}_1 + x_2\textbf{a}_2 + \cdots + x_n\textbf{a}_n$$
Note that $A\textbf{x}$ is defined only if the number of columns of A equals the number of entries in $\textbf{x}$.
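The column-combination definition can be checked numerically against the built-in matrix-vector product; the matrix and vector below are made-up values:

```python
import numpy as np

A = np.array([[1, 2, -1],
              [0, -5, 3]])
x = np.array([4, 3, 7])

# A x = 4*a1 + 3*a2 + 7*a3: a weighted sum of the columns of A
by_columns = sum(x[j] * A[:, j] for j in range(A.shape[1]))
assert np.array_equal(by_columns, A @ x)
print(A @ x)   # [3 6]
```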
An equation that has the form $A\textbf{x} = \textbf{b}$ is called a matrix equation.
THEOREM 3
If $A$ is an $m \times n$ matrix, with columns $\textbf{a}_1, \dots, \textbf{a}_n$, and if $\textbf{b}$ is in $\mathbb{R}^m$, the matrix equation
$$A\textbf{x} = \textbf{b}$$
has the same solution set as the vector equation
$$x_1\textbf{a}_1 + x_2\textbf{a}_2 + \cdots + x_n\textbf{a}_n = \textbf{b}$$
which, in turn, has the same solution set as the system of linear equations whose augmented matrix is
$$\begin{bmatrix} \textbf{a}_1 & \textbf{a}_2 & \cdots & \textbf{a}_n & \textbf{b} \end{bmatrix}.$$
Existence of Solutions
FACT
The equation $A \textbf{x} = \textbf{b}$ has a solution if and only if $\textbf{b}$ is a linear combination of the columns of $A$.
THEOREM 4
Let $A$ be an $m \times n$ matrix. Then the following statements are logically equivalent. That is, for a particular $A$, either they are all true statements or they are all false.
a. For each $\textbf{b}$ in $\mathbb{R}^m$, the equation $A \textbf{x} = \textbf{b}$ has a solution.
b. Each $\textbf{b}$ in $\mathbb{R} ^ m$ is a linear combination of the columns of $A$.
c. The columns of $A$ span $\mathbb{R} ^ m$.
d. $A$ has a pivot position in every row.
Warning: Theorem 4 is about a coefficient matrix, not an augmented matrix. If an augmented matrix $\begin{bmatrix} A& \textbf{b} \end{bmatrix}$ has a pivot position in every row, then the equation $A \textbf{x} = \textbf{b}$ may or may not be consistent.
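Statement (d) of Theorem 4 gives a mechanical test on the coefficient matrix: count the pivots and compare with the number of rows. The matrix below is a made-up example whose columns do not span $\mathbb{R}^3$:

```python
from sympy import Matrix

# A hypothetical 3 x 3 coefficient matrix
A = Matrix([[1, 3, 4],
            [-4, 2, -6],
            [-3, -2, -7]])

_, pivots = A.rref()
spans = len(pivots) == A.rows    # statement (d): a pivot position in every row?
print(spans)   # False: for some b in R^3 the equation A x = b is inconsistent
```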
Properties of the Matrix–Vector Product $A \textbf{x}$
THEOREM 5
If $A$ is an $m \times n$ matrix, $\textbf{u}$ and $\textbf{v}$ are vectors in $\mathbb{R} ^ n$ , and $c$ is a scalar, then:
a. $A (\textbf{u} + \textbf{v}) = A \textbf{u} + A \textbf{v}$
b. $A (c \textbf{u}) = c(A \textbf{u})$.
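Both properties can be spot-checked with random integer data (integer arithmetic is exact, so the equalities are literal); the sizes and the scalar are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(3, 4))   # random m x n matrix
u = rng.integers(-5, 5, size=4)
v = rng.integers(-5, 5, size=4)
c = 7

assert np.array_equal(A @ (u + v), A @ u + A @ v)   # property (a)
assert np.array_equal(A @ (c * u), c * (A @ u))     # property (b)
```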
1.5 Solution Sets of Linear Systems
Homogeneous Linear Systems
A system of linear equations is said to be homogeneous if it can be written in the form $A \textbf{x} = \textbf{0}$, where $A$ is an $m \times n$ matrix and $\textbf{0}$ is the zero vector in $\mathbb{R} ^ m$. Such a system $A \textbf{x} = \textbf{0}$ always has at least one solution, namely $\textbf{x} = \textbf{0}$ (the zero vector in $\mathbb{R} ^ n$). This zero solution is usually called the trivial solution. For a given equation $A \textbf{x} = \textbf{0}$, the important question is whether there exists a nontrivial solution, that is, a nonzero vector $\textbf{x}$ that satisfies $A \textbf{x} = \textbf{0}$.
The homogeneous equation $A \textbf{x} = \textbf{0}$ has a nontrivial solution if and only if the equation has at least one free variable.
The solution set of a homogeneous equation $A \textbf{x} = \textbf{0}$ can always be expressed explicitly as Span $\{\textbf{v}_1, \dots, \textbf{v}_\textit{p}\}$ for suitable vectors $\textbf{v}_1, \dots, \textbf{v}_\textit{p}$.
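SymPy's `nullspace` method produces exactly such spanning vectors $\textbf{v}_1, \dots, \textbf{v}_p$ for the solution set of $A\textbf{x} = \textbf{0}$; the matrix below is a made-up example with one free variable:

```python
from sympy import Matrix

# Hypothetical coefficient matrix
A = Matrix([[3, 5, -4],
            [-3, -2, 4],
            [6, 1, -8]])

# nullspace() returns vectors v1, ..., vp with
#   solution set of A x = 0  =  Span{v1, ..., vp}
basis = A.nullspace()
assert all(A * v == Matrix([0, 0, 0]) for v in basis)
print(len(basis))   # 1: one free variable, so the solution set is a line
```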
Parametric Vector Form
The original equation
$$A \textbf{x} = \textbf{0}$$
is an implicit description of the solution set. Solving this equation amounts to finding an explicit description of the set as the span of suitable vectors $\textbf{v}_1, \dots, \textbf{v}_\textit{p}$. An equation of the form
$$\textbf{x} = t_1\textbf{v}_1 + \cdots + t_p\textbf{v}_\textit{p}$$
is called a parametric vector equation of the solution set. Sometimes such an equation is written as
$$\textbf{x} = t_1\textbf{v}_1 + \cdots + t_p\textbf{v}_\textit{p} \qquad (t_1, \dots, t_p \text{ in } \mathbb{R})$$
to emphasize that the parameters vary over all real numbers.
Whenever a solution set is described explicitly with vectors, we say that the solution is in parametric vector form.
Solutions of Nonhomogeneous Systems
THEOREM 6
Suppose the equation $A \textbf{x} = \textbf{b}$ is consistent for some given $\textbf{b}$, and let $\textbf{p}$ be a solution. Then the solution set of $A \textbf{x} = \textbf{b}$ is the set of all vectors of the form $\textbf{w} = \textbf{p} + \textbf{v}_h$, where $\textbf{v}_h$ is any solution of the homogeneous equation $A \textbf{x} = \textbf{0}$.
Theorem 6 says that if $A \textbf{x} = \textbf{b}$ has a solution, then the solution set is obtained by translating the solution set of $A \textbf{x} = \textbf{0}$, using any particular solution $\textbf{p}$ of $A \textbf{x} = \textbf{b}$ for the translation.
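Theorem 6 can be verified directly: take a particular solution $\textbf{p}$, add any multiple of a homogeneous solution, and check that the result still solves $A\textbf{x} = \textbf{b}$. The matrix, right-hand side, and particular solution below are assumed example values, not derived here:

```python
from sympy import Matrix

A = Matrix([[3, 5, -4],
            [-3, -2, 4],
            [6, 1, -8]])
b = Matrix([7, -1, -4])

# A particular solution p of A x = b (assumed for this example, then verified)
p = Matrix([-1, 2, 0])
assert A * p == b

# Theorem 6: p plus any solution v_h of A x = 0 also solves A x = b
for v_h in A.nullspace():
    assert A * (p + 3 * v_h) == b
```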
Warning: Theorem 6 applies only to an equation $A \textbf{x} = \textbf{b}$ that has at least one nonzero solution $\textbf{p}$. When $A \textbf{x} = \textbf{b}$ has no solution, the solution set is empty.
ALGORITHM
WRITING A SOLUTION SET (OF A CONSISTENT SYSTEM) IN PARAMETRIC VECTOR FORM
- Row reduce the augmented matrix to reduced echelon form.
- Express each basic variable in terms of any free variables appearing in an equation.
- Write a typical solution $\textbf{x}$ as a vector whose entries depend on the free variables, if any.
- Decompose $\textbf{x}$ into a linear combination of vectors (with numeric entries) using the free variables as parameters.
1.6 Applications of Linear Systems
1.7 Linear Independence
Trivial solution
The solution in which every variable is equal to 0.
DEFINITION
An indexed set of vectors $\{\textbf{v}_1,\dots,\textbf{v}_\textit{p}\}$ in $\mathbb{R}^n$ is said to be linearly independent if the vector equation
$$x_1\textbf{v}_1 + x_2\textbf{v}_2 + \cdots + x_p\textbf{v}_\textit{p} = \textbf{0}$$
has only the trivial solution. If not, the set is said to be linearly dependent; that is, there exist weights $c_1,\dots,c_\textit{p}$, not all zero, such that
$$c_1\textbf{v}_1 + c_2\textbf{v}_2 + \cdots + c_p\textbf{v}_\textit{p} = \textbf{0}.$$
This equation is called a linear dependence relation among $\textbf{v}_1,\ \dots\ ,\textbf{v}_\textit{p}$ when the weights are not all zero.
For brevity, we may say that $\textbf{v}_1,\dots,\textbf{v}_\textbf{p}$ are linearly dependent when we mean that $\{\textbf{v}_1,\dots,\textbf{v}_\textit{p}\}$ is a linearly dependent set.
Linear Independence of Matrix Columns
The columns of a matrix $\textit{A}$ are linearly independent if and only if the equation $\textit{A}\textbf{x}=\textbf{0}$ has only the trivial solution.
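This criterion translates directly into a nullspace check: the columns are independent exactly when $A\textbf{x} = \textbf{0}$ has no nonzero solution. The three vectors below are a made-up example that happens to be dependent:

```python
from sympy import Matrix

v1 = Matrix([1, 2, 3])
v2 = Matrix([4, 5, 6])
v3 = Matrix([2, 1, 0])

A = Matrix.hstack(v1, v2, v3)
# Columns are independent iff A x = 0 has only the trivial solution,
# i.e. the nullspace of A contains no nonzero vectors.
independent = len(A.nullspace()) == 0
print(independent)   # False: this particular set is linearly dependent
```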
Sets of One or Two or More Vectors
One:
A set containing a single vector $\textbf{v}$ is linearly independent if and only if $\textbf{v} \neq \textbf{0}$; the set $\{\textbf{0}\}$ is linearly dependent.
Two:
A set of two vectors $\{\textbf{v}_1,\textbf{v}_2\}$ is linearly dependent if and only if one of the vectors is a multiple of the other; equivalently, the set is linearly independent if and only if neither vector is a multiple of the other.
More:
THEOREM 7
Characterization of Linearly Dependent Sets
An indexed set $S=\{\textbf{v}_1,\dots,\textbf{v}_\textit{p}\}$ of two or more vectors is linearly dependent if and only if at least one of the vectors in $S$ is a linear combination of the others. In fact, if $S$ is linearly dependent and $\textbf{v}_1\neq\textbf{0}$ then some $\textbf{v}_j$(with $j>1$) is a linear combination of the preceding vectors, $\textbf{v}_1,\dots,\textbf{v}_{j-1}$.
THEOREM 8
If a set contains more vectors than there are entries in each vector, then the set is linearly dependent. That is, any set $\{\textbf{v}_1,\dots,\textbf{v}_\textit{p}\}$ in $\mathbb{R}^n$ is linearly dependent if $p>n$.
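Theorem 8 predicts dependence from the shape alone. With four made-up vectors in $\mathbb{R}^3$ ($p = 4 > n = 3$), a nonzero nullspace vector, and hence a linear dependence relation, must exist:

```python
from sympy import Matrix

# Four hypothetical vectors in R^3: p = 4 > n = 3, so Theorem 8
# guarantees linear dependence before any computation.
vs = [Matrix([2, 1, 0]), Matrix([0, 1, 3]),
      Matrix([1, 1, 1]), Matrix([4, -1, 2])]
A = Matrix.hstack(*vs)

# A nonzero nullspace vector gives the weights of a dependence relation
assert len(A.nullspace()) > 0
```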
THEOREM 9
If a set $S=\{\textbf{v}_1,\dots,\textbf{v}_\textit{p}\}$ in $\mathbb{R}^n$ contains the zero vector, then the set is linearly dependent.
1.8 Introduction to Linear Transformations
1.9 The Matrix of a Linear Transformation
1.10 Linear Models in Business, Science, and Engineering
Chapter 2: Matrix Algebra
2.1 Matrix Operations
2.2 The Inverse of a Matrix
2.3 Characterizations of Invertible Matrices
2.4 Partitioned Matrices
2.5 Matrix Factorizations
2.6 The Leontief Input-Output Model
2.7 Applications to Computer Graphics
2.8 Subspaces of $\mathbb{R}^n$
2.9 Dimension and Rank
Chapter 3: Determinants
3.1 Introduction to Determinants
3.2 Properties of Determinants
3.3 Cramer’s Rule, Volume, and Linear Transformations
(Complaint: bro, could you translate this for me? Writing the notes entirely in English feels great, but reading them back is genuinely exhausting.)
The lesson here is that note-taking is not pure copying; it has to include your own understanding. The brain doesn't store these ideas in English anyway, so writing this way is no different from transcribing the book.