Chapter 1
1.1 Vector
Origin of trigonometric formulas (dot product; $\theta$ is the angle between two vectors):
$$\cos\theta=\cos(\beta-\alpha)=\cos\alpha\cos\beta+\sin\alpha\sin\beta$$
![[Pasted image 20240924101929.png|200]]
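As a quick numeric check of the angle formula (a Python sketch; the angle values are arbitrary examples):

```python
import math

# Two unit vectors at angles alpha and beta (illustrative values).
alpha, beta = 0.3, 1.1
v = (math.cos(alpha), math.sin(alpha))
w = (math.cos(beta), math.sin(beta))

# For unit vectors, the dot product equals cos(theta) with theta = beta - alpha.
dot = v[0] * w[0] + v[1] * w[1]
assert math.isclose(dot, math.cos(beta - alpha))
```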
Right multiplication performs column operations; left multiplication performs row operations.
![[Pasted image 20241209191936.png|500]]
![[Pasted image 20241209192003.png|500]]
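The row-vs-column distinction is easy to see with an elementary permutation matrix (a NumPy sketch; the matrices are illustrative):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
P = np.array([[0, 1],
              [1, 0]])   # elementary permutation matrix

left = P @ A    # left multiplication: swaps the ROWS of A
right = A @ P   # right multiplication: swaps the COLUMNS of A
```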
1.2 Dot Product
Inequality formulas:
$$\begin{aligned}&|v\cdot w|\leq\|v\|\,\|w\| &&\text{(Cauchy-Schwarz)}\\&\|v+w\|\leq\|v\|+\|w\| &&\text{(triangle inequality)}\end{aligned}$$
For $v=(a,b)$, $w=(b,a)$, Cauchy-Schwarz gives $2ab\leq a^2+b^2$. Setting $x=a^2$, $y=b^2$ gives $\sqrt{xy}\leq\frac{x+y}{2}$ (geometric mean $\leq$ arithmetic mean).
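Both inequalities, and the AM-GM consequence, can be checked numerically (a sketch with arbitrary example vectors):

```python
import math

v = (3.0, -4.0)
w = (2.0, 1.0)

dot = sum(vi * wi for vi, wi in zip(v, w))   # 3*2 + (-4)*1 = 2
norm = lambda u: math.sqrt(sum(x * x for x in u))

# Cauchy-Schwarz: |v.w| <= ||v|| ||w||
assert abs(dot) <= norm(v) * norm(w)

# Triangle inequality: ||v + w|| <= ||v|| + ||w||
s = tuple(vi + wi for vi, wi in zip(v, w))
assert norm(s) <= norm(v) + norm(w)

# AM-GM via v = (a, b), w = (b, a): sqrt(xy) <= (x + y)/2 with x = a^2, y = b^2
a, b = 5.0, 2.0
assert math.sqrt(a * a * b * b) <= (a * a + b * b) / 2
```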
1.3 Matrices
Difference approximations for $x(t)=t^2$ (the true derivative is $2t$):
Forward: $x(t+1)-x(t)=2t+1$
Backward: $x(t)-x(t-1)=2t-1$
Centered: $\frac{x(t+1)-x(t-1)}{2}=2t$
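The three difference formulas, sketched at an example point $t=5$ (note the centered difference is exact for a quadratic):

```python
def x(t):
    return t ** 2

t = 5
forward = x(t + 1) - x(t)             # 2t + 1 = 11
backward = x(t) - x(t - 1)            # 2t - 1 = 9
centered = (x(t + 1) - x(t - 1)) / 2  # 2t = 10, exactly the derivative
```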
Chapter 2
Row picture: hyperplanes meeting at a single point.
Column picture: a combination of the column vectors produces the target vector $b$.
elimination ---- back substitution
When $A$ is lower triangular, elimination leaves only the diagonal of $A$ in $U$.
$$\begin{bmatrix} 3 & 0 & 0 \\ 6 & 2 & 0 \\ 9 & -2 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 3 \\ 8 \\ 9 \end{bmatrix} \quad\rightarrow\quad \begin{bmatrix} 3 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 3 \\ 2 \\ 2 \end{bmatrix}$$
$A$ is factored into $A=LU$: $U$ is upper triangular, $L$ is lower triangular.
Notice how the second pivot appears: $2-\frac{6}{3}\cdot 0=2$ (multiplier $\frac{6}{3}$ = entry below pivot divided by pivot).
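The elimination above can be sketched in NumPy (forward elimination on the lower-triangular example system; assumes nonzero pivots):

```python
import numpy as np

A = np.array([[3., 0., 0.],
              [6., 2., 0.],
              [9., -2., 1.]])
b = np.array([3., 8., 9.])

# Forward elimination: zero out the entries below each pivot.
U = A.copy()
c = b.copy()
n = len(b)
for k in range(n):
    for i in range(k + 1, n):
        m = U[i, k] / U[k, k]    # multiplier: entry below pivot / pivot
        U[i, k:] -= m * U[k, k:]
        c[i] -= m * c[k]

# Since A is lower triangular, U ends up diagonal: diag(3, 2, 1), c = (3, 2, 2).
x = c / np.diag(U)
```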
> [!note] This is elimination without row exchanges.
First we can conclude that $EA=U$, so $A=LU$ with $L=E^{-1}$.
$A=PLU$: $P$ is a permutation matrix ----- row exchanges to obtain nonzero pivots.
$A=LDU$: $D$ is a diagonal matrix. When $A$ is symmetric, $U=L^{T}$, so $A=LDL^{T}$.
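A minimal LU factorization sketch without row exchanges (Doolittle-style; assumes all pivots are nonzero), run on the example matrix from above:

```python
import numpy as np

def lu_no_pivot(A):
    """LU factorization without row exchanges (assumes nonzero pivots)."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for k in range(n):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]       # store the multiplier l_ik
            U[i, k:] -= L[i, k] * U[k, k:]    # eliminate below the pivot
    return L, U

A = np.array([[3., 0., 0.],
              [6., 2., 0.],
              [9., -2., 1.]])
L, U = lu_no_pivot(A)
assert np.allclose(L @ U, A)
```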
Gauss-Jordan
Multiply $\begin{bmatrix} A & I \end{bmatrix}$ by $A^{-1}$ to get $\begin{bmatrix} I & A^{-1} \end{bmatrix}$.
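The Gauss-Jordan idea can be sketched directly: row-reduce $[A \;\; I]$ until the left block becomes $I$ (a minimal version with no pivoting; the example matrix is illustrative):

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Row-reduce [A | I] to [I | A^{-1}] (no pivoting; assumes nonzero pivots)."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])
    for k in range(n):
        M[k] /= M[k, k]                  # scale the pivot row so the pivot is 1
        for i in range(n):
            if i != k:
                M[i] -= M[i, k] * M[k]   # clear column k above AND below the pivot
    return M[:, n:]

A = np.array([[2., 1.],
              [5., 3.]])
Ainv = gauss_jordan_inverse(A)
assert np.allclose(A @ Ainv, np.eye(2))
```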
Diagonally dominant matrices are invertible: each $|a_{ii}|$ dominates its row, $|a_{ii}| > \sum_{j\neq i}|a_{ij}|$.
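A small checker for strict row diagonal dominance (a sketch; the test matrix is illustrative):

```python
import numpy as np

def is_diagonally_dominant(A):
    """Strict row diagonal dominance: |a_ii| > sum of |a_ij| for j != i."""
    d = np.abs(np.diag(A))
    off = np.abs(A).sum(axis=1) - d   # off-diagonal row sums
    return bool(np.all(d > off))

A = np.array([[4., 1., -1.],
              [1., 5., 2.],
              [0., 1., 3.]])
assert is_diagonally_dominant(A)
assert np.linalg.det(A) != 0   # strict diagonal dominance implies invertibility
```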