
definition of minor of matrix
a minor of a matrix A is the determinant of some smaller square matrix, cut down from A by removing one or more of its rows or columns
algebra matrix minor

what are elementary operations in Gaussian elimination?

row switching, row addition, row multiplication
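a tiny numpy sketch of the three operations (illustrative only, values arbitrary):

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
A[[0, 1]] = A[[1, 0]]       # row switching: swap rows 0 and 1
A[0] = A[0] + 2 * A[1]      # row addition: add a multiple of another row
A[1] = 5 * A[1]             # row multiplication: scale a row by a nonzero constant
print(A)
```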

algebra gaussian_elimination systems_of_linear_equations

why do elementary row operations in Gaussian elimination not change the solution set?
see my notes
algebra gaussian_elimination systems_of_linear_equations

do elementary row operations in Gaussian elimination change the kernel and the image?

what about column operations?

elementary row operations do not change the kernel (and hence do not change the solution set)

they do change the image

(column operations are the other way around: they preserve the image but can change the kernel)
algebra gaussian_elimination systems_of_linear_equations

what is row echelon form?

and what is reduced row echelon form?

row echelon form
$$
\left( \begin{array}{ccccc}
   1 & a_0 & a_1 & a_2 & a_3 \\
   0 & 0 & 1 & a_4 & a_5 \\
   0 & 0 & 0 & 1 & a_6
   \end{array} \right)
$$



reduced row echelon form
$$
\left( \begin{array}{ccccc}
   1 & 0 & 0 & 0 & b_1 \\
   0 & 1 & 0 & 0 & b_2 \\
   0 & 0 & 0 & 1 & b_3
   \end{array} \right)
$$

$$\begin{pmatrix} 1 & 0 & -2 & 3 & 0 & -24 \\ 0 & 1 & -2 & 2 & 0 & -7 \\ 0 & 0 & 0 & 0 & 1 & 4 \end{pmatrix}$$

algebra gaussian_elimination systems_of_linear_equations

what is time complexity of Gaussian elimination?
$O(n^3)$

why: for each of the $n$ pivots we eliminate the entries below it in up to $n$ rows, and each row update touches up to $n$ entries; three nested loops of size $\leq n$ give $\approx \frac{2}{3}n^3$ arithmetic operations
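a minimal sketch of forward elimination (no pivoting, `forward_eliminate` is an ad-hoc name) showing the three nested loops:

```python
import numpy as np

def forward_eliminate(A, b):
    A, b = A.astype(float).copy(), b.astype(float).copy()
    n = len(b)
    for k in range(n):                      # choose pivot column k
        for i in range(k + 1, n):           # for every row below the pivot ...
            factor = A[i, k] / A[k, k]      # (assumes a nonzero pivot)
            A[i, k:] -= factor * A[k, k:]   # ... update up to n entries
            b[i] -= factor * b[k]
    return A, b                             # upper triangular system
```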
algebra gaussian_elimination systems_of_linear_equations

Let $V$ be a finite-dimensional vector space with a spanning set $S$ of $n$ vectors.
   
Maximum number of linearly independent vectors?
Every linearly independent set in $V$ has at most $n$ elements.

algebra linear_independence

definition of rank of matrix
The column rank of a matrix A is the maximum number of linearly independent column vectors of A.

$\iff$

The row rank is the number of non-zero rows after Gaussian elimination.

$\iff$

The rank is the largest size of a nonzero minor.

row rank and column rank are always equal
algebra gaussian_elimination matrix rank

definition of basis
basis is a set of linearly independent vectors that, in a linear combination, can represent every vector in a given vector space
algebra basis vector_space

when does a system of linear equations have a solution?

criterion based on rank (Kronecker-Capelli theorem)

a system of linear equations with $n$ variables has a solution 
$\Leftrightarrow$
the rank of its coefficient matrix $A$ is equal to the rank of its augmented matrix $[A|b]$.




If $\operatorname{rank}(A) = n$, the solution is unique; otherwise there are infinitely many solutions (over an infinite field).
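an illustrative numpy check of the criterion (matrix values arbitrary):

```python
import numpy as np

A = np.array([[1., 2.], [2., 4.]])
b = np.array([[3.], [6.]])                        # consistent: row 2 = 2 * row 1

rank_A  = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.hstack([A, b]))
print(rank_A == rank_Ab)        # True  -> a solution exists
print(rank_A == A.shape[1])     # False -> rank < n, infinitely many solutions
```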

algebra matrix rank systems_of_linear_equations theorem

how many minors of size $k$ are there for a matrix of size $n \times m$
$C_n^k \cdot C_m^k$

e.g. there are $C_3^2 \cdot C_3^2 = 9$ minors of size 2 for a 3x3 matrix: one for each of the 9 elements of the matrix (delete that element's row and column)
algebra matrix minor

definition of rank of matrix through minors and algorithm of counting
rank of A is the largest order of any non-zero minor in A

brute force: if there is a non-zero minor of size 1 (i.e. a non-zero element), then the rank is at least 1;
then try all minors of size 2: if there is at least one non-zero minor, the rank is at least 2;
after some steps, if all minors of size k are zero, then the rank is k-1

but there is an observation

Theorem. Suppose there exists a nonzero minor $m_n$ of order $n$ such that all minors of order $n+1$ which contain $m_n$ are zero. Then the rank of the matrix is $n$.

then the algorithm is as follows:
take a non-zero 1-minor (it can be the upper left element of the matrix);
if there is a non-zero 2-minor which contains that 1-minor, take it and check 3-minors;
after some steps, if all (k+1)-minors that contain the k-minor from the previous step are zero, then the rank is k
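a brute-force numpy sketch of the minor-based definition (exponential, only for tiny matrices; `rank_by_minors` is an ad-hoc name):

```python
import numpy as np
from itertools import combinations

def rank_by_minors(A, tol=1e-12):
    m, n = A.shape
    rank = 0
    for k in range(1, min(m, n) + 1):
        has_nonzero = any(
            abs(np.linalg.det(A[np.ix_(rows, cols)])) > tol
            for rows in combinations(range(m), k)
            for cols in combinations(range(n), k)
        )
        if not has_nonzero:
            break
        rank = k
    return rank

A = np.array([[1., 2., 3.], [2., 4., 6.], [1., 0., 1.]])
print(rank_by_minors(A), np.linalg.matrix_rank(A))   # 2 2
```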
algebra matrix minor rank

solution set of homogeneous system of linear equations
$$
\left\{\begin{array}{ccc}
a_{11}x_1+\ldots+a_{1n}x_n &=& 0 \\
\ldots & & \\
a_{m1}x_1+\ldots+a_{mn}x_n &=& 0
\end{array}\right.
\iff A_{m\times n} \, \vec{x}=\vec{0},\quad
A_{m\times n}=\left(\begin{array}{ccc}
a_{11} & \ldots & a_{1n} \\
\ldots & & \\
a_{m1} & \ldots & a_{mn}
\end{array}\right)
$$

if $\operatorname{rank}(A) = n$ then there is only the trivial solution: $\vec{x} = \vec{0}$

if $\operatorname{rank}(A) < n$ then there is a basis of $(n - \operatorname{rank}(A))$ linearly independent solutions, and the general solution looks like $c_1\vec{x}^1+\ldots+c_{n-r}\vec{x}^{n-r}$

The solution set to any homogeneous system of linear equations with n variables is a subspace of $F^n$.

For example, the set of all vectors $(x, y, z)$ satisfying the equations $x + 3y + 2z = 0$ and $2x - 4y + 5z = 0$ is a one-dimensional subspace of $\mathbb R^3$.
algebra homogeneous_system matrix rank systems_of_linear_equations

determinant of a 2x2 matrix
$$\begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc$$

<img src="algebra.org.20120925-1424-506Xck.screenshot.png" />

proof:

<img src="algebra.org.20120926-0941-506KZS.screenshot.png" />
algebra determinant matrix

determinant of a 3x3 matrix
$$\begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix} = a\begin{vmatrix} e & f \\ h & i \end{vmatrix} - b\begin{vmatrix} d & f \\ g & i \end{vmatrix} + c\begin{vmatrix} d & e \\ g & h \end{vmatrix} = aei + bfg + cdh - ceg - bdi - afh$$

<img src="20130329-2018-4699Oqd.screenshot.png" />

<img src="algebra.org.20120925-1430-506xww.screenshot.png" />

<img src="algebra.org.20120925-1424-506kmq.screenshot.png" />
algebra determinant matrix

definition of determinant
by transpositions (Leibniz formula): $\det A = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma)\, a_{1\sigma(1)} \cdots a_{n\sigma(n)}$, where the sign of a permutation is determined by its decomposition into transpositions
algebra determinant matrix

determinant property

$\det(A^{\rm T}) = \ ?$
$\det(A^{\rm T}) = \det(A)$
algebra determinant matrix

determinant property

algebra determinant matrix

determinant property

$\det(AB) = \ ?$
$\det(AB) = \det(A)\det(B)$
algebra determinant matrix

determinant property

algebra determinant matrix

determinant of triangular matrix
the product of the diagonal entries: $\det A = a_{11} a_{22} \cdots a_{nn}$
algebra determinant matrix

how does interchanging of two rows or columns affect the determinant?
it multiplies the determinant by -1
algebra determinant matrix

how does adding a column multiplied by a constant to another column affect the determinant?
does not change

<img src="algebra.org.20120926-1100-39848N7Q.screenshot.png" />
algebra determinant matrix

determinant is zero — what about linear independence?
rows are linearly dependent $\iff$ $\det A = 0$
algebra determinant matrix

determinant expansion by minors
expansion along row $i$: $\det A = \sum_{j=1}^{n} (-1)^{i+j} \, a_{ij} M_{ij}$

where $M_{ij}$ is the minor obtained by deleting row $i$ and column $j$ (expansion along a column works the same way)

<img src="algebra.org.20121004-1059-39848olA.screenshot.png" />
algebra determinant matrix minor

property of determinant

$$\det\begin{pmatrix} A & B \\ 0 & D \end{pmatrix} = \det(A)\det(D)$$

where A and D are square matrices
   
algebra determinant matrix

matrix operation property

$(A^T)^{-1} = \ ?$
$(A^T)^{-1} = (A^{-1})^T$
algebra matrix

matrix operation property

$(AB)^{-1} = \ ?$
$(AB)^{-1} = B^{-1}A^{-1}$
algebra matrix

property of rank (Sylvester's rank inequality)

$\operatorname{rank}(A) + \operatorname{rank}(B) - n \leq \operatorname{rank}(A B)$

where $A$ is $m \times n$ and $B$ is $n \times k$
algebra matrix rank

inversion of matrix using minors
$$A^{-1} = \frac{1}{\det A}\operatorname{adj}(A), \qquad (\operatorname{adj}A)_{ij} = (-1)^{i+j} M_{ji}$$

where $M_{ji}$ is the minor obtained by deleting row $j$ and column $i$

<img src="algebra.org.20121004-1059-39848olA.screenshot.png" />
algebra matrix minor rank

inversion by Gauss-Jordan elimination

$$[A|I] \Rightarrow \dots \Rightarrow [I|A^{-1}]$$


  $$ A = 
  \begin{bmatrix}
  2 & -1 & 0 \\
  -1 & 2 & -1 \\
  0 & -1 & 2
  \end{bmatrix}$$


  $$ [A|I] = 
  \begin{bmatrix}
  2 & -1 & 0 & 1 & 0 & 0\\
  -1 & 2 & -1 & 0 & 1 & 0\\
  0 & -1 & 2 & 0 & 0 & 1
  \end{bmatrix}$$


  $$ [I \,|\, A^{-1}] = 
  \begin{bmatrix}
  1 & 0 & 0 & \frac{3}{4} & \frac{1}{2} & \frac{1}{4}\\
  0 & 1 & 0 & \frac{1}{2} & 1 & \frac{1}{2}\\
  0 & 0 & 1 & \frac{1}{4} & \frac{1}{2} & \frac{3}{4}
  \end{bmatrix}$$
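a quick numpy check of the worked example above:

```python
import numpy as np

A = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  2.]])
print(np.linalg.inv(A))
# [[0.75 0.5  0.25]
#  [0.5  1.   0.5 ]
#  [0.25 0.5  0.75]]
```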

algebra gaussian_elimination matrix

how many bases in n-dimensional vector space over $F_q$?

  Basis set $B = \{b_1, \ldots, b_n\}$


  $b_1$ can be chosen from $V \  \setminus <0>$
  $b_2$ can be chosen from $V \  \setminus <b_1>$
  $b_3$ can be chosen from $V \  \setminus <b_1, b_2>$
  ...
  $b_n$ can be chosen from $V \  \setminus <b_1, b_2, \ldots, b_{n-1}>$





$$(q^n-1)\cdot(q^n-q)\cdot \ldots \cdot(q^n-q^{n-1})$$
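a tiny brute-force sanity check for $q=2$, $n=2$: ordered bases of $F_2^2$ correspond to invertible 2x2 matrices over $F_2$, and the formula gives $(2^2-1)(2^2-2) = 6$:

```python
from itertools import product

count = sum(
    1
    for a, b, c, d in product(range(2), repeat=4)
    if (a * d - b * c) % 2 != 0        # nonzero determinant mod 2
)
print(count)   # 6
```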

algebra basis vector_space

how many k-dimensional subspaces of n-dimensional vector space over $F_q$?

choose k linearly independent vectors:


Basis set $B = \{b_1, \ldots, b_k\}$

$b_1$ can be chosen from $V \  \setminus <0>$
$b_2$ can be chosen from $V \  \setminus <b_1>$
$b_3$ can be chosen from $V \  \setminus <b_1, b_2>$
...
$b_k$ can be chosen from $V \  \setminus <b_1, b_2, \ldots, b_{k-1}>$  

$(q^n-1)(q^n-q) \ldots (q^n-q^{k-1})$


the subspaces spanned by different choices may repeat --- we can correct for this overcounting:
how many bases in k-dimensional vector subspace?





$$\frac{(q^n-1)(q^n-q) \ldots (q^n-q^{k-1})}{(q^k-1)(q^k-q) \ldots (q^k-q^{k-1})}$$

algebra basis vector_space

transformation matrix from one basis to another
$(e'_1, \ldots, e'_n) = (e_1, \ldots, e_n) \cdot C$

the columns of $C$ are the coordinates of the new basis vectors in the old basis

coordinates transform as $X = C X'$
algebra basis matrix vector_space

definition of sum of vector subspaces

$W_1+\ldots+W_n \ = \ <W_1 \cup \ldots \cup W_n>$

e.g., $R^3 = \text{x-axis} + \text{y-axis} + \text{z-axis}$






   
$\operatorname{dim} (U+W) = \operatorname{dim} U + \operatorname{dim} W - \operatorname{dim} (U \cap W)$





representation $u = u_1 + \ldots + u_n$, $u_i \in U_i$ is unique $\iff$ the sum of subspaces is direct (that's one of the definitions, see below)

algebra vector_space

intersection of vector subspaces
is a subspace
algebra vector_space

definition of linear independence of vector subspaces

a collection of subspaces $U_1, \ldots, U_n$ is called linearly independent

if $u_1 + \ldots + u_n = 0$ with $u_i \in U_i$

implies $u_1 = \ldots = u_n = 0$

   






for two subspaces U and W linear independence is equivalent to $U \cap W = 0$

for a bigger collection that is not true (pairwise zero intersections do not guarantee independence)

algebra vector_space

definition of direct sum of subspaces

$U_1 \oplus \ldots \oplus U_n$ is called a direct sum if it is a sum and subspaces are linearly independent



equivalently, every vector has a unique representation $u = u_1 + \ldots + u_n$, $u_i \in U_i$

each $u_i$ is called the projection of $u$ onto $U_i$

   
   

   
direct sum decompositions are not unique
$R^2 = \text{x-axis} \oplus \text{y-axis} = \ <(1,1)> \oplus <(1,2)>$

algebra vector_space

what is the basis of a direct sum of vector subspaces?
the basis for a direct sum is the disjoint union of bases for the summands
algebra vector_space

definition of homomorphism

   a homomorphism from a structure $(G, *)$ to $(H, \times)$ is a function $f: G \mapsto H$ such that $f(g_1 * g_2) = f(g_1) \times f(g_2)$


   


   When an algebraic structure includes more than one operation, homomorphisms are required to preserve each operation.

   $f: (G, +, \times) \mapsto (H, \oplus, \otimes)$

   $f(g_1 + g_2) = f(g_1) \oplus f(g_2)$

   $f(g_1 \times g_2) = f(g_1) \otimes f(g_2)$







algebra

definition of linear map

$X$ and $Y$ are vector spaces over F
  


linear map is a homomorphism $\varphi: X \mapsto Y$
  
$f(\mathbf{x}+\mathbf{y}) = f(\mathbf{x})+f(\mathbf{y})$

$f(\alpha \mathbf{x}) = \alpha f(\mathbf{x})$




  
  

$\iff$

  $f(\alpha \mathbf{x} + \beta \mathbf{y}) = \alpha f(\mathbf{x}) + \beta f(\mathbf{y})$









unlike an isomorphism, a linear map does not require bijectivity

algebra linear_map vector_space

definition of linear functional (linear form)
a linear map from $V$ to $F$ is called a linear functional

any linear functional can be written as $\varphi(x) = a_1 x_1 + \ldots + a_n x_n$, because

$\varphi(x) = \varphi(x_1 e_1) + \ldots + \varphi(x_n e_n) = x_1 \varphi(e_1) + \ldots + x_n \varphi(e_n)$

a hyperplane can be written using a linear functional as $h(v) = b$
algebra linear_map vector_space

examples of linear map

  rotation is a linear map (and even an isomorphism)


  orthogonal projection of $E^3$ to a plane is a linear map (but not an isomorphism). $\ker\varphi$ is a line that is orthogonal to the plane


  differentiation is a linear map from the space of all differentiable functions to the space of all functions

  








  $x\mapsto x^2$ is not linear


  $x\mapsto x+1$ is not linear (but is an affine transformation, and also a linear function, as defined in analytic geometry)

algebra example linear_map vector_space

when are vector spaces isomorphic?

   $V$ and $U$ are isomorphic $\iff$ $\operatorname{dim} V = \operatorname{dim} U$






   $\forall V$ over $F$ with $\operatorname{dim} V = n$ is isomorphic to $F^n$

algebra vector_space

matrix representation of linear map

   linear map $\varphi: F^n \mapsto F^m$

   $\{ e_1 \ \ldots \ e_n \}$ and $\{ f_1 \ \ldots \ f_m \}$ are given bases





  $$\varphi(e_j) \ = \ f_1 \, a_{1j} + f_2 \, a_{2j} + \ldots + f_m \, a_{mj} \  = \ \sum_k \, f_k \, a_{kj} \  = \ (f_1 \ \ldots \ f_m) \begin{pmatrix} a_{1j} \\ a_{2j} \\ \vdots \\ a_{mj} \end{pmatrix} $$   


  $$x \ = \begin{pmatrix} x_1 \\ \vdots \\ x_{\boldsymbol n} \end{pmatrix}$$,     $$\varphi(x) = y = \begin{pmatrix} y_1 \\ \vdots \\ y_{\boldsymbol m} \end{pmatrix} = \begin{pmatrix} a_{11} & \dots & \boldsymbol{a_{1j}} &\dots & a_{1n}  \\ a_{21} & \dots & \boldsymbol{a_{2j}} & \dots & a_{2n}  \\  \dots&\dots&\dots&\dots&\dots \\ a_{m1} & \dots & \boldsymbol{a_{mj}} &\dots & a_{mn} \end{pmatrix} \cdot \begin{pmatrix} x_1 \\ \vdots \\ x_{\boldsymbol n} \end{pmatrix}$$



thus, there is a one-to-one correspondence between linear maps and matrices

algebra linear_map matrix vector_space

definition of linear operator
the expression "linear operator" is commonly used for a linear map from a vector space to itself (i.e., endomorphism)
algebra linear_map vector_space

definition of kernel

   $\ker\varphi=\{x \ : \  \varphi(x)=0 \}$





   by definition, the kernel $\ker(A)$ of a matrix is the solution space of $Ax=0$


$\dim\ker\varphi \  =  \ n - \operatorname{rk} A$

algebra kernel linear_map matrix

definition of image
$\operatorname{im}(f) = \{ y \ : \ y = f(x) \}$

<img src="algebra.org.20121021-1757-12518yvs.screenshot.png" />

$\dim\operatorname{im}\varphi = \operatorname{rk}A$

where $\varphi : F^n \mapsto F^m$ is a linear map with matrix $A$
algebra image linear_map

relation between dimensions of kernel and image

$\dim \ker \varphi \ + \  \dim \operatorname{im} \varphi \ = \  \dim X$





$\varphi: X \mapsto Y$

$\ker\varphi$ is a subspace of $X$

$\operatorname{im}\varphi$ is a subspace of $Y$

example. the projection of 3d space to the 2d plane, $(x,y,z) \mapsto (x,y)$: the kernel is the vertical line through the origin, the image is the 2d plane itself

example. a nonzero linear functional $\varphi : F^n \mapsto F$: then $\operatorname{im}\varphi = F$ and $\dim\ker\varphi = n - 1$
algebra image kernel linear_map

definition of dual space

   dual vector space $V^*$ --- set of all linear functionals on $V$

   $V^*$ becomes a vector space over $F$ when equipped with the following 

   $(\varphi + \psi)(x) = \varphi(x) + \psi(x)$

   $(a \varphi)(x) = a \left(\varphi(x)\right)$
   

algebra dual_space linear_map vector_space

examples of linear functional (linear form)

   example. $\alpha(x) = (a, x)$ --- dot product of vectors --- is a linear functional 



   example. $\alpha(f) = f'(x_0)$ is a linear functional on vector space of differentiable functions



   example. $I(f) = \int_a^b f(x) \, dx$ is a linear functional on the vector space $C[a,b]$ of continuous functions






example. differentiation is a linear operator in the space of polynomials of degree $\leqslant n$
  
 in the basis $\{1, x, x^2, x^3, \ldots, x^n \}$ it has the matrix

   $$
   \begin{pmatrix}
   0 & 1 &   &   & \cdots &   \\
     & 0 & 2 &   & \cdots &   \\
     &   & 0 & 3 & \cdots &   \\
     &   &   &   & \ddots &   \\
     &   &   &   & \cdots & n \\
     &   &   &   & \cdots & 0 \\
   \end{pmatrix}
   $$
   

   in basis $\{ 1, \frac{x}{1!}, \frac{x^2}{2!}, \frac{x^3}{3!}, \ldots, \frac{x^n}{n!} \}$

   $$
   \begin{pmatrix}
   0 & 1 &   &   & \cdots &   \\
     & 0 & 1 &   & \cdots &   \\
     &   & 0 & 1 & \cdots &   \\
     &   &   &   & \ddots &   \\
     &   &   &   & \cdots & 1 \\
     &   &   &   & \cdots & 0 \\
   \end{pmatrix}
   $$

algebra dual_space linear_map vector_space

isomorphism of a vector space and its dual space

   $V \cong V^* \iff \operatorname{dim} V = \operatorname{dim} V^*$ because dual basis has the same number of vectors:

   $\{ e_1, \ldots, e_n \}$ --- basis in $V$, $x = \sum x_i \, e_i$

   $\{ \varepsilon_1, \ldots, \varepsilon_n \}$ --- $\varepsilon_i(x) = x_i$ --- dual basis in $V^*$





   or equivalently $$\varepsilon_i(e_j) = \delta_{ij} = \begin{cases} 1, & \text{if } i = j \\ 0, & \text{if } i \ne j \end{cases}$$ --- Kronecker delta symbol 










   this isomorphism $A : V \mapsto V^*$ is /not canonical/ because when we change basis in $V$, the dual basis in $V^*$ also changes, so there are as many isomorphisms as bases



if $V$ is a Euclidean space --- it has a dot product $(x, y)$ --- there is a canonical isomorphism between $V$ and $V^*$: $x \mapsto f_x$, where $f_x(t) = (x, t)$

algebra dual_space linear_map vector_space

isomorphism of a vector space and its double dual space

$V \cong V^{**}$ --- there exists an isomorphism --- /canonical/ in the sense that it does not depend on the basis:

   $A : V \mapsto V^{**}$

   $A(x) = A_x$, where $A_x(f) = f(x)$

   linearity: $A(\alpha x + \beta y) = A_{\alpha x + \beta y} = \alpha A_x + \beta A_y = \alpha A(x) + \beta A(y)$

   bijection: $\{ e_1, \ldots, e_n \}$, $\{ \varepsilon_1, \ldots, \varepsilon_n \}$, $\{ A_{e_1}, \ldots,  A_{e_n} \}$ --- bases in $V$, $V^*$, and $V^{**}$, where $A_{e_i}(\varepsilon_j) = \varepsilon_j(e_i) = \delta_{ij}$










   $V$ and $V^*$ are dual to each other, since we can consider $x \in V$ as the linear functional $A_x(f)$

algebra dual_space linear_map vector_space

matrix of linear operator and transformation matrix from one basis to another

linear operator $\mathcal A: V \mapsto V$

   in basis $\{e_1, \ldots, e_n \}$ matrix $A$ of linear operator $\mathcal A$

$\mathcal A e_j = \sum_i e_i \, a_{ij}$


   $$\{ \mathcal A e_1, \ldots, \boldsymbol{\mathcal A e_j}, \ldots, \mathcal A e_n \} \  = \   (e_1, \ldots, e_n) \cdot A \  = \   (e_1, \ldots, e_n) \begin{pmatrix} a_{11} & \dots & \boldsymbol{a_{1j}} &\dots & a_{1n}  \\ a_{21} & \dots & \boldsymbol{a_{2j}} & \dots & a_{2n}  \\  \dots&\dots&\dots&\dots&\dots \\ a_{n1} & \dots & \boldsymbol{a_{nj}} &\dots & a_{nn} \end{pmatrix}$$







   


   let $C$ be a transition matrix from basis $\{e_1, \ldots, e_n \}$ to basis $\{e'_1, \ldots, e'_n \}$:

   $(e'_1, \ldots, e'_n) = (e_1, \ldots, e_n) \cdot C$

   $(e'_1, \ldots, e'_n) \cdot A' = (\mathcal A e'_1, \ldots, \mathcal A e'_n) = (\mathcal A e_1, \ldots, \mathcal A e_n) \cdot C = (e_1, \ldots, e_n) \cdot AC = (e'_1, \ldots, e'_n) \cdot C^{-1}AC$

   $A' = C^{-1}AC$

algebra linear_map vector_space

examples of linear operators
rotation: $\mathbf{A}=\begin{pmatrix}\cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta)\end{pmatrix}$

reflection
scaling
projection

horizontal shear: $\mathbf{A}=\begin{pmatrix}1 & m \\ 0 & 1\end{pmatrix}$

<img src="20130403-1640-4699ane.screenshot.png" />
algebra linear_map vector_space

definition of bilinear form
bilinear function $\alpha: V \times V \mapsto F$ is linear in each of its arguments

bilinear function is a generalization of vector dot product
algebra bilinear_form

examples of bilinear forms

  example. vector dot product is a bilinear function



  example. $\alpha (f, g) = \int_a^b f(x)g(x)dx$ is a bilinear function in $C[a,b]$

algebra bilinear_form

matrix representation of bilinear form
see my notes
algebra bilinear_form

matrix of bilinear form after change of basis

$(e'_1, \ldots, e'_n) = (e_1, \ldots, e_n) \cdot C$

  $X = CX'$

$Y = CY'$





  $\alpha(x, y) = (CX')^\top \ A \  (CY') = (X')^\top C^\top A \  C \  Y'$



  

then in the new basis $A' = C^\top A \, C$
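an illustrative numpy check that the matrix of a bilinear form transforms as $C^\top A C$ (random values; $C$ is invertible with probability 1):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))      # matrix of the form in the old basis
C = rng.standard_normal((3, 3))      # change-of-basis matrix
x_new = rng.standard_normal(3)
y_new = rng.standard_normal(3)

lhs = (C @ x_new) @ A @ (C @ y_new)         # alpha(x, y) computed in the old basis
rhs = x_new @ (C.T @ A @ C) @ y_new         # same value via the transformed matrix
print(np.isclose(lhs, rhs))                 # True
```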

algebra bilinear_form

definition of symmetric and skew-symmetric bilinear forms

   symmetric: $\alpha(y, x) = \alpha(x, y)$

   skew-symmetric: $\alpha(y, x) = - \alpha(x, y)$

   there are also bilinear forms that are neither symmetric nor skew-symmetric





   bilinear form is symmetric $\iff$ $A^\top = A$

   bilinear form is skew-symmetric $\iff$ $A^\top = - A$





   this property is independent of basis

algebra bilinear_form

definition of characteristic of a ring

  if it exists, the characteristic $\operatorname{char}(R)$ is the smallest positive integer $n$
 such that $\forall r \ : \ nr = 0$







  
  if no such $n$ exists, then $\operatorname{char}(R) = 0$


  when $\operatorname{char}(R) = 1$, the ring is the trivial ring $\{0\}$ (since $1 = 0$)

algebra definition

example of getting the matrix of a bilinear form

$f (x, y) = x_1 y_2 + x_1 y_3 - 2 x_3 y_1$

$f : F^3 \mapsto F$
    $f (x, y) = x_1 y_2 + x_1 y_3 - 2 x_3 y_1$ 

    $a_{11} = f ( < 1, 0, 0 > , < 1, 0, 0 > ) = 1 \cdot 0 + 1 \cdot 0 - 2 \cdot 0 \cdot 1 = 0$ 
    $a_{12} = f ( < 1, 0, 0 > , < 0, 1, 0 > ) = 1 \cdot 1 + 1 \cdot 0 - 2 \cdot 0 \cdot 0 = 1$ 
    $a_{13} = f ( < 1, 0, 0 > , < 0, 0, 1 > ) = 1 \cdot 0 + 1 \cdot 1 - 2 \cdot 0 \cdot 0 = 1$ 
    $\ldots$
    $a_{31} = f ( < 0, 0, 1 > , < 1, 0, 0 > ) = 0 \cdot 0 + 0 \cdot 0 - 2 \cdot 1 \cdot 1 = -2$
    $\ldots$
    $a_{33} = f ( < 0, 0, 1 > , < 0, 0, 1 > ) = 0 \cdot 0 + 0 \cdot 1 - 2 \cdot 1 \cdot 0 = 0$
    
    $$f(x,y) \ = \  
    \begin{pmatrix} x_1 & x_2 & x_3 \end{pmatrix}
    \begin{pmatrix}
    0 & 1 & 1 \\
    0 & 0 & 0 \\
    -2 & 0 & 0
    \end{pmatrix}
    \ \ 
    \begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix}
    \ = \ x_1 y_2 + x_1 y_3 - 2 x_3 y_1
    $$

algebra bilinear_form example

$\alpha$ is a bilinear form with matrix $A$


$\operatorname{dim}\ker\alpha \ = \ ?$
see my notes
algebra bilinear_form kernel

vector space of all bilinear forms is direct sum of subspaces: of symmetric bilinear forms and skew-symmetric bilinear forms

if $\operatorname{char}(F) \neq 2$, then vector space of all bilinear forms is direct sum of subspaces: of symmetric bilinear forms and skew-symmetric bilinear forms
   


proof: if $\alpha$ is symmetric and skew-symmetric simultaneously,

then $\alpha(x, y) = \alpha(y, x) = - \alpha(x, y) \ \Rightarrow \ 2\alpha(x, y) = 0 \ \Rightarrow \ \alpha(x, y) = 0$
   


   i.e. every bilinear form can be represented as a sum of symmetric and skew-symmetric bilinear forms:

   $\alpha(x,y) \  = \  \frac{1}{2}\left\{ \alpha(x,y) + \alpha(y,x) \right\} \  + \  \frac{1}{2}\left\{ \alpha(x,y) - \alpha(y,x) \right\}$

   or for their matrices: $A \ = \  \frac{1}{2}(A + A^\top) \ + \  \frac{1}{2}(A - A^\top)$
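a small numpy sketch of this decomposition (matrix values arbitrary):

```python
import numpy as np

A = np.array([[1., 2.], [0., 3.]])
S = (A + A.T) / 2                                   # symmetric part
K = (A - A.T) / 2                                   # skew-symmetric part
print(np.allclose(A, S + K))                        # True
print(np.allclose(S, S.T), np.allclose(K, -K.T))    # True True
```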
   
   
   







   but if $\operatorname{char}(F) = 2$ the theorem does not work:

   $F = \{0, 1\}$, then a skew-symmetric form is the same as a symmetric form: $-\alpha(x, y) = \alpha(x, y)$ since $-1 = 1$

   and, for example, $\begin{pmatrix}0 & 1 \\ 0 & 1\end{pmatrix}$ can't be represented as a sum of two symmetric matrices

algebra bilinear_form

examples of symmetric and skew-symmetric matrices


   symmetric matrix:
   $$
   \begin{pmatrix}
   i & 2 & 3 \\
   2 & j & 4 \\
   3 & 4 & k
   \end{pmatrix}
   $$
   



   a skew-symmetric matrix always has zeros on its diagonal:
   $$
   \begin{pmatrix}
   0 & 2 & 3 \\
   -2 & 0 & 4 \\
   -3 & -4 & 0
   \end{pmatrix}
   $$

algebra bilinear_form matrix

definition of quadratic form

$q(x) = \sum a_{ij} \, x_i x_j \ : \ V \mapsto F$

  $A = (a_{ij})$ is the symmetric matrix of the quadratic form

algebra definition quadratic_form

examples of quadratic forms and their matrices

$$x^2 - 4xy + y^2 + 5xz + 2yz + z^2 \ = \  x^2 - 2xy -2yx + y^2 + \frac{5}{2}xz + \frac{5}{2}zx + yz + zy + z^2 \ = \  \left(\begin{array}{ccc} x & y & z\end{array}\right) \left(\begin{array}{ccc}1 & -2 & \frac{5}{2} \\ -2 & 1 & 1 \\ \frac{5}{2} & 1 & 1\end{array}\right)\left(\begin{array}{ccc} x \\ y \\ z\end{array}\right)$$
  


$ax^2+bxy+cy^2 = \left(\begin{array}{ccc} x & y \end{array}\right) \left(\begin{array}{ccc} a & \frac{b}{2} \\ \frac{b}{2} & c\end{array}\right)\left(\begin{array}{ccc} x \\ y \end{array}\right)$
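a quick numpy check that $x^\top A x$ reproduces $ax^2+bxy+cy^2$ (coefficient values chosen arbitrarily):

```python
import numpy as np

a, b, c = 1.0, -4.0, 1.0
A = np.array([[a, b / 2], [b / 2, c]])
x, y = 2.0, 3.0
v = np.array([x, y])
print(v @ A @ v)                          # -11.0
print(a * x**2 + b * x * y + c * y**2)    # -11.0
```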
  

algebra example quadratic_form

relation of quadratic forms and bilinear forms

any symmetric bilinear form $\alpha$ defines a quadratic form: $q(x) = \alpha(x,x)$
  


  
polarization of a quadratic form (is a bilinear function): $\alpha(x,y)=\frac{1}{2}\left\{q(x+y)-q(x)-q(y)\right\} = x^\mathrm{T}Ay = y^\mathrm{T}Ax$





  
  matrix of a polar bilinear form is equal to matrix of quadratic form  

algebra example quadratic_form

definition of orthogonal complement of a subspace $W$ of a vector space $V$ equipped with a bilinear form

set $W^\bot$ of all vectors in $V$ that are orthogonal to every vector in $W$

   $W^\bot = \left\{x\in V \  : \  \forall y\in W \ \alpha(x, y) = 0 \right\}$





   $V^\bot = \operatorname{Ker}\alpha$





   if $\alpha$ is non-degenerate (i.e. $V^\bot = 0$), then $\dim W + \dim {W^\bot} = \dim V$

   and $(W^\bot)^\bot = W$

algebra bilinear_form definition

definition of orthogonal vectors in respect to a bilinear form

   $x$ and $y$ are /orthogonal/ if $\alpha(x, y) = 0$





   $\operatorname{char} F \neq 2$






   when bilinear form is skew-symmetric, every vector is orthogonal to itself

algebra bilinear_form definition

Gram–Schmidt process — orthonormalising a set of vectors
$$
\begin{aligned}
\mathbf{b}_1 & = \mathbf{a}_1, & \mathbf{e}_1 & = {\mathbf{b}_1 \over |\mathbf{b}_1|} \\
\mathbf{b}_2 & = \mathbf{a}_2-\mathrm{proj}_{\mathbf{b}_1}\,\mathbf{a}_2, & \mathbf{e}_2 & = {\mathbf{b}_2 \over |\mathbf{b}_2|} \\
\mathbf{b}_3 & = \mathbf{a}_3- \left( \mathrm{proj}_{\mathbf{b}_1}\,\mathbf{a}_3+\mathrm{proj}_{\mathbf{b}_2}\,\mathbf{a}_3 \right), & \mathbf{e}_3 & = {\mathbf{b}_3 \over |\mathbf{b}_3|} \\
\mathbf{b}_4 & = \mathbf{a}_4- \left( \mathrm{proj}_{\mathbf{b}_1}\,\mathbf{a}_4+\mathrm{proj}_{\mathbf{b}_2}\,\mathbf{a}_4+\mathrm{proj}_{\mathbf{b}_3}\,\mathbf{a}_4 \right), & \mathbf{e}_4 & = {\mathbf{b}_4 \over |\mathbf{b}_4|} \\
& {}\ \  \vdots & & {}\ \  \vdots \\
\mathbf{b}_k & = \mathbf{a}_k-\sum_{j=1}^{k-1}\mathrm{proj}_{\mathbf{b}_j}\,\mathbf{a}_k, & \mathbf{e}_k & = {\mathbf{b}_k\over |\mathbf{b}_k |}
\end{aligned}
$$

where $\mathrm{proj}_{\mathbf{b}}\,\mathbf{a} = \frac{\langle \mathbf{a}, \mathbf{b}\rangle}{\langle \mathbf{b}, \mathbf{b}\rangle}\,\mathbf{b}$

<img src="20130503-1233-379yXc.screenshot.png" />

$\mathbf{b}_1$ and $\mathbf{b}_2$ are orthogonal: $\langle \mathbf{b}_2, \mathbf{b}_1 \rangle = \langle \mathbf{a}_2, \mathbf{b}_1 \rangle - \frac{\langle \mathbf{a}_2, \mathbf{b}_1 \rangle}{\langle \mathbf{b}_1, \mathbf{b}_1 \rangle}\langle \mathbf{b}_1, \mathbf{b}_1 \rangle = 0$

$\mathbf{b}_3$ is orthogonal to $\mathbf{b}_1$ and $\mathbf{b}_2$ similarly, etc.
algebra bilinear_form

canonical, or normal form of quadratic function

   in a suitable (diagonalizing) basis $q(x) = \alpha_1 x_1^2 + \ldots + \alpha_n x_n^2$

   If $F = \mathbb C$, the coefficients can be made $\alpha_i \in \{0, 1\}$,

i.e. the canonical quadratic form looks like

 $q(x) = x_1^2 + \ldots + x_r^2$ after a suitable permutation of basis vectors,

$r = \operatorname{rank} q$.
   





   If $F = \mathbb R$, the coefficients can be made $\alpha_i \in \{-1, 0, 1\}$,

i.e. the canonical quadratic form looks like

 $q(x) = x_1^2 + \ldots + x_k^2 - x_{k+1}^2 - \ldots - x_{k+l}^2$ after a suitable permutation of basis vectors.

   $k+l = \operatorname{rank} q$

   the pair $(k,l)$ is called the signature of $q$

algebra quadratic_form

law of inertia for quadratic forms
   $k$ and $l$ do not depend on a particular choice of diagonalizing basis

   // it's about the invariant signature; the word "inertia" was used because no better word was found

   $k$ is the max dimension of a subspace on which $q(x)$ is positive definite

   (same for $l$ with negative definite)
algebra quadratic_form

definition of positive definite and negative definite quadratic forms

   $q(x) > 0$ for $x \neq 0$ is positive definite

   $q(x) < 0$ for $x \neq 0$ is negative definite





   
   bilinear form $\alpha(x, y)$ is positive definite if associated quadratic form $q(x)$ is positive definite (same for negative)

   dot product $(x, y)$ of vectors is positive definite



   the normal form of a positive definite quadratic form is $q(x) = x_1^2 + \ldots + x_n^2$ (all $n$ coefficients equal $+1$)

algebra quadratic_form

Sylvester’s criterion — is a quadratic form positive definite

 a symmetric matrix $A$ is positive definite 
$\iff$ 
all the following matrices have a positive determinant:


   the upper left 1-by-1 corner
   the upper left 2-by-2 corner
   ...
   the upper left n-by-n corner -- $A$ itself




   
   $$A = \begin{pmatrix} a_{11} & a_{12} & \ldots & a_{1n} \\ a_{21} & a_{22} & \ldots & a_{2n} \\ \ldots & \ldots & \ldots & \ldots \\ a_{n1} & a_{n2} & \ldots & a_{nn} \end{pmatrix}$$

   $$\Delta_1=a_{11}$$

   $$\Delta_2=\begin{vmatrix}a_{11} & a_{12} \\ a_{21} & a_{22}\end{vmatrix}$$

   $$\Delta_3=\begin{vmatrix}a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33}\end{vmatrix}$$

   $$\Delta_i=\begin{vmatrix} a_{11} & a_{12} & \ldots & a_{1i} \\ a_{21} & a_{22} & \ldots & a_{2i} \\ \ldots & \ldots & \ldots & \ldots \\ a_{i1} & a_{i2} & \ldots & a_{ii}\end{vmatrix}$$

   $$\Delta_n = |A|$$
   

   
   in other words, all leading principal minors are positive
   


   
   if every $\Delta_k$ is non-zero then the negative index of inertia is equal to the number of sign changes in the sequence $\Delta_0=1, \Delta_1, \ldots, \Delta_n=\det A$
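an illustrative numpy check of the criterion for a symmetric matrix (`is_positive_definite` is an ad-hoc name):

```python
import numpy as np

def is_positive_definite(A):
    n = A.shape[0]
    return all(np.linalg.det(A[:k, :k]) > 0 for k in range(1, n + 1))

A = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  2.]])
print(is_positive_definite(A))   # True: leading principal minors are 2, 3, 4
```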

algebra quadratic_form

definition of eigenvector and eigenvalue
see my notes
algebra eigenvector

definition of eigenspace

/eigenspace/ associated with an eigenvalue 

$V^{\lambda} = \left\{ \ x \  : \  A x = \lambda x \  \right\}$ 

consists of $\boldsymbol 0$ and all eigenvectors associated with the eigenvalue





   eigenspace is a subspace since nonempty and $Ax = \lambda x, \  Ay = \lambda y \  \Rightarrow \  A(\alpha x + \beta y) = \lambda (\alpha x + \beta y)$
   



   eigenspaces are linearly independent
   i.e. eigenvectors corresponding to different eigenvalues are linearly independent
   




   $\dim V^{\lambda}$ is /geometric multiplicity/ of an eigenvalue $\lambda$

algebra eigenvector

characteristic polynomial of a linear operator

   The eigenvalues of a matrix A are precisely the solutions $\lambda$ to the equation

   $\det(A - \lambda I) = 0$




   proof:

   $Ax = \lambda x \ \Leftrightarrow \ Ax - \lambda I x = 0  \ \Leftrightarrow \ (A - \lambda I) x = 0$

   If there exists an inverse $(A - \lambda I)^{-1}$ then both sides can be left-multiplied by it to obtain $x=0$. 

   Therefore, if $\lambda$ is such that $(A - \lambda I)$ is invertible, $\lambda$ cannot be an eigenvalue.

   If it is not invertible, $\lambda$ is an eigenvalue.







   $$\det(A-\lambda I) = \begin{vmatrix} a_{11} - \lambda & a_{12} & \ldots & a_{1n} \\ a_{21} & a_{22} - \lambda & \ldots & a_{2n} \\ \ldots & \ldots & \ldots & \ldots \\ a_{n1} & a_{n2} & \ldots & a_{nn} - \lambda \end{vmatrix}$$


   /characteristic polynomial/ $\chi_{\mathcal A}(t) = \det(tI - A)$             ( $= (-1)^n \, \det(A-tI)$ )

   The solutions of the /characteristic equation/ $\chi_A(t) = 0$ are precisely the eigenvalues.






   Two similar matrices have the same characteristic polynomial: $\chi_{C^{-1}AC} (t)=\chi_{A}(t)$

   /similar/: $C$ is invertible change of basis. $A' = C^{-1}AC$. Then $\det(tI - A') = \det(tC^{-1}IC - C^{-1}AC) = \det\left( C^{-1} (tI -A) C \right) = \det C^{-1} \det(tI - A) \det C = \det(tI - A)$









   Therefore we can define $\chi_{\mathcal A}(t) = \chi_A(t)$ for any matrix $A$ of the operator:

   the characteristic polynomial does not depend on the choice of a basis

algebra eigenvector

    characteristic polynomial of $A$: $f(x) = (x - \lambda_1)^{d_1} \ldots (x - \lambda_k)^{d_k}$ 

    $\operatorname{tr}(A) \ = \ ?$

    $\operatorname{tr}(A) = d_1 \lambda_1 + \cdots + d_k \lambda_k$

algebra eigenvector

A has eigenvalues $\lambda_1, \ldots, \lambda_n$

$\det A = ?$

$\det A = \prod \lambda_i$

algebra eigenvector

relation of invertible matrices and eigenvalues

   matrix is invertible $\iff$ no zero eigenvalue


   
   if $A$ is invertible, then eigenvalues of $A^{-1}$ are $\frac{1}{\lambda_1}, \ldots, \frac{1}{\lambda_n}$
   

algebra eigenvector

examples of eigenvalues and eigenvectors

   $$
   A =
   \begin{pmatrix}
   2 & 0 & 0 \\
   0 & 3 & 4 \\
   0 & 4 & 9
   \end{pmatrix}
   $$
   
   characteristic polynomial:
   
   $$
   \det (A-\lambda I) \ = \  
   \det \left(\begin{bmatrix}
   2 & 0 & 0 \\
   0 & 3 & 4 \\
   0 & 4 & 9
   \end{bmatrix} - \lambda
   \begin{bmatrix}
   1 & 0 & 0 \\
   0 & 1 & 0 \\
   0 & 0 & 1
   \end{bmatrix}\right) \;=\;
   \det \begin{bmatrix}
   2 - \lambda & 0 & 0 \\
   0 & 3 - \lambda & 4 \\
   0 & 4 & 9 - \lambda
   \end{bmatrix} =  (2 - \lambda) \bigl[ (3 - \lambda) (9 - \lambda) - 16 \bigr] = (2 - \lambda)(1 - \lambda)(11 - \lambda) = -\lambda^3 + 14\lambda^2 - 35\lambda + 22
   $$
   
   eigenvalues are 1, 2, 11
   
   when we have eigenvalues, we can get corresponding eigenvectors from a system of linear equations:
   $Ax = \lambda x$
   
   

   
   
   another example
   $$A = \begin{bmatrix} 4 & 1\\6 & 3 \end{bmatrix}$$
   
   $\lambda_1 = 1, \ \lambda_2 = 6$
   
   $$\begin{bmatrix} 4 & 1\\6 & 3 \end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix} = 6 \cdot \begin{bmatrix}x\\y\end{bmatrix}$$
   
   $$
   \left\{\begin{matrix} 4x + {\ }y &{}= 6x\\6x + 3y &{}=6 y\end{matrix}\right.
   \quad\quad\quad
   $$
   
   $y = 2x$
   
   $\Rightarrow$ for an eigenvalue 6 the eigenvectors are $(a, 2a)$ for nonzero $a$

algebra eigenvector example

definition and properties of diagonalizable linear operator
see my notes
algebra eigenvector

prove that row rank is equal to column rank
***
algebra matrix rank

prove that

$\operatorname{rank}A = r$   $\iff$

$\exists$ invertible matrices $X_{m \times m}$ and $Y_{n \times n}$ such that $X A Y = \begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix}$
***
algebra matrix rank

illustration of matrix multiplication
<img src="20150227-1802-32099XmF.screenshot.png" />
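the row-by-column rule written out explicitly (same result as `A @ B` in numpy; values arbitrary):

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])
C = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        C[i, j] = sum(A[i, k] * B[k, j] for k in range(2))
print(np.allclose(C, A @ B))   # True
```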
algebra matrix

rank of real matrix
for real matrices: $\operatorname{rank}(A^T A) = \operatorname{rank}(A A^T) = \operatorname{rank}(A) = \operatorname{rank}(A^T)$

see my notes
algebra matrix rank

rank of complex matrix

for real matrices: $\operatorname{rank}(R^T R) = \operatorname{rank}(R R^T) = \operatorname{rank}(R) = \operatorname{rank}(R^T)$
    for complex matrices: $\operatorname{rank}(C) = \operatorname{rank}(\overline{C}) = \operatorname{rank}(C^T) = \operatorname{rank}(C^*) = \operatorname{rank}(C^*C)$, where $C^* = \overline{C^T} = (\overline{C})^T$
    

algebra matrix rank

property of rank

$\operatorname{rank}(AB) + \operatorname{rank}(BC) \ \le \ ?$
$\operatorname{rank}(AB) + \operatorname{rank}(BC) \le \operatorname{rank}(B) + \operatorname{rank}(ABC)$
algebra matrix rank

gramian matrix

gramian matrix for a set of vectors is $G_{ij}=\langle v_j, v_i \rangle$


$$\det
    \begin{pmatrix} 
    \langle v_1,\;v_1\rangle & \langle v_1,\;v_2\rangle & \ldots & \langle v_1,\;v_n\rangle \\ 
    \langle v_2,\;v_1\rangle & \langle v_2,\;v_2\rangle & \ldots & \langle v_2,\;v_n\rangle \\ 
    \ldots & \ldots & \ldots & \ldots \\ 
    \langle v_n,\;v_1\rangle & \langle v_n,\;v_2\rangle & \ldots & \langle v_n,\;v_n\rangle 
    \end{pmatrix} =
    \det \left(
    \begin{pmatrix}
    v_{11} & \ldots & v_{1n} \\
    \ldots & \ldots & \ldots \\
    v_{n1} & \ldots & v_{nn}
    \end{pmatrix}
    \begin{pmatrix}
    v_{11} & \ldots & v_{1n} \\
    \ldots & \ldots & \ldots \\
    v_{n1} & \ldots & v_{nn}
    \end{pmatrix}^\top
    \right) = \det (VV^\top) = \det V \det V^\top = (\det V)^2 \geqslant 0
    $$

algebra matrix

solution set to any homogeneous system of linear equations with n variables is a subspace of $F^n$.
for example, the set of all vectors $(x, y, z)$ satisfying the equations $x + 3y + 2z = 0$ and $2x - 4y + 5z = 0$ is a one-dimensional subspace of $\mathbb R^3$.

algebra homogeneous_system matrix rank systems_of_linear_equations vector_space

$U$ and $W$ are vector subspaces


$\operatorname{dim} (U+W) = ?$

$\operatorname{dim} (U+W) = \operatorname{dim} U + \operatorname{dim} W - \operatorname{dim} (U \cap W)$

algebra vector_space
is it always $\operatorname{rank}(A^T A) = \operatorname{rank}(AA^T) = \operatorname{rank}(A)$?
see my notes
algebra matrix rank

knowing two solutions of $Ax=b$, how to get a solution of homogeneous system $Ax=0$

    
    if $x_1$, $x_2$ are solutions for $Ax=b$
    then $x_1 - x_2$ is a solution of the homogeneous system $Ax=0$
    $A (x_1 - x_2) = b - b = 0$

algebra homogeneous_system

prove that a vector in a basis can be interchanged with some other vector from a second basis
see my notes
algebra basis vector_space

prove that bases have equal number of vectors
see my notes
algebra basis vector_space

Does there exist an isomorphism between $\mathbb R^2$ and $\mathbb R^3$?
see my notes
algebra basis vector_space

illustration of eigenvectors
see my notes
algebra eigenvector

definition of vector space

vector space V over field F is an abelian group (V, +) with multiplication by elements of F

———————————————————————————————————————-








vector space V over field F:




abelian group (V, +)




multiplication by elements of F

$\lambda (a+b) = \lambda a + \lambda b$

$(\lambda + \mu) a = \lambda a + \mu a$

$\lambda (\mu a) = (\lambda \mu) a$

$1 \cdot a = a$
algebra definition vector_space