In [1]:
using Pkg
Pkg.activate(pwd())
Pkg.instantiate()
  Activating environment at `~/Documents/github.com/ucla-biostat216-2021fall.github.io/slides/03-matrix/Project.toml`
In [2]:
using BenchmarkTools, Distributions, DSP, ForwardDiff, GraphPlot
using ImageCore, ImageFiltering, ImageIO, ImageShow, LightGraphs
using LinearAlgebra, Plots, Polynomials, Random, SparseArrays, SpecialMatrices
using Symbolics, TestImages
Random.seed!(216)
Out[2]:
MersenneTwister(216)

Matrices (BV Chapters 6-10)

Notations and terminology

  • A matrix $\mathbf{A} \in \mathbb{R}^{m \times n}$ $$ \mathbf{A} = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix}. $$

    • $a_{ij}$ are the matrix elements or entries or coefficients or components.
    • Size or dimension of the matrix is $m \times n$.
  • Many authors use square brackets: $$ \mathbf{A} = \begin{bmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{bmatrix}. $$

In [3]:
# a 3x4 matrix
A = [0 1 -2.3 0.1;
    1.3 4 -0.1 0;
    4.1 -1 0 1.7]
Out[3]:
3×4 Matrix{Float64}:
 0.0   1.0  -2.3  0.1
 1.3   4.0  -0.1  0.0
 4.1  -1.0   0.0  1.7
  • We say a matrix $\mathbf{A} \in \mathbb{R}^{m \times n}$ is
    • tall if $m > n$;
    • wide if $m < n$;
    • square if $m = n$
In [4]:
# a tall matrix
A = randn(5, 3)
Out[4]:
5×3 Matrix{Float64}:
 -1.28506    -0.326449   -2.23444
 -1.44549     1.8623      0.915744
 -0.0148676   0.0582264   0.854177
  1.9427     -0.135717    0.576381
 -0.244914   -0.8972     -0.0627216
In [5]:
# a wide matrix
A = randn(2, 4)
Out[5]:
2×4 Matrix{Float64}:
  0.848249  -1.07614   0.321647  -1.46013
 -0.358689   0.336559  0.183508   0.0341577
In [6]:
# a square matrix
A = randn(3, 3)
Out[6]:
3×3 Matrix{Float64}:
 -1.38244   -0.159054   1.90148
  0.527643   1.06857    0.853046
  0.277008  -1.38077   -0.555663
  • Block matrix is a rectangular array of matrices $$ \mathbf{A} = \begin{pmatrix} \mathbf{B} & \mathbf{C} \\ \mathbf{D} & \mathbf{E} \end{pmatrix}. $$ Elements in the block matrices are the blocks or submatrices. Dimensions of the blocks must be compatible.
In [7]:
# blocks
B = [2; 1]
C = [0 2 3; 5 4 7]
D = [1]
E = [-1 6 0]
# block matrix
A = [B C; D E]
Out[7]:
3×4 Matrix{Int64}:
 2   0  2  3
 1   5  4  7
 1  -1  6  0
  • Matrix $\mathbf{A} \in \mathbb{R}^{m \times n}$ can be viewed as a $1 \times n$ block matrix with each column as a block $$ \mathbf{A} = \begin{pmatrix} \mathbf{a}_1 & \cdots & \mathbf{a}_n \end{pmatrix}, $$ where $$ \mathbf{a}_j = \begin{pmatrix} a_{1j} \\ \vdots \\ a_{mj} \end{pmatrix} $$ is the $j$-th column of $\mathbf{A}$.

  • $\mathbf{A} \in \mathbb{R}^{m \times n}$ viewed as an $m \times 1$ block matrix with each row as a block $$ \mathbf{A} = \begin{pmatrix} \mathbf{b}_1' \\ \vdots \\ \mathbf{b}_m' \end{pmatrix}, $$ where $$ \mathbf{b}_i' = \begin{pmatrix} a_{i1} \, \cdots \, a_{in} \end{pmatrix} $$ is the $i$-th row of $\mathbf{A}$.

  • Submatrix: $\mathbf{A}_{i_1:i_2,j_1:j_2}$.
In [8]:
A = [0 1 -2.3 0.1;
    1.3 4 -0.1 0;
    4.1 -1 0 1.7]
Out[8]:
3×4 Matrix{Float64}:
 0.0   1.0  -2.3  0.1
 1.3   4.0  -0.1  0.0
 4.1  -1.0   0.0  1.7
In [9]:
# sub-array
A[2:3, 2:3]
Out[9]:
2×2 Matrix{Float64}:
  4.0  -0.1
 -1.0   0.0
In [10]:
# a row
A[3, :]
Out[10]:
4-element Vector{Float64}:
  4.1
 -1.0
  0.0
  1.7
In [11]:
# a column
A[:, 3]
Out[11]:
3-element Vector{Float64}:
 -2.3
 -0.1
  0.0

Some special matrices

  • Zero matrix $\mathbf{0}_{m \times n} \in \mathbb{R}^{m \times n}$ is the matrix with all entries $a_{ij}=0$. The subscript is sometimes omitted when the dimension is clear from context.

  • One matrix $\mathbf{1}_{m \times n} \in \mathbb{R}^{m \times n}$ is the matrix with all entries $a_{ij}=1$. It is often written as $\mathbf{1}_m \mathbf{1}_n'$.

  • Identity matrix $\mathbf{I}_n \in \mathbb{R}^{n \times n}$ has entries $a_{ij} = \delta_{ij}$ (Kronecker delta notation): $$ \mathbf{I}_n = \begin{pmatrix} 1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & 1 \end{pmatrix}. $$ The subscript is often omitted when the dimension is clear from context. The columns of $\mathbf{I}_n$ are the unit vectors $$ \mathbf{I}_n = \begin{pmatrix} \mathbf{e}_1 \, \mathbf{e}_2 \, \cdots \, \mathbf{e}_n \end{pmatrix}. $$ The identity matrix is called the uniform scaling in some computer languages.

In [12]:
# a 3x5 zero matrix
zeros(3, 5)
Out[12]:
3×5 Matrix{Float64}:
 0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0
In [13]:
# a 3x5 one matrix
ones(3, 5)
Out[13]:
3×5 Matrix{Float64}:
 1.0  1.0  1.0  1.0  1.0
 1.0  1.0  1.0  1.0  1.0
 1.0  1.0  1.0  1.0  1.0
In [14]:
@show ones(3, 5) == ones(3) * ones(5)'
ones(3, 5) == ones(3) * (ones(5))' = true
Out[14]:
true
In [15]:
# uniform scaling
I
Out[15]:
UniformScaling{Bool}
true*I
In [16]:
# a 3x3 identity matrix
I(3)
Out[16]:
3×3 Diagonal{Bool, Vector{Bool}}:
 1  ⋅  ⋅
 ⋅  1  ⋅
 ⋅  ⋅  1
In [17]:
# convert to dense matrix
Matrix(I(3))
Out[17]:
3×3 Matrix{Bool}:
 1  0  0
 0  1  0
 0  0  1
  • Symmetric matrix is a square matrix $\mathbf{A}$ with $a_{ij} = a_{ji}$.
In [18]:
# a symmetric matrix
A = [4 3 -2; 3 -1 5; -2 5 0]
Out[18]:
3×3 Matrix{Int64}:
  4   3  -2
  3  -1   5
 -2   5   0
In [19]:
issymmetric(A)
Out[19]:
true
In [20]:
# turn a general square matrix into a symmetric matrix
A = randn(3, 3)
Out[20]:
3×3 Matrix{Float64}:
 0.794304   1.66378     0.0561321
 1.26949   -0.0676266   0.0104737
 0.333068  -0.474861   -0.254432
In [21]:
# use upper triangular part as data
Symmetric(A)
Out[21]:
3×3 Symmetric{Float64, Matrix{Float64}}:
 0.794304    1.66378     0.0561321
 1.66378    -0.0676266   0.0104737
 0.0561321   0.0104737  -0.254432
In [22]:
# use lower triangular part as data
Symmetric(A, :L)
Out[22]:
3×3 Symmetric{Float64, Matrix{Float64}}:
 0.794304   1.26949     0.333068
 1.26949   -0.0676266  -0.474861
 0.333068  -0.474861   -0.254432
  • A diagonal matrix is a square matrix $\mathbf{A}$ with $a_{ij} = 0$ for all $i \ne j$.
In [23]:
# a diagonal matrix
A = [-1 0 0; 0 2 0; 0 0 -5]
Out[23]:
3×3 Matrix{Int64}:
 -1  0   0
  0  2   0
  0  0  -5
In [24]:
# turn a general square matrix into a diagonal matrix
Diagonal(A)
Out[24]:
3×3 Diagonal{Int64, Vector{Int64}}:
 -1  ⋅   ⋅
  ⋅  2   ⋅
  ⋅  ⋅  -5
  • A lower triangular matrix is a square matrix $\mathbf{A}$ with $a_{ij} = 0$ for all $i < j$.

  • An upper triangular matrix is a square matrix $\mathbf{A}$ with $a_{ij} = 0$ for all $i > j$.

In [25]:
# a lower triangular matrix
A = [4 0 0; 3 -1 0; -1 5 -2]
Out[25]:
3×3 Matrix{Int64}:
  4   0   0
  3  -1   0
 -1   5  -2
In [26]:
# turn a general square matrix into a lower triangular matrix
A = randn(3, 3)
LowerTriangular(A)
Out[26]:
3×3 LowerTriangular{Float64, Matrix{Float64}}:
 -0.529029    ⋅          ⋅ 
 -0.904305   0.0366214   ⋅ 
  0.369159  -0.516973   0.0865552
In [27]:
# turn a general square matrix into an upper triangular matrix
UpperTriangular(A)
Out[27]:
3×3 UpperTriangular{Float64, Matrix{Float64}}:
 -0.529029  0.519145   0.0669796
   ⋅        0.0366214  0.0965049
   ⋅         ⋅         0.0865552
  • A unit lower triangular matrix is a square matrix $\mathbf{A}$ with $a_{ij} = 0$ for all $i < j$ and $a_{ii}=1$.

  • A unit upper triangular matrix is a square matrix $\mathbf{A}$ with $a_{ij} = 0$ for all $i > j$ and $a_{ii}=1$.

In [28]:
# turn a general square matrix into a unit lower triangular matrix
UnitLowerTriangular(A)
Out[28]:
3×3 UnitLowerTriangular{Float64, Matrix{Float64}}:
  1.0         ⋅         ⋅ 
 -0.904305   1.0        ⋅ 
  0.369159  -0.516973  1.0
In [29]:
# turn a general square matrix into a unit upper triangular matrix
UnitUpperTriangular(A)
Out[29]:
3×3 UnitUpperTriangular{Float64, Matrix{Float64}}:
 1.0  0.519145  0.0669796
  ⋅   1.0       0.0965049
  ⋅    ⋅        1.0
  • We say a matrix is sparse if most (almost all) of its elements are zero.

    • On a computer, a sparse matrix stores only the non-zero entries and their positions.

    • Special algorithms exploit the sparsity. Their efficiency depends on the number of nonzeros and their positions.

    • A sparse matrix can be visualized with a spy plot.

In [30]:
# generate a random 50x120 sparse matrix, with sparsity level 0.05
A = sprandn(50, 120, 0.05)
Out[30]:
50×120 SparseMatrixCSC{Float64, Int64} with 289 stored entries:
⢨⠀⠀⠀⠀⠐⠀⠢⠈⡀⠀⠐⢀⢀⠂⠈⠀⠂⠨⠀⠠⠀⠂⡈⢀⠀⠠⠀⠀⠒⠈⠀⢀⠀⠆⡀⠀⠀⠊⠂
⢂⡀⠀⠐⢂⠠⢀⠀⠠⠀⠁⠀⢀⠀⠀⠒⠁⠀⡀⢀⠀⣁⣐⠀⢀⠁⠐⠀⠀⡀⠄⠐⠀⠀⠀⠀⠀⡀⠂⠃
⠨⠈⠀⠀⠠⠄⠀⠁⠈⠀⠈⠂⠁⠀⢀⠐⠠⡀⠁⠄⠠⠁⠁⠈⠀⣀⠃⠀⠀⠠⡠⠅⠀⠄⠁⠠⠀⠀⠀⠅
⠀⠁⠀⣠⠀⠌⠀⠀⠈⠀⠰⠀⠄⠀⠀⠈⠁⠀⠱⠀⠐⠄⠀⠀⠀⠀⠀⠁⠁⠀⠀⠌⠒⠀⠀⠬⠡⡠⠀⠂
⠠⠀⠠⠠⠁⠈⢀⠀⡀⠡⠀⢀⠀⠀⠠⢀⠀⢁⠀⠀⠀⠀⠁⠐⠀⡂⠈⠀⠀⠀⠀⠎⡅⢀⠀⡂⠁⠀⠁⠀
⠀⠀⢀⠀⠀⠄⠈⠠⠁⠁⠀⢀⢂⡔⠀⠀⠠⠀⠈⠃⠀⠃⠀⠀⡆⠀⠀⠀⠅⠃⠨⠈⡂⠃⠀⠋⠀⠁⠄⠤
⠀⠀⠀⢁⣆⠀⢐⠑⢑⠀⠀⠀⠉⢐⠀⢀⠀⠐⢀⢀⠁⡐⠂⠂⠀⠂⡀⢁⠀⠉⠠⠀⢀⠂⠀⡈⠂⠋⢠⠀
⠀⠠⠠⠀⢀⠈⠐⠨⠀⠀⠐⠴⠐⠂⡐⠐⠀⠀⠈⢈⡀⠂⠀⡀⢀⠈⠃⡀⡄⠀⡐⠂⡀⡀⠀⠀⠂⠀⠀⠁
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
In [31]:
# show contents of the SparseMatrixCSC{Float64, Int64} data structure
dump(A)
SparseMatrixCSC{Float64, Int64}
  m: Int64 50
  n: Int64 120
  colptr: Array{Int64}((121,)) [1, 2, 7, 9, 11, 12, 13, 13, 15, 16  …  267, 268, 272, 274, 277, 278, 281, 288, 289, 290]
  rowval: Array{Int64}((289,)) [8, 1, 4, 6, 12, 14, 16, 28, 11, 20  …  42, 2, 7, 8, 13, 21, 34, 44, 17, 35]
  nzval: Array{Float64}((289,)) [0.3538360351341854, 0.698513581368634, 1.0297417811852574, 2.335044134794315, -1.393559434162859, -2.1759100026640312, 1.8982960128801192, 0.7880030395039885, -0.9337764537577509, 1.1910513644580212  …  -1.2936023970187036, 0.21816637681857168, 0.46882474265260043, 1.6253141153582433, -0.6449321806678313, 0.18222465974617355, -0.7109845579830814, 1.1433097964370182, -0.3394250591410285, -0.698448958216769]
In [32]:
Plots.spy(A)
Out[32]:

Matrix operations

  • Scalar-matrix multiplication or scalar-matrix product: For $\beta \in \mathbb{R}$ and $\mathbf{A} \in \mathbb{R}^{m \times n}$, $$ \beta \mathbf{A} = \begin{pmatrix} \beta a_{11} & \cdots & \beta a_{1n} \\ \vdots & \ddots & \vdots \\ \beta a_{m1} & \cdots & \beta a_{mn} \end{pmatrix}. $$
In [33]:
β = 0.5
A = [0 1 -2.3 0.1;
    1.3 4 -0.1 0;
    4.1 -1 0 1.7]
Out[33]:
3×4 Matrix{Float64}:
 0.0   1.0  -2.3  0.1
 1.3   4.0  -0.1  0.0
 4.1  -1.0   0.0  1.7
In [34]:
β * A
Out[34]:
3×4 Matrix{Float64}:
 0.0    0.5  -1.15  0.05
 0.65   2.0  -0.05  0.0
 2.05  -0.5   0.0   0.85
  • Matrix addition and subtraction: For $\mathbf{A}, \mathbf{B} \in \mathbb{R}^{m \times n}$, $$ \mathbf{A} + \mathbf{B} = \begin{pmatrix} a_{11} + b_{11} & \cdots & a_{1n} + b_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} + b_{m1} & \cdots & a_{mn} + b_{mn} \end{pmatrix}, \quad \mathbf{A} - \mathbf{B} = \begin{pmatrix} a_{11} - b_{11} & \cdots & a_{1n} - b_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} - b_{m1} & \cdots & a_{mn} - b_{mn} \end{pmatrix}. $$
In [35]:
A = [0 1 -2.3 0.1;
    1.3 4 -0.1 0;
    4.1 -1 0 1.7]
B = ones(size(A))
Out[35]:
3×4 Matrix{Float64}:
 1.0  1.0  1.0  1.0
 1.0  1.0  1.0  1.0
 1.0  1.0  1.0  1.0
In [36]:
A + B
Out[36]:
3×4 Matrix{Float64}:
 1.0  2.0  -1.3  1.1
 2.3  5.0   0.9  1.0
 5.1  0.0   1.0  2.7
In [37]:
A - B
Out[37]:
3×4 Matrix{Float64}:
 -1.0   0.0  -3.3  -0.9
  0.3   3.0  -1.1  -1.0
  3.1  -2.0  -1.0   0.7
  • Composing scalar-matrix products with matrix addition/subtraction yields a linear combination of matrices of the same size $$ \alpha_1 \mathbf{A}_1 + \cdots + \alpha_k \mathbf{A}_k. $$ One example of a linear combination of matrices is color-image-to-grayscale-image conversion.
In [38]:
rgb_image = testimage("lighthouse")
Out[38]:
In [39]:
# R(ed) channel
channelview(rgb_image)[1, :, :]
Out[39]:
512×768 Array{N0f8,2} with eltype N0f8:
 0.361  0.349  0.345  0.376  0.369  …  0.349  0.341  0.329  0.345  0.349
 0.361  0.361  0.361  0.361  0.361     0.349  0.361  0.357  0.349  0.369
 0.376  0.353  0.361  0.361  0.369     0.361  0.357  0.357  0.349  0.361
 0.365  0.369  0.365  0.361  0.353     0.349  0.345  0.349  0.353  0.345
 0.365  0.376  0.361  0.365  0.365     0.357  0.361  0.361  0.361  0.349
 0.369  0.376  0.365  0.365  0.369  …  0.369  0.369  0.357  0.365  0.365
 0.38   0.365  0.369  0.365  0.365     0.369  0.361  0.361  0.361  0.365
 0.369  0.365  0.353  0.369  0.365     0.357  0.361  0.361  0.349  0.376
 0.357  0.376  0.388  0.369  0.369     0.357  0.361  0.361  0.369  0.373
 0.349  0.376  0.38   0.388  0.369     0.369  0.376  0.373  0.369  0.373
 0.373  0.376  0.376  0.38   0.369  …  0.369  0.376  0.376  0.361  0.353
 0.388  0.369  0.369  0.38   0.38      0.369  0.361  0.361  0.349  0.361
 0.38   0.388  0.376  0.369  0.376     0.361  0.373  0.349  0.353  0.376
 ⋮                                  ⋱                ⋮             
 0.204  0.18   0.184  0.263  0.396  …  0.22   0.161  0.165  0.157  0.157
 0.169  0.157  0.149  0.208  0.278     0.235  0.2    0.176  0.173  0.169
 0.149  0.153  0.176  0.192  0.149     0.243  0.235  0.251  0.251  0.231
 0.153  0.165  0.169  0.239  0.329     0.255  0.227  0.192  0.188  0.176
 0.188  0.255  0.365  0.529  0.557     0.224  0.235  0.204  0.2    0.231
 0.31   0.49   0.584  0.494  0.4    …  0.247  0.22   0.243  0.255  0.255
 0.486  0.537  0.592  0.604  0.62      0.263  0.235  0.251  0.247  0.231
 0.663  0.745  0.843  0.753  0.651     0.204  0.208  0.208  0.239  0.267
 0.702  0.678  0.616  0.541  0.506     0.267  0.271  0.259  0.259  0.267
 0.459  0.427  0.514  0.655  0.635     0.235  0.22   0.192  0.173  0.184
 0.443  0.443  0.486  0.529  0.573  …  0.173  0.165  0.173  0.165  0.18
 0.0    0.0    0.0    0.0    0.0       0.0    0.0    0.0    0.0    0.0
In [40]:
# G(reen) channel
channelview(rgb_image)[2, :, :]
Out[40]:
512×768 Array{N0f8,2} with eltype N0f8:
 0.486  0.475  0.478  0.514  0.506  …  0.486  0.475  0.463  0.482  0.49
 0.486  0.498  0.494  0.498  0.498     0.482  0.49   0.486  0.482  0.498
 0.514  0.49   0.494  0.494  0.506     0.49   0.482  0.482  0.486  0.498
 0.498  0.506  0.498  0.494  0.49      0.478  0.471  0.475  0.49   0.478
 0.498  0.51   0.494  0.498  0.498     0.482  0.49   0.486  0.498  0.486
 0.502  0.51   0.498  0.498  0.506  …  0.494  0.494  0.482  0.502  0.502
 0.518  0.498  0.506  0.498  0.498     0.494  0.49   0.486  0.498  0.502
 0.502  0.498  0.49   0.506  0.498     0.482  0.486  0.486  0.475  0.502
 0.502  0.51   0.522  0.506  0.506     0.482  0.486  0.486  0.494  0.498
 0.49   0.51   0.518  0.522  0.506     0.49   0.502  0.498  0.494  0.498
 0.518  0.51   0.514  0.518  0.514  …  0.482  0.506  0.506  0.498  0.494
 0.533  0.506  0.506  0.518  0.518     0.494  0.49   0.49   0.486  0.498
 0.525  0.522  0.51   0.502  0.51      0.49   0.498  0.475  0.49   0.514
 ⋮                                  ⋱                ⋮             
 0.169  0.145  0.157  0.224  0.353  …  0.255  0.2    0.192  0.188  0.188
 0.133  0.129  0.118  0.169  0.235     0.282  0.235  0.216  0.2    0.196
 0.122  0.122  0.161  0.153  0.106     0.29   0.282  0.286  0.29   0.271
 0.125  0.137  0.149  0.212  0.294     0.302  0.275  0.227  0.227  0.216
 0.165  0.227  0.349  0.514  0.545     0.263  0.282  0.239  0.235  0.271
 0.298  0.486  0.584  0.494  0.4    …  0.282  0.267  0.29   0.302  0.302
 0.498  0.545  0.612  0.616  0.631     0.298  0.282  0.298  0.306  0.29
 0.682  0.776  0.871  0.773  0.671     0.239  0.255  0.255  0.286  0.31
 0.745  0.722  0.655  0.573  0.533     0.306  0.322  0.31   0.306  0.31
 0.486  0.467  0.553  0.69   0.675     0.275  0.259  0.231  0.212  0.22
 0.467  0.467  0.525  0.565  0.62   …  0.204  0.192  0.204  0.192  0.208
 0.0    0.0    0.0    0.0    0.0       0.0    0.0    0.0    0.0    0.0
In [41]:
# B(lue) channel
channelview(rgb_image)[3, :, :]
Out[41]:
512×768 Array{N0f8,2} with eltype N0f8:
 0.6    0.588  0.588  0.624  0.616  …  0.537  0.525  0.518  0.525  0.529
 0.6    0.608  0.616  0.608  0.608     0.557  0.576  0.569  0.565  0.58
 0.624  0.608  0.616  0.616  0.616     0.592  0.588  0.596  0.596  0.608
 0.62   0.624  0.62   0.616  0.608     0.58   0.576  0.588  0.6    0.588
 0.627  0.639  0.616  0.62   0.62      0.596  0.592  0.6    0.608  0.596
 0.635  0.631  0.62   0.62   0.624  …  0.596  0.596  0.596  0.612  0.612
 0.635  0.62   0.624  0.62   0.62      0.596  0.592  0.6    0.608  0.612
 0.635  0.62   0.608  0.624  0.62      0.596  0.6    0.6    0.588  0.616
 0.627  0.631  0.631  0.624  0.624     0.596  0.6    0.6    0.596  0.604
 0.62   0.631  0.627  0.643  0.624     0.608  0.616  0.604  0.596  0.604
 0.643  0.631  0.624  0.635  0.631  …  0.592  0.608  0.608  0.596  0.592
 0.659  0.624  0.624  0.635  0.635     0.596  0.592  0.592  0.596  0.608
 0.643  0.651  0.639  0.635  0.631     0.592  0.604  0.588  0.6    0.624
 ⋮                                  ⋱                ⋮             
 0.137  0.125  0.145  0.196  0.31   …  0.29   0.224  0.22   0.204  0.204
 0.118  0.118  0.118  0.153  0.2       0.314  0.271  0.239  0.231  0.224
 0.11   0.122  0.153  0.145  0.082     0.333  0.314  0.325  0.314  0.294
 0.114  0.125  0.133  0.192  0.267     0.333  0.31   0.267  0.251  0.239
 0.141  0.208  0.314  0.478  0.498     0.298  0.314  0.275  0.263  0.294
 0.263  0.447  0.545  0.443  0.349  …  0.318  0.298  0.325  0.333  0.333
 0.443  0.494  0.557  0.561  0.569     0.333  0.314  0.329  0.333  0.318
 0.627  0.725  0.82   0.725  0.624     0.275  0.286  0.286  0.325  0.353
 0.678  0.667  0.612  0.529  0.494     0.341  0.341  0.341  0.349  0.353
 0.447  0.424  0.518  0.659  0.643     0.298  0.282  0.255  0.235  0.247
 0.439  0.439  0.49   0.545  0.592  …  0.22   0.22   0.22   0.2    0.22
 0.0    0.0    0.0    0.0    0.0       0.0    0.0    0.0    0.0    0.0
In [42]:
gray_image = Gray.(rgb_image)
Out[42]:
In [43]:
channelview(gray_image)
Out[43]:
512×768 reinterpret(reshape, N0f8, ::Array{Gray{N0f8},2}) with eltype N0f8:
 0.463  0.451  0.451  0.486  0.478  …  0.451  0.439  0.427  0.447  0.451
 0.463  0.471  0.467  0.471  0.471     0.451  0.463  0.459  0.451  0.471
 0.486  0.463  0.467  0.467  0.478     0.463  0.459  0.459  0.459  0.471
 0.471  0.478  0.471  0.467  0.463     0.451  0.447  0.451  0.463  0.451
 0.475  0.486  0.467  0.471  0.471     0.459  0.463  0.463  0.471  0.459
 0.478  0.482  0.471  0.471  0.478  …  0.467  0.467  0.459  0.475  0.475
 0.49   0.471  0.478  0.471  0.471     0.467  0.463  0.463  0.471  0.475
 0.478  0.471  0.463  0.478  0.471     0.459  0.463  0.463  0.451  0.478
 0.475  0.482  0.494  0.478  0.478     0.459  0.463  0.463  0.467  0.475
 0.463  0.482  0.49   0.494  0.478     0.467  0.478  0.475  0.467  0.475
 0.49   0.482  0.486  0.49   0.482  …  0.463  0.478  0.478  0.467  0.463
 0.506  0.478  0.478  0.49   0.49      0.467  0.463  0.463  0.459  0.471
 0.494  0.498  0.486  0.478  0.482     0.463  0.475  0.451  0.463  0.486
 ⋮                                  ⋱                ⋮             
 0.176  0.153  0.165  0.231  0.361  …  0.247  0.192  0.188  0.18   0.18
 0.141  0.137  0.125  0.18   0.243     0.271  0.227  0.208  0.196  0.192
 0.129  0.129  0.165  0.165  0.118     0.282  0.271  0.278  0.282  0.263
 0.133  0.145  0.153  0.22   0.302     0.29   0.263  0.22   0.22   0.208
 0.169  0.235  0.349  0.514  0.545     0.255  0.271  0.231  0.227  0.263
 0.298  0.482  0.58   0.49   0.396  …  0.275  0.255  0.278  0.29   0.29
 0.49   0.537  0.6    0.608  0.62      0.29   0.271  0.286  0.29   0.275
 0.671  0.761  0.855  0.761  0.659     0.231  0.243  0.243  0.278  0.302
 0.725  0.702  0.639  0.557  0.522     0.298  0.31   0.298  0.298  0.302
 0.475  0.451  0.537  0.675  0.659     0.267  0.251  0.224  0.204  0.212
 0.455  0.455  0.51   0.553  0.604  …  0.196  0.188  0.196  0.184  0.2
 0.0    0.0    0.0    0.0    0.0       0.0    0.0    0.0    0.0    0.0

The gray image is computed by taking a linear combination of the R(ed), G(reen), and B(lue) channels $$ 0.299 \mathbf{R} + 0.587 \mathbf{G} + 0.114 \mathbf{B} $$ according to Rec. ITU-R BT.601-7.

In [44]:
@show 0.299 * channelview(rgb_image)[1, 1, 1] + 
    0.587 * channelview(rgb_image)[2, 1, 1] + 
    0.114 * channelview(rgb_image)[3, 1, 1]
@show channelview(gray_image)[1, 1];
0.299 * (channelview(rgb_image))[1, 1, 1] + 0.587 * (channelview(rgb_image))[2, 1, 1] + 0.114 * (channelview(rgb_image))[3, 1, 1] = 0.4617176470588235
(channelview(gray_image))[1, 1] = 0.463N0f8
In [45]:
# first pixel
rgb_image[1]
Out[45]:
In [46]:
# first pixel converted to gray scale
Gray{N0f8}(0.299 * rgb_image[1].r + 0.587 * rgb_image[1].g + 0.114 * rgb_image[1].b)
Out[46]:
  • The transpose of a matrix $\mathbf{A} \in \mathbb{R}^{m \times n}$ is the $n \times m$ matrix $$ \mathbf{A}' = \begin{pmatrix} a_{11} & \cdots & a_{m1} \\ \vdots & \ddots & \vdots \\ a_{1n} & \cdots & a_{mn} \end{pmatrix}. $$ In words, we swap $a_{ij} \leftrightarrow a_{ji}$ for all $i, j$.

  • Alternative notation: $\mathbf{A}^T$, $\mathbf{A}^t$.

  • Properties of matrix transpose:

    • A symmetric matrix satisfies $\mathbf{A} = \mathbf{A}'$.

    • $(\beta \mathbf{A})' = \beta \mathbf{A}'$.

    • $(\mathbf{A} + \mathbf{B})' = \mathbf{A}' + \mathbf{B}'$.

In [47]:
A
Out[47]:
3×4 Matrix{Float64}:
 0.0   1.0  -2.3  0.1
 1.3   4.0  -0.1  0.0
 4.1  -1.0   0.0  1.7
In [48]:
A'
Out[48]:
4×3 adjoint(::Matrix{Float64}) with eltype Float64:
  0.0   1.3   4.1
  1.0   4.0  -1.0
 -2.3  -0.1   0.0
  0.1   0.0   1.7
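A quick numerical check of the transpose rules listed above (a minimal sketch; the matrices M and N are fresh random matrices introduced only for this check):

M, N = randn(3, 4), randn(3, 4)
β = 2.5
# (βM)' equals βM'
@show (β * M)' ≈ β * M'
# (M + N)' equals M' + N'
@show (M + N)' ≈ M' + N'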
In [49]:
# note it's not rotating the picture!
rgb_image'
Out[49]:
  • Transpose a block matrix $$ \begin{pmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{C} & \mathbf{D} \end{pmatrix}' = \begin{pmatrix} \mathbf{A}' & \mathbf{C}' \\ \mathbf{B}' & \mathbf{D}' \end{pmatrix}. $$
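A small numerical sketch of the block-transpose identity; the block names A1, B1, C1, D1 are introduced only for this illustration:

A1, B1 = randn(2, 2), randn(2, 3)
C1, D1 = randn(4, 2), randn(4, 3)
# transpose of the block matrix equals the block matrix of transposed blocks, with off-diagonal blocks swapped
@show [A1 B1; C1 D1]' == [A1' C1'; B1' D1']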

Matrix norm

  • The Frobenius norm (or simply the norm) of a matrix $\mathbf{A} \in \mathbb{R}^{m \times n}$ is defined as $$ \|\mathbf{A}\| = \sqrt{\sum_{i,j} a_{ij}^2}. $$ We often write $\|\mathbf{A}\|_{\text{F}}$ to distinguish it from other matrix norms.

  • Similar to the vector norm, matrix norm satisfies the following properties:

    1. Positive definiteness: $\|\mathbf{A}\| \ge 0$. $\|\mathbf{A}\| = 0$ if and only if $\mathbf{A}=\mathbf{0}_{m \times n}$.

    2. Homogeneity: $\|\alpha \mathbf{A}\| = |\alpha| \|\mathbf{A}\|$ for any scalar $\alpha$ and matrix $\mathbf{A}$.

    3. Triangle inequality: $\|\mathbf{A} + \mathbf{B}\| \le \|\mathbf{A}\| + \|\mathbf{B}\|$ for any $\mathbf{A}, \mathbf{B} \in \mathbb{R}^{m \times n}$.

    4. A new rule for (any) matrix norm: $\|\mathbf{A} \mathbf{B}\| \le \|\mathbf{A}\| \|\mathbf{B}\|$.

In [50]:
A = [0 1 -2.3 0.1;
    1.3 4 -0.1 0;
    4.1 -1 0 1.7]
Out[50]:
3×4 Matrix{Float64}:
 0.0   1.0  -2.3  0.1
 1.3   4.0  -0.1  0.0
 4.1  -1.0   0.0  1.7
In [51]:
# Frobenius norm 
norm(A)
Out[51]:
6.685805860178712
In [52]:
# manually calculate Frobenius norm
sqrt(sum(abs2, A))
Out[52]:
6.685805860178712
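A quick numerical illustration of property 4 for the Frobenius norm, reusing the matrix A above; B0 is a random matrix of compatible size introduced only for this check:

B0 = randn(4, 2)
# submultiplicativity: ‖A B0‖ ≤ ‖A‖ ‖B0‖
@show norm(A * B0) ≤ norm(A) * norm(B0)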

Matrix-matrix and matrix-vector multiplications

Matrix-matrix multiplication

  • Matrix-matrix multiplication or matrix-matrix product: For $\mathbf{A} \in \mathbb{R}^{m \times n}$ and $\mathbf{B} \in \mathbb{R}^{n \times p}$, $$ \mathbf{C} = \mathbf{A} \mathbf{B} $$ is the $m \times p$ matrix with entries $$ c_{ij} = a_{i1} b_{1j} + a_{i2} b_{2j} + \cdots + a_{in} b_{nj}. $$ Note the number of columns in $\mathbf{A}$ must be equal to the number of rows in $\mathbf{B}$.

  • Vector inner product $\mathbf{a}' \mathbf{b}$ is a special case of matrix-matrix multiplication with $m=p=1$. $$ \mathbf{a}' \mathbf{b} = \begin{pmatrix} a_1 \, \cdots \, a_n \end{pmatrix} \begin{pmatrix} b_1 \\ \vdots \\ b_n \end{pmatrix}. $$

  • View of matrix-matrix multiplication as inner products. $c_{ij}$ is the inner product of the $i$-th row of $\mathbf{A}$ and the $j$-th column of $\mathbf{B}$: $$ \begin{pmatrix} & & \\ & c_{ij} & \\ & & \end{pmatrix} = \begin{pmatrix} & & \\ - & \mathbf{a}_i' & - \\ & & \end{pmatrix} \begin{pmatrix} & | & \\ & \mathbf{b}_j & \\ & | & \end{pmatrix} = \begin{pmatrix} & & \\ a_{i1} & \cdots & a_{in} \\ & & \end{pmatrix} \begin{pmatrix} & b_{1j} & \\ & \vdots & \\ & b_{nj} & \end{pmatrix}. $$

In [53]:
# an example with m, n, p = 2, 3, 2
A = [-1.5 3 2; 1 -1 0]
Out[53]:
2×3 Matrix{Float64}:
 -1.5   3.0  2.0
  1.0  -1.0  0.0
In [54]:
B = [-1 -1; 0 -2; 1 0]
Out[54]:
3×2 Matrix{Int64}:
 -1  -1
  0  -2
  1   0
In [55]:
C = A * B
Out[55]:
2×2 Matrix{Float64}:
  3.5  -4.5
 -1.0   1.0
In [56]:
# check C[2,2] = A[2,:]'B[:,2]
C[2, 2] ≈ A[2, :]'B[:, 2]
Out[56]:
true

Matrix-vector multiplication

  • Matrix-vector multiplication or matrix-vector product is a special case of matrix-matrix multiplication with $p=1$ or $m=1$.

    • For $\mathbf{A} \in \mathbb{R}^{m \times n}$ and $\mathbf{b} \in \mathbb{R}^{n}$, $$ \mathbf{A} \mathbf{b} = \begin{pmatrix} - & \mathbf{a}_1' & - \\ & \vdots & \\ - & \mathbf{a}_m' & - \end{pmatrix} \mathbf{b} = \begin{pmatrix} \mathbf{a}_1' \mathbf{b} \\ \vdots \\ \mathbf{a}_m' \mathbf{b} \end{pmatrix} \in \mathbb{R}^m. $$ Alternatively, $\mathbf{A} \mathbf{b}$ can be viewed as a linear combination of columns of $\mathbf{A}$ $$ \mathbf{A} \mathbf{b} = \begin{pmatrix} | & & | \\ \mathbf{a}_1 & & \mathbf{a}_n \\ | & & | \end{pmatrix} \begin{pmatrix} b_1 \\ \vdots \\ b_n \end{pmatrix} = b_1 \mathbf{a}_1 + \cdots + b_n \mathbf{a}_n. $$

    • For $\mathbf{a} \in \mathbb{R}^n$ and $\mathbf{B} \in \mathbb{R}^{n \times p}$, $$ \mathbf{a}' \mathbf{B} = \mathbf{a}' \begin{pmatrix} | & & | \\ \mathbf{b}_1 & & \mathbf{b}_p \\ | & & | \end{pmatrix} = (\mathbf{a}' \mathbf{b}_1 \, \ldots \, \mathbf{a}'\mathbf{b}_p). $$ Alternatively, $\mathbf{a}' \mathbf{B}$ can be viewed as a linear combination of the rows of $\mathbf{B}$ $$ \mathbf{a}' \mathbf{B} = (a_1 \, \ldots \, a_n) \begin{pmatrix} - & \mathbf{b}_1' & - \\ & & \\ - & \mathbf{b}_n' & - \end{pmatrix} = a_1 \mathbf{b}_1' + \cdots + a_n \mathbf{b}_n'. $$

  • View of matrix multiplication $\mathbf{C} = \mathbf{A} \mathbf{B}$ as matrix-vector products.

    • $j$-th column of $\mathbf{C}$ is equal to product of $\mathbf{A}$ and $j$-th column of $\mathbf{B}$ $$ \begin{pmatrix} & | & \\ & \mathbf{c}_j & \\ & | & \end{pmatrix} = \mathbf{A} \begin{pmatrix} & | & \\ & \mathbf{b}_j & \\ & | & \end{pmatrix}. $$

    • $i$-th row of $\mathbf{C}$ is equal to product of $i$-th row of $\mathbf{A}$ and $\mathbf{B}$ $$ \begin{pmatrix} & & \\ - & \mathbf{c}_i' & - \\ & & \end{pmatrix} = \begin{pmatrix} & & \\ - & \mathbf{a}_i' & - \\ & & \end{pmatrix} \mathbf{B}. $$

In [57]:
# check C[:, 2] = A * B[:, 2]
C[:, 2] ≈ A * B[:, 2]
Out[57]:
true
In [58]:
# check C[2, :]' = A[2, :]' * B
# note C[2, :] returns a column vector in Julia!
C[2, :]' ≈ A[2, :]'B
Out[58]:
true
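A quick numerical check of the linear-combination views above, reusing the A (2×3) and B (3×2) defined earlier; the random vectors a and b are introduced only for this check:

b = randn(3)
# A * b is a linear combination of the columns of A
@show A * b ≈ b[1] * A[:, 1] + b[2] * A[:, 2] + b[3] * A[:, 3]
a = randn(3)
# a' * B is a linear combination of the rows of B
@show a' * B ≈ a[1] * B[1, :]' + a[2] * B[2, :]' + a[3] * B[3, :]'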

Exercise - multiplication of adjacency matrix

Here is a directed graph with 4 nodes and 5 edges.

In [59]:
# a simple directed graph on GS p16
g = SimpleDiGraph(4)
add_edge!(g, 1, 2)
add_edge!(g, 1, 3)
add_edge!(g, 2, 3)
add_edge!(g, 2, 4)
add_edge!(g, 4, 3)
gplot(g, nodelabel=["x1", "x2", "x3", "x4"], edgelabel=["b1", "b2", "b3", "b4", "b5"])
Out[59]:

The adjacency matrix $\mathbf{A}$ has entries \begin{eqnarray*} a_{ij} = \begin{cases} 1 & \text{if node $i$ links to node $j$} \\ 0 & \text{otherwise} \end{cases}. \end{eqnarray*} Note this definition differs from the BV book (p112).

In [60]:
# adjacency matrix A
A = convert(Matrix{Int64}, adjacency_matrix(g))
Out[60]:
4×4 Matrix{Int64}:
 0  1  1  0
 0  0  1  1
 0  0  0  0
 0  0  1  0

Give a graph interpretation of $\mathbf{A}^2 = \mathbf{A} \mathbf{A}$, $\mathbf{A}^3 = \mathbf{A} \mathbf{A} \mathbf{A}$, ...

In [61]:
A * A
Out[61]:
4×4 Matrix{Int64}:
 0  0  1  1
 0  0  1  0
 0  0  0  0
 0  0  0  0
In [62]:
A * A * A
Out[62]:
4×4 Matrix{Int64}:
 0  0  1  0
 0  0  0  0
 0  0  0  0
 0  0  0  0

Answer: $(A^k)_{ij}$ counts the number of paths from node $i$ to node $j$ in exactly $k$ steps.

Properties of matrix multiplications

  • Associative: $$ (\mathbf{A} \mathbf{B}) \mathbf{C} = \mathbf{A} (\mathbf{B} \mathbf{C}) = \mathbf{A} \mathbf{B} \mathbf{C}. $$

  • Associative with scalar-matrix multiplication: $$ (\alpha \mathbf{B}) \mathbf{C} = \alpha \mathbf{B} \mathbf{C} = \mathbf{B} (\alpha \mathbf{C}). $$

  • Distributive with sum: $$ (\mathbf{A} + \mathbf{B}) \mathbf{C} = \mathbf{A} \mathbf{C} + \mathbf{B} \mathbf{C}, \quad \mathbf{A} (\mathbf{B} + \mathbf{C}) = \mathbf{A} \mathbf{B} + \mathbf{A} \mathbf{C}. $$

  • Transpose of product: $$ (\mathbf{A} \mathbf{B})' = \mathbf{B}' \mathbf{A}'. $$

  • Not commutative: In general, $$ \mathbf{A} \mathbf{B} \ne \mathbf{B} \mathbf{A}. $$ There are exceptions, e.g., $\mathbf{A} \mathbf{I} = \mathbf{I} \mathbf{A}$ for square $\mathbf{A}$.

In [63]:
A = randn(3, 2)
B = randn(2, 4)
A * B
Out[63]:
3×4 Matrix{Float64}:
  0.392005  -1.49338   -0.132463  -0.114101
 -1.23592   -0.865032  -1.32952   -0.468272
  0.332785  -0.401422   0.159134   0.0318462
In [64]:
# dimensions are even incompatible!
B * A
DimensionMismatch("A has dimensions (2,4) but B has dimensions (3,2)")

Stacktrace:
 [1] gemm_wrapper!(C::Matrix{Float64}, tA::Char, tB::Char, A::Matrix{Float64}, B::Matrix{Float64}, _add::LinearAlgebra.MulAddMul{true, true, Bool, Bool})
   @ LinearAlgebra /Users/julia/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.6/LinearAlgebra/src/matmul.jl:643
 [2] mul!
   @ /Users/julia/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.6/LinearAlgebra/src/matmul.jl:169 [inlined]
 [3] mul!
   @ /Users/julia/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.6/LinearAlgebra/src/matmul.jl:275 [inlined]
 [4] *(A::Matrix{Float64}, B::Matrix{Float64})
   @ LinearAlgebra /Users/julia/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.6/LinearAlgebra/src/matmul.jl:160
 [5] top-level scope
   @ In[64]:2
 [6] eval
   @ ./boot.jl:360 [inlined]
 [7] include_string(mapexpr::typeof(REPL.softscope), mod::Module, code::String, filename::String)
   @ Base ./loading.jl:1094
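Although B * A is not even defined here, the transpose-of-product rule from the list above can still be checked numerically with the same A and B:

# (A * B)' equals B' * A'
@show (A * B)' ≈ B' * A'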
  • For two vectors $\mathbf{a} \in \mathbb{R}^m$ and $\mathbf{b} \in \mathbb{R}^n$, the matrix $$ \mathbf{a} \mathbf{b}' = \begin{pmatrix} a_1 \\ \vdots \\ a_m \end{pmatrix} \begin{pmatrix} b_1 \, \cdots \, b_n \end{pmatrix} = \begin{pmatrix} a_1 b_1 & \cdots & a_1 b_n \\ \vdots & & \vdots \\ a_m b_1 & \cdots & a_m b_n \end{pmatrix} $$ is called the outer product of $\mathbf{a}$ and $\mathbf{b}$. Apparently $\mathbf{a} \mathbf{b}' \ne \mathbf{b}' \mathbf{a}$ unless both are scalars.

  • View of matrix-matrix product $\mathbf{C} = \mathbf{A} \mathbf{B}$ as sum of outer products: $$ \mathbf{C} = \begin{pmatrix} | & & | \\ \mathbf{a}_1 & \cdots & \mathbf{a}_n \\ | & & | \end{pmatrix} \begin{pmatrix} - & \mathbf{b}_1' & - \\ & \vdots & \\ - & \mathbf{b}_n' & - \end{pmatrix} = \mathbf{a}_1 \mathbf{b}_1' + \cdots + \mathbf{a}_n \mathbf{b}_n'. $$

In [65]:
m, n, p = 3, 2, 4
A = randn(m, n)
B = randn(n, p)
C = A * B
@show C ≈ A[:, 1] * B[1, :]' + A[:, 2] * B[2, :]'
C ≈ A[:, 1] * (B[1, :])' + A[:, 2] * (B[2, :])' = true
Out[65]:
true

Gram matrix

  • Let $\mathbf{A} \in \mathbb{R}^{m \times n}$ with columns $\mathbf{a}_1, \ldots, \mathbf{a}_n$. The matrix product $$ \mathbf{G} = \mathbf{A}' \mathbf{A} = \begin{pmatrix} - & \mathbf{a}_1' & - \\ & \vdots & \\ - & \mathbf{a}_n' & - \end{pmatrix} \begin{pmatrix} | & & | \\ \mathbf{a}_1 & & \mathbf{a}_n \\ | & & | \end{pmatrix} = \begin{pmatrix} \mathbf{a}_1' \mathbf{a}_1 & \cdots & \mathbf{a}_1' \mathbf{a}_n \\ \vdots & \ddots & \vdots \\ \mathbf{a}_n' \mathbf{a}_1 & \cdots & \mathbf{a}_n' \mathbf{a}_n \end{pmatrix} \in \mathbb{R}^{n \times n} $$ is called the Gram matrix.

  • Gram matrix is symmetric: $\mathbf{G}' = (\mathbf{A}' \mathbf{A})' = \mathbf{A}' (\mathbf{A}')' = \mathbf{A}' \mathbf{A} = \mathbf{G}$.

  • We also call $\mathbf{A} \mathbf{A}'$ a Gram matrix.

In [66]:
A = randn(5, 3)
Out[66]:
5×3 Matrix{Float64}:
 -1.43493    0.195032   -0.968677
 -0.404729   1.38798     0.652294
 -0.855932   0.293899   -0.825902
 -1.79805   -0.0890465  -0.177281
 -0.694515   0.228598    0.325683
In [67]:
# 3x3
A' * A
Out[67]:
3×3 Matrix{Float64}:
  6.67079  -1.09183   1.92547
 -1.09183   2.1111    0.563955
  1.92547   0.563955  2.18344
In [68]:
# 5x5
A * A'
Out[68]:
5×5 Matrix{Float64}:
 3.03541   0.219599  2.08556   2.73444   0.725686
 0.219599  2.51579   0.215617  0.488489  0.810822
 2.08556   0.215617  1.50111   1.65925   0.39266
 2.73444   0.488489  1.65925   3.27233   1.17068
 0.725686  0.810822  0.39266   1.17068   0.640677

Linear independence of columns

  • A set of vectors $\mathbf{a}_1, \ldots, \mathbf{a}_k \in \mathbb{R}^{n}$ are linearly independent if, for the matrix $\mathbf{A} = (\mathbf{a}_1 \, \ldots \, \mathbf{a}_k) \in \mathbb{R}^{n \times k}$, $$ \mathbf{A} \mathbf{b} = b_1 \mathbf{a}_1 + \cdots + b_k \mathbf{a}_k = \mathbf{0}_n $$ implies that $\mathbf{b} = \mathbf{0}_k$. In this case, we also say $\mathbf{A}$ has linearly independent columns.
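For example, the columns of the matrix below are linearly dependent, since the third column is the sum of the first two; the nonzero vector $\mathbf{b} = (1, 1, -1)'$ satisfies $\mathbf{A} \mathbf{b} = \mathbf{0}$:

A = [1 0 1; 0 1 1; 1 1 2.0]   # third column = first column + second column
b = [1, 1, -1]
A * b                         # equals the zero vector, so the columns are not linearly independent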

Block matrix-matrix multiplication

  • Matrix-matrix multiplication in block form: $$ \begin{pmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{C} & \mathbf{D} \end{pmatrix} \begin{pmatrix} \mathbf{W} & \mathbf{Y} \\ \mathbf{X} & \mathbf{Z} \end{pmatrix} = \begin{pmatrix} \mathbf{A} \mathbf{W} + \mathbf{B} \mathbf{X} & \mathbf{A} \mathbf{Y} + \mathbf{B} \mathbf{Z} \\ \mathbf{C} \mathbf{W} + \mathbf{D} \mathbf{X} & \mathbf{C} \mathbf{Y} + \mathbf{D} \mathbf{Z} \end{pmatrix}. $$ Submatrices need to have compatible dimensions.
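A numerical sketch of block matrix-matrix multiplication with randomly generated, dimension-compatible blocks (the block names below are introduced only for this check):

A, B, C, D = randn(2, 2), randn(2, 3), randn(3, 2), randn(3, 3)
W, X, Y, Z = randn(2, 2), randn(3, 2), randn(2, 4), randn(3, 4)
# multiply the assembled block matrices and compare with the blockwise formula
@show [A B; C D] * [W Y; X Z] ≈ [(A*W + B*X) (A*Y + B*Z); (C*W + D*X) (C*Y + D*Z)]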

Linear functions and operators

  • The matrix-vector product $\mathbf{y} = \mathbf{A} \mathbf{x}$ can be viewed as a function acting on an input $\mathbf{x} \in \mathbb{R}^n$ and outputting $\mathbf{y} \in \mathbb{R}^m$. In this sense, we also say any matrix $\mathbf{A}$ is a linear operator.

  • A function $f: \mathbb{R}^n \mapsto \mathbb{R}^m$ is linear if $$ f(\alpha \mathbf{x} + \beta \mathbf{y}) = \alpha f(\mathbf{x}) + \beta f(\mathbf{y}) $$ for all scalars $\alpha, \beta$ and vectors $\mathbf{x}, \mathbf{y} \in \mathbb{R}^n$.

  • Definition of linear function implies that the superposition property holds for any linear combination $$ f(\alpha_1 \mathbf{x}_1 + \cdots + \alpha_p \mathbf{x}_p) = \alpha_1 f(\mathbf{x}_1) + \cdots + \alpha_p f(\mathbf{x}_p). $$

  • Any linear function is a matrix-vector product and vice versa.

    1. For a fixed matrix $\mathbf{A} \in \mathbb{R}^{m \times n}$, the function $f: \mathbb{R}^n \mapsto \mathbb{R}^m$ defined by $$ f(\mathbf{x}) = \mathbf{A} \mathbf{x} $$ is linear, because $f(\alpha \mathbf{x} + \beta \mathbf{y}) = \mathbf{A} (\alpha \mathbf{x} + \beta \mathbf{y}) = \alpha \mathbf{A} \mathbf{x} + \beta \mathbf{A} \mathbf{y} = \alpha f(\mathbf{x}) + \beta f(\mathbf{y})$.

    2. Any linear function is a matrix-vector product because \begin{eqnarray*} f(\mathbf{x}) &=& f(x_1 \mathbf{e}_1 + \cdots + x_n \mathbf{e}_n) \\ &=& x_1 f(\mathbf{e}_1) + \cdots + x_n f(\mathbf{e}_n) \\ &=& \begin{pmatrix} f(\mathbf{e}_1) \, \cdots \, f(\mathbf{e}_n) \end{pmatrix} \mathbf{x}. \end{eqnarray*} Hence $f(\mathbf{x}) = \mathbf{A} \mathbf{x}$ with $\mathbf{A} = \begin{pmatrix} f(\mathbf{e}_1) \, \cdots \, f(\mathbf{e}_n) \end{pmatrix}$.
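A minimal sketch of point 2: given only the ability to evaluate a linear function f, its matrix can be recovered column by column from the unit vectors (the names f, E, Arec below are introduced only for this illustration):

A = randn(3, 4)
f(x) = A * x                          # a linear function f: R^4 -> R^3
E = Matrix{Float64}(I, 4, 4)          # the columns of E are the unit vectors e_1, ..., e_4
Arec = hcat([f(E[:, j]) for j in 1:4]...)
@show Arec ≈ A                        # the recovered matrix matches A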

Permutation

  • Reverser matrix: $$ \mathbf{A} = \begin{pmatrix} 0 & \cdots & 1 \\ \vdots & .^{.^{.}} & \vdots \\ 1 & \cdots & 0 \end{pmatrix}, \quad \mathbf{A} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_{n-1} \\ x_n \end{pmatrix} = \begin{pmatrix} x_n \\ x_{n-1} \\ \vdots \\ x_2 \\ x_1 \end{pmatrix}. $$
In [69]:
n = 5
A = [i + j == n + 1 ? 1 : 0 for i in 1:n, j in 1:n]
Out[69]:
5×5 Matrix{Int64}:
 0  0  0  0  1
 0  0  0  1  0
 0  0  1  0  0
 0  1  0  0  0
 1  0  0  0  0
In [70]:
x = Vector(1:n)
Out[70]:
5-element Vector{Int64}:
 1
 2
 3
 4
 5
In [71]:
A * x
Out[71]:
5-element Vector{Int64}:
 5
 4
 3
 2
 1
  • Circular shift matrix $$ \mathbf{A} = \begin{pmatrix} 0 & 0 & \cdots & 0 & 1 \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{pmatrix}, \quad \mathbf{A} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_{n-1} \\ x_n \end{pmatrix} = \begin{pmatrix} x_n \\ x_1 \\ x_2 \\ \vdots \\ x_{n-1} \end{pmatrix}. $$
In [72]:
n = 5
A = [mod(i - j, n) == 1 ? 1 : 0 for i in 1:n, j in 1:n]
Out[72]:
5×5 Matrix{Int64}:
 0  0  0  0  1
 1  0  0  0  0
 0  1  0  0  0
 0  0  1  0  0
 0  0  0  1  0
In [73]:
x = Vector(1:n)
Out[73]:
5-element Vector{Int64}:
 1
 2
 3
 4
 5
In [74]:
A * x
Out[74]:
5-element Vector{Int64}:
 5
 1
 2
 3
 4
In [75]:
A * (A * x)
Out[75]:
5-element Vector{Int64}:
 4
 5
 1
 2
 3
  • The reverser and circular shift matrices are examples of permutation matrix.

    A permutation matrix is a square 0-1 matrix with exactly one entry equal to 1 in each row and each column.

    Equivalently, a permutation matrix is the identity matrix with columns reordered.

    Equivalently, a permutation matrix is the identity matrix with rows reordered.

    $\mathbf{A} \mathbf{x}$ is a permutation of elements in $\mathbf{x}$.

In [76]:
σ = randperm(n)
Out[76]:
5-element Vector{Int64}:
 1
 5
 3
 4
 2
In [77]:
# permute the rows of identity matrix
P = I(n)[σ, :]
Out[77]:
5×5 SparseMatrixCSC{Bool, Int64} with 5 stored entries:
 1  ⋅  ⋅  ⋅  ⋅
 ⋅  ⋅  ⋅  ⋅  1
 ⋅  ⋅  1  ⋅  ⋅
 ⋅  ⋅  ⋅  1  ⋅
 ⋅  1  ⋅  ⋅  ⋅
In [78]:
x = Vector(1:n)
Out[78]:
5-element Vector{Int64}:
 1
 2
 3
 4
 5
In [79]:
# operator
P * x
Out[79]:
5-element Vector{Int64}:
 1
 5
 3
 4
 2
In [80]:
x[σ]
Out[80]:
5-element Vector{Int64}:
 1
 5
 3
 4
 2

Rotation

  • A rotation matrix in the plane $\mathbb{R}^2$: $$ \mathbf{A} = \begin{pmatrix} \cos \theta & - \sin \theta \\ \sin \theta & \cos \theta \end{pmatrix}. $$ $\mathbf{A} \mathbf{x}$ is $\mathbf{x}$ rotated counterclockwise over an angle $\theta$.
In [81]:
θ = π/4
A = [cos(θ) -sin(θ); sin(θ) cos(θ)]
Out[81]:
2×2 Matrix{Float64}:
 0.707107  -0.707107
 0.707107   0.707107
In [82]:
# rotate counterclockwise 45 degree
x1 = [2, 1]
x2 = A * x1
Out[82]:
2-element Vector{Float64}:
 0.7071067811865477
 2.1213203435596424
In [83]:
# rotate counterclockwise 90 degree
x3 = A * x2
Out[83]:
2-element Vector{Float64}:
 -0.9999999999999997
  2.0
In [84]:
x_vals = [0 0 0; x1[1] x2[1] x3[1]]
y_vals = [0 0 0; x1[2] x2[2] x3[2]]

plot(x_vals, y_vals, arrow = true, color = :blue,
    legend = :none, xlims = (-3, 3), ylims = (-3, 3),
    annotations = [((x1 .+ 0.2)..., "x1"),
                ((x2 .+ 0.2)..., "x2 = A * x1"),
                ((x3 .+ [-1, 0.2])..., "x3 = A * A * x1")],
    xticks = -3:1:3, yticks = -3:1:3,
    framestyle = :origin,
    aspect_ratio = :equal)
Out[84]:
  • Exercise: Given two vectors $\mathbf{x}, \mathbf{y} \in \mathbb{R}^2$ of same length, how do we find a rotation matrix such that $\mathbf{A} \mathbf{x} = \mathbf{y}$?
In [85]:
x, y = randn(2), randn(2)
x = x / norm(x)
y = y / norm(y)
cosθ = x'y / (norm(x) * norm(y))
Out[85]:
-0.3126336674369729
In [86]:
sinθ = sqrt(1 - cosθ^2)
A = [cosθ -sinθ; sinθ cosθ]
Out[86]:
2×2 Matrix{Float64}:
 -0.312634  -0.949874
  0.949874  -0.312634
In [87]:
# note we have either Ax = y or Ay = x
if A * y ≈ x
    A = [cosθ sinθ; -sinθ cosθ]
end
Out[87]:
2×2 Matrix{Float64}:
 -0.312634   0.949874
 -0.949874  -0.312634
In [88]:
A * x ≈ y
Out[88]:
true

Projection and reflection

  • Projection of $\mathbf{x} \in \mathbb{R}^n$ on the line through $\mathbf{0}_n$ and $\mathbf{a}$: $$ \mathbf{y} = \frac{\mathbf{a}'\mathbf{x}}{\|\mathbf{a}\|^2} \mathbf{a} = \frac{\mathbf{a}\mathbf{a}'}{\|\mathbf{a}\|^2} \mathbf{x} = \mathbf{A} \mathbf{x}, $$ where $$ \mathbf{A} = \frac{1}{\|\mathbf{a}\|^2} \mathbf{a} \mathbf{a}'. $$
  • Reflection of $\mathbf{x} \in \mathbb{R}^n$ with respect to the line through $\mathbf{0}_n$ and $\mathbf{a}$: $$ \mathbf{z} = \mathbf{x} + 2(\mathbf{y} - \mathbf{x}) = 2 \mathbf{y} - \mathbf{x} = \left( \frac{2}{\|\mathbf{a}\|^2} \mathbf{a} \mathbf{a}' - \mathbf{I} \right) \mathbf{x} = \mathbf{B} \mathbf{x}, $$ with $$ \mathbf{B} = 2 \mathbf{A} - \mathbf{I} = \frac{2}{\|\mathbf{a}\|^2} \mathbf{a} \mathbf{a}' - \mathbf{I}. $$
In [89]:
a = [3, 2]
A = a * a' / norm(a)^2
x = [1, 2]
y = A * x
B = 2A - I
z = B * x;
In [90]:
x_vals = [0 0 0; x[1] y[1] z[1]]
y_vals = [0 0 0; x[2] y[2] z[2]]

plt = plot(x_vals, y_vals, arrow = true, color = :blue,
    legend = :none, xlims = (-4, 4), ylims = (-2, 4),
    annotations = [((x .+ 0.2)..., "x"),
                ((y .+ 0.2)..., "y"),
                ((z .+ 0.2)..., "z")],
    xticks = -3:1:3, yticks = -1:1:3,
    framestyle = :origin,
    aspect_ratio = :equal)
plot!(plt, [0, a[1]], [0, a[2]], arrow = true,
    annotations = [((a .+ 0.2)..., "a")])
Out[90]:

Incidence matrix

  • For a directed graph with $m$ nodes and $n$ arcs (directed edges), the node-arc incidence matrix $\mathbf{B} \in \{-1,0,1\}^{m \times n}$ has entries \begin{eqnarray*} b_{ij} = \begin{cases} -1 & \text{if edge $j$ starts at vertex $i$} \\ 1 & \text{if edge $j$ ends at vertex $i$} \\ 0 & \text{otherwise} \end{cases}. \end{eqnarray*}
In [91]:
# a simple directed graph on GS p16
g = SimpleDiGraph(4)
add_edge!(g, 1, 2)
add_edge!(g, 1, 3)
add_edge!(g, 2, 3)
add_edge!(g, 2, 4)
add_edge!(g, 4, 3)
gplot(g, nodelabel=["x1", "x2", "x3", "x4"], edgelabel=["b1", "b2", "b3", "b4", "b5"])
Out[91]:
In [92]:
# incidence matrix B
B = convert(Matrix{Int64}, incidence_matrix(g))
Out[92]:
4×5 Matrix{Int64}:
 -1  -1   0   0   0
  1   0  -1  -1   0
  0   1   1   0   1
  0   0   0   1  -1
  • Kirchhoff's current law: Let $\mathbf{B} \in \mathbb{R}^{m \times n}$ be an incidence matrix and $\mathbf{x}=(x_1 \, \ldots \, x_n)'$ with $x_j$ the current through arc $j$, $$ (\mathbf{B} \mathbf{x})_i = \sum_{\text{arc $j$ enters node $i$}} x_j - \sum_{\text{arc $j$ leaves node $i$}} x_j = \text{net current arriving at node $i$}. $$
In [93]:
# symbolic calculation
@variables x1 x2 x3 x4 x5
B * [x1, x2, x3, x4, x5]
Out[93]:
\begin{equation} \left[ \begin{array}{c} - x1 - x2 \\ x1 - x3 - x4 \\ x2 + x3 + x5 \\ x4 - x5 \\ \end{array} \right] \end{equation}
  • Kirchhoff's voltage law: Let $\mathbf{B} \in \mathbb{R}^{m \times n}$ be an incidence matrix and $\mathbf{y} = (y_1, \ldots, y_m)'$ with $y_i$ the potential at node $i$, $$ (\mathbf{B}' \mathbf{y})_j = y_k - y_l \quad \text{if edge $j$ goes from node $l$ to node $k$}, $$ which is the negative of the voltage across arc $j$.
In [94]:
@variables y1 y2 y3 y4
B' * [y1, y2, y3, y4]
Out[94]:
\begin{equation} \left[ \begin{array}{c} y2 - y1 \\ y3 - y1 \\ y3 - y2 \\ y4 - y2 \\ y3 - y4 \\ \end{array} \right] \end{equation}

Convolution and filtering

  • Convolution plays important roles in signal processing, image processing, and (convolutional) neural network.

  • The convolution of $\mathbf{a} \in \mathbb{R}^n$ and $\mathbf{b} \in \mathbb{R}^m$ is a vector $\mathbf{c} \in \mathbb{R}^{n+m-1}$ with entries $$ c_k = \sum_{i+j=k+1} a_i b_j. $$ Notation: $\mathbf{c} = \mathbf{a} * \mathbf{b}$.

  • Example: $n=4$, $m=3$, $$ \mathbf{a} = \begin{pmatrix} a_1 \\ a_2 \\ a_3 \\ a_4 \end{pmatrix}, \quad \mathbf{b} = \begin{pmatrix} b_1 \\ b_2 \\ b_3 \end{pmatrix}. $$ Then $\mathbf{c} = \mathbf{a} * \mathbf{b} \in \mathbb{R}^6$ has entries \begin{eqnarray*} c_1 &=& a_1 b_1 \\ c_2 &=& a_1 b_2 + a_2 b_1 \\ c_3 &=& a_1 b_3 + a_2 b_2 + a_3 b_1 \\ c_4 &=& a_2 b_3 + a_3 b_2 + a_4 b_1 \\ c_5 &=& a_3 b_3 + a_4 b_2 \\ c_6 &=& a_4 b_3 \end{eqnarray*}

  • Polynomial interpretation: Let $\mathbf{a}$ and $\mathbf{b}$ be the coefficients in polynomials \begin{eqnarray*} p(x) &=& a_1 + a_2 x + \cdots + a_n x^{n-1} \\ q(x) &=& b_1 + b_2 x + \cdots + b_m x^{m-1}, \end{eqnarray*} then $\mathbf{c} = \mathbf{a} * \mathbf{b}$ gives the coefficients of the product polynomial $$ p(x) q(x) = c_1 + c_2 x + \cdots + c_{n+m-1} x^{m+n-2}. $$
In [95]:
n, m = 4, 3
@variables a[1:n] b[1:m]
Out[95]:
2-element Vector{Symbolics.Arr{Num, 1}}:
 a[1:4]
 b[1:3]
In [96]:
p = Polynomial(a)
Out[96]:
a[1] + a[2]∙x + a[3]∙x² + a[4]∙x³
In [97]:
q = Polynomial(b)
Out[97]:
b[1] + b[2]∙x + b[3]∙x²
In [98]:
p * q
Out[98]:
a[1]*b[1] + (a[1]*b[2] + a[2]*b[1])∙x + (a[3]*b[1] + a[1]*b[3] + a[2]*b[2])∙x² + (a[4]*b[1] + a[3]*b[2] + a[2]*b[3])∙x³ + (a[4]*b[2] + a[3]*b[3])∙x⁴ + a[4]*b[3]∙x⁵
In [99]:
coeffs(p * q)
Out[99]:
\begin{equation} \left[ \begin{array}{c} a{_1} b{_1} \\ a{_1} b{_2} + a{_2} b{_1} \\ a{_3} b{_1} + a{_1} b{_3} + a{_2} b{_2} \\ a{_4} b{_1} + a{_3} b{_2} + a{_2} b{_3} \\ a{_4} b{_2} + a{_3} b{_3} \\ a{_4} b{_3} \\ \end{array} \right] \end{equation}
In [100]:
a = [1, 0, -1, 2]
b = [2, 1, -1]
c = DSP.conv(a, b)
Out[100]:
6-element Vector{Int64}:
  2
  1
 -3
  3
  3
 -2
  • Probabilistic interpretation. In probability and statistics, convolution often appears when computing the distribution of the sum of two independent random variables. Let $X$ be a discrete random variable taking value $i \in \{0,1,\ldots,n-1\}$ with probability $a_{i+1}$ and $Y$ be another discrete random variable taking values $j \in \{0,1,\ldots,m-1\}$ with probability $b_{j+1}$. Assume $X$ is independent of $Y$, then the distribution of $Z = X+Y$ is $$ c_k = \mathbb{P}(Z = k - 1) = \sum_{i+j=k-1} \mathbb{P}(X = i) \mathbb{P}(Y = j) = \sum_{i+j=k-1} a_{i+1} b_{j+1} = \sum_{i'+j'=k+1} a_{i'} b_{j'} $$ for $k = 1,\ldots,n+m-1$. In the probability setting, the polynomial is called the probability generating function of a discrete random variable.
In [101]:
n, m, p = 10, 3, 0.5
# pmf of X ∼ Bin(n-1, p)
a = Distributions.pdf(Binomial(n-1, p), 0:n-1)
Out[101]:
10-element Vector{Float64}:
 0.001953125
 0.017578124999999997
 0.07031250000000025
 0.16406250000000056
 0.24609375000000044
 0.24609375000000044
 0.16406250000000056
 0.07031250000000025
 0.017578124999999997
 0.001953125
In [102]:
# pmf of Y ∼ Unif(0, m-1)
b = Distributions.pdf(DiscreteUniform(0, m-1), 0:m-1)
Out[102]:
3-element Vector{Float64}:
 0.3333333333333333
 0.3333333333333333
 0.3333333333333333
In [103]:
# compute pmf of Z = X + Y by convolution: c = a * b
c = DSP.conv(a, b)
Out[103]:
12-element Vector{Float64}:
 0.0006510416666666713
 0.006510416666666685
 0.02994791666666674
 0.08398437500000026
 0.1601562500000004
 0.21875000000000047
 0.2187500000000005
 0.1601562500000004
 0.08398437500000026
 0.029947916666666727
 0.006510416666666671
 0.0006510416666666713
In [104]:
plta = bar(0:(n - 1), a, label="X∼Bin(9, 0.5)")
pltb = bar(0:(m - 1), b, label="Y∼Unif(0, 2)")
pltc = bar(0:(n + m - 1), c, label="Z=X+Y")

plot(plta, pltb, pltc, layout=(1,3), size=(1000, 300), ylim=[0, 0.4])
Out[104]:
  • Properties of convolution

    • symmetric: $\mathbf{a} * \mathbf{b} = \mathbf{b} * \mathbf{a}$.
    • associative: $(\mathbf{a} * \mathbf{b}) * \mathbf{c} = \mathbf{a} * \mathbf{b} * \mathbf{c} = \mathbf{a} * (\mathbf{b} * \mathbf{c})$.
    • If $\mathbf{a} * \mathbf{b} = \mathbf{0}$, then $\mathbf{a} = \mathbf{0}$ or $\mathbf{b} = \mathbf{0}$.

      These properties follow either from the polynomial interpretation or probabilistic interpretation.

  • $\mathbf{c} = \mathbf{a} * \mathbf{b}$ is a linear function of $\mathbf{b}$ if we fix $\mathbf{a}$.

  • $\mathbf{c} = \mathbf{a} * \mathbf{b}$ is a linear function of $\mathbf{a}$ if we fix $\mathbf{b}$.

  • For $n=4$ and $m=3$,

In [105]:
n, m = 4, 3
@variables a[1:n] b[1:m]
Out[105]:
2-element Vector{Symbolics.Arr{Num, 1}}:
 a[1:4]
 b[1:3]
In [106]:
# Toeplitz matrix corresponding to the vector a
Ta = diagm(6, 3,
     0 => [a[1], a[1], a[1]],
    -1 => [a[2], a[2], a[2]],
    -2 => [a[3], a[3], a[3]],
    -3 => [a[4], a[4], a[4]]
)
Out[106]:
\begin{equation} \left[ \begin{array}{ccc} a{_1} & 0 & 0 \\ a{_2} & a{_1} & 0 \\ a{_3} & a{_2} & a{_1} \\ a{_4} & a{_3} & a{_2} \\ 0 & a{_4} & a{_3} \\ 0 & 0 & a{_4} \\ \end{array} \right] \end{equation}
In [107]:
c = Ta * b
Out[107]:
\begin{equation} \left[ \begin{array}{ccc} a{_1} & 0 & 0 \\ a{_2} & a{_1} & 0 \\ a{_3} & a{_2} & a{_1} \\ a{_4} & a{_3} & a{_2} \\ 0 & a{_4} & a{_3} \\ 0 & 0 & a{_4} \\ \end{array} \right] b \end{equation}
In [108]:
# c = Ta * b
Symbolics.scalarize(c)
Out[108]:
\begin{equation} \left[ \begin{array}{c} a{_1} b{_1} \\ a{_1} b{_2} + a{_2} b{_1} \\ a{_3} b{_1} + a{_1} b{_3} + a{_2} b{_2} \\ a{_2} b{_3} + a{_3} b{_2} + a{_4} b{_1} \\ a{_3} b{_3} + a{_4} b{_2} \\ a{_4} b{_3} \\ \end{array} \right] \end{equation}
In [109]:
# Toeplitz matrix corresponding to the vector b
Tb = diagm(6, 4,
     0 => [b[1], b[1], b[1], b[1]],
    -1 => [b[2], b[2], b[2], b[2]],
    -2 => [b[3], b[3], b[3], b[3]]
)
Out[109]:
\begin{equation} \left[ \begin{array}{cccc} b{_1} & 0 & 0 & 0 \\ b{_2} & b{_1} & 0 & 0 \\ b{_3} & b{_2} & b{_1} & 0 \\ 0 & b{_3} & b{_2} & b{_1} \\ 0 & 0 & b{_3} & b{_2} \\ 0 & 0 & 0 & b{_3} \\ \end{array} \right] \end{equation}
In [110]:
# c = Tb * a
Symbolics.scalarize(Tb * a)
Out[110]:
\begin{equation} \left[ \begin{array}{c} a{_1} b{_1} \\ a{_1} b{_2} + a{_2} b{_1} \\ a{_3} b{_1} + a{_1} b{_3} + a{_2} b{_2} \\ a{_4} b{_1} + a{_3} b{_2} + a{_2} b{_3} \\ a{_4} b{_2} + a{_3} b{_3} \\ a{_4} b{_3} \\ \end{array} \right] \end{equation}
  • The convolution matrices $\mathbf{T}_a$ and $\mathbf{T}_b$ are examples of Toeplitz matrices.

  • When one vector is much longer than the other, say $m \ll n$, we can view the longer vector $\mathbf{a} \in \mathbb{R}^n$ as a signal and take the inner product of the reversed filter (also called kernel) $(b_m \, \cdots \, b_1)' \in \mathbb{R}^m$ with a sliding window of $\mathbf{a}$.

  • If we apply the filter (or kernel) in its original order to the sliding window of the signal, the operation is called correlation instead of convolution. For a symmetric filter (or kernel), correlation is the same as convolution.

  • (2D) Filtering is extensively used in image processing.
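Before turning to 2D images, here is a 1D sketch of the correlation/convolution relation using DSP.conv; the names sig, ker, and ker2 are introduced only for this illustration. Correlating a signal with a filter amounts to convolving it with the reversed filter, so the two operations coincide exactly when the filter is symmetric.

sig = randn(10)                   # a 1D signal
ker = [0.25, 0.5, 0.25]           # a symmetric filter: reverse(ker) == ker
@show DSP.conv(sig, ker) ≈ DSP.conv(sig, reverse(ker))    # true for a symmetric filter
ker2 = [1.0, 2.0, 3.0]            # an asymmetric filter
@show DSP.conv(sig, ker2) ≈ DSP.conv(sig, reverse(ker2))  # false in general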

In [111]:
img = testimage("mandrill")
Out[111]:
In [112]:
# correlation with Gaussian kernel
imgg = imfilter(img, Kernel.gaussian(3))
Out[112]:
In [113]:
?Kernel.gaussian()
Out[113]:
gaussian((σ1, σ2, ...), [(l1, l2, ...)]) -> g
gaussian(σ)                  -> g

Construct a multidimensional gaussian filter, with standard deviation σd along dimension d. Optionally provide the kernel length l, which must be a tuple of the same length.

If σ is supplied as a single number, a symmetric 2d kernel is constructed.

See also: KernelFactors.gaussian.

In [114]:
# Gaussian kernel with σ = 3
Kernel.gaussian(3)
Out[114]:
13×13 OffsetArray(::Matrix{Float64}, -6:6, -6:6) with eltype Float64 with indices -6:6×-6:6:
 0.000343881  0.000633593  0.00104462  …  0.000633593  0.000343881
 0.000633593  0.00116738   0.00192468     0.00116738   0.000633593
 0.00104462   0.00192468   0.00317327     0.00192468   0.00104462
 0.00154117   0.00283956   0.00468165     0.00283956   0.00154117
 0.00203464   0.00374877   0.00618068     0.00374877   0.00203464
 0.00240364   0.00442865   0.00730161  …  0.00442865   0.00240364
 0.00254095   0.00468165   0.00771874     0.00468165   0.00254095
 0.00240364   0.00442865   0.00730161     0.00442865   0.00240364
 0.00203464   0.00374877   0.00618068     0.00374877   0.00203464
 0.00154117   0.00283956   0.00468165     0.00283956   0.00154117
 0.00104462   0.00192468   0.00317327  …  0.00192468   0.00104462
 0.000633593  0.00116738   0.00192468     0.00116738   0.000633593
 0.000343881  0.000633593  0.00104462     0.000633593  0.000343881
In [115]:
# convolution with Gaussian kernel is same as correlation, since Gaussian kernel is symmetric
imgg = imfilter(img, reflect(Kernel.gaussian(3)))
Out[115]:
In [116]:
# correlation with Laplacian kernel
imgl = imfilter(img, Kernel.Laplacian())
Out[116]:
In [117]:
# Laplacian kernel
Kernel.Laplacian()
Out[117]:
ImageFiltering.Kernel.Laplacian{2}((true, true), CartesianIndex{2}[CartesianIndex(1, 0), CartesianIndex(0, 1)])
In [118]:
?Kernel.Laplacian()
Out[118]:
Laplacian((true,true,false,...))
Laplacian(dims, N)
Laplacian()

Laplacian kernel in N dimensions, taking derivatives along the directions marked as true in the supplied tuple. Alternatively, one can pass dims, a listing of the dimensions for differentiation. (However, this variant is not inferrable.)

Laplacian() is the 2d laplacian, equivalent to Laplacian((true,true)).

The kernel is represented as an opaque type, but you can use convert(AbstractArray, L) to convert it into array format.

In [119]:
convert(AbstractArray, Kernel.Laplacian())
Out[119]:
3×3 OffsetArray(::Matrix{Int64}, -1:1, -1:1) with eltype Int64 with indices -1:1×-1:1:
 0   1  0
 1  -4  1
 0   1  0

Affine functions

  • A function $f: \mathbb{R}^n \mapsto \mathbb{R}^m$ is affine if $$ f(\alpha \mathbf{x} + \beta \mathbf{y}) = \alpha f(\mathbf{x}) + \beta f(\mathbf{y}) $$ for all vectors $\mathbf{x}, \mathbf{y} \in \mathbb{R}^n$ and scalars $\alpha, \beta$ with $\alpha + \beta = 1$.

  • Any linear function is affine. Why?

  • Definition of affine function implies that $$ f(\alpha_1 \mathbf{x}_1 + \cdots + \alpha_p \mathbf{x}_p) = \alpha_1 f(\mathbf{x}_1) + \cdots + \alpha_p f(\mathbf{x}_p) $$ for all vectors $\mathbf{x}_1, \ldots, \mathbf{x}_p$ and scalars $\alpha_1, \ldots, \alpha_p$ such that $\alpha_1 + \cdots + \alpha_p = 1$.

  • Any affine function $f: \mathbb{R}^n \mapsto \mathbb{R}^m$ is of the form $\mathbf{A} \mathbf{x} + \mathbf{b}$ for some matrix $\mathbf{A} \in \mathbb{R}^{m \times n}$ and vector $\mathbf{b} \in \mathbb{R}^m$ and vice versa.

    1. For fixed $\mathbf{A} \in \mathbb{R}^{m \times n}$ and $\mathbf{b} \in \mathbb{R}^m$, define function $f: \mathbb{R}^n \mapsto \mathbb{R}^m$ by a matrix-vector product plus a constant: $$ f(\mathbf{x}) = \mathbf{A} \mathbf{x} + \mathbf{b}. $$ Then $f$ is an affine function: if $\alpha + \beta = 1$, then $$ f(\alpha \mathbf{x} + \beta \mathbf{y}) = \mathbf{A}(\alpha \mathbf{x} + \beta \mathbf{y}) + \mathbf{b} = \alpha \mathbf{A} \mathbf{x} + \beta \mathbf{A} \mathbf{y} + \alpha \mathbf{b} + \beta \mathbf{b} = \alpha(\mathbf{A} \mathbf{x} + \mathbf{b}) + \beta (\mathbf{A} \mathbf{y} + \mathbf{b}) = \alpha f(\mathbf{x}) + \beta f(\mathbf{y}). $$

    2. Any affine function can be written as $f(\mathbf{x}) = \mathbf{A} \mathbf{x} + \mathbf{b}$ with $$ \mathbf{A} = \begin{pmatrix} f(\mathbf{e}_1) - f(\mathbf{0}) \quad f(\mathbf{e}_2) - f(\mathbf{0}) \quad \ldots \quad f(\mathbf{e}_n) - f(\mathbf{0}) \end{pmatrix} $$ and $\mathbf{b} = f(\mathbf{0})$. Hint: Write $\mathbf{x} = x_1 \mathbf{e}_1 + \cdots + x_n \mathbf{e}_n + (1 - x_1 - \cdots - x_n) \mathbf{0}$.
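A minimal numerical sketch of point 2: evaluating an affine function at 0 and at the unit vectors recovers b and A (the names A0, b0, f, E below are introduced only for this illustration):

A0, b0 = randn(3, 4), randn(3)
f(x) = A0 * x + b0                      # an affine function f: R^4 -> R^3
E = Matrix{Float64}(I, 4, 4)            # the columns of E are the unit vectors
brec = f(zeros(4))                      # b = f(0)
Arec = hcat([f(E[:, j]) - brec for j in 1:4]...)   # columns f(e_j) - f(0)
@show Arec ≈ A0 && brec ≈ b0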

Affine approximation of a function (important)

  • The first-order Taylor approximation of a differentiable function $f: \mathbb{R} \mapsto \mathbb{R}$ at a point $z \in \mathbb{R}$ is $$ \widehat f(x) = f(z) + f'(z) (x - z), $$ where $f'(z)$ is the derivative of $f$ at $z$.

  • Example: $f(x) = e^{2x} - x$. The derivative is $f'(x) = 2e^{2x} - 1$. Then the first-order Taylor approximation of $f$ at point $z$ is $$ f(x) \approx e^{2z} - z + (2e^{2z} - 1) (x - z). $$

In [120]:
x = Vector(-1:0.1:1)
f1(x) = exp(2x) - x
# draw the function
plt = plot(x, f1.(x), label = "f(x)")
# draw the affine approximation at z=0
z = 0.0
@show ForwardDiff.derivative(f1, z) # let computer calculate derivative!
f̂1 = f1(z) .+ ForwardDiff.derivative(f1, z) .* (x .- z)
plot!(plt, x, f̂1, label = "f(0)+f'(0)*(x-0)", legend = :topleft)
ForwardDiff.derivative(f1, z) = 1.0
Out[120]:
  • The first-order Taylor approximation of a differentiable function $f: \mathbb{R}^n \mapsto \mathbb{R}$ at a point $\mathbf{z} \in \mathbb{R}^n$ is $$ \widehat f(\mathbf{x}) = f(\mathbf{z}) + [\nabla f(\mathbf{z})]' (\mathbf{x} - \mathbf{z}), $$ where $$ \nabla f(\mathbf{z}) = \begin{pmatrix} \frac{\partial f}{\partial x_1}(\mathbf{z}) \\ \vdots \\ \frac{\partial f}{\partial x_n}(\mathbf{z}) \end{pmatrix} $$ is the gradient of $f$ evaluated at point $\mathbf{z}$.

  • An example with $n=2$: The gradient of $$ f(\mathbf{x}) = f(x_1, x_2) = e^{2x_1 + x_2} - x_1 $$ is $$ \nabla f(\mathbf{x}) = \begin{pmatrix} 2 e^{2x_1 + x_2} - 1\\ e^{2x_1 + x_2} \end{pmatrix}. $$ The first-order Taylor approximation of $f$ at a point $\mathbf{z}$ is $$ f(\mathbf{x}) \approx e^{2z_1 + z_2} - z_1 + \begin{pmatrix} 2 e^{2z_1 + z_2} - 1 \,\, e^{2z_1 + z_2} \end{pmatrix} \begin{pmatrix} x_1 - z_1 \\ x_2 - z_2 \end{pmatrix}. $$

In [121]:
# a non-linear function f
f2(x) = exp(2x[1] + x[2]) - x[1]
# define grid
n = 20
grid = range(-1, 1, length = n)
# draw the first-order approximation at (0,0)
z = [0.0, 0.0]
@show ForwardDiff.gradient(f2, z) # let computer calculate gradient!
f2_approx(x) = f2(z) + ForwardDiff.gradient(f2, z)' * (x - z)
f̂2x = [f2_approx([grid[row], grid[col]]) for row in 1:n, col in 1:n]
plt = wireframe(grid, grid, f̂2x, label="fhat")
# draw the function 
f2x = [f2([grid[row], grid[col]]) for row in 1:n, col in 1:n]
wireframe!(plt, grid, grid, f2x, label="f")
ForwardDiff.gradient(f2, z) = [1.0, 1.0]
Out[121]:
  • Affine approximation: The first-order Taylor approximation of a differentiable function $f: \mathbb{R}^n \mapsto \mathbb{R}^m$ at a point $\mathbf{z} \in \mathbb{R}^n$ is $$ \widehat f(\mathbf{x}) = f(\mathbf{z}) + [\operatorname{D} f(\mathbf{z})] (\mathbf{x} - \mathbf{z}), $$ where $\operatorname{D} f(\mathbf{z}) \in \mathbb{R}^{m \times n}$ is the derivative matrix or Jacobian matrix or differential evaluated at $\mathbf{z}$ $$ \operatorname{D} f(\mathbf{z}) = \begin{pmatrix} \frac{\partial f_1}{\partial x_1} (\mathbf{z}) & \frac{\partial f_1}{\partial x_2} (\mathbf{z}) & \cdots & \frac{\partial f_1}{\partial x_n} (\mathbf{z}) \\ \frac{\partial f_2}{\partial x_1} (\mathbf{z}) & \frac{\partial f_2}{\partial x_2} (\mathbf{z}) & \cdots & \frac{\partial f_2}{\partial x_n} (\mathbf{z}) \\ \vdots & \vdots & & \vdots \\ \frac{\partial f_m}{\partial x_1} (\mathbf{z}) & \frac{\partial f_m}{\partial x_2} (\mathbf{z}) & \cdots & \frac{\partial f_m}{\partial x_n} (\mathbf{z}) \end{pmatrix} = \begin{pmatrix} \nabla f_1(\mathbf{z})' \\ \nabla f_2(\mathbf{z})' \\ \vdots \\ \nabla f_m(\mathbf{z})' \end{pmatrix}. $$

  • An example with $n=m=2$: $$ f(\mathbf{x}) = \begin{pmatrix} f_1(\mathbf{x}) \\ f_2(\mathbf{x}) \end{pmatrix} = \begin{pmatrix} e^{2x_1 + x_2} - x_1 \\ x_1^2 - x_2 \end{pmatrix}. $$ The derivative matrix is $$ \operatorname{D} f(\mathbf{x}) = \begin{pmatrix} 2e^{2x_1 + x_2} - 1 & e^{2x_1 + x_2} \\ 2x_1 & -1 \end{pmatrix}. $$ The first-order approximation of $f$ around $\mathbf{z} = \mathbf{0}$ is $$ f(\mathbf{x}) \approx \begin{pmatrix} 1 \\ 0 \end{pmatrix} + \begin{pmatrix} 1 & 1 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} x_1 - 0 \\ x_2 - 0 \end{pmatrix}. $$

In [122]:
# a nonlinear function
f3(x) = [exp(2x[1] + x[2]) - x[1], x[1]^2 - x[2]]
z = [0, 0]
ForwardDiff.jacobian(f3, z) # let computer calculate Jacobian!
Out[122]:
2×2 Matrix{Float64}:
 1.0   1.0
 0.0  -1.0
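
With the Jacobian in hand, we can check how close the affine approximation is to $f$ near $\mathbf{z} = \mathbf{0}$. A minimal sketch (it assumes f3 and z from the cell above are still in scope; the names f3_approx and xtest are illustrative):

# first-order (affine) approximation of f3 at z, evaluated at a nearby point
J = ForwardDiff.jacobian(f3, z)
f3_approx(x) = f3(z) + J * (x - z)
xtest = [0.1, -0.05]
f3(xtest), f3_approx(xtest)   # the two vectors should be close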

Computational complexity (important)

  • Matrix-vector product: $\mathbf{y} = \mathbf{A} \mathbf{x}$ where $\mathbf{A} \in \mathbb{R}^{m \times n}$ and $\mathbf{x} \in \mathbb{R}^n$. $2mn$ flops.

  • Special cases:

    • $\mathbf{A} \in \mathbb{R}^{n \times n}$ is diagonal, $\mathbf{A} \mathbf{x}$ takes $n$ flops.
    • $\mathbf{A} \in \mathbb{R}^{n \times n}$ is lower triangular, $\mathbf{A} \mathbf{x}$ takes $n^2$ flops.
    • $\mathbf{A} \in \mathbb{R}^{m \times n}$ is sparse, # flops $\ll 2mn$ (a sparse benchmark sketch follows the triangular example below).
In [123]:
Random.seed!(216)
n = 2000
A = Diagonal(randn(n))
Out[123]:
2000×2000 Diagonal{Float64, Vector{Float64}}:
 -1.28506    ⋅         ⋅          ⋅      …    ⋅        ⋅         ⋅ 
   ⋅       -1.44549    ⋅          ⋅           ⋅        ⋅         ⋅ 
   ⋅         ⋅       -0.0883723   ⋅           ⋅        ⋅         ⋅ 
   ⋅         ⋅         ⋅         1.9427       ⋅        ⋅         ⋅ 
   ⋅         ⋅         ⋅          ⋅           ⋅        ⋅         ⋅ 
   ⋅         ⋅         ⋅          ⋅      …    ⋅        ⋅         ⋅ 
   ⋅         ⋅         ⋅          ⋅           ⋅        ⋅         ⋅ 
   ⋅         ⋅         ⋅          ⋅           ⋅        ⋅         ⋅ 
   ⋅         ⋅         ⋅          ⋅           ⋅        ⋅         ⋅ 
   ⋅         ⋅         ⋅          ⋅           ⋅        ⋅         ⋅ 
   ⋅         ⋅         ⋅          ⋅      …    ⋅        ⋅         ⋅ 
   ⋅         ⋅         ⋅          ⋅           ⋅        ⋅         ⋅ 
   ⋅         ⋅         ⋅          ⋅           ⋅        ⋅         ⋅ 
  ⋮                                      ⋱                      
   ⋅         ⋅         ⋅          ⋅           ⋅        ⋅         ⋅ 
   ⋅         ⋅         ⋅          ⋅           ⋅        ⋅         ⋅ 
   ⋅         ⋅         ⋅          ⋅      …    ⋅        ⋅         ⋅ 
   ⋅         ⋅         ⋅          ⋅           ⋅        ⋅         ⋅ 
   ⋅         ⋅         ⋅          ⋅           ⋅        ⋅         ⋅ 
   ⋅         ⋅         ⋅          ⋅           ⋅        ⋅         ⋅ 
   ⋅         ⋅         ⋅          ⋅           ⋅        ⋅         ⋅ 
   ⋅         ⋅         ⋅          ⋅      …    ⋅        ⋅         ⋅ 
   ⋅         ⋅         ⋅          ⋅           ⋅        ⋅         ⋅ 
   ⋅         ⋅         ⋅          ⋅         -1.6531    ⋅         ⋅ 
   ⋅         ⋅         ⋅          ⋅           ⋅      -0.121904   ⋅ 
   ⋅         ⋅         ⋅          ⋅           ⋅        ⋅        0.75365
In [124]:
Ads = convert(Matrix{Float64}, A)
Out[124]:
2000×2000 Matrix{Float64}:
 -1.28506   0.0       0.0        0.0     …   0.0      0.0       0.0
  0.0      -1.44549   0.0        0.0         0.0      0.0       0.0
  0.0       0.0      -0.0883723  0.0         0.0      0.0       0.0
  0.0       0.0       0.0        1.9427      0.0      0.0       0.0
  0.0       0.0       0.0        0.0         0.0      0.0       0.0
  0.0       0.0       0.0        0.0     …   0.0      0.0       0.0
  0.0       0.0       0.0        0.0         0.0      0.0       0.0
  0.0       0.0       0.0        0.0         0.0      0.0       0.0
  0.0       0.0       0.0        0.0         0.0      0.0       0.0
  0.0       0.0       0.0        0.0         0.0      0.0       0.0
  0.0       0.0       0.0        0.0     …   0.0      0.0       0.0
  0.0       0.0       0.0        0.0         0.0      0.0       0.0
  0.0       0.0       0.0        0.0         0.0      0.0       0.0
  ⋮                                      ⋱                      
  0.0       0.0       0.0        0.0         0.0      0.0       0.0
  0.0       0.0       0.0        0.0         0.0      0.0       0.0
  0.0       0.0       0.0        0.0     …   0.0      0.0       0.0
  0.0       0.0       0.0        0.0         0.0      0.0       0.0
  0.0       0.0       0.0        0.0         0.0      0.0       0.0
  0.0       0.0       0.0        0.0         0.0      0.0       0.0
  0.0       0.0       0.0        0.0         0.0      0.0       0.0
  0.0       0.0       0.0        0.0     …   0.0      0.0       0.0
  0.0       0.0       0.0        0.0         0.0      0.0       0.0
  0.0       0.0       0.0        0.0        -1.6531   0.0       0.0
  0.0       0.0       0.0        0.0         0.0     -0.121904  0.0
  0.0       0.0       0.0        0.0         0.0      0.0       0.75365
In [125]:
x = randn(n)
Out[125]:
2000-element Vector{Float64}:
 -0.04763395435559483
 -0.9468716257835643
 -0.07175083628064
  0.050144499737259166
 -0.1344431129697485
  0.16576841844173415
  0.4820781637270806
 -0.12900182351736333
 -0.7357102994022214
 -0.7621899549335216
 -0.8552525765389232
 -1.3395951582650312
  0.14612978133070148
  ⋮
 -0.16460974501636721
  0.00678716664869133
 -0.08881743551890671
 -1.3472687643510293
  0.9264629040855277
 -2.208409819315764
 -0.2903087732405249
 -0.6673121392352216
  0.20802317137461687
  0.5022260544796845
 -0.8364772500854698
  1.3781219972350554
In [126]:
# full matrix times vector
@benchmark $Ads * $x
Out[126]:
BenchmarkTools.Trial: 2917 samples with 1 evaluation.
 Range (min … max):  1.343 ms … 2.841 ms   GC (min … max): 0.00% … 0.00%
 Time  (median):     1.671 ms                GC (median):    0.00%
 Time  (mean ± σ):   1.686 ms ± 137.284 μs   GC (mean ± σ):  0.00% ± 0.00%

                   ▂▄█▇▅▆▇▄▁▂                                
  ▁▁▂▂▂▂▂▂▂▂▃▂▂▃▄▅▇██████████▇▆▅▄▄▃▃▃▂▂▂▂▂▂▁▂▂▂▂▂▂▁▁▁▁▁▁▁▁▂ ▃
  1.34 ms         Histogram: frequency by time        2.17 ms <

 Memory estimate: 15.75 KiB, allocs estimate: 1.
In [127]:
# diagonal matrix times vector
@benchmark $A * $x
Out[127]:
BenchmarkTools.Trial: 10000 samples with 10 evaluations.
 Range (min … max):  1.328 μs … 2.520 ms   GC (min … max):  0.00% … 99.77%
 Time  (median):     6.738 μs               GC (median):     0.00%
 Time  (mean ± σ):   8.101 μs ± 80.895 μs   GC (mean ± σ):  34.51% ±  3.45%

         ▇█                                                   
  ▄█▂▂▁▁▂██▇▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▂▃▅▅██▇▇▆▅▅▄▃▃▂▂▁▂▁▁▁▁ ▂
  1.33 μs        Histogram: frequency by time        9.68 μs <

 Memory estimate: 15.75 KiB, allocs estimate: 1.
In [128]:
# benchmark
Random.seed!(216)
n = 2000
A = LowerTriangular(randn(n, n))
Out[128]:
2000×2000 LowerTriangular{Float64, Matrix{Float64}}:
 -1.28506      ⋅           ⋅         …    ⋅          ⋅          ⋅ 
 -1.44549     0.246225     ⋅              ⋅          ⋅          ⋅ 
 -0.0883723  -0.856756   -0.18285         ⋅          ⋅          ⋅ 
  1.9427     -0.493101   -1.12408         ⋅          ⋅          ⋅ 
 -0.244914   -1.58836     0.326169        ⋅          ⋅          ⋅ 
 -0.326449   -0.368125   -1.06872    …    ⋅          ⋅          ⋅ 
  1.8623     -0.184933    0.926251        ⋅          ⋅          ⋅ 
  0.0582264   0.684755   -0.293983        ⋅          ⋅          ⋅ 
 -0.135717    1.22274    -0.484394        ⋅          ⋅          ⋅ 
 -0.8972      0.109929   -0.893859        ⋅          ⋅          ⋅ 
 -2.23444    -0.808692    1.13897    …    ⋅          ⋅          ⋅ 
  0.915744   -0.0820053   0.292675        ⋅          ⋅          ⋅ 
  0.854177   -0.688189   -0.13326         ⋅          ⋅          ⋅ 
  ⋮                                  ⋱                        
 -0.91281     0.769891   -0.365947        ⋅          ⋅          ⋅ 
 -1.99237    -2.06281    -0.633601        ⋅          ⋅          ⋅ 
 -1.803      -1.0444      0.0514036  …    ⋅          ⋅          ⋅ 
 -0.949468   -1.56419    -0.151106        ⋅          ⋅          ⋅ 
  1.24308     0.397714   -0.640662        ⋅          ⋅          ⋅ 
  0.850945   -1.17207    -0.914287        ⋅          ⋅          ⋅ 
 -0.246916   -1.24477    -0.081315        ⋅          ⋅          ⋅ 
 -0.491537    1.46686    -2.0669     …    ⋅          ⋅          ⋅ 
 -0.68257    -1.05482     1.05954         ⋅          ⋅          ⋅ 
 -1.6531      0.128414    0.191718      -0.114644    ⋅          ⋅ 
 -0.121904    1.89372    -0.811959      -0.573759   0.499562    ⋅ 
  0.75365     2.0271     -0.181198       1.2565    -0.724389  -0.739664
In [129]:
Ads = convert(Matrix{Float64}, A)
Out[129]:
2000×2000 Matrix{Float64}:
 -1.28506     0.0         0.0        …   0.0        0.0        0.0
 -1.44549     0.246225    0.0            0.0        0.0        0.0
 -0.0883723  -0.856756   -0.18285        0.0        0.0        0.0
  1.9427     -0.493101   -1.12408        0.0        0.0        0.0
 -0.244914   -1.58836     0.326169       0.0        0.0        0.0
 -0.326449   -0.368125   -1.06872    …   0.0        0.0        0.0
  1.8623     -0.184933    0.926251       0.0        0.0        0.0
  0.0582264   0.684755   -0.293983       0.0        0.0        0.0
 -0.135717    1.22274    -0.484394       0.0        0.0        0.0
 -0.8972      0.109929   -0.893859       0.0        0.0        0.0
 -2.23444    -0.808692    1.13897    …   0.0        0.0        0.0
  0.915744   -0.0820053   0.292675       0.0        0.0        0.0
  0.854177   -0.688189   -0.13326        0.0        0.0        0.0
  ⋮                                  ⋱                        
 -0.91281     0.769891   -0.365947       0.0        0.0        0.0
 -1.99237    -2.06281    -0.633601       0.0        0.0        0.0
 -1.803      -1.0444      0.0514036  …   0.0        0.0        0.0
 -0.949468   -1.56419    -0.151106       0.0        0.0        0.0
  1.24308     0.397714   -0.640662       0.0        0.0        0.0
  0.850945   -1.17207    -0.914287       0.0        0.0        0.0
 -0.246916   -1.24477    -0.081315       0.0        0.0        0.0
 -0.491537    1.46686    -2.0669     …   0.0        0.0        0.0
 -0.68257    -1.05482     1.05954        0.0        0.0        0.0
 -1.6531      0.128414    0.191718      -0.114644   0.0        0.0
 -0.121904    1.89372    -0.811959      -0.573759   0.499562   0.0
  0.75365     2.0271     -0.181198       1.2565    -0.724389  -0.739664
In [130]:
x = randn(n)
# full matrix times vector
@benchmark $Ads * $x
Out[130]:
BenchmarkTools.Trial: 3055 samples with 1 evaluation.
 Range (min … max):  1.253 ms … 2.593 ms   GC (min … max): 0.00% … 0.00%
 Time  (median):     1.553 ms                GC (median):    0.00%
 Time  (mean ± σ):   1.608 ms ± 180.451 μs   GC (mean ± σ):  0.00% ± 0.00%

             ▄▅▄▄▁█▃▂                                          
  ▁▁▁▁▁▂▂▃▄▄▄█████████▇▆▆▄▆▅▆▅▆▅▅▅▅▅▅▄▄▅▄▄▄▃▃▃▂▃▃▂▂▂▂▂▂▁▂▁▁ ▃
  1.25 ms         Histogram: frequency by time         2.1 ms <

 Memory estimate: 15.75 KiB, allocs estimate: 1.
In [131]:
# lower triangular matrix times vector
@benchmark $A * $x
Out[131]:
BenchmarkTools.Trial: 6437 samples with 1 evaluation.
 Range (min … max):  536.534 μs … 1.353 ms   GC (min … max): 0.00% … 0.00%
 Time  (median):     749.371 μs               GC (median):    0.00%
 Time  (mean ± σ):   758.473 μs ± 74.449 μs   GC (mean ± σ):  0.00% ± 0.00%

                     ▄▅▂▄▄▅█▇▅▄▃▁                             
  ▂▁▁▁▂▁▁▂▂▂▂▂▂▂▂▂▃▇██████████████▆▆▆▅▅▅▄▃▃▃▃▃▂▂▂▂▂▂▂▂▂▂▁▂▁▁ ▃
  537 μs          Histogram: frequency by time         1.01 ms <

 Memory estimate: 15.75 KiB, allocs estimate: 1.
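
The sparse special case mentioned above can be benchmarked the same way. A minimal sketch (the 1% density and the names Asp and Adn are chosen for illustration):

# sparse matrix times vector: flops scale with the number of nonzeros, not 2mn
Random.seed!(216)
n   = 2000
Asp = sprandn(n, n, 0.01)   # about 1% nonzero entries (density is an assumption)
Adn = Matrix(Asp)           # the same matrix stored densely
x   = randn(n)
# dense storage: about 2n^2 flops
@benchmark $Adn * $x
# sparse storage: about 2 * nnz(Asp) flops
@benchmark $Asp * $x
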
  • Matrix-matrix product: $\mathbf{C} = \mathbf{A} \mathbf{B}$, where $\mathbf{A} \in \mathbb{R}^{m \times n}$ and $\mathbf{B} \in \mathbb{R}^{n \times p}$. $2mnp$ flops.
  • Exercise: Evaluate $\mathbf{y} = \mathbf{A} \mathbf{B} \mathbf{x}$, where $\mathbf{A}, \mathbf{B} \in \mathbb{R}^{n \times n}$ and $\mathbf{x} \in \mathbb{R}^n$, in two ways:
    • $\mathbf{y} = (\mathbf{A} \mathbf{B}) \mathbf{x}$. $2n^3 + 2n^2 \approx 2n^3$ flops.
    • $\mathbf{y} = \mathbf{A} (\mathbf{B} \mathbf{x})$. $4n^2$ flops.
      Which method is faster?
In [132]:
Random.seed!(216)
n = 2000
A = randn(n, n)
B = randn(n, n)
x = randn(n);
In [133]:
@benchmark $A * $B * $x
Out[133]:
BenchmarkTools.Trial: 42 samples with 1 evaluation.
 Range (min … max):  100.894 ms … 192.358 ms   GC (min … max): 0.00% … 0.00%
 Time  (median):     109.524 ms                GC (median):    0.00%
 Time  (mean ± σ):   120.481 ms ±  22.820 ms   GC (mean ± σ):  2.30% ± 5.02%

    █▆▆▁                                                        
  ▇▄████▁▄▇▁▄▄▁▁▄▄▁▄▁▁▄▁▁▁▁▁▁▁▄▁▄▁▄▁▁▄▁▄▄▁▁▁▁▁▁▁▁▁▁▁▄▁▁▁▁▁▁▁▄ ▁
  101 ms           Histogram: frequency by time          192 ms <

 Memory estimate: 30.53 MiB, allocs estimate: 3.
In [134]:
@benchmark $A * ($B * $x)
Out[134]:
BenchmarkTools.Trial: 1382 samples with 1 evaluation.
 Range (min … max):  2.622 ms … 8.762 ms   GC (min … max): 0.00% … 0.00%
 Time  (median):     3.529 ms                GC (median):    0.00%
 Time  (mean ± σ):   3.582 ms ± 352.865 μs   GC (mean ± σ):  0.00% ± 0.00%

                     ▆▇▇▆▆▅▂                                
  ▂▂▂▁▁▂▁▂▂▂▃▂▂▃▄▄▃▃▇███████▇▇▇▇▅▄▄▅▄▃▃▄▃▄▃▃▃▃▃▃▂▂▂▃▂▁▂▁▂▂ ▄
  2.62 ms         Histogram: frequency by time        4.74 ms <

 Memory estimate: 31.50 KiB, allocs estimate: 2.
  • Exercise: Evaluate $\mathbf{y} = (\mathbf{I} + \mathbf{u} \mathbf{v}') \mathbf{x}$, where $\mathbf{u}, \mathbf{v}, \mathbf{x} \in \mathbb{R}^n$, in two ways.
    • Evaluate $\mathbf{A} = \mathbf{I} + \mathbf{u} \mathbf{v}'$, then $\mathbf{y} = \mathbf{A} \mathbf{x}$. $3n^2$ flops.
    • Evaluate $\mathbf{y} = \mathbf{x} + (\mathbf{v}'\mathbf{x}) \mathbf{u}$. $4n$ flops.
      Which method is faster?
In [135]:
Random.seed!(216)
n = 2000
u = randn(n)
v = randn(n)
x = randn(n);
In [136]:
# method 1
@benchmark (I + $u * $v') * $x
Out[136]:
BenchmarkTools.Trial: 293 samples with 1 evaluation.
 Range (min … max):  10.760 ms … 37.389 ms   GC (min … max):  0.00% … 66.79%
 Time  (median):     11.409 ms               GC (median):     0.00%
 Time  (mean ± σ):   17.116 ms ±  9.594 ms   GC (mean ± σ):  34.36% ± 29.46%

  █                         ▁            ▁▁      
  ██▇▅▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▅█▇▇▆▅▅▄▁▁▁▅█▇███▇▄▇ ▅
  10.8 ms      Histogram: log(frequency) by time      36.4 ms <

 Memory estimate: 61.05 MiB, allocs estimate: 5.
In [137]:
# method 2
@benchmark $x + dot($v, $x) * $u
Out[137]:
BenchmarkTools.Trial: 10000 samples with 8 evaluations.
 Range (min … max):   2.857 μs … 4.223 ms   GC (min … max):  0.00% … 99.60%
 Time  (median):     13.561 μs                GC (median):     0.00%
 Time  (mean ± σ):   15.547 μs ± 135.882 μs   GC (mean ± σ):  30.21% ±  3.45%

         ▃█▂                            ▂▂▂▁                   
  ▂▆▄▂▁▂████▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▂▃▄▆█████▇▆▄▃▂▂▂▁▁▁▁▁▁▁▁▁ ▂
  2.86 μs         Histogram: frequency by time         19.9 μs <

 Memory estimate: 31.50 KiB, allocs estimate: 2.

Matrix with orthonormal columns, orthogonal matrix, QR factorization

  • A set of vectors $\mathbf{a}_1, \ldots, \mathbf{a}_k \in \mathbb{R}^{n}$ are orthonormal if the matrix $\mathbf{A} = (\mathbf{a}_1 \, \ldots \, \mathbf{a}_k) \in \mathbb{R}^{n \times k}$ satisfies $$ \mathbf{A}' \mathbf{A} = \mathbf{I}_k. $$ In this case, we say $\mathbf{A}$ has orthonormal columns, or $\mathbf{A}$ lives in the Stiefel manifold.

  • When $k = n$, a square matrix $\mathbf{A}$ with $\mathbf{A}'\mathbf{A} = \mathbf{I}_n$ is called an orthogonal matrix. Example: $\mathbf{I}_n$, permutation matrix, rotation matrix, ...

In [138]:
# three orthonormal vectors in R^3
a1 = [0, 0, 1]
a2 = (1 / sqrt(2)) * [1, 1, 0]
a3 = (1 / sqrt(2)) * [1, -1, 0]
# a 3x3 orthogonal matrix
A = [a1 a2 a3]
Out[138]:
3×3 Matrix{Float64}:
 0.0  0.707107   0.707107
 0.0  0.707107  -0.707107
 1.0  0.0        0.0
In [139]:
A'A
Out[139]:
3×3 Matrix{Float64}:
 1.0  0.0  0.0
 0.0  1.0  0.0
 0.0  0.0  1.0

Properties of matrices with orthonormal columns.

Assume $\mathbf{A} \in \mathbb{R}^{m \times n}$ has orthonormal columns. The corresponding linear mapping is $f:\mathbb{R}^n \mapsto \mathbb{R}^m$ defined by $f(\mathbf{x}) = \mathbf{A} \mathbf{x}$.

  1. $f$ is norm preserving: $\|\mathbf{A} \mathbf{x}\| = \|\mathbf{x}\|$. Recall that in HW2 we showed, for a general matrix $\mathbf{A}$, $\|\mathbf{A} \mathbf{x}\| \le \|\mathbf{A}\| \|\mathbf{x}\|$ (a numeric spot check of properties 1 and 2 follows this list).

    Proof: $\|\mathbf{A} \mathbf{x}\|^2 = \mathbf{x}' \mathbf{A}' \mathbf{A} \mathbf{x} = \mathbf{x}' \mathbf{x} = \|\mathbf{x}\|_2^2$.

  2. $f$ preserves the inner product between vectors: $(\mathbf{A} \mathbf{x})'(\mathbf{A} \mathbf{y}) = \mathbf{x}'\mathbf{y}$.

    Proof: $(\mathbf{A} \mathbf{x})'(\mathbf{A} \mathbf{y}) = \mathbf{x}' \mathbf{A}' \mathbf{A} \mathbf{y} = \mathbf{x}'\mathbf{y}$.

  3. $f$ preserves the angle between vectors: $\angle (\mathbf{A} \mathbf{x}, \mathbf{A} \mathbf{y}) = \angle (\mathbf{x}, \mathbf{y})$.

    Proof: $$ \angle (\mathbf{A} \mathbf{x}, \mathbf{A} \mathbf{y}) = \arccos \frac{(\mathbf{A} \mathbf{x})'(\mathbf{A} \mathbf{y})}{\|\mathbf{A} \mathbf{x}\|\|\mathbf{A} \mathbf{y}\|} = \arccos \frac{\mathbf{x}'\mathbf{y}}{\|\mathbf{x}\|\|\mathbf{y}\|}. $$
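
Before the rotation example below, a quick numeric spot check of properties 1 and 2, using the 3×3 orthogonal $\mathbf{A}$ constructed above (the names x3 and y3 are illustrative):

# norm and inner product are preserved by a matrix with orthonormal columns
x3 = randn(3)
y3 = randn(3)
norm(A * x3) ≈ norm(x3)             # property 1: norm preservation
dot(A * x3, A * y3) ≈ dot(x3, y3)   # property 2: inner product preservation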

In [140]:
x = [1, 2]
y = [3, 2]

θ = π / 4
A = [cos(θ) -sin(θ); sin(θ) cos(θ)]

Ax = A * x
Ay = A * y

x_vals = [0 0 0 0; x[1] y[1] Ax[1] Ay[1]]
y_vals = [0 0 0 0; x[2] y[2] Ax[2] Ay[2]]

plt = plot(x_vals, y_vals, arrow = true, color = :blue,
    legend = :none, xlims = (-4, 4), ylims = (-2, 4),
    annotations = [((x .+ 0.2)..., "x"),
                ((y .+ 0.2)..., "y"),
                ((Ax .+ 0.2)..., "Ax"),
                ((Ay .+ 0.2)..., "Ay")],
    xticks = -3:1:3, yticks = -1:1:3,
    framestyle = :origin,
    aspect_ratio = :equal)
Out[140]:

QR factorization

  • Recall that, given linearly independent input vectors $\mathbf{a}_1, \ldots, \mathbf{a}_k \in \mathbb{R}^n$, the Gram-Schmidt algorithm outputs orthonormal vectors $\mathbf{q}_1, \ldots, \mathbf{q}_k$ (a code sketch of the algorithm follows this list).

    This can be compactly expressed as $$ \mathbf{A} = \mathbf{Q} \mathbf{R}, $$ where $\mathbf{A} = (\mathbf{a}_1 \, \cdots \, \mathbf{a}_k) \in \mathbb{R}^{n \times k}$, $\mathbf{Q} = (\mathbf{q}_1 \, \cdots \, \mathbf{q}_k) \in \mathbb{R}^{n \times k}$ satisfying $\mathbf{Q}'\mathbf{Q} = \mathbf{I}_k$ ($\mathbf{Q}$ has orthonormal columns), and $\mathbf{R} \in \mathbb{R}^{k \times k}$ is an upper triangular matrix with positive diagonal elements.

  • What are the entries in $\mathbf{R}$? In the $i$-th iteration of the Gram-Schmidt algorithm \begin{eqnarray*} \text{Orthogonalization step: } & & \tilde{\mathbf{q}}_i = \mathbf{a}_i - (\mathbf{q}_1' \mathbf{a}_i) \mathbf{q}_1 - \cdots - (\mathbf{q}_{i-1}' \mathbf{a}_i) \mathbf{q}_{i-1} \\ \text{Normalization step: } & & \mathbf{q}_i = \tilde{\mathbf{q}}_i / \|\tilde{\mathbf{q}}_i\|. \end{eqnarray*} Therefore, $$ \mathbf{a}_i = (\mathbf{q}_1' \mathbf{a}_i) \mathbf{q}_1 + \cdots + (\mathbf{q}_{i-1}' \mathbf{a}_i) \mathbf{q}_{i-1} + \|\tilde{\mathbf{q}}_i\| \mathbf{q}_i. $$ This tells us $R_{ij} = \mathbf{q}_i' \mathbf{a}_j$ for $i < j$ and $R_{ii} = \|\tilde{\mathbf{q}}_i\|$.

  • Thus overall Gram-Schmidt algorithm performs $$ \mathbf{A} = \begin{pmatrix} | & | & & | & | \\ \mathbf{a}_1 & \mathbf{a}_2 & \cdots & \mathbf{a}_{k-1} & \mathbf{a}_k \\ | & | & & | & | \end{pmatrix} = \begin{pmatrix} | & | & & | & | \\ \mathbf{q}_1 & \mathbf{q}_2 & \cdots & \mathbf{q}_{k-1} & \mathbf{q}_k \\ | & | & & | & | \end{pmatrix} \begin{pmatrix} \|\tilde{\mathbf{q}}_1\| & \mathbf{q}_1'\mathbf{a}_2 & \cdots & \mathbf{q}_1' \mathbf{a}_{k-1} & \mathbf{q}_1' \mathbf{a}_k \\ & \|\tilde{\mathbf{q}}_2\| & \cdots & \mathbf{q}_2' \mathbf{a}_{k-1} & \mathbf{q}_2' \mathbf{a}_k \\ & & \ddots & \vdots & \vdots \\ & & & \|\tilde{\mathbf{q}}_{k-1}\| & \mathbf{q}_{k-1}' \mathbf{a}_k \\ & & & & \|\tilde{\mathbf{q}}_k\| \end{pmatrix} = \mathbf{Q} \mathbf{R}. $$
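
As a sanity check on this derivation, here is a minimal Gram-Schmidt sketch (the function name gram_schmidt is not from the text) that builds $\mathbf{Q}$ and $\mathbf{R}$ exactly as above:

# Gram-Schmidt QR: Q has orthonormal columns, R is upper triangular
function gram_schmidt(A::AbstractMatrix)
    n, k = size(A)
    Q = zeros(n, k)
    R = zeros(k, k)
    for i in 1:k
        q̃ = float(A[:, i])
        # orthogonalization step: subtract projections onto q_1, ..., q_{i-1}
        for j in 1:(i - 1)
            R[j, i] = dot(Q[:, j], A[:, i])   # R_ji = q_j' a_i
            q̃ -= R[j, i] * Q[:, j]
        end
        # normalization step: R_ii = ‖q̃_i‖, q_i = q̃_i / R_ii
        R[i, i] = norm(q̃)
        Q[:, i] = q̃ / R[i, i]
    end
    Q, R
end

Atest = randn(5, 3)
Q, R = gram_schmidt(Atest)
norm(Q'Q - I), norm(Q * R - Atest)   # both should be ≈ 0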

In [141]:
# HW1: BV 5.6
n = 5
A = [i ≤ j ? 1 : 0 for i in 1:n, j in 1:n]
Out[141]:
5×5 Matrix{Int64}:
 1  1  1  1  1
 0  1  1  1  1
 0  0  1  1  1
 0  0  0  1  1
 0  0  0  0  1
In [142]:
# on a computer, QR decomposition is computed by Householder reflections, which are more numerically stable than Gram-Schmidt
qr(A)
Out[142]:
LinearAlgebra.QRCompactWY{Float64, Matrix{Float64}}
Q factor:
5×5 LinearAlgebra.QRCompactWYQ{Float64, Matrix{Float64}}:
 1.0  0.0  0.0  0.0  0.0
 0.0  1.0  0.0  0.0  0.0
 0.0  0.0  1.0  0.0  0.0
 0.0  0.0  0.0  1.0  0.0
 0.0  0.0  0.0  0.0  1.0
R factor:
5×5 Matrix{Float64}:
 1.0  1.0  1.0  1.0  1.0
 0.0  1.0  1.0  1.0  1.0
 0.0  0.0  1.0  1.0  1.0
 0.0  0.0  0.0  1.0  1.0
 0.0  0.0  0.0  0.0  1.0
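
A quick check on the computed factorization (a sketch; the binding names F, Q, R are illustrative):

# sanity check: Q'Q = I and QR reproduces A
F = qr(A)
Q, R = Matrix(F.Q), F.R
norm(Q' * Q - I), norm(Q * R - A)   # both should be ≈ 0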