3Blue1Brown-linear algebra notes

Notes on viewing linear algebra geometrically. Source: 【官方双语/合集】线性代数的本质 - 系列合集 (Essence of Linear Algebra, bilibili)

L1 Vector

points of view on a vector:

  • a fancy word for a list of numbers [a, b, c] - computer science
  • an arrow in space - physics

facts about vectors (geometric view)

  • rooted at the origin
  • in a coordinate system, a vector's coordinates specify how many steps to take along each basis direction from the origin to the tip
  • can be viewed as a length and a direction
  • adding and scaling have a natural geometric view
    In the 2D x-y coordinate system, adding & scaling are geometrically natural:
    \overrightarrow{v_1} = \begin{bmatrix} x_1\\ y_1\\ \end{bmatrix}, \overrightarrow{v_2} = \begin{bmatrix} x_2\\ y_2\\ \end{bmatrix}
    \overrightarrow{v_1} + \overrightarrow{v_2} is viewed as walking to the tip of \overrightarrow{v_1} and then walking along \overrightarrow{v_2}, which lands at \begin{bmatrix} x_1+x_2\\ y_1+y_2\\ \end{bmatrix}; scaling \overrightarrow{v} is the same as scaling each of its coordinates.
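A minimal numerical sketch of those two operations (assuming NumPy; the example vectors are arbitrary):

```python
import numpy as np

v1 = np.array([1.0, 2.0])   # [x1, y1]
v2 = np.array([3.0, -1.0])  # [x2, y2]

# adding: walk along v1, then along v2 -> coordinates add component-wise
print(v1 + v2)   # [4. 1.] == [x1+x2, y1+y2]

# scaling: stretching the arrow scales each coordinate
print(2 * v1)    # [2. 4.]
```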

L2 Linear Combinations, Span, Basis

basis

In a coordinate system, each dimension has a unit vector such as \widehat{i}, \widehat{j} of length 1. Any vector in the coordinate system can be written as a scaled sum of those unit vectors, \overrightarrow{v} = a\widehat{i} + b\widehat{j}. The unit vectors are called the basis, and any numerical description of a vector depends on the current basis.

linear combination

Scaling two vectors and adding them gives a linear combination of those two vectors: a\overrightarrow{v_1} + b\overrightarrow{v_2}.

span

The span of two vectors is the set of all vectors that can be reached with every possible linear combination of that pair of vectors.
In the x-y 2D coordinate system, for \overrightarrow{v} and \overrightarrow{w}:

  • tips not on the same line through the origin: the span is the whole x-y 2D plane
  • the vectors line up: the span is all vectors whose tips lie on that line
  • both tips at the origin (both are the zero vector): the span is just the origin
    A single vector is viewed as an arrow rooted at the origin; when dealing with a collection of vectors it is easier to view each one as a point (its tip).

linear dependence

If adding a vector does not expand the span, the added one is linearly dependent on the existing ones: if \overrightarrow{u} = a\overrightarrow{v} + b\overrightarrow{w}, then \overrightarrow{u} is linearly dependent on \overrightarrow{v} and \overrightarrow{w}. If \overrightarrow{w} \neq a\overrightarrow{v} for every possible a, then \overrightarrow{w} is linearly independent of \overrightarrow{v}.
The basis of a vector space is a set of linearly independent vectors that span the full space.
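A quick numerical check of span and linear dependence (a sketch, assuming NumPy; the vectors are arbitrary examples) uses the rank of the matrix whose columns are the given vectors:

```python
import numpy as np

v = np.array([1.0, 2.0])
w = np.array([2.0, 4.0])   # w = 2*v, so w is linearly dependent on v
u = np.array([0.0, 1.0])

# rank 1: the span of {v, w} is only a line
print(np.linalg.matrix_rank(np.column_stack([v, w])))  # 1

# rank 2: {v, u} are linearly independent and span the whole 2D plane
print(np.linalg.matrix_rank(np.column_stack([v, u])))  # 2
```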

L3 Matrices as linear transformations

linear transformation

info

Transformation is a fancy word for function: one input produces one output, \overrightarrow{v} \xrightarrow{} transformation \xrightarrow{} \overrightarrow{u}. View this as movement: \overrightarrow{v} moves to \overrightarrow{u}. For a transformation to be linear, geometrically:

  • all lines must remain lines (without getting curved)
  • the origin must stay fixed in place
    as the author puts it: "keeping grid lines parallel and evenly spaced"

represent

Every vector in the input space gets transformed into the output space; how do we represent this numerically and concisely?
A vector in the input space can be tracked by the basis: if we track how the basis vectors transform, then each input vector can be matched with its output vector. With \overrightarrow{v} = x\widehat{i} + y\widehat{j} and \overrightarrow{u} = lt(\overrightarrow{v}), let \widehat{ti} be where \widehat{i} lands after the transformation, and likewise \widehat{tj}; then \overrightarrow{u} = x\widehat{ti} + y\widehat{tj}.
As an example, \overrightarrow{v} = x\begin{bmatrix}1\\0\\\end{bmatrix} + y\begin{bmatrix}0\\1\\\end{bmatrix}, and suppose the transformation sends \begin{bmatrix} 1&0\\ 0&1\\ \end{bmatrix}\rightarrow{}\begin{bmatrix} 1&3\\ -2&0\\ \end{bmatrix}; now the output-space basis is \widehat{ti} = \begin{bmatrix}1\\-2\\\end{bmatrix}, \widehat{tj} = \begin{bmatrix}3\\0\\ \end{bmatrix}.
So \overrightarrow{u} = x\begin{bmatrix}1\\-2\\\end{bmatrix} + y\begin{bmatrix}3\\0\\\end{bmatrix} (geometric vector view) = \begin{bmatrix} 1&3\\ -2&0\\ \end{bmatrix}\begin{bmatrix}x\\y\\\end{bmatrix} (linear transformation view) = \begin{bmatrix}x+3y\\-2x\\\end{bmatrix} (matrix multiplication). The linear transformation is specified/described by 4 numbers for arbitrary x, y (the whole input space); we see that a matrix is a transformation specified by its column vectors, the transformed basis.
formally, a linear transformation satisfies:

  • lt(\overrightarrow{v} + \overrightarrow{u}) = lt(\overrightarrow{v}) + lt(\overrightarrow{u})
  • lt(c\overrightarrow{v}) = c \cdot lt(\overrightarrow{v}) where c is a constant
    if \widehat{ti} is a linear combination of \widehat{tj} (the transformed basis vectors line up), the output span is only a line, since nothing extra is added
    linear transformations are ways to move around space; see a matrix as a certain transformation of space, and matrix-vector multiplication as just a way to compute what that transformation does to a given vector (a sketch follows below)
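A sketch of the example above in NumPy: the columns of the matrix are the transformed basis vectors, and the matrix-vector product just recombines them:

```python
import numpy as np

A = np.array([[1, 3],
              [-2, 0]])        # columns: where i-hat and j-hat land
ti, tj = A[:, 0], A[:, 1]

x, y = 2.0, 1.0
v = np.array([x, y])

# u = x*ti + y*tj is exactly the matrix-vector product A @ v
print(x * ti + y * tj)   # [ 5. -4.]
print(A @ v)             # [ 5. -4.]
```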

L4 Matrix multiplication and composite transformation

Multiplying two matrices has the geometric meaning of applying one transformation and then another.
M_2 = \begin{bmatrix}0&2\\1&0\\\end{bmatrix}, M_1 = \begin{bmatrix}1&-2\\1&0\\\end{bmatrix}; the product M_2M_1 means apply M_1 first, then M_2 (read right to left). Track where \widehat{i} and \widehat{j} go: first \widehat{i} \rightarrow{} \begin{bmatrix}1\\1\\\end{bmatrix} under M_1, then applying M_2 gives 1\begin{bmatrix}0\\1\\\end{bmatrix} + 1\begin{bmatrix}2\\0\\\end{bmatrix} = \begin{bmatrix}2\\1\\\end{bmatrix}; likewise \widehat{j} \rightarrow{} \begin{bmatrix}-2\\0\\\end{bmatrix} under M_1, then -2\begin{bmatrix}0\\1\\\end{bmatrix} + 0\begin{bmatrix}2\\0\\\end{bmatrix} = \begin{bmatrix}0\\-2\\\end{bmatrix} under M_2. Since applying M_1 then M_2 is the composite transformation M_2M_1, the overall effect is the same, which means M_2M_1 = \begin{bmatrix}0&2\\1&0\\\end{bmatrix}\begin{bmatrix}1&-2\\1&0\\\end{bmatrix} = \begin{bmatrix}2&0\\1&-2\\\end{bmatrix}. The composite transformation's basis vectors are its two columns: the geometric view matches the numerical one.
This also lets us see naturally why M_1M_2 \neq M_2M_1 in general, and why (AB)C = A(BC).
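A numerical check of the composition (a sketch with NumPy, using M_1 and M_2 from above): applying M_1 and then M_2 to each basis vector reproduces the columns of the product M_2M_1:

```python
import numpy as np

M1 = np.array([[1, -2],
               [1,  0]])
M2 = np.array([[0, 2],
               [1, 0]])

i_hat = np.array([1, 0])
j_hat = np.array([0, 1])

# apply M1 first, then M2 (reading M2 @ M1 right to left)
print(M2 @ (M1 @ i_hat))   # [2 1]  -> first column of M2 @ M1
print(M2 @ (M1 @ j_hat))   # [ 0 -2] -> second column of M2 @ M1
print(M2 @ M1)             # [[ 2  0], [ 1 -2]]
```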

L6 Determinant

Using 2D as an example: the determinant of a transformation tracks the area of the region spanned by the basis vectors, and all other shapes change by the same amount, since grid lines stay evenly spaced.
When Det(T) = 0, the transformation squishes all of space onto a line or even a single point. Checking whether Det is zero tells us whether the output span has been squished into a lower dimension.
If after the transformation \widehat{i} and \widehat{j} keep their original orientation relative to each other, Det is positive, otherwise negative. This is natural: at the start \widehat{i} and \widehat{j} are apart and Det is positive; as \widehat{i} rotates toward \widehat{j}, Det approaches 0; once \widehat{i} lies on \widehat{j}, Det is 0; finally, when \widehat{i} crosses \widehat{j}, Det is negative.
In a 3D coordinate system, Det is a volume, and its sign follows the right-hand rule. Some computations do not fall within the essence of linear algebra: proving Det(M_1M_2) = Det(M_1)Det(M_2) numerically takes some effort, but viewed geometrically it is natural, since the composite scales area by the product of the individual scale factors. It is not an official proof, but it gives us a feel for why it should hold (see the sketch below).
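A quick numerical sanity check of the area-scaling view (a sketch, assuming NumPy; M_1 and M_2 are the matrices from the previous section):

```python
import numpy as np

M1 = np.array([[1., -2.],
               [1.,  0.]])
M2 = np.array([[0., 2.],
               [1., 0.]])

# composite area scaling == product of the individual scalings
print(np.linalg.det(M2 @ M1))                   # -4.0 (up to float error)
print(np.linalg.det(M1) * np.linalg.det(M2))    # -4.0
```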

L7 Inverse matrix, column space and null space

Inverse matrix

Linear algebra finds application mainly in computer graphics, robotics, and solving systems of linear equations. Suppose we have, in matrix form, A\overrightarrow{x}=\overrightarrow{v}; solving it is the same as finding the original vector \overrightarrow{x} that lands on \overrightarrow{v} after the transformation. If Det(A) \neq 0 there exists a unique solution. If we define the reverse transformation of A as A^{-1}, then transforming and reversing is as if nothing happened.
Notice that the same object is described geometrically as a transformation and numerically as a matrix, like two sides of one coin, so we get the inverse matrix from the inverse transformation.
To solve: A^{-1}A\overrightarrow{x} = A^{-1}\overrightarrow{v}, so \overrightarrow{x} = A^{-1}\overrightarrow{v}.

  • if Det(A) = 0, no reverse transformation exists (a function maps each input to exactly one output); you cannot turn a line/point back into a plane, so A^{-1} does not exist.
  • if Det(A) \neq 0, the transformation maps the space onto itself one-to-one (full span to full span), so A^{-1} exists (see the sketch below).
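A sketch of solving A\overrightarrow{x} = \overrightarrow{v} this way (assuming NumPy; the system is a made-up example):

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 3.]])
v = np.array([3., 5.])

if np.linalg.det(A) != 0:
    x = np.linalg.inv(A) @ v   # x = A^{-1} v
    print(x)                   # [0.8 1.4]
    print(A @ x)               # lands back on v: [3. 5.]
# in practice np.linalg.solve(A, v) is preferred over forming the inverse explicitly
```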

rank

We can transform 3D space into a 2D plane, a one-dimensional number line, or even a point; the rank specifies the number of dimensions in the output.

column space

For all possible \overrightarrow{v}, the set of all outputs A\overrightarrow{v} is the column space. The zero vector is always in the column space, since a linear transformation requires the origin to remain in place. The rank can also be viewed as the number of dimensions of the column space; full rank means the rank equals the number of columns.
For a full-rank transformation, only the zero vector lands on itself; for a transformation that is not full rank, a whole bunch of vectors land on the zero vector.
The set of vectors that land on the origin after the transformation is the null space, or kernel, of the matrix (see the sketch below).
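A small sketch of rank and null space (assuming NumPy; the singular matrix is an arbitrary example). The null space can be read off from the SVD as the right singular vectors whose singular value is zero:

```python
import numpy as np

A = np.array([[1., 2.],
              [2., 4.]])             # second column = 2 * first column

print(np.linalg.matrix_rank(A))      # 1: column space is a line, not full rank

# vectors sent to the zero vector (null space / kernel), via the SVD
_, s, vt = np.linalg.svd(A)
null_vectors = vt[s < 1e-10]         # rows of V^T with (near-)zero singular value
print(null_vectors)                  # a multiple of [-2, 1], up to sign
print(A @ null_vectors[0])           # ~[0. 0.]
```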

L8 Nonsquare matrices

\begin{bmatrix}x_1&x_2\\y_1&y_2\\z_1&z_2\\\end{bmatrix}: it is as if we have two basis vectors, each described with 3 coordinates.
\begin{bmatrix}2\\7\\\end{bmatrix} \rightarrow LT \rightarrow \begin{bmatrix}1\\8\\2\\\end{bmatrix}: the column space is a 2D plane slicing through the origin of 3D space. It takes 3 coordinates to describe the outputs, but they do not cover all of 3D space; it is still a full-rank transformation.
Full rank: the dimension of the column space equals the input space dimension (the number of columns).
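A sketch of a 3x2 matrix as a map from 2D inputs into 3D outputs (assuming NumPy; the matrix entries are arbitrary example values, not the ones behind the unstated transformation above):

```python
import numpy as np

# columns: where the two 2D basis vectors land in 3D (arbitrary example values)
A = np.array([[1., 0.],
              [2., 1.],
              [0., 3.]])

v = np.array([2., 7.])             # a 2D input vector
print(A @ v)                       # a 3D output: [ 2. 11. 21.]
print(np.linalg.matrix_rank(A))    # 2: column space is a 2D plane through the origin in 3D
```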

L9 Dot product and duality

dot product

\begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix} \cdot \begin{bmatrix}y_1\\y_2\\y_3\end{bmatrix} = \sum_{i=1}^3 x_i y_i. Its geometric meaning: project one vector onto the other, then multiply the projected length by the other vector's length. If the projection points opposite to the vector it is projected onto, the result is negative; if the two vectors are perpendicular, the dot product is zero.

connection

Why is the dot product connected with projection?
Consider a 1x2 matrix that transforms a 2D vector into a number, i.e. maps the 2D plane onto the 1D number line: \begin{bmatrix}a&b\\\end{bmatrix}\begin{bmatrix}x\\y\\\end{bmatrix}, which has exactly the same calculation as a dot product.
Consider this: in the 2D x-y coordinate system, project the unit vectors \widehat{i} and \widehat{j} onto a diagonal line through the origin. Since this projection is linear, there must exist some 1x2 matrix that does the job, even though it was not defined in terms of numerical vectors or dot products. And since multiplying a 1x2 matrix by a 2D vector is the same thing as turning the matrix on its side and taking a dot product, this transformation is inescapably related to some 2D vector; in this way the dot product acquires its geometric meaning of projection.
For any 2D vector in the original space, \overrightarrow{v} = x\widehat{i} + y\widehat{j}. Project \overrightarrow{v} onto the line of the unit vector \widehat{u} by tracking \widehat{i} and \widehat{j}: \widehat{i} maps to u_x (the projection of \widehat{i} onto \widehat{u}), and \widehat{j} maps to u_y. So the projection of \overrightarrow{v} is x u_x + y u_y = \begin{bmatrix}u_x&u_y\\\end{bmatrix}\begin{bmatrix}x\\y\\\end{bmatrix} (transformation/projection view) = \begin{bmatrix}x\\y\\\end{bmatrix} \cdot \begin{bmatrix}u_x\\u_y\\\end{bmatrix} (dot product view).
For a non-unit vector c\widehat{u}, the dot product also picks up the scale factor c.
Matrix multiplication (1x2 as the example) is the same thing as taking a dot product (see the sketch below).
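A sketch of this correspondence (assuming NumPy; u and v are arbitrary examples): the 1x2 matrix holding u's coordinates, applied to v, gives the same number as the dot product:

```python
import numpy as np

u = np.array([3., 1.])   # the vector encoding the 2D -> 1D transformation
v = np.array([2., 5.])

# 1x2 matrix (u laid on its side) applied to v ...
print(u.reshape(1, 2) @ v)   # [11.]
# ... equals the dot product
print(np.dot(u, v))          # 11.0
```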

duality

duality: a natural but surprising correspondence between two types of mathematical things.

  • the dual of a vector is the linear transformation that it encodes.
  • the dual of a linear transformation from some space to one dimension is a certain vector
    \begin{bmatrix}u&v\\\end{bmatrix}\leftarrow{}duality\rightarrow\begin{bmatrix}u\\v\\\end{bmatrix}
    dotting two vectors together is a way to translate one of them into the world of transformations
    view a vector as the physical embodiment of a linear transformation; the vector is really a conceptual shorthand for a certain transformation

C10 Cross product

In 2D, \overrightarrow{v} \times \overrightarrow{w} = det(\begin{bmatrix}v_{x}&w_x\\v_{y}&w_y\\\end{bmatrix}): geometrically, the signed area of the parallelogram spanned by the two vectors.
In 3D: the cross product combines two different 3D vectors to get a new 3D vector:

  1. its direction follows the right-hand rule.
  2. it is perpendicular to \overrightarrow{v} and \overrightarrow{w}.
  3. its length equals the area of the parallelogram spanned by \overrightarrow{v} and \overrightarrow{w}
    \begin{bmatrix}v_1\\v_2\\v_3\end{bmatrix} \times \begin{bmatrix}w_1\\w_2\\w_3\end{bmatrix} = det(\begin{bmatrix}\widehat{i}&v_1&w_1\\\widehat{j}&v_2&w_2\\\widehat{k}&v_3&w_3\end{bmatrix}); it is a strange mess to have \widehat{i} inside a determinant: the basis vectors there are just symbols for bookkeeping (see the numerical check below).
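A numerical check of properties 1-3 (a sketch, assuming NumPy; v and w are arbitrary examples):

```python
import numpy as np

v = np.array([1., 2., 0.])
w = np.array([0., 1., 3.])

p = np.cross(v, w)
print(p)                              # [ 6. -3.  1.]
print(np.dot(p, v), np.dot(p, w))     # 0.0 0.0 : perpendicular to both

# length equals the area of the parallelogram spanned by v and w
cos_theta = np.dot(v, w) / (np.linalg.norm(v) * np.linalg.norm(w))
area = np.linalg.norm(v) * np.linalg.norm(w) * np.sin(np.arccos(cos_theta))
print(np.linalg.norm(p), area)        # both ~6.78
```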

C11 Cross product in the light of linear transformations

pre-defined:
\overrightarrow{v} \times \overrightarrow{w} = \overrightarrow{P}

  1. ||\overrightarrow{P}|| = the area of the parallelogram spanned by \overrightarrow{v} and \overrightarrow{w}
  2. perpendicular to \overrightarrow{v} and \overrightarrow{w}
  3. direction obeys right-hand rule.

Why is the cross product geometrically related to properties #1, #2, #3?
Whenever there is a linear transformation to the number line, a vector can be found (the dual of that transformation) such that performing the linear transformation is the same thing as taking a dot product with that vector.
Hint of the steps:

  1. define a 3D->1D linear transformation in terms of \overrightarrow{v} and \overrightarrow{w}
  2. find the dual vector of that linear transformation.
  3. show that the dual vector is \overrightarrow{v} \times \overrightarrow{w}

Define f(x, y, z) = det(\begin{bmatrix}x&v_1&w_1\\y&v_2&w_2\\z&v_3&w_3\end{bmatrix}); f transforms 3D onto the number line, and it is linear,
-> so there must exist a unique 1x3 matrix describing this transformation,
-> i.e. there must exist a unique (dual) vector such that \begin{bmatrix}P_x&P_y&P_z\end{bmatrix}\begin{bmatrix}x\\y\\z\end{bmatrix} = f(x, y, z)
-> comparing both sides gives P_x = v_2w_3 - v_3w_2, P_y = v_3w_1 - v_1w_3, P_z = v_1w_2 - v_2w_1; plugging in \widehat{i}, \widehat{j}, \widehat{k} is a way of signaling that those coefficients should be interpreted as coordinates, thus \overrightarrow{P} = \overrightarrow{v} \times \overrightarrow{w} = det(\begin{bmatrix}\widehat{i}&v_1&w_1\\\widehat{j}&v_2&w_2\\\widehat{k}&v_3&w_3\end{bmatrix}) and \begin{bmatrix}P_x&P_y&P_z\end{bmatrix}\cdot\begin{bmatrix}x\\y\\z\end{bmatrix} = f(x, y, z)
-> in this way we connect the cross product with the dot product, and \begin{bmatrix}P_x&P_y&P_z\end{bmatrix} gains a geometric interpretation
-> the cross product \overrightarrow{P} = \overrightarrow{v} \times \overrightarrow{w} is perpendicular to the plane determined by \overrightarrow{v} and \overrightarrow{w}, and ||\overrightarrow{P}|| is the area of the parallelogram determined by \overrightarrow{v} and \overrightarrow{w} (see the numerical check below).
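A sketch verifying the duality numerically (assuming NumPy; v, w and the input [x, y, z] are arbitrary examples): f(x, y, z) computed as a 3x3 determinant matches the dot product of \overrightarrow{v} \times \overrightarrow{w} with [x, y, z]:

```python
import numpy as np

v = np.array([1., 2., 3.])
w = np.array([4., 5., 6.])
xyz = np.array([7., -1., 2.])

f = np.linalg.det(np.column_stack([xyz, v, w]))   # det with [x, y, z] as the first column
p = np.cross(v, w)                                # the dual vector
print(f)                # ~ -33.0
print(np.dot(p, xyz))   # ~ -33.0
```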

12 Change of basis

A coordinate system functions as a translation between vectors and sets of numbers; the coordinates x, y implicitly assume the current basis.

standard coordinate system (SC): \widehat{i}, \widehat{j}; Bob's coordinate system (BC): \widehat{b_1}, \widehat{b_2} (described using the SC basis)

  1. the origins coincide
  2. the directions of the axes and the spacing of the grid lines are different

T^{-1}M^{100}T\begin{bmatrix}bob_x\\bob_y\end{bmatrix} works as follows: T translates Bob's coordinates into our coordinate system (described in SC), M is applied 100 times (described in SC), then T^{-1} translates back into BC (described in BC); see the sketch below.
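A sketch of this change-of-basis sandwich (assuming NumPy; the basis vectors in T, the transformation M, and the input are arbitrary examples):

```python
import numpy as np

# Bob's basis vectors, written in our (SC) coordinates -> columns of T
T = np.array([[2., -1.],
              [1.,  1.]])
M = np.array([[0., -1.],
              [1.,  0.]])       # some transformation, described in SC

bob_v = np.array([1., 2.])      # a vector written in Bob's coordinates

# translate to SC, apply M (once here, instead of 100 times), translate back to BC
print(np.linalg.inv(T) @ M @ T @ bob_v)   # [-1.  1.]

# for 100 applications: np.linalg.inv(T) @ np.linalg.matrix_power(M, 100) @ T @ bob_v
```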

13 Eigenvectors and eigenvalues

The defining equation: A\overrightarrow{v} = \lambda\overrightarrow{v}. After the linear transformation, some vectors are merely scaled by some factor; such a vector is an eigenvector, and the factor is its eigenvalue.
An eigenvector's span is unchanged; geometrically it only gets stretched or squished.
Application: for a 3D rotation, the axis of rotation is an eigenvector, since it is unchanged during the rotation. It is easier to think of a 3D rotation in terms of an axis and an angle rather than the 3x3 matrix associated with the transformation.
Eigenvectors and eigenvalues are independent of the coordinate system.
(A - \lambda I)\overrightarrow{v} = \overrightarrow{0}: the zero vector always satisfies this, and the only way the product of a matrix with a non-zero vector can be zero is if the transformation squishes space into a lower dimension, thus det(A - \lambda I) = 0
(e.g. 3D squished onto a plane, a line, or the origin)
If every vector moves off its own span, no eigenvector exists. For a 90^\circ rotation, det(\begin{bmatrix}0 - \lambda&-1\\1&0 -\lambda\end{bmatrix}) = 0 gives \lambda^2 = -1, which has no real solution, since the rotation takes every vector off its own span.
Shear transformation \begin{bmatrix}1&1\\0&1\end{bmatrix}: vectors on the x-axis are neither moved nor scaled, so they are eigenvectors with \lambda = 1.
A single eigenvalue may have more than one line of eigenvectors (e.g. a uniform scaling keeps every vector on its own span, all with the same eigenvalue).
Eigenbasis: a basis in which every basis vector is an eigenvector.
Diagonal matrix: all the basis vectors are eigenvectors, and the eigenvalues sit on the diagonal. Powers are easy: for a diagonal A, A^{100} just raises each diagonal entry \lambda_i to the 100th power.
Let E be the change-of-basis matrix whose columns are the eigenvectors; E^{-1}AE is guaranteed to be diagonal with the eigenvalues \lambda_i on the diagonal, because in the eigenbasis the basis vectors just get scaled during the transformation.
So A^{100} = E(E^{-1}AE)^{100}E^{-1}: change to the eigenbasis to get a diagonal matrix, take its 100th power, then change back to the original coordinate system (see the sketch below).
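A sketch of the A^{100} trick (assuming NumPy; A is an arbitrary diagonalizable example). The columns of E are the eigenvectors, so E^{-1}AE comes out diagonal:

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 2.]])              # arbitrary diagonalizable example

eigvals, E = np.linalg.eig(A)         # columns of E are the eigenvectors
D = np.linalg.inv(E) @ A @ E          # diagonal, eigenvalues (3 and 1) on the diagonal
print(np.round(D, 6))

# A^100 = E D^100 E^{-1}; D^100 just raises each diagonal entry to the 100th power
A100 = E @ np.diag(eigvals ** 100) @ np.linalg.inv(E)
print(np.allclose(A100, np.linalg.matrix_power(A, 100)))   # True
```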

C14 Abstract vector space

what is a vector?
Determinants and eigenvectors don't care about the coordinate system: how much a transformation scales areas, and which vectors stay on their own span during the transformation, hold regardless of the basis.
vector-ish qualities:

  1. additivity: L(\overrightarrow{v} + \overrightarrow{w}) = L(\overrightarrow{v}) + L(\overrightarrow{w})
  2. scaling: L(c\overrightarrow{v}) = cL(\overrightarrow{v})
    Transformations and functions (e.g. the derivative, which is linear) can preserve the operations of additivity and scalar multiplication, so any abstract entity fulfilling properties #1 and #2 lives in a vector space (see the sketch below).
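The derivative fits this picture; a sketch (assuming NumPy, and restricting to polynomials of degree at most 3 represented by coefficient vectors as a finite-dimensional stand-in): the derivative becomes a matrix and satisfies additivity and scaling:

```python
import numpy as np

# d/dx on polynomials a0 + a1*x + a2*x^2 + a3*x^3, acting on coefficients [a0, a1, a2, a3]
D = np.array([[0., 1., 0., 0.],
              [0., 0., 2., 0.],
              [0., 0., 0., 3.],
              [0., 0., 0., 0.]])

p = np.array([5., 0., 1., 0.])   # 5 + x^2
q = np.array([0., 2., 0., 4.])   # 2x + 4x^3

# additivity and scaling hold, so the derivative is a linear transformation
print(np.allclose(D @ (p + q), D @ p + D @ q))   # True
print(np.allclose(D @ (3 * p), 3 * (D @ p)))     # True
print(D @ p)                                     # [0. 2. 0. 0.]  i.e. 2x
```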

axioms for vectors

  1. \overrightarrow{u} + (\overrightarrow{v} + \overrightarrow{w}) = (\overrightarrow{u} + \overrightarrow{v}) + \overrightarrow{w}
  2. \overrightarrow{u} + \overrightarrow{v} = \overrightarrow{v} + \overrightarrow{u}
  3. \exists \overrightarrow{0}, \overrightarrow{0} + \overrightarrow{v} = \overrightarrow{v}
  4. for every \overrightarrow{v}, there exists -\overrightarrow{v} such that -\overrightarrow{v} + \overrightarrow{v} = \overrightarrow{0}
  5. a(b\overrightarrow{v}) = (ab)\overrightarrow{v}
  6. 1\overrightarrow{v} = \overrightarrow{v}
  7. a(\overrightarrow{v} + \overrightarrow{w}) = a\overrightarrow{v} + a\overrightarrow{w}
  8. (a + b)\overrightarrow{v} = a\overrightarrow{v} + b\overrightarrow{v}

"Grid lines remain parallel and evenly spaced" is the more intuitive geometric counterpart of the 8 axioms; the concrete form of a vector doesn't really matter. Like the number 3: it can be 3 persons, 3 cars, or 3 things, while adding and subtracting work the same way.
Abstraction is the price of generality.
Terms from different contexts mean the same thing:

linear algebra concept -> alternate name when applied to functions
linear transformation  -> linear operator
dot product            -> inner product
eigenvector            -> eigenfunction

C15 Cramer's rule

Shown with an example:
\begin{bmatrix}3&2\\-1&2\end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix} = \begin{bmatrix}-4\\-2\end{bmatrix}, solve for x, y.
Cramer's rule is not the most efficient way of solving a linear system, but it is more intuitive.
det(A) = 0: there may be no solution or many solutions.
det(A) \neq 0: a unique solution exists.
A transformation T is orthonormal if it preserves dot products: T(\overrightarrow{v}) \cdot T(\overrightarrow{w}) = \overrightarrow{v} \cdot \overrightarrow{w}.
The parallelogram spanned by \widehat{i} and \begin{bmatrix}x\\y\end{bmatrix} has area equal to y. After the transformation, this parallelogram's area changes by the same amount det(A): det(\begin{bmatrix}3&-4\\-1&-2\end{bmatrix}) (the parallelogram formed by the transformed \widehat{i} and \begin{bmatrix}-4\\-2\end{bmatrix}) = y \cdot det(A), so y = det(\begin{bmatrix}3&-4\\-1&-2\end{bmatrix})/det(A); for the same reason, x = det(\begin{bmatrix}-4&2\\-2&2\end{bmatrix})/det(A) (see the sketch below).
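A sketch of the worked example (assuming NumPy): the ratios of determinants match the direct solve:

```python
import numpy as np

A = np.array([[3., 2.],
              [-1., 2.]])
b = np.array([-4., -2.])

d = np.linalg.det(A)                                   # 8.0
x = np.linalg.det(np.column_stack([b, A[:, 1]])) / d   # replace the 1st column with b
y = np.linalg.det(np.column_stack([A[:, 0], b])) / d   # replace the 2nd column with b
print(x, y)                    # -0.5 -1.25
print(np.linalg.solve(A, b))   # [-0.5  -1.25]
```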

3D case

It is worth thinking through how this works in 3D: volumes change by the same factor det(A), so each coordinate again comes out as a ratio of determinants.
