
Vectors And Spaces

by Thom Ives, Ph.D., using Python applied to the lectures of Sal Khan from Khan Academy

Khan and Python are MORE TOGETHER

In case you've never heard me say it, "I love Sal Khan's teaching!" He is a gift to America and the world! When I need to review something in Linear Algebra, I usually turn to his videos and Khan Academy materials first. Well, a while back, a group of people wanted to learn Linear Algebra from me. I said,

"Learn it from the best! And then I will help you learn how to use Python for linear algebraic operations and linear algebraic visualizations to help you learn it AND Python better!"

Well, now I want to create a repo for this and share it with you as I grow it. I am eager to hear from you where I can make things more clear and helpful. This is a work in progress.

Here are the links we will cover below.

Khan Academy Section: www.khanacademy.org/math/linear-algebra/vectors-and-spaces

  1. Vector Intro For Linear Algebra www.khanacademy.org/math/linear-algebra/vectors-and-spaces/vectors/v/vector-introduction-linear-algebra?modal=1
  2. Real Coordinate Spaces www.khanacademy.org/math/linear-algebra/vectors-and-spaces/vectors/v/real-coordinate-spaces?modal=1
  3. Adding Vectors Algebraically & Graphically www.khanacademy.org/math/linear-algebra/vectors-and-spaces/vectors/v/adding-vectors
  4. Multiplying A Vector By A Scalar www.khanacademy.org/math/linear-algebra/vectors-and-spaces/vectors/v/multiplying-vector-by-scalar
  5. Vector Examples www.khanacademy.org/math/linear-algebra/vectors-and-spaces/vectors/v/linear-algebra-vector-examples?modal=1
  6. Unit Vectors Introduction www.khanacademy.org/math/linear-algebra/vectors-and-spaces/vectors/v/intro-unit-vector-notation?modal=1
  7. Parametric Representations Of Lines www.khanacademy.org/math/linear-algebra/vectors-and-spaces/vectors/v/linear-algebra-parametric-representations-of-lines?modal=1

Vectors have direction and magnitude.

It does NOT matter where they start from.

What are some vectors that you know about and what is their direction?

$ \vec{v} = [5 \hat{i}] $ where $\hat{i}$ is a unit vector in the x direction.

import numpy as np
import matplotlib.pyplot as plt

plt.rcParams.update({  # Only needed for some dark modes
    'axes.edgecolor':'black', 'xtick.color':'black',
    'ytick.color':'black', 'figure.facecolor':'white'})

# Plot two invisible white points to set the plot extents
plt.plot([2, 2.001],[1, 1.001], c='white')
plt.plot([8.999, 9],[3.999, 4], c='white')

vi = 5
vj = 0
plt.arrow(3, 2, vi, vj, head_width = 0.2, width = 0.05, ec ='green')
plt.show();

(plot output)

$ \vec{v_2} = [3\hat{i} \;\; 4\hat{j}] $ where $\hat{j}$ is a unit vector in the y direction.

What is the magnitude of $ \vec{v_2} $?

It is $ \lVert \vec{v_2} \rVert = \sqrt{3^2 + 4^2} = \sqrt{9 + 16} = \sqrt{25} = 5$

import numpy as np
import matplotlib.pyplot as plt

plt.rcParams.update({  # Only needed for some dark modes
    'axes.edgecolor':'black', 'xtick.color':'black',
    'ytick.color':'black', 'figure.facecolor':'white'})

# Plot two invisible white points to set the plot extents
plt.plot([0, 0.001],[0, 0.001], c='white')
plt.plot([7.999, 8],[7.999, 8], c='white')

vi = 3
vj = 4
plt.arrow(1, 2, vi, vj, head_width = 0.2, width = 0.05, ec ='green')
plt.show();

(plot output)

Adding Two Vectors Of Like Kind

$ \vec{v_1} = [1\hat{i} \;\; 2\hat{j}], \;\;\; \vec{v_2} = [2\hat{i} \;\; 2\hat{j}] $

$ \vec{v_3} = \vec{v_1} + \vec{v_2} = [(1+2)\hat{i} + (2+2)\hat{j}] = [3\hat{i} + 4\hat{j}] $

What is the magnitude of $ \vec{v_3} $?

It is $ \lVert \vec{v_3} \rVert = \lVert \vec{v_1} + \vec{v_2} \rVert = \sqrt{(1+2)^2 + (2+2)^2} = \sqrt{9 + 16} = \sqrt{25} = 5$

import numpy as np
import matplotlib.pyplot as plt

plt.rcParams.update({  # Only needed for some dark modes
    'axes.edgecolor':'black', 'xtick.color':'black',
    'ytick.color':'black', 'figure.facecolor':'white'})

# Create a blank plotting canvas for the vectors
plt.plot([0, 0.001],[0, 0.001], c='white')
plt.xlim([0, 5]); plt.ylim([0, 7])

v1i = 1; v1j = 2; v2i = 2; v2j = 2

plt.arrow(1, 2, v1i, v1j, head_width = 0.2, width = 0.05,
          ec ='green', length_includes_head=True)
plt.arrow(2, 4, v2i, v2j, head_width = 0.2, width = 0.05,
          ec ='red', length_includes_head=True)

v3i = v1i + v2i; v3j = v1j + v2j
v3_mag = np.linalg.norm([v3i, v3j])

stuff = r'The magnitude of $\vec{v_3}$' + f' is {v3_mag}'
plt.text(1, 1, stuff, fontsize=14, ha='left')
plt.arrow(1, 2, v3i, v3j, head_width = 0.2, width = 0.05,
          ec ='blue', length_includes_head=True)
plt.grid()
plt.show();

(plot output)

How do you find the magnitude of a vector in NumPy?

You use np.linalg.norm, which computes the Euclidean norm.

v1 = np.array([2, 2])
v2 = np.array([1, 2])
v3 = v1 + v2
print(v3)
v3_mag = np.linalg.norm(v3)
print(v3_mag)
[3 4]
5.0

How do we scale vectors in NumPy as Sal Khan showed us?

v4 = 4*v1
v5 = 4*v2
print(v4)
print(v5)
[8 8]
[4 8]

Now we can simply add our scaled vectors.

v6 = v4 + v5
print(v6)
v6_mag = np.linalg.norm(v6)
print(v6_mag)
[12 16]
20.0

Can I find the unit vectors for all our vectors above? YES! Divide the vectors by their norms!

v1_unit = v1 / np.linalg.norm(v1)
print(f'Unit vector of v1 is {v1_unit}\n')

v2_unit = v2 / np.linalg.norm(v2)
print(f'Unit vector of v2 is {v2_unit}\n')

v3_unit = v3 / v3_mag
print(f'Unit vector of v3 is {v3_unit}\n')

v4_unit = v4 / np.linalg.norm(v4)
print(f'Unit vector of v4 is {v4_unit}\n')

v5_unit = v5 / np.linalg.norm(v5)
print(f'Unit vector of v5 is {v5_unit}\n')

v6_unit = v6 / np.linalg.norm(v6)
print(f'Unit vector of v6 is {v6_unit}\n')
Unit vector of v1 is [0.70710678 0.70710678]

Unit vector of v2 is [0.4472136  0.89442719]

Unit vector of v3 is [0.6 0.8]

Unit vector of v4 is [0.70710678 0.70710678]

Unit vector of v5 is [0.4472136  0.89442719]

Unit vector of v6 is [0.6 0.8]

Why are they called unit vectors? If you find their norm, their norms will ALWAYS equal 1 IF they are truly unit vectors.

print(f'The magnitude of v1_unit is {round(np.linalg.norm(v1_unit), 2)}\n')
print(f'The magnitude of v2_unit is {round(np.linalg.norm(v2_unit), 2)}\n')
print(f'The magnitude of v3_unit is {round(np.linalg.norm(v3_unit), 2)}\n')
print(f'The magnitude of v4_unit is {round(np.linalg.norm(v4_unit), 2)}\n')
print(f'The magnitude of v5_unit is {round(np.linalg.norm(v5_unit), 2)}\n')
print(f'The magnitude of v6_unit is {round(np.linalg.norm(v6_unit), 2)}\n')
The magnitude of v1_unit is 1.0

The magnitude of v2_unit is 1.0

The magnitude of v3_unit is 1.0

The magnitude of v4_unit is 1.0

The magnitude of v5_unit is 1.0

The magnitude of v6_unit is 1.0

IF you multiply a unit vector by a scalar, you will get a vector in the same direction with a different magnitude.
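As a quick sketch of that idea - the vector [3, 4] and the target magnitude of 10 below are arbitrary illustrative choices:

```python
import numpy as np

v = np.array([3, 4])
v_unit = v / np.linalg.norm(v)        # direction only, magnitude 1

target_magnitude = 10                 # an arbitrary illustrative choice
v_scaled = target_magnitude * v_unit  # same direction, new magnitude

print(v_scaled)                       # [6. 8.]
print(np.linalg.norm(v_scaled))       # 10.0
```

Dividing by the norm strips the magnitude; multiplying by a scalar puts a new one back.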

Linear Combinations And Span

Khan Academy Section

Video for Linear Combinations And Span

$ \vec{v_1} = [1\hat{i} \;\; 2\hat{j}], \;\;\; \vec{v_2} = [2\hat{i} \;\; 2\hat{j}] $ are independent.

We can say that some $ \vec{v_3} = c_1 \vec{v_1} + c_2 \vec{v_2} $

How much space could be spanned by using many different values for $c_1$ and $c_2$ ?

It would be all real values in 2-dimensional space.

In other words, $ \vec{v_3} \in ℝ^2 $.

The same idea applies to independent vectors in $ℝ^3$.

$ \vec{v_1} = [1\hat{i} \;\; 2\hat{j} \;\; 3\hat{k}], \;\;\; \vec{v_2} = [2\hat{i} \;\; 2\hat{j} \;\; 1\hat{k}] $ are independent.

We can say that some $ \vec{v_3} = c_1 \vec{v_1} + c_2 \vec{v_2} $

How much space could be spanned by using many different values for $c_1$ and $c_2$?

This time it would NOT be all of 3-dimensional space. Two independent vectors in $ℝ^3$ span a plane through the origin, which is a 2-dimensional subspace of $ℝ^3$. To span all of $ℝ^3$, we need a third vector that is linearly independent of $\vec{v_1}$ and $\vec{v_2}$; then any $ \vec{v_4} = c_1 \vec{v_1} + c_2 \vec{v_2} + c_3 \vec{v_3} $ satisfies $ \vec{v_4} \in ℝ^3 $.

The same reasoning extends to more than 3 dimensions.
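A quick way to see how much of $ℝ^3$ a set of vectors spans is to stack them into a matrix and look at its rank; this is a sketch, and the third vector here is an arbitrary illustrative choice:

```python
import numpy as np

v1 = np.array([1, 2, 3])
v2 = np.array([2, 2, 1])

# Two independent vectors in R^3 span only a plane - a 2D subspace
rank_two = np.linalg.matrix_rank(np.vstack([v1, v2]))
print(rank_two)  # 2

# Adding a third independent vector (an illustrative choice) spans all of R^3
v3 = np.array([1, 0, 1])
rank_three = np.linalg.matrix_rank(np.vstack([v1, v2, v3]))
print(rank_three)  # 3
```

The rank counts the number of linearly independent vectors, which is the dimension of their span.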

import numpy as np
import matplotlib.pyplot as plt

c1 = 3; c2 = 4
v1 = np.array([2, 2]); v2 = np.array([1, 2])
v3 = c1*v1 + c2*v2
print(v3)
v3_mag = round(np.linalg.norm(v3), 2)

plt.plot([0, 0.001],[0, 0.001], c='white')
plt.xlim([-20, 20]); plt.ylim([-20, 20])
plt.arrow(0, 0, v3[0], v3[1], head_width = 0.5, width = 0.05, ec ='green')

stuff = r'The magnitude of $\vec{v_3}$' + f' is {v3_mag}'
plt.title(stuff, color='white')
plt.show();
[10 14]

(plot output)

The Span of Linearly Combined Independent Vectors

$ V = Span(\vec{V_1}, \vec{V_2}, \dots, \vec{V_m}) \mapsto c_1 \vec{V_1} + c_2 \vec{V_2} + \dots + c_m \vec{V_m} \; \forall \; c_i \in ℝ $

import numpy as np
import matplotlib.pyplot as plt

v1 = np.array([2, 2]); v2 = np.array([1, 2])

finite_span = 10
for c1 in range(-finite_span, finite_span+1, 1):
    for c2 in range(-finite_span, finite_span+1, 1):
        v3 = c1*v1 + c2*v2
        plt.plot([0, 0.001],[0, 0.001], c='white')
        plt.arrow(0, 0, v3[0], v3[1], head_width = 0.5,
                  width = 0.05, ec ='green')

plt.xlim([-20, 20]); plt.ylim([-20, 20])
plt.show();

(plot output)

The Span of Linearly Combined Dependent Vectors

import numpy as np
import matplotlib.pyplot as plt


finite_span = 5
v1 = np.array([2, 2]); v2 = np.array([1, 1])

for c1 in range(-finite_span, finite_span + 1, 1):
    for c2 in range(-finite_span, finite_span + 1, 1):
        v3 = c1*v1 + c2*v2

        plt.plot([0, 0.001],[0, 0.001], c='white')
        plt.arrow(0, 0, v3[0], v3[1], head_width = 0.5,
                  width = 0.05, ec ='green')

plt.xlim([-20, 20]); plt.ylim([-20, 20])
plt.show();

(plot output)

Linear Dependence And Independence


Overview

  • Linear Independence - no collinearity between two or more vectors (i.e. features)
  • Linear Dependence - collinearity is present between two or more vectors (i.e. features)

Linear Dependence

v1 = np.array([2, 2])
v2 = np.array([1, 1])

print(v1)
print(v2)
[2 2]
[1 1]
print(v1)
print(2*v2)
[2 2]
[2 2]

This is easy to see by inspection, but what if we have many MANY dimensions? We need a mathematical, programmatic way to check for dependence or independence.

u_v1 = v1 / np.linalg.norm(v1)
u_v2 = v2 / np.linalg.norm(v2)

print(u_v1)
print(u_v2)

if np.array_equal(u_v1, u_v2):
    print('The vectors are dependent')
else:
    print('The vectors are independent')
[0.70710678 0.70710678]
[0.70710678 0.70710678]
The vectors are dependent

Linear Independence

v1 = np.array([2, 2])
v2 = np.array([1, 2])

print(v1)
print(v2)
[2 2]
[1 2]
u_v1 = v1 / np.linalg.norm(v1)
u_v2 = v2 / np.linalg.norm(v2)

print(u_v1)
print(u_v2)

if np.array_equal(u_v1, u_v2):
    print('The vectors are dependent')
else:
    print('The vectors are independent')
[0.70710678 0.70710678]
[0.4472136  0.89442719]
The vectors are independent

General Check For Independence

$ \vec{v_1} = [1\hat{i} \;\; 2\hat{j}], \;\;\; \vec{v_2} = [2\hat{i} \;\; 2\hat{j}] $

If there is some $c_1 \neq 0$ and / or some $c_2 \neq 0$, with $c_1$ and $c_2$ being any real numbers, that satisfies the equation

$ \vec{0} = c_1 \vec{v_1} + c_2 \vec{v_2} $

then the vectors are dependent.

If $ c_1 $ and $ c_2 $ MUST BE zero to satisfy the equation above,

then the vectors are independent.
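This test can be run in code: stack the vectors as columns and ask whether the matrix has full column rank, which is equivalent to $c_1 = c_2 = 0$ being the only solution. A sketch using np.linalg.matrix_rank:

```python
import numpy as np

def are_independent(*vectors):
    """Vectors are independent iff the matrix with them as columns
    has full column rank (only the zero combination gives the zero vector)."""
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == A.shape[1]

print(are_independent(np.array([1, 2]), np.array([2, 2])))  # True
print(are_independent(np.array([2, 2]), np.array([1, 1])))  # False
```

Unlike comparing unit vectors, this check also catches vectors that point in opposite directions, such as $\vec{v}$ and $-\vec{v}$, which are dependent but have different unit vectors.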

Visualization Of Above Principles

Visualizing Linear Dependence

import numpy as np
import matplotlib.pyplot as plt

plt.plot([0, 0.001],[0, 0.001], c='white')
plt.xlim([0, 7]); plt.ylim([0, 7])

v1i = 1; v1j = 1
v2i = 2; v2j = 2

plt.arrow(2, 2, v1i, v1j, head_width = 0.2, width = 0.05, ec ='green')
plt.arrow(2, 2, v2i, v2j, head_width = 0.2, width = 0.05, ec ='red')

stuff = r'Visualizing $\vec{v_1}$ and $\vec{v_2}$'
plt.title(stuff, color='white')
plt.show();

(plot output)

Visualizing Linear Independence

import numpy as np
import matplotlib.pyplot as plt

plt.plot([0, 0.001],[0, 0.001], c='white')
plt.xlim([0, 7]); plt.ylim([0, 7])

v1i = 1; v1j = 2
v2i = 2; v2j = 2

plt.arrow(2, 2, v1i, v1j, head_width = 0.2, width = 0.05, ec ='green')
plt.arrow(2, 2, v2i, v2j, head_width = 0.2, width = 0.05, ec ='red')

stuff = r'Visualizing $\vec{v_1}$ and $\vec{v_2}$'
plt.title(stuff, color='white')
plt.show();

(plot output)

Takeaway

With increased dimensionality comes new information.

Compact Review
Linear Dependence & Independence:
Visually & Via Unit Vectorization

import numpy as np
import matplotlib.pyplot as plt

plt.figure(figsize=(7, 4))
plt.plot([0, 0.001],[0, 0.001], c='white')
plt.xlim([0, 7]); plt.ylim([0, 4])

v1 = [1, 1]; v2 = [2, 2]; v3 = [1, 2]; v4 = [2, 2]

u_v1 = v1 / np.linalg.norm(v1); u_v2 = v2 / np.linalg.norm(v2)
u_v3 = v3 / np.linalg.norm(v3); u_v4 = v4 / np.linalg.norm(v4)

print(f'Vectors 1 and 2 are independent: {not np.array_equal(u_v1, u_v2)}')
print(f'Vectors 3 and 4 are independent: {not np.array_equal(u_v3, u_v4)}\n')

plt.arrow(1, 1, v1[0], v1[1], head_width = 0.2, width = 0.05, ec ='green')
plt.arrow(1, 1, v2[0], v2[1], head_width = 0.2, width = 0.05, ec ='red')
plt.arrow(4, 1, v3[0], v3[1], head_width = 0.2, width = 0.05, ec ='green')
plt.arrow(4, 1, v4[0], v4[1], head_width = 0.2, width = 0.05, ec ='red')

stuff = r'$\vec{v_1}$ and $\vec{v_2}$ are Dependent'
stuff += r' ... $\vec{v_3}$ and $\vec{v_4}$ are Independent'
plt.title(stuff, color='white')

plt.show();
Vectors 1 and 2 are independent: False
Vectors 3 and 4 are independent: True

(plot output)

Subspaces And The Basis For A Subspace


Linear Algebra and the Data Sciences are subsets of STEM

  • Sometimes it takes several lectures, sessions, or chapters of STEM material before the light turns on in your mind - be patient.
  • It's perfectly normal, if or when you fall in love with this stuff, to want to watch Sal's lectures again, to read a linear algebra textbook multiple times, and to do the same with multiple linear algebra textbooks or other materials.
  • Branches of math and science important to the data sciences:
    • Algebra
    • Linear Algebra
    • Trigonometry
    • Statistics (a science that uses the math of probability)
    • The Calculus

How To Determine IF Some $V$, Which Is A Subset Of $ℝ^n$, Is Also A Valid Subspace Of $ℝ^n$

Some $V$ is a subset of $ℝ^n$.

$V$ is a valid subspace of $ℝ^n$ IF the following three statements are true:

  1. $V$ contains the zero vector for $n$ dimensions ... $ \vec{0} = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix} $

  2. If $\vec{x}$ is in $V$, then any $c \vec{x}$ is also in $V$ where $c$ is any scalar value (closure under scalar multiplication).

  3. If $ \vec{a} $ is in $V$ and if $ \vec{b} $ is in $V$, then $ \vec{a} + \vec{b} $ is in $V$ (closure under addition).

This all means that if the set of vectors $V$ is closed under multiplication and addition and contains the zero vector, it is a valid subspace of $ℝ^n$.

A Super Simple Example Of A Valid Subspace Of $ ℝ^3 $

$\vec{V} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$ is a subset of $ℝ^3$.

Is $\vec{V}$ a valid subspace of $ℝ^3$?

  1. $\vec{V}$ does contain the zero vector for 3 dimensions.

  2. Any $c \vec{V}$ is also in $\vec{V}$ where $c$ is any scalar value.

  3. $\vec{V} + \vec{V} = \vec{V}$

Thus $\vec{V}$ is a valid subspace of $ℝ^3$.

Even though this is a trivially simple subspace, it is still a valid subspace.

A Simple Example Of An Invalid Subspace Of $ℝ^2$

$S = \begin{Bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \in ℝ^2 \mid x_1 \ge 0 \end{Bmatrix}$

Is $S$ a valid subspace of $ℝ^2$?

  1. $S$ does contain the zero vector for 2 dimensions.

  2. NOT all $c \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$ are in $S$ where $c$ is any scalar value - any negative $c$ with $x_1 > 0$ gives a first component less than $0$ - so $S$ is not closed under multiplication.

  3. $\vec{a}$ is in $S$ and $\vec{b}$ is in $S$ and $\vec{a} + \vec{b}$ is in $S$, so $S$ is closed under addition.

Thus $S$ is NOT a valid subspace of $ℝ^2$, because it is not closed under multiplication (rule 2).

Let's illustrate this with Python, NumPy, and MatPlotLib

import numpy as np
import matplotlib.pyplot as plt

plt.rcParams.update({  # Only needed for some dark modes
    'axes.edgecolor':'black', 'xtick.color':'black',
    'ytick.color':'black', 'figure.facecolor':'white'})

x_list = []
finite_span = 20
lim = 30
plt.figure(figsize=(10,6))
for x1 in range(0, finite_span+1, 1):
    for x2 in range(-finite_span, finite_span+1, 1):
        x = np.array([x1, x2])
        x_list.append(x)
        plt.plot([0, 0.001],[0, 0.001], c='white')
        plt.arrow(0, 0, x[0], x[1], head_width = 0.5,
                  width = 0.05, ec ='green')

plt.xlim([-lim, lim])
plt.ylim([-lim, lim])
plt.show();

(plot output)

plt.figure(figsize=(10,6))
for x in x_list:
    plt.plot([0, 0.001],[0, 0.001], c='white')
    plt.arrow(0, 0, x[0], x[1], head_width = 0.5,
              width = 0.05, ec ='green')

plt.xlim([-lim, lim]); plt.ylim([-lim, lim])
plt.show();

(plot output)

# Is [0, 0] in x_list? x_list is a Python list of NumPy arrays,
# so convert it to a list of plain Python lists to use the "in" operator
py_list_of_py_lists = [x.tolist() for x in x_list]
print([0, 0] in py_list_of_py_lists)
True
import random

list_1 = random.sample(x_list, 1)[0]
list_2 = random.sample(x_list, 1)[0]
print(list_1, list_2)
sum_of_any_two_lists = list_1 + list_2
print(sum_of_any_two_lists)
print(sum_of_any_two_lists.tolist() in py_list_of_py_lists)
[10 13] [  5 -16]
[15 -3]
True
plt.figure(figsize=(10,6))
for x in x_list:
    plt.plot([0, 0.001],[0, 0.001], c='white')
    plt.arrow(0, 0, x[0], x[1], head_width = 0.5,
              width = 0.05, ec ='green')

bool_list = []
x = np.array([1, 1])
for c in range(-10, 10+1, 1):
    plt.plot([0, 0.001],[0, 0.001], c='white')
    plt.arrow(0, 0, c*x[0], c*x[1], head_width = 0.5,
              width = 0.05, ec ='red')
    curr_vec = c*x
    bool_list.append(curr_vec.tolist() in py_list_of_py_lists)

print(bool_list)
plt.xlim([-lim, lim]); plt.ylim([-lim, lim])
plt.show();
[False, False, False, False, False, False, False, False, False, False, True, True, True, True, True, True, True, True, True, True, True]

(plot output)

A VERY General Example For A Span In $ℝ^n$

Note that in this example, we do not even know the number of dimensions $n$.

Is $ U = Span(\vec{V_1}, \vec{V_2}, \vec{V_3})$ a valid subspace of $ℝ^n$

where each of the vectors $\vec{V_1}, \vec{V_2}, \vec{V_3}$ have dimension $n$?

  1. $ 0 \vec{V_1} + 0 \vec{V_2} + 0 \vec{V_3} = \vec{0}$

  2. $ \vec{x} = c_1 \vec{V_1} + c_2 \vec{V_2} + c_3 \vec{V_3}$
    $ a \vec{x} = a c_1 \vec{V_1} + a c_2 \vec{V_2} + a c_3 \vec{V_3}$
    $ a \vec{x} = c_4 \vec{V_1} + c_5 \vec{V_2} + c_6 \vec{V_3}$
    all of these are in the $Span$ of $U$, which is our subspace, so U is closed under multiplication.

  3. Let's define another vector
    $\vec{y} = d_1 \vec{V_1} + d_2 \vec{V_2} + d_3 \vec{V_3}$
    $\vec{x} + \vec{y} = (c_1 + d_1) \vec{V_1} + (c_2 + d_2) \vec{V_2} + (c_3 + d_3) \vec{V_3}$
    but $\vec{x} + \vec{y}$ is just a linear combination of $\vec{V_1}, \vec{V_2}, \vec{V_3}$, which would be in the span of $U$, so U is closed under addition.

Thus, $U$ is a valid subspace of $ℝ^n$

A Simple Example Of A Valid Subspace in $ℝ^2$

Is $U = Span \begin{pmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix} \end{pmatrix}$ a valid subspace of $ℝ^2$?

  1. First, $ 0 \begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$, so this rule is valid.

  2. Any $ \vec{x} = c \begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} c \\ c \end{bmatrix} $
    and all $ \begin{bmatrix} c \\ c \end{bmatrix} $ would be in $U$, so $U$ is closed under multiplication.

  3. Some other vector $\vec{y} = d \begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} d \\ d \end{bmatrix}$ would be in $U$ by the previous rule,
    and $\vec{x} + \vec{y} = \begin{bmatrix} c+d \\ c+d \end{bmatrix}$,
    which is just a linear combination of $\begin{bmatrix} 1 \\ 1 \end{bmatrix}$ and would thus be in $U$, so $U$ is closed under addition.

Thus, $U = Span \begin{pmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix} \end{pmatrix}$ IS a valid subspace of $ℝ^2$.

Let's illustrate this with Python, NumPy, and MatPlotLib

import numpy as np
import matplotlib.pyplot as plt

plt.rcParams.update({  # Only needed for some dark modes
    'axes.edgecolor':'black', 'xtick.color':'black',
    'ytick.color':'black', 'figure.facecolor':'white'})

x_list = []
finite_span = 20
lim = 30
plt.figure(figsize=(10,6))
x = np.array([1, 1])
for c in range(-finite_span, finite_span+1, 1):
    x_list.append(c * x)
    plt.plot([0, 0.001],[0, 0.001], c='white')
    plt.arrow(0, 0, c*x[0], c*x[1], head_width = 0.5,
                width = 0.05, ec ='green')

plt.xlim([-lim, lim])
plt.ylim([-lim, lim])
plt.show();

(plot output)

plt.figure(figsize=(10,6))
for x in x_list:
    plt.plot([0, 0.001],[0, 0.001], c='white')
    plt.arrow(0, 0, x[0], x[1], head_width = 0.5,
              width = 0.05, ec ='green')

plt.xlim([-lim, lim]); plt.ylim([-lim, lim])
plt.show();

(plot output)

# Is [0, 0] in x_list? x_list is a Python list of NumPy arrays,
# so convert it to a list of plain Python lists to use the "in" operator
py_list_of_py_lists = [x.tolist() for x in x_list]
print([0, 0] in py_list_of_py_lists)
True
import random

list_1 = random.sample(x_list, 1)[0]
list_2 = random.sample(x_list, 1)[0]
print(list_1, list_2)
sum_of_any_two_lists = list_1 + list_2
print(sum_of_any_two_lists)
print(sum_of_any_two_lists.tolist() in py_list_of_py_lists)
[-4 -4] [14 14]
[10 10]
True
plt.figure(figsize=(10,6))
for x in x_list:
    plt.plot([0, 0.001],[0, 0.001], c='white')
    plt.arrow(0, 0, x[0], x[1], head_width = 0.5,
              width = 0.05, ec ='green')

bool_list = []
x = np.array([1, 1])
for c in range(-10, 10+1):
    plt.plot([0, 0.001],[0, 0.001], c='white')
    plt.arrow(0, 0, c*x[0], c*x[1], head_width = 0.5,
              width = 0.05, ec ='red')
    curr_vec = c*x
    bool_list.append(curr_vec.tolist() in py_list_of_py_lists)

print(bool_list)
plt.xlim([-lim, lim]); plt.ylim([-lim, lim])
plt.show();
[True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True]

(plot output)

The Basis For A Subspace Is A Set Of Linearly Independent Vectors

Let's say that $ V = Span(\vec{V_1}, \vec{V_2}, \dots, \vec{V_m})$ is a subspace of some $ℝ^n$

where the vectors $\vec{V_1}, \vec{V_2}, \dots, \vec{V_m}$ are linearly independent $\therefore$ no vector in the set is a linear combination of the others.

Then, if $S = \{\vec{V_1}, \vec{V_2}, \dots, \vec{V_m}\}$, $S$ is a basis for $V$.

However, if $T = \{\vec{V_1}, \vec{V_2}, \dots, \vec{V_m}, \vec{V_s}\}$, and $\vec{V_s} = \vec{V_1} + \vec{V_2}$, we can say that $Span(T) = V$, but $T$ is linearly dependent, and thus $T$ cannot be a basis for $V$. Thus, $\vec{V_s}$ is redundant.

Thus a basis is the minimum set of vectors that spans the subspace.
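That minimality can be sketched numerically: a set is a basis for $ℝ^n$ only if it has exactly $n$ vectors and they are independent. A hedged helper using np.linalg.matrix_rank (the example vectors are arbitrary illustrative choices):

```python
import numpy as np

def is_basis_of_Rn(vectors):
    """A set is a basis for R^n iff it has exactly n vectors
    and they are linearly independent (full rank)."""
    A = np.column_stack(vectors)
    n = A.shape[0]
    return len(vectors) == n and np.linalg.matrix_rank(A) == n

print(is_basis_of_Rn([np.array([2, 3]), np.array([7, 0])]))  # True
print(is_basis_of_Rn([np.array([2, 2]), np.array([1, 1])]))  # False - dependent
print(is_basis_of_Rn([np.array([1, 0]), np.array([0, 1]),
                      np.array([1, 1])]))                    # False - a redundant vector
```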

An Example Of Basis Of A Subspace

$S = \begin{Bmatrix} \begin{bmatrix} 2 \\ 3 \end{bmatrix} \begin{bmatrix} 7 \\ 0 \end{bmatrix} \end{Bmatrix}$. Is $S$ a basis for $ℝ^2$? Yes!

Please note that $S$ is not the only basis for $ℝ^2$.

See Sal's lecture for how to prove it algebraically. But note too that the two vectors are independent - neither can be obtained by scaling the other.

Apply the rules that we covered above.

Let's illustrate below with some code reused and modified from above. Remember, we have to imagine the span with $c_i \in ℝ$; the code only uses a finite set of $c_i$ values.

import numpy as np
import matplotlib.pyplot as plt

v1 = np.array([2, 3]); v2 = np.array([7, 0])

finite_span = 20
lim = 70
plt.figure(figsize=(10,6))
for c1 in range(-finite_span, finite_span+1, 1):
    for c2 in range(-finite_span, finite_span+1, 1):
        v3 = c1*v1 + c2*v2
        plt.plot([0, 0.001],[0, 0.001], c='white')
        plt.arrow(0, 0, v3[0], v3[1], head_width = 0.5,
                  width = 0.05, ec ='green')

plt.xlim([-3*lim, 3*lim]); plt.ylim([-lim, lim])
plt.show();

(plot output)

Another Example Of A Basis Of A Subspace $ℝ^2$

$T = \begin{Bmatrix} \begin{bmatrix} 1 \\ 0 \end{bmatrix} \begin{bmatrix} 0 \\ 1 \end{bmatrix} \end{Bmatrix}$. Is $T$ a basis for $ℝ^2$? Yes.

Apply the rules that we covered above.

See Sal's lecture for how to prove it algebraically.

Let's illustrate again below with some code reused and modified from above.

import numpy as np
import matplotlib.pyplot as plt

plt.rcParams.update({  # Only needed for some dark modes
    'axes.edgecolor':'black', 'xtick.color':'black',
    'ytick.color':'black', 'figure.facecolor':'white'})

v1 = np.array([1, 0]); v2 = np.array([0, 1])
print(v1)
print(v2)

finite_span = 20
lim = 30
plt.figure(figsize=(10,6))
for c1 in range(-finite_span, finite_span+1, 1):
    for c2 in range(-finite_span, finite_span+1, 1):
        v3 = c1*v1 + c2*v2
        plt.plot([0, 0.001],[0, 0.001], c='white')
        plt.arrow(0, 0, v3[0], v3[1], head_width = 0.5,
                  width = 0.05, ec ='green')

plt.xlim([-lim, lim]); plt.ylim([-lim, lim])
plt.show();
[1 0]
[0 1]

(plot output)

Another Example Of A Basis Of A Subspace $ℝ^3$

$T = \begin{Bmatrix} \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \end{Bmatrix}$. Is $T$ a basis for $ℝ^3$? Yes.

$U = \begin{Bmatrix} \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} \end{Bmatrix}$. Is $U$ a basis for $ℝ^3$? NO!

Why? The sum of the first $3$ vectors equals the $4^{th}$ vector, so the set is linearly dependent.
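We can confirm this with a quick rank check - np.linalg.matrix_rank counts how many of the stacked vectors are independent:

```python
import numpy as np

T = [np.array([1, 0, 0]), np.array([0, 1, 0]), np.array([0, 0, 1])]
U = T + [np.array([1, 1, 1])]  # the 4th vector is the sum of the first 3

rank_T = np.linalg.matrix_rank(np.column_stack(T))
rank_U = np.linalg.matrix_rank(np.column_stack(U))

print(rank_T, len(T))  # 3 3 -> 3 independent vectors in R^3: a basis
print(rank_U, len(U))  # 3 4 -> only 3 of the 4 are independent: not a basis
```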

Vector Dot And Cross Products


Vector Dot Product And Vector Length


The dot product of two vectors is the sum of the products of their individual elements / components.

$$ \vec{x} \cdot \vec{y} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} \cdot \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix} = x_1 y_1 + x_2 y_2 + \dots + x_n y_n $$

Also, the length of a vector $ \vec{a} $ is the square root of the dot product of that vector with itself ...

$$ \lVert \vec{a} \rVert = \sqrt{ \vec{a} \cdot \vec{a} } = \sqrt{ a_1 a_1 + a_2 a_2 + \dots + a_n a_n } $$

Note that dot products of vectors yield scalars.

v1 = [1, 1]; v2 = [2, 2]

v1_dot_v2 = v1[0] * v2[0] + v1[1] * v2[1]
print(f'The dot product of v1 and v2 = {v1_dot_v2}')
The dot product of v1 and v2 = 4
v1 = [1, 1]; v2 = [2, 2]

def dot_product(va, vb):
    assert len(va) == len(vb), "Vectors are not the same dimension"
    va_dot_vb = 0
    for i in range(len(va)):
        va_dot_vb += va[i] * vb[i]

    return va_dot_vb

print(f'The dot product of v1 and v2 = {dot_product(v1, v2)}')
The dot product of v1 and v2 = 4
import numpy as np

print(f'The dot product of v1 and v2 = {np.dot(v1, v2)}')
The dot product of v1 and v2 = 4
import numpy as np

print(f'The dot product of v1 and v2 = {np.dot(np.array(v1), np.array(v2))}')
The dot product of v1 and v2 = 4
va = [3, 4]

print(f'The length of va = {(np.dot(va, va))**0.5}')
The length of va = 5.0

Proof Of The Cauchy-Schwarz Inequality With Code

Let's first review the Cauchy-Schwarz Inequality with math ...

$$ \lVert \vec{x} \rVert \lVert \vec{y} \rVert \ge \lvert \vec{x} \cdot \vec{y} \rvert $$

Let's think about this together. The Cauchy-Schwarz Inequality says that the product of the magnitudes of two vectors is always greater than or equal to the absolute value of the dot product of those two vectors.

Let's think about it even more. If the two vectors have the same direction, such that the angle between them is 0 degrees, then the product of their magnitudes equals the absolute value of their dot product. If there is even a slight angle between the two vectors, then the product of their magnitudes will be greater than the absolute value of their dot product.

Sal does a great job of proving this in his lecture on the Cauchy-Schwarz Inequality. Let's do a pseudo proof with code.

Let's create a function that will check the Cauchy-Schwarz inequality for any given two vectors.

def cauchy_schwarz_inequality(va, vb):
    assert len(va) == len(vb), "Vectors are not the same dimension"
    va_norm = np.linalg.norm(va)
    vb_norm = np.linalg.norm(vb)
    abs_va_dot_vb = np.abs(np.dot(va, vb))

    the_check = round(va_norm * vb_norm, 12) >= round(abs_va_dot_vb, 12)
    if not the_check:
        print(va_norm * vb_norm)
        print(abs_va_dot_vb)
        return "Cauchy-Schwarz Inequality Failed!"
    else:
        return "Cauchy-Schwarz Inequality Holds!"
count = 0
finite_span = 10
did_not_hold = False

for i in range(-finite_span, finite_span+1):
    for j in range(-finite_span, finite_span+1):
        for k in range(-finite_span, finite_span+1):
            for l in range(-finite_span, finite_span+1):
                count += 1
                result = cauchy_schwarz_inequality([i, j], [k, l])
                if result != "Cauchy-Schwarz Inequality Holds!":
                    did_not_hold = True

if did_not_hold:
    print("Cauchy-Schwarz Inequality Failed!")
else:
    print(f"Cauchy-Schwarz Inequality held for {count} tests")
Cauchy-Schwarz Inequality held for 194481 tests

VPython Code To Run From An IDE

# You MIGHT be able to get this to run in Colab, but
# I have not yet done this
import vpython as vp

floor=vp.box(pos=vp.vector(0,0,0),
              size=vp.vector(12,0.5,12),
              color=vp.color.red)

ball=vp.sphere(pos=vp.vector(0,7,0),
               radius=2,
               color=vp.color.blue)

ball.velocity = vp.vector(0,-1,0)
dt = 0.01

while True:
    vp.rate(100)
    ball.pos = ball.pos + ball.velocity * dt
    if ball.pos.y < (ball.radius+floor.size.y/2):
        ball.velocity.y = abs(ball.velocity.y)
    else:
        ball.velocity.y = ball.velocity.y - 9.8 * dt

Vector Triangle Inequality

Cauchy-Schwarz Inequality Repeated

For $ \vec{x} $ and $ \vec{y} \in ℝ^n $,

with $ \vec{x} $ and $ \vec{y} $ as non-zero vectors,

$$\lVert \vec{x} \rVert \lVert \vec{y} \rVert \ge \lvert \vec{x} \cdot \vec{y} \rvert $$

Note too that if $c$ is non-zero, then $ \vec{x} = c \, \vec{y} \iff \lVert \vec{x} \rVert \lVert \vec{y} \rVert = \lvert \vec{x} \cdot \vec{y} \rvert $.
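A quick numerical check of that equality condition (the vectors and the scalar $c = 4$ are arbitrary illustrative choices):

```python
import numpy as np

x = np.array([3, 1])
y = 4 * x                    # y = c x with c = 4, so x and y are parallel

lhs = np.linalg.norm(x) * np.linalg.norm(y)
rhs = np.abs(np.dot(x, y))
print(np.isclose(lhs, rhs))  # True - equality holds for parallel vectors

y2 = np.array([1, 3])        # NOT a scalar multiple of x
lhs2 = np.linalg.norm(x) * np.linalg.norm(y2)
rhs2 = np.abs(np.dot(x, y2))
print(lhs2 > rhs2)           # True - strict inequality otherwise
```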

Let's experiment. Remember that the length of a vector squared is equal to the dot product of that vector with itself.

Let's remind ourselves that

$$ {\lVert \vec{a} \rVert}^2 = \vec{a} ⋅ \vec{a} $$

So, let's try something interesting ...

$$ {\lVert \vec{x} + \vec{y} \rVert}^2 = (\vec{x} + \vec{y}) ⋅ (\vec{x} + \vec{y}) $$

$$ {\lVert \vec{x} + \vec{y} \rVert}^2 = (\vec{x} ⋅ \vec{x}) + 2(\vec{x} ⋅ \vec{y}) + (\vec{y} ⋅ \vec{y}) $$

$$ {\lVert \vec{x} + \vec{y} \rVert}^2 = {\lVert \vec{x} \rVert}^2 + 2(\vec{x} ⋅ \vec{y}) + {\lVert \vec{y} \rVert}^2 $$

Now remember that

$$ \lvert \vec{x} ⋅ \vec{y} \rvert \ge \vec{x} ⋅ \vec{y} $$

and thus

$$ \lVert \vec{x} \rVert \lVert \vec{y} \rVert \ge \lvert \vec{x} \cdot \vec{y} \rvert \ge \vec{x} ⋅ \vec{y} $$

Therefore,

$$ {\lVert \vec{x} \rVert}^2 + 2 \lVert \vec{x} \rVert \lVert \vec{y} \rVert + {\lVert \vec{y} \rVert}^2 ≥ {\lVert \vec{x} + \vec{y} \rVert}^2 $$

or

$$ {\lVert \vec{x} + \vec{y} \rVert}^2 ≤ {\lVert \vec{x} \rVert}^2 + 2 \lVert \vec{x} \rVert \lVert \vec{y} \rVert + {\lVert \vec{y} \rVert}^2 $$

But, looking at the right, we see that it is a square. So,

$$ {\lVert \vec{x} + \vec{y} \rVert}^2 ≤ (\lVert \vec{x} \rVert + \lVert \vec{y} \rVert)^2 $$

Now, let's take the square root of both sides ...

$$ \lVert \vec{x} + \vec{y} \rVert ≤ \lVert \vec{x} \rVert + \lVert \vec{y} \rVert $$

This is called The Triangle Inequality
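Before we reach for the graph paper, we can sanity-check the Triangle Inequality numerically over many random vector pairs - a pseudo proof in the same spirit as the Cauchy-Schwarz check above (the seed and sample count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(42)  # seed is an arbitrary choice
violations = 0

for _ in range(10_000):
    x = rng.uniform(-10, 10, size=2)
    y = rng.uniform(-10, 10, size=2)
    # Small tolerance guards against floating point round-off
    if np.linalg.norm(x + y) > np.linalg.norm(x) + np.linalg.norm(y) + 1e-12:
        violations += 1

print(f'Triangle Inequality violations found: {violations}')  # 0
```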

So now it's time for our Python and Matplotlib graph paper!

import numpy as np
import matplotlib.pyplot as plt

plt.rcParams.update({  # Only needed for some dark modes
    'axes.edgecolor':'black', 'xtick.color':'black',
    'ytick.color':'black', 'figure.facecolor':'white'})

# Create a blank plotting canvas for the vectors
plt.plot([0, 0.001],[0, 0.001], c='white')
plt.xlim([-3, 3]); plt.ylim([-1, 7])
# plt.xlim([-5, 3]); plt.ylim([-1, 7])
plt.grid()

x = [2, 4]; y = [-4, 2]

plt.arrow(0, 0, x[0], x[1], head_width = 0.2, width = 0.05,
          ec ='green', length_includes_head=True)
plt.arrow(2, 4, y[0], y[1], head_width = 0.2, width = 0.05,
          ec ='red', length_includes_head=True)
# plt.arrow(0, 0, y[0], y[1], head_width = 0.2, width = 0.05,
#           ec ='red', length_includes_head=True)

x_plus_y = np.array(x) + np.array(y)

plt.text(1, 1, r'$\vec{x}$', fontsize=14, ha='left')
plt.text(0, 5.5, r'$\vec{y}$', fontsize=14, ha='left')
plt.text(-2, 2.5, r'$\vec{x} + \vec{y}$', fontsize=14, ha='left')

plt.arrow(0, 0, x_plus_y[0], x_plus_y[1], head_width = 0.2,
          width = 0.05, ec ='blue', length_includes_head=True)
plt.show();

*(plot: $\vec{x}$, $\vec{y}$ drawn head-to-tail, and their sum $\vec{x} + \vec{y}$)*

Now, what is the extreme case where

$$ \lVert \vec{x} + \vec{y} \rVert = \lVert \vec{x} \rVert + \lVert \vec{y} \rVert $$

Equality holds when $\vec{x}$ and $\vec{y}$ are pointing in the same direction, so that $\lVert \vec{x} + \vec{y} \rVert$ is as large as possible. When they point in opposite directions, the left-hand side reaches its minimum instead.

import numpy as np
import matplotlib.pyplot as plt

# Create a blank palette plot for vectors
plt.plot([0, 0.001],[0, 0.001], c='white')
plt.xlim([-7, 1]); plt.ylim([-1, 7])
# plt.xlim([-3, 3]); plt.ylim([-3, 3])
plt.grid()

x = [-2, 2]; y = [-4, 4]
# x = [2, 2]; y = [-4, -4]

plt.arrow(0, 0, x[0], x[1], head_width = 0.2, width = 0.05,
          ec ='green', length_includes_head=True)
plt.arrow(x[0]-0.1, x[1]+0.1, y[0], y[1], head_width = 0.2, width = 0.05,
          ec ='red', length_includes_head=True)

x_plus_y = np.array(x) + np.array(y)

plt.text(-1.5, 0.5, r'$\vec{x}$', fontsize=14, ha='left')
plt.text(-4, 3.0, r'$\vec{y}$', fontsize=14, ha='left')
plt.text(-3, 3.7, r'$\vec{x} + \vec{y}$', fontsize=14, ha='left')

# NOTE that we are offsetting by the 2 * width of arrows to see all vectors
plt.arrow(0.1, 0.1, x_plus_y[0], x_plus_y[1], head_width = 0.2,
          width = 0.05, ec ='blue', length_includes_head=True)
# plt.arrow(0.1, -0.1, x_plus_y[0], x_plus_y[1], head_width = 0.2,
#           width = 0.05, ec ='blue', length_includes_head=True)
plt.show();

*(plot: $\vec{x}$ and $\vec{y}$ pointing in the same direction, with $\vec{x} + \vec{y}$)*

We find a minimum for the left hand side of

$$ \lVert \vec{x} + \vec{y} \rVert ≤ \lVert \vec{x} \rVert + \lVert \vec{y} \rVert $$

when the vectors point in perfectly opposite directions (they are $ 180^{\circ} $ with respect to each other).

import numpy as np
import matplotlib.pyplot as plt

# Create a blank palette plot for vectors
plt.plot([0, 0.001],[0, 0.001], c='white')
# plt.xlim([-7, 1]); plt.ylim([-1, 7])
plt.xlim([-3, 3]); plt.ylim([-3, 3])
plt.grid()

# x = [-2, 2]; y = [-4, 4]
x = [2, 2]; y = [-4, -4]

plt.arrow(0, 0, x[0], x[1], head_width = 0.2, width = 0.05,
          ec ='green', length_includes_head=True)
plt.arrow(x[0]-0.1, x[1]+0.1, y[0], y[1], head_width = 0.2, width = 0.05,
          ec ='red', length_includes_head=True)

x_plus_y = np.array(x) + np.array(y)

plt.text(-1.5, 0.5, r'$\vec{x}$', fontsize=14, ha='left')
plt.text(-4, 3.0, r'$\vec{y}$', fontsize=14, ha='left')
plt.text(-3, 3.7, r'$\vec{x} + \vec{y}$', fontsize=14, ha='left')

# NOTE that we are offsetting by the 2 * width of arrows to see all vectors
# plt.arrow(0.1, 0.1, x_plus_y[0], x_plus_y[1], head_width = 0.2,
#           width = 0.05, ec ='blue', length_includes_head=True)
plt.arrow(0.1, -0.1, x_plus_y[0], x_plus_y[1], head_width = 0.2,
          width = 0.05, ec ='blue', length_includes_head=True)
plt.show();

*(plot: $\vec{x}$ and $\vec{y}$ pointing in opposite directions, with $\vec{x} + \vec{y}$)*

Finally, what is ULTRA cool is that this works for $ ℝ^n $!
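Since the claim is that the triangle inequality holds in any $ℝ^n$, we can spot-check it numerically in, say, $ℝ^5$ (the dimension and seed are arbitrary choices):

```python
import numpy as np

# The triangle inequality holds in R^n; spot-check it in R^5
rng = np.random.default_rng(1)
for _ in range(1000):
    x, y = rng.normal(size=5), rng.normal(size=5)
    assert np.linalg.norm(x + y) <= np.linalg.norm(x) + np.linalg.norm(y) + 1e-12
print("||x + y|| <= ||x|| + ||y|| held for 1000 random pairs in R^5")
```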

Our Dynamic Experiment

import numpy as np
import matplotlib.pyplot as plt
from math import cos, sin, acos, pi

plt.rcParams.update({  # Only needed for some dark modes
    'axes.edgecolor': 'black', 'xtick.color': 'black',
    'ytick.color': 'black', 'figure.facecolor': 'white'})

# Create a blank palette plot for vectors
plt.figure(figsize=(10, 10))
plt.plot([0, 0.001],[0, 0.001], c='white')
plt.xlim([-3, 7]); plt.ylim([-1, 10])
plt.grid()

xmag = 5; ymag = 5; x = [3, 4]

align_dir = acos(3/5)/pi*180
Angles = np.arange(0, 180 + 1, 30)

plt.arrow(0, 0, x[0], x[1], head_width = 0.2, width = 0.05,
          ec ='green', length_includes_head=True)
plt.text(2, 1.8, r'$\vec{x}$', fontsize=14, ha='left')

for angle in Angles:
    rad = (angle + align_dir) / 180 * pi
    trad = (angle + align_dir - 6) / 180 * pi
    y = [ymag*cos(rad), ymag*sin(rad)]
    ty = [ymag*cos(trad), ymag*sin(trad)]

    plt.arrow(x[0], x[1], y[0], y[1], head_width = 0.2, width = 0.05,
              ec ='red', length_includes_head=True)

    z = np.array(x) + np.array(y)
    tp = np.array(x) + np.array(ty)*0.7

    plt.text(tp[0], tp[1], r'$\vec{y}$', fontsize=14, ha='left')
    plt.text(z[0]*0.9 - 0.25, z[1]*0.9, r'$\vec{x} + \vec{y}$', fontsize=14,
             ha='right')
    plt.text(z[0]*0.8 - 0.2, z[1]*0.8 - 0.25,
             f'{round(angle, 2)}'+r'$^{\circ}$', fontsize=14, ha='right')

    plt.arrow(0.07, -0.07, z[0], z[1], head_width = 0.2, width = 0.05,
              ec ='blue', length_includes_head=True)
    print(f'x + y with y {angle} degrees ccw from x = {np.linalg.norm(z):.5f}')

print()
plt.show();
x + y with y 0 degrees ccw from x = 10.00000
x + y with y 30 degrees ccw from x = 9.65926
x + y with y 60 degrees ccw from x = 8.66025
x + y with y 90 degrees ccw from x = 7.07107
x + y with y 120 degrees ccw from x = 5.00000
x + y with y 150 degrees ccw from x = 2.58819
x + y with y 180 degrees ccw from x = 0.00000

*(plot: $\vec{x} + \vec{y}$ as $\vec{y}$ sweeps from $0^{\circ}$ to $180^{\circ}$ away from $\vec{x}$)*

A Line In 2D, $ℝ^2$, Using Linear Algebra

An equation that defines a line in 2D space, $ℝ^2$, is

$$ \begin{bmatrix} a_1 \\ a_2 \end{bmatrix} \cdot \begin{bmatrix} x \\ y \end{bmatrix} = a_1 x + a_2 y = d $$

But what do we choose for $ a_1 $ and $ a_2 $? The vector normal to the line that we want.

If I want a slope of 1, the normal to that line is the vector

$$ \begin{bmatrix} a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} -1 \\ 1 \end{bmatrix} $$

What is the value for $d$? It sets the intercepts: solving for $y$ in terms of $x$ gives a $y$ intercept of $d / a_2$, and solving for $x$ in terms of $y$ gives an $x$ intercept of $d / a_1$!

$$ y = -\frac{a_1}{a_2} x + \frac{d}{a_2} = m x + y_{int} $$

Let's see 3 examples of normal vectors and $y$ intercepts:

| Normal | Y Intercept |
| --- | --- |
| [-2, 1] | 1 |
| [-1, 1] | 0 |
| [-1, 2] | -1 |
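As a quick check, the slope and $y$ intercept can be computed directly from each normal. The helper name `line_from_normal` and the $d$ values below are my own, chosen so the results reproduce the table above:

```python
def line_from_normal(a1, a2, d):
    """Return (slope, y-intercept) of the line a1*x + a2*y = d."""
    return -a1 / a2, d / a2

print(line_from_normal(-2, 1, 1))   # slope 2.0, y intercept 1.0
print(line_from_normal(-1, 1, 0))   # slope 1.0, y intercept 0.0
print(line_from_normal(-1, 2, -2))  # slope 0.5, y intercept -1.0
```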
x = []; yofx_10 = []; yofx_21 = []; yofx_0p5m1 = []

for i in range(-60, 61, 2):
    x.append(i/10)
    yofx_10.append(1*i/10+0)
    yofx_21.append(2*(i/10)+1)
    yofx_0p5m1.append(0.5*(i/10)-1)
import matplotlib.pyplot as plt

plt.figure(figsize=(8,8))
plt.xlim([-6, 6]); plt.ylim([-6, 6])
plt.plot(x, yofx_10); plt.plot(x, yofx_21); plt.plot(x, yofx_0p5m1)
plt.arrow(1, 3, -1, 0.5, head_width = 0.2, width = 0.05,
              ec ='red', length_includes_head=True)
plt.arrow(4, 4, -1, 1, head_width = 0.2, width = 0.05,
              ec ='red', length_includes_head=True)
plt.arrow(4, 1, -0.5, 1, head_width = 0.2, width = 0.05,
              ec ='red', length_includes_head=True)
plt.grid()
plt.show()

*(plot: the three lines with their normal vectors drawn in red)*

Can we also treat $\bf A$ as a vector?

import pandas as pd

line = [[-3, -2], [0, 1], [3, 4]]
dfl = pd.DataFrame(data=line, columns=['X', 'Y'])
dfl

X Y
0 -3 -2
1 0 1
2 3 4


The Equation for a Plane in 3D Space - $ℝ^3$

$$ \vec A \cdot \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} \cdot \begin{bmatrix} x \\ y \\ z \end{bmatrix} = a_1 x + a_2 y + a_3 z = d $$

where $\bf A$ is normal to the plane!

Now, IF you do NOT know $A$, but you know 3 points in the plane, you can solve for $A$ this way ...

$$ \begin{bmatrix} x_1 & y_1 & z_1 \\ x_2 & y_2 & z_2 \\ x_3 & y_3 & z_3 \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} d \\ d \\ d \end{bmatrix} $$

We need at least 3 points (not all on one line) in $ ℝ^3 $ to develop this equation for a plane. Once we have $ \bf A $ and $ d $, we can create a linearly spaced grid of points in $x$ and $y$ and solve for $z$.

$$ z = (d - a_1 x - a_2 y) / a_3 $$

You could also just solve for $x$ or $y$.

Let's start with a simple and obvious equation of a plane problem.

$$ \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} $$

which yields,

$$ \vec A = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} $$
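As a sanity check, `np.linalg.solve` recovers the same $ \vec A $ from the three points (rows of the matrix) with $d = 1$:

```python
import numpy as np

# Rows are the three known points in the plane; solve P @ A = d * ones with d = 1
P = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
A = np.linalg.solve(P, np.ones(3))
print(A)  # [1. 1. 1.]
```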

Now we create our grid of points in the $x$ - $y$ plane, and our equation for $z$ simplifies to

$$ z = (1 - x - y) $$

Let's plot this in 3D.

xyz = []
for i in range(-20,21,2):
    for j in range(-20,21,2):
        x = i/10
        y = j/10
        z = 1 - x - y
        xyz.append([x, y, z, 0.1])
import pandas as pd

dfp = pd.DataFrame(data=xyz, columns=['X', 'Y', 'Z', 'size'])
dfp

X Y Z size
0 -2.0 -2.0 5.0 0.1
1 -2.0 -1.6 4.6 0.1
2 -2.0 -1.2 4.2 0.1
3 -2.0 -0.8 3.8 0.1
4 -2.0 -0.4 3.4 0.1
... ... ... ... ...
116 2.0 0.4 -1.4 0.1
117 2.0 0.8 -1.8 0.1
118 2.0 1.2 -2.2 0.1
119 2.0 1.6 -2.6 0.1
120 2.0 2.0 -3.0 0.1

121 rows × 4 columns


Can we also treat $\bf A$ as a vector?

line = [[0, 0, 0], [1, 1, 1]]
dfl = pd.DataFrame(data=line, columns=['X', 'Y', 'Z'])
dfl

X Y Z
0 0 0 0
1 1 1 1

import plotly.express as px
import plotly.graph_objects as go

fig1 = px.scatter_3d(dfp, x='X', y='Y', z='Z',
                     title='Plane x + y + z = 1 with its normal vector A', size='size')
fig2 = px.line_3d(dfl, x='X', y='Y', z='Z')
fig3 = go.Figure(data = fig1.data + fig2.data)
fig3.show()