diff --git a/Assignments/Images/Assignment_Square_Lec10to12.PNG b/Assignments/Images/Assignment_Square_Lec10to12.PNG new file mode 100644 index 0000000..6af6464 Binary files /dev/null and b/Assignments/Images/Assignment_Square_Lec10to12.PNG differ diff --git a/Assignments/Images/Assignment_Triangle_Lec10to12.PNG b/Assignments/Images/Assignment_Triangle_Lec10to12.PNG new file mode 100644 index 0000000..3d158e7 Binary files /dev/null and b/Assignments/Images/Assignment_Triangle_Lec10to12.PNG differ diff --git a/Assignments/Lec3_MIT-7to9_Assignment.md b/Assignments/Lec3_MIT-7to9_Assignment.md new file mode 100644 index 0000000..6e5fcfd --- /dev/null +++ b/Assignments/Lec3_MIT-7to9_Assignment.md @@ -0,0 +1,16 @@ +## Assignment +Q) Write a program to find a Reduced row echelon form, xparticular, xnullspace, xcomplete solutions of a given Ax = b in which matrix A of any shape (mxn) . You can use any programming language, however use of any linear algebra/mathematical library is not allowed. + +You need to submit the code and screenshot of the result. + +Input: +* Matrix A +* Matrix b + +Result should print the following: +* Reduced Row echelon form matrix +* xparticular +* xnullspace +* xcomplete + +Make sure the code is well commented. \ No newline at end of file diff --git a/Assignments/Lec4_MIT-10to12_Assignment.md b/Assignments/Lec4_MIT-10to12_Assignment.md new file mode 100644 index 0000000..1ecdaa2 --- /dev/null +++ b/Assignments/Lec4_MIT-10to12_Assignment.md @@ -0,0 +1,71 @@ +## Assignment for Lectures 10 to 12 + +Upload different documents for each question. + +### Q.1) Orthogonality in Subspaces + +Prove the orthogonality between: +1. row space and nullspace, AND +2. column space and left nullspace. + +This answer should be handwritten and uploaded as a PDF file. + +### Q.2) Code for Four Fundamental Subspaces + +Write a code (in any language) to find the basis to the four fundamental subspaces of a matrix. Don't use linear algebra libraries. + +Input Format: + +``` + + +``` + +Output Format: + +``` + + + + +``` + +Upload the code file and a document containing screenshot(s) of the output. + +### Q.3) Code to Find the Current and Potentials for a Graph + +Write a code (in any language) to find the current along the edges and potentials at the nodes of a graph (electric circuit) using the incidence matrix, conductance along edges, and external current source (refer Section 10.1 of Introduction to Linear Algebra by Gilbert Strang Fifth Edition - Pg. 457). Don't use linear algebra libraries. + +Input Format: + +``` + + + + +``` + +Output Format: + +``` + + +``` + +Upload the code file and a document containing screenshot(s) of the output. Make assumptions wherever required but state them clearly in the document along with conventions used. + +### Q.4) Problems on Graphs + +1. Write down the 3 by 3 incidence matrix A for the triangle graph. The first row has -1 in column 1 and +1 in column 2. What vectors (x1, x2, x3) are in its nullspace? How do you know that (1, 0, 0) is not in its row space? + +![Triangle Network](Images/Assignment_Triangle_Lec10to12.PNG) + +2. With conductances c1 = c2 = 2 and c3 = c4 = c5 = 3, multiply the matrices ATCA. Find a solution to ATCAx = f = (1, 0, 0, -1). Write these potentials x and currents y = -CAx on the nodes and edges of the square graph. + +![Square Network](Images/Assignment_Square_Lec10to12.PNG) + +3. Suppose A is a 11 by 8 incidence matrix formed from a connected unknown graph. How many columns of A are independent? What significance does ATA have? 
HINT: Check the diagonal entries. With this information, what is the sum of the diagonal entries of ATA for this specific 11 by 8 matrix?
+
+4. If A = uvT is a 2 by 2 matrix of rank 1, what are the four fundamental subspaces of A? If another matrix B produces the same four subspaces, what is the relationship between A and B?
+
+These answers should be handwritten and uploaded as a PDF file.
diff --git a/Assignments/Lec5_MIT-13to16_Assignment.md b/Assignments/Lec5_MIT-13to16_Assignment.md
new file mode 100644
index 0000000..d8dda2b
--- /dev/null
+++ b/Assignments/Lec5_MIT-13to16_Assignment.md
@@ -0,0 +1,7 @@
+# Assignment 5: Implementing Simple Linear Regression
+
+Implement a simple linear regression model from scratch, using only numpy and other basic libraries (sklearn is allowed only for loading datasets).
+
+Implement it on the following datasets:
+1. The Iris dataset, regressing between Petal Length and Petal Width (learn to split the data on your own :)) (10 points)
+2. (Bonus 10 points) https://en.wikipedia.org/wiki/Transistor_count#Microprocessors : verify the accuracy of Moore's law from the data available there (no libraries allowed for the OLS regression).
diff --git a/RevisionNotes/3b1b_1.md b/RevisionNotes/3b1b_1.md
new file mode 100644
index 0000000..192aa4d
--- /dev/null
+++ b/RevisionNotes/3b1b_1.md
@@ -0,0 +1,111 @@
+# Vectors, what even are they?
+
+## Meaning of Vector
+* Viewpoints of different domains of study:
+  * for the Physicist (arrows pointing in space)
+  * for the CS student (ordered lists of numbers)
+  * for the Mathematician (generalizes both ideas together).
+
+* In linear algebra a vector is almost always rooted at the origin (0, 0); its tail stays fixed, which differs from the physics convention where a vector can sit anywhere in space.
+
+* A vector's coordinates say how far to move along the x-axis and the y-axis, and they can also be written as a 2x1 matrix.
+
+## Addition
+* Vector addition finds the single vector equivalent to applying the given vectors one after another. In simple words: place the first vector with its tail at the origin, place the next vector's tail at the previous vector's head, and when there are no more vectors, draw a straight arrow from the origin to the head of the last vector. That arrow is the sum of the vectors.
+
+## Scaling
+* Multiplying a vector by a positive scalar changes its magnitude but leaves its direction unchanged. The scalar changes the size of the vector; it "scales" the vector.
+
+# Linear combinations, span, and basis vectors
+
+* i is the unit vector along the x-axis and j is the unit vector along the y-axis.
+
+* i and j form the basis of the coordinate system (every other vector is just a scaled combination of these basis vectors).
+
+* **Basis**: a basis of a vector space is a set of linearly independent vectors that spans the full space.
+
+* **Linear combination**: an expression built from a set of terms by multiplying each term by a constant and adding the results (e.g. a linear combination of x and y is any expression of the form ax + by, where a and b are constants).
+
+* The choice of basis vectors matters when choosing a coordinate system.
+
+* **Span**: the span of two vectors is the set of all their linear combinations (i.e. the points they can reach).
+
+* If adding a vector does not enlarge the span (it already lies in the span of the others), the vectors are linearly dependent; if it adds a new dimension to the span, they are linearly independent.
+
+* The same idea of basis and span carries over to 3 dimensions: a linear combination of 3 independent vectors can span all of 3-D space.
+
+![Linear Combination of Scalars](https://miro.medium.com/max/875/1*oUMNZs9xh-Hnyc2gs4RpVQ.png 'Linear Combination of Scalars')
+
+![Every point in the Plane can be reached by scaling the two vectors](https://miro.medium.com/max/875/1*aILVKRoggnGh4MXzrWHz-g.png 'Every point in the Plane can be reached by scaling the two vectors' )
+
+# Linear transformations and matrices
+
+**Linear Transformation**: Linear (all lines must remain straight, parallel lines stay evenly spaced, and the origin stays fixed) + Transformation (a function that takes an input vector and gives an output vector; the word "transformation" is used instead of "function" to suggest the movement of a vector from its initial position to its final position).
+
+Since we need to think about a lot of vectors and their transformations at once, it is easier to visualize them as points in space.
+
+![](https://miro.medium.com/max/875/1*G9xdFqbxSUkcz1ng5_gvmQ.png 'Linear Transformation')
+
+* A 2-D linear transformation depends on only 4 numbers:
+  * the 2 coordinates where i lands, and
+  * the 2 coordinates where j lands.
+
+These four numbers can be written in matrix form as follows:
+
+![](https://miro.medium.com/max/875/1*ZJdQgbjflCjXAssTCsB7Bg.png '2X2 Matrix')
+
+Here each column holds the point where the i or j basis vector lands after the transformation.
+
+# Matrix multiplication as composition
+
+* **Composition** is a way of chaining transformations together: apply one transformation, then another. The composition of matrix transformations corresponds to multiplying the two matrices together.
+
+* Matrix multiplication is not commutative.
+
+* Matrix multiplication is associative: (AB)C = A(BC), since both sides apply the same three transformations in the same order.
+
+![](https://miro.medium.com/max/875/1*fV_fDIHuPFQOVDTjsOLxCQ.png 'Composition of Matrices')
+
+# Three-dimensional linear transformations
+
+All the concepts from 2-D matrix transformations carry over to 3-D transformations. The only difference is that instead of 4 numbers in the matrix we now work with 9.
+
+Each of the 3 columns represents the landing position of the i, j and k basis vectors respectively.
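+
+To make the "columns are where the basis vectors land" reading concrete, here is a tiny sketch (the matrix entries are arbitrary, chosen only for illustration):
+
+```python
+# Columns of the matrix = landing spots of the basis vectors.
+# Applying the matrix to (x, y) is just x * (first column) + y * (second column).
+i_hat = (1, -2)   # where i lands after the transformation (first column)
+j_hat = (3,  0)   # where j lands after the transformation (second column)
+
+def transform(v):
+    x, y = v
+    return (x * i_hat[0] + y * j_hat[0],
+            x * i_hat[1] + y * j_hat[1])
+
+print(transform((1, 0)))   # (1, -2): i goes to the first column
+print(transform((0, 1)))   # (3, 0): j goes to the second column
+print(transform((2, 1)))   # 2*i_hat + 1*j_hat = (5, -4)
+```
+
+The same reading works in 3-D, with a third column recording where k lands.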
+
+![](https://miro.medium.com/max/875/1*wqo5egbllyA_REdr2T4xqg.png '3-D Linear Transformations')
+
+# **Determinant**
+
+ - The determinant of a matrix is the factor by which the transformation scales areas
+ ![](Images/Lect6_1.png)
+ - The determinant is zero if the transformation squishes space down to a lower dimension
+ - A negative determinant means the orientation of space has been flipped
+ - For a 3x3 matrix the determinant is the factor by which volumes are scaled
+
+ ![](Images/Lect6_2.png)
+ ![](Images/Lect6_3.png)
+
+# **Inverse, Rank**
+
+![](Images/Lect7_1.png)
+- Solving Ax = v can be read visually: find the vector x that lands on v after the transformation A
+- The inverse transformation is the transformation that takes v back to x
+- The inverse exists as long as the determinant is not zero (if it is zero, space is squished onto a line or a point, and no single transformation can un-squish that back into a full area or volume)
+- A-inverse times A does nothing, so it is the identity matrix
+- A solution can still exist when the determinant is zero, provided v happens to lie on the line (or plane) that space is squished onto
+- If a 2-D transformation squishes everything onto a line, the rank of the matrix is 1; if a 3-D transformation lands on a 2-D plane instead of a line, the rank is 2
+- RANK: the number of dimensions in the output
+- Whether it is a line or a plane, the set of all possible outputs is called the column space of the matrix, since each column tells where a basis vector lands
+- Span of the columns = column space
+
+![](Images/Lect7_2.png)
+
+- (0, 0) is always in the column space, since a linear transformation keeps the origin fixed. The set of all vectors that land on the origin after the transformation is called the null space (the kernel) of the matrix
+
+# **Non-Square Matrices and Transformations**
+- A non-square matrix transforms between spaces of different dimensions, e.g. from 2-D to 1-D or from 2-D to 3-D
+
diff --git a/RevisionNotes/3b1b_2.md b/RevisionNotes/3b1b_2.md
new file mode 100644
index 0000000..0a35786
--- /dev/null
+++ b/RevisionNotes/3b1b_2.md
@@ -0,0 +1,106 @@
+# **Dot Product and Duality**
+![](Images/Lect9_1.png)
+
+- The dot product projects one vector onto the other and multiplies the projected length by the other vector's length; if the projection points in the opposite direction the dot product is negative, and if the vectors are perpendicular the dot product is zero
+
+![](Images/Lect9_2.png)
+
+- The order doesn't matter: it makes no difference which vector projects onto which
+
+![](Images/Lect9_3.png)
+
+- Multiplying a 1x2 matrix by a 2-D vector is the same computation as taking a dot product
+
+![](Images/Lect9_4.png)
+
+![](Images/Lect9_5.png)
+
+- Duality: a natural correspondence. Here, every linear transformation from vectors to the number line corresponds to some vector in that space (two computations that look different but are the same)
+
+# **Cross Product**
+![](Images/Lect10_1.png)
+- Take a vector v and attach w to its tip, then repeat in the other order (take w and attach v to its tip); the figure obtained is a parallelogram, and its area is the determinant of the matrix with v and w as columns
+- The + or - sign comes from the orientation (positive when j is to the left of i, i.e. counter-clockwise)
+
+![](Images/Lect10_2.png)
+![](Images/Lect10_3.png)
+- The cross product is not just a number: it is a vector of that magnitude, pointing in the direction given by the right-hand rule
+![](Images/Lect10_4.png)
+
+# Cross products in the light of linear transformations
+
+The **cross product** a × b is defined as a vector c that is perpendicular (orthogonal) to both a and b, with a direction given by the right-hand rule and a magnitude equal to the area of the parallelogram that the vectors span.
+
+![General way of defining the cross
product](Images/Lect11_1.png 'General way of defining the cross product')
+
+A linear transformation from 3-D space to the number line can be matched with a vector, called the dual vector of that transformation, such that performing the linear transformation is the same as taking the dot product with that vector.
+
+![](Images/Lect11_2.png )
+
+![](Images/Lect11_3.png )
+
+# Cramer's rule, explained geometrically
+
+An orthogonal transformation is a linear transformation which preserves a symmetric inner product. In particular, an orthogonal transformation (technically, an orthonormal transformation) preserves lengths of vectors and angles between vectors.
+
+![Cramer's formula](Images/Lect12_1.png "Cramer's formula")
+
+![](Images/Lect12_2.png )
+
+# Change of Basis
+
+In linear algebra, a basis for a vector space is a linearly independent set spanning the vector space.
+
+![](Images/Lect13_1.png)
+
+Geometrically, the change-of-basis matrix takes the other grid's basis vectors to ours; numerically, it translates coordinates written in the other basis into coordinates in our basis, which feels like the opposite direction.
+
+![](Images/Lect13_2.png)
+
+![](Images/Lect13_3.png)
+The inverse of the matrix represents the reverse change of basis: it translates a vector written in our coordinates into the other grid's coordinates.
+
+Translating a matrix (a transformation) into another coordinate system is not the same as translating a vector. The following pictures show the steps for expressing a transformation in another coordinate system.
+
+![](Images/Lect13_4.png)
+![](Images/Lect13_5.png)
+![](Images/Lect13_6.png)
+![](Images/Lect13_7.png)
+
+# Eigenvectors and eigenvalues
+
+![](Images/Lect14_1.png)
+
+- When a vector is transformed it is usually knocked off its own span, but some vectors stay on their span even after the transformation
+- Vectors that remain on their span after the transformation, only stretched or squished, are called the eigenvectors of the transformation, and the eigenvalues are the factors by which they are stretched or squished
+
+![](Images/Lect14_2.png)
+
+![](Images/Lect14_3.png)
+
+![](Images/Lect14_4.png)
+
+![](Images/Lect14_5.png)
+
+- We have to find the values of lambda for which the determinant is zero, so that the equation above can hold for a nonzero vector
+
+![](Images/Lect14_6.png)
+
+![](Images/Lect14_7.png)
+
+![](Images/Lect14_8.png)
+
+![](Images/Lect14_9.png)
+
+![](Images/Lect14_10.png)
+
+![](Images/Lect14_11.png)
+
+Not all matrices have an eigenbasis
+
+![](Images/Lect15_1.png)
+
+![](Images/Lect15_2.png)
diff --git a/RevisionNotes/Images/ColumnSpace_lec12.PNG b/RevisionNotes/Images/ColumnSpace_lec12.PNG
new file mode 100644
index 0000000..2a6a036
Binary files /dev/null and b/RevisionNotes/Images/ColumnSpace_lec12.PNG differ
diff --git a/RevisionNotes/Images/Current_lec12.PNG b/RevisionNotes/Images/Current_lec12.PNG
new file mode 100644
index 0000000..f62bbf3
Binary files /dev/null and b/RevisionNotes/Images/Current_lec12.PNG differ
diff --git a/RevisionNotes/Images/Det_Proof.png b/RevisionNotes/Images/Det_Proof.png
new file mode 100644
index 0000000..732baf6
Binary files /dev/null and b/RevisionNotes/Images/Det_Proof.png differ
diff --git a/RevisionNotes/Images/Elimination_lec12.PNG b/RevisionNotes/Images/Elimination_lec12.PNG
new file mode 100644
index 0000000..2b8f0be
Binary files /dev/null and b/RevisionNotes/Images/Elimination_lec12.PNG differ
diff --git a/RevisionNotes/Images/GraphToTree_lec12.PNG b/RevisionNotes/Images/GraphToTree_lec12.PNG
new file mode 100644
index 0000000..cc4b896
Binary files /dev/null and b/RevisionNotes/Images/GraphToTree_lec12.PNG differ
diff --git a/RevisionNotes/Images/Graph_lec12.PNG
b/RevisionNotes/Images/Graph_lec12.PNG new file mode 100644 index 0000000..9575cce Binary files /dev/null and b/RevisionNotes/Images/Graph_lec12.PNG differ diff --git a/RevisionNotes/Images/Lect10_1.png b/RevisionNotes/Images/Lect10_1.png new file mode 100644 index 0000000..ac7ebbd Binary files /dev/null and b/RevisionNotes/Images/Lect10_1.png differ diff --git a/RevisionNotes/Images/Lect10_2.png b/RevisionNotes/Images/Lect10_2.png new file mode 100644 index 0000000..72b47c7 Binary files /dev/null and b/RevisionNotes/Images/Lect10_2.png differ diff --git a/RevisionNotes/Images/Lect10_3.png b/RevisionNotes/Images/Lect10_3.png new file mode 100644 index 0000000..bed7e5b Binary files /dev/null and b/RevisionNotes/Images/Lect10_3.png differ diff --git a/RevisionNotes/Images/Lect10_4.png b/RevisionNotes/Images/Lect10_4.png new file mode 100644 index 0000000..fe06629 Binary files /dev/null and b/RevisionNotes/Images/Lect10_4.png differ diff --git a/RevisionNotes/Images/Lect11_1.png b/RevisionNotes/Images/Lect11_1.png new file mode 100644 index 0000000..16be4ca Binary files /dev/null and b/RevisionNotes/Images/Lect11_1.png differ diff --git a/RevisionNotes/Images/Lect11_2.png b/RevisionNotes/Images/Lect11_2.png new file mode 100644 index 0000000..653e7f1 Binary files /dev/null and b/RevisionNotes/Images/Lect11_2.png differ diff --git a/RevisionNotes/Images/Lect11_3.png b/RevisionNotes/Images/Lect11_3.png new file mode 100644 index 0000000..c829c64 Binary files /dev/null and b/RevisionNotes/Images/Lect11_3.png differ diff --git a/RevisionNotes/Images/Lect12_1.png b/RevisionNotes/Images/Lect12_1.png new file mode 100644 index 0000000..21f44d2 Binary files /dev/null and b/RevisionNotes/Images/Lect12_1.png differ diff --git a/RevisionNotes/Images/Lect12_2.png b/RevisionNotes/Images/Lect12_2.png new file mode 100644 index 0000000..fbe6ee5 Binary files /dev/null and b/RevisionNotes/Images/Lect12_2.png differ diff --git a/RevisionNotes/Images/Lect13_1.png b/RevisionNotes/Images/Lect13_1.png new file mode 100644 index 0000000..a8dcf99 Binary files /dev/null and b/RevisionNotes/Images/Lect13_1.png differ diff --git a/RevisionNotes/Images/Lect13_2.png b/RevisionNotes/Images/Lect13_2.png new file mode 100644 index 0000000..dca2803 Binary files /dev/null and b/RevisionNotes/Images/Lect13_2.png differ diff --git a/RevisionNotes/Images/Lect13_3.png b/RevisionNotes/Images/Lect13_3.png new file mode 100644 index 0000000..4918faf Binary files /dev/null and b/RevisionNotes/Images/Lect13_3.png differ diff --git a/RevisionNotes/Images/Lect13_4.png b/RevisionNotes/Images/Lect13_4.png new file mode 100644 index 0000000..2c15a7d Binary files /dev/null and b/RevisionNotes/Images/Lect13_4.png differ diff --git a/RevisionNotes/Images/Lect13_5.png b/RevisionNotes/Images/Lect13_5.png new file mode 100644 index 0000000..6b7a5b0 Binary files /dev/null and b/RevisionNotes/Images/Lect13_5.png differ diff --git a/RevisionNotes/Images/Lect13_6.png b/RevisionNotes/Images/Lect13_6.png new file mode 100644 index 0000000..a102141 Binary files /dev/null and b/RevisionNotes/Images/Lect13_6.png differ diff --git a/RevisionNotes/Images/Lect13_7.png b/RevisionNotes/Images/Lect13_7.png new file mode 100644 index 0000000..2554870 Binary files /dev/null and b/RevisionNotes/Images/Lect13_7.png differ diff --git a/RevisionNotes/Images/Lect14_1.png b/RevisionNotes/Images/Lect14_1.png new file mode 100644 index 0000000..6c8597e Binary files /dev/null and b/RevisionNotes/Images/Lect14_1.png differ diff --git a/RevisionNotes/Images/Lect14_10.png 
b/RevisionNotes/Images/Lect14_10.png new file mode 100644 index 0000000..02d7afb Binary files /dev/null and b/RevisionNotes/Images/Lect14_10.png differ diff --git a/RevisionNotes/Images/Lect14_11.png b/RevisionNotes/Images/Lect14_11.png new file mode 100644 index 0000000..3057cff Binary files /dev/null and b/RevisionNotes/Images/Lect14_11.png differ diff --git a/RevisionNotes/Images/Lect14_2.png b/RevisionNotes/Images/Lect14_2.png new file mode 100644 index 0000000..c8026da Binary files /dev/null and b/RevisionNotes/Images/Lect14_2.png differ diff --git a/RevisionNotes/Images/Lect14_3.png b/RevisionNotes/Images/Lect14_3.png new file mode 100644 index 0000000..94a9a18 Binary files /dev/null and b/RevisionNotes/Images/Lect14_3.png differ diff --git a/RevisionNotes/Images/Lect14_4.png b/RevisionNotes/Images/Lect14_4.png new file mode 100644 index 0000000..e08a463 Binary files /dev/null and b/RevisionNotes/Images/Lect14_4.png differ diff --git a/RevisionNotes/Images/Lect14_5.png b/RevisionNotes/Images/Lect14_5.png new file mode 100644 index 0000000..522363c Binary files /dev/null and b/RevisionNotes/Images/Lect14_5.png differ diff --git a/RevisionNotes/Images/Lect14_6.png b/RevisionNotes/Images/Lect14_6.png new file mode 100644 index 0000000..d9d0f97 Binary files /dev/null and b/RevisionNotes/Images/Lect14_6.png differ diff --git a/RevisionNotes/Images/Lect14_7.png b/RevisionNotes/Images/Lect14_7.png new file mode 100644 index 0000000..c0e1d91 Binary files /dev/null and b/RevisionNotes/Images/Lect14_7.png differ diff --git a/RevisionNotes/Images/Lect14_8.png b/RevisionNotes/Images/Lect14_8.png new file mode 100644 index 0000000..c376302 Binary files /dev/null and b/RevisionNotes/Images/Lect14_8.png differ diff --git a/RevisionNotes/Images/Lect14_9.png b/RevisionNotes/Images/Lect14_9.png new file mode 100644 index 0000000..86bbfb4 Binary files /dev/null and b/RevisionNotes/Images/Lect14_9.png differ diff --git a/RevisionNotes/Images/Lect15_1.png b/RevisionNotes/Images/Lect15_1.png new file mode 100644 index 0000000..3057cff Binary files /dev/null and b/RevisionNotes/Images/Lect15_1.png differ diff --git a/RevisionNotes/Images/Lect15_2.png b/RevisionNotes/Images/Lect15_2.png new file mode 100644 index 0000000..4895a00 Binary files /dev/null and b/RevisionNotes/Images/Lect15_2.png differ diff --git a/RevisionNotes/Images/Lect2_1.png b/RevisionNotes/Images/Lect2_1.png new file mode 100644 index 0000000..5918c45 Binary files /dev/null and b/RevisionNotes/Images/Lect2_1.png differ diff --git a/RevisionNotes/Images/Lect2_2.png b/RevisionNotes/Images/Lect2_2.png new file mode 100644 index 0000000..36da246 Binary files /dev/null and b/RevisionNotes/Images/Lect2_2.png differ diff --git a/RevisionNotes/Images/Lect3_1.png b/RevisionNotes/Images/Lect3_1.png new file mode 100644 index 0000000..4a7e555 Binary files /dev/null and b/RevisionNotes/Images/Lect3_1.png differ diff --git a/RevisionNotes/Images/Lect3_2.png b/RevisionNotes/Images/Lect3_2.png new file mode 100644 index 0000000..c0d3854 Binary files /dev/null and b/RevisionNotes/Images/Lect3_2.png differ diff --git a/RevisionNotes/Images/Lect4_1.png b/RevisionNotes/Images/Lect4_1.png new file mode 100644 index 0000000..95127a9 Binary files /dev/null and b/RevisionNotes/Images/Lect4_1.png differ diff --git a/RevisionNotes/Images/Lect5_1.png b/RevisionNotes/Images/Lect5_1.png new file mode 100644 index 0000000..eb176c1 Binary files /dev/null and b/RevisionNotes/Images/Lect5_1.png differ diff --git a/RevisionNotes/Images/Lect6_1.png 
b/RevisionNotes/Images/Lect6_1.png new file mode 100644 index 0000000..4af2519 Binary files /dev/null and b/RevisionNotes/Images/Lect6_1.png differ diff --git a/RevisionNotes/Images/Lect6_2.png b/RevisionNotes/Images/Lect6_2.png new file mode 100644 index 0000000..4af2519 Binary files /dev/null and b/RevisionNotes/Images/Lect6_2.png differ diff --git a/RevisionNotes/Images/Lect6_3.png b/RevisionNotes/Images/Lect6_3.png new file mode 100644 index 0000000..0541d8c Binary files /dev/null and b/RevisionNotes/Images/Lect6_3.png differ diff --git a/RevisionNotes/Images/Lect7_1.png b/RevisionNotes/Images/Lect7_1.png new file mode 100644 index 0000000..28bc810 Binary files /dev/null and b/RevisionNotes/Images/Lect7_1.png differ diff --git a/RevisionNotes/Images/Lect7_2.png b/RevisionNotes/Images/Lect7_2.png new file mode 100644 index 0000000..ab35f28 Binary files /dev/null and b/RevisionNotes/Images/Lect7_2.png differ diff --git a/RevisionNotes/Images/Lect9_1.png b/RevisionNotes/Images/Lect9_1.png new file mode 100644 index 0000000..2df5509 Binary files /dev/null and b/RevisionNotes/Images/Lect9_1.png differ diff --git a/RevisionNotes/Images/Lect9_2.png b/RevisionNotes/Images/Lect9_2.png new file mode 100644 index 0000000..c612004 Binary files /dev/null and b/RevisionNotes/Images/Lect9_2.png differ diff --git a/RevisionNotes/Images/Lect9_3.png b/RevisionNotes/Images/Lect9_3.png new file mode 100644 index 0000000..07b2054 Binary files /dev/null and b/RevisionNotes/Images/Lect9_3.png differ diff --git a/RevisionNotes/Images/Lect9_4.png b/RevisionNotes/Images/Lect9_4.png new file mode 100644 index 0000000..b83c618 Binary files /dev/null and b/RevisionNotes/Images/Lect9_4.png differ diff --git a/RevisionNotes/Images/Lect9_5.png b/RevisionNotes/Images/Lect9_5.png new file mode 100644 index 0000000..05b2bf8 Binary files /dev/null and b/RevisionNotes/Images/Lect9_5.png differ diff --git a/RevisionNotes/Images/LeftNullspace_lec12.PNG b/RevisionNotes/Images/LeftNullspace_lec12.PNG new file mode 100644 index 0000000..38b488c Binary files /dev/null and b/RevisionNotes/Images/LeftNullspace_lec12.PNG differ diff --git a/RevisionNotes/Images/Loops_lec12.PNG b/RevisionNotes/Images/Loops_lec12.PNG new file mode 100644 index 0000000..3644136 Binary files /dev/null and b/RevisionNotes/Images/Loops_lec12.PNG differ diff --git a/RevisionNotes/Images/MIT_15_basic.png b/RevisionNotes/Images/MIT_15_basic.png new file mode 100644 index 0000000..ab0e732 Binary files /dev/null and b/RevisionNotes/Images/MIT_15_basic.png differ diff --git a/RevisionNotes/Images/MIT_15_projection.png b/RevisionNotes/Images/MIT_15_projection.png new file mode 100644 index 0000000..52afab8 Binary files /dev/null and b/RevisionNotes/Images/MIT_15_projection.png differ diff --git a/RevisionNotes/Images/Matrix_A_lec12.PNG b/RevisionNotes/Images/Matrix_A_lec12.PNG new file mode 100644 index 0000000..e475fe7 Binary files /dev/null and b/RevisionNotes/Images/Matrix_A_lec12.PNG differ diff --git a/RevisionNotes/Images/Nullspace_lec12.PNG b/RevisionNotes/Images/Nullspace_lec12.PNG new file mode 100644 index 0000000..3c4ecc3 Binary files /dev/null and b/RevisionNotes/Images/Nullspace_lec12.PNG differ diff --git a/RevisionNotes/Images/PotentialDiff_lec12.PNG b/RevisionNotes/Images/PotentialDiff_lec12.PNG new file mode 100644 index 0000000..a79b445 Binary files /dev/null and b/RevisionNotes/Images/PotentialDiff_lec12.PNG differ diff --git a/RevisionNotes/Images/Row1-n.jpg b/RevisionNotes/Images/Row1-n.jpg new file mode 100644 index 0000000..07b657d 
Binary files /dev/null and b/RevisionNotes/Images/Row1-n.jpg differ diff --git a/RevisionNotes/Images/RowSpace_per_Nullspace.jpg b/RevisionNotes/Images/RowSpace_per_Nullspace.jpg new file mode 100644 index 0000000..643f920 Binary files /dev/null and b/RevisionNotes/Images/RowSpace_per_Nullspace.jpg differ diff --git a/RevisionNotes/Images/echelon_form_lect7.png b/RevisionNotes/Images/echelon_form_lect7.png new file mode 100644 index 0000000..9ec7b36 Binary files /dev/null and b/RevisionNotes/Images/echelon_form_lect7.png differ diff --git a/RevisionNotes/Images/echelon_sol_lect7.png b/RevisionNotes/Images/echelon_sol_lect7.png new file mode 100644 index 0000000..5dcaee5 Binary files /dev/null and b/RevisionNotes/Images/echelon_sol_lect7.png differ diff --git a/RevisionNotes/Images/four_fundamental_subspaces.png b/RevisionNotes/Images/four_fundamental_subspaces.png new file mode 100644 index 0000000..90cfbf0 Binary files /dev/null and b/RevisionNotes/Images/four_fundamental_subspaces.png differ diff --git a/RevisionNotes/Images/li_example.png b/RevisionNotes/Images/li_example.png new file mode 100644 index 0000000..cc53d94 Binary files /dev/null and b/RevisionNotes/Images/li_example.png differ diff --git a/RevisionNotes/Images/linear_independence.png b/RevisionNotes/Images/linear_independence.png new file mode 100644 index 0000000..3044b1c Binary files /dev/null and b/RevisionNotes/Images/linear_independence.png differ diff --git a/RevisionNotes/Images/matrix_a_lect7.png b/RevisionNotes/Images/matrix_a_lect7.png new file mode 100644 index 0000000..708c69f Binary files /dev/null and b/RevisionNotes/Images/matrix_a_lect7.png differ diff --git a/RevisionNotes/Images/reduced_row_lect7.png b/RevisionNotes/Images/reduced_row_lect7.png new file mode 100644 index 0000000..b508da8 Binary files /dev/null and b/RevisionNotes/Images/reduced_row_lect7.png differ diff --git a/RevisionNotes/Images/rref_sol_lect7.png b/RevisionNotes/Images/rref_sol_lect7.png new file mode 100644 index 0000000..3a116a8 Binary files /dev/null and b/RevisionNotes/Images/rref_sol_lect7.png differ diff --git a/RevisionNotes/Images/sol_lect7.png b/RevisionNotes/Images/sol_lect7.png new file mode 100644 index 0000000..0fa0662 Binary files /dev/null and b/RevisionNotes/Images/sol_lect7.png differ diff --git a/RevisionNotes/Images/special_sol_lect7.png b/RevisionNotes/Images/special_sol_lect7.png new file mode 100644 index 0000000..5679400 Binary files /dev/null and b/RevisionNotes/Images/special_sol_lect7.png differ diff --git a/RevisionNotes/MIT_Lec7.md b/RevisionNotes/MIT_Lec7.md new file mode 100644 index 0000000..ace862b --- /dev/null +++ b/RevisionNotes/MIT_Lec7.md @@ -0,0 +1,59 @@ +## Computing Ax=0, Pivot variables, special solutions. + +* `pivot` = first non zero element in every row after elimination +* `pivot columns` = columns containing pivot +* `pivot variables` = variables corresponding to pivot columns +* `free columns` = columns without pivot +* `free variables` = variables corresponding to free columns +* `rank` = number of pivot variables + +## Computing Ax = 0 using echelon form. + +Consider the Martix A in the below diagram, using elementary row transformation converting the matrix in echelon form. 
+
+![Echelon form](./Images/echelon_form_lect7.png)
+
+Now find the solution x by assigning convenient values (such as 1 and 0) to the free variables and solving for the pivot variables.
+
+![Echelon solution](./Images/echelon_sol_lect7.png)
+
+## Reduced Row Echelon Form
+
+If the leading coefficient in each row is 1 and is the only non-zero number in its column, the matrix is said to be in reduced row echelon form (rref). Consider the example below.
+
+![rref](./Images/reduced_row_lect7.png)
+
+Here the identity block I sits in the pivot columns and the block F sits in the free columns.
+
+## Special Solutions
+
+Consider the blocks I and F from the example above. We can rewrite R in the form given below.
+
+![special sol](./Images/special_sol_lect7.png)
+
+The columns of the nullspace matrix are the special solutions: their free-variable entries carry the identity `I` and their pivot-variable entries carry `-F`.
+
+## Example
+
+Consider the following matrix A.
+
+![Matrix A](./Images/matrix_a_lect7.png)
+
+Find the nullspace of the given matrix.
+
+ +Answer + + +* Echelon form + +![solution](./Images/sol_lect7.png) + +* Reduced form + +![rref solution](./Images/rref_sol_lect7.png) + +
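+
+A small code sketch of the same computation (only an illustration, not a required solution; the function names are just for this sketch, it assumes the example matrix shown in the images above is A = [[1, 2, 2, 2], [2, 4, 6, 8], [3, 6, 8, 10]], and it uses no linear algebra libraries, only plain Python lists and `fractions`):
+
+```python
+from fractions import Fraction
+
+def rref(A):
+    """Gauss-Jordan elimination; returns (R, pivot_columns)."""
+    R = [[Fraction(x) for x in row] for row in A]
+    m, n = len(R), len(R[0])
+    pivots, row = [], 0
+    for col in range(n):
+        # find a row at or below `row` with a nonzero entry in this column
+        p = next((r for r in range(row, m) if R[r][col] != 0), None)
+        if p is None:
+            continue                                   # free column
+        R[row], R[p] = R[p], R[row]
+        R[row] = [x / R[row][col] for x in R[row]]     # make the pivot 1
+        for r in range(m):                             # clear the rest of the column
+            if r != row and R[r][col] != 0:
+                f = R[r][col]
+                R[r] = [a - f * b for a, b in zip(R[r], R[row])]
+        pivots.append(col)
+        row += 1
+    return R, pivots
+
+def special_solutions(R, pivots, n):
+    """One special solution per free column: that free variable = 1, the others = 0."""
+    sols = []
+    for free_col in [c for c in range(n) if c not in pivots]:
+        x = [Fraction(0)] * n
+        x[free_col] = Fraction(1)
+        for i, p in enumerate(pivots):
+            x[p] = -R[i][free_col]                     # pivot variables pick up -F
+        sols.append(x)
+    return sols
+
+A = [[1, 2, 2, 2], [2, 4, 6, 8], [3, 6, 8, 10]]
+R, pivots = rref(A)
+print(R)                                   # reduced row echelon form
+print(special_solutions(R, pivots, 4))     # basis of the nullspace
+```
+
+For this matrix the special solutions come out as (-2, 1, 0, 0) and (2, 0, -2, 1), matching the `I` / `-F` pattern described above.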
+
+
\ No newline at end of file diff --git a/RevisionNotes/MIT_Lec8.md b/RevisionNotes/MIT_Lec8.md new file mode 100644 index 0000000..9e8d1cc --- /dev/null +++ b/RevisionNotes/MIT_Lec8.md @@ -0,0 +1,63 @@ +## Complete Solution for Ax=b, Reduced Row Echlon Form R + +### Solving Ax=b +The equations are:
+ x1 + 2x2 + 2x3 + 2x4 = b1
+ 2x1 + 4x2 + 6x3 + 8x4 = b2
3x1 + 6x2 + 8x3 + 10x4 = b3

+The augmented matrix would be:
+![Augmented matrix](Images/3x4_aug_mat.jpg) +
After performing row operations on the matrix, you get the reduced form as:
+![Reduced matrix](Images/3x4_aug_mat_red.jpg) +
Considering b as 1,5,6 we get the matrix as:
+![Reduced matrix1](Images/3x4_aug_mat_red1.jpg) + +
+
+### Solvability
+Determining whether Ax=b is solvable or not.
+Ax=b is solvable when b is in the column space of A, C(A).
+If a combination of the rows of A gives the zero row, then the same combination of the entries of b must give 0.
+In this case the system is solvable, so let's find the solution.
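+(Quick check for this example: row 3 of A equals row 1 + row 2, so solvability requires b3 - b1 - b2 = 0; with b = (1, 5, 6) we get 6 - 1 - 5 = 0, so the condition holds.)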
+ +### Complete Solution for Ax=b +In order to find the complete solution for Ax=b, we need to find the particular solution.
+Xparticular : In order to find Xparticular, we need to set the free variables to 0 and then solve for the pivot variables.
+In our example we set x2=0 and x4=0,
+we get,
+x1 + 2x3 =1
+2x3 = 3
+![x-particular](Images/x_pat.jpg) + +
+Xnull space : In order to find Xnull space, we set each free variable to 1 in turn (with the other free variables 0) and solve for the pivot variables; these special solutions span the nullspace.
+Xcomplete = Xparticular + Xnull space
+ +![x-complete](Images/x_com.jpg) + +
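+
+For the record, working the numbers out from the equations above with b = (1, 5, 6): Xparticular = (-2, 0, 3/2, 0), and the special solutions (one per free variable x2, x4) are (-2, 1, 0, 0) and (2, 0, -2, 1). So
+
+Xcomplete = (-2, 0, 3/2, 0) + c1 (-2, 1, 0, 0) + c2 (2, 0, -2, 1), for any constants c1, c2.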
+ +### Rank of a matrix +Rank of a matrix is determined by : +* Number of pivot +* Number of independent columns/rows +* Dimension of column space
r <= `min`(m, n) where,
+ m = rows, n = columns
+ +| Condition | Solution | Comment | +| :------------: | :-----------: | :-------------------------------------------: | +| r = n < m | 0 or 1 | No free variables. Hence, null space is empty | +| r = m < n | Infinite | Every row has pivot, `n-r` free variables | +| r = m = n | Unique | Invertible matrix | +| r < m && r < n | 0 or Infinite | depends on `b` | + + + + + + + + + + diff --git a/RevisionNotes/MIT_Lec9.md b/RevisionNotes/MIT_Lec9.md new file mode 100644 index 0000000..30415ec --- /dev/null +++ b/RevisionNotes/MIT_Lec9.md @@ -0,0 +1,57 @@ +## Linear Independence, Basis and Dimension + +- **Linear independence:** A set of vectors {v1, v2, · · · , vn} is linearly independent when no linear combination +of them (except for the 0 combination) result in a 0 vector. + +![li](Images/linear_independence.png) + +Example: + +![example](Images/li_example.png) + +- A set of vectors is linearly dependent if some vector can be expressed as a linear combination of the others (i.e., is in the **span** of the other vectors). (Such a vector is said to be redundant.) + +## Span + + +- **Question**: Is zero vector linearly Independent or dependent? +
+ +Answer + +Zero vector is a multiple of any vector, so it is collinear with any other vector. Hence it is Linearly dependent. +
+ + +### Spanning, Basis, and Dimensions + +- **Spanning Definition:** A set of vectors {v1, v2, · · · , vl} span a space if the space consists of all linear combinations of +those vectors. + +- **Basis Definition:** Let V be a vector space. If S is a basis of V and S has only finitely many elements, then we say that V is finite-dimensional. + +- The number of vectors in S(i.e. basis) is the dimension of V. + + +### Basis +- A basis for vector space V is a set of vectors that + - Is Linearly Independent + - Spans V. + - **Example:** The set {x^2,x,1} is a basis for the vector space of polynomials in x with real coefficients having degree at most 2. + - **Question:** What are the possible basis for R^2×2. +
+ +Answer + +There are many possible answers. One possible answer is:
+
+the standard basis: the four matrices with a single entry equal to 1 and all other entries 0, i.e. [1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1].
+ + +### Dimension +- The number of vectors in a basis for V is called the dimension of V, denoted by **dim(V)**. +- Every set of linearly independent vectors in V has size at most dim(V). For example, a set of four vectors in R^3(3D Space) cannot be a linearly independent set. + +Here are some facts: +- A set of vectors in R^n {v1, · · · , vn} gives a basis if the n × n matrix with those as columns gives an invertible matrix. +- Every basis has the same number of vectors, the number being the dimension of the space. diff --git a/RevisionNotes/MIT_Lec_10.md b/RevisionNotes/MIT_Lec_10.md new file mode 100644 index 0000000..b53995d --- /dev/null +++ b/RevisionNotes/MIT_Lec_10.md @@ -0,0 +1,37 @@ + +# 10 - Four fundamental subspaces + +For a matrix `A` + +| | | | +| :----------------------------------: | :--------------------------------: | :------------------------------: | +| **Column Space C(A)** | combination of columns of A | Rr in Rm | +| **Null Space N(A)** | all solution of `Ax = 0` | Rn-r in Rn | +| **Row Space C(AT)** | combination of rows of A | Rr in Rn | +| **Left Null Space N(AT)** | all solution of ATy = 0 | Rm-r in Rm | + +![othogonality between spaces](Images/four_fundamental_subspaces.png) + +* Basis of `row` space of A ==> first `r` rows of R or A +* Basis of `column` space of A ==> pivot columns of `A` +* Basis of `null` space of A ==> special solution of A +* Basis of `left null` space of A ==> transforming `[A | I] ---> [R | E]`, look for combination of rows which give zero row +* When the rank is as large as possible, r = n or r = m or r = m = n, the matrix has a left-inverse B or a right-inverse C or a two-sided A-1 +* Row spaces of A, U(echelon form) and R(reduced row echelon form) are same. +* Column spaces of A, U(echelon form) and R(reduced row echelon form) are different. + +### 3x3 matrices a vector space? + +
+ View Answer + + > _Yes. Since they contain a Null vector and follow all other rules of being a vector space._ +
+ +### What are its subspaces? + +
+ View Answer + + > _upper triangular, symmetrical, diagonal..._ +
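+
+A small worked example of reading the four bases off `[A | I] ---> [R | E]` (the matrix is made up purely for illustration): take A with rows (1, 2, 3), (2, 4, 6), (2, 5, 8). Elimination on `[A | I]` gives R with rows (1, 0, -1), (0, 1, 2), (0, 0, 0) and E with rows (5, 0, -2), (-2, 0, 1), (-2, 1, 0). Then:
+
+* Row space basis: (1, 0, -1) and (0, 1, 2), the nonzero rows of R.
+* Column space basis: columns 1 and 2 of A, i.e. (1, 2, 2) and (2, 4, 5), the pivot columns.
+* Nullspace basis: the special solution (1, -2, 1).
+* Left nullspace basis: (-2, 1, 0), the row of E that produces the zero row of R, since -2 (row 1) + 1 (row 2) + 0 (row 3) = 0.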
diff --git a/RevisionNotes/MIT_Lec_11.md b/RevisionNotes/MIT_Lec_11.md new file mode 100644 index 0000000..78b133f --- /dev/null +++ b/RevisionNotes/MIT_Lec_11.md @@ -0,0 +1,22 @@ +# 11 - Bases of new vector spaces, Rank one matrices + +**All for 3x3 cases** +* Basis - 9 dimensional (1 at each cell) +* Basis of symmetric matrices - 6 dimensional +* Basis of upper triangular matrices - 6 dimensional +* Basis of (Symmetric ∩ Upper triangular) - 3 dimensional (since diagonal) +* Basis of (Symmetric ∪ Upper triangular) - 9 dimensional (since all 3x3) +* dim(S) + dim(U) = dim(S ∩ U) + dim(S ∪ U), where S and U are vector spaces. + +* Every rank 1 matrix can be expressed as - u x vT + * where `u` and `v` are column matrices + +* Let M be all 5x17 matrices with rank 4. Is M a subspace? + * No. Since no `0` matrix + +**Some extra conditions for a double sided inverse** +* The rows of A span Rn. +* The rows are linearly independent. +* Elimination can be completed: PA = LDU, with all n pivots. +* Zero is not an eigenvalue of A. +* ATA is positive definite. diff --git a/RevisionNotes/MIT_Lec_12.md b/RevisionNotes/MIT_Lec_12.md new file mode 100644 index 0000000..c2e9045 --- /dev/null +++ b/RevisionNotes/MIT_Lec_12.md @@ -0,0 +1,222 @@ +## Graphs, Networks, Incidence Matrices + +This session explores the linear algebra of electrical networks and sheds light on important results in graph theory. + +Consider this directed graph - + +![Graph](Images/Graph_lec12.PNG) + +Here `m` = 5 edges, `n` = 4 nodes. + +### Incidence Matrix + +`A` - Incidence matrix used to denote this graph. + +It is the 5 by 4 matrix which tells us which nodes are connected by which edges. + +In general, an incidence matrix is a logical matrix that shows the relationship between two classes of objects, usually called an incidence relation. Here, we are showing the relationship between the edges and nodes of the graph. + +![Incidence Matrix](Images/Matrix_A_lec12.PNG) + +Row numbers in A are edge numbers, column numbers are node numbers. + +* -1 means the edge is going out of the node. + +* 1 means the edge is going into the node. + +* 0 means the edge does not connect this node. + +### Analysing the Incidence Matrix + +#### Elimination + +After elimination, we get - + +![After Elimination](Images/Elimination_lec12.PNG) + +This represents the following tree. + +![Tree](Images/GraphToTree_lec12.PNG) + +Elimination reduces every graph to a tree (the graph has no closed loops). Rows are dependent when edges form a loop. Independent rows come from trees. + +This also gives us the rank, `r` = 3. + +#### The Nullspace + +Let the nullspace solution of `A` be `x`. + +`x` is basically the potential at each node. + +`Ax = 0` + +![Nullspace of A](Images/Nullspace_lec12.PNG) + +![Potential Difference](Images/PotentialDiff_lec12.PNG) + +`Ax` denotes the Potential Difference between nodes. + +It is 0 when all the potentials are the same. + +So nullspace includes all vectors of the form +`c(1, 1, 1, 1)` + +So the dimension of nullspace = `n - r` = 1 + +Current doesn't flow when potential difference is 0. + +We can raise or lower all voltages by the same amount `c`, without changing the differences. There is an "arbitrary constant" in the voltages. + +The nullspace disappears when we fix x4 = 0. The unknown x4 is removed and so is the fourth column of A (the column multiplied x4). Basically, node 4 has been "grounded." So only zero vector is in the nullspace now. 
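+
+A quick numeric sketch of the nullspace claim (the edge orientation is assumed to be 1->2, 2->3, 1->3, 1->4, 3->4, matching the incidence matrix shown above; adjust the rows if the labelling in your figure differs):
+
+```python
+A = [
+    [-1,  1,  0,  0],   # edge 1: node 1 -> node 2
+    [ 0, -1,  1,  0],   # edge 2: node 2 -> node 3
+    [-1,  0,  1,  0],   # edge 3: node 1 -> node 3
+    [-1,  0,  0,  1],   # edge 4: node 1 -> node 4
+    [ 0,  0, -1,  1],   # edge 5: node 3 -> node 4
+]
+
+def matvec(M, x):
+    """Plain matrix-vector product, no libraries."""
+    return [sum(m_ij * x_j for m_ij, x_j in zip(row, x)) for row in M]
+
+# equal potentials at every node -> zero potential difference on every edge
+print(matvec(A, [1, 1, 1, 1]))   # [0, 0, 0, 0, 0]
+
+# unequal potentials -> nonzero differences, so current can flow
+print(matvec(A, [3, 1, 1, 0]))   # [-2, 0, -2, -3, -1]
+```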
+ +#### The Row Space + +The row space contains all combinations of the 3 row basis vectors since the matrix has a rank of 3. + +Dimension of rowspace = `r` = 3 + +We got this from elimination. + +How do we know if a vector is in the rowspace? + +If it is perpendicular to (1, 1, 1, 1) in the nullspace. (Orthogonality) + +#### The Column Space + +The column space contains all combinations of the four columns. We expect three independent columns, since there were three independent rows. The first three columns of A are independent (so are any three). + +So, dimension of column space = `r` = 3 + +How can we tell if a particular vector b is in the column space of an incidence matrix? + +The components of b should add to 0 around every loop. Why? + +`Ax` gives the vector of potential differences. + +`Ax = b` + +If we add differences around a closed loop in the graph, they cancel to leave zero. + +For example, let's take the potential differences for this loop. + +![Loop](Images/ColumnSpace_lec12.PNG) + +Here, `(x``2`` - x``1``) + (x``3`` - x``2``) - (x``3`` - x``1``) = 0` + +These correspond to the components of b: + +Therefore, `b``1`` + b``2`` - b``3`` = 0` + +So a quick way to check if a vector is in the column space is to check whether the components add up to 0 around a loop (considering directions). + +#### The Left Nullspace + +The left nullspace contains the solutions to `A``T``y = 0`. + +Dimension of left nullspace = `m - r` = 2 + +![Left Nullspace](Images/LeftNullspace_lec12.PNG) + +![Current](Images/Current_lec12.PNG) + +Here, `y` is the current along each edge. + +Thus, the net flow at each node should be zero. + +We can find the basis by assuming current (`y`) through one edge and solving such that there is no charge accumulation. + +What are solutions to `A``T``y = 0`? + +Currents which balance themselves. Every loop current is a solution. + +![Current Loops](Images/Loops_lec12.PNG) + +Here, we have 2 small independent loops. The big loop of 1-2-3-4-1 is basically the sum of these two loops. + +Flows around the 2 small loops are a basis for the left nullspace. + +Thus, (1, 1, -1, 0, 0) and (0, 0, 1, -1, 1) form the basis. + +Number of independent loops = dimension of left nullspace. + +### Kirchhoff's Voltage Law + +Kirchhoff's Voltage Law states that for a closed loop series path the algebraic sum of all the voltages around any closed loop in a circuit is equal to zero. + +This can be found from the column space. + +In terms of vectors, the law states that the components of Ax = b add to zero around every loop. + +### Kirchhoff's Current Law + +Kirchhoff's Current Law states that the current flowing into a node (or a junction) must be equal to the current flowing out of it. + +This can be found from the left nullspace. + +In terms of vectors, the law states that `A``T``y = 0`. Flow in equals flow out at each node. + +### Euler's Formula + +The incidence matrix A comes from a connected graph with n nodes and m edges. The row space and column space have dimensions `r = n - 1`. The nullspaces of A and AT have dimensions `1` and `m - n + 1`: + +* N(A) - The constant vectors `(c, c, ... , c)` make up the nullspace of `A` : `dim = n - r = 1`. +* C(AT) - The edges of any tree give `r` independent rows of `A` : `r = n - 1`. +* C(A) - Voltage Law - The components of `Ax` add to zero around all loops : `dim = n - 1`. 
+* N(AT) - Current Law - `A``T``y = (flow in) - (flow out) = 0` is solved by loop currents : `dim = m - r` + +There are `m - r = m - n + 1` independent small loops in the graph. + +For every graph in a plane, linear algebra yields Euler's formula: + +`(number of nodes) - (number of edges) + (number of small loops) = 1` + +This is `n - m + (m - n + 1) = 1` + +How? + +`n - m + (m - n + 1)` + +`= n - m + m - r` + +`= n - r` + +`= 1` + +### Ohm's Law + +* If resistances are 1, Ohm's Law will match `y = -Ax`. Then `A``T``y = -A``T``Ax = 0`. + +* Minus sign is there in circuit theory - we change from `Ax` to `-Ax`. This is because the flow is from higher potential to lower potential. There is (positive) current from node 1 to node 2 when `x``1`` - x``2` is positive, whereas `Ax` was constructed to yield `x``2`` - x``1`. + +* Without any sources, the solution to `A``T``Ax = 0` will just be no flow: `x = 0 and y = 0`. + +* So for current to flow, we either need to fix voltages to one or more nodes, or add voltage or current sources. + +* For example, on adding a current source, +`A``T``Ax = f` + +* Here, `f` represents the source added. + +* This is because Kirchhoff's Current Law changes from `A``T``y = 0` to `A``T``y = -f`, to balance the source `f` from outside. + +* The `A``T``A` is the graph Laplacian matrix. It is always symmetric. + +* But this doesn't consider the whole picture because the resistances are 1 here. + +* Let `R` be a matrix of resistance values. + +* Now, Ohm's Law takes the form of : `Ry = -Ax` + +* Let `C` be a matrix of conductance values. `C` is called the Conductance Matrix. It is a diagonal `m` by `m` matrix. + +* `C = R``-1` + +* Now, Ohm's Law takes the form of : `y = -CAx` + +* Ohm's Law states that current along an edge = conductance times the voltage difference. + +* Ohm's Law for all m currents is `y = -CAx`. The vector `Ax` gives the potential differences, and `C` multiplies by the conductances. + +* Combining Ohm's Law with Kirchhoff's Current Law (`A``T``y = 0` or `A``T``y = -f` for a current source `f`), we get `A``T``CAx = 0` or `A``T``CAx = f`. + +* Finally, we get `A``T``CAx = f`. diff --git a/RevisionNotes/MIT_Lec_14.md b/RevisionNotes/MIT_Lec_14.md new file mode 100644 index 0000000..134e005 --- /dev/null +++ b/RevisionNotes/MIT_Lec_14.md @@ -0,0 +1,99 @@ +# Orthogonal vectors and subspaces "The 90 degree Chapter" + +## Overview: + +- Row space is perpendicular to null space +- Column space is perpendicular to left null space + +![li](Images/RowSpace_per_Nullspace.jpg) + +## Orthogonal vectors: + +- Two vectors are orthogonal if the angle between them is 90 degrees. +- `x` and `y` are orthogonal if xTy = 0 or yTx = 0. + +* **Question**: Which vector is orthogonal to every vector? +
+ +Answer + +The zero vector. +
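+
+(A tiny numeric example of the test above: x = (1, 2, 3) and y = (2, -1, 0) give xTy = 2 - 2 + 0 = 0, so they are orthogonal; x = (1, 2, 3) and y = (1, 1, 1) give xTy = 6, so they are not.)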
+
+## Orthogonal subspaces
+
+- Subspace S is orthogonal to subspace T means: every vector in S is orthogonal to every vector in T.
+- The blackboard is not orthogonal to the floor: the two subspaces meet in a line, and a nonzero vector in that line is not orthogonal to itself.
+- For two subspaces to be orthogonal they can only share the zero vector, so their dimensions can add up to at most the dimension of the whole space.
+- In 3 dimensions, a plane and a line can form an orthogonal pair, since 2 + 1 = 3.
+- Two planes in 3 dimensions (like the blackboard and the floor) are never orthogonal, since 2 + 2 > 3.
+- In the plane, the subspace containing only the zero vector and any line through the origin are orthogonal subspaces.
+- A line through the origin and the whole plane are never orthogonal subspaces.
+- Two lines through the origin are orthogonal subspaces if they meet at right angles.
+
+## Rowspace is orthogonal to nullspace
+
+- **Why?**
+- Because Ax = 0 means the dot product of x with each row of A is 0.
+- ![li](Images/Row1-n.jpg)
+- Therefore the dot product of x with any combination of the rows of A is also 0 (another way to say the same thing).
+- The column space is orthogonal to the left nullspace of A because they are the row space and the nullspace of AT.
+- In fact the row space and the nullspace of a matrix are orthogonal complements: together they split Rn into two perpendicular subspaces.
+- For the same reason, in 3 dimensions two perpendicular lines cannot be a rowspace-nullspace pair: their dimensions add to 1 + 1 = 2, but they must add to n = 3.
+- **Example**
+
+- A = [ 1 2 5 ]  
+      [ 2 4 10 ]
+
+   [ 1 2 5 ]  [x1]   [0]  
+   [ 2 4 10 ] [x2] = [0]  
+              [x3]
+
+- Rowspace dimension = 1
+- Basis = [1]
+          [2]
+          [5]
+- Nullspace dimension = 2 (It is a plane through the origin perpendicular to the Basis of rowspace).
+
+ +- The nullspace and the row space are orthogonal complements in Rn. +- The nullspace contains all the vectors that are perpendicular to the row space, and vice versa. +- The subspaces come in orthogonal pairs + +## To solve Ax = b when there is no solution, when no. of eqns (m) > no. of variables (n) : + +- Due to measurement error, Ax = b is often unsolvable if m > n. +- Our next challenge is to find the best possible solution in this case. +- The matrix ATA plays a key role in this effort: the central equation is ATAxhat= ATb. +- Here xhat is different from x which is the best possible solution. +- Also ATA is square (n × n) and symmetric. + +- xhat = (ATA)-1ATb +- If +1) The columns of A are linearly independent. +2) ATA is invertible. + +- **Example** +
+- A = [ 1 1 ]  
+      [ 1 2 ]
+      [ 1 5 ]
+
+- Then ATA =
+
+  [ 1 1 1 ][ 1 1 ]   [ 3 8 ]  
+  [ 1 2 5 ][ 1 2 ] = [ 8 30 ]
+           [ 1 5 ]
+
+- Here it is invertible therefore we can find a possible solution.
+- ATA is not always invertible.
+- Example
+  [ 1 1 1 ][ 1 3 ]   [ 3 9 ]  
+  [ 3 3 3 ][ 1 3 ] = [ 9 27 ]
+           [ 1 3 ]
+- Here rank is 1 and it is not invertible.
+- Therefore, ATA is invertible exactly when A has independent columns.
+
+- **Pointers**:
+- N(ATA) = N(A)
+- rank of ATA = rank of A.
diff --git a/RevisionNotes/MIT_Lec_15.md b/RevisionNotes/MIT_Lec_15.md
new file mode 100644
index 0000000..2bb257e
--- /dev/null
+++ b/RevisionNotes/MIT_Lec_15.md
@@ -0,0 +1,27 @@
+# 15 - Projections onto subspaces
+
+### The basics of projection
+![Projection image](./Images/MIT_15_projection.png)
+
+### A basic formula for projecting b onto the line through a single vector a
+![Projection derivation example](./Images/MIT_15_basic.png)
+
+- Let `p` be the projection of `b` onto the line through `a`.
+- Since `p` is the point on that line closest to `b`, the error `e = b - p` must be perpendicular to `a`.
+- Now `p` is a scalar multiple of `a`, so write `p = ax` with `x` a scalar.
+- Perpendicularity gives aT (b - ax) = 0, so x = aT b / aT a.
+- Substituting back, p = a (aT b / aT a).
+- By the associative law, p = (a aT / aT a) b, and the matrix in front of b is known as the projection matrix.
+
+### Applying this formula to approximate Ax = b when it has no solution
+
+- Instead of solving `Ax = b` we solve `A xhat = p`, where `p` is the projection of `b` onto the column space of A.
+- So we first find `p` and then take `xhat` as the closest possible answer.
+- In this case `b - p`, i.e. `b - A xhat`, must be perpendicular to the column space; for a 2-dimensional column space with basis vectors `a1` and `a2`,
+- a1T (b - A xhat) = 0 and a2T (b - A xhat) = 0, which stack together into AT (b - A xhat) = 0.
+- Rearranging, AT A xhat = AT b, and p = A (AT A)^-1 AT b.
+
+### Some important properties of P (the projection matrix)
+- PT = P
+- P . P = P
diff --git a/RevisionNotes/MIT_Lec_16.md b/RevisionNotes/MIT_Lec_16.md
new file mode 100644
index 0000000..f16fc82
--- /dev/null
+++ b/RevisionNotes/MIT_Lec_16.md
@@ -0,0 +1,15 @@
+# 16 - Least Square Approximations
+
+In this chapter we come to an important application of projections, and to what is perhaps the most basic prediction formula in statistics.
+
+As usual we are trying to fit a linear hyperplane to a set of points; we call this **linear regression**.
+
+So when approximating `Ax = b`, what are A, x and b?
+
+A = the matrix whose rows are the individual data points to be fit.
+x = the vector of weights, one per feature.
+b = the vector of the dependent variable (the quantity to be predicted).
+
+Here estimating x is essentially estimating the weights of the variables.
+
+So we are done estimating the weights and fitting the points to a hyperplane.
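+
+A minimal sketch tying this back to the normal equations AT A xhat = AT b from the earlier notes (`fit_line` is a hypothetical helper, the data points are made up purely for illustration, and numpy is used only for array bookkeeping; the 2x2 normal equations are solved by hand):
+
+```python
+import numpy as np
+
+def fit_line(t, b):
+    """Least-squares fit of b ~ c + d*t; returns (c, d)."""
+    A = np.column_stack([np.ones_like(t), t])   # columns: constant term, t
+    AtA = A.T @ A                               # 2x2 matrix
+    Atb = A.T @ b
+    # Solve AT A xhat = AT b explicitly for the 2x2 case (Cramer's rule),
+    # so no linear-algebra solver is needed.
+    (a11, a12), (a21, a22) = AtA
+    det = a11 * a22 - a12 * a21
+    c = (Atb[0] * a22 - a12 * Atb[1]) / det
+    d = (a11 * Atb[1] - Atb[0] * a21) / det
+    return c, d
+
+t = np.array([1.0, 2.0, 5.0])   # feature values
+b = np.array([1.0, 2.0, 4.0])   # made-up measurements to be predicted
+c, d = fit_line(t, b)
+print(f"best-fit line: b = {c:.3f} + {d:.3f} t")   # A xhat is the projection of b onto C(A)
+```
+
+The same idea, with more columns in A, is what the simple linear regression assignment asks for.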