# Merge branch '2021' of https://github.com/SRA-VJTI/linear-algebra-study-group into patch-1

Showing 86 changed files with 912 additions and 0 deletions.

## Assignment

Q) Write a program to find the reduced row echelon form and the x<sub>particular</sub>, x<sub>nullspace</sub>, and x<sub>complete</sub> solutions of a given Ax = b, where the matrix A may have any shape (m×n). You can use any programming language, but the use of any linear algebra/mathematical library is not allowed.

You need to submit the code and a screenshot of the result.

Input:
* Matrix A
* Vector b

The result should print the following:
* Reduced row echelon form of the matrix
* x<sub>particular</sub>
* x<sub>nullspace</sub>
* x<sub>complete</sub>

Make sure the code is well commented.

## Assignment for Lectures 10 to 12

Upload a separate document for each question.

### Q.1) Orthogonality in Subspaces

Prove the orthogonality between:
1. the row space and the nullspace, AND
2. the column space and the left nullspace.

This answer should be handwritten and uploaded as a PDF file.

### Q.2) Code for Four Fundamental Subspaces

Write a program (in any language) to find a basis for each of the four fundamental subspaces of a matrix. Don't use linear algebra libraries.

Input Format:

```
<Number of rows> <Number of columns>
<Row-wise entry of elements of the matrix>
```

Output Format:

```
<Basis of Column Space>
<Basis of Nullspace>
<Basis of Row Space>
<Basis of Left Nullspace>
```

Upload the code file and a document containing screenshot(s) of the output.
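
As an illustration of the input format above, here is a minimal, hypothetical Python sketch that only reads the matrix from standard input (it does not compute the subspaces, which is the assignment itself); the function name and parsing details are assumptions, not part of the required submission.

```python
import sys

def read_matrix():
    """Read a matrix in the assignment's input format from stdin.

    First line: <Number of rows> <Number of columns>
    Following lines: row-wise entries of the matrix.
    """
    tokens = sys.stdin.read().split()
    rows, cols = int(tokens[0]), int(tokens[1])
    entries = list(map(float, tokens[2:2 + rows * cols]))
    # Group the flat list of entries into rows.
    return [entries[r * cols:(r + 1) * cols] for r in range(rows)]

if __name__ == "__main__":
    A = read_matrix()
    for row in A:
        print(row)
```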

### Q.3) Code to Find the Currents and Potentials for a Graph

Write a program (in any language) to find the current along the edges and the potentials at the nodes of a graph (electric circuit) using the incidence matrix, the conductance along each edge, and the external current source (refer to Section 10.1 of *Introduction to Linear Algebra* by Gilbert Strang, Fifth Edition, pg. 457). Don't use linear algebra libraries.

Input Format:

```
<Number of rows> <Number of columns>
<Row-wise entry of elements of the incidence matrix>
<Conductance for each edge>
<External current source>
```

Output Format:

```
<Current along edges>
<Potential at nodes>
```

Upload the code file and a document containing screenshot(s) of the output. Make assumptions wherever required, but state them clearly in the document along with the conventions used.

### Q.4) Problems on Graphs

1. Write down the 3 by 3 incidence matrix A for the triangle graph. The first row has -1 in column 1 and +1 in column 2. What vectors (x<sub>1</sub>, x<sub>2</sub>, x<sub>3</sub>) are in its nullspace? How do you know that (1, 0, 0) is not in its row space?

![Triangle Network](Images/Assignment_Triangle_Lec10to12.PNG)

2. With conductances c<sub>1</sub> = c<sub>2</sub> = 2 and c<sub>3</sub> = c<sub>4</sub> = c<sub>5</sub> = 3, multiply the matrices A<sup>T</sup>CA. Find a solution to A<sup>T</sup>CAx = f = (1, 0, 0, -1). Write these potentials x and currents y = -CAx on the nodes and edges of the square graph.

![Square Network](Images/Assignment_Square_Lec10to12.PNG)

3. Suppose A is an 11 by 8 incidence matrix formed from a connected but unknown graph. How many columns of A are independent? What significance does A<sup>T</sup>A have? HINT: Check the diagonal entries. With this information, what is the sum of the diagonal entries of A<sup>T</sup>A for this specific 11 × 8 matrix?

4. If A = uv<sup>T</sup> is a 2 by 2 matrix of rank 1, what are the four fundamental subspaces of A? If another matrix B produces the same four subspaces, what is the relationship between A and B?

These answers should be handwritten and uploaded as a PDF file.

# Assignment 5: Implementing Simple Linear Regression

Implement a simple linear regression model from scratch using only numpy and other basic libraries (sklearn may be used only for loading datasets).

Implement it on the following datasets:
1. Iris: regress Petal Width against Petal Length (learn to split the data on your own :)) (10 points)
2. (Bonus 10 points) https://en.wikipedia.org/wiki/Transistor_count#Microprocessors : verify the accuracy of Moore's law from the data available here (no libraries allowed for the OLS regression).
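
As a reminder of the underlying formula (not a substitute for the full assignment), here is a minimal sketch of closed-form ordinary least squares on a tiny made-up dataset; the variable names and data are assumptions for illustration only.

```python
# Minimal OLS sketch: fit y = w*x + b with the closed-form least-squares formulas.
xs = [1.0, 2.0, 3.0, 4.0]   # made-up feature values
ys = [2.1, 3.9, 6.2, 8.1]   # made-up targets (roughly y = 2x)

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Slope = covariance(x, y) / variance(x); the intercept makes the line pass through the means.
w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - w * mean_x

print(f"fitted line: y = {w:.3f} * x + {b:.3f}")
```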

# Vectors, what even are they?

## Meaning of Vector
* Viewpoints of various domains of study:
    * For a physicist: arrows pointing in space.
    * For a CS student: an ordered list of numbers.
    * For a mathematician: a generalization of both ideas.

* In linear algebra the vector will almost always be rooted at the origin (0, 0); its tail stays fixed, which differs from the way physics students usually treat vectors.

* A vector's coordinates tell how far to move along the x-axis and the y-axis, and can also be written as a 2×1 matrix.

## Addition
* Vector addition finds the single vector equivalent to applying two or more vectors in succession. In simple words: lay the first vector on a set of axes with its tail at the origin, place the next vector with its tail at the previous vector's head, and repeat. Then draw a straight line from the origin to the head of the last vector; this line is the sum of the vectors.

## Scaling
* Multiplying a vector by a positive scalar changes the magnitude of the vector but leaves its direction unchanged. The scalar changes the size of the vector: it "scales" the vector.
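
A minimal Python sketch of these two operations, treating a 2-D vector as a plain list of coordinates (tip-to-tail addition is just component-wise addition):

```python
def add(v, w):
    """Vector addition: add corresponding components (tip-to-tail geometrically)."""
    return [v[0] + w[0], v[1] + w[1]]

def scale(c, v):
    """Scaling: multiply every component by the scalar c."""
    return [c * v[0], c * v[1]]

v = [1, 2]
w = [3, -1]
print(add(v, w))      # [4, 1]
print(scale(2, v))    # [2, 4]
```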

# Linear combinations, span, and basis vectors

* The i vector (î) is the unit vector along the x-axis and j (ĵ) is the unit vector along the y-axis.

* î and ĵ form the basis of the coordinate system; every other vector is just a scaled combination of these basis vectors.

* **Basis**: a basis of a vector space is a set of linearly independent vectors that span the full space.

* A **linear combination** is an expression constructed from a set of terms by multiplying each term by a constant and adding the results (e.g. a linear combination of x and y is any expression of the form ax + by, where a and b are constants).

* The choice of basis vectors matters when choosing a coordinate system.

* **Span**: the span of two vectors is the set of all their linear combinations (i.e. all the points they can reach).

* If adding a vector to a set does not enlarge the span (the new vector already lies in the span of the others), the vectors are linearly dependent; if it adds a new dimension to the span, they are linearly independent.

* The same ideas of basis and span carry over to 3 dimensions, where a linear combination of 3 independent vectors can span the entire 3-D space.

![Linear Combination of Scalars](https://miro.medium.com/max/875/1*oUMNZs9xh-Hnyc2gs4RpVQ.png 'Linear Combination of Scalars')

![Every point in the Plane can be reached by scaling the two vectors](https://miro.medium.com/max/875/1*aILVKRoggnGh4MXzrWHz-g.png 'Every point in the Plane can be reached by scaling the two vectors')
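
A small Python sketch of these ideas: building a linear combination of î and ĵ, and a simple 2-D linear-dependence check (two 2-D vectors are dependent exactly when one is a scalar multiple of the other, i.e. their "cross" determinant is zero):

```python
def linear_combination(a, b):
    """Return a*î + b*ĵ, i.e. the point with coordinates (a, b)."""
    i_hat, j_hat = [1, 0], [0, 1]
    return [a * i_hat[0] + b * j_hat[0], a * i_hat[1] + b * j_hat[1]]

def linearly_dependent(v, w):
    """Two 2-D vectors are dependent when they lie along the same line through the origin."""
    return v[0] * w[1] - v[1] * w[0] == 0

print(linear_combination(3, 2))            # [3, 2]
print(linearly_dependent([1, 2], [2, 4]))  # True  (same line, span is 1-D)
print(linearly_dependent([1, 2], [0, 1]))  # False (together they span the plane)
```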

# Linear transformations and matrices

**Linear transformation**: "linear" means all lines remain straight and the origin stays fixed (equivalently, grid lines remain parallel and evenly spaced); "transformation" is just a function that takes an input vector and gives an output vector, with the word chosen to suggest the movement of a vector from its initial position to its final position.

Since we need to think about many vectors and their transformations at once, it is often easier to visualize each vector as a point in space.

![](https://miro.medium.com/max/875/1*G9xdFqbxSUkcz1ng5_gvmQ.png 'Linear Transformation')

* A 2-D linear transformation is completely determined by only 4 numbers:
    * the 2 coordinates where î lands, and
    * the 2 coordinates where ĵ lands.

These four numbers can be arranged in matrix form as follows:

![](https://miro.medium.com/max/875/1*ZJdQgbjflCjXAssTCsB7Bg.png '2X2 Matrix')

Here each column records where î and ĵ land after the transformation.
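
A minimal Python sketch of this idea: the matrix stores where î and ĵ land as its columns, and applying it to a vector [x, y] just forms x times the first column plus y times the second.

```python
def apply_transform(matrix, v):
    """Apply a 2x2 matrix (given as two columns) to the vector v = [x, y].

    matrix = [col_i, col_j], where col_i is where î lands and col_j is where ĵ lands.
    The result is x * col_i + y * col_j.
    """
    col_i, col_j = matrix
    x, y = v
    return [x * col_i[0] + y * col_j[0], x * col_i[1] + y * col_j[1]]

# A 90-degree counter-clockwise rotation: î lands on (0, 1), ĵ lands on (-1, 0).
rotation = [[0, 1], [-1, 0]]
print(apply_transform(rotation, [1, 2]))  # [-2, 1]
```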

# Matrix multiplication as composition

* **Composition** is a way of chaining transformations together: apply one transformation, then apply another to the result. The composition of matrix transformations corresponds exactly to multiplying the two matrices together.

* Matrix multiplication is not commutative (the order of the transformations matters).

* Matrix multiplication is associative: (AB)C = A(BC), since both sides describe applying the same three transformations in the same order.

![](https://miro.medium.com/max/875/1*fV_fDIHuPFQOVDTjsOLxCQ.png 'Composition of Matrices')
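
A small Python sketch of composition: multiplying 2×2 matrices and checking that rotation followed by shear is not the same as shear followed by rotation.

```python
def matmul(a, b):
    """Multiply two 2x2 matrices stored as rows; matmul(a, b) applies b first, then a."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

rotation = [[0, -1], [1, 0]]  # 90-degree counter-clockwise rotation
shear = [[1, 1], [0, 1]]      # horizontal shear

print(matmul(shear, rotation))  # rotate first, then shear:  [[1, -1], [1, 0]]
print(matmul(rotation, shear))  # shear first, then rotate:  [[0, -1], [1, 1]]
```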

# Three-dimensional linear transformations

All the concepts from 2-D matrix transformations carry over to 3-D matrix transformations. The only difference is that instead of 4 numbers in the matrix we now work with 9.

Each of the 3 columns represents the landing position of the î, ĵ and k̂ basis vectors respectively.

![](https://miro.medium.com/max/875/1*wqo5egbllyA_REdr2T4xqg.png '3-D Linear Transformations')

# **Determinant**

- The determinant of a matrix is the factor by which the transformation changes the area covered by the vectors.
![](Images/Lect6_1.png)
- The determinant is zero if the area is squished down to a lower dimension (a line or a point).
- A negative determinant means the orientation of space has been flipped.
- For a 3×3 matrix the determinant is the factor by which volumes change.

![](Images/Lect6_2.png)
![](Images/Lect6_3.png)
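
A tiny Python sketch of the 2×2 determinant and what its sign means:

```python
def det2(m):
    """Determinant of a 2x2 matrix [[a, b], [c, d]] = ad - bc.

    Its absolute value is the area-scaling factor of the transformation;
    a negative value means the orientation of space is flipped.
    """
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

print(det2([[3, 0], [0, 2]]))   #  6: areas grow by a factor of 6
print(det2([[0, 1], [1, 0]]))   # -1: areas unchanged, but orientation flipped
print(det2([[2, 4], [1, 2]]))   #  0: the plane is squished onto a line
```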

# **Inverse, Rank**

![](Images/Lect7_1.png)
- Solving Ax = v can be interpreted visually: x is the vector that lands on v after the transformation A is applied.
- The inverse transformation A⁻¹ is the transformation required to go back from v to x.
- The inverse exists as long as the determinant is not zero (if it is zero, space is squished onto a line or a point, and no single function can un-squish a line back into an area or a volume).
- A⁻¹A does nothing overall, so it is the identity matrix.
- Even when the determinant is zero a solution can still exist, provided v happens to lie on the line (or plane) that the transformation squishes onto.
- When the determinant is 0 and the output is squished down to a line, the rank of the matrix is said to be 1; if the transformation lands on a 2-D plane instead of a line, its rank is 2.
- RANK: the number of dimensions in the output of the transformation.
- Whether it is a line or a plane, the set of all possible outputs is called the column space of the matrix, and each column tells where a basis vector lands.
- Span of the columns = column space.

![](Images/Lect7_2.png)

- (0, 0) is always in the column space, since a linear transformation keeps the origin fixed. The set of vectors that land on the origin after the transformation is called the null space (kernel) of the matrix.
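
A short Python sketch of solving a 2×2 system Ax = v with the explicit inverse formula, guarding against the zero-determinant case:

```python
def solve_2x2(a, v):
    """Solve A x = v for a 2x2 matrix A = [[p, q], [r, s]] using the inverse formula.

    A^-1 = (1/det) * [[ s, -q],
                      [-r,  p]]
    Returns None when det = 0, i.e. when A squishes the plane onto a line.
    """
    (p, q), (r, s) = a
    det = p * s - q * r
    if det == 0:
        return None
    inv = [[s / det, -q / det], [-r / det, p / det]]
    return [inv[0][0] * v[0] + inv[0][1] * v[1],
            inv[1][0] * v[0] + inv[1][1] * v[1]]

print(solve_2x2([[2, 1], [1, 3]], [3, 5]))  # [0.8, 1.4]
print(solve_2x2([[1, 2], [2, 4]], [1, 1]))  # None (rank 1, determinant 0)
```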

# **Non-Square Matrices and Transformation**
- Non-square matrices represent transformations between spaces of different dimensions, e.g. from 2-D to 1-D, from 2-D to 3-D, and so on.

# **Dot Product and Duality**
![](Images/Lect9_1.png)

- The dot product is found by projecting one vector onto the other and multiplying the length of the projection by the length of the other vector. If the projection points in the opposite direction the dot product is negative, and if the vectors are perpendicular the projection (and the dot product) is zero.

![](Images/Lect9_2.png)

- The order doesn't matter: it makes no difference which vector is projected onto which.

![](Images/Lect9_3.png)

- A 1×2 matrix corresponds to a 2-D vector: multiplying by that matrix gives the same number as taking the dot product with the corresponding vector.

![](Images/Lect9_4.png)

![](Images/Lect9_5.png)

- Duality: a natural correspondence between two computations that look different. Every linear transformation from vectors to the number line corresponds to some vector in the space, and applying the transformation is the same as taking the dot product with that vector.
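
A minimal Python sketch of the dot product and its duality with 1×2 matrices:

```python
def dot(v, w):
    """Dot product of two 2-D vectors."""
    return v[0] * w[0] + v[1] * w[1]

def apply_1x2(row, v):
    """Apply a 1x2 matrix [a, b] to a 2-D vector: the same computation as a dot product."""
    return row[0] * v[0] + row[1] * v[1]

v, w = [2, 1], [1, 3]
print(dot(v, w))             # 5  (positive: the vectors point in similar directions)
print(dot([1, 0], [0, 4]))   # 0  (perpendicular vectors)
print(apply_1x2(v, w))       # 5  (duality: the 1x2 matrix acting on w == dot product with v)
```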

# **Cross Product**
![](Images/Lect10_1.png)
- Take the vector v and place w at its tip, then repeat in the other order (take w and place v at its tip); the figure obtained is a parallelogram, and its area is given by the determinant.
- The positive or negative sign comes from the order/orientation of the vectors (counter-clockwise, with ĵ to the left of î, is positive).

![](Images/Lect10_2.png)
![](Images/Lect10_3.png)
- In 3-D the cross product is not a number but a vector: its magnitude is the parallelogram area given by that determinant, and its direction follows the right-hand rule.
![](Images/Lect10_4.png)

# Cross products in the light of linear transformations

The **cross product** a × b is defined as a vector c that is perpendicular (orthogonal) to both a and b, with a direction given by the right-hand rule and a magnitude equal to the area of the parallelogram that the vectors span.

![General way of defining the cross product](Images/Lect11_1.png 'General way of defining the cross product')

A linear transformation to the number line can be matched to a vector, called the dual vector of that transformation, such that performing the linear transformation is the same as taking the dot product with that vector.

![](Images/Lect11_2.png )

![](Images/Lect11_3.png )
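
A small Python sketch of the 3-D cross product, checking that the result is perpendicular to both inputs:

```python
def cross(a, b):
    """3-D cross product a x b (component formula)."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def dot3(a, b):
    return sum(x * y for x, y in zip(a, b))

a, b = [1, 2, 0], [0, 1, 3]
c = cross(a, b)
print(c)                       # [6, -3, 1]
print(dot3(c, a), dot3(c, b))  # 0 0  (perpendicular to both a and b)
```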

# Cramer's rule, explained geometrically

An orthogonal transformation is a linear transformation that preserves a symmetric inner product. In particular, an orthogonal (technically, orthonormal) transformation preserves the lengths of vectors and the angles between vectors. For a general system Ax = b, Cramer's rule expresses each coordinate of the solution as a ratio of two determinants: x<sub>i</sub> = det(A<sub>i</sub>) / det(A), where A<sub>i</sub> is A with its i-th column replaced by b.

![Cramer's formula](Images/Lect12_1.png "Cramer's formula")

![](Images/Lect12_2.png )
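
A minimal Python sketch of Cramer's rule for a 2×2 system, using the determinant ratio described above:

```python
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def cramer_2x2(a, b):
    """Solve A x = b via Cramer's rule: x_i = det(A with column i replaced by b) / det(A)."""
    d = det2(a)
    if d == 0:
        return None  # no unique solution
    x1 = det2([[b[0], a[0][1]], [b[1], a[1][1]]]) / d
    x2 = det2([[a[0][0], b[0]], [a[1][0], b[1]]]) / d
    return [x1, x2]

print(cramer_2x2([[2, 1], [1, 3]], [3, 5]))  # [0.8, 1.4]
```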

# Change of Basis

In linear algebra, a basis for a vector space is a linearly independent set that spans the vector space.

![](Images/Lect13_1.png)

Geometrically, the change-of-basis matrix (whose columns are the other basis vectors written in our coordinates) transforms our grid into the other grid, but numerically it goes the opposite way: it converts a vector described in the other basis into our coordinates.

![](Images/Lect13_2.png)

![](Images/Lect11_3.png)
The inverse of a matrix represents the reverse linear transformation, so the inverse change-of-basis matrix converts a vector from our coordinates into the other basis.

Translating a whole transformation matrix is not the same as translating a vector. The following pictures show the steps involved in expressing a transformation matrix in another coordinate system.

![](Images/Lect13_4.png)
![](Images/Lect13_5.png)
![](Images/Lect13_6.png)
![](Images/Lect13_7.png)
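
A short Python sketch of a change of basis in 2-D: converting coordinates between the other basis and ours with the change-of-basis matrix and its inverse (the basis vectors here are made-up examples).

```python
def matvec(m, v):
    """Multiply a 2x2 matrix (stored as rows) by a 2-D vector."""
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

def inverse_2x2(m):
    (p, q), (r, s) = m
    d = p * s - q * r
    return [[s / d, -q / d], [-r / d, p / d]]

# Columns of A are the other person's basis vectors written in our coordinates.
A = [[2, -1],
     [1,  1]]

their_coords = [3, 2]                       # a vector described in the other basis
our_coords = matvec(A, their_coords)        # A translates their description into ours
print(our_coords)                           # [4, 5]
print(matvec(inverse_2x2(A), our_coords))   # back to [3.0, 2.0]: A^-1 goes the other way
```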

# Eigenvectors and Eigenvalues

![](Images/Lect14_1.png)

- When a vector is transformed it is usually knocked off its own span, but some vectors stay on their span even after the transformation.
- Vectors that remain on their own span after the transformation, only stretched or squished, are called the eigenvectors of the transformation, and the eigenvalues are the factors by which they are stretched or squished.

![](Images/Lect14_2.png)

![](Images/Lect14_3.png)

![](Images/Lect14_4.png)

![](Images/Lect14_5.png)

- We have to find the values of λ for which det(A − λI) is zero, so that a nonzero vector satisfying (A − λI)v = 0 can exist.

![](Images/Lect14_6.png)

![](Images/Lect14_7.png)

![](Images/Lect14_8.png)

![](Images/Lect14_9.png)

![](Images/Lect14_10.png)

![](Images/Lect14_11.png)

Not all matrices have an eigenbasis.

![](Images/Lect15_1.png)

![](Images/Lect15_2.png)
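
A minimal Python sketch for a 2×2 matrix: solve det(A − λI) = 0 with the quadratic formula to get the eigenvalues, then check that an eigenvector is only scaled. The matrix here is a made-up example with real eigenvalues.

```python
import math

A = [[3, 1],
     [0, 2]]

# Characteristic polynomial of a 2x2 matrix: lambda^2 - trace*lambda + det = 0.
trace = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = math.sqrt(trace ** 2 - 4 * det)   # assumes real eigenvalues; complex case would need cmath
eigenvalues = [(trace + disc) / 2, (trace - disc) / 2]
print(eigenvalues)  # [3.0, 2.0]

# [1, 0] stays on its own span: A maps it to 3 * [1, 0], so it is an eigenvector with lambda = 3.
v = [1, 0]
Av = [A[0][0] * v[0] + A[0][1] * v[1], A[1][0] * v[0] + A[1][1] * v[1]]
print(Av)  # [3, 0]
```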

## Computing Ax = 0, Pivot Variables, Special Solutions

* `pivot` = the first non-zero element in each row after elimination
* `pivot columns` = columns containing a pivot
* `pivot variables` = variables corresponding to the pivot columns
* `free columns` = columns without a pivot
* `free variables` = variables corresponding to the free columns
* `rank` = the number of pivot variables

## Computing Ax = 0 using echelon form

Consider the matrix A in the diagram below. Using elementary row transformations, convert the matrix to echelon form.

![Echelon form](./Images/echelon_form_lect7.png)

Now find the solutions x by assigning convenient constants to the free variables.

![Echelon solution](./Images/echelon_sol_lect7.png)

## Reduced Row Echelon Form

If the leading coefficient of each row equals 1 and is the only non-zero entry in its column, the matrix is said to be in reduced row echelon form (rref). Consider the example below.

![rref](./Images/reduced_row_lect7.png)

Here the block I comes from the pivot columns and the block F comes from the free columns.

## Special Solutions

Consider the blocks I and F from the example above. We can rewrite R in the form given below.

![special sol](./Images/special_sol_lect7.png)

The columns of the nullspace matrix are the special solutions: in each column, the free variables take the "identity" values from `I` and the pivot variables take the values from `-F`.
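
A small Python sketch of this whole procedure on a tiny matrix: reduce to rref, identify the pivot and free columns, and build one special solution per free column (free variable set to 1, pivot variables read off from -F). This is a generic illustration under its own made-up matrix, not necessarily the worked example in the images.

```python
def rref(A):
    """Reduce a matrix (list of row lists) to reduced row echelon form; return (R, pivot_cols)."""
    R = [row[:] for row in A]
    rows, cols = len(R), len(R[0])
    pivot_cols = []
    r = 0
    for c in range(cols):
        # Find a row at or below r with a non-zero entry in column c.
        pivot_row = next((i for i in range(r, rows) if abs(R[i][c]) > 1e-12), None)
        if pivot_row is None:
            continue  # free column
        R[r], R[pivot_row] = R[pivot_row], R[r]
        R[r] = [x / R[r][c] for x in R[r]]          # scale the pivot to 1
        for i in range(rows):
            if i != r and abs(R[i][c]) > 1e-12:     # eliminate this column everywhere else
                R[i] = [x - R[i][c] * y for x, y in zip(R[i], R[r])]
        pivot_cols.append(c)
        r += 1
    return R, pivot_cols

def nullspace_basis(A):
    """Special solutions of Ax = 0: one per free column."""
    R, pivot_cols = rref(A)
    cols = len(A[0])
    free_cols = [c for c in range(cols) if c not in pivot_cols]
    basis = []
    for fc in free_cols:
        x = [0.0] * cols
        x[fc] = 1.0                                  # free variable = 1 (the I block)
        for r, pc in enumerate(pivot_cols):
            x[pc] = -R[r][fc]                        # pivot variables from the -F block
        basis.append(x)
    return basis

A = [[1.0, 2.0, 2.0, 2.0],
     [2.0, 4.0, 6.0, 8.0],
     [3.0, 6.0, 8.0, 10.0]]
print(nullspace_basis(A))  # two special solutions spanning the nullspace
```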

## Example

Consider the following matrix A.

![Matrix A](./Images/matrix_a_lect7.png)

Find the nullspace of the given matrix.

<details>
<summary>
Answer
</summary>

* Echelon form

![solution](./Images/sol_lect7.png)

* Reduced form

![rref solution](./Images/rref_sol_lect7.png)

</details>