NumPy for Linear Algebra Applications

This guide shows you how to perform key operations such as matrix multiplication, eigenvalue calculation, and solving linear systems, using NumPy's linear algebra functions.



NumPy is an efficient tool for linear algebra. It handles matrix operations and solves systems of equations. This article walks through the core NumPy (and SciPy) functions used for linear algebra.

Matrix Multiplication

Matrix multiplication creates a new matrix from two matrices. Each element of the result is the dot product of a row of the first matrix with a column of the second, so the first matrix must have as many columns as the second has rows.

import numpy as np

# Define matrices
A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
# Matrix multiplication
C = np.dot(A, B)
print("Matrix Multiplication:\n", C)

# Output:
# [[19 22]
# [43 50]]
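
For 2-D arrays, the @ operator (equivalent to np.matmul) performs the same multiplication and is the more idiomatic spelling:

# The @ operator gives the same result for 2-D arrays
C_alt = A @ B
print("Matrix Multiplication with @:\n", C_alt)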

Matrix Inversion

Multiplying a matrix by its inverse yields the identity matrix. The inverse is useful for solving systems of linear equations. Only square, non-singular matrices have an inverse.

# Define a square matrix
A = np.array([[1, 2], [3, 4]])
# Matrix inversion
A_inv = np.linalg.inv(A)
print("Matrix Inversion:\n", A_inv)

# Output:
# [[-2.   1.]
# [ 1.5 -0.5]]
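
A quick check: multiplying A by its computed inverse should return the identity matrix, up to floating-point rounding.

# Verify: A @ A_inv should be the identity matrix
print("A @ A_inv:\n", A @ A_inv)
print("Is identity:", np.allclose(A @ A_inv, np.eye(2)))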

Matrix Determinant

The determinant is a scalar computed from a square matrix. A nonzero determinant means the matrix is invertible; a determinant of zero means the matrix is singular.

# Define a square matrix
A = np.array([[1, 2], [3, 4]])
# Compute the determinant
det_A = np.linalg.det(A)
print("Determinant of the Matrix:", det_A)

# Output: -2.0000000000000004 (exactly -2; the trailing digits are floating-point rounding)
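
A determinant of zero flags a singular matrix, one that np.linalg.inv would reject. A minimal example:

# A matrix with linearly dependent rows has determinant 0
S = np.array([[1, 2], [2, 4]])  # second row is twice the first
print("Determinant of singular matrix:", np.linalg.det(S))  # 0 (no inverse exists)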

Matrix Trace

The trace is the sum of the diagonal elements of a square matrix, so the result is a single number.

# Define a square matrix
A = np.array([[1, 2], [3, 4]])
# Compute the trace of the matrix
trace_A = np.trace(A)
print("Trace of the Matrix:", trace_A)

# Output: 5

Matrix Transpose

The transpose flips a matrix over its diagonal, swapping rows with columns: an m x n matrix becomes n x m.

# Define a matrix
A = np.array([[1, 2, 3], [4, 5, 6]])

# Compute the transpose of the matrix
A_T = np.transpose(A)
print("Transpose of the Matrix:\n", A_T)

# Output:
# [[1 4]
# [2 5]
# [3 6]]
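
The .T attribute is the usual NumPy shorthand for the same operation:

# .T is equivalent to np.transpose for 2-D arrays
print("Transpose via .T:\n", A.T)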

Eigenvalues and Eigenvectors

For a square matrix A, an eigenvector v satisfies Av = λv: the transformation scales v by the eigenvalue λ without changing its direction.

# Define a square matrix
A = np.array([[1, 2], [3, 4]])
# Compute eigenvalues and eigenvectors
eigenvalues, eigenvectors = np.linalg.eig(A)
print("Eigenvalues:", eigenvalues)
print("Eigenvectors:\n", eigenvectors)

# Output:
# Eigenvalues: [-0.37228132  5.37228132]
# Eigenvectors:
# [[-0.82456484 -0.41597356]
# [ 0.56576746 -0.90937671]]
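
np.linalg.eig returns the eigenvectors as columns, so you can confirm the defining property Av = λv for any eigenpair:

# Check A @ v = lambda * v for the first eigenpair
v = eigenvectors[:, 0]
lam = eigenvalues[0]
print("A @ v:     ", A @ v)
print("lambda * v:", lam * v)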

LU Decomposition

LU decomposition factors a matrix into a lower triangular matrix (L) and an upper triangular matrix (U). SciPy's lu uses partial pivoting, so it also returns a permutation matrix (P) with A = PLU. The factorization is a standard way to solve linear systems and compute determinants efficiently.

import numpy as np
from scipy.linalg import lu

# Define a matrix
A = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
# LU Decomposition
P, L, U = lu(A)
# Display results
print("LU Decomposition:")
print("P matrix:\n", P)
print("L matrix:\n", L)
print("U matrix:\n", U)

# Output
# LU Decomposition:
# P matrix:
# [[0. 1. 0.]
#  [0. 0. 1.]
#  [1. 0. 0.]]
# L matrix:
# [[ 1.          0.          0.        ]
#  [ 0.33333333  1.          0.        ]
#  [ 0.66666667 -0.5         1.        ]]
# U matrix:
# [[ 7.          8.          9.        ]
#  [ 0.          0.33333333  0.66666667]
#  [ 0.          0.          0.        ]]
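
Multiplying the factors back together is a quick way to confirm the decomposition:

# Verify the factorization: P @ L @ U should reconstruct A
print("P @ L @ U reconstructs A:", np.allclose(P @ L @ U, A))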

QR Decomposition

QR decomposition factors a matrix into an orthogonal matrix (Q) and an upper triangular matrix (R). It is widely used to solve linear least squares problems and as a building block for eigenvalue algorithms.

import numpy as np
from scipy.linalg import qr
# Define a matrix
A = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
# QR Decomposition
Q, R = qr(A)
# Display results
print("QR Decomposition:")
print("Q matrix:\n", Q)
print("R matrix:\n", R)

# Output
# QR Decomposition:
# Q matrix:
# [[-0.26726124 -0.78583024  0.55708601]
#  [-0.53452248 -0.08675134 -0.83125484]
#  [-0.80178373  0.6172134   0.08122978]]
# R matrix:
# [[-7.41619849 -8.48528137 -9.55445709]
#  [ 0.         -0.90453403 -1.80906806]
#  [ 0.          0.          0.        ]]
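
Two quick checks confirm the result: Q should be orthogonal (its transpose times itself is the identity), and the product Q @ R should reconstruct A:

# Verify orthogonality of Q and the factorization itself
print("Q.T @ Q is identity:", np.allclose(Q.T @ Q, np.eye(3)))
print("Q @ R reconstructs A:", np.allclose(Q @ R, A))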

SVD (Singular Value Decomposition)

SVD factors a matrix into three parts, A = UΣV*: U and V* are orthogonal matrices, and Σ is a diagonal matrix holding the singular values. It is useful in many applications, such as dimensionality reduction, low-rank approximation, and solving linear systems.

import numpy as np
from scipy.linalg import svd
# Define a matrix
A = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
# Singular Value Decomposition
U, s, Vh = svd(A)
# Display results
print("SVD Decomposition:")
print("U matrix:\n", U)
print("Singular values:\n", s)
print("Vh matrix:\n", Vh)

# Output
# SVD Decomposition:
# U matrix:
# [[-0.21483724  0.88723069  0.40824829]
#  [-0.52058739  0.24964395 -0.61237224]
#  [-0.82633755 -0.38794279  0.61237224]]
# Singular values:
# [16.84810335  1.06836951  0.        ]
# Vh matrix:
# [[-0.47967118 -0.57236779 -0.66506439]
#  [ 0.77669099  0.07568647 -0.62531812]
#  [-0.40824829  0.81649658 -0.40824829]]
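
svd returns the singular values as a 1-D array, so rebuild the diagonal matrix Σ with np.diag to verify the factorization:

# Reconstruct A from the three factors
A_rebuilt = U @ np.diag(s) @ Vh
print("U @ diag(s) @ Vh reconstructs A:", np.allclose(A_rebuilt, A))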

Direct Solution of Linear Equations

Solving a linear system means finding the values of the variables that satisfy every equation at once. With two variables, each equation describes a straight line and the solution is the point where the lines meet; np.linalg.solve handles the general case Ax = B directly.

# Define matrix A and vector B
A = np.array([[3, 1], [1, 2]])
B = np.array([9, 8])
# Solve the system of linear equations Ax = B
x = np.linalg.solve(A, B)
print("Solution to Ax = B:", x)

# Output: [2. 3.]
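
Substituting the solution back into the system is a quick sanity check:

# Verify: A @ x should reproduce B
print("A @ x:", A @ x)
print("Solves the system:", np.allclose(A @ x, B))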

Least Squares Fitting

Least squares fitting finds the best match for a set of data points by minimizing the sum of squared differences between the actual and predicted values. Here the first column of A supplies an intercept term and the second a slope, so the solution is the best-fit line through the points.

# Define matrix A and vector B
A = np.array([[1, 1], [1, 2], [1, 3]])
B = np.array([1, 2, 2])
# Solve the linear least-squares problem
x, residuals, rank, s = np.linalg.lstsq(A, B, rcond=None)
print("Least Squares Solution:", x)
print("Residuals:", residuals)
print("Rank of the matrix:", rank)
print("Singular values:", s)

# Output:
# Least Squares Solution: [0.66666667 0.5]
# Residuals: [0.16666667]
# Rank of the matrix: 2
# Singular values: [4.07914333 0.60049122]
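
The solution gives an intercept of about 0.667 and a slope of 0.5. Evaluating the fitted line at the data points shows where the residual sum of squares comes from:

# Predicted values of the fitted line at t = 1, 2, 3
predictions = A @ x
print("Predictions:", predictions)  # approximately [1.167 1.667 2.167]
print("Sum of squared errors:", np.sum((B - predictions) ** 2))  # matches the residuals above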

Matrix Norms

Matrix norms measure the size of a matrix. The Frobenius norm is the square root of the sum of squared entries, the 1-norm is the maximum absolute column sum, and the infinity norm is the maximum absolute row sum. Norms are useful for checking numerical stability and analyzing matrices.

# Define a matrix
A = np.array([[1, 2], [3, 4]])
# Compute various norms
frobenius_norm = np.linalg.norm(A, 'fro')
one_norm = np.linalg.norm(A, 1)
infinity_norm = np.linalg.norm(A, np.inf)
print("Frobenius Norm:", frobenius_norm)
print("1-Norm:", one_norm)
print("Infinity Norm:", infinity_norm)

# Output:
# Frobenius Norm: 5.477225575051661
# 1-Norm: 6.0
# Infinity Norm: 7.0
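
The 2-norm (the largest singular value, also called the spectral norm) is available the same way:

# The 2-norm equals the largest singular value of A
two_norm = np.linalg.norm(A, 2)
print("2-Norm:", two_norm)  # approximately 5.465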

Condition Number

The condition number of a matrix measures how sensitive the solution of a linear system is to small changes in the input. A high condition number means the matrix is ill-conditioned and computed solutions can be numerically unstable.

# Define a matrix
A = np.array([[1, 2], [3, 4]])
# Compute the condition number of the matrix
condition_number = np.linalg.cond(A)
print("Condition Number:", condition_number)

# Output: 14.933034373659268
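
By default, np.linalg.cond uses the 2-norm, so the result equals the ratio of the largest to the smallest singular value:

# cond(A) in the 2-norm is sigma_max / sigma_min
sigma = np.linalg.svd(A, compute_uv=False)
print("sigma_max / sigma_min:", sigma[0] / sigma[-1])  # same value as above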

Matrix Rank

The rank of a matrix is the number of linearly independent rows or columns, which equals the dimension of the space spanned by them. A square matrix is invertible exactly when it has full rank.

# Define a matrix
A = np.array([[1, 2], [3, 4]])
# Compute the rank of the matrix
rank_A = np.linalg.matrix_rank(A)
print("Matrix Rank:", rank_A)

# Output: 2
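
A matrix with linearly dependent rows is rank-deficient. The 3x3 matrix from the decomposition examples above is one such case:

# Row 3 equals 2*row 2 - row 1, so only two rows are independent
D = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print("Rank:", np.linalg.matrix_rank(D))  # 2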

Conclusion

NumPy simplifies linear algebra tasks such as matrix operations, decompositions, and solving systems of equations. You can learn more about these functions in the official NumPy linear algebra documentation.

Jayita Gulati is a machine learning enthusiast and technical writer driven by her passion for building machine learning models. She holds a Master's degree in Computer Science from the University of Liverpool.

