An Efficiency Boost
December 27, 2025
Context (Survey Drone): We’re consulting for a company that uses a small survey drone to inspect bridges and buildings. The drone is controlled by an autopilot system that restricts its motion to three fixed directions: (i) forward while slightly increasing altitude, (ii) right while slightly increasing altitude, and (iii) backward, slightly left, while slightly decreasing altitude. These three directions are enough to navigate space and have been optimized for the types of structures the company typically investigates.
When the autopilot system runs, it takes a target location, solves the corresponding vector equation, and runs the drone’s motors appropriately in order to arrive at that location. For each structure, the company generally inspects hundreds of locations with the drone.
Video from Skydio.
The LU decomposition approach rewrites the system \(A\vec{x} = \vec{b}\) as \(LU\vec{x} = \vec{b}\).
\(\bigstar\) Notice that the constant vector is left unchanged since we are just decomposing \(A\) as the product of a lower triangular matrix \(L\) and an upper triangular matrix \(U\).
In everything we do in numerical methods, keep the adage “Work Smart, Not Hard” in the back of your mind.
Solving a system \(LU\vec{x} = \vec{b}\) can be done easily if we work strategically.
Example: Consider the system of the form \(LU\vec{x} = \vec{b}\) where \(L = \left[\begin{array}{cc} 2 & 0\\ 1 & -3\end{array}\right]\), \(U = \left[\begin{array}{cc} 1 & -1\\ 0 & 2\end{array}\right]\), and \(\vec{b} = \left[\begin{array}{c} 16\\ 38\end{array}\right]\).
Start by solving \(L\vec{y} = \vec{b}\).
\[\left[\begin{array}{cc} 2 & 0\\ 1 & -3\end{array}\right]\left[\begin{array}{c} y_1\\ y_2\end{array}\right] = \left[\begin{array}{c} 16\\ 38\end{array}\right]\]
Forward substitution will work.
\[\begin{align*} 2y_1 &= 16\\ \implies y_1 &= 8~\checkmark\\ y_1 - 3y_2 &= 38\\ \implies 8 -3y_2 &= 38\\ \implies -3y_2 &= 30\\ \implies y_2 &= -10~\checkmark \end{align*}\]
Now we’ll solve \(U\vec{x} = \vec{y}\).
\[\left[\begin{array}{cc} 1 & -1\\ 0 & 2\end{array}\right]\left[\begin{array}{c} x_1\\ x_2\end{array}\right] = \left[\begin{array}{c} 8\\ -10\end{array}\right]\]
We can use back substitution.
\[\begin{align*} 2x_2 &= -10\\ \implies x_2 &= -5~\checkmark\\ x_1 - x_2 &= 8\\ \implies x_1 - \left(-5\right) &= 8\\ \implies x_1 &= 3~\checkmark \end{align*}\]
This means that the solution to \(LU\vec{x} = \vec{b}\) is \(\vec{x} = \begin{bmatrix} 3\\ -5\end{bmatrix}~~\checkmark\)
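As a quick sanity check, the same two-stage solve can be reproduced numerically. The sketch below assumes NumPy is available and uses np.linalg.solve as a stand-in for the substitution steps worked by hand above:

```python
import numpy as np

L = np.array([[2., 0.], [1., -3.]])
U = np.array([[1., -1.], [0., 2.]])
b = np.array([16., 38.])

y = np.linalg.solve(L, b)   # stage 1: solve L y = b (forward substitution above)
x = np.linalg.solve(U, y)   # stage 2: solve U x = y (back substitution above)
print(y, x)                 # [  8. -10.] [ 3. -5.]
```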
Not every square matrix \(A\) has an \(LU\) decomposition, but…
it is the case that for any given square matrix \(A\), it is always possible to find a lower triangular matrix \(L\) and an upper triangular matrix \(U\) such that \(LU\) is row-equivalent to \(A\).
In fact, there are infinitely many pairs \(L\) and \(U\) which can be used to rewrite that matrix. For this reason, some common methods with constraints have been developed.
| Name | Constraints |
|---|---|
| Doolittle’s Decomposition | \(L_{ii} = 1\) for \(i\in[n]\) |
| Crout’s Decomposition | \(U_{ii} = 1\) for \(i\in [n]\) |
| Choleski’s Decomposition | \(L = U^T\) |
We’ll consider only Doolittle’s because (i) Crout’s is nearly the same, and (ii) Choleski’s is specialized: it offers additional efficiencies but applies only to symmetric, positive-definite matrices.
Doolittle’s Decomposition Method: As with the Gaussian Elimination routine, this method of solution comes in two phases: a decomposition phase and a solution (substitution) phase.
Next, we’ll introduce the general decomposition phase and then follow it up with an example to make things more tangible!
General: Consider a square matrix \(A = \left[\begin{array}{ccc} A_{11} & A_{12} & A_{13}\\ A_{21} & A_{22} & A_{23}\\ A_{31} & A_{32} & A_{33}\end{array}\right]\)
Assume that there exist \(L = \left[\begin{array}{ccc} 1 & 0 & 0\\ L_{21} & 1 & 0\\ L_{31} & L_{32} & 1\end{array}\right]\) and \(U = \left[\begin{array}{ccc} U_{11} & U_{12} & U_{13}\\ 0 & U_{22} & U_{23}\\ 0 & 0 & U_{33}\end{array}\right]\) such that \(A = LU\).
Since \(A = LU\), we have
\[A = \left[\begin{array}{ccc} U_{11} & U_{12} & U_{13}\\ U_{11}L_{21} & U_{12}L_{21} + U_{22} & U_{13}L_{21} + U_{23}\\ U_{11}L_{31} & U_{12}L_{31} + U_{22}L_{32} & U_{13}L_{31} + U_{23}L_{32} + U_{33}\end{array}\right]\]
From here, we could apply Gaussian Elimination to help us solve for the entries \(U_{ij}\) and \(L_{ij}\).
The reduced form of the matrix above is:
\[\begin{align*} \left[\begin{array}{ccc} U_{11} & U_{12} & U_{13}\\ U_{11}L_{21} & U_{12}L_{21} + U_{22} & U_{13}L_{21} + U_{23}\\ U_{11}L_{31} & U_{12}L_{31} + U_{22}L_{32} & U_{13}L_{31} + U_{23}L_{32} + U_{33}\end{array}\right] &\stackrel{R_2 \leftarrow R_2 - L_{21}R_1}{\to} \left[\begin{array}{ccc} U_{11} & U_{12} & U_{13}\\ 0 & U_{22} & U_{23}\\ U_{11}L_{31} & U_{12}L_{31} + U_{22}L_{32} & U_{13}L_{31} + U_{23}L_{32} + U_{33}\end{array}\right]\\ &\stackrel{R_3 \leftarrow R_3 - L_{31}R_1}{\to} \left[\begin{array}{ccc} U_{11} & U_{12} & U_{13}\\ 0 & U_{22} & U_{23}\\ 0 & U_{22}L_{32} &U_{23}L_{32} + U_{33}\end{array}\right]\\ &\stackrel{R_3 \leftarrow R_3 - L_{32}R_2}{\to} \left[\begin{array}{ccc} U_{11} & U_{12} & U_{13}\\ 0 & U_{22} & U_{23}\\ 0 & 0 & U_{33}\end{array}\right] \end{align*}\]
Those scale factors look familiar!
And so does that upper triangular matrix.
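Both observations can be checked symbolically. Here is a small sketch (assuming SymPy is available) that repeats the three row operations on the product \(LU\) and recovers \(U\); the symbol names mirror the entries in the displays above:

```python
import sympy as sp

L21, L31, L32 = sp.symbols('L21 L31 L32')
U11, U12, U13, U22, U23, U33 = sp.symbols('U11 U12 U13 U22 U23 U33')

L = sp.Matrix([[1,   0,   0  ],
               [L21, 1,   0  ],
               [L31, L32, 1  ]])
U = sp.Matrix([[U11, U12, U13],
               [0,   U22, U23],
               [0,   0,   U33]])

A = L * U                           # the entry-by-entry product shown earlier
A[1, :] = A[1, :] - L21 * A[0, :]   # R2 <- R2 - L21*R1
A[2, :] = A[2, :] - L31 * A[0, :]   # R3 <- R3 - L31*R1
A[2, :] = A[2, :] - L32 * A[1, :]   # R3 <- R3 - L32*R2
sp.pprint(A)                        # the upper triangular matrix U reappears
```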
The general decomposition may not look extremely transparent – let’s confirm what we saw previously with a small example.
Example: Consider \(A = \left[\begin{array}{ccc} 1 & 0 & 1\\ 2 & -1 & 5\\ 3 & 3 & 3\end{array}\right]\). Use Gaussian Elimination to row reduce the matrix and then use the result (and the pivot multipliers) to construct the Doolittle \(LU\)-decomposition of \(A\).
\[\begin{align*} \left[\begin{array}{ccc} 1 & 0 & 1\\ 2 & -1 & 5\\ 3 & 3 & 3\end{array}\right] &\stackrel{R_2 \leftarrow R_2 - 2R_1}{\to} \left[\begin{array}{ccc} 1 & 0 & 1\\ 0 & -1 & 3\\ 3 & 3 & 3\end{array}\right]\\ &\stackrel{R_3 \leftarrow R_3 - 3R_1}{\to} \left[\begin{array}{ccc} 1 & 0 & 1\\ 0 & -1 & 3\\ 0 & 3 & 0\end{array}\right]\\ &\stackrel{R_3 \leftarrow R_3 - (-3)R_2}{\to} \left[\begin{array}{ccc} 1 & 0 & 1\\ 0 & -1 & 3\\ 0 & 0 & 9\end{array}\right]\\ \end{align*}\]
From the row reduction, we’ve discovered that \(U = \left[\begin{array}{ccc} 1 & 0 & 1\\ 0 & -1 & 3\\ 0 & 0 & 9\end{array}\right]\), and keeping track of the scalar multipliers for our pivots, we have \(L_{21} = 2\), \(L_{31} = 3\), and \(L_{32} = -3\). That is, \(L = \left[\begin{array}{ccc} 1 & 0 & 0\\ 2 & 1 & 0\\ 3 & -3 & 1\end{array}\right]\).
We’ll now explicitly show that \(LU = A\).
\[\begin{align*}LU &= \left[\begin{array}{ccc} 1 & 0 & 0\\ 2 & 1 & 0\\ 3 & -3 & 1\end{array}\right]\left[\begin{array}{ccc} 1 & 0 & 1\\ 0 & -1 & 3\\ 0 & 0 & 9\end{array}\right]\\ &= \left[\begin{array}{ccc} 1 & 0 & 1\\ 2 & -1 & 5\\ 3 & 3 & 3\end{array}\right]~~~\checkmark \end{align*}\]
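The same check takes only a few lines numerically (assuming NumPy):

```python
import numpy as np

L = np.array([[1., 0., 0.], [2., 1., 0.], [3., -3., 1.]])
U = np.array([[1., 0., 1.], [0., -1., 3.], [0., 0., 9.]])
print(L @ U)   # reproduces the original matrix A
```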
\(\bigstar\) Two of our primary goals in numerical methods are speed and efficiency. \(\bigstar\)
The nontrivial entries of \(L\) and \(U\) never occupy the same positions: the multipliers of \(L\) sit strictly below the main diagonal, while the entries of \(U\) sit on and above it.
We can store \(L\) and \(U\) compactly within one data structure.
This can be important as we solve very large problems and actual computer memory becomes a concern.
The structure \(\left[L/U\right] = \left[\begin{array}{ccc} U_{11} & U_{12} & U_{13}\\ L_{21} & U_{22} & U_{23}\\ L_{31} & L_{32} & U_{33}\end{array}\right]\) contains all of the information required to reconstruct the matrices \(L\) and \(U\).
Example: If \(\left[L/U\right] = \left[\begin{array}{ccc} 2 & 5 & -6\\ 1 & -4 & 3\\ 9 & 7 & 8\end{array}\right]\), then we know \(L = \left[\begin{array}{ccc} 1 & 0 & 0\\ 1 & 1 & 0\\ 9 & 7 & 1\end{array}\right]\) and \(U = \left[\begin{array}{ccc} 2 & 5 & -6\\ 0 & -4 & 3\\ 0 & 0 & 8\end{array}\right]\).
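As a sketch of how that unpacking might be coded (assuming NumPy; the variable names are ours), the triangle helpers np.tril and np.triu do the work:

```python
import numpy as np

LU = np.array([[2., 5., -6.],
               [1., -4., 3.],
               [9., 7., 8.]])

L = np.tril(LU, k=-1) + np.eye(3)   # strictly lower triangle of [L/U], plus the implied unit diagonal
U = np.triu(LU)                     # diagonal and above, i.e. U
```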
Gaussian Elimination required backward substitution, so we’ve already got code for that.
We’ll be able to adapt that easily to run forward substitution by reversing the order of the loop (top row to bottom row instead of bottom row to top row).
Careful work will be required to implement the decomposition and storage portion of the algorithm though.
The algorithm to decompose \(A\) into its \(LU\)-decomposition is simply the Gaussian Elimination algorithm.
The only additional step is that we’ll need to save those \(\lambda\) values (the scalar multipliers for our pivots) along the way.
We’ll also use the space-saving strategy from above to pack \(L\) and \(U\) into a single data structure.
```python
# decomposition routine
for k in range(0, n-1):
    for i in range(k+1, n):
        if A[i, k] != 0.0:
            lam = A[i, k]/A[k, k]
            A[i, (k+1):n] = A[i, (k+1):n] - lam*A[k, (k+1):n]
            A[i, k] = lam
```
The array \(A\) resulting from the decomposition routine above will contain the entries of \(L\) below the main diagonal, and the entries of \(U\) on and above the main diagonal.
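For a concrete check (our own, assuming NumPy and taking n = A.shape[0]), running this routine on the matrix from the Doolittle example above packs the multipliers 2, 3, and -3 below the diagonal and the entries of \(U\) on and above it:

```python
import numpy as np

A = np.array([[1., 0., 1.],
              [2., -1., 5.],
              [3., 3., 3.]])
n = A.shape[0]

# decomposition routine from above, applied in place
for k in range(0, n-1):
    for i in range(k+1, n):
        if A[i, k] != 0.0:
            lam = A[i, k]/A[k, k]
            A[i, (k+1):n] = A[i, (k+1):n] - lam*A[k, (k+1):n]
            A[i, k] = lam

print(A)
# [[ 1.  0.  1.]
#  [ 2. -1.  3.]
#  [ 3. -3.  9.]]
```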
The back-substitution routine below, for solving a system whose coefficient matrix is upper-triangular, is extracted from our Gaussian Elimination routine: the last unknown is computed first, and the loop then works upward.

```python
# back-substitution routine
b[n-1] = b[n-1]/A[n-1, n-1]
for k in range(n - 2, -1, -1):
    b[k] = (b[k] - np.dot(A[k, (k+1):n], b[(k+1):n]))/A[k, k]
```
Similarly, the forward-substitution routine for a lower-triangular coefficient matrix is:

```python
# forward-substitution routine
b[0] = b[0]/A[0, 0]
for k in range(1, n):
    b[k] = (b[k] - np.dot(A[k, 0:k], b[0:k]))/A[k, k]
```

(When the lower-triangular matrix has ones on its diagonal, as Doolittle’s \(L\) does, the divisions are unnecessary; that is why they are absent from the forward pass of LUsolve() below.)
We can put these pieces together into the DoolittleLUsolver() routine below.
```python
import numpy as np

def LUdecomp(A):
    # Decompose A in place into the [L/U] storage scheme:
    # multipliers of L strictly below the diagonal, entries of U on and above it.
    n = A.shape[0]
    for k in range(n):
        for i in range(k+1, n):
            if A[i, k] != 0.0:
                lam = A[i, k]/A[k, k]
                A[i, (k+1):n] = A[i, (k+1):n] - lam*A[k, (k+1):n]
                A[i, k] = lam
    return A

def LUsolve(LU, b):
    # Solve LUx = b in place, given the packed [L/U] structure.
    n = len(b)
    for k in range(1, n):                      # forward substitution (L has a unit diagonal)
        b[k] = (b[k] - np.dot(LU[k, 0:k], b[0:k]))
    b[n-1] = b[n-1]/LU[n-1, n-1]               # back substitution
    for k in range(n-2, -1, -1):
        b[k] = (b[k] - np.dot(LU[k, (k+1):n], b[(k+1):n]))/LU[k, k]
    return b

def DoolittleLUsolver(A, b):
    LU = LUdecomp(A)         # row reduction
    sol = LUsolve(LU, b)     # forward/backward substitution
    return sol
```
Our implementation consists of three total functions – two helper functions and one encapsulating function.
- The LUdecomp() function decomposes a square matrix \(A\) into the [L/U] data structure that contains the information required to reconstruct both \(L\) and \(U\).
- The LUsolve() function takes in the [L/U] data structure and the constraint vector \(\vec{b}\) and solves \(LU\vec{x} = \vec{b}\).
- The DoolittleLUsolver() function takes in the matrix \(A\) and the constraint vector \(\vec{b}\), runs the LU decomposition, and then solves \(LU\vec{x} = \vec{b}\).

Note. The entire point of pursuing the LU decomposition method was to avoid the row-reduction bottleneck.
Keep this in mind when applying this functionality: it is rarely the case that you want to apply DoolittleLUsolver() over and over again on the same coefficient matrix. Instead, decompose once with LUdecomp() and reuse the result with LUsolve() for each new constant vector.
Example: Use the DoolittleLUsolver() to solve the system \(\left[\begin{array}{ccc} 1 & 0 & 1\\ 2 & -1 & 5\\ 3 & 3 & 3\end{array}\right]\left[\begin{array}{c} x_1\\ x_2\\ x_3\end{array}\right] = \left[\begin{array}{c} 1\\ 3\\ 1\end{array}\right]\).
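One way to run this with the functions above (a sketch assuming NumPy; the arrays must hold floats, since the routines work in place):

```python
import numpy as np

A = np.array([[1., 0., 1.],
              [2., -1., 5.],
              [3., 3., 3.]])
b = np.array([1., 3., 1.])

print(DoolittleLUsolver(A, b))
```

The routine returns: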
array([ 0.88888889, -0.66666667, 0.11111111])
The solution is \(\vec{x} \approx \begin{bmatrix} 8/9\\ -2/3\\ 1/9\end{bmatrix}\).
Example: Use Doolittle’s decomposition method to solve \(A\vec{x} = \vec{b_i}\) where
\[A = \left[\begin{array}{rrr} -3 & 6 & -4\\ 9 & -8 & 24\\ -12 & 24 & -26\end{array}\right]~~~~\text{ and }~~~~ \vec{b_1} =\left[\begin{array}{r} -3\\ 65\\ -42\end{array}\right],~\vec{b_2} =\left[\begin{array}{r} -15\\ -12\\ 18\end{array}\right],~\vec{b_3} =\left[\begin{array}{r} 6\\ 39\\ 27\end{array}\right],~\vec{b_4} =\left[\begin{array}{r} 12\\ 17\\ 64\end{array}\right]\]
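A sketch of how the work splits (assuming NumPy and the functions defined above): the expensive decomposition runs only once, and each right-hand side then costs just a forward and a backward substitution. The four printed solutions appear below.

```python
import numpy as np

A = np.array([[-3., 6., -4.],
              [9., -8., 24.],
              [-12., 24., -26.]])
bs = [np.array([-3., 65., -42.]),
      np.array([-15., -12., 18.]),
      np.array([6., 39., 27.]),
      np.array([12., 17., 64.])]

LU = LUdecomp(A)            # row reduction happens only once
for b in bs:
    print(LUsolve(LU, b))   # forward/backward substitution for each b_i
```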
array([1., 2., 3.])
array([22.72, 3.66, -7.8 ])
array([10.52, 6.06, -0.3 ])
array([12.57333333, 7.22 , -1.6 ])
We’ll look at methods that can be used to exploit the structure of a matrix in order to gain efficiencies in either run-time, space, or both.
Comments on the Scenario
Comment: In this setting, Gaussian Elimination becomes a serious bottleneck because the same row-reduction must be repeated for every new target location.
There is a triple-nested `for` loop in the elimination procedure:

- `for` each diagonal (pivot) entry…
- `for` each entry below the diagonal…
- `for` all entries to the right of the diagonal…

The slow part of this computation depends only on the drone’s movement restrictions, not on the target location itself.
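As a closing sketch of how the drone workflow might use this, the direction matrix below is made up purely for illustration (only the fact that it is fixed matters); the functions are the ones defined earlier:

```python
import numpy as np

# Hypothetical matrix whose columns are the drone's three fixed motion directions.
A = np.array([[1.0, 0.0, -1.0],
              [0.0, 1.0, -0.2],
              [0.1, 0.1, -0.1]])

LU = LUdecomp(A)                        # the expensive row reduction runs once
targets = [np.array([4., 7., 1.]),      # in practice, hundreds of target locations
           np.array([2., -3., 0.5])]
for b in targets:
    motor_settings = LUsolve(LU, b)     # each target costs only a cheap substitution pass
    print(motor_settings)
```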