Iterative Methods

Iterative methods solve large linear systems \( Ax = b \) by generating a sequence of approximations that converges to the solution. Unlike direct methods (e.g., Gaussian elimination), they scale well to large, sparse matrices, which are common in computational physics, engineering simulation, and data science. This MathMultiverse guide covers the Jacobi, Gauss-Seidel, SOR, and Conjugate Gradient methods, with worked examples, convergence analysis, and visualizations.

The iterative process follows:

\[ x^{(k+1)} = M x^{(k)} + c \]

Convergence requires the spectral radius \( \rho(M) < 1 \). Methods differ in how they construct \( M \) and \( c \).
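For a small system this criterion is easy to check numerically; the sketch below computes \( \rho(M) \) for a \( 2 \times 2 \) iteration matrix straight from the characteristic polynomial (the matrix entries are illustrative values, not tied to a particular system):

```python
import math

# Illustrative 2x2 iteration matrix M = [[a, b], [c, d]]:
# eigenvalues solve lambda^2 - tr(M)*lambda + det(M) = 0.
a, b, c, d = 0.0, -0.5, -1.0 / 3.0, 0.0
tr, det = a + d, a * d - b * c
disc = tr * tr - 4.0 * det
if disc >= 0:  # two real eigenvalues
    lam1 = (tr + math.sqrt(disc)) / 2.0
    lam2 = (tr - math.sqrt(disc)) / 2.0
    rho = max(abs(lam1), abs(lam2))
else:          # complex-conjugate pair: |lambda| = sqrt(det)
    rho = math.sqrt(det)

print(rho)  # about 0.408 < 1, so this iteration would converge
```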

Jacobi Method

Splits \( A = D + L + U \), where \( D \) is the diagonal of \( A \) and \( L \), \( U \) are its strictly lower and strictly upper triangular parts, and iterates:

\[ x_i^{(k+1)} = \frac{b_i - \sum_{j \neq i} a_{ij} x_j^{(k)}}{a_{ii}}, \quad i = 1, \ldots, n \]

Requires \( a_{ii} \neq 0 \). Iteration matrix: \( M_J = -D^{-1}(L + U) \). Converges whenever \( A \) is strictly diagonally dominant:

\[ |a_{ii}| > \sum_{j \neq i} |a_{ij}| \]

Error: \( e^{(k+1)} = M_J e^{(k)} \). Slow but parallelizable.
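The update rule above translates into a few lines of code. The following is a minimal sketch in plain Python (the function name, fixed iteration count, and list-of-rows matrix format are our choices; a production code would use NumPy and a stopping criterion):

```python
def jacobi(A, b, x0=None, iters=50):
    """Jacobi iteration for Ax = b, with A given as a list of rows.

    Every component of x^(k+1) depends only on x^(k), so the inner
    loop could run in parallel -- the method's main practical appeal.
    """
    n = len(A)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(iters):
        # the comprehension reads only the old x, then rebinds it: a
        # simultaneous (Jacobi) update, not an in-place (Gauss-Seidel) one
        x = [
            (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)
        ]
    return x

# Strictly diagonally dominant test system: 4x + y = 6, x + 3y = 7 -> (1, 2)
print(jacobi([[4.0, 1.0], [1.0, 3.0]], [6.0, 7.0]))
```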

Jacobi Convergence Visualization

Shows error reduction for \( 2x + y = 5 \), \( x + 3y = 7 \).

Examples

Practical applications of the Jacobi Method.

1. \( 2x + y = 5 \), \( x + 3y = 7 \)

Matrix: \( A = \begin{bmatrix} 2 & 1 \\ 1 & 3 \end{bmatrix} \), \( b = \begin{bmatrix} 5 \\ 7 \end{bmatrix} \). Exact solution: \( (1.6, 1.8) \). Updates:

\[ x^{(k+1)} = \frac{5 - y^{(k)}}{2}, \quad y^{(k+1)} = \frac{7 - x^{(k)}}{3} \]

Initial \( (0, 0) \): \( x^{(1)} = 2.5 \), \( y^{(1)} \approx 2.3333 \); \( x^{(2)} \approx 1.3333 \), \( y^{(2)} = 1.5 \). After 5 iterations: \( x = 1.625 \), \( y \approx 1.8148 \), closing in on the exact solution \( (1.6, 1.8) \).
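These iterates are easy to reproduce with a short script that applies the two update formulas directly (note the tuple assignment, which keeps the update simultaneous, as Jacobi requires):

```python
# Jacobi updates for 2x + y = 5, x + 3y = 7 (exact solution (1.6, 1.8))
x, y = 0.0, 0.0
for k in range(5):
    x, y = (5 - y) / 2, (7 - x) / 3  # tuple assignment = simultaneous update
    print(k + 1, round(x, 4), round(y, 4))
# final line printed: 5 1.625 1.8148
```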

2. \( 4x - y + z = 7 \), \( -x + 3y - z = 4 \), \( x - y + 5z = 6 \)

\( A = \begin{bmatrix} 4 & -1 & 1 \\ -1 & 3 & -1 \\ 1 & -1 & 5 \end{bmatrix} \), \( b = \begin{bmatrix} 7 \\ 4 \\ 6 \end{bmatrix} \). Updates:

\[ x^{(k+1)} = \frac{7 + y^{(k)} - z^{(k)}}{4} \]

\[ y^{(k+1)} = \frac{4 + x^{(k)} + z^{(k)}}{3} \]

\[ z^{(k+1)} = \frac{6 - x^{(k)} + y^{(k)}}{5} \]

Initial \( (0, 0, 0) \): \( x^{(1)} = 1.75 \), \( y^{(1)} \approx 1.3333 \), \( z^{(1)} = 1.2 \). Since \( A \) is strictly diagonally dominant, the iteration converges, to \( (2.04, 2.44, 1.28) \).

3. Non-Convergent

\( 2x + 10y = 5 \), \( x + y = 1 \). Not diagonally dominant; here \( \rho(M_J) = \sqrt{5} > 1 \), so the Jacobi iterates diverge.
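Running Jacobi on this system makes the failure visible within a few iterations (a minimal sketch):

```python
# Jacobi on 2x + 10y = 5, x + y = 1: rho(M_J) = sqrt(5) > 1, so errors grow
x, y = 0.0, 0.0
for k in range(10):
    x, y = (5 - 10 * y) / 2, 1.0 - x
print(x, y)  # iterates blow up instead of approaching the solution
```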

Advanced Methods

Enhanced techniques for faster convergence.

Gauss-Seidel

Uses updated values immediately:

\[ x_i^{(k+1)} = \frac{b_i - \sum_{j < i} a_{ij} x_j^{(k+1)} - \sum_{j > i} a_{ij} x_j^{(k)}}{a_{ii}} \]

Typically converges faster than Jacobi for strictly diagonally dominant matrices; for consistently ordered matrices, \( \rho(M_{GS}) = \rho(M_J)^2 \), roughly halving the number of iterations needed.
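In code, the only change from a Jacobi sweep is that each \( x_i \) is overwritten in place, so later rows immediately see the new values. A minimal sketch (function name and fixed iteration count are our choices):

```python
def gauss_seidel(A, b, x0=None, iters=25):
    """Gauss-Seidel iteration for Ax = b, with A given as a list of rows."""
    n = len(A)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(iters):
        for i in range(n):
            # x[j] for j < i already holds the (k+1)-st values
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

# Strictly diagonally dominant system: 4x + y = 6, x + 3y = 7 -> (1, 2)
print(gauss_seidel([[4.0, 1.0], [1.0, 3.0]], [6.0, 7.0]))
```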

Successive Over-Relaxation (SOR)

Adds relaxation parameter \( \omega \):

\[ x_i^{(k+1)} = (1 - \omega) x_i^{(k)} + \omega \frac{b_i - \sum_{j < i} a_{ij} x_j^{(k+1)} - \sum_{j > i} a_{ij} x_j^{(k)}}{a_{ii}} \]

Optimal \( \omega \approx \frac{2}{1 + \sqrt{1 - \rho(M_J)^2}} \) (exact for consistently ordered matrices); \( \omega = 1 \) recovers Gauss-Seidel.
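SOR amounts to a one-line change to the Gauss-Seidel sweep: blend the Gauss-Seidel value with the previous iterate. A minimal sketch (function name is ours):

```python
def sor(A, b, omega, x0=None, iters=25):
    """SOR iteration for Ax = b; omega = 1 reduces to Gauss-Seidel."""
    n = len(A)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            gs = (b[i] - s) / A[i][i]               # Gauss-Seidel value
            x[i] = (1 - omega) * x[i] + omega * gs  # relaxed update
    return x

# 4x + y = 6, x + 3y = 7: rho(M_J)^2 = 1/12, so omega_opt is about 1.02 here
print(sor([[4.0, 1.0], [1.0, 3.0]], [6.0, 7.0], omega=1.02))
```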

Conjugate Gradient

For symmetric positive-definite matrices:

\[ x^{(k+1)} = x^{(k)} + \alpha_k p^{(k)}, \quad \alpha_k = \frac{r^{(k)T} r^{(k)}}{p^{(k)T} A p^{(k)}} \]

Here \( r^{(k)} = b - A x^{(k)} \) is the residual and \( p^{(k)} \) is a search direction chosen \( A \)-conjugate to its predecessors. In exact arithmetic CG terminates in at most \( n \) iterations; in practice far fewer suffice, making it highly efficient for large sparse systems.
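A compact CG implementation in plain Python, assuming \( A \) is symmetric positive-definite (a sketch; real codes use NumPy arrays, a relative-residual stopping test, and often a preconditioner):

```python
def conjugate_gradient(A, b, tol=1e-12):
    """Conjugate Gradient for symmetric positive-definite A (list of rows)."""
    n = len(b)
    x = [0.0] * n
    r = list(b)            # residual r = b - A x, with x = 0 initially
    p = list(r)            # first search direction is the residual
    rs_old = sum(ri * ri for ri in r)
    for _ in range(n):     # at most n steps in exact arithmetic
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        # next direction: residual made A-conjugate to the previous p
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x

# SPD system: 4x + y = 1, x + 3y = 2 -> (1/11, 7/11)
print(conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0]))
```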

Applications

Iterative methods are used in:

  • Finite Element Analysis: Solves structural problems in engineering.
  • Image Processing: Reconstructs images from sparse data.
  • Fluid Dynamics: Simulates flow with large matrices.
  • Machine Learning: Optimizes large-scale linear models.