Convergence Rates Basics

Convergence rates are a cornerstone of numerical analysis, quantifying how quickly iterative methods approach the exact solution of a problem. In computational mathematics, where exact solutions are often unattainable, numerical methods generate a sequence of approximations \( x_n \) converging to the true solution \( x^* \). The speed of convergence—whether linear, quadratic, or superlinear—directly impacts algorithm efficiency. This guide from MathMultiverse explores convergence rates with detailed examples, mathematical formulations, and their significance in computational efficiency.

The error at iteration \( n \), defined as \( e_n = x_n - x^* \), is the central quantity: the convergence rate describes how \( e_{n+1} \) relates to \( e_n \). For a fixed-point iteration \( x_{n+1} = g(x_n) \), convergence near \( x^* \) first requires the map to be contractive there:

\[ |g'(x^*)| < 1 \quad \text{for convergence of } x_{n+1} = g(x_n) \]

With this prerequisite in place, the rest of this guide covers the types of convergence, worked examples, and their practical significance.

Types of Convergence

Convergence rates are classified by their asymptotic behavior, including linear, superlinear, quadratic, and higher-order, each affecting iteration speed.

Linear Convergence

Linear convergence reduces error by a constant factor:

\[ |e_{n+1}| \leq C |e_n|, \quad 0 < C < 1 \]

For \( C = 0.5 \), the error halves each step; reaching precision \( \epsilon \) from an initial error \( |e_0| \) takes roughly \( \log(|e_0|/\epsilon) / \log(1/C) \) iterations.
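For example, with \( C = 0.5 \), \( |e_0| = 1 \), and a target \( \epsilon = 10^{-6} \) (numbers chosen here for illustration):

\[ n \approx \frac{\log(10^{6})}{\log 2} \approx 20 \text{ iterations.} \]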

Superlinear Convergence

Superlinear convergence accelerates beyond linear:

\[ \lim_{n \to \infty} \frac{|e_{n+1}|}{|e_n|} = 0 \]

Typical orders lie in \( p \in (1, 2) \); for example, the secant method has \( p = (1 + \sqrt{5})/2 \approx 1.618 \).

Quadratic Convergence

Quadratic convergence squares the error:

\[ |e_{n+1}| \leq K |e_n|^2, \quad K > 0 \]

Newton’s method achieves this at a simple root, where its fixed-point form \( g(x) = x - f(x)/f'(x) \) satisfies \( g'(x^*) = 0 \).
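The quadratic rate follows from the usual Taylor expansion of \( f \) about \( x^* \) at a simple root, which also identifies the constant:

\[ e_{n+1} \approx \frac{f''(x^*)}{2 f'(x^*)} \, e_n^2, \qquad K \approx \frac{|f''(x^*)|}{2 |f'(x^*)|}. \]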

Higher-Order Convergence

Methods of order \( p > 2 \), such as Halley’s method (cubic, \( p = 3 \)), satisfy:

\[ |e_{n+1}| \leq M |e_n|^p, \quad p > 2 \]

Sublinear Convergence

Sublinear convergence is slower than linear: \( |e_{n+1}| / |e_n| \to 1 \), so the progress made per step shrinks, and such methods are often impractical.
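For example, the sequence \( x_n = 1/n \) converging to \( x^* = 0 \) is sublinear:

\[ \frac{|e_{n+1}|}{|e_n|} = \frac{n}{n+1} \to 1. \]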

Examples of Convergence Rates

Examples illustrate convergence behavior in numerical methods.

Bisection Method (Linear)

For \( f(x) = x^2 - 2 \) with \( x^* = \sqrt{2} \), bisecting a bracketing interval \( [a_n, b_n] \) (e.g., starting from \( [1, 2] \)) gives

\[ |e_n| \leq \frac{b_n - a_n}{2}, \quad C = 0.5 \]

After 10 iterations, \( |e_{10}| \leq 1/2^{11} \approx 0.000488 \).
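A minimal Python sketch of this example, assuming the bracket \( [1, 2] \) for \( \sqrt{2} \) (the function and variable names are illustrative, not from the original):

```python
import math

def bisect(f, a, b, steps):
    """Bisection on [a, b]; assumes f(a) and f(b) have opposite signs."""
    for n in range(1, steps + 1):
        m = (a + b) / 2.0
        if f(a) * f(m) <= 0:
            b = m                      # root lies in [a, m]
        else:
            a = m                      # root lies in [m, b]
        # After n bisections the bracket width is (b0 - a0) / 2^n,
        # so the midpoint error bound |e_n| <= (b_n - a_n) / 2 halves each step.
        print(f"n={n:2d}  width={b - a:.6f}  error bound={(b - a) / 2:.6f}")
    return (a + b) / 2.0

root = bisect(lambda x: x * x - 2.0, 1.0, 2.0, 10)
print("approx sqrt(2):", root, " true error:", abs(root - math.sqrt(2)))
```

With 10 steps the printed bound reaches about \( 0.000488 \), matching the figure above.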

Newton’s Method (Quadratic)

\( f(x) = x^2 - 2 \), \( x_0 = 1 \):

\[ x_{n+1} = \frac{x_n^2 + 2}{2 x_n} \]

Three steps from \( x_0 = 1 \) give \( |e_3| \approx 2 \times 10^{-6} \); the asymptotic constant from the expansion above is \( K = \frac{1}{2\sqrt{2}} \approx 0.353 \).
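A small Python sketch of the same iteration, printing the error at each step (names are illustrative):

```python
import math

def newton_sqrt2(x0, steps):
    """Newton iteration x_{n+1} = (x_n^2 + 2) / (2 x_n) for f(x) = x^2 - 2."""
    x = x0
    for n in range(1, steps + 1):
        x = (x * x + 2.0) / (2.0 * x)
        print(f"n={n}  x_n={x:.10f}  |e_n|={abs(x - math.sqrt(2)):.3e}")
    return x

newton_sqrt2(1.0, 4)   # the error roughly squares each step; |e_3| is about 2e-6
```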

Secant Method (Superlinear)

\( f(x) = x^2 - 2 \), \( x_0 = 1 \), \( x_1 = 2 \):

\[ x_{n+1} = x_n - f(x_n) \frac{x_n - x_{n-1}}{f(x_n) - f(x_{n-1})} \]

Order \( p \approx 1.618 \).
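A hedged Python sketch of the secant iteration on the same problem (illustrative names, no safeguard against a zero denominator):

```python
import math

def secant(f, x0, x1, steps):
    """Secant method: Newton's method with f' replaced by a finite difference."""
    for n in range(2, steps + 2):
        x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
        print(f"n={n}  x_n={x2:.10f}  |e_n|={abs(x2 - math.sqrt(2)):.3e}")
        x0, x1 = x1, x2
    return x1

secant(lambda x: x * x - 2.0, 1.0, 2.0, 6)   # error shrinks with order ~1.618
```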

Fixed-Point Iteration (Linear)

Iterating \( x_{n+1} = \cos(x_n) \) converges to \( x^* \approx 0.739 \) with rate \( C = |\sin(x^*)| \approx 0.673 \).
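A short sketch of this iteration, printing successive error ratios, which settle near \( C \approx 0.673 \) (the hard-coded value below is the known fixed point of \( \cos x \)):

```python
import math

def fixed_point_cos(x0, steps):
    """Fixed-point iteration x_{n+1} = cos(x_n); linear convergence, C = |sin(x*)|."""
    x_star = 0.7390851332151607        # fixed point of cos(x)
    x, prev_err = x0, None
    for n in range(1, steps + 1):
        x = math.cos(x)
        err = abs(x - x_star)
        ratio = err / prev_err if prev_err else float("nan")
        print(f"n={n:2d}  x_n={x:.8f}  |e_n|={err:.2e}  ratio={ratio:.3f}")
        prev_err = err
    return x

fixed_point_cos(1.0, 15)
```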

Halley’s Method (Cubic)

\( f(x) = x^3 - 3x + 1 \):

\[ x_{n+1} = x_n - \frac{2 f(x_n) f'(x_n)}{2 [f'(x_n)]^2 - f(x_n) f''(x_n)} \]
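A minimal Python sketch of Halley's method for this cubic, assuming the root near \( x \approx 0.347 \) and a nearby starting guess (both choices are illustrative):

```python
def halley(f, df, d2f, x0, steps):
    """Halley's method: cubic convergence near a simple root."""
    x = x0
    for n in range(1, steps + 1):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        x = x - 2.0 * fx * dfx / (2.0 * dfx * dfx - fx * d2fx)
        print(f"n={n}  x_n={x:.12f}")
    return x

# f(x) = x^3 - 3x + 1 has roots near -1.879, 0.347, and 1.532.
root = halley(lambda x: x**3 - 3*x + 1,
              lambda x: 3*x**2 - 3,
              lambda x: 6*x,
              x0=0.5, steps=4)
print("residual f(root):", root**3 - 3*root + 1)
```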

Significance of Convergence Rates

Convergence rates largely determine how much computation an algorithm needs to reach a given accuracy.

Efficiency in Iterations

Quadratic convergence roughly doubles the number of correct digits per iteration, so it needs far fewer iterations than a linear method at high precision.

Computational Cost

Total work balances iteration count against the complexity of each step:

\[ \text{Total Cost} = N \cdot C_{\text{step}} \]

where \( N \) is the number of iterations and \( C_{\text{step}} \) is the cost of one step (function and derivative evaluations, linear algebra, etc.).
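As a purely illustrative comparison (the counts and per-step costs are hypothetical): a linear method with \( C = 0.5 \) needs about 20 steps to reach \( \epsilon = 10^{-6} \), while a quadratic method might need about 5 steps but evaluate both \( f \) and \( f' \) each time:

\[ \text{linear: } 20 \cdot C_{\text{step}}, \qquad \text{quadratic: } 5 \cdot (2\, C_{\text{step}}) = 10 \cdot C_{\text{step}}. \]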

Stability and Robustness

Faster methods typically demand good initial guesses and a well-conditioned problem. For a simple root, the sensitivity of \( x^* \) to perturbations in \( f \) scales like

\[ \kappa = \frac{1}{|f'(x^*)|} \]

so a nearly flat \( f \) near the root limits attainable accuracy regardless of how fast the iteration converges.

Applications

Convergence rates are critical in optimization, numerical simulation, and root-finding, where each iteration can be expensive.

Improving Convergence

Aitken’s \( \Delta^2 \) method accelerates linearly convergent sequences:

\[ \hat{x}_n = x_n - \frac{(x_{n+1} - x_n)^2}{x_{n+2} - 2 x_{n+1} + x_n} \]
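A small Python sketch applying this formula to the linearly convergent \( x = \cos x \) iterates from earlier (the sequence and names are illustrative):

```python
import math

def aitken(seq):
    """Aitken's delta-squared extrapolation of a convergent sequence."""
    return [x0 - (x1 - x0) ** 2 / (x2 - 2 * x1 + x0)
            for x0, x1, x2 in zip(seq, seq[1:], seq[2:])]

# Linearly convergent fixed-point iterates of x = cos(x).
xs = [1.0]
for _ in range(12):
    xs.append(math.cos(xs[-1]))

x_star = 0.7390851332151607            # fixed point of cos(x)
for plain, accel in zip(xs, aitken(xs)):
    print(f"plain |e|={abs(plain - x_star):.2e}   accelerated |e|={abs(accel - x_star):.2e}")
```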