MV-07: Numerical Methods for Natural Frequencies
In Part 6, we formulated the eigenvalue problem for multi-degree-of-freedom (MDOF) systems:
$$ \det\bigl([\mathbf{K}] - \omega^2[\mathbf{M}]\bigr) = 0 $$
Elegant in theory—but what happens when your system has 50, 500, or 5000 degrees of freedom?
Expanding a 100×100 determinant produces a polynomial with 100 roots and astronomical coefficients. Even with modern computers, round-off errors accumulate catastrophically.
So how do engineers analyze the vibrations of turbine rotors, suspension bridges, or aircraft wings? They turn to numerical methods—clever algorithms that find eigenvalues without ever forming the characteristic polynomial.
This post, based on Chapter 7 of Rao’s Mechanical Vibrations, presents five essential techniques:
| Category | Methods | Key Idea |
|---|---|---|
| Bounding | Dunkerley, Rayleigh | Bracket $\omega_1$ without heavy computation |
| Transfer Matrix | Holzer | Propagate through chain-like systems |
| Iterative | Matrix Iteration, Jacobi | Let the computer do the heavy lifting |
Dunkerley’s Formula: The Conservative Estimate
Before investing in heavy computation, engineers need a quick sanity check. Dunkerley’s formula delivers exactly that—a guaranteed lower bound on $\omega_1$ that takes seconds to compute.
The Physical Intuition
The key insight is elegant: imagine “turning off” all masses except one. Each single-mass subsystem vibrates at its own natural frequency $\omega_{ii}$. Dunkerley’s formula combines these individual frequencies to approximate the full system’s fundamental frequency:
The Formula
For a lumped-mass system with flexibility coefficients $a_{ii}$:
$$ \boxed{\frac{1}{\omega_1^2} \approx a_{11}m_1 + a_{22}m_2 + \cdots + a_{nn}m_n = \sum_{i=1}^{n} \frac{1}{\omega_{ii}^2}} $$
where:
- $a_{ii}$ = flexibility influence coefficient (deflection at point $i$ due to unit force at $i$)
- $\omega_{ii}$ = natural frequency of mass $m_i$ acting alone
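As a minimal sketch with placeholder numbers (not taken from any textbook example), the whole estimate is a couple of lines of Julia:

```julia
# Dunkerley's lower-bound estimate of the fundamental frequency.
# The flexibility coefficients a_ii and lumped masses m_i are placeholders.
a = [2.0e-4, 5.0e-4, 9.0e-4]          # flexibility coefficients a_ii, m/N
m = [50.0, 30.0, 20.0]                # lumped masses m_i, kg

ω1_lower = 1 / sqrt(sum(a .* m))      # Dunkerley: 1/ω₁² ≈ Σ a_ii·m_i
println("Lower bound on ω₁ ≈ ", round(ω1_lower, digits = 2), " rad/s")
```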
Rayleigh’s Method: The Energy Approach
Dunkerley gives a floor; Rayleigh’s method provides the ceiling. Better yet, Rayleigh’s estimate is often remarkably accurate—typically within 1-2% of the exact value. Together, these two methods bracket the true fundamental frequency.
The Core Idea: Energy Balance
At a natural frequency, the maximum kinetic energy equals the maximum potential energy. For harmonic motion with shape $\tilde{\mathbf{X}}$, $V_{\max} = \tfrac{1}{2}\tilde{\mathbf{X}}^T[\mathbf{K}]\tilde{\mathbf{X}}$ and $T_{\max} = \tfrac{1}{2}\omega^2\tilde{\mathbf{X}}^T[\mathbf{M}]\tilde{\mathbf{X}}$; setting them equal and solving for $\omega^2$ yields Rayleigh's quotient, which relates stiffness (potential energy) to mass (kinetic energy):
$$ \boxed{R(\tilde{\mathbf{X}}) = \frac{\tilde{\mathbf{X}}^T [\mathbf{K}] \tilde{\mathbf{X}}}{\tilde{\mathbf{X}}^T [\mathbf{M}] \tilde{\mathbf{X}}} \approx \omega_1^2} $$
where $\tilde{\mathbf{X}}$ is an assumed (trial) mode shape—it doesn’t need to be exact!
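A minimal sketch with a placeholder stiffness/mass pair and a crude linear trial shape (none of these numbers come from the textbook):

```julia
using LinearAlgebra

# Rayleigh quotient for an assumed mode shape (placeholder K, M, trial vector).
K = [ 2.0 -1.0  0.0;
     -1.0  2.0 -1.0;
      0.0 -1.0  1.0]                  # stiffness matrix, N/m
M = Matrix(1.0I, 3, 3)                # mass matrix, kg

Xtrial = [1.0, 2.0, 3.0]              # trial shape, e.g. a static-deflection pattern

R = (Xtrial' * K * Xtrial) / (Xtrial' * M * Xtrial)   # Rayleigh quotient ≈ ω₁²
ω1_upper = sqrt(R)                    # upper bound on the fundamental frequency
```

Even this crude linear-ramp shape lands only a few percent above the exact $\omega_1$ for a chain like this, courtesy of the stationarity property discussed next.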
Why Rayleigh Works So Well
| Property | Implication |
|---|---|
| Stationarity | The quotient is insensitive to small errors in $\tilde{\mathbf{X}}$—a 10% error in shape gives only ~1% error in $\omega^2$ |
| Upper Bound | $R(\tilde{\mathbf{X}}) \geq \omega_1^2$ always, providing a reliable ceiling |
| Static Deflection Trick | Using the gravity-loaded shape as $\tilde{\mathbf{X}}$ often yields excellent accuracy |
Holzer’s Method: Marching Through the System
Bounding methods are fast but approximate. What if we need exact values for all natural frequencies? For torsional systems—engine crankshafts, turbine rotors, drive trains—Holzer’s method has been the engineer’s go-to technique since the 1920s.
The Physical Setup
Consider a shaft with multiple disks, each with rotational inertia $J_i$, connected by shaft sections of torsional stiffness $k_{ti}$.
The Algorithm
Holzer’s method is a transfer matrix approach: starting from one end, we propagate angular displacements through the system:
| Step | Action | Equation |
|---|---|---|
| 1 | Guess a trial frequency | $\omega$ |
| 2 | Set first disk amplitude | $\Theta_1 = 1$ (arbitrary normalization) |
| 3 | Compute inertia torque | $T_i = J_i \omega^2 \Theta_i$ |
| 4 | Propagate twist | $\Theta_{i+1} = \Theta_i - \dfrac{\sum_{j=1}^{i} T_j}{k_{ti}}$ |
| 5 | Check at final disk | If $\sum T = 0$, then $\omega$ is a natural frequency |
The Residual Torque Plot
Plot the residual torque $\sum T$ versus trial frequency $\omega$. Natural frequencies occur where the curve crosses zero. This graphical approach makes it easy to find all modes systematically.
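A sketch of that sweep in Julia, with placeholder inertias and shaft stiffnesses for a free-free shaft (the loop body follows steps 2-5 of the table above):

```julia
# Holzer frequency sweep for a free-free torsional system (placeholder data).
J  = [10.0, 15.0, 20.0]               # disk polar mass moments of inertia, kg·m²
kt = [1.0e5, 2.0e5]                   # torsional stiffnesses of the shaft sections, N·m/rad

# Residual torque at the far end for a trial frequency ω
function holzer_residual(ω, J, kt)
    Θ = 1.0                           # amplitude of the first disk (arbitrary)
    T = 0.0                           # running sum of inertia torques
    for i in eachindex(J)
        T += J[i] * ω^2 * Θ                   # inertia torque of disk i
        i < length(J) && (Θ -= T / kt[i])     # twist across the next shaft section
    end
    return T                          # zero residual ⇒ ω is a natural frequency
end

# Sweep trial frequencies and flag sign changes of the residual
ωs  = range(1.0, 300.0; length = 600)
res = [holzer_residual(ω, J, kt) for ω in ωs]
for i in 1:length(ωs) - 1
    if sign(res[i]) != sign(res[i + 1])
        println("Natural frequency between ", ωs[i], " and ", ωs[i + 1], " rad/s")
    end
end
```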
Matrix Iteration: The Workhorse Algorithm
Holzer excels for chain-like systems, but real structures have complex connectivity. The Matrix Iteration Method—also known as the power method—is the workhorse algorithm behind many commercial vibration solvers. It’s simple, robust, and converges reliably to the fundamental mode.
The Eigenvalue Setup
We reformulate the problem using the dynamical matrix $[D] = [K]^{-1}[M]$:
$$ \boxed{[D]\vec{X} = \lambda \vec{X}, \quad \text{where } \lambda = \frac{1}{\omega^2}} $$
The largest eigenvalue $\lambda_1$ corresponds to the smallest natural frequency $\omega_1$—exactly what engineers usually need first.
The Algorithm
| Step | Operation | Purpose |
|---|---|---|
| 1 | Choose arbitrary $\vec{X}_1$ | Initial guess (e.g., all ones) |
| 2 | Compute $\vec{Y}_{r+1} = [D]\vec{X}_r$ | Matrix-vector multiplication |
| 3 | Normalize: $\vec{X}_{r+1} = \vec{Y}_{r+1} / |\vec{Y}_{r+1}|_{\max}$ | Prevent numerical overflow |
| 4 | Check convergence | Repeat until $\vec{X}$ stabilizes |
| 5 | Extract $\lambda_1 = |\vec{Y}|_{\max}$ | Largest eigenvalue → $\omega_1 = 1/\sqrt{\lambda_1}$ |
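A bare-bones sketch of these steps, with the convergence test replaced by a fixed iteration cap and placeholder $[K]$, $[M]$ matrices:

```julia
using LinearAlgebra

# Power (matrix) iteration on the dynamical matrix [D] = [K]⁻¹[M].
K = [ 2.0 -1.0  0.0;
     -1.0  2.0 -1.0;
      0.0 -1.0  1.0]                  # placeholder stiffness matrix
M = Matrix(1.0I, 3, 3)                # placeholder mass matrix
D = K \ M                             # dynamical matrix

function power_iteration(D; iters = 200)
    X = ones(size(D, 1))              # step 1: arbitrary starting vector
    λ = 0.0
    for _ in 1:iters
        Y = D * X                     # step 2: matrix-vector product
        λ = maximum(abs.(Y))          # step 5: normalization factor → λ₁ at convergence
        X = Y / λ                     # step 3: scale to prevent overflow
    end
    return λ, X                       # dominant eigenvalue and eigenvector of D
end

λ1, X1 = power_iteration(D)
ω1 = 1 / sqrt(λ1)                     # fundamental natural frequency, rad/s
```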
Finding Higher Modes: Matrix Deflation
Once $(\lambda_1, \vec{X}^{(1)})$ is found, we sweep out the first mode. With $\vec{X}^{(1)}$ normalized so that ${\vec{X}^{(1)}}^T [M] \vec{X}^{(1)} = 1$, the deflated matrix is
$$ [D_2] = [D] - \lambda_1 \vec{X}^{(1)} {\vec{X}^{(1)}}^T [M] $$
Iterating on $[D_2]$ converges to $\lambda_2$ (second mode). Repeat deflation for $\omega_3, \omega_4, \ldots$
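Continuing the sketch above, one deflation step might look like this—note the $[M]$-normalization that the formula assumes:

```julia
# Deflate the first mode and iterate again (continues the power-iteration sketch above).
X1 = X1 / sqrt(X1' * M * X1)          # enforce X₁ᵀ[M]X₁ = 1
D2 = D - λ1 * X1 * X1' * M            # [D₂] = [D] − λ₁ X₁ X₁ᵀ [M]

λ2, X2 = power_iteration(D2)          # now converges to the second mode
ω2 = 1 / sqrt(λ2)
```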
Jacobi’s Method: The Complete Solution
Matrix iteration is efficient for finding the fundamental mode, but what if you need the entire spectrum? Jacobi’s Method delivers all eigenvalues and eigenvectors in one procedure—one of the oldest and most robust algorithms for symmetric eigenproblems, and a forerunner of the dense eigensolvers found in libraries like LAPACK and NumPy.
The Core Idea: Rotation to Diagonalization
The method exploits a fundamental property of symmetric matrices: they can be diagonalized by orthogonal transformations. Jacobi’s insight was to use a sequence of plane rotations, each designed to zero out one off-diagonal element pair:
$$ \boxed{[R]^T [A] [R] \rightarrow [\Lambda] = \text{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n)} $$
The Rotation Matrix
To zero out elements $a_{pq}$ and $a_{qp}$, we apply a Givens rotation in the $(p,q)$ plane:
$$ [R_{pq}] = \begin{bmatrix} 1 & \cdots & 0 & \cdots & 0 \\ \vdots & \cos\theta & \cdots & \sin\theta & \vdots \\ 0 & \cdots & 1 & \cdots & 0 \\ \vdots & -\sin\theta & \cdots & \cos\theta & \vdots \\ 0 & \cdots & 0 & \cdots & 1 \end{bmatrix} $$
The rotation angle $\theta$ is chosen specifically to eliminate the $(p,q)$ element: $$ \tan(2\theta) = \frac{2a_{pq}}{a_{qq} - a_{pp}} $$
The Algorithm
| Step | Operation | Details |
|---|---|---|
| 1 | Find largest off-diagonal | Locate $|a_{pq}|_{\max}$ for $p \neq q$ |
| 2 | Compute rotation angle | $\theta$ from the formula above |
| 3 | Apply rotation | $[A'] = [R_{pq}]^T [A] [R_{pq}]$ |
| 4 | Accumulate eigenvectors | $[V] = [V][R_{pq}]$ |
| 5 | Repeat until converged | Stop when all $|a_{pq}| < \epsilon$ |
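Here is a compact sketch of these steps for a plain symmetric matrix (placeholder data; for a $[K]$–$[M]$ problem, the pair would first be reduced to a single symmetric matrix, e.g. through a Cholesky factorization of $[M]$):

```julia
using LinearAlgebra

# Classical Jacobi iteration for a real symmetric matrix (steps 1-5 of the table).
function jacobi_eigen(A; tol = 1e-10, maxiter = 100)
    A = float.(A)                     # work on a floating-point copy
    n = size(A, 1)
    V = Matrix(1.0I, n, n)            # accumulated eigenvector matrix
    for _ in 1:maxiter
        # Step 1: locate the largest off-diagonal element
        p, q, amax = 1, 2, 0.0
        for i in 1:n-1, j in i+1:n
            if abs(A[i, j]) > amax
                p, q, amax = i, j, abs(A[i, j])
            end
        end
        amax < tol && break           # Step 5: converged
        # Step 2: rotation angle, tan(2θ) = 2a_pq / (a_qq − a_pp)
        θ = 0.5 * atan(2 * A[p, q], A[q, q] - A[p, p])
        c, s = cos(θ), sin(θ)
        R = Matrix(1.0I, n, n)
        R[p, p] =  c;  R[p, q] = s
        R[q, p] = -s;  R[q, q] = c
        A = R' * A * R                # Step 3: apply the plane rotation
        V = V * R                     # Step 4: accumulate eigenvectors
    end
    return diag(A), V                 # eigenvalues on the diagonal, eigenvectors in columns
end

# Example with a small symmetric placeholder matrix
vals, vecs = jacobi_eigen([4.0 1.0 2.0;
                           1.0 3.0 0.5;
                           2.0 0.5 5.0])
```

Here `vals` holds the eigenvalues (in no particular order) and the columns of `vecs` the corresponding eigenvectors.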
Convergence Properties
A key insight: while zeroing $(p,q)$ may disturb previously zeroed elements, each rotation reduces the sum of squares of all off-diagonal elements by exactly $2a_{pq}^2$. This strict decrease guarantees convergence!
$$ S = \sum_{i \neq j} a_{ij}^2 \quad \rightarrow \quad S' = S - 2a_{pq}^2 < S \text{ after each rotation} $$
Putting It Into Practice: Julia Implementation
Modern languages make eigenvalue computation trivial. Julia’s LinearAlgebra standard library wraps highly optimized LAPACK routines—internally using algorithms like QR iteration and divide-and-conquer, descendants of the methods we’ve discussed.
Here’s how to solve for the natural frequencies of a 3-DOF system (the textbook’s Example 7.8 works a problem of this kind).
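A minimal sketch, assuming a fixed-free chain of three equal masses $m$ on three equal springs $k$ (illustrative placeholder values, not the data of Example 7.8):

```julia
using LinearAlgebra

# 3-DOF chain: three equal masses on three equal springs, fixed at the top.
m = 1.0                               # mass of each block, kg (placeholder)
k = 1.0                               # stiffness of each spring, N/m (placeholder)

M = [m 0.0 0.0;
     0.0 m 0.0;
     0.0 0.0 m]                       # mass matrix

K = k * [ 2.0 -1.0  0.0;
         -1.0  2.0 -1.0;
          0.0 -1.0  1.0]              # stiffness matrix

D = K \ M                             # dynamical matrix [D] = [K]⁻¹[M], eigenvalues λ = 1/ω²

F = eigen(D)
λ = real.(F.values)                   # eigenvalues of D are real (D is similar to a symmetric matrix)
Φ = real.(F.vectors)

p = sortperm(λ, rev = true)           # largest λ first ⇒ lowest frequency first
λ, Φ = λ[p], Φ[:, p]
ω = 1 ./ sqrt.(λ)                     # natural frequencies, rad/s

println("Natural frequencies (rad/s): ", round.(ω, digits = 4))
println("Mode shapes (columns = modes):")
display(round.(Φ, digits = 4))
```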
The output gives you the eigenvalues (from which $\omega_i = 1/\sqrt{\lambda_i}$) and the mode shapes as columns of the eigenvector matrix.
Expected Output: for the illustrative values above ($m = k = 1$), the script prints natural frequencies of roughly $\omega_1 \approx 0.445$, $\omega_2 \approx 1.247$, and $\omega_3 \approx 1.802$ rad/s, followed by the three mode-shape columns.
Summary: Your Eigenvalue Toolkit
We’ve traveled from the analytical elegance of determinants to the computational muscle of iterative algorithms. Here’s your quick-reference decision chart:
| Method | Strategy | Output | Accuracy | Best Use Case |
|---|---|---|---|---|
| Dunkerley | Decompose into single-DOF systems | $\omega_1$ (lower bound) | Conservative estimate | Quick sanity checks |
| Rayleigh | Energy quotient with trial shape | $\omega_1$ (upper bound) | Excellent (~1-2% error) | Beams, continuous systems |
| Holzer | Transfer matrix propagation | All $\omega_n$ + mode shapes | Exact | Torsional chains |
| Matrix Iteration | Power method on $[D]$ | One mode at a time | Exact | General MDOF, fundamental mode |
| Jacobi | Rotation to diagonalization | All modes simultaneously | Exact | Dense matrices, full spectrum |
What We Learned
- Avoid the characteristic polynomial—it’s numerically unstable for large systems
- Bound before you compute—Dunkerley and Rayleigh give fast reality checks
- Choose the right tool—chain systems favor Holzer; general systems need matrix methods
- Iteration converges reliably—power iteration homes in on the fundamental mode whenever its eigenvalue is dominant, and Jacobi rotations steadily drive every off-diagonal term to zero
What’s Next?
We’ve analyzed discrete masses on springs, but real structures—guitar strings, diving boards, airplane wings—have mass and stiffness distributed continuously. In Part 8, we enter the realm of Continuous Systems, where the number of degrees of freedom becomes infinite, and partial differential equations take center stage.
References
Rao, S. S. (2018). Mechanical Vibrations (6th ed.). Pearson.