MV-07: Numerical Methods for Natural Frequencies

Summary
How do engineers find natural frequencies of systems with thousands of DOFs? This post covers five essential numerical techniques: Dunkerley's lower bound, Rayleigh's quotient, Holzer's transfer matrix, power iteration, and Jacobi's rotation method.

In Part 6, we formulated the eigenvalue problem for Multi-Degree-of-Freedom systems:

$$ \det\bigl([\mathbf{K}] - \omega^2[\mathbf{M}]\bigr) = 0 $$

Elegant in theory—but what happens when your system has 50, 500, or 5000 degrees of freedom?

Expanding a 100×100 determinant produces a polynomial with 100 roots and astronomical coefficients. Even with modern computers, round-off errors accumulate catastrophically.

So how do engineers analyze the vibrations of turbine rotors, suspension bridges, or aircraft wings? They turn to numerical methods—clever algorithms that find eigenvalues without ever forming the characteristic polynomial.

This post, based on Chapter 7 of Rao’s Mechanical Vibrations, presents five essential techniques:

| Category | Methods | Key Idea |
|---|---|---|
| Bounding | Dunkerley, Rayleigh | Bracket $\omega_1$ without heavy computation |
| Transfer Matrix | Holzer | Propagate through chain-like systems |
| Iterative | Matrix Iteration, Jacobi | Let the computer do the heavy lifting |

Dunkerley’s Formula: The Conservative Estimate

Before investing in heavy computation, engineers need a quick sanity check. Dunkerley’s formula delivers exactly that—a guaranteed lower bound on $\omega_1$ that takes seconds to compute.

The Physical Intuition

The key insight is elegant: imagine “turning off” all masses except one. Each single-mass subsystem vibrates at its own natural frequency $\omega_{ii}$. Dunkerley’s formula combines these individual frequencies to approximate the full system’s fundamental frequency:

Figure: Dunkerley's approach decomposes the multi-mass system into single-mass subsystems, then combines their frequencies.

The Formula

For a lumped-mass system with flexibility coefficients $a_{ii}$:

$$ \boxed{\frac{1}{\omega_1^2} \approx a_{11}m_1 + a_{22}m_2 + \cdots + a_{nn}m_n = \sum_{i=1}^{n} \frac{1}{\omega_{ii}^2}} $$

where:

  • $a_{ii}$ = flexibility influence coefficient (deflection at point $i$ due to unit force at $i$)
  • $\omega_{ii}$ = natural frequency of mass $m_i$ acting alone

Key Property: Dunkerley’s formula always underestimates the true fundamental frequency. This makes it a reliable lower bound—if your detailed analysis gives a frequency below Dunkerley’s estimate, something is wrong!
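To see how little computation is involved, here is a minimal Julia sketch of Dunkerley's estimate for a hypothetical 3-DOF system; the flexibility coefficients and masses are assumed values chosen purely for illustration, not taken from the textbook.

```julia
# Dunkerley's lower bound for a hypothetical 3-DOF lumped-mass system.
# a[i] = flexibility influence coefficient a_ii (m/N), m[i] = lumped mass (kg);
# both sets of numbers are assumed, purely for illustration.
a = [2.0e-4, 5.0e-4, 9.0e-4]
m = [10.0, 20.0, 30.0]

inv_ω1_sq = sum(a .* m)          # 1/ω₁² ≈ Σ aᵢᵢ·mᵢ
ω1_lower = 1 / sqrt(inv_ω1_sq)

println("Dunkerley lower bound: ω₁ ≥ $(round(ω1_lower, digits=3)) rad/s")
```

The sum $\sum a_{ii} m_i$ equals the trace of $[K]^{-1}[M]$, i.e. $\sum_i 1/\omega_i^2$, which is always at least $1/\omega_1^2$; that is why the estimate can only err on the low side.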

Rayleigh’s Method: The Energy Approach

Dunkerley gives a floor; Rayleigh’s method provides the ceiling. Better yet, Rayleigh’s estimate is often remarkably accurate—typically within 1-2% of the exact value. Together, these two methods bracket the true fundamental frequency.

The Core Idea: Energy Balance

At a natural frequency, the maximum kinetic energy equals the maximum potential energy. Rayleigh exploited this by defining a quotient that relates stiffness (potential energy) to mass (kinetic energy):

$$ \boxed{\omega_1^2 \approx R(\tilde{\mathbf{X}}) = \frac{\tilde{\mathbf{X}}^T [\mathbf{K}] \tilde{\mathbf{X}}}{\tilde{\mathbf{X}}^T [\mathbf{M}] \tilde{\mathbf{X}}}} $$

where $\tilde{\mathbf{X}}$ is an assumed (trial) mode shape—it doesn’t need to be exact! Equating $V_{\max} = \tfrac{1}{2}\tilde{\mathbf{X}}^T[\mathbf{K}]\tilde{\mathbf{X}}$ with $T_{\max} = \tfrac{1}{2}\omega^2\,\tilde{\mathbf{X}}^T[\mathbf{M}]\tilde{\mathbf{X}}$ and solving for $\omega^2$ yields exactly this ratio.

Figure: A guessed trial vector approximates the true mode shape (left); Dunkerley and Rayleigh bracket the exact frequency from below and above (right).

Why Rayleigh Works So Well

| Property | Implication |
|---|---|
| Stationarity | The quotient is insensitive to small errors in $\tilde{\mathbf{X}}$: a 10% error in shape gives only ~1% error in $\omega^2$ |
| Upper bound | $R(\tilde{\mathbf{X}}) \geq \omega_1^2$ always, providing a reliable ceiling |
| Static deflection trick | Using the gravity-loaded shape as $\tilde{\mathbf{X}}$ often yields excellent accuracy |

Engineering Tip: For beams and shafts, use the static deflection curve (the shape under its own weight) as your trial vector. This usually gives frequency estimates within 1-2% of the exact value!
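As a quick illustration of the static-deflection trick, here is a small Julia sketch of the Rayleigh quotient on an assumed 3-DOF spring-mass chain with unit springs and unit masses; the trial vector is that chain's static deflection under a uniform load. The matrices and numbers are illustrative, not taken from the textbook.

```julia
using LinearAlgebra

# Rayleigh quotient on an assumed 3-DOF chain (unit springs, unit masses).
K = [ 2.0 -1.0  0.0;
     -1.0  2.0 -1.0;
      0.0 -1.0  1.0]
M = Matrix{Float64}(I, 3, 3)        # unit masses → identity mass matrix

X = [3.0, 5.0, 6.0]                 # static-deflection shape under a uniform load

R = (X' * K * X) / (X' * M * X)     # Rayleigh quotient ≈ ω₁²
ω1_exact = sqrt(minimum(eigvals(Symmetric(K))))   # exact value (M = I here)

println("Rayleigh estimate: ω₁ ≈ $(round(sqrt(R), digits=4))")
println("Exact value:       ω₁ = $(round(ω1_exact, digits=4))")
```

For this particular chain the estimate comes out roughly half a percent above the exact value, illustrating both the upper-bound property and the accuracy of a static-deflection trial shape.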

Holzer’s Method: Marching Through the System

Bounding methods are fast but approximate. What if we need exact values for all natural frequencies? For torsional systems—engine crankshafts, turbine rotors, drive trains—Holzer’s method has been the engineer’s go-to technique since the 1920s.

The Physical Setup

Consider a shaft with multiple disks, each with rotational inertia $J_i$, connected by shaft sections with torsional stiffness $k_{ti}$:

Figure: A torsional system with disks and shaft segments (left); the iterative process of guessing ω, propagating through the system, and checking whether the torques balance (right).

The Algorithm

Holzer’s method is a transfer matrix approach: starting from one end, we propagate angular displacements through the system:

| Step | Action | Equation |
|---|---|---|
| 1 | Guess a trial frequency | $\omega$ |
| 2 | Set first disk amplitude | $\Theta_1 = 1$ (arbitrary normalization) |
| 3 | Compute inertia torque | $T_i = J_i \omega^2 \Theta_i$ |
| 4 | Propagate twist | $\Theta_{i+1} = \Theta_i - \dfrac{\sum_{j=1}^{i} T_j}{k_{ti}}$ |
| 5 | Check at final disk | If $\sum T = 0$, then $\omega$ is a natural frequency |

The Residual Torque Plot

Plot the residual torque $\sum T$ versus trial frequency $\omega$. Natural frequencies occur where the curve crosses zero. This graphical approach makes it easy to find all modes systematically.

Advantages: Holzer’s method (1) finds all natural frequencies, (2) gives mode shapes directly from $\Theta_i$ values, and (3) requires no matrix assembly—perfect for hand calculations and early computer implementations.
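The algorithm table above maps almost line-for-line onto code. Below is a minimal Julia sketch of the residual-torque scan for a hypothetical three-disk torsional system; the inertias, stiffnesses, and scan range are assumed values chosen only to show the bookkeeping.

```julia
# Holzer residual-torque scan for a hypothetical 3-disk torsional system.
J  = [5.0, 10.0, 15.0]      # disk inertias, kg·m² (assumed)
kt = [1.0e5, 2.0e5]         # shaft torsional stiffnesses, N·m/rad (assumed)

# Propagate the twist through the chain for a trial ω and return the residual torque ΣT.
function holzer_residual(ω)
    Θ = 1.0                                  # step 2: Θ₁ = 1 (arbitrary normalization)
    ΣT = 0.0                                 # running sum of inertia torques
    for i in eachindex(J)
        ΣT += J[i] * ω^2 * Θ                 # step 3: inertia torque of disk i
        i < length(J) && (Θ -= ΣT / kt[i])   # step 4: twist across shaft section i
    end
    return ΣT                                # step 5: zero at a natural frequency
end

# Scan trial frequencies; sign changes of the residual bracket the natural frequencies.
for ω in 20.0:20.0:300.0
    println("ω = $ω rad/s  →  ΣT = $(round(holzer_residual(ω), sigdigits=4))")
end
```

Sign changes in the printed residual bracket the natural frequencies; a finer scan or a simple root-finder then pins them down, and the Θ values at a root give the corresponding mode shape.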

Matrix Iteration: The Workhorse Algorithm

Holzer excels for chain-like systems, but real structures have complex connectivity. The Matrix Iteration Method—also known as the power method—is the workhorse algorithm behind many commercial vibration solvers. It’s simple, robust, and converges reliably to the fundamental mode.

The Eigenvalue Setup

We reformulate the problem using the dynamical matrix $[D] = [K]^{-1}[M]$:

$$ \boxed{[D]\vec{X} = \lambda \vec{X}, \quad \text{where } \lambda = \frac{1}{\omega^2}} $$

The largest eigenvalue $\lambda_1$ corresponds to the smallest natural frequency $\omega_1$—exactly what engineers usually need first.

Figure: Power iteration converges to the fundamental mode (left); matrix deflation removes known modes to reveal higher frequencies (right).

The Algorithm

| Step | Operation | Purpose |
|---|---|---|
| 1 | Choose arbitrary $\vec{X}_1$ | Initial guess (e.g., all ones) |
| 2 | Compute $\vec{Y}_{r+1} = [D]\vec{X}_r$ | Matrix-vector multiplication |
| 3 | Normalize: $\vec{X}_{r+1} = \vec{Y}_{r+1} / \lvert\vec{Y}_{r+1}\rvert_{\max}$ | Prevent numerical overflow |
| 4 | Check convergence | Repeat until $\vec{X}$ stabilizes |
| 5 | Extract $\lambda_1 = \lvert\vec{Y}\rvert_{\max}$ | Largest eigenvalue → $\omega_1 = 1/\sqrt{\lambda_1}$ |

Finding Higher Modes: Matrix Deflation

Once $(\lambda_1, \vec{X}^{(1)})$ is found, we sweep out the first mode using:

$$ [D_2] = [D] - \lambda_1 \vec{X}^{(1)} {\vec{X}^{(1)}}^T [M] $$

where $\vec{X}^{(1)}$ is normalized so that ${\vec{X}^{(1)}}^T [M] \vec{X}^{(1)} = 1$. Iterating on $[D_2]$ then converges to $\lambda_2$ (the second mode). Repeat the deflation to obtain $\omega_3, \omega_4, \ldots$

Why It Works: Each iteration amplifies the component along the dominant eigenvector. After enough iterations, all other components become negligible—the vector “locks onto” the first mode.
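Here is a minimal Julia sketch of the power iteration, run on the same dynamical matrix that appears in the worked example at the end of this post. The deflation step is omitted because it also needs the mass matrix $[M]$ and a mass-normalized mode shape.

```julia
using LinearAlgebra

# Power iteration on a dynamical matrix [D] = [K]⁻¹[M].
# A fixed iteration count stands in for a proper convergence check in this sketch.
function power_iteration(D; iters=200)
    X = ones(size(D, 1))            # arbitrary starting vector (all ones)
    λ = 0.0
    for _ in 1:iters
        Y = D * X                   # matrix-vector multiplication
        λ = maximum(abs.(Y))        # |Y|max estimates the largest eigenvalue
        X = Y / λ                   # normalize to prevent overflow
    end
    return λ, X
end

# Same 3-DOF dynamical matrix as in the Julia example later in this post
D = [ 3.0  -1.0   0.0;
     -2.0   4.0  -3.0;
      0.0  -1.0   1.0]

λ1, X1 = power_iteration(D)
println("λ₁ ≈ $(round(λ1, digits=6))   →   ω₁ ≈ $(round(1/sqrt(λ1), digits=6)) rad/s")
println("Fundamental mode shape (max-normalized): ", round.(X1, digits=4))
```

The converged $\lambda_1$ and $\omega_1$ can be cross-checked against the largest eigenvalue and the fundamental frequency reported by `eigen` in the worked example below.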

Jacobi’s Method: The Complete Solution

Matrix iteration is efficient for finding the fundamental mode, but what if you need the entire spectrum? Jacobi’s Method delivers all eigenvalues and eigenvectors of a symmetric matrix in one procedure, and it is a classical forerunner of the dense eigensolvers found in modern libraries such as LAPACK and NumPy.

The Core Idea: Rotation to Diagonalization

The method exploits a fundamental property of symmetric matrices: they can be diagonalized by orthogonal transformations. Jacobi’s insight was to use a sequence of plane rotations, each designed to zero out one off-diagonal element pair:

$$ \boxed{[R]^T [A] [R] \rightarrow [\Lambda] = \text{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n)} $$

Figure: Each rotation matrix zeros one off-diagonal pair (p, q) (left); progressive sweeps drive all off-diagonal elements to zero (right).

The Rotation Matrix

To zero out elements $a_{pq}$ and $a_{qp}$, we apply a Givens rotation in the $(p,q)$ plane:

$$ [R_{pq}] = \begin{bmatrix} 1 & \cdots & 0 & \cdots & 0 \\ \vdots & \cos\theta & \cdots & \sin\theta & \vdots \\ 0 & \cdots & 1 & \cdots & 0 \\ \vdots & -\sin\theta & \cdots & \cos\theta & \vdots \\ 0 & \cdots & 0 & \cdots & 1 \end{bmatrix} $$

The rotation angle $\theta$ is chosen specifically to eliminate the $(p,q)$ element: $$ \tan(2\theta) = \frac{2a_{pq}}{a_{qq} - a_{pp}} $$

The Algorithm

| Step | Operation | Details |
|---|---|---|
| 1 | Find largest off-diagonal | Locate $\lvert a_{pq}\rvert_{\max}$ for $p \neq q$ |
| 2 | Compute rotation angle | $\theta$ from the formula above |
| 3 | Apply rotation | $[A'] = [R_{pq}]^T [A] [R_{pq}]$ |
| 4 | Accumulate eigenvectors | $[V] = [V][R_{pq}]$ |
| 5 | Repeat until converged | Stop when all $\lvert a_{pq}\rvert < \epsilon$ |

Convergence Properties

A key insight: while zeroing $(p,q)$ may disturb other elements, the sum of squares of all off-diagonal elements strictly decreases with each rotation. This guarantees convergence!

$$ S = \sum_{i \neq j} a_{ij}^2 \quad \rightarrow \quad S' < S \text{ after each rotation} $$

Why Jacobi Excels: Unlike matrix iteration (which finds one mode at a time), Jacobi delivers the complete eigensolution—all $n$ eigenvalues and eigenvectors—in one pass. This makes it ideal for dense matrices where all modes are needed.
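For readers who want to see the rotations in action, here is a compact Julia sketch of the classical Jacobi sweep following the five steps above; the symmetric test matrix at the end is an assumed stand-in, not a matrix from the textbook.

```julia
using LinearAlgebra

# Classical Jacobi eigenvalue iteration for a symmetric matrix.
function jacobi_eigen(A; tol=1e-10, maxrot=100)
    A = copy(float(A))
    n = size(A, 1)
    V = Matrix{Float64}(I, n, n)               # accumulates the eigenvectors
    for _ in 1:maxrot
        # Step 1: find the largest off-diagonal element a_pq
        p, q, amax = 1, 2, 0.0
        for i in 1:n-1, j in i+1:n
            abs(A[i, j]) > amax && ((p, q, amax) = (i, j, abs(A[i, j])))
        end
        amax < tol && break                     # Step 5: converged
        # Step 2: rotation angle that zeros a_pq (tan 2θ = 2a_pq / (a_qq - a_pp))
        θ = 0.5 * atan(2A[p, q], A[q, q] - A[p, p])
        c, s = cos(θ), sin(θ)
        R = Matrix{Float64}(I, n, n)
        R[p, p] = c;  R[p, q] = s
        R[q, p] = -s; R[q, q] = c
        A = R' * A * R                          # Step 3: apply the rotation
        V = V * R                               # Step 4: accumulate eigenvectors
    end
    return diag(A), V                           # eigenvalues (unsorted) and mode shapes
end

# Assumed symmetric test matrix (illustrative only)
S = [ 4.0 -2.0  0.0;
     -2.0  4.0 -2.0;
      0.0 -2.0  2.0]
λ, V = jacobi_eigen(S)
println("Eigenvalues: ", round.(λ, digits=4))
```

Each pass zeros the currently largest off-diagonal entry; because the sum of squared off-diagonal terms shrinks with every rotation, the loop terminates once they all fall below the tolerance.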

Putting It Into Practice: Julia Implementation

Modern languages make eigenvalue computation trivial. Julia’s LinearAlgebra standard library wraps highly optimized LAPACK routines—internally using algorithms like QR iteration and divide-and-conquer, descendants of the methods we’ve discussed.

Here’s how to solve for the natural frequencies of a 3-DOF system (based on Example 7.8 in the textbook):

```julia
using LinearAlgebra

# Dynamical matrix [D] = [K]⁻¹[M] for the 3-DOF system of Example 7.8
A = [ 3.0  -1.0   0.0;
     -2.0   4.0  -3.0;
      0.0  -1.0   1.0]

# Solve the standard eigenvalue problem [D]X = λX, where λ = 1/ω²
eigenvalues, eigenvectors = eigen(A)

println("Eigenvalues (λ = 1/ω²):")
for (i, λ) in enumerate(eigenvalues)
    println("  λ$i = $(round(λ, digits=6))")
end

# The largest λ corresponds to the smallest (fundamental) frequency,
# so sort the frequencies in ascending order before labelling them
ω = sort([1 / sqrt(λ) for λ in eigenvalues if λ > 0])

println("\nNatural Frequencies:")
for (i, ωi) in enumerate(ω)
    println("  ω$i = $(round(ωi, digits=6)) rad/s")
end

println("\nEigenvectors (Mode Shapes):")
display(eigenvectors)
```

The output gives you the eigenvalues (from which $\omega_i = 1/\sqrt{\lambda_i}$) and the mode shapes as columns of the eigenvector matrix.

Expected Output:

```
Eigenvalues (λ = 1/ω²):
  λ1 = 0.074577
  λ2 = 2.448071
  λ3 = 5.477352

Natural Frequencies:
  ω1 = 0.427282 rad/s
  ω2 = 0.639128 rad/s
  ω3 = 3.661823 rad/s

Eigenvectors (Mode Shapes):
3×3 Matrix{Float64}:
 0.226159  -0.830483  -0.366533
 0.661611  -0.458368   0.908033
 0.714928   0.316537  -0.202806
```

Summary: Your Eigenvalue Toolkit

We’ve traveled from the analytical elegance of determinants to the computational muscle of iterative algorithms. Here’s your quick-reference decision chart:

| Method | Strategy | Output | Accuracy | Best Use Case |
|---|---|---|---|---|
| Dunkerley | Decompose into single-DOF systems | $\omega_1$ (lower bound) | Conservative estimate | Quick sanity checks |
| Rayleigh | Energy quotient with trial shape | $\omega_1$ (upper bound) | Excellent (~1-2% error) | Beams, continuous systems |
| Holzer | Transfer matrix propagation | All $\omega_n$ + mode shapes | Exact | Torsional chains |
| Matrix Iteration | Power method on $[D]$ | One mode at a time | Exact | General MDOF, fundamental mode |
| Jacobi | Rotation to diagonalization | All modes simultaneously | Exact | Dense matrices, full spectrum |

The Engineer’s Shortcut: Compute both Dunkerley (lower) and Rayleigh (upper) bounds. If they’re within 5% of each other, you’ve likely bracketed the true $\omega_1$ with high confidence—no heavy computation needed!

What We Learned

  1. Avoid the characteristic polynomial—it’s numerically unstable for large systems
  2. Bound before you compute—Dunkerley and Rayleigh give fast reality checks
  3. Choose the right tool—chain systems favor Holzer; general systems need matrix methods
  4. Iteration converges reliably: power iteration homes in on the dominant mode whenever the starting vector has a component along it, and Jacobi's rotations are guaranteed to converge for symmetric matrices

What’s Next?

We’ve analyzed discrete masses on springs, but real structures—guitar strings, diving boards, airplane wings—have mass and stiffness distributed continuously. In Part 8, we enter the realm of Continuous Systems, where the number of degrees of freedom becomes infinite, and partial differential equations take center stage.

References

Rao, S. S. (2018). Mechanical Vibrations (6th ed.). Pearson.