Designing Optimal Second-Derivative Estimators

Another page [1] on this site describes how to design "Chebyshev-optimal" filters to calculate numerical estimates of the first derivative from samples of a function. This page extends that approach to estimate the second derivative. These filters are also optimal in the sense of minimizing the maximum deviation from the "ideal design" specification. Despite promising results, the designs are not the best choice for all applications.

Adapting the solution method

While the approach used for designing first-derivative approximators carries over nicely, a few details change when the target is the second derivative: the ideal frequency response becomes -ω² rather than jω, so the coefficient sequence is symmetric rather than antisymmetric and the frequency response is purely real (as the tabulated coefficients below illustrate).

After taking these adjustments into account, the solution method is fundamentally the same as for the first-derivative case. Full details of the method are provided on the page that covers the first-derivative case [1].
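To make the adapted formulation concrete, here is a rough sketch of the core linear program in Python. This is not the author's minmax2.m: it uses SciPy's linprog in place of GLPK, assumes an ideal response of -ω² with even-symmetric coefficients, fixes the passband ripple, minimizes the stopband ripple, and omits the "sensitivity" constraint that appears in the tabulated designs.

```python
# Sketch of a min-max (Chebyshev) second-derivative filter design as a
# linear program.  Variables: the half-filter h[0..K] (even symmetry)
# plus the stopband ripple delta, which is minimized.  The passband
# response must track the ideal response -w^2 within a fixed ripple.
import numpy as np
from scipy.optimize import linprog

def design_d2(n_terms, f_pass, f_stop, pass_ripple, grid=400):
    K = (n_terms - 1) // 2
    wp = np.linspace(0.0, 2 * np.pi * f_pass, grid)    # passband grid
    ws = np.linspace(2 * np.pi * f_stop, np.pi, grid)  # stopband grid

    def basis(w):
        # H(w) = h0 + 2*sum_k h_k cos(k w) for a symmetric filter
        return np.column_stack([np.ones_like(w)] +
                               [2 * np.cos(k * w) for k in range(1, K + 1)])

    Bp, Bs = basis(wp), basis(ws)
    nv = K + 2                       # unknowns: h0..hK and delta
    c = np.zeros(nv); c[-1] = 1.0    # objective: minimize delta
    zp = np.zeros((grid, 1)); on = np.ones((grid, 1))
    # Constraints: |Bp h + w^2| <= pass_ripple and |Bs h| <= delta
    A_ub = np.vstack([np.hstack([ Bp, zp]), np.hstack([-Bp, zp]),
                      np.hstack([ Bs, -on]), np.hstack([-Bs, -on])])
    b_ub = np.concatenate([-wp**2 + pass_ripple, wp**2 + pass_ripple,
                           np.zeros(grid), np.zeros(grid)])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (K + 1) + [(0, None)])
    assert res.success, res.message
    half = res.x[:K + 1]
    # Mirror the half-filter into the full symmetric coefficient set
    return np.concatenate([half[:0:-1], half]), res.x[-1]

h, ripple = design_d2(11, f_pass=0.05, f_stop=0.24, pass_ripple=0.001)
```

With the 11-term band edges from the table below, this sketch produces a filter in the same family as the tabulated one, though not identical, because the sensitivity constraint is omitted.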

Here is the result of a typical design (see the 15-term design below).

min-max design

Design script

An m-file design script for Octave shows the numerical details of constructing the matrix terms, calculating the solution, and plotting the results for the second-derivative filter design. If you wish to experiment with this script, download a copy of the minmax2.m file. The Octave solution is based on the GLPK package, but you can use the equivalent solver from the Optimization Toolbox for Matlab [3].

Comparison to other designs

To see the practical benefits of the min-max designs, the 15-term filter (as tabulated below) is compared to the 9-term "Robust Low Noise" design from Pavel Holoborodko's site [2]. While this comparison seems inherently unfair — the min-max design is allowed roughly two-thirds more parameters to work with — this choice is pretty much forced.

  1. The 9-term "Robust Low Noise" design is the highest-order filter listed.

  2. The "Robust Low Noise" designs deliver worse accuracy as the filter order increases, as you can clearly see from the frequency response plots for estimators of various orders — so giving them more parameters would only make the comparison more unfair.

First, here is the performance delivered by the RLN method in the presence of white random noise at 5% of the signal level, with the signal at a frequency of about 8% of the sampling rate (approximately 12 samples span a complete waveform cycle). A mathematically perfect result in green is plotted on top of the calculated results in blue.

robust low-noise estimator applied

Next, here is the performance delivered by the min-max method with the same level of random noise and the same signal. (The apparent difference in the number of output terms is only an edge-effect artifact, nothing of significance. When a longer filter is convolved with the same length data block, there are more invalid values at the ends where the filter "overlaps" beyond the ends of the data block.)

min-max estimator applied
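The experiment shown in these plots can be reproduced numerically. Here is a minimal sketch, assuming a unit-amplitude sinusoid at 8% of the sample rate, white noise at 5% of the signal amplitude, unit sample spacing, and the 15-term coefficients tabulated below.

```python
# Apply the 15-term min-max second-derivative filter to a noisy
# sinusoid and compare against the exact second derivative.
import numpy as np

# Coefficients from the 15-term design tabulated in this article.
h15 = np.array([ 0.010053, -0.051074,  0.018345,  0.070910,  0.081264,
                 0.017178, -0.081808, -0.129738, -0.081808,  0.017178,
                 0.081264,  0.070910,  0.018345, -0.051074,  0.010053])

f = 0.08                              # signal frequency as fraction of Fs
n = np.arange(400)
rng = np.random.default_rng(0)        # fixed seed for repeatability
x = np.sin(2 * np.pi * f * n) + 0.05 * rng.standard_normal(n.size)

# 'valid' convolution; the filter is symmetric, so flipping is moot.
est = np.convolve(x, h15, mode='valid')
centers = n[7:-7]                     # sample indices aligned with est
ideal = -(2 * np.pi * f) ** 2 * np.sin(2 * np.pi * f * centers)

rms_err = np.sqrt(np.mean((est - ideal) ** 2))
```

With these settings, the RMS error of the estimate is dominated by the filtered noise, on the order of a few percent of the RMS of the ideal second derivative.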

At first glance, it might seem that the min-max design is the undisputed winner — but unfortunately, the choice is not that clear. While the min-max design tracks changing curvature and higher frequencies better, in situations where the curvature is small and changing slowly, a "Robust Low Noise" design produces superior results. This is not yet fully understood, but it is consistent with the contrasting design strategies. The "RLN" approach concentrates maximum effort on matching low frequencies accurately. In contrast, the "min-max" design tolerates a small absolute error at every frequency, and this can become a significant relative error when the desired result is itself small. For now, the min-max designs are not a complete and general solution.
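One way to see the relative-error effect directly is a quick numerical check, using the 15-term coefficients tabulated below and unit sample spacing: applied to a pure quadratic, whose second derivative is exactly 2 everywhere, the filter settles near 1.93, an error of roughly 3.5 percent of the desired value. A design that concentrates its accuracy at zero frequency does not show this bias.

```python
# The 15-term min-max filter applied to x[n] = n^2, whose second
# derivative is exactly 2.  The filter output sits near 1.93 instead:
# a small absolute error, but a noticeable relative one.
import numpy as np

h15 = np.array([ 0.010053, -0.051074,  0.018345,  0.070910,  0.081264,
                 0.017178, -0.081808, -0.129738, -0.081808,  0.017178,
                 0.081264,  0.070910,  0.018345, -0.051074,  0.010053])

x = np.arange(60, dtype=float) ** 2       # pure quadratic, d2x/dn2 = 2
est = np.convolve(x, h15, mode='valid')
# est is close to 1.93 everywhere, not 2
```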

Some tabulated results

The following table provides some design results that you can use directly without resorting to the design script. For each target number of filter terms, the design parameters, the resulting performance specs, and the filter coefficients are listed. If you are not sure where to start, try the 15-term filter design with its 0.08·Fs guaranteed accurate band. If you need more economy or better performance, switch to one of the other designs.


11-term — 5% accurate — 8% useful
Design:   pass: 0.05      transition: 0.185    sensitivity: 100.0
          pass ripple: 0.001      stopband edge: 0.24    stopband ripple: 0.10
Coefficients:
  -0.024705 0.064369 0.070200 0.018820 -0.070707 -0.115954 -0.070707 0.018820 0.070200 0.064369 -0.024705

13-term — 6% accurate — 9% useful
Design:   pass: 0.08      transition: 0.135    sensitivity: 100.0
          pass ripple: 0.001      stopband edge: 0.21    stopband ripple: 0.104
Coefficients:
  -0.0303998 0.0062139 0.0870660 0.0549440 0.0219066 -0.0862821 -0.1068973 -0.0862821 0.0219066 0.0549440 0.0870660 0.0062139 -0.0303998

15-term — 8% accurate — 11% useful
Design:   pass: 0.08      transition: 0.14     sensitivity: 150
          pass ripple: 0.00075    stopband edge: 0.22    stopband ripple: 0.1
Coefficients:
  0.010053 -0.051074 0.018345 0.070910 0.081264 0.017178 -0.081808 -0.129738 -0.081808 0.017178 0.081264 0.070910 0.018345 -0.051074 0.010053

17-term — 10% accurate — 13% useful
Design:   pass: 0.10      transition: 0.16     sensitivity: 420
          pass ripple: 0.0002     stopband edge: 0.26    stopband ripple: 0.082
Coefficients:
  -0.0020745 0.0274629 -0.0434860 -0.0404178 0.0523018 0.1458368 0.0842230 -0.1122293 -0.2232337 -0.1122293 0.0842230 0.1458368 0.0523018 -0.0404178 -0.0434860 0.0274629 -0.0020745
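As a quick sanity check on the tabulated numbers, the frequency response of a design can be evaluated directly and compared against its listed ripple figures. Here is a sketch for the 11-term design, assuming the band edges are fractions of the sampling rate and the ideal response is -(2πf)².

```python
# Evaluate the frequency response H(w) = sum_k h[k] cos(w k) of the
# tabulated 11-term design and check it against the listed specs:
# |H + w^2| <= ~0.001 up to f = 0.05, and |H| <= ~0.10 above f = 0.24
# (f as a fraction of the sampling rate, w = 2*pi*f).
import numpy as np

h11 = np.array([-0.024705,  0.064369,  0.070200,  0.018820, -0.070707,
                -0.115954, -0.070707,  0.018820,  0.070200,  0.064369,
                -0.024705])
k = np.arange(-5, 6)

def H(f):
    w = 2 * np.pi * np.atleast_1d(f)
    return np.cos(np.outer(w, k)) @ h11   # response is real (even symmetry)

fp = np.linspace(0.0, 0.05, 500)          # passband frequencies
fs = np.linspace(0.24, 0.5, 500)          # stopband frequencies
pass_err = np.max(np.abs(H(fp) + (2 * np.pi * fp) ** 2))
stop_err = np.max(np.abs(H(fs)))
```

The same check works for the other designs by substituting their coefficients and band edges.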



[1] The page on this site about first-derivative filter approximation has the details about how to set up and solve the approximation problem.

[2] The "robust low noise estimators" are described at Pavel Holoborodko's web site.

[3] You can find more information about the Optimization Toolbox at the MathWorks web site.


Site:    RidgeRat's Barely Operating Site
Author:  Larry Trammell    Created: Dec 19, 2011    Revised: Jan 28, 2012    Status: Experimental
Contact: NOSPAM ltramme1476 AT earthlink DOT net NOSPAM
Restrictions: This information is public.