A semi-automated Foucault test

1. Introduction

After encountering the usual frustrations trying to figure a relatively fast mirror with a crude homemade foucault test apparatus and couder mask, I decided that on my next ATM project I would use a real quantitative mirror test. This is especially necessary since my next two planned projects will have f/3 primaries, which are generally considered untestable in a conventional foucault setup. Occasional discussions on the ATM mailing list and in the literature suggested that it might be possible to turn the foucault test or one of its variants into a useful quantitative test. Most such discussions center on the idea of digitizing foucaultgrams with a digital camera or video frame grabber. This note describes an implementation of that idea that I've been experimenting with recently. My method is a very straightforward version of a standard foucault test, except that I invert the usual data collection procedure. Instead of picking a zonal radius and finding the longitudinal knife edge position that nulls that zone, I select knife edge positions and determine the zonal radius of the null. This is done without the use of masks or pin sticks, as explained in more detail below.

The method described here was developed independently of a similar approach described by Dick Suiter in an article in Amateur Telescope Making Journal #13, and at the time I posted the original version of this note I had not read Suiter's article. I've since done so, and comment a bit further on it in an appendix. I have also received helpful comments from several readers, which I have tried to address in additional appendices. Finally, I performed one additional test run approximately two months after the ones in the original note, and the results have been incorporated in the discussion. The excellent agreement of this third run with the previous two bolsters my confidence that this is a viable approach to mirror testing.

2. Apparatus

The test apparatus is a fairly standard moving source, slitless knife edge tester with a few refinements, as shown in figures 1 and 2. The stage consists of a pair of surplus micrometer driven linear stages stacked to make an XY platform. The micrometers are graduated in 0.01 mm increments and allow precise and repeatable positioning in each axis. The remaining hardware consists of various Edmund mounting components and a quick release system that I use in my photographic endeavors. A Radio Shack LED is used as the light source, with the tip sawed off and sanded down to diffuse the light. The knife edge is the usual single edge razor style blade, left bare since I don't normally have my eye in close contact with it. The digital heart of this system is a mid-range Kodak point & shoot digital still camera. This particular model (DC265) has a ccd with a nominal resolution of 1536 x 1024 pixels, a decent optical zoom lens, better than average manual controls, and an lcd screen on the back for real time image previews. A variety of interfaces can be used to transfer pictures to a computer, and the camera even comes with a TWAIN driver for remote control from a computer. This feature wasn't as helpful as I had hoped, however, and I ended up using the camera offline and transferring the pictures later.
 

Figure 1

Figure 2
 

3. Data collection

My test piece for this experiment is a 6" (153 mm) f/5 paraboloidal mirror. This particular optic happens to be the only one I have made with the help of a professional tutor. Although I haven't used it a great deal in the 15 years or more since I made it, the mirror star tests well and certainly qualifies as "diffraction limited", if less than perfect as we will see below. It was also overdue to be recoated, so I didn't mind stripping the old coating.

Data collection is a simple matter of capturing an image of the illuminated mirror, using the camera in place of the eye. In my setup the camera is placed with the front of the lens directly behind the knife edge and resting on top of the light source. The camera thus "sees" what the eye would in the same position. The lcd preview allows verification that the light cone is actually returning to the camera and that the knife edge is cutting into the light. This model camera allows manual focus, and I take advantage of that to set the focus to the proper value (1.5 meters in this case) and manually set the exposure to 1 second. Of course the flash must be turned off. I also set the zoom to the longest telephoto position, which produces an image of the f/5 mirror about 460 pixels in diameter. As anyone who's done it knows, the foucault test is extremely sensitive to the lateral position of the knife edge, and even though this tester is quite stable, pressing the shutter will move the camera. Since the camera lacks an effective remote I just set the self timer and use the 10 seconds between pressing the shutter release and its firing to tweak the knife edge position, if necessary. The camera produces a 24 bit RGB file in JPEG format. The light source I use produces useful information in the R channel only, so after dumping the files to my PC I throw out G and B, crop the image to a little larger than the mirror, and save the result to a "raw" bitmap file.
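In case it's useful, here is a minimal sketch of the channel extraction step in C, assuming the decoded JPEG is available as a pixel-interleaved 8 bit RGB buffer (the buffer layout and function name are illustrative only, not my actual code):

    /* Keep only the red channel of a 24 bit RGB image as an 8 bit
     * grayscale buffer.  Cropping is assumed to happen separately. */
    void extract_red(const unsigned char *rgb, unsigned char *gray,
                     int n_pixels)
    {
        for (int i = 0; i < n_pixels; i++)
            gray[i] = rgb[3 * i];   /* R of the i-th RGB triplet */
    }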

Figure 3 shows a representative foucaultgram taken at about the 70% zone. It shows the classic donut pattern of a fairly strongly aspheric mirror. It also shows the main flaw of this particular mirror: fairly severe "dog-biscuit", resulting no doubt from over-aggressive polishing with cerium oxide. The bright spot in the middle is a fleck of coating that was hiding behind a sharpie mark and escaped removal when I stripped the coating. This fleck is not what I use to determine the center point of the mirror in data reduction.

This picture is nice enough, but without further analysis it's not especially useful. A simple manipulation of it, however, will give a flavor of the analytical approach I use to extract useful data. Figure 4 was created in Adobe Photoshop by duplicating the original image, flipping the duplicate around the vertical axis (creating a mirror image of the original), and pasting the mirror image over the original using the Photoshop "difference" operation to combine the two images. The resulting image is dark where the original and its mirror image are approximately the same brightness, and brighter where they differ. The approximately circular dark ring is the null zone for this knife edge position, which in this case is near 70% as mentioned above. Notice that this operation creates an image with bilateral symmetry around the vertical axis.
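The same operation is easy to express in code. A minimal sketch in C, assuming an 8 bit grayscale image in a row-major buffer (again an illustration of the operation, not my production code):

    #include <stdlib.h>

    /* For each pixel, take the absolute difference between the image
     * and its mirror image about the vertical axis.  Pixels where the
     * left and right halves match come out dark; mismatches are bright. */
    void flip_and_difference(const unsigned char *in, unsigned char *out,
                             int width, int height)
    {
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++) {
                int a = in[y * width + x];
                int b = in[y * width + (width - 1 - x)];
                out[y * width + x] = (unsigned char)abs(a - b);
            }
    }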
 
 

Figure 3

Figure 4

Figure 5 shows one more manipulated foucaultgram, this one with a null at roughly the 90% zone. The blue ring in this image is the position of the null determined by the algorithm described in the next section. One thing I find encouraging about this method is that it appears to work well for the outermost zones of the mirror, which I find difficult to read by eye.

Figure 5
 

4. Data analysis

The main thing I'm trying to do in this experiment is to eliminate subjective judgments as much as possible, and to that end I have automated most of the data analysis. Reducing the data from a single frame (i.e. one image captured at a preset knife edge position) consists of a 5 step algorithm, as follows:

1. Establish a coordinate system. Astute readers will notice that there are no grids, masks, or other reference aids to establish a coordinate system in these images (except for that stray fleck of coating that in fact is slightly offset from the mirror center). Instead what I do is take advantage of the fact that the diffraction ring defines the edge of the mirror with fairly high precision. Given a minimum of 3 edge points on a circle it's straightforward to calculate the rectangular coordinates of the center and the circle's radius. I actually measure 6 to 8 points scattered around the edge and use least squares to estimate the center position and mirror radius in pixel coordinates. From independent measurements on a number of frames these estimates appear to have a typical RMS error on the order of 1 pixel, or about 1/3 mm on this mirror. The mirror's known clear diameter of 153mm then establishes a scale factor to convert to absolute coordinates. I ignore possible optical distortion in the camera lens, as well as the entirely likely possibility that the mirror itself is slightly non-circular. Measuring the coordinates of a number of edge points is the most time consuming manual chore in this process. Although the eye is fairly good at edge detection I hope to replace this manual procedure with an automated edge finding algorithm in the future.
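For readers who want to implement this step, the sketch below shows one standard way to do the fit - an algebraic least squares circle fit, which reduces to a 3x3 linear system. It illustrates the idea; it is not necessarily the exact fit my reduction code performs:

    #include <math.h>

    /* Solve the 3x3 linear system a*v = b by Gaussian elimination
     * with partial pivoting. */
    static int solve3(double a[3][3], double b[3], double v[3])
    {
        for (int col = 0; col < 3; col++) {
            int p = col;
            for (int row = col + 1; row < 3; row++)
                if (fabs(a[row][col]) > fabs(a[p][col])) p = row;
            if (fabs(a[p][col]) < 1e-12) return -1;
            for (int k = 0; k < 3; k++) {
                double t = a[col][k]; a[col][k] = a[p][k]; a[p][k] = t;
            }
            double t = b[col]; b[col] = b[p]; b[p] = t;
            for (int row = col + 1; row < 3; row++) {
                double f = a[row][col] / a[col][col];
                for (int k = col; k < 3; k++) a[row][k] -= f * a[col][k];
                b[row] -= f * b[col];
            }
        }
        for (int i = 2; i >= 0; i--) {
            v[i] = b[i];
            for (int k = i + 1; k < 3; k++) v[i] -= a[i][k] * v[k];
            v[i] /= a[i][i];
        }
        return 0;
    }

    /* Least squares circle fit: minimize the sum over edge points of
     * (x^2 + y^2 + A*x + B*y + C)^2, which is linear in A, B, C.
     * The center is (-A/2, -B/2); the radius sqrt(A^2/4 + B^2/4 - C). */
    int fit_circle(const double *x, const double *y, int n,
                   double *xc, double *yc, double *radius)
    {
        double a[3][3] = {{0}}, b[3] = {0}, v[3];
        for (int i = 0; i < n; i++) {
            double z = x[i]*x[i] + y[i]*y[i];
            a[0][0] += x[i]*x[i]; a[0][1] += x[i]*y[i]; a[0][2] += x[i];
            a[1][1] += y[i]*y[i]; a[1][2] += y[i];
            b[0] -= x[i]*z; b[1] -= y[i]*z; b[2] -= z;
        }
        a[1][0] = a[0][1]; a[2][0] = a[0][2]; a[2][1] = a[1][2];
        a[2][2] = n;
        if (n < 3 || solve3(a, b, v) != 0) return -1;
        *xc = -v[0] / 2; *yc = -v[1] / 2;
        *radius = sqrt(v[0]*v[0]/4 + v[1]*v[1]/4 - v[2]);
        return 0;
    }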

2. "Slice" the mirror into radial zones. My assumption going into this project was that ccd noise, jpeg artifacts, and other image defects would reduce the effective resolution of an image to something worse than the nominal size of a pixel. So, I divide the mirror into a number of annular slices, with that number rather smaller than the mirror radius in pixels. For now at least I am making those slices equal area a la couder, so they get narrower with increasing zonal radius. I have experimented with varying numbers of slices from 10 up to 100. In my original analysis I used 40 slices for each frame. I've since increased that to 100, and the first two test runs were re-reduced using the finer grid of slices. No significant differences in estimated mirror profiles were found in either of those two test runs.

3. For each pixel in the image, determine which slice (if any) it belongs to. I divide the mirror into left and right halves, and I only consider pixels that fall in a wedge shaped area around the horizontal axis of the mirror. The reason for the latter is that having the light source below the camera lens introduces some astigmatism into the test, but points at equal radii on any horizontal chord should have the same optical path length if the mirror itself is radially symmetric. In fact, as figures 4 and 5 show, the way I reduce the data appears to eliminate instrumental astigmatism, but I still consider only a wedge of +- 45 degrees around the horizontal axis (half the total mirror area). I've experimented a bit with adjusting this wedge angle, and again this appears to have no significant impact on the results. Keep in mind my test mirror was small, full thickness, and finished on a full size lap, so astigmatism is presumably not an issue here. If astigmatism is a concern a smaller wedge angle should be chosen.
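A sketch of the per-pixel classification, again with illustrative names; the wedge half angle is the tunable parameter mentioned above:

    #include <math.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    /* Returns +1 if the pixel falls in the right-hand wedge, -1 if in
     * the left-hand wedge, and 0 if it lies outside the wedge around
     * the horizontal axis (or at the exact center). */
    int classify_pixel(double px, double py, double xc, double yc,
                       double wedge_half_angle_deg)
    {
        double dx = px - xc, dy = py - yc;
        if (dx == 0.0 && dy == 0.0) return 0;
        double ang = atan2(fabs(dy), fabs(dx)) * 180.0 / M_PI;
        if (ang > wedge_half_angle_deg) return 0;
        return dx > 0.0 ? +1 : -1;
    }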

4. For each radial slice, I compute the mean brightness and the standard error of the mean separately for the left and right halves of the mirror. This gives me a pair of averaged brightness profiles. Figure 6 shows a representative profile constructed from the foucaultgram in figure 3. I compute the standard error of the mean for each point in the profile in anticipation of future statistical analyses of my data. In fact, though, the scatter around the mean values is small enough that error bars would be too small to print at this scale, so these profiles appear to be remarkably well defined. Now the key assumption I'm making is that the point where the profiles cross marks the zone of the mirror whose surface normal points straight back at the tester. I emphasize the word assumption because this is based on a purely geometric interpretation of the foucault test. A careful and skeptical reader has pointed out to me that this assumption needs to be questioned, and in fact my preliminary investigations indicate that wave optical effects do need to be taken into account in foucault data reduction. I discuss this a little more in Appendix 3. For the rest of the body of this note I will stick with a conventional geometric interpretation of the data.
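Computing the averaged profiles is simple bookkeeping with running sums. A sketch of the per slice statistics under the usual definitions (names are illustrative):

    #include <math.h>

    typedef struct { double sum, sumsq; long n; } bin_t;

    /* Accumulate one pixel's brightness into a (half, slice) bin. */
    void bin_add(bin_t *b, double v)
    {
        b->sum += v; b->sumsq += v * v; b->n++;
    }

    double bin_mean(const bin_t *b)
    {
        return b->n > 0 ? b->sum / b->n : 0.0;
    }

    /* Standard error of the mean: sample standard deviation / sqrt(n). */
    double bin_sem(const bin_t *b)
    {
        if (b->n < 2) return 0.0;
        double m = bin_mean(b);
        double var = (b->sumsq - b->n * m * m) / (b->n - 1);
        return var > 0.0 ? sqrt(var / b->n) : 0.0;
    }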

Figure 6

5. The final step in this part of the analysis is to compute the difference between the left and right halves and find all zeroes of the differenced profile, linearly interpolating between slices to estimate the zonal radius of each null. In principle there might be many zero crossings, although for an aspheric mirror that's not too rough you would expect only zero or one.
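A sketch of this final step, assuming the differenced profile d[k] = left[k] - right[k] has been tabulated at zone radii r[k]:

    /* Find zero crossings of the differenced profile by linear
     * interpolation between adjacent slices.  Returns the number of
     * crossings found (up to max_out); real data may want a noise
     * threshold before this step. */
    int find_crossings(const double *r, const double *d, int n,
                       double *out, int max_out)
    {
        int found = 0;
        for (int k = 0; k + 1 < n && found < max_out; k++) {
            if ((d[k] >= 0.0) != (d[k + 1] >= 0.0)) {
                double t = d[k] / (d[k] - d[k + 1]);   /* 0..1 */
                out[found++] = r[k] + t * (r[k + 1] - r[k]);
            }
        }
        return found;
    }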

With a good micrometer driven linear stage I consider the measurement of longitudinal knife edge positions not to be a significant source of error. The micrometer is graduated in 0.01 mm increments, and repeatability of positioning should be at least that good. To get an idea of the accuracy of radial measurements of the null zone I did a limited number of repeat measurements at a few knife edge positions. Figure 7 shows the results of 3 measurements taken at a single knife edge position in one of my test runs.

Figure 7

The first measurement (shown as the tan profiles) was taken in the middle of this test run. After completing the run I repositioned the knife edge and took two additional images. In the first of these the knife edge cut a little too far into the light cone (the green profiles), as evidenced by the generally lower brightness. A third image, shown in blue, was judged satisfactory. In fact the first and third profiles are nearly identical, yet there is about a 1 mm shift in the crossing points. A close comparison of the first and third foucaultgrams shows some localized slope shifts between the two readings. I suspect this may be evidence of thermal changes in the mirror over the course of the test, a possibility I find a little surprising since I had set the mirror on its test stand at least a couple of hours before these images were taken and didn't touch it during the test run.

Additional analysis suggests that this is fairly representative of the uncertainty in radial position measurements. This appears to have an insignificant effect on inferred mirror profiles, although I have not fully analyzed errors of this sort yet. In the future I plan to add a Monte Carlo analysis to my data reduction routine to perform a more complete statistical analysis of errors.

As in any foucault test, readings must be taken at a number of knife edge positions to get a survey of the mirror's surface. How many are necessary depends on how thorough a map of the surface you want. I initially did two independent test runs for this experiment. I had intended to get at least 10 measurements in each run, but as it turned out several images in each were apparently taken inside the center of curvature, and I ended up with 7 and 9 usable measurements respectively. These two runs were carried out on the same day, on different diameters of the mirror, separated by a couple of hours.

A third test run was performed approximately two months later on the same mirror using the same data collection protocol, except this time I used a much finer grid of longitudinal knife edge positions. I ended up using data from 27 frames taken at 25 different knife edge positions (two positions got two frames each). The individual frames were of rather lower image quality than in my first two test runs. I was frankly fatigued when I took these data (it was well past midnight of a long day) and took less care than previously to get the knife edge cutting in on axis. After uploading the data I also discovered that the dynamic range of the images was worse than previously (most images had about a 7 bit brightness range), probably a result of battery drain in my light source. Rather than try to pick out the higher quality data I decided to throw everything into the analysis and see what would happen. My ultimate goal is to remove subjective judgments as much as possible from data collection and analysis, and to that end I've resisted the urge to "clean up" my raw data inputs.

Figure 8 shows the results of the test runs plotted as zonal radius against knife edge offset. As usual the zero point of the offset scale (the x axis in this plot) is more or less a free variable, so I shifted each set of readings horizontally -- basically to make them look good. All of the test runs appear superficially similar and follow the expected trend for a parabola (in a moving source test the offset grows as r^2/(2R)) for at least the outer 40% of the mirror's radius (64% of its area, more or less), and all fall well below it inside that radius.

Figure 8
 

What's more important is what the tests tell us about the mirror's surface. For reducing foucault test readings I use my own implementation of the method of direct integration of the differential equation, as outlined in Jim Burrows' excellent discourse on mirror mathematics posted on the web. Thanks to Jim for helping me understand issues related to numerical integration and interpolation.
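For readers unfamiliar with the method, the sketch below gives its flavor under a purely geometric, moving source model. For a perfect paraboloid of paraxial radius R the null zone at radius r appears at knife offset h(r) = r^2/(2R) plus a free constant; a small surface error e(r) relative to the paraboloid perturbs the offset by approximately -(R^2/r)(de/dr), so the slope error can be recovered from the measured offsets and integrated numerically. This is only an illustration with my choice of sign conventions, not Jim's actual algorithm:

    /* Reduce moving source foucault data by direct integration of the
     * slope error, in a purely geometric model.  From
     *     h(r) = r^2/(2R) + C - (R^2/r) * de/dr
     * we get de/dr = (r/R^2) * (r^2/(2R) + C - h(r)), integrated here
     * by the trapezoidal rule.  C is the free zero point of the offset
     * scale; sign conventions vary from tester to tester.  r[]
     * (ascending), h[], R, and the output e[] share the same units. */
    void integrate_profile(const double *r, const double *h, int n,
                           double R, double C, double *e)
    {
        double prev_slope = 0.0, prev_r = 0.0;
        for (int k = 0; k < n; k++) {
            double slope = (r[k] / (R * R))
                         * (r[k] * r[k] / (2.0 * R) + C - h[k]);
            e[k] = (k == 0) ? 0.0
                 : e[k - 1] + 0.5 * (slope + prev_slope) * (r[k] - prev_r);
            prev_slope = slope; prev_r = r[k];
        }
    }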

The estimated surface errors for the test runs are shown in figure 9, and summary statistics are shown in Table 1. Errors are given in waves at 550 nm measured at the surface (double these values to get wavefront errors). Markers on each profile show where actual readings were taken. Maximum differences among the three test runs are denoted by the pair of dotted lines. Most of the difference in the inner part of the first two runs can be attributed to sparse data collection near the center of the mirror: the innermost measured point in the first run was at nearly 50% radius, while the second started a little outside 25%. In the third, most recent test run, 4 points were measured inside 25%. As luck would have it, the third run's estimated errors nicely split the difference between the first two. Overall the agreement among these 3 test runs is very good, especially in the outer 70% of the mirror. Some of the differences in detail seen in the parts of the mirror where substantial data were collected could be real, given the rough appearance of the mirror. All three test runs indicate that the mirror is mildly undercorrected overall, but at least marginally acceptable.

Although I'm far from an expert at star testing, this more or less agrees with my judgment of the mirror. I use a fairly large secondary with this short focal ratio mirror, so the high center is effectively masked off. Diffraction rings are about equally contrasty on both sides of focus, indicating a good edge and reasonable overall correction. I have never been satisfied with this telescope's image contrast, though, and the relatively rough surface seen in the foucaultgrams suggests one possible reason why. I guesstimate that the "dog biscuit" has an amplitude on the order of a few hundredths of a wave, and since the data reduction averages out much of that roughness the estimated RMS values shown in Table 1 could easily be low by a factor of 2 or more. The only other test of this mirror I have is the fact that it was judged good enough by my one time mirror making mentor, Chicago's Dan Joyce.

Figure 9
 
Table 1 - Summary statistics (surface errors in waves at 550 nm)

 Test run   Measurements   P-V     RMS     Strehl ratio
 t3          7             0.077   0.021   0.93
 t4          9             0.115   0.027   0.89
 t5         25             0.091   0.022   0.92

5. (Tentative) conclusions

A few tentative conclusions based on the limited data I've collected so far:

1) Meaningful quantitative data can be extracted from foucaultgrams recorded with moderately priced consumer grade digital cameras. Sufficient accuracy appears to be achievable to successfully test at least a typical moderate focal ratio telescope mirror. What I am aiming for is the ability to test mirrors in the f/3 range, and it remains to be seen whether this method can be extended to such fast mirrors.

2) Given a good positioning device, the potential accuracy of the method is limited by the spatial resolution achievable with the device used for image capture. This in turn is limited by the size of its ccd chip, the quality and focal length of the camera's optics, and processing done in the camera - especially jpeg compression. Averaging data over annular zones appears to be an effective way to deal with at least some of these limitations of digital pictures.

3) Although I haven't done so yet, it should be possible to attach meaningful error bars to summary estimates of surface quality using statistical data that can be collected in a single test run.

4) As in any foucault-like test, the more data points you collect the finer grained your map of the surface under test will be. One advantage of this approach is that since it uses no mask you can measure as many points as you have the patience to collect, or as few as you need for a quick survey of the surface during early figuring stages. One minor potential disadvantage is that it's fairly easy to overlook the inner parts of the mirror: in the first two test runs I thought I was collecting data on the inner 30% of the mirror, but the reduced data said otherwise. The obvious solution is to take frames at smaller knife edge increments, at least in the vicinity of the center of curvature.

Also like the standard foucault test, this is an essentially one dimensional test, at least as I have implemented it. A full two dimensional analysis of a foucault test is a difficult project, and the standard knife edge test is in any event most sensitive to errors along chords perpendicular to the knife edge. It is certainly possible to gain qualitative information about the overall state of a mirror, however. Large scale surface roughness is easily detectable, for example, as is all too obvious in Figure 3.

Appendix 1. Notes on mechanical setup

Long time ATM list member Kevin McCarthy has pointed out to me that systematic errors in the measurement of longitudinal knife edge position will result if the optical axis of the camera is not accurately parallel to the mechanical axis of the micrometer driving the stage. The apparatus shown in Figures 1 and 2 offers too many adjustments to guarantee parallelism. This problem should be curable simply by attaching the camera, light source, and knife edge directly to the stage. I removed and repositioned the camera between each of the test runs in this analysis, and since they are all consistent I either got lucky three times in a row or made the same error each time.

Appendix 2. Comparison with Suiter

H.R. Suiter (1999) has published a version of a "Digital Knife-Edge Test" in Amateur Telescope Making Journal #13. I developed my approach independently of Suiter's work and have only recently had a chance to read his article. Most importantly, our approaches are fundamentally similar and should produce the same results, within reasonable uncertainties, given the same input data. Both of us try to find the points where brightness profiles match on the left and right halves of foucault shadowgrams. Both of us perform various image processing tricks intended to smooth the data somewhat while retaining essential radial brightness information. Suiter's approach is better suited to visually oriented, interactive data reduction; mine was developed with the intention of automating test reduction as fully as possible. My approach would be difficult and extremely time consuming to implement in a general purpose image processing program like Photoshop, but is straightforward to program in a high level programming language (I use C). These are differences in detail though -- the basic principles underlying our approaches are the same.

One reader has commented on the apparent similarity of the flip and difference procedure I used to produce Figures 4 and 5 above to the invert and multiply procedure that Suiter cautions against in his article. The problem with the latter mostly lies in the fact that Photoshop and many other general purpose image processing programs store and manipulate brightness data as 8 bit integers (although limited 16 bit per channel capabilities are present in recent versions). Many manipulations in Photoshop can therefore lose brightness information, manifesting as a loss of dynamic range or posterization of the manipulated image. The Photoshop multiply operation is especially prone to this: the product of two 8 bit integers can occupy 16 bits, which Photoshop must squeeze back into 8. The difference command, by contrast, takes the absolute value of the difference of two 8 bit integers, so only one bit (the sign) is lost. Losing the sign bit actually helps illustrate the effect we're looking for - the manipulated image ends up dark where the left and right halves of the mirror are the same brightness, and brighter everywhere else.

In any event the image processing I actually perform is done completely independently of Photoshop and I use double precision floating point arithmetic for all operations, so the full dynamic range of the original image is preserved. The Photoshop processing that produced Figures 4 and 5 was intended purely as a visual illustration of the general approach.

Appendix 3. Preliminary comments on physical optics and the foucault test

Most amateur telescope makers understand that the bright ring seen at the edge of a decent quality mirror under the knife edge test is a diffraction effect, i.e. a manifestation of the wave nature of light. Some may know of other diffraction effects that can appear, especially when a narrow slit or small pinhole is used. Few ATMs seem to be aware that the overall appearance of a mirror in a knife edge test is affected by wave optics in ways that cannot be reconciled with a purely geometric interpretation. At the urging of Nils Olof Carlin I have taken the first steps towards a closer look at this. This is far from a complete analysis, but should give some idea of the magnitude of effects to be expected.

The diffraction theory of the foucault test was worked out by a number of investigators in the first half of the 20th century. The most complete analytical exposition of the theory that I've been able to find is in Linfoot (1958). Because his work was done before the era of modern computing it is long on analysis and short on numerical results, and those are mostly derived for small error approximations. My literature search has turned up only one modern attempt to apply a rigorously wave optical approach to reducing real foucault test data (Wilson 1975). Wilson applied his method to a nearly perfect spherical mirror, and used a linearized form of Linfoot's theory appropriate in the small error limit. As such his work is not directly relevant to the typical amateur situation in which an aspheric mirror that might have several waves of total spherical aberration is being tested at the center of curvature. It could be useful though for ATMs interested in developing a quantitative version of a knife edge test for use in null test setups.

There are a couple of modern computational tools for simulating foucault tests that I'm aware of. In the freeware/shareware domain Jim Burrows' DIFFRACT (available at http://www.halcyon.com/burrjaw/atm/odyframe.htm) simulates a variety of commonly used geometric tests. A similar program from David Lewis (http://www.eecg.toronto.edu/~lewis/ ) is primarily intended to simulate star tests, but also includes routines for foucault and caustic test simulations. The commercial optical design program ZEMAX is also reported to include a foucault test simulation in its most recent version, but I have not used it. DIFFRACT is a handy tool because it includes an interface to Jim's test reduction program SIXTESTS, allowing the appearance of mirrors with arbitrary radially symmetric defects to be simulated. The following comments are based on a combination of Linfoot's work and some numerical simulations using DIFFRACT.

The heart of a geometric analysis of foucault is the visual metaphor of grazing light incident on a contoured surface - the classic example being the characteristic "donut" shadow pattern of a paraboloidal mirror tested at the 70% zone. One specific prediction of Linfoot's theory is that this metaphor breaks down for a sufficiently weakly aspheric mirror - instead of a low contrast donut what you see looks like an overall "sloped" surface with a broad, slightly flattened region in the center.

As a corollary, suppose you take an arbitrary aspheric mirror and place the knife edge just outside the center of curvature. Geometric theory predicts that you should see a small dimple in the center with the "crest" of the hill at some small distance from the center - in a moving source test the crest is interpreted as marking the normal to the surface. Wave optics says that for small values of longitudinal aberration no such apparently contoured surface will be seen. This appears to be a very general result, and I suspect it's one that most ATMs have encountered and written off to the limitations of their eyes. The knife edge must be moved back some distance before the classic donut shape begins to break out, and initially the crest of that donut will fall well inside the position of the actual normal to the surface. As the knife is moved farther back the crest approaches its predicted geometric position, at least until the edge of the mirror is approached. The consequence is that when data are reduced using a strictly geometric interpretation of foucault (as I have done) the center of the mirror will appear higher than it really is.

Linfoot also shows that small zonal errors on an otherwise true (spherical) mirror will appear slightly smeared and displaced from their true position in a geometric analysis. This prediction appears to hold for small zonal errors on a strongly aspheric surface as well.

To try to get a handle on the magnitude of wave optical effects in a real mirror I tried the following numerical experiment using DIFFRACT. I fed it the mirror profile inferred from test run t5 described above and computed shadowgrams at a number of knife edge positions. Each simulated shadowgram was captured as a screenshot and fed through my data reduction algorithm to find the simulated "null" radius, as described in the body of this note. The resulting readings were then input to my foucault data reduction program. If geometric and wave optics (as simulated in DIFFRACT) were consistent, the output of this round trip would match the input, within small errors due to the limited sample size. The result of the exercise is shown in Figure 10.

Figure 10

The simulated surface exhibits most of the features predicted qualitatively in the previous paragraphs. The center of the mirror appears high relative to the input surface, and some of the fine structure in the zonal defects around the 60-90% zones is lost (this is partly due to the sparser sampling as well). Except at the very edge though the difference between the simulation and the input profile is remarkably close to the range of random errors shown in Figure 9.

This exercise doesn't tell us what we really want to know. I've treated the mirror surface inferred from one test run as "real" and asked what it looks like under the knife edge test. What I really want to find is the mirror surface that would produce the observed foucault test results from that test run, with wave optical effects properly included. This is the inverse of the problem I solved to create Figure 10, and I have not even begun to tackle it. A plausible guess is that the difference between the unknown real surface and the one inferred from a geometric solution is about the same magnitude as, and more or less in the opposite direction to, the difference depicted in Figure 10. At this point that is only a guess, however.

At this juncture I see a couple of alternatives. Since the apparent errors that arise from ignoring diffraction effects seem to be of the same magnitude as purely random errors, a case could be made for ignoring them altogether. I find this unsatisfying. Purely random errors can always be reduced by collecting more and/or higher quality data, and in any event shouldn't cause the mirror maker to make systematic errors in figuring or in deciding when to stop working. Ignoring diffraction effects leads to systematic errors - for example the tendency to estimate a high center - that could potentially lead the mirror maker astray. In practice few ATMs, and even fewer pros, would try to correct a mirror surface that differed by no more than a few hundredths of a wave from its target, but there's no guarantee that such effects might not be larger in other situations.

A second alternative is just to bite the bullet and perform a properly wave optical analysis. The theory of the foucault test is perfectly well understood, but even with powerful PCs on every desk the computational task of determining the surface that produces a given set of data is going to be a nontrivial one. One thing I find encouraging here is that the geometric solution appears to be a very good one, which suggests that an iterative approach using the geometric solution as a starting guess should converge quickly. Solving a one-dimensional model will also help reduce the computational burden.

Of course a third alternative is to abandon hope of turning foucault into a true quantitative test and try something else. Of the available well known alternatives, the Hartmann test may be the most promising.
 

Acknowledgements

I received helpful comments on the first draft of this note from Kevin McCarthy, Dick Suiter, and Nils Olof Carlin. Jim Burrows has done a very thorough analysis of all the major variants of geometric tests, and has generously shared his expertise in numerical analysis.

References

Linfoot, E.H. 1958, Recent Advances in Optics, Oxford Univ. Press, ch. 2.

Suiter, H.R. 1999, "Digital Knife-Edge Test Reduction," Amateur Telescope Making Journal, #13, p. 10.

Wilson, R. Gale, 1975, "Wavefront-error evaluation by mathematical analysis of experimental Foucault-test data," Applied Optics, vol. 14, p. 2286.
 
 

Michael Peck
mpeck1@ix.netcom.com

6 March 2000