Processing of Keratoscopic Images Using Local Spatial Phase
Technical Field
This invention relates to instruments for measuring surface topography, and more particularly, to such
instruments known as keratoscopes.
Background of the Invention
Gersten et al., US patent 4,863,260, discloses a computer-controlled corneal mapping system for providing quantitative topographic information about a corneal surface illuminated with a structured light pattern, such as a series of concentric light bands. Because the edges of the light pattern bands must be ascertained, a considerable amount of image processing is required, including various curve-fitting algorithms, in order to reconstruct the corneal topography.
In the aforementioned patent, intersecting laser beams were employed to locate the apex of the cornea accurately. These light beams, however, were rather bright and could give rise to extraneous glare in the image. Therefore, to prevent the glare from swamping useful data in the image, the locating laser light beams were shut off just before the image to be analyzed was acquired. This required that the patient's head be maintained motionless between the time the locating light beams were turned off and the time that the image was acquired for processing. Accordingly, it would be desirable to ascertain more directly and accurately the powers of refraction exhibited over a corneal surface and to allow the image to be processed without requiring the locating light beams to be turned off. In addition, it would be advantageous to allow the processing to compensate for any inadvertent mislocation of the corneal surface.
Takeda et al., in articles entitled "Fourier-transform Method of Fringe-pattern Analysis for Computer-based Topography and Interferometry", published in J. Opt. Soc. Am., vol. 72, no. 1, January 1982, at pp. 156-160, and "Fourier Transform Profilometry for the Automatic Measurement of 3-D Object Shape", Applied Optics 22:3977-82, December 15, 1983, describe a method of Fourier transform processing of two-dimensional images of three-dimensional objects in which local spatial phase is employed so that it is no longer necessary to detect all of the light band edges. However, the Takeda articles do not disclose a method which is directly applicable to mapping the specular corneal surface.
Summary of the Invention
In accordance with my invention, the topography of a three-dimensional specular surface, such as a cornea, may accurately be measured by processing a two-dimensional image of the surface illuminated with a quasi-periodic pattern to obtain continuous values of local spatial phase with respect to distance perpendicular to the optical axis. Preliminarily, a number of images of spheres whose dioptric powers are known are subjected to Fourier transform processing of the type disclosed in the above-mentioned Takeda articles. The Fourier spectrum is windowed and bandpass filtered to suppress all side lobes and negative frequencies except for the fundamental spatial frequency. The inverse transform is taken, and then the inverse tangent of the quotient of the imaginary and real portions of the inverse transform is computed to obtain the local spatial phase. At this point the local spatial phase may be "unwrapped" and the unwrapped phase differentiated to obtain the instantaneous local spatial frequencies at discrete pixels. These local spatial frequencies may then be mapped to the known diopters of the spheres by a process of curve fitting.
Alternatively, and preferably, however, the local spatial phase is "unwrapped" to obtain continuous values of local spatial phase with distance from the optical axis of the image. The distances from the optical axis at which predetermined values of the local spatial phase occur correspond to the positions of the illuminated rings of the pattern. The distances obtained from processing images of surfaces whose dioptric powers are known are compared and tabulated with corresponding distances in the original illuminated object to determine the local magnification produced by each surface. Images of surfaces having unknown dioptric powers may then be mapped to diopters by consulting the tabulation. Accuracy beyond that determined by the granularity of pixel position is obtained.
In accordance with a further aspect of my invention, the accuracy of topographic analysis is enhanced by allowing the locating light beams to remain on while the image is being acquired and by eliminating the effect of the glare during image processing. Mislocation of the apex of the surface in the x, y, or z direction is detected by determining the location of the light beams in the image, and any such mislocation, such as may arise from inadvertent movement of the subject, is compensated for.
Description of the Drawing
The foregoing and other objects of my invention may be better understood from the following specification and drawing, in which:
Fig. 1 is a sectional view showing a quasi-spherical specular surface properly positioned in the conical Placido disc apparatus of prior art patent 4,863,260, while Figs. 1A and 1B show the surface positioned too closely and too remotely, respectively;
Fig. 2A shows one of the rings reflected in the image of the quasi-spherical surface, while Fig. 2B shows exaggerated glare spots in the image;
Fig. 3A is a plot of the intensity of surface illumination versus radial distance for a quasi-periodic pattern reflected from the specular surface; Figs. 3B and 3C show the corresponding wrapped and unwrapped local spatial phase of the processed image; Figs. 3D and 3E show the unfiltered and filtered Fourier spectra of a processed image;
Fig. 4 is a flow chart for processing the scanned image to relate local spatial frequency at regularly-spaced pixel positions to diopters of refraction; Figs. 5A and 5B are flow charts for procedures involved in processing the image of a specular surface to remove the effects of glare and improper positioning of the surface;
Figs. 6A and 6B are flow charts for processing the scanned image to relate distances obtained from the
unwrapped local spatial phase to diopters of refraction; and
Figs. 7 and 8 are photographs of a human cornea shown before and after processing to remove the effects of glare.
General Description
Referring now to Fig. 1, there is shown a section through the prior art illuminated Placido disc cone device 10 described in U.S. patent 4,863,260 and in US patent 5,416,539. Cone 10 causes a quasi-periodic mire pattern to be reflected from a quasi-spherical specular surface 6, such as a human cornea or polished steel ball, positioned at its left-hand side. Cone 10 has a hollow, substantially cylindrical bore 11. A light source (not shown) is positioned at the right-hand side base of cone 10. A series of opaque bands 9 divides the otherwise transparent bore 11 into a series of illuminated rings 13, of which only rings 13-1 and 13-2, spaced the distance S apart, are individually labelled in Fig. 1. In the illustrated embodiment, each of rings 13 is of the same diameter h. The specular surface 6 reflects a virtual image of the illuminated rings 13. On a perfectly spherical specular surface, each ring would be reflected, as shown in Fig. 2A, as a circle 13'. In Fig. 2B, greatly enlarged sectors of three illustrative virtual image rings, 13-1', 13-2' and 13-3', are shown. For simplicity, however, Fig. 1 shows an edge view of only one of these virtual image rings, 13'.
Ring 13' appears to lie at some distance beneath the
specular surface 6.
As explained, inter alia, in the article "Keratometry" by Janet Stone in the book Contact Lens Practice, a ray incident upon a spherical mirror aimed at the focus, f, will be reflected parallel to the optical axis and will produce a virtual image which appears to lie beneath the surface of the sphere, while a ray directed perpendicular to the surface toward the surface's center of curvature will be reflected back upon itself. To locate a point on the plane of the virtual image, the reflected ray should be projected
parallel to the optical axis until it intersects the ray directed to the center of the sphere. This is illustrated in Fig. 1 with respect to two rays 13f and 13c from
illuminated object ring 13-1 of cone 10. Ray 13c is
directed to the center of curvature, C, of spherical surface 6 while ray 13f is directed to the focus, (which lies a distance f beneath the surface). The plane of the virtual image 13' of object ring 13-1 is located by projecting
reflected ray 13f parallel to the optical axis until it strikes the projection of ray 13c. The virtual image ring 13' is defined by the intersection of light ray 13c, which is directed perpendicular to the specular surface 6, i.e., toward its center of curvature, C, and the backward
extension of ray 13f, which passes through the focus of specular surface 6, parallel to optical axis A-C.
As explained in the aforementioned '260 patent, the proper positioning of specular surface 6 is indicated when intersecting light beams L1 and L2 (advantageously laser beams) converge at a single point A at the apex of the
specular surface 6. When surface 6 is properly positioned so that point A is on the optical (Z) axis which runs through the center of bore 11 of cone 10, point A will lie in the center of the field of camera 41 and the distance between point A and the camera 41 is accurately known. In addition, when surface 6 is properly positioned, the
reflection 17' of fixation light 17 from pellicle 16 will also appear at point A. If, however, surface 6 is
positioned too far into bore 11 (Fig. 1A), or too far out of bore 11 (Fig. 1B), light beam L1 will not strike surface 6 at point A but at some distance D above or below point A, thereby introducing an axial error ΔZ in the location of point A. The point where the light beam L1 strikes surface 6 produces a local glare spot GS, as shown in the greatly enlarged view Fig. 2B, which shows glare spot GS obscuring a portion of one or more rings such as 13-2' and 13-3'. A photograph of an eye where glare spots obscure some of the reflected rings is shown in Fig. 7. As will hereinafter be explained, the mislocation of glare spot GS from point A, as well as its ring-obscuring effect, are compensated for by my improved image processing method, the results of which are shown in Fig. 8. As disclosed in my above-mentioned application, the image of surface 6 is acquired by electronic camera 41,
through lens 40. Processor 42 scans the image acquired by camera 41 in a direction orthogonal to the illuminated pattern that is reflected from surface 6, e.g., an
illuminated ring pattern is scanned radially. However, if bore 11 of cone 10 were provided with illuminated
longitudinal stripes (not shown) parallel to the axis of the cylindrical bore 11, such longitudinal stripes would cause a pattern of radial lines to be reflected from quasi-spherical surface 6 and orthogonal scanning of such radial lines would be in a circular direction.
Fig. 3A shows the waveform of video intensity versus orthogonal scanning distance in the image acquired by processor 42. The average frequency of the waveform is determined by the average spacing (see, for example,
illustrative spacing "S", Fig. 1) of the illuminated rings of the Placido disc source of the light pattern. On a surface having surface imperfections, the instantaneous spatial frequency of the pattern will vary as surface imperfections distort the local light pattern.
My co-pending application taught that, since the illuminated spatial pattern is quasi-periodic, it can be decomposed into its Fourier series:
I(xr) = a0/2 + Σ [ai cos(iω0xr) + bi sin(iω0xr)] (1)
In the above expression, bi = 0 if the pattern is symmetrical about the origin.
As disclosed in the aforementioned co-pending
application, the frequency spectrum of the video image scanned orthogonally to the quasi-periodic pattern (i.e., in the direction xr), may be ascertained by taking a discrete Fourier transform of the scanned image, using an analytic filter to suppress all side lobes and negative spatial
frequencies in the Fourier spectrum except for the narrow band of frequencies adjoining the fundamental spatial frequency, taking the inverse transform and finding the inverse tangent of the quotient of the imaginary and real portions of the inverse transform to obtain the
instantaneous spatial phase. The Fourier spectrum of the two-dimensional image is shown in Fig. 3D. The Fourier spectrum of the image after processing to suppress all side lobes and negative frequencies is shown in Fig. 3E. The real component, R(x), of the inverse transform has the form:
R(x) = A(x) cos(ω0x + θ(x)) (2)
while the imaginary component, I(x), has the form:
I(x) = A(x) sin(ω0x + θ(x)) (3)
The inverse tangent of the quotient of the imaginary component divided by the real component yields the instantaneous spatial phase Φ as a function of the radial scanning distance, xr:
Φ(xr) = arctan(I(xr) / R(xr)) = ω0xr + θ(xr) (4)
The instantaneous phase is a saw-tooth or ramp-like function, shown in Fig. 3B, having discontinuities every 2π radians. The derivative of the instantaneous phase waveform is the instantaneous frequency (which also has discontinuities every 2π radians):
f(xr) = dΦ(xr)/dxr (5)
The discontinuities may be eliminated by a process known as "phase unwrapping" which employs thresholding and numerical interpolation, as described, for example, in the article by Mitsuo Takeda, Hideki Ina and Seiji Kobayashi entitled "Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry", published in J. Opt. Soc. Am., vol. 72, no. 1, January 1982, at pp. 156-160; see also the article by Jose M. Tribolet entitled "A New Phase Unwrapping Algorithm", published in the IEEE Transactions on Acoustics, Speech, and Signal Processing, Vol. ASSP-25, No. 2, April 1977, pp. 170-177.
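By way of illustration only, the following self-contained Pascal sketch shows one simple threshold-based form of phase unwrapping of the kind referred to above; the array length, the fabricated wrapped-phase ramp, and the variable names are hypothetical and do not form part of the disclosed method:

program unwrap_demo;
const
  npix = 8;
var
  wrapped, unwrapped: array[1..npix] of real;
  offset, delta: real;
  i: integer;
begin
  { fabricate a wrapped phase ramp for demonstration }
  for i := 1 to npix do
    wrapped[i] := (0.9 * i) - (2.0 * pi) * trunc((0.9 * i) / (2.0 * pi));
  offset := 0.0;
  unwrapped[1] := wrapped[1];
  for i := 2 to npix do
  begin
    delta := wrapped[i] - wrapped[i - 1];
    { a jump exceeding pi marks a 2*pi discontinuity }
    if delta < -pi then offset := offset + 2.0 * pi
    else if delta > pi then offset := offset - 2.0 * pi;
    unwrapped[i] := wrapped[i] + offset;
  end;
  for i := 1 to npix do
    writeln(unwrapped[i]:8:4);
end.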
As shown in the flow chart Fig. 4, and described in the Appendix hereto, the unwrapped phase may be differentiated to obtain the instantaneous local spatial frequencies which may then be mapped to diopters by a process of curve
fitting. Preliminarily, the images of a plurality of specular surfaces having known dioptric powers of refraction must be processed to obtain the local spatial frequencies exhibited by the illuminated rings in each image. The local spatial frequencies exhibited by the illuminated pattern reflected from each of the known surfaces are then mapped to their known dioptric powers by using a curve fitting technique such as least mean squares curve fitting, in which the coefficients of a polynomial relating these frequencies to diopters are determined, e.g.:
D = g(f) = A0 + A1f + A2f² + ... + Anfⁿ (6)
Thereafter, as shown in the flow chart Fig. 4, the local spatial frequencies obtained from processing the image of an unknown corneal surface may be employed with the determined polynomial to ascertain the corresponding diopters of that surface.
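For purposes of illustration, once its coefficients have been determined by curve fitting, the polynomial of equation (6) may be evaluated at a measured local spatial frequency as in the following Pascal sketch; the polynomial order and the coefficient values shown are fabricated assumptions, not values from the disclosure:

program freq_to_diopters;
const
  n = 3; { illustrative polynomial order }
var
  a: array[0..n] of real; { coefficients from the calibration fit }
  f, d: real;
  i: integer;
begin
  { hypothetical coefficients; real values would come from least
    squares fitting over spheres of known dioptric power }
  a[0] := 10.0; a[1] := 2.5; a[2] := -0.1; a[3] := 0.002;
  f := 4.0; { local spatial frequency at one pixel }
  d := a[n];
  for i := n - 1 downto 0 do
    d := d * f + a[i]; { Horner evaluation of equation (6) }
  writeln('diopters = ', d:6:2);
end.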
In accordance with my preferred method, however, advantage is taken of the fact that the local spatial phase exhibits values that are a multiple of 2π at the position of each successive one of the quasi-periodic illuminated rings in the processed image, as may be observed by comparing the unwrapped phase, Fig. 3C, with Fig. 3A. Fig. 3A shows the video intensity I(x) encountered in the x direction of scanning (i.e., orthogonally to the quasi-periodic pattern). As shown in Fig. 3C, the first and second 2π points in the processed image occur at the distances xr1 and xr2 from the center of scanning, e.g., from the optical axis.
The radial scanning distance at each 2π point is the radius of a ring in the image, and twice this radius is the diameter h' of the ring. Referring again to Fig. 1, it is seen that the diameter of the illustrative illuminated ring 13-1 on the bore 11 of Placido disc cone 10 is h. The ratio h/h' of the size of the original object to the size of its reflected image is the magnification M produced by the specular surface 6. As explained in the above-mentioned article by Janet Stone, the relationship between magnification M and diopters δ is δ = M/(2d), where d is the distance along the optic axis between a point on the plane of the object and its image point. In the illustrative apparatus of Fig. 1, this distance is approximately constant for each ring.
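By way of illustration only, the following Pascal sketch converts one ring position to dioptric power, assuming M = h/h' with h' equal to twice the radial distance of a 2π point, the relationship δ = M/(2d) discussed above, and the keratometric relation D = k/R discussed further below; every numerical value, including the keratometric constant, is a fabricated assumption:

program ring_power;
const
  k = 337.5; { keratometric constant giving diopters from R in mm }
var
  h, xr, hprime, m, d, r: real;
begin
  h := 60.0;  { object ring diameter h on the bore, mm (illustrative) }
  xr := 1.5;  { radial distance of a 2*pi phase point in the image, mm }
  d := 80.0;  { axial object-to-image distance, mm (illustrative) }
  hprime := 2.0 * xr;  { image ring diameter h' }
  m := h / hprime;     { magnification M = h/h' as defined in the text }
  r := 2.0 * d / m;    { radius of curvature, since delta = M/(2d) = 1/R }
  writeln('power = ', k / r:6:2, ' diopters');
end.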
Reference to Fig. 3C shows that the unwrapped local spatial phase gives a continuous series of values intermediate each set of 2π points. In the illustrative embodiment where cone 10 has a cylindrical bore 11, the dimension h is constant. Each 2π point corresponds to a ring in the image, but the dimension h' of each such image ring is determined by the local magnification of surface 6 at the point. Intermediate the 2π points, the intermediate scanning distances correspond to "imaginary" intermediate rings in the image. However, since the corresponding "imaginary" intermediate ring in the cylindrical bore 11 of cone 10 has the constant dimension h, the magnification corresponding to each of these intermediate spatial phase points may be ascertained simply by dividing the corresponding intermediate scanning distance "xr" by the constant dimension h, without need of any curve fitting. The actual processing steps of my preferred method are illustrated in the flow chart diagram, Fig. 5:
Step 501, entitled "load matched filter", loads an image of a paradigm laser spot GSf, which is used as a matched filter. The paradigm laser spot is obtained by averaging the values obtained from the appearance of the laser spot on a plurality of eyes of different sizes and colors.
Step 502, labelled "compute RMS filter", computes the RMS value of the paradigm laser spot GSf by taking the square root of the sum of the squared values of the video intensity found over the area of GSf.
Step 503, labelled "find laser spot", compares the video intensity I(x) of the illuminated surface on a pixel-by-pixel basis with the RMS value of spot GSf that was obtained in step 502. A laser spot is identified when the video intensity I(x) of a scanned spot exceeds the RMS value of the spot, calculated as follows:
In the above expression, L is the pixel-length of the filter data and OFFSET is the offset into the video data. The video data is convolved with the filter data for various values of OFFSET. When the expression reaches a maximum that exceeds a predetermined threshold, the location of the laser spot GS is detected. The point at which the laser spot is found is designated lasergs. The distance D, expressed in millimeters, is obtained from the location lasergs as
where Xc is the maximum number of pixels ("pels") in the scanned line.
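A minimal Pascal sketch of the matched-filter search of step 503 follows, assuming (as the text indicates) that the video line is correlated with the paradigm-spot data at successive values of OFFSET and the maximum compared against a predetermined threshold; the data values, array sizes, and threshold are fabricated for illustration:

program find_spot;
const
  flen = 4;           { L, the pixel-length of the filter data }
  xc = 16;            { Xc, pixels in the scanned line }
  threshold = 400.0;  { predetermined detection threshold (fabricated) }
var
  filt: array[0..flen - 1] of real;
  video: array[0..xc - 1] of real;
  i, offset, lasergs: integer;
  sum, best: real;
begin
  for i := 0 to flen - 1 do filt[i] := 10.0;  { flat paradigm spot }
  for i := 0 to xc - 1 do video[i] := 1.0;    { dim background }
  for i := 9 to 12 do video[i] := 15.0;       { bright glare region }
  best := 0.0;
  lasergs := -1;
  for offset := 0 to xc - flen do
  begin
    sum := 0.0;
    for i := 0 to flen - 1 do
      sum := sum + video[offset + i] * filt[i]; { correlation at OFFSET }
    if sum > best then
    begin
      best := sum;
      lasergs := offset;
    end;
  end;
  if best > threshold then
    writeln('laser spot found at pixel ', lasergs)
  else
    writeln('no laser spot found');
end.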
Step 504, labelled "clear laser spot 5", calls the procedure detailed in steps 510 through 519 which clears the actual laser spot GS that is found in the scanned image.
Step 505, labelled "compute axial misalignment", computes the axial misalignment of point A after laser spot GS has been cleared from the image being processed. As shown in Fig. 1, laser beam L1 travels through a tunnel in cone 10 which is at a fixed angle, illustratively 38 degrees, to the optical axis, Z. Should surface 6 be inserted too far into the bore of cone 10, laser beam L1, instead of striking surface 6 at point A, will strike the surface at some distance above point A, such as at distance "D". Similarly, if surface 6 is not inserted far enough into the bore, laser beam L1 will strike the surface at some distance "D" below point A. The depth error, ΔZ, measured along the optical axis, in the positioning of point A is calculated by the following:
where Ravg is the average radius of the specular surface, e.g., 7.85 mm for the average human cornea.
The above equation gives the depth error for the apex of surface 6 not being at point A, i.e., not at Z = 0. The error in diopters caused by the depth error is compensated for by the following equation:
Dc = Dm + A + B * ΔZ (10)
where ΔZ is obtained from equation (9) above. The constants A and B are computed by processing a series of images of steel balls of known dioptric power, each of which has deliberately been moved a known increment ΔZ away from point A, and employing the least squares method to relate the measured diopters to the known diopters of each such surface.
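The determination of the constants A and B of equation (10) by the least squares method may be sketched, by way of illustration only, as follows; the calibration displacements and diopter differences shown are fabricated, and a real calibration would use the measured and known powers of the steel-ball images described above:

program depth_calibration;
const
  n = 4;
var
  dz, err: array[1..n] of real;
  sx, sy, sxx, sxy, a, b: real;
  i: integer;
begin
  { hypothetical calibration data: axial displacement (mm) and the
    difference between known and measured diopters at that offset }
  dz[1] := -0.2; dz[2] := -0.1; dz[3] := 0.1; dz[4] := 0.2;
  err[1] := -0.45; err[2] := -0.22; err[3] := 0.24; err[4] := 0.46;
  sx := 0; sy := 0; sxx := 0; sxy := 0;
  for i := 1 to n do
  begin
    sx := sx + dz[i];
    sy := sy + err[i];
    sxx := sxx + dz[i] * dz[i];
    sxy := sxy + dz[i] * err[i];
  end;
  { ordinary least squares line err = A + B * dZ }
  b := (n * sxy - sx * sy) / (n * sxx - sx * sx);
  a := (sy - b * sx) / n;
  writeln('A = ', a:7:4, '  B = ', b:7:4);
  { the corrected power then follows equation (10): Dc := Dm + a + b * dZ }
end.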
Correction for Lateral Misalignment
In addition to correcting for axial misalignment, i.e., a displacement along the optical or Z-axis, step 505 advantageously may also correct for any lateral misalignment, that is, a displacement of the apex of the cornea of the patient's eye from the assumed position, point "A" (Fig. 1), in the X or Y direction, i.e., transverse to the Z axis of the instrument.
Where the apex of the patient's cornea is properly positioned at point A, the origin for the radius of curvature, R, is at point (0,0,0) and the coordinates of point A are (x, y, z). However, if it be assumed that the apex of the patient's cornea, instead of being properly positioned at point A, is displaced in the X direction by the amount Δx, the coordinates of the apex, now at point A', are (x+Δx, y, z). The lateral misalignment will affect the measurement of the radius of curvature of the cornea so that the apparent, or estimated, radius of curvature, R', will be a vector extending from the origin (0,0,0) to the point A'(x+Δx, y, z) instead of to the point A(x, y, z).
The equations for the x, y, and z coordinates of point A in terms of the coordinates of point A' and of the actual radius of curvature of the patient's cornea R, versus the apparent radius of curvature, R', are given in (11) through (14) below, where θ is the angle of the projection of R on the X, Z plane and Φ is the angle of elevation of R.
Equation (14) can be simplified by resorting to a Taylor series expansion and neglecting all terms after the first two. Accordingly:
It will be recalled that the diopters of refraction, D, are equal to k/R, where k is the change in the index of refraction between air and the cornea. The counterpart of equation (14) for dioptric power is, accordingly:
Equation (16) can similarly be simplified to yield:
where D' = k/R'.
An exemplary software listing, written in Pascal, for implementing equation (18), the correction for lateral misalignment in the X direction, is given below. The correction for misalignment in the Y direction is similar.
The processing of the image acquired by camera 41, Fig. 1, depends on ring position. In the illustrative embodiment, which uses a cone device 10 of the type shown in US patent 5,416,539, the first five (innermost) rings reflected upon the patient's cornea are produced by an illuminated disc (not shown in Fig. 1) that is placed at the right-hand end of hollow bore 11. Because of the physically different manner in which the rings are produced in the cone device, a slightly different equation is employed for processing information from these rings (if iring < 5) than for the other rings (else).
{correct for x errors}
if (valid_z) and (abs(dx) > 0) then
  if iring < 5 then
    PowerPtr^[iring][jphi] := PowerPtr^[iring][jphi] *
      ((1 - 0.025 * sin((iring / ring_total_for_cone) * pi) * dx *
        (CosinesPtr^[jphi])) + 0.015
       * sin((iring / ring_total_for_cone) * 2 * pi) * dx *
        (CosinesPtr^[jphi]))
  else
    PowerPtr^[iring][jphi] := (KMetricIndex / r_c) *
      (1 - 0.025 * sin((iring / ring_total_for_cone) * pi) * dx *
       (CosinesPtr^[jphi]));

In the above listing, PowerPtr is an array containing the observed dioptric powers of the map from the Fourier processing of the two-dimensional image, indexed by current ring number, iring, and current angle, jphi (both integer numbers). The argument of the sinusoidal function, sin((iring/ring_total_for_cone) * pi), is determined by the fraction of the current ring number, iring, divided by the maximum number of rings, ring_total_for_cone. The quantity dx is the displacement Δx in the X direction of the point "A", as measured from the departure of the position of the fixation light 17', Fig. 1, from the center of the frame buffer of processor 42 storing the digitized information acquired by camera 41. In addition, the quantities "sin" and "cos" are expressed in a 360-degree system, while the quantities "sines" and "cosines" are expressed in a 256-degree system, since only 256 radial scans are employed in the exemplary embodiment. KMetricIndex / r_c is the dioptric power at the radius r_c.
Step 506, labelled "find iris laser spot 2", examines the video intensity values in the region of point A (see Fig. 1) of the scanned image. When video intensity values exceed the predetermined value expected for the image 17' of the fixation light 17 (see Fig. 1), Step 507, labelled "clear laser spot 8", is executed. This procedure is detailed in steps 530 through 539.
Clear Laser Spot 5 (Steps 510-519):
This procedure clears the laser spot GS from the image being processed.
Step 511 initializes the scanning angle θ for the acquisition of video data I(x) along the scan line.
Step 512 loads the video data I(x) obtained along the scan line.
Step 513 performs a fast Fourier transform (FFT) of the video data along the scan line.
Step 514 finds the peak value of the Fourier transform.
Step 515 filters the signal around the peak value.
Step 516 scales the FFT data to ensure that the image being placed on the screen is neither too light nor too dark.
Step 517 performs an inverse FFT.
Step 518 stores the results of processing the scan line data.
Step 519 increments the scan angle until the maximum value is reached. Steps 511 through 519 are then performed again until, illustratively, 256 radial scans are performed.
Clear Laser Spot 8 (Steps 530-539):
Step 531 initializes the scan radius to STATRAD, the starting radius of the iris, so that the laser spot can be found in this area. As described in patent 5,214,456, issued May 25, 1993, when radially scanning outwardly from the center of the cornea (a location in the area of the pupil) toward the limbus, video intensity is lowest in the area of the pupil and increases rapidly in the area toward the iris.
Steps 532 through 535, 536 and 538 are similar to steps 512 through 515, 517 and 518, except that the data being operated upon is data pertaining to the iris portion of the eye.
Step 537, labelled "DC restoration of image", restores the DC level of the video signal in the area of laser spot GS, which is much brighter than the DC levels elsewhere in the image, to the DC level existing in the adjacent sectors A-1 and B-1 (see Fig. 2B).
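The DC restoration of step 537 may be sketched minimally as follows, under the assumption that the glare-spot region is simply shifted to the average level of the adjacent sectors; the scan-line values and the span of the spot are fabricated for illustration:

program dc_restore;
const
  n = 12;
var
  scan: array[1..n] of real;
  i, s0, s1: integer;
  dcspot, dcneigh: real;
begin
  for i := 1 to n do scan[i] := 20.0 + (i mod 3); { ordinary iris levels }
  s0 := 5; s1 := 8;                               { glare-spot span }
  for i := s0 to s1 do scan[i] := scan[i] + 60.0; { much brighter DC }
  { average level inside the spot and in the adjacent sectors }
  dcspot := 0.0;
  for i := s0 to s1 do dcspot := dcspot + scan[i];
  dcspot := dcspot / (s1 - s0 + 1);
  dcneigh := (scan[s0 - 1] + scan[s1 + 1]) / 2.0;
  { shift the spot region down to the neighbouring DC level }
  for i := s0 to s1 do scan[i] := scan[i] - dcspot + dcneigh;
  for i := 1 to n do writeln(scan[i]:6:1);
end.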
Figs. 6A and 6B are flow charts for processing the two-dimensional image of a three-dimensional surface to ascertain the diopters of refraction present over the surface. In Fig. 6A, steps 601 through 609, 620 and 630 are illustrated. Of these steps, steps 601 through 609 are roughly comparable to steps 401 through 409 of Fig. 4. Fig. 6B shows the details of step 620 of Fig. 6A, in which the distance perpendicular to the optical axis corresponding to each incremental value of local spatial phase is obtained so that diopters can be found as a function of incremental values of perpendicular distance rather than as a function of increments fixed by pixel position.
Steps 601 and 602 These steps acquire the two-dimensional video image of the three-dimensional surface whose topography is to be measured and are comparable to steps 401, 402 and 403 of flow chart Fig. 4.
Steps 605 and 606 These steps perform the Fourier transform on the acquired two-dimensional video data and employ a Hilbert transform and band-pass filter to suppress negative frequencies and all sidebands in the Fourier spectrum, allowing only the major sideband of the first harmonic to pass to the next step. These steps are alternative, preferred processing steps to steps 404 through 406 of Fig. 4. Step 607 performs the inverse Fourier transform to obtain the complex analytic signal having real and imaginary portions and is comparable to step 407 of Fig. 4.
Step 608 obtains the arc tangent of the quotient of the imaginary and real portions of the complex signal to obtain the wrapped spatial phase and is comparable to step 408 of Fig. 4.
Step 609 This step performs phase-unwrapping and is comparable to step 409 of Fig. 4, which differentiates the discontinuous local spatial phase obtained in step 408 to obtain continuous values of local spatial frequency. However, unlike step 409, step 609 advantageously re-integrates the continuous values of local spatial frequency to obtain continuous values of local spatial phase.
Step 620 This procedure is detailed in Fig. 6B. Briefly, the continuous values of local spatial phase are inspected to
ascertain the radial distance from the optical axis at which the local spatial phase exhibits values that are multiples of 2π. These local spatial phase values occur at the positions where the surface being measured reflects each of the rings in the
illuminated pattern.
Step 630 This step samples the continuous values of local spatial phase and, during the calibration phase when the images of a plurality of known surfaces are processed, relates the local spatial phase values using a least-mean-squares procedure to the known dioptric powers of the surfaces. A calibration matrix of dioptric power vs. spatial phase values is assembled.
Thereafter, the calibration matrix is employed to ascertain the dioptric powers exhibited by an unknown surface from the
processing of its image to yield continuous values of local spatial phase.
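By way of illustration, consulting the calibration matrix for an unknown surface might proceed as in the following Pascal sketch, which assumes linear interpolation between tabulated phase/power pairs; the tabulated values are hypothetical:

program power_lookup;
const
  n = 5;
var
  phase, power: array[1..n] of real; { calibration matrix rows }
  p, d, t: real;
  i: integer;
begin
  { hypothetical calibration pairs: sampled local spatial phase
    versus known dioptric power of the calibration spheres }
  for i := 1 to n do
  begin
    phase[i] := i * 2.0 * pi;
    power[i] := 38.0 + 2.0 * i;
  end;
  p := 7.5; { phase sample from an unknown surface }
  d := power[n];
  for i := 1 to n - 1 do
    if (p >= phase[i]) and (p <= phase[i + 1]) then
    begin
      t := (p - phase[i]) / (phase[i + 1] - phase[i]);
      d := power[i] + t * (power[i + 1] - power[i]); { interpolation }
    end;
  writeln('power = ', d:6:2, ' diopters');
end.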
In Fig. 6B:
Step 621 sets the count for ring# to 1 at the start of the procedure for finding the distances obtaining at values of local spatial phase that are multiples of 2π.
Step 622 sets the variable ringposphase to the count provided by step 621 multiplied by 2π.
Step 623 increments the count of the counter for the array "ipix", which is an array (signified by the use of square brackets) of local spatial phase values exhibited at the pixel positions of the processed image.
Step 624 determines whether the phase value exhibited in the current pixel position of the array is less than the variable
ringposphase and whether the end of the array has been reached.
Step 625 determines for each increment of local spatial phase the corresponding distance in the image.
Step 626 increments the count for ring#.
Step 627 determines whether the maximum number of ring
positions, illustratively 25, has been reached. If not,
processing continues with a repetition of steps 622 through 627.
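The loop of steps 621 through 627 may be sketched, by way of illustration only, in the following self-contained Pascal program; the fabricated monotonic phase data, the sub-pixel interpolation, and the array sizes are assumptions added for the example:

program find_ring_positions;
const
  npix = 200;
  maxrings = 25;
var
  phase: array[1..npix] of real; { unwrapped local spatial phase per pixel }
  ringnum, ipix: integer;
  ringposphase, frac, xr: real;
begin
  for ipix := 1 to npix do
    phase[ipix] := 0.8 * ipix; { fabricated monotonic unwrapped phase }
  ringnum := 1;
  ipix := 1;
  while (ringnum <= maxrings) and (ipix < npix) do
  begin
    ringposphase := ringnum * 2.0 * pi;
    { advance until the phase reaches the next multiple of 2*pi }
    while (ipix < npix) and (phase[ipix] < ringposphase) do
      ipix := ipix + 1;
    if phase[ipix] >= ringposphase then
    begin
      { interpolate between pixels for a sub-pixel ring radius }
      frac := (ringposphase - phase[ipix - 1]) /
              (phase[ipix] - phase[ipix - 1]);
      xr := (ipix - 1) + frac;
      writeln('ring ', ringnum:2, ' at radius ', xr:8:3, ' pixels');
      ringnum := ringnum + 1;
    end;
  end;
end.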
Appendix
Fig. 4 Basics of Image Processing Using Curve Fitting:
Fig. 4 is a flow chart of the basic steps for analyzing a two-dimensional image of a three-dimensional surface and
employing discrete Fourier transform processing to relate local spatial frequencies in the processed image to the three- dimensional radius or diopters of refraction of the surface.
After the initialization of variables, steps 401, entitled "locate 1st ring circumference", and 402, entitled "locate center of 1st ring subpixel offsets x & y", locate the point from which the pattern in the image will be scanned in step 403. As described at column 8, line 9 et seq. of the aforementioned Gersten et al. patent 4,863,260, the center point is determined from the first ring pattern appearing in the image. The details of this processing step are set forth in the main program below, where the procedure "find_first_ring(ring1)" finds the edges of the first ring. This is the only ring pattern whose "edges" need to be determined.
In scanning images containing the focusing spot 17' provided by light source 17 and pellicle 16, Fig. 1, the focusing spot 17' may mask topographic data in the center of the scan. To avoid the discontinuity resulting from an absence of data that might impair the functioning of a Fourier transform, it may be advantageous to fill in the missing data by using the data from two radial scans lying 180 degrees apart as being from one meridian and joining the data lying on opposite sides of the circle of omitted data. This is done in the procedure "sample_meridian" called for in the main program detailed below.
The video data of the acquired image is passed through a Hamming window filter in step 404 so that the side lobes of the Fourier spectrum obtained in the FFT step 405 will be suppressed. After the FFT is obtained, the Fourier spectrum is subjected to a Hann filter in step 406 so that only the major sidebands of the first harmonic are passed. The inverse Fourier transform is obtained in step 407, and the arctangent of the quotient of the imaginary divided by the real components of the inverse transform yields the local spatial phase in step 408. The local spatial phase is differentiated in step 409 to obtain the local spatial frequency and, in step 410, a polynomial is found using the least squares curve fitting technique to map the local spatial frequencies to diopters from the processed images of a number of known surfaces. When the center of the image is restored (see "procedure restore_center"), the dioptric powers obtained from the processing are plotted in their correct positions.
The plotting programs used in the above-mentioned Gersten et al. patent device plotted the dioptric powers of the surface along the perimeter of the rings observed in the image. To use these prior plotting programs a "ring structure" is necessary. The step entitled "compute_spatial_freq_pseudo_rings" creates pseudo rings from which the instantaneous dioptric information can be read and delivered to the plotting programs. A "ring" for such plotting is identified by taking the mean value of eight instantaneous frequencies read along a meridian.
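Illustratively, the averaging that identifies one such pseudo ring might be sketched as follows; the eight instantaneous frequency values are fabricated for the example:

program pseudo_ring;
const
  nfreq = 8;
var
  f: array[1..nfreq] of real;
  mean: real;
  i: integer;
begin
  { eight instantaneous spatial frequencies read along one meridian }
  for i := 1 to nfreq do
    f[i] := 4.0 + 0.05 * i;
  mean := 0.0;
  for i := 1 to nfreq do
    mean := mean + f[i];
  mean := mean / nfreq; { mean frequency identifies the pseudo ring }
  writeln('pseudo-ring frequency = ', mean:6:3);
end.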
Programs
program compute_corneal_power;
uses
crt, deplib, filtlib, ring1lib,plotlib,vgalib,pointlib, cmplxlib, cmplxvs, fftlib,v8lib, utill, global;
const
debug = true;
var
ix1,iy1: integer;
max_diop,min_diop: single;
exam: words;
begin
if paramcount = 1 then exam := paramstr(1) else exit;
new(meridian);
new(dio);
fillchar (dio^,sizeof(dio^),0);
set_rick_defaults (ix1,iy1,ham_window,han_window);
x_ctr := ix1;
y_ctr := iy1;
display_selected_frame (exam,error);
find_the_fixation_light (x_ctr,y_ctr,vga_xc,vga_yc);
find_first_ring(ring1);
find_offsets (ring1,xmean,ymean);
for itheta := 0 to 127 do
begin
sample_meridian (debug, ring1, itheta, -xmean,
-ymean, ham_window, meridian, last_points);
rfftc(meridian^,m);
fft_filter(debug,meridian,han_window);
fftc(meridian^,m, invs);
compute_phase (debug,itheta,meridian,ring1,phase);
restore_center(itheta,ring1,meridian);
compute_spatial_freq_pseudo_rings (debug, itheta, phase, dio);
compute_powers (itheta,min_diop,max_diop, dio);
write (itheta: 4);
end;
lowpass_meridians (last_points,dio);
lowpass_circumference(dio);
lowpass_circumference(dio);
preen_output(dio);
write_output_to_disk(min_diop,max_diop,dio);
dispose(meridian);
dispose(dio);
end.
procedure sample_meridian(debug: boolean; ring1: r256; itheta: integer; xmean,ymean: single; hw: r512a; var mer: r512c; var last_points: i256);
var
iphe, ipix, ir, offr0, offr1, lp0, lp1: integer;
f,theta: single;
vid: i512;
curv: ^r512a;
begin
fillchar (mer^,sizeof(mer^),0);
fillchar (vid,sizeof(vid),0);
for ir := 1 to last do
read_polar_pixel(ir, itheta,2.0*xmean,2.0*ymean, vid[ir]);
for ir := 0 to last-1 do
read_polar_pixel(ir, itheta+128,xmean,ymean, vid[-ir]);
if itheta = 0 then
for iphe := 0 to lim do
load_last_points (iphe,vid, last_points);
lp0 := last_points[itheta];
lp1 := last_points[itheta+128];
for ir := 0 to lp0 do
begin
offr0 := round(ring1[itheta]);
offr1 := round(ring1[itheta+128]);
mer^[ir].r := vid[ir+offr0] *
hw[ir+(last_rad-lp0)];
mer^[-ir+1].r := vid[-ir-offr1] *
hw[-ir-(last_rad-lp1)];
end;
if debug then
begin
new(curv);
for ir := -last+1 to last do
curv^[ir] := mer^[ir].r;
plot_r512a(curv^, 13,6,true);
dispose(curv);
end;
end;
procedure restore_center (itheta: integer; ring1: r256; var meridian: r512c);
var
ir: integer;
begin
for ir := last downto round(ring1[itheta]) do
begin
meridian^[ir] := meridian^[ir - round(ring1[itheta])];
meridian^[-ir+1] := meridian^[-ir + round(ring1[itheta]) + 1];
end;
for ir := -round(ring1[itheta] + 1) to round(ring1[itheta]) do
begin
meridian^[ir].r := 0.0;
meridian^[ir].i := 0.0;
end;
end;
Accordingly, I have described an illustrative embodiment of my invention. Numerous modifications may be employed by those skilled in the art, such as substituting a converging lens for the diverging lens, in which case "minification" substitutes for the magnification which has been described. In addition, other types of windows may be substituted for the Hamming and "hanning" windows set forth in the described embodiment. Alternative transforms, such as the Hilbert transform, or finite impulse response filters may also be employed to derive the instantaneous spatial frequencies, and other forms of polynomial curve fitting may be employed besides the linear least-squares technique described. Further and other modifications may be apparent to those skilled in the art without, however, departing from the spirit and scope of my invention.