CA2172284C - Processing of keratoscopic images using local spatial phase - Google Patents

Processing of keratoscopic images using local spatial phase

Info

Publication number
CA2172284C
Authority
CA
Canada
Prior art keywords
dimensional
image
processing
unknown
distances
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CA002172284A
Other languages
French (fr)
Other versions
CA2172284A1 (en)
Inventor
Richard J. Mammone
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tomey Co Ltd
Original Assignee
Tomey Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tomey Co Ltd filed Critical Tomey Co Ltd
Publication of CA2172284A1 publication Critical patent/CA2172284A1/en
Application granted granted Critical
Publication of CA2172284C publication Critical patent/CA2172284C/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/255Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures for measuring radius of curvature
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/107Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for determining the shape or measuring the curvature of the cornea

Abstract

Quantitative measurement of corneal surface topography is obtained by processing a two-dimensional image of the surface (6) which reflects a quasi-periodic illuminated pattern, such as a series of concentric rings (13), from a Placido disk source. The local spatial phases exhibited by the image of the illuminated pattern when reflected from the corneal surface and when reflected from standard specular surfaces are obtained by processing the images (42). The distances at which predetermined local spatial phases are observed in the image from the cornea are compared (42) with the distances at which these same phases are observed in the images of the standard surfaces. The distances are also compared (42) with certain corresponding distances on the source and converted to reveal the dioptric powers of refraction of the corneal surface without the need for parametric interpolation. During processing, any mislocation of the apex of the corneal surface along the optical axis is compensated for.

Description

WO96/04839 PCT/US95/07214

Processing of Keratoscopic Images Using Local Spatial Phase

Technical Field

This invention relates to instruments for measuring surface topography, and more particularly, to such instruments known as keratoscopes.

Background of the Invention

Gersten et al., US patent 4,863,260, discloses a computer-controlled corneal mapping system for providing quantitative topographic information about a corneal surface illuminated with a structured light pattern, such as a series of concentric light bands. Because the edges of the light pattern bands must be ascertained, a considerable amount of image processing is required, including various curve-fitting algorithms, in order to reconstruct the corneal topography.

In the aforementioned patent, intersecting laser beams were employed to accurately locate the apex of the cornea.
These light beams, however, were rather bright and could give rise to extraneous glare in the image. Therefore, to prevent the glare from swamping useful data in the image, the locating laser light beams were shut off just before the image to be analyzed was acquired. This required that the patient's head be maintained motionless between the time the locating light beams are turned off and the time that the image is acquired for processing. Accordingly, it would be desirable to ascertain more directly and accurately the powers of refraction exhibited over a corneal surface and to allow the image to be processed without requiring the locating light beams to be turned off. In addition, it would be advantageous to allow the processing to compensate for any inadvertent mislocation of the corneal surface.

Takeda et al., in articles entitled "Fourier-transform Method of Fringe-pattern Analysis for Computer-based Topography and Interferometry", published in J. Opt. Soc. Am., vol. 72, no. 1, January 1982, pp. 156-160, and "Fourier Transform Profilometry for the Automatic Measurement of 3-D Object Shape", Applied Optics 22:3977-82, December 15, 1983, describe a method of Fourier transform processing of two-dimensional images of three-dimensional objects in which local spatial phase is employed so that it is no longer necessary to detect all of the light band edges. However, the Takeda articles do not disclose a method which is directly applicable for mapping the specular corneal surface.

Summary of the Invention

In accordance with my invention, the topography of a three-dimensional specular surface, such as a cornea, may accurately be measured by processing a two-dimensional image of the surface illuminated with a quasi-periodic pattern to obtain continuous values of local spatial phase with respect to distance perpendicular to the optical axis.

Preliminarily, a number of images of spheres whose dioptric powers are known are subjected to Fourier transform processing of the type disclosed in the above-mentioned Takeda articles. The Fourier spectrum is windowed and band-pass filtered to suppress all side lobes and negative frequencies except for the fundamental spatial frequency. The inverse transform is taken, and then the inverse tangent of the quotient of the imaginary and real portions of the inverse transform is computed to obtain the local spatial phase. At this point the local spatial phase may be "unwrapped" and the unwrapped phase differentiated to obtain the instantaneous local spatial frequencies at discrete pixels. These local spatial frequencies may then be mapped to the known diopters of the spheres by a process of curve fitting.

Alternatively, and preferably, however, the local spatial phase is "unwrapped" to obtain continuous values of local spatial phase with distance from the optical axis of the image. The distances from the optical axis at which predetermined values of the local spatial phase occur correspond to the positions of the illuminated rings of the pattern. The distances obtained from processing images of surfaces whose dioptric powers are known are compared and tabulated with corresponding distances in the original illuminated object to determine the local magnification produced by each surface. Images of surfaces having unknown dioptric powers may then be mapped to diopters by consulting the tabulation. Accuracy beyond that determined by the granularity of pixel position is obtained.
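The transform, band-pass filter, inverse-transform, arctangent and unwrapping sequence described above can be sketched in miniature. The following Python fragment is an illustrative sketch only, not the patent's implementation; the direct DFT, the bin numbers and the band half-width are assumptions chosen for clarity:

```python
import cmath
import math

def dft(x):
    """Direct discrete Fourier transform (O(N^2), for illustration only)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse discrete Fourier transform."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def local_spatial_phase(intensity, k0, halfwidth):
    """Wrapped local spatial phase of a quasi-periodic scan line:
    keep only a narrow band around the fundamental bin k0 (suppressing
    DC, side lobes and all negative frequencies), invert the transform,
    and take the arctangent of imaginary over real."""
    X = dft(intensity)
    N = len(X)
    Y = [X[k] if k0 - halfwidth <= k <= k0 + halfwidth else 0 for k in range(N)]
    analytic = idft(Y)
    return [math.atan2(z.imag, z.real) for z in analytic]

def unwrap(phase):
    """Remove the 2*pi discontinuities of the sawtooth wrapped phase."""
    out = [phase[0]]
    for p in phase[1:]:
        d = p - out[-1]
        d = (d + math.pi) % (2 * math.pi) - math.pi  # map jump into (-pi, pi]
        out.append(out[-1] + d)
    return out
```

For a ring pattern whose fundamental falls inside the retained band, the unwrapped phase grows essentially linearly with scanning distance, and its crossings of multiples of 2π mark the reflected ring positions.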

In accordance with a further aspect of my invention, the accuracy of topographic analysis is enhanced by allowing the locating light beams to remain on while the image is being acquired and by eliminating the effect of the glare during image processing. Mislocation of the apex of the surface in the x, y, or z direction is detected by determining the location of the light beams in the image, and any such mislocation, such as may arise from inadvertent movement of the subject, is compensated for.

Description of the Drawing

The foregoing and other objects of my invention may be better understood from the following specification and drawing, in which:
Fig. 1 is a sectional view showing a quasi-spherical specular surface properly positioned in the conical Placido disc apparatus of prior art patent 4,863,260, while Figs. 1A and 1B show the surface too closely and too remotely positioned, respectively;

Fig. 2A shows one of the rings reflected in the image of the quasi-spherical surface, while Fig. 2B shows exaggerated glare spots in the image;

Fig. 3A is a plot of the intensity of surface illumination versus radial distance for a quasi-periodic pattern reflected from the specular surface; Figs. 3B and 3C show the corresponding wrapped and unwrapped local spatial phase of the processed image; Figs. 3D and 3E show the unfiltered and filtered Fourier spectra of a processed image;

Fig. 4 is a flow chart for processing the scanned image to relate local spatial frequency at regularly-spaced pixel positions to diopters of refraction;

Figs. 5A and 5B are flow charts for procedures involved in processing the image of a specular surface to remove the effects of glare and improper positioning of the surface;

Figs. 6A and 6B are flow charts for processing the scanned image to relate distances obtained from the unwrapped local spatial phase to diopters of refraction;
and Figs. 7 and 8 are photographs of a human cornea shown before and after processing to remove the effects of glare.

General Description

Referring now to Fig. 1, there is shown a section through the prior art illuminated Placido disc cone device 10 described in US patent 4,863,260 and in US patent 5,416,539. Cone 10 causes a quasi-periodic mire pattern to be reflected from a quasi-spherical specular surface 6, such as a human cornea or polished steel ball, positioned at its left-hand side. Cone 10 has a hollow, substantially cylindrical bore 11. A light source (not shown) is positioned at the right-hand side base of cone 10. A series of opaque bands 9 divides the otherwise transparent bore 11 into a series of illuminated rings 13, of which only rings 13-1 and 13-2, spaced the distance S apart, are individually labelled in Fig. 1. In the illustrated embodiment, each of rings 13 is of the same diameter h.

The specular surface 6 reflects a virtual image of the illuminated rings 13. On a perfectly spherical specular surface, each ring would be reflected, as shown in Fig. 2A, as a circle 13'. In Fig. 2B, greatly enlarged sectors of three illustrative virtual image rings, 13-1', 13-2' and 13-3', are shown. For simplicity, however, Fig. 1 shows an edge view of only one of these virtual image rings, 13'.
Ring 13' appears to lie at some distance beneath the specular surface 6.

As explained, inter alia, in the article "Keratometry" by Janet Stone in the book Contact Lens Practice, a ray incident upon a spherical mirror aimed at the focus, f, will be reflected parallel to the optical axis and will produce a virtual image which appears to lie beneath the surface of the sphere, while a ray directed perpendicular to the surface toward the surface's center of curvature will be directed back upon itself. To locate a point on the plane of the virtual image, the reflected ray should be projected parallel to the optical axis until it intersects the ray directed to the center of the sphere. This is illustrated in Fig. 1 with respect to two rays 13f and 13c from illuminated object ring 13-1 of cone 10. Ray 13c is directed to the center of curvature, C, of spherical surface 6, while ray 13f is directed to the focus (which lies a distance f beneath the surface). The plane of the virtual image 13' of object ring 13-1 is located by projecting reflected ray 13f parallel to the optical axis until it strikes the projection of ray 13c. The virtual image ring 13' is defined by the intersection of light ray 13c, which is directed perpendicular to the specular surface 6, i.e., toward its center of curvature, C, and the backward extension of ray 13f, which passes through the focus of specular surface 6, parallel to optical axis A-C.

As explained in the aforementioned '260 patent, the proper positioning of specular surface 6 is indicated when intersecting light beams L1 and L2 (advantageously laser beams) converge at a single point A at the apex of the specular surface 6. When surface 6 is properly positioned so that point A is on the optical (Z) axis which runs through the center of bore 11 of cone 10, point A will lie in the center of the field of camera 41 and the distance between point A and the camera 41 is accurately known. In addition, when surface 6 is properly positioned, the reflection 17' of fixation light 17 from pellicle 16 will also appear at point A. If, however, surface 6 is positioned too far into bore 11 (Fig. 1A), or too far out of bore 11 (Fig. 1B), light beam L1 will not strike surface 6 at point A but at some distance D above or below point A, thereby introducing an axial error ΔZ in the location of point A. The point where the light beam L1 strikes surface 6 effects a local glare spot GS, as shown in the greatly enlarged view Fig. 2B, which shows glare spot GS obscuring a portion of one or more rings such as 13-2' and 13-3'. A photograph of an eye where glare spots obscure some of the reflected rings is shown in Fig. 7. As will hereinafter be explained, the mislocation of glare spot GS from point A, as well as its ring-obscuring effect, are compensated for by my improved image processing method, the results of which are shown in Fig. 8.

As disclosed in my above-mentioned application, the image of surface 6 is acquired by electronic camera 41 through lens 40. Processor 42 scans the image acquired by camera 41 in a direction orthogonal to the illuminated pattern that is reflected from surface 6; e.g., an illuminated ring pattern is scanned radially. However, if bore 11 of cone 10 were provided with illuminated longitudinal stripes (not shown) parallel to the axis of the cylindrical bore 11, such longitudinal stripes would cause a pattern of radial lines to be reflected from quasi-spherical surface 6, and orthogonal scanning of such radial lines would be in a circular direction.

Fig. 3A shows the waveform of video intensity versus orthogonal scanning distance in the image acquired by processor 42. The average frequency of the waveform is determined by the average spacing (see, for example, illustrative spacing "S", Fig. 1) of the illuminated rings of the Placido disc source of the light pattern. On a surface having surface imperfections, the instantaneous spatial frequency of the pattern will vary as surface imperfections distort the local light pattern.

My co-pending application taught that since the illuminated spatial pattern is quasi-periodic it can be decomposed into its Fourier series:

fp(x) = Σ (i=0 to ∞) [ai cos(iω₀x) + bi sin(iω₀x)]   (1)

In the above expression, bi = 0 if the pattern is symmetrical about the origin.

As disclosed in the aforementioned co-pending application, the frequency spectrum of the video image scanned orthogonally to the quasi-periodic pattern (i.e., in the direction xr) may be ascertained by taking a discrete Fourier transform of the scanned image, using an analytic filter to suppress all side lobes and negative spatial frequencies in the Fourier spectrum except for the narrow band of frequencies adjoining the fundamental spatial frequency, taking the inverse transform, and finding the inverse tangent of the quotient of the imaginary and real portions of the inverse transform to obtain the instantaneous spatial phase. The Fourier spectrum of the two-dimensional image is shown in Fig. 3D. The Fourier spectrum of the image after processing to suppress all side lobes and negative frequencies is shown in Fig. 3E. The real component, R(x), of the inverse transform has the form:

R(x) = A(x) cos(ω₀x + φ(x))   (2)

while the imaginary component, I(x), has the form:

I(x) = A(x) sin(ω₀x + φ(x))   (3)

The inverse tangent of the quotient of the imaginary component divided by the real component yields the instantaneous spatial phase φt as a function of the radial scanning distance, xr:

φt(xr) = tan⁻¹(I(xr) / R(xr))   (4)

The instantaneous phase is a saw-tooth or ramp-like function, shown in Fig. 3B, having discontinuities every 2π radians. The derivative of the instantaneous phase waveform is the instantaneous frequency (which also has discontinuities every 2π radians):

(d/dx) tan⁻¹(I(x) / R(x)) = ω₀ + φ′(x)   (5)

The discontinuities may be eliminated by a process known as "phase unwrapping" which employs thresholding and numerical interpolation, as described, for example, in the article by Mitsuo Takeda, Hideki Ina and Seiji Kobayashi entitled "Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry", published in J. Opt. Soc. Am., vol. 72, no. 1, January 1982, pp. 156-160; see also the article by Jose M. Tribolet entitled "A New Phase Unwrapping Algorithm", published in the IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. ASSP-25, no. 2, April 1977, pp. 170-177.

As shown in the flow chart Fig. 4, and described in the Appendix hereto, the unwrapped phase may be differentiated to obtain the instantaneous local spatial frequencies, which may then be mapped to diopters by a process of curve fitting. Preliminarily, the images of a plurality of specular surfaces having known dioptric powers of refraction must be processed to obtain the local spatial frequencies exhibited by the illuminated rings in each image. The local spatial frequencies exhibited by the illuminated pattern reflected from each of the known surfaces are then mapped to their known dioptric powers by using a curve fitting technique such as least mean squares curve fitting, in which the coefficients of a polynomial relating these frequencies to diopters are determined, e.g.:

D = Φ(f) = A₀ + A₁f + A₂f² + ... + Aₙfⁿ   (6)

Thereafter, as shown in the flow chart Fig. 4, the local spatial frequencies obtained from processing the image of an unknown corneal surface may be employed with the determined polynomial to ascertain the corresponding diopters of that surface.
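The least-mean-squares fit of equation (6) can be sketched as follows. This is an illustrative reconstruction (the function names and the degree-2 example are assumptions), solving the normal equations of the Vandermonde system directly:

```python
def polyfit_diopters(freqs, diopters, degree):
    """Least-squares coefficients A0..An of D = A0 + A1*f + ... + An*f^n,
    via the normal equations (V^T V) a = V^T d and Gaussian elimination."""
    n = degree + 1
    m = len(freqs)
    V = [[f ** j for j in range(n)] for f in freqs]  # Vandermonde matrix
    A = [[sum(V[r][i] * V[r][j] for r in range(m)) for j in range(n)]
         for i in range(n)]
    b = [sum(V[r][i] * diopters[r] for r in range(m)) for i in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            factor = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= factor * A[col][c]
            b[r] -= factor * b[col]
    # back substitution
    coeffs = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(A[r][c] * coeffs[c] for c in range(r + 1, n))
        coeffs[r] = (b[r] - s) / A[r][r]
    return coeffs

def eval_poly(coeffs, f):
    """Evaluate D = A0 + A1*f + ... + An*f^n."""
    return sum(a * f ** j for j, a in enumerate(coeffs))
```

Once the coefficients are determined from the calibration spheres, `eval_poly` maps a measured local spatial frequency of an unknown surface to diopters.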

In accordance with my preferred method, however, advantage is taken of the fact that the local spatial phase exhibits values that are a multiple of 2π at the position of each successive one of the quasi-periodic illuminated rings in the processed image, as may be observed by comparing the unwrapped phase, Fig. 3C, with Fig. 3A. Fig. 3A shows the video intensity I(x) encountered in the x direction of scanning (i.e., orthogonally to the quasi-periodic pattern). As shown in Fig. 3C, the first and second 2π points in the processed image occur at the distances xr1 and xr2 from the center of scanning, e.g., from the optical axis. The radial scanning distance at each 2π point is the radius of a ring in the image, and twice this radius is the diameter h' of the ring. Referring again to Fig. 1, it is seen that the diameter of the illustrative illuminated ring 13-1 on the bore 11 of Placido disc cone 10 is h. The ratio h/h' of the size of the original object to the size of its reflected image is the magnification M produced by the specular surface 6. As explained in the above-mentioned article by Janet Stone, the relationship between magnification M and diopters D is D = M/(2d), where d is the distance along the optic axis between a point on the plane of the object and its image point. In the illustrative apparatus of Fig. 1, this distance is approximately constant for each ring.

Reference to Fig. 3C shows that the unwrapped local spatial phase gives a continuous series of values intermediate each set of 2π points. In the illustrative embodiment where cone 10 has a cylindrical bore 11, the dimension h is constant. Each 2π point corresponds to a ring in the image, but the dimension h' of each such image ring is determined by the local magnification of surface 6 at the point. Intermediate the 2π points, the intermediate scanning distances correspond to "imaginary" intermediate rings in the image. However, since the corresponding "imaginary" intermediate ring in the cylindrical bore 11 of cone 10 has the constant dimension h, the magnification corresponding to each of these intermediate spatial phase points may be ascertained simply by dividing the corresponding intermediate scanning distance xr by the constant dimension h, without need of any curve fitting.

The actual processing steps of my preferred method are illustrated in the flow chart diagram, Fig. 5:

Step 501, entitled "load matched filter", loads an image of a paradigm laser spot GSf, which is used as a matched filter. The paradigm laser spot is obtained by averaging the appearance of the laser spot on a plurality of eyes of different sizes and colors.

Step 502, labelled "compute RMS filter", computes the RMS value of the paradigm laser spot GSf by taking the square root of the sum of the squared values of the video intensity found over the area of GSf.

Step 503, labelled "find laser spot", compares the video intensity I(x) of the illuminated surface on a pixel-by-pixel basis with the RMS value of spot GSf that was obtained in step 502. A laser spot is identified when the video intensity I(x) of a scanned spot exceeds the RMS value of the spot GS, calculated as follows:

corr(OFFSET) = Σ (i=-L to L) I(i + OFFSET) · filterᵢ / √( Σ (i=-L to L) I(i + OFFSET)² · Σ (i=-L to L) (filterᵢ)² )   (7)

In the above expression, L is the pixel-length of the filter data and OFFSET is the offset to the video data. The video data is convolved with the filter data for various values of OFFSET. When the expression reaches a maximum that exceeds a predetermined threshold, the location of the laser spot GS is detected. The point at which the laser spot is found is designated laserGS. The distance D, expressed in millimeters, is obtained from the location laserGS as follows:

D = (Xc − laserGS) / (pels/mm)   (8)

where Xc is the maximum number of pixels ("pels") in the scanned line.
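The matched-filter search of step 503 can be sketched as a sliding normalized cross-correlation in the spirit of equation (7); the function name and the threshold handling below are assumptions:

```python
import math

def matched_filter_peak(video, filt, threshold):
    """Slide the paradigm-spot filter over the scan line and return the
    offset with the best normalized correlation, or None if no offset
    exceeds the detection threshold."""
    L = len(filt)
    energy_f = math.sqrt(sum(f * f for f in filt))
    best_off, best_r = None, threshold
    for off in range(len(video) - L + 1):
        seg = video[off:off + L]
        energy_s = math.sqrt(sum(s * s for s in seg))
        if energy_s == 0:
            continue  # flat segment: no correlation defined
        r = sum(s * f for s, f in zip(seg, filt)) / (energy_s * energy_f)
        if r > best_r:
            best_off, best_r = off, r
    return best_off
```

Normalizing by both energies makes the detector respond to the shape of the glare spot rather than to raw brightness, so a uniformly bright iris region does not trigger it.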

Step 504, labelled "clear laser spot 5", calls the procedure detailed in steps 510 through 520, which clears the actual laser spot GS that is found in the scanned image.

Step 505, labelled "compute axial misalignment", computes the axial misalignment of point A after laser spot GS has been cleared from the image being processed. As shown in Fig. 1, laser beam L1 travels through a tunnel in cone 10 which is at a fixed angle, illustratively 38 degrees, to the optical axis, Z. Should surface 6 be inserted too far into the bore of cone 10, laser beam L1, instead of striking surface 6 at point A, will strike the surface at some distance above point A, such as at distance "D". Similarly, if surface 6 is not inserted far enough into the bore, laser beam L1 will strike the surface at some distance "D" below point A. The depth error, ΔZ, measured along the optical axis, in the positioning of point A is calculated by the following:

ΔZ = D / tan 38° − (Ravg − √(Ravg² − D²))   (9)

where Ravg is the average radius of the specular surface, e.g., 7.85 mm for the average human cornea.

The above equation gives the depth error for the apex of surface 6 not being at point A, i.e., not at Z = 0. The error in diopters caused by the depth error is compensated for by the following equation:

Dc = Dm + A + B·ΔZ   (10)

where ΔZ is obtained from equation (9) above. The constants A and B are computed by processing a series of images of steel balls of known diopters that have each deliberately been moved a known increment ΔZ away from point A, and employing the least squares method to relate the measured diopters to the known diopters of each such surface.
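Equations (9) and (10) can be sketched as follows. The form of the depth-error expression is a reconstruction from the geometry described above (beam angle, average corneal radius), and the calibration constants in the usage are hypothetical:

```python
import math

def axial_depth_error(D_mm, r_avg=7.85, beam_deg=38.0):
    """Depth error dZ along the optical axis (eq. 9 as reconstructed here):
    the locating beam, inclined beam_deg to the axis, strikes the surface
    a lateral distance D_mm from point A; r_avg is the assumed average
    radius of the specular surface in mm."""
    sag = r_avg - math.sqrt(r_avg ** 2 - D_mm ** 2)  # sagittal drop of the sphere
    return D_mm / math.tan(math.radians(beam_deg)) - sag

def corrected_diopters(D_measured, dZ, A, B):
    """Dc = Dm + A + B*dZ (eq. 10); A and B come from least-squares
    calibration on steel balls of known power displaced by known dZ."""
    return D_measured + A + B * dZ
```

With the surface properly positioned (D = 0) the depth error vanishes and the measured diopters are corrected only by the constant offset A.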

Correction for Lateral Misalignment

In addition to correcting for axial misalignment, i.e., a displacement along the optical or Z-axis, Step 505 advantageously may also correct for any lateral misalignment, that is, a displacement of the apex of the cornea of the patient's eye from the assumed position, point "A" (Fig. 1), in the X or Y direction, i.e., transverse to the Z axis of the instrument.

Where the apex of the patient's cornea is properly positioned at point A, the origin for the radius of curvature, R, is at point 0,0,0 and the coordinates of point A are x, y, z. However, if it be assumed that the apex of the patient's cornea, instead of being properly positioned at point A, is displaced in the X direction by the amount Δx, the coordinates of the apex, now at point A', are (x+Δx, y, z). The lateral misalignment will affect the measurement of the radius of curvature of the cornea, so that the apparent, or estimated, radius of curvature, R', will be a vector extending from the origin 0,0,0 to the point A'(x+Δx, y, z) instead of to the point A(x, y, z). The equations for the x, y, and z coordinates of point A in terms of the coordinates of point A' and of the actual radius of curvature of the patient's cornea, R, versus the apparent radius of curvature, R', are given in (11) through (14) below, where θ is the angle of the projection of R on the X,Z plane and φ is the angle of elevation of R.

x = R' sinφ cosθ − Δx

y = R' sinφ sinθ   (11)

z = R' cosφ

R = √(x² + y² + z²)

R = √((R' sinφ cosθ − Δx)² + (R' sinφ sinθ)² + (R' cosφ)²)   (12)

  = √(R'² − 2R' sinφ cosθ Δx + Δx²)   (13)

  = R' √(1 − 2 sinφ cosθ (Δx/R') + (Δx/R')²)   (14)

Equation (14) can be simplified by resorting to a Taylor series expansion and neglecting all terms after the first two. Accordingly:

R ≈ R' [1 − sinφ cosθ (Δx/R')]   (15)

It will be recalled that the diopters of refraction, D, are equal to k/R, where k is the change in the index of refraction between air and the cornea. The counterpart of equation (14) for dioptric power is, accordingly:

D = k / ( R' √(1 − 2 sinφ cosθ (Δx/R') + (Δx/R')²) )   (16)

Equation (16) can similarly be simplified to yield:

D ≈ D' [1 + sinφ cosθ (Δx/R')]   (17)

where D' = k/R'.

An exemplary software listing, written in Pascal, for implementing equation (17), the correction for lateral misalignment in the X direction, is given below. The correction for misalignment in the Y direction is similar. The processing of the image acquired by camera 41, Fig. 1, depends on ring position. In the illustrative embodiment, which uses a cone device 10 of the type shown in US patent 5,416,539, the first five (innermost) rings reflected upon the patient's cornea are produced by an illuminated disc (not shown in Fig. 1) that is placed at the right-hand end of hollow bore 11. Because of the physically different manner in which the rings are produced in the cone device, a slightly different equation is employed for processing information from these rings (if iring < 5) than for the other rings (else).
{correct for x errors}
if (valid_z) and (abs(dx) > 0) then
  if iring < 5 then
    PowerPtr^[iring][jphi] := PowerPtr^[iring][jphi] *
      ((1 - 0.025 * sin((iring / ring_total_for_cone) * pi) * dx *
        (CosinesPtr^[jphi])) + 0.015
        * sin((iring / ring_total_for_cone) * 2 * pi) * dx *
        (CosinesPtr^[jphi]))
  else
    PowerPtr^[iring][jphi] := (KMetricIndex / r_c) *
      (1 - 0.025 * sin((iring / ring_total_for_cone) * pi) * dx *
       (CosinesPtr^[jphi]));

In the above listing, PowerPtr is an array containing the observed dioptric powers of the map from the Fourier processing of the two-dimensional image, indexed by current ring number, iring, and current angle, jphi (both integers). The argument of the sinusoidal function, sin((iring/ring_total_for_cone) * pi), is formed from the fraction of the current ring number, iring, divided by the maximum number of rings, ring_total_for_cone. The quantity dx is the displacement Δx in the X direction of the point "A", as measured from the departure of the position of the fixation light 17', Fig. 1, from the center of the frame buffer of processor 42 storing the digitized information acquired by camera 41. In addition, the quantities "sin" and "cos" are expressed in a 360-degree system, while the quantities "sines" and "cosines" are expressed in a 256-degree system, since only 256 radial scans are employed in the exemplary embodiment. KMetricIndex / r_c is the dioptric power at the radius r_c.

Step 506, labelled "find iris laser spot 2", examines the video intensity values in the region of point A (see Fig. 1) of the scanned image. When video intensity values exceed the predetermined value expected for the image 17' of the fixation light 17 (see Fig. 1), Step 507, labelled "clear laser spot 8", is executed. This procedure is detailed in steps 530 through 539.

Clear Laser Spot 5 (Steps 510-519):

This procedure clears the laser spot GS from the image being processed.
Step 511 initializes the scanning angle for the acquisition of video data, I(x), along the scan line.

Step 512 loads the video data, I(x), obtained along the scan line.

Step 513 performs a fast Fourier transform (FFT) of the video data along the scan line.

Step 514 finds the peak value of the Fourier transform.

Step 515 filters the signal around the peak value.

Step 516 scales the FFT data to ensure that the image being placed on the screen is neither too light nor too dark.

Step 517 performs an inverse FFT.

Step 518 stores the results of processing the scan line data.

Step 519 increments the scan angle until the maximum value is reached. Steps 511 through 519 are then performed again until, illustratively, 256 radial scans are performed.
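Steps 512 through 517 amount to band-pass filtering each scan line around its dominant spatial frequency, which suppresses narrow glare artifacts while preserving the ring pattern. A minimal sketch follows (direct DFT and an illustrative band half-width; not the patent's FFT implementation):

```python
import cmath
import math

def clean_scan_line(data, halfwidth):
    """Steps 512-517 in miniature: transform the scan line, keep only a
    narrow band around the dominant positive-frequency peak, and invert,
    so that narrow glare spikes are suppressed."""
    N = len(data)
    X = [sum(data[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
         for k in range(N)]
    # dominant peak among positive frequencies, excluding DC
    kpk = max(range(1, N // 2), key=lambda k: abs(X[k]))
    Y = [X[k] if abs(k - kpk) <= halfwidth else 0 for k in range(N)]
    out = [sum(Y[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
           for n in range(N)]
    # the real ring pattern is recovered as twice the real part, since the
    # conjugate (negative-frequency) half of the spectrum was discarded
    return [2 * z.real for z in out]
```

A bright single-pixel spike spreads its energy thinly across all frequency bins, so discarding everything outside the ring band removes almost all of it.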

Clear Laser Spot 8 (Steps 530-539):

Step 531 initializes the scan radius to STATRAD, the starting radius of the iris, so that the laser spot can be found in this area. As described in patent 5,214,456, issued May 25, 1993, when radially scanning outwardly from the center of the cornea (a location in the area of the pupil) toward the limbus, video intensity is lowest in the area of the pupil and increases rapidly in the area toward the iris.

Steps 532 through 535, 536 and 538. These steps are similar to steps 512 through 515, 517 and 518, except that the data being operated upon is data pertaining to the iris portion of the eye.

Step 537, labelled "DC restoration of image", restores the DC level of the video signal in the area of laser spot GS, which is much brighter than the DC levels elsewhere in the image, to the DC level existing in the adjacent sectors A-1 and B1 (see Fig. 2B).

Figs. 6A and 6B are flow charts for processing the two-dimensional image of a three-dimensional surface to ascertain the diopters of refraction present over the surface. In Fig. 6A, steps 601 through 609 and 620 and 630 are illustrated. Of these steps, steps 601 through 609 are roughly comparable to steps 401 through 409 of Fig. 4. Fig. 6B shows the details of step 620 of Fig. 6A, in which the distance perpendicular to the optical axis corresponding to each incremental value of local spatial phase is obtained, so that diopters can be found as a function of incremental values of perpendicular distance rather than as a function of increments fixed by pixel position.

Steps 601 and 602 These steps acquire the two-dimensional video image of the three-dimensional surface whose topography is to be measured and are comparable to steps 401, 402 and 403 of flow chart Fig. 4.

Steps 605 and 606 These steps perform the Fourier transform on the acquired two-dimensional video data and employ a Hilbert transform and band-pass filter to suppress negative frequencies and all sidebands in the Fourier spectrum, allowing only the major sideband of the first harmonic to pass to the next step. These steps are alternative, preferred processing steps to steps 404 through 406 of Fig. 4.
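In outline, the one-sided (Hilbert-style) filtering can be sketched with a synthetic scan line; the ring frequency and the pass-band below are hypothetical:

```python
import numpy as np

n = 512
x = np.arange(n)
line = np.cos(2 * np.pi * 20 * x / n)   # synthetic scan line crossing 20 "rings"

spec = np.fft.fft(line)
onesided = np.zeros(n)
onesided[1:n // 2] = 2.0                # keep positive frequencies, drop negatives
band = np.zeros(n)
band[15:26] = 1.0                       # band-pass around the first-harmonic sideband
analytic = np.fft.ifft(spec * onesided * band)
```

The result is the complex analytic signal whose angle advances linearly with the ring crossings.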

Step 607 performs the inverse Fourier transform to obtain the complex analytical signal having real and imaginary portions and is comparable to step 407 of Fig. 4.

Step 608 obtains the arc tangent of the quotient of the real and imaginary portions of the complex signal to obtain the wrapped spatial phase and is comparable to step 408 of Fig. 4.
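A minimal Python sketch of this arctangent step, using a synthetic analytical signal:

```python
import numpy as np

# Synthetic complex analytical signal with a known, growing phase.
true_phase = np.linspace(0.0, 8 * np.pi, 200)
z = np.exp(1j * true_phase)           # real and imaginary portions

# Arctangent of the quotient of the portions yields the wrapped local
# spatial phase, confined to (-pi, pi].
wrapped = np.arctan2(z.imag, z.real)
```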

Step 609 This step performs phase-unwrapping and is comparable to step 409 of Fig. 4, which differentiates the discontinuous local spatial phase obtained in step 408 to obtain continuous values of local spatial frequency. However, unlike step 409, step 609 advantageously re-integrates the continuous values of local spatial frequency to obtain continuous values of local spatial phase.
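The differentiate-then-re-integrate unwrapping can be sketched as follows (the phase ramp is synthetic):

```python
import numpy as np

def unwrap_by_reintegration(wrapped):
    """Differentiate the wrapped phase, fold each difference back into
    (-pi, pi] (continuous local spatial frequency), then re-integrate to
    recover continuous local spatial phase."""
    d = np.diff(wrapped)
    d = (d + np.pi) % (2 * np.pi) - np.pi
    return np.concatenate(([wrapped[0]], wrapped[0] + np.cumsum(d)))

true_phase = np.linspace(0.0, 30.0, 400)      # synthetic continuous phase
wrapped = np.angle(np.exp(1j * true_phase))   # the wrapped phase of step 608
recovered = unwrap_by_reintegration(wrapped)
```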

Step 620 This procedure is detailed in Fig. 6B. Briefly, the continuous values of local spatial phase are inspected to ascertain the radial distance from the optical axis at which the local spatial phase exhibits values that are multiples of 2π.
These local spatial phase values occur at the positions where the surface being measured reflects each of the rings in the illuminated pattern.

Step 630 This step samples the continuous values of local spatial phase and, during the calibration phase when the images of a plurality of known surfaces are processed, relates the local spatial phase values using a least-mean-squares procedure to the known dioptric powers of the surfaces. A calibration matrix of dioptric power vs. spatial phase values is assembled.
Thereafter, the calibration matrix is employed to ascertain the dioptric powers exhibited by an unknown surface from the processing of its image to yield continuous values of local spatial phase.
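A miniature Python sketch of the calibration and subsequent look-up; the phase samples and dioptric powers are invented for illustration, and a linear least-squares fit stands in for the least-mean-squares procedure:

```python
import numpy as np

# Hypothetical calibration data: sampled local spatial phase for each known
# sphere, and the sphere's known dioptric power.
phase_samples = np.array([10.0, 12.5, 15.0, 17.5, 20.0])
known_diopters = np.array([40.0, 42.0, 44.0, 46.0, 48.0])

# Least-squares fit relating phase to dioptric power (the "calibration matrix").
coeffs = np.polyfit(phase_samples, known_diopters, deg=1)

# Thereafter, phase values from an unknown surface are converted to diopters.
unknown_phase = np.array([11.0, 16.0])
estimated_diopters = np.polyval(coeffs, unknown_phase)
```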

In Fig. 6B:
Step 621 sets the count for ring# to 1 at the start of the procedure for finding the distances obtaining at values of local spatial phase that are multiples of 2π.

Step 622 sets the variable ringposphase to the count provided by step 621 multiplied by 2π.
Step 623 increments the count of the counter for the array "ipix", which is an array (signified by the use of square brackets) of local spatial phase values exhibited at the pixel positions of the processed image.

Step 624 determines whether the phase value exhibited in the current pixel position of the array is less than the variable ringposphase and whether the end of the array has been reached.

Step 625 determines for each increment of local spatial phase the corresponding distance in the image.

Step 626 increments the count for ring #.

Step 627 determines whether the maximum number of ring positions, illustratively 25, has been reached. If not, processing continues with a repetition of steps 622 through 627.
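The loop of steps 621 through 627 can be sketched in Python; the phase array and the ring count below are hypothetical:

```python
import numpy as np

# Continuous local spatial phase per pixel position (a synthetic ramp,
# standing in for the "ipix" array produced by step 609).
ipix = np.linspace(0.0, 30.0 * np.pi, 600)

ring_radius = []                       # pixel distance of each ring position
ring, max_rings = 1, 14                # step 621 (25 in the patent's example)
i = 0
while ring <= max_rings:               # step 627 bounds the loop
    ringposphase = ring * 2 * np.pi    # step 622
    while i < len(ipix) and ipix[i] < ringposphase:
        i += 1                         # steps 623-624: walk the ipix array
    if i == len(ipix):
        break
    ring_radius.append(i)              # step 625: distance for this 2*pi multiple
    ring += 1                          # step 626
```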

Appendix

Fig. 4 Basics of Image Processing Using Curve Fitting:
Fig. 4 is a flow chart of the basic steps for analyzing a two-dimensional image of a three-dimensional surface and employing discrete Fourier transform processing to relate local spatial frequencies in the processed image to the three-dimensional radius or diopters of refraction of the surface.
After the initialization of variables, steps 401 entitled "locate 1st ring circumference" and 402 entitled "locate center of 1st ring subpixel offsets x & y" locate the point from which the pattern in the image will be scanned in step 403. As described at column 8, line 9 et seq. of the aforementioned Gersten, et al, patent 4,863,260, the center point is determined from the first ring pattern appearing in the image. The details of this processing step are set forth in the procedure entitled "find_first_ring(ring1)" called in the main program below, which finds the edges of the first ring. This is the only ring pattern whose "edges" need to be determined.
In scanning images containing the focusing spot 17' provided by light source 17 and pellicle 16, Fig. 1, the focusing spot 17' may mask topographic data in the center of the scan. To avoid the discontinuity resulting from an absence of data, which might impair the functioning of a Fourier transform, it may be advantageous to fill in the missing data by treating the data from two radial scans lying 180 degrees apart as being from one meridian and joining the data lying on opposite sides of the circle of omitted data. This is done in the procedure "sample_meridian" called for in the main program detailed below.
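A simplified Python sketch of that joining operation; the scan lengths and the gap width are hypothetical, and the real procedure (see "sample_meridian" below) operates on complex meridian buffers with window weighting:

```python
import numpy as np

def build_meridian(scan_theta, scan_opposite, gap):
    """Join two radial scans lying 180 degrees apart into one meridian,
    dropping the central `gap` samples masked by the focusing spot and
    splicing the remainders across the axis."""
    return np.concatenate((scan_opposite[gap:][::-1], scan_theta[gap:]))

a = np.arange(100.0)           # hypothetical scan at angle theta
b = np.arange(100.0) + 0.5     # hypothetical scan at theta + 180 degrees
meridian = build_meridian(a, b, gap=5)
```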
The video data of the acquired image is passed through a Hamming window filter in step 404 so that the side lobes of the Fourier spectrum obtained in the FFT step 405 will be suppressed.
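The side-lobe suppression provided by the Hamming window of step 404 can be demonstrated with a synthetic off-bin tone (the tone frequency is arbitrary):

```python
import numpy as np

n = 512
line = np.cos(2 * np.pi * 20.3 * np.arange(n) / n)    # off-bin tone leaks badly

plain = np.abs(np.fft.rfft(line))
windowed = np.abs(np.fft.rfft(line * np.hamming(n)))  # windowed as in step 404

# Far from the spectral peak, the windowed spectrum's side lobes are much lower.
far_leak_plain = plain[200:].max()
far_leak_windowed = windowed[200:].max()
```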
After the FFT is obtained, the Fourier spectrum is subjected to a Hann filter in step 406 so that only major side bands of the first harmonic are passed. The inverse Fourier transform is obtained in step 407; the arctangent of the quotient of the imaginary divided by the real components of the inverse transform yields the local spatial phase in step 408. The local spatial phase is differentiated in step 409 to obtain the local spatial frequency and, in step 410, a polynomial is found using the least squares curve fitting technique to map the local spatial frequencies to diopters from the processed images of a number of known surfaces. When the center of the image is restored (see "procedure restore_center"), the dioptric powers obtained from the processing are plotted in their correct positions.
The plotting programs used in the above-mentioned Gersten et al. patent device plotted the dioptric powers of the surface along the perimeter of the rings observed in the image. To use these prior plotting programs a "ring structure" is necessary.
The step entitled "compute_spatial_freq_pseudo_rings" creates pseudo rings from which the instantaneous dioptric information can be read and delivered to the plotting programs. A "ring" for such plotting is identified by taking the mean value of eight instantaneous frequencies read along a meridian.
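The eight-sample averaging can be sketched as follows; the instantaneous-frequency values are invented:

```python
import numpy as np

# Hypothetical instantaneous spatial frequencies read along one meridian.
inst_freq = np.arange(40.0)

# Each pseudo "ring" value for the plotting programs is the mean of eight
# consecutive instantaneous frequencies along the meridian.
pseudo_rings = inst_freq.reshape(-1, 8).mean(axis=1)
```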

Programs

program compute_corneal_power;
uses crt,deplib,filtlib,ring1lib,plotlib,vgalib,pointlib,
     cmplxlib,cmplxvs,fftlib,v8lib,util1,global;
const debug = true;
var ix1,iy1: integer;
    max_diop,min_diop: single;
    exam: wordstr;
begin
  if paramcount = 1 then exam := paramstr(1) else exit;
  new(meridian);
  new(dio);
  fillchar(dio^,sizeof(dio^),0);
  set_rick_defaults(ix1,iy1,ham_window,han_window);
  x_ctr := ix1;
  y_ctr := iy1;
  display_selected_frame(exam,error);
  find_the_fixation_light(x_ctr,y_ctr,vga_xc,vga_yc);
  find_first_ring(ring1);
  find_offsets(ring1,xmean,ymean);
  for itheta := 0 to 127 do begin
    sample_meridian(debug,ring1,itheta,-xmean,-ymean,ham_window,
                    meridian,last_points);
    rfftc(meridian^,m);
    fft_filter(debug,meridian,han_window);
    fftc(meridian^,m,invs);
    compute_phase(debug,itheta,meridian,ring1,phase);
    restore_center(itheta,ring1,meridian);
    compute_spatial_freq_pseudo_rings(debug,itheta,phase,dio);
    compute_powers(itheta,min_diop,max_diop,dio);
    write(itheta:4);
  end;
  lowpass_meridians(last_points,dio);
  lowpass_circumference(dio);
  lowpass_circumference(dio);
  preen_output(dio);
  write_output_to_disk(min_diop,max_diop,dio);
  dispose(meridian);
  dispose(dio);
end.

procedure sample_meridian(debug: boolean; ring1: r256; itheta: integer;
  xmean,ymean: single; hw: r512a; var mer: r512c; var last_points: i256);
var iphe,ipix,ir,offr0,offr1,lp0,lp1: integer;
    f,theta: single;
    vid: i512;
    curv: ^r512a;
begin
  fillchar(mer^,sizeof(mer^),0);
  fillchar(vid,sizeof(vid),0);
  for ir := 1 to last do
    read_polar_pixel(ir,itheta,2.0*xmean,2.0*ymean,vid[ir]);
  for ir := 0 to last-1 do
    read_polar_pixel(ir,itheta+128,xmean,ymean,vid[-ir]);
  if itheta = 0 then
    for iphe := 0 to lim do
      load_last_points(iphe,vid,last_points);
  lp0 := last_points[itheta];
  lp1 := last_points[itheta+128];
  for ir := 0 to lp0 do begin
    offr0 := round(ring1[itheta]);
    offr1 := round(ring1[itheta+128]);
    mer^[ir].r := vid[ir+offr0] * hw[ir+(last_rad-lp0)];
    mer^[-ir+1].r := vid[-ir-offr1] * hw[-ir-(last_rad-lp1)];
  end;
  if debug then begin
    new(curv);
    for ir := -last+1 to last do curv^[ir] := mer^[ir].r;
    plot_r512a(curv^,13,6,true);
    dispose(curv);
  end;
end;

procedure restore_center(itheta: integer; ring1: r256; var meridian: r512c);
var ir: integer;
begin
  for ir := last downto round(ring1[itheta]) do begin
    meridian^[ir] := meridian^[ir-round(ring1[itheta])];
    meridian^[-ir+1] := meridian^[-ir+round(ring1[itheta])+1];
  end;
  for ir := -round(ring1[itheta]+1) to round(ring1[itheta]) do begin
    meridian^[ir].r := 0.0;
    meridian^[ir].i := 0.0;
  end;
end;

Accordingly, I have described an illustrative embodiment of my invention. Numerous modifications may be employed by those skilled in the art, such as substituting a converging lens for the diverging lens, in which case "minification" substitutes for the magnification which has been described. In addition, other types of windows may be substituted for the Hamming and Hanning windows set forth in the described embodiment. Alternative transforms, such as the Hilbert transform or finite impulse response filters, may also be employed to derive the instantaneous spatial frequencies, and other forms of polynomial curve fitting may be employed besides the linear least-squares technique described. Further and other modifications may be apparent to those skilled in the art without, however, departing from the spirit and scope of my invention.

Claims (24)

Claims:
1. The process of operating a stored program controlled apparatus to display a two-dimensional map of measurements of the local surface magnification of a three-dimensional surface reflecting a quasi-periodic mire pattern from an illuminated object, comprising the steps of:
a. processing two-dimensional scanned images of a plurality of known diameter, three-dimensional spheres reflecting said pattern from said object to ascertain the two-dimensional distances (h') on said surface at which predetermined local spatial phases occur in said scanned images;
b. computing for each of said images, the ratio of said two-dimensional distances (h') to the size (h) of a corresponding portion of said object to ascertain the local magnification of said surface;
c. processing a two-dimensional image of an unknown surface to ascertain the two-dimensional distances thereon at which said predetermined local spatial phases occur in said scanned image;
and d. displaying an indication of the local magnification of said unknown surface corresponding to said last-mentioned two-dimensional distances.
2. The process of claim 1 wherein said displaying includes multiplying said two-dimensional distances ascertained from processing said image of said unknown surface by said ratio computed from said images of said known diameter three-dimensional spheres.
3. The process of claim 1 wherein said pattern includes a series of concentric mires and wherein said displaying includes interpolating said magnification between adjacent ones of said mires.
4. The process of claim 3 wherein said displaying includes displaying said magnification between adjacent ones of said mires by variation in the color of said display.
5. The process of operating a stored program controlled apparatus to display a two-dimensional map of measurements of the third dimension of a three-dimensional surface reflecting a quasi-periodic mire pattern from an illuminated object, comprising the steps of:
a. processing two-dimensional scanned images of a plurality of known diameter, three-dimensional spheres to ascertain the two-dimensional distances (h') on said surface of said spheres at which predetermined local spatial phases occur in said scanned images;
b. computing the ratio of each of said two-dimensional distances (h') to the size (h) of a corresponding portion of said object to ascertain the local magnification of said surface;
c. processing a two-dimensional image of an unknown surface to ascertain the local spatial phases therein and the corresponding two-dimensional distances on said surface;
d. multiplying said two-dimensional distances ascertained from processing said image of said unknown surface by a corresponding ratio computed from said images of said three-dimensional spheres to obtain the local magnification of said unknown surface; and e. displaying on said image of said unknown surface a continuous indication of the local magnification of said unknown surface.
6. The process of claim 5 wherein said displaying includes displaying an indication of the diopters of refraction of said surface.
7. The process of claim 5 wherein the local spatial phase is unwrapped to obtain continuous values of local spatial phase with distance from the optical axis of the image.
8. The process of employing a stored program controlled apparatus for displaying a map of the dioptric powers of refraction exhibited over a three-dimensional surface, comprising the steps of:
a. processing a two-dimensional image of a three-dimensional surface illuminated with a quasi-periodic mire pattern to ascertain the instantaneous local spatial phases contained therein;
b. ascertaining from said image the orthogonal distances on said surface at which predetermined ones of said local spatial phases occur;
c. correlating said orthogonal distances with the diopters of refraction of a plurality of known three-dimensional surfaces, and d. displaying a continuous two-dimensional map of the dioptric powers of said three-dimensional surface obtained from said correlating.
9. The process of claim 8 wherein said orthogonal distances are orthogonal to an optical axis of said surface.
10. The process of claim 8 wherein said correlating includes forming a tabulation of said diopters and said orthogonal distances.
11. The process of claim 10 wherein said correlating includes employing said tabulation to ascertain the coefficients of a polynomial relating said orthogonal distances to said diopters of refraction.
12. The process of claim 10 further comprising the step of:
e. ascertaining said orthogonal distances in an image of an unknown three-dimensional surface; and f. employing said tabulation to determine the diopters of refraction of said unknown surface.
13. The process of claim 11 further comprising the step of:
e. ascertaining said orthogonal distances in an image of an unknown three-dimensional surface; and f. employing said polynomial to determine the diopters of refraction of said unknown surface.
14. The process of displaying a map of a three-dimensional surface's third dimensions comprising the steps of:
processing two-dimensional scanned images of three-dimensional surfaces reflecting images of objects to ascertain the continuous variation in local spatial phase with distance from an optical axis in said two-dimensional images;
obtaining a two-dimensional distance (h') on said surface from each of said images corresponding to each said local spatial phase;
compiling a tabulation relating each said two-dimensional distance to a known dimension (h) of each of said objects; and employing said tabulation to relate a two-dimensional distance obtained from said processing of a two-dimensional image of an unknown three-dimensional surface to the third dimension thereof.
15. The process of displaying a map of the dioptric powers of refraction exhibited over an unknown, three-dimensional, quasi-spherical, specular surface, comprising the steps of:
a. processing a plurality of orthogonally scanned images of known specular spheres exhibiting a mire pattern to obtain the coefficients of a polynomial relating (i) the known dioptric powers of said plurality of spheres to (ii) the instantaneous spatial frequencies exhibited by said mire pattern in said images, b. processing an orthogonally scanned image of an unknown three-dimensional quasi-spherical specular surface illuminated with said mire pattern to ascertain the instantaneous spatial frequencies contained in said image, c. substituting said instantaneous spatial frequencies obtained from processing said image of said unknown surface in said polynomial to obtain the dioptric powers of said unknown surface, and d. displaying a two-dimensional map of the dioptric powers of said unknown surface on said image.
16. The process of claim 15 wherein said three-dimensional surface is illuminated with a positioning light spot and wherein said processing includes ascertaining the location of said positioning light spot in said two-dimensional image.
17. The process of claim 16 wherein said processing includes ascertaining from said location of said light spot a differential axial position of said unknown three-dimensional surface.
18. The process of claim 17 wherein said processing includes correcting the value of said dioptric powers of said unknown surface in accordance with said differential axial position.
19. The process of claim 18 wherein said correcting includes processing a series of images of known specular surfaces each displaced by a predetermined axial amount.
20. The process of operating a stored program controlled apparatus to display a two-dimensional map of measurements of the third dimension of a three-dimensional surface reflecting a quasi-periodic mire pattern from an illuminated object, comprising the steps of:
a. storing the two-dimensional distances in each of a plurality of images of three-dimensional surfaces illuminated with said pattern at which predetermined local spatial phases occur;
b. computing, for each of said images, the ratio of said two-dimensional distances to the size of a corresponding portion of said object to ascertain the local magnification of said surface;
c. processing a two-dimensional image of an unknown surface to ascertain the two-dimensional distances therein at which said predetermined local spatial phases occur; and d. displaying on said image of said unknown surface an indication of said magnification.
21. The process of claim 20 wherein said processing includes ascertaining the departure of said location of said light spot from the center of said two-dimensional image.
22. The process of claim 21 wherein said processing includes correcting the value of said dioptric powers of said unknown surface in accordance with said ascertained departure of said light spot from the center of said two-dimensional image.
23. The process of correcting the apparent three-dimensional radius of curvature R' of a corneal surface displayed on a two-dimensional map acquired by processing a two-dimensional image of said surface, said processing proceeding from a predetermined point corresponding to the origin of a mire pattern reflected upon said surface when said surface is displaced from said predetermined point, comprising the steps of:
a. reflecting a spot of light on said surface to define the apex thereof;
b. defining said predetermined point in a stored two-dimensional image of said surface;
c. ascertaining the transverse displacement Δx of said light spot from said predetermined point in said stored image;
and d. calculating the correct radius of curvature R, according to where θ is the angle which the projection of the apparent three-dimensional radius of curvature R' makes with the x-axis in the X,Z plane and φ is the angle of elevation of R' with respect to the X,Y plane.
24. The process of correcting the apparent dioptric powers of a corneal surface displayed on a two-dimensional map acquired by processing a two-dimensional image of said surface, said processing proceeding from a predetermined point corresponding to the origin of a mire pattern reflected upon said surface when said surface is displaced from said predetermined point, comprising the steps of:
a. reflecting a spot of light on said surface to define the apex thereof;

b. defining said predetermined point in a stored two-dimensional image of said surface;
c. ascertaining the transverse displacement Δx of said light spot from said predetermined point in said stored image;
and d. calculating the correct dioptric power D, according to where θ is the angle which the projection of the apparent three-dimensional radius of curvature R' makes with the x-axis in the X,Z plane and φ is the angle of elevation of R' with respect to the X,Y plane.

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US28712694A 1994-08-08 1994-08-08
US287,126 1994-08-08
PCT/US1995/007214 WO1996004839A1 (en) 1994-08-08 1995-06-07 Processing of keratoscopic images using local spatial phase

Publications (2)

Publication Number Publication Date
CA2172284A1 (en) 1996-02-22
CA2172284C (en) 1999-09-28

Family

ID=23101567



Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4794550A (en) * 1986-10-15 1988-12-27 Eastman Kodak Company Extended-range moire contouring
US4863260A (en) * 1987-11-04 1989-09-05 Computed Anatomy Inc. System for topographical modeling of anatomical surfaces
WO1992001417A1 (en) * 1990-07-19 1992-02-06 Horwitz Larry S Vision measurement and correction
JPH04332525A (en) * 1991-05-02 1992-11-19 Topcon Corp Cornea shape measuring instrument
JPH0630900A (en) * 1992-07-13 1994-02-08 Kimiya Shimizu Display method for optical characteristic of cornea

Also Published As

Publication number Publication date
CA2172284A1 (en) 1996-02-22
JPH08510949A (en) 1996-11-19
JP3010072B2 (en) 2000-02-14
WO1996004839A1 (en) 1996-02-22
EP0722285A4 (en) 1998-11-04
EP0722285A1 (en) 1996-07-24

