WO2003074985A1 - Wavefront sensing - Google Patents

Wavefront sensing

Info

Publication number
WO2003074985A1
Authority
WO
WIPO (PCT)
Prior art keywords
radiation
distribution
pixelwise
plane
modulator
Prior art date
Application number
PCT/GB2003/000979
Other languages
French (fr)
Inventor
Simon Christopher Woods
Gavin Robert Geoffrey Erry
Paul Harrison
Alan Howard Greenaway
Original Assignee
Qinetiq Limited
Priority date
Filing date
Publication date
Application filed by Qinetiq Limited
Priority to AU2003212516A
Publication of WO2003074985A1


Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01JMEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J9/00Measuring optical phase difference; Determining degree of coherence; Measuring optical wavelength

Definitions

  • the present invention relates to the determination of properties of a wavefront as it reaches an input pupil, for example of an imaging system or a measuring system.
  • wavefront shape (which may alternatively be regarded as the distribution of local phase)
  • Many known methods are based on the a posteriori reconstruction of a complex amplitude, from energy measurements or from phase-unstable data. Such reconstructions commonly are complex and time consuming.
  • AO adaptive optics
  • One such technique involves the Shack-Hartmann sensor in which the wavefront is passed through an array of lenslets each of which forms a sub-image of the scene. The displacement of the sub-images from the local axial position for each lenslet in the array provides a matrix of local, two-dimensional wavefront gradients, from which the wavefront can be reconstructed by integration. This method has been widely and successfully used in astronomy.
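The Shack-Hartmann principle just described can be sketched in code: each lenslet forms a sub-image, and the centroid displacement of each sub-image from its local axis gives a local two-dimensional wavefront gradient. The layout and names below (sub-image grid, lenslet pitch, focal length) are illustrative assumptions, not part of the invention.

```python
import numpy as np

def shack_hartmann_gradients(subimages, pitch, focal_length):
    """Estimate local wavefront gradients from lenslet sub-images.

    Each sub-image's centroid displacement from its lenslet axis is
    proportional to the mean wavefront slope over that lenslet
    (displacement / focal_length, in the small-angle approximation).
    `subimages` is a 2-D grid of 2-D intensity arrays (assumed layout).
    """
    ny, nx = len(subimages), len(subimages[0])
    grad = np.zeros((ny, nx, 2))
    for i in range(ny):
        for j in range(nx):
            img = np.asarray(subimages[i][j], dtype=float)
            total = img.sum()
            if total == 0:
                continue
            ys, xs = np.indices(img.shape)
            # centroid relative to the sub-image centre (lenslet axis)
            cy = (ys * img).sum() / total - (img.shape[0] - 1) / 2
            cx = (xs * img).sum() / total - (img.shape[1] - 1) / 2
            # convert pixel displacement to wavefront slope
            grad[i, j] = (cx * pitch / focal_length,
                          cy * pitch / focal_length)
    return grad
```

The resulting matrix of gradients is then integrated to reconstruct the wavefront, which is the step the text notes can become costly for extended, low-contrast scenes.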
  • the wavefront shape may be determined by application of the intensity transport equation.
  • This can be a complex and difficult process, which is dealt with in embodiments of the present invention by using certain arrangements for detecting the radiation, and by providing for each weighting coefficient a predetermined matrix of values for use in real time measurement of the coefficient.
  • the matrix of values is related or matched to the arrangement for detecting the radiation, and determination of the matrix, which remains a complex and lengthy process, is effected initially, i.e. prior to the radiation detection.
  • the present invention relates to a version of an established algorithm known as wavefront curvature sensing, which may be regarded as a special case of phase diversity (see, for example, Gonsalves R A, "Phase retrieval and diversity in adaptive optics", Opt. Eng. 21 (1982) 829-832; and Roddier F, "Curvature sensing and compensation: a new concept in adaptive optics", Appl. Opt. 27 (1988) 1223-1225).
  • the present invention provides a measurement method for determining data relating to the local shape (or distribution of local phase) of a radiation wavefront arriving at a pupil plane, wherein said shape is defined by a set of predetermined orthonormal functions, each function being provided with a weighting coefficient for determining the shape, said data comprising at least one said weighting coefficient, the method comprising determining a pixelwise distribution indicative of rate of change of radiation intensity as the radiation traverses the input pupil, and converting said pixelwise distribution to said data, wherein said converting step comprises providing one or more matrices of predetermined values, each said matrix corresponding to one said orthonormal function, and the size of each said matrix corresponding to the number of pixels in said pixelwise distribution, for each said matrix multiplying said pixelwise distribution by said matrix and adding the result to provide said weighting coefficient for its said orthonormal function.
  • the first aspect of the invention extends to measuring apparatus for determining data relating to the local shape (or distribution of local phase) of a radiation wavefront arriving at a pupil plane, wherein said shape is defined by a set of predetermined orthonormal functions, each function being provided with a weighting coefficient for determining the shape, said data comprising at least one said weighting coefficient, the apparatus comprising a said input pupil, means for determining a pixelwise distribution indicative of rate of change of radiation intensity as the radiation traverses the input pupil, and converting means for converting said intensity distribution to said data, wherein said converting means comprises a store holding one or more matrices of predetermined values, each said matrix corresponding to one said orthonormal function, and the size of each said matrix corresponding to the number of pixels in each of said first and second images, and calculating means for multiplying said pixelwise distribution by a said matrix and adding the results to provide said weighting coefficient for its said orthonormal function.
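The claimed converting step can be sketched as follows (function and variable names are illustrative, not from the patent): the pixelwise distribution is multiplied elementwise by each predetermined matrix and the products summed, yielding one weighting coefficient per orthonormal function.

```python
import numpy as np

def coefficients_from_distribution(S, projection_matrices):
    """For each orthonormal function, multiply the pixelwise distribution S
    elementwise by that function's matrix of predetermined values and sum
    the result, giving the function's weighting coefficient."""
    return {name: float(np.sum(S * M))
            for name, M in projection_matrices.items()}
```

In use, the matrices are computed once in advance (the lengthy process described later), and only this fast multiply-and-add runs in real time.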
  • the intensity distributions in the two secondary planes may be obtained by any method known or obvious to the skilled person, for example by the use of a conventional beam splitter placed after the input pupil and directing light to respective detectors or detector arrays. This is shown schematically in Figure 5, in relation to spaced planes, with a beam splitter 24, a CCD camera 26 for receiving a focussed image of plane 22 via a lens 25, and a CCD camera 28 for receiving a focussed image of plane 23 via a lens 27.
  • a possible difficulty is synchronisation of the two cameras, and of course there is the need to ensure and to maintain alignment and calibration.
  • the two intensity distributions are provided on a common plane.
  • a diffraction grating of the general type described in our copending International Patent Application WO 99/46768 mentioned above may be used in conjunction with an imaging lens or other optical imaging system for directing and focussing the light from at least the plus one and minus one orders, and optionally the zero order, onto the plane of a single detector or detector array.
  • the focussed light received by the detector (array) includes simultaneous images of the laterally displaced images of the two secondary planes (plus and minus one orders), and optionally the zero order.
  • the former images are employed to provide the aforesaid distribution of intensity differences.
  • a computer generated hologram may be employed instead of the aforesaid grating for the same purpose, and with or without an additional lens according to the properties of the hologram (which can itself additionally provide the lens function).
  • the said pixelwise distribution is obtained by (means for) providing focussed first and second images of respective first and second secondary planes lying adjacent, and respectively before and after, the pupil plane, and deriving the pixelwise distribution as the pixelwise distribution of intensity difference between the first and second images.
  • the first and second images may alternatively be regarded as images of the pupil plane with relatively minor degrees of positive and negative defocus.
  • the wave is not necessarily an optical wave, and other exemplary areas where a knowledge of phase aberrations may be useful include studies on x-ray diffraction and nuclear structure (see for example van Kampen N G, "S-matrix and causality condition I. Maxwell field", Phys. Rev. 89 (1953) 1072-9); subatomic particles such as in electron microscopy (see for example Misell D L, "An examination of an iterative method for the solution of the phase problem in optics and electron optics", J. Phys. D: Appl. Phys.
  • the radiation is optical (ultraviolet to infra-red), and preferably visible or infra-red.
  • the set of orthonormal functions may be Zernike modes, although other sets could be employed, depending inter alia on the potential application of the data so obtained and the nature of the radiation.
  • a pixellated detector array (although "pixel" in the present context refers to an image of the input pupil, as opposed to the more normal reference to an object plane).
  • a non-pixellated detector, e.g. a conventional CRT camera with subsequent image processing to provide a like result.
  • References herein to a detector should be taken to embrace both spatially pixellated (or digitised) detectors, as well as those providing a continuous indication of intensity with position.
  • the aforementioned use of beam splitters is not only costly, but can give rise to problems in realising the necessary precision in alignment, particularly insofar as it is necessary to ensure accurate relative alignment of the two near-pupil plane intensity distributions.
  • the use of the distorted diffraction grating or the hologram can not only lead to lower costs, but it can help avoid any relative misalignment between the two images of the secondary planes.
  • the number of weighting coefficients provided in use of the method according to the invention can vary from one upwards. If the discrete or "pixellated" array of intensity difference comprises N values, a corresponding number N of weighting coefficients for N respective orthonormal functions can be obtained if required. While this may involve a relatively high number of individual computational steps, they are simple steps capable of being effected quickly. For example, with a 40 by 40 array, giving 1600 intensity difference values effectively across the input pupil, there will be 1600 matrices for 1600 orthonormal functions, each function requiring 1600 multiplications of its matrix values by the 1600 intensity difference values, followed by an addition step.
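The counting in the example above can be organised as a single matrix-vector product: flattening each of the N predetermined matrices into a row of one N x N matrix turns the N sets of N multiplications and additions into one operation. A minimal sketch, with random stand-in values (the real matrices come from the offline computation described later in the text):

```python
import numpy as np

# For an n-by-n detector there are N = n*n intensity-difference values.
# Stacking each flattened predetermined matrix as a row of one N x N
# matrix turns the whole conversion into a single matrix-vector product.
n = 40
N = n * n                                  # 1600 values, as in the example
rng = np.random.default_rng(0)
projector = rng.standard_normal((N, N))    # stand-in for the real matrices
S = rng.standard_normal(N)                 # flattened difference image
coeffs = projector @ S                     # all N coefficients at once
assert coeffs.shape == (N,)
```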
  • the fineness of the structure of the phase distribution to which the method of the invention relates is at least in part determined by the number of values in the discrete or "pixellated" array of intensity difference.
  • the light modulator comprises a distortable membrane mirror capable of phase correcting for 37 modes, and so it is only necessary to determine the corresponding 37 weighting coefficients.
  • the number of correctable modes might correspond to the number of elements, i.e. 64.
  • Only one coefficient may be required in certain cases, such as in range finding or the determination of the degree of focus (see later).
  • the first two Zernike modes are tip and tilt.
  • a simple tip/tilt movable mirror can be employed.
  • Other Zernike modes will require more complex arrangements such as those just exemplified.
  • Another reason for not determining the weighting coefficients for a larger set of the orthonormal functions may be the time periods involved. As the phase structure becomes more finely resolved, the fine structure progressively relates to spatially small features, whereas a lower number of functions would relate to larger scale features. It is to be expected, or likely, that large scale features change slowly relative to the smaller scale features. In a real time application such as image correction, there is little point in determining the weighting coefficient for a function if that weighting coefficient is expected to change at a rate comparable to or faster than the rate at which the coefficient can be determined.
  • the measurement of the weighting coefficient(s) may be useful in itself, for example in giving a measure of the degree of phase perturbation, or the magnitude of the cause giving rise to such perturbation.
  • one particular application of the method is to imaging, and in particular to correcting for phase perturbations arising as the radiation travels towards the input pupil of an imaging system.
  • visible light is commonly affected by atmospheric turbulence and density changes, particularly when the viewed object is relatively distant, and further perturbation may arise for example if scintillation occurs, or there is atmospheric pollution.
  • the focussed image can be improved by determining phase aberrations in the received wavefront and using corrective adaptive optics in the imaging system, but this is difficult in real time.
  • the corrective optics means may be used independently of the phase measurement (for example by using a beam splitter), but in a preferred embodiment the wavefront measuring system and the imaging system receive light from the same corrective optics means, and the arrangement is such as to minimise the measured phase aberrations.
  • the invention extends to an imaging method (preferably, but not necessarily, optical, and most preferably visible or infra-red imaging) in which weighting coefficient(s) are obtained according to the first aspect of the invention, and wherein corrective optics are incorporated in the imaging system and are controlled in response to the determined weighting coefficients.
  • the corrective optics can be a spatial phase modulator, for example a deformable mirror membrane or a micro-mechanical movable mirror array, optionally with a tip/tilt mirror for dealing separately with the first two Zernike modes. These modes have the potential for involving relatively large scale angular corrections, and a separate mirror is therefore preferred, although, if necessary, these modes could be dealt with by the same spatial phase modulator as deals with other mode(s).
  • the corrective optics may be peculiar to the path for the imaging radiation. That is to say, the weighting coefficients are determined using light which does not encounter the phase control means.
  • the phase control means is common to the imaging radiation and that used for determination of the weighting coefficients. Operation of this embodiment may be such that the weighting coefficients are reduced as far as possible, and preferably completely nulled, by operation of the phase control means in a feed back loop. It will be appreciated that the signal in the feed back loop remains indicative of the weighting coefficients determined according to the first aspect of the invention.
  • a light beam for example a laser beam
  • the beam wavefront may be measured according to the invention and adaptive optics used to control it.
  • the adaptive optics lies in a main optical path, and the measuring system may be located either in the main path e.g. in a null sensing system, or in a separate path (for example from a beam splitter).
  • Another application of the invention is to the determination of atmospheric turbulence, which is a function of the measured coefficients, and which may be used for example in the correction or control of other (non-imaging) optical measurements.
  • a further application is in the field of focus or range determination and autofocus.
  • the wavefront mode being sensed is the component of defocus a (Zernike polynomial Z2,0(r, θ))
  • the range of the source is related to the coefficient of defocus a by
  • R is the front focal length of the system (i.e. the range at which an object is perfectly focussed)
  • D is the diameter of the pupil
  • the invention works well when the viewed field contains a single high intensity point, enabling well defined and spatially separate images of light from the point in the secondary planes either side of the pupil plane to be formed on a single sensor surface. If no such high intensity point exists, it is possible to provide one, for example by suitable illumination using a laser beam. Where there are no such singular points there may be overlap of the images on the sensor plane, and the measurement process may be correspondingly difficult.
  • a plurality of points in the viewed field are illuminated, for example using a light source or laser with an appropriate diffraction grating. It is arranged that the corresponding two sets of points on the sensor plane (i.e. from the planes either side of the input pupil) are spatially separate. In this way measurements, for example of range, can be taken at a number of points in the field of view.
  • the provision of one or more predetermined matrices facilitates essentially real time calculation of the weighting coefficient(s) since the processing amounts to image intensity subtraction or comparison, matrix multiplication of the resulting intensity distribution and addition of the results, all of which are simple steps capable of being performed very rapidly on any modern computer.
  • the calculation of the matrices themselves is a much more complicated process taking several hours or longer to perform, and is performed prior to sensing of a radiation wavefront.
  • the method of the invention has been implemented experimentally and has been found to provide fast, accurate and robust wavefront reconstruction.
  • FIG. 1 schematically illustrates an adaptive optics imaging system employing apparatus according to the invention
  • Figure 2 schematically illustrates the principle behind the determination of the rate of change of radiation (light) intensity at an input pupil
  • Figure 3 schematically illustrates an arrangement for determining the pixelwise rate of change of intensity of radiation as it passes between a pair of closely spaced planes either side of an input pupil
  • Figure 4 schematically illustrates an alternative arrangement for determining the pixelwise rate of change of intensity of radiation as it passes between a pair of closely spaced planes either side of an input pupil
  • Figure 5 shows an alternative method of obtaining the requisite first and second focussed images on separate CCD cameras
  • Figure 6 is an image for illustrating a method of correcting for background radiation.
  • a tip/tilt mirror 5, i.e. a mirror which can be moved angularly about two axes
  • a spatial phase modulator 6 comprising a deformable mirror.
  • Light from modulator 6 is passed through a beamsplitter 7 for reflection by a fixed mirror 8 and transmitted by an imaging lens 9 acting to focus light from the object plane 1 on the image plane 2.
  • Light reflected by the splitter 7 is passed to and detected by a wavefront sensor 11 which determines the weighting coefficients for the first (tip/tilt) and other selected Zernike modes, and produces respective signals 12, 13 for correspondingly controlling the tip/tilt mirror 5 and the modulator 6.
  • the mirror 5 and modulator 6 are common to the optical paths to the focal plane 2 and the sensor 11, and the latter is arranged to minimise or null the weighting coefficients in the wavefront of the sensed light, so that the optical wavefront 14 transmitted towards the focal plane is rendered more ideal - that is substantially planar, having had phase perturbations or distortion relative to a planar wavefront substantially removed by the action of the mirror 5 and modulator 6.
  • the light 31 is a laser beam with potential imperfections in the wavefront, not necessarily due to atmospheric turbulence. The, or some, imperfections are removed in the output beam as indicated at 14. Reflector 8 and lens 9 may not be required.
  • the embodiment of Figure 1 or its variant may be further varied by relocating the beam splitter 7 and sensor 11 before the tip/tilt mirror 5 and modulator 6 and using the output of sensor 11 for forward control of mirror 5 and modulator 6 without the use of a nulling system.
  • the coefficient of the Zernike polynomial Z2,0(r, θ) pertaining to defocus may be employed either to control the modulator 6, or to control the position of the lens 9, for auto-focus.
  • This coefficient may also or alternatively be employed to provide the range of an object in the field of view.
  • Figure 2 schematically illustrates the principle behind the determination of the rate of change of radiation (light) intensity at the input pupil 4. This is discussed in more detail in the theoretical part of the description to follow, but for now it is noted that phase distortions are effectively manifested as distortions from planarity (in this case; any standard shape, e.g. spherical with a predetermined rate of curvature, could be used) and that such distortions cause the radiation to converge or diverge, giving rise to a variation in local intensity as the radiation travels between two planes 22, 23 closely and equally spaced either side of the pupil plane (not shown). This variation can be measured and presented in a pixelwise manner for use in the apparatus of the present invention, e.g. in the sensor 11 of Figure 1.
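The variation between the two closely spaced planes amounts to a finite-difference estimate of the axial intensity derivative at the pupil. A minimal sketch, with illustrative names:

```python
import numpy as np

def intensity_derivative(I_before, I_after, dz):
    """Pixelwise estimate of dI/dz at the pupil from images of two
    planes spaced dz apart, one either side of the pupil plane."""
    return (np.asarray(I_after, float) - np.asarray(I_before, float)) / dz
```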
  • FIG 3 shows how the optical input to the sensor 11 could be derived.
  • a focussing lens 15, such as the biconvex lens illustrated, is arranged to focus an image of an object plane 16 on a measuring plane 17 of an image sensor such as a CCD or other digital camera array.
  • a quadratically distorted diffraction grating 18 of the type generally described in our copending International Patent Application WO 99/46768.
  • In addition to the zero order component, which provides an image of the plane 16 centrally on plane 17 at a position 19, the grating provides at least useful plus one and minus one diffraction orders.
  • An alternative and preferred arrangement for providing an optical input to the sensor 11 is shown in Figure 4, where the lens 15 is set to focus sources at infinity on a central focussed spot 19' on the sensor 17.
  • the grating 18 acts with the lens 15 so that an image of a closer real image plane A is provided by the plus one diffraction order at a position 20' on the sensor 17 on one side of the central position 19', and an equal magnification image 21' of a virtual plane is provided by the minus one diffraction order at an equal distance on the other side of the central position 19'.
  • the virtual plane is shown as a plane C, shown on the other side of the lens 15, but the skilled person will realise that this represents an image plane effectively beyond infinity, i.e. the images 20' and 21' effectively relate to planes either side of the pupil plane.
  • the images 20' and 21' are used in lieu of the images 20 and 21 of Figure 3.
  • Multiplying Equation (1) on the left by u_z*(r), and the complex conjugate of Equation (1) on the left by u_z(r), and subtracting the resulting expressions, leads to the Intensity Transport Equation (ITE),
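The ITE itself is not reproduced in this text. For reference, in its standard form (Teague's transport-of-intensity equation, stated here as background rather than as the patent's exact notation) it reads:

```latex
% Transport-of-intensity equation (standard form), relating the axial
% intensity derivative to the transverse intensity and phase:
k\,\frac{\partial I(\mathbf{r},z)}{\partial z}
  = -\,\nabla_{\!\perp}\cdot\bigl(I(\mathbf{r},z)\,
      \nabla_{\!\perp}\phi(\mathbf{r},z)\bigr)
```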
  • Equation (2) can be written
  • the wavefront curvature can be obtained from Equation (4).
  • A denotes the system aperture
  • δ_C is a delta-function around C
  • n is an outward-pointing unit vector normal to C.
  • Equation (3) then becomes
  • Measurement of the axial intensity derivative thus consists of both wavefront curvature within the aperture and wavefront slope around the aperture edge. This information is sufficient to determine the phase uniquely to within an arbitrary additive constant, which is consistent with an inability to measure absolute phase.
  • n is the unit vector normal to P. If the function g(r') is replaced by the function G(r,r') satisfying
  • the function G(r,r') is called a Green's function and is defined by its Laplacian according to Equation (7). This means we are free to choose the boundary conditions for the Green's function. In general this choice is used to simplify Equation (8) by eliminating one of the two perimeter integrals.
  • n · ∇f(r) for r ∈ P,
  • Equation (8)
  • The RHS of this equation is identically zero, from Equation (9).
  • the LHS must therefore also be zero. This is clearly not the case if Equation (7) is satisfied exactly; so instead we must have
  • Equation (5) we see that the axial intensity derivative contains information about the normal wavefront slope around the aperture edge, rather than the function itself.
  • the problem therefore involves Neumann boundary conditions in a natural way, so we adopt a solution with a Green's function satisfying Equation (9). Inserting this into Equation (8) and replacing f(r) by φ(r) gives the final solution for the wavefront phase as
  • the aperture function W_A restricts the integral over ∇²φ to the region A, and the delta-function δ_C converts the integral over n · ∇φ into a line integral around C. Therefore we have:
  • Equation (15) becomes:
  • the coefficient is obtained by the integral of the signal S(r) multiplied by a 'modal projector function' G_k(r).
  • the wavefront will be expressed as a vector, each value corresponding to a sample of the wavefront at a particular point, or the mean value of the wavefront over a small area.
  • the corresponding expansion functions are:
  • Equation (16) Equation (17).
  • the function S(r') is approximated by the difference between two intensity distributions on either side of the aperture plane, separated by a small distance Δz:
  • expressed pixelwise with sample values S_j, the complete function is therefore approximated by S(r') ≈ Σ_j S_j (δ(r' − ρ_j) ⊗ Π(r')).
  • the modal projector function G_k(r) must be suitably defined in the region outside the aperture as well as within it. It has been found that a suitable scheme for circular apertures is to continue the boundary condition of Equation (9) to infinity. That is, the value of the Green's function at any point outside the aperture is taken to be the same as the value at the edge of the aperture at the same angle. Various other schemes were tested and this one was found to give the best performance in terms of minimising the error when sensing the low-order Zernike modes.
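The continuation scheme just described can be sketched as follows (the square-grid sampling, function name, and arguments are illustrative assumptions): each point outside the circular aperture takes the projector value found on the aperture edge at the same polar angle.

```python
import numpy as np

def extend_projector(G, radius):
    """Continue a modal projector function outside a circular aperture:
    each outside point takes the value found at the aperture edge at the
    same polar angle, per the scheme described in the text."""
    n = G.shape[0]
    c = (n - 1) / 2.0                      # aperture centre, in pixels
    ys, xs = np.indices(G.shape)
    r = np.hypot(xs - c, ys - c)
    theta = np.arctan2(ys - c, xs - c)
    # nearest grid point on the aperture edge at each pixel's angle
    ex = np.clip(np.round(c + radius * np.cos(theta)), 0, n - 1).astype(int)
    ey = np.clip(np.round(c + radius * np.sin(theta)), 0, n - 1).astype(int)
    out = G.copy()
    outside = r > radius
    out[outside] = G[ey[outside], ex[outside]]
    return out
```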
  • the re-formulation of the phase-diversity algorithm provides fast wavefront reconstructions and requires only a simple matrix multiply for data inversion.
  • the data collection is achieved in a single image from a single, pixellated focal plane by use of a distorted diffraction grating.
  • the algorithm is capable of working with both point sources and with extended, low- contrast scenes and can provide wavefront reconstruction data in any required set of basis functions.
  • the formulation for point-by-point or for Zernike-polynomial decompositions is equally easily achieved.
  • the algorithm can be used on wavefronts with severe aberrations, with extended sources, with partially obscured wavefronts, with strongly-scintillated wavefronts and with some cases where the wavefront is discontinuous (e.g. in multiply-connected pupils). It has already been tested in computer simulation and experimentally and has been found to be robust to many sources of experimental error. In most cases it is found experimentally that a departure from the theoretical restrictions, implied expressly or implicitly by the theory, leads to a wavefront reconstruction that is low-pass filtered. In this case the reconstruction of the lowest order modes, which are generally the most important sources of image degradation in the terrestrial imaging applications for which the algorithm was formulated, is found to be very accurate.
  • the wavefront sensor is used with a compact high brightness source on a dark background, such as a laser beacon imaged through a narrow-band filter, the accuracy of the results is good.
  • a natural beacon such as a sun glint or a bright object on a dark background is sought.
  • the source power is unlikely to be concentrated in a narrow waveband, making it difficult or impossible to eliminate background radiation from the rest of the viewed scene, and this tends to lead to a loss in accuracy.
  • This loss arises because the basic wavefront curvature signal is given by the difference in the two intensity profiles measured or recorded either side of the system pupil, divided by their integration sum for normalisation:
  • the curvature signal takes the form:
  • S and B indicate source and background contributions to the intensity profiles.
  • the correct curvature signal can be obtained.
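The correction can be sketched as below (function and argument names are illustrative): form the normalised curvature signal from the two intensity profiles, optionally after subtracting a separately measured background profile from each.

```python
import numpy as np

def curvature_signal(I1, I2, background=None):
    """Normalised curvature signal from the intensity profiles either side
    of the pupil: (I1 - I2) / (I1 + I2). If a separately measured
    background profile is supplied (an assumption about how B is known),
    it is subtracted from each profile before normalising."""
    I1 = np.asarray(I1, dtype=float)
    I2 = np.asarray(I2, dtype=float)
    if background is not None:
        B = np.asarray(background, dtype=float)
        I1, I2 = I1 - B, I2 - B
    denom = I1 + I2
    return np.divide(I1 - I2, denom, out=np.zeros_like(denom),
                     where=denom != 0)
```

With a uniform background of the same order as the source, the uncorrected signal is substantially diluted, consistent with the loss of accuracy described above.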
  • the scene consisted of a uniformly bright background with an off- axis source of twice the background intensity.
  • a defocus of 0.5 was assigned to all points within the scene and the wavefront sensor intensity profiles calculated as well as the image, which is shown in Figure 6 as an out-of focus spot 29 on a darker background 30.
  • the defocus measurement without background correction was 0.07 waves, significantly in error.


Abstract

The wavefront shape of radiation arriving at a pupil plane (4) is defined by a set of predetermined orthonormal functions, such as Zernike functions, with a weighting coefficient for each function determining the shape. Apparatus for determining at least one such weighting coefficient comprises rate means responsive to radiation as it traverses the input pupil (4) to determine a pixelwise distribution indicative of rate of radiation intensity change thereat, and converting means for converting the intensity distribution to one or more weighting coefficients. The converting means comprises a store holding one or more matrices of predetermined values, each said matrix corresponding to one said orthonormal function, and the size of each said matrix corresponding to the number of pixels in the pixelwise distribution, and calculating means for multiplying said pixelwise distribution by a said matrix and adding the results to provide the weighting coefficient for its orthonormal function. The rate of radiation intensity change at pupil (4) is conveniently approximated by measurements of intensity in closely adjacent planes (22, 23) either side of pupil (4).

Description

Wavefront Sensing
The present invention relates to the determination of properties of a wavefront as it reaches an input pupil, for example of an imaging system or a measuring system.
Our copending International Patent Application No. WO 99/46768 (published in the name of the Secretary of State for Defence, inventors Paul Blanchard and Alan Greenaway) describes a 3-dimensional imaging system utilising a diffraction grating which is distorted according to a quadratic function. When located adjacent an imaging system such as a biconvex lens, the zero order light from the grating provides the normal focussed image in an imaging plane via the imaging system, and the positive and negative diffraction orders from the grating, particularly the first orders (although higher orders could contribute) provide respective focussed images in the same imaging plane but laterally displaced by respective amounts in opposed directions from the normal image.
A related article by the same inventors, "Simultaneous Multi-plane Imaging with a Distorted Diffraction Grating" Applied Optics 38 (32), 6692-6699 (1999) also mentions applications of such a system, including the improvement of imaging systems by sensing phase aberration introduced for example by the atmosphere and using corrective adaptive optics to improve the focussed image. It will be understood that atmospheric and other effects will lead to a distortion of a wavefront as it propagates, and that this includes a variation in phase with solid angle over that ideally expected.
More generally, it is desirable to be able to determine wavefront shape (which may be alternatively regarded as distribution of local phase), either as an absolute measurement or as an aberration from an ideal shape or phase distribution. Many known methods are based on the a posteriori reconstruction of a complex amplitude, from energy measurements or from phase-unstable data. Such reconstructions commonly are complex and time consuming. However, the need in some fields for fast accurate and robust determinations of wavefront shape or phase aberrations has led to a number of real time techniques. This is so particularly but not exclusively in the optical field, with the increasing application of adaptive optics (AO), using real- time wavefront modulation to correct stochastic aberrations (usually, but not necessarily, turbulence induced).
At least for some applications such as the aforementioned adaptive optical system, it is even more desirable to be able to do so in what is essentially real time. One such technique involves the Shack-Hartmann sensor in which the wavefront is passed through an array of lenslets each of which forms a sub-image of the scene. The displacement of the sub-images from the local axial position for each lenslet in the array provides a matrix of local, two-dimensional wavefront gradients, from which the wavefront can be reconstructed by integration. This method has been widely and successfully used in astronomy.
However, if the scene whose image is to be corrected for aberrations is extended and low-contrast, the detector format for each sub-image, and the computational effort required to compute the displacements of the sub-images from their local axes (usually by cross correlation of the sub-images), can become prohibitive. Other approaches, such as image sharpness and wavefront-shearing interferometry have been used to circumvent these problems with varying degrees of success.
It is known that the shape of a wavefront can be described by a set of appropriately weighted orthonormal functions. The functions themselves are predetermined, but the weighting is variable, and it is the weighting coefficients which define the shape.
It is also known that the wavefront shape may be determined by application of the intensity transport equation. This can be a complex and difficult process, which is dealt with in embodiments of the present invention by using certain arrangements for detecting the radiation, and by providing for each weighting coefficient a predetermined matrix of values for use in real time measurement of the coefficient. The matrix of values is related or matched to the arrangement for detecting the radiation, and determination of the matrix, which remains a complex and lengthy process, is effected initially, i.e. prior to the radiation detection.
The present invention relates to a version of an established algorithm known as wavefront curvature sensing, which may be regarded as a special case of phase diversity (see, for example, Gonsalves R A, "Phase retrieval and diversity in adaptive optics", Opt. Eng. 21 (1982) 829-832; and Roddier F, "Curvature sensing and compensation: a new concept in adaptive optics", Appl. Opt. 27 (1988) 1223-1225).
Generally, 'phase-diverse' solutions to the problem of wavefront reconstruction from two intensity measurements are related to the 'two-defocus' algorithm described in Misell D L, "An examination of an iterative method for the solution of the phase problem in optics and electron optics", J. Phys. D.; Appl. Phys. 6 (1973) 2200-2216.
In a first aspect the present invention provides a measurement method for determining data relating to the local shape (or distribution of local phase) of a radiation wavefront arriving at a pupil plane, wherein said shape is defined by a set of predetermined orthonormal functions, each function being provided with a weighting coefficient for determining the shape, said data comprising at least one said weighting coefficient, the method comprising determining a pixelwise distribution indicative of rate of change of radiation intensity as the radiation traverses the input pupil, and converting said pixelwise distribution to said data, wherein said converting step comprises providing one or more matrices of predetermined values, each said matrix corresponding to one said orthonormal function, and the size of each said matrix corresponding to the number of pixels in said pixelwise distribution, and for each said matrix multiplying said pixelwise distribution by said matrix and adding the results to provide said weighting coefficient for its said orthonormal function.
The first aspect of the invention extends to measuring apparatus for determining data relating to the local shape (or distribution of local phase) of a radiation wavefront arriving at a pupil plane, wherein said shape is defined by a set of predetermined orthonormal functions, each function being provided with a weighting coefficient for determining the shape, said data comprising at least one said weighting coefficient, the apparatus comprising a said input pupil, means for determining a pixelwise distribution indicative of rate of change of radiation intensity as the radiation traverses the input pupil, and converting means for converting said pixelwise distribution to said data, wherein said converting means comprises a store holding one or more matrices of predetermined values, each said matrix corresponding to one said orthonormal function, and the size of each said matrix corresponding to the number of pixels in said pixelwise distribution, and calculating means for multiplying said pixelwise distribution by a said matrix and adding the results to provide said weighting coefficient for its said orthonormal function.
As will be described in a largely theoretical section later, it is necessary to know how the local intensity of the radiation across the input pupil is changing as it passes through the input pupil. As particularly described, this is not effected directly, but by measuring the distributions of intensity in respective secondary planes either side of, and adjacent to, the input pupil, and measuring the distribution of intensity differences between the two distributions.
The intensity distributions in the two secondary planes may be obtained by any method known or obvious to the skilled person, for example by the use of a conventional beam splitter placed after the input pupil and directing light to respective detectors or detector arrays. This is shown schematically in Figure 5, which shows the spaced planes 22 and 23, a beam splitter 24, a CCD camera 26 for receiving a focussed image of plane 22 via a lens 25, and a CCD camera 28 for receiving a focussed image of plane 23 via a lens 27. A possible difficulty is synchronisation of the two cameras, and of course there is the need to ensure and to maintain alignment and calibration.
However, in a particularly preferred optical embodiment the two intensity distributions are provided on a common plane. For example in an optical system a diffraction grating of the general type described in our copending International Patent Application WO 99/46768 mentioned above may be used in conjunction with an imaging lens or other optical imaging system for directing and focussing the light from at least the plus one and minus one orders, and optionally the zero order, onto the plane of a single detector or detector array. The focussed light received by the detector (array) includes simultaneous images of the laterally displaced images of the two secondary planes (plus and minus one orders), and optionally the zero order. The former images are employed to provide the aforesaid distribution of intensity differences. A computer generated hologram may be employed instead of the aforesaid grating for the same purpose, and with or without an additional lens according to the properties of the hologram (which can itself additionally provide the lens function).
Other arrangements for obtaining both intensity distributions in a common plane are described in our copending UK Patent Application No. GB 0301923.9 (ref: P21386GB).
Thus in a preferred embodiment the said pixelwise distribution is obtained by (means for) providing focussed first and second images of respective first and second secondary planes lying adjacent, and respectively before and after, the pupil plane, and deriving the pixelwise distribution as the pixelwise distribution of intensity difference between the first and second images. The first and second images may alternatively be regarded as images of the pupil plane with relatively minor degrees of positive and negative defocus.
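The intensity-difference step just described can be sketched as follows (a minimal NumPy illustration; the function and array names are ours, not the patent's):

```python
import numpy as np

def difference_signal(image_plus, image_minus):
    """Pixelwise intensity-difference distribution between the two
    defocused pupil images (illustrative sketch only).

    image_plus  -- intensity image of the secondary plane after the pupil
    image_minus -- intensity image of the secondary plane before the pupil
    """
    image_plus = np.asarray(image_plus, dtype=float)
    image_minus = np.asarray(image_minus, dtype=float)
    # Normalise each image to unit total energy so that overall gain
    # differences between the two optical paths do not masquerade as
    # wavefront curvature.
    image_plus = image_plus / image_plus.sum()
    image_minus = image_minus / image_minus.sum()
    return image_plus - image_minus
```

With this normalisation the difference distribution always sums to zero, consistent with energy conservation between the two planes.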
The wave is not necessarily an optical wave, and other exemplary areas where a knowledge of phase aberrations may be useful include studies on x-ray diffraction and nuclear structure (see for example van Kampen N G, "S-matrix and causality condition I. Maxwell field", Phys. Rev. 89 (1953) 1072-9); subatomic particles such as in electron microscopy (see for example Misell D L, "An examination of an iterative method for the solution of the phase problem in optics and electron optics", J. Phys. D: Appl. Phys. 6 (1973) 2200-2216; and Gerchberg R W and Saxton W O, "A practical algorithm for the determination of phase from image and diffraction plane pictures", Optik 35 (1972) 237-246); and phase-unstable interferometry (see for example Schwarz U J, "Mathematical-statistical description of the iterative beam removal technique (method CLEAN)", Astron. Astrophys. 65 (1978) 345-356; and Lannes A, "Backprojection mechanisms in phase-closure imaging", Exp. Astron. 1 (1989) 47-76). As particularly described and as implemented at present the radiation is optical (ultraviolet to infra-red), and preferably visible or infra-red.
The set of orthonormal functions may be Zernike modes, although other sets could be employed, depending inter alia on the potential application of the data so obtained and the nature of the radiation. For later processing, it is necessary to provide the distribution of intensity differences in the form of a spatially discrete array of values. Accordingly, it is preferred to employ a pixellated detector array (although "pixel" in the present context refers to an image of the input pupil as opposed to the more normal reference to an object plane). However, as will be recognised by the reader, it is possible to use a non-pixellated detector, e.g. a conventional CRT camera with subsequent image processing to provide a like result. References herein to a detector should be taken to embrace both spatially pixellated (or digitised) detectors, as well as those providing a continuous indication of intensity with position.
The aforementioned use of beam splitters is not only costly, but can give rise to problems in realising the necessary precision in alignment, particularly insofar as it is necessary to ensure accurate relative alignment of the two near-pupil plane intensity distributions. The use of the distorted diffraction grating or the hologram can not only lead to lower costs, but it can help avoid any relative misalignment between the two images of the secondary planes.
The number of weighting coefficients provided in use of the method according to the invention can vary from one upwards. If the discrete or "pixellated" array of intensity difference comprises N values, a corresponding number N of weighting coefficients for N respective orthonormal functions can be obtained if required. While this may involve a relatively high number of individual computational steps, they are simple steps capable of being effected quickly. For example, with a 40 by 40 array, giving 1600 intensity difference values effectively across the input pupil, there will be 1600 matrices for 1600 orthonormal functions, each function requiring 1600 multiplications of its matrix values by the 1600 intensity difference values, followed by an addition step.
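The multiply-and-add step described above amounts to an elementwise product and sum per mode, which can be sketched as follows (illustrative names; the matrices themselves are assumed to have been precomputed):

```python
import numpy as np

def coefficients(signal, projector_matrices):
    """Multiply the pixelwise intensity-difference distribution by each
    precomputed matrix and sum, giving one weighting coefficient per
    orthonormal function (illustrative sketch only).

    signal             -- (H, W) pixelwise intensity-difference array
    projector_matrices -- (N, H, W) stack, one matrix per mode
    """
    signal = np.asarray(signal, dtype=float)
    mats = np.asarray(projector_matrices, dtype=float)
    # Elementwise multiply each mode's matrix by the signal, then add
    # the products: one multiply-accumulate pass per coefficient.
    return (mats * signal).sum(axis=(1, 2))
```

For the 40 by 40 example this is 1600 multiplications and one summation per mode, all of which vectorise trivially and run in real time.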
It should be clear that the fineness of the structure of the phase distribution to which the method of the invention relates is at least in part determined by the number of values in the discrete or "pixellated" array of intensity difference. However, in many applications it is not necessary to determine all of the possible orthonormal weighting coefficients. For example, in the above mentioned use in correcting optical images for atmospheric distortion a corrective or adaptive light modulator placed in the optical imaging path may only be capable of dealing with a limited number of modes corresponding to a like number of the orthonormal functions, so that it is only necessary to determine the corresponding limited number of weighting coefficients. In one particular embodiment, the light modulator comprises a distortable membrane mirror capable of phase correcting for 37 modes, and so it is only necessary to determine the corresponding 37 weighting coefficients. If another light modulator is employed for phase correction, for example a micro-electro-mechanical (MEMS) displaceable mirror array, e.g. of 8 by 8 elements, the number of correctable modes might correspond to the number of elements, i.e. 64.
Only one coefficient may be required in certain cases, such as in range finding or the determination of the degree of focus (see later).
The first two Zernike modes are tip and tilt. To correct for these modes, e.g. to correct the pointing of an imaging system or a laser beam, a simple tip/tilt movable mirror can be employed. Other Zernike modes will require more complex arrangements such as those just exemplified.
Another reason for not determining the weighting coefficients for a larger set of the orthonormal functions may be the time periods involved. As the phase structure becomes more finely resolved, the fine structure progressively relates to spatially small features, whereas a lower number of functions would relate to larger scale features. It is to be expected, or likely, that large scale features change slowly relative to the smaller scale features. In a real time application such as image correction, there is little point in determining the weighting coefficient for a function if that weighting coefficient is expected to change at a rate comparable to or faster than the rate at which the coefficient can be determined.
While the measurement of the weighting coefficient(s) may be useful in itself, for example in giving a measure of the degree of phase perturbation, or the magnitude of the cause giving rise to such perturbation, one particular application of the method is to imaging, and in particular to correcting for phase perturbations arising as the radiation travels towards the input pupil of an imaging system. For example, visible light is commonly affected by atmospheric turbulence and density changes, particularly when the viewed object is relatively distant, and further perturbation may arise for example if scintillation occurs, or there is atmospheric pollution. As indicated previously, the focussed image can be improved by determining phase aberrations in the received wavefront and using corrective adaptive optics in the imaging system, but this is difficult in real time. Thus one reason for the development of the present invention is to meet a need for real-time wavefront reconstruction in terrestrial imaging applications, where the scene under observation is extended and intrinsically low contrast and where the use of laser beacons to give a wavefront sensing reference is prohibited because of eye safety or other issues.
The corrective optics means may be used independently of the phase measurement (for example by using a beam splitter), but in a preferred embodiment the wavefront measuring system and the imaging system receive light from the same corrective optics means, and the arrangement is such as to minimise the measured phase aberrations.
Thus the invention extends to an imaging method (preferably, but not necessarily, optical, and most preferably visible or infra-red imaging) in which weighting coefficient(s) are obtained according to the first aspect of the invention, and wherein corrective optics are incorporated in the imaging system and are controlled in response to the determined weighting coefficients. The corrective optics can be a spatial phase modulator, for example a deformable membrane mirror or a micro-mechanical movable mirror array, optionally with a tip/tilt mirror for dealing separately with the first two Zernike modes. These modes have the potential for involving relatively large scale angular corrections, and a separate mirror is therefore preferred, although, if necessary, these modes could be dealt with by the same spatial phase modulator as deals with other mode(s).
In the simplest implementation, the corrective optics may be peculiar to the path for the imaging radiation. That is to say, the weighting coefficients are determined using light which does not encounter the phase control means. However, in a preferred embodiment, the phase control means is common to the imaging radiation and that used for determination of the weighting coefficients. Operation of this embodiment may be such that the weighting coefficients are reduced as far as possible, and preferably completely nulled, by operation of the phase control means in a feedback loop. It will be appreciated that the signal in the feedback loop remains indicative of the weighting coefficients determined according to the first aspect of the invention.
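The feedback nulling scheme can be sketched as a simple integrator loop (all names here are hypothetical stand-ins, not the patent's apparatus; a practical AO loop would add temporal filtering and actuator limits):

```python
def null_loop(measure_coefficients, apply_correction, gain=0.5, iterations=20):
    """Minimal closed-loop sketch of the nulling scheme described above.

    measure_coefficients() -- returns the current weighting coefficients
                              seen by the wavefront sensor
    apply_correction(dc)   -- adjusts the phase modulator so as to remove
                              the supplied coefficient increments
    """
    for _ in range(iterations):
        coeffs = measure_coefficients()
        # Integrator control: remove a fraction of the measured error each
        # pass, so residual coefficients decay geometrically toward zero.
        apply_correction([gain * c for c in coeffs])
```

With a loop gain below unity the residual coefficients shrink by the factor (1 - gain) per iteration, which is the sense in which the sensed signal "remains indicative" of the aberration while being driven to null.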
In a similar way to the above application of adaptive imaging, a light beam, for example a laser beam, may be controlled. The beam wavefront may be measured according to the invention and adaptive optics used to control it. As in the imaging system the adaptive optics lies in a main optical path, and the measuring system may be located either in the main path e.g. in a null sensing system, or in a separate path (for example from a beam splitter).
Another application of the invention is to the determination of atmospheric turbulence, which is a function of the measured coefficients, and which may be used for example in the correction or control of other (non-imaging) optical measurements.
A further application is in the field of focus or range determination and autofocus. When the wavefront mode being sensed is the component of defocus a (Zernike polynomial Z_{2,0}(r,θ)), the range z of the source is related to the coefficient of defocus a by

a = (D² / 16√3) (1/z − 1/R)

where R is the front focal length of the system (i.e. the range at which an object is perfectly focussed), and D is the diameter of the pupil.
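Assuming the common Zernike normalisation Z_{2,0} = √3(2ρ² − 1), with the defocus coefficient a expressed in the same length units as the wavefront error (an assumption on our part; the exact convention used in the patent may differ), the defocus-to-range relation is a = (D²/16√3)(1/z − 1/R), which can be inverted for the range:

```python
import math

def defocus_from_range(z, focal_range, pupil_diameter):
    """Defocus coefficient a for a point source at range z (sketch only;
    assumes Z_{2,0} = sqrt(3)*(2*rho**2 - 1) normalisation)."""
    return pupil_diameter**2 / (16.0 * math.sqrt(3)) * (1.0 / z - 1.0 / focal_range)

def range_from_defocus(a, focal_range, pupil_diameter):
    """Invert the relation above to recover the source range from the
    measured defocus coefficient."""
    return 1.0 / (1.0 / focal_range + 16.0 * math.sqrt(3) * a / pupil_diameter**2)
```

A source closer than the focal setting (z < R) gives a positive coefficient, and a source at z = R gives zero defocus, consistent with R being the range of perfect focus.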
The invention works well when the viewed field contains a single high intensity point, enabling well defined and spatially separate images of the light from that point in the secondary planes either side of the pupil plane to be formed on a single sensor surface. If no such high intensity point exists, it is possible to provide one, for example by suitable illumination using a laser beam. Where there are no such singular points there may be overlap of the images on the sensor plane, and the measurement process may be correspondingly difficult.
In one embodiment, a plurality of points in the viewed field are illuminated, for example using a light source or laser with an appropriate diffraction grating. It is arranged that the corresponding two sets of points on the sensor plane (i.e. from the planes either side of the input pupil) are spatially separate. In this way measurements, for example of range, can be taken at a number of points in the field of view.
In the invention, the provision of one or more predetermined matrices facilitates essentially real time calculation of the weighting coefficient(s) since the processing amounts to image intensity subtraction or comparison, matrix multiplication of the resulting intensity distribution and addition of the results, all of which are simple steps capable of being performed very rapidly on any modern computer. By contrast, the calculation of the matrices themselves is a much more complicated process taking several hours or longer to perform, and is performed prior to sensing of a radiation wavefront. The method of the invention has been implemented experimentally and has been found to provide fast, accurate and robust wavefront reconstruction.
Further features and advantages of the invention will become clear upon a perusal of the appended claims, to which the reader is referred, and upon a reading of the following more detailed description of an embodiment of an imaging system according to the invention including apparatus according to the invention, made with reference to the accompanying drawings, in which:
Figure 1 schematically illustrates an adaptive optics imaging system employing apparatus according to the invention;
Figure 2 schematically illustrates the principle behind the determination of the rate of change of radiation (light) intensity at an input pupil;
Figure 3 schematically illustrates an arrangement for determining the pixelwise rate of change of intensity of radiation as it passes between a pair of closely spaced planes either side of an input pupil;
Figure 4 schematically illustrates an alternative arrangement for determining the pixelwise rate of change of intensity of radiation as it passes between a pair of closely spaced planes either side of an input pupil;
Figure 5 shows an alternative method of obtaining the requisite first and second focussed images on separate CCD cameras; and
Figure 6 is an image for illustrating a method of correcting for background radiation.
In the system illustrated in Figure 1, it is desired to focus an optical image of a distant object 10 in an object plane 1 onto a focal plane 2, but without correction the focussed image tends to be distorted by atmospheric turbulence 3 acting on the light 31 before it reaches the input pupil 4.
Light from the input pupil 4 is reflected by a tip/tilt mirror 5 (i.e. a mirror which can be moved angularly about two axes), and reflected by a spatial phase modulator 6 comprising a deformable mirror. Light from modulator 6 is passed through a beamsplitter 7 for reflection by a fixed mirror 8 and transmitted by an imaging lens 9 acting to focus light from the object plane 1 on the image plane 2. Light reflected by the splitter 7 is passed to and detected by a wavefront sensor 11 which determines the weighting coefficients for the first (tip/tilt) and other selected Zernike modes, and produces respective signals 12, 13 for correspondingly controlling the tip/tilt mirror 5 and the modulator 6. The mirror 5 and modulator 6 are common to the optical paths to the focal plane 2 and the sensor 11, and the latter is arranged to minimise or null the weighting coefficients in the wavefront of the sensed light, so that the optical wavefront 14 transmitted towards the focal plane is rendered more ideal - that is substantially planar, having had phase perturbations or distortion relative to a planar wavefront substantially removed by the action of the mirror 5 and modulator 6.
In a variation of the embodiment of Figure 1, the light 31 is a laser beam with potential imperfections in the wavefront, not necessarily due to atmospheric turbulence. The, or some, imperfections are removed in the output beam as indicated at 14. Reflector 8 and lens 9 may not be required. The embodiment of Figure 1 or its variant may be further varied by relocating the beam splitter 7 and sensor 11 before the tip/tilt mirror 5 and modulator 6 and using the output of sensor 11 for forward control of mirror 5 and modulator 6 without the use of a nulling system.
In an imaging system of the type shown in Figure 1 or its variant, the coefficient of the Zernike polynomial Z_{2,0}(r,θ) pertaining to defocus may be employed either to control the modulator 6, or to control the position of the lens 9, for auto-focus. This coefficient may also or alternatively be employed to provide the range of an object in the field of view.
Figure 2 schematically illustrates the principle behind the determination of the rate of change of radiation (light) intensity at the input pupil 4. This is discussed in more detail in the theoretical part of the description to follow, but for now it is noted that phase distortions are effectively manifested as deviations from planarity (in this case; any standard shape, e.g. spherical with a predetermined rate of curvature, could be used) and that such distortions cause the radiation to converge or diverge, giving rise to a variation in local intensity as the radiation travels between two planes 22, 23 closely and equally spaced either side of the pupil plane (not shown). This variation can be measured and presented in a pixelwise manner for use in the apparatus of the present invention, e.g. in the sensor 11 of Figure 1.
Figure 3 shows how the optical input to the sensor 11 could be derived. A focussing lens 15, such as the biconvex lens illustrated, is arranged to focus an image of an object plane 16 on a measuring plane 17 of an image sensor such as a CCD or other digital camera array. Closely adjacent the lens (for example before it, as shown) is provided a quadratically distorted diffraction grating 18 of the type generally described in our copending International Patent Application WO 99/46768. Together with the zero order component, which provides an image of the plane 16 centrally on plane 17 at a position 19, the grating provides useful plus one and minus one diffraction orders at least. These orders respectively effectively slightly decrease and increase the power of the lens 15, so as to produce in the plane 17 respective focussed images of planes 22 and 23 axially either side of the image plane 16, at locations 20 and 21 laterally either side of the position 19. Planes 22 and 23 are located close to the plane 16.
It is thus possible to measure the pixelwise intensity distribution of the light at planes 22 and 23, and to subtract the resulting measurements pixelwise to obtain an indication of the pixelwise rate of change of light intensity at the pupil plane 16.
There are disadvantages attached to the use of the arrangement of Figure 3, one principal disadvantage being that the magnifications of the images at locations 19 to 21 differ. An alternative and preferred arrangement for providing an optical input to the sensor 11 is shown in Figure 4, where the lens 15 is set to focus sources at infinity at a central focussed spot 19' on the sensor 17. The grating 18 acts with the lens 15 so that an image of a closer real image plane A is provided by the plus one diffraction order at a position 20' on the sensor 17 on one side of the central position 19', and an equal magnification image 21' of a virtual plane is provided by the minus one diffraction order at an equal distance on the other side of the central position 19'. The virtual plane is shown as a plane C on the other side of the lens 15, but the skilled person will realise that this represents an image plane effectively beyond infinity, i.e. the images 20' and 21' effectively relate to planes either side of the pupil plane. The images 20' and 21' are used in lieu of the images 20 and 21 of Figure 3.
To provide a somewhat more detailed explanation of the reasoning behind the invention, it is necessary first to consider the Intensity Transport Equation (ITE).
For light of wavelength λ propagating along the z-axis in the paraxial approximation, the complex amplitude u_z(r) satisfies the parabolic equation

2ik ∂u_z(r)/∂z + ∇²u_z(r) = 0   (1)

where ∇² = ∂²/∂x² + ∂²/∂y², r = (x, y), and k = 2π/λ.
The complex amplitude may be expressed in terms of the real-valued intensity, I_z(r), and phase, φ_z(r), as follows:

u_z(r) = [I_z(r)]^{1/2} exp[iφ_z(r)]
Multiplying Equation (1) on the left by u_z*(r) and the complex conjugate of Equation (1) on the left by u_z(r), and subtracting the resulting expressions, leads to the Intensity Transport Equation (ITE),
−k ∂I_z(r)/∂z = ∇·(I_z(r) ∇φ_z(r))   (2)
To help in understanding the meaning of the ITE, Equation (2) can be written

−k ∂I_z(r)/∂z = I_z(r) ∇²φ_z(r) + ∇I_z(r)·∇φ_z(r)   (3)
This contains a "curvature" term I_z(r) ∇²φ_z(r), and a "slope" term ∇I_z(r)·∇φ_z(r). In a region where I_z(r) is uniform the slope term vanishes (∇I_z(r) = 0) and Equation (3) simplifies to:
−k ∂I_z(r)/∂z = I_z(r) ∇²φ_z(r)   (4)
which relates the local increase or decrease in intensity as light propagates along the optical axis to the local curvature of the wavefront, as indicated in Figure 2. Thus, if the intensity distribution and its derivative in the z-direction are measured in the pupil plane of an optical system, the wavefront curvature can be obtained from Equation (4).
In practice, measurement of the intensity in two closely-spaced planes symmetrically placed with respect to the entrance pupil provides a practical method for the estimation of the axial intensity gradient. The schematic in Figure 2 illustrates this principle, which is similar to reconstructing the shape of waves on the surface of a swimming pool from the scintillation patterns observed on the pool floor.
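The two-plane estimate of the axial gradient, combined with Equation (4), gives the wavefront Laplacian directly in regions of uniform, non-zero intensity; a hedged sketch (names are ours):

```python
import numpy as np

def wavefront_laplacian(i_plus, i_minus, dz, k):
    """Estimate the wavefront curvature term from Equation (4),
    -k dI/dz = I * laplacian(phi), valid only where the intensity is
    (locally) uniform and non-zero (illustrative sketch).

    i_plus, i_minus -- intensity arrays in the planes z + dz/2 and z - dz/2
    dz              -- plane separation
    k               -- wavenumber 2*pi/lambda
    """
    i_plus = np.asarray(i_plus, dtype=float)
    i_minus = np.asarray(i_minus, dtype=float)
    di_dz = (i_plus - i_minus) / dz      # finite-difference axial gradient
    i_mean = 0.5 * (i_plus + i_minus)    # estimate of I in the pupil plane
    return -k * di_dz / i_mean           # laplacian of phi, per Eq. (4)
```

This is the swimming-pool picture in numerical form: regions growing brighter between the two planes correspond to locally converging (negatively curved) wavefront patches, and vice versa.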
Knowledge of the wavefront curvature alone is sufficient to determine the wavefront only to within an arbitrary harmonic function. Boundary conditions are also required in order to obtain a unique solution. If the region in which we wish to reconstruct the wavefront consists of a uniformly illuminated aperture, we have:

I = I_0 W_A,  ∇I = −I_0 δ_C n̂
where
I_0 is the constant intensity,
A denotes the system aperture,
C is the aperture perimeter,
W_A is the aperture function (=1 inside A, =0 outside A),
δ_C is a delta-function around C,
n̂ is an outward-pointing unit vector normal to C.
Equation (3) then becomes
−k ∂I_z(r)/∂z = I_0 W_A ∇²φ_z(r) − I_0 δ_C n̂·∇φ_z(r).   (5)
Measurement of the axial intensity derivative thus yields both the wavefront curvature within the aperture and the wavefront slope around the aperture edge. This information is sufficient to determine the phase uniquely to within an arbitrary additive constant, which is consistent with an inability to measure absolute phase.
The above equation can be solved using Green's 2nd identity. Let f(r) and g(r) be any two continuous functions of r having finite first derivatives within a two-dimensional region R bounded by perimeter P and with first derivatives that are bounded at all points within R:
∫_R [g(r') ∇²f(r') − f(r') ∇²g(r')] d²r' = ∮_P [g(r') ∇f(r') − f(r') ∇g(r')]·n dr'   (6)
where n is the unit vector normal to P. If the function g(r') is replaced by the function G(r,r') satisfying
∇²G(r,r') = δ(r − r'),   (7)

the LHS of Equation (6) becomes a formula for f(r) in terms of its Laplacian inside R and boundary conditions on P:
f(r) = ∫_R G(r,r') ∇²f(r') d²r' + ∮_P f(r') ∇G(r,r')·n dr' − ∮_P G(r,r') ∇f(r')·n dr'   (8)
The function G(r,r') is called a Green's function and is defined by its Laplacian according to Equation (7). This means we are free to choose the boundary conditions for the Green's function. In general this choice is used to simplify Equation (8) by eliminating one of the two perimeter integrals.
If the problem involves Neumann boundary conditions,
n·∇f(r) = h(r) for r ∈ P,
where h(r) is known, the middle term of Equation (8) can be eliminated by specifying
∇G(r,r')·n = 0 for r ∈ P.   (9)
The Green's function cannot simultaneously satisfy Equation (9) and Equation (7), as can be seen by applying Gauss' theorem:
∫_R ∇²G(r,r') d²r' = ∮_P ∇G(r,r')·n dr'.
The RHS of this equation is identically zero, from Equation (9). The LHS must therefore also be zero. This is clearly not the case if Equation (7) is satisfied exactly; so instead we must have
∇²G(r,r') = δ(r − r') − 1/A
where A is the area of the region R. This has the effect of subtracting the mean value of f(r) on the LHS of Equation (8), which is acceptable since f(r) can only be determined to within an additive constant under Neumann boundary conditions.
Referring to Equation (5) we see that the axial intensity derivative contains information about the normal wavefront slope around the aperture edge, rather than the function itself. The problem therefore involves Neumann boundary conditions in a natural way, so we adopt a solution with a Green's function satisfying Equation (9). Inserting this into Equation (8) and replacing f(r) by φ(r) gives the final solution for the wavefront phase as
φ(r) = ∫_R G(r,r') ∇²φ(r') d²r' − ∮_P G(r,r') ∇φ(r')·n̂ dr'   (10)
Let us suppose that we can make a measurement of
S(r') = −k ∂I_z(r')/∂z.   (11)
If we multiply S(r') by G(r,r') and integrate over all space, we obtain (using Equation (5)):
∫ S(r') G(r,r') d²r' = I_0 ∫ (W_A ∇²φ(r') − δ_C n̂·∇φ(r')) G(r,r') d²r'
The aperture function W_A restricts the integral over ∇²φ to the region A, and the delta-function δ_C converts the integral over n̂·∇φ into a line integral around C. Therefore we have:
(1/I_0) ∫ S(r') G(r,r') d²r' = ∫_A G(r,r') ∇²φ(r') d²r' − ∮_C G(r,r') ∇φ(r')·n̂ dr'   (12)
It can be seen that if we equate the regions A and R and the curves C and P, the RHS of Equation (10) is the same as the RHS of Equation (12). Thus
φ(r) = ∫ S(r') G(r,r') d²r' (13)
gives the complete solution for the wavefront from a single area integral.
As noted previously, many applications do not require the full wavefront φ(r), but one or more coefficients describing the amplitude of specific components, e.g. the amount of a particular Zernike mode or a set corresponding to the available modes of a wavefront modulator. If the wavefront is expanded as a linear combination of orthonormal functions u_i(r), a particular coefficient is obtained by integration:
a_i = ∫ φ(r) u_i(r) d²r . (14)
Substituting for φ(r) from Equation (13) gives a_i = ∫∫ S(r') G(r,r') u_i(r) d²r d²r' . (15)
Defining:
G_i(r') = ∫ G(r,r') u_i(r) d²r (16)
Equation (15) becomes:
a_i = ∫ S(r') G_i(r') d²r' (17)
Therefore, the coefficient is obtained by the integral of the signal S(r') multiplied by a 'modal projector function' G_i(r').
Frequently the wavefront will be expressed as a vector, each value corresponding to a sample of the wavefront at a particular point, or the mean value of the wavefront over a small area. For sampling at a set of points {R_i}, the corresponding expansion functions are:
u_i(r) = δ(r − R_i) ,
and for mean values over a set of pixels centred on a set of points {R_i},
u_i(r) = δ(r − R_i) ⊗ p(r) ,
where p(r) is the support function for one pixel centred at the origin. Once the set of expansion functions u_i(r) is defined, the corresponding set of modal projector functions can be calculated from Equation (16), and the a_i obtained by Equation (17).
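For instance, in the point-sampling case the sifting property of the delta function reduces Equation (16) to an evaluation of the Green's function at the sample point:

```latex
u_i(\mathbf{r}) = \delta(\mathbf{r} - \mathbf{R}_i)
\quad\Longrightarrow\quad
G_i(\mathbf{r}') = \int G(\mathbf{r},\mathbf{r}')\,
\delta(\mathbf{r} - \mathbf{R}_i)\, d^2 r = G(\mathbf{R}_i, \mathbf{r}') ,
```

so for pointwise wavefront estimates no additional integration is required when forming the modal projector functions.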
The function S(r') is approximated by the difference between two intensity distributions on either side of the aperture plane, separated by a small distance δz:
∂I(r,z)/∂z ≈ [I(r, z + δz/2) − I(r, z − δz/2)] / δz .
If the intensity distributions are to be recorded on a pixellated detector, the data will be in the form of a vector of intensity values, S_j, each being the mean value of S(r') over a pixel centred at r' = p_j. The complete function is therefore approximated by
S(r') ≈ Σ_j S_j (δ(r' − p_j) ⊗ g(r')) ,
where g(r') is the support function for one pixel centred at the origin. Substituting this expression for S(r') into Equation (17) gives
a_i = ∫ Σ_j S_j (δ(r' − p_j) ⊗ g(r')) G_i(r') d²r' . (19)
Reversing the order of summation and integration and defining:
G_ij = ∫ (δ(r' − p_j) ⊗ g(r')) G_i(r') d²r' ,
Equation (19) becomes
a_i = Σ_j S_j G_ij , (20)
which is a simple matrix-vector multiplication. The simplicity of this solution, combined with the fact that the modal projector matrix G_ij can be pre-calculated, provides for the possibility of very fast modal wavefront sensing.
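As a minimal sketch of this pipeline, assuming the sign and normalisation convention of Equation (11); the projector matrix values, pixel counts and unit defocus distance below are hypothetical stand-ins, since the real G_ij must be pre-calculated from Equation (16) for the chosen basis:

```python
import numpy as np

def curvature_signal(I_plus, I_minus, dz, I0=1.0):
    """Pixelwise signal S_j: finite-difference estimate of the axial
    intensity derivative, flattened into a vector (Eq. (11) convention)."""
    S = -(np.asarray(I_plus, float) - np.asarray(I_minus, float)) / (I0 * dz)
    return S.ravel()

def modal_coefficients(G_ij, S):
    """Eq. (20): a_i = sum_j S_j G_ij -- one matrix-vector product.
    G_ij has shape (n_modes, n_pixels) and is pre-calculated."""
    return G_ij @ S

# Toy illustration with a hypothetical 2-mode, 4-pixel projector matrix:
G_ij = np.array([[1.0,  1.0, 1.0,  1.0],    # e.g. a piston-like row
                 [1.0, -1.0, 1.0, -1.0]])   # e.g. an alternating row
I_plus  = np.array([[1.0, 2.0], [3.0, 4.0]])
I_minus = np.array([[0.5, 1.5], [2.5, 3.5]])
S = curvature_signal(I_plus, I_minus, dz=1.0)  # uniform -0.5 per pixel
a = modal_coefficients(G_ij, S)
```

Because G_ij is fixed once the basis and detector geometry are chosen, only the subtraction and one matrix-vector product are performed per frame.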
The fact that the longitudinal intensity derivative is approximated by a difference in measured intensity distributions located a non-zero distance from the pupil plane means that there will be, in the measurement planes, non-zero intensity in the region exterior to the projected aperture. Thus, the modal projector function G_i(r') must be suitably defined in the region outside the aperture as well as within it. It has been found that a suitable scheme for circular apertures is to continue the boundary condition of Equation (9) to infinity. That is, the value of the Green's function at any point outside the aperture is taken to be the same as the value at the edge of the aperture at the same angle. Various other schemes were tested and this one was found to give the best performance in terms of minimising the error when sensing the low-order Zernike modes.
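The continuation scheme just described amounts to clamping the radial coordinate at the aperture edge while keeping the polar angle; the function g below is a hypothetical stand-in for a Green's function expressed in polar coordinates:

```python
import math

def extended(g, rad, theta, R=1.0):
    """Evaluate g outside a circular aperture of radius R by clamping
    the radius to the aperture edge, keeping the same polar angle."""
    return g(min(rad, R), theta)

# Inside the aperture the function is unchanged; outside, it takes the
# edge value at the same angle.
g = lambda rad, theta: rad * math.cos(theta)   # illustrative function only
inside = extended(g, 0.5, 0.0)    # g(0.5, 0) = 0.5
outside = extended(g, 2.0, 0.0)   # clamped to g(1.0, 0) = 1.0
```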
Thus it will be seen that the re-formulation of the phase-diversity algorithm provides fast wavefront reconstructions and requires only a simple matrix multiply for data inversion. As implemented in experimental tests, the data collection is achieved in a single image from a single, pixellated focal plane by use of a distorted diffraction grating.
The algorithm is capable of working with both point sources and with extended, low-contrast scenes and can provide wavefront reconstruction data in any required set of basis functions. In particular, the formulation for point by point or for Zernike-polynomial decompositions is equally easily achieved.
The algorithm can be used on wavefronts with severe aberrations, with extended sources, with partially obscured wavefronts, with strongly-scintillated wavefronts and with some cases where the wavefront is discontinuous (e.g. in multiply-connected pupils). It has already been tested in computer simulation and experimentally and has been found to be robust to many sources of experimental error. In most cases it is found experimentally that a departure from the theoretical restrictions, implied expressly or implicitly by the theory, leads to a wavefront reconstruction that is low-pass filtered. In this case the reconstruction of the lowest order modes, which are generally the most important sources of image degradation in the terrestrial imaging applications for which the algorithm was formulated, are found to be very accurate.
When the modes are reconstructed serially on a 450 MHz PC, the data reduction takes 50 μsec per wavefront mode reconstructed. The best experimental results to date show a relative accuracy of better than 2 nm in the defocus terms when measured on a bright, monochromatic point source at 633 nm.
As thus far described, it is found that when the wavefront sensor is used with a compact high brightness source on a dark background, such as a laser beacon imaged through a narrow-band filter, the accuracy of the results is good.
When such a beacon cannot be used, a natural beacon, such as a sun glint or a bright object on a dark background is sought. However, in such circumstances the source power is unlikely to be concentrated in a narrow waveband, making it difficult or impossible to eliminate background radiation from the rest of the viewed scene, and this tends to lead to a loss in accuracy. This loss arises because the basic wavefront curvature signal is given by the difference in the two intensity profiles measured or recorded either side of the system pupil, divided by their integration sum for normalisation:
S(r) = [I₁(r) − I₂(r)] / ∫ [I₁(r') + I₂(r')] d²r' .
While a compact source (i.e. one completely contained within the field of view of the system) will provide intensity profiles I₁ and I₂ which lead to a correct curvature signal and hence correct wavefront mode coefficients, generally uniform background illumination will contribute identically to I₁ and I₂, thus contributing nothing to the difference signal (numerator) but increasing the normalisation factor (divisor). The result is that when the system so far described views a scene with a compact source on a uniform background it will provide curvature signals which have the correct spatial structure but which are inappropriately scaled, resulting in incorrect wavefront mode coefficients.
By splitting the intensity profiles into a contribution from the compact source and the background:
I₁(r) = I_S1(r) + I_B(r) , I₂(r) = I_S2(r) + I_B(r) ,
the curvature signal takes the form:
S(r) = [I_S1(r) − I_S2(r)] / ∫ [I_S1(r') + I_S2(r') + 2 I_B(r')] d²r' ,
where S and B indicate source and background contributions to the intensity profiles.
By rescaling according to :
S_corrected(r) = α S(r) , where α = ∫ [I_S(r') + I_B(r')] d²r' / ∫ I_S(r') d²r' ,
the correct curvature signal can be obtained.
It follows that correct measurements can be made provided I_S and I_B can be measured. This is extremely hard to do in the pupil plane where the measurements of I₁ and I₂ are made. Nevertheless, since in a lossless system the integrated intensity in any plane perpendicular to the optic axis is invariant (by conservation of energy), the quantities I_S and I_B can be measured in the image plane, where source and background can be more easily separated.
In a numerically simulated example, the scene consisted of a uniformly bright background with an off-axis source of twice the background intensity. A defocus of 0.5 was assigned to all points within the scene and the wavefront sensor intensity profiles calculated as well as the image, which is shown in Figure 6 as an out-of-focus spot 29 on a darker background 30. The defocus measurement without background correction was 0.07 waves, significantly in error. By applying background corrections as outlined above, the results were:
Background level = 1
Background integrated over unit circle: B = π
Total integrated image: T = 3.651
Source S = T - B = 0.509
Rescaling factor α = T/S = 3.651/0.509 ≈ 7.17
Corrected defocus measurement = 0.07 × α = 0.07 × 7.17 ≈ 0.5, which is as expected.
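The arithmetic of this worked example can be checked directly; the value T = 3.651 and the uncorrected measurement of 0.07 waves are taken from the simulation described above:

```python
import math

background_level = 1.0
B = math.pi * background_level   # background integrated over the unit circle
T = 3.651                        # total integrated image (from the simulation)
S = T - B                        # source contribution, approx. 0.509
alpha = T / S                    # rescaling factor, approx. 7.17
corrected = 0.07 * alpha         # uncorrected defocus measurement was 0.07 waves
```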
The reader is directed to our copending UK Patent Application No. GB 0301923.9 (ref: P21386GB) which describes and claims alternative apparatus for providing the two focussed near-pupil-plane images in a common focal plane. The reader is also directed to our copending UK Application No. GB 0206242.1 (Ref: P21567GB) of even date with the present application which deals with an alternative method for implementing the matrix function by the use of a radiation (e.g. optical) grey-scale mask.

Claims

1. Measuring apparatus for determining data relating to the local shape (or distribution of local phase) of a radiation wavefront arriving at a pupil plane, wherein said shape is defined by a set of predetermined orthonormal functions, each function being provided with a weighting coefficient for determining the shape, said data comprising at least one said weighting coefficient, the apparatus comprising a said input pupil, rate means responsive to said radiation for determining a pixelwise distribution indicative of rate of radiation intensity change as the radiation traverses the input pupil, and converting means for converting said intensity distribution to said data, wherein said converting means comprises a store holding one or more matrices of predetermined values, each said matrix corresponding to one said orthonormal function, and the size of each said matrix corresponding to the number of pixels in said pixelwise distribution, and calculating means for multiplying said pixelwise distribution by a said matrix and adding the results to provide said weighting coefficient for its said orthonormal function.
2. Apparatus according to claim 1 wherein said rate means comprises optical means for providing focussed first and second images of respective first and second secondary planes lying adjacent, and respectively before and after, the pupil plane, and deriving said pixelwise distribution as the pixelwise distribution of intensity difference between the first and second images.
3. Apparatus according to claim 2 wherein one said secondary plane is a virtual plane.
4. Apparatus according to any preceding claim wherein the optical means is arranged to provide said first and second images in a single plane.
5. Apparatus according to any preceding claim wherein the optical means comprises a distorted diffraction grating or a computer generated hologram.
6. Apparatus according to any preceding claim wherein the set of orthonormal functions are Zernike polynomials.
7. Apparatus according to any preceding claim adapted for use with optical radiation.
8. Apparatus according to any preceding claim wherein said store holds more than one said matrix.
9. Apparatus according to claim 8 wherein said store holds a number of matrices less than the number of pixels in said pixelwise distribution.
10. Apparatus according to any preceding claim, and according to claim 4, and further including illuminating means for illuminating a point in the field of view, the apparatus being arranged so that light from said point in said secondary planes is spatially separated in said single plane.
11. Apparatus according to claim 10 wherein the illuminating means is arranged to illuminate a plurality of said points in the field of view, the light from each point in each of said secondary planes being spatially separated in said single plane.
12. Apparatus according to any preceding claim and including means for measuring the degree of focus, or range, of an object in the field of view as a function of at least one said weighting coefficient.
13. Apparatus according to any preceding claim and further comprising a spatial phase modulator for receiving radiation from said input pupil, and modulator control means responsive to the said weighting coefficient(s) for controlling the modulator.
14. Apparatus according to claim 13 wherein the spatial modulator is located between the input pupil and said rate means.
15. Apparatus according to claim 14 wherein said modulator control means is arranged to minimise said weighting coefficients.
16. Apparatus according to any one of claims 13 to 15 and further comprising imaging means for receiving radiation from said modulator.
17. Apparatus according to claim 13 and further comprising imaging means for receiving radiation from said modulator, wherein said rate means is arranged to receive radiation which has not been transmitted by said modulator.
18. Apparatus according to any one of claims 13 to 17 wherein said modulator includes a tip/tilt mirror.
19. Apparatus according to any one of claims 13 to 18 wherein said modulator includes a deformable mirror or a deflectable mirror array, or a liquid crystal device.
20. Apparatus according to any preceding claim for use with an image comprising at least one relatively bright small area or spot on a background, the apparatus comprising means for measuring the brightness of the bright area or spot and the brightness of the background, and for combining the two brightness measurements to derive a scaling factor for application to a measured weighting coefficient.
21. Apparatus according to claim 20 wherein the brightness measurements are effected in an image plane.
22. A measurement method for determining data relating to the local shape (or distribution of local phase) of a radiation wavefront arriving at a pupil plane, wherein said shape is defined by a set of predetermined orthonormal functions, each function being provided with a weighting coefficient for determining the shape, said data comprising at least one said weighting coefficient, the method comprising determining a pixelwise distribution indicative of rate of radiation intensity change as the radiation traverses the input pupil, and converting said pixelwise distribution to said data, wherein said converting step comprises providing one or more matrices of predetermined values, each said matrix corresponding to one said orthonormal function, and the size of each said matrix corresponding to the number of pixels in each of said first and second images, for each said matrix multiplying said pixelwise distribution by said matrix and adding the result to provide said weighting coefficient for its said orthonormal function.
23. A method according to claim 22 wherein the radiation is optical radiation.
24. A method according to claim 22 or claim 23 and including the step of illuminating or irradiating a point in the field of view, and determining said data from radiation received from said point.
25. A method according to claim 24 wherein a plurality of separated points in the field of view are illuminated or irradiated, and said data is determined from radiation received from each said point.
26. A method according to any one of claims 22 to 25 and comprising the further step of modulating radiation from said input pupil with a spatial phase modulator controlled in response to the said weighting coefficient(s).
27. A method according to claim 26 wherein the step of determining said pixelwise distribution is performed on radiation transmitted by said spatial modulator.
PCT/GB2003/000979 2002-03-06 2003-03-06 Wavefront sensing WO2003074985A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2003212516A AU2003212516A1 (en) 2002-03-06 2003-03-06 Wavefront sensing

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB0205240A GB0205240D0 (en) 2002-03-06 2002-03-06 Wavefront sensing
GB0205240.5 2002-03-06

Publications (1)

Publication Number Publication Date
WO2003074985A1 true WO2003074985A1 (en) 2003-09-12

Family

ID=9932391

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2003/000979 WO2003074985A1 (en) 2002-03-06 2003-03-06 Wavefront sensing

Country Status (3)

Country Link
AU (1) AU2003212516A1 (en)
GB (1) GB0205240D0 (en)
WO (1) WO2003074985A1 (en)

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
BLANCHARD P M ET AL: "PHASE-DIVERSITY WAVE-FRONT SENSING WITH A DISTORTED DIFFRACTION GRATING", APPLIED OPTICS, OPTICAL SOCIETY OF AMERICA,WASHINGTON, US, vol. 39, no. 35, 10 December 2000 (2000-12-10), pages 6649 - 6655, XP001017744, ISSN: 0003-6935 *
M. L. HOLOHAN, J. C. DAINTY: "Low-order adaptive optics: a possible use in underwater imaging?", OPTICS AND LASER TECHNOLOGY, vol. 29, no. 1, 1997, pages 51 - 55, XP002245504 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007522468A (en) * 2004-02-11 2007-08-09 キネテイツク・リミテツド Surface shape measuring apparatus and method
EP2273230A2 (en) 2004-02-11 2011-01-12 Qinetiq Limited Registered Office Surface shape measurement apparatus and method
US7907262B2 (en) 2004-02-11 2011-03-15 Qinetiq Limited Surface shape measurement apparatus and method
WO2006076474A1 (en) * 2005-01-13 2006-07-20 Arete Associates Optical system with wavefront sensor
WO2022158957A1 (en) 2021-01-21 2022-07-28 Latvijas Universitates Cietvielu Fizikas Instituts Coded diffraction pattern wavefront sensing device and method
CN114924410A (en) * 2022-05-20 2022-08-19 西南科技大学 Focusing method and device based on small phase modulation and phase compensation

Also Published As

Publication number Publication date
GB0205240D0 (en) 2002-04-17
AU2003212516A1 (en) 2003-09-16

Similar Documents

Publication Publication Date Title
Rousset Wave-front sensors
Rousset Wavefront sensing
US7268937B1 (en) Holographic wavefront sensor
Shatokhina et al. Review on methods for wavefront reconstruction from pyramid wavefront sensor data
US6653613B1 (en) Method and device for wavefront optical analysis
Guyon High sensitivity wavefront sensing with a nonlinear curvature wavefront sensor
Codona et al. James Webb Space Telescope segment phasing using differential optical transfer functions
Bardou et al. ELT-scale elongated LGS wavefront sensing: on-sky results
WO2003074985A1 (en) Wavefront sensing
Ko et al. An adaptive optics approach for laser beam correction in turbulence utilizing a modified plenoptic camera
Tallon et al. Shack-Hartmann wavefront reconstruction with elongated sodium laser guide stars: improvements with priors and noise correlations
Deo et al. Wavefront sensing using non-redundant aperture masking interferometry: tests and validation on subaru/scexao
US20090302198A1 (en) Elimination of piston wraps in segmented apertures by image-based measurements at two wavelengths
Geary Wavefront sensors
Moore et al. Picometer differential wavefront metrology by nonlinear Zernike wavefront sensing for LUVOIR
Bikkannavar et al. Phase retrieval methods for wavefront sensing
Hickson et al. Single-image wavefront curvature sensing
Chanan Principles of wavefront sensing and reconstruction
Davis et al. Wavefront-based PSF estimation
Chulani et al. Simulations and laboratory performance results of the weighted Fourier phase slope centroiding algorithm in a Shack–Hartmann sensor
Zepp et al. Simulation of an optimized holographic wavefront sensor for realistic turbulence scenarios
Wang et al. Calibration of non-Common path aberration using multi-Channel phase diversity technique
Hege et al. Computing and telescopes at the frontiers of optical astronomy
Mahajan et al. Adaptive optics without wavefront sensors
Haniff et al. Closure Phase Imaging with Partial Adaptive Correction

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SC SD SE SG SK SL TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP

WWW Wipo information: withdrawn in national office

Country of ref document: JP