WO2013124664A1 - Method and apparatus for imaging through a time-varying inhomogeneous medium - Google Patents

Method and apparatus for imaging through a time-varying inhomogeneous medium

Info

Publication number
WO2013124664A1
WO2013124664A1 (PCT/GB2013/050428)
Authority
WO
WIPO (PCT)
Prior art keywords
image
coded
decoded
focused
separation
Prior art date
Application number
PCT/GB2013/050428
Other languages
English (en)
Inventor
Antony Joseph Frank Lowe
Original Assignee
MBDA UK Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from GBGB1203075.5A external-priority patent/GB201203075D0/en
Application filed by MBDA UK Limited
Publication of WO2013124664A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • H04N23/81 Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation

Definitions

  • the invention relates to the field of imaging through a time-varying inhomogeneous medium, for example the atmosphere.
  • An attempt to obtain a clear image of an object through a medium such as the atmosphere can be hampered by the effects of localised variations in the refractive index of the medium, for example resulting from turbulence.
  • Such localised inhomogeneities create minor distortions in the wavefront of the light reaching the imager, which in turn results in degradation of the image of the object.
  • the degradation typically manifests itself as a distortion of the image (i.e. the separation between equally spaced points in the object varies between pairs of points in the image) and as a defocusing or blurring (i.e. infinitesimal points in the object have a finite extent in the image).
  • the distortion and blurring changes rapidly (for example, for turbulent atmosphere, over the course of a few milliseconds).
  • Temporally varying blurring and distortion effects in the captured image limit the maximum magnification which can be used, and thus the maximum range at which a given resolution can be achieved. Those effects can be particularly strong in littoral regions and in hot environments.
  • A known technique for imaging through the atmosphere is "lucky imaging" or "lucky sub-frame selection". This technique relies on the fact that the random variations in blurring will very occasionally and briefly happen to result in an image (also referred to herein as a "frame") having a portion that is in focus (a "lucky sub-frame"). These moments of clarity will occur at different times for different portions of the image.
  • the technique involves obtaining a sequence of images over a period of time that is sufficiently long for substantially all of the imaged object to have been in focus at some moment, and then forming a composite in-focus image of the object by mosaicing together the in-focus portions or sub-frames from the sequence of images or frames so obtained.
  • A sharpness metric for the image, i.e. an assessment of how sharp the image is, may be calculated for each sub-frame, with greater sharpness taken to indicate a better-focused sub-frame.
  • An automated system can assess an area of an image to determine whether or not the area is in sharp focus (examples of such methods are used in cameras having an autofocus capability).
  • Various sharpness metrics are known, including use of, for example, the maximum spatial frequency having an amplitude above a certain threshold, a gradient within the image, wavelets, entropy, eigenvalue sharpness, bicoherence, or Chebyshev moments. There is a trade-off between the performance of those metrics and the complexity of the calculations they involve (and hence the time and/or computing power required to carry them out).
  • Such methods can have significant drawbacks, such as being misled by image noise.
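  • As a rough illustration (not part of the patent), a simple gradient-based sharpness metric of the kind listed above might look like the following Python sketch; the function name and the use of numpy's gradient are illustrative assumptions:

```python
import numpy as np

def gradient_sharpness(patch: np.ndarray) -> float:
    """Mean squared gradient magnitude of an image patch.

    Higher values indicate a sharper (better-focused) "lucky" sub-frame;
    note that, as the text warns, such metrics can be inflated by noise.
    """
    gy, gx = np.gradient(patch.astype(float))
    return float(np.mean(gx ** 2 + gy ** 2))
```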
  • Incoming light from the object is split into three using a quadratically distorted grating in the form of an off-axis Fresnel zone plate (described in more detail in the PowerPoint presentation "Lucky sub-frame selection using phase diversity", 7 July 2009, QINETIQ/AT/PS/PUB0900701, by the same authors), which results in spatially separated +1, zero, and -1 orders that are converging, collimated and diverging, respectively.
  • The zero order is focused at the imaging plane of a CCD array, whereas the +1 and -1 orders are defocused by equal amounts (in opposite directions, i.e. one in front of and one behind the imaging plane).
  • A metric is then calculated based on the difference between the CCD image of the +1 order and the CCD image of the -1 order, with sub-frames that score well (i.e. a small difference) being selected as "lucky" sub-frames.
  • the Woods et al. device also addresses the problem of distortion.
  • the zero-order images are summed to provide a low-resolution but undistorted (i.e. geometrically correct) reference image.
  • The transformation required to map each individual zero-order image onto the reference image is calculated, and used to remove distortion from the +1 and -1 order images prior to calculation of the metric.
  • the "lucky sub-frame" is then selected from the transformed zero-order image, according to the metric, as just described.
  • The optics described use potentially expensive components, including lenses and a diffraction grating.
  • the clear sections are essentially discontinuous, which limits the degree to which distortion can be removed from the image.
  • US2008/0259176 A1 describes an image pickup apparatus in which multiple images are transformed so that positions of corresponding points coincide between the images, and the images are then composited.
  • the present invention seeks to mitigate the above-mentioned problems. Alternatively or additionally, the present invention seeks to provide an improved imaging apparatus. Alternatively or additionally, the present invention seeks to provide an improved method of imaging.
  • the invention provides, in a first aspect, a method of imaging an object through a time-varying inhomogeneous medium, the method comprising:
  • the decoding mask is the pattern of the coded aperture mask projected onto a detector. It will be understood that the decoding mask is used to deconvolve the focused image from the initially captured image.
  • The imaged object is itself an image of the object. It may be that the first separation of the coded-aperture mask and the imaging plane is calculated so as to provide a focused image at a range R. It may be that the second separation of the coded-aperture mask and the imaging plane corresponds to a second focused object range, such that the conjugate image distance from the coded aperture for that object range differs from the first separation by an increment distance. (Note that the actual distance between the coded-aperture mask and the imaging plane has not changed; rather the change is in the range used to calculate the decoding pattern.) It may be that the third separation differs from the second separation by the increment distance, so that the second separation is midway between the first separation and the third separation. It may be that the step of identifying focused portions in a plurality of the decoded images comprises the step of
  • steps (ii), (iii), (iv) and (a) are repeated using a different set of first, second and third decoding masks, thereby identifying further focused portions of further second decoded images. It may be that the composite focused image is formed from the identified focused portions of the second decoded images.
  • the invention provides, in a second aspect, a method of imaging an object through a time-varying inhomogeneous medium, the method
  • Coded aperture imaging effectively makes this possible.
  • Coded aperture imaging is a known process in which an aperture consisting of a pattern of transparent and opaque areas is used (either alone or in conjunction with a conventional lens) to project an image onto a detector.
  • the image is somewhat analogous to a hologram of the imaged scene, in that it simultaneously contains within it information about the clear image of the scene at all ranges, compressed into one layer.
  • An image of the scene that is focused to any single chosen range can be extracted from this "hologram" image by Fourier deconvolution of the aperture function as projected onto the imager focal plane at the focal length which would render the image at that range in focus.
  • A decoded image "corresponding to" a separation of the coded-aperture mask and the imaging plane means a decoded image that would be produced if the coded-aperture mask were moved to that separation, but in fact the coded-aperture mask remains stationary and the "moving" is effected by changing the decoding mask, e.g. by scaling it (as discussed further below). Thus one no longer needs to wait until each part of the image happens to be in focus at the actual focal plane.
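  • As a rough sketch (an illustrative assumption, not the patent's own implementation), decoding by deconvolution in the Fourier domain might look as follows in Python/numpy; the regularisation constant eps is a hypothetical choice to keep the division stable:

```python
import numpy as np

def decode(coded: np.ndarray, mask_pattern: np.ndarray, eps: float = 1e-3) -> np.ndarray:
    """Deconvolve the projected mask pattern (the decoding mask) from the
    recorded coded image, using a regularised (Wiener-style) division in
    the Fourier domain."""
    C = np.fft.fft2(coded)
    M = np.fft.fft2(mask_pattern, s=coded.shape)
    decoded = np.fft.ifft2(C * np.conj(M) / (np.abs(M) ** 2 + eps))
    return np.real(decoded)

# Changing the assumed separation changes only the scaling of mask_pattern;
# the single captured coded image is reused unchanged for every decode.
```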
  • the composite focused image is generated from one coded image.
  • the method may comprise forming the decoded images corresponding to a range of separations of the coded aperture mask and the imaging plane that is sufficiently wide that the whole of the object is imaged in focus in the composite focused image.
  • The object being imaged may for example be a discrete object, e.g. a ship, or a collection of objects, e.g. a scene. It may be that the time-varying inhomogeneous medium is the atmosphere. It may be that the time-varying inhomogeneous medium is seawater, e.g. the sea or ocean.
  • It may be that the first and third decoded images are formed before the second decoded image. It may be that only the portions of the second decoded image that correspond to identified corresponding portions of the first decoded image and the third decoded image are decoded from the coded image. That approach avoids the need to generate parts of the second decoded image that are not identified as being in focus.
  • It may be that the step of identifying any corresponding portions of, on the one hand, the first decoded image and, on the other hand, the third decoded image in which the object is defocused by the same amount comprises identifying portions of the first decoded image and the third decoded image that are identical to each other.
  • The method may include recording the coded image at the imaging plane using a plane array.
  • The plane array may be a 2D array of photoreceptors.
  • the plane array may be a charge-coupled device (CCD) array.
  • the steps of decoding the coded images may comprise deconvolving the Fourier transform of the coded-aperture mask from the coded images.
  • the coded aperture mask may comprise a pattern of areas forming at least two sets of different transmissivities. It may be that the pattern of areas consists of a first set of opaque areas and a second set of transparent areas; in that case, the coded aperture mask may comprise a pattern of apertures.
  • The steps of decoding the coded image using each decoding mask to form each decoded image corresponding to each separation of the coded-aperture mask and the imaging plane may include the step of scaling the pattern of areas of the coded-aperture mask according to each said separation.
  • the steps of decoding the coded image using each decoding mask to form each decoded image corresponding to each separation of the coded-aperture mask and the imaging plane may include the step of deriving a ranged image focused at a range R.
  • a series of the second decoded images may be formed.
  • The images in the series may be derived at different values of f separated by amounts, preferably small amounts, whose values are related to the effective depth of focus of the system at that value of f.
  • The amounts may be half the difference in f equivalent to the depth of field at that value of f.
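  • One possible reading of that step rule (an assumption; the text gives no explicit formula) is sketched below: the depth of focus is taken to scale as twice the blur-circle criterion times the working f-number, and the f-grid advances by half of it at each step. All names and the depth-of-focus model are illustrative:

```python
import numpy as np

def f_sequence(f_min: float, f_max: float, aperture_d: float, blur_circle: float) -> np.ndarray:
    """Generate the sequence of effective separations f at which to decode.

    Assumes depth of focus ~ 2 * blur_circle * (f / aperture_d), and a
    step of half that amount, per the description above.
    """
    fs = [f_min]
    while fs[-1] < f_max:
        depth_of_focus = 2.0 * blur_circle * (fs[-1] / aperture_d)
        fs.append(fs[-1] + 0.5 * depth_of_focus)
    return np.array(fs)
```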
  • the step of identifying any corresponding portions of, on the one hand, the first decoded image and, on the other hand, the third decoded image in which the object is defocused by the same amount may comprise comparing a phase diversity metric of the first decoded image and the third decoded image.
  • An example phase diversity metric corresponds to that given by Woods et al., i.e.
  • Z_k,i is the value of the ith pixel in the kth sub-frame of the second decoded image;
  • P_k,i is the value of the ith pixel in the kth sub-frame of the first decoded image;
  • M_k,i is the value of the ith pixel in the kth sub-frame of the third decoded image.
  • Q_k varies between 0 (poor) and 1 (good) image quality.
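  • The expression for Q_k itself is not reproduced in this text. A plausible normalised form consistent with the definitions above (an assumption, not necessarily the exact metric of Woods et al., whose actual expression may also use Z_k,i for normalisation) is

$$Q_k = 1 - \frac{\sum_i \lvert P_{k,i} - M_{k,i} \rvert}{\sum_i \left( P_{k,i} + M_{k,i} \right)},$$

which equals 1 when the two equally defocused sub-frames are identical (best focus midway between them) and falls towards 0 as they diverge.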
  • the step of forming the composite focused image may comprise the step of scaling the identified focused portions of the decoded images to take into account changes in projected size and/or position of the portions due to the different separations of the coded aperture mask and the imaging plane for which those focused portions are obtained.
  • Because the clear but distorted image may be derived from a single frame, the distortion effects across the clear image may be essentially coherent and smoothly changing. This makes them potentially much more amenable to recovery by the de-distortion process than composite images created over many frames.
  • the method may further comprise the step of removing distortion arising from the time-varying inhomogeneous medium from the composite focused image, to form an undistorted, focused image.
  • the step of removing distortion may comprise the steps of:
  • the cumulative image may be a mean image, i.e. an average of the plurality of images used to form the cumulative image.
  • the cumulative image frame may be formed from a constant number of successive ones of the plurality of images, up to and including a most-recent image.
  • the constant number may be chosen to provide sufficient images for the cumulative image to be acceptably stable.
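  • A minimal sketch of such a sliding-window cumulative (mean) image, assuming a constant window of n frames as described above; the ring-buffer structure is an illustrative choice:

```python
import numpy as np
from collections import deque

class SlidingMeanImage:
    """Maintain the mean of the most recent n frames: the oldest frame is
    dropped as each new one is added, so the mean stays acceptably stable
    while tracking the most recent imagery."""

    def __init__(self, n: int):
        self.n = n
        self.frames = deque(maxlen=n)
        self.total = None

    def add(self, frame: np.ndarray) -> np.ndarray:
        frame = frame.astype(np.float64)
        if self.total is None:
            self.total = np.zeros_like(frame)
        if len(self.frames) == self.n:
            self.total -= self.frames[0]  # about to be evicted by the deque
        self.frames.append(frame)
        self.total += frame
        return self.total / len(self.frames)
```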
  • the method may further comprise the step of deconvolving the undistorted, focused image from at least one of the decoded images to provide a measure of the distortion and/or blurring caused by the time-varying inhomogeneous medium when the coded image was formed.
  • the coded-aperture mask may be a programmable mask, for example a spatial light modulator.
  • the method may include the step of determining an optimal available coded-aperture pattern for the measured distortion and/or blurring, and applying that pattern to the programmable mask.
  • It may be that the focused portions are identified in the plurality of decoded images by identifying a surface of best focus.
  • It may be that the focused portions are identified in the plurality of decoded images by identifying corresponding portions in each decoded image, measuring the intensity of each corresponding portion, calculating the mean of the measured intensities of all corresponding portions, and identifying as the focused portions those portions whose intensity has the greatest absolute deviation from the calculated mean intensity.
  • the invention provides, in a third aspect, an apparatus for imaging an object through a time-varying inhomogeneous medium, the apparatus comprising:
  • a coded-aperture mask configured to form a coded image at an imaging plane; an imager arranged to record the coded image;
  • a first decoding mask configured to form a first decoded image corresponding to a first separation of the coded-aperture mask and the imaging plane;
  • a second decoding mask configured to form a second decoded image corresponding to a second separation of the coded-aperture mask and the imaging plane;
  • a third decoding mask configured to form a third decoded image corresponding to a third separation of the coded-aperture mask and the imaging plane;
  • an image processor configured to: a. identify focused portions in a plurality of the decoded images;
  • the coded image is a single coded image.
  • the decoder is configured to decode the single coded image into a plurality of images focused at a plurality of ranges.
  • the third separation differs from the second separation by the increment distance, so that the second separation is midway between the first separation and the third separation.
  • the decoder is configured to form a plurality of different sets of the first, second and third decoded images using a plurality of different sets of first, second and third decoding masks. It may be that the image processor is configured to:
  • repeat steps b and c for the plurality of different sets of the first, second and third decoded images and thereby identify further focused portions of the second decoded images. It may be that the composite focused image is formed from the identified focused portions of the second decoded images.
  • the invention also provides, in a fourth aspect, an apparatus for imaging an object through a time-varying inhomogeneous medium, the apparatus comprising:
  • a second decoding mask configured to form a second decoded image corresponding to a second separation of the coded-aperture mask and the imaging plane, the second separation differing from the first separation by an increment distance;
  • a third decoding mask configured to form a third decoded image corresponding to a third separation of the coded-aperture mask and the imaging plane, the third separation differing from the second separation by the increment distance, so that the second separation is midway between the first separation and the third separation;
  • the decoder being configured to form a plurality of different sets of the first, second and third decoded images using a plurality of different sets of first, second and third decoding masks;
  • an image processor configured to: a. for each set of first, second and third decoded images,
  • c. repeat steps a and b for the plurality of different sets of the first, second and third decoded images and thereby identify further focused portions of the second decoded images; and d. form a composite focused image from the identified focused portions of the second decoded images.
  • the image processor may include the decoder.
  • the apparatus may be or form part of a passive optical system.
  • the passive optical system may be, for example, a digital camera, binoculars, telescopic sights, an airborne sensor platform, a missile sensor system, or a satellite imaging system.
  • the apparatus may further comprise an active optical element, for example an adaptive optical element, for example an adaptive mirror.
  • the active optical element may be a spatial light modulator, which may also be used to provide the encoding mask.
  • the image processor may be configured to generate an undistorted focused image from the composite focused image and to deconvolve the undistorted focused image from at least one of the decoded images to provide a measure of the distortion and/or defocusing caused by the time-varying inhomogeneous medium when the coded image was formed.
  • The image processor may be configured to provide to the active optical element a signal indicative of the measured distortion and/or defocusing; thus, the active optical element may be configured to compensate for the distorting and/or defocusing effects, preferably before the coded image is recorded.
  • the apparatus may for example form part of a laser-focusing system, for example in a communications system, and the adaptive mirror may be used to direct and/or to focus the laser.
  • Figure 1 is a schematic view of an apparatus according to a first example embodiment of the invention;
  • Figure 2 is a schematic side view of an apparatus according to a second embodiment of the invention;
  • Figure 3 is a schematic illustration of the change in a projected mask deconvolution pattern between different chosen focus planes;
  • Figure 4 is a further illustration of the patterns of Fig. 3;
  • Figure 5 is a plot of the variation, for a first example pixel, of the intensity with distance from best focus, along the optic axis of the apparatus of Fig. 2;
  • Figure 6 is a plot of the variation, for a second example pixel, of the intensity with distance from best focus, along the optic axis of the apparatus of Fig. 2.
  • A coded aperture 10 is used to project an image A, separated by a fixed distance f0 from the aperture, onto a focal plane array detector FPA, which is a 2D array of photoreceptors, the whole output of which is captured in a short period of time (a single frame).
  • the effect of the atmosphere is to defocus patches of the image such that the whole image no longer appears to come from a single range.
  • This is equivalent in a conventional system to changing the distance between the aperture 10 and the detector FPA from f0 to f.
  • A series of deconvolved images is derived at different f values separated by small amounts whose values are related to the effective depth of focus of the system at that f value.
  • The step-size is given by half the f difference equivalent to the depth of field at that f value.
  • The images at fa and fb are compared, using the phase diversity metric, to determine which areas, if any, in the f-sequence images are in focus at that f value. If the range of f values is wide enough, each point in the image will be at best focus at some value of f.
  • each frame's data (up to a total of n frames) will also be added to a mean image frame.
  • This mean image frame will eventually generate a blurred but stable image of the target scene. This is then used as a reference against which to compare each individual clear frame to determine the vector distortion field for each frame.
  • the vector distortion field for each frame is then used to remove the distortion from each clear frame, thus enabling the creation of an image which is both clear and undistorted, almost in real time.
  • the first frame in the sequence is removed from the calculated mean frame when the nth frame is added, thus the mean frame is formed by a constant number of frames derived over a period of n frames up to the present frame. The value chosen for n will depend on the degree of distortion experienced and the parameters of the camera, but essentially represents the period required to generate a mean frame which is static within the tolerances of the system.
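  • A minimal sketch of the de-distortion step just described, assuming dense per-pixel shift fields have already been obtained (e.g. by measuring the shift of significant points against the mean frame and interpolating, as above); scipy's map_coordinates performs the interpolating warp, and all names are illustrative:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def remove_distortion(frame: np.ndarray, shift_y: np.ndarray, shift_x: np.ndarray) -> np.ndarray:
    """Warp a clear-but-distorted frame back onto the geometry of the mean
    reference frame. shift_y and shift_x have the same shape as frame and
    give, for each reference pixel, where its content landed in this frame."""
    yy, xx = np.indices(frame.shape)
    coords = np.array([yy + shift_y, xx + shift_x])
    # Sample the distorted frame at the displaced positions, undoing the
    # vector distortion field for this frame.
    return map_coordinates(frame, coords, order=1, mode='nearest')
```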
  • the image generated by a coded aperture system can be thought of as the convolution of the image that would be projected onto the detector using a single pinhole, and the projection of the pattern of holes in the mask from all points on the optical axis.
  • the image of the mask as projected onto the detector from the point on the optical axis which lies at that range is deconvolved from the whole image.
  • By choosing the range from which to project the mask image that will be deconvolved from the captured image, one can focus the final image at any desired range. Note that, to do that, one does not need to capture multiple images or use multiple masks: the only change is in the image-processing computation.
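  • The projection geometry can be sketched as follows (an illustrative assumption: an on-axis point at range R projects the mask, lying a distance g in front of the detector, onto the detector magnified by (R + g)/R; the helper pairs with the decode() sketch above):

```python
import numpy as np
from scipy.ndimage import zoom

def projected_mask(mask: np.ndarray, range_r: float, mask_to_detector_g: float) -> np.ndarray:
    """Scale the coded-aperture pattern as it would be projected onto the
    detector from an on-axis point at range R (pinhole-projection geometry)."""
    magnification = (range_r + mask_to_detector_g) / range_r
    return zoom(mask.astype(float), magnification, order=1)

# decoded_at_R = decode(coded, projected_mask(mask, R, g)) then yields the
# image focused at range R from the single captured coded image.
```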
  • the proposed system is capable of generating and displaying a complete clear (de-blurred) image from a single captured frame. To also remove distortion from the displayed clear image will initially take a little longer (typically less than 1 s), but, once generated, undistorted, de-blurred frames can be produced continuously at the full camera frame rate.
  • Example methods according to embodiments of the invention can be understood to involve measuring the whole Surface Of Best Focus (SOBF) of incoming light.
  • A lens 110 casts an aerial image (otherwise known as an intermediate image - i.e. one which is not actually projected onto a surface but exists in a space volume) of a scene which is regarded as being effectively at infinity.
  • the nominal plane of focus 120 for this image is at a distance c from the lens, given by the focal length of the lens.
  • an obscuring baffle 130 into which is inserted a coded aperture 140.
  • The aperture 140 is in this example of the MURA type, but the aperture parameters, including pattern type and size, and the values of f and g, will in general be chosen to optimise the system's performance given the constraints imposed by considerations including, for example, the number, size and sensitivity of the detector pixels, the focal length of the lens 110 and the desired field of view of the system.
  • Behind the coded aperture 140, at a range of g from the lens 110, lies a pixellated detector 150.
  • the detector 150 in this example operates in the visual and infra-red wavebands, but detectors operating in other wavebands may of course be used.
  • The detector 150 is operated in a sequence of frame periods.
  • each frame period includes an integration period, during which each pixel of the detector 150 integrates the signal generated by the flux which illuminates it during that period.
  • the stored signal from each pixel in the detector 150 is read out by suitable electronics and stored in computer memory, as an array of numeric data. Between each frame, that array is processed to calculate the shape of the SOBF of the light in the region surrounding the nominal aerial focus 120.
  • The SOBF would be a plane coincident with the detector surface 120 if, for example, a well-corrected optical system were used to image a scene at infinity through a vacuum.
  • By contrast, the SOBF will in general be more complicated if a long-range target is viewed using a system with imperfectly corrected optics, through a distorting atmosphere which varies over time. That latter scenario typifies the kind of case principally envisaged here.
  • This SOBF information is used to calculate the best estimate of an undistorted image, which is then output by the system 100 for use as required.
  • the SOBF information itself may also be output by the system 100, if required.
  • When an optical system casts an image onto a detector, the image will appear to be in focus in those areas of the SOBF which lie in the detector plane.
  • the intensity detected at a given point on such a plane corresponds directly to the intensity of the scene imaged at that point (indeed, that is the nature of imaging devices and is the means by which we can recognise an image as being a representation of the thing of which it is an image).
  • the intensity experienced at a given point on that plane will also partly be influenced by the intensity of the scene in areas immediately surrounding the imaged point.
  • Far from best focus, the intensity of a point on the detector plane eventually begins to be determined by the mean intensity of the whole scene.
  • the SOBF can be understood to be the locus of the points where the imaged intensity most closely corresponds to that of the intensity of the imaged point of the scene.
  • If a detector were moved parallel to the optical axis, from the side of the SOBF nearest to the lens to the side further away, and the changing output from each pixel observed, one would thus expect to see each pixel intensity varying from near the scene mean, through a maximum absolute (not algebraic) deviation from that mean, and then back to the mean again.
  • the point along the optical axis at which maximum absolute deviation for each pixel was seen would represent the "height" of the SOBF along the optical axis at the x and y co-ordinates represented by that pixel.
  • a coded aperture is used to capture simultaneously all of the information from which the images at any desired plane in the region of the SOBF can subsequently be extracted through suitable processing.
  • Figure 3 shows a cross-sectional view of a simple imaging system 200 with a coded aperture 210 and a detector 220.
  • Objects to the left of the coded aperture 210 will form a pseudo-image on the detector 220, but none will be in focus regardless of range.
  • focused images can be generated from the captured data. If it is required to generate an image of the scene focused at plane A, the pattern 230A of the coded aperture 210 as projected onto the detector 220 from the point on the optical axis lying in plane A must be deconvolved from the captured data.
  • An image focused at plane B can be obtained in an equivalent manner, using the pattern 230B of the coded aperture 210 as projected onto the detector 220 from the point on the optical axis lying in plane B.
  • the required projected deconvolution patterns 230A, 230B are obtained by calculation or by measurement (for example during the manufacturing process).
  • Figure 4 shows a comparison between the patterns required to focus the above system at planes A and B respectively.
  • A SOBF will be generated in the region of the plane 120 at a range of c from the lens 110 and may extend some distance in front of and behind that plane - say from a plane 170 at a range of b from the lens 110 to a plane 180 at a range of d from the lens 110, for example.
  • the coded aperture 140 and detector 150 shown in Fig. 1 may be used, in the manner described above, to decode focused images of for example the planes 160, 170, 120, 180, 190 at a to e respectively (or any desired planes for that matter).
  • the "focused images” extracted at each of the required planes may have to be re-scaled appropriately, so that the x and y values consistently represent the same parts of the scene.
  • the intensity of each pixel in each of these images, plotted as a function of distance from the nominal focal plane, will result in a plot which looks something like those in Figs. 5 or 6.
  • the above information is sufficient to generate an image with much of the atmospheric defocus effects reduced or eliminated. To do so, one simply needs to choose, for each pixel, the value where it shows the greatest absolute deviation from the mean of its values in all planes imaged.
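  • A compact sketch of that per-pixel selection rule, assuming a stack of decoded images (one per plane in the f-sequence) is already available; the function name is illustrative:

```python
import numpy as np

def best_focus_composite(stack: np.ndarray):
    """stack has shape (n_planes, H, W): images decoded at each plane.

    For each pixel, pick the value at the plane where it deviates most
    (in absolute terms) from that pixel's mean over all planes; the chosen
    plane indices sample the surface of best focus (SOBF)."""
    mean = stack.mean(axis=0)
    best_plane = np.argmax(np.abs(stack - mean), axis=0)  # (H, W)
    composite = np.take_along_axis(stack, best_plane[None], axis=0)[0]
    return composite, best_plane
```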
  • the rate of variation of the SOBF would normally also be expected to be constrained in the plane normal to the optical axis, which would mean that locally weighted means of the value of the SOBF at a given pixel can be obtained using an appropriately weighted local shaped function.
  • the above functions vary with a number of factors, including system- related parameters such as f-number, diffraction limit and projected pixel size. The effect of those on the required functions could be pre-determined. Other relevant parameters are user selected or fixed, including range to target. Those categories of features are expected to be sufficient to generate functions which can be used to improve the system performance in generating accurate SOBFs. One can also use other means to improve the signal such as applying filtering.
  • Distortion (rather than defocus) is the stretching of the points of the image in the xy plane, rather than along the optical axis.
  • the shape of the SOBF is believed to be affected by distortion, and it is speculated that generating a good enough SOBF enables one to calculate the degree of distortion present and remove it.
  • Removal of distortion is a relatively simple problem, solvable by known techniques (e.g. by creating a mean image over time, noting the significant points in the image, associating those points in a specific frame with the mean frame, for each point noting its shift in x and y, and, for each pixel, interpolating its shift from the nearest points).
  • the nature of the pattern will influence certain aspects of the image generation process, e.g. intensity, resolution, image uniformity, directional spatial frequency response etc. Similarly, some atmospheric effects could be better corrected by some patterns than others.
  • The aperture is not fixed, but rather is generated by means of an SLM (Spatial Light Modulator, essentially a controllable, transmissive LCD); the optimum pattern can therefore be used, improving the overall camera performance.
  • Example embodiments may be sufficiently light to be used in very small, hand-held devices.
  • Example applications involving passive systems include:
  • A laser focusing system is adjusted to compensate for the measured wavefront distortion, providing an active system that is robust against atmospheric distortion. This is accomplished using, for example, an adaptive optics element programmed using the measured wavefront distortion information, or (for low power applications) using the sensor aperture itself (if an SLM is employed). Such a laser focusing system may be useful in, for example, weapons or
  • the system is capable of more than just removing long-range atmospheric blur.
  • Further applications include:

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

An apparatus comprising a coded-aperture mask configured to form a coded image, an imager arranged to record the coded image, a decoder configured to decode the coded image by means of a plurality of decoding masks, and an image processor. The decoder is configured to form a plurality of different sets of first, second and third decoded images by means of a plurality of different sets of first, second and third decoding masks. The image processor identifies focused portions of the decoded images, and forms a composite focused image from the identified focused portions of the second decoded images.
PCT/GB2013/050428 2012-02-22 2013-02-22 Method and apparatus for imaging through a time-varying inhomogeneous medium WO2013124664A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
EP12275019.3 2012-02-22
EP12275019 2012-02-22
GBGB1203075.5A GB201203075D0 (en) 2012-02-22 2012-02-22 A method and apparatus for imaging through a time-varying inhomogeneous medium
GB1203075.5 2012-02-22

Publications (1)

Publication Number Publication Date
WO2013124664A1 (fr) 2013-08-29

Family

ID=47843347

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2013/050428 WO2013124664A1 (fr) 2013-02-22 2013-02-22 Method and apparatus for imaging through a time-varying inhomogeneous medium

Country Status (1)

Country Link
WO (1) WO2013124664A1 (fr)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110691229A (zh) * 2019-08-23 2020-01-14 Kunming University of Science and Technology Hologram compression method, encoder and reconstructed-image output system
EP3805848A1 (fr) * 2019-10-08 2021-04-14 Rosemount Aerospace Inc. Terminal-imaging seeker using a spatial light modulator based coded-aperture mask
US10996104B2 (en) 2017-08-23 2021-05-04 Rosemount Aerospace Inc. Terminal-imaging seeker using a spatial light modulator based coded-aperture mask
WO2022045969A1 (fr) * 2020-08-27 2022-03-03 Ams Sensors Singapore Pte. Ltd. Sensing system and method
WO2022173370A1 (fr) * 2021-02-12 2022-08-18 Ams Sensors Singapore Pte. Ltd. Coded aperture imaging system and method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002059692A1 (fr) * 2000-12-22 2002-08-01 Afsenius Sven-Aake Camera combining the best focused parts of different exposures of an image
US20080259176A1 (en) 2007-04-20 2008-10-23 Fujifilm Corporation Image pickup apparatus, image processing apparatus, image pickup method, and image processing method
US20090279737A1 (en) * 2006-07-28 2009-11-12 Qinetiq Limited Processing method for coded aperture sensor
US20100283868A1 (en) * 2010-03-27 2010-11-11 Lloyd Douglas Clark Apparatus and Method for Application of Selective Digital Photomontage to Motion Pictures
EP2378760A2 (fr) * 2010-04-13 2011-10-19 Sony Corporation Four-dimensional polynomial model for depth estimation based on two-picture matching

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002059692A1 (fr) * 2000-12-22 2002-08-01 Afsenius Sven-Aake Camera combining the best focused parts of different exposures of an image
US20090279737A1 (en) * 2006-07-28 2009-11-12 Qinetiq Limited Processing method for coded aperture sensor
US20080259176A1 (en) 2007-04-20 2008-10-23 Fujifilm Corporation Image pickup apparatus, image processing apparatus, image pickup method, and image processing method
US20100283868A1 (en) * 2010-03-27 2010-11-11 Lloyd Douglas Clark Apparatus and Method for Application of Selective Digital Photomontage to Motion Pictures
EP2378760A2 (fr) * 2010-04-13 2011-10-19 Sony Corporation Four-dimensional polynomial model for depth estimation based on two-picture matching

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Lucky sub-frame selection using phase diversity", 7 July 2009 (2009-07-07)
S. C. Woods, P. J. Kent, J. G. Burnett: "Lucky Imaging Using Phase Diversity Image Quality Metric", Paper B3, 6th EMRS DTC Technical Conference, 2009

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10996104B2 (en) 2017-08-23 2021-05-04 Rosemount Aerospace Inc. Terminal-imaging seeker using a spatial light modulator based coded-aperture mask
CN110691229A (zh) * 2019-08-23 2020-01-14 Kunming University of Science and Technology Hologram compression method, encoder and reconstructed-image output system
CN110691229B (zh) * 2019-08-23 2021-10-22 Kunming University of Science and Technology Hologram compression method, encoder and reconstructed-image output system
EP3805848A1 (fr) * 2019-10-08 2021-04-14 Rosemount Aerospace Inc. Terminal-imaging seeker using a spatial light modulator based coded-aperture mask
WO2022045969A1 (fr) * 2020-08-27 2022-03-03 Ams Sensors Singapore Pte. Ltd. Sensing system and method
WO2022173370A1 (fr) * 2021-02-12 2022-08-18 Ams Sensors Singapore Pte. Ltd. Coded aperture imaging system and method

Similar Documents

Publication Publication Date Title
US9142582B2 (en) Imaging device and imaging system
US8432479B2 (en) Range measurement using a zoom camera
US9383199B2 (en) Imaging apparatus
US8305485B2 (en) Digital camera with coded aperture rangefinder
US7260251B2 (en) Systems and methods for minimizing aberrating effects in imaging systems
JP5274623B2 (ja) Image processing device, imaging device, image processing program, and image processing method
US8436912B2 (en) Range measurement using multiple coded apertures
US20060050409A1 (en) Extended depth of field using a multi-focal length lens with a controlled range of spherical aberration and a centrally obscured aperture
WO2010016625A1 (fr) Image photographing device, distance computing method for the device, and focused image acquiring method
EP2110702B1 (fr) Compact optical zoom with enhanced depth of field through wavefront coding using a phase mask
US20110267485A1 (en) Range measurement using a coded aperture
KR20090004428A (ko) Optical design method and system, and imaging element using an optical element having optical aberration
KR20140057190A (ko) Focus error estimation in images
JP6594101B2 (ja) Image processing apparatus, image processing method and image processing program
WO2013124664A1 (fr) Method and apparatus for imaging through a time-varying inhomogeneous medium
CN107209061B (zh) Method for determining the complex amplitude of a scene-dependent electromagnetic field
US10264164B2 (en) System and method of correcting imaging errors for a telescope by referencing a field of view of the telescope
JP2017208642A (ja) Imaging device, imaging method and imaging program using compressed sensing
JP2020030569A (ja) Image processing method, image processing apparatus, imaging apparatus, lens apparatus, program, and storage medium
CN106464808B (zh) Image processing apparatus, image pickup apparatus, image processing method, image processing program, and storage medium
EP0623209A4 (en) Multifocal optical apparatus.
US20240135508A1 (en) Image processing method, image processing apparatus, image processing system, imaging apparatus, and storage medium
Chen et al. Image restoration based on multiple PSF information with applications to phase-coded imaging system
Scrymgeour et al. Advanced Imaging Optics Utilizing Wavefront Coding.
Colburn et al. Mitigating metalens aberrations via computational imaging

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13708228

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13708228

Country of ref document: EP

Kind code of ref document: A1