WO2009156329A1 - Image deblurring and denoising system, device and method - Google Patents

Image deblurring and denoising system, device and method

Info

Publication number
WO2009156329A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
reference image
deblurring
denoising
exposure time
Prior art date
Application number
PCT/EP2009/057602
Other languages
French (fr)
Inventor
Abdessamad Falhi
Peter Seitz
Original Assignee
CSEM Centre Suisse d'Electronique et de Microtechnique SA - Recherche et Développement
Priority date
Filing date
Publication date
Application filed by CSEM Centre Suisse d'Electronique et de Microtechnique SA - Recherche et Développement
Publication of WO2009156329A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95 Computational photography systems, e.g. light-field imaging systems
    • H04N23/951 Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/68 Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/73 Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/741 Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors

Definitions

  • the present invention relates to the field of imaging in general, and to the field of correcting for blur and noise in imaging, in particular.
  • Figure 1A is a schematic block-diagram illustration of an image deblurring and denoising system, according to embodiments of the disclosed technique
  • Figure 1B is a schematic block-diagram illustration of an image deblurring and denoising device, according to embodiments of the disclosed technique
  • Figure 2 is a schematic illustration of an image deblurring and denoising method, according to embodiments of the disclosed technique.
  • a handheld camera, a camera mounted on the gimbals of a reconnaissance aircraft, or a surveillance camera mounted on a mechanically unsteady support
  • a static object, e.g., a building, a work of art such as a statue, or a parking space
  • a non-static object, e.g., a moving vehicle or a walking person
  • both the non-static object and the non-static imager may move relative to one another during the image acquisition process.
  • the term "static" as used herein refers to the stable geometric position of a physical entity relative to world coordinates.
  • the lighting conditions of the object to be imaged may have an impact on the total exposure time.
  • the darker the object to be imaged, the longer the total exposure time that may be required in order to collect a sufficiently high number of photons for visually pleasing image quality and/or to enable digital image processing tasks. Accordingly, when acquiring images of objects, for example, under night-vision/low-light illumination conditions requiring a prolonged total exposure time, the probability increases that the abidance time of an object will be short relative to the prolonged total exposure time, thus increasing the probability of the introduction of blur.
  • the imager includes a plurality of photosensitive elements or pixels, acquiring for a predetermined time respective portions of the image of the object.
  • Various techniques may be employed to avoid, reduce or remove blurring artifacts of the image. These techniques may be optoelectronic and/or opto-mechanical and/or signal processing (i.e., filter) -based.
  • optoelectronic and/or optomechanical techniques may employ an accelerometer and/or a gyroscope to detect the displacement of the imager from a first to a second position, and opto-mechanical actuators may be employed to return the imager to the first position.
  • Optomechanical techniques require considerable space, and it is difficult to implement them in small optical devices such as, for example, handheld cameras.
  • Patent application EP 1906232 entitled “Imaging device with optical image stabilisation” to Suzuki Yusuke discloses an opto-mechanical technique to avoid blurring, wherein an imaging device compensates for shaking by relatively moving an imaging optical system and an imaging element in directions perpendicular to an optical axis direction.
  • the imaging device includes a first actuator that relatively moves the imaging optical system and the imaging element in a first direction perpendicular to the optical axis direction; a second actuator that relatively moves the imaging optical system and the imaging element in a second direction which is perpendicular to the optical axis direction and which intersects the first direction; and a first moving member to which the second actuator is attached, the first moving member that is moved together with the second actuator in the first direction by an action of the first actuator.
  • Patent application WO9000334 entitled “Improvements in or relating to image stabilisation” and patent application CA1306297 entitled “Image stabilization”, both to Blissett et al. disclose a method of and apparatus for electronically stabilising incoming video data against unwanted or undesirable sensor movement. By digitising each or selected pixels of a frame, and comparing them against stored values of the pixels, image movement and sensor movement can be detected. The frame can then be remapped to provide no or only wanted movement of the image, e.g. in panning or in image following, and the processed video data can then be fed to a video recorder or monitor.
  • Patent applications GB2307133 and GB2307371 entitled “Video camera image stabilisation system” to Smith et al. disclose an image stabilisation system comprising means for counteracting a rotation of an image caused by a camera rotating about its optical axis.
  • An image is digitised by a unit and convolved with two filter kernels by two convolution units to enhance the vertical and horizontal components of edges in the image.
  • a histogram of the orientations of the edges is then correlated with an edge orientation histogram of a previous image to determine a relative degree of rotation which is then used to correct the rotated image for display on a display unit.
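The edge-orientation-histogram correlation described above can be sketched briefly. This is our own simplification, assuming numpy: image gradients stand in for the two edge-enhancing convolution kernels, and the bin count and function names are illustrative choices, not the patent's.

```python
import numpy as np

def edge_orientation_histogram(img, bins=36):
    # Gradients stand in for the two edge-enhancing convolution kernels.
    gy, gx = np.gradient(img.astype(float))
    angles = np.degrees(np.arctan2(gy, gx)) % 360.0
    weights = np.hypot(gx, gy)          # stronger edges contribute more
    hist, _ = np.histogram(angles, bins=bins, range=(0.0, 360.0), weights=weights)
    return hist

def estimate_rotation(hist_ref, hist_cur):
    """Circularly correlate two orientation histograms; the best circular
    shift corresponds to the relative rotation between the two images."""
    bins = len(hist_ref)
    scores = [float(np.dot(hist_ref, np.roll(hist_cur, -s))) for s in range(bins)]
    return int(np.argmax(scores)) * 360.0 / bins   # rotation in degrees
```

A histogram rotated by a whole number of bins is recovered exactly; real images would need sub-bin interpolation for finer angular resolution.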
  • unknown states may be introduced at the boundaries of these images.
  • patent application GB2316255 entitled “Improvements in image stabilisation” to Mansbridge et al. discloses a method wherein a second frame is compared to the first frame and the result is the overwriting of the first frame with the common parts found in the second frame.
  • a similar approach is disclosed in patent application DE19654573 entitled “Image stabilisation circuit with charge-coupled device” to Hwang but with the use of a motion detector device to find the shift of the second frame compared to the first frame, requiring additional means to use the motion detector device.
  • Respective limits are set to the amount of horizontal and vertical picture shift that can be applied, and when a correction limit is reached, an abrupt change of the respective correction is made to a preset value within the relevant limits.
  • GB2411310 entitled “Image stabilisation using field of view and image analysis” to Sablak teaches a video image stabilisation system for removing unwanted camera shake.
  • a processing device receives signals indicating both changes in the camera field of view (FOV) as well as plural captured images, for example first and second images; the field of view may be adjusted by camera panning, tilt or focal changes.
  • An image stabilisation adjustment is then performed using both the FOV changes, i.e. intending camera movements or focus changes, and by analysis of the images themselves.
  • the system discriminates between intended, wanted image changes caused by camera movements such as panning or tilting, and unintended image changes due to camera shake.
  • the final image portions are adjusted to remove shake.
  • Patent application GB2407724 entitled “Video stabilisation dependant on the path of movement of an image capture device” to Pilu discloses a video stabilization scheme which uses a dynamic programming algorithm in order to determine a best path through a notional stabilization band defined in terms of a trajectory of an image capture element.
  • the trajectory of the image capture element is used to define at least two boundaries defining the stabilization band which represents the domain in which compensatory translational motion of an image area is allowed.
  • the best path is used in order to produce an optimally stabilised video sequence within the bounds defined by the image capture area of the image capture element.
  • a system for stabilising video signals generated by a video camera which may be mounted in an unstable manner. It includes a digital processing means for manipulation of each incoming image that attempts to overlay features of the current image onto similar features of a previous image.
  • a mask is used that prevents parts of the image that are likely to cause errors in the overlaying process from being used in the calculation of the required movement to be applied to the image.
  • the mask may include areas where small movements of the image have been detected, and may also include areas where image anomalies including excess noise have been detected.
  • the invention also discloses means for dealing with wanted movements of the camera, such as pan or zoom, and also discloses means for dealing with the edges of video signals as processed by the system.
  • Patent application CN1655591 entitled “Method and apparatus for lowering image noise under low illumination” to Wang et al. discloses an image processing method and a device to reduce image noise and improve image quality under low illuminance, wherein the method comprises the following steps: determining whether the shooting environment is in darkness; if it is, switching the output mode from color image to gray image; and making low-pass filtering to the switched gray image.
  • smoothing filters introduce blur in images.
  • Patent application TW235007 entitled "A method of image processing of low illumination and image processor” to Hou discloses a method of image processing of low illumination adapted to a CMOS image sensor.
  • the noise of the CMOS image sensor can be reduced by the method of the middle-value filtering calculation.
  • Patent application KR20030068738 entitled "Apparatus for processing digital image, removing noise under low illumination environment" to Jang Cheet et al. discloses a video decoder receiving images and a noise remover receiving the output video signal of the video decoder, executing motion-adaptive time-area filtering on a time-axis area and executing contour-conservation filtering on a spatial area, considering color brightness and characteristics in the R, G, and B channels.
  • a video encoder receives the output signal of the noise remover for converting the output signal into a composite video signal.
  • Embodiments of the disclosed technique disclose an image deblurring and denoising method.
  • the disclosed technique includes acquiring at least one intermediate image during a total exposure time.
  • the disclosed technique includes providing a data representation of the at least one intermediate image.
  • the disclosed technique includes indexing the at least one intermediate image.
  • the disclosed technique includes selecting a reference image from the at least one intermediate image.
  • the disclosed technique includes determining rectification parameters for each of the at least one non-reference image i.
  • the disclosed technique includes rectifying the at least one non-reference image.
  • the disclosed technique includes merging the reference image and the at least one rectified non-reference image.
  • the disclosed technique includes outputting the merged image.
  • the procedure of image merging is accomplished by weighted addition and averaging of the at least one intermediate image.
  • the procedure of image rectifying is accomplished by performing rectification parameter estimation of cᵢ(k) related to the acquisition of the at least one non-reference image i.
  • the procedure of parameter estimation is performed by at least one of the following techniques: cross-correlation, multi-resolution-based correlation, and distortion compensation.
  • the procedure of acquiring at least one intermediate image includes integrating photons detected by the imager over sequential intermediate exposure times Tᵢ of the total exposure time Tₑ, associating the detected photons with sequential time intervals of the total exposure time Tₑ, wherein the integrated photons respective of each time interval are converted into corresponding image-data.
  • the rectification parameters refer to either one or both the linear and the rotational shift of the at least one non-reference image.
  • the first of the at least one intermediate image is selected as the reference image, wherein a subsequent non-reference image being acquired by the imager and acquisition unit is used for determining the rectification parameters and merged with the first intermediate image.
  • An image deblurring and denoising device includes a) a central controller; b) an acquisition unit; c) memory; d) a memory controller; e) an image rectifier; and f) an image merger; wherein the central controller, the acquisition unit, the memory, the memory controller, the image rectifier and the image merger are operatively coupled with each other and adapted to perform at least some of the procedures according to any of the preceding claims.
  • the image deblurring and denoising device is free of moveable parts.
  • it is the object of the invention to disclose an alternative image deblurring and denoising system, device and method, which are adapted to detect photons of an object during an exposure time and to provide an image output representing the detected photons.
  • the image deblurring and denoising system, device and method is adapted to at least reduce blurring artifacts that might be introduced due to changes in the position of the imager relative to the object during the total exposure time.
  • the deblurring and denoising system, device and method according to embodiments of the disclosed technique are adapted to associate photons detected by the imager with a plurality of corresponding time intervals/intermediate exposure times Tᵢ within the total exposure time, and to store information representing at least one intermediate image, respective of the photons acquired during each time interval.
  • at least one intermediate image is stored during the total exposure time.
  • this is referred to herein as "acquiring at least one intermediate image" or "acquiring at least one intermediate image during the total exposure time".
  • a reference image of the at least one intermediate image may be selected. Deviations of at least one non-reference image of the at least one intermediate image from the reference image are determined.
  • the at least one non-reference image may be modified according to the reference image. Further, an output image, which is deblurred and associated with both the reference image and the at least one non-reference image may be outputted at an output unit. For example, absolute or weighted pixels values of the reference image and the at least one non-reference image may be merged, (e.g., summed and averaged), and outputted as the output image.
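The select/rectify/merge flow just described can be sketched end-to-end. This is a minimal illustration under the assumption of purely translational, circular shifts and equal merging weights; the function names are ours, and the FFT-based shift estimator is one common choice rather than a method prescribed by the text.

```python
import numpy as np

def estimate_shift(ref, img):
    """Integer displacement of img relative to ref via FFT cross-correlation."""
    c = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)).real
    peak = np.unravel_index(np.argmax(c), c.shape)
    # indices past the midpoint correspond to negative shifts
    return tuple(int(p) - s if p > s // 2 else int(p) for p, s in zip(peak, c.shape))

def deblur_denoise(frames, ref_index=0):
    """Select a reference, rectify each non-reference frame towards it,
    then merge by summing and averaging the pixel values."""
    ref = frames[ref_index].astype(float)
    acc, count = ref.copy(), 1
    for i, frame in enumerate(frames):
        if i == ref_index:
            continue
        sy, sx = estimate_shift(ref, frame)                            # deviation
        acc += np.roll(frame.astype(float), (-sy, -sx), axis=(0, 1))   # rectify + sum
        count += 1
    return acc / count                                                 # average
```

With exact circular shifts the rectified frames coincide with the reference, so the merged output equals the reference while the averaging suppresses independent noise.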
  • entities and/or features such as, for example, the image deblurring and denoising device according to embodiments of the disclosed technique, may be indicated hereinafter as being located in a single geographical and/or architectural location, these entities and/or features may be dispersed and/or parsed over a plurality of geographical and/or architectural locations of an image deblurring and denoising system.
  • a deblurring and denoising system 100 may include an imager 110, an image deblurring and denoising device 120, an output 130, and a power supply 140, at least some of which may be operatively coupled with each other to enable the operation of image deblurring and denoising system 100.
  • Imager 110 may be embodied, for example, by a complementary metal-oxide semiconductor (CMOS) image sensor, a charge-coupled device (CCD) image sensor, a forward-looking infrared (FLIR) imager, a thermal imager, or any combination of the above.
  • Output 130 may include, for example, a cathode ray tube (CRT) monitor or display unit, a liquid crystal display (LCD) monitor or display unit, a screen, a monitor, a speaker, or other suitable display unit, output device or storage unit (not shown).
  • Power supply 140 may be embodied, for example, by a battery, or an external power source.
  • imager 110 acquiring at least one image of an object is adapted to convert during a total exposure time T e photons 105 respective of the imaged object detected by imager 110 into data representing the imaged object (hereinafter: image-data). More specifically, photons 105 detected by imager 110 are integrated over and associated with sequential time intervals of the total exposure time T e , wherein the integrated photons 105 respective of each time interval are converted into corresponding image-data. Thus, the image-data represents at least one intermediate image acquired by imager 110 during the total exposure time T e . Associating detected photons 105 with a plurality of time intervals to obtain image-data representing the at least one intermediate image, respectively, may be accomplished, e.g., by imager 110 and/or by image deblurring and denoising device 120.
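The partition of the total exposure time into sequential intermediate exposure times can be sketched as follows; the noise-free flux model is our simplification, and the tapered schedule (intervals small at the start, growing, then shrinking again) is one possibility the text mentions.

```python
import numpy as np

def split_exposure(total_exposure, n_frames, taper=False):
    """Partition the total exposure time into intermediate exposure times.
    With taper=True the intervals start small, grow, and shrink again."""
    if not taper:
        return np.full(n_frames, total_exposure / n_frames)
    ramp = np.minimum(np.arange(1, n_frames + 1), np.arange(n_frames, 0, -1))
    return total_exposure * ramp / ramp.sum()

def integrate_frames(flux, intervals):
    """Photons integrated over each interval become one intermediate image;
    together the frames collect the same photon budget as one long exposure."""
    return [flux * t for t in intervals]
```

Because the intervals sum to the total exposure time, summing the intermediate images recovers the photon count of a single long exposure, while each short frame individually sees far less motion.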
  • image deblurring and denoising device 120 is provided with the image-data, whereupon image deblurring and denoising device 120 selects from said at least one intermediate image represented by the image-data a reference image, e.g., according to a predetermined criterion, randomly, or pseudo-randomly, such that the non-selected image-data represents at least one non-reference image.
  • the at least one non-reference image is then analyzed, e.g., by image deblurring and denoising device 120, with regard to potential deviations compared to the reference image according to at least one criterion, which may be based, for example, on a statistical metric known in the art.
  • the values of each pixel of the at least one non-reference image may be compared with the values of the corresponding pixels of the reference image; or, for example, the average or median value of a group of pixels of the at least one non-reference image may be compared with the average or median value of the corresponding group of pixels of the reference image.
  • Pixel values of the at least one non-reference image may then be rectified according to the deviations determined by image deblurring and denoising device 120.
  • pixel values of the at least one non-reference image may be rectified only if the determined deviation exceeds a deviation-threshold.
  • image deblurring and denoising device 120 is adapted to merge the at least one rectified image and the reference image to an output image of the object. Merging of the at least one rectified image and the reference image may be accomplished, for example, by summing up the respective pixel values of the at least one rectified image and the reference image, and by averaging each of the summed up pixel values. Other merging techniques may be implemented, as outlined herein below.
  • image deblurring and denoising device 120 may include a central controller 121, a memory controller 122, an acquisition unit 123, an image rectifier 124 (implemented, e.g., by a Multiply-Accumulate (MAC) block), an image merger 125 and a memory 126, all of which may be operatively coupled with each other to enable operation of image deblurring and denoising device 120.
  • Memory 126 may be embodied, for example, by a Random Access Memory (RAM), a Read Only Memory (ROM), a Dynamic RAM (DRAM), a Synchronous DRAM (SD-RAM), a flash memory, a volatile memory, a non-volatile memory, a cache memory, a buffer, a short term memory unit, a long term memory unit, or other suitable memory units or storage units.
  • Central controller 121 and/or memory controller 122 may be embodied, for example, by a Central Processing Unit (CPU), a Digital Signal Processor (DSP), one or more processor cores, a microprocessor, a host processor, a controller, a plurality of processors or controllers, a chip, a microchip, one or more circuits, circuitry, a logic unit, an Integrated Circuit (IC), an Application-Specific IC (ASIC), or any other suitable multi-purpose or specific processor or controller.
  • central controller 121 may execute instructions (not shown) resulting in the operation of image rectifier 124, image merger 125, acquisition unit 123 and memory controller 122.
  • memory controller 122 and/or acquisition unit 123 and/or image rectifier 124 and/or image merger 125 may be software-implemented, hardware-implemented, hybrid or combined software- hardware-implemented.
  • Acquisition unit 123 is adapted to gather image-data directly from imager 110 or, alternatively, from a storage device (not shown), which may be, for example, a mass storage device.
  • Memory 126 is adapted to store image-data, whilst memory controller 122 is adapted to read the image-data from, and write it to, memory 126.
  • Image rectifier 124 is adapted to determine, e.g., as known in the art, deviations between the at least one non-reference image and the reference image, and if required to rectify, e.g., as known in the art, the at least one non-reference image.
  • Image merger 125 is adapted to merge the at least one rectified non- reference image with the reference image. It should be noted that in embodiments of the disclosed technique, only a selection of the at least one rectified non-reference image may be merged with the reference image, e.g., according to a predetermined criterion.
  • Image merger 125 may include, for example, combinatory adders (not shown) for summing data and shifts (not shown) for the division to determine averages of pixel values.
  • an image deblurring and denoising method may be implemented as follows, e.g., by image deblurring and denoising system 100.
  • the method may include, for example, the procedure of acquiring a number N ≥ 1 of images (i.e., the at least one intermediate image), e.g., by imager 110 and acquisition unit 123, during a total exposure time Tₑ that may be predetermined or determined adaptively.
  • the at least one intermediate image may be represented by more pixels than the output image to avoid edge effects.
  • the procedure of acquiring the at least one intermediate image may occur in real time, for example, by detecting the photons respective of each of the plurality of time intervals and converting the photons to image-data directly at imager 110.
  • the at least one intermediate image may then be stored in memory 126.
  • the term "real-time" also encompasses the meaning of the terms "substantially in real-time" and/or "at least approximately in real-time".
  • the method may additionally include, for example, the procedure of providing a data (e.g., matrix) representation for each of the at least one intermediate image, e.g., in memory 126.
  • the pixels of each of the at least one intermediate image may, for example, be represented by w*h matrices, wherein w represents the width of the image in pixels and h the height of the image in pixels.
  • the method may further include the procedure of indexing the at least one intermediate image with a number i, wherein 1 ≤ i ≤ N.
  • N is a configurable parameter that is limited by the total exposure time T e , the computation power of image deblurring and denoising system 100 and the capacity required by memory 126 for storing each of the at least one intermediate image and for deblurring and denoising.
  • the total exposure time T e is at least determined by the time required to acquire the at least one intermediate image.
  • the intermediate exposure time Tᵢ could be small at the beginning, grow longer with the number of images acquired, and decrease again towards the end of the total exposure time Tₑ.
  • the method may then include, for example, the procedure of selecting a reference image from the at least one intermediate image, e.g., by image rectifier 124, thus obtaining the at least one non-reference image, the number of which is N-1.
  • the method may subsequently include, for example, the procedure of performing parameter estimation of cᵢ(k) related to the acquisition of the at least one non-reference image i, to subsequently rectify the at least one non-reference image (procedure 260).
  • the term "parameter determination" also encompasses the meaning of the term "parameter estimation", and vice versa.
  • image rectifier 124, for example, estimates for each of the N-1 non-reference images the set of parameters cᵢ(k) considered to describe the sought transformation of the acquired image into the reference image, so that the rectified image and the reference image are closest according to one of the known matching measures for two images, as described for example in "Digital Image Processing", second edition, Rafael C. Gonzalez and Richard E. Woods.
  • a parameter may for example refer to a two-dimensional lateral shift only, implying that each of the at least one non-reference image can be obtained by laterally displacing the reference image by the appropriate vector.
  • this corresponds to a commonly encountered situation of an unsteadily mounted or handheld camera, whose angular orientation with regard to world coordinates is kept at least approximately constant.
  • a two-dimensional lateral shift plus an arbitrary rotation may occur, corresponding to a situation wherein, for an unsteadily mounted or handheld camera, the angular orientation is additionally changed.
  • the parameters may thus refer to either one or both the linear or the rotational shift of the at least one non-reference image.
  • optical distortions may be introduced by imaging optics, for example, by a fish-eye lens and/or diffraction and/or aberration.
  • the parameter set cᵢ(k) includes model parameters of the imaging optics as well, in order to enable a rectification of the at least one non-reference image.
  • the method may then include the procedure of merging the at least one non-reference image with the reference image, e.g., by image merger 125 to obtain the output image.
  • the pixels respective of the at least one rectified image as well as of the reference image may be weighted, summed up and averaged according to the weights to obtain the output image.
  • the at least one rectified non-reference image and the reference image contribute to the outputted output image according to different or equal proportions.
  • the limits of the output image may be restricted to the picture field that is common to all of the acquired images, i.e., the picture field that corresponds to the part of the scene that has been imaged by all of the acquired images.
  • the method may optionally include, for example, the procedure of performing brightness and/or contrast adaption, e.g., as known in the art, on the output image, if the brightness and/or the contrast of the output image are below respective threshold values.
  • contrast stretching or histogram equalization may be employed, as described for example in "Digital Image Processing", second edition, Rafael C. Gonzalez and Richard E. Woods, Chapter 3: Image Enhancement in the Spatial Domain.
  • the indexing of the at least one intermediate image may be performed as follows: storing the first one of the at least one intermediate image and selecting it as the reference image. The reference image is then used to initialize the output image, i.e., the output image is initialized with the same data as the reference image. It should be noted that in some embodiments of the disclosed technique, no additional images are acquired if the noise level of the reference image is below a predetermined threshold.
  • the second or subsequent non-reference image is then rectified with these parameters, optionally weighted, and then added to the output image.
  • the second/subsequent non-reference image can then be discarded, and the third non-reference image is acquired and processed analogously to the second non-reference image.
  • image-data respective of the third image with index i+1 overwrites the image-data respective of the second non-reference image.
  • image-data respective of at most three images are in memory 126 during the implementation of the image deblurring and denoising method.
  • Employing the method for reducing memory produces an output image that is progressively improved, i.e., the signal-to-noise ratio is constantly increased with each additional acquired and processed image i.
  • the output image can therefore be inspected continuously, either by a user or by determining a value for the signal-to-noise ratio, while the output image is being improved with the acquisition of each sequentially acquired and processed non-reference image.
  • the procedures employed for the image deblurring and denoising method are stopped, e.g., by the user or automatically by image deblurring and denoising system 100.
  • the last image output prior to stopping the procedures is the final result of the image deblurring and denoising method. It should be noted that when employing the method of reducing memory, the number N of acquired images may remain unknown prior to and throughout the implementation of the image deblurring and denoising method.
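The memory-reducing variant amounts to a running average: only the output image and the current frame need to live in memory, the output improves monotonically, and acquisition can stop once it is good enough. A sketch, assuming frames arrive already rectified; `good_enough` is a stand-in for the user's or system's signal-to-noise check.

```python
import numpy as np

def incremental_merge(frames, good_enough=None):
    """Fold each newly acquired (and already rectified) frame into a running
    average, then discard it; at most the reference/output and the current
    frame coexist in memory."""
    it = iter(frames)
    output = next(it).astype(float)   # the reference image initialises the output
    n = 1
    for frame in it:
        output = (output * n + frame.astype(float)) / (n + 1)
        n += 1                        # the frame can now be overwritten
        if good_enough is not None and good_enough(output):
            break                     # last output prior to stopping is final
    return output, n
```

The running update `(output * n + frame) / (n + 1)` reproduces the batch average exactly, so stopping early simply yields the average of the frames processed so far; N need not be known in advance.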
  • a first method for realizing the rectification of the at least one non-reference image according to this invention is the use of cross-correlation to obtain the lateral shift components (sx, sy), in x and y, between the at least one non-reference image and the reference image.
  • shift components can be obtained by determining the Euclidean distance between an image and a selected sliding template window of the image. If the Euclidean distance between the patterns of the sliding template window and the non-reference image is near or equals 0, the content of the image matches the content of the selected sliding template.
  • a window of size w*h of the reference image is selected as a template and cross-correlated with the at least one non-reference image, i.e., each pixel p(x,y) of the template window is multiplied with the pixel q(x,y) at the same position and the products are summed up, resulting in a term c(u,v) indicating the cross-correlation of the features of the template at positions u and v.
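As an illustration of the cross-correlation term c(u,v) above, the following brute-force sketch slides a central template window of the reference image over a non-reference image and reports the lateral shift components (sx, sy) at the correlation maximum. It is a hypothetical example: the central window placement, the window size and the unnormalized correlation measure are illustrative choices, not taken from the patent.

```python
import numpy as np

def cross_correlate_shift(reference, image, win=16):
    """Estimate the lateral shift (sx, sy) of `image` relative to `reference`
    by brute-force cross-correlation: each template pixel is multiplied with
    the image pixel at the same window position and the products are summed,
    giving c(u, v); the maximum of c marks the matching position."""
    h, w = reference.shape
    ty, tx = (h - win) // 2, (w - win) // 2        # central template window
    template = reference[ty:ty + win, tx:tx + win].astype(np.float64)
    best, best_uv = -np.inf, (0, 0)
    for v in range(h - win + 1):
        for u in range(w - win + 1):
            c = np.sum(template * image[v:v + win, u:u + win])
            if c > best:
                best, best_uv = c, (u, v)
    u, v = best_uv
    return u - tx, v - ty                          # shift components (sx, sy)
```

In practice a normalized correlation measure would be preferred, since the plain product sum can be biased by bright image regions.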
  • a method for determining the lateral displacement between the reference image and the at least one non-reference image for rectifying the latter may include employing a multi-resolution (or pyramid) representation of all the images, as described for example by E. H. Adelson et al. in "Pyramid methods in image processing", RCA Engineer, Vol. 29, No. 6, pp. 33-41, Nov/Dec 1984.
  • the operator LP is a general low-pass filter satisfying the requirements for determining the multi-resolution representation as described in the reference given above.
  • the estimates (x_k, y_k) of the displacement at the descending pyramid levels k are recursively improved by determining the maximum of the cross-correlation between q_k and p_k only for the nine neighbouring values (x_k + n, y_k + m), where the integers n and m take the values -1, 0 and +1.
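The coarse-to-fine refinement over the nine neighbouring values can be sketched as below. This is a hedged illustration: the 2x2 box filter is one simple choice of the low-pass operator LP, and cross-correlation with circular shifts stands in for the matching measure.

```python
import numpy as np

def _downsample(img):
    """Simple low-pass (2x2 box filter) and decimation; any filter meeting
    the pyramid requirements of Adelson et al. could be substituted."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                   + img[0::2, 1::2] + img[1::2, 1::2])

def _score(ref, img, x, y):
    """Cross-correlation of ref with img circularly shifted by (x, y)."""
    return float(np.sum(ref * np.roll(img, (y, x), axis=(0, 1))))

def pyramid_shift(reference, image, levels=3):
    """Estimate the correction shift (x, y) to apply to `image` so that it
    best matches `reference`, refined coarse-to-fine down the pyramid."""
    refs, imgs = [reference.astype(np.float64)], [image.astype(np.float64)]
    for _ in range(levels - 1):
        refs.append(_downsample(refs[-1]))
        imgs.append(_downsample(imgs[-1]))
    x = y = 0
    for k in range(levels - 1, -1, -1):            # descend the pyramid
        if k < levels - 1:
            x, y = 2 * x, 2 * y                    # propagate estimate to finer level
        # refine over the nine neighbours (x+n, y+m), n, m in {-1, 0, +1}
        x, y = max(((x + n, y + m) for n in (-1, 0, 1) for m in (-1, 0, 1)),
                   key=lambda c: _score(refs[k], imgs[k], c[0], c[1]))
    return x, y
```

Each level only ever evaluates nine candidate shifts, which is what makes the multi-resolution search far cheaper than an exhaustive full-resolution correlation.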
  • Distortion compensation
  • rectifying the at least one non-reference image may be accomplished by employing a method of distortion compensation, also known as the image warping method.
  • This method may be applied to detect many types of distortions (e.g., translation and/or rotation and/or scaling) between two acquired intermediate images, and may even be employed to correct, or at least reduce, non-linear aberrations of the optical system (not shown) that may be introduced by imager 110.
  • a synopsis of the use of the image warping method can be found in "A review of image warping methods", C. A. Glasbey, Journal of Applied Statistics, 25, 155-171. The method is based on the process of finding the transformation affecting the at least one non-reference image by comparing it with the reference image.
  • This process is applied to some interest points in both the at least one non-reference image and the reference image to find the transformation.
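As one hypothetical realization of this interest-point approach, an affine model (covering translation, rotation and scaling, plus shear) can be fitted to matched point pairs by linear least squares; the fitted transform could then be inverted to warp the non-reference image onto the reference image. The point matching itself is assumed to have been done already, and the affine parameterization is an illustrative choice rather than the patent's prescribed model.

```python
import numpy as np

def estimate_affine(ref_pts, img_pts):
    """Least-squares affine transform mapping reference interest points onto
    the corresponding points found in the non-reference image:
        [x', y'] = A @ [x, y] + t
    The 6 parameters cover translation, rotation, scaling and shear."""
    ref_pts = np.asarray(ref_pts, dtype=np.float64)
    img_pts = np.asarray(img_pts, dtype=np.float64)
    n = len(ref_pts)
    # Build the design matrix: rows alternate the x' and y' equations.
    M = np.zeros((2 * n, 6))
    M[0::2, 0:2] = ref_pts     # x' = p0*x + p1*y + p4
    M[0::2, 4] = 1.0
    M[1::2, 2:4] = ref_pts     # y' = p2*x + p3*y + p5
    M[1::2, 5] = 1.0
    b = img_pts.reshape(-1)    # interleaved [x0', y0', x1', y1', ...]
    p, *_ = np.linalg.lstsq(M, b, rcond=None)
    A = np.array([[p[0], p[1]], [p[2], p[3]]])
    t = np.array([p[4], p[5]])
    return A, t
```

Three non-collinear point pairs suffice to determine the six parameters exactly; with more pairs the least-squares fit averages out localization noise in the interest points.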
  • Embodiments of the disclosed technique are applicable even when substantial noise is affecting the imaging process under low illumination conditions, wherein visually highly perceptible noise may affect images because of the "Poissonian" nature of the photonic noise, which may be perceptually significant in the composition of an image.
  • embodiments of the disclosed technique may thus be used in handheld cameras in low-illumination conditions of the scene, such as in mobile phone cameras.
  • Embodiments of the disclosed technique may also overcome imaging artifacts that might otherwise be introduced due to changes in magnification and/or changes in perspective.
  • deblurring and denoising of images according to embodiments of the disclosed technique is accomplished in a manner that is free of moveable parts.
  • no adjustment in the optical path of imager 110 is necessary to enable deblurring and denoising, i.e., the optical path of imager 110 may remain unadjusted.
  • embodiments of the disclosed technique are operable in an accelerometer-free manner, i.e., no independent means for determining the relative motion between camera and scene, such as accelerometers, are required, as information about the relative displacements and distortions is extracted from the acquired sequence of the at least one intermediate image.
  • Embodiments of the disclosed technique may be employed for deblurring and denoising, even though the at least one intermediate image may contain noise artifacts due to the low illumination of the scene.
  • the output image may be free of reconstruction artifacts, i.e., spurious high frequencies and reconstruction artifacts may not be present in the output image.
  • rectifying and/or merging of the at least one intermediate image may be accomplished during the acquisition of at least one intermediate image.
  • rectifying and merging may be performed after the completion of the acquisition of all of the at least one intermediate image during the total exposure time Te.
  • embodiments of the disclosed technique may be implemented, for example, using a machine-readable medium or article (embodied, e.g., by image deblurring and denoising system 100 or image deblurring and denoising device 120) which may store an instruction or a set of instructions that, if executed by a machine, causes the machine to perform the method in accordance with embodiments of the disclosed technique.
  • a machine-readable medium may include, for example, any suitable processing platform, computing platform, computing device, processing device, computing system, processing system, computer, processor, or the like, and may be implemented by hardware and/or software, and/or firmware and/or hybrid modules.
  • embodiments of the disclosed technique such as, for example, image rectifier 124 and/or image merger 125, include a computer program adapted to execute the image deblurring and denoising method.
  • embodiments of the disclosed technique include a computer program comprising software code adapted to execute the image deblurring and denoising method.

Abstract

A camera system capable of acquiring sharp images of a scene during a long exposure time, where relative motion between the camera system and the scene can occur, includes an image sensor which acquires a plurality of intermediate images during the exposure time. One of these intermediate images is taken as a reference image, according to which the displacements and distortions of the remaining non-reference images are determined. After rectifying these displacements and distortions in the non-reference images, the rectified non-reference images plus the reference image are merged (e.g., summed with weights), forming the sought sharp and low-noise image of the scene imaged during the exposure time. Preferred uses of this method and apparatus are in association with handheld or unstably mounted cameras under low-light illumination conditions.

Description

IMAGE DEBLURRING AND DENOISING SYSTEM, DEVICE AND METHOD
FIELD OF THE INVENTION
[0001] The present invention relates to the field of imaging in general, and to the field of correcting for blur and noise in imaging, in particular.
BRIEF DESCRIPTION OF THE FIGURES
[0001] The disclosed technique will be understood and appreciated more fully from the following detailed description taken in conjunction with the figures, in which:
[0002] Figure 1A is a schematic block-diagram illustration of an image deblurring and denoising system, according to embodiments of the disclosed technique;
[0003] Figure 1B is a schematic block-diagram illustration of an image deblurring and denoising device, according to embodiments of the disclosed technique; and
[0004] Figure 2 is a schematic illustration of an image deblurring and denoising method, according to embodiments of the disclosed technique.
BACKGROUND OF THE DISCLOSED TECHNIQUE
[0002] The longer the total exposure time of an imager to an imaged scene, the more likely are variations of the position of the imager relative to the imaged object, and an image representation of the object may thus more likely feature blurring artifacts. More specifically, blurring may occur if the abidance time of the object relative to the imager is briefer than the total exposure time of the imager. Conversely, if the abidance time of the object relative to the imager is equal to or longer than the total exposure time of the imager acquiring light from the object, the image representation of the object may not be blurred. Clearly, either one or both of the object and the imager may move relative to one another: a non-static imager (e.g. a handheld camera, a camera mounted on gimbals of a reconnaissance aircraft, a surveillance camera mounted on a mechanically unsteady support) may move relative to a static object (e.g., a building, a work of art such as a statue, a parking space); a non-static object (e.g., a moving vehicle or a walking person) may move relative to a static imager; and both the non-static object and the non-static imager may move relative to one another, during the image acquisition process. It should be noted that the term "static" as used herein refers to the stable geometric position of a physical entity relative to world coordinates.
[0003] The lighting conditions of the object to be imaged may have an impact on the total exposure time. The darker the object to be imaged, the longer the total exposure time that may be required in order to collect a sufficiently high number of photons for visually pleasing image quality and/or to enable digital image processing tasks. Accordingly, when acquiring images of objects, for example, under night-vision/low-light illumination conditions requiring a prolonged total exposure time, the probability increases that the abidance time of an object will be short relative to the prolonged total exposure time, thus increasing the probability of the introduction of blur.
[0004] The imager includes a plurality of photosensitive elements or pixels, acquiring for a predetermined time respective portions of the image of the object. Various techniques may be employed to avoid, reduce or remove blurring artifacts of the image. These techniques may be optoelectronic and/or opto-mechanical and/or signal processing (i.e., filter) -based. For example, optoelectronic and/or optomechanical techniques may employ an accelerometer and/or a gyroscope to detect the displacement of the imager from a first to a second position, and opto-mechanical actuators may be employed to return the imager to the first position. Optomechanical techniques require much space, and it is difficult to implement them in small optical devices such as for example handheld cameras. Patent application EP 1906232 entitled "Imaging device with optical image stabilisation" to Suzuki Yusuke discloses an opto-mechanical technique to avoid blurring, wherein an imaging device compensates for shaking by relatively moving an imaging optical system and an imaging element in directions perpendicular to an optical axis direction. The imaging device includes a first actuator that relatively moves the imaging optical system and the imaging element in a first direction perpendicular to the optical axis direction; a second actuator that relatively moves the imaging optical system and the imaging element in a second direction which is perpendicular to the optical axis direction and which intersects the first direction; and a first moving member to which the second actuator is attached, the first moving member that is moved together with the second actuator in the first direction by an action of the first actuator. [0005] In the following, publications employing signal-processing techniques addressing blur are listed.
[0006] Patent application WO9000334 entitled "Improvements in or relating to image stabilisation" and patent application CA1306297 entitled "Image stabilization", both to Blissett et al. disclose a method of and apparatus for electronically stabilising incoming video data against unwanted or undesirable sensor movement. By digitising each or selected pixels of a frame, and comparing them against stored values of the pixels, image movement and sensor movement can be detected. The frame can then be remapped to provide no or only wanted movement of the image, e.g. in panning or in image following, and the processed video data can then be fed to a video recorder or monitor.
[0007] Patent applications GB2307133 and GB2307371 entitled "Video camera image stabilisation system" to Smith et al. disclose an image stabilisation system comprising means for counteracting a rotation of an image caused by a camera rotating about its optical axis. An image is digitised by a unit and convolved with two filter kernels by two convolution units to enhance the vertical and horizontal components of edges in the image. A histogram of the orientations of the edges is then correlated with an edge orientation histogram of a previous image to determine a relative degree of rotation which is then used to correct the rotated image for display on a display unit. When shifting the acquired image into the position of the reference image, unknown states may be introduced at the boundaries of these images. To avoid these unknown states, patent application GB2316255 entitled "Improvements in image stabilisation" to Mansbridge et al. discloses a method wherein a second frame is compared to the first frame and the result is the overwriting of the first frame with the common parts found in the second frame. [0008] A similar approach is disclosed in patent application DE19654573 entitled "Image stabilisation circuit with charge-coupled device" to Hwang but with the use of a motion detector device to find the shift of the second frame compared to the first frame, requiring additional means to use the motion detector device. [0009] Patent application GB2365244 entitled "Image stabilisation" to Lebbell et al. discloses a method wherein an acquired image is shifted in response to a global motion vector so as to enable an observer to gain the maximum information about an object in the picture. Respective limits are set to the amount of horizontal and vertical picture shift that can be applied, and when a correction limit is reached, an abrupt change of the respective correction is made to a preset value within the relevant limits.
[0010] Similarly, GB2411310 entitled "Image stabilisation using field of view and image analysis" to Sablak teaches a video image stabilisation system for removing unwanted camera shake. A processing device receives signals indicating both changes in the camera field of view (FOV) as well as plural captured images, for example first and second images; the field of view may be adjusted by camera panning, tilt or focal changes. An image stabilisation adjustment is then performed using both the FOV changes, i.e. intending camera movements or focus changes, and by analysis of the images themselves. Thus, the system discriminates between intended, wanted image changes caused by camera movements such as panning or tilting, and unintended image changes due to camera shake. The final image portions are adjusted to remove shake.
[0011] Patent application GB2407724 entitled "Video stabilisation dependant on the path of movement of an image capture device" to PiIu discloses a video stabilization scheme which uses a dynamic programming algorithm in order to determine a best path through a notional stabilization band defined in terms of a trajectory of an image capture element. The trajectory of the image capture element is used to define at least two boundaries defining the stabilization band which represents the domain in which compensatory translational motion of an image area is allowed. The best path is used in order to produce an optimally stabilised video sequence within the bounds defined by the image capture area of the image capture element. [0012] Patent application US2006061658 entitled "Image stabilisation system and method" to Faulkner et al. discloses a system for stabilising video signals generated by a video camera which may be mounted in an unstable manner. It includes a digital processing means for manipulation of each incoming image that attempts to overlay features of the current image onto similar features of a previous image. A mask is used that prevents parts of the image that are likely to cause errors in the overlaying process from being used in the calculation of the required movement to be applied to the image. The mask may include areas where small movements of the image have been detected, and many also include areas where image anomalies including excess noise have been detected. The invention also discloses means for dealing with wanted movements of the camera, such as pan or zoom, and also discloses means for dealing with the edges of video signals as processed by the system.
[0013] Patent application CN1655591 entitled "Method and apparatus for lowering image noise under low illumination" to Wang et al. discloses an image processing method and a device to reduce image noise and improve image quality under low illuminance, wherein the method comprises the following steps: determining whether the shooting environment is in darkness; if it is, switching the output mode from color image to gray image; and making low-pass filtering to the switched gray image. However, smoothing filters introduce blur in images.
[0014] Patent application TW235007 entitled "A method of image processing of low illumination and image processor" to Hou discloses a method of image processing of low illumination adapted to a CMOS image sensor. When the average brightness of an image is less than the noise threshold limit value of the CMOS image sensor, the noise of the CMOS image sensor can be reduced by the method of the middle-value filtering calculation.
[0015] Patent application KR20030068738 entitled "Apparatus for processing digital image, removing noise under low illumination environment" to Jang Cheet et al. discloses a video decoder receiving images and a noise remover receiving the output video signal of the video decoder for executing motion adaptive time area filtering at a time axis area, and executing contour conservation filtering at a spatial area with considering color brightness and characteristics in R, G, and B channels. A video encoder receives the output signal of the noise remover for converting the output signal into a composite video signal.
[0016] Patent application KR20060003466 entitled "A method and an apparatus of improving image quality on low illumination for mobile phone" to Kim discloses a method wherein a previewed image determines the illumination of the scene and the apparatus can adapt the exposure accordingly.
DESCRIPTION OF THE EMBODIMENTS OF THE DISCLOSED TECHNIQUE
[0017] Summary of the embodiments of the disclosed technique
[0018] Embodiments of the disclosed technique disclose an image deblurring and denoising method.
[0019] In embodiments, the disclosed technique includes acquiring at least one intermediate image during a total exposure time.
[0020] In embodiments, the disclosed technique includes providing a data representation of each of the N intermediate images, i.e., of the at least one intermediate image.
[0021] In embodiments, the disclosed technique includes indexing the at least one intermediate image.
[0022] In embodiments, the disclosed technique includes selecting a reference image from the at least one intermediate image.
[0023] In embodiments, the disclosed technique includes determining rectification parameters for each of the at least one non-reference image i.
[0024] In embodiments, the disclosed technique includes rectifying the at least one non-reference image.
[0025] In embodiments, the disclosed technique includes merging the reference image and the at least one rectified non-reference image.
[0026] In embodiments, the disclosed technique includes outputting the merged image.
[0027] In embodiments, the procedure of image merging is accomplished by weighted addition and averaging of the at least one intermediate image.
[0028] In embodiments, the procedure of image rectifying is accomplished by performing rectification parameter estimation of ci(k) related to the acquisition of the at least one non-reference image i. [0029] In embodiments, the procedure of parameter estimation is performed by at least one of the following techniques: cross-correlation, multi-resolution-based correlation, and distortion compensation.
[0030] In embodiments, the procedure of acquiring at least one intermediate image includes integrating photons detected by the imager over sequential intermediate exposure times Ti of the total exposure time Te, associating the detected photons with sequential time intervals of the total exposure time Te, wherein the integrated photons respective of each time interval are converted into corresponding image-data.
[0031] In embodiments, the rectification parameters refer to either one or both of the linear and the rotational shift of the at least one non-reference image. [0032] In embodiments, the first intermediate image is selected as the reference image, wherein a subsequent non-reference image acquired by the imager and acquisition unit is used for determining the rectification parameters and is merged with the first intermediate image.
[0033] An image deblurring and denoising device according to embodiments of the disclosed technique includes a) a central controller; b) an acquisition unit; c) a memory; d) a memory controller; e) an image rectifier; and f) an image merger; wherein the central controller, the acquisition unit, the memory, the memory controller, the image rectifier and the image merger are operatively coupled with each other and adapted to perform at least some of the procedures according to any of the preceding claims. [0034] In embodiments, the image deblurring and denoising device is free of moveable parts.
[0035] Detailed description of the embodiments of the disclosed technique [0036] It is the object of the invention to disclose an alternative image deblurring and denoising system, device and method, which are adapted to detect photons of an object during an exposure time and to provide an image output representing the detected photons. In embodiments of the disclosed technique, the image deblurring and denoising system, device and method are adapted to at least reduce blurring artifacts that might be introduced due to changes in the position of the imager relative to the object during the total exposure time. More specifically, the deblurring and denoising system, device and method according to embodiments of the disclosed technique are adapted to associate photons detected by the imager with a plurality of corresponding time intervals/intermediate exposure times Ti within the total exposure time, and to store information representing at least one intermediate image, respective of the photons acquired during each time interval. Thus, at least one intermediate image is stored during the total exposure time. To simplify the discussion that follows, the above-outlined procedure is herein referred to as "acquiring at least one intermediate image" or "acquiring at least one intermediate image during the total exposure time". In embodiments of the disclosed technique, a reference image of the at least one intermediate image may be selected. Deviations of at least one non-reference image of the at least one intermediate image from the reference image are determined. If the deviations exceed a predetermined threshold, the at least one non-reference image may be modified according to the reference image. Further, an output image, which is deblurred and associated with both the reference image and the at least one non-reference image, may be outputted at an output unit.
For example, absolute or weighted pixel values of the reference image and the at least one non-reference image may be merged (e.g., summed and averaged), and outputted as the output image.
[0037] It should be noted that although entities and/or features such as, for example, the image deblurring and denoising device according to embodiments of the disclosed technique, may be indicated hereinafter as being located in a single geographical and/or architectural location, these entities and/or features may be dispersed and/or parsed over a plurality of geographical and/or architectural locations of an image deblurring and denoising system.
[0038] Reference is now made to Figure 1A. In embodiments of the disclosed technique, a deblurring and denoising system 100 may include an imager 110, an image deblurring and denoising device 120, an output 130, and a power supply 140, at least some of which may be operatively coupled with each other to enable the operation of image deblurring and denoising system 100.
[0039] Imager 110 may be embodied, for example, by a complementary metal-oxide semiconductor (CMOS) image sensor, a charge-coupled device (CCD) image sensor, a forward-looking infrared (FLIR) imager, a thermal imager, or any combination of the above.
[0040] Output 130 may include, for example, a cathode ray tube (CRT) monitor or display unit, a liquid crystal display (LCD) monitor or display unit, a screen, a monitor, a speaker, or other suitable display unit, output device or storage unit (not shown). [0041] Power supply 140 may be embodied, for example, by a battery, or an external power source.
[0042] In embodiments of the disclosed technique, imager 110 acquiring at least one image of an object is adapted to convert during a total exposure time Te photons 105 respective of the imaged object detected by imager 110 into data representing the imaged object (hereinafter: image-data). More specifically, photons 105 detected by imager 110 are integrated over and associated with sequential time intervals of the total exposure time Te, wherein the integrated photons 105 respective of each time interval are converted into corresponding image-data. Thus, the image-data represents at least one intermediate image acquired by imager 110 during the total exposure time Te. Associating detected photons 105 with a plurality of time intervals to obtain image-data representing the at least one intermediate image, respectively, may be accomplished, e.g., by imager 110 and/or by image deblurring and denoising device 120.
[0043] In embodiments of the disclosed technique, image deblurring and denoising device 120 is provided with the image-data, whereupon image deblurring and denoising device 120 selects from said at least one intermediate image represented by the image-data a reference image, e.g., according to predetermined criteria, randomly, or pseudo-randomly, such that the non-selected image-data represents at least one non-reference image. The at least one non-reference image is then analyzed, e.g., by image deblurring and denoising device 120, with regard to potential deviations compared to the reference image according to at least one criterion, which may be based, for example, on a statistical metric known in the art. For example, the values of each pixel of the at least one non-reference image may be compared with the values of the corresponding pixels of the reference image; or, for example, the average or median value of a group of pixels of the at least one non-reference image may be compared with the average or median value of the corresponding group of pixels of the reference image. Pixel values of the at least one non-reference image may then be rectified according to the deviations determined by image deblurring and denoising device 120. In embodiments of the disclosed technique, pixel values of the at least one non-reference image may only be rectified if the determined deviation exceeds a deviation-threshold. [0044] In embodiments of the disclosed technique, image deblurring and denoising device 120 is adapted to merge the at least one rectified image and the reference image into an output image of the object. Merging of the at least one rectified image and the reference image may be accomplished, for example, by summing up the respective pixel values of the at least one rectified image and the reference image, and by averaging each of the summed-up pixel values. Other merging techniques may be implemented, as outlined herein below.
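The merging by summation and averaging of pixel values described above can be sketched as follows (an illustrative NumPy fragment; the `weights` argument is a hypothetical generalization covering the weighted-addition variant):

```python
import numpy as np

def merge_images(reference, rectified, weights=None):
    """Merge the reference image with the rectified non-reference images by
    weighted summation and normalization.  With equal weights (the default)
    this reduces to a plain per-pixel average."""
    stack = [np.asarray(reference, dtype=np.float64)]
    stack += [np.asarray(r, dtype=np.float64) for r in rectified]
    if weights is None:
        weights = np.ones(len(stack))
    weights = np.asarray(weights, dtype=np.float64)
    summed = sum(w * img for w, img in zip(weights, stack))
    return summed / weights.sum()  # normalize by the total weight
```

Averaging M aligned images reduces uncorrelated pixel noise by roughly a factor of sqrt(M), which is the denoising effect the merging step relies on.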
[0045] Additional reference is now made to Figure 1B. In embodiments of the disclosed technique, image deblurring and denoising device 120 may include a central controller 121, a memory controller 122, an acquisition unit 123, an image rectifier 124 (implemented, e.g., by a Multiply-Accumulate (MAC) block), an image merger 125 and a memory 126, all of which may be operatively coupled with each other to enable operation of image deblurring and denoising device 120. [0046] Memory 126 may be embodied, for example, by a Random Access Memory (RAM), a Read Only Memory (ROM), a Dynamic RAM (DRAM), a Synchronous DRAM (SD-RAM), a flash memory, a volatile memory, a non-volatile memory, a cache memory, a buffer, a short term memory unit, a long term memory unit, or other suitable memory units or storage units.
[0047] Central controller 121 and/or memory controller 122 may be embodied, for example, by a Central Processing Unit (CPU), a Digital Signal Processor (DSP), one or more processor cores, a microprocessor, a host processor, a controller, a plurality of processors or controllers, a chip, a microchip, one or more circuits, circuitry, a logic unit, an Integrated Circuit (IC), an Application-Specific IC (ASIC), or any other suitable multi-purpose or specific processor or controller. In embodiments of the disclosed technique, central controller 121 may execute instructions (not shown) resulting in the operation of image rectifier 124, image merger 125, acquisition unit 123 and memory controller 122. In embodiments of the invention, memory controller 122 and/or acquisition unit 123 and/or image rectifier 124 and/or image merger 125 may be software-implemented, hardware-implemented, or hybrid combined software-hardware-implemented. [0048] Acquisition unit 123 is adapted to gather image-data directly from imager 110, or alternatively, from a storage device (not shown), which may be, for example, a mass storage device. Memory 126 is adapted to store image-data, whilst memory controller 122 is adapted to read the image-data from and write it to memory 126. Image rectifier 124 is adapted to determine, e.g., as known in the art, deviations between the at least one non-reference image and the reference image, and if required to rectify, e.g., as known in the art, the at least one non-reference image. Image merger 125 is adapted to merge the at least one rectified non-reference image with the reference image. It should be noted that in embodiments of the disclosed technique, only a selection of the at least one rectified non-reference image may be merged with the reference image, e.g., according to a predetermined criterion.
Image merger 125 may include, for example, combinatory adders (not shown) for summing data and shifters (not shown) for performing the divisions to determine averages of pixel values.
[0049] Additional reference is now made to Figure 2. In embodiments of the disclosed technique, an image deblurring and denoising method may be implemented as follows, e.g., by image deblurring and denoising system 100. [0050] As indicated by box 210, the method may include, for example, the procedure of acquiring a number N ≥ 1 of images (i.e., the at least one intermediate image), e.g., by imager 110 and acquisition unit 123, during a total exposure time Te that may be predetermined or determined adaptively. The at least one intermediate image may be represented by more pixels than the output image to avoid edge effects. [0051] In embodiments of the disclosed technique, the procedure of acquiring the at least one intermediate image may occur in real time, for example, by detecting the photons respective of each of the plurality of time intervals and converting the photons to image-data directly at imager 110. The at least one intermediate image may then be stored in memory 126.
[0052] It should be noted that the term "real-time" as used herein also encompasses the meaning of the term "substantially in real-time" and/or "at least approximately in real-time".
[0053] As is indicated by box 220, the method may additionally include, for example, the procedure of providing a data (e.g., matrix) representation for each of the at least one intermediate image, e.g., in memory 126. The pixels of each of the at least one intermediate image may, for example, be represented by w*h matrices, wherein w represents the width of the image in pixels and h the height of the image in pixels. [0054] As is indicated by box 230, the method may further include the procedure of indexing the at least one intermediate image with a number i, wherein 1 ≤ i ≤ N. N is a configurable parameter that is limited by the total exposure time Te, the computation power of image deblurring and denoising system 100 and the capacity required by memory 126 for storing each of the at least one intermediate image and for deblurring and denoising. The total exposure time Te is at least determined by the time required to acquire the at least one intermediate image. Ti denotes the intermediate exposure time for the acquisition of image number i. Ti can be a function of the image number i, Ti = f(i), implying that the intermediate exposure time Ti of an image is not restricted to be constant. As an example, the intermediate exposure time Ti could be small at the beginning, grow longer with the number of images acquired, and decrease again towards the end of the total exposure time Te.
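The variable intermediate exposure schedule Ti = f(i) described above can be illustrated with a short sketch. The triangular ramp (short at the start, longest in the middle, short at the end) and the normalization so that the intermediate times sum to Te are illustrative assumptions only; the function name is hypothetical.

```python
# Hypothetical sketch: an intermediate exposure schedule Ti = f(i) that grows
# and then decreases again, with the intermediate times summing to the total
# exposure time Te. The triangular shape is one possible choice, not a scheme
# prescribed by the description.
def exposure_schedule(n_images, total_exposure):
    # Triangular weights: 1, 2, ..., peak, ..., 2, 1.
    weights = [min(i + 1, n_images - i) for i in range(n_images)]
    # Normalize so the intermediate exposure times sum to Te.
    scale = total_exposure / sum(weights)
    return [w * scale for w in weights]
```

For N = 5 and Te = 1.0 this yields weights [1, 2, 3, 2, 1] scaled to sum to 1.0, with the longest intermediate exposure in the middle of the sequence.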
[0055] As is indicated by box 240, the method may then include, for example, the procedure of selecting a reference image from the at least one intermediate image, e.g., by image rectifier 124, thus obtaining the at least one non-reference image, the number of which is N-1.
[0056] As indicated by box 250, the method may subsequently include, for example, the procedure of performing parameter estimation of ci(k) related to the acquisition of the at least one non-reference image i, to subsequently rectify the at least one non-reference image (procedure 260). It should be noted that the term "parameter determination" also encompasses the meaning of the term "parameter estimation", and vice versa. More specifically, image rectifier 124, for example, estimates for each of the N-1 non-reference images the set of parameters ci(k) considered to describe the sought transformation of the acquired image into the reference image, so that the rectified image and the reference image are closest according to one of the known matching measures for two images, as described for example in "Digital Image Processing", second edition, Rafael C. Gonzalez and Richard E. Woods, Chapter 12: Object Recognition. To enable the procedure of parameter estimation, the parameterized model associated with the acquisition of the at least one intermediate image is known in advance, i.e., for each of the N-1 non-reference images number i, a set of parameters ci(k) is known, where 1 ≤ k ≤ M (M = number of parameters), with which the acquired image number i can be transformed into a rectified intermediate image that is perceptibly closer to the reference image than a non-rectified intermediate image. When acquiring the at least one intermediate image, such a parameter may for example refer to a two-dimensional lateral shift only, implying that each of the at least one non-reference image can be obtained by laterally displacing the reference image by the appropriate vector. Clearly, this corresponds to a commonly encountered situation of an unsteadily mounted or handheld camera, whose angular orientation with regard to world coordinates is kept at least approximately constant.
In another scenario, a two-dimensional lateral shift plus an arbitrary rotation may occur, corresponding to a situation wherein, for an unsteadily mounted or handheld camera, the angular orientation is additionally changed. The parameters may thus refer to either or both of the linear and rotational shifts of the at least one non-reference image. Additionally or alternatively, optical distortions may be introduced by imaging optics, for example, by a fish-eye lens and/or diffraction and/or aberration. As a consequence, the parameter set ci(k) includes model parameters of the imaging optics as well, in order to enable a rectification of the at least one non-reference image.
[0057] As is indicated by box 270, the method may then include the procedure of merging the at least one non-reference image with the reference image, e.g., by image merger 125, to obtain the output image. For example, the pixels respective of the at least one rectified image as well as of the reference image may be weighted, summed up and averaged according to the weights to obtain the output image. Thus, the at least one rectified non-reference image and the reference image contribute to the output image according to different or equal proportions. [0058] It should be noted that the limits of the output image may be restricted to the picture field that is common to all of the acquired images, i.e., the picture field that corresponds to the part of the scene that has been imaged by all of the acquired images.
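The weighted summation and averaging of procedure 270 can be sketched as follows. Pure-Python lists of lists stand in for the w*h pixel matrices, the function name is hypothetical, and equal weights are only one possible choice of the proportions mentioned above.

```python
# Hedged sketch of procedure 270: weighted addition of the reference image and
# the rectified non-reference images, followed by normalization by the sum of
# the weights to obtain the averaged output image.
def merge_images(images, weights=None):
    if weights is None:
        weights = [1.0] * len(images)          # equal contributions by default
    total = sum(weights)
    h, w = len(images[0]), len(images[0][0])
    out = [[0.0] * w for _ in range(h)]
    for img, wt in zip(images, weights):       # weighted summation
        for y in range(h):
            for x in range(w):
                out[y][x] += wt * img[y][x]
    # Divide by the sum of the weights to obtain the weighted average.
    return [[v / total for v in row] for row in out]
```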
[0059] As indicated by box 290, the method may optionally include, for example, the procedure of performing brightness and/or contrast adaptation, e.g., as known in the art, on the output image, if the brightness and/or the contrast of the output image are below respective threshold values. For example, gray-level enhancement techniques may be employed, such as contrast stretching or histogram equalization, as for example described in "Digital Image Processing", second edition, Rafael C. Gonzalez and Richard E. Woods, Chapter 3: Image Enhancement in the Spatial Domain. [0060] Method for reducing memory requirements
[0061] In embodiments of the disclosed technique, the indexing of the at least one intermediate image (procedure 230) may be performed as follows: storing the first one of the at least one intermediate image and selecting it as the reference image. The reference image is then used to initialize the output image, i.e., the output image is initialized with the same data as the reference image. [0062] It should be noted that in some embodiments of the disclosed technique, no additional images are acquired if the noise level of the reference image is below a predetermined threshold.
[0063] A second or subsequent non-reference image with index i, wherein 2 ≤ i ≤ N, may then be acquired by imager 110 and acquisition unit 123, wherein the second or subsequent non-reference image is used for the determination of the parameters with respect to the reference image. The second or subsequent non-reference image is then rectified with these parameters, optionally weighted, and then added to the output image. After the addition (i.e., summation), the second or subsequent non-reference image can be discarded and the third non-reference image is acquired and processed analogously to the second non-reference image. Alternatively, image-data respective of the image with index i+1 overwrites the image-data respective of the image with index i. Non-reference images are acquired until i = N. Thus, image-data respective of at most three images are in memory 126 during the implementation of the image deblurring and denoising method. Employing the method for reducing memory produces an output image that is progressively improved, i.e., the signal-to-noise ratio is constantly increased, with each additional acquired and processed image i. The output image can therefore be inspected continuously, either by a user or by determining a value for the signal-to-noise ratio, while the output image is being improved with the acquisition of each sequentially acquired and processed non-reference image. Once the quality of the output image meets an image quality criterion (either visually determined by a user and/or automatically according to an image quality metric), the procedures employed for the image deblurring and denoising method are stopped, e.g., by the user or automatically by image deblurring and denoising system 100. The last image output prior to stopping the procedures is the final result of the image deblurring and denoising method.
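The memory-reducing variant of paragraphs [0061]-[0063] can be sketched as a running accumulation. The sketch below assumes each incoming frame has already been rectified against the reference image (the rectification step is omitted for brevity); the function name, the streaming iterator interface and the optional signal-to-noise stopping hook are illustrative assumptions.

```python
# Illustrative sketch: the first image initializes the output, each subsequent
# (already rectified) non-reference image is accumulated into the output and
# then discarded, so at most a few frames are held in memory at once. An
# optional quality hook allows stopping early, mirroring the stopping
# criterion described in the text.
def progressive_merge(frames, snr_target=None, estimate_snr=None):
    it = iter(frames)
    reference = next(it)                        # first image = reference image
    h, w = len(reference), len(reference[0])
    acc = [row[:] for row in reference]         # output initialized with reference
    count = 1
    for frame in it:                            # one non-reference image at a time
        for y in range(h):
            for x in range(w):
                acc[y][x] += frame[y][x]        # add the frame, then discard it
        count += 1
        if snr_target is not None and estimate_snr is not None:
            out = [[v / count for v in row] for row in acc]
            if estimate_snr(out) >= snr_target:  # quality criterion met: stop
                return out
    return [[v / count for v in row] for row in acc]
```

Because only the accumulator and the current frame are live, the total number of frames N need not be known in advance, matching the remark that N may remain unknown throughout the method.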
It should be noted that when employing the method of reducing memory, the number N of acquired images may remain unknown prior to and throughout the implementation of the image deblurring and denoising method. [0064] Parameter determination or template matching using cross-correlation [0065] A first method for realizing the rectification of the at least one non-reference image according to this invention is the use of cross-correlation to obtain the lateral shift components (sx, sy), in x and y, between the at least one non-reference image and the reference image. Generally, shift components can be obtained by determining the Euclidean distance between an image and a selected sliding template window of the image. If the Euclidean distance between the patterns of the sliding template window and the non-reference image is near or equal to 0, the content of the image matches the content of the selected sliding template. More specifically, a window of size w*h of the reference image is selected as a template and cross-correlated with the at least one non-reference image, i.e., each pixel p(x,y) of the template window is multiplied with each pixel q(x,y) at the same position and summed up, resulting in a term c(u,v) indicating the cross-correlation of the features of the template at positions u and v. In other words, if the position of the sliding template is (sx,sy), the cross-correlation between the template and the compared window in an image at the position (sx,sy) is given by the equation C(sx,sy) = Σ(0≤x≤Wwind-1) Σ(0≤y≤Hwind-1) p(x,y)·q(x+sx,y+sy). By varying the values of the lateral shift (sx,sy), the cross-correlation describes a function that exhibits a maximum value at a given position of (sx,sy). This position represents the shift at which there is the best match between the template and the at least one non-reference image. [0066] Multi-resolution (pyramid)-based correlation
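The exhaustive cross-correlation search of paragraph [0065] can be sketched directly from the equation C(sx,sy) = Σ Σ p(x,y)·q(x+sx,y+sy). The brute-force implementation below works on grayscale images represented as lists of lists; the function name is hypothetical, and no normalization of the correlation is applied, as in the equation above.

```python
# Minimal sketch of template matching by cross-correlation: slide the template
# window p over the non-reference image q and return the shift (sx, sy) that
# maximizes C(sx, sy) = sum over (x, y) of p(x, y) * q(x + sx, y + sy).
def best_shift(template, image):
    th, tw = len(template), len(template[0])
    ih, iw = len(image), len(image[0])
    best, best_pos = None, (0, 0)
    for sy in range(ih - th + 1):
        for sx in range(iw - tw + 1):
            c = sum(template[y][x] * image[y + sy][x + sx]
                    for y in range(th) for x in range(tw))
            if best is None or c > best:
                best, best_pos = c, (sx, sy)
    return best_pos
```

Note that the raw (non-normalized) cross-correlation used here, as in the equation of [0065], favours bright regions; normalized correlation measures, as discussed in the Gonzalez and Woods reference, are more robust in practice.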
[0067] In embodiments of the disclosed technique, a method for determining lateral displacement between the reference image and the at least one non-reference image for rectifying the latter may include employing a multi-resolution (or pyramid) representation of all the images, as described for example by E. H. Adelson et al., in "Pyramid methods in image processing", RCA Engineer, Vol. 29, No. 6, pp. 33-41, Nov/Dec 1984. The multi-resolution representation of two images q(xi,yj) and p(xi,yj) is given by a stack of images qk(xi,yj) and pk(xi,yj), where the number of pixels in an image at each level k is reduced by a factor of 2*2 compared to the level k-1 below, according to the following algorithm: qk+1(xi,yj) = LP(qk)(x2i,y2j) and pk+1(xi,yj) = LP(pk)(x2i,y2j). The operator LP is a general low-pass filter satisfying the requirements for determining the multi-resolution representation as described in the reference given above. At the root of the two pyramids are the pictures q0(xi,yj) = q(xi,yj) and p0(xi,yj) = p(xi,yj).
A method for determining the cross-correlation between q and p over large displacement distances consists of starting at the top level k=K of the pyramid representation. The estimate (xk,yk) of the displacement at the descending pyramid levels k is recursively improved by determining the maximum of the cross-correlation between qk and pk only for the nine neighbouring values (xk+n, yk+m), where the integers n and m vary between -1, 0 and +1. Thus, when starting at a pyramid level k=K, a maximum lateral displacement distance of 2^(K+1)-1 can be determined in both directions. [0068] Distortion compensation
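The coarse-to-fine search of paragraph [0067] can be sketched as follows. A simple 2x2 averaging filter stands in for the general low-pass operator LP (an illustrative assumption; any filter satisfying the requirements of the Adelson et al. reference would do), and all function names are hypothetical.

```python
# Hedged sketch of pyramid-based correlation: build low-pass pyramids by 2x2
# averaging, then refine the shift estimate over the nine neighbouring values
# at each descending level, doubling the estimate between levels.
def downsample(img):
    h, w = len(img) // 2, len(img[0]) // 2
    return [[(img[2*y][2*x] + img[2*y][2*x+1] +
              img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4.0
             for x in range(w)] for y in range(h)]

def correlate_at(p, q, sx, sy):
    # Cross-correlation of p with q at lateral shift (sx, sy),
    # treating out-of-bounds pixels of q as zero.
    total = 0.0
    for y in range(len(p)):
        for x in range(len(p[0])):
            yy, xx = y + sy, x + sx
            if 0 <= yy < len(q) and 0 <= xx < len(q[0]):
                total += p[y][x] * q[yy][xx]
    return total

def pyramid_shift(p, q, levels):
    pyr_p, pyr_q = [p], [q]
    for _ in range(levels):                    # build the two pyramids
        pyr_p.append(downsample(pyr_p[-1]))
        pyr_q.append(downsample(pyr_q[-1]))
    sx = sy = 0
    for k in range(levels, -1, -1):            # top level k=K down to the root
        sx, sy = 2 * sx, 2 * sy                # scale estimate to finer level
        best = None
        for dy in (-1, 0, 1):                  # nine-neighbour refinement
            for dx in (-1, 0, 1):
                c = correlate_at(pyr_p[k], pyr_q[k], sx + dx, sy + dy)
                if best is None or c > best:
                    best, bx, by = c, sx + dx, sy + dy
        sx, sy = bx, by
    return sx, sy
```

Because each level contributes at most one pixel of refinement before doubling, starting at level K bounds the recoverable displacement at roughly 2^(K+1)-1 pixels, consistent with the statement above.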
[0069] In embodiments of the invention, rectifying the at least one non-reference image may be accomplished by employing a method of distortion compensation, also known as an image warping method. This method may be applied to detect many types of distortions (e.g., translation and/or rotation and/or scaling) between two acquired intermediate images, and may even be employed to correct, or at least reduce, non-linear aberrations of the optical system (not shown) that may be introduced by imager 110. A synopsis of the use of the image warping method can be found in "A review of image warping methods", C. A. Glasbey, Journal of Applied Statistics, 25, 155-171. The method is based on the process of finding the transformation affecting the at least one non-reference image by comparing it with the reference image. This process is applied to some interest points in both the at least one non-reference image and the reference image to find the transformation. The general case, grouping all types of movements and deformations affecting images, can be written as a polynomial address mapping between (u,v), the pixel in the reference image, and (x,y), the corresponding pixel in an acquired image, e.g., as follows: u = Σ(0≤i≤P) Σ(0≤j≤P) aij·x^i·y^j and v = Σ(0≤i≤P) Σ(0≤j≤P) bij·x^i·y^j, where aij and bij represent real coefficients.
[0070] Experiments showed that the second-order polynomial representation could be sufficient, so the above transformation could be reduced to the following equations: u = a0 + a1·x + a2·x^2 + a3·x·y + a4·y + a5·y^2 and v = b0 + b1·x + b2·x^2 + b3·x·y + b4·y + b5·y^2
From these last equations, it is possible to find the coefficients ai and bi by using at least six points of interest (each equation having six unknown coefficients). All the other pixels can be calculated by applying these equations with the found coefficients and placed in their correct positions by using bilinear interpolation. The special case of the transformation wherein only a translation is introduced reduces the equations to: u = a0 + x and v = b0 + y
In this case, the process of image warping will give the same result as the cross-correlation method described in the preceding paragraph.
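The second-order fit of paragraphs [0069]-[0070] can be sketched as a linear least-squares problem: each matched interest point contributes one row [1, x, x², xy, y, y²] per mapping. The function names below are hypothetical, and NumPy's least-squares solver stands in for whatever estimation method an implementation would actually use.

```python
# Hedged sketch: fit u = a0 + a1*x + a2*x^2 + a3*x*y + a4*y + a5*y^2 (and the
# analogous expression for v) from matched interest points by least squares.
import numpy as np

def fit_warp(src_pts, dst_pts):
    # Design matrix rows: [1, x, x^2, x*y, y, y^2] for each source point.
    A = np.array([[1, x, x*x, x*y, y, y*y] for x, y in src_pts], dtype=float)
    u = np.array([p[0] for p in dst_pts], dtype=float)
    v = np.array([p[1] for p in dst_pts], dtype=float)
    a, *_ = np.linalg.lstsq(A, u, rcond=None)   # coefficients a0..a5
    b, *_ = np.linalg.lstsq(A, v, rcond=None)   # coefficients b0..b5
    return a, b

def apply_warp(a, b, x, y):
    row = np.array([1, x, x*x, x*y, y, y*y], dtype=float)
    return float(row @ a), float(row @ b)
```

For pure-translation point pairs the fit degenerates to u = a0 + x and v = b0 + y, the special case noted in the text.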
[0071] Embodiments of the disclosed technique are applicable even when substantial noise affects the imaging process under low-illumination conditions, wherein visually highly perceptible noise may affect images because of the "Poissonian" nature of the photonic noise, which may be perceptually significant in the composition of an image. In particular, embodiments of the disclosed technique may thus be used in handheld cameras under low-illumination conditions of the scene, such as in mobile phone cameras. Embodiments of the disclosed technique may also overcome imaging artifacts that might otherwise be introduced due to changes in magnification and/or changes in perspective.
[0072] Moreover, embodiments of the disclosed technique accomplish deblurring and denoising of images in a manner that is free of moveable parts. In other words, no adjustment in the optical path of imager 110 is necessary to enable deblurring and denoising, i.e., the optical path of imager 110 may remain unadjusted. Moreover, embodiments of the disclosed technique are operable in an accelerometer-free manner, i.e., no independent means for determining the relative motion between camera and scene, such as accelerometers, are required, since information about the relative displacement and distortions is extracted from the acquired sequence of the at least one intermediate image. Embodiments of the disclosed technique may be employed for deblurring and denoising, even though the at least one intermediate image may contain noise artifacts due to the low illumination of the scene. [0073] By employing embodiments of the disclosed technique, the output image may be free of reconstruction artifacts, i.e., spurious high frequencies and reconstruction artifacts may not be present in the output image.
[0074] It should be noted that procedures of the method of the disclosed technique may be performed in parallel, substantially in parallel or in a sequential manner. For example, rectifying and/or merging of the at least one intermediate image may be accomplished during the acquisition of at least one intermediate image. Alternatively, rectifying and merging may be performed after the completion of the acquisition of all of the at least one intermediate image during the total exposure time Te. [0075] It should be understood that embodiments of the disclosed technique may be implemented, for example, using a machine-readable medium or article (embodied, e.g., by image deblurring and denoising system 100 or image deblurring and denoising device 120) which may store an instruction or a set of instructions that, if executed by a machine, causes the machine to perform the method in accordance with embodiments of the disclosed technique. Such a machine-readable medium may include, for example, any suitable processing platform, computing platform, computing device, processing device, computing system, processing system, computer, processor, or the like, and may be implemented by hardware and/or software, and/or firmware and/or hybrid modules.
[0076] Additionally or alternatively, embodiments of the disclosed technique such as, for example, image rectifier 124 and/or image merger 125, include a computer program adapted to execute the image deblurring and denoising method. [0077] Additionally or alternatively, embodiments of the disclosed technique include a computer program comprising software code adapted to execute the image deblurring and denoising method. [0078] It will be appreciated by persons skilled in the art that the disclosed technique
is not limited to what has been particularly shown and described hereinabove.

Claims

1. An image deblurring and denoising method characterized by the following procedures: a) acquiring at least one intermediate image of an object imaged by an imager during a total exposure time; b) providing a data representation of the at least one intermediate image; c) indexing the at least one intermediate image; d) selecting a reference image from the at least one intermediate image; e) determining rectification parameters for each of the at least one non-reference image i; f) rectifying the at least one non-reference image; g) merging the reference image and the at least one rectified non-reference image; and h) outputting the merged image.
2. The image deblurring and denoising method of claim 1, characterized by the procedure of image merging that is accomplished by weighted addition and averaging of the at least one intermediate image.
3. The image deblurring and denoising method according to any of the preceding claims, characterized by the procedure of image rectifying that is accomplished by performing rectification parameter estimation of ci(k) related to the acquisition of the at least one non-reference image i.
4. The image deblurring and denoising method of claim 3, characterized in that said parameter estimation is performed by at least one of the following techniques: cross-correlation, multi-resolution-based correlation, and distortion compensation.
5. The method of claim 1, wherein the procedure of acquiring at least one intermediate image is characterized by integrating photons detected by the imager over sequential intermediate exposure times Ti of the total exposure time Te, associating the detected photons with sequential time intervals of the total exposure time Te, wherein the integrated photons respective of each time interval are converted into corresponding image-data.
6. The method according to any of the preceding claims, characterized in that said rectification parameters refer to either or both of the linear and rotational shifts of the at least one non-reference image.
7. The method according to any of the preceding claims, characterized in that a first of said at least one intermediate image is selected as the reference image, wherein a subsequent non-reference image being acquired by said imager and acquisition unit is used for determining the rectification parameters and merged with said first intermediate image.
8. An image deblurring and denoising device characterized by: a) a central controller; b) an acquisition unit; c) a memory; d) a memory controller; e) an image rectifier; and f) an image merger; wherein said central controller, said acquisition unit, said memory, said memory controller, said image rectifier and said image merger are operatively coupled with each other and adapted to perform at least some of the procedures according to any of the preceding claims.
9. The image deblurring and denoising device of claim 8, characterized by being free of moveable parts.
PCT/EP2009/057602 2008-06-25 2009-06-18 Image deblurring and denoising system, device and method WO2009156329A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US7565808P 2008-06-25 2008-06-25
US61/075,658 2008-06-25

Publications (1)

Publication Number Publication Date
WO2009156329A1 true WO2009156329A1 (en) 2009-12-30

Family

ID=40942408

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2009/057602 WO2009156329A1 (en) 2008-06-25 2009-06-18 Image deblurring and denoising system, device and method

Country Status (1)

Country Link
WO (1) WO2009156329A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070230938A1 (en) * 2006-03-29 2007-10-04 Sanyo Electric Co., Ltd. Imaging device
US20070242900A1 (en) * 2006-04-13 2007-10-18 Mei Chen Combining multiple exposure images to increase dynamic range
WO2008031089A2 (en) * 2006-09-08 2008-03-13 Sarnoff Corporation System and method for high performance image processing
US20080112644A1 (en) * 2006-11-09 2008-05-15 Sanyo Electric Co., Ltd. Imaging device
US20080143840A1 (en) * 2006-12-19 2008-06-19 Texas Instruments Incorporated Image Stabilization System and Method for a Digital Camera


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
BOGONI L ET AL: "Pattern-selective color image fusion", PATTERN RECOGNITION, ELSEVIER, GB, vol. 34, no. 8, 1 August 2001 (2001-08-01), pages 1515 - 1526, XP004362563, ISSN: 0031-3203 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150002745A1 (en) * 2013-07-01 2015-01-01 Xerox Corporation System and method for enhancing images and video frames
US10909845B2 (en) * 2013-07-01 2021-02-02 Conduent Business Services, Llc System and method for enhancing images and video frames
WO2017034784A1 (en) * 2015-08-21 2017-03-02 Sony Corporation Defocus estimation from single image based on laplacian of gaussian approximation
US9646225B2 (en) 2015-08-21 2017-05-09 Sony Corporation Defocus estimation from single image based on Laplacian of Gaussian approximation

Similar Documents

Publication Publication Date Title
JP5917054B2 (en) Imaging apparatus, image data processing method, and program
CN111246089B (en) Jitter compensation method and apparatus, electronic device, computer-readable storage medium
US7796872B2 (en) Method and apparatus for producing a sharp image from a handheld device containing a gyroscope
Hee Park et al. Gyro-based multi-image deconvolution for removing handshake blur
KR101041366B1 (en) Apparatus for digital image stabilizing using object tracking and Method thereof
KR101594300B1 (en) A apparatus and method for estimating PSF
JP5596138B2 (en) Imaging apparatus, image processing apparatus, image processing method, and image processing program
US8830360B1 (en) Method and apparatus for optimizing image quality based on scene content
KR101624450B1 (en) Image processing device, image processing method, and storage medium
JP4956401B2 (en) Imaging apparatus, control method thereof, and program
JP6087671B2 (en) Imaging apparatus and control method thereof
US8532420B2 (en) Image processing apparatus, image processing method and storage medium storing image processing program
US8760526B2 (en) Information processing apparatus and method for correcting vibration
JP5237978B2 (en) Imaging apparatus and imaging method, and image processing method for the imaging apparatus
JP4454657B2 (en) Blur correction apparatus and method, and imaging apparatus
US20100321510A1 (en) Image processing apparatus and method thereof
JP2005252626A (en) Image pickup device and image processing method
JP6577703B2 (en) Image processing apparatus, image processing method, program, and storage medium
JP2008042874A (en) Image processing device, method for restoring image and program
JP2006295626A (en) Fish-eye image processing apparatus, method thereof and fish-eye imaging apparatus
KR100793284B1 (en) Apparatus for digital image stabilizing, method using the same and computer readable medium stored thereon computer executable instruction for performing the method
JP2010200179A (en) Image processor, image processing method, image processing program and program storing medium in which image processing program is stored
JP6824817B2 (en) Image processing device and image processing method
JP4958806B2 (en) Blur detection device, blur correction device, and imaging device
CN113875219A (en) Image processing method and device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09769178

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09769178

Country of ref document: EP

Kind code of ref document: A1