IES84987Y1 - Image processing method and apparatus - Google Patents

Image processing method and apparatus

Info

Publication number
IES84987Y1
IES84987Y1 (IE 2008/0337 A)
Authority
IE
Ireland
Prior art keywords
image
images
relatively
processing method
exposed
Prior art date
Application number
IE2008/0337A
Other versions
IE20080337U1 (en)
Inventor
Albu Felix
Florea Corneliu
Zamfir Adrian
Drimbarean Alexandru
Poenaru Vladamir
Steinberg Eran
Corcoran Peter
Original Assignee
Fotonation Vision Limited
Filing date
Publication date
Application filed by Fotonation Vision Limited filed Critical Fotonation Vision Limited
Publication of IES84987Y1 publication Critical patent/IES84987Y1/en
Publication of IE20080337U1 publication Critical patent/IE20080337U1/en

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 — Geometric image transformations in the plane of the image
    • G06T3/40 — Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053 — Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 — Image enhancement or restoration
    • G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction

Abstract

An image processing apparatus is arranged to process a first relatively underexposed and sharp image of a scene, and a second relatively well exposed and blurred image, nominally of the same scene, the first and second images being derived from respective image sources. The apparatus provides a portion of the first relatively underexposed image as an input signal to an adaptive filter, and a corresponding portion of the second relatively well exposed image as a desired signal to the adaptive filter. The adaptive filter produces an output signal from the input signal and the desired signal, and an image generator constructs a first filtered image from the output signal, relatively less blurred than the second image.

Description

Image Processing Method and Apparatus
The present invention relates to an image processing method and apparatus.
It is well known to attempt to use two source images nominally of the same scene to produce a single target image of better quality or higher resolution than either of the source images.
In super-resolution, multiple differently exposed lower resolution images can be combined to produce a single high resolution image of a scene, for example, as disclosed in "High- Resolution Image Reconstruction from Multiple Differently Exposed Images", Gunturk et al., IEEE Signal Processing Letters, Vol. 13, No. 4, April 2006; or "Optimizing and Learning for Super-resolution", Lyndsey Pickup et al, BMVC 2006, 4-7 Sept 2006, Edinburgh, UK.
However, in super-resolution, blurring of the individual source images, whether because of camera or subject motion, is usually not a concern before the combination of the source images.
US 7,072,525 discloses adaptive filtering of a target version of an image that has been produced by processing an original version of the image to mitigate the effects of processing including adaptive gain noise, up-sampling artifacts or compression artifacts.
PCT Application No. PCT/EP2005/011011 (Ref: FN109) discloses using information from one or more presumed-sharp short exposure time (SET) preview images to calculate a motion function for a fully exposed higher resolution main image to assist in the de-blurring of the main image.
Indeed, many other documents, including US 2006/0187308, Suk Hwan Lim et al.; and "Image Deblurring with Blurred/Noisy Image Pairs", Lu Yuan et al., SIGGRAPH07, August -9, 2007, San Diego, California, are directed towards attempting to calculate a blur function in the main image using a second reference image before de-blurring the main image.
Other approaches, such as disclosed in US2006/0017837 have involved selecting information from two or more images, having varying exposure times, to reconstruct a target image where image information is selected from zones with high image details in SET images and from zones with low image details in longer exposure time images.
It is an object of the present invention to provide an improved method of combining a sharp image and a blurred image of differing resolution and exposure to produce a relatively high resolution, fully exposed and relatively sharp image.
According to the present invention there is provided a method according to claim 1.
Embodiments of the invention will now be described, by way of example, with reference to the accompanying drawings, in which:
Fig. 1 is a block diagram illustrating the processing of images prior to adaptive filtering according to a first embodiment of the present invention;
Fig. 2 illustrates corresponding grid points from a preview and a full resolution image used in the processing of Fig. 1;
Fig. 3 illustrates the adaptive filtering of images in R/G/B color space according to one implementation of the present invention;
Fig. 4 illustrates the adaptive filtering of images in YCbCr color space according to another implementation of the present invention;
Figs. 5(a) and (b) illustrate in more detail the adaptive filtering of images according to two variants of the first embodiment of the invention;
Fig. 6 illustrates a sliding vector employed in the filtering of Fig. 5 at successive iterations for L=3;
Fig. 7 is a block diagram illustrating the processing of images prior to adaptive filtering according to a second embodiment of the present invention;
Fig. 8 shows the timing involved in acquiring two images for use in a further embodiment of the present invention; and
Fig. 9 shows some image data produced during the image acquisition of Figure 8.
Referring now to Figure 1, in a first embodiment of the present invention, a well-exposed blurred relatively low resolution image 12 and a sharp but under-exposed full resolution image 10 are available for processing with a view to combining the images to produce an improved quality full resolution image.
The size of the lower resolution image 12 is O x P and the size of the under-exposed full resolution image 10 is Q x R, with O < Q and P < R.
Where the images are acquired in a digital image acquisition device such as a digital stills camera, camera phone or digital video camera, the lower resolution image 12 may be a preview image of a scene acquired soon before or after the acquisition of a main image comprising the full resolution image 10, with the dimensions of the preview and full resolution images depending on the camera type and settings. For example, the preview size can be 320x240 (O=320; P=240) and the full resolution image can be much bigger (e.g. Q=3648; R=2736).
In accordance with the present invention, adaptive filtering (described in more detail later) is applied to the (possibly pre-processed) source images 10, 12 to produce an improved filtered image. Adaptive filtering requires an input image (referred to in the present specification as x(k)) and a desired image (referred to in the present specification as d(k)) of the same size, with the resultant filtered image (referred to in the present specification as y(k)) having the same size as both input and desired images.
As such, in the preferred embodiment, the preview image is interpolated to the size Q x R of the full resolution image.
It will be seen that in interpolating the preview image, a misalignment between the interpolated image 14 and the full resolution image might exist. As such, in the preferred embodiment, the images are aligned 16 to produce an aligned interpolated preview image 18 and an aligned full resolution image 20. Any known image alignment procedure can be used, for example, as described in Kuglin C. D., Hines D. C., "The phase correlation image alignment method", Proc. Int. Conf. Cybernetics and Society, IEEE, Bucharest, Romania, Sept. 1975, pp. 163-165.
Other possible image registration methods are surveyed in "Image registration methods: a survey", Image and Vision Computing 21 (2003), 977-1000, Barbara Zitova and Jan Flusser.
Alternatively, the displacements between the images 10 and 12/14 can be measured if camera sensors producing such a measure are available.
In any case, either before or during alignment, the full resolution image can be down-sampled to an intermediate size S x T, with the preview image being interpolated accordingly to produce the input and desired images of the required resolution, so that after alignment 16, the size of the aligned interpolated image and the aligned full resolution image will be S x T (S ≤ Q, T ≤ R).
These images are now subjected to further processing 22 to compute the input and desired images (IMAGE 1 and IMAGE 2) to be used in adaptive filtering, after a decision is made based on the displacement value(s) provided from image alignment 16 as indicated by the line 24.
In real situations, there may be relatively large differences between the images 10, 14, with one image being severely blurred and the other one being under-exposed. As such, alignment may fail to give the right displacement between images.
If the displacement values are lower than a specified number of pixels (e.g. 20), then the full resolution aligned image 20 is used as IMAGE 1 and the aligned interpolated preview image 18 is used as IMAGE 2.
Otherwise, if the displacement values are higher than the specified number of pixels, several alternatives are possible for IMAGE 2, although in general these involve obtaining IMAGE 2 by combining the interpolated preview image 14 and the full resolution image 10 in one of a number of manners.
In a first implementation, we compute two coefficients c1 and c2, and the pixel values of IMAGE 2 are obtained by multiplying the pixel values of the full resolution image 10 by c1 and adding c2. These coefficients are computed using linear regression, a common form of which is least squares fitting (G. H. Golub and C. F. Van Loan, Matrix Computations, Johns Hopkins University Press, Baltimore, MD, 3rd edition, 1996).
Referring to Figure 2, a grid comprising for example 25 points is chosen from the preview image 12, along with the corresponding 25 grid points from the full resolution image 10. If a pixel of the preview image has the coordinates (k, l), the corresponding chosen pixel from the full resolution image has the coordinates (k·Q/O, l·R/P). We therefore obtain two 5x5 matrices: M1, corresponding to the pixel values chosen from the preview image, and M2, corresponding to the pixel values chosen from the full resolution image. Two vectors are obtained from the pixel values of these matrices by column-wise ordering of M1 (a = (a_i)) and M2 (b = (b_i)). We therefore have pairs of data (a_i, b_i) for i = 1, 2, ..., n, where n = 25 is the total number of grid points from each image. We define the matrix V whose i-th row is [a_i 1]. The coefficient vector c = [c1 c2]^T is obtained by solving the linear system V^T·V·c = V^T·b. The linear system can be solved with any known method.
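The normal-equation solve above can be sketched as follows. The grid-point values here are synthetic stand-ins (a roughly linear relation with noise), not data from the patent:

```python
import numpy as np

# Hypothetical 25 grid-point pairs: a[i] from the preview image,
# b[i] from the full resolution image (synthetic stand-ins).
rng = np.random.default_rng(0)
a = rng.uniform(10, 200, 25)
b = 1.5 * a + 12 + rng.normal(0, 0.5, 25)   # roughly linear relation

# Build V with rows [a_i, 1] and solve the normal equations V^T V c = V^T b.
V = np.column_stack([a, np.ones_like(a)])
c = np.linalg.solve(V.T @ V, V.T @ b)
c1, c2 = c

# IMAGE 2 would then be: image2 = c1 * full_res + c2
```

For better numerical behaviour on near-degenerate grids, `np.linalg.lstsq(V, b)` solves the same least-squares problem without forming V^T·V explicitly.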
Another alternative is to amplify the pixels of the under-exposed image 10 by the ratio of the average values of the 25 grid points of both images 10, 12 and rescale within the [0, 255] interval for use as IMAGE 2.
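A minimal sketch of this alternative; the grid-point means and pixel values are illustrative stand-ins, and clipping to [0, 255] is one reading of "rescale":

```python
import numpy as np

# Illustrative stand-ins: mean of the 25 grid points from each image.
preview_grid_mean = 120.0      # well-exposed, blurred preview
underexposed_grid_mean = 30.0  # sharp, under-exposed full-resolution image

ratio = preview_grid_mean / underexposed_grid_mean  # amplification factor

underexposed = np.array([[10.0, 40.0], [70.0, 100.0]])  # toy pixel block
image2 = np.clip(underexposed * ratio, 0, 255)          # keep in 8-bit range
```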
In a still further alternative, IMAGE 2 is obtained by combining the amplitude spectrum of the interpolated blurred preview image 14 and the phase of the under-exposed full resolution image 10. As such, IMAGE 2 will be slightly deblurred, with some color artifacts, although it will be aligned with the under-exposed image 10. This should produce relatively fewer artifacts in the final image produced by adaptive filtering.
Alternatively, instead of computing FFTs on full resolution images to determine phase values, an intermediate image at preview resolution can be computed by combining the amplitude spectrum of the blurred image 12 and the phase of a reduced-size version of the under-exposed image 10. This can then be interpolated to produce IMAGE 2.
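The amplitude/phase combination can be sketched with 2-D FFTs; the two random arrays stand in for the blurred preview and the sharp under-exposed image:

```python
import numpy as np

rng = np.random.default_rng(1)
blurred = rng.uniform(0, 255, (64, 64))       # stand-in: interpolated blurred preview
underexposed = rng.uniform(0, 64, (64, 64))   # stand-in: sharp under-exposed image

B = np.fft.fft2(blurred)
U = np.fft.fft2(underexposed)

# Amplitude of the blurred image, phase of the under-exposed image.
combined = np.abs(B) * np.exp(1j * np.angle(U))
image2 = np.real(np.fft.ifft2(combined))
```

Because both source images are real, the combined spectrum is conjugate symmetric, so the inverse transform is real up to floating-point error and the amplitude spectrum of the blurred image is preserved.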
Another possibility is to use as IMAGE 2, a weighted combination of image 20 and image 18, e.g. 0.1*(Image 18) + 0.9*(Image 20). This can be used if the preview image 12 has large saturated areas.
In any case, once the processing 22 is complete, two images of similar size are available for adaptive filtering 30, Figures 3&4.
In a first implementation, the input and desired images are in RGB color space, Figure 3, whereas in another implementation the input and desired images are in YCC space, Figure 4.
For the RGB case, one color plane (e.g. the G plane) is selected from both images, and the filter coefficients computed by adaptive filtering are used to update the pixel values for all color planes. The filter coefficients w(k) are obtained at each iteration of the filter 36. The updated pixel values for all color planes will be y_G(k) = w(k)·x_G(k), y_R(k) = w(k)·x_R(k), y_B(k) = w(k)·x_B(k), where x_R(k), x_G(k), x_B(k) are the sliding vectors 32 for the R, G, B planes respectively. This provides a solution of reduced numerical complexity vis-a-vis filtering all three color planes.
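The shared-coefficient update can be sketched as below; the coefficient and pixel values are illustrative stand-ins, not values from the source:

```python
import numpy as np

L = 3
w = np.array([0.5, 0.3, 0.2])          # coefficients computed on the G plane (stand-ins)
xG = np.array([100.0, 110.0, 120.0])   # sliding vectors for the three planes
xR = np.array([90.0, 95.0, 100.0])
xB = np.array([80.0, 85.0, 90.0])

# One set of coefficients, obtained from a single plane, updates all planes:
yG, yR, yB = w @ xG, w @ xR, w @ xB
```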
In the YCC case, the Y plane is selected with the Cb and Cr planes being left unchanged.
Referring now to Figure 5(a), the adaptive filtering of Figures 3 and 4 is shown in more detail. Two sliding one-dimensional vectors 32, 34 of dimension L are created, L being the length of the adaptive filter. Within the adaptive filter, the input signal x(k) is the first vector signal 32, while the desired signal d(k) is the second vector 34.
In the simplest implementation, L=1, and this can be used if the original image acquisition device can provide good quality under-exposed pictures with a low exposure time. Where the acquisition device produces low quality and noisy under-exposed images, a longer filter length L should be chosen (e.g. 2 or 3 coefficients).
The sliding vectors 32, 34 are obtained from the columns of the image matrices, Fig. 6. The vectors scan both matrices, column by column and with each iteration of the adaptive filter the following pixel value is added to the vector and the trailing pixel value is discarded.
When the vectors 32, 34 are combined in the adaptive filter 36, the most recent pixel value added to the first sliding vector 32 is updated. In the preferred embodiment, the updated pixel is the dot product of the filter coefficients and the L pixel values of the first vector. Any adaptive algorithm (Least Mean Square based, Recursive Least Square based) can be applied and many such algorithms can be found in S. Haykin, "Adaptive filter theory", Prentice Hall, 1996. Preferably, the sign-data LMS described in Hayes, M, Statistical Digital Signal Processing and Modeling, New York, Wiley, 1996 is employed.
The formulae are: x(k) = [x(k) x(k-1) ... x(k-L+1)], w(k) = [w(k) w(k-1) ... w(k-L+1)], y(k) = w(k)·x(k), e(k) = d(k) - y(k), w(k+1) = w(k) + μ(k)·e(k)·sign(x(k)) = w(k) + μ(k)·e(k), where w(k) are the filter coefficients calculated within the filter 36, μ(k) is the step size (fixed or variable), x(k) is the most recent pixel value(s) of the sliding vector 32 from IMAGE 1 (it always has positive values), d(k) is the most recent pixel value(s) of the sliding vector 34 from IMAGE 2, y(k) is the scalar product of the sliding vector 32 and the filter coefficient vector w(k), and e(k) is the error signal computed as the difference between d(k) and y(k).
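A runnable sketch of one column pass of the sign-data LMS loop above. The zero-padded start of the sliding vector, the identity initialisation of w, and the step size are assumptions chosen for illustration (the source does not specify them):

```python
import numpy as np

def sign_data_lms(x_col, d_col, L=3, mu=0.005):
    """One column pass of sign-data LMS (a sketch).

    x_col: column of IMAGE 1 (input, sharp/under-exposed), positive values
    d_col: column of IMAGE 2 (desired, well-exposed/blurred)
    Returns the filtered column y.
    """
    w = np.zeros(L)
    w[0] = 1.0                       # assumed start: identity on the newest pixel
    y = np.empty(len(x_col), dtype=float)
    for k in range(len(x_col)):
        # Sliding vector: newest pixel first, zero-padded before the column start.
        xk = np.array([x_col[k - i] if k - i >= 0 else 0.0 for i in range(L)])
        y[k] = w @ xk                # y(k) = w(k) . x(k)
        e = d_col[k] - y[k]          # e(k) = d(k) - y(k)
        # Since x(k) > 0, sign(x(k)) is all ones and the update is w + mu*e.
        w = w + mu * e
    return y

# Toy usage: a constant dark column driven towards a constant bright target.
x = np.full(64, 50.0)
d = np.full(64, 100.0)
y = sign_data_lms(x, d, L=3, mu=0.005)
```

With this small step size the output converges towards the desired level over the column; too large a step (relative to the pixel magnitudes and L) makes the recursion diverge.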
Other variants considered were: w(k+1) = w(k) + μ(k)·e(k)·x(k) (standard LMS), or w(k+1) = w(k) + μ(k)·e(k)/(1 + x(k)). The term 1 + x(k) is used above to avoid division by zero. Alternatively, the formula w(k+1) = w(k) + μ(k)·e(k)/x(k) could be used, with any zero-valued x pixel value replaced with a 1.
In a further variant, the step size μ(k) is variable as follows: μ(k) = (1 - α)/x(k), or μ(k) = (1 - α)/(β + x(k)).
So, using the above formula with μ(k) = (1 - α)/x(k), the update w(k+1) = w(k) + μ(k)·e(k)·sign(x(k)) = w(k) + μ(k)·e(k) gives: w(k+1) = w(k) + ((1 - α)/x(k))·(d(k) - w(k)·x(k)) = α·w(k) + (1 - α)·d(k)/x(k). For L=1, with α very close to 1 (e.g. 0.99999) and vectors replaced by scalars, this recursion is an exponential averaging of d(k)/x(k). Therefore, for this particular variable step size, the sign-data LMS and the previous equation are equivalent.
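The algebraic equivalence claimed above can be checked numerically; the pixel sequences are random stand-ins:

```python
import numpy as np

alpha = 0.99999
rng = np.random.default_rng(2)
x = rng.uniform(1, 255, 100)   # input pixels (always positive)
d = rng.uniform(1, 255, 100)   # desired pixels

# Path 1: sign-data LMS with variable step mu(k) = (1 - alpha)/x(k), L = 1.
w1 = 1.0
for k in range(100):
    e = d[k] - w1 * x[k]
    w1 = w1 + (1 - alpha) / x[k] * e

# Path 2: the exponential-averaging form w(k+1) = alpha*w(k) + (1-alpha)*d(k)/x(k).
w2 = 1.0
for k in range(100):
    w2 = alpha * w2 + (1 - alpha) * d[k] / x[k]
```

Both recursions are the same expression rearranged, so the two coefficients agree to floating-point precision.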
The β parameter can be used in order to avoid division by zero and the over-amplification of black pixels. β is preferably in the interval [1, 10], and more preferably in the interval [5, 10], particularly if the under-exposed image is too dark; if not, β = 1 is enough.
Some thresholds or resetting of the filter coefficients w(k) or output values y(k) can be imposed in order to avoid artifacts in the filtered image 38. An upper threshold, δ, is imposed on the values allowed for the coefficients of w(k) (i.e. w_i(k) = δ for any i = 1..L if its computed value at iteration k is above δ). A suitable threshold value for the mentioned LMS algorithm can be chosen as δ = 1 + 4·b̄/ā, where b̄ and ā are the average values of the above-mentioned vectors b and a respectively. Also, the filter output can be forced to be within the [0, 255] interval if uint8 images are used. As can be seen, the updated pixel values y(k) replace the old pixel values x(k) and can be taken into account for the next sliding vectors.
The updated color matrix 38 is completed when the last pixel from the last column has been updated. If filtering has been performed in RGB space, then a final reconstructed image 40 is obtained by concatenating the updated R/G/B matrices. Alternatively, if filtering has been performed in YCC space, the updated Y plane, i.e. matrix 38, together with the unchanged Cb and Cr planes of the under-exposed image 10, can be converted back to RGB color space.
The filtering can be repeated with the reconstructed image 40 replacing the under-exposed image, i.e. IMAGE 1.
In this case, adaptive filtering can be performed on the Y plane of an image converted from RGB space, if previous filtering had been performed in RGB space; or alternatively filtering can be performed on an RGB color plane of an image converted from YCC space, if previous filtering had been performed on the Y plane.
It will also be seen that filtering can be operated column wise or row wise. As such, adaptive filtering can be performed first column or row wise and subsequently in the other of column or row wise.
In each case where filtering is repeated, it has been found that the quality of the reconstructed image after two filtering operations is superior to that of each individual filtering result.
Referring to Figure 5(b), in some cases saturation problems might appear in the filtered image, especially when the coefficient c1 has a large value (e.g. when using a very dark under-exposed image and a very light blurred image). This saturation can be avoided using, for example, techniques described in Jourlin, M., Pinoli, J.C.: "Logarithmic image processing: the mathematical and physical framework for the representation and processing of transmitted images", Advances in Imaging and Electron Physics 115 (2001) 129-196; or Deng, G., Cahill, L.W., Tobin, G.R.: "The study of logarithmic image processing model and its application to image enhancement", IEEE Trans. on Image Processing 4 (1995) 506-512.
Therefore, the pixel value of the filtered image z(k) is generated by the following formula: z(k) = D - D·(1 - x(k)/D)^w(k), where D is the maximum permitted value (e.g. 255 for an 8-bit representation of images). The adaptive filter provides the first filter coefficient w(k), computed using the error signal e(k).
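The logarithmic-image-processing-style output can be sketched as follows. Since the formula is garbled in the source, the expression below is a reconstruction and an assumption; the chosen coefficient value is also illustrative:

```python
import numpy as np

D = 255.0  # maximum permitted value for 8-bit images

def lip_output(x, w):
    """Saturation-avoiding output z = D - D*(1 - x/D)**w.

    Reconstruction of the garbled patent formula (an assumption):
    amplifies dark pixels but can never exceed D.
    """
    return D - D * (1.0 - x / D) ** w

# Behaves like amplification for small x, but saturates smoothly at D.
z = lip_output(np.array([0.0, 50.0, 200.0, 255.0]), 2.0)
```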
Another alternative to reduce saturation problems is to reduce the value of the step size μ(k).
Referring now to Figure 7, in a second embodiment of the invention, an under-exposed relatively sharp low resolution image and a full resolution blurred image 72 are available.
The low resolution image, for example, a preview image as before, is interpolated and aligned with the full resolution image to produce image 70.
A PSF estimation block 74 computes a PSF for the blurred image 72, from the interpolated preview 70 and the full resolution image 72, using any suitable method such as outlined in the introduction.
The blurred image 72 is then deblurred using this estimated PSF to produce a relatively deblurred image 76. Examples of deblurring using a PSF are disclosed in "Deconvolution of Images and Spectra", 2nd Edition, Academic Press, 1997, edited by Jannson, Peter A., and "Digital Image Restoration", Prentice Hall, 1977, authored by Andrews, H. C. and Hunt, B. R.
Prior to adaptive filtering, the average luminance of the interpolated preview image 70 is equalized in processing block 78 with that of the full resolution (relatively) deblurred image 76. Preferably, this comprises a gamma (γ) amplification of the under-exposed image. The exact value of gamma is determined by obtaining a ratio of average luminance (Y in YCC format) for the blurred full resolution image and the preview image, and then using this ratio as an index into a look-up table to return γ.
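A sketch of the luminance equalization. The camera-specific look-up table is not given in the source, so the direct gamma solve below is an illustrative stand-in for it:

```python
import numpy as np

def equalize_luminance(under_y, blurred_y):
    """Gamma-amplify the under-exposed Y plane so its mean luminance matches
    the blurred image's. The direct solve for gamma is a stand-in for the
    camera-specific ratio -> gamma look-up table (an assumption)."""
    under_n = np.clip(under_y / 255.0, 1e-6, 1.0)
    mean_u = under_n.mean()
    mean_b = np.clip(blurred_y / 255.0, 0.0, 1.0).mean()
    # Choose gamma so that (mean luminance)**gamma equals the target mean.
    gamma = np.log(mean_b) / np.log(mean_u)
    return 255.0 * under_n ** gamma, gamma

# Toy usage: a uniform dark plane lifted to match a brighter plane.
out, gamma = equalize_luminance(np.full((4, 4), 40.0), np.full((4, 4), 160.0))
```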
The deblurred full resolution image 76 is then chosen as IMAGE 2 and the interpolated /aligned/luminance equalized preview image produced by the processing block 78 is chosen as IMAGE 1.
Adaptive filtering is then applied, and re-applied if necessary, to IMAGE 1 and IMAGE 2 as in the first embodiment. Again, when repeating adaptive filtering, the under-exposed image, i.e. IMAGE 1, is replaced with the reconstructed one.
In the second embodiment, the quality of the reconstructed image 76 produced by adaptive filtering may not be good enough, especially if the PSF is relatively large. In such cases, de-blurring using the PSF may not be used, because it can introduce significant ringing.
In cases such as this, re-applying adaptive filtering as in the first embodiment can attenuate the blurring artifacts in the original image 72 and improve the quality of the image to some extent.
Again, the adaptive filtering can be performed on the Y plane if RGB filtering had been performed previously, and in RGB color space if Y filtering had been performed previously.
Again, filtering can be operated on columns or rows, and sequentially on columns and rows.
It has also been found that the second embodiment is useful, if the ratio between the full resolution image 72 and the preview image sizes is less than three and the preview image is not too noisy. If this is not the case, the filtered image can have a lower quality than that obtained by deblurring the blurred image with a very good PSF estimation such as described in the introduction.
In both of the above embodiments, a single preview image is described as being interpolated to match the resolution of the full resolution image. However, it will also be appreciated that super-resolution of more than one preview image, nominally of the same scene, could also be used to generate the interpolated images 14, 70 of the first and second embodiments.
In the above embodiments, and in particular in relation to the second embodiment, the short-exposure-time (presumed sharp) image is described as comprising a preview image acquired either soon before or after acquisition of a main high resolution image.
However, in a further refined embodiment, the two images are acquired within the longer time period of acquisition of the relatively blurred image. In a preferred implementation of this embodiment, an image acquisition device including a CMOS sensor which allows for a non-destructive readout of an image sensor during image acquisition is employed to acquire the images.
A schematic representation of the timing involved in acquiring these images is explained in relation to Figure 8. For a dark scene, the exposure time Tlong required to expose the image F properly can result in motion blur caused by hand jitter. Nonetheless, using a non-destructive sensor, it is possible to have an intermediate reading at Tshort providing an under-exposed (noise prone), but sharp, image G.
In the preferred embodiment, the read-out of the under-exposed image is placed mid-way through the longer exposure period, i.e. between T0 and T0+Tshort. As such, the actual exposing scheme goes as follows: at t=0, start exposing; at t=T0, take the first readout to obtain G'; at t=T0+Tshort, take the second readout to obtain G''; the short-exposed image is G = G'' - G'; at t=Tlong, take the third (last) readout to obtain the well-exposed frame, F.
Reset the image sensor.
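The readout scheme can be sketched with a toy model of an accumulating non-destructive sensor; the rates, times, and the perfect-linearity assumption are all illustrative, not from the source:

```python
import numpy as np

# Toy accumulating-sensor model: counts grow linearly with exposure time.
rng = np.random.default_rng(3)
scene_rate = rng.uniform(0.5, 4.0, (8, 8))   # per-pixel photon rate (stand-in)

T0, Tshort, Tlong = 40.0, 10.0, 100.0
G1 = scene_rate * T0               # first readout  G'  at t = T0
G2 = scene_rate * (T0 + Tshort)    # second readout G'' at t = T0 + Tshort
F = scene_rate * Tlong             # final readout: the well-exposed frame

# The short-exposure image is the difference of the two mid-exposure readouts,
# equivalent to exposing for Tshort alone.
G = G2 - G1
```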
This means that, statistically, the chances of content differences between the short exposure image G and the long exposure image F are minimized. Again, statistically, it is therefore more likely that any differences are caused only by the motion existing in the period [0, Tlong]. The well-exposed picture is blurred by the motion existing in its exposure period, while the other is not, i.e. the motion blur accounts for the content differences.
Referring now to Figure 9, a still image of a scene is recorded. The period T0 is chosen to be long enough that motion appears in the image G' read at time T0, Figure 9(c). The values of the PSF for this image are shown in Figure 9(a). From T0 to T0+Tshort there is not enough time for extra motion to appear. However, the entire interval, [0, T0+Tshort], is long enough that the resulting image G'', Figure 9(d), will be blurred, as can be seen from the corresponding PSF values of Figure 9(b). The resulting under-exposed image, G = G'' - G', Figure 9(e), is not blurred, as can be seen from the small difference between the PSF values for the original images G'' and G'.
The image G can now be combined with the image F through adaptive filtering as described above. In particular, in relation to the second embodiment, luminance enhancement can be performed on the image G before it is combined with the image F.
Subsequent to producing the filtered image 40 through one or more steps of adaptive filtering, the filtered image can be subjected to further processing to improve its quality further.
The noise correction of the filtered image can be performed using a modified version of the Lee least mean square (LLMSE) filter. In the following example, G1 is the filtered image; G1^X is the convolution of G1 with an XxX uniform averaging kernel, so G1^3 is the convolution of G1 with a 3x3 uniform averaging kernel and G1^7 is the convolution of G1 with a 7x7 uniform averaging kernel.
The noise-cleared picture is: G2 = α·G1^X + (1 - α)·G1, where α = S_Δn/(S_Δn + S_F). S_G1 is the filtered image standard deviation computed for a 5x5 vicinity of a pixel; S_F is the well-exposed image squared standard deviation computed for a 3x3 vicinity of the corresponding pixel; and S_Δn = |S_F - S_G1|. If S_G1 is smaller than a predetermined threshold (meaning that the current pixel is in a perfectly uniform area) then G1^X = G1^7; otherwise (there is an edge in the current pixel's neighborhood) G1^X = G1^3. It will therefore be seen that where the variation around a pixel is high, G2 is approximately equal to G1.
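A sketch of this modified LLMSE blend. The box-filter helper, variance estimates, and threshold value are illustrative choices; the weighting formula follows the (partly garbled) description above and is therefore an assumption:

```python
import numpy as np

def box_mean(img, size):
    """Mean over a size x size neighborhood (edges use the clipped window)."""
    r = size // 2
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = img[max(i - r, 0):i + r + 1,
                            max(j - r, 0):j + r + 1].mean()
    return out

def box_var(img, size):
    """Local variance via E[x^2] - E[x]^2 over the same window."""
    return box_mean(img ** 2, size) - box_mean(img, size) ** 2

def llmse_denoise(g1, f, threshold=1.0):
    """Blend g1 with a local average of itself, weighted by local variances
    of g1 (5x5) and the well-exposed image f (3x3) -- a hedged sketch."""
    s_g1 = box_var(g1, 5)                  # filtered-image local variance
    s_f = box_var(f, 3)                    # well-exposed-image local variance
    s_dn = np.abs(s_f - s_g1)
    alpha = s_dn / (s_dn + s_f + 1e-12)    # epsilon guards division by zero
    # Uniform area -> stronger 7x7 smoothing; near an edge -> gentler 3x3.
    g1x = np.where(s_g1 < threshold, box_mean(g1, 7), box_mean(g1, 3))
    return alpha * g1x + (1 - alpha) * g1

out = llmse_denoise(np.full((8, 8), 10.0), np.full((8, 8), 10.0))
```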
As discussed, the under-exposed acquired image has intensities in the lower part of the range (the darkness range). The spectral characteristics of cameras in this range differ from those of normally exposed areas. Therefore, the adaptively filtered image, G1 or G2 (depending on whether noise filtering has been applied or not), may have deviations in color. To compensate for these deviations, a rotation or a translation in the (Cb, Cr) plane can be applied. The parameter values for these operations will depend on the camera and on the number of exposure stops between the well-exposed and the under-exposed images. One exemplary scheme for color correction in RGB space is as follows: compute the average luminance of G2 and of the well-exposed image; compute the corresponding color-plane averages; then correct G2 to obtain G3 plane by plane, e.g. ΔR = (R̄_G2 - Ȳ_G2) - (R̄_F - Ȳ_F), R_G3(i,j) = R_G2(i,j) - ΔR, and similarly ΔGr = (Ḡr_G2 - Ȳ_G2) - (Ḡr_F - Ȳ_F), Gr_G3(i,j) = Gr_G2(i,j) - ΔGr.
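A sketch of the plane-by-plane offset correction. Because the formulae are garbled in the source, the reconstruction below (shift each plane so its offset from average luminance matches the well-exposed image's) is an assumption, as is the simple mean-of-planes luminance:

```python
import numpy as np

def color_correct(r2, gr2, b2, rf, grf, bf):
    """Shift each plane of the filtered image G2 so its offset from average
    luminance matches the well-exposed image F (hedged reconstruction)."""
    def luma(r, g, b):
        return (r + g + b) / 3.0   # simple average-luminance stand-in
    y2 = luma(r2, gr2, b2).mean()
    yf = luma(rf, grf, bf).mean()
    out = []
    for p2, pf in ((r2, rf), (gr2, grf), (b2, bf)):
        delta = (p2.mean() - y2) - (pf.mean() - yf)
        out.append(p2 - delta)
    return out

# Toy usage: G2 already has the same per-plane offsets as F, so no shift occurs.
g3 = color_correct(np.full((2, 2), 100.0), np.full((2, 2), 80.0),
                   np.full((2, 2), 60.0),
                   np.full((2, 2), 200.0), np.full((2, 2), 180.0),
                   np.full((2, 2), 160.0))
```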

Claims (5)

Claims:
1. An image processing method comprising: obtaining a first relatively underexposed and sharp image of a scene; obtaining a second relatively well exposed and blurred image, nominally of the same scene, said first and second images being derived from respective image sources; providing a portion of said first relatively underexposed image as an input signal to an adaptive filter; providing a corresponding portion of said second relatively well exposed image as a desired signal to said adaptive filter; adaptively filtering said input signal to produce an output signal; and constructing a first filtered image from said output signal, relatively less blurred than said second image.
2. An image processing method according to claim 1 wherein said first and second images are in RGB format and wherein said image portions comprise a respective color plane of said first and second images.
3. An image processing method according to claim 2 wherein said adaptively filtering includes producing a set of filter coefficients from a combination of said input signal and an error signal being the difference between said desired signal and said output signal; and further comprising: constructing each color plane of said first filtered image from a combination of said filter coefficients and said input signal color plane information.
4. An image processing method according to claim 1 wherein said first and second images are in YCC format and wherein said image portions comprise a respective Y plane of said first and second images.
5. An image processing method according to claim 4 wherein said constructing said first filtered image comprises using said output signal as a Y plane of said first filtered image and using Cb and Cr planes of said input image as the Cb and Cr planes of said first filtered image.
IE2008/0337A 2008-04-30 Image processing method and apparatus IE20080337U1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US UNITED STATES OF AMERICA 18/09/2007

Publications (2)

Publication Number Publication Date
IES84987Y1 true IES84987Y1 (en) 2008-10-01
IE20080337U1 IE20080337U1 (en) 2008-10-01


Similar Documents

Publication Publication Date Title
US8737766B2 (en) Image processing method and apparatus
US8989516B2 (en) Image processing method and apparatus
US8878967B2 (en) RGBW sensor array
US7412107B2 (en) System and method for robust multi-frame demosaicing and color super-resolution
US8890983B2 (en) Tone mapping for low-light video frame enhancement
US11625815B2 (en) Image processor and method
US9100514B2 (en) Methods and systems for coded rolling shutter
US9307212B2 (en) Tone mapping for low-light video frame enhancement
KR100911890B1 (en) Method, system, program modules and computer program product for restoration of color components in an image model
US9858644B2 (en) Bayer color filter array based high dynamic range video recording method and device
Chang et al. Low-light image restoration with short-and long-exposure raw pairs
CN104168403B (en) High dynamic range video method for recording and device based on Baeyer color filter array
Akyüz Deep joint deinterlacing and denoising for single shot dual-ISO HDR reconstruction
CN111242860B (en) Super night scene image generation method and device, electronic equipment and storage medium
US7430334B2 (en) Digital imaging systems, articles of manufacture, and digital image processing methods
Deever et al. Digital camera image formation: Processing and storage
IES84987Y1 (en) Image processing method and apparatus
IE20080337U1 (en) Image processing method and apparatus
Gil Rodríguez et al. Issues with common assumptions about the camera pipeline and their impact in hdr imaging from multiple exposures
Lim et al. Gain fixed-pattern-noise correction via optical flow
CN111311507A (en) Ultra-low light imaging method based on multi-granularity cooperative network