GB2536429A - Image noise reduction - Google Patents


Publication number
GB2536429A
GB2536429A
Authority
GB
United Kingdom
Prior art keywords
image
images
points
alignment
processing module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1504316.9A
Other versions
GB201504316D0 (en)
GB2536429B (en)
Inventor
Vivet Marc
Brasnett Paul
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Imagination Technologies Ltd
Original Assignee
Imagination Technologies Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Imagination Technologies Ltd filed Critical Imagination Technologies Ltd
Priority to GB1504316.9A (GB2536429B)
Publication of GB201504316D0
Priority to EP16158697.9A (EP3067858B1)
Priority to CN201610141825.4A (CN105976328B)
Priority to US15/068,899 (US11756162B2)
Publication of GB2536429A
Application granted
Publication of GB2536429B
Priority to US15/877,552 (US10679363B2)
Priority to US16/865,884 (US11282216B2)
Priority to US18/244,393 (US20230419453A1)
Legal status: Active

Classifications

    • G06T 5/70
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/50 - Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T 7/00 - Image analysis
    • G06T 7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/32 - Determination of transform parameters for the alignment of images, i.e. image registration using correlation-based methods
    • G06T 7/37 - Determination of transform parameters for the alignment of images, i.e. image registration using transform domain methods
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 - Control of cameras or camera modules
    • H04N 23/68 - Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N 23/682 - Vibration or motion blur correction
    • H04N 23/684 - Vibration or motion blur correction performed by controlling the image sensor readout, e.g. by controlling the integration time
    • H04N 23/6845 - Vibration or motion blur correction performed by controlling the image sensor readout by combination of a plurality of images sequentially taken
    • H04N 23/80 - Camera processing pipelines; Components thereof
    • H04N 23/81 - Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10016 - Video; Image sequence

Abstract

A reduced noise image can be formed from a set of images. One of the images of the set can be selected to be a reference image and other images of the set are transformed such that they are better aligned with the reference image. A measure of the alignment of each image with the reference image is determined. At least some of the transformed images can then be combined using weights which depend on the alignment of the transformed image with the reference image to thereby form the reduced noise image. By weighting the images according to their alignment with the reference image the effects of misalignment between the images in the combined image are reduced. Furthermore, motion correction may be applied to the reduced noise image.

Description

IMAGE NOISE REDUCTION
Background
Cameras are used to capture images. Often images are noisy in the sense that there is some image noise present in the image. The image noise may be random (or pseudo-random) such that there is little or no correlation between the image noise of two different images of the same scene. In the context of this description, image noise is an unwanted signal which is present in an image resulting from the image capture process, and may be produced, for example, by a sensor and/or by circuitry of a camera which captures the image.
Since there is often little or no correlation between the image noise of two different images of the same scene, the image noise may be reduced by combining a sequence of two or more images captured in quick succession of the same scene. Combining the images will reduce the effect of random fluctuations in each individual image resulting from the image capture process. For example, at each pixel position, the pixel values for the different images may be averaged to determine the pixel values of the combined image. The combined image is a reduced noise image.
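The effect of this per-pixel averaging can be illustrated with synthetic data. The sketch below assumes perfectly aligned images and independent Gaussian noise; the scene content, image size, burst length and noise level are all illustrative, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical static scene: a constant mid-grey image (illustrative values).
scene = np.full((32, 32), 128.0)

# Simulate a burst of 8 captures of the same scene, each with independent
# zero-mean sensor noise (no correlation between images).
burst = [scene + rng.normal(0.0, 10.0, scene.shape) for _ in range(8)]

# Combine by averaging the pixel values at each pixel position.
combined = np.mean(burst, axis=0)

# Averaging N images with uncorrelated noise reduces the noise standard
# deviation by roughly a factor of sqrt(N).
noise_single = np.std(burst[0] - scene)
noise_combined = np.std(combined - scene)
```

With 8 images the residual noise standard deviation drops to roughly 1/sqrt(8), i.e. about a third, of the single-image value.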
Since the images which are combined are captured at different time instances there may be some motion of objects in the scene between the times at which different images are captured. Furthermore, there may be some movement of the camera between the times at which different images are captured. In particular, if a user is holding a camera while it captures a sequence of images then it is very likely that there will be some camera movement between the times at which different images are captured. The motion between the images which are combined to form the reduced noise image may cause some geometric misalignment between the images, which in turn may introduce some blur into the reduced noise image. There are various types of "alignment" between images, such as geometric alignment, radiometric alignment and temporal alignment. The description herein considers geometric alignment of images which is relevant for handling motion between the images, and the term "alignment" as used herein should be understood to be referring to "geometric alignment". Misalignment between the images causes problems when it comes to combining images in order to reduce noise. Furthermore, movement of the camera while an image is being captured may introduce motion blur into the image which can reduce the sharpness of the image.
Summary
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
There is provided a method of forming a reduced noise image using a set of images, the method comprising: applying respective transformations to at least some of the images of the set to bring them closer to alignment with a reference image from the set of images; determining measures of alignment of the respective transformed images with the reference image; determining weights for one or more of the transformed images using the determined measures of alignment; and combining a plurality of images including said one or more of the transformed images using the determined weights to form a reduced noise image.
There is provided a processing module for forming a reduced noise image using a set of images, the processing module comprising: alignment logic configured to: apply respective transformations to at least some of the images of the set to bring them closer to alignment with a reference image from the set of images; and determine measures of alignment of the respective transformed images with the reference image; and combining logic configured to: determine weights for one or more of the transformed images using the determined measures of alignment; and combine a plurality of images including said one or more of the transformed images using the determined weights to form a reduced noise image.
There is provided a method of transforming a first image to bring it closer to alignment with a second image, the method comprising: implementing a multiple kernel tracking technique to determine positions of a set of candidate regions of the first image based on a similarity between a set of target regions of the second image and the set of candidate regions of the first image, wherein the target regions of the second image are respectively positioned over the positions of a predetermined set of points of the second image; using at least some of the determined positions of the set of candidate regions to initialize a Lucas Kanade Inverse algorithm; using the Lucas Kanade Inverse algorithm to determine a set of points of the first image which correspond to at least some of the predetermined set of points of the second image; determining parameters of a transformation to be applied to the first image based on an error metric which is indicative of an error between a transformation of at least some of the determined set of points of the first image and the corresponding points of the predetermined set of points of the second image; and applying the transformation to the first image to bring it closer to alignment with the second image.
There is provided a processing module for transforming a first image to bring it closer to alignment with a second image, the processing module comprising alignment logic which comprises: multiple kernel tracking logic configured to implement a multiple kernel tracking technique to determine positions of a set of candidate regions of the first image based on a similarity between a set of target regions of the second image and the set of candidate regions of the first image, wherein the target regions of the second image are respectively positioned over the positions of a predetermined set of points of the second image; Lucas Kanade Inverse logic configured to use a Lucas Kanade Inverse algorithm to determine a set of points of the first image which correspond to at least some of the predetermined set of points of the second image, wherein the positions of at least some of the set of candidate regions determined by the multiple kernel tracking logic are used to initialize the Lucas Kanade Inverse algorithm; and transformation logic configured to: (i) determine parameters of a transformation to be applied to the first image based on an error metric which is indicative of an error between a transformation of at least some of the determined set of points of the first image and the corresponding points of the predetermined set of points of the second image, and (ii) apply the transformation to the first image to bring it closer to alignment with the second image.
There may also be provided computer readable code adapted to perform the steps of any of the methods described herein when the code is run on a computer. Furthermore, computer readable code may be provided for generating a processing module according to any of the examples described herein. The computer code may be encoded on a computer readable storage medium.
The above features may be combined as appropriate, as would be apparent to a skilled person, and may be combined with any of the aspects of the examples described herein.
Brief Description of the Drawings
Examples will now be described in detail with reference to the accompanying drawings in which:
Figure 1 is a schematic diagram of a processing module for forming a reduced noise image;
Figure 2 is a flow chart for a method of forming a reduced noise image;
Figure 3 is a graph showing the values of sharpness indications for a set of images;
Figure 4 is a flow chart for a method of determining point correspondences between two images;
Figure 5 represents a set of regions within an image used for a multiple kernel tracking technique and a corresponding set of regions within the image used for a Lucas Kanade Inverse algorithm;
Figure 6 is a graph showing the values of misalignment parameters for a set of images;
Figure 7a shows an example of an average of a set of images when there is motion in the scene;
Figure 7b shows a binary motion mask indicating areas of motion in the set of images;
Figure 7c shows a modified motion mask;
Figure 7d shows a smoothed motion mask;
Figure 8 shows an example of a reference image, a reduced noise image and a motion-corrected reduced noise image; and
Figure 9 is a schematic diagram of a computer system in which a processing module for forming a reduced noise image is implemented.
The accompanying drawings illustrate various examples. The skilled person will appreciate that the illustrated element boundaries (e.g., boxes, groups of boxes, or other shapes) in the drawings represent one example of the boundaries. It may be that in some examples, one element may be designed as multiple elements or that multiple elements may be designed as one element. Common reference numerals are used throughout the figures, where appropriate, to indicate similar features.
Detailed Description
In examples described herein, a reduced noise image is formed using a set of images. One of the images of the set may be selected to be a reference image and other images of the set are transformed such that they are better aligned with the reference image. At least some of the images can then be combined using weights which depend on the alignment of the transformed image with the reference image to thereby form the reduced noise image. By weighting the images according to their alignment with the reference image the effects of misalignment between the images in the combined image are reduced.
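The weighted combination itself can be sketched as follows, assuming (as an illustration; the function name and scalar-per-image weighting are assumptions, not the patent's definition) that each transformed image contributes one scalar weight derived from its measure of alignment:

```python
import numpy as np

def combine_weighted(images, weights):
    """Weighted per-pixel combination of aligned images.

    A minimal sketch: each image gets one scalar weight (e.g. larger for
    images better aligned with the reference), and the weights are
    normalised so that they sum to 1 before the images are accumulated.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalise so the combined image keeps its brightness
    stack = np.stack([img * wi for img, wi in zip(images, w)], axis=0)
    return stack.sum(axis=0)
```

A poorly aligned image given a small weight then contributes little to the combined image, which is how the effects of residual misalignment are reduced.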
Furthermore, in examples described herein, point correspondences between a first image of a set of images and a second image (e.g. a reference image) of the set of images can be determined by implementing a multiple kernel tracking (MKT) technique to determine positions of a set of candidate regions of the first image, and using the determined positions to initialize a Lucas Kanade Inverse (LKI) algorithm. The LKI algorithm can then be used to determine a set of points of the first image which correspond to at least some of a predetermined set of points of the second image. These point correspondences can then be used to determine parameters of a transformation to be applied to the first image to bring it closer to alignment with the second image. It is noted that the MKT technique gives a global alignment which includes alignment parameters describing an alignment for the full image, and then this global alignment is used to determine the initial positions for use in the LKI algorithm which then obtains a local alignment to determine the point correspondences. As described in more detail below, the use of a multiple kernel tracking technique to initialize a Lucas Kanade Inverse algorithm solves some problems which can sometimes be encountered with a Lucas Kanade Inverse algorithm. For example, without a sufficiently accurate initialization, the Lucas Kanade Inverse algorithm may fail to converge on an accurate solution. The use of a multiple kernel tracking technique can provide a sufficiently accurate initialization for the Lucas Kanade Inverse algorithm even if the point correspondences involve a large shift in position and even if there are affine transformations, such as rotations, between the images. Furthermore, the Lucas Kanade Inverse algorithm does not perform well in flat areas of an image because the algorithm uses gradients to converge on a solution. 
A multiple kernel tracking technique includes the calculation of feature histograms which can be used to indicate whether a region is flat and should therefore be discarded such that it is not used when implementing the Lucas Kanade Inverse algorithm.
In more detail, in examples described herein, the candidate images (i.e. the images other than the reference image) are warped back to the reference image using the MKT parameters, such that any region from a candidate image should be close to the corresponding region of the reference image. The LKI algorithm can then use the same regions that were used when performing the MKT, because some information is already computed for them (e.g. as described below, an intensity histogram may be computed for a region which can be used to determine if the region is flat or not). The MKT technique can include scaling and rotation functions, so warping the full candidate image back to the reference image can have some accuracy advantages, since the LKI algorithm described herein does not include scaling or rotation functions. The LKI algorithm described herein does not include scaling or rotation functions because it operates on small regions, so allowing scaling and rotations would introduce too many degrees of freedom for the small region thereby resulting in errors. So, the use of the MKT technique takes scaling and rotation into account, such that the LKI algorithm does not need to, and the method still has tolerance to rotations and scaling. It is noted that the point correspondences obtained by the LKI algorithm provide projective transformations which may include scaling and rotation. Projective transformations are not estimated on the MKT step in the examples described herein because the MKT technique would become unstable due to too many degrees of freedom. The MKT technique described herein has four degrees of freedom (x, y, scale, angle) and a projective transformation has eight degrees of freedom.
Embodiments will now be described by way of example only.
Figure 1 shows a processing module 100 which is configured to receive a set of images and to form a reduced noise image using the set of images. Furthermore, in the example shown in Figure 1, the processing module 100 is configured to apply motion correction such that the image which is output from the processing module 100 is a motion-corrected, reduced noise image. The processing module 100 comprises selection logic 102, alignment logic 104, combining logic 106 and motion correction logic 108. The alignment logic 104 comprises point correspondence logic 110, transformation logic 112 and alignment measuring logic 114. The point correspondence logic 110 comprises multiple kernel tracking logic 116 and Lucas Kanade Inverse logic 118. The processing module 100, and its logic blocks, may be implemented in hardware, software or a combination thereof.
The operation of the processing module 100 is described with reference to the flow chart shown in Figure 2. In step S202 the processing module 100 receives a set of images. To give some examples, the images may be received from an image sensor, from some other processing module or from a memory which may be implemented on the same device (e.g. camera, smartphone, tablet, etc.) as the processing module 100. The images of the set of images are similar in the sense that they are substantially of the same scene. For example, the set of images may be captured in quick succession, e.g. with a camera operating in a burst mode such that a plurality of images (e.g. 24 images) are captured over a short time period (e.g. 3 seconds). The numbers given herein are given by way of example and may be different in different implementations. The set of images may comprise frames of a video sequence. The set of images are received at the selection logic 102.
As a very brief overview of the noise reduction method implemented by the processing module 100:
- the selection logic 102 selects a reference image from the set of images based on the sharpness of the images, and discards blurry images (steps S204 to S208);
- the alignment logic 104 transforms images such that they more closely align with the reference image, and discards those which are highly misaligned (steps S210 to S218);
- the combining logic 106 combines images to form a reduced noise image (steps S220 and S222); and
- the motion correction logic 108 corrects artifacts in the reduced noise image which are produced by motion between the images (steps S224 and S226).
These processes are described in more detail below.
In step S204 the selection logic 102 determines sharpness indications for the images. It is noted that a camera capturing the images may be implemented in a handheld device and, as such, some of the images may be blurry due to motion of the camera. Blur caused by motion of the camera is not normally a desired effect. Therefore, in step S206, if the determined sharpness indication for an image is below a sharpness threshold then the image is discarded.
As an example, the sharpness indications may be sums of absolute values of image Laplacian estimates for the respective images. The image Laplacian is a good indicator of the presence of high frequencies in an image, and a blurry image usually has less high frequency energy. The Laplacian, L(I_i(x,y)), at a pixel position (x,y) of the image I_i is the 2nd derivative of the image at that pixel position and is given by the equation: L(I_i(x,y)) = ∂²I_i(x,y)/∂x² + ∂²I_i(x,y)/∂y², where I_i(x,y) is the image pixel value at the location (x,y) and L is the Laplacian operator.
Computing the Laplacian is a simpler operation than computing the magnitude of the gradients. The second derivatives (which are calculated for the Laplacian) are more sensitive to noise than the magnitude of the gradients, so in some examples the magnitude of the gradients may be used to determine the sharpness indications, but in the examples described in detail herein the Laplacian is used due to its simplicity and an assumption can be made that the noise will be approximately the same for each image. For example, the Laplacian may be estimated by filtering the image with a suitable filter.
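For example, a filter-based Laplacian estimate might look like the following sketch (the 3x3 five-point kernel and the replicate-edge border handling are illustrative choices, not specified by the patent):

```python
import numpy as np

# Five-point discrete Laplacian kernel: an estimate of d2/dx2 + d2/dy2.
LAPLACIAN_KERNEL = np.array([[0.0,  1.0, 0.0],
                             [1.0, -4.0, 1.0],
                             [0.0,  1.0, 0.0]])

def laplacian(image):
    """Estimate the image Laplacian by filtering with a 3x3 kernel.

    Borders are handled by replicating edge pixels; a minimal sketch
    rather than an optimised implementation.
    """
    padded = np.pad(image, 1, mode="edge")
    out = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for dy in range(3):
        for dx in range(3):
            k = LAPLACIAN_KERNEL[dy, dx]
            if k != 0.0:
                out += k * padded[dy:dy + h, dx:dx + w]
    return out
```

On a linear intensity ramp the estimate is zero away from the borders, since the second derivative of a linear function vanishes.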
The sharpness indication for an image, i, is denoted φ_i, and is the sum of the absolute values of the image Laplacian over all of the pixel positions of the image, such that: φ_i = Σ_{x,y} |L(I_i(x,y))|. The sharpness indication of an image is a measure of the sharpness (or conversely the blurriness) of the image. The sharpness threshold may be determined using the mean, μ(φ), and standard deviation, σ(φ), of the set of sharpness indications, φ, for the set of images, where φ = {φ_1, ..., φ_N} for a set of N images. For example, the threshold may be set at μ(φ) - ς·σ(φ) (where as an example ς may be in the range 1.1 < ς < 1.4), wherein an image is discarded if its sharpness indication is below this threshold. That is, the image, i, is discarded in step S206 if: φ_i < μ(φ) - ς·σ(φ). As an example, Figure 3 shows a graph of sharpness indications 302_i for a set of ten images (i = 0..9). In this example, the sharpness threshold is shown by the dashed line 304. Images 6 and 7 have sharpness indications 302_6 and 302_7 which are below the sharpness threshold 304. Therefore images 6 and 7 are discarded in step S206 because they are determined to be too blurry. The sharpness indications 302 of the other images are above the sharpness threshold 304 and as such those other images are not discarded in step S206. It is noted that in some other examples step S206 might not be performed. That is, in some examples, images are not discarded based on their sharpness. This may help to simplify the process, but may result in more blurriness appearing in the final image.
In step S208, based on the sharpness indications 302, the selection logic 102 selects the sharpest image from the set of images to be the reference image. Therefore, in the example shown in Figure 3, image 5 is selected to be the reference image because its sharpness indication 302_5 is higher than the sharpness indications 302 of the other images in the set of ten images. Selecting the sharpest image as the reference image is beneficial to the rest of the method described below. For example, it is easier to determine alignment to a sharp image than to determine alignment to a blurry image. In other examples, a reference image could be selected using different criteria, e.g. a combination of different criteria. For example, a reference image could be selected based on the content of the images, e.g. the image from the set of images in which the greatest number of people are smiling or in which the greatest number of people have their eyes open and/or are looking at the camera may be selected as the reference image. In general, the "best" image may be selected as the reference image, but the criteria which determine which image is considered to be the best may be different in different examples.
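The discard rule of step S206 and the reference selection of step S208 might be sketched as follows; the function name and the exact scaling value used for the threshold are assumptions (the text only says the scaling factor lies roughly between 1.1 and 1.4):

```python
import numpy as np

def select_sharp_and_reference(sharpness, s=1.2):
    """Sketch of steps S206/S208.

    Discards images whose sharpness indication falls below
    mean(sharpness) - s * std(sharpness), and picks the sharpest
    remaining image as the reference. Returns (kept_indices, ref_index).
    """
    phi = np.asarray(sharpness, dtype=float)
    threshold = phi.mean() - s * phi.std()
    kept = [i for i, v in enumerate(phi) if v >= threshold]
    reference = int(np.argmax(phi))  # sharpest image becomes the reference
    return kept, reference
```

With ten sharpness values in which images 6 and 7 are markedly lower (mirroring the Figure 3 scenario), those two images fall below the threshold and the highest-valued image is chosen as the reference.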
The images which have not been discarded (e.g. images 0 to 5 and 8 and 9 in the example shown in Figure 3) are passed from the selection logic 102 to the alignment logic 104. In steps S210 to S214 the alignment logic 104 determines and applies a respective transformation to each of the images (other than the reference image and the images discarded by the selection logic 102) to bring them closer to alignment with the reference image. In the examples described below, the transformation for an image I_i is represented as a homography, H_i. The homography H_i is a matrix which is determined with the aim of satisfying the equation: x_i = H_i x_r, where x_i is a set of points of the image I_i and x_r is a corresponding set of points of the reference image I_r. So in order to determine the parameters of the transformation (i.e. the components of the homography matrix H_i) point correspondences are first determined, i.e. it is determined which points of the image I_i correspond to at least some of the set of points x_r of the reference image I_r. The set of points x_r of the reference image I_r is a predetermined set of points, and may for example comprise points of a uniform lattice.
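Once point correspondences between the two point sets are available, the homography parameters can be estimated by minimising an algebraic error over the correspondences. A minimal direct linear transform sketch (illustrative only; it omits the coordinate normalisation and robust fitting a production implementation would typically add):

```python
import numpy as np

def estimate_homography(pts_ref, pts_img):
    """Estimate H such that x_img ~ H x_ref from >= 4 correspondences.

    Stacks the standard two DLT rows per correspondence and takes the
    right singular vector of the smallest singular value as the
    least-squares solution; a minimal sketch.
    """
    A = []
    for (xr, yr), (xi, yi) in zip(pts_ref, pts_img):
        A.append([xr, yr, 1, 0, 0, 0, -xi * xr, -xi * yr, -xi])
        A.append([0, 0, 0, xr, yr, 1, -yi * xr, -yi * yr, -yi])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the free scale of the homography
```

For correspondences related by a pure translation, the recovered matrix is the identity with the translation in its last column.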
Therefore, in step S210, the point correspondence logic 110 determines, for each of the images to which a transformation is to be applied, a set of points x_i which correspond to the predetermined set of points x_r of the reference image I_r. In the example described herein, the set of points x_i is determined using the Lucas Kanade Inverse (LKI) algorithm. Furthermore, the LKI algorithm is initialized using the results of a multiple kernel tracking (MKT) technique.
Details of step S210 are shown in the flow chart of Figure 4. In particular, step S210 includes steps S402, S404 and S406. In step S402 the MKT logic 116 implements a MKT technique to determine positions of a set of candidate regions of the image I_i based on a similarity between a set of target regions of the reference image I_r and the set of candidate regions of the image I_i. Figure 5 represents an image I_i, denoted 502. The positions of the predetermined set of points of the reference image create a uniform lattice over at least part of the image 502, and Figure 5 shows these points (one of which is denoted with reference numeral 504). In this example the lattice is a 5x7 lattice of points 504 but in other examples a different arrangement of predetermined points may be used, e.g. a 10x10 lattice. The circles 506 shown in Figure 5 represent the candidate regions for which the positions are determined by the MKT logic 116 in step S402. The squares 508 shown in Figure 5 represent candidate regions used by the LKI algorithm as described below.
In the MKT technique, the candidate regions 506 are compared to target regions of the reference image I_r. The circles 506 in Figure 5 are merely illustrative, and the regions could have any suitable shape, e.g. the target regions may be blocks of 31x31 pixels of the reference image, positioned over (e.g. centred on) the positions of the points 504 from the predetermined set of points of the reference image I_r.
Multiple kernel tracking techniques are known in the art, for example as described in "Multiple kernel tracking with SSD" by Hager, Dewan and Stewart, IEEE Conference on Computer Vision and Pattern Recognition, 2004. As such, for conciseness, an in-depth explanation of a multiple kernel tracking technique is not provided herein. However, as a higher-level explanation, a MKT technique represents each of the target regions of the reference image I_r with a kernel-weighted histogram q, e.g. of the pixel intensity values contained in the target region. The histogram q comprises a plurality of histogram bins, i.e. q = (q_1, q_2, ..., q_m)^T, where m is the number of bins in the histogram. The bins of the histogram are weighted with a kernel function centred at position c in the reference image I_r which corresponds to the position of one of the predetermined set of points 504. In the same way, for a candidate region 506 of the image I_i, a kernel-weighted histogram p(c') is determined with the kernel function centred at position c' in the image I_i. It is assumed that the position c' is close to the position c, and the difference between c and c' can be expressed as Δc = c' - c. A similarity function between the two histograms q(c) and p(c') can be used to find a value for Δc which provides an improved correspondence between the target region of the reference image I_r and the candidate region 506 of the image I_i. This method can be iterated until the value of Δc falls below a threshold or until a maximum number of iterations have been performed. This idea can be expanded to multiple kernels such that a transformation ΔC can be found which provides a good correspondence for tracking multiple target regions of the reference image I_r to the candidate regions 506 of the image I_i. With single kernel tracking, Δc can be found as a translation, i.e. Δc = (Δc_x, Δc_y); but with multiple kernel tracking, ΔC can be found as a more complex transformation, e.g. an affine transformation which includes rotation (θ) and/or scaling (λ) functions, i.e. ΔC = (Δc_x, Δc_y, θ, λ). Therefore, in summary, the MKT logic 116 implements the MKT technique by iteratively optimizing the similarity between feature histograms (e.g. intensity histograms) of the set of target regions of the reference image and corresponding feature histograms of the set of candidate regions by iteratively varying the positions of the candidate regions.
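As an illustration, a kernel-weighted intensity histogram for one region might be computed as follows; the Epanechnikov-style kernel profile, the region radius and the bin count are illustrative assumptions, not values from the patent:

```python
import numpy as np

def kernel_histogram(image, center, radius=15, bins=16):
    """Kernel-weighted intensity histogram of a circular region.

    A kernel centred at `center` (given as (row, col)) down-weights pixels
    far from the region centre, so small position errors change the
    histogram smoothly. A minimal sketch for 8-bit intensity values.
    """
    cy, cx = center
    h = np.zeros(bins)
    for y in range(max(0, cy - radius), min(image.shape[0], cy + radius + 1)):
        for x in range(max(0, cx - radius), min(image.shape[1], cx + radius + 1)):
            r2 = ((y - cy) ** 2 + (x - cx) ** 2) / float(radius ** 2)
            if r2 < 1.0:                 # inside the kernel support
                w = 1.0 - r2             # Epanechnikov-style profile
                b = int(image[y, x]) * bins // 256
                h[b] += w
    s = h.sum()
    return h / s if s > 0 else h
```

A flat (constant-colour) region produces a histogram with a single non-zero bin, which is what the flatness test described below exploits.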
Some of the candidate regions 506 of the image I_i may be rejected if they are determined to be too flat for the LKI algorithm to work with. The LKI algorithm relies on gradients in order to converge to a solution, so if image regions are flat the LKI algorithm does not always provide good results. The MKT technique can provide a simple way of determining whether a region is flat, such that a point in a flat region can be rejected and not used by the LKI logic 118. For example, a counter (or "weight") for a region can be used as an indication as to whether the region is flat. Reading the histogram from left to right, if a bin is not zero its weight is incremented. If, in addition, the bin to the left of the current bin is zero then the weight of the current bin is incremented by another 1. If the sum of all the weights is greater than 3 then the region is used for tracking in the LKI algorithm. Otherwise the region is discarded because it is determined to be flat. If the weight is lower than 4 it means that the patch has constant colour, so it has a high probability of being a flat region. The reasoning for this is that if a region of the image has constant colour (i.e. it is a flat region), this leads to a histogram with a single non-zero bin, because all the pixels have the same value. A flat region can be altered by noise and the quantization of its values (when generating the histogram), which can lead to histograms with two consecutive non-zero bins for flat regions. For a region to be considered non-flat, its histogram should have at least two non-consecutive non-zero bins (so the colours in the region are more different than colours altered by noise) or three consecutive non-zero bins.
The algorithm of this methodology can be seen below:

weight = 0
for each bin 'i' in the histogram
    if bin(i) != 0 then
        weight = weight + 1
        if bin(i-1) == 0 then
            weight = weight + 1
end for
if weight > 3 then use the point to track

Figure 5 shows the regions which are not determined to be too flat as squares, e.g. square 508. Some of the regions which are determined by the MKT logic 116 are not provided to the LKI logic 118 because they are too flat, and as such Figure 5 shows that some regions (e.g. region 506B) do not have an associated square, meaning that they are not used by the LKI logic 118.
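The flat-region test above can be sketched in code as follows. This is an illustrative implementation only: the number of histogram bins and the value range are assumptions, as the text does not specify them.

```python
import numpy as np

def is_flat_region(patch, n_bins=16):
    """Decide whether an image patch is too flat to track, following the
    histogram-weight heuristic described above. The bin count (16) and the
    pixel value range [0, 1) are assumptions for illustration."""
    hist, _ = np.histogram(patch, bins=n_bins, range=(0.0, 1.0))
    weight = 0
    for i in range(n_bins):
        if hist[i] != 0:
            weight += 1
            # Extra increment when the bin to the left is empty (the first
            # bin has no left neighbour, which is treated as empty).
            if i == 0 or hist[i - 1] == 0:
                weight += 1
    # weight > 3 means at least two non-consecutive non-zero bins, or three
    # consecutive non-zero bins, so the region is usable for tracking.
    return weight <= 3
```

A constant-colour patch yields a single non-zero bin (weight 2, flat), while a patch with two well-separated colours yields two non-consecutive non-zero bins (weight 4, non-flat).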
In step S404 the LKI logic 118 uses the positions of at least some of the set of candidate regions determined by the MKT technique to initialize the LKI algorithm. The LKI algorithm is known in the art, for example as described in "Lucas-Kanade 20 Years On: A Unifying Framework" by Simon Baker and Iain Matthews, International Journal of Computer Vision, 2004, pp. 221-255.
In step S406 the LKI logic 118 uses the LKI algorithm to determine a set of points of the image Ii which correspond to at least some of the points of the predetermined set of points of the reference image Ir. Since the LKI algorithm is known in the art, for conciseness, an in-depth explanation of the LKI algorithm is not provided herein. However, as a higher-level explanation, the LKI algorithm aims to minimise the sum of squared error between two image patches: the first patch being a target region of the reference image Ir and the second patch being a candidate region of the image Ii which is warped back onto the coordinates of the reference image. The sum of squared error between the two image patches is minimised by varying the warping parameter p (i.e. changing p to p + Δp) to find a value for Δp which minimises the sum of squared error. According to the LKI algorithm this is done iteratively until the value of Δp is below a threshold or until a maximum number of iterations have been performed. The final value of the warping parameter p after the LKI algorithm has been performed is used to determine the positions of a set of points in the image Ii which correspond to at least some of the predetermined set of points of the reference image Ir. A problem which the LKI algorithm can sometimes encounter is related to the image gradients in the reference image Ir. When a gradient in one direction dominates the gradient in the perpendicular direction (e.g. when the gradient on the x axis ∇xI dominates the gradient on the y axis ∇yI, or vice-versa), the results of the LKI algorithm may be erroneous. However, as can be seen in the more detailed description of the LKI algorithm provided below, for each of the points of the set of points of the image Ii which are not determined to be too flat, the LKI algorithm includes determining a warped version of an image patch surrounding the point, and determining a Hessian matrix for the image patch.
The elements of the Hessian matrix indicate sums of squared values of the gradients in different directions across the warped version of the image patch. The problem of gradients in one direction dominating gradients in another direction can be addressed by comparing the sum of the squared values of the gradients on the x and y axes. If the sum of the squared values of the gradients for a region in one direction is at least 20 times bigger than in the perpendicular direction then the region is discarded. By discarding a region in this way, the LKI logic 118 will not output a point correspondence for the discarded region. It is noted that this comparison does not significantly add to the computation performed by the point correspondence logic 110 because the sums of the squared values of the gradients can be extracted from the Hessian matrix (which is computed as part of the LKI algorithm). The Hessian is referred to as "ill-conditioned" when the ratio between the two gradients is large (e.g. ≥ 20).
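The ill-conditioned Hessian test can be sketched as below. For a translational warp, the diagonal entries of the LK Hessian are exactly the sums of squared x and y gradients, so the check is essentially free once the Hessian exists; the gradient operator used here (np.gradient) is an illustrative choice, not the patent's.

```python
import numpy as np

def is_ill_conditioned(patch, ratio_threshold=20.0):
    """Check whether gradients in one direction dominate the perpendicular
    direction, as described above. The sums of squared x/y gradients are
    the diagonal entries of the LK Hessian for a translational warp."""
    gy, gx = np.gradient(patch.astype(float))  # d/dy, d/dx
    sum_gx2 = np.sum(gx ** 2)  # Hessian diagonal entry for x translation
    sum_gy2 = np.sum(gy ** 2)  # Hessian diagonal entry for y translation
    big, small = max(sum_gx2, sum_gy2), min(sum_gx2, sum_gy2)
    if small == 0:
        # One direction has no gradient at all: certainly ill-conditioned.
        return True
    return big / small >= ratio_threshold
```

A patch containing only a vertical edge (gradient purely along x) would be discarded, whereas a diagonal ramp with balanced gradients would be kept.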
Figure 5 shows regions which have ill-conditioned Hessians with squares having dashed lines, such as region 508B. The point correspondences determined by the point correspondence logic 110 are provided to the transformation logic 112. In the example shown in Figure 5 there are 35 regions determined by the MKT logic 116 surrounding the respective 35 predetermined points 504. Seven of those regions are flat (and do not have corresponding squares shown in Figure 5) and as such the MKT logic 116 discards them. Of the remaining 28 regions, the LKI logic 118 determines that five of them have ill-conditioned Hessians (and have squares shown with dashed lines in Figure 5) and as such the LKI logic 118 discards them. Therefore the point correspondences are determined for the remaining 23 regions (i.e. those regions shown with solid line squares in Figure 5) and the point correspondences for these regions are provided to the transformation logic 112.
In step S212 the transformation logic 112 determines parameters of a transformation to be applied to the image Ii based on an error metric which is indicative of an error between a transformation of the set of points received from the point correspondence logic 110 and the corresponding points of the predetermined set of points of the reference image Ir.
For example, the transformation for image Ii is a homography which is described by a matrix Hi which can be used to more closely align the pixel positions of the image Ii with the corresponding pixel positions of the reference image Ir. As an example, the homography may be restricted to be a 2D projective transformation. This provides a good trade-off between flexibility and simplicity of the alignment estimation. The step of determining parameters of the transformation may comprise determining the elements of the homography matrix, Hi, such that: xi = Hi xr, where xi is the set of points of the image Ii which correspond to the points xr of the reference image, as determined by the point correspondence logic 110.
Step S212 comprises optimizing the elements of the homography matrix, Hi, by computing the Minimum Mean Squared Error (MMSE) over the two sets of points, xi and xr. This comprises finding values for the elements of the matrix Hi which provide the minimum mean squared error for the set of points, e.g. by solving the equation:

( x_r^j  y_r^j  1   0     0     0   -x_r^j x_i^j   -y_r^j x_i^j ) (h1 h2 h3 h4 h5 h6 h7 h8)^T = ( x_i^j )
( 0      0      0   x_r^j y_r^j 1   -x_r^j y_i^j   -y_r^j y_i^j )                                ( y_i^j )

for j = 0 to N, where N is the number of points for which correspondences are determined. It is noted that N is at least four so that a solution can be found for Hi, and in the example described above with reference to Figure 5, N = 23. Usually, increasing N would increase the accuracy of the values determined for the matrix Hi.
To arrive at the equation above, it is noted that Hi = (h1 h2 h3; h4 h5 h6; h7 h8 1), x_r^j = (x_r^j, y_r^j) for the jth point of the reference image Ir, and x_i^j = (x_i^j, y_i^j) for the jth point of the image Ii. In other examples, other error metrics (other than the MMSE) may be used to find a solution for the matrix Hi. In step S214 the transformation logic 112 applies the transformation to the image Ii to bring it closer to alignment with the reference image Ir. The alignment logic 104 performs steps S210 to S214 for each of the images that are received from the selection logic 102 except for the reference image (there is no need to transform the reference image), such that a respective transformation is applied to each of the different images.
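The stacked linear system for the eight homography parameters can be solved in a least-squares sense as sketched below. This is an illustration only: the choice of numpy's lstsq as the MMSE solver is an assumption, and the patent does not prescribe a particular solver.

```python
import numpy as np

def estimate_homography(pts_ref, pts_img):
    """Least-squares estimate of h1..h8 from N >= 4 point correspondences
    (pts_ref in the reference image, pts_img the corresponding points in
    the image Ii), using the stacked row pattern described above."""
    pts_ref = np.asarray(pts_ref, dtype=float)
    pts_img = np.asarray(pts_img, dtype=float)
    n = len(pts_ref)
    A = np.zeros((2 * n, 8))
    b = np.zeros(2 * n)
    for j, ((xr, yr), (xi, yi)) in enumerate(zip(pts_ref, pts_img)):
        A[2 * j] = [xr, yr, 1, 0, 0, 0, -xr * xi, -yr * xi]
        A[2 * j + 1] = [0, 0, 0, xr, yr, 1, -xr * yi, -yr * yi]
        b[2 * j] = xi
        b[2 * j + 1] = yi
    h, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.append(h, 1.0).reshape(3, 3)  # bottom-right element fixed to 1

def apply_homography(H, pts):
    """Map 2D points through H using homogeneous coordinates."""
    pts = np.asarray(pts, dtype=float)
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]
```

For an exact correspondence (e.g. a pure translation), the recovered H maps the reference points onto the image points to numerical precision.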
Steps S210, S212 and S214 could be implemented as a stand-alone method for transforming a first image (e.g. an image Ii) to bring it closer to alignment with a second image (e.g. the reference image Ir). These steps are described herein in the context of part of the noise reduction method shown in Figure 2, but they could be used in other scenarios in which it would be useful to transform a first image such that it more closely aligns with a second image.
Even though the images have been transformed, there may still exist some misalignment between the images and the reference image. Misalignment between the images may be detrimental when the images are combined. Therefore if a transformed image is significantly misaligned with the reference image then that transformed image may be discarded by the alignment logic 104, as described below in steps S216 and S218.
In step S216 the alignment measuring logic 114 determines measures of alignment of the respective transformed images with the reference image. The transformed images are denoted Wi. As an example, the measure of alignment of a transformed image Wi is a misalignment parameter τi, which may for example be determined as the sum (over all of the pixel positions (x, y) of the image) of the absolute differences between the transformed image Wi(x, y) and the reference image Ir(x, y). That is:

τi = Σ_{x,y} |Wi(x, y) - Ir(x, y)|

In step S218 the alignment measuring logic 114 determines, for each of the transformed images, whether the respective measure of alignment indicates that the alignment of the transformed image Wi with the reference image Ir is below a threshold alignment level. In dependence thereon, the alignment measuring logic 114 selectively discards images which are determined to be misaligned. Images which are discarded are not provided from the alignment logic 104 to the combining logic 106. In the example in which the measure of alignment of an image Ii is a misalignment parameter τi, an image may be discarded if the misalignment parameter τi is above a threshold. As an example, the threshold may depend on the mean of the misalignment parameters, μ(τ), for the different images and on the standard deviation of the misalignment parameters, σ(τ), for the different images, where τ represents all of the misalignment parameters for the different images, i.e. τ = {τ1, ..., τN}, where N is the number of different images for which a misalignment parameter is determined. For example, the threshold may be μ(τ) + ε2σ(τ) (where as an example ε2 may be in the range 1.2 < ε2 < 1.5). A hugely misaligned image may adversely affect the threshold, so in another example, rather than using a threshold to discard misaligned images, a predetermined number of the best aligned images (i.e. those images with the lowest misalignment parameters τi) may be selected for use, and the other images may be discarded.
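The misalignment parameter and the statistical discard threshold can be sketched as follows (illustrative only; ε2 = 1.3 is one value within the suggested 1.2 to 1.5 range):

```python
import numpy as np

def misalignment(warped, reference):
    """Misalignment parameter tau_i: sum of absolute differences between a
    transformed image and the reference image over all pixel positions."""
    return float(np.sum(np.abs(warped.astype(float) - reference.astype(float))))

def select_aligned(warped_images, reference, eps2=1.3):
    """Keep only images whose tau_i does not exceed mu(tau) + eps2*sigma(tau),
    discarding the rest as too misaligned."""
    taus = np.array([misalignment(w, reference) for w in warped_images])
    threshold = taus.mean() + eps2 * taus.std()
    return [w for w, t in zip(warped_images, taus) if t <= threshold]
```

With three well-aligned images and one grossly misaligned one, only the outlier falls above the threshold and is dropped.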
As an example, Figure 6 shows a graph of the misalignment parameters 602 for a set of images. Images 6 and 7 were discarded by the selection logic 102 because they were too blurry, and so misalignment parameters are not calculated for those images. Image number 5 is the reference image and as such its misalignment parameter is zero. The dashed line 604 represents the misalignment threshold (e.g. set at μ(τ) + ε2σ(τ)). It can be seen in this example that the misalignment parameters for images 0 and 8 are above the misalignment threshold 604, whereas the misalignment parameters for images 1 to 4 and 9 are below the misalignment threshold 604.
In step S218 the alignment measuring logic 114 discards misaligned images, i.e. images for which the misalignment parameter is above the misalignment threshold. This corresponds to discarding images if their measures of alignment are below a threshold alignment level.
Therefore, in the example described above, in step S218 an image Ii with a misalignment parameter τi is discarded if τi > μ(τ) + ε2σ(τ). It is noted that in some other examples step S218 might not be performed. That is, in some examples, images are not discarded based on their alignment with the reference image. This may help to simplify the process, but may result in misalignment artefacts appearing in the final image.
Images which pass the alignment test are passed from the alignment logic 104 to the combining logic 106. Conversely, images which are discarded by the alignment logic 104 are not passed from the alignment logic 104 to the combining logic 106.
The combining logic 106 operates to combine the transformed images it receives from the alignment logic 104. In order to do this, in step S220 the combining logic 106 determines weights for the transformed images using the measures of alignment determined by the alignment measuring logic 114. Then in step S222 the combining logic 106 combines a plurality of images including the transformed images received from the alignment logic 104 using the determined weights to form a reduced noise image. The plurality of images which are combined in step S222 may or may not include the reference image. In preferred examples described herein the plurality of images which are combined in step S222 includes the reference image, which is the sharpest of the images. In other examples, e.g. if the reference image is selected differently, e.g. as the temporally middle image, then it may be beneficial to leave the reference image out of the group of images which are combined in step S222, e.g. if the reference image is particularly blurry. Selecting the temporally middle image as the reference image may sometimes be a suitable choice since it is likely that, on average, the images will be closer to alignment with the temporally middle image than to a different image.
Furthermore, selecting the temporally middle image as the reference image would avoid the processing needed to determine the sharpness of the images in order to select the reference image. In these examples, the other images are aligned to the reference image and then some of the aligned images (which might not include the reference image) are combined to form the reduced noise image in step S222.
As an example, the images may be combined using a bilateral filter with weights for each pixel of each image defined in dependence on the misalignment parameter of the image, τi, and the difference in pixel value between the pixel of the image and the corresponding pixel of the reference image. The resultant image is the accumulation of the transformed images after weighting each pixel with the appropriate weight. For example, the images may be ordered depending on their alignment with the reference image, e.g. by ordering the images using the misalignment parameters to form an ordered set of images. An index value, i, indicates the position of an image in the ordered set. A low index value, i, is given to a highly aligned image (i.e. an image with a low misalignment parameter, τi), whereas a higher index value, i, is given to a less aligned image (i.e. an image with a higher misalignment parameter, τi). For example, if there are N images, an index value of i = 1 is given to the best aligned image (i.e. the image with the lowest misalignment parameter, τi), and an index value of i = N is given to the worst aligned image (i.e. the image with the highest misalignment parameter, τi). For example, a transformed image Wi has red, green and blue pixel values at a pixel position (x, y), denoted respectively as Wi^R(x, y), Wi^G(x, y) and Wi^B(x, y). Similarly, the reference image Ir has red, green and blue pixel values at a pixel position (x, y), denoted respectively as Ir^R(x, y), Ir^G(x, y) and Ir^B(x, y).
As an example, the weight, ωi(x, y), for a pixel at position (x, y) of the transformed image Wi is determined according to the equation:

ωi(x, y) = (1 / Σ_j ωj(x, y)) · e^(-i² / (2στ²)) · e^(-di(x, y)² / (2σdiff²))

where di(x, y)² = (Wi^R(x, y) - Ir^R(x, y))² + (Wi^G(x, y) - Ir^G(x, y))² + (Wi^B(x, y) - Ir^B(x, y))², στ is the standard deviation used to define the zero-mean Gaussian of the misalignment weighting (this is a parameter that can be tuned, and as an example may be equal to 6), and σdiff is the standard deviation used to define the zero-mean Gaussian of the pixel difference (this is a parameter that can also be tuned, and as an example may be equal to 20). The factor of 1 / Σ_j ωj(x, y) is a normalization factor which means that for each pixel position (x, y) the weights of the different images sum to one. It can be appreciated that since the weights depend upon the alignment of the image with the reference image, the resulting combined pixel values are weighted in favour of images which are closely aligned with the reference image. This reduces artifacts which may occur due to misalignment between the images which are combined.
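The weighted combination can be sketched as below. This is an illustration under the assumptions that images are HxWx3 float arrays, that the input list is already ordered by alignment (best first, so list position maps to the index i starting at 1), and that normalization is applied once at the end.

```python
import numpy as np

def combine_images(warped_sorted, reference, sigma_tau=6.0, sigma_diff=20.0):
    """Combine alignment-ordered transformed images using per-pixel weights:
    a Gaussian falloff in the alignment index i, times a Gaussian falloff in
    the squared RGB difference from the reference, normalized so the weights
    at each pixel sum to one."""
    ref = reference.astype(float)
    acc = np.zeros_like(ref)
    wsum = np.zeros(ref.shape[:2])
    for idx, w in enumerate(warped_sorted, start=1):  # i = 1 is best aligned
        w = w.astype(float)
        d2 = np.sum((w - ref) ** 2, axis=2)  # squared RGB pixel difference
        weight = (np.exp(-idx ** 2 / (2 * sigma_tau ** 2))
                  * np.exp(-d2 / (2 * sigma_diff ** 2)))
        acc += weight[..., None] * w
        wsum += weight
    return acc / wsum[..., None]  # normalization: weights sum to one
```

When every input image equals the reference, the weighted average returns the reference unchanged, as expected from the normalization.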
The reduced noise image (which is denoted S' herein) is output from the combining logic 106. In some examples, this could be the end of the noise reduction process and the reduced noise image could be provided as the output of the processing module 100. However, in other examples, some motion correction may be applied to the reduced noise image before it is outputted from the processing module 100. Motion correction may be beneficial because when the captured scene has regions with motion then the combined image S' may contain artifacts due to the motion in the scene (and/or motion of the camera) between the times at which different ones of the combined images are captured.
As an example, the reduced noise image output from the combining logic 106 may be received by the motion correction logic 108, and in step S224 the motion correction logic 108 determines motion indications indicating levels of motion for areas of the reduced noise image, S'. In examples described herein, this is done by first determining a "background image", B, which has pixel values corresponding to an average (e.g. mean or median) of the corresponding pixel values of the transformed images Wi and optionally the reference image Ir, determined pixel by pixel. The background image, B, may be a downscaled version of the images. For example, the original images may comprise 1440x1080 pixels and the downscaled background image may comprise 256x192 pixels.
These numbers are just given by way of example. Downscaling processes are known in the art to convert images between different resolutions or aspect ratios, etc. Downscaling the background image reduces the number of pixels in the background image and therefore reduces the amount of computation that is performed on the background image, without significantly affecting the result of the motion correction.
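The background image computation can be sketched as below. The per-pixel median and the box-filter downscale are illustrative choices; the text allows any averaging and any standard downscaling method.

```python
import numpy as np

def background_image(images, factor=4):
    """Per-pixel median of the aligned images, followed by a simple box
    downscale by an integer factor (an assumption; any resampler works).
    Trailing rows/columns that do not fill a block are cropped."""
    stack = np.stack([im.astype(float) for im in images])
    bg = np.median(stack, axis=0)  # pixel-by-pixel average of the images
    h, w = bg.shape[:2]
    h2, w2 = h - h % factor, w - w % factor
    bg = bg[:h2, :w2]
    # Average factor x factor blocks (box-filter downscale).
    new_shape = (h2 // factor, factor, w2 // factor, factor) + bg.shape[2:]
    return bg.reshape(new_shape).mean(axis=(1, 3))
```

Downscaling by 4 in each dimension reduces the pixel count 16-fold, which is what makes the later motion-mask computation cheap.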
Figure 7a shows an example of a background image 702. There is some motion in the scene between the times at which the different images are captured and, as such, parts of the background image 702 are blurred.
A binary motion mask can then be determined which indicates for each pixel of the background image whether or not there is motion. For example, the binary value of the motion mask mask(x, y) at the pixel position (x, y) can be determined according to the equation:

mask(x, y) = |Ir^R(x, y) - B^R(x, y)| > λ  ∨  |Ir^G(x, y) - B^G(x, y)| > λ  ∨  |Ir^B(x, y) - B^B(x, y)| > λ

where B^R(x, y), B^G(x, y) and B^B(x, y) are the red, green and blue components of the background image at pixel position (x, y), λ is a threshold parameter which may for example be set to 8, and ∨ is an OR operator. So if any of the colour components of the background image differ from the corresponding colour components of the reference image by more than the threshold parameter then the mask(x, y) value is set to 1 to indicate that there is motion at the pixel position (x, y); otherwise the mask(x, y) value is set to 0 to indicate that there is not motion at the pixel position (x, y).
Figure 7b shows the motion mask 704 for the background image shown in Figure 7a. In Figure 7b a pixel is white if the motion mask at that position indicates that there is motion in the background image (e.g. if mask(x, y) = 1), and a pixel is black if the motion mask at that position indicates that there is not motion in the background image (e.g. if mask(x, y) = 0). It can be seen in Figure 7b that the binary motion mask includes a lot of small regions which appear to be indicative of motion, but when compared to the image 702 it can be seen that these small regions often do not relate to significant motion in the scene. Therefore, the binary motion mask may be cleaned using a set of morphological operations, e.g. consisting of two erosion operations followed by two dilation operations.
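The mask computation and morphological cleaning can be sketched as follows. The 4-neighbourhood structuring element is an assumption made to keep the example self-contained; a real implementation would typically use a library's morphology routines.

```python
import numpy as np

def motion_mask(reference, background, lam=8):
    """Binary motion mask: 1 where any colour channel of the background
    differs from the reference by more than the threshold lam."""
    diff = np.abs(reference.astype(float) - background.astype(float))
    return (diff > lam).any(axis=2).astype(np.uint8)

def clean_mask(mask, iters=2):
    """Two erosions followed by two dilations, removing small spurious
    motion regions. Minimal 4-neighbourhood morphology for illustration."""
    def erode(m):
        p = np.pad(m, 1, constant_values=0)
        return (p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2]
                & p[1:-1, 2:] & p[1:-1, 1:-1])
    def dilate(m):
        p = np.pad(m, 1, constant_values=0)
        return (p[:-2, 1:-1] | p[2:, 1:-1] | p[1:-1, :-2]
                | p[1:-1, 2:] | p[1:-1, 1:-1])
    for _ in range(iters):
        mask = erode(mask)
    for _ in range(iters):
        mask = dilate(mask)
    return mask
```

A single isolated "motion" pixel survives the raw mask but is erased by the erosion steps, matching the cleaning effect seen between Figures 7b and 7c.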
The cleaned motion mask 706 is shown in Figure 7c. It can be appreciated that the white areas in Figure 7c correspond closely to areas of motion in the image 30 702.
The motion mask 706 is smoothed in order to smooth transitions between black and white areas of the mask. In order to smooth the mask 706, the mask 706 may be convolved using a Gaussian filter. The resulting smoothed mask 708 is shown in Figure 7d. The smoothed mask 708 is not restricted to binary values and may include values between 0 and 1.
Then the smoothed motion mask 708 is upscaled to match the resolution of the original images (e.g. 1440x1080 pixels). Methods of upscaling are known in the art. In step S226, the motion correction logic 108 combines the reference image Ir(x, y) and the reduced noise image S'(x, y) using the upscaled smoothed motion mask (denoted MASK(x, y)) to form a motion-corrected reduced noise image S''(x, y). In this way, areas of the reduced noise image S' are mixed with corresponding areas of the reference image Ir based on the motion mask MASK(x, y), e.g. according to the equation:

S''(x, y) = Ir(x, y) * MASK(x, y) + S'(x, y) * (1 - MASK(x, y))

Furthermore, in some examples, a spatial bilateral filter may be applied to those regions which are taken from the reference picture. That is, the reference image, Ir(x, y), may be spatially filtered before using it to determine the motion-corrected reduced noise image S''(x, y) according to the equation given above.
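The blending equation can be sketched directly (assuming the mask has already been smoothed and upscaled to the image resolution, with values in [0, 1]):

```python
import numpy as np

def motion_corrected(reference, denoised, mask):
    """S'' = Ir * MASK + S' * (1 - MASK): take the reference image where the
    motion mask indicates motion, and the reduced noise image elsewhere.
    A 2D mask is broadcast over the colour channels of HxWx3 images."""
    m = mask.astype(float)
    if reference.ndim == 3 and m.ndim == 2:
        m = m[..., None]
    return reference.astype(float) * m + denoised.astype(float) * (1.0 - m)
```

Because the smoothed mask takes values between 0 and 1, transitions between motion and static areas blend gradually rather than switching abruptly.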
In step S228 the motion-corrected, reduced noise image S" is outputted from the processing module 100 as the result of the method. The image S" may subsequently be used for any suitable purpose, e.g. it may be stored in a memory or used by some other processing module or displayed on a display.
Figure 8 shows a reference image (lr) 802, a reduced noise image (S') 804 and a motion-corrected reduced noise image (S") 806 in one example. The amount of random noise in static regions of the images 804 and 806 (e.g. on the white wall of the background in the image) is less than the random noise in the corresponding region of the reference image 802. The image 804 exhibits some motion artifacts, for example the bin and the leg seem to blur together in image 804. These motion artifacts have been corrected in image 806.
The set of images in the examples described above may comprise a plurality of images captured in a burst mode. Alternatively the set of images may comprise a plurality of frames of a video sequence. When working with videos, the method may have a few variations. With a set of video frames, it is the most recent frame (i.e. the last frame) to which the denoising is applied, and the previous n frames are used to denoise the last frame. The number, n, can vary depending on the needs or capabilities of the hardware. In this case the last frame of the video sequence may be used as the reference image, and hence it is not necessary to select a reference image and discard blurry images. In addition the alignment step may be performed using a plurality of n previous frames and it is computed incrementally, such that aligning the frame n - 2 uses the output of the alignment of the frame n - 1, and so on. Since a video sequence may contain sudden scene changes (which may be referred to as "cuts"), it may be important to detect the scene changes after aligning the images. A cut detector may be implemented based on generating a 3D histogram of 8 bins for each channel (red, green and blue), giving a total of 512 bins. The histogram of a current frame (hist_i(r, g, b)) is compared with the histogram of the previous frame (hist_{i-1}(r, g, b)) and a cut is detected if the sum of the absolute differences of all bins divided by the number of pixels (N) is greater than a threshold, ε3, where as an example the threshold may be in the range 0.02 < ε3 < 0.1. That is, a cut may be detected when the following equation is satisfied:

(1/N) Σ_{r=1}^{8} Σ_{g=1}^{8} Σ_{b=1}^{8} |hist_i(r, g, b) - hist_{i-1}(r, g, b)| > ε3

In some examples, rather than determining the histograms hist_i(r, g, b) and hist_{i-1}(r, g, b) using the reference image Ir(x, y), the histograms may be determined using the previously computed background image (B) because this is a small (i.e. downscaled) image, e.g.
formed by computing the average of the aligned images and then downscaling.
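The histogram-based cut detector can be sketched as below, assuming 8-bit pixel values in [0, 256); ε3 = 0.05 is one value within the suggested range.

```python
import numpy as np

def rgb_histogram(image, bins=8):
    """3D colour histogram with 8 bins per channel (8*8*8 = 512 bins total),
    assuming pixel values in [0, 256)."""
    pix = image.reshape(-1, 3)
    idx = np.clip(pix // (256 // bins), 0, bins - 1).astype(int)
    hist = np.zeros((bins, bins, bins))
    np.add.at(hist, (idx[:, 0], idx[:, 1], idx[:, 2]), 1)
    return hist

def is_cut(frame, prev_frame, eps3=0.05):
    """Detect a scene change: sum of absolute histogram differences, divided
    by the number of pixels N, compared against the threshold eps3."""
    n = frame.shape[0] * frame.shape[1]
    diff = np.abs(rgb_histogram(frame) - rgb_histogram(prev_frame)).sum()
    return diff / n > eps3
```

An abrupt change from an all-dark to an all-bright frame moves every pixel to a different histogram bin, so the normalized difference far exceeds the threshold.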
The processing module 100 described above can be implemented in a computer system. The computing system could be implemented in a camera, smartphone, tablet or any other suitable computing device. For example, Figure 9 shows a computer system which comprises a GPU 902, a CPU 904 and a memory 906. The computer system also comprises other devices 908, such as a display 910, speakers 912, a camera 914 and a keypad 916. The components of the computer system can communicate with each other via a communications bus 918. The processing module 100 may be implemented on the GPU 902 as shown in Figure 9 in hardware or software or a combination thereof. For example, if the logic blocks (102, 104, 106 and 108) of the processing module 100 are implemented in hardware they may be formed as particular arrangements of transistors and other hardware components suited for performing the desired functions of the logic blocks as described herein. In contrast, if the logic blocks (102, 104, 106 and 108) of the processing module 100 are implemented in software they may comprise sets of computer instructions which can be stored in the memory 906 and can be provided to the GPU 902 for execution thereon. In other examples the processing module 100 could be implemented on the CPU 904. The set of images are received at the processing module 100, e.g. from the camera 914, and the processing module 100 outputs the motion-corrected reduced noise image, which may then, for example, be displayed on the display 910 and/or stored in the memory 906.
Generally, any of the functions, methods, techniques or components described above (e.g. the processing module 100 and its components) can be implemented in modules using software, firmware, hardware (e.g., fixed logic circuitry), or any combination of these implementations. The terms "module," "functionality," "component", "block", "unit" and "logic" are used herein to generally represent software, firmware, hardware, or any combination thereof.
In the case of a software implementation, the module, functionality, component, unit or logic represents program code that performs specified tasks when executed on a processor (e.g. one or more CPUs). In one example, the methods described may be performed by a computer configured with software in machine readable form stored on a computer-readable medium. One such configuration of a computer-readable medium is a signal bearing medium and thus is configured to transmit the instructions (e.g. as a carrier wave) to the computing device, such as via a network. The computer-readable medium may also be configured as a non-transitory computer-readable storage medium and thus is not a signal bearing medium. Examples of a computer-readable storage medium include a random-access memory (RAM), read-only memory (ROM), an optical disc, flash memory, hard disk memory, and other memory devices that may use magnetic, optical, and other techniques to store instructions or other data and that can be accessed by a machine.
The software may be in the form of a computer program comprising computer program code for configuring a computer to perform the constituent portions of described methods or in the form of a computer program comprising computer program code means adapted to perform all the steps of any of the methods described herein when the program is run on a computer and where the computer program may be embodied on a computer readable medium. The program code can be stored in one or more computer readable media. The features of the techniques described herein are platform-independent, meaning that the techniques may be implemented on a variety of computing platforms having a variety of processors.
Those skilled in the art will also realize that all, or a portion of the functionality, techniques or methods may be carried out by a dedicated circuit, an application-specific integrated circuit, a programmable logic array, a field-programmable gate array, or the like. For example, the module, functionality, component, unit or logic (e.g. the logic blocks of the processing module 100) may comprise hardware in the form of circuitry. Such circuitry may include transistors and/or other hardware elements available in a manufacturing process. Such transistors and/or other elements may be used to form circuitry or structures that implement and/or contain memory, such as registers, flip flops, or latches, logical operators, such as Boolean operations, mathematical operators, such as adders, multipliers, or shifters, and interconnects, by way of example. Such elements may be provided as custom circuits or standard cell libraries, macros, or at other levels of abstraction. Such elements may be interconnected in a specific arrangement. The module, functionality, component, unit or logic (e.g. the logic blocks of the processing module 100) may include circuitry that is fixed function and circuitry that can be programmed to perform a function or functions; such programming may be provided from a firmware or software update or control mechanism. In an example, hardware logic has circuitry that implements a fixed function operation, state machine or process.
It is also intended to encompass software which "describes" or defines the configuration of hardware that implements a module, functionality, component, unit or logic described above, such as HDL (hardware description language) software, as is used for designing integrated circuits, or for configuring programmable chips, to carry out desired functions. That is, there may be provided a computer readable storage medium having encoded thereon computer readable program code for generating a processing module configured to perform any of the methods described herein, or for generating a processing module comprising any apparatus described herein.
The term 'processor' and 'computer' are used herein to refer to any device, or portion thereof, with processing capability such that it can execute instructions, or a dedicated circuit capable of carrying out all or a portion of the functionality or methods, or any combination thereof.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims. It will be understood that the benefits and advantages described above may relate to one example or may relate to several examples.
Any range or value given herein may be extended or altered without losing the effect sought, as will be apparent to the skilled person. The steps of the methods described herein may be carried out in any suitable order, or simultaneously where appropriate. Aspects of any of the examples described above may be combined with aspects of any of the other examples described to form further examples without losing the effect sought.

Claims (52)

  1. Claims 1. A method of forming a reduced noise image using a set of images, the method comprising: applying respective transformations to at least some of the images of the set to bring them closer to alignment with a reference image from the set of images; determining measures of alignment of the respective transformed images with the reference image; determining weights for one or more of the transformed images using the determined measures of alignment; and combining a plurality of images including said one or more of the transformed images using the determined weights to form a reduced noise image.
  2. 2. The method of claim 1 wherein said plurality of images which are combined to form the reduced noise image further includes the reference image.
  3. The method of claim 1 wherein said plurality of images which are combined to form the reduced noise image does not include the reference image.
  4. The method of any preceding claim further comprising selecting one of the images of the set of images to be the reference image.
  5. The method of claim 4 wherein said selecting one of the images of the set of images to be the reference image comprises: determining sharpness indications for the images of the set of images; and based on the determined sharpness indications, selecting the sharpest image from the set of images to be the reference image.
  6. The method of claim 5 wherein if the determined sharpness indication for an image is below a sharpness threshold then the image is discarded such that it is not one of said at least some of the images to which respective transformations are applied.
  7. The method of claim 5 or 6 wherein the sharpness indications are sums of absolute values of image Laplacian estimates for the respective images.
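The sharpness indication in claims 5 to 7 can be sketched directly: estimate the image Laplacian, sum its absolute values, and pick the sharpest image as the reference, discarding images below a threshold. The 5-point finite-difference stencil and the function names are assumptions of this sketch; the patent only specifies "image Laplacian estimates".

```python
import numpy as np

def sharpness(image):
    """Sharpness indication: sum of absolute values of a Laplacian estimate.

    The Laplacian is estimated with the standard 5-point finite-difference
    stencil, evaluated on the image interior only.
    """
    img = image.astype(np.float64)
    lap = (img[:-2, 1:-1] + img[2:, 1:-1] + img[1:-1, :-2] + img[1:-1, 2:]
           - 4.0 * img[1:-1, 1:-1])
    return np.abs(lap).sum()

def select_reference(images, threshold=0.0):
    """Pick the sharpest image as the reference; drop images whose
    sharpness indication falls below `threshold` (per claim 6)."""
    scores = [sharpness(im) for im in images]
    kept = [im for im, s in zip(images, scores) if s >= threshold]
    ref = images[int(np.argmax(scores))]
    return ref, kept
```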
  8. The method of any preceding claim further comprising determining the transformations to apply to said at least some of the images, wherein for each of said at least some of the images the respective transformation is determined by: determining a set of points of the image which correspond to a predetermined set of points of the reference image; and determining parameters of the transformation for the image based on an error metric which is indicative of an error between a transformation of at least some of the determined set of points of the image and the corresponding points of the predetermined set of points of the reference image.
  9. The method of claim 8 wherein the predetermined set of points of the reference image are points of a uniform lattice.
  10. The method of claim 8 or 9 wherein the set of points of the image are determined using the Lucas Kanade Inverse algorithm.
  11. The method of claim 10 wherein the Lucas Kanade Inverse algorithm is initialized using the results of a multiple kernel tracking technique.
  12. The method of claim 11 wherein the multiple kernel tracking technique determines the positions of a set of candidate regions based on a similarity between a set of target regions and the set of candidate regions, wherein the target regions are respectively positioned over the positions of the predetermined set of points of the reference image, and wherein the determined positions of the set of candidate regions are used to initialize the Lucas Kanade Inverse algorithm.
  13. The method of any preceding claim further comprising, for each of the transformed images, determining whether the respective measure of alignment indicates that the alignment of the transformed image with the reference image is below a threshold alignment level, and in dependence thereon selectively including the transformed image as one of said one or more of the transformed images for which weights are determined.
  14. The method of any preceding claim further comprising applying motion correction to the reduced noise image.
  15. The method of claim 14 wherein said applying motion correction to the reduced noise image comprises: determining motion indications indicating levels of motion for areas of the reduced noise image; and mixing areas of the reduced noise image with corresponding areas of the reference image based on the motion indications to form a motion-corrected, reduced noise image.
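Claims 14 and 15 describe per-area motion correction: measure a motion indication for each area, then mix the reduced noise image back toward the reference where motion is detected. A minimal sketch, assuming the motion indication is the block-wise mean absolute difference and a hard switch rather than a smooth blend (both assumptions of this sketch, not from the patent):

```python
import numpy as np

def motion_correct(reduced, reference, block=2, threshold=1.0):
    """Mix areas of the reduced noise image with the reference where
    local motion is detected.

    Motion indication per block: mean absolute difference between the
    reduced noise image and the reference. Where it exceeds `threshold`,
    the reference pixels are used; elsewhere the reduced noise pixels
    are kept. A real implementation would blend smoothly rather than
    switch hard.
    """
    out = reduced.astype(np.float64).copy()
    h, w = reduced.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            r = reduced[y:y + block, x:x + block].astype(np.float64)
            f = reference[y:y + block, x:x + block].astype(np.float64)
            motion = np.abs(r - f).mean()
            if motion > threshold:
                out[y:y + block, x:x + block] = f
    return out
```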
  16. The method of any preceding claim wherein the set of images comprises either: (i) a plurality of images captured in a burst mode, or (ii) a plurality of frames of a video sequence.
  17. A processing module for forming a reduced noise image using a set of images, the processing module comprising: alignment logic configured to: apply respective transformations to at least some of the images of the set to bring them closer to alignment with a reference image from the set of images; and determine measures of alignment of the respective transformed images with the reference image; and combining logic configured to: determine weights for one or more of the transformed images using the determined measures of alignment; and combine a plurality of images including said one or more of the transformed images using the determined weights to form a reduced noise image.
  18. The processing module of claim 17 wherein said plurality of images which the combining logic is configured to combine to form the reduced noise image further includes the reference image.
  19. The processing module of claim 17 wherein said plurality of images which the combining logic is configured to combine to form the reduced noise image does not include the reference image.
  20. The processing module of any of claims 17 to 19 further comprising selection logic configured to select one of the images of the set of images to be the reference image.
  21. The processing module of claim 20 wherein the selection logic is configured to select one of the images of the set of images to be the reference image by: determining sharpness indications for the images of the set of images; and based on the determined sharpness indications, selecting the sharpest image from the set of images to be the reference image.
  22. The processing module of claim 21 wherein the selection logic is further configured to discard an image such that it is not provided to the alignment logic if the determined sharpness indication for the image is below a sharpness threshold.
  23. The processing module of claim 21 or 22 wherein the sharpness indications are sums of absolute values of image Laplacian estimates for the respective images.
  24. The processing module of any of claims 17 to 23 wherein the alignment logic is further configured to determine the transformations to apply to said at least some of the images by, for each of said at least some of the images: determining a set of points of the image which correspond to a predetermined set of points of the reference image; and determining parameters of the transformation for the image based on an error metric which is indicative of an error between a transformation of at least some of the determined set of points of the image and the corresponding points of the predetermined set of points of the reference image.
  25. The processing module of claim 24 wherein the predetermined set of points of the reference image are points of a uniform lattice.
  26. The processing module of claim 24 or 25 wherein the alignment logic is configured to determine the set of points of the image using the Lucas Kanade Inverse algorithm.
  27. The processing module of claim 26 wherein the alignment logic is configured to initialize the Lucas Kanade Inverse algorithm using the results of a multiple kernel tracking technique.
  28. The processing module of claim 27 wherein the alignment logic is configured to implement the multiple kernel tracking technique to determine the positions of a set of candidate regions based on a similarity between a set of target regions and the set of candidate regions, wherein the target regions are respectively positioned over the positions of the predetermined set of points of the reference image, and wherein the alignment logic is configured to use the determined positions of the set of candidate regions to initialize the Lucas Kanade Inverse algorithm.
  29. The processing module of any of claims 17 to 28 wherein the alignment logic is further configured to, for each of the transformed images, determine whether the respective measure of alignment indicates that the alignment of the transformed image with the reference image is below a threshold alignment level, and in dependence thereon selectively include the transformed image as one of said one or more of the transformed images to be passed to the combining logic for which weights are to be determined.
  30. The processing module of any of claims 17 to 29 further comprising motion correction logic configured to apply motion correction to the reduced noise image formed by the combining logic.
  31. The processing module of claim 30 wherein the motion correction logic is configured to apply motion correction to the reduced noise image by: determining motion indications indicating levels of motion for areas of the reduced noise image; and mixing areas of the reduced noise image with corresponding areas of the reference image based on the motion indications to form a motion-corrected, reduced noise image.
  32. The processing module of any of claims 17 to 31 wherein the set of images comprises either: (i) a plurality of images captured in a burst mode, or (ii) a plurality of frames of a video sequence.
  33. A method of transforming a first image to bring it closer to alignment with a second image, the method comprising: implementing a multiple kernel tracking technique to determine positions of a set of candidate regions of the first image based on a similarity between a set of target regions of the second image and the set of candidate regions of the first image, wherein the target regions of the second image are respectively positioned over the positions of a predetermined set of points of the second image; using at least some of the determined positions of the set of candidate regions to initialize a Lucas Kanade Inverse algorithm; using the Lucas Kanade Inverse algorithm to determine a set of points of the first image which correspond to at least some of the predetermined set of points of the second image; determining parameters of a transformation to be applied to the first image based on an error metric which is indicative of an error between a transformation of at least some of the determined set of points of the first image and the corresponding points of the predetermined set of points of the second image; and applying the transformation to the first image to bring it closer to alignment with the second image.
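The alignment core of claim 33 is the Lucas Kanade Inverse algorithm. As an illustration only, here is a toy inverse-compositional Lucas-Kanade estimator restricted to a pure translation with integer-precision warping; the patent's transformation is more general and operates on per-point patches rather than whole images. The circular-shift warp, the convergence tolerance, and all names are assumptions of this sketch.

```python
import numpy as np

def lk_inverse_translation(template, image, iters=50):
    """Estimate (dx, dy) such that image(x + dx, y + dy) ~ template(x, y).

    Inverse-compositional Lucas-Kanade: the template gradient and the
    Hessian are precomputed once outside the loop, which is what makes
    the inverse formulation cheap. Warping here is a circular integer
    shift, so shifts are recovered only approximately.
    """
    gy, gx = np.gradient(template.astype(np.float64))
    # 2x2 Hessian for a translation warp (steepest-descent images are gx, gy).
    H = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    Hinv = np.linalg.inv(H)
    dx, dy = 0.0, 0.0
    for _ in range(iters):
        sx, sy = int(round(dx)), int(round(dy))
        warped = np.roll(np.roll(image, -sy, axis=0), -sx, axis=1)
        err = warped.astype(np.float64) - template.astype(np.float64)
        b = np.array([np.sum(gx * err), np.sum(gy * err)])
        step = Hinv @ b
        # Inverse-compositional update: compose the current warp with the
        # inverse of the increment; for a pure translation that is p -= step.
        dx -= step[0]
        dy -= step[1]
        if float(np.hypot(step[0], step[1])) < 1e-3:
            break
    return dx, dy
```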
  34. The method of claim 33 wherein said implementing a multiple kernel tracking technique comprises iteratively optimizing the similarity between feature histograms of the set of target regions and corresponding feature histograms of the set of candidate regions by iteratively varying the positions of the candidate regions.
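Claim 34's iterative histogram-similarity optimization can be illustrated with a deliberately simple stand-in: a greedy local search that moves one candidate region a pixel at a time toward the position whose intensity histogram is most similar (by Bhattacharyya coefficient) to the target region's histogram. A real multiple kernel tracking implementation uses a gradient-based, mean-shift-like update over many kernels at once; the greedy move, the intensity histogram, and all names here are assumptions of this sketch.

```python
import numpy as np

def bhattacharyya(h1, h2):
    """Similarity between two normalised histograms."""
    return float(np.sum(np.sqrt(h1 * h2)))

def track_region(target_hist, image, x0, y0, size=8, bins=8, iters=10):
    """Greedily shift a candidate region toward the position whose
    histogram best matches `target_hist` (a toy stand-in for MKT)."""
    def hist_at(x, y):
        patch = image[y:y + size, x:x + size]
        h, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
        return h / max(h.sum(), 1)

    x, y = x0, y0
    for _ in range(iters):
        best = (bhattacharyya(target_hist, hist_at(x, y)), x, y)
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx <= image.shape[1] - size and 0 <= ny <= image.shape[0] - size:
                s = bhattacharyya(target_hist, hist_at(nx, ny))
                if s > best[0]:
                    best = (s, nx, ny)
        if (best[1], best[2]) == (x, y):
            break  # no neighbour improves the similarity
        x, y = best[1], best[2]
    return x, y
```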
  35. The method of claim 34 wherein said using at least some of the determined positions of the set of candidate regions to initialize a Lucas Kanade Inverse algorithm comprises discarding a candidate region if the feature histogram of the candidate region indicates that the candidate region is flat, wherein a discarded candidate region is not used to initialize the Lucas Kanade Inverse algorithm.
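Claim 35's flatness test can be sketched as checking whether a region's histogram is concentrated in a single bin, since a flat region carries no structure for alignment. The bin count and the concentration threshold are illustrative parameter choices, not values from the patent.

```python
import numpy as np

def is_flat(region, bins=8, concentration=0.9):
    """Flag a candidate region as flat when its intensity histogram is
    concentrated in one bin. Flat regions are discarded and not used to
    initialise the Lucas Kanade Inverse algorithm."""
    hist, _ = np.histogram(region, bins=bins, range=(0.0, 1.0))
    hist = hist / max(hist.sum(), 1)
    return bool(hist.max() >= concentration)
```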
  36. The method of any of claims 33 to 35 wherein said using the Lucas Kanade Inverse algorithm to determine a set of points of the first image which correspond to at least some of the predetermined set of points of the second image comprises, for each of the points of the set of points of the first image: determining a warped version of an image patch surrounding the point; and determining a Hessian matrix for the warped image patch which indicates a first sum of squared values of the gradients in the warped image in a first direction and a second sum of squared values of the gradients in the warped image in a second direction which is perpendicular to the first direction, wherein the point is discarded if the ratio between the first and second sums of squared values of the gradients is greater than a threshold value or if the ratio between the second and first sums of squared values of the gradients is greater than the threshold value, wherein a discarded point is not used to determine the parameters of the transformation to be applied to the first image.
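The point-rejection test of claim 36 compares the two diagonal entries of the Hessian, i.e. the sums of squared gradients in two perpendicular directions: a large ratio means the patch has structure in only one direction (the aperture problem), so the point cannot be tracked reliably. A small sketch, with the threshold value and function name as assumptions:

```python
import numpy as np

def keep_point(patch, ratio_threshold=10.0):
    """Decide whether a (warped) image patch around a tracked point is
    well-conditioned for Lucas-Kanade alignment.

    The point is discarded when the ratio between the sums of squared
    gradients in the two perpendicular directions (either way round)
    exceeds the threshold, as in claim 36.
    """
    p = patch.astype(np.float64)
    gy, gx = np.gradient(p)
    sxx = np.sum(gx * gx)  # first diagonal entry of the Hessian
    syy = np.sum(gy * gy)  # second diagonal entry
    eps = 1e-12            # guard against division by zero on flat patches
    ratio = max(sxx, syy) / (min(sxx, syy) + eps)
    return bool(ratio <= ratio_threshold)
```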
  37. The method of any of claims 33 to 36 wherein the predetermined set of points of the second image are points of a uniform lattice.
  38. The method of any of claims 33 to 37 wherein the first and second images are from a set of images, and wherein the method further comprises combining the transformed first image with the second image to form a reduced noise image.
  39. The method of claim 38 wherein the second image is a reference image of the set of images.
  40. The method of claim 38 or 39 wherein the set of images comprises either: (i) a plurality of images captured in a burst mode, or (ii) a plurality of frames of a video sequence.
  41. A processing module for transforming a first image to bring it closer to alignment with a second image, the processing module comprising alignment logic which comprises: multiple kernel tracking logic configured to implement a multiple kernel tracking technique to determine positions of a set of candidate regions of the first image based on a similarity between a set of target regions of the second image and the set of candidate regions of the first image, wherein the target regions of the second image are respectively positioned over the positions of a predetermined set of points of the second image; Lucas Kanade Inverse logic configured to use a Lucas Kanade Inverse algorithm to determine a set of points of the first image which correspond to at least some of the predetermined set of points of the second image, wherein the positions of at least some of the set of candidate regions determined by the multiple kernel tracking logic are used to initialize the Lucas Kanade Inverse algorithm; and transformation logic configured to: (i) determine parameters of a transformation to be applied to the first image based on an error metric which is indicative of an error between a transformation of at least some of the determined set of points of the first image and the corresponding points of the predetermined set of points of the second image, and (ii) apply the transformation to the first image to bring it closer to alignment with the second image.
  42. The processing module of claim 41 wherein the multiple kernel tracking logic is configured to implement the multiple kernel tracking technique by iteratively optimizing the similarity between feature histograms of the set of target regions and corresponding feature histograms of the set of candidate regions by iteratively varying the positions of the candidate regions.
  43. The processing module of claim 42 wherein the multiple kernel tracking logic is configured to discard a candidate region if the feature histogram of the candidate region indicates that the candidate region is flat, wherein the processing module is configured such that the Lucas Kanade Inverse logic does not use a discarded candidate region to initialize the Lucas Kanade Inverse algorithm.
  44. The processing module of any of claims 41 to 43 wherein the Lucas Kanade Inverse logic is configured to use the Lucas Kanade Inverse algorithm to determine the set of points of the first image which correspond to at least some of the predetermined set of points of the second image by, for each of the points of the set of points of the first image: determining a warped version of an image patch surrounding the point; and determining a Hessian matrix for the image patch which indicates a first sum of squared values of the gradients in the warped image in a first direction and a second sum of squared values of the gradients in the warped image in a second direction which is perpendicular to the first direction, wherein the Lucas Kanade Inverse logic is configured to discard the point if the ratio between the first and second sums of squared values of the gradients is greater than a threshold value or if the ratio between the second and first sums of squared values of the gradients is greater than the threshold value, wherein the Lucas Kanade Inverse logic is further configured to not use a discarded point to determine the parameters of the transformation to be applied to the first image.
  45. The processing module of any of claims 41 to 44 wherein the predetermined set of points of the second image are points of a uniform lattice.
  46. The processing module of any of claims 41 to 45 wherein the first and second images are from a set of images, and wherein the processing module further comprises combining logic configured to combine the transformed first image with the second image to form a reduced noise image.
  47. The processing module of claim 46 wherein the second image is a reference image of the set of images.
  48. The processing module of claim 46 or 47 wherein the set of images comprises either: (i) a plurality of images captured in a burst mode, or (ii) a plurality of frames of a video sequence.
  49. Computer readable code adapted to perform the steps of the method of any of claims 1 to 16 or 33 to 40 when the code is run on a computer.
  50. A computer readable storage medium having encoded thereon the computer readable code of claim 49.
  51. Computer readable code for generating a processing module according to any of claims 17 to 32 or 41 to 48.
  52. A computer readable storage medium having encoded thereon computer readable code for generating a processing module according to any of claims 17 to 32 or 41 to 48.

Amendments to the claims have been made as follows

Claims

  1. A method of forming a reduced noise image using a set of images, the method comprising: applying respective transformations to at least some of the images of the set to bring them closer to alignment with a reference image from the set of images; determining, for each of the transformed images, a respective measure of alignment of that transformed image with the reference image; determining weights for one or more of the transformed images using the determined measures of alignment; and combining a plurality of images including said one or more of the transformed images using the determined weights to form a reduced noise image.

  2. The method of claim 1 wherein the measure of alignment for a transformed image is a misalignment parameter τi determined as the sum, over all of the pixel positions (x, y) of the transformed image, of the absolute differences between the transformed image Wi(x, y) and the reference image Ir(x, y).

  3. The method of claim 1 wherein said plurality of images which are combined to form the reduced noise image further includes the reference image.

  4. The method of claim 1 wherein said plurality of images which are combined to form the reduced noise image does not include the reference image.

  5. The method of any preceding claim further comprising selecting one of the images of the set of images to be the reference image.

  6. The method of claim 5 wherein said selecting one of the images of the set of images to be the reference image comprises: determining sharpness indications for the images of the set of images; and based on the determined sharpness indications, selecting the sharpest image from the set of images to be the reference image.

  7. The method of claim 6 wherein if the determined sharpness indication for an image is below a sharpness threshold then the image is discarded such that it is not one of said at least some of the images to which respective transformations are applied.

  8. The method of claim 6 or 7 wherein the sharpness indications are sums of absolute values of image Laplacian estimates for the respective images.

  9. The method of any preceding claim further comprising determining the transformations to apply to said at least some of the images, wherein for each of said at least some of the images the respective transformation is determined by: determining a set of points of the image which correspond to a predetermined set of points of the reference image; and determining parameters of the transformation for the image based on an error metric which is indicative of an error between a transformation of at least some of the determined set of points of the image and the corresponding points of the predetermined set of points of the reference image.

  10. The method of claim 9 wherein the predetermined set of points of the reference image are points of a uniform lattice.

  11. The method of claim 9 or 10 wherein the set of points of the image are determined using the Lucas Kanade Inverse algorithm.

  12. The method of claim 11 wherein the Lucas Kanade Inverse algorithm is initialized using the results of a multiple kernel tracking technique.

  13. The method of claim 12 wherein the multiple kernel tracking technique determines the positions of a set of candidate regions based on a similarity between a set of target regions and the set of candidate regions, wherein the target regions are respectively positioned over the positions of the predetermined set of points of the reference image, and wherein the determined positions of the set of candidate regions are used to initialize the Lucas Kanade Inverse algorithm.

  14. The method of any preceding claim further comprising, for each of the transformed images, determining whether the respective measure of alignment indicates that the alignment of the transformed image with the reference image is below a threshold alignment level, and in dependence thereon selectively including the transformed image as one of said one or more of the transformed images for which weights are determined.

  15. The method of any preceding claim further comprising applying motion correction to the reduced noise image.

  16. The method of claim 15 wherein said applying motion correction to the reduced noise image comprises: determining motion indications indicating levels of motion for areas of the reduced noise image; and mixing areas of the reduced noise image with corresponding areas of the reference image based on the motion indications to form a motion-corrected, reduced noise image.

  17. The method of any preceding claim wherein the set of images comprises either: (i) a plurality of images captured in a burst mode, or (ii) a plurality of frames of a video sequence.

  18. A processing module for forming a reduced noise image using a set of images, the processing module comprising: alignment logic configured to: apply respective transformations to at least some of the images of the set to bring them closer to alignment with the reference image from the set of images; and determine, for each of the transformed images, a respective measure of alignment of that transformed image with the reference image; and combining logic configured to: determine weights for one or more of the transformed images using the determined measures of alignment; and combine a plurality of images including said one or more of the transformed images using the determined weights to form a reduced noise image.

  19. The processing module of claim 18, wherein the measure of alignment for a transformed image is a misalignment parameter τi determined as the sum, over all of the pixel positions (x, y) of the transformed image, of the absolute differences between the transformed image Wi(x, y) and the reference image Ir(x, y).

  20. The processing module of claim 18 wherein said plurality of images which the combining logic is configured to combine to form the reduced noise image further includes the reference image.

  21. The processing module of claim 18 wherein said plurality of images which the combining logic is configured to combine to form the reduced noise image does not include the reference image.

  22. The processing module of any of claims 18 to 21 further comprising selection logic configured to select one of the images of the set of images to be the reference image.

  23. The processing module of claim 22 wherein the selection logic is configured to select one of the images of the set of images to be the reference image by: determining sharpness indications for the images of the set of images; and based on the determined sharpness indications, selecting the sharpest image from the set of images to be the reference image.

  24. The processing module of claim 23 wherein the selection logic is further configured to discard an image such that it is not provided to the alignment logic if the determined sharpness indication for the image is below a sharpness threshold.

  25. The processing module of claim 23 or 24 wherein the sharpness indications are sums of absolute values of image Laplacian estimates for the respective images.

  26. The processing module of any of claims 18 to 25 wherein the alignment logic is further configured to determine the transformations to apply to said at least some of the images by, for each of said at least some of the images: determining a set of points of the image which correspond to a predetermined set of points of the reference image; and determining parameters of the transformation for the image based on an error metric which is indicative of an error between a transformation of at least some of the determined set of points of the image and the corresponding points of the predetermined set of points of the reference image.

  27. The processing module of claim 26 wherein the predetermined set of points of the reference image are points of a uniform lattice.

  28. The processing module of claim 26 or 27 wherein the alignment logic is configured to determine the set of points of the image using the Lucas Kanade Inverse algorithm.

  29. The processing module of claim 28 wherein the alignment logic is configured to initialize the Lucas Kanade Inverse algorithm using the results of a multiple kernel tracking technique.

  30. The processing module of claim 29 wherein the alignment logic is configured to implement the multiple kernel tracking technique to determine the positions of a set of candidate regions based on a similarity between a set of target regions and the set of candidate regions, wherein the target regions are respectively positioned over the positions of the predetermined set of points of the reference image, and wherein the alignment logic is configured to use the determined positions of the set of candidate regions to initialize the Lucas Kanade Inverse algorithm.

  31. The processing module of any of claims 18 to 30 wherein the alignment logic is further configured to, for each of the transformed images, determine whether the respective measure of alignment indicates that the alignment of the transformed image with the reference image is below a threshold alignment level, and in dependence thereon selectively include the transformed image as one of said one or more of the transformed images to be passed to the combining logic for which weights are to be determined.

  32. The processing module of any of claims 18 to 31 further comprising motion correction logic configured to apply motion correction to the reduced noise image formed by the combining logic.

  33. The processing module of claim 32 wherein the motion correction logic is configured to apply motion correction to the reduced noise image by: determining motion indications indicating levels of motion for areas of the reduced noise image; and mixing areas of the reduced noise image with corresponding areas of the reference image based on the motion indications to form a motion-corrected, reduced noise image.

  34. The processing module of any of claims 18 to 33 wherein the set of images comprises either: (i) a plurality of images captured in a burst mode, or (ii) a plurality of frames of a video sequence.

  35. Computer readable code adapted to perform the steps of the method of any of claims 1 to 17 when the code is run on a computer.

  36. A computer readable storage medium having encoded thereon the computer readable code of claim 35.

  37. Computer readable code for generating a processing module according to any of claims 18 to 34.

  38. A computer readable storage medium having encoded thereon computer readable code for generating a processing module according to any of claims 18 to 34.
GB1504316.9A 2015-03-13 2015-03-13 Image noise reduction Active GB2536429B (en)


Publications (3)

Publication Number Publication Date
GB201504316D0 GB201504316D0 (en) 2015-04-29
GB2536429A true GB2536429A (en) 2016-09-21
GB2536429B GB2536429B (en) 2018-01-10

Family

ID=53016119

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1504316.9A Active GB2536429B (en) 2015-03-13 2015-03-13 Image noise reduction

Country Status (4)

Country Link
US (4) US11756162B2 (en)
EP (1) EP3067858B1 (en)
CN (1) CN105976328B (en)
GB (1) GB2536429B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170032172A1 (en) * 2015-07-29 2017-02-02 Hon Hai Precision Industry Co., Ltd. Electronic device and method for splicing images of electronic device
US10559073B2 (en) * 2016-03-23 2020-02-11 Intel Corporation Motion adaptive stream processing for temporal noise reduction
US10489662B2 (en) * 2016-07-27 2019-11-26 Ford Global Technologies, Llc Vehicle boundary detection
US10719927B2 (en) * 2017-01-04 2020-07-21 Samsung Electronics Co., Ltd. Multiframe image processing using semantic saliency
CN107203976B (en) * 2017-04-19 2019-07-23 武汉科技大学 A kind of adaptive non-local mean denoising method and system based on noise detection
CN107194961B (en) * 2017-05-19 2020-09-22 西安电子科技大学 Method for determining multiple reference images in group image coding
CN111869194B (en) * 2018-03-16 2023-04-14 索尼公司 Information processing apparatus, information processing method, and storage medium
US11064196B2 (en) * 2018-09-03 2021-07-13 Qualcomm Incorporated Parametrizable, quantization-noise aware bilateral filter for video coding
CN110188614B (en) * 2019-04-30 2021-03-30 杭州电子科技大学 NLM filtering finger vein denoising method based on skin crack segmentation
CN112652021B (en) * 2020-12-30 2024-04-02 深圳云天励飞技术股份有限公司 Camera offset detection method, device, electronic equipment and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150010247A1 (en) * 2012-03-30 2015-01-08 Fujifilm Corporation Image processing device, imaging device, computer-readable storage medium, and image processing method

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7623683B2 (en) * 2006-04-13 2009-11-24 Hewlett-Packard Development Company, L.P. Combining multiple exposure images to increase dynamic range
US8812240B2 (en) * 2008-03-13 2014-08-19 Siemens Medical Solutions Usa, Inc. Dose distribution modeling by region from functional imaging
US8750645B2 (en) * 2009-12-10 2014-06-10 Microsoft Corporation Generating a composite image from video frames
US8588551B2 (en) * 2010-03-01 2013-11-19 Microsoft Corp. Multi-image sharpening and denoising using lucky imaging
WO2012132129A1 (en) * 2011-03-31 2012-10-04 富士フイルム株式会社 Image-capturing device, image-capturing method, and program
US9014421B2 (en) 2011-09-28 2015-04-21 Qualcomm Incorporated Framework for reference-free drift-corrected planar tracking using Lucas-Kanade optical flow
US8665376B2 (en) * 2012-04-12 2014-03-04 Texas Instruments Incorporated Methods and systems for filtering noise in video data
KR101949294B1 (en) 2012-07-24 2019-02-18 Samsung Electronics Co., Ltd. Apparatus and method for calculating histogram accumulation of an image
US9202431B2 (en) * 2012-10-17 2015-12-01 Disney Enterprises, Inc. Transfusive image manipulation
US8866928B2 (en) * 2012-12-18 2014-10-21 Google Inc. Determining exposure times using split paxels
CN103729845A (en) 2013-12-23 2014-04-16 西安华海盈泰医疗信息技术有限公司 Breast X-ray image registration method and system based on barycenter
CN103778433B (en) 2014-01-15 2017-02-22 广东华中科技大学工业技术研究院 Generalized-point-set matching method based on distances from points to lines
CN103903280B (en) 2014-03-28 2017-01-11 哈尔滨工程大学 Subblock weight Mean-Shift tracking method with improved level set target extraction
US9629587B2 (en) * 2014-07-10 2017-04-25 General Electric Company Systems and methods for coronary imaging
CN104200216A (en) 2014-09-02 2014-12-10 武汉大学 High-speed moving target tracking algorithm for multi-feature extraction and step-wise refinement
US10554956B2 (en) * 2015-10-29 2020-02-04 Dell Products, Lp Depth masks for image segmentation for depth-based computational photography


Also Published As

Publication number Publication date
US11756162B2 (en) 2023-09-12
CN105976328B (en) 2022-06-10
EP3067858B1 (en) 2020-05-06
US20200265595A1 (en) 2020-08-20
GB201504316D0 (en) 2015-04-29
US20160267660A1 (en) 2016-09-15
EP3067858A1 (en) 2016-09-14
GB2536429B (en) 2018-01-10
US20230419453A1 (en) 2023-12-28
CN105976328A (en) 2016-09-28
US11282216B2 (en) 2022-03-22
US20180150959A1 (en) 2018-05-31
US10679363B2 (en) 2020-06-09

Similar Documents

Publication Publication Date Title
US11282216B2 (en) Image noise reduction
GB2536430B (en) Image noise reduction
Jeon et al. Accurate depth map estimation from a lenslet light field camera
Zheng et al. Single-image vignetting correction
Rengarajan et al. From bows to arrows: Rolling shutter rectification of urban scenes
Paramanand et al. Non-uniform motion deblurring for bilayer scenes
US20090052743A1 (en) Motion estimation in a plurality of temporally successive digital images
CN107851321B (en) Image processing method and dual-camera system
CN107749987B (en) Digital video image stabilization method based on block motion estimation
Pickup Machine learning in multi-frame image super-resolution
Ito et al. BlurBurst: Removing blur due to camera shake using multiple images
CN110390645B (en) System and method for improved 3D data reconstruction for stereoscopic transient image sequences
Yu et al. Joint learning of blind video denoising and optical flow estimation
Liu et al. Depth-guided sparse structure-from-motion for movies and tv shows
WO2008102898A1 (en) Image quality improvement processing device, image quality improvement processing method, and image quality improvement processing program
El-Yamany et al. Robust color image superresolution: An adaptive M-estimation framework
US20120038785A1 (en) Method for producing high resolution image
Yang et al. Image deblurring utilizing inertial sensors and a short-long-short exposure strategy
Kaur et al. An improved adaptive bilateral filter to remove gaussian noise from color images
JP2012073703A (en) Image blur amount calculation device and program for the same
CN116266356A (en) Panoramic video transition rendering method and device and computer equipment
CN107251089B (en) Image processing method for motion detection and compensation
Galego et al. Auto-calibration of pan-tilt cameras including radial distortion and zoom
Pickup et al. Multiframe super-resolution from a Bayesian perspective
Maier et al. Distortion compensation for movement detection based on dense optical flow