CN113469908A - Image noise reduction method, device, terminal and storage medium - Google Patents

Image noise reduction method, device, terminal and storage medium

Info

Publication number
CN113469908A
Authority
CN
China
Prior art keywords
image
images
target
channel
aligned
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110726047.6A
Other languages
Chinese (zh)
Other versions
CN113469908B (en)
Inventor
张晓盟
刘春婷
接丹枫
陈欢
彭晓峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Spreadtrum Communications Shanghai Co Ltd
Original Assignee
Spreadtrum Communications Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Spreadtrum Communications Shanghai Co Ltd filed Critical Spreadtrum Communications Shanghai Co Ltd
Priority to CN202110726047.6A priority Critical patent/CN113469908B/en
Publication of CN113469908A publication Critical patent/CN113469908A/en
Application granted granted Critical
Publication of CN113469908B publication Critical patent/CN113469908B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G06F 18/25 Fusion techniques
    • G06T 7/00 Image analysis
    • G06T 7/207 Analysis of motion for motion estimation over a hierarchy of resolutions
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10016 Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

An embodiment of the invention provides an image noise reduction method, apparatus, terminal and storage medium. The method comprises: acquiring a first image group, the first image group comprising a reference image and a plurality of frames of aligned images aligned with the reference image; processing the L types of channel images in the plurality of frames of aligned images separately to obtain first motion information corresponding to each channel type; and integrating the first motion information of the different channels into second motion information, then performing spatial filtering according to the second motion information and a filter to output a final image. This alleviates the problem of excessive noise in motion regions caused by reduced fusion information.

Description

Image noise reduction method, device, terminal and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image denoising method, apparatus, terminal, and storage medium.
Background
With improvements in hardware and software algorithms, multi-frame computational photography has become increasingly common in mobile-phone cameras as a way to obtain better visual results. When the subject is a static scene, registration between pixels from different images is usually accurate and good results can be obtained. However, when the scene contains a moving object, regions that cannot be registered produce ghosting if fused with a large weight; even a well-designed algorithm can only greatly reduce the registration error rate, and the problem of excessive noise in motion regions caused by reduced fusion information remains.
Disclosure of Invention
Embodiments of the invention provide an image noise reduction method, apparatus, terminal and storage medium, addressing the prior-art situation in which, when a shooting scene contains a moving object and registration fails, fusing with a large weight produces ghosting; even a well-designed algorithm can only greatly reduce the registration error rate, and the problem of excessive noise in motion regions caused by reduced fusion information remains.
In a first aspect, an embodiment of the present invention provides an image noise reduction method. The method comprises: acquiring a first image group, wherein the first image group comprises a reference image and a plurality of frames of aligned images aligned with the reference image, each aligned image comprises a plurality of target points, the target points of each aligned image are aligned one-to-one with feature points in the reference image, and both the reference image and the aligned images comprise L types of channel images; processing the L types of channel images in the plurality of frames of aligned images separately to obtain first motion information corresponding to each channel type, wherein the processing of one channel type comprises: acquiring similarity information between each target point in the target-type channel image of the plurality of frames of aligned images and the correspondingly aligned feature point in the corresponding channel image of the reference image, and determining from that similarity information the first motion information corresponding to the target-type channel image, the first motion information comprising a motion region; and integrating the first motion information determined for the L channel types into second motion information, performing spatial filtering on L types of fused images according to the second motion information and a filter to obtain L types of output images, and synthesizing the L types of output images into a final image, wherein each type of fused image is obtained by fusing the corresponding-type channel images of the plurality of frames of aligned images with the corresponding-type channel image of the reference image.
Further, the acquiring of the first image group includes: determining whether the multiple frames in an input group of images have the same exposure; if the exposures differ, performing brightness correction on the frames so that their brightness is consistent; and, once the frames are determined to have the same exposure, selecting the frame with the highest resolving power as the reference image and aligning the remaining frames to the reference image to obtain the first image group.
Further, the one-to-one alignment of the target points of each of the aligned images with the feature points in the reference image includes: and aligning the target points of each aligned image with the pixel points in the reference image one by one.
Further, the acquiring of the similarity information between each target point in the target-type channel image of the plurality of frames of aligned images and the correspondingly aligned feature point in the corresponding channel image of the reference image, and the determining of the first motion information for the target-type channel image from that similarity information, include: acquiring the fusion weight value with which each target point in a target aligned-channel image is fused to the correspondingly aligned feature point in a target reference-channel image, wherein the target aligned-channel image is the target-type channel image of the plurality of frames of aligned images and the target reference-channel image is the target-type channel image of the reference image; calculating, from those fusion weight values, the image similarity value determined after the target aligned-channel image is fused to the target reference-channel image; obtaining a motion probability map from the image similarity value of each target point in the target aligned-channel image; modulating the obtained motion probability map to obtain the first motion information; and determining the motion region from the comparison of the motion magnitude of each pixel in the motion probability map with a motion threshold.
Further, the calculating of the image similarity value determined after the target aligned-channel image is fused to the target reference-channel image, from the fusion weight value with which each target point in the target aligned-channel image is fused to the correspondingly aligned feature point in the target reference-channel image, includes determining the image similarity value of each target point by the following formula:

similar(i, j) = f_dark(w1(i, j), w2(i, j), ..., wn(i, j)),   if ref(i, j) < thr_l
similar(i, j) = f_mid(w1(i, j), w2(i, j), ..., wn(i, j)),    if thr_l <= ref(i, j) <= thr_h
similar(i, j) = f_bright(w1(i, j), w2(i, j), ..., wn(i, j)), if ref(i, j) > thr_h

wherein (i, j) denotes the position of the target point; wn(i, j) denotes the fusion weight value, at pixel point (i, j), of the target-type channel image of the nth aligned frame relative to the reference image; similar(i, j) denotes the image similarity value of target point (i, j); and f_dark, f_mid, f_bright denote the image-similarity calculation functions at the three brightness levels dark, mid, and bright, respectively. The function f_dark is used when the pixel value of the feature point aligned with the target point is lower than a first brightness threshold thr_l; f_bright is used when that pixel value is higher than a second brightness threshold thr_h; and f_mid is used when it is neither lower than thr_l nor higher than thr_h.
Further, the formats of the reference image and the alignment image include: bayer, YUV or RGB.
Further, the integrating of the first motion information determined for the L types of channel images into second motion information, the performing of spatial filtering on the L types of fused images according to the second motion information and a filter to obtain L types of output images, and the synthesizing of the L types of output images into a final image include: acquiring the motion probability map corresponding to each of the L types of channel images; integrating the motion probability maps of the different channels into the second motion information, wherein the second motion information comprises a motion probability map set containing the motion probability maps of all channels; adjusting the motion probability map set to generate a corresponding denoising intensity map; and performing spatial filtering on the fused image of each target type according to the denoising intensity map and the filter to output the final image.
Further, the denoising intensity map is a denoising intensity parameter of the filter.
Further, the filter comprises a guided filter.
In a second aspect, an embodiment of the present application further provides an image noise reduction apparatus, including: the image acquisition module is used for acquiring a first image group, wherein the first image group comprises a reference image and a plurality of frame alignment images aligned with the reference image, each alignment image comprises a plurality of target points, the target points of each alignment image are aligned with the feature points in the reference image one by one, and the reference image and the alignment images both comprise L types of channel images; a motion detection module, configured to process the L types of channel images in the plurality of frame alignment images, respectively, to obtain first motion information corresponding to each type of the channel image, where a process of processing one type of the channel image includes: acquiring similarity information of each target point in the channel image of the target type of the plurality of frames of aligned images and the corresponding aligned feature point in the channel image corresponding to the reference image, and determining first motion information corresponding to the channel image of the target type according to the similarity information of each target point in the channel image of the target type of the plurality of frames of aligned images and the corresponding aligned feature point in the channel image corresponding to the reference image, wherein the first motion information comprises a motion area; and the noise reduction module is used for integrating the first motion information determined by the L types of channel images to obtain second motion information, respectively performing spatial filtering on the L types of fusion images according to the second motion information and the filter to obtain L types of output images, and synthesizing the L types of output images into a final image, wherein each type of fusion image is an image obtained by fusing a corresponding type of channel image in the plurality of frames of alignment images and a corresponding type of channel image in the reference image.
In a third aspect, an embodiment of the present application further provides an image noise reduction apparatus, where the apparatus includes: a processor and a memory for storing at least one instruction which is loaded and executed by the processor to implement the image noise reduction method provided by the first aspect.
In one embodiment, the image noise reduction device provided by the third aspect may be a chip.
In a fourth aspect, another embodiment of the present application further provides a chip, where the chip is connected to a memory, or the chip is integrated with a memory (such as the image noise reduction apparatus provided in the third aspect), and when a program or an instruction stored in the memory is executed, the image noise reduction method provided in the first aspect is implemented.
In a fifth aspect, an embodiment of the present application further provides a terminal, where the terminal may include a terminal body and the image noise reduction apparatus provided in the third aspect.
In another embodiment, the terminal provided by the fifth aspect may comprise a terminal body and the chip provided by the fourth aspect.
In a sixth aspect, an embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the image noise reduction method provided by the first aspect.
With the above technical solution, after the first image group is obtained, the L types of channel images in the plurality of frames of aligned images may be processed separately to obtain the first motion information corresponding to each channel type, wherein the processing of one channel type comprises: acquiring the similarity information between each target point in the target-type channel image of the plurality of frames of aligned images and the correspondingly aligned feature point in the corresponding channel image of the reference image, and determining from that similarity information the first motion information corresponding to the target-type channel image. The first motion information determined for the L channel types is then integrated into second motion information, spatial filtering is performed on the L types of fused images according to the second motion information and a filter to obtain L types of output images, and the L types of output images are synthesized into a final image. This addresses the prior-art situation in which, once a shooting scene contains a moving object and registration fails, fusing with a large weight produces ghosting; even a well-designed algorithm can only greatly reduce the registration error rate, and the problem of excessive noise in motion regions caused by reduced fusion information remains.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a flowchart of an image denoising method according to an embodiment of the present application;
FIG. 2a is a flow chart of motion region detection provided in one embodiment of the present application;
FIG. 2b is a flow chart of motion region detection provided in accordance with one embodiment of the present application;
FIG. 3 is a flow chart of motion region noise reduction provided by an embodiment of the present application;
FIG. 4 is a schematic structural diagram of an image noise reduction apparatus according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an image noise reduction apparatus according to another embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In a high-dynamic-range scene, a single image can hardly achieve both a high signal-to-noise ratio in dark areas and a large dynamic range. In recent years, the advent of computational photography has addressed this difficulty well. For example, several LDR images with different exposures can be composited into an HDR result, several images with different fields of view can be composited into an image with more detail, and several identically exposed images can be composited into an image with a high signal-to-noise ratio and an increased dynamic range.
When processing a moving scene, multi-frame synthesis selectively fuses similar content and directly replaces content with large differences by the content of the reference frame. This excludes motion ghosts, but removing the ghosts introduces the problem of excessive noise in the motion region. An existing solution is to search for a map of the ghost in the frequency domain and then use that map to guide the strength of frequency-domain spatial filtering.
Existing multi-frame-fusion computational photography techniques, such as HDR+, generally include motion detection and ghost elimination to preserve image quality in moving scenes. However, once the ghost area is removed, less data is fused there, so noise reduction is limited; the noise of the deghosted area becomes prominent and a noise-layering artifact appears. This phenomenon is more pronounced in dark scenes.
In order to overcome the above technical problem, an embodiment of the present application provides an image denoising method, and fig. 1 is a flowchart of the image denoising method provided in an embodiment of the present application, and as shown in fig. 1, the image denoising method includes the following steps:
step 101: a first image group is acquired.
The first image group comprises a reference image and a plurality of frame alignment images aligned with the reference image, each alignment image comprises a plurality of target points, the target points of each alignment image are aligned with the feature points in the reference image one by one, and the reference image and the alignment images both comprise L types of channel images.
Step 102: respectively processing the L types of channel images in the plurality of frame alignment images to acquire first motion information corresponding to each type of channel image, wherein the processing process of one type of channel image comprises: and acquiring similarity information of each target point in the channel image of the target type of the plurality of frames of aligned images and the corresponding aligned feature point in the channel image corresponding to the reference image, and determining first motion information corresponding to the channel image of the target type according to the similarity information of each target point in the channel image of the target type of the plurality of frames of aligned images and the corresponding aligned feature point in the channel image corresponding to the reference image.
Wherein the first motion information comprises a motion region.
Step 103: and integrating the first motion information determined according to the L types of channel images to obtain second motion information, respectively performing spatial filtering on the L types of fusion images according to the second motion information and a filter to obtain L types of output images, and synthesizing the L types of output images into a final image, wherein each type of fusion image is an image obtained by fusing a corresponding type of channel image in the plurality of frame alignment images and a corresponding type of channel image in the reference image.
In one embodiment of step 101, an input group of images containing multiple frames is obtained. After the group is acquired, it can be determined whether the frames have the same exposure; if the exposures differ, brightness correction is applied to the group so that the brightness of all frames is consistent. Further, once the frames are confirmed to have consistent brightness, the frame with the highest resolving power is selected as the reference image (reference frame), and the remaining frames in the group are aligned to the reference image. In one implementation, the alignment may be performed by whole-frame alignment, by block alignment, or by a combination of the two; the specific implementation is not limited. The first image group in step 101 may be acquired by performing at least one of the brightness correction and alignment operations described above.
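As a concrete illustration of this embodiment of step 101, the following minimal sketch selects the sharpest frame as the reference and globally aligns the remaining frames to it. The Laplacian-variance sharpness measure and the ECC-based affine alignment are illustrative assumptions; the patent leaves both the resolving-power metric and the alignment method open.

import cv2
import numpy as np

def sharpness(gray):
    # Variance of the Laplacian as a simple resolving-power proxy (assumed metric).
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def build_first_image_group(frames):
    # frames: list of HxWx3 uint8 images with consistent brightness.
    grays = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]
    ref_idx = int(np.argmax([sharpness(g) for g in grays]))
    ref = frames[ref_idx]
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 50, 1e-4)
    aligned = []
    for k, f in enumerate(frames):
        if k == ref_idx:
            continue
        warp = np.eye(2, 3, dtype=np.float32)
        # Whole-frame (global) alignment; block alignment could be used instead.
        _, warp = cv2.findTransformECC(grays[ref_idx], grays[k], warp,
                                       cv2.MOTION_AFFINE, criteria, None, 5)
        aligned.append(cv2.warpAffine(f, warp, (f.shape[1], f.shape[0]),
                                      flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP))
    return ref, aligned

A block-alignment variant would estimate one warp per tile and blend at tile borders; as noted above, the combination scheme is left open by the text.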
In another embodiment of step 101, the first image group may be directly obtained, where the first image group includes a reference image and a plurality of frame alignment images aligned with the reference image, each alignment image includes a plurality of target points, and the plurality of target points of each alignment image are aligned with a plurality of pixel points in the reference image one by one. I.e. without performing the above-mentioned brightness correction or alignment operations.
In one embodiment, the feature points (x, y) in the reference image or the target points (x, y) in the registered image that are aligned one-to-one with the feature points in the reference image may be represented by coordinates. The feature point may be a pixel point.
In any embodiment of step 101, the acquired frames may be images in Bayer format, in YUV format, or in the RGB domain. YUV is a color encoding (color space) in which "Y" represents luminance (Luma) and "UV" represents chrominance (Chroma); the specific image format is not limited here. Further, if the acquired frames are in RGB format, the channel images are the images captured by the R channel, the G channel, and the B channel, respectively.
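To make the notion of L types of channel images concrete, here is a minimal sketch, assuming an RGGB Bayer mosaic (so L = 4) and an interleaved RGB frame (L = 3); the function names are hypothetical.

import numpy as np

def split_bayer_rggb(raw):
    # raw: HxW mosaic with an assumed R G / G B 2x2 pattern.
    return {"R": raw[0::2, 0::2], "Gr": raw[0::2, 1::2],
            "Gb": raw[1::2, 0::2], "B": raw[1::2, 1::2]}

def split_rgb(img):
    # img: HxWx3 array, one plane per channel type.
    return {"R": img[..., 0], "G": img[..., 1], "B": img[..., 2]}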
In one embodiment, after the first image group is acquired, each frame in the first image group may be subjected to ghost detection and ghost elimination. After the ghosting in each frame of the first image group has been removed by this operation, a multi-frame temporal-fusion effect can be obtained by point-by-point addition and averaging.
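The point-by-point fusion can be sketched as follows, assuming per-pixel fusion weights in [0, 1] are already available for each aligned frame (weight computation is covered under step 201 below); this is an illustration, not the patent's prescribed implementation.

import numpy as np

def temporal_fuse(ref, aligned, weights):
    # ref: HxW reference channel; aligned: list of HxW aligned channels;
    # weights: list of HxW per-pixel fusion weights in [0, 1].
    acc = ref.astype(np.float64)   # the reference contributes with weight 1
    norm = np.ones_like(acc)
    for img, w in zip(aligned, weights):
        acc += w * img.astype(np.float64)
        norm += w
    return acc / norm              # weighted point-by-point average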
Step 102 shown in fig. 1 specifically includes the following steps shown in fig. 2 a:
step 201: and acquiring a fusion weight value of each target point in the plurality of frames of aligned images fused with the corresponding aligned feature point.
Step 202: and calculating the similarity sum of all the aligned images in the same target point according to the fusion weight value of each target point fused with the corresponding aligned feature point.
Step 203: the average of the sum of the similarities calculated in the calculating step 202.
Step 204: and obtaining a motion probability map (move map) according to the average value of the similarity sum of each target point.
Step 205: and determining the motion area according to the comparison result of the motion size of each pixel point in the motion probability graph and the motion threshold.
Step 206: and modulating the acquired motion probability map to obtain the first motion information.
In the specific implementation of step 201, based on the detection and decision mechanism of temporal noise reduction, the similarity between each point and the corresponding point of the reference frame is readily obtained and can be represented by a weight. Specifically, the difference between the aligned image and the reference image is calculated and compared with a similarity threshold, and the fusion weight for fusing the aligned image into the reference image is determined from the comparison result. When the comparison result lies in a first range, the fusion weight is 1; when it lies in a second range (i.e., when the image content differs greatly), the fusion weight is 0; and in a third range, which lies between the first and second ranges, the difference and the fusion weight are linearly related, so the fusion weight is determined by that linear relation.
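A minimal sketch of this piecewise rule follows, assuming the two range boundaries are expressed as difference thresholds (thr_low and thr_high are hypothetical names; the text speaks only of a similarity threshold and three comparison ranges).

import numpy as np

def fusion_weight(aligned_ch, ref_ch, thr_low, thr_high):
    diff = np.abs(aligned_ch.astype(np.float64) - ref_ch.astype(np.float64))
    w = (thr_high - diff) / (thr_high - thr_low)  # linear relation in the third range
    return np.clip(w, 0.0, 1.0)                   # 1 below thr_low, 0 above thr_high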
It should be noted that a plurality of fusion weight values exist in the same target point for a plurality of frame alignment images. For example, the fusion weight value of the fusion of the target point (x, y) of the ith frame of aligned image in the plurality of frames of aligned images to the corresponding feature point (x, y) in the reference image is Wi (x, y), wherein i ≧ 1, and i is a positive integer. The corresponding feature point (x, y) in the reference image may be a corresponding pixel point (x, y) of the reference image.
In a specific implementation of step 202, the similarities of all the aligned images at the same target point may be summed. In the embodiment above, in which similarity is represented by the fusion weight, the fusion weight of the target point (x, y) of the i-th aligned frame to the corresponding pixel (x, y) of the reference image is Wi(x, y); for the plurality of aligned frames there are several fusion weights {W1(x, y), W2(x, y), ..., Wi(x, y)} at the same target point, and the similarity sum of all aligned images at that point, SUM{W1(x, y), W2(x, y), ..., Wi(x, y)}, can be computed with a summation function.
In the implementation of step 203, the average of the similarity sum calculated in step 202 is computed; in the weight-based representation above, this is the fusion-weight average SUM{W1(x, y), W2(x, y), ..., Wi(x, y)}/i.
In the specific implementation of steps 204 to 205, a motion probability map (move map) may be obtained from the average similarity of each target point: move_map = 1 - similar, where similar denotes the average similarity of the target point. The resulting map values (motion magnitudes) lie in the range [0, 1], i.e. each map value is greater than or equal to 0 and less than or equal to 1; the larger the map value, the higher the probability of motion, and the motion region can therefore be determined from the map values.
In the specific implementation of step 206, the move map obtained in step 204 may be modulated in order to remove burrs and make the whole map smoother, thereby ensuring the stability of the image effect.
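Steps 202 through 206 can be put together in the following sketch under the weight-as-similarity convention above; the Gaussian smoothing used for the step-206 modulation and the motion-threshold value are assumptions.

import cv2
import numpy as np

def first_motion_info(weights, move_thr=0.5, ksize=7):
    # weights: list of HxW fusion-weight maps, one per aligned frame (step 201).
    similar = np.mean(np.stack(weights, axis=0), axis=0)      # steps 202-203
    move_map = 1.0 - similar                                  # step 204, values in [0, 1]
    motion_region = move_map > move_thr                       # step 205 (assumed threshold)
    move_map = cv2.GaussianBlur(move_map, (ksize, ksize), 0)  # step 206: remove burrs
    return move_map, motion_region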
It should be noted that the first-motion-information determination method of the embodiment shown in fig. 2a is based on an average similarity. In other embodiments, the method of the embodiment shown in fig. 2b may be used instead; as shown in fig. 2b, it includes the following steps:
step 211: and acquiring a fusion weight value of fusion of each target point in the target alignment channel image to the corresponding aligned feature point in the target reference channel image.
Step 212: and calculating and obtaining the image similarity value determined after the target alignment channel image is fused to the target reference channel image according to the fusion weight value of each target point in the target alignment channel image fused to the corresponding aligned feature point in the target reference channel image.
Step 213: and obtaining a motion probability map according to the image similarity value corresponding to each target point in the target alignment channel image.
Step 214: and determining the motion area according to the comparison result of the motion size of each pixel point in the motion probability graph and the motion threshold.
Step 215: and modulating the acquired motion probability map to obtain the first motion information.
In the specific implementation of step 211, the difference between each target point and the correspondingly aligned pixel of the reference frame may be calculated point by point, the first difference being computed by the following formula:

diff1(i, j) = alt(i, j) - ref(i, j)    (Formula 1)

wherein diff1(i, j) denotes the first difference, alt(i, j) the value of pixel (i, j) in the aligned image, and ref(i, j) the value of pixel (i, j) in the reference image.
A second difference may then be obtained by subtracting a first threshold from the first difference, specifically by the following formula:

diff2(i, j) = diff1(i, j) - thr(i, j)    (Formula 2)

wherein diff2(i, j) denotes the second difference and thr(i, j) denotes the difference threshold (i.e., the first threshold) corresponding to the target point. An adaptive difference threshold thr(i, j) can be set for each position (target pixel); thr varies with ref(i, j), being larger for larger values of ref(i, j) and vice versa. An adaptive confidence sigma(i, j) can likewise be set for each position, its magnitude also growing with ref(i, j) and vice versa.
After the second difference is calculated, it may be compared with a second threshold. If the second difference is smaller than the second threshold, the fusion weight of the feature point correspondingly aligned with the target point is a first weight; otherwise it is a second weight, which can be calculated from the second difference and the confidence corresponding to the target point. For example, the second threshold may be 0: if the second difference diff2 is smaller than 0, the first weight is W1(i, j) = 1; if diff2 is not smaller than 0, the second weight may be calculated by the following formula:

W2(i, j) = exp(-diff2(i, j) * diff2(i, j) / (sigma(i, j) * sigma(i, j)))

wherein (i, j) denotes the position of the target point, W2(i, j) the second weight of the target point (i, j), diff2(i, j) the second difference of the target point, and sigma(i, j) the confidence of the target point (i, j).
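Formulas 1 and 2 and the weight rule above can be combined as in the sketch below. The text states only that thr(i, j) and sigma(i, j) grow with ref(i, j); the linear laws and constants here are placeholders, and taking the absolute value of the first difference is an assumption.

import numpy as np

def per_pixel_weight(alt, ref, thr_base=4.0, thr_gain=0.02,
                     sigma_base=2.0, sigma_gain=0.02):
    ref = ref.astype(np.float64)
    diff1 = np.abs(alt.astype(np.float64) - ref)  # Formula 1 (absolute value assumed)
    thr = thr_base + thr_gain * ref               # adaptive difference threshold
    sigma = sigma_base + sigma_gain * ref         # adaptive confidence
    diff2 = diff1 - thr                           # Formula 2
    w2 = np.exp(-(diff2 ** 2) / (sigma ** 2))     # second weight
    return np.where(diff2 < 0.0, 1.0, w2)         # first weight is 1 when diff2 < 0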
In a specific implementation of step 212, the calculating, from the fusion weight of each target point to its correspondingly aligned feature point, of the image similarity value determined after the plurality of frames are fused to the reference image includes determining the image similarity value of each target point by the following formula (Formula 3):

similar(i, j) = f_dark(w1(i, j), w2(i, j), ..., wn(i, j)),   if ref(i, j) < thr_l
similar(i, j) = f_mid(w1(i, j), w2(i, j), ..., wn(i, j)),    if thr_l <= ref(i, j) <= thr_h
similar(i, j) = f_bright(w1(i, j), w2(i, j), ..., wn(i, j)), if ref(i, j) > thr_h

wherein (i, j) denotes the position of the target point; wn(i, j) denotes the fusion weight value, at pixel point (i, j), of the nth aligned frame relative to the reference image; similar(i, j) denotes the image similarity value of target point (i, j); and f_dark, f_mid, f_bright denote the image-similarity calculation functions at the three brightness levels dark, mid, and bright, respectively. The function f_dark is used when the pixel value of the feature point aligned with the target point is lower than a first brightness threshold thr_l; f_bright is used when that pixel value is higher than a second brightness threshold thr_h; and f_mid is used when it is neither lower than thr_l nor higher than thr_h.
In the specific implementation of steps 213 to 214, after the corresponding image-similarity calculation function has been selected in step 212 according to the pixel value of the feature point aligned with each target point, the similarity of each target point is obtained, and a motion probability map (move map) can then be derived: move_map = 1 - similar, where similar denotes the image similarity value of the target point determined by Formula 3. The map values (motion magnitudes) lie in the range [0, 1]; if a map value is greater than the motion threshold, the point is judged to contain motion, and the larger the map value, the higher the probability of motion, so the motion region can be determined from the map values.
In the specific implementation of step 215, the move map obtained in step 213 may be modulated in order to remove burrs and make the whole map smoother, thereby ensuring the stability of the image effect.
In another embodiment, the max mode may be adopted in dark regions and the min mode in highlights, specifically as follows:
f_dark = max(w1(i, j), w2(i, j), ..., wn(i, j))    (Formula 4)
f_mid = sum(w1(i, j), w2(i, j), ..., wn(i, j)) / n    (Formula 5)
f_bright = min(w1(i, j), w2(i, j), ..., wn(i, j))    (Formula 6)

wherein f_dark, f_mid, and f_bright denote the image-similarity calculation functions at the three brightness levels dark, mid, and bright, respectively.
When the pixel value of the feature point aligned with the target point is lower than the first brightness threshold thr_l, Formula 4 is used to determine the image similarity, i.e., the maximum of the fusion weights of the aligned frames at the same target point (i, j) is selected as the image similarity.
When the pixel value of the feature point aligned with the target point is higher than the second brightness threshold thr_h, Formula 6 is used, i.e., the minimum of the fusion weights of the aligned frames at the same target point (i, j) is selected as the image similarity.
If the pixel value of the feature point aligned with the target point is neither lower than the first brightness threshold thr_l nor higher than the second brightness threshold thr_h, Formula 5 is used, i.e., the average of the fusion weights of the aligned frames at the same target point (i, j) is taken as the image similarity.
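The brightness-adaptive selection of Formulas 4 to 6 can be sketched as follows; the 8-bit values for thr_l and thr_h are illustrative assumptions.

import numpy as np

def similarity_map(weights, ref, thr_l=32, thr_h=224):
    w = np.stack(weights, axis=0)   # n x H x W per-frame fusion weights
    f_dark = w.max(axis=0)          # Formula 4: max mode for dark pixels
    f_mid = w.mean(axis=0)          # Formula 5: mean mode for mid-tones
    f_bright = w.min(axis=0)        # Formula 6: min mode for highlights
    return np.where(ref < thr_l, f_dark,
                    np.where(ref > thr_h, f_bright, f_mid))

The motion probability map then follows as move_map = 1 - similarity_map(...).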
Step 103 shown in fig. 1 specifically includes the following steps shown in fig. 3:
step 301: and integrating a motion probability map set (move map (f)) based on the move maps of different channels.
Step 302: adjusting the move map (f) generated in step 301 generates a dense map.
Step 303: and performing spatial filtering according to the dense map and the filter to output a final image.
In the implementation of step 301, one move_map_f may be integrated from the move maps of the different channels by taking, at each position, the maximum over the channels, as shown in the following formula:

move_map_f(i, j) = max[move_map_1(i, j), move_map_2(i, j), ..., move_map_L(i, j)]

where L is the number of channels.
In the specific implementation of step 302, the denoise map may be generated from move_map_f by modulation, and the modulation may be implemented linearly, i.e., it may be expressed as:

denoise_map(i, j) = A * move_map_f(i, j) + B

where A denotes a first denoising-intensity adjustment factor and B a second denoising-intensity adjustment factor; the values of A and B can be configured according to the ambient brightness BV, and the first factor A is larger than the second factor B.
In a specific implementation of step 303, spatial filtering may be performed based on the denoise map and the filter to output the final image. The filter may be a guided filter (guide filter), and the spatial filtering may be done channel by channel:

output = guide_filter(input, denoise_map, win_w)

where the denoise map serves as the sigma (denoising strength) parameter of the filter and the win_w parameter denotes the window width of the guided filter. Finally, the filtered image is output.
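Steps 301 to 303 can be sketched end to end as follows, assuming the guided filter from the opencv-contrib ximgproc module. Stock guidedFilter accepts a single scalar eps rather than a per-pixel strength map, so the denoise map is collapsed to a scalar here as an approximation; a faithful implementation would vary the smoothing strength per pixel. A, B, and win_w are illustrative values.

import cv2
import numpy as np

def spatial_denoise(fused_channels, move_maps, A=0.2, B=0.02, win_w=9):
    # Step 301: point-wise maximum over the per-channel move maps.
    move_map_f = np.max(np.stack(move_maps, axis=0), axis=0)
    # Step 302: linear modulation into a denoising intensity map.
    denoise_map = A * move_map_f + B
    outputs = []
    for ch in fused_channels:
        # Step 303: channel-by-channel guided filtering; eps grows with the
        # denoising intensity (stronger smoothing where motion is likely).
        eps = float(denoise_map.mean()) ** 2
        src = ch.astype(np.float32)
        outputs.append(cv2.ximgproc.guidedFilter(guide=src, src=src,
                                                 radius=win_w // 2, eps=eps))
    return outputs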
An embodiment of the present application further provides an image noise reduction apparatus, and fig. 4 is a schematic structural diagram of an image noise reduction apparatus provided in another embodiment of the present application, and as shown in fig. 4, the apparatus includes:
an image obtaining module 401, configured to obtain a first image group, where the first image group includes a reference image and a plurality of frame alignment images aligned with the reference image, each of the alignment images includes a plurality of target points, the target points of each of the alignment images are aligned with a plurality of feature points in the reference image one by one, and the reference image and the alignment images both include L types of channel images;
a motion detection module 402, configured to process the L types of channel images in the plurality of frame alignment images, respectively, to obtain first motion information corresponding to each type of the channel image, where a process of processing one type of the channel image includes: acquiring similarity information of each target point in the channel image of the target type of the plurality of frames of aligned images and the corresponding aligned feature point in the channel image corresponding to the reference image, and determining first motion information corresponding to the channel image of the target type according to the similarity information of each target point in the channel image of the target type of the plurality of frames of aligned images and the corresponding aligned feature point in the channel image corresponding to the reference image, wherein the first motion information comprises a motion area; and
a denoising module 403, configured to integrate second motion information according to the first motion information determined by the L types of channel images, perform spatial filtering on the L types of fused images according to the second motion information and a filter to obtain L types of output images, and synthesize the L types of output images into a final image, where each type of fused image is an image obtained by fusing a corresponding type of channel image in the plurality of frames of aligned images and a corresponding type of channel image in the reference image.
Another embodiment of the present application further provides an image noise reduction apparatus, and fig. 5 is a schematic structural diagram of the image noise reduction apparatus provided in another embodiment of the present application, as shown in fig. 5, the apparatus includes: a processor 501 and a memory 502, the memory 502 being configured to store at least one instruction that is loaded and executed by the processor 501 to implement the image denoising method provided by any of the embodiments shown in fig. 1, 2, and 3.
In one embodiment, the image noise reduction device provided in the embodiment shown in fig. 5 may be a chip.
Another embodiment of the present application further provides a chip, where the chip is connected to a memory, or the chip is integrated with a memory (such as the image noise reduction apparatus provided in the embodiment shown in fig. 5), and when a program or an instruction stored in the memory is executed, the image noise reduction method provided in any of the embodiments shown in fig. 1, fig. 2, and fig. 3 is implemented.
The embodiment of the present application further provides a terminal, where the terminal includes a terminal body and the image noise reduction device provided in the embodiment shown in fig. 5 or the chip connected to the memory provided in the above embodiment. The terminal implements the image noise reduction method provided by any of the embodiments shown in fig. 1, fig. 2 and fig. 3 by executing a corresponding program or instruction through the image noise reduction device provided by the embodiment shown in fig. 5 or the chip connected to the memory provided by the above embodiment.
Still another embodiment of the present application further provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the image denoising method provided by any of the embodiments shown in fig. 1, 2 and 3.
It should be noted that the terminal according to the embodiment of the present invention may include, but is not limited to, a Personal Computer (PC), a Personal Digital Assistant (PDA), a wireless handheld device, a Tablet Computer (Tablet Computer), a mobile phone, an MP3 player, an MP4 player, and the like.
It should be understood that the application may be an application program (native app) installed on the terminal, or may also be a web page program (webApp) of a browser on the terminal, which is not limited in this embodiment of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions in actual implementation, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a Processor (Processor) to execute some steps of the methods according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (12)

1. A method for image noise reduction, the method comprising:
acquiring a first image group, wherein the first image group comprises a reference image and a plurality of frame alignment images aligned with the reference image, each alignment image comprises a plurality of target points, the target points of each alignment image are aligned with the feature points in the reference image one by one, and the reference image and the alignment images both comprise L types of channel images;
respectively processing the L types of channel images in the plurality of frame alignment images to acquire first motion information corresponding to each type of channel image, wherein the processing process of one type of channel image comprises: acquiring similarity information of each target point in the channel image of the target type of the plurality of frames of aligned images and the corresponding aligned feature point in the channel image corresponding to the reference image, and determining first motion information corresponding to the channel image of the target type according to the similarity information of each target point in the channel image of the target type of the plurality of frames of aligned images and the corresponding aligned feature point in the channel image corresponding to the reference image, wherein the first motion information comprises a motion area; and
and integrating the first motion information determined according to the L types of channel images to obtain second motion information, respectively performing spatial filtering on the L types of fusion images according to the second motion information and a filter to obtain L types of output images, and synthesizing the L types of output images into a final image, wherein each type of fusion image is an image obtained by fusing a corresponding type of channel image in the plurality of frame alignment images and a corresponding type of channel image in the reference image.
2. The method of claim 1, wherein said obtaining a first set of images comprises:
determining whether the multiple frames of images in the input group of images are exposed identically;
if the exposure of multiple frames of images in a group of input images is different, performing brightness correction on the multiple frames of images to enable the brightness of the multiple frames of images to be consistent; and
and if the multi-frame images in the input group of images are determined to be exposed identically, selecting one frame image with the highest analytic power in the multi-frame images as the reference image, and aligning the rest images except the reference image in the multi-frame images to the reference image to acquire the first image group.
3. The method of claim 1, wherein the one-to-one alignment of the target points of each of the aligned images with the feature points of the reference image comprises:
and aligning the target points of each aligned image with the pixel points in the reference image one by one.
4. The method according to claim 3, wherein the obtaining similarity information between each target point in the channel image of the target type of the plurality of frames of aligned images and the corresponding aligned feature point in the channel image of the reference image, and the determining the first motion information corresponding to the channel image of the target type according to the similarity information between each target point in the channel image of the target type of the plurality of frames of aligned images and the corresponding aligned feature point in the channel image of the reference image comprises:
acquiring a fusion weight value of fusion of each target point in a target alignment channel image to the corresponding aligned feature point in a target reference channel image, wherein the target alignment channel image is the channel image of the target type of the plurality of frames of aligned images, and the target reference channel image is the channel image of the target type of the reference image;
calculating and obtaining an image similarity value determined after the target alignment channel image is fused to the target reference channel image according to a fusion weight value of fusion of each target point in the target alignment channel image to the corresponding aligned feature point in the target reference channel image;
obtaining a motion probability map according to the image similarity value corresponding to each target point in the target alignment channel image; and
modulating the obtained motion probability map to obtain the first motion information;
and determining the motion area according to the comparison result of the motion size of each pixel point in the motion probability graph and the motion threshold.
5. The method according to claim 4, wherein the calculating of the image similarity value according to the fusion weight values comprises determining the image similarity value corresponding to each target point in the target aligned channel image by the following formula:
[The formula appears only as an image (FDA0003138679280000021) in the original publication; its recoverable structure is the brightness-dependent selection described below.]
wherein (i, j) denotes the position of the target point; w_n(i, j) denotes the fusion weight value, at pixel point (i, j), of the channel image of the target type of the n-th aligned image relative to the reference image; similarity(i, j) denotes the image similarity value of the target point (i, j); and f_dark, f_mid, f_bright denote the image similarity calculation functions at the three brightness levels dark, mid, and bright, respectively. The function f_dark is used when the pixel value of the feature point aligned with the target point is lower than a first brightness threshold thr_l; f_bright is used when that pixel value is higher than a second brightness threshold thr_h; and f_mid is used when that pixel value is neither lower than thr_l nor higher than thr_h.
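The brightness-dependent selection of f_dark, f_mid, and f_bright can be sketched as follows. The linear placeholder functions and the numeric thresholds are invented for illustration, since the actual functional forms appear only in the original formula figure.

```python
import numpy as np

THR_L, THR_H = 64, 192  # illustrative stand-ins for thr_l and thr_h

def similarity_map(weight, ref_pixels):
    # Pick the similarity value per pixel from the aligned feature point's
    # brightness; the three linear maps below are invented placeholders.
    f_dark = np.clip(1.5 * weight, 0.0, 1.0)    # shadows: be more forgiving
    f_mid = np.clip(weight, 0.0, 1.0)           # mid-tones: neutral
    f_bright = np.clip(0.8 * weight, 0.0, 1.0)  # highlights: be stricter
    return np.where(ref_pixels < THR_L, f_dark,
                    np.where(ref_pixels > THR_H, f_bright, f_mid))
```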
6. The method according to any one of claims 1-5, wherein the format of the reference image and the aligned images comprises: Bayer, YUV, or RGB.
7. The method according to claim 6, wherein the integrating of the first motion information determined for the L types of channel images to obtain the second motion information, the performing of spatial filtering on the L types of fused images according to the second motion information and the filter to obtain the L types of output images, and the synthesizing of the L types of output images into the final image comprise:
acquiring the motion probability maps corresponding to the L types of channel images;
integrating the motion probability maps of the different channels to obtain the second motion information, wherein the second motion information comprises a motion probability map set containing the motion probability maps of all channels;
adjusting the motion probability map set to generate a corresponding denoising intensity map; and
performing spatial filtering on each type of fused image according to the denoising intensity map and the filter to output the final image.
8. The method according to claim 7, wherein the denoising intensity map is used as a denoising intensity parameter of the filter.
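Claims 7 and 8 together suggest the following sketch: the per-channel motion probability maps are combined into one map, inverted into a denoising intensity map, and applied as the filter's strength parameter. The max-combination, the linear inversion, and the 3x3 box filter are assumptions, not the claimed filter.

```python
import numpy as np

def denoising_intensity(prob_maps, max_strength=1.0):
    # Combine the per-channel motion probability maps (the "map set") and
    # invert: strong denoising where motion is unlikely.
    combined = np.max(np.stack(prob_maps), axis=0)
    return max_strength * (1.0 - combined)

def spatial_filter(fused, intensity):
    # Per-pixel blend between the fused image and a 3x3 box-blurred copy;
    # `intensity` acts as the filter's denoising intensity parameter.
    p = np.pad(fused, 1, mode="edge")
    h, w = fused.shape
    blurred = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    return intensity * blurred + (1.0 - intensity) * fused
```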
9. An image noise reduction apparatus, characterized in that the apparatus comprises:
an image acquisition module, configured to acquire a first image group, wherein the first image group comprises a reference image and a plurality of aligned images aligned with the reference image, each aligned image comprises a plurality of target points, the target points of each aligned image are aligned one-to-one with the feature points in the reference image, and the reference image and the aligned images each comprise L types of channel images;
a motion detection module, configured to respectively process the L types of channel images in the plurality of aligned images to obtain first motion information corresponding to each type of channel image, wherein the processing of a channel image of a target type comprises: acquiring similarity information between each target point in the channel image of the target type of the plurality of aligned images and the corresponding aligned feature point in the corresponding channel image of the reference image, and determining the first motion information corresponding to the channel image of the target type according to the similarity information, wherein the first motion information comprises a motion area; and
a noise reduction module, configured to integrate the first motion information determined for the L types of channel images to obtain second motion information, respectively perform spatial filtering on the L types of fused images according to the second motion information and the filter to obtain L types of output images, and synthesize the L types of output images into a final image, wherein each type of fused image is obtained by fusing the channel images of the corresponding type in the plurality of aligned images with the channel image of the corresponding type in the reference image.
10. An image noise reduction apparatus, characterized in that the apparatus comprises:
a processor and a memory, the memory being configured to store at least one instruction that is loaded and executed by the processor to implement the image noise reduction method of any one of claims 1-8.
11. A terminal, characterized by comprising the image noise reduction apparatus according to claim 10.
12. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the image noise reduction method according to any one of claims 1-8.
CN202110726047.6A 2021-06-29 2021-06-29 Image noise reduction method, device, terminal and storage medium Active CN113469908B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110726047.6A CN113469908B (en) 2021-06-29 2021-06-29 Image noise reduction method, device, terminal and storage medium

Publications (2)

Publication Number Publication Date
CN113469908A true CN113469908A (en) 2021-10-01
CN113469908B CN113469908B (en) 2022-11-18

Family

ID=77873635

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110726047.6A Active CN113469908B (en) 2021-06-29 2021-06-29 Image noise reduction method, device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN113469908B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106355559A (en) * 2016-08-29 2017-01-25 厦门美图之家科技有限公司 Image sequence denoising method and device
US20170111582A1 (en) * 2014-06-30 2017-04-20 Huawei Technologies Co., Ltd. Wide-Area Image Acquiring Method and Apparatus
CN108429887A (en) * 2017-02-13 2018-08-21 中兴通讯股份有限公司 A kind of image processing method and device
CN108898567A (en) * 2018-09-20 2018-11-27 北京旷视科技有限公司 Image denoising method, apparatus and system
CN110070511A (en) * 2019-04-30 2019-07-30 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN110827336A (en) * 2019-11-01 2020-02-21 厦门美图之家科技有限公司 Image alignment method, device, equipment and storage medium
CN111353948A (en) * 2018-12-24 2020-06-30 Tcl集团股份有限公司 Image noise reduction method, device and equipment

Also Published As

Publication number Publication date
CN113469908B (en) 2022-11-18

Similar Documents

Publication Publication Date Title
CN108335279B (en) Image fusion and HDR imaging
CN111418201B (en) Shooting method and equipment
EP2987134B1 (en) Generation of ghost-free high dynamic range images
CN108694705B (en) Multi-frame image registration and fusion denoising method
US9344636B2 (en) Scene motion correction in fused image systems
WO2018176925A1 (en) Hdr image generation method and apparatus
CN108604293B (en) Apparatus and method for improving image quality
CN113992861B (en) Image processing method and image processing device
WO2015011707A1 (en) Digital image processing
CN113888437A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
KR102106537B1 (en) Method for generating a High Dynamic Range image, device thereof, and system thereof
CN113344821B (en) Image noise reduction method, device, terminal and storage medium
CN111242860B (en) Super night scene image generation method and device, electronic equipment and storage medium
CN113313661A (en) Image fusion method and device, electronic equipment and computer readable storage medium
CN115550570B (en) Image processing method and electronic equipment
WO2020171300A1 (en) Processing image data in a composite image
CN113674193A (en) Image fusion method, electronic device and storage medium
JP2022179514A (en) Control apparatus, imaging apparatus, control method, and program
CN110942427A (en) Image noise reduction method and device, equipment and storage medium
CN113379609A (en) Image processing method, storage medium and terminal equipment
EP3179716B1 (en) Image processing method, computer storage medium, device, and terminal
US20240127403A1 (en) Multi-frame image fusion method and system, electronic device, and storage medium
CN113469908B (en) Image noise reduction method, device, terminal and storage medium
CN113344822B (en) Image denoising method, device, terminal and storage medium
CN113379608A (en) Image processing method, storage medium and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant