CN117764855A - Image noise reduction method, device, computer readable storage medium and equipment - Google Patents

Image noise reduction method, device, computer readable storage medium and equipment

Info

Publication number
CN117764855A
CN117764855A · Application CN202211150866.1A
Authority
CN
China
Prior art keywords
image
pixel
result
map
difference
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211150866.1A
Other languages
Chinese (zh)
Inventor
李海军 (Li Haijun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202211150866.1A
Publication of CN117764855A

Landscapes

  • Image Processing (AREA)

Abstract

The application provides an image noise reduction method, an image noise reduction device, a computer readable storage medium and electronic equipment, and relates to the technical field of image processing.

Description

Image noise reduction method, device, computer readable storage medium and equipment
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image noise reduction method, an image noise reduction device, a computer readable storage medium, and an electronic device.
Background
When a user photographs in a dark environment (e.g., a night scene), the photosensitive element receives less light per unit time. With the exposure time unchanged, the camera increases the ISO sensitivity in order to expose as accurately as possible. As the ISO increases, more content-independent artifacts, i.e., noise, appear in the picture.
Generally, in order to eliminate noise, a plurality of photographs can be captured in rapid succession, and noise in the final output image is reduced by pixel replacement and fusion across the plurality of photographs. However, an output image produced in this manner is prone to smearing of moving objects, which degrades its quality.
It should be noted that the information disclosed in the above background section is only for enhancing understanding of the background of the present application and thus may include information that does not constitute a related art known to those of ordinary skill in the art.
Disclosure of Invention
An object of the present invention is to provide an image denoising method, an apparatus, a computer readable storage medium, and an electronic device, which determine a stationary region and a motion region from a pixel difference map that characterizes the difference between a first image and a second image, and fuse the images differently in the two types of region. This implements multi-frame noise reduction based on motion detection, avoids moving-object smear in the noise reduction result map, and achieves a better noise reduction effect than the related art.
Other features and advantages of the present application will be apparent from the following detailed description, or may be learned in part by the practice of the application.
According to an aspect of the present application, there is provided an image noise reduction method, the method including:
Acquiring a first image and a second image from an image sequence;
generating a pixel difference map for characterizing a difference between the first image and the second image;
dividing a stationary region and a motion region in the pixel difference map;
and fusing the first image and the second image based on the stationary region and the motion region in the pixel difference map to obtain a noise reduction result map.
According to an aspect of the present application, there is provided an image noise reduction apparatus including:
an image acquisition unit configured to acquire a first image and a second image from an image sequence;
a difference determination unit for generating a pixel difference map for characterizing a difference between the first image and the second image;
a region dividing unit for dividing a stationary region and a motion region in the pixel difference map;
and a fusion unit for fusing the first image and the second image based on the stationary region and the motion region in the pixel difference map to obtain a noise reduction result map.
According to an aspect of the present application, there is provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The computer instructions are read from the computer-readable storage medium by a processor of a computer device, and executed by the processor, cause the computer device to perform the methods provided in the various alternative implementations described above.
According to an aspect of the present application, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method of any of the above.
According to an aspect of the present application, there is provided an electronic apparatus including: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the method of any of the above via execution of executable instructions.
Exemplary embodiments of the present application may have some or all of the following benefits:
in the image denoising method provided by an example embodiment of the present application, a stationary region and a motion region may be determined from a pixel difference map that characterizes the difference between a first image and a second image, and the images are fused differently in the two types of region, implementing multi-frame noise reduction based on motion detection. This avoids moving-object smear in the noise reduction result map and achieves a better noise reduction effect than the related art.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application. It is apparent that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained from these drawings without inventive effort for a person of ordinary skill in the art.
FIG. 1 schematically illustrates a flow chart of an image denoising method according to one embodiment of the present application;
FIG. 2 schematically illustrates a schematic of a conversion of an image to be processed into a gray scale map according to one embodiment of the present application;
FIG. 3 schematically illustrates a contrast schematic before and after gamma correction according to one embodiment of the present application;
FIG. 4 schematically illustrates a flow chart of an image denoising method according to another embodiment of the present application;
fig. 5 schematically illustrates a structural diagram of an image noise reduction device according to an embodiment of the present application;
fig. 6 schematically shows a schematic of a computer system suitable for use in implementing the electronic device of the embodiments of the present application.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the present application. One skilled in the relevant art will recognize, however, that the aspects of the application may be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known aspects have not been shown or described in detail to avoid obscuring aspects of the present application.
Referring to fig. 1, fig. 1 schematically illustrates a flow chart of an image denoising method according to one embodiment of the present application. As shown in fig. 1, the method includes the following steps.
Step S110: a first image and a second image are acquired from a sequence of images.
Step S120: a pixel disparity map is generated that characterizes a disparity between the first image and the second image.
Step S130: the stationary region and the moving region in the pixel difference map are divided.
Step S140: and fusing the first image and the second image based on the static area and the moving area in the pixel difference image to obtain a noise reduction result image.
By implementing the method shown in fig. 1, the stationary region and the motion region can be determined from the pixel difference map that characterizes the difference between the first image and the second image, and the images are fused differently in the two types of region. Multi-frame noise reduction based on motion detection is thus realized, moving-object smear in the noise reduction result map is avoided, and a better noise reduction effect is achieved than in the related art.
Next, the above steps of the present exemplary embodiment will be described in more detail.
In step S110, a first image and a second image may be acquired from an image sequence. The first image and the second image may come from the same image set, which may be acquired based on a shutter operation triggered by a user; the images in the set correspond to different time stamps but are highly similar. After the images in the set are converted to gray maps, one frame may be selected as the reference frame (i.e., the first image) and the other frames as the frames to be fused against it (i.e., the second images). Accordingly, the number of first images is one, while the number of second images may be one or more.
As an alternative embodiment, acquiring the first image and the second image from the image sequence comprises: carrying out gray scale processing on each image in the image sequence to obtain a plurality of gray scale images; a first image is selected from the plurality of gray scale images, and other gray scale images than the first image in the plurality of gray scale images are determined as a second image. Therefore, each image in the image sequence can be processed into a smaller gray scale image, the calculated amount can be reduced in the image noise reduction process, and the image noise reduction efficiency is improved.
Specifically, the image sequence may include at least two images, and the images may share the same image format or use different formats; the embodiment of the present application is not limited in this respect. The image format may include the RAW format, the YUV format, and the like. The RAW format is the image data initially acquired in digital form: each pixel carries only one of the RGB color components, and every 4 pixels contain 2 G pixels, 1 R pixel, and 1 B pixel, i.e., the RGGB layout. The YUV format represents color by luminance and chrominance. In addition, the gray maps obtained by gray-scale processing of the images in the image sequence correspond to those images one to one, i.e., each image to be processed corresponds to one gray map. After gray-scale processing, the size of the gray map is a preset proportion of the size of the image to be processed, for example 1/4 of it. The size of the gray map may depend on the viewing ratio of the camera module or may be set by the user.
As an alternative embodiment, the method further comprises: in response to a shutter triggering operation, determining continuously acquired multi-frame images as the image sequence; alternatively, in response to the shutter triggering operation, determining the preview images acquired before the shutter triggering operation and the images acquired after it as the image sequence. This provides more varied ways of acquiring the image set, allows the embodiments of the present application to be applied in more varied scenes, and enriches their scope of application.
Specifically, the shutter triggering operation may be a click operation, a touch screen operation, a gesture control operation, a voice control operation, or the like; the embodiments of the present application are not limited in this respect. Determining continuously acquired multi-frame images as the image sequence can be understood as continuously capturing multiple frames at the moment the user triggers the shutter. Determining the preview images acquired before the shutter triggering operation together with the images acquired after it as the image sequence can be understood as follows: after the user starts the camera module, preview images are continuously acquired; at the moment the user triggers the shutter, multiple frames are captured, and these frames together with one or more preview images from just before that moment form the image sequence.
As an alternative embodiment, gray-scale processing is performed on each image in the image sequence to obtain a plurality of gray-scale images, including: determining a processing channel corresponding to each image based on the image format; and carrying out gray scale processing on each image according to the first sliding window for carrying out gray scale processing and the processing channel of each image to obtain a plurality of gray scale images. In this way, a processing channel suitable for an image can be determined based on the image format, and further, a gradation processing operation can be adaptively performed based on the processing channel, thereby obtaining a gradation map more efficiently.
Specifically, if the image format is the RAW format, the processing channel G corresponding to each image is determined, and the gray value corresponding to each first sliding window is calculated from the first sliding window used for gray-scale processing and the processing channel G. Within an image, the gray values of the first sliding windows form the gray map corresponding to that image, and a gray map can be obtained in this way for every image. The first sliding window may have any preset size, e.g., 2×2 or 4×4, and the sliding step of the first sliding window equals its side length; for a 2×2 window the step is 2. For example, Gray = (Gr + Gb) / 2.
If the image format is the YUV format, the processing channel Y corresponding to each image is determined, and the gray value corresponding to each first sliding window is calculated from the first sliding window used for gray-scale processing and the processing channel Y. Within an image, the gray values of the first sliding windows form the gray map corresponding to that image, and a gray map can be obtained in this way for every image. For example, Gray = (Y1 + Y2 + Y3 + Y4) / 4.
Further, referring to fig. 2, fig. 2 schematically shows a schematic diagram of converting an image to be processed into a gray map according to an embodiment of the present application. As shown in fig. 2, if the image to be processed 210 is an 8×8 image, gray-scale processing with a 2×2 first sliding window (step 2) yields the 4×4 gray map 220. The gray-scale processing may rely on the expression Gray = (Gr + Gb) / 2, or on the expression Gray = (Y1 + Y2 + Y3 + Y4) / 4.
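As an illustration of the two expressions above, the following NumPy sketch computes a half-size gray map with a 2×2 first sliding window and stride 2, for both a RAW RGGB mosaic and a YUV luma plane. The function names and the orientation of the RGGB layout are our own assumptions, not taken from the patent.

```python
import numpy as np

def gray_map_raw_rggb(raw: np.ndarray) -> np.ndarray:
    """Gray = (Gr + Gb) / 2 over each 2x2 RGGB cell.

    Assumed mosaic layout (repeating): row 0 = R, Gr; row 1 = Gb, B.
    raw: H x W with H, W even. Returns an (H/2) x (W/2) gray map.
    """
    gr = raw[0::2, 1::2].astype(np.float32)  # Gr samples: even rows, odd cols
    gb = raw[1::2, 0::2].astype(np.float32)  # Gb samples: odd rows, even cols
    return (gr + gb) / 2.0

def gray_map_yuv_luma(y: np.ndarray) -> np.ndarray:
    """Gray = (Y1 + Y2 + Y3 + Y4) / 4 over each 2x2 block of the Y plane."""
    y = y.astype(np.float32)
    return (y[0::2, 0::2] + y[0::2, 1::2] + y[1::2, 0::2] + y[1::2, 1::2]) / 4.0

# An 8x8 image to be processed shrinks to a 4x4 gray map, as in fig. 2.
frame = np.arange(64, dtype=np.uint8).reshape(8, 8)
print(gray_map_yuv_luma(frame).shape)  # (4, 4)
```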
As an alternative embodiment, selecting the first image from the plurality of gray maps includes: generating a difference result between adjacent frame gray maps based on a frame order between the plurality of gray maps; a first image is selected from the plurality of gray scale maps based on each differential result. Therefore, the most suitable gray level image can be selected as the first image based on the difference result between the gray level images of the adjacent frames, so that the subsequent noise reduction effect is improved.
In particular, the frame order between the plurality of gray maps coincides with the frame order of the images in the image sequence, which can be understood as the order of the time stamps from early to late/late to early.
For example, if there are 5 gray maps, the frame order gives: gray map 1 - gray map 2 - gray map 3 - gray map 4 - gray map 5. Gray maps 1 and 2 form the adjacent-frame pair 01, gray maps 2 and 3 form the adjacent-frame pair 02, gray maps 3 and 4 form the adjacent-frame pair 03, and gray maps 4 and 5 form the adjacent-frame pair 04. The differential results of the different adjacent-frame pairs characterize the difference within each pair.
As an alternative embodiment, generating a difference result between adjacent frame gray maps based on a frame order between a plurality of gray maps includes: generating the difference value of the para-pixel of the gray map of the adjacent frame based on the frame sequence among the gray maps to obtain the difference value of each pixel position; and summing the difference values of the pixel positions to obtain a difference result of the gray level images of the adjacent frames. Therefore, the difference result of each adjacent frame gray level image can be determined based on the difference value of the alignment pixels of the adjacent frame gray level images, the difference between the adjacent frame gray level images can be known based on the difference result, and the most suitable gray level image can be selected as the first image based on the difference.
Specifically, the difference of para-position pixels is understood as the difference between the pixel values of the adjacent frames at the same pixel position; the number of difference values equals the number of pixel positions in the gray map, i.e., each pixel position corresponds to one difference value. For a pair of adjacent-frame gray maps, summing the difference values of the pixel positions yields the differential result between them.
As an alternative embodiment, summing the differences of the pixel positions to obtain a difference result of the gray scale map of the adjacent frame includes: selecting a target difference value larger than a preset difference value from the difference values of the pixel positions; and summing the target difference values to obtain a difference result of the gray level images of the adjacent frames. Therefore, effective target difference values (namely, target difference values larger than the preset difference values) can be selected for summation, the target difference values smaller than or equal to the preset difference values are prevented from participating in the summation process, and the representation precision of the calculated difference results on the image difference can be improved.
Specifically, target difference values (Gray_i − Gray_{i−1}) greater than the preset difference value Th_Gray are selected from the difference values of the pixel positions. Then, based on the expression Gray_Diff = Σ(Gray_i − Gray_{i−1}), the target difference values greater than the preset difference value Th_Gray are summed to obtain the differential result Gray_Diff of the adjacent-frame gray maps. If there are several adjacent-frame pairs, i.e., pairs 01, 02, 03, and 04, they correspond respectively to the differential results Gray_Diff01, Gray_Diff02, Gray_Diff03, and Gray_Diff04.
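A minimal sketch of this computation, assuming the signed differences are compared against Th_Gray exactly as written above (names are illustrative):

```python
import numpy as np

def differential_result(gray_prev: np.ndarray, gray_cur: np.ndarray,
                        th_gray: float) -> float:
    """Gray_Diff = sum of per-pixel differences that exceed Th_Gray.

    Only target differences (Gray_i - Gray_{i-1}) > Th_Gray contribute,
    so small fluctuations do not pollute the differential result.
    """
    d = gray_cur.astype(np.float32) - gray_prev.astype(np.float32)
    return float(d[d > th_gray].sum())

# For a 5-frame sequence grays[0..4], the four pair results are:
# diffs = [differential_result(a, b, th) for a, b in zip(grays, grays[1:])]
```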
As an alternative embodiment, selecting the first image from the plurality of gray maps based on each differential result includes: sequencing each differential result to obtain a sequencing result; for each differential result in the sequencing results, adjacent differential results are added to obtain a plurality of differential sums; calculating a first weighted sum of a first differential result and a first preset weight in the sequencing results, and calculating a second weighted sum of a last differential result and a second preset weight in the sequencing results; determining a minimum value from the plurality of differential sums, the first weighted sum, and the second weighted sum; a first image is selected from the plurality of gray scale maps based on the minimum value. Therefore, the gray level image with the smallest difference with the front frame and the rear frame is favorable to be selected as the first image, and the first image is used as the reference frame, so that more accurate pixel alignment and better noise reduction effect can be realized.
For example, sorting the differential results in frame order gives: Gray_Diff01 - Gray_Diff02 - Gray_Diff03 - Gray_Diff04. Adding adjacent differential results yields a plurality of differential sums, namely Gray_Diff01 + Gray_Diff02, Gray_Diff02 + Gray_Diff03, and Gray_Diff03 + Gray_Diff04. Furthermore, the first weighted sum a1 × Gray_Diff01 of the first differential result Gray_Diff01 and the first preset weight a1 may be calculated, and the second weighted sum a2 × Gray_Diff04 of the last differential result Gray_Diff04 and the second preset weight a2 may be calculated; the first preset weight a1 (e.g., 2) and the second preset weight a2 (e.g., 2) may be constants. The minimum value can then be determined from the differential sums (Gray_Diff01 + Gray_Diff02, Gray_Diff02 + Gray_Diff03, Gray_Diff03 + Gray_Diff04), the first weighted sum (a1 × Gray_Diff01), and the second weighted sum (a2 × Gray_Diff04).
As an alternative embodiment, selecting the first image from the plurality of gray scale maps based on the minimum value includes: if the minimum value is the first weighted sum, selecting a first frame gray scale image from the plurality of gray scale images as a first image; if the minimum value is the second weighted sum, selecting a last frame gray scale image from the plurality of gray scale images as a first image; if the minimum value is any one of the differential sums, determining adjacent target differential results corresponding to the corresponding differential sums, and taking the same gray scale image corresponding to the adjacent target differential results as the first image. In this way, when the minimum value is any one of the first weighted sum, the second weighted sum and any one of the differential sums, a proper gray scale image can be determined as the first image, so as to realize more accurate pixel alignment and better noise reduction effect.
Based on the above example, if the minimum value is the first weighted sum, gray map 1 may be taken as the first image. If the minimum value is the second weighted sum, gray map 5 may be taken as the first image. If the minimum value is one of the differential sums, for example Gray_Diff01 + Gray_Diff02, the gray maps corresponding to Gray_Diff01, i.e., gray maps 1 and 2, and the gray maps corresponding to Gray_Diff02, i.e., gray maps 2 and 3, can be determined. The gray map shared by Gray_Diff01 and Gray_Diff02 is gray map 2, i.e., both Gray_Diff01 and Gray_Diff02 are calculated based on gray map 2, and thus gray map 2 can be taken as the first image.
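The selection logic can be sketched as follows, taking the "sorting" to keep frame order as in the worked example and using a1 = a2 = 2 as illustrative constants:

```python
import numpy as np

def select_reference_index(diffs: list, a1: float = 2.0, a2: float = 2.0) -> int:
    """Pick the gray map that differs least from its neighbours.

    diffs[k] is the differential result between gray maps k and k+1,
    so len(diffs) == n_frames - 1. Candidate scores per frame:
      frame 0:          a1 * diffs[0]         (first weighted sum)
      frame k (middle): diffs[k-1] + diffs[k] (differential sum)
      frame n-1:        a2 * diffs[-1]        (second weighted sum)
    Returns the index of the frame with the minimum score.
    """
    n = len(diffs) + 1
    scores = [a1 * diffs[0]]
    scores += [diffs[k - 1] + diffs[k] for k in range(1, n - 1)]
    scores.append(a2 * diffs[-1])
    return int(np.argmin(scores))

# With diffs = [Gray_Diff01, ..., Gray_Diff04], a return value of 1
# means gray map 2 is selected as the first image, as in the example.
```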
In step S120, a pixel difference map is generated that characterizes the difference between the first image and the second image.
Specifically, each pixel difference value in the pixel difference map characterizes the difference between the pixel values of the first image and the second image at the same position. For example, if the pixel difference value at pixel position (1, 1) in the pixel difference map is s, then s is the difference between the pixel value at position (1, 1) in the first image and the pixel value at position (1, 1) in the second image.
Further, if there are a plurality of second images, a pixel difference map characterizing the difference between the first image and each second image may be generated, i.e., a plurality of pixel difference maps, each characterizing the pixel difference between the first image and one second image. For example, if there are a second image a and a second image b, a pixel difference map a characterizing the difference between the first image and the second image a may be generated, and a pixel difference map b characterizing the difference between the first image and the second image b may be generated.
As an alternative embodiment, before generating the pixel difference map for characterizing the difference between the first image and the second image, the method further comprises: performing gamma correction on the first image and the second image. In this way, the first image and the second image can be brightened by gamma correction, which improves the noise reduction effect on night-scene images.
Specifically, in a night-scene image the content easily lies in a dark environment; heavy noise and weak texture in the dark parts of the image easily lead to a poor image alignment effect. To address this, the present application performs Gamma correction on the first image and the second image to brighten them; fig. 3 shows a comparison before and after Gamma correction. Gamma correction changes the gamma value to match the middle gray of the monitor; it can compensate for the color display differences of different output devices, so that images show the same effect on different monitors.
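A minimal sketch of the brightening step; the patent does not specify a gamma value, so the 0.5 below is purely illustrative:

```python
import numpy as np

def gamma_correct(img: np.ndarray, gamma: float = 0.5) -> np.ndarray:
    """Apply a power-law (gamma) curve to a uint8 image.

    gamma < 1 lifts the dark tones, which is the desired effect for
    night-scene frames before alignment and differencing.
    """
    x = img.astype(np.float32) / 255.0
    return np.clip(np.power(x, gamma) * 255.0, 0, 255).astype(np.uint8)
```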
As an alternative embodiment, generating the pixel difference map for characterizing the difference between the first image and the second image comprises: performing alignment processing on the second image based on the first image to obtain a second target image; and generating a pixel difference map for characterizing the difference between the first image and the second target image. In this way, alignment of the images can be achieved. In practice, although the second image is captured very close in time to the first image, it may be slightly shaken relative to the first image, so the positions of key objects in the two images may differ; aligning the images first ensures that the subsequent pixel differences reflect object motion rather than camera shake.
Specifically, there is a one-to-one correspondence of pixels between the second target image and the first image.
As an alternative embodiment, performing alignment processing on the second image based on the first image to obtain a second target image, including: dividing the second image into image blocks to obtain an image block set; controlling each image block in the image block set to traverse the search area of the first image so as to determine a matching result corresponding to each image block; and performing image block offset on the second image according to the matching results respectively corresponding to the image blocks to obtain a second target image aligned with the first image. Thus, the images can be aligned in units of image blocks, and the alignment efficiency and the alignment accuracy can be improved.
Specifically, performing image block division on the second image to obtain an image block set, including: and dividing the image blocks of the second image according to a preset dividing size (for example, 1×10) to obtain an image block set. Wherein each image block in the set of image blocks corresponds to the same size. Furthermore, each image block in the image block set can be controlled to traverse the search area of the first image so as to determine the matching result corresponding to each image block; the matching result corresponding to each image block is used for indicating the motion vector of the image block. Further, performing image block offset on the second image according to the matching result corresponding to each image block to obtain a second target image aligned with the first image, including: and respectively moving each image block in the second image according to the matching result respectively corresponding to each image block so that each image block is positioned at a new position, and each image block positioned at the new position is used for forming a second target image.
As an optional embodiment, controlling each image block in the image block set to traverse the search area of the first image to determine a matching result corresponding to each image block, includes: each image block in the image block set is controlled to traverse the search area of the first image according to a preset moving step length, and a reference matching result corresponding to each movement is obtained; and selecting a minimum matching result from the reference matching results, and determining the minimum matching result as the matching result of the corresponding image block so as to obtain the matching result corresponding to each image block. In this way, an optimal matching result can be determined for each image block based on the way that the image block traverses the search area, so that the first image and the second image are aligned according to the matching result, and the noise reduction effect is improved.
Specifically, controlling each image block in the image block set to traverse the search area of the first image according to a preset moving step (e.g., 1), including: controlling each image block in the image block set to move for multiple times in the search area of the first image according to a preset moving step length until the search area of the first image is traversed; the search area of the first image may be set manually, or may be automatically selected according to the image recognition result, where the size (e.g., 15×15, 8×8, etc.) of the search area of the first image is larger than the size of the image block.
Furthermore, the method further comprises: when a search area of a first image is traversed, if a plurality of reference matching results corresponding to the same pixel position are detected, an average value of the plurality of reference matching results is calculated as a final reference matching result of the pixel position.
Furthermore, the reference matching result corresponding to each movement may be expressed as a sum of differences of pixel positions at the current position of the image block in the search area. The minimum matching result selected from the reference matching results can be understood as that when the image block is at the position of the search area, the difference between the minimum matching result and the corresponding image block in the first image is minimum, and the minimum matching result is determined as the matching result of the corresponding image block, so that the matching result corresponding to each image block is obtained.
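The traversal and the minimum matching result can be sketched as below, reading the "sum of differences of pixel positions" as a sum of absolute differences (an assumption; names are illustrative):

```python
import numpy as np

def match_block(block: np.ndarray, search: np.ndarray, step: int = 1):
    """Slide `block` over `search` and return the offset with the
    smallest sum of absolute per-pixel differences, i.e., the minimum
    matching result. Ties go to the first position visited.
    """
    bh, bw = block.shape
    sh, sw = search.shape
    best, best_cost = (0, 0), np.inf
    for dy in range(0, sh - bh + 1, step):          # traverse the search area
        for dx in range(0, sw - bw + 1, step):
            cand = search[dy:dy + bh, dx:dx + bw].astype(np.float32)
            cost = np.abs(cand - block.astype(np.float32)).sum()
            if cost < best_cost:                    # keep the minimum result
                best_cost, best = cost, (dy, dx)
    return best, best_cost
```

The returned offset plays the role of the image block's motion vector; applying it to every block of the second image yields the second target image.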
As an alternative embodiment, generating the pixel difference map for characterizing the difference between the first image and the second target image comprises: calculating the inter-frame pixel difference value between the second target image and the first image, the maximum pixel difference value in a preset area, and the minimum pixel difference value in the preset area; summing, for each pixel position in the second target image, the inter-frame pixel difference value, the maximum pixel difference value in the preset area, and the minimum pixel difference value in the preset area to obtain the comprehensive difference value of that pixel position; and generating the pixel difference map for characterizing the difference between the first image and the second target image from the comprehensive difference values of the pixel positions. In this way, the comprehensive difference value of each pixel position can be calculated from the inter-frame pixel difference value, the maximum pixel difference value in the preset area, and the minimum pixel difference value in the preset area; the pixel difference map is then obtained from these comprehensive difference values, and image noise reduction based on the pixel difference map can apply a tailored processing mode to regions with different differences, improving the noise reduction effect.
Specifically, calculating the inter-frame pixel difference value between the second target image and the first image, the maximum pixel difference value in a preset area, and the minimum pixel difference value in the preset area includes the following. The inter-frame pixel difference value grayDiff between the second target image and the first image is calculated based on the expression grayDiff = abs(Gray1 − Gray2), where grayDiff represents the inter-frame pixel difference between the pixel Gray1 in the second target image and the pixel Gray2 at the same position in the first image, and abs denotes the absolute value. The maximum pixel difference value maxDiff in a preset area (e.g., 2×2) between the second target image and the first image is calculated based on the expression maxDiff = abs(Max1 − Max2), one maxDiff per pixel position; for example, for pixel position (1, 1), Max1 is the maximum pixel value in the 2×2 preset area containing that position in the second target image, Max2 is the maximum pixel value in the 2×2 preset area containing that position in the first image, and the maxDiff for position (1, 1) follows from the expression. The minimum pixel difference value minDiff in the preset area (e.g., 2×2) is calculated analogously based on the expression minDiff = abs(Min1 − Min2), one minDiff per pixel position, with Min1 and Min2 the minimum pixel values in the corresponding 2×2 preset areas of the second target image and the first image.
As can be seen from the above steps, for the second target image, each pixel location corresponds to a set of [ grayDiff, maxDiff, minDiff ].
Based on this, summing the inter-frame pixel difference value, the maximum pixel difference value in the preset area, and the minimum pixel difference value in the preset area to obtain the comprehensive difference value of each pixel position includes: for each pixel position in the second target image, calculating the comprehensive difference value DiffMap corresponding to that position based on the expression DiffMap = grayDiff + maxDiff + minDiff.
Based on this, generating the pixel difference map for characterizing the difference between the first image and the second target image from the comprehensive difference value of each pixel position comprises: combining the comprehensive difference values of the pixel positions into a pixel difference map according to the arrangement of the pixel positions. The pixel difference map can be expressed as a matrix in which each element is the comprehensive difference value of one pixel position.
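A sketch of the comprehensive difference computation; the exact placement of the k × k preset area around each pixel is not specified, so the window anchoring below (and its clipping at the borders) is an assumption:

```python
import numpy as np

def diff_map(ref: np.ndarray, aligned: np.ndarray, k: int = 2) -> np.ndarray:
    """DiffMap = grayDiff + maxDiff + minDiff at every pixel position.

    grayDiff = |Gray1 - Gray2| at the pixel itself; maxDiff / minDiff
    compare the max / min over a k x k preset area containing the pixel
    in the second target image (aligned) and the first image (ref).
    """
    ref = ref.astype(np.float32)
    aligned = aligned.astype(np.float32)
    h, w = ref.shape
    out = np.empty((h, w), dtype=np.float32)
    for y in range(h):
        for x in range(w):
            r = ref[y:y + k, x:x + k]        # clipped k x k neighbourhood
            a = aligned[y:y + k, x:x + k]
            gray_diff = abs(aligned[y, x] - ref[y, x])
            max_diff = abs(a.max() - r.max())
            min_diff = abs(a.min() - r.min())
            out[y, x] = gray_diff + max_diff + min_diff
    return out
```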
As an alternative embodiment, further comprising: traversing the pixel difference map according to the second sliding windows to determine the number of effective pixels which are larger than a preset pixel value in each second sliding window; and carrying out noise elimination processing on the pixel difference map according to the number of the effective pixels in each second sliding window. Thus, noise in the pixel difference map can be eliminated, and adverse effects of discrete integrated difference values on signal-to-noise ratio are reduced.
Specifically, the size of the second sliding window may be the same as or different from that of the first sliding window; the embodiment of the present application is not limited in this respect. The preset number may be determined from the size of the second sliding window (for example, preset number = 1/4 × the total number of pixels in the second sliding window, 1/6 × that total, 1/8 × that total, etc.), and pixels larger than the preset pixel value are determined to be effective pixels. If the number of effective pixels in a second sliding window is greater than or equal to the preset number, the pixels in the area delimited by that window are considered effective and no noise cancellation is performed on them; if the number of effective pixels is smaller than the preset number, the area delimited by that window is considered to contain discrete noise and noise cancellation processing should be performed on it.
As an alternative embodiment, performing noise cancellation processing on the pixel difference map according to the number of effective pixels in each second sliding window includes: if the number of effective pixels in the second sliding window is smaller than the preset number, setting the pixel value in the corresponding second sliding window as a specific value. Thus, noise interference can be effectively eliminated, and the purity degree of the image is improved.
Specifically, the specific value may be any form of a constant, a character, a symbol, etc., which is not limited by the embodiment of the present application. The latest specific value setting information may also be acquired before setting the pixel value within the corresponding second sliding window to a specific value to determine a specific value (e.g., 0) to be set.
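A sketch of this elimination step, assuming non-overlapping second sliding windows, a preset number of 1/4 of the window's pixels (one of the examples above), and 0 as the specific value:

```python
import numpy as np

def suppress_isolated_diffs(dmap: np.ndarray, win: int = 4,
                            th_pixel: float = 0.0,
                            value: float = 0.0) -> np.ndarray:
    """Set windows with too few effective pixels to a specific value.

    A pixel is 'effective' if it exceeds th_pixel. If fewer than a
    quarter of a window's pixels are effective, the window is treated
    as isolated noise and overwritten with `value`.
    """
    out = dmap.copy()
    h, w = out.shape
    for y in range(0, h, win):
        for x in range(0, w, win):
            tile = out[y:y + win, x:x + win]     # view into `out`
            preset = tile.size // 4              # assumed 1/4 ratio
            if (tile > th_pixel).sum() < preset:
                tile[:] = value                  # noise cancellation
    return out
```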
As an alternative embodiment, further comprising: and carrying out Gaussian filtering processing on the pixel difference map. Thus, sharp edges in the image can be smoothed, and the image effect is improved.
Specifically, performing Gaussian filtering on the pixel difference map includes: performing Gaussian filtering on the pixel difference map based on a 3×3 filter window. The 3×3 size may be changed to any size suitable for the situation; the embodiments of the present application are not limited in this respect.
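For reference, a 3×3 Gaussian filter can be implemented separably; the (1, 2, 1)/4 kernel and edge padding below are the conventional choices, not ones the patent specifies:

```python
import numpy as np

def gaussian_3x3(img: np.ndarray) -> np.ndarray:
    """Smooth with the separable 3x3 Gaussian kernel (outer product of
    (1, 2, 1)/4 with itself), using edge replication at the borders."""
    k = np.array([1.0, 2.0, 1.0]) / 4.0
    h, w = img.shape
    p = np.pad(img.astype(np.float32), 1, mode="edge")
    tmp = sum(k[i] * p[:, i:i + w] for i in range(3))     # horizontal pass
    return sum(k[i] * tmp[i:i + h, :] for i in range(3))  # vertical pass
```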
In step S130, the stationary region and the moving region in the pixel difference map are divided.
Specifically, the pixel values in the stationary region differ from those in the motion region. In the pixel difference map, a larger DiffMap value indicates a larger difference from the reference frame, and a smaller DiffMap value indicates a smaller difference from the reference frame.
As an alternative embodiment, dividing the stationary region and the motion region in the pixel difference map includes: determining each pixel difference value in the pixel difference map; determining the region corresponding to pixel difference values in a first numerical interval as the stationary region; and determining the region corresponding to pixel difference values in a second numerical interval as the motion region, wherein the second numerical interval does not overlap the first numerical interval. In this way, the stationary region and the motion region can be distinguished, and pixels in different regions can be processed in a targeted manner, improving the image noise reduction effect.
Specifically, the first value interval (e.g., 0-10) and the second value interval (e.g., 11-100) are used to define different data value ranges, and there is no intersection between the first value interval and the second value interval.
As an alternative embodiment, dividing the stationary region and the motion region in the pixel difference map includes: performing binarization processing on the pixel difference map; determining the region with pixel difference value 1 as the stationary region; and determining the region with pixel difference value 0 as the motion region. In this way, the stationary region and the motion region can be distinguished, and pixels in different regions can be processed in a targeted manner, improving the image noise reduction effect. In addition, binarizing the pixel difference map simplifies it, improving the efficiency of the noise reduction processing.
Specifically, after the pixel difference map is binarized, each pixel position in it takes the value 0 or 1: a value of 1 indicates that the difference at this pixel position from the same pixel position of the reference frame is larger, and a value of 0 indicates that it is smaller.
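A sketch of the division into masks. The embodiments above label 1 as stationary while small difference values indicate little motion, so the sketch marks values at or below a threshold as 1 (stationary); this reconciliation of the two embodiments is our reading, not an explicit statement of the patent:

```python
import numpy as np

def split_regions(dmap: np.ndarray, threshold: float):
    """Binarize the pixel difference map into stationary/motion masks."""
    binary = (dmap <= threshold).astype(np.uint8)  # 1 = small difference
    stationary_mask = binary == 1                  # stationary region
    motion_mask = binary == 0                      # motion region
    return stationary_mask, motion_mask
```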
In step S140, the first image and the second image are fused based on the still region and the moving region in the pixel difference map, and a noise reduction result map is obtained.
Specifically, the size of the noise reduction result map is consistent with the size of the images in the image sequence, and the images in the image sequence all share the same size. After the noise reduction result map is obtained, it may also be displayed to the user as the image noise reduction result.
As an alternative embodiment, the method further comprises: performing interpolation enlargement on the pixel difference map, the first image, and the second image according to a specific size (e.g., 512×512) corresponding to the image to be processed, so that after interpolation enlargement the pixel difference map, the first image, and the second image all correspond to that specific size. In this way, the finally output noise reduction result map matches the size of the image to be processed, which avoids the jarring effect a size mismatch would cause, keeps the noise reduction processing imperceptible, lets the user directly obtain a denoised image of normal size, and improves the user experience.
Specifically, performing interpolation enlargement on the pixel difference map, the first image, and the second image includes: performing the interpolation enlargement with reference to the rule that was used to reduce the image size during gray-scale processing.
As an alternative embodiment, fusing the first image and the second image based on the still region and the moving region in the pixel difference map to obtain a noise reduction result map includes: performing static region para-pixel fusion on the first image and the second image based on the static region in the pixel difference map to obtain a first fusion result; performing motion region frequency domain fusion on the first image and the second image based on the motion region in the pixel difference map to obtain a second fusion result; and generating a noise reduction result graph according to the first fusion result and the second fusion result. Thus, a final noise reduction result diagram can be generated based on different processing modes aiming at different areas so as to reduce noise interference in the noise reduction result diagram.
Specifically, performing stationary-region para-position pixel fusion on the first image and the second image based on the stationary region in the pixel difference map to obtain the first fusion result includes: for each pixel position where DiffMap(x, y) = 0, processing that position separately; that is, in units of pixel positions, the pixel values at the same position are acquired from each image and fused, obtaining the fusion result OutPut[i] corresponding to that pixel position. When the fusion result OutPut[i] has been obtained for every pixel position in the stationary region, each OutPut[i] is determined as the first fusion result. Here i denotes the i-th frame image (i.e., the first image or any second image).
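A sketch of the stationary-region fusion; the patent's exact fusion expression is given only as a figure in the original, so plain averaging across frames is used here as a stand-in operator:

```python
import numpy as np

def fuse_stationary(frames: list, stationary_mask: np.ndarray) -> np.ndarray:
    """Para-position pixel fusion over the stationary region.

    frames: the first image followed by the aligned second images.
    At every stationary pixel the values of all frames are averaged,
    which suppresses zero-mean noise; elsewhere the first image is kept.
    """
    stack = np.stack([f.astype(np.float32) for f in frames], axis=0)
    fused = frames[0].astype(np.float32).copy()
    fused[stationary_mask] = stack.mean(axis=0)[stationary_mask]
    return fused
```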
In addition, generating a noise reduction result graph according to the first fusion result and the second fusion result, including: combining the first fusion results and the second fusion results to obtain a combined result which is determined to be a noise reduction result diagram; the first fusion results are in one-to-one correspondence with the pixel positions of the static area, and the second fusion results are in one-to-one correspondence with the pixel positions of the moving area.
As an optional embodiment, performing motion region frequency domain fusion on the first image and the second image based on the motion region in the pixel difference map to obtain a second fusion result, including: generating fusion weights corresponding to the second image according to the motion areas in the pixel difference map; performing motion region Fourier transform on the first image and the second image based on the motion region in the pixel difference map to obtain frequency domain images respectively corresponding to the first image and the second image; and generating a second fusion result according to the fusion weight and the frequency domain images respectively corresponding to the first image and the second image. Therefore, the pixels of the motion area can be fused in the frequency domain based on Fourier transformation, and further a fusion result which is more accurate for the motion area is obtained.
Specifically, generating the fusion weight corresponding to the second image from the motion region in the pixel difference map includes: calculating the fusion weight Weight_i corresponding to each frame i based on a preset expression, in which 3 is a preset coefficient that can be set by the user.
In addition, the first image and the second image may be subjected to motion region fourier transform based on the motion region in the pixel difference map, so as to obtain a frequency domain image for representing the first image and a frequency domain image for representing each second image, where the frequency domain image may include high-frequency subband information and low-frequency subband information, and may be understood as motion high-frequency information and motion low-frequency information, respectively.
As an optional embodiment, generating the second fusion result according to the fusion weight and the frequency domain images respectively corresponding to the first image and the second image includes: determining first motion high-frequency information and first motion low-frequency information based on a frequency domain image of the first image, and determining second motion high-frequency information and second motion low-frequency information based on a frequency domain image of the second image; the first motion high-frequency information and the second motion high-frequency information are fused based on the fusion weight to obtain comprehensive motion high-frequency information, and the first motion low-frequency information and the second motion low-frequency information are fused based on the fusion weight to obtain comprehensive motion low-frequency information; and carrying out inverse Fourier transform on the comprehensive motion high-frequency information and the comprehensive motion low-frequency information to obtain a second fusion result. In this way, the comprehensive motion high-frequency information and the comprehensive motion low-frequency information can be obtained in a mode of respectively fusing the low frequency and the high frequency, and the inverse Fourier transform can be realized through the comprehensive motion high-frequency information and the comprehensive motion low-frequency information, so that a fusion result of a required time domain is obtained.
Specifically, determining the first motion high-frequency information and the first motion low-frequency information from the frequency-domain image of the first image, and the second motion high-frequency information and the second motion low-frequency information from the frequency-domain image of the second image, includes: decomposing the frequency-domain image of the first image into first motion high-frequency information and first motion low-frequency information, each of which can be represented as an image. Similarly, the frequency-domain image of each second image can be decomposed into second motion high-frequency information and second motion low-frequency information, which can likewise each be represented as an image.
In addition, fusing the first motion high-frequency information and the second motion high-frequency information based on the fusion weights to obtain the comprehensive motion high-frequency information, and fusing the first motion low-frequency information and the second motion low-frequency information based on the fusion weights to obtain the comprehensive motion low-frequency information, includes: based on the expression F(i) = Σ Weight_i × T_i, fusing the first motion high-frequency information and the second motion high-frequency information according to the fusion weights to obtain the comprehensive motion high-frequency information, and fusing the first motion low-frequency information and the second motion low-frequency information according to the fusion weights to obtain the comprehensive motion low-frequency information. Here T_i represents frequency-domain information, e.g., the first motion high-frequency information, the second motion high-frequency information, the first motion low-frequency information, or the second motion low-frequency information.
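A sketch of the motion-region frequency-domain fusion. The patent does not specify how the spectrum is split into high- and low-frequency sub-bands, so the radial mask below is an assumption, and the weights are expected to sum to 1:

```python
import numpy as np

def fuse_motion_frequency(frames: list, weights: list) -> np.ndarray:
    """Weighted sub-band fusion F = sum_i Weight_i * T_i, then inverse FFT.

    Each frame goes to the frequency domain with a 2-D FFT; the spectrum
    is split into low/high sub-bands with a radial mask; the sub-bands
    are combined with the fusion weights; an inverse FFT returns the
    fused spatial-domain result for the motion region.
    """
    h, w = frames[0].shape
    yy, xx = np.ogrid[:h, :w]
    cy, cx = h // 2, w // 2
    low_mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= (min(h, w) // 8) ** 2

    low_sum = np.zeros((h, w), dtype=np.complex128)
    high_sum = np.zeros((h, w), dtype=np.complex128)
    for f, wgt in zip(frames, weights):
        spec = np.fft.fftshift(np.fft.fft2(f.astype(np.float32)))
        low_sum += wgt * spec * low_mask          # low-frequency sub-band
        high_sum += wgt * spec * (~low_mask)      # high-frequency sub-band

    fused_spec = np.fft.ifftshift(low_sum + high_sum)
    return np.real(np.fft.ifft2(fused_spec))
```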
Referring to fig. 4, fig. 4 schematically illustrates a flow chart of an image denoising method according to another embodiment of the present application. As shown in fig. 4, the image denoising method includes: step S400 to step S438.
Step S400: in response to a shutter triggering operation, determining continuously acquired multi-frame images as an image sequence; alternatively, in response to the shutter trigger operation, the preview image acquired before the shutter trigger operation and the image acquired after the shutter trigger operation are determined as the image sequence.
Step S402: and determining processing channels corresponding to the images based on the image formats, and carrying out gray scale processing on the images according to the first sliding window for carrying out gray scale processing and the processing channels of the images to obtain a plurality of gray scale images.
Step S404: generating the difference value of the para-pixel of the gray level image of the adjacent frame based on the frame sequence among the gray level images to obtain the difference value of each pixel position, selecting a target difference value larger than a preset difference value from the difference values of each pixel position, and further summing the target difference values to obtain the difference result of the gray level image of the adjacent frame.
Step S406: sequencing the differential results to obtain a sequencing result, adding adjacent differential results to obtain a plurality of differential sums aiming at other differential results except the first differential result and the last differential result in the sequencing result, calculating a first weighted sum of the first differential result and a first preset weight, and calculating a second weighted sum of the last differential result and a second preset weight.
Step S408: determining a minimum value from the plurality of differential sums, the first weighted sum and the second weighted sum, and selecting a first frame gray scale image from the plurality of gray scale images as a first image if the minimum value is the first weighted sum; if the minimum value is the second weighted sum, selecting a last frame gray scale image from the plurality of gray scale images as a first image; if the minimum value is any one of the differential sums, determining adjacent target differential results corresponding to the corresponding differential sums, and taking the same gray scale image corresponding to the adjacent target differential results as the first image.
Step S410: the first image and the second image are gamma corrected.
Step S412: dividing the second image into image blocks to obtain an image block set; controlling each image block in the image block set to traverse the search area of the first image with a preset moving step to obtain a reference matching result for each movement; selecting the minimum matching result from the reference matching results and determining it as the matching result of the corresponding image block, so as to obtain the matching result of each image block; and offsetting the image blocks of the second image according to their respective matching results to obtain a second target image pixel-aligned with the first image.
Step S414: calculating the inter-frame pixel difference value between the second target image and the first image, the maximum pixel difference value in the preset area, and the minimum pixel difference value in the preset area; summing, for each pixel position in the second target image, the inter-frame pixel difference value, the maximum pixel difference value in the preset area, and the minimum pixel difference value in the preset area to obtain the comprehensive difference value of each pixel position; and generating a pixel difference map for characterizing the difference between the first image and the second target image from the comprehensive difference value of each pixel position.
Step S416: traversing the pixel difference map with second sliding windows to determine, for each second sliding window, the number of effective pixels larger than a preset pixel value; if the number of effective pixels in a second sliding window is smaller than a preset number, setting the pixel values within that window to a specific value.
Step S418: performing Gaussian filtering on the pixel difference map. Then, either step S420 or step S422 is performed.
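A sketch covering steps S416 and S418 together; the window size, both thresholds, the cleared value, and the use of non-overlapping tiles are all simplifying assumptions:

    import numpy as np
    import cv2

    def clean_difference_map(diff_map, win=8, pix_thresh=10.0,
                             count_thresh=12, cleared=0.0, sigma=1.5):
        """Zero out second-sliding-window tiles with too few effective pixels,
        then Gaussian-filter the map to smooth sharp edges."""
        out = diff_map.astype(np.float32).copy()
        h, w = out.shape
        for y in range(0, h, win):
            for x in range(0, w, win):
                tile = out[y:y + win, x:x + win]
                if (tile > pix_thresh).sum() < count_thresh:
                    tile[:] = cleared            # treat the whole window as noise
        return cv2.GaussianBlur(out, (5, 5), sigma)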
Step S420: determining each pixel difference value in the pixel difference map; determining a region corresponding to the pixel difference value in the first numerical value interval as a static region; and determining the region corresponding to the pixel difference value in the second numerical range as a motion region. Further, step S424 is performed.
Step S422: performing binarization processing on the pixel difference map; determining a region with a pixel difference value of 1 as a stationary region; the region where the pixel difference value is 0 is determined as a motion region. Further, step S424 is performed.
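Dividing stationary and motion regions by thresholding or binarization might be sketched as follows; the convention that 1 marks stationary pixels follows steps S420/S422, while the threshold value itself is an assumption:

    import numpy as np

    def divide_regions(diff_map, thresh=20.0):
        """Binarize the difference map: 1 = stationary region, 0 = motion
        region, following the convention of step S422."""
        stationary = (diff_map < thresh).astype(np.uint8)   # small difference -> static
        return stationary, 1 - stationary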
Step S424: performing interpolation amplification on the pixel difference map, the first image and the second image according to a specific size corresponding to the image to be processed, so that after interpolation amplification all three correspond to that specific size.
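Interpolation amplification back to the full processing size could be as simple as the following; bilinear interpolation is an assumed choice:

    import cv2

    def upscale_to(img, target_hw):
        """Interpolation amplification of an image or difference map to (h, w)."""
        h, w = target_hw
        return cv2.resize(img, (w, h), interpolation=cv2.INTER_LINEAR)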
Step S426: performing co-located pixel fusion over the static region of the first image and the second image, based on the static region in the pixel difference map, to obtain a first fusion result.
Step S428: fusion weights corresponding to the second image are generated from the motion regions in the pixel disparity map.
Step S430: and carrying out motion region Fourier transform on the first image and the second image based on the motion region in the pixel difference map to obtain frequency domain images respectively corresponding to the first image and the second image.
Step S432: the first motion high frequency information and the first motion low frequency information are determined based on the frequency domain image of the first image, and the second motion high frequency information and the second motion low frequency information are determined based on the frequency domain image of the second image.
Step S434: and fusing the first motion high-frequency information and the second motion high-frequency information based on the fusion weight to obtain comprehensive motion high-frequency information, and fusing the first motion low-frequency information and the second motion low-frequency information based on the fusion weight to obtain comprehensive motion low-frequency information.
Step S436: and carrying out inverse Fourier transform on the comprehensive motion high-frequency information and the comprehensive motion low-frequency information to obtain a second fusion result.
Step S438: and generating a noise reduction result graph according to the first fusion result and the second fusion result.
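A hedged end-to-end sketch of the fusion in steps S426 to S438, for a single second image; the radial low/high frequency split, the scalar band weights and the straight averaging of the static region are all assumptions for illustration, not details fixed by the disclosure:

    import numpy as np

    def fuse_motion(first, second, motion_mask, w_low=0.5, w_high=0.3, cutoff=0.1):
        """Fourier-transform the motion regions, fuse the low- and high-frequency
        bands with separate weights, then inverse-transform (second fusion result)."""
        F1 = np.fft.fft2(first * motion_mask)
        F2 = np.fft.fft2(second * motion_mask)
        fy = np.fft.fftfreq(first.shape[0])[:, None]
        fx = np.fft.fftfreq(first.shape[1])[None, :]
        low = np.sqrt(fy ** 2 + fx ** 2) < cutoff            # low-frequency band mask
        fused = np.where(low,
                         (1 - w_low) * F1 + w_low * F2,      # integrated low frequency
                         (1 - w_high) * F1 + w_high * F2)    # integrated high frequency
        return np.real(np.fft.ifft2(fused))

    def noise_reduction_result(first, second, stationary_mask, motion_mask):
        """Combine the static-region fusion and the motion-region fusion."""
        f1 = first.astype(np.float32)
        f2 = second.astype(np.float32)
        static_fused = 0.5 * (f1 + f2)                       # first fusion result
        motion_fused = fuse_motion(f1, f2, motion_mask.astype(np.float32))
        return stationary_mask * static_fused + motion_mask * motion_fused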
It should be noted that steps S400 to S438 correspond to the steps and embodiments shown in fig. 1; for their specific implementation, please refer to the steps and embodiments shown in fig. 1, which are not repeated here.
Therefore, by implementing the method shown in fig. 4, the stationary region and the motion region can be determined from the pixel difference map characterizing the difference between the first image and the second image, and the images can be fused according to the different region types, realizing multi-frame noise reduction based on motion detection and avoiding moving-object smear in the noise reduction result map.
Referring to fig. 5, fig. 5 schematically shows a block diagram of an image noise reduction device according to an embodiment of the present application. As shown in fig. 5, the image noise reduction apparatus 500 may include the following units.
An image acquisition unit 501 for acquiring a first image and a second image from an image sequence;
a difference determination unit 502 for generating a pixel difference map for characterizing differences between the first image and the second image;
a region dividing unit 503 for dividing a stationary region and a moving region in the pixel difference map;
And a fusion unit 504, configured to fuse the first image and the second image based on the still region and the moving region in the pixel difference map, so as to obtain a noise reduction result map.
Therefore, by implementing the device shown in fig. 5, the stationary region and the motion region can be determined from the pixel difference map characterizing the difference between the first image and the second image, and the images can be fused according to the different region types, realizing multi-frame noise reduction based on motion detection and avoiding moving-object smear in the noise reduction result map.
As an alternative embodiment, the image acquisition unit 501 acquires a first image and a second image from an image sequence, including:
carrying out gray scale processing on each image in the image sequence to obtain a plurality of gray scale images;
a first image is selected from the plurality of gray scale images, and other gray scale images than the first image in the plurality of gray scale images are determined as a second image.
Therefore, by implementing this alternative embodiment, each image in the image sequence can be processed into a smaller grayscale map, reducing the amount of computation during image noise reduction and improving its efficiency.
As an alternative embodiment, further comprising:
A sequence acquisition unit configured to determine a plurality of frame images continuously acquired as an image sequence in response to a shutter trigger operation;
or,
and a sequence acquisition unit configured to determine, as an image sequence, a preview image acquired before the shutter trigger operation and an image acquired after the shutter trigger operation in response to the shutter trigger operation.
It can be seen that, by implementing this alternative embodiment, more varied ways of acquiring the image sequence are provided, so that the embodiments of the present application can be applied to a wider variety of scenarios, broadening their range of application.
As an alternative embodiment, the image obtaining unit 501 performs gray-scale processing on each image in the image sequence to obtain a plurality of gray-scale images, including:
determining a processing channel corresponding to each image based on the image format;
and performing grayscale processing on each image according to a first sliding window for grayscale processing and the processing channel of that image, to obtain a plurality of grayscale maps.
It can be seen that by implementing this alternative embodiment, a processing channel suitable for an image can be determined based on the image format, and thus a gray scale processing operation can be adaptively performed based on the processing channel, thereby obtaining a gray scale map more efficiently.
As an alternative embodiment, the image obtaining unit 501 selects the first image from the plurality of gray maps, including:
generating a difference result between adjacent frame gray maps based on a frame order between the plurality of gray maps;
a first image is selected from the plurality of gray scale maps based on each differential result.
It can be seen that, by implementing this alternative embodiment, the most suitable gray scale map may be selected as the first image based on the difference result between the gray scale maps of adjacent frames, so as to promote the subsequent noise reduction effect.
As an alternative embodiment, the image acquisition unit 501 generates a difference result between adjacent frame gray maps based on a frame order between a plurality of gray maps, including:
generating the difference between co-located pixels of adjacent-frame grayscale maps based on the frame order among the plurality of grayscale maps, to obtain a difference value at each pixel position;
and summing the difference values of the pixel positions to obtain a difference result of the adjacent-frame grayscale maps.
It can be seen that, by implementing this alternative embodiment, the difference result of each pair of adjacent-frame grayscale maps can be determined from the differences of their co-located pixels; the difference result reveals how much adjacent grayscale maps differ, which helps select the most suitable grayscale map as the first image.
As an alternative embodiment, the image obtaining unit 501 sums the differences of the pixel positions to obtain a difference result of the gray scale map of the adjacent frame, including:
selecting a target difference value larger than a preset difference value from the difference values of the pixel positions;
and summing the target difference values to obtain a difference result of the gray level images of the adjacent frames.
It can be seen that, by implementing this alternative embodiment, only the effective target difference values (i.e., those larger than the preset difference value) are summed; differences at or below the preset value are excluded from the summation, improving how accurately the computed difference result characterizes the image difference.
As an alternative embodiment, the image obtaining unit 501 selects a first image from a plurality of gray maps based on each difference result, including:
arranging the differential results in order to obtain an ordered result;
for the differential results in the ordered result, adding adjacent differential results to obtain a plurality of differential sums;
calculating a first weighted sum of the first differential result in the ordered result and a first preset weight, and calculating a second weighted sum of the last differential result in the ordered result and a second preset weight;
Determining a minimum value from the plurality of differential sums, the first weighted sum, and the second weighted sum;
a first image is selected from the plurality of gray scale maps based on the minimum value.
Therefore, implementing this alternative embodiment helps select, as the first image, the grayscale map that differs least from its neighbouring frames; using this first image as the reference frame enables more accurate pixel alignment and a better noise reduction effect.
As an alternative embodiment, the image obtaining unit 501 selects the first image from the plurality of gray maps based on the minimum value, including:
if the minimum value is the first weighted sum, selecting a first frame gray scale image from the plurality of gray scale images as a first image;
if the minimum value is the second weighted sum, selecting a last frame gray scale image from the plurality of gray scale images as a first image;
if the minimum value is any one of the differential sums, determining adjacent target differential results corresponding to the corresponding differential sums, and taking the same gray scale image corresponding to the adjacent target differential results as the first image.
It can be seen that, whichever of the first weighted sum, the second weighted sum, or the differential sums turns out to be the minimum, implementing this alternative embodiment determines a suitable grayscale map as the first image, enabling more accurate pixel alignment and a better noise reduction effect.
As an alternative embodiment, further comprising:
a brightening unit for performing gamma correction on the first image and the second image before the difference determining unit 502 generates a pixel difference map for characterizing the difference between the first image and the second image.
It can be seen that implementing this alternative embodiment, the first image and the second image may be brightened by gamma correction, which is beneficial to improving the noise reduction effect on the night scene image.
As an alternative embodiment, the difference determining unit 502 generates a pixel difference map for characterizing the difference between the first image and the second image, comprising:
performing alignment processing on the second image based on the first image to obtain a second target image;
a pixel disparity map is generated that characterizes a disparity between the first image and the second target image.
It can be seen that, by implementing this alternative embodiment, image alignment can be achieved. In practice, although the second image is captured very close in time to the first image, slight shake may shift the positions of key objects between the two images; aligning the second image to the first image makes those positions consistent, which helps the subsequent fusion produce a better noise reduction result.
As an alternative embodiment, the difference determining unit 502 performs an alignment process on the second image based on the first image to obtain a second target image, including:
dividing the second image into image blocks to obtain an image block set;
controlling each image block in the image block set to traverse the search area of the first image so as to determine a matching result corresponding to each image block;
and performing image block offset on the second image according to the matching results respectively corresponding to the image blocks to obtain a second target image aligned with the first image.
It can be seen that implementing this alternative embodiment, image alignment can be performed in units of image blocks, and alignment efficiency and alignment accuracy can be improved.
As an alternative embodiment, the difference determining unit 502 controls each image block in the image block set to traverse the search area of the first image to determine a matching result corresponding to each image block, including:
each image block in the image block set is controlled to traverse the search area of the first image according to a preset moving step length, and a reference matching result corresponding to each movement is obtained;
and selecting a minimum matching result from the reference matching results, and determining the minimum matching result as the matching result of the corresponding image block so as to obtain the matching result corresponding to each image block.
It can be seen that implementing this alternative embodiment, an optimal matching result may be determined for each image block based on the manner in which the image block traverses the search area, so as to align the first image and the second image according to the matching result, so as to enhance the noise reduction effect.
As an alternative embodiment, the difference determining unit 502 generates a pixel difference map for characterizing the difference between the first image and the second target image, comprising:
calculating an inter-frame pixel difference value between the second target image and the first image, a maximum pixel difference value in a preset area and a minimum pixel difference value in the preset area;
summing the inter-frame pixel difference value of each pixel position in the second target image, the maximum pixel difference value in the preset area and the minimum pixel difference value in the preset area to obtain the comprehensive difference value of each pixel position;
a pixel disparity map is generated that characterizes the disparity between the first image and the second target image based on the integrated disparity values for each pixel location.
Therefore, by implementing this alternative embodiment, an integrated difference value can be computed for each pixel position from the inter-frame pixel difference, the maximum pixel difference within the preset area, and the minimum pixel difference within the preset area; a pixel difference map is then obtained from these integrated values, and performing noise reduction on its basis allows different difference regions to be handled in a targeted manner, improving the noise reduction effect.
As an alternative embodiment, further comprising:
the noise elimination unit is used for traversing the pixel difference graph according to the second sliding windows so as to determine the number of effective pixels which are larger than a preset pixel value in each second sliding window; and carrying out noise elimination processing on the pixel difference map according to the number of the effective pixels in each second sliding window.
It can be seen that implementing this alternative embodiment can eliminate noise in the pixel disparity map, reducing the adverse effect of the discrete integrated disparity values on the signal-to-noise ratio.
As an alternative embodiment, the noise cancellation unit performs noise cancellation processing on the pixel difference map according to the number of effective pixels in each second sliding window, including:
if the number of effective pixels in the second sliding window is smaller than the preset number, setting the pixel value in the corresponding second sliding window as a specific value.
Therefore, by implementing this alternative embodiment, noise interference can be effectively eliminated, improving the cleanliness of the image.
As an alternative embodiment, further comprising:
and the smooth edge unit is used for carrying out Gaussian filtering processing on the pixel difference map.
It can be seen that implementing this alternative embodiment, sharp edges in the image can be smoothed, enhancing the image effect.
As an alternative embodiment, the region dividing unit 503 divides a stationary region and a moving region in the pixel difference map, including:
determining each pixel difference value in the pixel difference map;
determining a region corresponding to the pixel difference value in the first numerical value interval as a static region;
determining a region corresponding to the pixel difference value in the second numerical interval as a motion region; wherein the second numerical interval does not overlap with the first numerical interval.
It can be seen that by implementing this alternative embodiment, a distinction between a stationary region and a moving region can be achieved, so that it is advantageous to process pixels of different regions in a targeted manner, thereby improving the image noise reduction effect.
As an alternative embodiment, the region dividing unit 503 divides a stationary region and a moving region in the pixel difference map, including:
performing binarization processing on the pixel difference map;
determining a region with a pixel difference value of 1 as a stationary region;
the region where the pixel difference value is 0 is determined as a motion region.
It can be seen that by implementing this alternative embodiment, a distinction between a stationary region and a moving region can be achieved, so that it is advantageous to process pixels of different regions in a targeted manner, thereby improving the image noise reduction effect. In addition, the binarization processing of the pixel difference map can simplify the pixel difference map, so that the noise reduction processing efficiency of the image is improved.
As an alternative embodiment, further comprising:
the interpolation amplifying unit is used for carrying out interpolation amplifying processing on the pixel difference map, the first image and the second image according to the specific size corresponding to the image to be processed, and the pixel difference map, the first image and the second image after interpolation amplifying correspond to the specific size.
Therefore, by implementing this alternative embodiment, the finally output noise reduction result map matches the size of the image to be processed, avoiding the visual dissonance caused by a size mismatch and making the noise reduction processing less perceptible: the user directly obtains a noise-reduced image of normal size, improving the user experience.
As an alternative embodiment, the fusing unit 504 fuses the first image and the second image based on the still region and the moving region in the pixel difference map, to obtain a noise reduction result map, including:
performing co-located pixel fusion over the static region of the first image and the second image, based on the static region in the pixel difference map, to obtain a first fusion result;
performing motion region frequency domain fusion on the first image and the second image based on the motion region in the pixel difference map to obtain a second fusion result;
and generating a noise reduction result graph according to the first fusion result and the second fusion result.
It can be seen that implementing this alternative embodiment, a final noise reduction result map may be generated based on different processing manners for different regions, so as to reduce noise interference in the noise reduction result map.
As an optional embodiment, the fusing unit 504 performs, based on the motion region in the pixel difference map, motion region frequency domain fusion on the first image and the second image, to obtain a second fusion result, including:
generating fusion weights corresponding to the second image according to the motion areas in the pixel difference map;
performing motion region Fourier transform on the first image and the second image based on the motion region in the pixel difference map to obtain frequency domain images respectively corresponding to the first image and the second image;
and generating a second fusion result according to the fusion weight and the frequency domain images respectively corresponding to the first image and the second image.
It can be seen that, by implementing this alternative embodiment, the pixels of the motion region may be fused in the frequency domain based on fourier transform, so as to obtain a more accurate fusion result for the motion region.
As an optional embodiment, the fusing unit 504 generates a second fused result according to the fused weight and the frequency domain images corresponding to the first image and the second image respectively, including:
Determining first motion high-frequency information and first motion low-frequency information based on a frequency domain image of the first image, and determining second motion high-frequency information and second motion low-frequency information based on a frequency domain image of the second image;
the first motion high-frequency information and the second motion high-frequency information are fused based on the fusion weight to obtain comprehensive motion high-frequency information, and the first motion low-frequency information and the second motion low-frequency information are fused based on the fusion weight to obtain comprehensive motion low-frequency information;
and carrying out inverse Fourier transform on the comprehensive motion high-frequency information and the comprehensive motion low-frequency information to obtain a second fusion result.
It can be seen that, by implementing this alternative embodiment, the integrated motion high-frequency information and integrated motion low-frequency information can be obtained by fusing the high- and low-frequency bands separately, and applying an inverse Fourier transform to the integrated information yields the required time-domain fusion result.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functions of two or more modules or units described above may be embodied in one module or unit, in accordance with embodiments of the present application. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
Since each functional module of the image noise reduction device according to the exemplary embodiments of the present application corresponds to a step of the foregoing exemplary embodiments of the image noise reduction method, for details not disclosed in the device embodiments of the present application, please refer to the foregoing embodiments of the image noise reduction method of the present application.
Referring to fig. 6, fig. 6 shows a schematic diagram of a computer system suitable for implementing the electronic device of the embodiments of the present application.
It should be noted that, the computer system 600 of the electronic device shown in fig. 6 is only an example, and should not impose any limitation on the functions and the application scope of the embodiments of the present application.
As shown in fig. 6, the computer system 600 includes a Central Processing Unit (CPU) 601, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for system operation are also stored. The CPU 601, ROM 602, and RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, mouse, etc.; an output portion 607 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, a speaker, and the like; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card, a modem, or the like. The communication section 609 performs communication processing via a network such as the internet. The drive 610 is also connected to the I/O interface 605 as needed. Removable media 611 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is installed as needed on drive 610 so that a computer program read therefrom is installed as needed into storage section 608.
In particular, according to embodiments of the present application, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the methods shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 609, and/or installed from the removable medium 611. When executed by the central processing unit (CPU) 601, the computer program performs the various functions defined in the methods and apparatus of the present application.
As another aspect, the present application also provides a computer-readable medium that may be contained in the electronic device described in the above embodiment; or may exist alone without being incorporated into the electronic device. The computer-readable medium carries one or more programs which, when executed by one of the electronic devices, cause the electronic device to implement the methods of the above-described embodiments.
It should be noted that the computer readable medium shown in the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, however, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present application may be implemented by software or by hardware, and the described units may also be provided in a processor. In some cases, the names of these units do not constitute a limitation on the units themselves.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.

Claims (26)

1. A method of image denoising, comprising:
acquiring a first image and a second image from an image sequence;
generating a pixel disparity map for characterizing a difference between the first image and the second image;
dividing a static area and a moving area in the pixel difference map;
and fusing the first image and the second image based on the static area and the moving area in the pixel difference image to obtain a noise reduction result image.
2. The method according to claim 1, wherein the method further comprises:
determining continuously acquired multi-frame images as the image sequence in response to a shutter trigger operation;
Or,
and responding to the shutter trigger operation, determining the preview image acquired before the shutter trigger operation and the image acquired after the shutter trigger operation as the image sequence.
3. The method of claim 1, wherein acquiring the first image and the second image from the sequence of images comprises:
carrying out gray scale processing on each image in the image sequence to obtain a plurality of gray scale images;
selecting a first image from the plurality of gray maps, and determining other gray maps except the first image in the plurality of gray maps as a second image.
4. A method according to claim 3, wherein the grey-scale processing of each image in the sequence of images to obtain a plurality of grey-scale images comprises:
determining a processing channel corresponding to each image based on the image format;
and carrying out gray scale processing on each image according to the first sliding window for carrying out gray scale processing and the processing channel of each image to obtain a plurality of gray scale images.
5. A method according to claim 3, wherein selecting a first image from the plurality of grey-scale maps comprises:
generating a difference result between adjacent frame gray maps based on the frame order between the plurality of gray maps;
A first image is selected from the plurality of gray scale maps based on each differential result.
6. The method of claim 5, wherein generating a difference result between adjacent frame gray maps based on a frame order between the plurality of gray maps comprises:
generating the difference between co-located pixels of adjacent-frame grayscale maps based on the frame order among the plurality of grayscale maps, to obtain a difference value at each pixel position;
and summing the difference values of the pixel positions to obtain a difference result of the adjacent-frame grayscale maps.
7. The method of claim 6, wherein summing the differences for each pixel location results in a difference result for a gray scale map for an adjacent frame, comprising:
selecting a target difference value larger than a preset difference value from the difference values of the pixel positions;
and summing the target difference values to obtain a difference result of the gray level images of the adjacent frames.
8. The method of claim 5, wherein selecting a first image from the plurality of gray scale maps based on each differential result comprises:
arranging the differential results in order to obtain an ordered result;
for the differential results in the ordered result, adding adjacent differential results to obtain a plurality of differential sums;
calculating a first weighted sum of the first differential result in the ordered result and a first preset weight, and calculating a second weighted sum of the last differential result in the ordered result and a second preset weight;
determining a minimum value from the plurality of differential sums, the first weighted sum, and the second weighted sum;
and selecting a first image from the plurality of gray scale images based on the minimum value.
9. The method of claim 8, wherein selecting a first image from the plurality of gray scale maps based on the minimum value comprises:
if the minimum value is the first weighted sum, selecting a first frame gray scale image from the plurality of gray scale images as a first image;
if the minimum value is the second weighted sum, selecting a last frame gray scale image from the plurality of gray scale images as a first image;
and if the minimum value is any one of the differential sums, determining an adjacent target differential result corresponding to the corresponding differential sum, and taking the same gray level image corresponding to the adjacent target differential result as a first image.
10. The method of claim 1, wherein prior to generating a pixel disparity map for characterizing a disparity between the first image and the second image, the method further comprises:
And performing gamma correction on the first image and the second image.
11. The method of claim 1, wherein generating a pixel disparity map for characterizing a disparity between the first image and the second image comprises:
performing alignment processing on the second image based on the first image to obtain a second target image;
a pixel disparity map is generated that characterizes a disparity between the first image and the second target image.
12. The method of claim 11, wherein aligning the second image based on the first image results in a second target image, comprising:
dividing the second image into image blocks to obtain an image block set;
controlling each image block in the image block set to traverse the search area of the first image so as to determine a matching result corresponding to each image block;
and performing image block offset on the second image according to the matching results respectively corresponding to the image blocks to obtain a second target image aligned with the first image.
13. The method of claim 12, wherein controlling each image block in the set of image blocks to traverse the search area of the first image to determine a respective corresponding matching result for each image block comprises:
Controlling each image block in the image block set to traverse the search area of the first image according to a preset moving step length to obtain a reference matching result corresponding to each movement;
and selecting a minimum matching result from the reference matching results, and determining the minimum matching result as the matching result of the corresponding image block so as to obtain the matching result respectively corresponding to each image block.
14. The method of claim 11, wherein generating a pixel disparity map for characterizing a disparity between the first image and the second target image comprises:
calculating an inter-frame pixel difference value between the second target image and the first image, a maximum pixel difference value in a preset area and a minimum pixel difference value in the preset area;
summing the inter-frame pixel difference value of each pixel position in the second target image, the maximum pixel difference value in a preset area and the minimum pixel difference value in the preset area to obtain the comprehensive difference value of each pixel position;
and generating a pixel difference map for representing the difference between the first image and the second target image according to the integrated difference value of each pixel position.
15. The method according to claim 1, wherein the method further comprises:
traversing the pixel difference map according to the second sliding windows to determine the number of effective pixels which are larger than a preset pixel value in each second sliding window;
and carrying out noise elimination processing on the pixel difference map according to the number of the effective pixels in each second sliding window.
16. The method of claim 15, wherein noise cancelling the pixel difference map according to the number of valid pixels in each second sliding window comprises:
if the number of effective pixels in the second sliding window is smaller than the preset number, setting the pixel value in the corresponding second sliding window as a specific value.
17. The method according to claim 1, wherein the method further comprises:
and carrying out Gaussian filtering processing on the pixel difference map.
18. The method of claim 1, wherein dividing the stationary region and the moving region in the pixel disparity map comprises:
determining each pixel difference value in the pixel difference map;
determining a region corresponding to the pixel difference value in the first numerical value interval as a static region;
And determining a region corresponding to the pixel difference value in a second numerical value interval as a motion region, wherein the second numerical value interval is not overlapped with the first numerical value interval.
19. The method of claim 1, wherein dividing the stationary region and the moving region in the pixel disparity map comprises:
performing binarization processing on the pixel difference map;
determining a region with a pixel difference value of 1 as a stationary region;
the region where the pixel difference value is 0 is determined as a motion region.
20. The method according to claim 1, wherein the method further comprises:
and carrying out interpolation amplification processing on the pixel difference map, the first image and the second image according to the specific size corresponding to the image to be processed, wherein the pixel difference map, the first image and the second image after interpolation amplification correspond to the specific size.
21. The method of claim 1, wherein fusing the first image and the second image based on a stationary region and a moving region in the pixel disparity map results in a noise reduction result map, comprising:
performing co-located pixel fusion over the static region of the first image and the second image based on the static region in the pixel difference map, to obtain a first fusion result;
Performing motion region frequency domain fusion on the first image and the second image based on the motion region in the pixel difference map to obtain a second fusion result;
and generating a noise reduction result graph according to the first fusion result and the second fusion result.
22. The method of claim 21, wherein performing a motion region frequency domain fusion of the first image and the second image based on the motion region in the pixel disparity map to obtain a second fusion result comprises:
generating fusion weights corresponding to the second image according to the motion areas in the pixel difference map;
performing motion region Fourier transform on the first image and the second image based on the motion region in the pixel difference map to obtain frequency domain images respectively corresponding to the first image and the second image;
and generating a second fusion result according to the fusion weight and the frequency domain images respectively corresponding to the first image and the second image.
23. The method of claim 22, wherein generating a second fusion result from the fusion weights and the frequency domain images to which the first image and the second image correspond, respectively, comprises:
Determining first motion high-frequency information and first motion low-frequency information based on the frequency domain image of the first image, and determining second motion high-frequency information and second motion low-frequency information based on the frequency domain image of the second image;
the first motion high-frequency information and the second motion high-frequency information are fused based on the fusion weight, so that comprehensive motion high-frequency information is obtained, and the first motion low-frequency information and the second motion low-frequency information are fused based on the fusion weight, so that comprehensive motion low-frequency information is obtained;
and carrying out inverse Fourier transform on the comprehensive motion high-frequency information and the comprehensive motion low-frequency information to obtain a second fusion result.
24. An image noise reduction apparatus, comprising:
an image acquisition unit configured to acquire a first image and a second image from an image sequence;
a difference determination unit for generating a pixel difference map for characterizing a difference between the first image and the second image;
a region dividing unit for dividing a stationary region and a moving region in the pixel difference map;
and the fusion unit is used for fusing the first image and the second image based on the static area and the moving area in the pixel difference image to obtain a noise reduction result image.
25. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the method of any of claims 1-23.
26. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any of claims 1-23 via execution of the executable instructions.
CN202211150866.1A (filed 2022-09-21, priority 2022-09-21): Image noise reduction method, device, computer readable storage medium and equipment. Publication: CN117764855A. Status: Pending.

Priority Applications (1)

Application Number: CN202211150866.1A (publication CN117764855A)
Priority Date: 2022-09-21
Filing Date: 2022-09-21
Title: Image noise reduction method, device, computer readable storage medium and equipment


Publications (1)

Publication Number: CN117764855A
Publication Date: 2024-03-26

Family

ID=90316803

Family Applications (1)

Application Number: CN202211150866.1A
Title: Image noise reduction method, device, computer readable storage medium and equipment
Status: Pending

Country Status (1)

Country: CN
Link: CN117764855A (en)


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination