CN111784605B - Image noise reduction method based on region guidance, computer device and computer readable storage medium


Info

Publication number
CN111784605B
CN111784605B (application CN202010611911.3A)
Authority
CN
China
Prior art keywords
image
layer
frequency information
pixels
pixel
Prior art date
Legal status
Active
Application number
CN202010611911.3A
Other languages
Chinese (zh)
Other versions
CN111784605A (en)
Inventor
易翔
潘文培
钟午
Current Assignee
Allwinner Technology Co Ltd
Original Assignee
Allwinner Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Allwinner Technology Co Ltd
Priority to CN202010611911.3A
Publication of CN111784605A
Application granted
Publication of CN111784605B

Classifications

    • G06T5/70
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Abstract

The invention provides an image noise reduction method based on region guidance, a computer device and a computer readable storage medium. The method comprises the steps of obtaining an initial image and constructing an image pyramid; performing non-local mean filtering on the low-frequency information of each layer of the pyramid and obtaining a neighborhood similarity map of that layer from the similar-block search results produced during the filtering; detecting edge regions, texture regions and flat regions in the image using the neighborhood similarity maps of the layers and the high-frequency information of the higher layers; for each of the three region types, fusing the low-frequency information of at least one layer and the high-frequency information of at least one layer with a region-specific fusion method to compute the gray value of every pixel; and outputting the noise-reduced image. The invention also provides a computer device for implementing the method and a computer readable storage medium. The invention reduces the computational cost of image noise reduction, improves its adaptability, and achieves a better noise reduction effect.

Description

Image noise reduction method based on region guidance, computer device and computer readable storage medium
Technical Field
The invention relates to the technical field of image processing, in particular to an image noise reduction method based on region guidance, a computer device for realizing the method and a computer readable storage medium.
Background
Many existing intelligent electronic devices have an image capturing function; smart phones, tablet computers, dash cameras and the like are all equipped with an image capturing device, which usually uses a CMOS or CCD image sensor to acquire images. An image typically contains a large number of pixels, and the color information of each pixel may be represented by an RGB value or a YUV value.
The CMOS and CCD image sensors commonly used at present generally adopt the BAYER arrangement format, in which each pixel records only one of the R, G and B components rather than full three-primary-color information. Using such data directly would distort the colors of the image, so a "demosaic" process has to be performed on every pixel to recover complete RGB information and restore the original colors of the image.
As image resolution increases, the amount of light received by each individual pixel decreases, and because low-light scenes are used more and more often, the noise in the image output by the image sensor grows considerably. When the RAW image is converted into an RGB image, the demosaicing operation has to refer to all pixels within a certain neighborhood to obtain the RGB components of a single pixel, so the noise of each color component spreads over a wide range. This produces large color blotches (from a few pixels up to hundreds of pixels) in the final image and severely degrades its visual quality. Therefore, the image output by the image sensor generally needs noise reduction processing.
Currently, the most representative image denoising methods are those implemented with non-local filters, such as non-local mean filtering (NLM) and block matching with 3D filtering (BM3D), which exploit the self-similarity of images and obtain the denoised result by weighted averaging of similar pixel blocks. As research on noise reduction has progressed, the idea of combining region detection with non-local-filter denoising, that is, dividing the image into regions and setting the denoising parameters or strategy according to the region information, has become widely accepted. Such a region-guided approach not only yields a better denoising result, it is also of great benefit to subsequent image analysis.
Non-local mean filtering is a noise reduction method based on the self-similarity of image patches. Building on neighborhood-average denoising, its similarity weight is determined by the similarity between the image patch centered on the pixel to be denoised and the patches centered on the other pixels in its neighborhood. The weight has no essential relation to the spatial positions of the two pixels; it depends only on the similarity of the two patches, so the method largely avoids introducing false structure. Since image noise can be modeled as additive white Gaussian noise, weighted averaging of similar pixels removes it effectively. The NLM method is simple, performs well and is easy to improve and extend, which has made it one of the mainstream image denoising methods.
The basic principle of the NLM method is as follows. Take a neighborhood window of size N×N (N is generally 3, 5, 7, 9 and the like) centered on the pixel to be denoised, and take a surrounding image area of size M×M as the search area (the whole image could be used, but the computation would be too large, so M is generally smaller than 41). Similar image blocks are then searched for in the search area; the usual criterion is the Gaussian-weighted Euclidean distance between image blocks, computed as
d(i, j) = ||u(N_i) - u(N_j)||²_{2,a}    (1)
where the norm is weighted by a Gaussian kernel G_a with standard deviation a, and u(N_i) and u(N_j) are the corresponding pixels of the image block around the center window and of the image block in the search area, respectively. The denoised pixel estimate is then computed as
NL(u)(i) = (1/Z(i)) · Σ_{j∈Ω(i)} w(i, j) · u(j)    (2)
where w(i, j) = exp(-d(i, j)/h²) is the weight coefficient, Z(i) = Σ_{j∈Ω(i)} exp(-d(i, j)/h²) is the normalization factor, Ω(i) denotes the search area of the center pixel i, and h is the similarity weight parameter that controls the degree of smoothing after noise reduction.
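For illustration, the following Python sketch computes the non-local mean estimate of a single pixel as described by formulas 1 and 2. It is a minimal, unoptimized transcription; the function name, the default values of N, M, h and a, and the use of numpy are assumptions of this sketch and are not part of the patent.

```python
import numpy as np

def nlm_pixel(img, i, j, N=7, M=21, h=10.0, a=1.0):
    """Non-local mean estimate of pixel (i, j), per formulas 1 and 2.

    d(i, j) is the Gaussian-weighted Euclidean distance between the N x N
    patch around the target pixel and each N x N patch in the M x M search
    area; w(i, j) = exp(-d/h^2) is the similarity weight, normalized by Z(i).
    """
    r, s = N // 2, M // 2
    pad = np.pad(img.astype(np.float64), r + s, mode="reflect")
    ci, cj = i + r + s, j + r + s                     # center in the padded image

    # Gaussian kernel Ga with standard deviation a, weighting the patch distance
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    Ga = np.exp(-(x * x + y * y) / (2.0 * a * a))
    Ga /= Ga.sum()

    ref = pad[ci - r:ci + r + 1, cj - r:cj + r + 1]   # patch u(Ni)
    num = 0.0
    Z = 0.0                                           # normalization factor Z(i)
    for di in range(-s, s + 1):
        for dj in range(-s, s + 1):
            pi, pj = ci + di, cj + dj
            cand = pad[pi - r:pi + r + 1, pj - r:pj + r + 1]   # patch u(Nj)
            d = np.sum(Ga * (ref - cand) ** 2)        # formula 1
            w = np.exp(-d / (h * h))                  # weight before normalization
            num += w * pad[pi, pj]
            Z += w
    return num / Z                                    # formula 2
```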
Existing noise reduction methods based on region information and non-local filters can be divided into two main categories: the first is to use gradient information to detect the edges of the image, divide the image into a detail area and a flat area, and then select different filtering parameters according to the detail information; the second type is to divide the image into various regions by using structural statistical information, and select different filters for different regions, including median filtering, bilateral filtering, non-local mean filtering, BM3D, and the like.
However, noisy images captured in real scenes contain complex scene information, and both the type and the strength of the noise vary widely, so the existing region-detection-based noise reduction methods are not adaptive enough: they cannot effectively balance the noise reduction quality against its cost. Specifically, on the one hand, the region detection module of existing methods relies on an analysis of the image at a single scale only, so the detection accuracy is poor, and several parameters are chosen manually, which leaves the final detection result insufficiently adaptive and degrades the subsequent noise reduction. On the other hand, existing methods implement region detection with computationally expensive procedures, so the image noise reduction takes too long, and if the noise reduction is implemented as a hardware circuit, the circuit area becomes too large and the development difficulty is high.
Disclosure of Invention
The main purpose of the invention is to provide a region-guided image noise reduction method that achieves a good noise reduction effect with a small amount of computation.
Another object of the present invention is to provide a computer apparatus implementing the above image denoising method based on region guidance.
It is still another object of the present invention to provide a computer readable storage medium embodying the above region-based image denoising method.
In order to achieve the main purpose of the invention, the image denoising method based on the regional guidance provided by the invention comprises the steps of obtaining an initial image, constructing an image pyramid by applying the initial image, wherein each layer of image of the image pyramid comprises low-frequency information and high-frequency information of the layer of image; non-local mean filtering is carried out on the low-frequency information of each layer of image, and a neighborhood similarity graph of the layer of image is obtained by utilizing a similarity block search result in the non-local mean filtering process; detecting edge areas, texture areas and flat areas in the images by using neighborhood similarity graphs of images of all layers and high-frequency information of high-layer images, applying low-frequency information of at least one layer of images and high-frequency information of at least one layer of images to the edge areas, the texture areas and the flat areas, performing fusion calculation on gray values of all pixels by using a corresponding fusion method, and outputting the image after noise reduction.
According to the scheme, by constructing the image pyramid and applying the non-local mean filtering method, texture information reflected by the similarity searching result of the images of different layers in the non-local mean filtering can be deeply mined according to the low-frequency information and the high-frequency information of the images of different layers, so that different areas can be accurately judged, and a better and more efficient area detection result can be obtained. And, appoint the pixel noise reduction algorithm of different areas according to the above-mentioned result, namely use different noise reduction methods to calculate the gray value of the pixel after noise reduction to different areas, make the noise reduction effect of the picture more rational.
In addition, the operation of region detection is realized by directly applying search results in the non-local mean filtering process, and the calculated amount of noise reduction is not increased remarkably, so that the calculated amount of noise reduction of the image is small, and the efficiency of the calculation of noise reduction of the image is improved.
In a preferred embodiment, detecting edge regions, texture regions, and flat regions in an image by using a neighborhood similarity map of each layer image and high frequency information of a higher layer image includes: determining pixels of an edge area according to neighborhood similarity values of all pixels in a neighborhood similarity graph of the low-layer image; and determining pixels of the texture region according to the neighborhood similarity values of all pixels in the neighborhood similarity graph of the high-level image and the high-frequency information of the high-level image.
Therefore, the edge region in the image can be accurately detected through the neighborhood similarity value of each pixel in the neighborhood similarity map of the low-layer image because the gray value difference between the pixels in the edge region and the peripheral pixels is larger, and the neighborhood similarity map of the high-layer image and the high-frequency information of the high-layer image are comprehensively considered for detecting the texture region, so that the detection of the texture region is more accurate.
Further, determining the pixels of the edge area according to the neighborhood similarity values of the pixels in the neighborhood similarity graph of the lower-layer image includes: threshold segmentation is carried out on the neighborhood similarity values of all pixels in the neighborhood similarity graphs of the two or more layers of low-layer images, and the threshold segmentation results are combined, so that the pixels of the edge area are determined according to the combined results.
In this way, when the pixels of the edge area are determined, the threshold segmentation result according to the two-layer image is realized, so that the edge area detection can be ensured to be more accurate.
In a further aspect, determining the pixels of the texture region according to the neighborhood similarity values of each pixel in the neighborhood similarity graph of the high-level image and the high-frequency information of the high-level image includes: and after the detection result of the edge area is subjected to mask operation, determining pixels with non-zero neighborhood similarity values in the neighborhood similarity graph of the high-level image and non-zero high-frequency information in the high-level image as pixels of the texture area.
Therefore, the edge area of the image is determined, then the texture area and the flat area are detected after the pixels of the edge area are shielded, so that the interference of the pixels of the edge area on the detection of the texture area and the flat area can be avoided, and the accuracy of the detection of the texture area and the flat area is improved.
In a preferred embodiment, the obtaining the neighborhood similarity map of the layer image using the search result of the similarity block in the non-local mean filtering process includes: in the similar block searching process of the non-local mean filtering, the number of the matching windows and the similar windows corresponding to each pixel is calculated, and the number of the matching windows and the similar windows corresponding to the pixel is used as a neighborhood similarity value of the pixel in the neighborhood similarity graph.
Therefore, the neighborhood similarity graph of the image is composed of the neighborhood similarity value of each pixel, and the neighborhood similarity value of each pixel is determined when the neighborhood similarity window is detected in the non-local mean value filtering process, so that the calculation amount of the neighborhood similarity graph is less.
In a further aspect, applying the low frequency information of the at least one layer of image and the high frequency information of the at least one layer of image to perform fusion calculation on the gray values of the pixels of the edge region includes: and accumulating the low-frequency information of the low-layer image and the high-frequency information of at least two layers of high-layer images to obtain the gray value of the pixel of the edge area.
Therefore, the gray value calculation is realized by a direct fusion mode, namely, the low-frequency information and the high-frequency information of the multi-layer image are directly accumulated, the calculated amount is less, and the high-frequency information of the edge area can be reserved.
In a further aspect, applying the low frequency information of the at least one layer of image and the high frequency information of the at least one layer of image to perform fusion calculation on the gray value of the pixel of the flat area includes: and accumulating the low-frequency information of the low-layer image and the high-frequency information of at least two layers of high-layer images, and performing median filtering to obtain the gray value of the pixel of the flat area.
Therefore, for the pixels in the flat area, the gray values of the pixels in the flat area are more balanced in a median filtering mode, the characteristics of the pixels in the flat area are more met, and the quality of the filtered image is improved.
In a further scheme, when a neighborhood similarity graph of each layer of image and high-frequency information of a high-layer image are applied to detect a texture region in the image, similarity coefficients of pixels of the texture region are calculated; applying the low frequency information of the at least one layer of image and the high frequency information of the at least one layer of image to carry out fusion calculation on the gray value of the pixel of the texture region comprises the following steps: and determining the weighting coefficient of the multi-layer image according to the similarity coefficient, and carrying out weighted fusion calculation on the low-frequency information of at least one layer of image and the high-frequency information of at least one layer of image by using the weighting coefficient to obtain the gray value of the pixel of the texture region.
Therefore, for the pixels of the texture region, the self-adaptive weighting coefficients are used for fusion calculation, so that the pixels with different similarity coefficients adopt different weighting coefficients for images of different layers in the fusion calculation process, the gray value calculation of the texture region is closer to the actual gray value, the texture region is clearer in the filtered image, the texture display is closer to the actual state, and the filtering effect is more rational.
To achieve the above another object, the present invention provides a computer apparatus including a processor and a memory, the memory storing a computer program, which when executed by the processor, implements the steps of the above-described image denoising method based on region guidance.
To achieve still another object of the present invention, there is provided a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above-described image denoising method based on region guidance.
Drawings
FIG. 1 is a flow chart of an embodiment of the image denoising method based on region guidance of the present invention.
Fig. 2 is a schematic diagram of color information of each pixel of an initial image.
FIG. 3 is a flow chart of region detection in an embodiment of the region-based image denoising method of the present invention.
Fig. 4 is a schematic diagram of a region detection result in an embodiment of the region-based image denoising method according to the present invention.
Fig. 5 is a flowchart of performing fusion calculation on gray values of pixels in an embodiment of an image denoising method based on region guidance according to the present invention.
The invention is further described below with reference to the drawings and examples.
Detailed Description
The image denoising method based on the region guidance is applied to intelligent electronic equipment, and preferably, the intelligent electronic equipment is provided with an imaging device, such as a camera, and the imaging device is provided with an image sensor, such as a CMOS (complementary metal oxide semiconductor), a CCD (charge coupled device) and the like, and the intelligent electronic equipment acquires an initial image by using the imaging device. Preferably, the intelligent electronic device is provided with a processor and a memory, wherein the memory stores a computer program, and the processor implements the image noise reduction method based on the region guidance by executing the computer program.
Image denoising method embodiment based on region guidance:
This embodiment is mainly directed at denoising the initial image acquired by an image sensor. Specifically, after the noisy initial image is acquired, an image pyramid is constructed from it; the pyramid contains multiple layers, and each layer contains low-frequency information and high-frequency information. The low-frequency information of every layer is then denoised with non-local mean filtering, and during this filtering the region statistics of the similarity detection results are collected for each layer. Using these statistics, the image is divided into an edge region, a texture region and a flat region. Finally, again guided by the region statistics, the filtered gray value of each pixel is computed with a different fusion method for each region, yielding the output image.
Referring to fig. 1, first, step S1 is performed to acquire an initial image. The initial image of the present embodiment is an image output by an image sensor such as CMOS or CCD, and generally, the color information of the initial image is RGB information, that is, the format of the image is a BAYER image format. The BAYER image format is shown in fig. 2, where the original image has a large number of pixels each having one color information, for example, the color information of the first row of pixels is the color information of red R or green Gr, the red R pixels are arranged at intervals with the green Gr pixels, the color information of the second row of pixels is the color information of green Gb or blue B, and the green Gb pixels are arranged at intervals with the blue B pixels. The color information of each pixel is a chrominance value, which is typically an 8-bit to 16-bit binary number. In each color channel, the chromaticity value of the pixel, that is, the gray value of the pixel, is calculated in this embodiment.
Because the four adjacent pixels in the initial image carry different colors, filtering the multi-color image directly would let the gray values of pixels with different chromaticities interfere with one another and degrade the noise reduction. Therefore, step S1 also extracts an image for each color channel of the initial image, each channel image containing pixels of a single color only. For example, all red (R) pixels of the initial image are extracted to form a red channel image, all green Gr and Gb pixels are extracted to form one green (G) channel image, and all blue (B) pixels are extracted to form a blue channel image. The relative positional relationship of the pixels is of course preserved in each extracted channel image.
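As an illustration of this channel-separation step, the sketch below splits a raw BAYER frame into per-color sub-images. The RGGB layout, the function name and the merging of Gr and Gb rows into a single green channel image are assumptions inferred from Fig. 2 and the paragraph above.

```python
import numpy as np

def split_bayer_rggb(raw):
    """Split a BAYER raw image (assumed RGGB layout, cf. Fig. 2) into
    single-color channel images, preserving relative pixel positions.

    Gr and Gb pixels are collected into one green channel image, as the
    description above requires.
    """
    h, w = raw.shape
    r = raw[0::2, 0::2]                      # red pixels
    b = raw[1::2, 1::2]                      # blue pixels
    g = np.empty((h, w // 2), dtype=raw.dtype)
    g[0::2, :] = raw[0::2, 1::2]             # Gr pixels (red rows)
    g[1::2, :] = raw[1::2, 0::2]             # Gb pixels (blue rows)
    return r, g, b
```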
The subsequent steps S2 to S6 are all performed for each color channel, that is, the operations of steps S2 to S6 are all performed once for the images of the three color channels, and finally, the image filtered by the three color channels is subjected to the inverse interpolation calculation according to the arrangement mode of each pixel shown in fig. 2 to form an output image.
For example, for the image of the red channel, step S2 is performed to construct an image pyramid. In this embodiment, after the channel images have been extracted from the initial image, if the pixel size of the red channel image is, for example, 1028×720, then every layer of the image pyramid constructed for the red channel also has a pixel size of 1028×720.
Specifically, based on an initial image of a red channel, a convolution operation is performed on the initial image of the color channel and a Gaussian low-pass filter with different sizes, so that a multi-layer Gaussian image is obtained to form a Gaussian pyramid. For example, a gaussian low-pass filter corresponding to a first layer image of a gaussian pyramid is set to be 7×7 in size, and convolution operation is performed between the gaussian low-pass filter and an initial image of the color channel to obtain the first layer gaussian image. And setting the size of a Gaussian low-pass filter corresponding to the Gaussian pyramid second-layer image to be 9 multiplied by 9, performing convolution operation on the Gaussian low-pass filter and the initial image of the color channel to obtain the second-layer Gaussian image, and the like. Preferably, the size of the gaussian low pass filter used is gradually increased as the number of layers is gradually increased. Thus, each layer of image of the gaussian pyramid contains low frequency information of most features of the original image and a small portion of noise.
After the multi-layer image of the Gaussian pyramid is obtained, the multi-layer image of the Laplacian pyramid is calculated by using the multi-layer image of the Gaussian pyramid, and specifically, the value of each pixel of each layer of Laplacian image of the Laplacian pyramid is a value obtained by subtracting the gray value of the pixel of the current layer of Gaussian image from the gray value of the pixel of the previous layer of Gaussian image. For example, the gray values of the pixels of the first layer gaussian image are subtracted from the gray values of the pixels of the second layer gaussian image to obtain the gray values of the pixels of the first layer laplacian image, and so on. For the highest layer laplacian image, the highest layer gaussian image is directly used as the highest layer laplacian image. Thus, the Laplacian pyramid images contain more noise and high frequency information of partial contours.
It can be seen that the image pyramid actually includes a gaussian pyramid and a laplacian pyramid, and the layers of the gaussian pyramid and the laplacian pyramid are the same and correspond to each other one by one, wherein the information of each layer of image of the gaussian pyramid is low-frequency information, and the information of each layer of image of the laplacian pyramid is high-frequency information.
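A minimal sketch of this pyramid construction is given below, assuming three layers and using scipy's gaussian_filter as a stand-in for the 7×7, 9×9, ... Gaussian low-pass kernels; the sigma values are illustrative, and the subtraction order of the Laplacian layers follows the usual convention because the translated description is ambiguous on this point.

```python
import numpy as np
from scipy.ndimage import gaussian_filter   # stand-in for the Gaussian low-pass filters

def build_pyramids(channel, sigmas=(1.5, 2.5, 3.5)):
    """Build same-resolution Gaussian and Laplacian pyramids for one channel.

    Every layer keeps the full image size; only the width of the Gaussian
    low-pass filter grows with the layer index, as described above.
    """
    gauss = [gaussian_filter(channel.astype(np.float64), s) for s in sigmas]
    # High-frequency layer k is taken here as G_k - G_(k+1); the top Laplacian
    # layer reuses the top Gaussian image directly, as the text specifies.
    lap = [gauss[k] - gauss[k + 1] for k in range(len(gauss) - 1)] + [gauss[-1]]
    return gauss, lap
```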
Then, step S3 is executed to perform non-local mean filtering on the low-frequency information of each layer of the image pyramid, that is, on each layer of its Gaussian pyramid. For any pixel i of a Gaussian-pyramid layer with gray value f(i), a matching window P(i) of size s and a search area of size t (t > s) are extracted with the pixel at their center, and the search area is traversed to find image blocks P(j) that have the same size as the matching window P(i) and a high similarity to it.
For example, the sum of the gray values of the pixels in the matching window P (i) is calculated, and the sum of the gray values of the pixels in the window to be matched is calculated, and if the difference between the sum of the gray values of the pixels in the window to be matched and the sum of the gray values of the pixels in the current matching window P (i) is smaller than a preset threshold value, the similarity between the window to be matched and the matching window P (i) is considered to be higher, and the window to be matched is marked as an image block P (j).
Then the Gaussian-weighted Euclidean distance d(i, j) between the current matching window P(i) and each similar image block P(j) is computed, for example with the formula
d(i, j) = G_0 * ||P(i) - P(j)||²    (3)
Next, the weight coefficient w(i, j) of each similar image block P(j) is computed:
w(i, j) = exp(-d(i, j)/h²)    (4)
Finally, the gray value of the current pixel after non-local mean filtering is computed as
f'(i) = Σ_j w(i, j) · f(j) / Σ_j w(i, j)    (5)
In the above formulas, G_0 is a predetermined Gaussian function and h is a parameter that controls the degree of smoothing.
Then, step S4 is performed to apply the search result of the non-local mean filtering to obtain a neighborhood similarity map of each layer of image. Each layer of image of the image pyramid corresponds to a neighborhood similarity graph of the image pyramid, and the value of each pixel in the neighborhood similarity graph is a neighborhood similarity value. The purpose of acquiring the neighborhood similarity map of each layer of image is to detect an edge area, a texture area and a flat area in the image, and as the edge area in the image mainly contains image contour information, the difference between the gray value of the pixel in the edge area and the gray value of the peripheral pixel is larger, so that the similarity between the gray value of the pixel in the edge area and the gray value of the peripheral pixel is lower; the texture region contains some detail information and weak textures of the image, and the similarity between the pixel gray value of the texture region and the gray value of the peripheral pixels is moderate; the pixel gray value of the flat area has higher similarity with the gray value of the peripheral pixel. Based on these characteristics, the present embodiment determines which region each pixel is in by searching for the similarity between the gray value of each pixel and the gray value of the surrounding pixels, i.e., the region information is obtained by analyzing the similarity statistics of the non-local mean filtering.
In step S4, when the non-local mean filtering is performed on each layer of image of the gaussian pyramid, the number of image blocks P (j) with higher similarity to the matching window P (i) with the current pixel as the center point in the search window is calculated, and in this embodiment, the number of image blocks P (j) with higher similarity may be directly used as the neighborhood similarity value cnt (i) of the current pixel. Or, setting a difference threshold according to the noise curve value calibrated in advance, counting the number of image blocks in the search area, wherein the difference value between the image blocks and the matching window P (i) is smaller than the corresponding difference threshold, and taking the number as a neighborhood similarity value cnt (i) of the current pixel. After determining the neighborhood similarity value cnt (i) of each pixel, a neighborhood similarity graph corresponding to the layer of image can be obtained, and the value corresponding to each pixel in the neighborhood similarity graph is the neighborhood similarity value cnt (i) of the pixel. Thus, the value of each pixel in the neighborhood similarity map is independent of the gray value of that pixel, and the neighborhood similarity value cnt (i) characterizes the degree of similarity of that pixel to surrounding pixels.
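The following sketch shows one way to build the neighborhood similarity map cnt(i) for a single pyramid layer; the block-difference measure (absolute difference of the block sums) and the fixed threshold stand in for the threshold derived from the pre-calibrated noise curve, and the brute-force loops are kept for clarity rather than speed. Function and parameter names are assumptions of this sketch.

```python
import numpy as np

def neighborhood_similarity_map(layer, s=5, t=21, diff_th=200.0):
    """Neighborhood similarity value cnt(i) for every pixel of one layer.

    For each pixel, count the s x s blocks inside its t x t search area whose
    difference from the matching window P(i) is below diff_th.
    """
    r, w = s // 2, t // 2
    pad = np.pad(layer.astype(np.float64), r + w, mode="reflect")
    H, W = layer.shape
    cnt = np.zeros((H, W), dtype=np.int32)
    for i in range(H):
        for j in range(W):
            ci, cj = i + r + w, j + r + w
            ref_sum = pad[ci - r:ci + r + 1, cj - r:cj + r + 1].sum()
            for di in range(-w, w + 1):
                for dj in range(-w, w + 1):
                    if di == 0 and dj == 0:
                        continue                      # skip the matching window itself
                    pi, pj = ci + di, cj + dj
                    cand_sum = pad[pi - r:pi + r + 1, pj - r:pj + r + 1].sum()
                    if abs(cand_sum - ref_sum) < diff_th:
                        cnt[i, j] += 1                # one more similar block
    return cnt
```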
Then, step S5 is performed to detect an edge region, a flat region and a texture region in the image according to the neighborhood similarity map. The specific flow of detection of each region will be described below with reference to fig. 3, taking an image pyramid with three layers of images as an example.
First, the edge region of the image is determined from the neighborhood similarity maps of the lower layers. Specifically, statistics are computed on the neighborhood similarity map of the first-layer image and on that of the second-layer image, threshold segmentation is performed on each, and the edge region is detected from the segmentation results. Step S21 is performed to obtain the neighborhood similarity map of the first-layer image, and step S22 performs mean statistics on it; for example, the adaptive threshold th of the first-layer image is computed as
th = (1/num) · Σ_i cnt(i)    (6)
where num is the total number of pixels of the layer image. As formula 6 shows, the adaptive threshold th of a layer is the average of the neighborhood similarity values of all pixels of that layer. Furthermore, since every layer has the same total number of pixels in this embodiment, the same value of num is used when computing the adaptive threshold of each layer.
Next, step S23 is executed to perform threshold segmentation on the neighborhood similarity graph of the first layer image, specifically, the pixels in the neighborhood similarity graph with the neighborhood similarity value smaller than the adaptive threshold th are marked as pixels in the edge region of the layer.
Correspondingly, the same steps are executed for the second layer image, namely, step S24 is executed firstly to obtain a neighborhood similarity graph of the second layer image, then step S25 is executed to perform mean statistics on neighborhood similarity values of pixels of the neighborhood similarity graph of the second layer image, namely, calculation of an adaptive threshold is performed by using formula 6, step S26 is executed to perform threshold segmentation on the second layer image, and pixels with neighborhood similarity values smaller than an adaptive threshold th in the neighborhood similarity graph are marked as pixels of an edge region of the layer.
Finally, step S27 is performed to combine the threshold segmentation result of the first layer image with the threshold segmentation result of the second layer image, i.e. to determine pixels in the two layers of images marked as edge regions of the layers as pixels in the edge regions of the images. Because the pixels of the first layer image and the pixels of the second layer image are in one-to-one correspondence, whether a certain pixel is marked as an edge area in the first layer image or the second layer image can be judged according to the one-to-one correspondence relation of the pixels in the two layers of images, and if so, the pixel is determined to be the pixel of the edge area.
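A sketch of steps S21 to S27 follows, assuming the merge of the two threshold-segmentation results is a logical OR (a pixel marked as an edge in either layer is an edge pixel), which is how the paragraph above reads; cnt1 and cnt2 are the neighborhood similarity maps of the first and second layers.

```python
import numpy as np

def detect_edge_region(cnt1, cnt2):
    """Edge mask from the similarity maps of the two lower layers (S21-S27).

    The adaptive threshold th of each layer is the mean of its similarity
    map (formula 6); pixels below the threshold in either layer are edges.
    """
    th1 = cnt1.mean()                    # adaptive threshold of layer 1
    th2 = cnt2.mean()                    # adaptive threshold of layer 2
    return (cnt1 < th1) | (cnt2 < th2)   # merged boolean edge mask
```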
After the pixels of the edge region are determined, the pixels of the texture region are detected. Specifically, step S28 is performed to obtain a neighborhood similarity map of the higher layer image, that is, obtain a neighborhood similarity map of the third layer image, and step S29 is performed to obtain high frequency information of the third layer image, for example, obtain data of a laplace image of the third layer image, and ignore information of an edge region, and step S30 is performed to perform further decision and normalization processing on pixels of a non-edge region.
For example, the neighborhood similarity map of the third layer image is masked, that is, the neighborhood similarity value of the pixel of the edge region is directly set to 0, by masking the pixel already marked as the edge region in step S27. Then, the pixels in the neighborhood similarity map of the third layer image, for which neither the neighborhood similarity value nor the high frequency information of the third layer image is 0, are marked as pixels of the texture region, that is, step S31 is performed. After the pixels of the texture region are determined, the remaining pixels are determined as pixels of the flat region, i.e., step S32 is performed. To this end, each pixel of the image is divided into pixels of different areas.
In addition to step S31, step S30 is also performed: a normalization is computed for the pixels of the texture region using the formula
g(i) = (max(cnt) - cnt(i)) / (max(cnt) - min(cnt))    (7)
where cnt(i) is the neighborhood similarity value of the current pixel and max(cnt) and min(cnt) are the maximum and minimum neighborhood similarity values of the pixels in the third-layer neighborhood similarity map. According to this formula, the normalized value of a texture-region pixel lies between 0 and 1 and the normalized value of a flat-region pixel is 0; further, the normalized value of the edge-region pixels may be set to 1. Thus every pixel of the texture region has its own similarity coefficient, namely the result g of formula 7.
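A sketch of steps S28 to S32 is given below, where cnt3 is the third-layer similarity map, lap3 the third-layer high-frequency (Laplacian) data and edge_mask the result of the previous step; the direction of the normalization in formula 7 (flat pixels map to 0, edge pixels to 1) is inferred from the paragraph above, and all names are assumptions of this sketch.

```python
import numpy as np

def detect_texture_and_flat(cnt3, lap3, edge_mask):
    """Classify non-edge pixels into texture and flat regions and compute the
    per-pixel similarity coefficient g of formula 7 (steps S28-S32)."""
    cnt = np.where(edge_mask, 0, cnt3).astype(np.float64)   # mask the edge pixels
    texture_mask = (~edge_mask) & (cnt != 0) & (lap3 != 0)  # step S31
    flat_mask = ~(edge_mask | texture_mask)                 # step S32

    span = max(cnt.max() - cnt.min(), 1e-9)
    g = (cnt.max() - cnt) / span         # formula 7 (normalization)
    g[flat_mask] = 0.0                   # flat pixels normalize to 0
    g[edge_mask] = 1.0                   # edge pixels may be set to 1
    return texture_mask, flat_mask, g
```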
Fig. 4(a) shows an initial image and Fig. 4(b) the image obtained after region detection: the image has been divided into the three types of regions, but the gray values of the pixels of each region have not yet been fused and weighted. Step S6 is therefore executed, and the gray values of the pixels are fused with the fusion method corresponding to their region, giving the filtered gray value of every pixel.
Referring to fig. 5, step S51 is first performed to acquire high-frequency information and low-frequency information of each layer image, that is, to acquire a value of each pixel in each layer image of the gaussian pyramid and the laplacian pyramid. Then, step S52 is executed to determine whether the current pixel is a pixel in the edge area, if so, step S59 is executed to calculate the gray value of the pixel by adopting a direct fusion method, specifically, the gray value of the pixel is calculated by applying the low frequency information of at least one layer of image and the high frequency information of at least one layer of image. For example, the pixel gray value of the first layer image of the gaussian pyramid and the values of the pixels of the second layer image and the third layer image of the laplacian pyramid are accumulated, and the accumulated result is used as the fused gray value, and the gray value is the filtered gray value of the pixel. Through a direct fusion mode, the characteristics of pixels in the edge area can be ensured to be reserved as far as possible, so that the gray value of the pixels in the edge area is more true, and the high-frequency detail characteristics in the image are reserved.
If the current pixel is not an edge-region pixel, step S53 is executed to determine whether it is a flat-region pixel; if so, step S60 computes its gray value with a median-filtering fusion algorithm. Specifically, the pixel gray value of the first-layer Gaussian image and the values of the second- and third-layer Laplacian images are accumulated for the current pixel, the same accumulation is obtained for several pixels around it, and median filtering is applied to these accumulated values; for example, the average of the accumulated values of the surrounding pixels is computed and used as the gray value of the flat-area pixel. Obtaining the flat-region gray value through such filtering makes the flat area smoother and better matched to its characteristics, improving the quality of the filtered image.
If the current pixel is not a pixel of the flat area, step S54 is executed to determine whether the current pixel is a pixel of the texture area, if so, step S55 is executed to determine a weighting coefficient of the multi-layer image according to the similarity coefficient of the current pixel, for example, determine weighting coefficients of the first layer gaussian image, the second layer laplacian image and the third layer laplacian image according to the similarity coefficient g, and perform weighted fusion calculation on the values of the pixels of the three layers of images by using the weighting coefficients of the three layers of images to obtain the gray value of the pixel of the texture area.
For example, if the weighting coefficient of the first layer gaussian image is g, the weighting coefficient of the second layer laplace image is 1-g/2, and the weighting coefficient of the third layer laplace image is 1-g/2, the gray value of the pixel of the first layer gaussian image is multiplied by the weighting coefficient of the first layer gaussian image, the value of the pixel of the second layer laplace image is multiplied by the weighting coefficient of the second layer laplace image, the value of the pixel of the third layer laplace image is multiplied by the weighting coefficient of the third layer laplace image, and then the gray values of the pixel are obtained by accumulating the three values.
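The three fusion rules of steps S59, S60 and S55 can be summarized as in the sketch below, where gauss1 is the (NLM-filtered) first Gaussian layer, lap2 and lap3 are the second and third Laplacian layers, and g is the similarity coefficient; the texture weights g and 1 - g/2 are transcribed literally from the worked example above, and the median-filter window size is an assumption of this sketch.

```python
import numpy as np
from scipy.ndimage import median_filter

def fuse_regions(gauss1, lap2, lap3, edge_mask, flat_mask, g, med_size=3):
    """Per-region fusion of the pyramid layers into the denoised channel."""
    direct = gauss1 + lap2 + lap3                            # edge region: direct fusion
    flat = median_filter(direct, size=med_size)              # flat region: median-filter fusion
    texture = g * gauss1 + (1.0 - g / 2.0) * (lap2 + lap3)   # texture region: weighted fusion

    out = texture.copy()
    out[edge_mask] = direct[edge_mask]
    out[flat_mask] = flat[flat_mask]
    return out
```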
Then, step S56 is executed to determine whether the current pixel is the last pixel in the image, if not, step S58 is executed to obtain the next pixel, and step S52 is executed again, otherwise, the current pixel is the last pixel in the current image, the denoised image is output, and the gray value of each pixel in the denoised image is the gray value calculated according to the weighted fusion. Because the pixels of the texture region are not subjected to fusion calculation by using fixed weighting coefficients, the weighting coefficients are adaptively adjusted according to the similarity coefficients of the pixels, so that the pixel gray value calculation of the texture region is more flexible, and the texture characteristics of the image are maintained.
Since steps S2 to S6 are performed per color channel, they have to be executed for each of the three color channels; after step S6 the gray value of every pixel of each channel image is available. Because the initial image is in the BAYER format, step S7 then restores the pixels to their positions in the initial image, reversing the channel extraction of step S1 according to the format of the initial image, to form the output image. This is the inverse interpolation calculation, and the image it produces is the noise-reduced image to be output.
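For completeness, the inverse of the channel extraction sketched after step S1 could look as follows; it simply writes the filtered channel images back onto the assumed RGGB grid to form the output BAYER image.

```python
import numpy as np

def merge_bayer_rggb(r, g, b, shape):
    """Put the filtered channel images back on the original RGGB grid (step S7)."""
    out = np.empty(shape, dtype=np.float64)
    out[0::2, 0::2] = r                  # red positions
    out[0::2, 1::2] = g[0::2, :]         # Gr positions
    out[1::2, 0::2] = g[1::2, :]         # Gb positions
    out[1::2, 1::2] = b                  # blue positions
    return out
```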
Because the region detection of the invention relies only on the similarity detection results already produced by the non-local mean filtering inside the denoising framework, it fully exploits the characteristics of that framework, improves the accuracy of region detection and introduces no extra computation; compared with the traditional BM3D method in particular, the computational complexity and the hardware implementation cost are lower and the method is easy to realize. Moreover, because the regions are detected on an image pyramid and the pixels of different regions are fused in a targeted way, insufficient adaptability to the scene is effectively avoided, for example over-smoothing of texture regions or an unsatisfactory noise reduction effect in flat regions. In addition, most threshold parameters of the method are set adaptively; for example, the adaptive threshold th used for threshold segmentation and the similarity coefficient g are not fixed values, so the method remains highly adaptive in complex and noisy scenes.
Of course, the above is merely a preferred embodiment of the invention, and the following changes may be made in practice: the image pyramid is not limited to the construction described above and may, for example, be a DOG pyramid or be built with wavelets, curvelets or similar transforms; in the region detection, the adaptive threshold selection for the different layers may be replaced by a fixed threshold multiplied by a fixed coefficient; or, when computing the gray values of texture-region pixels, the weighting coefficient may be replaced by a larger fixed value.
In addition, the image noise reduction method of the invention can be used in a series of image video processing devices including a vehicle-mounted image capturing device, a network image capturing device, a motion camera and the like, and various parts in the method can be appropriately adjusted or deleted according to actual requirements.
Computer apparatus embodiment:
the computer device of the present embodiment may be an intelligent electronic device, and the computer device includes a processor, a memory, and a computer program stored in the memory and capable of running on the processor, where the processor executes the computer program to implement the steps of the image noise reduction method based on the region guidance. Of course, the intelligent electronic device further comprises an image capturing device for acquiring the initial image.
For example, a computer program may be split into one or more modules, which are stored in memory and executed by a processor to perform the various modules of the invention. One or more of the modules may be a series of computer program instruction segments capable of performing specific functions for describing the execution of the computer program in the terminal device.
The processor referred to in the present invention may be a central processing unit (Central Processing Unit, CPU), or other general purpose processor, digital signal processor (Digital Signal Processor, DSP), application specific integrated circuit (Application Specific Integrated Circuit, ASIC), off-the-shelf programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like. The general purpose processor may be a microprocessor or the processor may be any conventional processor or the like, the processor being a control center of the terminal device, and the various interfaces and lines being used to connect the various parts of the overall terminal device.
The memory may be used to store computer programs and/or modules, and the processor may implement various functions of the terminal device by running or executing the computer programs and/or modules stored in the memory and invoking data stored in the memory. The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and the application programs required for at least one function (such as a sound playing function, an image playing function, etc.); the data storage area may store data created according to the use of the device (such as audio data, a phonebook, etc.). In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash card, at least one disk storage device, a flash memory device, or another volatile solid-state storage device.
Computer-readable storage medium embodiments:
the computer program stored in the above-mentioned computer means may be stored in a computer readable storage medium if it is implemented in the form of software functional units and sold or used as a separate product. Based on such understanding, the present invention may implement all or part of the flow of the method of the above embodiment, or may be implemented by a computer program to instruct related hardware, where the computer program may be stored in a computer readable storage medium, and the computer program may implement the steps of the image denoising method based on region guidance when being executed by a processor.
Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a U disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content of the computer readable medium can be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions the computer readable medium does not include electrical carrier signals and telecommunication signals.
Finally, it should be emphasized that the present invention is not limited to the above embodiments, for example, the manner of acquiring the image pyramid, or the manner of computing the fusion of pixels in each region, and such modifications are also included in the scope of the claims of the present invention.

Claims (7)

1. An image denoising method based on region guidance, comprising:
acquiring an initial image;
the method is characterized in that:
constructing an image pyramid by using the initial image, wherein each layer of image of the image pyramid comprises low-frequency information and high-frequency information of the layer of image;
non-local mean filtering is carried out on the low-frequency information of each layer of image, and a neighborhood similarity graph of the layer of image is obtained by utilizing a similarity block search result in the non-local mean filtering process;
detecting an edge area, a texture area and a flat area in the image by applying a neighborhood similarity graph of each layer of image and high-frequency information of a high-layer image, applying low-frequency information of at least one layer of image and high-frequency information of at least one layer of image to the edge area, the texture area and the flat area, performing fusion calculation on gray values of pixels by using a corresponding fusion method, and outputting a noise-reduced image;
wherein, the detecting edge area, texture area and flat area in the image by applying the neighborhood similarity graph of each layer image and the high frequency information of the high layer image comprises: determining pixels of the edge area according to neighborhood similarity values of all pixels in a neighborhood similarity graph of the low-layer image; determining pixels of the texture region according to neighborhood similarity values of pixels in a neighborhood similarity graph of the high-level image and high-frequency information of the high-level image;
determining the pixels of the edge region according to the neighborhood similarity values of the pixels in the neighborhood similarity graph of the low-layer image comprises: threshold segmentation is carried out on neighborhood similarity values of all pixels in the neighborhood similarity graphs of the low-layer images with more than two layers, threshold segmentation results are combined, and the pixels of the edge area are determined according to the combined results;
determining the pixels of the texture region according to the neighborhood similarity values of each pixel in the neighborhood similarity graph of the high-level image and the high-frequency information of the high-level image comprises the following steps: and after the detection result of the edge area is subjected to mask operation, determining pixels with non-zero neighborhood similarity values in the neighborhood similarity graph of the high-level image and non-zero high-frequency information of the high-level image as pixels of the texture area.
2. The region-based guided image denoising method according to claim 1, wherein:
the step of obtaining the neighborhood similarity graph of the layer image by using the similarity block search result in the non-local mean filtering process comprises the following steps:
in the similar block searching process of the non-local mean filtering, the number of the matching windows and the similar windows corresponding to each pixel is calculated, and the number of the matching windows and the similar windows corresponding to the pixel is used as a neighborhood similarity value of the pixel in the neighborhood similarity graph.
3. The region-based guided image denoising method according to claim 1 or 2, wherein:
applying the low-frequency information of at least one layer of image and the high-frequency information of at least one layer of image to carry out fusion calculation on the gray value of the pixel of the edge area comprises the following steps:
and accumulating the low-frequency information of the low-layer image and the high-frequency information of at least two layers of high-layer images to obtain the gray value of the pixel of the edge area.
4. The region-based guided image denoising method according to claim 1 or 2, wherein:
applying the low frequency information of the at least one layer of image and the high frequency information of the at least one layer of image to carry out fusion calculation on the gray value of the pixel of the flat area comprises the following steps:
and accumulating the low-frequency information of the low-layer image and the high-frequency information of at least two layers of high-layer images, and performing median filtering to obtain the gray value of the pixel of the flat area.
5. The region-based guided image denoising method according to claim 1 or 2, wherein:
when the texture region in the image is detected by applying the neighborhood similarity graph of each layer image and the high-frequency information of the high-layer image, calculating a similarity coefficient for each pixel of the texture region;
and performing the fusion calculation on the gray values of the pixels of the texture region by applying the low-frequency information of at least one layer image and the high-frequency information of at least one layer image comprises:
determining weighting coefficients of the multi-layer images according to the similarity coefficients, and performing weighted fusion of the low-frequency information of at least one layer image and the high-frequency information of at least one layer image with the weighting coefficients to obtain the gray values of the pixels of the texture region.
6. A computer device, characterized in that it comprises a processor and a memory, the memory storing a computer program which, when executed by the processor, implements the steps of the region-based guided image denoising method according to any one of claims 1 to 5.
7. A computer-readable storage medium having a computer program stored thereon, characterized in that the computer program, when executed by a processor, implements the steps of the region-based guided image denoising method according to any one of claims 1 to 5.
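The three sketches that follow are editorial illustrations only; they are written in Python/NumPy under stated assumptions and are not the patented implementation.

First, the counting rule of claim 2: during the non-local-means similar-block search, the number of windows found similar to a pixel's matching window is recorded as that pixel's neighborhood similarity value. A minimal sketch, assuming a brute-force search on a single-channel image, hypothetical `patch`/`search` sizes, and an assumed sum-of-squared-differences threshold:

```python
import numpy as np

def neighborhood_similarity_graph(img, patch=3, search=7, thresh=None):
    """Count, for every pixel, how many candidate windows inside the search
    region are judged similar to that pixel's own (matching) window during a
    non-local-means style block search. The count is the pixel's neighborhood
    similarity value. Parameter names and the SSD threshold are assumptions."""
    h, w = img.shape
    pr, sr = patch // 2, search // 2
    pad = pr + sr
    padded = np.pad(img.astype(np.float32), pad, mode="reflect")
    sim = np.zeros((h, w), dtype=np.int32)
    if thresh is None:
        thresh = 10.0 * patch * patch          # assumed similarity threshold
    for y in range(h):
        for x in range(w):
            cy, cx = y + pad, x + pad
            ref = padded[cy - pr:cy + pr + 1, cx - pr:cx + pr + 1]
            count = 0
            for dy in range(-sr, sr + 1):
                for dx in range(-sr, sr + 1):
                    if dy == 0 and dx == 0:
                        continue
                    cand = padded[cy + dy - pr:cy + dy + pr + 1,
                                  cx + dx - pr:cx + dx + pr + 1]
                    if np.sum((ref - cand) ** 2) < thresh:
                        count += 1
            sim[y, x] = count
    return sim
```

On a clean 8-bit grayscale image one would expect low counts along edges and higher counts in flat areas, which is what the region detection of claim 1 relies on.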
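Next, the region split of claim 1. The sketch below assumes all similarity graphs and the high-frequency layer have already been resampled to a common resolution, that edge pixels are those with few similar neighbours (an illustrative threshold `edge_thresh`), that the per-layer threshold results are merged with a logical OR, and that the flat region is the complement of the other two; none of these parameter choices come from the patent.

```python
import numpy as np

def detect_regions(low_sim_graphs, high_sim_graph, high_freq, edge_thresh=4):
    """Split an image into edge / texture / flat masks in the spirit of claim 1.

    low_sim_graphs : list of two or more neighborhood similarity graphs of
                     low-layer images, resampled to a common resolution
    high_sim_graph : neighborhood similarity graph of the high-layer image
    high_freq      : high-frequency information of the high-layer image
    edge_thresh    : illustrative threshold (assumption, not from the patent)
    """
    # Edge pixels: threshold-segment each low-layer similarity graph
    # (few similar neighbours -> likely edge), then merge the results.
    edge = np.logical_or.reduce([g < edge_thresh for g in low_sim_graphs])

    # Texture pixels: non-zero similarity value in the high-layer graph AND
    # non-zero high-frequency information, with detected edge pixels masked out.
    texture = (high_sim_graph != 0) & (high_freq != 0) & ~edge

    # Flat pixels: everything else.
    flat = ~(edge | texture)
    return edge, texture, flat
```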
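Finally, the three fusion rules of claims 3 to 5. The sketch accumulates the low-frequency layer with the high-frequency layers, median-filters that sum for flat pixels (via `scipy.ndimage.median_filter`), and uses a per-pixel similarity coefficient as the weighting coefficient for texture pixels; the mapping from similarity coefficient to weights and the filter size are assumptions, and all inputs are again assumed to share one resolution.

```python
import numpy as np
from scipy.ndimage import median_filter

def fuse_regions(low_freq, high_freqs, edge, texture, flat,
                 sim_coeff=None, median_size=3):
    """Per-region fusion sketch for claims 3-5.

    low_freq   : low-frequency information of the low-layer image
    high_freqs : list of high-frequency layers from at least two high-layer images
    sim_coeff  : per-pixel similarity coefficient of the texture region (claim 5)
    """
    detail = np.sum(high_freqs, axis=0)          # accumulated high-frequency info
    accumulated = low_freq + detail
    out = np.zeros_like(low_freq, dtype=np.float32)

    # Edge region (claim 3): plain accumulation keeps the detail layers intact.
    out[edge] = accumulated[edge]

    # Flat region (claim 4): accumulation followed by median filtering.
    out[flat] = median_filter(accumulated, size=median_size)[flat]

    # Texture region (claim 5): weighted fusion driven by the similarity
    # coefficient; clipping it to [0, 1] as the weighting rule is our assumption.
    if sim_coeff is None:
        sim_coeff = np.full(low_freq.shape, 0.5, dtype=np.float32)
    w = np.clip(sim_coeff, 0.0, 1.0)
    out[texture] = (low_freq + w * detail)[texture]
    return out
```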
CN202010611911.3A 2020-06-30 2020-06-30 Image noise reduction method based on region guidance, computer device and computer readable storage medium Active CN111784605B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010611911.3A CN111784605B (en) 2020-06-30 2020-06-30 Image noise reduction method based on region guidance, computer device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111784605A CN111784605A (en) 2020-10-16
CN111784605B (en) 2024-01-26

Family

ID=72761285

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010611911.3A Active CN111784605B (en) 2020-06-30 2020-06-30 Image noise reduction method based on region guidance, computer device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111784605B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112446838A (en) * 2020-11-24 2021-03-05 海南大学 Image noise detection method and device based on local statistical information
CN112884667B (en) * 2021-02-04 2021-10-01 湖南兴芯微电子科技有限公司 Bayer domain noise reduction method and noise reduction system
CN112862717B (en) * 2021-02-10 2022-09-20 山东英信计算机技术有限公司 Image denoising and blurring method, system and medium
CN113033574A (en) * 2021-02-26 2021-06-25 天津大学 Image data noise reduction system and method based on FPGA
CN114509061A (en) * 2021-12-30 2022-05-17 重庆特斯联智慧科技股份有限公司 Method and system for determining robot traveling path based on barrier attributes
CN114240941B (en) * 2022-02-25 2022-05-31 浙江华诺康科技有限公司 Endoscope image noise reduction method, device, electronic apparatus, and storage medium
CN116342891B (en) * 2023-05-24 2023-08-15 济南科汛智能科技有限公司 Structured teaching monitoring data management system suitable for autism children

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109544478A (en) * 2018-11-26 2019-03-29 重庆大学 A kind of non-local mean CT image denoising method based on singular value decomposition
CN109785246A (en) * 2018-12-11 2019-05-21 深圳奥比中光科技有限公司 A kind of noise-reduction method of non-local mean filtering, device and equipment
CN111161188A (en) * 2019-12-30 2020-05-15 珠海全志科技股份有限公司 Method for reducing image color noise, computer device and computer readable storage medium
CN111260580A (en) * 2020-01-17 2020-06-09 珠海全志科技股份有限公司 Image denoising method based on image pyramid, computer device and computer readable storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9489720B2 (en) * 2014-09-23 2016-11-08 Intel Corporation Non-local means image denoising with detail preservation using self-similarity driven blending

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Dual-scale adaptive non-local means image denoising algorithm; Zhao Jingjuan; Zhou Zuofeng; Cao Jianzhong; Wang Hua; Infrared and Laser Engineering (Issue S1); full text *

Also Published As

Publication number Publication date
CN111784605A (en) 2020-10-16

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant