CN110796615B - Image denoising method, device and storage medium - Google Patents

Image denoising method, device and storage medium

Info

Publication number
CN110796615B
Authority
CN
China
Prior art keywords
value
pixel
noise
original image
image
Prior art date
Legal status
Active
Application number
CN201910994941.4A
Other languages
Chinese (zh)
Other versions
CN110796615A (en)
Inventor
李鹏
方瑞东
林聚财
殷俊
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd
Priority to CN201910994941.4A
Publication of CN110796615A
Application granted
Publication of CN110796615B
Status: Active
Anticipated expiration

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 — Image enhancement or restoration
    • G06T5/70 — Denoising; Smoothing
    • G06T7/00 — Image analysis
    • G06T7/10 — Segmentation; Edge detection
    • G06T7/13 — Edge detection
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/10 — Image acquisition modality
    • G06T2207/10016 — Video; Image sequence
    • G06T2207/20 — Special algorithmic details
    • G06T2207/20024 — Filtering details

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image denoising method, device, and storage medium. The image denoising method comprises: acquiring an original image; performing edge point detection on the original image to determine edge point pixels among the original pixels of the original image; performing noise estimation on the original image with the edge point pixels removed to obtain the noise level of the original image; and performing similarity filtering on the original image based on the noise level, where different noise levels correspond to different filtering parameters. In this way, the denoising effect on the image can be effectively improved.

Description

Image denoising method, device and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image denoising method, device, and storage medium.
Background
Video image data is inevitably corrupted by noise during acquisition, transmission, and other processes; for example, CCD and CMOS sensors introduce optical noise during capture. In general, video images are most severely affected by white Gaussian noise and impulse noise. In some special scenes, such as night scenes, the white Gaussian noise in the video image tends to be strong, resulting in poor quality of the acquired video image.
Image denoising is generally used as an image preprocessing step: after denoising, a clearer image is obtained on which other image processing operations, such as image segmentation and target recognition, are performed. Image denoising has become an important research topic in image processing and computer vision; in particular, finding a good balance between noise removal and the preservation of detail and edges has been a focus of research in recent years.
Typically, video images are encoded by an encoder before transmission, outputting an encoded bitstream. The current mainstream video coding standards are H.264 and H.265, both of which combine predictive coding and transform coding. In transform coding, noise appears as high-frequency components; stronger high-frequency components increase the code rate after encoding and reduce the compression efficiency of the encoder.
At present, denoising methods for video images mainly comprise spatial-domain and frequency-domain methods. Spatial-domain methods smooth the image by weighted summation; frequency-domain methods filter by suppressing the noise coefficients after a frequency-domain transform, reducing the noise as much as possible. However, current denoising methods all assume that the image noise intensity is known; if filtering is applied to an image with little or no noise, a clear image is blurred.
Disclosure of Invention
The technical problem mainly solved by the application is to provide an image denoising method that effectively addresses the poor denoising results caused, in the prior art, by filtering under the assumption that the image noise intensity is known.
To solve this problem, the application adopts the following technical scheme: an image denoising method is provided, comprising: acquiring an original image; performing edge point detection on the original image to determine edge point pixels among the original pixels of the original image; performing noise estimation on the original image with the edge point pixels removed to obtain the noise level of the original image; and performing similarity filtering on the original image based on the noise level, where different noise levels correspond to different filtering parameters.
The beneficial effects of this application are: performing the noise estimation on the original image with the edge point pixels excluded effectively improves the accuracy of the noise estimation. Further, executing different similarity filtering strategies according to the noise level avoids the poor denoising results that arise when the same filtering parameters are applied to images with different noise intensities.
Drawings
FIG. 1 is a flow chart of an embodiment of an image denoising method of the present application;
FIG. 2 is a flow chart of an embodiment of an edge point detection process of the present application;
FIG. 3 is a flow diagram of one embodiment of a noise estimation process of the present application;
FIG. 4 is a flow diagram of one embodiment of a similarity filtering process of the present application;
FIG. 5 is a flow diagram of one embodiment of an edge point pixel detail enhancement process of the present application;
FIG. 6 is a schematic block diagram of one embodiment of an image denoising apparatus of the present application;
fig. 7 is a schematic block diagram of an embodiment of an image denoising apparatus of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by one of ordinary skill in the art on the basis of the present disclosure without creative effort fall within the scope of the present disclosure.
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the following detailed description of the specific embodiments of the present application will be given with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of an embodiment of an image denoising method of the present application, as shown in fig. 1, in the embodiment of the present application, the image denoising method includes the following steps:
step S10, an original image is acquired.
In this embodiment, the original image may come from images or videos captured by a mobile terminal, camera, video recorder, monitoring device, and the like. For video, the original image may be one or more frames selected from the video. The original image may also be obtained from the cloud or a network, from the local storage of the device, or from a mobile hard disk or USB flash drive. The format of the original image may be PNG, JPG, TIFF, or any other format; this is not limited here.
In step S20, edge point detection is performed on the original image to determine edge point pixels in the original image.
Prior-art image denoising methods often blur the edge details of an image while removing noise. Edge point detection helps to screen out the edge point pixels so that their influence can be excluded when the image noise is estimated, improving the accuracy of the noise estimation; it also makes it convenient to enhance the edge details after the denoising, making the image clearer.
Referring to fig. 2, fig. 2 is a flow chart illustrating an embodiment of an edge point detection process of the present application. In the flow shown in fig. 2, an edge point detection method based on an adaptive threshold is provided, which specifically includes the steps of:
in step S201, a convolution operation is performed on the original image by using the gradient operator to obtain a gradient map.
In order to determine the edge pixels, a gradient operator is convolved with each original pixel in the original image. Specifically, with each original pixel as the center, the gradient operator is convolved horizontally and vertically to obtain approximations G_i and G_j of the luminance differences between the current original pixel and the surrounding original pixels in the horizontal and vertical directions. The gradient operator in this embodiment is the 3×3 Sobel operator, and the convolution with the original image is computed as:

    G_i = [ -1  0  +1 ]              G_j = [ -1  -2  -1 ]
          [ -2  0  +2 ] * O(i,j)           [  0   0   0 ] * O(i,j)
          [ -1  0  +1 ]                    [ +1  +2  +1 ]

wherein O(i,j) represents the original pixel values centered on the pixel in row i, column j of the original image.

The horizontal and vertical luminance-difference approximations G_i and G_j are then combined through the operation G = |G_i| + |G_j| to obtain the gradient map G of the whole original image.
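This gradient computation can be sketched in a few lines of NumPy (a minimal illustration; the helper names and the border handling, which here simply leaves the outermost ring at zero, are assumptions not specified by the patent):

```python
import numpy as np

def _corr3(f, k):
    """Apply a 3x3 kernel to the interior pixels by cross-correlation;
    the outermost ring is left at zero (border handling is an assumption)."""
    h, w = f.shape
    out = np.zeros((h, w))
    for di in range(3):
        for dj in range(3):
            out[1:h-1, 1:w-1] += k[di, dj] * f[di:h-2+di, dj:w-2+dj]
    return out

def gradient_map(img):
    """Step S201: G = |G_i| + |G_j| using the 3x3 Sobel kernels."""
    ki = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # horizontal differences
    kj = ki.T                                                          # vertical differences
    f = img.astype(float)
    return np.abs(_corr3(f, ki)) + np.abs(_corr3(f, kj))
```

A flat image yields a zero gradient map, while a vertical step edge produces a strong response along the edge column.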
Step S202, carrying out histogram statistics on pixel values of gradient pixels in the gradient map, and generating a gradient threshold according to the histogram statistics result.
The process of histogram statistics is in effect to count the number of gradient pixels whose pixel values fall within different gradient sections. In this embodiment, each gradient section may be a numerical segment defined by two numerical endpoints. At this time, the pixel value falling into the gradient section refers to the pixel value of the gradient pixel being equal to or between two numerical end points, and any one of the two numerical end points or any numerical value between the two numerical end points can be selected as the representative value of the gradient section. Of course, each gradient section may also be a single numerical point. At this time, the pixel value of the gradient pixel falling into the gradient section is equal to the value point, and the value point is the representative value of the gradient section.
In the finally formed histogram of gradient pixels, the abscissa corresponds to the representative values of the gradient sections and the ordinate to the number of pixels in each section. The products of the number of gradient pixels in each gradient section and the section's representative value are then summed, taking the representative values from small to large, until the summation result is greater than or equal to a preset percentage p% of the total pixel value of the gradient map. The largest representative value that took part in the summation is then taken as the gradient threshold. Here, the total pixel value of the gradient map refers to the sum of the pixel values of all gradient pixels in the gradient map.
In step S203, the original pixels corresponding to gradient pixels whose pixel value is greater than or equal to the gradient threshold are taken as edge point pixels.
The original pixel corresponding to the gradient pixel with the pixel value larger than or equal to the gradient threshold value can be determined as the edge point pixel, so that all the edge point pixels of the whole original image can be detected.
The value of p% can be set as required by the user. Even when a fixed p% is used for edge detection on different images, the gradient threshold computed for each image differs, yet each is a threshold suited to edge detection on that image; the method is therefore adaptive in image edge detection.
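The adaptive threshold selection of steps S202–S203 can be sketched as follows (one histogram bin per distinct gradient value is assumed here, though the patent also allows interval bins; the default `p` is illustrative):

```python
import numpy as np

def adaptive_threshold(grad, p=90.0):
    """Accumulate (count x representative value) over gradient levels from
    small to large until p% of the gradient map's total pixel value is
    reached; the largest level that took part is the threshold (step S202)."""
    vals, counts = np.unique(grad.ravel(), return_counts=True)
    target = grad.sum() * p / 100.0
    acc = 0.0
    for v, c in zip(vals, counts):
        acc += v * c
        if acc >= target:
            return float(v)
    return float(vals[-1])

def edge_mask(grad, p=90.0):
    """Step S203: pixels whose gradient reaches the threshold are edge points."""
    return grad >= adaptive_threshold(grad, p)
```

Because the threshold is derived from each image's own gradient statistics, the same `p` adapts to images with very different contrast.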
Step S30, noise estimation is carried out on the original image with the edge point pixels removed, so as to obtain the noise level of the original image.
Noise is introduced into an image by various factors during production, acquisition, transmission, storage, and so on; the noise sources differ, and the noise density and type differ between images. If images with different degrees of noise receive the same denoising treatment, an image with little noise becomes more blurred after the processing, or an image with heavy noise is insufficiently denoised. In the present application, the purpose of noise estimation is to assign different images to different noise levels and to perform the corresponding filtering and denoising on the image at each noise level.
Referring to fig. 3, fig. 3 is a flowchart of an embodiment of a noise estimation process of the present application. Following step S20, the edge point pixels of the original image are removed; the specific steps of performing noise estimation on the original image from which the edge point pixels have been removed include:
step S301, convolution summation is carried out on the original image with the edge point pixels removed by using a noise estimation operator, and a noise estimation diagram is obtained.
The noise estimation operator N in this embodiment is obtained by modifying the Laplacian operator. The modified operator, convolved and summed with the pixels in the image, suppresses the image texture structure, thereby reducing the influence of the texture structure on the noise estimation and improving its accuracy. In other embodiments, other operators with the same or a similar effect may be used. The noise estimation operator N used in this embodiment is:

    N = [  1  -2   1 ]
        [ -2   4  -2 ]
        [  1  -2   1 ]
the convolution operation excludes other original pixels of the edge point pixels from the original image, and the embodiment also excludes pixels located in the edge rows and the edge columns of the original image, where the convolution operation results in the following steps: i (I, j) =o (I, j) ×n, I (I, j) is a pixel value of an I-th row and j-th column noise estimation pixel in the noise estimation diagram obtained by performing convolution operation on other original pixels excluding edge point pixels and pixels located in an image edge row and an image edge column in the original image by using the noise estimation operator N.
Step S302, performing an average value operation on the absolute value of the pixel value of each noise estimation pixel in the noise estimation map.
The calculation formula is as follows:

    σ_n = sqrt(π/2) · (1 / (6 · (W−2) · (H−2))) · Σ_{i,j} |I(i,j)|

wherein σ_n represents the noise standard deviation, W is the number of pixels of the original image along the row direction, and H is the number of pixels of the original image along the column direction.
Step S303, obtaining the noise estimation value of the noise estimation graph according to the result of the mean value operation.
The standard deviation obtained in step S302 is squared to obtain the variance σ_n², which is taken as the noise estimation value.
Step S304, if the noise estimation value is smaller than the first noise threshold value, the original image is judged to be a noise-free image; if the noise estimation value is between the first noise threshold value and the second noise threshold value, the original image is judged to be a low-noise image; and if the noise estimation value is larger than the second noise threshold value, judging the original image to be a high-noise image. Wherein the second noise threshold is greater than the first noise threshold and the step of similarity filtering the original image based on noise level is not performed for a noise-free image.
A first threshold T1 and a second threshold T2 are set, with T1 < T2. If the noise estimation value σ_n² obtained in step S303 is less than the first threshold, i.e. σ_n² < T1, the original image is determined to be a noise-free image, and no similarity filtering or denoising operation is performed on it. If the noise estimation value is greater than the first threshold and less than the second threshold, i.e. T1 < σ_n² < T2, the original image is determined to be a low-noise image. If the noise estimation value is greater than the second threshold, i.e. σ_n² > T2, the original image is determined to be a high-noise image. Similarity filtering and denoising with different filtering parameters are then performed on the images determined to be low-noise and high-noise, respectively.
In this embodiment, two thresholds are set, the first and the second. In other embodiments, a third threshold, a fourth threshold, and so on may be set as required, dividing the noise level of the image more finely to obtain a better denoising effect.
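Steps S301–S304 can be sketched as follows. The modified-Laplacian kernel and the constant sqrt(π/2)/(6(W−2)(H−2)) are assumptions consistent with the description above (they match the standard fast noise-variance estimator with this kernel); the thresholds passed to `noise_level` are illustrative:

```python
import numpy as np

# Assumed modified-Laplacian noise estimation operator N (step S301).
N = np.array([[1, -2, 1], [-2, 4, -2], [1, -2, 1]], dtype=float)

def noise_estimate(img, edge_mask):
    """Convolve interior pixels with N, drop edge-point pixels, average the
    absolute responses (step S302), and return the variance (step S303)."""
    h, w = img.shape
    f = img.astype(float)
    conv = np.zeros((h, w))
    for di in range(3):
        for dj in range(3):
            conv[1:h-1, 1:w-1] += N[di, dj] * f[di:h-2+di, dj:w-2+dj]
    resp = np.abs(conv[1:h-1, 1:w-1][~edge_mask[1:h-1, 1:w-1]])
    sigma = np.sqrt(np.pi / 2.0) * resp.sum() / (6.0 * (w - 2) * (h - 2))
    return sigma ** 2

def noise_level(var, t1, t2):
    """Step S304: classify the image by its noise estimate (t1 < t2)."""
    if var < t1:
        return "noise-free"
    return "low-noise" if var < t2 else "high-noise"
```

A perfectly flat image produces a zero response under N and is classified as noise-free.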
In step S40, the original image is subjected to similarity filtering based on the noise level, wherein different noise levels correspond to different filtering parameters.
Image filtering suppresses the noise of the original image while retaining as much image detail as possible. The focus of the similarity filtering in this application is to set different filter radii and similarity thresholds for different noise levels: the higher the noise level, the larger the filter radius and similarity threshold; the lower the noise level, the smaller they are. The filter radius determines the neighborhood range, and the similarity threshold determines which pixels within that range take part in the similarity filtering calculation. Further, different weighting coefficients are set according to the spatial distance between each participating pixel and the current pixel; a weighted sum is computed over all participating pixels, and the result replaces the pixel value of the current original pixel. Through this selection of similar pixels and distance-based weighting of them, the detail features of the image are well protected during denoising.
Referring to fig. 4, fig. 4 is a flowchart illustrating an embodiment of a similarity filtering process, which includes the following specific steps:
in step S401, a filter radius and a similarity threshold are set based on the noise level, wherein the larger the noise level is, the larger the filter radius and the similarity threshold are.
In this embodiment, an original image determined to be noisy may fall into one of two noise levels, high-noise or low-noise. For a high-noise image, the filter radius is set to 5 and the similarity threshold to 50; for a low-noise image, the filter radius is set to 1 and the similarity threshold to 25. Other suitable values may be set in other embodiments.
Step S402, setting a neighborhood range by using the filter radius with the current original pixel in the original image as the center.
A neighborhood of size (2R+1)×(2R+1) is set, centered on the current original pixel, using the filter radius R: the neighborhood spans [i−R, i+R] along the row direction of the original image and [j−R, j+R] along the column direction.
Step S403, searching the neighborhood range for similar pixels whose pixel values differ from the pixel value of the current original pixel by an absolute value less than or equal to the similarity threshold.
Taking the current original pixel as the reference, pixels whose pixel values differ from that of the current original pixel by no more than the similarity threshold are selected as similar pixels; in other words, the pixels within the filter radius whose pixel values fall in the interval [O(i,j) − Thr, O(i,j) + Thr] are selected as similar pixels.
Step S404, the pixel values of the similar pixels are weighted and summed.
All pixels participating in the similarity filtering are weighted and summed according to their spatial distances from the current original pixel; this embodiment uses a Gaussian kernel function for the weighted summation. In other embodiments, any other function that can compute a weighted sum of similar pixels based on spatial distance may be used.
The weight assigned to a similar pixel by the Gaussian kernel function can be expressed as:

    G = exp( −L² / (2σ²) )

where L is the spatial distance between the similar pixel and the current original pixel; as the spatial distance L increases, the weight G decreases, i.e. similar pixels farther from the current original pixel receive smaller weights, and σ is a preset coefficient. The weighted summation is computed as:

    P(i,j) = ( Σ_k G_k · O_k ) / ( Σ_k G_k )

where the sum runs over the similar pixels O_k, i.e. the pixels in the neighborhood whose values satisfy O_k ∈ [O(i,j) − Thr, O(i,j) + Thr], and G_k is the weight of the k-th similar pixel.
Step S405, substituting the result of the weighted summation calculation for the pixel value of the current original pixel.
After every original pixel in the original image has undergone the similarity filtering calculation and been replaced by its corresponding weighted-summation result, the similarity-filtered image is obtained and the purpose of eliminating noise is achieved.
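The similarity filtering of steps S401–S405 can be sketched as a direct, unoptimized loop (the Gaussian coefficient `sigma` is a preset whose default value here is an assumption):

```python
import numpy as np

def similarity_filter(img, radius, thr, sigma=10.0):
    """For each pixel, collect neighbours within `radius` whose values differ
    from the centre by at most `thr`, weight them by exp(-L^2 / (2 sigma^2))
    of the spatial distance L, and replace the centre with the normalised
    weighted sum."""
    h, w = img.shape
    f = img.astype(float)
    out = np.empty_like(f)
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - radius), min(h, i + radius + 1)
            j0, j1 = max(0, j - radius), min(w, j + radius + 1)
            num = den = 0.0
            for k in range(i0, i1):
                for l in range(j0, j1):
                    if abs(f[k, l] - f[i, j]) <= thr:  # similar pixel
                        wgt = np.exp(-((k - i) ** 2 + (l - j) ** 2)
                                     / (2.0 * sigma ** 2))
                        num += wgt * f[k, l]
                        den += wgt
            out[i, j] = num / den  # den > 0: the centre pixel always qualifies
    return out
```

Matching the parameters above, one would call `similarity_filter(img, 5, 50)` for a high-noise image and `similarity_filter(img, 1, 25)` for a low-noise one.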
In order to enhance the detail display effect, the embodiment may enhance the details of the edge point pixels on the image after the similarity filtering, for example, the following step S50.
And S50, carrying out detail enhancement on the edge point pixels on the image subjected to similarity filtering.
While eliminating noise in the original image, the similarity filtering causes some edge point pixels to be lost; this loss blurs the image and worsens the visual effect. The lost edge point pixels can be restored or enhanced through edge point detail enhancement.
Referring to fig. 5, fig. 5 is a flowchart of an embodiment of edge point detail enhancement according to the present application, and the specific steps are as follows:
in step S501, it is determined whether the absolute value of the difference between the pixel value of the edge point pixel on the original image after the similarity filtering and the pixel value of the edge point pixel on the original image before the similarity filtering is greater than or equal to a preset difference threshold, and if so, the sum operation is performed on the preset ratio of the pixel value of the edge point pixel on the original image after the similarity filtering and the difference value.
Using the edge map of the original image obtained in step S20, the edge point pixels are picked out in both the similarity-filtered image and the original image; the pixel value of each edge point pixel in the original image is then subtracted from its pixel value in the similarity-filtered image, diff(i,j) = P(i,j) − O(i,j), giving the difference image diff.
The absolute value of the difference is compared with the preset difference threshold. If it is greater than the threshold, the preset ratio of the difference is added to the pixel value of the corresponding edge point pixel in the similarity-filtered image to obtain a new pixel value, which replaces the old value in the filtered image. If the absolute value of the difference is less than or equal to the threshold, no processing is performed, and the edge point pixel keeps its pixel value from the similarity-filtered image.
The specific calculation formula is as follows:

    D(i,j) = P(i,j) + A · diff(i,j),   if |diff(i,j)| ≥ C
    D(i,j) = P(i,j),                   otherwise

wherein C is the preset difference threshold, P(i,j) is the filtered image, D(i,j) is the detail-enhanced image, and A is the preset ratio parameter; the difference threshold and the ratio parameter can be set as required.
In step S502, it is determined whether the summation result obtained in step S501 is smaller than a preset minimum allowable value or larger than a preset maximum allowable value. If the summation result is smaller than the minimum allowable value, it is set to the minimum allowable value; if it is larger than the maximum allowable value, it is set to the maximum allowable value; if it lies within the range bounded by the minimum and maximum allowable values, it keeps its original value. In this embodiment, the minimum allowable value is set to 0 and the maximum allowable value to 255.
The clamped value of the summation result can be expressed as:

    D(i,j) = min( max( D(i,j), 0 ), 255 )
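Steps S501–S502 can be sketched as follows (the ratio parameter `a` and difference threshold `c` are illustrative values, not values given by the patent):

```python
import numpy as np

def enhance_edges(orig, filt, edge_mask, a=0.5, c=4.0, lo=0.0, hi=255.0):
    """Step S501: on edge-point pixels where |diff| >= c, add a * diff back
    to the filtered value (diff = P - O, as in the text).
    Step S502: clamp the result to [lo, hi]."""
    diff = filt.astype(float) - orig.astype(float)
    out = filt.astype(float).copy()
    boost = edge_mask & (np.abs(diff) >= c)
    out[boost] += a * diff[boost]
    return np.clip(out, lo, hi)
```

The clamp guarantees the enhanced values stay within the valid 8-bit pixel range.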
so far, through the steps, the original image to be processed can be subjected to denoising processing, and a clear image after noise elimination is output.
Corresponding to the image denoising method above, the application further provides an image denoising device, which is described in detail below with reference to the drawings and embodiments.
Fig. 6 is a schematic block diagram of an embodiment of an image denoising apparatus of the present application. As shown in fig. 6, the image denoising apparatus of this embodiment includes: the device comprises an acquisition module 100, an edge point detection module 200, a noise estimation module 300 and a similarity filtering module 400.
The acquisition module 100 is used for acquiring an original image. The edge point detection module 200 is configured to perform edge point detection on an original image to determine edge point pixels in original pixels of the original image. The noise estimation module 300 is configured to perform noise estimation on an original image from which edge point pixels are removed, so as to obtain a noise level of the original image. The similarity filtering module 400 is configured to perform similarity filtering on the original image based on noise levels, where different noise levels correspond to different filtering parameters.
Optionally, the edge point detection module 200 is further configured to perform a convolution operation on the original image with a gradient operator to obtain a gradient map; perform histogram statistics on the pixel values of the gradient pixels in the gradient map and generate a gradient threshold from the histogram statistics; and take the original pixels corresponding to gradient pixels whose pixel value is greater than or equal to the gradient threshold as edge point pixels.
Further, the edge point detection module 200 is configured to count the number of gradient pixels whose pixel values fall within different gradient sections; sum the products of the number of gradient pixels in each gradient section and the section's representative value, taking the representative values from small to large, until the summation result is greater than or equal to a preset percentage of the total pixel value of the gradient map; and take the largest representative value that took part in the summation as the gradient threshold.
Optionally, the noise estimation module 300 is further configured to perform a convolution operation on the original image with the edge point pixels removed by using a noise estimation operator, so as to obtain a noise estimation map; carrying out average value operation on the absolute value of the pixel value of each noise estimation pixel in the noise estimation diagram; and obtaining a noise estimation value of the noise estimation graph according to the average value operation result.
Optionally, the noise estimation operator is the 3×3 kernel:

    N = [  1  -2   1 ]
        [ -2   4  -2 ]
        [  1  -2   1 ]
Further, the noise estimation module 300 calculates the noise standard deviation of the noise estimation map by the following formula:

    σ_n = sqrt(π/2) · (1 / (6(W−2)(H−2))) · Σ_{i,j} |I(i,j)|

wherein σ_n is the noise standard deviation; I(i,j) is the pixel value of the noise estimation pixel in the ith row and jth column of the noise estimation map, obtained by convolving, with the noise estimation operator, the original pixels that are not located in the edge rows and edge columns of the original image; W is the number of pixels of the original image along the row direction; and H is the number of pixels along the column direction. The step of obtaining the noise estimation value of the noise estimation map according to the result of the mean value operation includes: squaring the noise standard deviation to obtain the noise variance of the noise estimation map as the noise estimation value.
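The operator and the scaling in the formula above match a well-known fast noise-variance estimator (Immerkaer's method); under that assumption, the estimation step can be sketched in pure NumPy as:

```python
import numpy as np

# 3x3 noise estimation operator from the text above.
N_OP = np.array([[ 1., -2.,  1.],
                 [-2.,  4., -2.],
                 [ 1., -2.,  1.]])

def noise_variance(img):
    """Convolve the image with N_OP over the valid region (so the border
    rows and columns are excluded), average the absolute responses with
    the sqrt(pi/2)/6 scaling to get a standard deviation, then square it
    to obtain the noise-variance estimate."""
    img = np.asarray(img, dtype=float)
    H, W = img.shape
    resp = np.zeros((H - 2, W - 2))
    for dy in range(3):
        for dx in range(3):  # kernel is symmetric, so correlation == convolution
            resp += N_OP[dy, dx] * img[dy:dy + H - 2, dx:dx + W - 2]
    sigma = np.sqrt(np.pi / 2) * np.abs(resp).sum() / (6 * (W - 2) * (H - 2))
    return sigma ** 2
```

The patent additionally removes edge point pixels before averaging; that masking step is omitted here for brevity.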
Further, if the noise estimation value is smaller than the first noise threshold value, the original image is judged to be a noise-free image; if the noise estimation value is between the first noise threshold value and the second noise threshold value, the original image is judged to be a low-noise image; if the noise estimation value is larger than the second noise threshold value, judging that the original image is a high-noise image; wherein the second noise threshold is greater than the first noise threshold and the step of similarity filtering the original image based on the noise level is not performed for the noise-free image.
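The three-way classification above can be written directly. The threshold values used in the test are placeholders, since the patent gives no concrete numbers, and treating a value exactly equal to the second threshold as low noise is an assumption:

```python
def classify_noise(noise_estimate, first_threshold, second_threshold):
    """Map the noise-variance estimate to a noise level; requires
    first_threshold < second_threshold."""
    if noise_estimate < first_threshold:
        return "noise_free"   # similarity filtering is skipped entirely
    if noise_estimate <= second_threshold:
        return "low_noise"
    return "high_noise"
```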
Optionally, the similarity filtering module 400 sets a filter radius and a similarity threshold based on the noise level, wherein the greater the noise level, the greater the filter radius and the similarity threshold; setting a neighborhood range by using a filtering radius with a current original pixel in an original image as a center; searching similar pixels with the absolute value of the difference value of the pixel value of the current original pixel smaller than or equal to a similarity threshold value in a neighborhood range; performing weighted summation on pixel values of similar pixels; and replacing the pixel value of the current original pixel by using the summation result.
Further, the similarity filtering module 400 sets the summation weights corresponding to the similar pixels based on the spatial distances of the current original pixel and the similar pixels on the original image, wherein the larger the spatial distance is, the smaller the summation weights are.
Still further, the summation weight used by the similarity filtering module 400 satisfies the following formula:

    G = exp(−L² / (2σ²))

wherein G is the summation weight of a similar pixel, L is the spatial distance between that similar pixel and the current original pixel, and σ is a preset coefficient.
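Combining the neighborhood search, the similarity test, and the Gaussian spatial weight gives a direct (unoptimized) sketch. Normalizing the weighted sum by the total weight is an assumption, as the text only says the summation result replaces the pixel value:

```python
import numpy as np

def similarity_filter(img, radius, sim_thresh, sigma):
    """For each pixel, average the neighbors within `radius` whose value
    differs from the center by at most `sim_thresh`, weighting each by
    G = exp(-L^2 / (2 * sigma^2)), where L is the spatial distance."""
    img = np.asarray(img, dtype=float)
    H, W = img.shape
    out = np.empty_like(img)
    for y in range(H):
        for x in range(W):
            y0, y1 = max(0, y - radius), min(H, y + radius + 1)
            x0, x1 = max(0, x - radius), min(W, x + radius + 1)
            patch = img[y0:y1, x0:x1]
            yy, xx = np.mgrid[y0:y1, x0:x1]
            dist2 = (yy - y) ** 2 + (xx - x) ** 2
            similar = np.abs(patch - img[y, x]) <= sim_thresh
            w = np.exp(-dist2 / (2.0 * sigma ** 2)) * similar
            # the center pixel always qualifies, so w.sum() > 0
            out[y, x] = (w * patch).sum() / w.sum()
    return out
```

Per the text, a higher noise level would be mapped to a larger `radius` and `sim_thresh` before calling this function.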
Optionally, the image denoising apparatus further includes a detail enhancement module 500, configured to perform detail enhancement on edge point pixels on the original image after similarity filtering.
Further, the detail enhancement module 500 is configured to judge whether the absolute value of the difference between the pixel value of an edge point pixel on the similarity-filtered original image and the pixel value of the same edge point pixel on the original image before filtering is greater than or equal to a preset difference threshold; if it is greater than or equal to the difference threshold, a preset proportion of the difference value is added to the pixel value of the edge point pixel on the similarity-filtered original image.
Still further, the detail enhancement module 500 is configured to judge whether the summation result is smaller than a preset minimum allowable value or larger than a preset maximum allowable value; if it is smaller than the minimum allowable value, the summation result is set to the minimum allowable value; if it is larger than the maximum allowable value, the summation result is set to the maximum allowable value.
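The enhancement and clamping steps can be combined into one function. The [0, 255] range is an assumed 8-bit example, and `ratio` stands in for the patent's "preset proportion":

```python
import numpy as np

def enhance_edges(filtered, original, edge_mask, diff_thresh, ratio,
                  min_allowed=0.0, max_allowed=255.0):
    """At edge points where filtering changed the pixel by at least
    `diff_thresh`, add back `ratio` of the lost difference, then clamp
    the result to [min_allowed, max_allowed]."""
    out = np.asarray(filtered, dtype=float).copy()
    diff = np.asarray(original, dtype=float) - out
    sel = edge_mask & (np.abs(diff) >= diff_thresh)
    out[sel] += ratio * diff[sel]
    return np.clip(out, min_allowed, max_allowed)
```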
For descriptions of functions, processes and the like implemented by each functional module of the image denoising apparatus, please refer to the descriptions of corresponding steps of the embodiment of the image denoising method of the present application, and the descriptions are omitted herein.
Referring to fig. 7, fig. 7 is a schematic block diagram of a circuit structure of an embodiment of the image denoising apparatus of the present application. As shown in fig. 7, the image denoising apparatus includes a processor 11 and a memory 12 coupled to each other. The memory 12 has stored therein a computer program, and the processor 11 is configured to execute the computer program to implement the steps of the image denoising method embodiment of the present application as described above.
For the description of each step of the processing execution, please refer to the description of each step of the embodiment of the image denoising method of the present application, and the description is omitted herein.
In the embodiments of the present application, the disclosed image denoising method and image denoising apparatus may be implemented in other manners. For example, the embodiments of the image denoising apparatus described above are merely illustrative, for example, the division of the modules or units is merely a logical function division, and there may be another division manner in actual implementation, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
The foregoing description is only exemplary embodiments of the present application and is not intended to limit the scope of the present application, and all equivalent structures or equivalent processes using the descriptions and the drawings of the present application, or direct or indirect application in other related technical fields are included in the scope of the present application.

Claims (10)

1. A method of denoising an image, the method comprising:
acquiring an original image;
performing edge point detection on the original image to determine edge point pixels in original pixels of the original image;
performing noise estimation on the original image from which the edge point pixels are removed to obtain a noise level of the original image, including: performing convolution operation on the original image with the edge point pixels removed by using a noise estimation operator to obtain a noise estimation diagram; carrying out average value operation on the absolute value of the pixel value of each noise estimation pixel in the noise estimation diagram; square operation is carried out on the average value operation result to obtain a noise variance of the noise estimation graph as a noise estimation value;
performing similarity filtering on the original image based on the noise level, including: setting a filter radius and a similarity threshold based on the noise level, wherein the larger the noise level is, the larger the filter radius and the similarity threshold are; setting a neighborhood range by using the filtering radius with the current original pixel in the original image as a center; searching similar pixels with absolute values of differences from the pixel values of the current original pixels smaller than or equal to the similarity threshold value in the neighborhood range; carrying out weighted summation on the pixel values of the similar pixels; replacing the pixel value of the current original pixel by using the weighted summation result;
wherein different ones of said noise levels correspond to different ones of the filter parameters.
2. The method of claim 1, wherein the step of edge point detecting the original image comprises:
performing convolution operation on the original image by using a gradient operator to obtain a gradient map;
carrying out histogram statistics on pixel values of gradient pixels in the gradient map, and generating a gradient threshold according to a histogram statistics result;
and taking the original pixel corresponding to the gradient pixel with the pixel value of the gradient pixel being larger than or equal to the gradient threshold value as the edge point pixel.
3. The method of claim 2, wherein the step of histogram-counting pixel values of each gradient pixel in the gradient map and generating a gradient threshold based on the histogram statistics comprises:
counting the number of gradient pixels whose pixel values fall within different gradient sections;
summing, in ascending order of the sections' representative values, the products of the number of gradient pixels in each gradient section and that section's representative value, until the summation result is greater than or equal to a preset percentage of the total pixel value of the gradient map;
taking the maximum representative value among the gradient sections participating in the summation as the gradient threshold.
4. The method of claim 1, wherein the step of noise estimating the original image from which the edge point pixels are removed further comprises:
if the noise estimation value is smaller than a first noise threshold value, judging that the original image is a noise-free image;
if the noise estimation value is between the first noise threshold value and the second noise threshold value, judging that the original image is a low-noise image;
if the noise estimation value is larger than the second noise threshold value, judging that the original image is a high-noise image;
wherein the second noise threshold is greater than the first noise threshold and the step of similarity filtering the original image based on the noise level is not performed for the noiseless image.
5. The method of claim 1, wherein the step of weighted summing the pixel values of the similar pixels comprises:
and setting a summation weight corresponding to each similar pixel on the basis of the spatial distance between the current original pixel and each similar pixel on the original image, wherein the summation weight is smaller as the spatial distance is larger.
6. The method according to claim 1, wherein the method further comprises:
and carrying out detail enhancement on the edge point pixels on the original image after similarity filtering.
7. The method of claim 6, wherein the step of detail enhancing the edge point pixels on the similarity filtered original image comprises:
judging whether the absolute value of the difference value between the pixel value of the edge point pixel on the original image after the similarity filtering and the pixel value of the edge point pixel on the original image before the similarity filtering is larger than or equal to a preset difference threshold value;
and if the absolute value of the difference is greater than or equal to the difference threshold, performing a summation operation on the pixel value of the edge point pixel on the similarity-filtered original image and a preset proportion of the difference value.
8. The method of claim 7, wherein the step of detail enhancing the edge point pixels on the similarity filtered original image further comprises:
judging whether the summation result is smaller than a preset minimum allowable value or larger than a preset maximum allowable value;
if the sum is smaller than the minimum allowable value, setting the sum result to the minimum allowable value;
and if the sum is larger than the maximum allowable value, setting the sum result to the maximum allowable value.
9. An image denoising apparatus, comprising a processor and a memory; the memory has stored therein a computer program, the processor being adapted to execute the computer program to carry out the steps of the method according to any of claims 1-8.
10. A computer storage medium, characterized in that it stores a computer program which, when executed, implements the steps of the method according to any of claims 1-8.
CN201910994941.4A 2019-10-18 2019-10-18 Image denoising method, device and storage medium Active CN110796615B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910994941.4A CN110796615B (en) 2019-10-18 2019-10-18 Image denoising method, device and storage medium


Publications (2)

Publication Number Publication Date
CN110796615A CN110796615A (en) 2020-02-14
CN110796615B true CN110796615B (en) 2023-06-02

Family

ID=69439330

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910994941.4A Active CN110796615B (en) 2019-10-18 2019-10-18 Image denoising method, device and storage medium

Country Status (1)

Country Link
CN (1) CN110796615B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111402172B (en) * 2020-03-24 2023-08-22 湖南国科微电子股份有限公司 Image noise reduction method, system, equipment and computer readable storage medium
CN111445427B (en) * 2020-05-20 2022-03-25 青岛信芯微电子科技股份有限公司 Video image processing method and display device
CN112651886B (en) * 2020-11-25 2022-10-11 杭州微帧信息科技有限公司 Method for removing color bands in enhanced image by combining original image
CN112862708B (en) * 2021-01-27 2024-02-23 牛津仪器科技(上海)有限公司 Adaptive recognition method of image noise, sensor chip and electronic equipment
CN113793257A (en) * 2021-09-15 2021-12-14 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment and computer readable storage medium
CN113989309A (en) * 2021-10-12 2022-01-28 豪威科技(武汉)有限公司 Edge detection method and readable storage medium
WO2023182848A1 (en) * 2022-03-25 2023-09-28 주식회사 엘지경영개발원 Artificial intelligence model training device for applying priority based on signal-to-noise ratio, and artificial intelligence model training method using same
CN117423113B (en) * 2023-12-18 2024-03-05 青岛华正信息技术股份有限公司 Adaptive denoising method for archive OCR (optical character recognition) image

Citations (2)

Publication number Priority date Publication date Assignee Title
JP2003046781A (en) * 2001-07-31 2003-02-14 Canon Inc Method and device for image processing
CN109064418A (en) * 2018-07-11 2018-12-21 成都信息工程大学 A kind of Images Corrupted by Non-uniform Noise denoising method based on non-local mean

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US7983501B2 (en) * 2007-03-29 2011-07-19 Intel Corporation Noise detection and estimation techniques for picture enhancement
CN103186888B (en) * 2011-12-30 2017-11-21 Ge医疗系统环球技术有限公司 A kind of method and device of removal CT picture noises
JP6349614B2 (en) * 2015-05-15 2018-07-04 エスゼット ディージェイアイ テクノロジー カンパニー リミテッドSz Dji Technology Co.,Ltd Image processing method and image processing system


Non-Patent Citations (2)

Title
Noise intensity estimation introducing the Canny edge detection operator; Wang Jingquan; Journal of Hunan Institute of Engineering; Dec. 2008; Vol. 18, No. 4; pp. 59-60, Section 2 *
Zhang Xinyou; Medical image enhancement; in: Medical Graphics and Image Processing (New Century 3rd Edition), a national TCM-industry higher-education "13th Five-Year Plan" textbook; 2018 *


Similar Documents

Publication Publication Date Title
CN110796615B (en) Image denoising method, device and storage medium
Li et al. Three-component weighted structural similarity index
KR101112139B1 (en) Apparatus and method for estimating scale ratio and noise strength of coded image
US7522782B2 (en) Digital image denoising
CN109584198B (en) Method and device for evaluating quality of face image and computer readable storage medium
CN109743473A (en) Video image 3 D noise-reduction method, computer installation and computer readable storage medium
US10013772B2 (en) Method of controlling a quality measure and system thereof
WO2011011445A1 (en) System and method for random noise estimation in a sequence of images
CN107230208B (en) Image noise intensity estimation method of Gaussian noise
CN111445424B (en) Image processing method, device, equipment and medium for processing mobile terminal video
CN112862753B (en) Noise intensity estimation method and device and electronic equipment
CN110992264B (en) Image processing method, processing device, electronic equipment and storage medium
CN115393216A (en) Image defogging method and device based on polarization characteristics and atmospheric transmission model
CN106846262B (en) Method and system for removing mosquito noise
CN110136085B (en) Image noise reduction method and device
van Zyl Marais et al. Robust defocus blur identification in the context of blind image quality assessment
Chen et al. A universal reference-free blurriness measure
CN111161177A (en) Image self-adaptive noise reduction method and device
CN115829967A (en) Industrial metal surface defect image denoising and enhancing method
KR20200099834A (en) Imaging processing device for motion detection and method thereof
RU2405200C2 (en) Method and device for fast noise filtration in digital images
Tsai et al. Foveation-based image quality assessment
CN115063302A (en) Effective removal method for salt and pepper noise of fingerprint image
CN114723663A (en) Preprocessing defense method aiming at target detection and resisting attack
Storozhilova et al. 2.5 D extension of neighborhood filters for noise reduction in 3D medical CT images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant