CN104778669A - Fast image denoising method and device - Google Patents

Info

Publication number: CN104778669A (application CN201510181277.3A)
Granted publication: CN104778669B
Authority: CN (China)
Original language: Chinese (zh)
Inventor: 王学丽
Assignee (original and current): Beijing University of Posts and Telecommunications
Legal status: Granted; Expired - Fee Related

Classifications

  • Image Processing (AREA)

Abstract

The invention provides a fast image denoising method and device. The method comprises the following steps: calculating, for each pixel point of an original noisy image, the gradient mean over a first neighborhood, and judging whether the gradient mean is greater than or equal to a preset first threshold; if so, respectively calculating weights between the pixel point and each pixel point within the first neighborhood, computing the weighted mean of the gray values, and using this weighted mean as the output gray value of the pixel at the same position in the denoised image; if the gradient mean is less than the first threshold, calculating the mean gray value over a second neighborhood of the pixel point and using it as the output gray value of the pixel at the same position in the denoised image. The disclosed method can filter out random noise while preserving the edge and corner details of the scene in the image.

Description

Rapid image denoising method and device
Technical Field
The invention relates to the field of image processing technology, in particular to a rapid image denoising method and device.
Background
Because an image acquisition system is subject to interference from various random signals such as temperature and electromagnetic waves, obvious noise sometimes appears in the acquired image: many features of the image are masked by the noise, some details become unidentifiable, and both the visual effect and the data quality of the image suffer. Researching image processing techniques that weaken the influence of random noise, increase image contrast and definition, and guarantee the quality of image information, so that a computer vision system can work reliably and stably under signal interference, therefore has very important theoretical and practical value.
Although existing image smoothing filters are simple, they can only remove extremely strong noise with extreme gray-level distributions. Moreover, smoothing the image weakens the influence of noise on the one hand, but on the other hand blurs the edges and corners of the original scene, losing much detail information and degrading both the visual effect and the recognition effect.
Disclosure of Invention
The invention provides a rapid image denoising method and a rapid image denoising device, which are used for solving the problem that the prior art is easy to lose original edge information in the image denoising process.
In a first aspect, the present invention provides a fast image denoising method, including:
calculating a gradient average value in a first neighborhood range of each pixel point of the original noisy image, and judging whether the gradient average value is larger than a preset first threshold value or not; wherein the size of the first neighborhood is a preset value;
if the gradient mean is greater than or equal to the preset first threshold, respectively calculating weights between the pixel point and each pixel point within the first neighborhood, calculating a weighted mean of gray values according to the weights and the corresponding gray values, and taking this weighted mean as the output gray value of the pixel point at the same position in the denoised image;
if the gradient mean is smaller than the preset first threshold, calculating the mean gray value over a second neighborhood of the pixel point and taking it as the output gray value of the pixel point at the same position in the denoised image, wherein the size of the second neighborhood is a preset value.
Optionally, before the calculating the weights of the pixel point and the pixel point in the first neighborhood range, the method includes:
respectively calculating the absolute difference between the gradient mean of the pixel point over its first neighborhood and the gradient mean, over its own first neighborhood, of each pixel point within that first neighborhood;
and judging whether the absolute difference value is greater than or equal to a preset second threshold value.
Optionally, the method further comprises:
and if the absolute difference value is smaller than the preset second threshold, setting the weight value of the pixel point and the pixel point in the first neighborhood range to be 0.
Optionally, the calculating a gradient average value in a first neighborhood range of each pixel point of the original noisy image includes:
calculating the gradient average value by the following formula (1):
$$\bar{G}(x) = \Big(\sum_{y \in N(x)} G(y)\Big) \Big/ s^2 \qquad (1)$$
wherein x is any pixel point of the original noisy image, $\bar{G}(x)$ is the gradient mean over the first neighborhood of x, N(x) is the first neighborhood of size s × s centered at x, y is a point within that first neighborhood, and G(y) is the gray value of the gradient image of the original noisy image at y; s is a preset value.
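As a minimal sketch of formula (1), the gradient mean can be computed as a box average of a precomputed gradient-magnitude image. The function name, the edge-padding border handling, and the use of NumPy are illustrative assumptions, since the patent does not specify how borders are treated:

```python
import numpy as np

def gradient_mean(grad, s):
    """Formula (1): mean of the gradient image `grad` over the s-by-s
    first neighborhood N(x) centered at each pixel x."""
    pad = s // 2
    # Edge padding is an assumption; the patent does not specify borders.
    padded = np.pad(grad, pad, mode='edge')
    h, w = grad.shape
    out = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            # Sum over N(x), divided by s*s, exactly as in formula (1).
            out[i, j] = padded[i:i + s, j:j + s].sum() / (s * s)
    return out
```

In practice this box average would be vectorized (e.g. with an integral image) rather than looped, which matters at the s values the patent suggests (15 < s < 31).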
Optionally, the respectively calculating the absolute difference between the gradient mean of the pixel point over its first neighborhood and the gradient mean of each pixel point within that first neighborhood includes:
calculating the absolute difference value by the following formula (2):
$$\Delta G(x, y) = \left| \bar{G}(x) - \bar{G}(y) \right| \qquad (2)$$
wherein $\bar{G}(x)$ is the gradient mean over the first neighborhood of x, $\bar{G}(y)$ is the gradient mean over the first neighborhood of y, and ΔG(x, y) is the absolute difference between the two gradient means.
Optionally, the calculating weights of the pixel point and the pixel point in the first neighborhood range respectively includes:
if the absolute difference is greater than or equal to the preset second threshold, calculating a weight between the pixel point and the pixel point in the first neighborhood range according to the following formula (3):
$$W(x, y) = e^{-\|N(x) - N(y)\|_2 / (2\sigma^2)} \qquad (3)$$
wherein $\|N(x) - N(y)\|_2 = \sum_{i \in N(x),\, j \in N(y)} (I(i) - I(j))^2 \big/ s^2$;
wherein, the x point is a pixel point of the original noisy image, the absolute difference value of which is greater than or equal to the preset second threshold, and the y point is a point within the first neighborhood range by taking the x point as a center; n (x) is a first neighborhood region with x point as the center and with the size of s multiplied by s, and N (y) is a first neighborhood region with y point as the center and with the size of s multiplied by s; the point i is a pixel point which takes the point x as the center and is in the first neighborhood range; the j point is a pixel point which takes the y point as the center and is in the first neighborhood range; i (i) is the gray value of the original noisy image at the point i; i (j) is the gray value of the original noisy image at the j point; s is a preset value.
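A minimal sketch of formula (3) follows. The patch distance is interpreted here as the mean squared difference of corresponding pixels of the two s × s patches, and the decaying (negative) exponent follows the usual non-local-means form; both readings, along with the function names, are assumptions made for illustration:

```python
import numpy as np

def patch_distance(img, x, y, s):
    """||N(x) - N(y)||_2 as defined under formula (3): mean squared
    gray-level difference of the two s-by-s patches centered at x and
    y, read here as a comparison of corresponding patch pixels."""
    r = s // 2
    (xi, xj), (yi, yj) = x, y
    px = img[xi - r:xi + r + 1, xj - r:xj + r + 1]
    py = img[yi - r:yi + r + 1, yj - r:yj + r + 1]
    return float(((px - py) ** 2).sum() / (s * s))

def nlm_weight(img, x, y, s, sigma):
    """Formula (3): similarity weight W(x, y); the negative sign of the
    exponent is assumed, following the standard non-local-means form."""
    return float(np.exp(-patch_distance(img, x, y, s) / (2.0 * sigma ** 2)))
```

With this form, identical patches yield a weight of 1 and increasingly dissimilar patches decay toward 0, which matches the role the weights play in the accumulation formulas that follow.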
Optionally, the value range of σ is 10< σ < 15.
Optionally, the calculating a weighted average of the gray values of the pixel points according to the weight values and the gray values corresponding to the pixel points includes:
calculating the accumulated value of the weight values corresponding to the pixel points, including:
accumulating each weight W(x, y) into a matrix W0, calculated by the following formulas (4) and (5):
W0(x)=W0(x)+W(x,y) (4)
W0(y)=W0(y)+W(x,y) (5)
wherein W0(x) is the value of the matrix W0 at point x, and W0(y) is the value of the matrix W0 at point y; the matrix W0 has the same size as the original noisy image and is initialized to 0;
calculating a weighted accumulated value of the gray value of each pixel point, including:
respectively accumulating the products of the weight W(x, y) and the gray values of the corresponding pixel points in the first neighborhood into a matrix C0, calculated by the following formulas (6) and (7):
C0(x)=C0(x)+W(x,y)×I(y) (6)
C0(y)=C0(y)+W(x,y)×I(x) (7)
wherein I(x) is the gray value of the original noisy image at x, and I(y) is the gray value of the original noisy image at y; C0(x) is the value of the matrix C0 at point x, and C0(y) is the value at point y; the matrix C0 has the same size as the original noisy image and is initialized to 0;
calculating the weighted average value of the pixel point normalized gray value, comprising:
calculating a weighted average of the pixel point normalized gray values by the following formula (8):
I'(v)=C0(v)/W0(v) (8)
wherein v is a pixel point of the original noisy image whose absolute difference is greater than or equal to the preset second threshold; C0(v) is the weighted accumulation of gray values at v in the matrix C0, and W0(v) is the accumulated weight at v in the matrix W0; I'(v) is the output gray value of the denoised image at v.
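The symmetric accumulation of formulas (4)-(7) and the normalization of formula (8) can be sketched as follows. Exploiting W(x, y) = W(y, x) lets each pixel pair be visited once while crediting both accumulators; the function names and NumPy usage are illustrative:

```python
import numpy as np

def accumulate_symmetric(W0, C0, I, x, y, w):
    """Formulas (4)-(7): since W(x, y) = W(y, x), each pixel pair is
    visited once and the weight and weighted gray value are credited
    to both accumulator matrices."""
    W0[x] += w         # formula (4)
    W0[y] += w         # formula (5)
    C0[x] += w * I[y]  # formula (6)
    C0[y] += w * I[x]  # formula (7)

def normalize(C0, W0):
    """Formula (8): normalized weighted gray mean I'(v) = C0(v) / W0(v)."""
    return C0 / W0
```

This halving of the pair visits is one of the "symmetry" accelerations the description later credits for the method's speed.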
Optionally, the calculating the average value of the gray values in the second neighborhood range of the pixel point includes:
the gray value average value is calculated by the following formula (9):
$$I'(u) = \Big(\sum_{y \in \Psi(u)} I(y)\Big) \Big/ a^2 \qquad (9)$$
wherein u is a pixel point of the original noisy image whose gradient mean is smaller than the preset first threshold; I'(u) is the output gray value of the denoised image at u; Ψ(u) is the second neighborhood of size a × a centered at u; y is a point within that second neighborhood; I(y) is the gray value of the original noisy image at y; a is a preset value.
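Formula (9) is a plain box mean over the second neighborhood. A minimal sketch, where the function name and the restriction to interior pixels are assumptions:

```python
import numpy as np

def mean_filter_at(img, u, a):
    """Formula (9): plain a-by-a gray mean over the second neighborhood
    Psi(u), used for pixels in flat regions (gradient mean below the
    first threshold).  `u` is assumed to be an interior pixel."""
    r = a // 2
    i, j = u
    return float(img[i - r:i + r + 1, j - r:j + r + 1].sum() / (a * a))
```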
Optionally, the value range of s is 15< s < 31; the value range of a is as follows: s +5< a < s + 10.
Optionally, the preset first threshold is 50.
In a second aspect, the present invention provides a fast image denoising apparatus, including:
the first processing module is used for calculating a gradient average value in a first neighborhood range of each pixel point of the original noisy image and judging whether the gradient average value is larger than a preset first threshold value or not; wherein the size of the first neighborhood is a preset value;
the second processing module is used for, if the gradient mean is greater than or equal to the preset first threshold, respectively calculating weights between the pixel point and each pixel point within the first neighborhood, calculating the weighted mean of the gray values according to the weights and the corresponding gray values, and taking this weighted mean as the output gray value of the pixel point at the same position in the denoised image;
and the third processing module is used for, if the gradient mean is smaller than the preset first threshold, calculating the mean gray value over a second neighborhood of the pixel point and taking it as the output gray value of the pixel point at the same position in the denoised image, wherein the size of the second neighborhood is a preset value.
With the fast image denoising method and device of the invention, the gradient mean over a first neighborhood of each pixel point of the original noisy image is calculated and compared with a preset first threshold, the size of the first neighborhood being a preset value. If the gradient mean is greater than or equal to the first threshold, weights between the pixel point and each pixel point within the first neighborhood are calculated, the weighted mean of the gray values is computed from the weights and the corresponding gray values, and this weighted mean is taken as the output gray value of the pixel at the same position in the denoised image. If the gradient mean is smaller than the first threshold, the mean gray value over a second neighborhood of the pixel point, whose size is also a preset value, is taken as the output gray value instead. In this way, fairly severe random noise of unknown distribution can be filtered out while the edge and corner details of the scene in the image are well preserved, the method is efficient, and the problem that original edge information is easily lost during image denoising is solved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a flowchart illustrating a fast image denoising method according to an embodiment of the present invention;
FIG. 2A is a schematic diagram of an original noisy image according to an embodiment of the present invention;
FIG. 2B is a schematic diagram of the image of FIG. 2A after denoising by a conventional adaptive median filtering method;
FIG. 2C is a schematic diagram of the image of FIG. 2A after being denoised by the fast image denoising method of the present invention;
FIG. 3A is a schematic diagram of an original noisy image according to an embodiment of the method of the present invention;
FIG. 3B is a schematic diagram of the image of FIG. 3A after denoising by a conventional adaptive median filtering method;
FIG. 3C is a schematic diagram of the image of FIG. 3A after being denoised by the fast image denoising method of the present invention;
FIG. 4A is a schematic diagram of an original noisy image according to an embodiment of the present invention;
FIG. 4B is a schematic diagram of the image of FIG. 4A after denoising by the conventional adaptive median filtering method;
FIG. 4C is a schematic diagram of the image of FIG. 4A after being denoised by the fast image denoising method of the present invention;
FIG. 5 is a schematic structural diagram of an embodiment of a fast image denoising apparatus according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present invention without any creative work belong to the protection scope of the present invention.
In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Fig. 1 is a flowchart of an embodiment of a fast image denoising method according to the present invention, and as shown in fig. 1, the method of this embodiment may include:
step 101, calculating a gradient average value in a first neighborhood range of each pixel point of an original noisy image, and judging whether the gradient average value is greater than a preset first threshold value or not; wherein the size of the first neighborhood is a preset value;
step 102, if the gradient average value is greater than or equal to a preset first threshold, respectively calculating weights of the pixel points and the pixel points in the first neighborhood range, calculating a weighted average value of gray values of the pixel points according to the weights and the gray values corresponding to the pixel points, and taking the weighted average value of the gray values as an output gray value of the pixel points at the same positions as the pixel points in the denoised image;
step 103, if the gradient average value is smaller than the preset first threshold, calculating a gray value average value in a second neighborhood range of the pixel point, and taking the gray value average value as an output gray value of the pixel point at the same position as the pixel point in the denoised image, wherein the size of the second neighborhood is a preset value.
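Steps 101-103 can be sketched end-to-end as follows. This is a simplified illustration, not the patent's optimized implementation: the gradient image, the restriction of the weight search to the first neighborhood, the border handling (border pixels copied through), and the weight form are all assumptions:

```python
import numpy as np

def gated_denoise(img, t1, s, a, sigma):
    """Sketch of steps 101-103.  For each interior pixel, the s-by-s
    gradient mean is compared with the first threshold t1: at or above
    t1 a patch-weighted (non-local-mean style) average over the first
    neighborhood is used; below t1, a plain a-by-a mean.  Border pixels
    are copied through unchanged, an assumption made for brevity."""
    img = img.astype(float)
    gy, gx = np.gradient(img)
    grad = np.hypot(gx, gy)          # gradient-magnitude image G
    out = img.copy()
    rs, ra = s // 2, a // 2
    margin = max(2 * rs, ra)
    h, w = img.shape
    for i in range(margin, h - margin):
        for j in range(margin, w - margin):
            gbar = grad[i - rs:i + rs + 1, j - rs:j + rs + 1].mean()
            if gbar >= t1:
                # Step 102: weighted mean over the first neighborhood.
                px = img[i - rs:i + rs + 1, j - rs:j + rs + 1]
                num = den = 0.0
                for di in range(-rs, rs + 1):
                    for dj in range(-rs, rs + 1):
                        yi, yj = i + di, j + dj
                        py = img[yi - rs:yi + rs + 1, yj - rs:yj + rs + 1]
                        wgt = np.exp(-((px - py) ** 2).mean()
                                     / (2 * sigma ** 2))
                        num += wgt * img[yi, yj]
                        den += wgt
                out[i, j] = num / den
            else:
                # Step 103: plain mean over the second neighborhood.
                out[i, j] = img[i - ra:i + ra + 1, j - ra:j + ra + 1].mean()
    return out
```

On a flat region the gradient gate always selects the cheap mean branch, which is exactly the saving the embodiment is after.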
The idea of non-local mean filtering measures the similarity of gray-level structures between regions of the image well: the weight of each pixel point is determined by this similarity, the contributions of different pixel points are adjusted accordingly, and noise and outliers are filtered out while the structural edges and corners of the original image are preserved. However, this measurement is computationally expensive and time-consuming. Therefore, to reduce the running time of the algorithm, the embodiment of the invention first makes a simple measurement of the complexity of the first-neighborhood region around each pixel point: if the gradient of the region is small, the pixel point can be considered to lie in a region of relatively flat gray level, so the edge-preserving non-local mean filter is unnecessary and plain mean filtering suffices.
Specifically, in the first step, the gradient average value of each pixel point of the original noisy image in the first neighborhood range is calculated, and a preset first threshold value is set for comparison;
if the gradient is larger, filtering by using a non-local mean value, namely if the gradient mean value is larger than or equal to a preset first threshold value, respectively calculating the weight values of the pixel points and the pixel points in the first neighborhood range, calculating the weighted mean value of the gray value of each pixel point according to the weight value and the gray value corresponding to the pixel points, and taking the weighted mean value of the gray value as the output gray value of the pixel point at the same position as the pixel point in the de-noised image;
and if the gradient is small, using mean filtering, namely if the gradient mean value is smaller than a preset first threshold value, calculating the gray value mean value in a second neighborhood range of the pixel point, and taking the gray value mean value as the output gray value of the pixel point at the same position as the pixel point in the denoised image.
Optionally, before the calculating the weights of the pixel point and the pixel point in the first neighborhood range, the method includes:
respectively calculating the absolute difference value of the gradient average value of the pixel point in the first neighborhood range and the gradient average value of the pixel point in the first neighborhood range;
and judging whether the absolute difference value is greater than or equal to a preset second threshold value.
Optionally, the method further comprises:
and if the absolute difference value is smaller than the preset second threshold, setting the weight value of the pixel point and the pixel point in the first neighborhood range to be 0.
Specifically, if the absolute difference is greater than or equal to a preset second threshold, respectively calculating weights of the pixel points and the pixel points in the first neighborhood range, calculating a weighted average value of a gray value of each pixel point according to the weight and the gray value corresponding to the pixel point, and taking the weighted average value of the gray value as an output gray value of the pixel point at the same position as the pixel point in the denoised image;
if the absolute difference value is smaller than a preset second threshold, setting the weight value of the pixel point and the pixel point in the first neighborhood range to be 0, namely skipping the weight value accumulation calculation between the two points, and regarding the weight value as 0;
FIG. 2A is a schematic diagram of an original noisy image in an embodiment of the method of the present invention; FIG. 2B is FIG. 2A after denoising by an existing adaptive median filtering method; FIG. 2C is FIG. 2A after denoising by the fast image denoising method of the present invention. FIG. 3A is a schematic diagram of an original noisy image in an embodiment of the method of the present invention; FIG. 3B is FIG. 3A after denoising by an existing adaptive median filtering method; FIG. 3C is FIG. 3A after denoising by the fast image denoising method of the present invention. FIG. 4A is a schematic diagram of an original noisy image in an embodiment of the method of the present invention; FIG. 4B is FIG. 4A after denoising by an existing adaptive median filtering method; FIG. 4C is FIG. 4A after denoising by the fast image denoising method of the present invention.
As shown in fig. 2A, 2B and 2C, fig. 2A is a grayed portrait with severe color noise, and the pixel resolution is 256 × 256; FIG. 2B is a result image of the denoising of FIG. 2A by the conventional adaptive median filtering method, which takes about 3.8s to calculate; fig. 2C is a result image of fig. 2A denoised by the method of the embodiment of the invention, and the calculation takes about 320 ms.
As shown in fig. 3A, 3B, and 3C, fig. 3A is a grayed snow scene containing color noise, and the pixel resolution is 354 × 221; FIG. 3B is a result image of the denoising of FIG. 3A by the adaptive median filtering method, the calculation time is about 2.5 s; FIG. 3C is the image of FIG. 3A denoised by the method of the present invention, which takes about 560ms to compute.
As shown in fig. 4A, 4B, and 4C, fig. 4A is a grayed complex color hand painting containing color noise, the pixel resolution is 305 × 400, and the details in the image are hard to be resolved due to the serious influence of the noise. FIG. 4B is the result image of FIG. 4A denoised by the adaptive median filtering method, and the calculation takes about 7.9 s; FIG. 4C is the image of FIG. 4A after denoising in the method of the present invention, which takes about 780 ms.
Comparing fig. 2B, fig. 3B, fig. 4B with fig. 2C, fig. 3C, and fig. 4C, it can be seen that some noise influence remains in the existing adaptive median filtering result, and there is an unnatural blocking phenomenon; the method of the embodiment of the invention has natural and smooth denoising result, clear details and far higher speed than the self-adaptive median filtering method, and can meet the requirement of real-time interactive denoising.
The foregoing embodiment provides a method for effectively filtering random image noise based on the idea of non-local mean filtering. It can filter out fairly severe random noise of unknown distribution, solving the problem that image acquisition and transmission systems are disturbed by random noise of unknown distribution.
The method can well keep the details of the edges and the corners of the scenery in the image while filtering the noise, and solves the problem that the original edge information is easily lost in the image denoising process.
The method also runs fast: it is fully optimized and accelerated by mathematical techniques such as layer-by-layer pre-screening and symmetry, and with parallel processing on a GPU it can handle a color standard-definition image (720 × 576) within 1 s, and a black-and-white image within 300 ms, fully meeting the requirement of real-time interaction. This solves the problem that conventional non-local mean filtering is extremely slow, taking tens of seconds, and therefore cannot be widely applied.
Optionally, the calculating a gradient average value in a first neighborhood range of each pixel point of the original noisy image includes:
calculating the gradient average value by the following formula (1):
$$\bar{G}(x) = \Big(\sum_{y \in N(x)} G(y)\Big) \Big/ s^2 \qquad (1)$$
wherein x is any pixel point of the original noisy image, $\bar{G}(x)$ is the gradient mean over the first neighborhood of x, N(x) is the first neighborhood of size s × s centered at x, y is a point within that first neighborhood, and G(y) is the gray value of the gradient image of the original noisy image at y; s is a preset value.
Specifically, the gradient average value in step 101 is calculated by using the above formula (1), that is, the gradient average value in a first neighborhood range of any pixel point x of the original noisy image is calculated, where the first neighborhood is a local rectangular region with x as a center and with a size of s × s;
that is, the final gradient mean is the $\bar{G}(x)$ given by formula (1).
Optionally, the respectively calculating the absolute difference between the gradient mean of the pixel point over its first neighborhood and the gradient mean of each pixel point within that first neighborhood includes:
calculating the absolute difference value by the following formula (2):
$$\Delta G(x, y) = \left| \bar{G}(x) - \bar{G}(y) \right| \qquad (2)$$
wherein $\bar{G}(x)$ is the gradient mean over the first neighborhood of x, $\bar{G}(y)$ is the gradient mean over the first neighborhood of y, and ΔG(x, y) is the absolute difference between the two gradient means.
Specifically, for the pixel points determined in the step 101 as having a large regional gradient, that is, the average value of the gradient is greater than or equal to the preset first threshold, the region has strong structural information such as edges and corners, and the details are excessively blurred by using the mean filtering. Therefore, the points use non-local mean filtering to smooth the areas with flat gray scale under the condition of keeping the edge and corner structure, thereby achieving the purpose of removing noise.
For the pixel points filtered with the non-local mean, computing the weight and the gray-level weighted average against every point of the whole image is still very expensive; without reducing the smoothing effect too much, the computation time can be cut by reducing the number of points that contribute to the weights. Many of the points participating in the weight computation differ greatly from the gray-level structure of the region around the current pixel point, so their computed weights are very small and their contribution can be ignored. The embodiment of the invention judges in advance whether a weight can be ignored by comparing the regional gradients of the first neighborhoods of the two pixel points: a preset second threshold on the absolute difference of the gradient mean values is set, points whose regional gradient difference is smaller than the preset second threshold do not participate in the weight computation, and points whose difference is greater than or equal to the preset second threshold do.
The absolute difference between the gradient mean value in the first neighborhood of the pixel point and the gradient mean value in the first neighborhood of each point within that neighborhood is calculated respectively, where the pixel point x is a point whose gradient mean value in the first neighborhood is greater than or equal to the preset first threshold; that is, the absolute difference of the gradient mean values in the first neighborhoods of points x and y is calculated by formula (2), and Ḡ(x) and Ḡ(y) can be calculated by the above formula (1).
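The screening by formula (2) against the second threshold can be sketched as follows (an illustrative Python fragment, not from the patent; the precomputed gradient-mean array `G_mean` and the function name are assumptions):

```python
import numpy as np

def neighbors_passing_screen(G_mean, x, s, t2):
    """Return the coordinates of points y in the s-by-s first
    neighborhood of x whose |Gbar(x) - Gbar(y)| (formula (2)) is at
    least the second threshold t2, i.e. the points that will take
    part in the weight computation."""
    r = s // 2
    h, w = G_mean.shape
    xi, xj = x
    keep = []
    for yi in range(max(0, xi - r), min(h, xi + r + 1)):
        for yj in range(max(0, xj - r), min(w, xj + r + 1)):
            if abs(G_mean[xi, xj] - G_mean[yi, yj]) >= t2:
                keep.append((yi, yj))
    return keep
```

Points dropped here never reach the patch-distance computation, which is where the real cost lies.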
Optionally, the preset first threshold is 50.
The preset first threshold of 50 is an empirically optimal value determined through experiments and handles general random-noise conditions well.
Optionally, the calculating weights of the pixel point and the pixel point in the first neighborhood range respectively includes:
if the absolute difference is greater than or equal to a preset second threshold, calculating a weight between the pixel point and the pixel point in the first neighborhood range according to the following formula (3):
W(x, y) = exp( −‖N(x) − N(y)‖₂ / (2σ²) )    (3)
wherein ‖N(x) − N(y)‖₂ = Σ_{i∈N(x), j∈N(y)} (I(i) − I(j))² / s²;
wherein, the x point is a pixel point of the original noisy image, the absolute difference value of which is greater than or equal to the preset second threshold, and the y point is a point within the first neighborhood range by taking the x point as a center; n (x) is a first neighborhood region with x point as the center and with the size of s multiplied by s, and N (y) is a first neighborhood region with y point as the center and with the size of s multiplied by s; the point i is a pixel point which takes the point x as the center and is in the first neighborhood range; the j point is a pixel point which takes the y point as the center and is in the first neighborhood range; i (i) is the gray value of the original noisy image at the point i; i (j) is the gray value of the original noisy image at the j point; s is a preset value.
Optionally, the value range of σ is 10< σ < 15.
Specifically, for the points participating in the weight computation, similarity is measured by the Gaussian distance between the gray-level matrices of the first neighborhoods of the two pixel points, calculated according to the following formula:
‖N(x) − N(y)‖₂ = Σ_{i∈N(x), j∈N(y)} (I(i) − I(j))² / s²
the small Gaussian distance indicates that the gray structures of the areas where the two pixel points are located are close, so that a larger weight is given, otherwise, a smaller weight is given, and the original edge and corner details in the gray structures are kept.
The weight of the pixel point and the pixel point in the first neighborhood range can be obtained by calculation according to the formula (3). Here, the pixel point x is a pixel point where the absolute difference is greater than or equal to a preset second threshold.
σ is a smoothing-degree control parameter; its range is an empirically optimal value obtained through experiments and handles general random-noise conditions well.
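A minimal sketch of the weight of formula (3), assuming the Gaussian distance is taken element-wise between the two s × s patches (as in standard non-local means) and that the exponent carries a minus sign so that similar patches receive larger weights; function names are illustrative:

```python
import numpy as np

def patch_distance(I, x, y, s):
    """Normalized squared distance between the s-by-s patches centered
    on x and y (the Gaussian distance used in formula (3)); patches are
    assumed to lie fully inside the image."""
    r = s // 2
    Px = I[x[0]-r:x[0]+r+1, x[1]-r:x[1]+r+1].astype(np.float64)
    Py = I[y[0]-r:y[0]+r+1, y[1]-r:y[1]+r+1].astype(np.float64)
    return np.sum((Px - Py) ** 2) / (s * s)

def weight(I, x, y, s, sigma):
    """Weight between x and y per formula (3): identical gray-level
    structures give distance 0 and hence the maximum weight 1."""
    return np.exp(-patch_distance(I, x, y, s) / (2.0 * sigma ** 2))
```

Because the weights are later normalized by their accumulated sum, only their relative sizes matter, and σ sets how quickly they fall off with patch dissimilarity.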
Optionally, the calculating a weighted average of the gray values of the pixel points according to the weight values and the gray values corresponding to the pixel points includes:
calculating the accumulated value of the weight values corresponding to the pixel points, including:
respectively accumulating the weights W(x, y) into the corresponding positions of a matrix W0, calculated by the following formulas (4) and (5):
W0(x)=W0(x)+W(x,y) (4)
W0(y)=W0(y)+W(x,y) (5)
wherein W0(x) is the value of the matrix W0 at point x and W0(y) is its value at point y; the matrix W0 has the same size as the original noisy image and is initialized to 0;
calculating a weighted accumulated value of the gray value of each pixel point, including:
respectively accumulating the products of the weight W(x, y) corresponding to the pixel point and the gray values of the pixel points within its first neighborhood into the corresponding positions of a matrix C0, calculated by the following formulas (6) and (7):
C0(x)=C0(x)+W(x,y)×I(y) (6)
C0(y)=C0(y)+W(x,y)×I(x) (7)
wherein I(x) is the gray value of the original noisy image at point x and I(y) is its gray value at point y; C0(x) is the value of the matrix C0 at point x and C0(y) is its value at point y; the matrix C0 has the same size as the original noisy image and is initialized to 0;
calculating the weighted average value of the pixel point normalized gray value, comprising:
calculating a weighted average of the pixel point normalized gray values by the following formula (8):
I'(v)=C0(v)/W0(v) (8)
wherein point v is a pixel point of the original noisy image whose absolute difference is greater than or equal to the preset second threshold; C0(v) is the weighted accumulated gray value of the matrix C0 at pixel point v, and W0(v) is the accumulated weight of the matrix W0 at pixel point v; I'(v) is the output gray value of the denoised image at pixel point v.
Specifically, to obtain the normalized weighted average of the gray value of a pixel point, the accumulated weight for that point is computed first, according to formulas (4) and (5). In this computation, for each pixel point x in the image, the contribution of every pixel point y within the first neighborhood of x is evaluated. Both x and y vary during a scanning process: x scans every pixel of the whole image, and for each given x, y scans the first neighborhood of the current x. Thus x and y are temporary labels assigned according to the level of the scan loop.
This filtering computation is symmetric. For example, when computing the filtering result of point p1 = (25, 30), all points in the first neighborhood of p1 participate, including a point p2 = (30, 35). After the contribution of p2 to p1 has been computed, the filtering result of p2 will later be computed, and p1 = (25, 30) also lies in the first neighborhood of p2, so the contribution of p1 to p2 would be computed once more.
In practice this leads to a large number of repeated computations: in the mutual contributions of p1 and p2, the weight W(p1, p2) equals W(p2, p1). The embodiment of the invention optimizes this point:

when computing the filtering result of point p1, the influence of W(p1, p2) on the filtering result of point p2 is added to its accumulators immediately after the weight is first computed. That is, while computing the filtering result of p1, the contributions of p1 to all points it can affect are computed and credited at once, which is why formulas (4)-(7) perform the same operations on pixel point y while pixel point x is being processed. As a result, when point p2 is later processed, the contribution of point p1 to point p2 no longer needs to be computed.
The above equations (4) - (7) show that when a pixel x is processed for a certain cycle, there is a pixel y in the first neighborhood, and the contributions of x to y and y to x are accumulated by two equations and recorded in C0In a matrix.
That is, the weights W(x, y) corresponding to the pixel point are accumulated into the corresponding positions of the matrix W0, and the products of the weight W(x, y) and the gray values of the pixel points within the first neighborhood of the pixel point are accumulated into the corresponding positions of the matrix C0; finally, the normalized weighted average of the gray value of the pixel point is calculated by formula (8):
I'(v)=C0(v)/W0(v)
and the v point is a pixel point of the original noisy image, wherein the absolute difference value is greater than or equal to the preset second threshold value.
And if the absolute difference value is smaller than the preset second threshold, setting the weight value of the pixel point and the pixel point in the first neighborhood range to be 0.
By exploiting the symmetry of the weight computation, the embodiment of the invention computes the weights for only half of the point pairs, scanning each pixel point in turn and accumulating the weight numerator and denominator, thereby reducing the amount of computation by about half.
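The half-pair optimization of formulas (4)-(8) can be sketched as below. This toy version (names and border handling are assumptions, and for brevity it treats every pixel as an edge pixel, skipping the gradient screening of the real method) visits each unordered pair once and credits the weight to both points' accumulators:

```python
import numpy as np

def nlm_symmetric(I, s, sigma):
    """Symmetric-accumulation non-local means sketch: W(x, y) is
    computed once per unordered pair and added to both points'
    numerator C0 and denominator W0 (formulas (4)-(7)), then the
    output is normalized per formula (8). Border pixels are copied."""
    h, w = I.shape
    r = s // 2
    W0 = np.zeros((h, w))
    C0 = np.zeros((h, w))
    If = I.astype(np.float64)

    def patch(p):
        return If[p[0]-r:p[0]+r+1, p[1]-r:p[1]+r+1]

    for xi in range(r, h - r):
        for xj in range(r, w - r):
            x = (xi, xj)
            W0[x] += 1.0          # self-contribution keeps W0 > 0
            C0[x] += If[x]
            for yi in range(max(r, xi - r), min(h - r, xi + r + 1)):
                for yj in range(max(r, xj - r), min(w - r, xj + r + 1)):
                    if (yi, yj) <= (xi, xj):
                        continue  # handled when the roles were swapped
                    d = np.sum((patch(x) - patch((yi, yj))) ** 2) / (s * s)
                    wxy = np.exp(-d / (2.0 * sigma ** 2))
                    W0[xi, xj] += wxy                    # formula (4)
                    W0[yi, yj] += wxy                    # formula (5)
                    C0[xi, xj] += wxy * If[yi, yj]       # formula (6)
                    C0[yi, yj] += wxy * If[xi, xj]       # formula (7)
    out = If.copy()
    inner = (slice(r, h - r), slice(r, w - r))
    out[inner] = C0[inner] / W0[inner]                   # formula (8)
    return out
```

Skipping the pairs with (yi, yj) ≤ (xi, xj) is exactly the "only half the points" optimization: every unordered pair is weighed once, and W(x, y) = W(y, x) supplies the other direction for free.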
Optionally, the calculating the average value of the gray values in the second neighborhood range of the pixel point includes:
the gray value average value is calculated by the following formula (9):
I′(u) = ( Σ_{y∈Ψ(u)} I(y) ) / a²    (9)
wherein point u is a pixel point of the original noisy image whose gradient average value is smaller than the preset first threshold; I'(u) is the output gray value of the denoised image at point u; Ψ(u) is the second neighborhood of size a × a centered on u; point y is a point within the second neighborhood centered on u; I(y) is the gray value of the original noisy image at point y; a is a preset value.
Optionally, the value range of s is 15< s < 31; the value range of a is as follows: s +5< a < s + 10.
Specifically, for the pixel points determined in step 101 as lying in a region of small gradient, that is, points whose gradient average value is smaller than the preset first threshold, the noise is smoothed with a conventional mean-filtering algorithm: the average of the gray values of the pixel points in the second neighborhood of size a × a centered on the current pixel point is used as the output gray value of the corresponding pixel point of the denoised image, and this gray average can be calculated by formula (9).
The above a is the filter window-size parameter, and the above range is an empirically optimal value determined through experiments that handles general random-noise conditions well.
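A one-pixel sketch of the mean filtering of formula (9), assuming the a × a second neighborhood lies fully inside the image; the function name is illustrative:

```python
import numpy as np

def mean_filter(I, u, a):
    """Formula (9): average gray value over the a-by-a second
    neighborhood centered on pixel u (assumed fully inside I)."""
    r = a // 2
    win = I[u[0]-r:u[0]+r+1, u[1]-r:u[1]+r+1].astype(np.float64)
    return win.sum() / (a * a)
```

In a full implementation this plain average would be computed for every low-gradient pixel, e.g. with the same summed-area-table trick used for the gradient means.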
FIG. 5 is a schematic structural diagram of an embodiment of a fast image denoising apparatus according to the present invention. As shown in fig. 5, the apparatus of this embodiment may include: a first processing module 501, a second processing module 502 and a third processing module 503; the first processing module 501 is configured to calculate a gradient average value in a first neighborhood range of each pixel point of the original noisy image, and determine whether the gradient average value is greater than or equal to a preset first threshold; wherein the size of the first neighborhood is a preset value;
a second processing module 502, configured to respectively calculate weights of the pixel point and a pixel point in the first neighborhood range if the gradient average is greater than or equal to a preset first threshold, calculate a weighted average of gray values of the pixel point according to the weight and the gray value corresponding to the pixel point, and use the weighted average of gray values as an output gray value of a pixel point in the denoising image at the same position as the pixel point;
a third processing module 503, configured to calculate a mean value of gray values in a second neighborhood range of the pixel point if the gradient mean value is smaller than the preset first threshold, and use the mean value of gray values as an output gray value of a pixel point in the denoised image, where the pixel point is at the same position as the pixel point, and the size of the second neighborhood is a preset value.
Optionally, the second processing module 502 is specifically configured to:
if the gradient average value is larger than or equal to a preset first threshold value, respectively calculating the absolute difference value of the gradient average value of the pixel point in the first neighborhood range and the gradient average value of the pixel point in the first neighborhood range;
and judging whether the absolute difference value is greater than or equal to a preset second threshold value.
Optionally, the second processing module 502 is further configured to:
and if the absolute difference value is smaller than the preset second threshold, setting the weight value of the pixel point and the pixel point in the first neighborhood range to be 0.
Optionally, the first processing module 501 is specifically configured to:
calculating the gradient average value by the following formula (1):
Ḡ(x) = ( Σ_{y∈N(x)} G(y) ) / s²    (1)
wherein x is any pixel point of the original noisy image, Ḡ(x) is the gradient mean value within the first neighborhood of point x, N(x) is the first neighborhood centered on point x with size s × s, point y is a point within the first neighborhood centered on point x, and G(y) is the gray value of the gradient image of the original noisy image at point y; s is a preset value.
Optionally, the second processing module 502 is specifically configured to:
calculating the absolute difference value by the following formula (2):
ΔG(x, y) = |Ḡ(x) − Ḡ(y)|    (2)
wherein Ḡ(x) is the gradient mean value in the first neighborhood of point x, Ḡ(y) is the gradient mean value in the first neighborhood of point y, and ΔG(x, y) is the absolute difference of the gradient mean values in the first neighborhoods of points x and y.
Optionally, the second processing module 502 is specifically configured to:
if the absolute difference is greater than or equal to the preset second threshold, calculating a weight between the pixel point and the pixel point in the first neighborhood range according to the following formula (3):
W(x, y) = exp( −‖N(x) − N(y)‖₂ / (2σ²) )    (3)
wherein ‖N(x) − N(y)‖₂ = Σ_{i∈N(x), j∈N(y)} (I(i) − I(j))² / s²;
wherein, the x point is a pixel point of the original noisy image, the absolute difference value of which is greater than or equal to the preset second threshold, and the y point is a point within the first neighborhood range by taking the x point as a center; n (x) is a first neighborhood region with x point as the center and with the size of s multiplied by s, and N (y) is a first neighborhood region with y point as the center and with the size of s multiplied by s; the point i is a pixel point which takes the point x as the center and is in the first neighborhood range; the j point is a pixel point which takes the y point as the center and is in the first neighborhood range; i (i) is the gray value of the original noisy image at the point i; i (j) is the gray value of the original noisy image at the j point; s is a preset value.
Optionally, the value range of σ is 10< σ < 15.
Optionally, the second processing module 502 includes:
the first unit is used for calculating the accumulated value of the weight corresponding to the pixel point;
the first unit is specifically configured to:
respectively accumulating the weights W(x, y) into the corresponding positions of a matrix W0, calculated by the following formulas (4) and (5):
W0(x)=W0(x)+W(x,y) (4)
W0(y)=W0(y)+W(x,y) (5)
wherein W0(x) is the value of the matrix W0 at point x and W0(y) is its value at point y; the matrix W0 has the same size as the original noisy image and is initialized to 0;
the second unit is used for calculating the weighted accumulated value of the gray value of each pixel point;
the second unit is specifically configured to:
accumulating the products of the weight W(x, y) corresponding to the pixel point and the gray values of the pixel points within its first neighborhood into the corresponding positions of the matrix C0, calculated by the following formulas (6) and (7):
C0(x)=C0(x)+W(x,y)×I(y) (6)
C0(y)=C0(y)+W(x,y)×I(x) (7)
wherein I(x) is the gray value of the original noisy image at point x and I(y) is its gray value at point y; C0(x) is the value of the matrix C0 at point x and C0(y) is its value at point y; the matrix C0 has the same size as the original noisy image and is initialized to 0;
the third unit is used for calculating the weighted average value of the pixel point normalized gray value;
the third unit is specifically configured to:
calculating a weighted average of the pixel point normalized gray values by the following formula (8):
I'(v)=C0(v)/W0(v) (8)
wherein point v is a pixel point of the original noisy image whose absolute difference is greater than or equal to the preset second threshold; C0(v) is the weighted accumulated gray value of the matrix C0 at pixel point v, and W0(v) is the accumulated weight of the matrix W0 at pixel point v; I'(v) is the output gray value of the denoised image at pixel point v.
Optionally, the third processing module is specifically configured to:
the gray value average value is calculated by the following formula (9):
I′(u) = ( Σ_{y∈Ψ(u)} I(y) ) / a²    (9)
wherein point u is a pixel point of the original noisy image whose gradient average value is smaller than the preset first threshold; I'(u) is the output gray value of the denoised image at point u; Ψ(u) is the second neighborhood of size a × a centered on u; point y is a point within the second neighborhood centered on u; I(y) is the gray value of the original noisy image at point y; a is a preset value.
Optionally, the value range of s is 15< s < 31; the value range of a is as follows: s +5< a < s + 10.
Optionally, the preset first threshold is 50.
The apparatus of this embodiment may be used to implement the technical solution of the method embodiment shown in fig. 1, and the implementation principle and the technical effect are similar, which are not described herein again.
Those of ordinary skill in the art will understand that: all or a portion of the steps of implementing the above-described method embodiments may be performed by hardware associated with program instructions. The program may be stored in a computer-readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (12)

1. A fast image denoising method is characterized by comprising the following steps:
calculating a gradient average value in a first neighborhood range of each pixel point of the original noisy image, and judging whether the gradient average value is greater than or equal to a preset first threshold value or not; wherein the size of the first neighborhood is a preset value;
if the gradient average value is larger than or equal to a preset first threshold value, respectively calculating weights of the pixel points and the pixel points in the first neighborhood range, calculating a weighted average value of gray values of the pixel points according to the weights and the gray values corresponding to the pixel points, and taking the weighted average value of the gray values as an output gray value of the pixel points at the same positions as the pixel points in the de-noising image;
if the gradient average value is smaller than the preset first threshold, calculating a gray value average value in a second neighborhood range of the pixel point, and taking the gray value average value as an output gray value of the pixel point at the same position as the pixel point in the denoised image, wherein the size of the second neighborhood is a preset value.
2. The method of claim 1, wherein before the respectively calculating weights of the pixel point and the pixel points in the first neighborhood range, the method comprises:
respectively calculating the absolute difference value of the gradient average value of the pixel point in the first neighborhood range and the gradient average value of the pixel point in the first neighborhood range;
and judging whether the absolute difference value is greater than or equal to a preset second threshold value.
3. The method of claim 2, further comprising:
and if the absolute difference value is smaller than the preset second threshold, setting the weight value of the pixel point and the pixel point in the first neighborhood range to be 0.
4. The method according to any one of claims 1-3, wherein said calculating the mean value of the gradient in the first neighborhood of each pixel point of the original noisy image comprises:
calculating the gradient average value by the following formula (1):
Ḡ(x) = ( Σ_{y∈N(x)} G(y) ) / s²    (1)
wherein x is any pixel point of the original noisy image, Ḡ(x) is the gradient mean value within the first neighborhood of point x, N(x) is the first neighborhood centered on point x with size s × s, point y is a point within the first neighborhood centered on point x, and G(y) is the gray value of the gradient image of the original noisy image at point y; s is a preset value.
5. The method according to claim 2 or 3, wherein said calculating an absolute difference between the mean value of the gradients in said first neighborhood range of said pixel points and the mean value of the gradients in said first neighborhood range of said pixel points, respectively, comprises:
calculating the absolute difference value by the following formula (2):
ΔG(x, y) = |Ḡ(x) − Ḡ(y)|    (2)
wherein Ḡ(x) is the gradient mean value in the first neighborhood of point x, Ḡ(y) is the gradient mean value in the first neighborhood of point y, and ΔG(x, y) is the absolute difference of the gradient mean values in the first neighborhoods of points x and y.
6. The method according to claim 2 or 3, wherein the calculating the weight of the pixel point and the pixel point in the first neighborhood range respectively comprises:
if the absolute difference is greater than or equal to the preset second threshold, calculating a weight between the pixel point and the pixel point in the first neighborhood range according to the following formula (3):
W(x, y) = exp( −‖N(x) − N(y)‖₂ / (2σ²) )    (3)
wherein ‖N(x) − N(y)‖₂ = Σ_{i∈N(x), j∈N(y)} (I(i) − I(j))² / s²;
wherein, the x point is a pixel point of the original noisy image, the absolute difference value of which is greater than or equal to the preset second threshold, and the y point is a point within the first neighborhood range by taking the x point as a center; n (x) is a first neighborhood region with x point as the center and with the size of s multiplied by s, and N (y) is a first neighborhood region with y point as the center and with the size of s multiplied by s; the point i is a pixel point which takes the point x as the center and is in the first neighborhood range; the j point is a pixel point which takes the y point as the center and is in the first neighborhood range; i (i) is the gray value of the original noisy image at the point i; i (j) is the gray value of the original noisy image at the j point; s is a preset value.
7. The method of claim 6, wherein σ has a value in a range of 10< σ < 15.
8. The method according to claim 6, wherein the calculating a weighted average of the gray values of the pixel point according to the weights and the corresponding gray values comprises:
calculating the accumulated weight value corresponding to the pixel point, including:
accumulating the weight values W(x,y) into a matrix W0 respectively, calculated by the following formulas (4) and (5):
W0(x)=W0(x)+W(x,y) (4)
W0(y)=W0(y)+W(x,y) (5)
wherein, W0(x) is the value of the matrix W0 at the x point, and W0(y) is the value of the matrix W0 at the y point; the matrix W0 has the same size as the original noisy image and is initialized to 0;
calculating a weighted accumulated value of the gray value of each pixel point, including:
accumulating, into a matrix C0 respectively, the products of the weight W(x,y) corresponding to the pixel point and the gray values of the pixel points within its first neighborhood range, calculated by the following formulas (6) and (7):
C0(x)=C0(x)+W(x,y)×I(y) (6)
C0(y)=C0(y)+W(x,y)×I(x) (7)
wherein, I(x) is the gray value of the original noisy image at the x point, and I(y) is the gray value of the original noisy image at the y point; C0(x) is the value of the matrix C0 at the x point, and C0(y) is the value of the matrix C0 at the y point; the matrix C0 has the same size as the original noisy image and is initialized to 0;
calculating the normalized weighted average of the gray value of the pixel point, including:
calculating the normalized weighted average by the following formula (8):
I'(v)=C0(v)/W0(v) (8)
wherein, the v point is a pixel point of the original noisy image whose absolute difference value is greater than or equal to the preset second threshold; C0(v) is the weighted accumulated gray value of the matrix C0 at the v point, and W0(v) is the accumulated weight value of the matrix W0 at the v point; I'(v) is the output gray value of the denoised image at the v point.
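The symmetric accumulation of formulas (4)-(8) can be sketched as follows: each compared pixel pair contributes once, updating both endpoints, and a final division normalizes. The `pairs` list (which pixel pairs are compared) and the small helper weight are assumptions of this sketch, not fixed by the claims.

```python
import numpy as np

def patch_weight(img, x, y, s, sigma):
    # weight per formula (3): Gaussian of the normalized patch distance
    r = s // 2
    px = img[x[0] - r:x[0] + r + 1, x[1] - r:x[1] + r + 1]
    py = img[y[0] - r:y[0] + r + 1, y[1] - r:y[1] + r + 1]
    return np.exp(-np.sum((px - py) ** 2) / (s * s) / (2.0 * sigma ** 2))

def weighted_average(img, pairs, s=3, sigma=12.0):
    """Accumulate each pair's weight symmetrically into W0 (formulas (4), (5))
    and the weighted gray values into C0 (formulas (6), (7)), then normalize
    per formula (8)."""
    img = img.astype(np.float64)
    W0 = np.zeros_like(img)  # weight accumulator, initialized to 0
    C0 = np.zeros_like(img)  # weighted gray-value accumulator, initialized to 0
    for x, y in pairs:
        w = patch_weight(img, x, y, s, sigma)
        W0[x] += w            # formula (4)
        W0[y] += w            # formula (5)
        C0[x] += w * img[y]   # formula (6)
        C0[y] += w * img[x]   # formula (7)
    # formula (8): I'(v) = C0(v) / W0(v); pixels never compared keep their value
    return np.divide(C0, W0, out=img.copy(), where=W0 > 0)
```

Updating both endpoints of each pair halves the number of patch comparisons relative to a naive per-pixel loop, which is the source of the method's speed-up.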
9. The method according to any of claims 1-3, wherein said calculating the mean of the gray values in the second neighborhood of said pixel comprises:
the gray value average value is calculated by the following formula (9):
I'(u) = ( Σ_{y∈Ψ(u)} I(y) ) / a²   (9)
wherein, the u point is a pixel point of the original noisy image whose gradient average value is smaller than the preset first threshold value; I'(u) is the output gray value of the denoised image at the u point; Ψ(u) is the second neighborhood of size a × a centered on the u point; the y point is a point within the second neighborhood centered on the u point; I(y) is the gray value of the original noisy image at the y point; a is a preset value.
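Formula (9) is a plain box mean, sketched below. Clipping the window at the image border (rather than keeping a fixed a² divisor) is an assumption of this sketch.

```python
import numpy as np

def flat_mean(img, u, a):
    """Formula (9): a-by-a box mean around pixel u, applied to pixels whose
    gradient mean falls below the first threshold (flat regions)."""
    r = a // 2
    i0, j0 = max(u[0] - r, 0), max(u[1] - r, 0)
    win = img[i0:u[0] + r + 1, j0:u[1] + r + 1].astype(np.float64)
    return float(win.sum()) / win.size  # equals sum / a^2 for interior pixels
```

For flat pixels this is far cheaper than patch weighting, which is why the gradient test pays off.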
10. The method according to any one of claims 1 to 3, wherein s has a value in the range 15 < s < 31, and a has a value in the range s + 5 < a < s + 10.
11. The method according to any one of claims 1-3, wherein the preset first threshold value is 50.
12. A fast image denoising apparatus, comprising:
the first processing module is used for calculating a gradient average value in a first neighborhood range of each pixel point of the original noisy image and judging whether the gradient average value is greater than or equal to a preset first threshold value; wherein the size of the first neighborhood is a preset value;
the second processing module is used for respectively calculating the weight values of the pixel points and the pixel points in the first neighborhood range if the gradient average value is greater than or equal to a preset first threshold value, calculating the weighted average value of the gray values of the pixel points according to the weight values and the gray values corresponding to the pixel points, and taking the weighted average value of the gray values as the output gray value of the pixel points at the same position as the pixel points in the de-noised image;
and the third processing module is used for calculating the gray value average value in a second neighborhood range of the pixel point if the gradient average value is smaller than the preset first threshold value, and taking the gray value average value as the output gray value of the pixel point at the same position as the pixel point in the denoised image, wherein the size of the second neighborhood is a preset value.
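The three modules above can be sketched end to end as follows. The gradient operator, the single-pixel weight used in place of the full patch weight, and the border handling are simplifying assumptions of this sketch; the parameter defaults follow the claimed ranges (first threshold 50, 10 < σ < 15).

```python
import numpy as np

def denoise(img, t1=50.0, s=21, a=5, sigma=12.0):
    """Gradient-switched denoising sketch: the per-pixel gradient mean over
    the first neighborhood selects a weighted mean for detail pixels (edges,
    corners) and a box mean for flat pixels."""
    img = img.astype(np.float64)
    gy, gx = np.gradient(img)           # first processing module: gradients
    grad = np.hypot(gx, gy)
    out = np.empty_like(img)
    h, w = img.shape
    r, ra = s // 2, a // 2
    for i in range(h):
        for j in range(w):
            win = img[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1]
            gwin = grad[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1]
            if gwin.mean() >= t1:
                # second module: Gaussian weights on gray-level differences,
                # a cheap stand-in for the patch weights of formula (3)
                wgt = np.exp(-((win - img[i, j]) ** 2) / (2.0 * sigma ** 2))
                out[i, j] = np.sum(wgt * win) / np.sum(wgt)
            else:
                # third module: box mean over the second neighborhood
                out[i, j] = img[max(i - ra, 0):i + ra + 1,
                                max(j - ra, 0):j + ra + 1].mean()
    return out
```

On a flat (constant) image every gradient mean is below the threshold, so the output equals the input, while noisy edge regions fall through to the weighted branch.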
CN201510181277.3A 2015-04-16 2015-04-16 rapid image denoising method and device Expired - Fee Related CN104778669B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510181277.3A CN104778669B (en) 2015-04-16 2015-04-16 rapid image denoising method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510181277.3A CN104778669B (en) 2015-04-16 2015-04-16 rapid image denoising method and device

Publications (2)

Publication Number Publication Date
CN104778669A true CN104778669A (en) 2015-07-15
CN104778669B CN104778669B (en) 2017-12-26

Family

ID=53620117

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510181277.3A Expired - Fee Related CN104778669B (en) 2015-04-16 2015-04-16 rapid image denoising method and device

Country Status (1)

Country Link
CN (1) CN104778669B (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017028742A1 (en) * 2015-08-17 2017-02-23 比亚迪股份有限公司 Image denoising system and image denoising method
CN106683108A (en) * 2016-12-07 2017-05-17 乐视控股(北京)有限公司 Method and apparatus for determining the flat areas of video frame and electronic device
CN108460733A (en) * 2018-01-31 2018-08-28 北京大学深圳研究生院 A kind of image de-noising method gradually refined and system
CN108830798A (en) * 2018-04-23 2018-11-16 西安电子科技大学 Improved image denoising method based on propagation filter
CN110298858A (en) * 2019-07-01 2019-10-01 北京奇艺世纪科技有限公司 A kind of image cropping method and device
CN110324617A (en) * 2019-05-16 2019-10-11 西安万像电子科技有限公司 Image processing method and device
CN112118367A (en) * 2019-06-20 2020-12-22 瑞昱半导体股份有限公司 Image adjusting method and related image processing circuit
CN112700375A (en) * 2019-10-22 2021-04-23 杭州三坛医疗科技有限公司 Illumination compensation method and device
CN112950490A (en) * 2021-01-25 2021-06-11 宁波市鄞州区测绘院 Unmanned aerial vehicle remote sensing mapping image enhancement processing method
CN113017699A (en) * 2019-10-18 2021-06-25 深圳北芯生命科技有限公司 Image noise reduction method for reducing noise of ultrasonic image
CN115908154A (en) * 2022-09-20 2023-04-04 盐城众拓视觉创意有限公司 Video late-stage particle noise removing method based on image processing
CN118229538A (en) * 2024-05-22 2024-06-21 中国人民解放军空军军医大学 Intelligent enhancement method for bone quality CT image

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN111488802B (en) * 2020-03-16 2024-03-01 沈阳二一三电子科技有限公司 Temperature curve synthesis algorithm utilizing thermal imaging and fire disaster early warning system

Citations (1)

Publication number Priority date Publication date Assignee Title
CN102298774A (en) * 2011-09-21 2011-12-28 西安电子科技大学 Non-local mean denoising method based on joint similarity

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
CN102298774A (en) * 2011-09-21 2011-12-28 西安电子科技大学 Non-local mean denoising method based on joint similarity

Non-Patent Citations (6)

Title
ANTONI BUADES et al.: "A non-local algorithm for image denoising", IEEE Computer Society Conference on Computer Vision & Pattern Recognition *
MONA MAHMOUDI et al.: "Fast Image and Video Denoising via Nonlocal Means of Similar Neighborhoods", IEEE Signal Processing Letters *
YAN-LI LIU et al.: "A Robust and Fast Non-Local Means Algorithm for Image Denoising", Journal of Computer Science and Technology *
ZHANG Quan et al.: "Adaptive non-local means denoising algorithm for medical images", Computer Engineering *
XIAO Peng et al.: "Fast non-local means image denoising algorithm based on gradient information", Machinery & Electronics *
XU Guangyu et al.: "Adaptive and effective non-local image filtering", Journal of Image and Graphics *

Cited By (18)

Publication number Priority date Publication date Assignee Title
WO2017028742A1 (en) * 2015-08-17 2017-02-23 比亚迪股份有限公司 Image denoising system and image denoising method
CN106683108A (en) * 2016-12-07 2017-05-17 乐视控股(北京)有限公司 Method and apparatus for determining the flat areas of video frame and electronic device
CN108460733A (en) * 2018-01-31 2018-08-28 北京大学深圳研究生院 A kind of image de-noising method gradually refined and system
CN108830798A (en) * 2018-04-23 2018-11-16 西安电子科技大学 Improved image denoising method based on propagation filter
CN110324617B (en) * 2019-05-16 2022-01-11 西安万像电子科技有限公司 Image processing method and device
CN110324617A (en) * 2019-05-16 2019-10-11 西安万像电子科技有限公司 Image processing method and device
CN112118367A (en) * 2019-06-20 2020-12-22 瑞昱半导体股份有限公司 Image adjusting method and related image processing circuit
CN112118367B (en) * 2019-06-20 2023-05-02 瑞昱半导体股份有限公司 Image adjusting method and related image processing circuit
CN110298858A (en) * 2019-07-01 2019-10-01 北京奇艺世纪科技有限公司 A kind of image cropping method and device
CN110298858B (en) * 2019-07-01 2021-06-22 北京奇艺世纪科技有限公司 Image clipping method and device
CN113017699A (en) * 2019-10-18 2021-06-25 深圳北芯生命科技有限公司 Image noise reduction method for reducing noise of ultrasonic image
CN113057676A (en) * 2019-10-18 2021-07-02 深圳北芯生命科技有限公司 Image noise reduction method of IVUS system
CN112700375A (en) * 2019-10-22 2021-04-23 杭州三坛医疗科技有限公司 Illumination compensation method and device
CN112950490A (en) * 2021-01-25 2021-06-11 宁波市鄞州区测绘院 Unmanned aerial vehicle remote sensing mapping image enhancement processing method
CN112950490B (en) * 2021-01-25 2022-07-19 宁波市鄞州区测绘院 Unmanned aerial vehicle remote sensing mapping image enhancement processing method
CN115908154A (en) * 2022-09-20 2023-04-04 盐城众拓视觉创意有限公司 Video late-stage particle noise removing method based on image processing
CN115908154B (en) * 2022-09-20 2023-09-29 盐城众拓视觉创意有限公司 Video later-stage particle noise removing method based on image processing
CN118229538A (en) * 2024-05-22 2024-06-21 中国人民解放军空军军医大学 Intelligent enhancement method for bone quality CT image

Also Published As

Publication number Publication date
CN104778669B (en) 2017-12-26

Similar Documents

Publication Publication Date Title
CN104778669A (en) Fast image denoising method and device
Wang et al. Noise detection and image denoising based on fractional calculus
US10339643B2 (en) Algorithm and device for image processing
KR102620105B1 (en) Method for upscaling noisy images, and apparatus for upscaling noisy images
US20180122051A1 (en) Method and device for image haze removal
CN105740876B (en) A kind of image pre-processing method and device
TWI393073B (en) Image denoising method
CN102968770A (en) Method and device for eliminating noise
CN101853497A (en) Image enhancement method and device
WO2020093914A1 (en) Content-weighted deep residual learning for video in-loop filtering
US9508134B2 (en) Apparatus, system, and method for enhancing image data
Rahman et al. Gaussian noise reduction in digital images using a modified fuzzy filter
CN103020918A (en) Shape-adaptive neighborhood mean value based non-local mean value denoising method
CN109410147A (en) A kind of supercavity image enchancing method
Zhu et al. Fast single image dehazing through edge-guided interpolated filter
Qi et al. A neutrosophic filter for high-density salt and pepper noise based on pixel-wise adaptive smoothing parameter
CN103971345A (en) Image denoising method based on improved bilateral filtering
Wang et al. A wavelet-based image denoising using least squares support vector machine
CN103871031A (en) Kernel regression-based SAR image coherent speckle restraining method
Lai et al. Improved non-local means filtering algorithm for image denoising
Yang et al. A design framework for hybrid approaches of image noise estimation and its application to noise reduction
CN103337055A (en) Deblurring method for text image based on gradient fitting
CN103839237B (en) SAR image despeckling method based on SVD dictionary and linear minimum mean square error estimation
CN104966271A (en) Image denoising method based on biological vision receptive field mechanism
Kim et al. Separable bilateral nonlocal means

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20171226
