Disclosure of Invention
The technical problem to be solved by the invention is to remove image noise while preserving the edge details and textures of the image.
To solve this technical problem, the invention adopts the context quantization technique from information theory to address the estimation of the image noise model. First, the image is smoothed with a gradient-based filter, and a filtering error energy function is computed from the smoothed image; that is, the high-frequency information at each pixel, including the edge information and noise around the pixel, is estimated. A dynamic programming approach is then applied to quantize the resulting error energy function into a number of different levels. Next, a set of quantized contexts is constructed from the quantized error energies and the texture features of the image. Finally, for the different quantized contexts, a regression analysis method is applied to construct filters with different parameters for the different context models, thereby realizing an adaptive filter.
Specifically, the invention provides an image denoising method for an intelligent monitoring system of a power transmission line, wherein a given noise-free image is denoted $I$ and the noisy image to be processed is denoted $I_{\mathrm{Noise}}$. The method comprises the following steps:
A. Design a gradient-based anisotropic filter and apply it to the given noise-free image $I$ to obtain a filtered image $\hat I$;
B. Based on the filtered image $\hat I$ obtained in step A, calculate the filtering residual function $g$ for each pixel and form the corresponding image context $\vec{c}$;
C. According to the image context $\vec{c}$ obtained in step B, construct a 32-level vector quantizer $Q(\vec{c})$;
D. For each quantized context $C_Q$, solve for the filter coefficients $b_k$ and $\alpha$ in formula (1) by regression analysis, constructing a filter $f(x \mid C_Q)$ within a diamond-shaped window;

$$\sum_{k=1}^{12} b_k x_k + \alpha = y \qquad (1)$$
E. Apply the filter described in step A to the noisy image $I_{\mathrm{Noise}}$ to obtain an initial smoothed image $\hat I_{\mathrm{Noise}}$;
F. Subtract $\hat I_{\mathrm{Noise}}$ from $I_{\mathrm{Noise}}$ to obtain the residual $g_{\mathrm{Noise}}$ and form the corresponding image context $\vec{c}$;
G. Feed the context $\vec{c}$ corresponding to each pixel into the quantizer $Q$ to obtain its corresponding level;
H. Apply the filter $f(x \mid C_Q)$ to $I_{\mathrm{Noise}}$ to obtain the output image.
In this way a non-local edge-preserving filter is computed that retains important edge information while removing image noise to the greatest extent, thereby addressing the denoising and enhancement problems in video surveillance images.
Preferably, the gradient-based anisotropic filter in step A is designed as follows:
A1. Calculate the gradient magnitude $\|\nabla I_{i,j}\|$ of the given noise-free image $I$, the differences in the four directions $\nabla_N I_{i,j}$, $\nabla_S I_{i,j}$, $\nabla_W I_{i,j}$, $\nabla_E I_{i,j}$, and the corresponding filter coefficients $c_{Ni,j}$, $c_{Si,j}$, $c_{Wi,j}$ and $c_{Ei,j}$:

$$\|\nabla I_{i,j}\| = |I_{i+1,j} - I_{i-1,j}| + |I_{i,j+1} - I_{i,j-1}|$$

$$\begin{cases} \nabla_N I_{i,j} = I_{i-1,j} - I_{i,j} \\ \nabla_S I_{i,j} = I_{i+1,j} - I_{i,j} \\ \nabla_W I_{i,j} = I_{i,j-1} - I_{i,j} \\ \nabla_E I_{i,j} = I_{i,j+1} - I_{i,j} \end{cases}$$

$$\begin{cases} c_{Ni,j} = \dfrac{1}{1 + \left(\nabla_N I_{i,j}/K\right)^2} \\ c_{Si,j} = \dfrac{1}{1 + \left(\nabla_S I_{i,j}/K\right)^2} \\ c_{Wi,j} = \dfrac{1}{1 + \left(\nabla_W I_{i,j}/K\right)^2} \\ c_{Ei,j} = \dfrac{1}{1 + \left(\nabla_E I_{i,j}/K\right)^2} \end{cases}$$
A2. Calculate the anisotropic filtering result $\hat I_{i,j}$ from the parameters obtained above:

$$\hat I_{i,j} = \frac{c_{Ni,j}\,\nabla_N I_{i,j} + c_{Si,j}\,\nabla_S I_{i,j} + c_{Wi,j}\,\nabla_W I_{i,j} + c_{Ei,j}\,\nabla_E I_{i,j}}{4}$$
A3. Determine the damping degree of the filtering according to the gradient $\|\nabla I_{i,j}\|$, applying one update rule if $\|\nabla I_{i,j}\| < C$ and another otherwise.
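For concreteness, a minimal Python/NumPy sketch of steps A1–A3 follows. It is not the patented implementation: the branch bodies of the damping rule in A3 are not reproduced above, so the sketch assumes a simple rule that applies the anisotropic increment only where the gradient magnitude is below $C$ and leaves strong edges untouched; the default values $K = 64$ and $C = 32$ follow the example given later in the description, and the function name anisotropic_step is illustrative.

```python
import numpy as np

def anisotropic_step(I, K=64.0, C=32.0):
    """One gradient-based anisotropic smoothing pass (sketch of steps A1-A3).

    The branch bodies of the damping rule in A3 are not given explicitly in
    the text; as an assumption, the increment is applied only where the
    gradient magnitude is below C, and edge pixels are left unchanged.
    """
    I = I.astype(np.float64)
    P = np.pad(I, 1, mode="edge")
    # A1: directional differences and gradient magnitude
    dN = P[:-2, 1:-1] - I      # I[i-1, j] - I[i, j]
    dS = P[2:, 1:-1] - I       # I[i+1, j] - I[i, j]
    dW = P[1:-1, :-2] - I      # I[i, j-1] - I[i, j]
    dE = P[1:-1, 2:] - I       # I[i, j+1] - I[i, j]
    grad = np.abs(P[2:, 1:-1] - P[:-2, 1:-1]) + np.abs(P[1:-1, 2:] - P[1:-1, :-2])

    # A1: conduction (filter) coefficients c_N, c_S, c_W, c_E
    cN = 1.0 / (1.0 + (dN / K) ** 2)
    cS = 1.0 / (1.0 + (dS / K) ** 2)
    cW = 1.0 / (1.0 + (dW / K) ** 2)
    cE = 1.0 / (1.0 + (dE / K) ** 2)

    # A2: anisotropic filtering increment
    inc = (cN * dN + cS * dS + cW * dW + cE * dE) / 4.0

    # A3 (assumed damping): smooth only where the gradient is small
    return np.where(grad < C, I + inc, I)
```

Under these assumptions, a single call yields the initially smoothed image $\hat I$ used in step B; repeated calls would smooth more strongly.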
Further, step B specifically includes:
B1. Based on the image $I$ and the filtered image $\hat I$, compute the filtering error image $g = I - \hat I$;
B2. Use the high-frequency information within the 3×3 neighborhood of the current pixel as the image context, forming the corresponding context vector $\vec{c}_{i,j}$:

$$\vec{c}_{i,j} = \{\, g_{i-1,j-1},\ g_{i-1,j},\ g_{i-1,j+1},\ g_{i,j-1},\ g_{i,j+1},\ g_{i+1,j-1},\ g_{i+1,j},\ g_{i+1,j+1} \,\} \qquad (4)$$
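A short sketch of step B under the same assumptions, producing the residual image $g$ and the eight-element context vector of equation (4) for every pixel. Replicating the border when padding is an implementation choice of this sketch, not specified in the text.

```python
import numpy as np

def context_vectors(I, I_hat):
    """Step B: filtering residual g = I - I_hat and 3x3 context vectors (eq. 4)."""
    g = I.astype(np.float64) - I_hat
    H, W = g.shape
    G = np.pad(g, 1, mode="edge")
    # The eight neighbours of each pixel of g, in raster order, as in eq. (4)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    ctx = np.stack([G[1 + di:1 + di + H, 1 + dj:1 + dj + W]
                    for di, dj in offsets], axis=-1)   # shape (H, W, 8)
    return g, ctx
```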
Still further, step C specifically includes:
Based on the obtained context vectors $\vec{c}$ and the filtering error image $g$, apply a dynamic programming algorithm to minimize the value of the following conditional entropy:

$$-\sum_{d=1}^{m} p\bigl(Q(\vec{c})=d\bigr) \sum_{g} p\bigl(g \mid Q(\vec{c})=d\bigr) \log p\bigl(g \mid Q(\vec{c})=d\bigr) \qquad (5)$$

where $p\bigl(g \mid Q(\vec{c})=d\bigr)$ is the conditional probability of the current pixel value in the image $g$ given its quantized context. That is, a 32-level quantizer $Q(\vec{c})$ is solved for, which quantizes the vector context $\vec{c}$ into 32 bins of its defined space:

$$\vec{c} = \bigcup_{d=1}^{32} \{\, C_Q \mid C_Q = d \,\} \qquad (6)$$
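To make the objective of equation (5) concrete, the sketch below evaluates the empirical conditional entropy of the residual image for a given assignment of pixels to quantization bins. Rounding the residuals to integer symbols for the histogram is an assumption made only for this illustration.

```python
import numpy as np

def conditional_entropy(g, labels, m=32):
    """Empirical value of the objective in eq. (5) for a candidate quantizer.

    g      : residual image (2-D array)
    labels : integer array of the same shape, with values in {0, ..., m-1}
             giving the quantized-context bin of each pixel
    """
    g_sym = np.round(g).astype(np.int64)   # discretize residuals (assumption)
    g_sym -= g_sym.min()
    n = g.size
    H = 0.0
    for d in range(m):
        mask = labels == d
        if not mask.any():
            continue
        p_d = mask.sum() / n                         # p(Q(c) = d)
        counts = np.bincount(g_sym[mask].ravel())
        p_g = counts[counts > 0] / counts.sum()      # p(g | Q(c) = d)
        H -= p_d * np.sum(p_g * np.log2(p_g))
    return H
```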
Still further, step D specifically includes:
D1. In the image $I$, for each pixel $y$, take a diamond-shaped neighborhood $R(y)$ around it, as shown in Fig. 2;
D2. From the 32 quantized context sets $C$ obtained in step C, obtain the pixel set $Y = \{\, y \mid C_Q(y) \in C,\ C_Q(y) = C_Q \,\}$ and the corresponding neighborhoods $R(Y) = \{\, R(y) \mid C_Q(y) \in C,\ C_Q(y) = C_Q \,\}$, and assume that, for all of $R(Y)$, the gray value $y$ of each pixel and the gray values $x_k$ of the other 12 pixels in its diamond neighborhood satisfy the relationship:

$$\sum_{k=1}^{12} b_k x_k + \alpha = y \qquad (8)$$

Based on the obtained $R(Y)$, the coefficients $b_k$ and $\alpha$ can be estimated by regression analysis; the estimated $b_k$ and $\alpha$ are also the filter coefficients of the filter $f(x \mid C_Q)$.
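The per-context regression of step D can be sketched as follows, assuming the 12-pixel diamond neighborhood of Fig. 2 consists of the pixels at Manhattan distance 1 or 2 from the center (the exact layout of Fig. 2 is not reproduced here) and that labels is the per-pixel context level produced by the quantizer; the coefficients are solved with ordinary least squares as in equation (8).

```python
import numpy as np

# Offsets of the 12 diamond-neighbourhood pixels (|di| + |dj| <= 2, centre
# excluded); this layout is an assumption standing in for Fig. 2.
DIAMOND = [(di, dj) for di in range(-2, 3) for dj in range(-2, 3)
           if 0 < abs(di) + abs(dj) <= 2]

def fit_context_filters(I, labels, m=32):
    """Step D: least-squares estimate of (b_1..b_12, alpha) per context class."""
    I = I.astype(np.float64)
    H, W = I.shape
    P = np.pad(I, 2, mode="edge")
    # Design matrix: one row per pixel, one column per diamond neighbour.
    X = np.stack([P[2 + di:2 + di + H, 2 + dj:2 + dj + W].ravel()
                  for di, dj in DIAMOND], axis=1)
    X = np.hstack([X, np.ones((H * W, 1))])          # last column models alpha
    y = I.ravel()
    lab = labels.ravel()
    filters = {}
    for d in range(m):
        rows = lab == d
        if rows.sum() < X.shape[1]:                  # too few samples in this class
            continue
        coef, *_ = np.linalg.lstsq(X[rows], y[rows], rcond=None)
        filters[d] = coef                            # b_1..b_12 followed by alpha
    return filters
```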
Further, step H specifically includes:
H1. In the image $I_{\mathrm{Noise}}$, for each pixel $y$, take a diamond-shaped neighborhood $R(y)$ around it, as shown in Fig. 2;
H2. Apply the filter $f(x \mid C_Q)$ to $I_{\mathrm{Noise}}$ for filtering.
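A corresponding sketch of step H, reusing the DIAMOND offsets and the filters dictionary returned by fit_context_filters above: each pixel of the noisy image is replaced by the output of the filter of its quantized context, and pixels whose context class received no fitted filter are left unchanged (a fallback chosen for this sketch, not specified in the text).

```python
import numpy as np

def apply_context_filters(I_noise, labels, filters):
    """Step H: filter each pixel of I_noise with the filter of its context class."""
    I_noise = I_noise.astype(np.float64)
    H, W = I_noise.shape
    P = np.pad(I_noise, 2, mode="edge")
    X = np.stack([P[2 + di:2 + di + H, 2 + dj:2 + dj + W].ravel()
                  for di, dj in DIAMOND], axis=1)
    X = np.hstack([X, np.ones((H * W, 1))])
    out = I_noise.ravel().copy()
    lab = labels.ravel()
    for d, coef in filters.items():
        rows = lab == d
        out[rows] = X[rows] @ coef                   # sum_k b_k x_k + alpha
    return out.reshape(H, W)
```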
the image denoising method based on the above comprises the following steps: a gradient-based anisotropic filter or predictor, a 32-level vector quantizerOne filter f (x | C)Q). The functions and actions of the parts are described in the corresponding parts above.
The beneficial effects of the invention are as follows. The invention applies the context quantization technique from information theory together with an adaptive regression analysis method to design a non-local edge-preserving filter, which removes image noise to the greatest extent while retaining important edge information, so as to solve the denoising and enhancement problems in video surveillance images. By designing the adaptive edge-preserving filter, the edge details and textures of the image are enhanced at the same time as the image noise is removed. Unlike traditional noise estimation methods, the noise estimation based on context quantization in information theory does not depend on the noise signal or the image signal; the estimation is highly robust and is applicable to denoising problems with arbitrary, unknown noise models.
Detailed Description
Hereinafter, the present invention will be described in more detail by way of examples with reference to the accompanying drawings. This example is merely a description of the best mode of carrying out the invention and does not limit the scope of the invention in any way.
Examples
FIG. 1 is a schematic flow chart of the two-dimensional image denoising method of the present invention. A gradient image is calculated from an input noise-free training image sample 11, and an adaptive filter is designed based on the obtained horizontal and vertical gradients; applying this filter yields a smoothed image 12. Step 13 combines the original image $I$ and the initial smoothed image $\hat I$ to obtain the high-frequency information of each pixel, i.e. the filtering residual, and forms context vectors from the residual information in the neighborhood. Step 14 quantizes the context vectors into 32 levels according to the minimum conditional entropy criterion. Step 15 applies regression analysis to estimate the filter coefficients for each group of quantized context features, yielding 32 filters corresponding to the contexts. The operations of steps 22 and 23 are similar to those of steps 12 and 13, the difference being that the initial filtering is performed on the input noisy image and the corresponding context information is obtained. Step 24 computes the level of the current noisy-image context using the quantizer computed in step 14. Step 31 performs the filtering operation on the noisy image according to the level of the context, using the 32 sets of filters obtained in step 15, and produces an output noise-reduced image 32. The specific steps are as follows:
1. Calculate the horizontal and vertical gradients of a noise-free training image sample $I$, design a gradient-adaptive anisotropic filter based on them, and apply the filter to the image $I$ to obtain a filtered image $\hat I$;
2. Based on the resulting filtered image $\hat I$, calculate the filtering residual function $g$ for each pixel and generate the corresponding context vector $\vec{c}$;
3. Based on the obtained context vectors $\vec{c}$, design a 32-level quantizer $Q(\vec{c})$ of the context vectors by solving the minimum conditional entropy;
4. For each quantized context interval, apply regression analysis to construct a filter $f_k(x \mid C_Q)$ in a diamond-shaped window, $k = 1, 2, \ldots, 32$;
5. Apply the same filtering operation as in step 1 to the noisy image to be processed $I_{\mathrm{Noise}}$ to obtain a filtered image $\hat I_{\mathrm{Noise}}$;
6. Subtract $\hat I_{\mathrm{Noise}}$ from $I_{\mathrm{Noise}}$ to obtain the residual $g_{\mathrm{Noise}}$ and form the corresponding image context $\vec{c}$;
7. Feed the context corresponding to each pixel into the quantizer $Q$ to obtain its corresponding level;
8. Apply the filter $f(x \mid C_Q)$ to $I_{\mathrm{Noise}}$ to obtain the output image.
The above process flow can be summarized as the establishment and invocation of the filter model. The establishment of the model is the core part and comprises three major components: estimation of the high-frequency information of the image, selection of the image context model, and design of the filters according to the quantized contexts. Estimating the high-frequency information of an image is usually a necessary step in designing an adaptive filter, and the estimation method differs for different noise models. The most critical difference between this method and other techniques is that the estimated high-frequency components are not applied directly to the design of the filter; instead, they are used to further describe and quantize a context model of the features around each image pixel. It follows from the general source coding theory of information theory that such a method can find a statistical model closely approximating the noise itself without knowing the noise model of the image.
The detailed steps of the invention are as follows:
Calculate the gradient magnitude $\|\nabla I_{i,j}\|$ of the training image sample $I$, the differences in the four directions $\nabla_N I_{i,j}$, $\nabla_S I_{i,j}$, $\nabla_W I_{i,j}$, $\nabla_E I_{i,j}$, and the corresponding filter coefficients $c_{Ni,j}$, $c_{Si,j}$, $c_{Wi,j}$ and $c_{Ei,j}$:

$$\|\nabla I_{i,j}\| = |I_{i+1,j} - I_{i-1,j}| + |I_{i,j+1} - I_{i,j-1}|$$

$$\begin{cases} \nabla_N I_{i,j} = I_{i-1,j} - I_{i,j} \\ \nabla_S I_{i,j} = I_{i+1,j} - I_{i,j} \\ \nabla_W I_{i,j} = I_{i,j-1} - I_{i,j} \\ \nabla_E I_{i,j} = I_{i,j+1} - I_{i,j} \end{cases}$$

$$\begin{cases} c_{Ni,j} = \dfrac{1}{1 + \left(\nabla_N I_{i,j}/K\right)^2} \\ c_{Si,j} = \dfrac{1}{1 + \left(\nabla_S I_{i,j}/K\right)^2} \\ c_{Wi,j} = \dfrac{1}{1 + \left(\nabla_W I_{i,j}/K\right)^2} \\ c_{Ei,j} = \dfrac{1}{1 + \left(\nabla_E I_{i,j}/K\right)^2} \end{cases}$$
Calculate the anisotropic filtering result $\hat I_{i,j}$ from the above parameters:

$$\hat I_{i,j} = \frac{c_{Ni,j}\,\nabla_N I_{i,j} + c_{Si,j}\,\nabla_S I_{i,j} + c_{Wi,j}\,\nabla_W I_{i,j} + c_{Ei,j}\,\nabla_E I_{i,j}}{4}$$
The damping degree of the filtering is determined according to the gradient $\|\nabla I_{i,j}\|$, with one update rule applied if $\|\nabla I_{i,j}\| < C$ and another otherwise.
The anisotropic filter applies different smoothing strengths according to the different edge strengths and directions, so that the edge information in the image is well preserved while smoothing. $K$ and $C$ are parameters of the filter: $K$ controls the degree of anisotropy and may be a fixed value or determined dynamically from the histogram distribution of the gradient image, while $C$ further controls the degree of filtering. Without loss of generality, $K = 64$ and $C = 32$ may be used.
Generally, the high-frequency information of an image can be estimated by applying an adaptive filter; in other words, the residual $g$ may itself be regarded as a somewhat noisy image that also contains edge information and texture features. The high-frequency information within the 3×3 neighborhood of the current pixel is used as the image context, forming the corresponding context vector $\vec{c}_{i,j}$:

$$\vec{c}_{i,j} = \{\, g_{i-1,j-1},\ g_{i-1,j},\ g_{i-1,j+1},\ g_{i,j-1},\ g_{i,j+1},\ g_{i+1,j-1},\ g_{i+1,j},\ g_{i+1,j+1} \,\} \qquad (4)$$
The image context $\vec{c}$ is then further estimated so that the noise model can be approximately described. According to Bayes' law and general source coding theory, the closest approximation of the image signal $g$ by an approximate model is obtained by solving for the minimum conditional entropy, i.e. by minimizing the Kullback-Leibler distance:

$$-\sum_{d=1}^{m} p\bigl(Q(\vec{c})=d\bigr) \sum_{g} p\bigl(g \mid Q(\vec{c})=d\bigr) \log p\bigl(g \mid Q(\vec{c})=d\bigr)$$

That is, a vector quantizer $Q(\vec{c})$ is solved for that divides the space of $\vec{c}$ into 32 intervals, with every $\vec{c}$ corresponding uniquely to one of these 32 intervals. In general, the problem of minimizing the conditional entropy in a high-dimensional space is not convex; however, a projective transformation can be applied to project $\vec{c}$ onto a low-dimensional space in which the discrimination of $g$ is highest. The problem is thereby converted into a minimum-conditional-entropy problem in a one-dimensional space, which can be solved by dynamic programming.
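A sketch of this quantizer design step under two explicitly assumed simplifications: the context vector is projected to one dimension by the mean absolute value of its entries (the text only calls for some projection under which $g$ is best discriminated), and the 1-D axis is first discretized into a modest number of quantile buckets so that the dynamic program, which minimizes the weighted conditional entropy of equation (5), stays small. The function names and residual rounding are illustrative.

```python
import numpy as np

def design_quantizer(g, ctx, m=32, t_buckets=64):
    """Design an m-level context quantizer by dynamic programming (sketch).

    Assumptions: the 8-D context vector is projected to 1-D by the mean
    absolute value of its entries, and the 1-D axis is pre-discretized into
    t_buckets quantile buckets; the DP then chooses m contiguous segments
    minimizing the weighted conditional entropy of eq. (5) (up to the
    constant factor 1/N and the choice of log base).
    """
    proj = np.mean(np.abs(ctx), axis=-1).ravel()
    edges = np.quantile(proj, np.linspace(0.0, 1.0, t_buckets + 1))
    bucket = np.clip(np.searchsorted(edges, proj, side="right") - 1,
                     0, t_buckets - 1)

    g_sym = np.round(g).astype(np.int64).ravel()
    g_sym -= g_sym.min()
    n_sym = int(g_sym.max()) + 1

    counts = np.zeros((t_buckets, n_sym))            # counts[t, s]
    np.add.at(counts, (bucket, g_sym), 1.0)
    prefix = np.vstack([np.zeros(n_sym), np.cumsum(counts, axis=0)])

    def seg_cost(i, j):
        """Weighted entropy n * H(g) of the merged buckets i..j-1."""
        c = prefix[j] - prefix[i]
        n = c.sum()
        if n == 0:
            return 0.0
        c = c[c > 0]
        return n * np.log2(n) - np.sum(c * np.log2(c))

    INF = float("inf")
    dp = np.full((m + 1, t_buckets + 1), INF)
    back = np.zeros((m + 1, t_buckets + 1), dtype=int)
    dp[0, 0] = 0.0
    for k in range(1, m + 1):
        for j in range(k, t_buckets + 1):
            best, arg = INF, k - 1
            for i in range(k - 1, j):
                cand = dp[k - 1, i] + seg_cost(i, j)
                if cand < best:
                    best, arg = cand, i
            dp[k, j], back[k, j] = best, arg

    bounds, j = [], t_buckets                        # recover segment end buckets
    for k in range(m, 0, -1):
        bounds.append(j)
        j = back[k, j]
    return edges, sorted(bounds)

def quantize_contexts(ctx, edges, bounds):
    """Map each context vector to its quantization level 0..m-1."""
    proj = np.mean(np.abs(ctx), axis=-1)
    bucket = np.clip(np.searchsorted(edges, proj, side="right") - 1,
                     0, len(edges) - 2)
    return np.searchsorted(np.asarray(bounds), bucket, side="right")
```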
Step 14 classifies each pixel into one of 32 models, so 32 groups of data are obtained; each group contains the gray values of a number of current pixels $y$ and the values of the surrounding pixels in their diamond neighborhoods, as shown in Fig. 2, which illustrates the diamond neighborhood used by the spatial filter of the present invention and also defines the pixels participating in the following regression analysis. For each group of diamond-neighborhood data, the following model is assumed:

$$\sum_{k=1}^{12} b_k x_k + \alpha = y$$

where $b_k$ are the filter coefficients to be estimated, $\alpha$ is the estimated approximate noise, and $x_k$ are the gray values of the pixels in the diamond neighborhood of the current pixel $y$. The filter coefficients $b_k$ and the model noise $\alpha$ can be estimated by regression analysis based on the least-squares method.
Steps 12 to 15 complete the building of the filter model. To apply it in the actual noise reduction process, the same initial filtering operation 22 as in step 12 is performed on the noisy image 21 to be processed; the smoothed image is subtracted from the original image to obtain the filtering residual, forming the corresponding image context 23; the context is substituted into the vector quantizer obtained in step 14 to obtain its quantization level 24; and finally, each pixel is filtered with the corresponding filter to obtain the output image.
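Putting the sketches above together, an end-to-end run of the model building (steps 12 to 15) and its application (steps 22 to 31) could look as follows; it reuses the helper functions defined in the earlier sketches and assumes train_image and noisy_image are 2-D NumPy arrays.

```python
def build_model(train_image, m=32):
    """Steps 12-15: build the context-quantized filter model from a clean image."""
    I_hat = anisotropic_step(train_image)
    g, ctx = context_vectors(train_image, I_hat)
    edges, bounds = design_quantizer(g, ctx, m=m)
    labels = quantize_contexts(ctx, edges, bounds)
    filters = fit_context_filters(train_image, labels, m=m)
    return edges, bounds, filters

def denoise(noisy_image, edges, bounds, filters):
    """Steps 22-31: initial filtering, context quantization, per-context filtering."""
    I_hat = anisotropic_step(noisy_image)
    _, ctx = context_vectors(noisy_image, I_hat)
    labels = quantize_contexts(ctx, edges, bounds)
    return apply_context_filters(noisy_image, labels, filters)
```

With these assumptions, edges, bounds and filters together constitute the filter model of the flow chart, and denoise corresponds to its invocation on a noisy input.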