CN102143303A - Image denoising method in transmission line intelligent monitoring system


Info

Publication number
CN102143303A
CN102143303A (Application CN 201110063780)
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN 201110063780
Other languages
Chinese (zh)
Inventor
何冰
刘新平
沈超
刘振海
胡凌靖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Corp of China SGCC
Shanghai Municipal Electric Power Co
Shanghai Jiulong Electric Power Group Co Ltd
Original Assignee
SHANGHAI ELECTRIC POWER LIVE WORKING TECHNOLOGY DEVELOPMENT Co Ltd
Shanghai Municipal Electric Power Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHANGHAI ELECTRIC POWER LIVE WORKING TECHNOLOGY DEVELOPMENT Co Ltd, Shanghai Municipal Electric Power Co filed Critical SHANGHAI ELECTRIC POWER LIVE WORKING TECHNOLOGY DEVELOPMENT Co Ltd
Priority to CN 201110063780 priority Critical patent/CN102143303A/en
Publication of CN102143303A publication Critical patent/CN102143303A/en
Pending legal-status Critical Current


Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an image denoising method in a transmission line intelligent monitoring system. The method is mainly used for filtering out the noise in images acquired by a video monitoring system and for addressing the loss of picture contrast and edge sharpness under dense fog, sand and dust, and other severe weather conditions. The image denoising method comprises the following steps: smoothing the image with an anisotropy-based filter to estimate the noise energy at each pixel of the image and thereby compute a filtering residual function; taking the filtering residual function within a pixel neighborhood as the context features of the image; quantizing the context vector corresponding to each pixel into 32 levels with a vector quantizer built by a dynamic programming algorithm; and finally, for the pixels at each level, constructing a filter with different parameters by a regression analysis method so as to enhance the image.

Description

Image denoising method in intelligent monitoring system of power transmission line
Technical Field
The invention relates to an image denoising method and belongs to the technical field of image processing. In particular, it relates to an image denoising method in an intelligent monitoring system of a power transmission line.
Background
Due to insufficient light in the monitoring environment and severe weather (such as sand storms), the video images acquired by a video monitoring system contain a great deal of noise, and the contrast and edges of the images are very blurred. Moreover, the noise in video monitoring images is not produced by a single noise model but is a complex mixture of noises, namely random noise, quantization noise, and impulse noise. The main difficulty encountered when applying conventional image denoising methods to video image enhancement is that, while the image noise is eliminated, some important image details or edge information is also weakened. This makes the design of edge-preserving filters a research focus in image processing. Traditional edge-preserving filters, such as the median filter, apply mathematical morphology operators that can remove additive or multiplicative noise from an image and enhance edge information. However, they still cannot overcome the problem that some weak boundaries are weakened.
Because noises of different natures are present in images acquired in severe weather and environments, the quality of remote video monitoring images is often greatly degraded by noise pollution and blurred contrast, which seriously affects the reliability and accuracy of video monitoring and prevents the monitoring system from achieving the timely alarming and early warning it requires. Therefore, before an image is sent to the monitoring center platform and the client, preprocessing such as noise reduction or enhancement usually needs to be performed on the video frame. The usual enhancement applies a spatial filter to smooth the incoming noisy image within a local window of fixed size. However, such processing seriously destroys the texture features and edge details of the image, and the contrast of the image is reduced.
To solve this practical problem, many edge-preserving filters have been studied extensively in recent years and applied in software for video surveillance images. For example, median filters, bilateral filters, anisotropic filters and the like are widely used for image noise reduction in various fields, but most of them are effective only against Gaussian or multiplicative noise and are not ideal for image enhancement problems with mixed noise.
Disclosure of Invention
The technical problem to be solved by the invention is to remove image noise while preserving the edge details and textures of the image.
To solve this technical problem, the invention adopts the context quantization technique from information theory to address the estimation of the image noise model. First, the image is smoothed with a gradient-based filter, and a filtering error energy function is then calculated from the smoothed image; that is, the high-frequency information at each pixel, including the edge information and the noise around the pixel, is estimated. Next, a dynamic programming approach is applied to quantize the resulting error energy function into 32 different levels. A set of quantized contexts is then constructed from the resulting quantized error energies and the texture features of the image. Finally, according to the different quantization contexts, a regression analysis method is applied to construct filters with different parameters for the different context models, thereby realizing an adaptive filter.
Specifically, the invention provides an image denoising method in an intelligent monitoring system of a power transmission line, wherein a given noise-free image is denoted $I$ and the noise image to be processed is denoted $I_{Noise}$. The method comprises the following steps:

A. designing a gradient-based anisotropic filter and applying it to the given noise-free image $I$ to obtain a filtered image $\hat{I}$;

B. based on the filtered image $\hat{I}$ obtained in step A, calculating the filtering residual function $g$ for each pixel point and forming the corresponding image context $\vec{c}$;

C. according to the image context $\vec{c}$ obtained in step B, constructing a 32-level vector quantizer $Q(\vec{c})$;

D. for each quantized context $C_Q$, solving the filter coefficients $b_k$ and $\alpha$ of formula (1) by a regression analysis method and constructing a filter $f(x|C_Q)$ in a diamond-shaped window;

$$\sum_{k=1}^{12} b_k x_k + \alpha = y \qquad (1)$$

E. applying the filter described in step A to the noisy image $I_{Noise}$ to obtain the initial smooth image $\hat{I}_{Noise}$;

F. subtracting $\hat{I}_{Noise}$ from $I_{Noise}$ to obtain the residual $g_{Noise}$ and forming the corresponding image context $\vec{c}_{Noise}$;

G. feeding the context $\vec{c}_{Noise}$ corresponding to each pixel into $Q(\vec{c})$ to obtain the corresponding level;

H. using the filter $f(x|C_Q)$ to filter $I_{Noise}$ and obtain the output image.
In this way a non-local edge-preserving filter is obtained which keeps the important edge information while removing the noise of the image to the greatest extent, thereby solving the denoising and enhancement problems in video monitoring images.
Preferably, the gradient-based anisotropic filter in step A is specifically designed as follows:
A1, calculating the gradient $\|\nabla I_{i,j}\|$ of the given noise-free image $I$, the differences in the four directions $\nabla_N I_{i,j}$, $\nabla_S I_{i,j}$, $\nabla_W I_{i,j}$ and $\nabla_E I_{i,j}$, and the corresponding filter coefficients $c_{N\,i,j}$, $c_{S\,i,j}$, $c_{W\,i,j}$ and $c_{E\,i,j}$:

$$\|\nabla I_{i,j}\| = |I_{i+1,j} - I_{i-1,j}| + |I_{i,j+1} - I_{i,j-1}|$$

$$\begin{cases}\nabla_N I_{i,j} = I_{i-1,j} - I_{i,j}\\ \nabla_S I_{i,j} = I_{i+1,j} - I_{i,j}\\ \nabla_W I_{i,j} = I_{i,j-1} - I_{i,j}\\ \nabla_E I_{i,j} = I_{i,j+1} - I_{i,j}\end{cases}$$

$$\begin{cases}c_{N\,i,j} = \dfrac{1}{1 + (\nabla_N I_{i,j}/K)^2}\\[6pt] c_{S\,i,j} = \dfrac{1}{1 + (\nabla_S I_{i,j}/K)^2}\\[6pt] c_{W\,i,j} = \dfrac{1}{1 + (\nabla_W I_{i,j}/K)^2}\\[6pt] c_{E\,i,j} = \dfrac{1}{1 + (\nabla_E I_{i,j}/K)^2}\end{cases}$$

A2, calculating the anisotropic filtering result $\hat{I}_{i,j}$ from the parameters obtained above:

$$\hat{I}_{i,j} = \frac{c_{N\,i,j}\,\nabla_N I_{i,j} + c_{S\,i,j}\,\nabla_S I_{i,j} + c_{W\,i,j}\,\nabla_W I_{i,j} + c_{E\,i,j}\,\nabla_E I_{i,j}}{4}$$

A3, determining the damping degree of the filtering according to the magnitude of the gradient $\|\nabla I_{i,j}\|$:

$$\hat{I}_{i,j} = \begin{cases}\dfrac{\hat{I}_{i,j} + I_{i,j}}{2}, & \text{if } \|\nabla I_{i,j}\| < C\\[6pt] \dfrac{3\hat{I}_{i,j} + I_{i,j}}{4}, & \text{otherwise.}\end{cases}$$
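For illustration, the following is a minimal NumPy sketch of the filter of steps A1–A3. The function name and the border handling are illustrative choices, not taken from the patent; the default parameters follow the values K = 64 and C = 32 suggested later in the description.

```python
import numpy as np

def anisotropic_filter(I, K=64.0, C=32.0):
    """One pass of the gradient-based anisotropic filter of steps A1-A3.

    I : 2-D float array (grayscale image).
    K : controls the degree of anisotropy.
    C : gradient threshold controlling the damping of the filtering.
    """
    I = I.astype(np.float64)
    P = np.pad(I, 1, mode='edge')          # replicate borders for the neighbour differences
    dN = P[:-2, 1:-1] - I                  # I[i-1, j] - I[i, j]
    dS = P[2:, 1:-1] - I                   # I[i+1, j] - I[i, j]
    dW = P[1:-1, :-2] - I                  # I[i, j-1] - I[i, j]
    dE = P[1:-1, 2:] - I                   # I[i, j+1] - I[i, j]

    # Direction-dependent smoothing coefficients c = 1 / (1 + (d/K)^2).
    cN = 1.0 / (1.0 + (dN / K) ** 2)
    cS = 1.0 / (1.0 + (dS / K) ** 2)
    cW = 1.0 / (1.0 + (dW / K) ** 2)
    cE = 1.0 / (1.0 + (dE / K) ** 2)

    # Weighted average of the four directional differences (step A2).
    I_hat = (cN * dN + cS * dS + cW * dW + cE * dE) / 4.0

    # Gradient magnitude used to select the damping degree (step A3).
    grad = np.abs(P[2:, 1:-1] - P[:-2, 1:-1]) + np.abs(P[1:-1, 2:] - P[1:-1, :-2])
    return np.where(grad < C, (I_hat + I) / 2.0, (3.0 * I_hat + I) / 4.0)
```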
Further, the step B specifically includes:
B1, based on the image $I$ and the filtered image $\hat{I}$, computing the filtered error image $g = I - \hat{I}$;

B2, using the high-frequency information in the 3 × 3 window centered on the current pixel as the image context, forming the corresponding context vector $\vec{c}_{i,j}$:

$$\vec{c}_{i,j} = \{\,g_{i-1,j-1},\ g_{i-1,j},\ g_{i-1,j+1},\ g_{i,j-1},\ g_{i,j+1},\ g_{i+1,j-1},\ g_{i+1,j},\ g_{i+1,j+1}\,\} \qquad (4)$$
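A short sketch, under the same illustrative naming as above, of forming the residual image $g$ and the eight-element context vector of formula (4) for every interior pixel:

```python
import numpy as np

def context_vectors(I, I_hat):
    """Residual g = I - I_hat and the 8-neighbour context vector of each interior pixel.

    Returns (g, ctx) where ctx has shape (H-2, W-2, 8), one row per formula (4).
    """
    g = I.astype(np.float64) - I_hat.astype(np.float64)
    # The eight neighbours of each interior pixel, in the order of formula (4).
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    H, W = g.shape
    ctx = np.stack([g[1 + di:H - 1 + di, 1 + dj:W - 1 + dj] for di, dj in offsets], axis=-1)
    return g, ctx
```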
Still further, the step C specifically includes:
based on the obtained context vectors $\vec{c}$ and the filtered error image $g$, applying a dynamic programming algorithm to minimize the value of the following conditional entropy:

$$-\sum_{d=1}^{m} p\bigl(Q(\vec{c})=d\bigr)\sum_{g} p\bigl(g \mid Q(\vec{c})=d\bigr)\,\log p\bigl(g \mid Q(\vec{c})=d\bigr) \qquad (5)$$

where $p\bigl(g \mid Q(\vec{c})=d\bigr)$ is the conditional probability of the current pixel value in the image $g$ given its context vector $\vec{c}$; that is, a 32-level quantizer $Q(\vec{c})$ is solved that quantizes the vector context $\vec{c}$ into 32 bins of its defining space:

$$\vec{c} = \bigcup_{d=1}^{32}\{\,C_Q \mid C_Q = d\,\} \qquad (6)$$
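The quantizer is the most involved part. The sketch below is a simplified stand-in: it first projects each context vector onto one dimension (here simply its mean absolute value, standing in for the projective transformation mentioned later in the description), then uses dynamic programming to split that axis into 32 contiguous intervals minimizing the empirical conditional entropy of expression (5). The function names, the coarse histogram grid, and the choice of projection are all assumptions made for illustration, not the patent's exact construction.

```python
import numpy as np

def project_context(ctx):
    """Project each 8-D context vector onto one dimension (here: mean absolute residual)."""
    return np.mean(np.abs(ctx), axis=-1)

def entropy_dp_quantizer(scores, residuals, n_levels=32, n_score_bins=64, n_res_bins=64):
    """Split the 1-D scores into n_levels contiguous intervals that minimise the
    empirical conditional entropy of the residual given the interval (expression (5))."""
    scores, residuals = scores.ravel(), residuals.ravel()
    # Coarse 2-D histogram of (score, residual) pairs so the DP stays small.
    s_edges = np.quantile(scores, np.linspace(0.0, 1.0, n_score_bins + 1))
    s_edges[-1] += 1e-9
    s_idx = np.clip(np.searchsorted(s_edges, scores, side='right') - 1, 0, n_score_bins - 1)
    r_idx = np.clip(((residuals - residuals.min()) /
                     (np.ptp(residuals) + 1e-9) * n_res_bins).astype(int), 0, n_res_bins - 1)
    hist = np.zeros((n_score_bins, n_res_bins))
    np.add.at(hist, (s_idx, r_idx), 1.0)
    cum = np.vstack([np.zeros(n_res_bins), np.cumsum(hist, axis=0)])  # prefix sums over score bins

    def cost(lo, hi):
        """N_d * H(residual | score bin in [lo, hi)) -- additive over intervals."""
        h = cum[hi] - cum[lo]
        n = h.sum()
        if n == 0.0:
            return 0.0
        p = h[h > 0] / n
        return float(-n * np.sum(p * np.log2(p)))

    # f[d, j]: best cost of splitting score bins 0..j-1 into d intervals.
    f = np.full((n_levels + 1, n_score_bins + 1), np.inf)
    back = np.zeros_like(f, dtype=int)
    f[0, 0] = 0.0
    for d in range(1, n_levels + 1):
        for j in range(d, n_score_bins + 1):
            for i in range(d - 1, j):
                c = f[d - 1, i] + cost(i, j)
                if c < f[d, j]:
                    f[d, j], back[d, j] = c, i
    # Recover the interval boundaries (in score units).
    cuts, j = [n_score_bins], n_score_bins
    for d in range(n_levels, 0, -1):
        j = back[d, j]
        cuts.append(j)
    return s_edges[sorted(set(cuts))]
```

With the returned edges, a pixel's quantization level is simply `np.searchsorted(edges, score) - 1`, clipped to the range [0, 31].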
still further, the step D specifically includes:
D1, in the image $I$, for each pixel point $y$, taking a diamond neighborhood $R(y)$ around it, as shown in FIG. 2;

D2, from the 32 quantized context sets $C$ obtained in step C, obtaining the pixel point set $Y = \{y \mid C_Q(y) \in C,\ C_Q(y) = C_Q\}$ and correspondingly $R(Y) = \{R(y) \mid C_Q(y) \in C,\ C_Q(y) = C_Q\}$; the gray value $y$ of each of these pixel points and the gray values $x_k$ of the other 12 pixel points in its diamond neighborhood, for all of $R(Y)$, are assumed to satisfy the following relationship:

$$\sum_{k=1}^{12} b_k x_k + \alpha = y \qquad (8)$$

Based on the obtained $R(Y)$, the coefficients $b_k$ and $\alpha$ can be estimated by regression analysis; the estimated coefficients $b_k$ and $\alpha$ are also the filter coefficients of the filter $f(x|C_Q)$.
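A hedged sketch of step D: for each of the 32 quantized contexts, gather the 12 diamond-neighborhood values $x_k$ around every pixel $y$ of that context and solve formula (8) by least squares. The diamond offsets below (all pixels within city-block distance 2 of the center, center excluded, which gives exactly 12 points) are one plausible reading of FIG. 2 and are an assumption, as are the function names and the fallback for under-populated contexts.

```python
import numpy as np

# Assumed 12-pixel diamond neighbourhood: 0 < |di| + |dj| <= 2.
DIAMOND = [(di, dj) for di in range(-2, 3) for dj in range(-2, 3)
           if 0 < abs(di) + abs(dj) <= 2]

def fit_context_filters(I, levels, n_levels=32):
    """Estimate (b_1..b_12, alpha) of formula (8) for every quantized context level.

    I      : noise-free training image (2-D array).
    levels : per-pixel quantized context level (same shape as I, values 0..n_levels-1).
    Returns an array of shape (n_levels, 13): the 12 coefficients b_k followed by alpha.
    """
    I = I.astype(np.float64)
    H, W = I.shape
    # For every interior pixel: its gray value y and its 12 diamond-neighbour values x_k.
    y = I[2:H - 2, 2:W - 2].ravel()
    X = np.stack([I[2 + di:H - 2 + di, 2 + dj:W - 2 + dj].ravel()
                  for di, dj in DIAMOND], axis=1)
    lv = levels[2:H - 2, 2:W - 2].ravel()

    coeffs = np.zeros((n_levels, 13))
    for d in range(n_levels):
        mask = lv == d
        if mask.sum() < 13:                      # too few samples: fall back to plain averaging
            coeffs[d, :12] = 1.0 / 12.0
            continue
        A = np.hstack([X[mask], np.ones((mask.sum(), 1))])   # columns: x_1..x_12, 1
        sol, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        coeffs[d] = sol                          # b_1..b_12, alpha
    return coeffs
```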
Further, the step H specifically includes:
H1, in the image $I_{Noise}$, for each pixel point $y$, taking a diamond neighborhood $R(y)$ around it, as shown in FIG. 2;

H2, applying the filter $f(x|C_Q)$ to $I_{Noise}$ for filtering, i.e., computing the output value at each pixel from the 12 pixels of its diamond neighborhood with the coefficients $b_k$ and $\alpha$ of the corresponding quantized context, as in formula (1).

The image denoising method described above is thus built from: a gradient-based anisotropic filter or predictor, a 32-level vector quantizer $Q(\vec{c})$, and the filters $f(x|C_Q)$ constructed for each quantized context. The functions and actions of these parts are described in the corresponding sections above.
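Applying the trained per-context filters is then a table look-up followed by the linear combination of formula (1). A sketch under the same illustrative assumptions (diamond offsets and coefficient layout) as the fitting sketch above:

```python
import numpy as np

# Same assumed 12-pixel diamond offsets as in the fitting sketch.
DIAMOND = [(di, dj) for di in range(-2, 3) for dj in range(-2, 3)
           if 0 < abs(di) + abs(dj) <= 2]

def apply_context_filters(I_noise, levels, coeffs):
    """Filter I_noise with the per-context coefficients returned by fit_context_filters.

    levels : quantized context level of every pixel of I_noise (same shape).
    coeffs : (n_levels, 13) array of [b_1..b_12, alpha] per level.
    """
    I = I_noise.astype(np.float64)
    H, W = I.shape
    out = I.copy()                                 # border pixels are left unfiltered
    # 12 diamond-neighbour values of every interior pixel, shape (H-4, W-4, 12).
    X = np.stack([I[2 + di:H - 2 + di, 2 + dj:W - 2 + dj] for di, dj in DIAMOND], axis=-1)
    lv = levels[2:H - 2, 2:W - 2]
    b, a = coeffs[lv, :12], coeffs[lv, 12]         # per-pixel coefficients
    out[2:H - 2, 2:W - 2] = np.sum(X * b, axis=-1) + a
    return out
```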
The beneficial effects of the invention are as follows: the invention applies the context quantization technique from information theory together with an adaptive regression analysis method to design a non-local edge-preserving filter that removes the noise of the image to the greatest extent while keeping the important edge information, so as to solve the denoising and enhancement problems in video monitoring images. By designing an adaptive filter with edge preservation, the edge details and texture of the image are enhanced at the same time as the image noise is removed. Unlike traditional noise estimation methods, the noise estimation based on context quantization does not depend on the noise signal or the image signal; the estimation is highly robust and is suitable for denoising problems in which the noise model is unknown.
Drawings
FIG. 1 is a schematic flow chart of a two-dimensional image denoising method according to an embodiment of the present invention;
FIG. 2 is a diagram of a diamond neighborhood used in a spatial filter in an embodiment of the present invention.
Detailed Description
Hereinafter, the present invention will be described in more detail by way of examples with reference to the accompanying drawings. This example is merely a description of the best mode of carrying out the invention and does not limit the scope of the invention in any way.
Examples
FIG. 1 is a schematic flow chart of the two-dimensional image denoising method of the present invention. A gradient image is calculated from the input noise-free training image sample 11, an adaptive filter is designed from the obtained horizontal and vertical gradients, and this filter is applied to obtain the smoothed image 12. Step 13 combines the original image $I$ and the initial smoothed image $\hat{I}$ to obtain the high-frequency information of each pixel point, i.e. the filtering residual, and forms context vectors from the residual information in a neighborhood. Step 14 quantizes the context vectors into 32 levels according to the minimum conditional entropy criterion. Step 15 applies regression analysis to the quantized context features to estimate the filter coefficients under each group of context features, constituting the 32 filters corresponding to the contexts. The operations of steps 22 and 23 are similar to those of steps 12 and 13, the difference being that the initial filtering is performed on the input noise image and the corresponding context information is obtained. Step 24 calculates the level of the current noise-image context with the quantizer calculated in step 14. Step 31 performs the filtering operation on the noise image according to the level of its context, using the 32 sets of filters obtained in step 15, and obtains the output noise-reduced image 32. The specific steps are as follows:
1. Calculating the horizontal and vertical gradients of the noise-free training image sample $I$, designing a gradient-adaptive anisotropic filter based on them, and applying the filter to the image $I$ to obtain a filtered image $\hat{I}$.

2. Based on the resulting filtered image $\hat{I}$, calculating the filtering residual function $g$ for each pixel point and generating the corresponding context vector $\vec{c}$.

3. Based on the obtained context vectors $\vec{c}$, designing the 32-level quantizer $Q(\vec{c})$ of the context vector by solving the minimum conditional entropy.

4. For each quantized context interval $C_Q$, applying regression analysis and constructing a filter $f_k(x|C_Q)$, $k = 1, 2, \ldots, 32$, in a diamond-shaped window.

5. Applying the same filtering operation as in step 1 to the noisy image $I_{Noise}$ to be processed, obtaining the filtered image $\hat{I}_{Noise}$.

6. Subtracting $\hat{I}_{Noise}$ from $I_{Noise}$ to obtain the residual $g_{Noise}$ and forming the corresponding image context $\vec{c}_{Noise}$.

7. Feeding the context $\vec{c}_{Noise}$ corresponding to each pixel into $Q(\vec{c})$ to obtain the corresponding level.

8. Using the filter $f(x|C_Q)$ to filter $I_{Noise}$ and obtain the output image.
The above process flow can be summarized as the building and the invocation of the filter model. The building of the model is the core part and comprises three major components: the estimation of the high-frequency information of the image, the selection of the image context model, and the design of the filters according to the quantized contexts. Estimating the high-frequency information of an image is usually a necessary step when designing an adaptive filter, and the estimation method differs for different noise models. The most critical difference between this method and other techniques is that the estimated high-frequency components are not applied directly to the design of the filter; instead they are used to further describe and quantize a context model of the features around each image pixel. It can be deduced from the general source coding theory of information theory that such a method has the advantage that a statistical model closely approximating the noise itself can be found without knowing the noise model of the image.
The detailed steps of the invention are as follows:
Calculating the gradient $\|\nabla I_{i,j}\|$ of the training image sample $I$, the differences in the four directions $\nabla_N I_{i,j}$, $\nabla_S I_{i,j}$, $\nabla_W I_{i,j}$ and $\nabla_E I_{i,j}$, and the corresponding filter coefficients $c_{N\,i,j}$, $c_{S\,i,j}$, $c_{W\,i,j}$ and $c_{E\,i,j}$:

$$\|\nabla I_{i,j}\| = |I_{i+1,j} - I_{i-1,j}| + |I_{i,j+1} - I_{i,j-1}|$$

$$\begin{cases}\nabla_N I_{i,j} = I_{i-1,j} - I_{i,j}\\ \nabla_S I_{i,j} = I_{i+1,j} - I_{i,j}\\ \nabla_W I_{i,j} = I_{i,j-1} - I_{i,j}\\ \nabla_E I_{i,j} = I_{i,j+1} - I_{i,j}\end{cases}$$

$$\begin{cases}c_{N\,i,j} = \dfrac{1}{1 + (\nabla_N I_{i,j}/K)^2}\\[6pt] c_{S\,i,j} = \dfrac{1}{1 + (\nabla_S I_{i,j}/K)^2}\\[6pt] c_{W\,i,j} = \dfrac{1}{1 + (\nabla_W I_{i,j}/K)^2}\\[6pt] c_{E\,i,j} = \dfrac{1}{1 + (\nabla_E I_{i,j}/K)^2}\end{cases}$$

Calculating the anisotropic filtering result $\hat{I}_{i,j}$ from these parameters:

$$\hat{I}_{i,j} = \frac{c_{N\,i,j}\,\nabla_N I_{i,j} + c_{S\,i,j}\,\nabla_S I_{i,j} + c_{W\,i,j}\,\nabla_W I_{i,j} + c_{E\,i,j}\,\nabla_E I_{i,j}}{4}$$

Determining the damping degree of the filtering according to the magnitude of the gradient $\|\nabla I_{i,j}\|$:

$$\hat{I}_{i,j} = \begin{cases}\dfrac{\hat{I}_{i,j} + I_{i,j}}{2}, & \text{if } \|\nabla I_{i,j}\| < C\\[6pt] \dfrac{3\hat{I}_{i,j} + I_{i,j}}{4}, & \text{otherwise.}\end{cases}$$
The anisotropic filter applies different smoothing strengths according to the different edge strengths and directions, so that the edge information in the image is well preserved while smoothing. $K$ and $C$ are parameters of the filter: $K$ controls the degree of anisotropy and may be a fixed value or be determined dynamically from the histogram distribution of the gradient image, while $C$ further controls the degree of filtering. Without loss of generality, $K = 64$ and $C = 32$ may be taken.
Generally, the high-frequency information of an image can be estimated by applying an adaptive filter; in other words, the residual $g = I - \hat{I}$ may itself be regarded as a somewhat noisy image, but it also contains edge information and texture features. Using the high-frequency information in the 3 × 3 window centered on the current pixel as the image context, the corresponding context vector $\vec{c}_{i,j}$ is formed:

$$\vec{c}_{i,j} = \{\,g_{i-1,j-1},\ g_{i-1,j},\ g_{i-1,j+1},\ g_{i,j-1},\ g_{i,j+1},\ g_{i+1,j-1},\ g_{i+1,j},\ g_{i+1,j+1}\,\} \qquad (4)$$
The statistics of the image context $\vec{c}$ are then estimated further so that the noise model can be approximately described. According to Bayes' rule and general source coding theory, the closest estimate of the image signal $g$ by an approximate model is obtained by minimizing the conditional entropy, i.e. by minimizing the Kullback-Leibler distance:

$$-\sum_{d=1}^{m} p\bigl(Q(\vec{c})=d\bigr)\sum_{g} p\bigl(g \mid Q(\vec{c})=d\bigr)\,\log p\bigl(g \mid Q(\vec{c})=d\bigr)$$

That is, a vector quantizer $Q(\vec{c})$ is solved that divides $\vec{c}$ into 32 intervals, with any $\vec{c}$ corresponding uniquely to one of these 32 intervals. In general, the minimum-conditional-entropy problem in a high-dimensional space is not convex, but a projective transformation can be applied to project $\vec{c}$ onto a low-dimensional space in which the resolution of $\vec{c}$ is highest. The problem is thereby converted into a minimum-conditional-entropy problem in a one-dimensional space, which can be solved by dynamic programming.
Step 14 classifies each pixel into one of the 32 models, so 32 groups of data are obtained; each group contains the gray values of a number of current pixels $y$ together with the values of the surrounding pixels in a diamond neighborhood, as shown in FIG. 2, which illustrates the diamond neighborhood used by the spatial filter of the present invention and also defines the pixels participating in the regression analysis below. For each group of diamond-neighborhood data, the following model may be assumed:

$$\sum_{k=1}^{12} b_k x_k + \alpha = y$$

where $b_k$ are the estimated filter coefficients, $\alpha$ is the estimated approximate noise, and $x_k$ are the gray values of the pixel points in the diamond neighborhood of the current pixel point $y$. The filter coefficients $b_k$ and the model noise $\alpha$ can then be estimated by a regression analysis based on the least-squares method.
Steps 12 to 15 complete the building of the filter model. To apply it to actual noise reduction, the same initial filtering operation 22 as in step 12 is performed on the noise image 21 to be processed; the smooth image is subtracted from the original image to obtain the filtering residual, forming the corresponding image context 23; the context is substituted into the vector quantizer $Q(\vec{c})$ obtained in step 14 to obtain its quantization level 24; and finally each pixel is filtered with the corresponding filter to obtain the output image.
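Putting the sketches above together, the training-then-denoising flow of FIG. 1 might look as follows; all function names (`anisotropic_filter`, `context_vectors`, `project_context`, `entropy_dp_quantizer`, `fit_context_filters`, `apply_context_filters`) are the illustrative ones defined in the earlier sketches, not identifiers from the patent.

```python
import numpy as np

def train_model(I_train):
    """Build the filter model (steps 11-15) from a noise-free training image."""
    I_hat = anisotropic_filter(I_train)                      # step 12: initial smoothing
    g, ctx = context_vectors(I_train, I_hat)                 # step 13: residual + contexts
    scores = project_context(ctx)
    edges = entropy_dp_quantizer(scores, g[1:-1, 1:-1])      # step 14: 32-level quantizer
    levels = np.zeros(I_train.shape, dtype=int)
    levels[1:-1, 1:-1] = np.clip(np.searchsorted(edges, scores) - 1, 0, 31)
    coeffs = fit_context_filters(I_train, levels)            # step 15: per-context filters
    return edges, coeffs

def denoise(I_noise, edges, coeffs):
    """Apply the trained model to a noisy frame (steps 21-32)."""
    I_hat = anisotropic_filter(I_noise)                      # step 22
    g, ctx = context_vectors(I_noise, I_hat)                 # step 23
    scores = project_context(ctx)
    levels = np.zeros(I_noise.shape, dtype=int)
    levels[1:-1, 1:-1] = np.clip(np.searchsorted(edges, scores) - 1, 0, 31)   # step 24
    return apply_context_filters(I_noise, levels, coeffs)    # step 31
```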

Claims (6)

1. An image denoising method in an intelligent monitoring system of a power transmission line, wherein a given noise-free image is denoted $I$ and the noise image to be processed is denoted $I_{Noise}$, characterized in that the method comprises the following steps:

A. designing a gradient-based anisotropic filter and applying it to the given noise-free image $I$ to obtain a filtered image $\hat{I}$;

B. based on the filtered image $\hat{I}$ obtained in step A, calculating the filtering residual function $g$ for each pixel point and forming the corresponding image context $\vec{c}$;

C. according to the image context $\vec{c}$ obtained in step B, constructing a 32-level vector quantizer $Q(\vec{c})$;

D. for each quantized context $C_Q$, solving the filter coefficients $b_k$ and $\alpha$ of formula (1) by a regression analysis method and constructing a filter $f(x|C_Q)$ in a diamond-shaped window;

$$\sum_{k=1}^{12} b_k x_k + \alpha = y \qquad (1)$$

E. applying the filter described in step A to the noisy image $I_{Noise}$ to obtain the initial smooth image $\hat{I}_{Noise}$;

F. subtracting $\hat{I}_{Noise}$ from $I_{Noise}$ to obtain the residual $g_{Noise}$ and forming the corresponding image context $\vec{c}_{Noise}$;

G. feeding the context $\vec{c}_{Noise}$ corresponding to each pixel into $Q(\vec{c})$ to obtain the corresponding level;

H. using the filter $f(x|C_Q)$ to filter $I_{Noise}$ and obtain the output image.
2. The image denoising method in the intelligent monitoring system of the power transmission line according to claim 1, characterized in that the gradient-based anisotropic filter in step A is specifically designed as follows:

A1, calculating the gradient $\|\nabla I_{i,j}\|$ of the given noise-free image $I$, the differences in the four directions $\nabla_N I_{i,j}$, $\nabla_S I_{i,j}$, $\nabla_W I_{i,j}$ and $\nabla_E I_{i,j}$, and the corresponding filter coefficients $c_{N\,i,j}$, $c_{S\,i,j}$, $c_{W\,i,j}$ and $c_{E\,i,j}$:

$$\|\nabla I_{i,j}\| = |I_{i+1,j} - I_{i-1,j}| + |I_{i,j+1} - I_{i,j-1}|$$

$$\begin{cases}\nabla_N I_{i,j} = I_{i-1,j} - I_{i,j}\\ \nabla_S I_{i,j} = I_{i+1,j} - I_{i,j}\\ \nabla_W I_{i,j} = I_{i,j-1} - I_{i,j}\\ \nabla_E I_{i,j} = I_{i,j+1} - I_{i,j}\end{cases}$$

$$\begin{cases}c_{N\,i,j} = \dfrac{1}{1 + (\nabla_N I_{i,j}/K)^2}\\[6pt] c_{S\,i,j} = \dfrac{1}{1 + (\nabla_S I_{i,j}/K)^2}\\[6pt] c_{W\,i,j} = \dfrac{1}{1 + (\nabla_W I_{i,j}/K)^2}\\[6pt] c_{E\,i,j} = \dfrac{1}{1 + (\nabla_E I_{i,j}/K)^2}\end{cases}$$

A2, calculating the anisotropic filtering result $\hat{I}_{i,j}$ from the parameters obtained above:

$$\hat{I}_{i,j} = \frac{c_{N\,i,j}\,\nabla_N I_{i,j} + c_{S\,i,j}\,\nabla_S I_{i,j} + c_{W\,i,j}\,\nabla_W I_{i,j} + c_{E\,i,j}\,\nabla_E I_{i,j}}{4}$$

A3, determining the damping degree of the filtering according to the magnitude of the gradient $\|\nabla I_{i,j}\|$:

$$\hat{I}_{i,j} = \begin{cases}\dfrac{\hat{I}_{i,j} + I_{i,j}}{2}, & \text{if } \|\nabla I_{i,j}\| < C\\[6pt] \dfrac{3\hat{I}_{i,j} + I_{i,j}}{4}, & \text{otherwise.}\end{cases}$$
3. The image denoising method in the intelligent monitoring system of the power transmission line according to claim 1, characterized in that step B specifically comprises:

B1, based on the image $I$ and the filtered image $\hat{I}$, computing the filtered error image $g = I - \hat{I}$;

B2, using the high-frequency information in the 3 × 3 window centered on the current pixel as the image context, forming the corresponding context vector $\vec{c}_{i,j}$:

$$\vec{c}_{i,j} = \{\,g_{i-1,j-1},\ g_{i-1,j},\ g_{i-1,j+1},\ g_{i,j-1},\ g_{i,j+1},\ g_{i+1,j-1},\ g_{i+1,j},\ g_{i+1,j+1}\,\} \qquad (4)$$
4. The image denoising method in the intelligent monitoring system of the power transmission line according to claim 1, characterized in that step C specifically comprises:

based on the obtained context vectors $\vec{c}$ and the filtered error image $g$, applying a dynamic programming algorithm to minimize the value of the following conditional entropy:

$$-\sum_{d=1}^{m} p\bigl(Q(\vec{c})=d\bigr)\sum_{g} p\bigl(g \mid Q(\vec{c})=d\bigr)\,\log p\bigl(g \mid Q(\vec{c})=d\bigr) \qquad (5)$$

where $p\bigl(g \mid Q(\vec{c})=d\bigr)$ is the conditional probability of the current pixel value in the image $g$ given its context vector $\vec{c}$; that is, a 32-level quantizer $Q(\vec{c})$ is solved that quantizes the vector context $\vec{c}$ into 32 bins of its defining space:

$$\vec{c} = \bigcup_{d=1}^{32}\{\,C_Q \mid C_Q = d\,\} \qquad (6)$$
5. The image denoising method in the intelligent monitoring system of the power transmission line according to claim 1, characterized in that step D specifically comprises:

D1, in the image $I$, for each pixel point $y$, taking a diamond neighborhood $R(y)$ around it;

D2, from the 32 quantized context sets $C$ obtained in step C, obtaining the pixel point set $Y = \{y \mid C_Q(y) \in C,\ C_Q(y) = C_Q\}$ and correspondingly $R(Y) = \{R(y) \mid C_Q(y) \in C,\ C_Q(y) = C_Q\}$; the gray value $y$ of each of these pixel points and the gray values $x_k$ of the other 12 pixel points in its diamond neighborhood, for all of $R(Y)$, satisfy the following relationship:

$$\sum_{k=1}^{12} b_k x_k + \alpha = y \qquad (8)$$

Based on the obtained $R(Y)$, the coefficients $b_k$ and $\alpha$ can be estimated by regression analysis; the estimated coefficients $b_k$ and $\alpha$ are also the filter coefficients of the filter $f(x|C_Q)$.
6. The image denoising method in the intelligent monitoring system of the power transmission line according to claim 1, characterized in that step H specifically comprises:

H1, in the image $I_{Noise}$, for each pixel point $y$, taking a diamond neighborhood $R(y)$ around it;

H2, applying the filter $f(x|C_Q)$ to $I_{Noise}$ for filtering, i.e., computing the output value at each pixel from the 12 pixels of its diamond neighborhood with the coefficients $b_k$ and $\alpha$ of the corresponding quantized context, as in formula (1).
CN 201110063780 2011-03-16 2011-03-16 Image denoising method in transmission line intelligent monitoring system Pending CN102143303A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110063780 CN102143303A (en) 2011-03-16 2011-03-16 Image denoising method in transmission line intelligent monitoring system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201110063780 CN102143303A (en) 2011-03-16 2011-03-16 Image denoising method in transmission line intelligent monitoring system

Publications (1)

Publication Number Publication Date
CN102143303A true CN102143303A (en) 2011-08-03

Family

ID=44410505

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110063780 Pending CN102143303A (en) 2011-03-16 2011-03-16 Image denoising method in transmission line intelligent monitoring system

Country Status (1)

Country Link
CN (1) CN102143303A (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5771318A (en) * 1996-06-27 1998-06-23 Siemens Corporate Research, Inc. Adaptive edge-preserving smoothing filter
CN101142614A (en) * 2004-09-09 2008-03-12 奥普提克斯晶硅有限公司 Single channel image deformation system and method using anisotropic filtering
US20090245679A1 (en) * 2008-03-27 2009-10-01 Kazuyasu Ohwaki Image processing apparatus
CN101930599A (en) * 2010-08-24 2010-12-29 黄伟萍 Medical image enhancement method and system

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104486533A (en) * 2014-12-31 2015-04-01 珠海全志科技股份有限公司 Image sharpening method and device
CN104486533B (en) * 2014-12-31 2017-09-22 珠海全志科技股份有限公司 Image sharpening method and its device
CN105915762A (en) * 2016-01-18 2016-08-31 上海斐讯数据通信技术有限公司 Noise-pixel adaptive filtering method and noise-pixel adaptive filtering system
CN109523583A (en) * 2018-10-09 2019-03-26 河海大学常州校区 A kind of power equipment based on feedback mechanism is infrared and visible light image registration method
CN109523583B (en) * 2018-10-09 2021-07-13 河海大学常州校区 Infrared and visible light image registration method for power equipment based on feedback mechanism
CN110753243A (en) * 2019-11-05 2020-02-04 深圳市巨潮科技股份有限公司 Image processing method, image processing server and image processing system

Similar Documents

Publication Publication Date Title
Ma et al. Efficient and fast real-world noisy image denoising by combining pyramid neural network and two-pathway unscented Kalman filter
Sun et al. Postprocessing of low bit-rate block DCT coded images based on a fields of experts prior
CN108564597B (en) Video foreground object extraction method fusing Gaussian mixture model and H-S optical flow method
CN103093441B (en) Based on the non-local mean of transform domain and the image de-noising method of two-varaible model
CN108921800A (en) Non-local mean denoising method based on form adaptive search window
CN102289792A (en) Method and system for enhancing low-illumination video image
CN101916433B (en) Denoising method of strong noise pollution image on basis of partial differential equation
CN104103041B (en) Ultrasonoscopy mixed noise Adaptive Suppression method
CN105427257A (en) Image enhancement method and apparatus
CN104657951A (en) Multiplicative noise removal method for image
CN108648162A (en) A kind of gradient correlation TV factor graph picture denoising deblurring methods based on noise level
Shahdoosti et al. Combined ripplet and total variation image denoising methods using twin support vector machines
CN112862753A (en) Noise intensity estimation method and device and electronic equipment
CN102143303A (en) Image denoising method in transmission line intelligent monitoring system
CN101504769B (en) Self-adaptive noise intensity estimation method based on encoder frame work
CN107392879A (en) A kind of low-light (level) monitoring image Enhancement Method based on reference frame
CN104537624B (en) SAR image method for reducing speckle based on SSIM correction cluster rarefaction representations
CN104616259B (en) A kind of adaptive non-local mean image de-noising method of noise intensity
CN104616252B (en) Digital image enhancement method based on NSCT and PCNN
CN103077507A (en) Beta algorithm-based multiscale SAR (Synthetic Aperture Radar) image denoising method
CN110335196A (en) A kind of super-resolution image reconstruction method and system based on fractal decoding
CN102222321A (en) Blind reconstruction method for video sequence
Laksmi et al. Novel image enhancement technique using CLAHE and wavelet transforms
CN103595933A (en) Method for image noise reduction
CN103778615A (en) Multi-focus image fusion method based on region similarity

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
ASS Succession or assignment of patent right

Owner name: SHANGHAI JIULONG ELECTRIC POWER (GROUP) CO., LTD.

Free format text: FORMER OWNER: SHANGHAI ELECTRIC POWER TECHNOLOGY DEVELOPMENT CO., LTD.;SUZHOU AIJIAN PORCELAIN CO.,LTD.;SHANGHAI SOUTH POWER SUPPLY ENGINEERING CO., LTD.

Effective date: 20120717

C41 Transfer of patent application or patent right or utility model
TA01 Transfer of patent application right

Effective date of registration: 20120717

Address after: 200122 Shanghai City, Pudong New Area source deep road, No. 1122

Applicant after: Shanghai Electric Power Corporation

Co-applicant after: Shanghai Jiulong Electric Power (Group) Co., Ltd.

Address before: 200122 Shanghai City, Pudong New Area source deep road, No. 1122

Applicant before: Shanghai Electric Power Corporation

Co-applicant before: Shanghai Electric Power Live Working Technology Development Co., Ltd.

ASS Succession or assignment of patent right

Owner name: STATE ELECTRIC NET CROP. SHANGHAI JIULONG ELECTRIC

Free format text: FORMER OWNER: SHANGHAI JIULONG ELECTRIC POWER (GROUP) CO., LTD.

Effective date: 20121018

C41 Transfer of patent application or patent right or utility model
TA01 Transfer of patent application right

Effective date of registration: 20121018

Address after: 200122 Shanghai City, Pudong New Area source deep road, No. 1122

Applicant after: Shanghai Electric Power Corporation

Applicant after: State Grid Corporation of China

Applicant after: Shanghai Jiulong Electric Power (Group) Co., Ltd.

Address before: 200122 Shanghai City, Pudong New Area source deep road, No. 1122

Applicant before: Shanghai Electric Power Corporation

Applicant before: Shanghai Jiulong Electric Power (Group) Co., Ltd.

C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20110803