CN104484865A - Method for removing raindrops in video image - Google Patents


Info

Publication number
CN104484865A
Authority
CN
China
Prior art keywords
pixel
raindrop
pure
region
moving object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410855080.9A
Other languages
Chinese (zh)
Inventor
朱青松
李佳恒
王磊
Current Assignee
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN201410855080.9A priority Critical patent/CN104484865A/en
Publication of CN104484865A publication Critical patent/CN104484865A/en
Pending legal-status Critical Current


Abstract

The invention provides a method for removing raindrops from a video image. The method comprises the steps of (A) estimating the moving-object region in a predetermined frame of the video image by means of optical flow; (B) optimizing the estimated moving-object region by means of a color clustering algorithm to obtain an optimized moving-object region; (C) determining the raindrop regions in the predetermined frame; (D) removing the raindrops according to the overlap between the raindrop regions and the optimized moving-object region. With the method, the raindrops in the video image are removed by category based on the moving-object region and the raindrop regions in the video, so raindrops can be removed effectively even from video containing moving objects, and the accuracy of raindrop removal is improved. The method accordingly has wide applicability.

Description

Method for removing rain from a video image
Technical field
The present invention relates to the field of image processing, and more particularly to a method for removing rain from a video image.
Background art
Rain strongly affects the imaging of video: it blurs the image, occludes information, and reduces sharpness, which in turn degrades the performance of downstream processing such as image-based object detection, recognition, tracking, segmentation, and surveillance. Research on the characteristics of raindrops in video images has therefore attracted wide attention in the international academic community; removing rain from raindrop-contaminated video facilitates further processing of the images. Video rain removal has broad application prospects in fields such as modern military systems, traffic, and security surveillance, and has become a new research hotspot.
Since Starik et al. proposed a median-based rain-removal method in 2003, research on video rain removal has developed rapidly. Video rain-removal methods are no longer limited to the original simple median computation; many techniques, such as skewness computation, K-means clustering, Kalman filtering, dictionary learning with sparse coding, guided filtering, inter-frame luminance differences, the HSV color space, optical flow, and motion segmentation, have gradually been applied to the detection and removal of raindrops in video, improving the rain-removal effect.
For example, Garg et al. proposed using the inter-frame luminance difference caused by raindrops for an initial raindrop detection, screening further using the linearity of raindrop streaks and the consistency of their direction, and then removing the raindrop influence using the luminance of the corresponding pixels in the preceding and following frames; this detects and removes raindrops well as long as raindrops do not cover the same pixel in consecutive frames. Zhang et al. took into account the color change that raindrops cause at a pixel, improving the accuracy of raindrop detection and improving the performance of luminance-based rain removal on color images. Liu et al. applied both the luminance and the color influence of raindrops in a method that detects and removes raindrops using two frames. Tripathi et al. studied the probabilistic statistics of raindrop-induced luminance changes and used the symmetry of those changes to detect raindrops. Kang et al. used bilateral filtering to split a rainy image into a high-frequency part and a low-frequency part, processed the high-frequency part to obtain its rain-free component, and combined it with the low-frequency part to remove rain. Huang et al. used context constraints for image segmentation and context awareness for single-image rain removal, and on this basis proposed an improved algorithm that uses an over-complete dictionary to process the high-frequency part of video for rain removal.
These methods remove raindrops well from video of static scenes (video that contains no moving objects). When the video contains moving objects, however, the interference of the moving objects prevents existing video rain-removal methods from removing raindrops well.
The applicability of existing video rain-removal methods is therefore limited.
Summary of the invention
Exemplary embodiments of the present invention provide a method for removing rain from a video image, to solve the technical problem that the prior art cannot remove raindrops well from video containing moving objects.
According to an exemplary embodiment of the present invention, a method for removing rain from a video image comprises: (A) estimating the moving-object region in a predetermined frame of the video using optical flow; (B) optimizing the estimated moving-object region using a color clustering algorithm to obtain an optimized moving-object region; (C) determining the raindrop regions in the predetermined frame; (D) removing the raindrops according to the overlap between the raindrop regions and the optimized moving-object region.
Optionally, step (B) comprises: (B1) dividing the moving-object region into a predetermined number of subsets by hierarchical subtractive clustering according to the chromatic characteristics of the pixels in the region; (B2) performing fuzzy clustering based on the center point of each subset to obtain the optimized moving-object region.
Optionally, step (C) comprises: (C1) for each pixel in the predetermined frame, computing its gray-value variation, i.e., the difference between the gray value of the pixel and the gray value of the pixel at the same position in the previous frame; (C2) determining, from the gray-value variations of the pixels, all regions of the predetermined frame affected by raindrops; (C3) for each pixel in each raindrop-affected region, computing its color-component variation, i.e., the difference between the color-component values of the pixel and those of the pixel at the same position in the previous frame; (C4) for each raindrop-affected region, determining whether the region is a raindrop region based on the gray-value variation of each pixel in the region, the number of pixels the region contains, and the color-component variation of each pixel in the region.
Optionally, in step (C2), all pixels of the predetermined frame whose gray-value variation exceeds a first threshold are determined; these pixels form the raindrop-affected regions according to positional connectivity.
Optionally, in step (C4), a raindrop-affected region is determined to be a raindrop region if it satisfies the following conditions: the gray-value variation of every pixel in the region lies within a preset range, the number of pixels in the region exceeds a predetermined number, and the color-component variation of every pixel in the region is less than a second threshold.
Optionally, step (D) comprises: (D1) dividing each raindrop region into a pure raindrop part that does not overlap the optimized moving-object region and a non-pure raindrop part that does overlap it; (D2) for each pure raindrop pixel of the pure raindrop part, computing its luminance from the luminance of the pixels at the same position in a predetermined number of preceding frames and the same number of following frames; (D3) for each non-pure raindrop pixel of the non-pure raindrop part, computing its luminance from the luminance of its spatio-temporal neighborhood pixels, where the spatio-temporal neighborhood comprises the neighboring pixels adjacent to the non-pure raindrop pixel in the predetermined frame, together with the pixels at the same positions as the non-pure raindrop pixel and its neighbors in the previous and next frames.
Optionally, the predetermined frame is the N-th frame of the video, the predetermined quantity is 3, and N is a positive integer; in step (D2) the luminance of the pure raindrop pixel is computed by:
I(x, y, N) = Σ_{t=N−3}^{N+3} F_b(t)·I(x, y, t) / Σ_{t=N−3}^{N+3} F_b(t),
where (x, y, N) is the position of the pure raindrop pixel in the N-th frame, I(x, y, N) is the computed luminance of the pure raindrop pixel, (x, y, t) is the position of the pixel at the same location in frame t, I(x, y, t) is the luminance of that pixel, and F_b(t) is the weighting coefficient of I(x, y, t) for t ∈ [N−3, N+3], with F_b(t) = 0 when t = N; F_b(t) = 4 when t = N−1 or t = N+1; F_b(t) = 2 when t = N−2 or t = N+2; and F_b(t) = 1 when t = N−3 or t = N+3.
Optionally, the predetermined frame is the n-th frame of the video and n is a positive integer; in step (D3) the luminance of the non-pure raindrop pixel is computed by:
I_n(p, q) = Σ_{v_m(i,j)∈V} w_{p,q}(i, j)·I(i, j) / Σ_{v_m(i,j)∈V} w_{p,q}(i, j),
where (p, q) is the position of the non-pure raindrop pixel in the n-th frame, I_n(p, q) is its computed luminance, (i, j) is the position of any spatio-temporal neighborhood pixel of the non-pure raindrop pixel, v_m(i, j) is the position-coordinate vector of that neighborhood pixel, V is the set of coordinate vectors of all spatio-temporal neighborhood pixels, I(i, j) is the luminance of the neighborhood pixel, and w_{p,q}(i, j) is the diffusion coefficient of the neighborhood pixel relative to the non-pure raindrop pixel, in which a is a normalization coefficient, v_n(p, q) is the position-coordinate vector of the non-pure raindrop pixel, d(v_n(p, q), v_m(i, j)) = (v_n(p, q) − v_m(i, j))^T Ψ^{−1} (v_n(p, q) − v_m(i, j)), and Ψ is a diffusion tensor.
Going in the method for rain to video image according to an exemplary embodiment of the present invention, based on the moving object region in video image and raindrop region, the raindrop in video image can be removed taxonomically, thus can effectively remove the raindrop comprised in the video image of moving object, improve the precision of rain, there is applicability widely.
Brief description of the drawings
The above and other objects and features of the exemplary embodiments will become apparent from the following description taken in conjunction with the accompanying drawings, which illustrate the embodiments by way of example, in which:
Fig. 1 is a flowchart of a method for removing rain from a video image according to an exemplary embodiment of the present invention;
Fig. 2 is a flowchart of the step of obtaining the optimized moving-object region in the method;
Fig. 3 is a flowchart of the step of determining the raindrop regions in the method;
Fig. 4 is a flowchart of the raindrop-removal step in the method;
Fig. 5 illustrates an example of spatio-temporal neighborhood pixels according to an exemplary embodiment of the present invention;
Fig. 6 shows the effect of the method for removing rain from a video image according to an exemplary embodiment of the present invention.
Detailed description
Exemplary embodiments of the present invention are described more fully below with reference to the accompanying drawings, in which the exemplary embodiments are shown. The embodiments may, however, be embodied in many different forms and should not be construed as limited to those set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the exemplary embodiments to those skilled in the art.
The rain-removal method according to exemplary embodiments of the present invention may be implemented by corresponding equipment or by a computer program; for example, the method may be performed by dedicated equipment for image rain removal or by a specific program.
Fig. 1 is a flowchart of the method for removing rain from a video image according to an exemplary embodiment of the present invention.
Specifically, in step S100, the moving-object region in a predetermined frame of the video is estimated using optical flow.
Both raindrops and moving objects change the luminance of pixels in the frames of a video. When removing rain from a frame that contains moving objects, the interference that raindrops introduce into pixel luminance must be removed while the changes caused by moving objects are retained, so the moving-object region in the video must be determined first.
Optical flow estimates the moving-object region from the first derivatives of pixel luminance. Because object motion displaces pixels of equal luminance between two adjacent frames, the movement of an object can be viewed as the movement of equal-luminance pixels between those frames; the velocity of the object can be determined from the movement of the pixels, and the moving-object region estimated from it. Optical flow is commonly expressed by the constraint:
E_x·u + E_y·v + E_t = 0,
where E_x is the partial derivative of the luminance with respect to x, E_y the partial derivative with respect to y, and E_t the partial derivative with respect to time t, the luminance being a function of x, y, and t. Here the same rectangular coordinate system can be established in each frame of the video, with x and y denoting the horizontal and vertical coordinate axes. u is the velocity of a pixel in the frame along the x axis (i.e., the velocity of the moving object in the x direction), and v is the velocity along the y axis (i.e., the velocity of the moving object in the y direction). The formula above estimates which pixels of the predetermined frame are affected by moving objects, and from those pixels the moving-object region is estimated.
However, the region estimated by the formula above may still contain raindrops, while only the moving-object region is wanted here. Since the velocity of raindrops is usually greater than that of moving objects, the raindrops can be removed from the region obtained by optical flow by setting a velocity threshold.
As an example, the threshold can be set to the product of the mean luminance of the estimated moving region and a parameter related to the intensity of the rain. When the estimated velocity of a moving element is below the threshold, it is determined not to be a raindrop; when the estimated velocity exceeds the threshold, it is determined to be a raindrop and is removed from the estimated moving-object region, yielding an accurate moving-object region.
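As an illustration only, the velocity-threshold screening described above can be sketched in NumPy. The function name, the scalar rain parameter, and the toy flow field are assumptions for the sketch, not part of the patent; the flow components u and v would come from any optical-flow solver.

```python
import numpy as np

def motion_mask_from_flow(u, v, brightness, rain_param):
    """Keep moving pixels whose flow speed stays below the raindrop threshold.

    u, v       : per-pixel flow components from an optical-flow estimate
    brightness : luminance of the estimated moving region
    rain_param : parameter related to the intensity of the rain (assumed scalar)
    """
    speed = np.hypot(u, v)                       # per-pixel flow magnitude
    threshold = brightness.mean() * rain_param   # mean luminance x rain parameter
    # moving, but slower than raindrops -> retained as moving object
    return (speed > 0) & (speed < threshold)
```

Pixels whose speed exceeds the threshold are treated as raindrops and dropped from the moving-object region.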
In step S200, the estimated moving-object region is optimized using a color clustering algorithm to obtain the optimized moving-object region.
Specifically, optical flow can only detect the obvious color and luminance changes at object edges and cannot effectively detect luminance changes in an object's interior, which causes the so-called "aperture" problem. To address this, a color clustering algorithm is used here to optimize the moving-object region estimated by optical flow.
Fig. 2 is a flowchart of the step S200 of obtaining the optimized moving-object region.
As shown in Fig. 2, in step S210 the moving-object region is divided into a predetermined number of subsets by hierarchical subtractive clustering according to the chromatic characteristics of its pixels.
A color clustering algorithm first layers the image during computation to find cluster centers; when the estimated moving-object region contains very many pixels, this layering becomes computationally expensive and time-consuming. To accelerate the layering and shorten the computation time, the moving-object region is first divided by hierarchical subtractive clustering into a number of subsets of similar color.
In step S220, fuzzy clustering is performed based on the center point of each subset to obtain the optimized moving-object region.
Because the region has been divided into a predetermined number of subsets, the number of pixels involved in the fuzzy clustering is much smaller than in the undivided moving-object region, which significantly raises the speed of the fuzzy clustering.
Referring again to Fig. 1, in step S300 the raindrop regions in the predetermined frame are determined.
Here, the regions affected by raindrops are obtained by the frame-difference method, and the raindrop regions are determined by applying corresponding thresholds to the pixels of those regions.
The luminance of a pixel affected by a raindrop is commonly described by the equation:
I_0(x_0, y_0) = ∫_0^τ E_r(x_0, y_0) dt + ∫_τ^T E_b(x_0, y_0) dt,
where (x_0, y_0) is the position of the raindrop-affected pixel in the predetermined frame, I_0(x_0, y_0) is the luminance of the pixel, τ is the time the raindrop takes to pass through the pixel as it falls, T is the exposure time, E_r(x_0, y_0) is the average irradiance while the raindrop passes through the pixel, E_b(x_0, y_0) is the average irradiance when the pixel is not affected by a raindrop, and t is time.
Since E_r(x_0, y_0) is greater than E_b(x_0, y_0), the luminance of a pixel increases when it is affected by a raindrop. Pixels affected by raindrops can therefore be distinguished from unaffected pixels by the change in luminance of the pixel at the same position between consecutive frames.
Fig. 3 is a flowchart of the step of determining the raindrop regions.
Referring to Fig. 3, in step S310, for each pixel of the predetermined frame the gray-value variation is computed, i.e., the difference between the gray value of the pixel and the gray value of the pixel at the same position in the previous frame.
A larger gray value indicates a larger luminance and a smaller gray value a smaller luminance; since raindrops increase the luminance of the pixels they affect, they also increase their gray values. The change in luminance of each pixel is therefore judged by computing the difference between its gray value in the predetermined frame and the gray value of the pixel at the same position in the previous frame (i.e., the gray-value variation).
As an example, for an 8-bit grayscale image the gray values can be normalized to the range [0, 1] (the usual range of 8-bit gray values is [0, 255]; for convenience of calculation they are divided by 255 so that the processed range is [0, 1]), and the gray-value variation is computed on this normalized range.
In step S320, all raindrop-affected regions of the predetermined frame are determined from the gray-value variations of its pixels.
Because the luminance, and hence the gray value, of a pixel increases when it is affected by a raindrop, a first threshold on the gray-value variation is used to judge whether each pixel of the predetermined frame is affected by raindrops, from which the raindrop-affected regions of the frame can be determined. As an example, all pixels of the predetermined frame whose gray-value variation exceeds the first threshold can be determined; these pixels form the raindrop-affected regions according to positional connectivity. All pixels of the frame whose gray-value variation exceeds the first threshold are the pixels of the predetermined frame affected by raindrops.
Such as, in the examples described above, first threshold can be set to
It should be understood that the first threshold is not limited to the above value; other thresholds can be set according to actual conditions.
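Steps S310 and S320 can be sketched as follows: a frame difference thresholded by the first threshold, with the surviving pixels grouped into regions by positional (4-)connectivity. The function name, the toy frames, and the stack-based labeling are assumptions for the sketch.

```python
import numpy as np

def raindrop_candidates(prev_gray, cur_gray, t1):
    """Return pixel lists of the connected regions whose gray value rose by more than t1."""
    mask = (cur_gray - prev_gray) > t1          # gray-value variation vs. first threshold
    labels = np.zeros(mask.shape, dtype=int)
    regions = []
    for sy, sx in zip(*np.nonzero(mask)):       # seed each unlabeled candidate pixel
        if labels[sy, sx]:
            continue
        lab = len(regions) + 1
        stack, pix = [(sy, sx)], []
        labels[sy, sx] = lab
        while stack:                            # flood-fill with 4-connectivity
            y, x = stack.pop()
            pix.append((y, x))
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = lab
                    stack.append((ny, nx))
        regions.append(pix)
    return regions
```

Each returned pixel list is one candidate raindrop-affected region, to be screened further in step S340.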
In step S330, for each pixel in each raindrop-affected region the color-component variation is computed, i.e., the differences (ΔR, ΔG, and ΔB) between the color components (red component R, green component G, and blue component B) of the pixel and those of the pixel at the same position in the previous frame.
For example, for an image in which each color component (R, G, B) has 256 luminance levels, the range of each component value can be normalized to [0, 1] (the usual range of the component values is [0, 255]; for convenience of calculation they are divided by 255 so that the processed range is [0, 1]), and the color-component variation is computed on this normalized range.
In step S340, for each raindrop-affected region, whether the region is a raindrop region is determined based on the gray-value variation of each of its pixels, the number of pixels it contains, and the color-component variation of each of its pixels.
Step S320 determines the raindrop-affected regions only from the gray-value differences of pixels, but gray-value differences can also be caused by factors other than raindrops. To further improve accuracy, whether a region is a raindrop region is determined using both the optical characteristics of raindrops (the gray-value variation of each pixel and the number of pixels in the region) and their chromatic characteristics (the color-component variation of each pixel in the region).
As an example, a raindrop-affected region is determined to be a raindrop region when it satisfies the following conditions: the gray-value variation of every pixel in the region lies within a preset range, the number of pixels in the region exceeds a predetermined number, and the color-component variation of every pixel in the region is less than a second threshold.
For example, in the above example, the preset range of the gray-value variation of each pixel in a raindrop-affected region and the second threshold on the color-component variation of each pixel can be set correspondingly; for an image with a resolution of 240 × 320, the predetermined number of pixels in a region can be a value in the interval (30, 50).
It should be understood that the preset range, the predetermined number, and the second threshold are not limited to the above; other values can be set according to raindrop size, camera focal length, and other conditions.
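The three screening conditions of step S340 can be sketched as a single predicate applied to each candidate region. The function name and argument layout are assumptions; gray_changes holds one gray-value variation per pixel and color_changes one (ΔR, ΔG, ΔB) triple per pixel.

```python
def is_raindrop_region(gray_changes, color_changes, lo, hi, min_pixels, t2):
    """Apply the three raindrop-region conditions of step S340 to one candidate region."""
    in_range = all(lo <= g <= hi for g in gray_changes)        # gray variation in preset range
    big_enough = len(gray_changes) > min_pixels                # enough pixels in the region
    small_color = all(max(dc) < t2 for dc in color_changes)    # color variation below 2nd threshold
    return in_range and big_enough and small_color
```

A region passes only if all three conditions hold; any region failing one of them is discarded as a non-raindrop disturbance.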
Referring again to Fig. 1, in step S400 the raindrops are removed according to the overlap between the raindrop regions and the optimized moving-object region.
From the overlap between a raindrop region and the optimized moving region it can be determined whether a pixel of the raindrop region is a pure raindrop pixel (a pixel affected only by raindrops) or a non-pure raindrop pixel (a pixel affected both by raindrops and by a moving object); rain can then be removed from pure and non-pure raindrop pixels by different methods, achieving a better rain-removal effect.
Fig. 4 is a flowchart of the raindrop-removal step.
Referring to Fig. 4, in step S410 each raindrop region is divided into a pure raindrop part that does not overlap the optimized moving-object region and a non-pure raindrop part that overlaps it.
As an example, when a pixel of a raindrop region is also present in the optimized moving-object region, the raindrop region overlaps the optimized moving-object region at that pixel; in the same way, all pixels of the raindrop region that are simultaneously present in the optimized moving-object region can be determined. The pixels present only in the raindrop region form the pure raindrop part, and the pixels present in both regions form the non-pure raindrop part.
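The partition of step S410 amounts to set operations on pixel coordinates. A minimal sketch (names assumed):

```python
def split_raindrop_region(raindrop_pixels, motion_pixels):
    """Split a raindrop region into its pure part (rain only) and
    non-pure part (rain overlapping the optimized moving-object region)."""
    rain, motion = set(raindrop_pixels), set(motion_pixels)
    non_pure = rain & motion    # pixels in both regions
    pure = rain - motion        # pixels only in the raindrop region
    return pure, non_pure
```

The two parts are then restored by the different methods of steps S420 and S430.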
In step S420, for each pure raindrop pixel of the pure raindrop part, the luminance of the pixel is computed from the luminance of the pixels at the same position in a predetermined number of preceding frames and the same number of following frames.
Because a pure raindrop pixel lies outside the optimized moving-object region, it is a pixel not affected by moving objects. Moreover, since raindrops fall very fast, the pixels at the same position in the frames before and after the predetermined frame are not affected by the raindrop, so the luminance of the pure raindrop pixel can be computed from the luminance of the pixels at the same position in the predetermined number of preceding and following frames.
As an example, the predetermined frame may be the N-th frame of the video, the predetermined quantity may be 3, and N is a positive integer. The luminance of the pure raindrop pixel is computed by:
I(x, y, N) = Σ_{t=N−3}^{N+3} F_b(t)·I(x, y, t) / Σ_{t=N−3}^{N+3} F_b(t),
where (x, y, N) is the position of the pure raindrop pixel in the N-th frame, I(x, y, N) is the computed luminance of the pure raindrop pixel, (x, y, t) is the position of the pixel at the same location in frame t, I(x, y, t) is the luminance of that pixel, and F_b(t) is the weighting coefficient of I(x, y, t) for t ∈ [N−3, N+3], with F_b(t) = 0 when t = N; F_b(t) = 4 when t = N−1 or t = N+1; F_b(t) = 2 when t = N−2 or t = N+2; and F_b(t) = 1 when t = N−3 or t = N+3.
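The weighted temporal average above, with the stated weights F_b(t) = (1, 2, 4, 0, 4, 2, 1) for t = N−3 … N+3, can be sketched directly (the function name and the frame-list representation are assumptions):

```python
import numpy as np

# Weights F_b(t) for t = N-3 .. N+3; the rainy frame itself (t = N) gets weight 0.
FB = np.array([1.0, 2.0, 4.0, 0.0, 4.0, 2.0, 1.0])

def restore_pure_pixel(frames, x, y, n):
    """Weighted temporal average at (x, y) over frames n-3 .. n+3.

    frames : list of 2-D luminance arrays; frames[n-3] .. frames[n+3] must exist.
    """
    vals = np.array([frames[t][y, x] for t in range(n - 3, n + 4)])
    return float((FB * vals).sum() / FB.sum())
```

Because the current (rain-affected) frame receives weight 0, the restored value depends only on the six surrounding rain-free observations, with the nearest frames weighted most heavily.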
In step S430, for each non-pure raindrop pixel composing the non-pure raindrop part, the brightness value of the non-pure raindrop pixel is calculated based on the brightness values of the spatiotemporal neighborhood pixels of the non-pure raindrop pixel, wherein the spatiotemporal neighborhood pixels comprise the neighborhood pixels adjacent to the non-pure raindrop pixel in the predetermined frame image, together with the pixels located at the same positions as the non-pure raindrop pixel and its neighborhood pixels in the previous frame image and the next frame image.
Here, because a non-pure raindrop pixel of the non-pure raindrop part is affected both by raindrops and by a moving object, its brightness value is strongly correlated with the brightness values of the pixels in its spatiotemporal neighborhood. Rain is therefore removed from the non-pure raindrop pixel using the neighborhood pixels adjacent to it in the predetermined frame image and the pixels located at the same positions as the non-pure raindrop pixel and its neighborhood pixels in the previous frame image and the next frame image, that is, the spatiotemporal neighborhood pixels.
Fig. 5 illustrates an example of the spatiotemporal neighborhood pixels according to an exemplary embodiment of the present invention.
As shown in Fig. 5, the pixel indicated by the arrow is a non-pure raindrop pixel in the predetermined frame image. Its spatiotemporal neighborhood pixels comprise the 8 neighborhood pixels adjacent to it in the predetermined frame image, together with the pixels located at the same positions as the non-pure raindrop pixel and its neighborhood pixels in the previous frame image and the next frame image (18 pixels in total), giving 26 spatiotemporal neighborhood pixels altogether.
Referring again to Fig. 4, exemplarily, the predetermined frame image is the n-th frame image in the video image, where n is a positive integer, and the brightness value of the non-pure raindrop pixel is calculated by the following equation:
I_n(p, q) = \frac{\sum_{v_m(i,j) \in V} w_{p,q}(i, j)\, I(i, j)}{\sum_{v_m(i,j) \in V} w_{p,q}(i, j)},
wherein (p, q) represents the position coordinates of the non-pure raindrop pixel in the n-th frame image, I_n(p, q) represents the calculated brightness value of the non-pure raindrop pixel, (i, j) represents the position coordinates of any spatiotemporal neighborhood pixel of the non-pure raindrop pixel, v_m(i, j) represents the position coordinate vector of that spatiotemporal neighborhood pixel, V represents the set of the coordinate vectors of all spatiotemporal neighborhood pixels of the non-pure raindrop pixel (the 26 pixels shown in Fig. 5), I(i, j) represents the brightness value of the spatiotemporal neighborhood pixel, and w_{p,q}(i, j) represents the diffusion coefficient of the spatiotemporal neighborhood pixel relative to the non-pure raindrop pixel, wherein a is a normalization coefficient, v_n(p, q) represents the position coordinate vector of the non-pure raindrop pixel, and d(v_n(p, q), v_m(i, j)) = (v_n(p, q) − v_m(i, j))^T Ψ^{−1} (v_n(p, q) − v_m(i, j)), wherein Ψ is the diffusion tensor; here, the diffusion tensor Ψ can be set according to actual needs.
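A minimal sketch of this spatiotemporal diffusion-weighted average follows. The description specifies the distance d(·, ·) and names the normalization coefficient a and the diffusion tensor Ψ, but does not spell out the functional form of w_{p,q}(i, j); the exponential form w = a·exp(−d) used below is therefore an assumption, as are the function names and the (row, column, frame) coordinate convention:

```python
import numpy as np

def mahalanobis_sq(vn, vm, psi_inv):
    """d(v_n, v_m) = (v_n - v_m)^T Ψ^{-1} (v_n - v_m), as in the description."""
    diff = vn - vm
    return float(diff @ psi_inv @ diff)

def restore_non_pure_raindrop_pixel(frames, n, p, q, psi=None, a=1.0):
    """Diffusion-weighted average over the 26 spatiotemporal neighbors.

    `frames` is a sequence of 2-D brightness arrays with frames n-1, n, n+1
    available, and (p, q) must not lie on the image border. The weight
    w = a * exp(-d) is an assumed form; note that a cancels in the
    normalized quotient, so its value does not affect the result.
    """
    if psi is None:
        psi = np.eye(3)  # assumed default diffusion tensor (identity)
    psi_inv = np.linalg.inv(psi)
    vn = np.array([p, q, n], dtype=float)
    num, den = 0.0, 0.0
    for t in (n - 1, n, n + 1):
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                if t == n and di == 0 and dj == 0:
                    continue  # skip the pixel itself: 26 neighbors total
                i, j = p + di, q + dj
                vm = np.array([i, j, t], dtype=float)
                w = a * np.exp(-mahalanobis_sq(vn, vm, psi_inv))
                num += w * frames[t][i, j]
                den += w
    return num / den
```

Choosing Ψ controls how quickly the influence of a neighbor decays with spatial versus temporal distance, which is presumably why the description leaves it to be set according to actual needs.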
Fig. 6 illustrates the effect of the method for removing rain from a video image according to an exemplary embodiment of the present invention.
As shown in Fig. 6, (a) in Fig. 6 is the original predetermined frame image in the video image, in which a large number of raindrops are clearly visible.
(b) in Fig. 6 is the result of applying the method of the present invention for removing rain from a video image to the predetermined frame image. As can be seen from (b) in Fig. 6, the edge information of the image is well preserved, and the rain removal effect is highly satisfactory.
In the method for removing rain from a video image according to an exemplary embodiment of the present invention, the raindrops in the video image are removed in a classified manner based on the moving object region and the raindrop region in the video image. Raindrops in a video image containing moving objects can thus be removed effectively, which improves the precision of rain removal and gives the method wide applicability.
It should be noted that the embodiments of the present invention described above are merely exemplary, and the present invention is not limited thereto. Those skilled in the art will understand that these embodiments may be changed without departing from the principles and spirit of the present invention, the scope of which is defined by the claims and their equivalents.

Claims (8)

1. A method for removing rain from a video image, comprising:
(A) estimating a moving object region in a predetermined frame image of the video image using an optical flow method;
(B) optimizing the estimated moving object region using a color clustering algorithm to obtain an optimized moving object region;
(C) determining a raindrop region in the predetermined frame image;
(D) removing raindrops according to the overlap between the raindrop region and the optimized moving object region.
2. the method for claim 1, wherein step (B) comprising:
(B1) according to the chromatic characteristic of pixel in described moving object region, be the subset of predetermined quantity by hierarchical subtractive clustering by described moving object Region dividing;
(B2) central point based on each subset carries out fuzzy clustering, obtains and optimizes moving object region.
3. the method for claim 1, wherein step (C) comprising:
(C1) for each pixel in described predetermined frame image, calculate the gray-value variation amount of described pixel, wherein, described gray-value variation amount indicates in the gray-scale value of described pixel and previous frame image and the difference of described pixel between the gray-scale value of the pixel of same position;
(C2) according to the gray-value variation amount of each pixel in described predetermined frame image, in described predetermined frame image, all regions affected by raindrop are determined;
(C3) for each pixel in each region affected by raindrop, calculate the color component value variable quantity of described pixel, wherein, described color component value variable quantity indicates in the color component value of described pixel and previous frame image and the difference of described pixel between the color component value of the pixel of same position;
(C4) for each region affected by raindrop, based on the color component value variable quantity of each pixel in the number of pixels comprised in the gray-value variation amount of pixel each in described region, described region and described region, determine whether described region belongs to raindrop region.
4. The method as claimed in claim 3, wherein, in step (C2), all pixels whose gray-value variation is greater than a first threshold are identified in the predetermined frame image, and these pixels form the regions affected by raindrops according to positional connectivity.
5. The method as claimed in claim 3, wherein, in step (C4), a region affected by raindrops that satisfies the following conditions is determined to be a raindrop region: the gray-value variation of every pixel in the region affected by raindrops lies within a preset range, the number of pixels contained in the region affected by raindrops is greater than a predetermined number, and the color-component-value variation of every pixel in the region affected by raindrops is less than a second threshold.
6. The method as claimed in claim 3, wherein step (D) comprises:
(D1) dividing each raindrop region into a pure raindrop part that does not overlap the optimized moving object region and a non-pure raindrop part that overlaps the optimized moving object region;
(D2) for each pure raindrop pixel composing the pure raindrop part, calculating the brightness value of the pure raindrop pixel based on the brightness values of the pixels located at the same position as the pure raindrop pixel in a predetermined number of preceding images and a predetermined number of subsequent images;
(D3) for each non-pure raindrop pixel composing the non-pure raindrop part, calculating the brightness value of the non-pure raindrop pixel based on the brightness values of the spatiotemporal neighborhood pixels of the non-pure raindrop pixel, wherein the spatiotemporal neighborhood pixels comprise the neighborhood pixels adjacent to the non-pure raindrop pixel in the predetermined frame image, together with the pixels located at the same positions as the non-pure raindrop pixel and its neighborhood pixels in the previous frame image and the next frame image.
7. The method as claimed in claim 6, wherein the predetermined frame image is the Nth frame image in the video image, the predetermined quantity is 3, and N is a positive integer,
wherein, in step (D2), the brightness value of the pure raindrop pixel is calculated by the following equation:
I(x, y, N) = \frac{\sum_{t=N-3}^{N+3} F_b(t)\, I(x, y, t)}{\sum_{t=N-3}^{N+3} F_b(t)},
wherein (x, y, N) represents the position coordinates of the pure raindrop pixel in the Nth frame image, I(x, y, N) represents the calculated brightness value of the pure raindrop pixel, (x, y, t) represents the position coordinates of the pixel located at the same position as the pure raindrop pixel in the t-th frame image, I(x, y, t) represents the brightness value of that pixel, and F_b(t) is the weighting coefficient of I(x, y, t), with t ∈ [N−3, N+3]. When t = N, F_b(t) = 0; when t = N−1 or t = N+1, F_b(t) = 4; when t = N−2 or t = N+2, F_b(t) = 2; when t = N−3 or t = N+3, F_b(t) = 1.
8. The method as claimed in claim 6, wherein the predetermined frame image is the n-th frame image in the video image, and n is a positive integer,
wherein, in step (D3), the brightness value of the non-pure raindrop pixel is calculated by the following equation:
I_n(p, q) = \frac{\sum_{v_m(i,j) \in V} w_{p,q}(i, j)\, I(i, j)}{\sum_{v_m(i,j) \in V} w_{p,q}(i, j)},
wherein (p, q) represents the position coordinates of the non-pure raindrop pixel in the n-th frame image, I_n(p, q) represents the calculated brightness value of the non-pure raindrop pixel, (i, j) represents the position coordinates of any spatiotemporal neighborhood pixel of the non-pure raindrop pixel, v_m(i, j) represents the position coordinate vector of the spatiotemporal neighborhood pixel, V represents the set of the coordinate vectors of all spatiotemporal neighborhood pixels of the non-pure raindrop pixel, I(i, j) represents the brightness value of the spatiotemporal neighborhood pixel, and w_{p,q}(i, j) represents the diffusion coefficient of the spatiotemporal neighborhood pixel relative to the non-pure raindrop pixel, wherein a is a normalization coefficient, v_n(p, q) represents the position coordinate vector of the non-pure raindrop pixel, and d(v_n(p, q), v_m(i, j)) = (v_n(p, q) − v_m(i, j))^T Ψ^{−1} (v_n(p, q) − v_m(i, j)), wherein Ψ is the diffusion tensor.
CN201410855080.9A 2014-12-31 2014-12-31 Method for removing raindrops in video image Pending CN104484865A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410855080.9A CN104484865A (en) 2014-12-31 2014-12-31 Method for removing raindrops in video image

Publications (1)

Publication Number Publication Date
CN104484865A true CN104484865A (en) 2015-04-01

Family

ID=52759405

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410855080.9A Pending CN104484865A (en) 2014-12-31 2014-12-31 Method for removing raindrops in video image

Country Status (1)

Country Link
CN (1) CN104484865A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105046670A (en) * 2015-08-28 2015-11-11 中国科学院深圳先进技术研究院 Image rain removal method and system
CN105139358A (en) * 2015-08-28 2015-12-09 中国科学院深圳先进技术研究院 Video raindrop removing method and system based on combination of morphology and fuzzy C clustering
CN105205791A (en) * 2015-08-28 2015-12-30 中国科学院深圳先进技术研究院 Gaussian-mixture-model-based video raindrop removing method and system
CN109145696A (en) * 2017-06-28 2019-01-04 安徽清新互联信息科技有限公司 A kind of Falls Among Old People detection method and system based on deep learning
CN112686828A (en) * 2021-03-16 2021-04-20 腾讯科技(深圳)有限公司 Video denoising method, device, equipment and storage medium

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
ABHISHEK KUMAR TRIPATHI et al.: "Meteorological approach for detection and removal of rain from videos", IET Computer Vision *
MINMIN SHEN et al.: "A fast algorithm for rain detection and removal from videos", 2011 IEEE International Conference on Multimedia and Expo *
X. XUE: "Motion robust rain detection and removal from videos", 2012 IEEE 14th International Workshop on Multimedia Signal Processing (MMSP) *
LIU PENG et al.: "A fast analysis method for video contaminated by raindrops" (一种受雨滴污染视频的快速分析方法), Acta Automatica Sinica *
LIN KAIYAN et al.: "Fast fuzzy C-means clustering method for color image segmentation" (快速模糊C均值聚类彩色图像分割方法), Journal of Image and Graphics *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105046670A (en) * 2015-08-28 2015-11-11 中国科学院深圳先进技术研究院 Image rain removal method and system
CN105139358A (en) * 2015-08-28 2015-12-09 中国科学院深圳先进技术研究院 Video raindrop removing method and system based on combination of morphology and fuzzy C clustering
CN105205791A (en) * 2015-08-28 2015-12-30 中国科学院深圳先进技术研究院 Gaussian-mixture-model-based video raindrop removing method and system
CN109145696A (en) * 2017-06-28 2019-01-04 安徽清新互联信息科技有限公司 A kind of Falls Among Old People detection method and system based on deep learning
CN109145696B (en) * 2017-06-28 2021-04-09 安徽清新互联信息科技有限公司 Old people falling detection method and system based on deep learning
CN112686828A (en) * 2021-03-16 2021-04-20 腾讯科技(深圳)有限公司 Video denoising method, device, equipment and storage medium
CN112686828B (en) * 2021-03-16 2021-07-02 腾讯科技(深圳)有限公司 Video denoising method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
JP5865552B2 (en) Video processing apparatus and method for removing haze contained in moving picture
CN102768760B (en) Quick image dehazing method on basis of image textures
CN104484865A (en) Method for removing raindrops in video image
US9798951B2 (en) Apparatus, method, and processor for measuring change in distance between a camera and an object
CN108682039B (en) Binocular stereo vision measuring method
CN110189339A (en) The active profile of depth map auxiliary scratches drawing method and system
CN104766065B (en) Robustness foreground detection method based on various visual angles study
CN106327488B (en) Self-adaptive foreground detection method and detection device thereof
CN109478329B (en) Image processing method and device
WO2016165064A1 (en) Robust foreground detection method based on multi-view learning
CN103729828B (en) video rain removing method
CN104272347A (en) Image processing apparatus for removing haze contained in still image and method thereof
CN108734109B (en) Visual target tracking method and system for image sequence
CN105631898A (en) Infrared motion object detection method based on spatio-temporal saliency fusion
CN104599256A (en) Single-image based image rain streak eliminating method and system
CN102111530A (en) Device and method for movable object detection
CN103985106A (en) Equipment and method used for multi-frame fusion of strong noise images
CN103700109A (en) Synthetic aperture radar (SAR) image change detection method based on multi-objective evolutionary algorithm based on decomposition (MOEA/D) and fuzzy clustering
CN105898111A (en) Video defogging method based on spectral clustering
Wang et al. Haze removal algorithm based on single-images with chromatic properties
CN105046670A (en) Image rain removal method and system
CN106056545A (en) Image rain removing method and image rain removing system
CN105551014A (en) Image sequence change detection method based on belief propagation algorithm with time-space joint information
CN104463812A (en) Method for repairing video image disturbed by raindrops in shooting process
Li et al. Fast visual tracking using motion saliency in video

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20150401