CN102298781B - Motion shadow detection method based on color and gradient characteristics - Google Patents


Info

Publication number
CN102298781B
CN102298781B (application CN201110233551.9A)
Authority
CN
China
Prior art keywords
image
point
pixel
gradient
background
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201110233551.9A
Other languages
Chinese (zh)
Other versions
CN102298781A (en)
Inventor
许雪梅
石震宇
孙克辉
刘雄飞
李岸
倪兰
尹林子
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CHANGSHA ZHONGYI ELECTRONIC TECHNOLOGY Co Ltd
Original Assignee
CHANGSHA ZHONGYI ELECTRONIC TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CHANGSHA ZHONGYI ELECTRONIC TECHNOLOGY Co Ltd filed Critical CHANGSHA ZHONGYI ELECTRONIC TECHNOLOGY Co Ltd
Priority to CN201110233551.9A priority Critical patent/CN102298781B/en
Publication of CN102298781A publication Critical patent/CN102298781A/en
Application granted granted Critical
Publication of CN102298781B publication Critical patent/CN102298781B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical



Abstract

The invention provides a motion shadow detection method based on color and gradient characteristics. The method comprises: collecting several frames of initial images and establishing a background model by a mean-shift method to obtain a background image; taking the currently obtained video-stream image as the input image, differencing it against the background image to detect the moving object and determine the moving object area; performing shadow detection on the moving object area with a color-feature-based method and a gradient-feature-based method respectively; performing an AND operation and morphological processing on the shadow areas obtained by the two methods to determine the final shadow area; and updating the background model in real time with the current input image. The method yields a background model of high accuracy; by combining the color feature and the texture (gradient) feature to distinguish moving objects from shadows, it reduces the interference of noise and other factors and improves shadow detection accuracy.

Description

Moving shadow detection method based on color and gradient features
Technical field
The present invention relates to a moving shadow detection method based on color and gradient features.
Technical background
Segmenting moving objects in video sequences is a basic task of many machine vision and video analysis applications, such as video monitoring, human body detection and tracking, multimedia indexing, and video coding. Accurate moving object segmentation also greatly improves the performance of target tracking, recognition, classification, and motion analysis. However, in moving object detection and segmentation, light falling on an object makes the moving target cast a corresponding shadow. The shadow moves along with the moving target, and if it is not handled during target detection, it is likely to be detected as part of the moving object, causing considerable interference with later tracking and recognition.
At present, shadow detection methods divide mainly into model-based and feature-based ones. Model-based methods build models from the geometric properties of the scene, the illumination, and the target, such as the three-dimensional structure of the scene or a model description of the foreground object; but they apply only to certain special scenes, so their applicability is limited, and in addition they are computationally complex and unsuited to real-time application. Feature-based methods, on the other hand, identify shadows using the color, gradient, or texture features of the image. These methods generally first detect the moving foreground region and then further divide it into object and shadow. But existing methods usually consider only a single feature distinguishing the moving target from the shadow, so detection accuracy is low. For example, one method adopts the HSV color space, reasoning that a shadow lowers the brightness of the covered area while hue and saturation change only within limits; its drawback is sensitivity to illumination change, and if part of the object shares similar color characteristics with the shadow, the object is mistaken for shadow. Other methods model the joint probability density of multiple features and then apply a threshold, but the computational load is large and time-consuming, degrading the real-time performance of shadow detection and making them hard to apply in real-time settings.
Summary of the invention
The object of the invention is to propose a moving shadow detection method based on color and gradient features that adapts to various complex scenes and improves the accuracy and real-time performance of shadow detection.
The technical solution of the present invention is as follows:
A moving shadow detection method based on color and gradient features, characterized by comprising the following steps:
Step 1: collect t frames of initial images, establish a background model, and obtain a background image (where t ranges from 50 to 300 frames; t = 100 is preferred);
Step 2: take the currently obtained video-stream image as the input image, difference it against the background image, detect the moving target, and determine the motion target area;
Step 3: perform shadow detection on the motion target area with a color-feature-based method and a gradient-feature-based method respectively;
Step 4: apply an AND operation and morphological processing to the shadow regions obtained by the two methods, and determine the final shadow region;
Step 5: update the background model with the current input image, and return to step 2.
Said step 1 is:
Use the t frames preceding the current input image as reference images for estimating the background, and establish the background model from these t frames.
Let x_1, x_2, …, x_t be the pixel values at one and the same position across the t frames; treating them as a group of observation samples of the background image at that point, the mean-shift estimate of the background value at that point is:
$$m_h(x) = \frac{\sum_{i=1}^{n} G\!\left(\frac{x_i - x}{h}\right) w(x_i)\, x_i}{\sum_{i=1}^{n} G\!\left(\frac{x_i - x}{h}\right) w(x_i)}$$
Given an initial value of x (any value in the sample set), a kernel function G(x), and an allowable error ε, the mean-shift method cyclically performs the three steps below until the termination condition is met. Here h is the size of the mean-shift search window (range 10 to 50; usually 20), n is the number of sample points, equal to t, and w(x_i) is a nonnegative weight assigned to sample point x_i (usually 1).
First step: compute m_h(x);
Second step: if ||m_h(x) − x|| < ε, end the loop;
Third step: assign m_h(x) to x and return to the first step.
The resulting m_h(x) is the solution of the background model, i.e. the best estimate of the background image at this point.
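The three-step loop above can be sketched for the samples of a single pixel. The kernel G is assumed Gaussian here (an assumption: the source does not show G's formula), with h = 20 and all weights w(x_i) = 1 as the text suggests.

```python
import numpy as np

def mean_shift_background(samples, h=20.0, eps=1e-3, max_iter=100):
    """Mean-shift estimate of the background value at one pixel from its
    t frame samples. Kernel G assumed Gaussian; weights w(x_i) default 1."""
    samples = np.asarray(samples, dtype=float)
    w = np.ones(len(samples))                 # w(x_i): nonnegative, usually 1
    x = samples[0]                            # initial value: any sample
    for _ in range(max_iter):
        g = np.exp(-0.5 * ((samples - x) / h) ** 2)   # G((x_i - x) / h)
        m = (g * w * samples).sum() / (g * w).sum()   # m_h(x)
        if abs(m - x) < eps:                  # ||m_h(x) - x|| < eps: converged
            return m
        x = m
    return x
```

With most samples clustered around one intensity, the iteration settles on that cluster and a transient outlier (a passing object) barely pulls the estimate, which is why the method tolerates moving targets during initialization.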
Said step 2 is: among the R, G, B component differences of corresponding pixels in the current input image and the background, select the maximum, labeled DV(x, y). When DV(x, y) is greater than a threshold TL, pixel (x, y) is judged a motion pixel and labeled g(x, y) = 1; otherwise g(x, y) = 0. For an image of size M*N, TL is computed as follows:
$$TL = \min\!\left(\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N} I_R(x_i, y_j),\; \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N} I_G(x_i, y_j),\; \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N} I_B(x_i, y_j)\right) \qquad (2)$$
where I_R(x_i, y_j), I_G(x_i, y_j), and I_B(x_i, y_j) are the R, G, and B channel values of pixel (x_i, y_j);
Then apply a morphological closing operation to the binary image g(x, y) to filter it and eliminate the isolated points caused by noise.
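The difference test and adaptive threshold of step 2 can be sketched as below (NumPy assumed; the morphological closing that follows is omitted from this sketch):

```python
import numpy as np

def motion_mask(frame, background):
    """Mark motion pixels: DV(x, y) is the largest per-channel RGB
    difference; TL is the minimum of the three channel means of the
    input image, per formula (2)."""
    frame = frame.astype(float)
    background = background.astype(float)
    dv = np.abs(frame - background).max(axis=2)   # DV(x, y)
    tl = frame.mean(axis=(0, 1)).min()            # TL: min over R, G, B means
    return (dv > tl).astype(np.uint8)             # g(x, y)
```

Tying TL to the channel means makes the threshold scale with overall scene brightness rather than being a fixed constant.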
Said step 3 is:
Step a: use the Sobel gradient operator to perform edge detection on the RGB three channels of the moving region of the original image, and take points whose RGB gradient values have a consistent direction of change in all three channels as candidate shadow points; denote this point set S_C. Clearly S_C contains not only shadowed points but possibly also points of darker objects.
Step b: convert the image to the C1C2C3 color space, then perform edge detection on the converted image with the Sobel gradient operator. Since the C1C2C3 color space is insensitive to illumination, the resulting point set S_M contains only object points.
The C1C2C3 color space is defined as
$$C_1 = \arctan\!\left(\frac{R}{\max(G, B)}\right), \quad C_2 = \arctan\!\left(\frac{G}{\max(R, B)}\right), \quad C_3 = \arctan\!\left(\frac{B}{\max(R, G)}\right)$$
C1C2C3 is a color invariant: it describes a function of the color composition of a pixel, and this invariant is unaffected by factors such as viewing angle, shadow, surface orientation, and illumination conditions.
Step c: combining steps a and b above, the shadow point set is S_S = S_C − S_M; the contour formed by all the points of S_S is denoted S_MS1;
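The C1C2C3 conversion of step b can be sketched as follows (NumPy assumed; the small eps guard against division by zero is a sketch-level addition not in the source):

```python
import numpy as np

def c1c2c3(rgb):
    """Convert an RGB image to the C1C2C3 invariant space:
    C1 = arctan(R / max(G, B)), and cyclically for C2, C3.
    eps guards against division by zero (an added safeguard)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-6
    c1 = np.arctan(r / np.maximum(np.maximum(g, b), eps))
    c2 = np.arctan(g / np.maximum(np.maximum(r, b), eps))
    c3 = np.arctan(b / np.maximum(np.maximum(r, g), eps))
    return np.stack([c1, c2, c3], axis=-1)
```

Because each component depends only on channel ratios, multiplying the whole image by a brightness factor (roughly what a shadow does) leaves C1C2C3 unchanged, which is why edges found in this space belong to objects rather than shadows.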
The gradient-feature-based method constructs normalized gradient vector maps and compares the gradient vector maps of foreground and background, thereby distinguishing the moving target from the shadow it casts. The concrete algorithm is as follows:
Step A: compute the gradient vector of the background image. Along the horizontal direction of the image, convolve the background image in turn with the Sobel operators of the four directions 0°, 45°, 90°, and 135°, and use the formula below to compute the four scales g_{f,1}(x, y), g_{f,2}(x, y), g_{f,3}(x, y), and g_{f,4}(x, y) of each pixel (x, y), obtaining the scale vector g_f(x, y) = {g_{f,1}, g_{f,2}, g_{f,3}, g_{f,4} | x, y}. g_f(x, y) reflects the trend of the gradient change of the background image at this pixel and is the form of expression of the background gradient:
g_{f,i}(x, y) = f(x, y) * H_i(x, y),  i = 1, 2, 3, 4
where f(x, y) is the pixel value at coordinate (x, y), and H_i(x, y) are the Sobel operators in the four directions 0°, 45°, 90°, and 135°, respectively:
$$\begin{pmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{pmatrix}, \quad \begin{pmatrix} 0 & 1 & 2 \\ -1 & 0 & 1 \\ -2 & -1 & 0 \end{pmatrix}, \quad \begin{pmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{pmatrix}, \quad \begin{pmatrix} 2 & 1 & 0 \\ 1 & 0 & -1 \\ 0 & -1 & -2 \end{pmatrix};$$
Step B: normalize the background image gradient vectors. According to the formula
$$g_i^* = \frac{g_i}{\sum_{j=1}^{4} |g_j|}, \quad i = 1, 2, 3, 4$$
normalize the gradient vector of each background pixel, obtaining the normalized gradient vector
$$g_f^*(x, y) = \{g_{f,1}^*, g_{f,2}^*, g_{f,3}^*, g_{f,4}^* \mid x, y\},$$
and store the normalized gradient vectors as the background vault (compared below with the gradient vectors of the current image). By the principle of gradient-direction invariance, the normalized gradient vector is an intrinsic attribute of the background and, at visible illumination levels, does not change with illumination.
Step C: compute the gradient vector of the current frame image and normalize it. Along the horizontal direction of the current frame, convolve the sequence image in turn with the Sobel operators of the four directions, compute the four scales g_{c,1}(x, y), g_{c,2}(x, y), g_{c,3}(x, y), and g_{c,4}(x, y) of each pixel (x, y), and obtain the scale vector g_c(x, y) = {g_{c,1}, g_{c,2}, g_{c,3}, g_{c,4} | x, y}, which reflects the trend of the gradient change of the current frame at this pixel. Normalize with the formula
$$g_i^* = \frac{g_i}{\sum_{j=1}^{4} |g_j|}, \quad i = 1, 2, 3, 4$$
to obtain the vector
$$g_c^*(x, y) = \{g_{c,1}^*, g_{c,2}^*, g_{c,3}^*, g_{c,4}^* \mid x, y\};$$
Step D: compute the difference d between the normalized gradient vector of each current-frame pixel and that of the corresponding pixel in the background vault:
$$d = \left\| g_f^*(x, y) - g_c^*(x, y) \right\| = \sum_{i=1}^{4} \left| g_{c,i}^* - g_{f,i}^* \right|$$
Adopt the segmentation template
$$F(x, y) = \begin{cases} 0, & d > \Delta\varepsilon \\ 1, & d \le \Delta\varepsilon \end{cases}$$
with threshold Δε = 0.3 to split the motion shadow from the motion target area; in the segmentation template, the points denoted 1 are points of the motion shadow region;
Step E: scan the entire image, repeating step D until the comparison of the normalized gradient vectors of the entire image with those of the background image is complete, thereby segmenting the motion shadow from the motion target area; the set of motion shadow points is labeled S_MS2.
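Steps A–E can be sketched as follows, with the four directional kernels from the text and a plain 'valid'-mode convolution written out so the sketch stays dependency-free (in practice scipy.ndimage.convolve or cv2.filter2D would be used):

```python
import numpy as np

# The four directional Sobel operators (0°, 45°, 90°, 135°) from the text.
H = [np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float),
     np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]], float),
     np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], float),
     np.array([[2, 1, 0], [1, 0, -1], [0, -1, -2]], float)]

def conv2_valid(img, k):
    """Plain 'valid'-mode 2-D convolution (kernel flipped)."""
    kf = k[::-1, ::-1]
    rows, cols = img.shape[0] - 2, img.shape[1] - 2
    out = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            out[i, j] = (img[i:i + 3, j:j + 3] * kf).sum()
    return out

def normalized_gradient(img):
    """Scale vector (g_1..g_4) per pixel, normalized by sum of |g_j|."""
    g = np.stack([conv2_valid(img, h) for h in H], axis=-1)
    norm = np.abs(g).sum(axis=-1, keepdims=True)
    return g / np.maximum(norm, 1e-9)   # guard: flat patches have zero norm

def shadow_template(frame, background, delta_eps=0.3):
    """F(x, y) = 1 where d <= 0.3: gradient matches background (shadow)."""
    d = np.abs(normalized_gradient(frame)
               - normalized_gradient(background)).sum(axis=-1)
    return (d <= delta_eps).astype(np.uint8)
```

A cast shadow scales intensities roughly multiplicatively, which scales all four gradient components equally; the normalization cancels the factor, so d stays near 0 under shadow but grows where the texture itself changes.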
The morphological processing in said step 4 is:
To further improve the moving shadow detection result, perform an AND operation and an OR operation on the obtained regions S_MS1 and S_MS2, obtaining an AND map and an OR map. Then use connected-component analysis to obtain each connected region, called a blob. In the AND map, obtain the centroid of each blob, then judge which blob in the OR map contains that centroid; if one does, retain that blob in the OR map. Delete the blobs in the OR map that contain no centroid from the AND map. Finally, apply one more morphological closing operation to the OR-map result to obtain the final moving target.
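The AND/OR blob-fusion rule can be sketched with a plain BFS connected-component labelling (scipy.ndimage.label would serve equally); the final closing operation is omitted here:

```python
import numpy as np
from collections import deque

def label_blobs(mask):
    """4-connected component labelling by BFS; returns label map and count."""
    labels = np.zeros(mask.shape, dtype=int)
    count = 0
    for si in range(mask.shape[0]):
        for sj in range(mask.shape[1]):
            if mask[si, sj] and not labels[si, sj]:
                count += 1
                labels[si, sj] = count
                q = deque([(si, sj)])
                while q:
                    i, j = q.popleft()
                    for ni, nj in ((i-1, j), (i+1, j), (i, j-1), (i, j+1)):
                        if (0 <= ni < mask.shape[0] and 0 <= nj < mask.shape[1]
                                and mask[ni, nj] and not labels[ni, nj]):
                            labels[ni, nj] = count
                            q.append((ni, nj))
    return labels, count

def fuse_shadow_maps(s1, s2):
    """Keep an OR-map blob only if it contains the centroid of some
    AND-map blob; other OR-map blobs are deleted (the rule above)."""
    and_lab, n_and = label_blobs(s1 & s2)
    or_lab, _ = label_blobs(s1 | s2)
    keep = set()
    for k in range(1, n_and + 1):
        ys, xs = np.nonzero(and_lab == k)
        ci, cj = int(round(ys.mean())), int(round(xs.mean()))  # blob centroid
        if or_lab[ci, cj]:
            keep.add(or_lab[ci, cj])
    return np.isin(or_lab, list(keep)).astype(np.uint8)
```

The AND map is conservative (both detectors agree) while the OR map is generous; anchoring OR blobs to AND centroids keeps full shadow extent without admitting regions only one detector fired on.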
In the described moving shadow detection method based on color and gradient features, shadow detection is performed only on the pixels of the moving region of the current image, to improve computation speed.
Beneficial effect:
Compared with the prior art, the beneficial effects of the present invention are as follows:
First, a background model based on the mean-shift algorithm is proposed; it can extract the background image even with interference from moving targets and light changes in the scene, and the background model accuracy is high.
Second, shadow detection on the moving region is performed with a color-invariance-based method and a gradient-based method respectively, and the detected shadow then undergoes morphological processing, reducing the interference of noise and other factors and thereby improving the correctness of shadow detection.
Third, in post-processing, the two shadow regions undergo AND and OR operations respectively, and the shadow connected regions after the OR operation are selected as the final detected shadow by means of the centroids of the shadow connected regions after the AND operation, further reducing the false detection rate.
Accompanying drawing explanation
Fig. 1 is the flowchart of the method of the present invention;
Fig. 2 is the flowchart of background modeling;
Fig. 3 is the shadow region detection flow based on the color feature;
Fig. 4 is the shadow region detection flow based on the gradient feature.
Embodiment
The present invention is described in further detail below in conjunction with the accompanying drawings and a specific implementation process, but the protection scope of the present invention is not limited to the following embodiment.
Embodiment 1:
As shown in Fig. 1, the moving shadow detection method based on color and gradient features comprises the following steps:
1. Establish the background model
Obtain the color feature information of the pixels in the N frames preceding the current input image, and begin detecting targets from the (N+1)-th input frame: each current-frame pixel serves as an estimation point; according to the mean-shift algorithm, obtain the probability that the estimation point belongs to the background model, and update the background model with the current-frame point as a new sample point, as shown in Fig. 2. This divides into the following steps:
1. Treat the pixel values of each pixel (x, y) over the first N frames as a sample space, divided into q classes
C = {{c_i, w_i}}, i = 1, …, q  (1)
where c_i is the feature value of the pixel and w_i is the weight of each class.
2. Compute the weight of each class
$$w_i = \frac{l_i}{m}, \quad i = 1, \ldots, q \qquad (2)$$
where l_i is the number of sample points in each class and m is the total number of sample points.
3. Use the mean-shift algorithm to obtain a reliable background model estimate
$$\hat{C} = c_{i^*}, \quad i^* = \arg\max_i \{w_i\} \qquad (3)$$
where $\hat{C} = c_{i^*}$ is the estimated value of the most reliable background model pixel.
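Equations (1)–(3) amount to picking the heaviest sample class at each pixel. A minimal sketch, with fixed-width intensity bins standing in for the classes (the bin width of 8 is an assumption, not from the source):

```python
import numpy as np

def most_reliable_value(samples, bin_width=8):
    """Group a pixel's samples into classes (fixed-width intensity bins
    here, an assumed clustering) and return the mean of the class with
    the largest weight w_i = l_i / m, i.e. c_{i*}, i* = argmax w_i."""
    samples = np.asarray(samples, dtype=float)
    bins = (samples // bin_width).astype(int)       # class index per sample
    classes, counts = np.unique(bins, return_counts=True)
    w = counts / len(samples)                       # w_i = l_i / m
    i_star = int(np.argmax(w))                      # heaviest class
    return samples[bins == classes[i_star]].mean()  # class representative
```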
The mean-shift method mentioned above obtains the reliable background model estimate as
$$m_h(x) = \frac{\sum_{i=1}^{n} G\!\left(\frac{x_i - x}{h}\right) w(x_i)\, x_i}{\sum_{i=1}^{n} G\!\left(\frac{x_i - x}{h}\right) w(x_i)} \qquad (4)$$
Given an initial point x, a kernel function G(x), and an allowable error ε, the mean-shift algorithm cyclically executes the three steps below until the termination condition is satisfied:
First step: compute m_h(x);
Second step: if ||m_h(x) − x|| < ε, end the loop;
Third step: assign m_h(x) to x and return to the first step.
The resulting m_h(x) is the best estimate of the background pixel value.
2. Background difference
Among the R, G, B component differences of corresponding pixels in the current input image and the background, select the maximum, labeled DV(x, y) as in formula (5). When DV(x, y) is greater than the threshold TL, pixel (x, y) is judged a motion pixel and labeled g(x, y) = 1; otherwise g(x, y) = 0. Then filter the binary image g(x, y) with morphological methods, removing small isolated areas and non-connected regions.
$$DV(x, y) = \max_{c \in \{R, G, B\}} \left| I(x, y).c - B(x, y).c \right| \qquad (5)$$
For an image of size M*N, TL is defined as
$$TL = \min_{c \in \{R, G, B\}} \frac{1}{MN} \sum_{j=1}^{N} \sum_{i=1}^{M} I(x_i, y_j).c \qquad (6)$$
3. Shadow detection based on color and gradient features
A color invariant is unaffected by factors such as viewing angle, shadow, surface orientation, and illumination conditions. Color invariants can be obtained by transforming the RGB three-channel values of a pixel; commonly used ones are the normalized rgb color space and the C1C2C3 color space, where the C1C2C3 color space is:
$$C_1 = \arctan\!\left(\frac{R}{\max(G, B)}\right), \quad C_2 = \arctan\!\left(\frac{G}{\max(R, B)}\right), \quad C_3 = \arctan\!\left(\frac{B}{\max(R, G)}\right) \qquad (7)$$
The concrete method flow is shown in Fig. 3.
Step 1: according to the feature that a shadow region has lower brightness than its surrounding area, mark low-brightness regions as candidate shadow regions. Specifically, perform edge detection on the moving region of the original image; points whose RGB three-channel gradients have a consistent direction of descent are regarded as candidate shadow points. Denote this point set S_C; clearly S_C contains not only shadowed points but possibly also points of darker objects.
Step 2: first convert the image to the C1C2C3 color space, then perform edge detection on the converted image; the procedure is similar to step 1. Since the C1C2C3 color space is insensitive to illumination, the resulting point set S_M contains only object points.
Step 3: combining the two steps above, the shadow point set is S_S = S_C − S_M.
The gradient-feature-based shadow detection method constructs normalized gradient vector maps and compares the gradient vector maps of foreground and background, thereby distinguishing the moving target from the shadow it casts, as shown in Fig. 4. The concrete algorithm is as follows:
Step 1: compute the gradient vector of the background image. Along the horizontal direction of the image, convolve the background image in turn with the Sobel operators of the four directions, and use formula (8) to compute the four scales g_{f,1}(x, y), g_{f,2}(x, y), g_{f,3}(x, y), and g_{f,4}(x, y) of each pixel (x, y), obtaining the scale vector g_f(x, y) = {g_{f,1}, g_{f,2}, g_{f,3}, g_{f,4} | x, y}, which reflects the trend of the gradient change of the background image at this pixel and is also the form of expression of the road-surface background texture.
g_i(x, y) = f(x, y) * H_i(x, y),  i = 1, 2, 3, 4  (8)
where H_i(x, y) are the Sobel operators in the four directions.
Step 2: normalize the background image gradient vectors using the formula
$$g_i^* = \frac{g_i}{\sum_{j=1}^{4} |g_j|}, \quad i = 1, 2, 3, 4 \qquad (9)$$
to normalize the gradient vector of each background pixel, obtaining the normalized gradient vector
$$g_f^*(x, y) = \{g_{f,1}^*, g_{f,2}^*, g_{f,3}^*, g_{f,4}^* \mid x, y\},$$
and store the normalized gradient vectors as the background dictionary. By the principle of gradient-direction invariance, the normalized gradient vector is an intrinsic attribute of the background and, at visible illumination levels, does not change with illumination.
Step 3: compute the gradient vector of the current frame image and normalize it. As in step 1, along the horizontal direction of the image, convolve the sequence image in turn with the Sobel operators of the four directions, compute the four scales g_{c,1}(x, y), g_{c,2}(x, y), g_{c,3}(x, y), and g_{c,4}(x, y) of each pixel (x, y), and obtain the scale vector g_c(x, y) = {g_{c,1}, g_{c,2}, g_{c,3}, g_{c,4} | x, y}, which reflects the trend of the gradient change of the current frame at this pixel. Normalize it to obtain the vector
$$g_c^*(x, y) = \{g_{c,1}^*, g_{c,2}^*, g_{c,3}^*, g_{c,4}^* \mid x, y\}.$$
Step 4: according to formula (10), compare the normalized gradient vector of each pixel in the current frame with the normalized gradient vector of the corresponding pixel in the background dictionary:
$$d = \left\| g_f^*(x, y) - g_c^*(x, y) \right\| = \sum_{i=1}^{4} \left| g_{c,i}^* - g_{f,i}^* \right| \qquad (10)$$
Let the segmentation template for the moving target be F(x, y). Owing to the external environment and electromagnetic interference, d is seldom exactly zero even for matching pixels; therefore, to reduce the impact of this interference, a threshold Δε is set to decide between background and moving target:
$$F(x, y) = \begin{cases} 0, & d > \Delta\varepsilon \\ 1, & d \le \Delta\varepsilon \end{cases} \qquad (11)$$
Step 5: scan the entire image, repeating step 4 until the comparison of the entire image with the background normalized gradient vectors is complete. In the segmentation template, regions denoted 1 are road-surface background and shadow regions, and regions denoted 0 are motion target regions; the motion shadow is thus segmented out of the motion target area.
Note: in the described moving shadow detection method based on color and gradient features, shadow detection is performed only on the pixels of the moving region of the foreground image, to improve computation speed.
4. Morphological processing of the shadow region
Perform an AND operation and an OR operation on the shadow detection results obtained from the color-feature-based and gradient-based methods respectively, obtaining an AND map and an OR map. Then use connected-component analysis to obtain each connected region, called a blob. In the AND map, obtain the centroid of each blob, then judge which blob in the OR map contains that centroid; if one does, retain that blob in the OR map. Finally delete the blobs in the OR map that contain no centroid from the AND map, apply one more round of morphological processing and connected-component analysis to the OR-map result, and determine the final moving target.

Claims (3)

1. A moving shadow detection method based on color and gradient features, characterized by comprising the following steps:
Step 1: collect t frames of initial images, establish a background model, and obtain a background image;
Step 2: take the currently obtained video-stream image as the input image, difference it against the background image, detect the moving target, and determine the motion target area;
Step 3: perform shadow detection on the motion target area with a color-feature-based method and a gradient-feature-based method respectively;
Step 4: apply an AND operation and an OR operation, with morphological processing, to the shadow regions obtained by the two methods, and determine the final shadow region;
Step 5: judge whether to end; if not, update the background image with the current input image and return to step 2; if so, stop;
Said step 1 is:
Use the t frames preceding the current input image as reference images for estimating the background, and establish the background model from these t frames.
Let x_1, x_2, …, x_t be the pixel values at one and the same position across the t frames; treating them as a group of observation samples of the background image at that point, the mean-shift estimate of the background value at that point is:
$$m_h(x) = \frac{\sum_{i=1}^{n} G\!\left(\frac{x_i - x}{h}\right) w(x_i)\, x_i}{\sum_{i=1}^{n} G\!\left(\frac{x_i - x}{h}\right) w(x_i)},$$
Given an initial value of x (any value in the sample set), a kernel function G(x), and an allowable error ε, the mean-shift method cyclically performs the three steps below until the termination condition is met. Here h is the size of the mean-shift search window, n is the number of sample points, equal to t, and w(x_i) is a nonnegative weight assigned to sample point x_i;
First step: compute m_h(x);
Second step: if ||m_h(x) − x|| < ε, end the loop;
Third step: assign m_h(x) to x and return to the first step.
The resulting m_h(x) is the solution of the background model, i.e. the best estimate of the background image at this point;
The color-feature-based detection method in said step 3 is:
Step a: use the Sobel gradient operator to perform edge detection on the RGB three channels of the moving region of the input image, and take points whose RGB gradient values have a consistent direction of change in all three channels as candidate shadow points; denote this point set S_C. Clearly S_C contains not only shadowed points but possibly also points of darker objects.
Step b: convert the image to the C1C2C3 color space, then perform edge detection on the converted image with the Sobel gradient operator. Since the C1C2C3 color space is insensitive to illumination, the resulting point set S_M contains only object points.
The C1C2C3 color space is defined as
$$C_1 = \arctan\!\left(\frac{R}{\max(G, B)}\right), \quad C_2 = \arctan\!\left(\frac{G}{\max(R, B)}\right), \quad C_3 = \arctan\!\left(\frac{B}{\max(R, G)}\right)$$
Step c: combining steps a and b above, the shadow point set is S_S = S_C − S_M; the contour formed by all the points of S_S is denoted S_MS1;
The gradient-feature-based method constructs normalized gradient vector maps and compares the gradient vector maps of foreground and background, thereby distinguishing the moving target from the shadow it casts; the concrete algorithm is as follows:
Step A: compute the gradient vector of the background image. Along the horizontal direction of the image, convolve the background image in turn with the Sobel operators of the four directions 0°, 45°, 90°, and 135°, and use the formula below to compute the four scales g_{f,1}(x, y), g_{f,2}(x, y), g_{f,3}(x, y), and g_{f,4}(x, y) of each pixel (x, y), obtaining the scale vector g_f(x, y) = {g_{f,1}, g_{f,2}, g_{f,3}, g_{f,4} | x, y}, which reflects the trend of the gradient change of the background image at this pixel and is the form of expression of the background gradient:
g_{f,i}(x, y) = f(x, y) * H_i(x, y),  i = 1, 2, 3, 4,
where f(x, y) is the pixel value at coordinate (x, y), and H_i(x, y) are the Sobel operators in the four directions 0°, 45°, 90°, and 135°, respectively:
$$\begin{pmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{pmatrix}, \quad \begin{pmatrix} 0 & 1 & 2 \\ -1 & 0 & 1 \\ -2 & -1 & 0 \end{pmatrix}, \quad \begin{pmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{pmatrix}, \quad \begin{pmatrix} 2 & 1 & 0 \\ 1 & 0 & -1 \\ 0 & -1 & -2 \end{pmatrix};$$
Step B: normalize the background image gradient vectors. According to the formula
$$g_i^* = \frac{g_i}{\sum_{j=1}^{4} |g_j|}, \quad i = 1, 2, 3, 4$$
normalize the gradient vector of each background pixel, obtaining the normalized gradient vector
$$g_f^*(x, y) = \{g_{f,1}^*, g_{f,2}^*, g_{f,3}^*, g_{f,4}^* \mid x, y\},$$
and store the normalized gradient vectors as the background vault. By the principle of gradient-direction invariance, the normalized gradient vector is an intrinsic attribute of the background and, at visible illumination levels, does not change with illumination;
Step C: compute the gradient vector of the current frame image and normalize it. Along the horizontal direction of the current frame, convolve the sequence image in turn with the Sobel operators of the four directions 0°, 45°, 90°, and 135°, compute the four scales g_{c,1}(x, y), g_{c,2}(x, y), g_{c,3}(x, y), and g_{c,4}(x, y) of each pixel (x, y), and obtain the scale vector g_c(x, y) = {g_{c,1}, g_{c,2}, g_{c,3}, g_{c,4} | x, y}, which reflects the trend of the gradient change of the current frame at this pixel. Normalize with the formula
$$g_i^* = \frac{g_i}{\sum_{j=1}^{4} |g_j|}, \quad i = 1, 2, 3, 4$$
to obtain the vector
$$g_c^*(x, y) = \{g_{c,1}^*, g_{c,2}^*, g_{c,3}^*, g_{c,4}^* \mid x, y\};$$
Step D: compute the difference d between the normalized gradient vector of each current-frame pixel and that of the corresponding pixel in the background vault:
$$d = \left\| g_f^*(x, y) - g_c^*(x, y) \right\| = \sum_{i=1}^{4} \left| g_{c,i}^* - g_{f,i}^* \right|$$
Adopt the segmentation template
$$F(x, y) = \begin{cases} 0, & d > \Delta\varepsilon \\ 1, & d \le \Delta\varepsilon \end{cases}$$
with threshold Δε = 0.3 to split the motion shadow from the motion target area; in the segmentation template, the points denoted 1 are points of the motion shadow region;
Step E: scan the current frame image, repeating step D until the comparison of the normalized gradient vectors of the entire current frame with those of the background image is complete, thereby segmenting the motion shadow from the motion target area; the set of motion shadow points is labeled S_MS2.
2. The moving shadow detection method based on color and gradient features according to claim 1, characterized in that said step 2 is: among the R, G, B component differences of corresponding pixels in the current input image and the background, select the maximum, labeled DV(x, y); when DV(x, y) is greater than the threshold TL, pixel (x, y) is judged a motion pixel and labeled g(x, y) = 1; otherwise g(x, y) = 0. For an image of size M*N, TL is computed as
$$TL = \min\!\left(\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N} I_R(x_i, y_j),\; \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N} I_G(x_i, y_j),\; \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N} I_B(x_i, y_j)\right) \qquad (2)$$
where I_R(x_i, y_j), I_G(x_i, y_j), and I_B(x_i, y_j) are the R, G, and B channel values of pixel (x_i, y_j).
Then apply a morphological closing operation to the binary image g(x, y) to filter it and eliminate the isolated points caused by noise.
3. The moving shadow detection method based on color and gradient features according to claim 1, characterized in that the morphological processing in said step 4 is:
To further improve the moving shadow detection result, perform an AND operation and an OR operation on the obtained regions S_MS1 and S_MS2, obtaining an AND map and an OR map; then use connected-component analysis to obtain each connected region, called a blob; in the AND map, obtain the centroid of each blob, then judge which blob in the OR map contains that centroid, and if one does, retain that blob in the OR map; finally delete the blobs in the OR map that contain no centroid from the AND map, then apply one more morphological closing operation to the OR-map result to obtain the final moving target.
CN201110233551.9A 2011-08-16 2011-08-16 Motion shadow detection method based on color and gradient characteristics Active CN102298781B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110233551.9A CN102298781B (en) 2011-08-16 2011-08-16 Motion shadow detection method based on color and gradient characteristics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110233551.9A CN102298781B (en) 2011-08-16 2011-08-16 Motion shadow detection method based on color and gradient characteristics

Publications (2)

Publication Number Publication Date
CN102298781A CN102298781A (en) 2011-12-28
CN102298781B true CN102298781B (en) 2014-06-25

Family

ID=45359177

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110233551.9A Active CN102298781B (en) 2011-08-16 2011-08-16 Motion shadow detection method based on color and gradient characteristics

Country Status (1)

Country Link
CN (1) CN102298781B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11783017B2 (en) 2017-08-09 2023-10-10 Jumio Corporation Authentication using facial image comparison

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014028769A2 (en) 2012-08-15 2014-02-20 Jumio Inc. Image processing for credit validation
CN103714552B (en) * 2012-09-29 2016-08-24 浙江大华技术股份有限公司 Motion shadow removing method and device and intelligent video analysis system
CN103793921B (en) * 2012-10-29 2017-02-22 浙江大华技术股份有限公司 Moving object extraction method and moving object extraction device
CN103886577A (en) * 2013-11-26 2014-06-25 天津思博科科技发展有限公司 Shadow processor
CN103839232B (en) * 2014-01-17 2016-09-07 河海大学 A kind of pedestrian's cast shadow suppressing method based on agglomerate model
CN104599263B (en) * 2014-12-23 2017-08-15 安科智慧城市技术(中国)有限公司 A kind of method and device of image detection
CN105261021B (en) * 2015-10-19 2019-03-08 浙江宇视科技有限公司 Remove the method and device of foreground detection result shade
CN105513053B (en) * 2015-11-26 2017-12-22 河海大学 One kind is used for background modeling method in video analysis
CN106056069A (en) * 2016-05-27 2016-10-26 刘文萍 Unmanned aerial vehicle image analysis-based forest land resource asset evaluation method and evaluation system
CN106157318B (en) * 2016-07-26 2018-10-16 电子科技大学 Monitor video background image modeling method
CN106651930B (en) * 2016-09-29 2019-09-10 重庆邮电大学 A kind of multi-level manifold learning medical image color aware method
CN106651908B (en) * 2016-10-13 2020-03-31 北京科技大学 Multi-moving-target tracking method
CN107146210A (en) * 2017-05-05 2017-09-08 南京大学 A kind of detection based on image procossing removes shadow method
CN107220949A (en) * 2017-05-27 2017-09-29 安徽大学 The self adaptive elimination method of moving vehicle shade in highway monitoring video
CN109587466B (en) * 2017-09-29 2020-02-21 华为技术有限公司 Method and apparatus for color shading correction
CN108520259B (en) * 2018-04-13 2021-05-25 国光电器股份有限公司 Foreground target extraction method, device, equipment and storage medium
CN110858281B (en) * 2018-08-22 2022-10-04 浙江宇视科技有限公司 Image processing method, image processing device, electronic eye and storage medium
CN113298845A (en) * 2018-10-15 2021-08-24 华为技术有限公司 Image processing method, device and equipment
CN113628153A (en) * 2020-04-22 2021-11-09 北京京东乾石科技有限公司 Shadow region detection method and device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040028137A1 (en) * 2002-06-19 2004-02-12 Jeremy Wyn-Harris Motion detection camera
CN101324927B (en) * 2008-07-18 2011-06-29 北京中星微电子有限公司 Method and apparatus for detecting shadows
CN101373515B (en) * 2008-10-20 2010-08-18 东软集团股份有限公司 Method and system for detecting road area
CN101909206A (en) * 2010-08-02 2010-12-08 复旦大学 Video-based intelligent flight vehicle tracking system

Also Published As

Publication number Publication date
CN102298781A (en) 2011-12-28

Similar Documents

Publication Publication Date Title
CN102298781B (en) Motion shadow detection method based on color and gradient characteristics
CN109325960B (en) Infrared cloud chart cyclone analysis method and analysis system
Rakibe et al. Background subtraction algorithm based human motion detection
CN104392468B (en) Based on the moving target detecting method for improving visual background extraction
CN105528794A (en) Moving object detection method based on Gaussian mixture model and superpixel segmentation
CN105139015B (en) A kind of remote sensing images Clean water withdraw method
CN102722891A (en) Method for detecting image significance
CN109086724B (en) Accelerated human face detection method and storage medium
CN102521616B (en) Pedestrian detection method on basis of sparse representation
Timofte et al. Combining traffic sign detection with 3D tracking towards better driver assistance
CN104036523A (en) Improved mean shift target tracking method based on surf features
CN110569782A (en) Target detection method based on deep learning
CN106204594A (en) A kind of direction detection method of dispersivity moving object based on video image
CN105160649A (en) Multi-target tracking method and system based on kernel function unsupervised clustering
CN110197494B (en) Pantograph contact point real-time detection algorithm based on monocular infrared image
CN105741319B (en) Improvement visual background extracting method based on blindly more new strategy and foreground model
CN103020992A (en) Video image significance detection method based on dynamic color association
CN105493141A (en) Unstructured road boundary detection
CN110874592A (en) Forest fire smoke image detection method based on total bounded variation
CN110175556B (en) Remote sensing image cloud detection method based on Sobel operator
Campos et al. Discrimination of abandoned and stolen object based on active contours
Van den Bergh et al. Depth SEEDS: Recovering incomplete depth data using superpixels
CN112164093A (en) Automatic person tracking method based on edge features and related filtering
CN103577804A (en) Abnormal human behavior identification method based on SIFT flow and hidden conditional random fields
CN102509308A (en) Motion segmentation method based on mixtures-of-dynamic-textures-based spatiotemporal saliency detection

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant