CN104899881B - Method for detecting moving-vehicle shadows in a video image - Google Patents

Method for detecting moving-vehicle shadows in a video image Download PDF

Info

Publication number
CN104899881B
CN104899881B CN201510282190.5A
Authority
CN
China
Prior art keywords
image
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201510282190.5A
Other languages
Chinese (zh)
Other versions
CN104899881A (en)
Inventor
贺科学
李树涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan University
Changsha University of Science and Technology
Original Assignee
Hunan University
Changsha University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan University, Changsha University of Science and Technology filed Critical Hunan University
Priority to CN201510282190.5A priority Critical patent/CN104899881B/en
Publication of CN104899881A publication Critical patent/CN104899881A/en
Application granted granted Critical
Publication of CN104899881B publication Critical patent/CN104899881B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle

Abstract

The invention discloses a method for detecting moving-vehicle shadows in video images, which eliminates the shadows of moving targets from the foreground image and improves the accuracy of target detection and tracking. The moving-shadow detection method provided by the invention comprises: constructing a feature image that fuses luminance, chrominance, edge-gradient and other information; after segmenting the image, taking the region with the largest chrominance difference as the search seed region for the vehicle body; iteratively searching the surrounding adjacent regions and absorbing the region with the largest feature value; after the iteration stopping criterion is met, examining the set of remaining sub-regions: if it is empty, the shadow of the moving target is negligible; if it is non-empty, the remaining sub-regions are taken as shadow candidate regions, the shadow sub-regions are searched from each candidate, and all shadow sub-regions found are finally combined into the complete shadow region. The method automatically discriminates whether a shadow exists, fuses multiple features more reasonably, reduces manual intervention, achieves a high shadow-detection rate, and generalizes well.

Description

Method for detecting moving-vehicle shadows in a video image
Technical field
The invention belongs to the field of digital image processing, and more particularly relates to a method for detecting the shadows of moving vehicles in video images.
Background technology
A shadow is a dark region formed where an object blocks a light source. By its relation to the object, a shadow can be divided into self-shadow and cast shadow. A self-shadow is the part of the object itself that is not illuminated; in video analysis, the self-shadow of a moving target belongs to the target. A cast shadow is the shadow the object projects into the scene after blocking the light source, such as the shadows of trees, vehicles and pedestrians on the road surface; it is part of the video scene rather than of the moving target. Cast shadows in video can further be divided into static shadows and moving shadows. Static shadows are gradually absorbed into the background during background modeling. Moving shadows are the dark regions projected into the scene by moving foreground targets that block the light, and they move together with the targets.
Because the moving shadow a vehicle casts on the road surface shares very similar motion characteristics with the vehicle itself, it is often mistaken for part of the foreground moving object. If such moving shadows are not eliminated, they distort the shape of the foreground target and cause multiple moving targets to merge, which severely degrades subsequent image-processing operations such as vehicle detection and vehicle tracking. Detecting moving-vehicle shadows is therefore an essential step and has long been an active research topic.
Current shadow-detection algorithms can be divided into model-based methods, methods based on shadow attribute features, and machine-learning-based methods.
(1) Model-based methods: these mainly model the video scene, including modeling the light source and the projection direction. In light-source modeling, one approach assumes pure white light as the only source, with shadows caused by linear attenuation of the light; another assumes the joint effect of direct light (such as sunlight) and diffuse light, with shadows formed by nonlinear attenuation. For example, Nadimi and Bhanu proposed a dichromatic model that accounts for the influence of the two light sources on the shadow region. For projection-direction modeling, Liu Zhifang et al. proposed a cast-shadow direction model, and Yuan Jiwei and Shi Zhongke further developed eight vehicle/shadow models on this basis. Model-based methods require prior information such as illumination conditions, moving targets and scene layout, and the resulting models adapt poorly to complex backgrounds, abrupt illumination changes and other challenging scenes.
(2) Methods based on shadow attribute features: these mainly identify the shadow region using cues such as luminance, gradient, color and texture. It is generally assumed that luminance drops in the shadow region (especially the penumbra) while gradient, chrominance and texture change little or not at all. Feature-based methods are fairly robust, but their generality across different scenes and illumination conditions still leaves room for improvement.
(3) Machine-learning-based methods: these build classifier models such as support vector machines or neural networks, construct learning databases of features such as texture and gray level, train the classifiers, and then use the trained classifiers to decide whether a dark region is a shadow. Such methods address automatic learning and pattern recognition, but the generalization ability and adaptability of the classifiers remain to be improved.
In summary, all of these methods still have limitations in handling shadows, and shadow-detection methods need further improvement.
The content of the invention
In view of this, the present invention provides a method for detecting moving-vehicle shadows in video images to overcome the defects and deficiencies of existing methods.
To achieve the above object, the invention provides a moving-vehicle shadow detection method, comprising:
(1) After extracting the image IF of the moving-vehicle target and the background image IB, convert both to the YCbCr color space to obtain the luminance component Y and the chrominance components Cb and Cr of the image IF and of the background, then construct the image Ifea fusing the luminance and chrominance information; the value of each pixel of Ifea is
where BW is the binarized foreground-target image obtained by a background-subtraction moving-target detection method, and D is the neighborhood of pixel P(x, y). A threshold Tfea is then obtained for image Ifea by the maximum between-class variance method, and the pixels of Ifea below Tfea are set to zero, giving image Hfea.
(2) Segment the image IF into regions, and compute for each region the sum of the absolute differences of the chrominance components Cb and Cr between IF and the background image; take the region Dseg(i) with the maximum sum as the search seed region for the vehicle body, i.e.
(3) Apply an edge operator to image Ifea for edge detection to obtain the edge image E; obtain the binarization threshold Tedge of E by the maximum between-class variance method, set edge values below Tedge to zero to obtain the image Iedge containing the edge features, and superimpose image Hfea and image Iedge to form the feature image M containing edge, luminance and chrominance information, i.e.
M (x, y)=Iedge+Hfea(x,y)
Wherein,
(4) Construct a mask image that suppresses the edges of the shadow region and the background. First obtain the boundary B of the binarized foreground-target image BW, traverse every point P(x, y) along B, and set the feature-image pixels M(x, y) in the neighborhood L corresponding to P(x, y) to zero. The width of the neighborhood L is computed as
L=λ arccot [α (dc-dz)]
where λ is an amplitude adjustment factor, α is a rate-of-change adjustment factor, dc is the distance from point P(x, y) to the form centre C of the feature image M, and dz is the distance from P(x, y) to the barycenter Z of the feature image M; when L < 0, L is set to zero;
(5) Obtain the boundary b of the search seed region Dseg(i); find along b all r regions adjacent to Dseg(i), forming the set [Dseg(1), Dseg(2) … Dseg(r)]; take from this set the region Dseg(j) with the maximum pixel average of the feature image M, and merge Dseg(j) and Dseg(i) into a larger region Dseg(i), i.e.
where n is the number of pixels in region Dseg(m) and k is the current iteration count. The iteration repeats until the obtained region Dseg(i) satisfies formula (7), i.e. until among the sub-regions surrounding Dseg(i) there no longer exists one whose pixel average exceeds the threshold ξ; ξ takes the feature-image M pixel average of the regions Dseg(j) touched by the segments of boundary B where the neighborhood width L is greater than zero;
The other sub-regions not touched by region Dseg(i) form two sets, Q1 = [Dseg(r+1), Dseg(r+2) … Dseg(r+k)] and Q2 = [Dseg(r+k+1), Dseg(r+k+2) … Dseg(N)], where Q1 is the set of sub-regions adjacent to Dseg(i) and Q2 is the set of regions not adjacent to Dseg(i). If Q1 and Q2 are both empty, the moving target has no shadow and the detection of shadow candidate regions is complete. Otherwise, judge whether the pixel averages of the regions in Q2 exceed the threshold ξ: if not, the regions formed by all elements of Q1 and Q2 are taken as the shadow candidate regions; if so, two moving targets have become stuck together, in which case set Q2 is processed by formula (6) for the other target, and once formula (7) is satisfied the regions formed by all elements of Q1 and Q2 are taken as the shadow candidate regions;
(6) Taking the shadow candidate regions as starting points, search the shadow candidate regions and their surroundings for the shadow region by region growing; the criterion of the region-growing search is
S (j)={ P (x, y):M (x, y) < Tedge&|Ifea(x,y)-Ifea(x+d, y+d) | < ζ σ &BW (x, y)=1 }
Traverse the sub-regions represented by each element of the sets Q1 and Q2, searching inside each sub-region and its surroundings to obtain the shadow regions S(r+1), S(r+2) … S(N); all shadow sub-regions are combined by a logical OR operation into the complete shadow region S of the moving vehicle, i.e.
S = Dseg(r+1)|Dseg(r+2)|…|Dseg(N)|S(r+1)|S(r+2)|…|S(N).
In the described method for detecting moving-vehicle shadows in a video image, in step (1), when any channel component of the background image IB (the luminance component Y or the chrominance components Cb and Cr) is small enough to make a denominator zero, the average value of that channel component is substituted.
The present invention considers multiple feature cues simultaneously: luminance, chrominance and edge gradient. Exploiting the facts that the luminance of a shadow region decays by a roughly constant ratio relative to its background and that its chrominance changes little, the shadow feature image is constructed from the attenuation ratios between the luminance and chrominance components of the moving-target image and its background image. This feature image effectively attenuates the texture of the shadow region and further enhances the texture of the non-shadow region. After region segmentation of the image, the region with the maximum chrominance difference serves as the seed for searching the vehicle-body region, and the untouched remaining regions are treated as shadow candidate regions. The surroundings of each shadow candidate region are then searched, and all shadow sub-regions are finally merged to obtain the complete shadow region of the vehicle. Compared with existing shadow-detection methods, experimental results show that the proposed method can autonomously discriminate whether a shadow exists, has good shadow-detection performance and generality, and reduces the influence of shadows on subsequent video analysis; it is suitable for application fields such as traffic-video analysis, vehicle tracking and traffic-flow detection.
Brief description of the drawings
Fig. 1 is a frame of the video sequence in an embodiment of the present invention;
Fig. 2 is the background image IB of the video scene in the embodiment;
Fig. 3 is the moving-vehicle image IF containing the shadow region in the embodiment;
Fig. 4 is the moving-vehicle image Ifea fusing luminance and chrominance information in the embodiment;
Fig. 5 shows the search seed region of the vehicle body and the shadow candidate regions in the embodiment;
Fig. 6 is the moving-vehicle feature image M fusing gray level, edge and chrominance in the embodiment;
Fig. 7 is the mask image suppressing the edge information of the shadow region in the embodiment;
Fig. 8 is the feature image M with the shadow-region edge information eliminated in the embodiment;
Fig. 9 is the detection result for the moving-vehicle shadow in the embodiment.
Embodiment
The method for detecting moving-vehicle shadows in video images according to the present invention is further described below with reference to the accompanying drawings and a specific embodiment.
In the embodiment, the method for detecting moving-vehicle shadows in a video image comprises the following steps.
Step A. Establish a background model for the traffic-monitoring scene shown in Fig. 1. Update the background in real time to obtain the background of the moving vehicle, as shown in Fig. 2. Obtain the moving-vehicle target using the PBAS method, one of the moving-target detection methods, as shown in Fig. 3.
Step B. Convert the moving-target image IF and the background image IB from the RGB color space to the YCbCr color space, obtaining the values of the three components: luminance Y and chrominance Cb and Cr. The conversion formulas are as follows
Y=16+65.481*R+128.553*G+24.966*B
Cb=128-37.797*R-74.203*G+112*B
Cr=128+112*R-93.785*G-18.214*B
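As an illustration only (not part of the patent text), the Step B conversion can be sketched in Python. The sketch assumes R, G, B are normalized to [0, 1], which is the convention under which the quoted BT.601 coefficients produce Y in [16, 235] and Cb, Cr in [16, 240]:

```python
import numpy as np

def rgb_to_ycbcr(img):
    """Convert an RGB image with channel values in [0, 1] to the Y, Cb, Cr
    planes, using the BT.601 coefficients quoted in Step B."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y  = 16.0  + 65.481 * r + 128.553 * g + 24.966 * b
    cb = 128.0 - 37.797 * r - 74.203  * g + 112.0  * b
    cr = 128.0 + 112.0  * r - 93.785  * g - 18.214 * b
    return y, cb, cr
```

Under this convention, pure white maps to the nominal maximum Y = 235 with neutral chroma Cb = Cr = 128.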
Step C. Construct the image Ifea to attenuate the shadow-region texture and enhance the vehicle-body texture, as shown in Fig. 4. Each pixel value of Ifea is computed by formula (1)
where BW is the binary image of the foreground target and D is the neighborhood of pixel P(x, y); in this embodiment the neighborhood size D is 3, i.e. the 8-neighborhood. When a denominator is 0, the average value of the corresponding component (luminance Y or chrominance Cb or Cr) of the background image IB is used in its place.
The binarization threshold Tfea of image Ifea is obtained by the maximum between-class variance (Otsu) method. Pixels below Tfea are usually caused by shadow or by a black vehicle body; to reduce their influence on the body search, values below Tfea are set to 0, giving image Hfea, i.e.
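A minimal sketch of this thresholding step (an illustrative stand-in, not the patent's implementation): `otsu_threshold` applies the maximum between-class variance rule to a value histogram, and `suppress_below` zeroes the sub-threshold pixels to form Hfea:

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Maximum between-class variance (Otsu) threshold for a 1-D array
    of values: pick the histogram bin centre maximizing sigma_b^2."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(float) / max(hist.sum(), 1)
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)              # class-0 (below threshold) probability
    mu = np.cumsum(p * centers)    # cumulative mean
    mu_t = mu[-1]
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    sigma_b = np.zeros_like(w0)
    sigma_b[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(sigma_b)]

def suppress_below(img, t):
    """Zero out pixels below threshold t (forms H_fea from I_fea)."""
    out = img.copy()
    out[out < t] = 0.0
    return out
```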
Step D. Segment the image IF into sub-regions with the Meanshift algorithm, forming the set Dseg(N). According to formula (2), take the region Dseg(i) whose chrominance components Cb and Cr differ most from the background image as the search seed region of the vehicle body, as shown in Fig. 5, where the darkest region is the seed region, i.e.
Step E. Apply an edge operator to image Ifea for edge detection; in this embodiment the Sobel edge operator is used to obtain the edge image E, whose calculation formula is
where Gx = Ifea(x-1, y-1) + 2Ifea(x, y-1) + Ifea(x+1, y-1) - Ifea(x-1, y+1) - 2Ifea(x, y+1) - Ifea(x+1, y+1),
Gy = Ifea(x-1, y-1) + 2Ifea(x-1, y) + Ifea(x-1, y+1) - Ifea(x+1, y-1) - 2Ifea(x+1, y) - Ifea(x+1, y+1)
The binarization threshold Tedge of the edge image E is obtained by the maximum between-class variance method, and edge values below Tedge are set to 0 to reduce the interference of weak edges, i.e.
Then, image Hfea and the edge image Iedge are superimposed to obtain the feature image M fusing edge gradient, luminance and chrominance; the value of each pixel is
M (x, y)=Iedge+Hfea(x,y) (4)
The feature image M obtained in this embodiment is shown in Fig. 6.
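The Step E computation can be sketched as follows (illustrative only; combining the gradients as E = sqrt(Gx^2 + Gy^2) is an assumption, since the text lists the two stencils but the combining formula did not survive extraction):

```python
import numpy as np

def sobel_edges(img):
    """Sobel gradient magnitude of a 2-D float image (interior pixels;
    the one-pixel border stays zero). The Gx/Gy stencils follow the
    expressions in Step E; E = sqrt(Gx^2 + Gy^2) is an assumption."""
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[1:-1, 1:-1] = (img[:-2, :-2] + 2 * img[1:-1, :-2] + img[2:, :-2]
                      - img[:-2, 2:] - 2 * img[1:-1, 2:] - img[2:, 2:])
    gy[1:-1, 1:-1] = (img[:-2, :-2] + 2 * img[:-2, 1:-1] + img[:-2, 2:]
                      - img[2:, :-2] - 2 * img[2:, 1:-1] - img[2:, 2:])
    return np.hypot(gx, gy)

def fuse_features(h_fea, edges, t_edge):
    """Form M = Iedge + Hfea, keeping only edge responses >= t_edge."""
    return np.where(edges >= t_edge, edges, 0.0) + h_fea
```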
Step F. Obtain the binarized foreground-target image BW and compute the form centre C(xc, yc) of BW; the centroid formula is
Wherein
Because the edge information of the feature image M is unevenly distributed, being rich in the vehicle-body area and very sparse in the shadow region, the barycenter of M lies closer to the vehicle body; its barycenter Z(xz, yz) is computed as
Wherein
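A sketch of the Step F moments (illustrative; the patent's centroid and barycenter formulas are elided above, so these are the standard unweighted and intensity-weighted means of the foreground coordinates):

```python
import numpy as np

def centroid(bw):
    """Form centre C of a binary foreground mask: the unweighted mean
    of the foreground pixel coordinates."""
    xs, ys = np.nonzero(bw)
    return xs.mean(), ys.mean()

def barycenter(m):
    """Barycenter Z of the feature image M: the intensity-weighted mean
    coordinate; edge energy concentrates on the car body, so Z is
    pulled toward the body."""
    xs, ys = np.nonzero(m)
    w = m[xs, ys].astype(float)
    return (xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()
```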
To suppress the edges in the shadow region and the background area, a mask image suppressing shadow-region edge information is constructed, as shown in Fig. 7. To retain the body edge information, the width of the neighborhood L is variable: L is larger in the shadow region and smaller in the non-shadow region. If any point P(x, y) on the boundary B of the binary foreground image BW has distance dc to the form centre C and distance dz to the barycenter Z, then the width of the neighborhood L is
L=λ arccot [α (dc-dz)] (5)
In this embodiment, when L < 0, L is set to 0. λ is the amplitude adjustment factor, with value 1 in this embodiment; α is the rate-of-change adjustment factor, also with value 1. The effect of the mask image is to set the pixels M(x, y) in the neighborhood L of point P(x, y) to 0. After the mask operation on the feature image M, the edge information in the shadow region is effectively suppressed, as shown in Fig. 8.
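A sketch of the neighborhood-width rule (5) under one stated assumption: arccot is taken here as atan2(1, ·) with range (0, π), so that with λ = α = 1 the width grows when dc < dz, i.e. on the likely shadow side of the boundary; the clamp at zero mirrors the rule for L < 0:

```python
import numpy as np

def mask_width(d_c, d_z, lam=1.0, alpha=1.0):
    """Neighbourhood width L = lam * arccot(alpha * (d_c - d_z)),
    clamped at zero as in formula (5). arccot(t) is taken as
    atan2(1, t), range (0, pi): the width is then largest when the
    boundary point is nearer the form centre C than the barycenter Z."""
    return max(float(lam * np.arctan2(1.0, alpha * (d_c - d_z))), 0.0)
```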
Step G. Obtain the boundary b of the search seed region Dseg(i) and find along b all regions adjacent to Dseg(i); assuming their number is r, they form the set [Dseg(1), Dseg(2) … Dseg(r)]. Compute the pixel average of the feature image M over each region, and merge the region Dseg(j) with the maximum average into the original Dseg(i), i.e.
where n is the number of pixels in region Dseg(m) and k is the current iteration count.
Loop iteration then yields region Dseg(i), and the iteration terminates when formula (7) is satisfied, i.e. when among the other sub-regions of the feature image M adjacent to Dseg(i) there no longer exists any whose pixel-feature average reaches the threshold ξ; ξ is a small value close to zero, adjusted according to the video scene.
The other sub-regions not touched by region Dseg(i) form two sets, Q1 = [Dseg(r+1), Dseg(r+2) … Dseg(r+k)] and Q2 = [Dseg(r+k+1), Dseg(r+k+2) … Dseg(N)], where Q1 is the set of sub-regions surrounding Dseg(i) and Q2 is the set of regions not adjacent to Dseg(i).
In this embodiment the sets Q1 and Q2 are non-empty, showing that the moving target has a shadow. Furthermore, since the pixel averages of the regions in Q2 are all below the threshold ξ, there is only the shadow of a single moving target, and all elements of Q1 and Q2 are therefore taken as the shadow candidate regions. The regions touched by the segments of boundary B where the neighborhood width L is greater than zero are mainly shadow regions, as shown in Figs. 6 and 7; ξ therefore takes the average of the feature-image M pixels over those regions, and is 0.1 in this embodiment. After the iteration ends, Dseg(i) is the gray area in Fig. 5, i.e. the vehicle-body part, while the uncolored white areas are the shadow candidate regions.
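The Step G iteration can be sketched on a toy region-adjacency graph (illustrative only; `means` and `adjacency` stand in for the Meanshift sub-regions, their feature averages, and their neighborhoods):

```python
def absorb_body_regions(seed, means, adjacency, xi):
    """Grow the vehicle-body region: repeatedly absorb the adjacent
    sub-region with the highest mean feature value, stopping when no
    neighbour's mean exceeds xi (stopping rule (7))."""
    body = {seed}
    while True:
        frontier = set().union(*(adjacency[r] for r in body)) - body
        candidates = [r for r in frontier if means[r] > xi]
        if not candidates:
            return body  # remaining untouched regions are shadow candidates
        body.add(max(candidates, key=lambda r: means[r]))

# Toy graph: region 0 is the seed; 1 and 2 are bright body parts,
# 3 and 4 are dark (shadow-like) regions below the threshold.
means = {0: 9.0, 1: 5.0, 2: 4.0, 3: 0.05, 4: 0.02}
adjacency = {0: {1, 3}, 1: {0, 2}, 2: {1, 4}, 3: {0, 4}, 4: {2, 3}}
body = absorb_body_regions(0, means, adjacency, xi=0.1)
```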
Step H. Starting from the sub-regions represented by the elements [Dseg(r+1), Dseg(r+2) … Dseg(N)] of the sets Q1 and Q2, search the feature image Ifea(x, y) for the shadow regions S(j) by region growing; the growing criterion is computed by formula (8).
S (j)={ P (x, y):M (x, y) < Tedge&|Ifea(x,y)-Ifea(x+d, y+d) | < ζ σ &BW (x, y)=1 } (8)
After the shadow sub-regions S(r+1), S(r+2) … S(N) are obtained, all sub-regions are combined by a logical OR operation into the complete shadow region S of the moving vehicle, i.e.
S=Dseg(r+1)|Dseg(r+2)|…Dseg(N)|S(r+1)|S(r+2)|…S(N)
The complete shadow region S in this embodiment is shown in Fig. 9, where the gray area is the detected shadow and the white area is the vehicle body. As Fig. 9 shows, the invention detects the shadow from the moving target well.
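The Step H region growing of criterion (8) can be sketched as a flood fill (illustrative only; comparing each candidate pixel to the pixel it is grown from is one reasonable reading of the |Ifea(x, y) - Ifea(x+d, y+d)| < ζσ term):

```python
import numpy as np
from collections import deque

def grow_shadow(seeds, m, i_fea, bw, t_edge, zeta_sigma):
    """Region growing per criterion (8): from shadow-candidate pixels,
    absorb 4-connected foreground neighbours (BW = 1) whose edge
    response in M is below T_edge and whose I_fea differs from the
    pixel they are grown from by less than zeta * sigma."""
    h, w = bw.shape
    shadow = np.zeros((h, w), dtype=bool)
    q = deque(seeds)
    for x, y in seeds:
        shadow[x, y] = True
    while q:
        x, y = q.popleft()
        for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
            if (0 <= nx < h and 0 <= ny < w and not shadow[nx, ny]
                    and bw[nx, ny] == 1
                    and m[nx, ny] < t_edge
                    and abs(i_fea[nx, ny] - i_fea[x, y]) < zeta_sigma):
                shadow[nx, ny] = True
                q.append((nx, ny))
    return shadow

# 1 x 6 strip: a strong edge in M at column 3 separates shadow from body.
bw = np.ones((1, 6), dtype=int)
i_fea = np.array([[0.5, 0.5, 0.5, 2.0, 2.0, 2.0]])
m = np.zeros((1, 6)); m[0, 3] = 10.0
shadow = grow_shadow([(0, 0)], m, i_fea, bw, t_edge=1.0, zeta_sigma=0.3)
```

The complete shadow region S is then the logical OR of the boolean masks grown from all candidate sub-regions, e.g. `total = shadow_a | shadow_b`, matching the final combination formula for S.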

Claims (2)

1. A method for detecting moving-vehicle shadows in a video image, characterized by comprising the following steps:
(1) after extracting the image IF of the moving-vehicle target and the background image IB, converting both to the YCbCr color space to obtain the luminance component Y and the chrominance components Cb and Cr of the image IF and of the background, and then constructing the image Ifea fusing the luminance and chrominance information, the value of each pixel of Ifea being
$$I_{fea}(x,y)=\frac{\sum_{(x,y)\in D}Y_F(x,y)}{\sum_{(x,y)\in D}Y_B(x,y)}\cdot\left(1+\frac{\sum_{(x,y)\in D}Cb_F(x,y)-\sum_{(x,y)\in D}Cb_B(x,y)}{\sum_{(x,y)\in D}\left[BW(x,y)\cdot Cb_B(x,y)\right]}\right)\cdot\left(1+\frac{\sum_{(x,y)\in D}Cr_F(x,y)-\sum_{(x,y)\in D}Cr_B(x,y)}{\sum_{(x,y)\in D}\left[BW(x,y)\cdot Cr_B(x,y)\right]}\right)\quad(1)$$
wherein BW is the binarized foreground-target image obtained by a background-subtraction moving-target detection method and D is the neighborhood of pixel P(x, y); a threshold Tfea is then obtained for image Ifea by the maximum between-class variance method, and the pixels of Ifea below Tfea are set to zero, giving image Hfea;
(2) segmenting the image IF into regions, computing for each region of IF the sum of the absolute differences of the chrominance components Cb and Cr between the region and the background image, and taking the region Dseg(i) with the maximum sum as the search seed region for the vehicle body, i.e.
$$D_{seg}(i)=\arg\max_{F}\left[\sum_{(x,y)\in D_{seg}}\big(|Cb_F(x,y)-Cb_B(x,y)|+|Cr_F(x,y)-Cr_B(x,y)|\big)\right];$$
(3) applying an edge operator to image Ifea for edge detection to obtain the edge image E, obtaining the binarization threshold Tedge of E by the maximum between-class variance method, setting edge values below Tedge to zero to obtain the image Iedge containing the edge features, and superimposing image Hfea and image Iedge to form the feature image M containing edge, luminance and chrominance information, i.e.
M (x, y)=Iedge+Hfea(x,y)
Wherein,
(4) constructing a mask image suppressing the edges of the shadow region and the background: first obtaining the boundary B of the binarized foreground-target image BW, traversing every point P(x, y) along B, and setting the feature-image pixels M(x, y) in the neighborhood L corresponding to P(x, y) to zero, the width of the neighborhood L being computed as
L=λ arccot [α (dc-dz)]
wherein λ is an amplitude adjustment factor, α is a rate-of-change adjustment factor, dc is the distance from point P(x, y) to the form centre C of the feature image M, and dz is the distance from P(x, y) to the barycenter Z of the feature image M; when L < 0, L is set to zero;
(5) obtaining the boundary b of the search seed region Dseg(i), finding along b all r regions adjacent to Dseg(i) to form the set [Dseg(1), Dseg(2) … Dseg(r)], taking from this set the region Dseg(j) with the maximum pixel average of the feature image M, and merging Dseg(j) and Dseg(i) into a larger region Dseg(i), i.e.
$$D_{seg}^{k}(i)=D_{seg}^{k-1}(i)+D_{seg}^{k-1}(j)\quad(6)$$
wherein n is the number of pixels in region Dseg(m) and k is the current iteration count; the iteration repeats until the obtained region Dseg(i) satisfies formula (7), i.e. until among the sub-regions surrounding Dseg(i) there no longer exists one whose pixel average exceeds the threshold ξ, where ξ takes the feature-image M pixel average of the regions Dseg(j) touched by the segments of boundary B where the neighborhood width L is greater than zero;
$$\max\left\{\frac{1}{n}\sum_{(x,y)\in D_{seg}(m)}M(x,y)\right\}<\xi,\quad m=r+1,r+2,\ldots,r+k\quad(7)$$
The other sub-regions not touched by region Dseg(i) form two sets, Q1 = [Dseg(r+1), Dseg(r+2) … Dseg(r+k)] and Q2 = [Dseg(r+k+1), Dseg(r+k+2) … Dseg(N)], where Q1 is the set of sub-regions adjacent to Dseg(i) and Q2 is the set of regions not adjacent to Dseg(i); if Q1 and Q2 are both empty, the moving target has no shadow and the detection of shadow candidate regions is complete; otherwise, it is judged whether the pixel averages of the regions in Q2 exceed the threshold ξ: if not, the regions formed by all elements of Q1 and Q2 are taken as the shadow candidate regions; if so, two moving targets have become stuck together, in which case set Q2 is processed by formula (6) for the other target, and once formula (7) is satisfied the regions formed by all elements of Q1 and Q2 are taken as the shadow candidate regions;
(6) taking the shadow candidate regions as starting points, searching the shadow candidate regions and their surroundings for the shadow region by region growing, the criterion of the region-growing search being
S (j)={ P (x, y):M (x, y) < Tedge&|Ifea(x,y)-Ifea(x+d, y+d) | < ζ σ &BW (x, y)=1 }
Traverse the subregions represented by every element of sets Q1 and Q2, searching within each subregion and its neighboring area to obtain the shadow subregions S(r+1), S(r+2), …, S(N); all shadow subregions are combined by a logical OR operation to obtain the whole shadow region S of the moving vehicle, i.e.
S=Dseg(r+1)|Dseg(r+2)|…Dseg(N)|S(r+1)|S(r+2)|…S(N)
Wherein N is the number of elements in set Dseg, i.e. the total number of subregion blocks.
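The region-growing search of step (6) and the final logical-OR union can be sketched as below. This is a simplified illustration, not the patented method itself: only the M(x, y) < Tedge and BW(x, y) = 1 terms of the criterion are kept (the |Ifea(x, y) − Ifea(x+d, y+d)| < ζσ term is omitted), 4-connectivity is assumed, and all names are hypothetical.

```python
import numpy as np
from collections import deque

def grow_shadow(M, BW, seeds, t_edge):
    """Grow from the shadow-candidate pixels into 4-connected neighbours
    that satisfy a simplified form of the criterion:
    M(x, y) < Tedge and BW(x, y) == 1 (names hypothetical)."""
    h, w = M.shape
    grown = np.zeros(M.shape, dtype=bool)
    grown[seeds] = True
    queue = deque(zip(*np.nonzero(seeds)))
    while queue:
        x, y = queue.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            u, v = x + dx, y + dy
            if (0 <= u < h and 0 <= v < w and not grown[u, v]
                    and BW[u, v] == 1 and M[u, v] < t_edge):
                grown[u, v] = True
                queue.append((u, v))
    return grown

def union(masks):
    """Logical OR of all candidate and grown masks, mirroring
    S = Dseg(r+1) | ... | Dseg(N) | S(r+1) | ... | S(N)."""
    return np.logical_or.reduce(masks)
```

The OR union is cheap because every subregion and grown region is just a boolean mask over the frame.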
2. The moving vehicle shadow detection method in a video image according to claim 1, characterised in that in said step (1), when any channel component among the luminance component Y and the chrominance components Cb and Cr of background image IB is small enough to make the denominator zero, the average value of that channel component is substituted.
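The zero-denominator safeguard of claim 2 might look like the following sketch; the function name and the tolerance `eps` are assumptions for illustration, not values taken from the patent.

```python
import numpy as np

def safe_channel(channel, eps=1e-6):
    """Where a Y/Cb/Cr background channel value is too small to serve
    as a denominator, substitute the channel's own mean value
    (name and eps are hypothetical)."""
    out = channel.astype(float).copy()
    out[np.abs(out) < eps] = channel.mean()
    return out
```

Applied per channel before any ratio-based feature is computed, this keeps the division well-defined without discarding the pixel.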
CN201510282190.5A 2015-05-28 2015-05-28 Moving vehicle shadow detection method in a kind of video image Expired - Fee Related CN104899881B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510282190.5A CN104899881B (en) 2015-05-28 2015-05-28 Moving vehicle shadow detection method in a kind of video image


Publications (2)

Publication Number Publication Date
CN104899881A CN104899881A (en) 2015-09-09
CN104899881B true CN104899881B (en) 2017-11-28

Family

ID=54032526

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510282190.5A Expired - Fee Related CN104899881B (en) 2015-05-28 2015-05-28 Moving vehicle shadow detection method in a kind of video image

Country Status (1)

Country Link
CN (1) CN104899881B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106940784A (en) * 2016-12-26 2017-07-11 无锡高新兴智能交通技术有限公司 A kind of bus detection and recognition methods and system based on video
CN108446705B (en) * 2017-02-16 2021-03-23 华为技术有限公司 Method and apparatus for image processing
CN107220949A (en) * 2017-05-27 2017-09-29 安徽大学 The self adaptive elimination method of moving vehicle shade in highway monitoring video
CN108197622A (en) * 2017-12-26 2018-06-22 新智数字科技有限公司 A kind of detection method of license plate, device and equipment
RU2676026C1 (en) * 2018-03-23 2018-12-25 Акционерное Общество "Крафтвэй Корпорэйшн Плс" Video stream analysis method
CN108986058B (en) * 2018-06-22 2021-11-19 华东师范大学 Image fusion method for brightness consistency learning
CN112215168A (en) * 2020-10-14 2021-01-12 上海爱购智能科技有限公司 Image editing method for commodity identification training
CN113052845B (en) * 2021-06-02 2021-09-24 常州星宇车灯股份有限公司 Method and device for detecting vehicle carpet lamp
CN113763410B (en) * 2021-09-30 2022-08-02 江苏天汇空间信息研究院有限公司 Image shadow detection method based on HIS combined with spectral feature detection condition

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101982825A (en) * 2010-11-04 2011-03-02 杭州海康威视系统技术有限公司 Method and device for processing video image under intelligent transportation monitoring scene
US7970168B1 (en) * 2004-02-19 2011-06-28 The Research Foundation Of State University Of New York Hierarchical static shadow detection method
CN102509101A (en) * 2011-11-30 2012-06-20 昆山市工业技术研究院有限责任公司 Background updating method and vehicle target extracting method in traffic video monitoring
CN103106796A (en) * 2013-01-15 2013-05-15 江苏大学 Vehicle detection method and device of intelligent traffic surveillance and control system


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Shadow Detection and Removal Based on YCbCr Color Space; Kaushik Deb et al.; Smart Computing Review; Feb. 2014; Vol. 4, No. 1; pp. 23-33 *
Shadow removal based on YCbCr color space; Xia Zhu et al.; Neurocomputing; Mar. 2015; Vol. 151; pp. 252-258 *
Shadow detection method fusing texture features and shadow attributes; Yu Mengze et al.; Computer Engineering and Design; Oct. 2011; Vol. 32, No. 10; pp. 3431-3434 *


Similar Documents

Publication Publication Date Title
CN104899881B (en) Moving vehicle shadow detection method in a kind of video image
CN104392468B (en) Based on the moving target detecting method for improving visual background extraction
JP4668921B2 (en) Object detection in images
CN102385753B (en) Illumination-classification-based adaptive image segmentation method
CN110276267A (en) Method for detecting lane lines based on Spatial-LargeFOV deep learning network
CN104952256B (en) A kind of detection method of the intersection vehicle based on video information
CN112036254B (en) Moving vehicle foreground detection method based on video image
CN106204494B (en) A kind of image defogging method and system comprising large area sky areas
CN102663743A (en) Multi-camera cooperative character tracking method in complex scene
CN104063885A (en) Improved movement target detecting and tracking method
TWI394096B (en) Method for tracking and processing image
KR20150032176A (en) Color video processing system and method, and corresponding computer program
CN102306307B (en) Positioning method of fixed point noise in color microscopic image sequence
CN109754440A (en) A kind of shadow region detection method based on full convolutional network and average drifting
CN108181316A (en) A kind of bamboo strip defect detection method based on machine vision
CN110390677A (en) A kind of defect positioning method and system based on sliding Self Matching
Zhu et al. Fast detection of moving object based on improved frame-difference method
Ma et al. Vision-based lane detection and lane-marking model inference: A three-step deep learning approach
CN106683062A (en) Method of checking the moving target on the basis of ViBe under a stationary camera
CN103164847A (en) Method for eliminating shadow of moving target in video image
CN108875589B (en) Video detection method for road area
CN104574313B (en) The red light color Enhancement Method and its strengthening system of traffic lights
CN107516320A (en) A kind of Moving Workpieces target non-formaldehyde finishing method suitable for high dynamic illumination condition
Russell et al. Vehicle detection based on color analysis
KR20110121261A (en) Method for removing a moving cast shadow in gray level video data

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20171128

Termination date: 20210528