CN110111355A - Moving vehicle tracking method capable of resisting strong shadow interference - Google Patents
- Publication number
- CN110111355A CN110111355A CN201811396414.5A CN201811396414A CN110111355A CN 110111355 A CN110111355 A CN 110111355A CN 201811396414 A CN201811396414 A CN 201811396414A CN 110111355 A CN110111355 A CN 110111355A
- Authority
- CN
- China
- Prior art keywords
- scale
- channel
- mask
- value
- formula
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 claims abstract description 49
- 230000009466 transformation Effects 0.000 claims abstract description 20
- 238000005070 sampling Methods 0.000 claims abstract description 11
- 230000003044 adaptive effect Effects 0.000 claims abstract description 7
- 238000006243 chemical reaction Methods 0.000 claims abstract description 6
- 230000011218 segmentation Effects 0.000 claims description 7
- 238000010008 shearing Methods 0.000 claims description 6
- 239000000284 extract Substances 0.000 claims description 4
- 238000013459 approach Methods 0.000 claims description 3
- 238000012937 correction Methods 0.000 claims description 3
- 230000010339 dilation Effects 0.000 claims description 3
- 230000000052 comparative effect Effects 0.000 description 6
- 238000001514 detection method Methods 0.000 description 5
- 230000003068 static effect Effects 0.000 description 5
- 238000004458 analytical method Methods 0.000 description 3
- 230000000694 effects Effects 0.000 description 2
- 230000008030 elimination Effects 0.000 description 2
- 238000003379 elimination reaction Methods 0.000 description 2
- 238000005286 illumination Methods 0.000 description 2
- 238000012545 processing Methods 0.000 description 2
- 238000007476 Maximum Likelihood Methods 0.000 description 1
- 238000004364 calculation method Methods 0.000 description 1
- 230000006735 deficit Effects 0.000 description 1
- 238000003708 edge detection Methods 0.000 description 1
- 238000000605 extraction Methods 0.000 description 1
- 238000012549 training Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
- G06T5/30—Erosion or dilatation, e.g. thinning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
The present invention discloses a moving vehicle tracking method that resists strong shadow interference, with high accuracy, good robustness, and adaptive capability, based on the zerotree structure of the non-subsampled shearlet domain. After each video frame is converted from the RGB color space to the HSV color space, a non-subsampled shearlet transform is applied. Assuming that the transform coefficients follow a Gaussian distribution, a weighting mask is computed for each scale from the mean and standard deviation of the coefficients. According to the zerotree distribution property of the multi-scale transform coefficients, the fine-scale weighting masks are corrected by the coarse-scale weighting masks, and the weighting masks of all scales and color channels are linearly combined into a common mask. An adaptive segmentation threshold is computed by a maximum-entropy method based on least-squares fitting, and the common mask is binarized. The moving-vehicle region is determined by voting, and the target vehicle is then tracked with the mean-shift algorithm.
Description
Technical field
The present invention relates to the field of intelligent traffic video processing, and in particular to a moving vehicle tracking method with high accuracy, good robustness, and adaptive capability that resists strong shadow interference, based on the zerotree structure of the non-subsampled shearlet domain.
Background technique
When an intelligent transportation system automatically tracks vehicle targets, static shadows cast by the surrounding stationary scenery cause short-term changes in the features of a moving vehicle, while the moving shadow cast by the vehicle itself enlarges the vehicle's apparent area in the video image and joins the target vehicle and its shadow together, so that the shadow is easily misdetected as part of the moving target. Shadows have therefore come to be recognized as a disturbing factor in vehicle tracking, segmentation, and traffic statistics. Whether static or moving, shadows destroy the feature consistency of the target vehicle, degrade the effectiveness of target-tracking algorithms, and cause loss of the tracked target. Under these circumstances, effectively improving the robustness of tracking algorithms to shadow interference, or even eliminating the shadows, is of great significance for improving the tracking accuracy of moving vehicle targets.
Although shadows appear in many forms, most share four common characteristics: the brightness of a shadow is lower than that of the foreground; a shadow does not excessively change the color of the background; a shadow does not change the texture of the moving foreground or the static background; and a shadow exists only outside the true moving-target region. Based on these characteristics, researchers have in recent years proposed a variety of effective shadow detection and elimination methods for moving targets, differing in decision process, model, and exploited features. They mainly include methods based on color-space pixels, methods based on edge detection, and methods based on gray-level contours.
First, methods based on color-space pixels mainly exploit the brightness and color characteristics of shadows: they analyze the intensity ratio of foreground and background pixels and then apply multiple thresholds in the HSV color space, handling shadows of varying strength well. Cucchiara et al. used multiple thresholds on the intensity ratio of foreground and background pixels in the HSV color space; their DNM1 method suppresses shadows to some extent, but because several thresholds are involved, choosing them adaptively is difficult, and the method cannot adapt to varying environments. Moreover, when the target and shadow regions have similar color and gray values, the method cannot distinguish a moving target from a shadow of similar gray value, easily causing misclassification. Choi et al. proposed combining first-order gradient information with normalized RGB to judge shadows, reducing the false-detection rate to some extent. Xiang et al. improved the robustness of tracking to illumination changes with a local intensity-ratio model based on illumination modeling. Ouivirach et al. extracted the moving foreground in the HSV color space using a Gaussian mixture model and then used maximum likelihood to decide whether each extracted foreground pixel belongs to the target or to a shadow; this effectively improves detection, but many misclassifications remain and the computational cost is considerable. Similarly to Ouivirach et al., Liu et al. also modeled each HSV pixel by projection with a Gaussian mixture model and, to reduce the false-detection rate, introduced a pre-classifier based on a Markov random field (MRF) to extract the color features of shadows in video frames, accumulating statistics of shadow features over consecutive frames so that the pre-classifier effectively adapts to shadow variation, with good results. However, when the training samples cannot match the rate of shadow change, i.e. when the relative motion of vehicle and shadow is fast, the global shadow statistics become unreliable and the false-detection rate rises accordingly.
Since shadow regions are relatively smooth while vehicle targets usually contain some texture and edge information, methods based on edge detection and on gray-level contours analyze texture and edges to detect shadows and thereby suppress the influence of shadows on the tracking process. A typical cross-correlation texture-analysis method is that of Tian et al., who compare the texture similarity between pixels at the same position and their neighborhoods in the current frame and the background model, and propose judging shadow interference by the normalized cross-correlation of texture information, with some success. Considering the edge-direction analysis ability of the wavelet transform, Guan et al. chose the standard deviation of each wavelet subband as a threshold to remove, to some extent, the shadows produced by the traditional background-difference method; Khare et al. further processed the wavelet subbands using the relative standard deviation as the threshold. Wang et al. constructed a zerotree wavelet mask of the moving foreground from the inter-scale relationships of wavelet coefficients and corrected the coarse-scale mask with the fine-scale mask, obtaining accurate subband thresholds and shadow-detection results; however, their method requires the user to input a background frame, obtains the moving-foreground region by background subtraction, and requires interactive setting of the binarization threshold of the multi-valued mask, so its self-adaptive processing ability remains clearly limited. Compared with gray- or color-based methods, the above methods resist shadows by exploiting texture and edge characteristics and achieve a certain algorithmic stability. However, their key step is the effective extraction of texture and edge information, and the wavelet transform can optimally represent only one-dimensional point singularities and the two-dimensional line singularities along the horizontal, vertical, and diagonal directions; it is helpless against the line and curve singularities in other directions that are widespread in video and images. Wavelet-based methods therefore still portray edges and texture inadequately, which limits the robustness of subsequent vehicle-tracking algorithms to shadow interference.
Although scholars at home and abroad have proposed a variety of shadow detection and removal algorithms and applied them to moving-vehicle tracking, there is as yet no vehicle-tracking method that can robustly resist both static and moving shadow interference without human interaction.
Summary of the invention
To solve the above technical problems in the prior art, the present invention provides a moving vehicle tracking method with high accuracy, good robustness, and adaptive capability that resists strong shadow interference, based on the zerotree structure of the non-subsampled shearlet domain.
The technical solution of the invention is as follows: a moving vehicle tracking method resisting strong shadow interference, characterized in that it proceeds according to the following steps:
Step 1. Input a traffic surveillance video V_I containing shadows;
Step 2. Read an unprocessed video frame F of the stated pixel size from V_I and convert it from the RGB color space to the HSV color space;
Step 3. Apply a 2-level non-subsampled shearlet transform to the H channel and the V channel of video frame F respectively, with 4 directional subbands at each scale;
Step 4. Compute the mean of the lowest-frequency subband coefficients of the H channel and the V channel, where the subscript denotes the color channel;
Step 5. Compute the standard deviation of the high-frequency coefficients in each subband of different scale and direction for the H channel and the V channel, where the subscripts denote the scale and the direction respectively;
Step 6. Compute the binary mask of the lowest-frequency subband according to the definition of formula (1), in which each transform coefficient at a coordinate of the lowest-frequency subband of a color channel is mapped to a corresponding binary mask value;
Step 7. Compute the binary mask of each high-frequency directional subband according to the definition of formula (2), in which the transform coefficient at each coordinate of each directional subband at each scale of a color channel is mapped to a corresponding binary mask value;
Step 8. Compute one weighting mask for all subbands at scale 1 according to the definition of formula (3):
(3)
giving the weighting mask value at each coordinate of each directional subband of a color channel at scale 1;
Step 9. Compute one weighting mask for all subbands at scale 2 according to the definition of formula (4), giving the weighting mask value at each coordinate of each directional subband of a color channel at scale 2;
Step 10. According to the zerotree distribution property of the multi-scale transform coefficients, correct the finer-scale weighting mask using the coarse-scale weighting mask: if the value of the coarse-scale weighting mask at a coordinate is 0, set the value of the fine-scale weighting mask at that coordinate to 0 as well;
Step 11. Linearly combine the weighting masks at scale 1 and scale 2 according to the definition of formula (5), obtaining a unified mask over the two scales;
Step 12. Linearly combine the unified masks of the H channel and the V channel according to the definition of formula (6), obtaining a common mask of the two color channels from the unified masks of the H channel and the V channel at each coordinate;
Step 13. Compute the adaptive segmentation threshold of the common mask of the H and V color channels using a least-squares fitting approach;
Step 13.1 With interval length 0.1, divide the value range of the common mask into 10 intervals and count the frequency with which its values fall in each interval, thereby establishing a histogram;
Step 13.2 Initialize the candidate global threshold;
Step 13.3 According to the definitions of formula (7) to formula (9), compute the information entropy of dividing each pixel of the common mask into foreground or background with the candidate value as the global threshold;
Step 13.4 Increment the candidate threshold; if the end of the value range has been reached, go to step 13.5, otherwise return to step 13.3;
Step 13.5 According to the definition of formula (10), fit the information-entropy curve of the best global segmentation threshold with a univariate quadratic equation by the least-squares method, obtaining the three coefficients of the equation, i.e. its quadratic coefficient, linear coefficient, and constant term;
Step 13.6 Take the optimum of the fitted quadratic as the global threshold and, according to the definition of formula (11), threshold the common mask to obtain the binary common mask;
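Formulas (7) to (11) are not recoverable from this text; the sketch below illustrates the stated idea under simplifying assumptions. Candidate thresholds are swept in steps of 0.1, a simple two-class Shannon entropy of the resulting foreground/background split is computed at each (the patent's exact entropy definition may differ), the entropy-vs-threshold curve is fitted with a quadratic by least squares (`np.polyfit`), and the fitted optimum is taken as the global threshold.

```python
import numpy as np

def adaptive_threshold(mask, step=0.1):
    """Maximum-entropy threshold refined by a least-squares quadratic fit.

    mask: array of values in [0, 1]. Returns the vertex of the parabola
    fitted to the entropy-vs-threshold samples.
    """
    ts, ents = [], []
    for t in np.arange(step, 1.0, step):
        p_bg = np.clip((mask < t).mean(), 1e-12, 1 - 1e-12)
        p_fg = 1.0 - p_bg
        h = -(p_bg * np.log(p_bg) + p_fg * np.log(p_fg))
        ts.append(t)
        ents.append(h)
    a, b, c = np.polyfit(ts, ents, 2)   # H(t) ~ a t^2 + b t + c
    return float(-b / (2 * a))          # vertex of the fitted parabola
```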
Step 14. Determine the binary mask of the moving-vehicle region by voting;
Step 14.1 Threshold the H channel and the V channel of video frame F respectively using the maximum between-class variance (Otsu) method, obtaining the binary masks of the two channels;
Step 14.2 Compute the voting binary mask according to the definition of formula (12);
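Formula (12)'s vote is not recoverable from this text. A natural reading — a pixel is kept as vehicle when at least two of the three binary masks (the binarized common mask and the two Otsu channel masks) agree — can be sketched as follows; the majority rule itself is an assumption.

```python
import numpy as np

def majority_vote(m_common, m_h, m_v):
    # Keep a pixel as vehicle when at least 2 of the 3 binary masks vote 1.
    votes = m_common.astype(np.int32) + m_h.astype(np.int32) + m_v.astype(np.int32)
    return (votes >= 2).astype(np.uint8)
```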
Step 15. Perform a morphological dilation operation on the voting mask with a structuring element, obtaining the final binary mask;
Step 16. According to formula (13), multiply the binary mask with video frame F to extract the candidate region of the moving vehicle, where the output pixel value at each coordinate of output video frame O is the product of the mask value and the pixel value at the same coordinate of video frame F;
Step 17. Input the masked video frame to the mean-shift (Meanshift) algorithm and track the vehicle within the candidate region of the moving vehicle, obtaining the position of the target vehicle in the video frame;
Step 18. If all frames of V_I have been processed, output the position of the target vehicle in each video frame and terminate the algorithm; otherwise, return to step 2.
Compared with the prior art, the advantages of the present invention are as follows. First, the non-subsampled shearlet transform analyzes the texture distribution of a video frame more effectively than the wavelet transform, which helps to fully exploit the texture differences between the moving-vehicle region and the shadow region and thus to distinguish the two more accurately. Second, compared with the wavelet transform, adjacent non-subsampled shearlet coefficients are more strongly correlated; the present invention therefore uses the inter-scale correlation and zerotree distribution property of the non-subsampled shearlet coefficients to construct the unified mask and binary mask of the target-vehicle region, yielding more accurate shadow detection and removal results and, in turn, higher-precision moving-vehicle tracking, which effectively remedies the target loss suffered by the traditional MeanShift method under strong shadow interference. Third, no human interaction is needed: the user neither has to supply a background frame in advance for obtaining the moving-vehicle region by background subtraction, nor has to set the binarization threshold interactively, since the automatic segmentation method based on least-squares fitting and maximum entropy avoids that inconvenience and provides better self-adaptive processing ability.
Detailed description of the invention
Fig. 1 compares the shadow-removal results of the present invention and the prior art on traffic surveillance video scene 1.
Fig. 2 compares the shadow-removal results of the present invention and the prior art on traffic surveillance video scene 2.
Fig. 3 compares the shadow-removal results of the present invention and the prior art on traffic surveillance video scene 3.
Fig. 4 compares the vehicle-tracking results of the present invention and the prior art on traffic surveillance video scene 4.
Fig. 5 compares the vehicle-tracking results of the present invention and the prior art on traffic surveillance video scene 5.
Specific embodiment
A moving vehicle tracking method of the invention that resists strong shadow interference proceeds according to the following steps:
Step 1. Input a traffic surveillance video V_I containing shadows;
Step 2. Read an unprocessed video frame F of the stated pixel size from V_I and convert it from the RGB color space to the HSV color space;
Step 3. Apply a 2-level non-subsampled shearlet transform to the H channel and the V channel of video frame F respectively, with 4 directional subbands at each scale;
Step 4. Compute the mean of the lowest-frequency subband coefficients of the H channel and the V channel, where the subscript denotes the color channel;
Step 5. Compute the standard deviation of the high-frequency coefficients in each subband of different scale and direction for the H channel and the V channel, where the subscripts denote the scale and the direction respectively;
Step 6. Compute the binary mask of the lowest-frequency subband according to the definition of formula (1), in which each transform coefficient at a coordinate of the lowest-frequency subband of a color channel is mapped to a corresponding binary mask value;
Step 7. Compute the binary mask of each high-frequency directional subband according to the definition of formula (2), in which the transform coefficient at each coordinate of each directional subband at each scale of a color channel is mapped to a corresponding binary mask value;
Step 8. Compute one weighting mask for all subbands at scale 1 according to the definition of formula (3), giving the weighting mask value at each coordinate of each directional subband of a color channel at scale 1;
Step 9. Compute one weighting mask for all subbands at scale 2 according to the definition of formula (4), giving the weighting mask value at each coordinate of each directional subband of a color channel at scale 2;
Step 10. According to the zerotree distribution property of the multi-scale transform coefficients, correct the finer-scale weighting mask using the coarse-scale weighting mask: if the value of the coarse-scale weighting mask at a coordinate is 0, set the value of the fine-scale weighting mask at that coordinate to 0 as well;
Step 11. Linearly combine the weighting masks at scale 1 and scale 2 according to the definition of formula (5), obtaining a unified mask over the two scales;
Step 12. Linearly combine the unified masks of the H channel and the V channel according to the definition of formula (6), obtaining a common mask of the two color channels from the unified masks of the H channel and the V channel at each coordinate;
Step 13. Compute the adaptive segmentation threshold of the common mask of the H and V color channels using a least-squares fitting approach;
Step 13.1 With interval length 0.1, divide the value range of the common mask into 10 intervals and count the frequency with which its values fall in each interval, thereby establishing a histogram;
Step 13.2 Initialize the candidate global threshold;
Step 13.3 According to the definitions of formula (7) to formula (9), compute the information entropy of dividing each pixel of the common mask into foreground or background with the candidate value as the global threshold;
Step 13.4 Increment the candidate threshold; if the end of the value range has been reached, go to step 13.5, otherwise return to step 13.3;
Step 13.5 According to the definition of formula (10), fit the information-entropy curve of the best global segmentation threshold with a univariate quadratic equation by the least-squares method, obtaining the three coefficients of the equation, i.e. its quadratic coefficient, linear coefficient, and constant term;
Step 13.6 Take the optimum of the fitted quadratic as the global threshold and, according to the definition of formula (11), threshold the common mask to obtain the binary common mask;
Step 14. Determine the binary mask of the moving-vehicle region by voting;
Step 14.1 Threshold the H channel and the V channel of video frame F respectively using the maximum between-class variance (Otsu) method, obtaining the binary masks of the two channels;
Step 14.2 Compute the voting binary mask according to the definition of formula (12);
Step 15. Perform a morphological dilation operation on the voting mask with a structuring element, obtaining the final binary mask;
Step 16. According to formula (13), multiply the binary mask with video frame F to extract the candidate region of the moving vehicle, where the output pixel value at each coordinate of output video frame O is the product of the mask value and the pixel value at the same coordinate of video frame F;
Step 17. Input the masked video frame to the mean-shift (Meanshift) algorithm and track the vehicle within the candidate region of the moving vehicle, obtaining the position of the target vehicle in the video frame;
Step 18. If all frames of V_I have been processed, output the position of the target vehicle in each video frame and terminate the algorithm; otherwise, return to step 2.
Fig. 1 compares the shadow-removal results of the present invention, the DNM1 method, and the zerotree-wavelet method on traffic surveillance video scene 1, where (a) is the original video, (b) the result of the DNM1 method, (c) the result of the zerotree-wavelet method, and (d) the result of the present invention.
Fig. 2 compares the shadow-removal results of the present invention, the DNM1 method, and the zerotree-wavelet method on traffic surveillance video scene 2, where (a) is the original video, (b) the result of the DNM1 method, (c) the result of the zerotree-wavelet method, and (d) the result of the present invention.
Fig. 3 compares the shadow-removal results of the non-global MRF method, the global MRF method, the DNM1 method, the zerotree-wavelet method, and the present invention on traffic surveillance video scene 3, where (a) is the original video, (b) the result of the non-global MRF method, (c) the result of the global MRF method, (d) the result of the DNM1 method, (e) the result of the zerotree-wavelet method, and (f) the result of the present invention.
Figs. 4 and 5 compare the moving-vehicle tracking results of the present invention, the traditional mean-shift method, and the contourlet-based mean-shift method on traffic surveillance video scenes 4 and 5 respectively, where the black lines indicate the vehicle trajectories obtained by tracking. Because the road-surface shadows are heavy, the traditional mean-shift method loses the target vehicle shortly after tracking begins, and the contourlet-based mean-shift method also loses the target where the shadows are strong. The present invention, in contrast, effectively resists the shadow interference: it tracks the moving vehicle accurately, and the trajectory it draws is approximately a straight line, showing higher robustness.
Claims (1)
1. a kind of moving vehicle tracking for resisting strong shadow interference, it is characterised in that carry out in accordance with the following steps:
Step 1. inputs one and contains hypographous Traffic Surveillance Video VI;
Step 2. is from VIIt is middle read in one having a size ofPixel, untreated video frame F, by it from RGB color
It is transformed into hsv color space;
Step 3. carries out 2 grades of non-lower samplings shearing wave conversions to the channel H of video frame F and the channel V respectively, under each scale
Directional subband number is 4;
Step 4. calculates the mean value of the lowest frequency sub-band coefficients in the channel H and the channel V, wherein subscriptIndicate Color Channel and;
Step 5. calculates the standard deviation of the different scale in the channel H and the channel V, different directions subband medium-high frequency coefficient, wherein
SubscriptIndicate scale and, subscriptIndicate direction and;
Step 6. calculates the two-value mask of lowest frequency subband according to the definition of formula (1):
(1)
It is describedIt indicates in Color ChannelLowest frequency subband in be located at coordinateThe transformation coefficient at place,
Indicate two-value mask corresponding to the transformation coefficient,,;
Step 7. calculates the two-value mask of each high frequency direction subband according to the definition of formula (2):
(2)
It is describedIndicate Color Channel?Under a scale,In a directional subband, it is located at coordinatePlace
Transformation coefficient,Indicate two-value mask corresponding to the transformation coefficient;
Step 8. is that all subbands under scale 1 calculate a weighting masks according to the definition of formula (3):
(3)
It is describedIndicate Color ChannelUnder scale 1,In a directional subband, it is located at coordinateThe weighting at place is covered
Code;
Step 9. is that all subbands under scale 2 calculate a weighting masks according to the definition of formula (4):
(4)
It is describedIndicate Color ChannelUnder scale 2,In a directional subband, it is located at coordinateThe weighting at place
Mask;
Step 10. utilizes the weighting masks of thick scale according to zero tree distribution character of multi-scale transform coefficientCorrection compared with
The weighting masks of thin scaleIf: the weighting masks of thick scaleIn coordinateThe value at place is 0, then by thin ruler
The weighting masks of degreeIn coordinateThe value at place is also configured as 0;
Weighting masks under scale 1 and scale 2 are carried out linear combination, obtain two by step 11. according to the definition of formula (5)
Unified mask under scale:
(5)
The unified mask in the channel H and the channel V is carried out linear combination, obtains two face by step 12. according to the definition of formula (6)
The public mask of chrominance channel:
(6)
It is describedWithThe channel H and the channel V are respectively indicated in coordinateThe unified mask at place;
Step 13. calculates the public mask of two Color Channels of H, V using least-square fitting approachIt is adaptive
Segmentation threshold;
Step 13.1 is siding-to-siding block length with 0.1, willValue be divided into 10 sections:、、、、、、、、、, and countValue be in each section
Frequency, described, to establishHistogram;
Step 13.2 enables;
Step 13.3 according to the definition of formula (7) ~ formula (9), calculate withIt will as global thresholdIt is each
A pixel is divided into the comentropy of foreground pixel or background pixel:
(7)
(8)
(9)
Step 13.4 enablesIf, then it is transferred to step 13.5, otherwise return step 13.3;
Step 13.5 is fitted best global segmentation threshold using least square method and 2 equation of n th order n of unitary according to the definition of formula (10)
ValueComentropy curve, obtain 3 coefficients of the equation、With:
(10)
The three coefficients denote the quadratic coefficient, the linear coefficient and the constant term of the unary quadratic equation, respectively;
Step 13.6 Take the extremum of the fitted quadratic as the global threshold and, according to the definition of formula (11), threshold the common mask to obtain the binary common mask:
(11)
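Steps 13.1 through 13.6 can be sketched as one routine. Formulas (7)-(9) are not reproduced in this excerpt, so a standard two-class Shannon entropy is used as a stand-in; the 0.1-wide histogram intervals, the quadratic least-squares fit, and the vertex of the parabola as the chosen threshold follow the step descriptions above:

```python
import numpy as np

def adaptive_threshold(mask, n_bins=10):
    """Entropy-based adaptive threshold for a mask with values in [0, 1].
    Returns the fitted global threshold and the binary mask."""
    hist, edges = np.histogram(mask.ravel(), bins=n_bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    ts, entropies = [], []
    for k in range(1, n_bins):            # candidate threshold after each bin
        p_bg, p_fg = p[:k].sum(), p[k:].sum()
        if p_bg == 0 or p_fg == 0:
            continue
        pb = p[:k] / p_bg                 # within-class distributions
        pf = p[k:] / p_fg
        h = -(pb[pb > 0] * np.log(pb[pb > 0])).sum() \
            - (pf[pf > 0] * np.log(pf[pf > 0])).sum()
        ts.append(edges[k])
        entropies.append(h)
    # Unary quadratic least-squares fit of the entropy curve (formula (10)).
    a, b, c = np.polyfit(ts, entropies, 2)
    t_star = -b / (2.0 * a)               # vertex of the fitted parabola
    return t_star, (mask >= t_star).astype(np.uint8)
```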
Step 14. Determine the binary mask of the moving-vehicle region by voting;
Step 14.1 Threshold the H channel and the V channel of video frame F separately using the maximum between-class variance (Otsu) method, obtaining the binary masks of the two channels;
Step 14.2 Compute the voted binary mask according to the definition of formula (12):
(12)
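Step 14 can be sketched as below. Since formula (12) is not reproduced in this excerpt, a 2-of-3 majority vote among the Otsu masks of the H and V channels and the binary common mask is assumed; the Otsu routine is a plain numpy implementation of the maximum between-class variance criterion named in step 14.1:

```python
import numpy as np

def otsu_threshold(channel):
    """Maximum between-class variance (Otsu) threshold for one channel."""
    hist, _ = np.histogram(channel.ravel(), bins=256, range=(0, 256))
    p = hist / hist.sum()
    cum_p = np.cumsum(p)
    cum_mean = np.cumsum(p * np.arange(256))
    mean_total = cum_mean[-1]
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = cum_p[t - 1], 1.0 - cum_p[t - 1]
        if w0 == 0 or w1 == 0:
            continue
        mu0 = cum_mean[t - 1] / w0
        mu1 = (mean_total - cum_mean[t - 1]) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def vote_mask(h_channel, v_channel, common_binary):
    """Assumed 2-of-3 vote among the three binary masks (formula (12))."""
    m_h = (h_channel >= otsu_threshold(h_channel)).astype(np.uint8)
    m_v = (v_channel >= otsu_threshold(v_channel)).astype(np.uint8)
    votes = m_h + m_v + common_binary
    return (votes >= 2).astype(np.uint8)
```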
Step 15. Apply a morphological dilation operation to the voted binary mask using a structuring element, obtaining the dilated binary mask;
Step 16. According to formula (13), multiply the binary mask with video frame F to extract the candidate region of the moving vehicle:
(13)
The left-hand term denotes the pixel value at the given coordinate in the output video frame O, and the right-hand term denotes the pixel value at the same coordinate in video frame F;
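Steps 15 and 16 can be sketched as below. The 3x3 structuring element is an assumption, since the patent's element is not specified in this excerpt; the dilation is a plain sliding-maximum over the element's footprint, and the extraction is the elementwise mask-frame product of formula (13):

```python
import numpy as np

def dilate(mask, se=None):
    """Morphological dilation of a binary mask (step 15). A pixel becomes 1
    if any pixel under the structuring element's footprint is 1."""
    if se is None:
        se = np.ones((3, 3), dtype=np.uint8)   # assumed element
    ph, pw = se.shape[0] // 2, se.shape[1] // 2
    padded = np.pad(mask, ((ph, ph), (pw, pw)))
    out = np.zeros_like(mask)
    for i in range(se.shape[0]):
        for j in range(se.shape[1]):
            if se[i, j]:
                out = np.maximum(
                    out, padded[i:i + mask.shape[0], j:j + mask.shape[1]])
    return out

def extract_candidates(frame, mask):
    """Step 16 / formula (13): zero out all pixels outside the mask."""
    return frame * mask[..., None] if frame.ndim == 3 else frame * mask
```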
Step 17. Feed the video frame into the mean-shift (Meanshift) algorithm and perform vehicle tracking within the candidate region of the moving vehicle, obtaining the location information of the target vehicle in the video frame;
Step 18. If all frames of video V have been processed, output the location information of the target vehicle in each video frame and terminate the algorithm; otherwise, return to step 2.
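The mean-shift tracking of step 17 is typically done with `cv2.meanShift`; the pure-numpy sketch below only illustrates the iteration, shifting a rectangular window `(x, y, w, h)` toward the centroid of a weight image (e.g. the masked candidate region) until it stops moving. All names are illustrative:

```python
import numpy as np

def mean_shift(weight, window, n_iter=20):
    """Shift window (x, y, w, h) to the local centroid of `weight`."""
    x, y, w, h = window
    for _ in range(n_iter):
        roi = weight[y:y + h, x:x + w]
        total = roi.sum()
        if total == 0:
            break                          # no mass under the window
        ys, xs = np.mgrid[0:roi.shape[0], 0:roi.shape[1]]
        cx = (roi * xs).sum() / total      # centroid inside the window
        cy = (roi * ys).sum() / total
        nx = int(round(x + cx - w / 2))    # re-center the window on it
        ny = int(round(y + cy - h / 2))
        if (nx, ny) == (x, y):
            break                          # converged
        x, y = max(nx, 0), max(ny, 0)
    return x, y, w, h
```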
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811396414.5A CN110111355B (en) | 2018-11-22 | 2018-11-22 | Moving vehicle tracking method capable of resisting strong shadow interference |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110111355A true CN110111355A (en) | 2019-08-09 |
CN110111355B CN110111355B (en) | 2023-04-14 |
Family
ID=67483386
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811396414.5A Active CN110111355B (en) | 2018-11-22 | 2018-11-22 | Moving vehicle tracking method capable of resisting strong shadow interference |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110111355B (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102938057A (en) * | 2012-10-19 | 2013-02-20 | 株洲南车时代电气股份有限公司 | Vehicle shadow eliminating method and device |
WO2017054455A1 (en) * | 2015-09-30 | 2017-04-06 | 深圳大学 | Motion target shadow detection method and system in monitoring video |
Non-Patent Citations (1)
Title |
---|
王相海; 王凯; 刘美瑶; 苏元贺; 宋传鸣: "Zerotree-wavelet-based method for filtering moving vehicle shadows in traffic video" * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110853000A (en) * | 2019-10-30 | 2020-02-28 | 北京中交国通智能交通系统技术有限公司 | Detection method of track |
CN110853000B (en) * | 2019-10-30 | 2023-08-11 | 北京中交国通智能交通系统技术有限公司 | Rut detection method |
CN111179311A (en) * | 2019-12-23 | 2020-05-19 | 全球能源互联网研究院有限公司 | Multi-target tracking method and device and electronic equipment |
CN111179311B (en) * | 2019-12-23 | 2022-08-19 | 全球能源互联网研究院有限公司 | Multi-target tracking method and device and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
CN110111355B (en) | 2023-04-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108805904B (en) | Moving ship detection and tracking method based on satellite sequence image | |
CN101854467B (en) | Method for adaptively detecting and eliminating shadow in video segmentation | |
CN102496016B (en) | Infrared target detection method based on space-time cooperation framework | |
CN102184550A (en) | Mobile platform ground movement object detection method | |
CN102722891A (en) | Method for detecting image significance | |
CN101957997A (en) | Regional average value kernel density estimation-based moving target detecting method in dynamic scene | |
CN105005766A (en) | Vehicle body color identification method | |
CN102842037A (en) | Method for removing vehicle shadow based on multi-feature fusion | |
CN109255326A (en) | A kind of traffic scene smog intelligent detecting method based on multidimensional information Fusion Features | |
CN113763427B (en) | Multi-target tracking method based on coarse-to-fine shielding processing | |
CN110111355A (en) | Resist the moving vehicle tracking of strong shadow interference | |
Surkutlawar et al. | Shadow suppression using rgb and hsv color space in moving object detection | |
CN113077494A (en) | Road surface obstacle intelligent recognition equipment based on vehicle orbit | |
Xia et al. | Automatic multi-vehicle tracking using video cameras: An improved CAMShift approach | |
CN112163606B (en) | Infrared small target detection method based on block contrast weighting | |
CN111626107B (en) | Humanoid contour analysis and extraction method oriented to smart home scene | |
CN106021610B (en) | A kind of method for extracting video fingerprints based on marking area | |
CN103839232B (en) | A kind of pedestrian's cast shadow suppressing method based on agglomerate model | |
Wu et al. | Face detection based on YCbCr Gaussian model and KL transform | |
Si-ming et al. | Moving shadow detection based on Susan algorithm | |
Kapileswar et al. | Automatic traffic monitoring system using lane centre edges | |
Balcilar et al. | Moving object detection using Lab2000HL color space with spatial and temporal smoothing | |
Ganesan et al. | Video object extraction based on a comparative study of efficient edge detection techniques. | |
Zhao et al. | A novel method for moving object detection in intelligent video surveillance systems | |
Kushwaha et al. | Performance evaluation of various moving object segmentation techniques for intelligent video surveillance system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right |
Effective date of registration: 20231215 Address after: 116000, Room 6C1102, Yiyang Road, Qixianling, High tech Industrial Park, Dalian City, Liaoning Province Patentee after: Xinghan Wanglian Automotive Technology (Dalian) Co.,Ltd. Address before: No. 116500, Shahekou Road, Dalian City, Liaoning Province Patentee before: LIAONING NORMAL University |