CN102867183A - Method and device for detecting littered objects of vehicle and intelligent traffic monitoring system - Google Patents


Info

Publication number
CN102867183A
CN102867183A
Authority
CN
China
Prior art keywords
classifier
vehicle
sample
module
adaboost
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012103026377A
Other languages
Chinese (zh)
Other versions
CN102867183B (en)
Inventor
吴金勇
王一科
薛俊锋
刘德健
龚灼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anke Robot Co ltd
SHANGHAI QINGTIAN ELECTRONIC TECHNOLOGY CO LTD
Original Assignee
China Security and Surveillance Technology PRC Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Security and Surveillance Technology PRC Inc filed Critical China Security and Surveillance Technology PRC Inc
Priority to CN201210302637.7A priority Critical patent/CN102867183B/en
Publication of CN102867183A publication Critical patent/CN102867183A/en
Application granted granted Critical
Publication of CN102867183B publication Critical patent/CN102867183B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention is applied to the field of intelligent traffic, and provides a method and a device for detecting objects dropped by a vehicle, as well as an intelligent traffic monitoring system. The method comprises the following steps: decoding a real-time video stream to obtain the video data of a preset virtual detection area; when a vehicle enters the preset virtual detection area, acquiring a foreground image of the vehicle to be detected; extracting the color and texture features of the foreground image of the vehicle to be detected; and judging whether the foreground image of the vehicle to be detected contains dropped objects according to a constructed AdaBoost classification model, an AdaBoost judgment rule, and the color and texture features of the foreground image. Because the video stream is acquired in real time and any vehicle entering the preset detection area is checked against the AdaBoost judgment rule for dropped objects such as sandy soil, fast and efficient detection and accurate monitoring are achieved, saving manpower and material resources.

Description

Vehicle dropped-object detection method, device, and intelligent traffic monitoring system
Technical field
The invention belongs to the field of intelligent transportation, and in particular relates to a vehicle dropped-object detection method and device, and to an intelligent traffic monitoring system.
Background art
With the rapid development of national highway construction and the widespread use of dump trucks, the harm caused by dump trucks is increasing. In particular, sandy soil dropped from dump trucks both pollutes the environment and, in serious cases, disrupts traffic. This has drawn the attention of traffic departments, and monitoring vehicles such as dump trucks for dropped objects has become an inevitable trend. At present, dropped-object detection mainly relies on manual inspection, which consumes large amounts of human and material resources and has low monitoring efficiency.
Summary of the invention
The embodiment of the invention provides a vehicle dropped-object detection method, intended to solve the prior-art problem that dropped objects from vehicles such as dump trucks are detected manually, consuming large amounts of human and material resources.
The embodiment of the invention is realized as a vehicle dropped-object detection method comprising the following steps:
decoding a live video stream to obtain the video data of a preset virtual detection area;
when a vehicle enters the preset virtual detection area, obtaining a foreground image of the vehicle to be detected;
extracting the color and texture features of the foreground image of the vehicle to be detected;
judging, according to a constructed AdaBoost classification model, an AdaBoost decision rule, and the color and texture features of the foreground image, whether the foreground image of the vehicle to be detected contains dropped objects.
Further, the step of decoding the live video stream to obtain the video data of the preset virtual detection area specifically comprises:
updating the decoding components and the decoding relation table;
looking up the corresponding decoding component according to the decoding relation table;
building a complete decoding chain from the decoding component and decoding;
sending the decoded video frame data of the virtual detection area.
Further, the step of obtaining the foreground image of the vehicle to be detected when a vehicle enters the preset virtual detection area specifically comprises:
modeling the moving background of the video image with a multi-modal Gaussian background model to determine the background distribution;
obtaining the moving region from the background distribution, and applying foreground segmentation and morphological processing to the moving region;
cropping the detection image from the moving region.
Further, the AdaBoost classification model is constructed by the following steps:
4.1 inputting m sample pictures (x_1, y_1), ..., (x_m, y_m);
4.2 extracting the color and texture feature vectors of the sample pictures;
4.3 initializing the sample weights to D_1(i) = 1/m;
4.4 finding a weak hypothesis h_t: X → {−1, +1} under the distribution D_t, and computing its classification error ε_t = Σ_{i: h_t(x_i) ≠ y_i} D_t(i);
4.5 calculating the classifier weight according to the formula α_t = (1/2) ln((1 − ε_t)/ε_t);
4.6 updating the sample weights according to the formula D_{t+1}(i) = D_t(i) exp(−α_t y_i h_t(x_i)) / Z_t;
4.7 judging whether the number of classification rounds equals the number of classifiers; if so, proceeding to step 4.8, otherwise returning to step 4.4;
4.8 obtaining the final strong classifier.
Here m is the number of samples; x_i ∈ X; y_i ∈ Y = {+1, −1}; D_t(i) is the weight of the i-th training sample in the t-th iteration; ε_t is the classification error of the classifier in the t-th iteration; Z_t is a normalization factor that keeps the sample weights a distribution; h_t: X → {−1, +1} is the weak hypothesis (classifier) for feature t under the distribution D_t; and α_t is a parameter chosen by AdaBoost as the weight of the weak classifier.
Further, the AdaBoost decision rule is constructed by the following steps:
5.1 inputting sample pictures (x_1, y_1), ..., (x_n, y_n);
5.2 extracting the color and texture feature vectors of the sample pictures;
5.3 initializing the weights of the samples with y_i = −1 and y_i = 1 to ω_{1,i} = 1/(2m) and ω_{1,i} = 1/(2l), respectively;
5.4 normalizing the sample weights ω_t according to the formula ω_{t,i} ← ω_{t,i} / Σ_{j=1}^{n} ω_{t,j};
5.5 training a classifier for each feature and estimating its error according to ε_j = Σ_i ω_i |h_j(x_i) − y_i|;
5.6 selecting the classifier with the minimum error ε_t;
5.7 updating the sample weights according to the formula ω_{t+1,i} = ω_{t,i} β_t^{1−e_i}, where e_i = 0 if the sample is classified correctly and e_i = 1 otherwise;
5.8 judging whether the number of classification rounds equals the number of classifiers; if so, proceeding to step 5.9, otherwise returning to step 5.4;
5.9 obtaining the final strong classifier: h(x) = 1 if Σ_{t=1}^{T} α_t h_t(x) ≥ (1/2) Σ_{t=1}^{T} α_t, and h(x) = 0 otherwise.
Here n is the total number of samples; m and l are the numbers of negative and positive samples, respectively; y_i = −1 and y_i = 1 denote negative and positive samples, respectively; x_i ∈ X; y_i ∈ Y = {+1, −1}; ω_t is the sample weight vector in the t-th iteration; h_j is the classifier for feature j; ε_t is the classification error of the classifier in the t-th iteration; β_t = ε_t/(1 − ε_t); and α_t = log(1/β_t).
The embodiment of the invention also provides a vehicle dropped-object detection device, comprising:
a video data acquisition unit, configured to decode a live video stream to obtain the video data of a preset virtual detection area;
a foreground image cropping unit, configured to obtain a foreground image of the vehicle to be detected when a vehicle enters the preset virtual detection area;
a feature extraction unit, configured to extract the color and texture features of the foreground image of the vehicle to be detected;
a dropped-object judging unit, configured to judge, according to a constructed AdaBoost classification model, an AdaBoost decision rule, and the color and texture features of the foreground image, whether the foreground image of the vehicle to be detected contains dropped objects;
an AdaBoost classification model construction unit, configured to construct the AdaBoost classification model from the extracted sample features;
an AdaBoost decision rule construction unit, configured to construct the AdaBoost decision rule from the extracted sample features.
Further, the video data acquisition unit specifically comprises:
a decoding relation table updating module, configured to update the decoding components and the decoding relation table;
a decoding component lookup module, configured to look up the corresponding decoding component according to the decoding relation table;
a decoding module, configured to build a complete decoding chain from the decoding component and decode;
a decoded-data sending module, configured to send the decoded video frame data of the virtual detection area.
Further, the foreground image cropping unit specifically comprises:
a background model construction module, configured to model the moving background of the video image with a multi-modal Gaussian background model and determine the background distribution;
a moving region construction module, configured to obtain the moving region from the background distribution and apply foreground segmentation and morphological processing to the moving region;
a detection image cropping module, configured to crop the detection image from the moving region.
Further, the AdaBoost classification model construction unit specifically comprises:
a first sample input module, configured to input m sample pictures (x_1, y_1), ..., (x_m, y_m), where x_i ∈ X and y_i ∈ Y = {+1, −1};
a first feature extraction module, configured to extract the color and texture feature vectors of the sample pictures;
a first weight initialization module, configured to initialize the sample weights to D_1(i) = 1/m;
a weak classifier construction module, configured to find a weak hypothesis h_t: X → {−1, +1} under the distribution D_t and compute its classification error ε_t = Σ_{i: h_t(x_i) ≠ y_i} D_t(i), where D_t(i) is the weight of the i-th training sample in the t-th iteration;
a weight calculation module, configured to calculate the classifier weight according to the formula α_t = (1/2) ln((1 − ε_t)/ε_t);
a first weight updating module, configured to update the sample weights according to the formula D_{t+1}(i) = D_t(i) exp(−α_t y_i h_t(x_i)) / Z_t, where Z_t is a normalization factor that keeps the sample weights a distribution;
a first round-count judging module, configured to judge whether the number of classification rounds equals the number of classifiers;
a final classifier construction module, configured to obtain the final strong classifier.
Here m is the number of samples; x_i ∈ X; y_i ∈ Y = {+1, −1}; D_t(i) is the weight of the i-th training sample in the t-th iteration; ε_t is the classification error of the classifier in the t-th iteration; Z_t is a normalization factor that keeps the sample weights a distribution; h_t: X → {−1, +1} is the weak hypothesis (classifier) for feature t under the distribution D_t; and α_t is a parameter chosen by AdaBoost as the weight of the weak classifier.
Further, the AdaBoost decision rule construction unit specifically comprises:
a second sample input module, configured to input sample pictures (x_1, y_1), ..., (x_n, y_n), where y_i = −1 and y_i = 1 denote negative and positive samples, respectively;
a second feature extraction module, configured to extract the color and texture feature vectors of the sample pictures;
a second weight initialization module, configured to initialize the weights of the samples with y_i = −1 and y_i = 1 to ω_{1,i} = 1/(2m) and ω_{1,i} = 1/(2l), respectively, where m and l are the numbers of negative and positive samples;
a weight normalization module, configured to normalize the sample weights ω_t according to the formula ω_{t,i} ← ω_{t,i} / Σ_{j=1}^{n} ω_{t,j};
a classifier training module, configured to train a classifier for each feature and estimate its error according to ε_j = Σ_i ω_i |h_j(x_i) − y_i|;
a minimum-error classifier selection module, configured to select the classifier with the minimum error ε_t;
a second weight updating module, configured to update the sample weights according to ω_{t+1,i} = ω_{t,i} β_t^{1−e_i}, where e_i = 0 if the sample is classified correctly, e_i = 1 otherwise, and β_t = ε_t/(1 − ε_t);
a second round-count judging module, configured to judge whether the number of classification rounds equals the number of classifiers;
a strong classifier construction module, configured to obtain the final strong classifier h(x) = 1 if Σ_{t=1}^{T} α_t h_t(x) ≥ (1/2) Σ_{t=1}^{T} α_t, and h(x) = 0 otherwise.
Here n is the total number of samples; m and l are the numbers of negative and positive samples, respectively; y_i = −1 and y_i = 1 denote negative and positive samples, respectively; x_i ∈ X; y_i ∈ Y = {+1, −1}; ω_t is the sample weight vector in the t-th iteration; h_j is the classifier for feature j; ε_t is the classification error of the classifier in the t-th iteration; β_t = ε_t/(1 − ε_t); and α_t = log(1/β_t).
The embodiment of the invention also provides an intelligent traffic monitoring system comprising the above vehicle dropped-object detection device.
In the embodiment of the invention, the video stream is acquired in real time; if a vehicle enters the preset detection area, it is detected according to the AdaBoost decision rule to judge whether the vehicle has dropped objects such as sandy soil. Fast, efficient detection and accurate monitoring are thereby achieved, saving manpower and material resources.
Description of the drawings
Fig. 1 is a flowchart of the vehicle dropped-object detection method provided by the embodiment of the invention;
Fig. 2 is a flowchart of constructing the AdaBoost classification model provided by the embodiment of the invention;
Fig. 3 is a flowchart of constructing the AdaBoost decision rule provided by the embodiment of the invention;
Fig. 4 is a structural diagram of the vehicle dropped-object detection device provided by the embodiment of the invention.
Embodiments
In order to make the purpose, technical scheme, and advantages of the invention clearer, the invention is further described below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here only explain the invention and are not intended to limit it.
In the embodiment of the invention, the video stream is acquired in real time; if a vehicle enters the set detection area, vehicles such as dump trucks are checked for dropped objects according to the AdaBoost decision rule, which reduces the workload of manual monitoring.
Fig. 1 shows the flow of the vehicle dropped-object detection method provided by the embodiment of the invention, detailed as follows:
In step S101, the video data of the preset virtual detection area is obtained by decoding the live video stream.
In the embodiment of the invention, this step is detailed as follows:
1.1 updating the decoding components and the decoding relation table;
1.2 looking up the corresponding decoding component according to the decoding relation table;
1.3 building a complete decoding chain from the decoding component and decoding;
1.4 sending the decoded video frame data of the virtual detection area.
In step S102, when a vehicle is detected entering the preset virtual detection area, a foreground image of the vehicle to be detected is obtained.
In the embodiment of the invention, the user presets the virtual detection area. When a vehicle is detected entering it, the foreground image of the vehicle to be detected is obtained with a multi-modal Gaussian background model, implemented as follows:
2.1 Moving background modeling.
The embodiment of the invention adopts multi-modal Gaussian background modeling: the video image captured by a fixed camera is background-modeled with a multi-modal Gaussian model, with at least two Gaussians maintained per pixel. In the present embodiment, three Gaussians are maintained for each pixel, specifically:
Suppose the pixel value of the input frame t is I_t, μ_{i,t−1} is the mean of the pixel values of the i-th Gaussian of frame (t−1) (the mean equals the sum of the pixel values divided by their number), and σ_{i,t−1} is the standard deviation of the pixel values of the i-th Gaussian of frame (t−1). A pixel matches the i-th Gaussian if |I_t − μ_{i,t−1}| ≤ D·σ_{i,t−1}, where D is a preset parameter obtained from practical experience. The matched Gaussian is updated by
μ_{i,t} = (1 − ρ) μ_{i,t−1} + ρ I_t,
σ²_{i,t} = (1 − ρ) σ²_{i,t−1} + ρ (I_t − μ_{i,t})²,
ρ = α / ω_{i,t},
where α is the learning rate, 0 ≤ α ≤ 1, ρ is the parameter learning rate, and ω_{i,t} is the weight of the i-th Gaussian of frame t.
All calculated weights are normalized, and the Gaussians are sorted in descending order of ω_{i,t}/σ_{i,t}. Let i_1, i_2, ..., i_k denote the Gaussians arranged in descending order of ω_{i,t}/σ_{i,t}. If the first M Gaussians satisfy
Σ_{j=1}^{M} ω_{i_j,t} > τ,
these M Gaussians are considered the background distribution, where τ is a weight threshold chosen according to the actual situation, usually τ = 0.7.
2.2 Obtaining the moving region from the background distribution, and applying foreground segmentation and morphological processing to it.
In the embodiment of the invention, after the background distribution is determined, the background model corresponding to the current frame is subtracted from the current frame to obtain the moving region of the frame; the moving region is then binarized and morphologically processed so that the segmented moving target region is more complete and connected.
2.3 Cropping the detection image from the moving region.
Because the dump truck is moving and dropped objects such as sandy soil fall behind it, the moving object region is first detected by background modeling; if the moving region is elongated, the image of this moving region is cropped as the detection image.
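As a concrete illustration of steps 2.1–2.3, the per-pixel mixture update can be sketched in Python. This is a minimal sketch under stated assumptions — grayscale input, a plain per-pixel loop, illustrative values for α, D, and τ, and a least-weight replacement policy for unmatched pixels (a common convention the text does not spell out) — not the patented implementation:

```python
import numpy as np

K = 3          # Gaussians per pixel (the embodiment uses 3)
ALPHA = 0.05   # learning rate alpha (illustrative value)
D = 2.5        # match threshold in standard deviations (preset parameter D)
TAU = 0.7      # background weight threshold tau

def init_model(first_frame):
    """One mixture per pixel: K rows of (weight, mean, variance)."""
    h, w = first_frame.shape
    model = np.zeros((h, w, K, 3))
    model[..., 0] = 1.0 / K                  # equal initial weights
    model[..., 1] = first_frame[..., None]   # means seeded from the first frame
    model[..., 2] = 15.0 ** 2                # generous initial variance
    return model

def update_and_classify(model, frame):
    """Update the mixture with one frame; return a boolean foreground mask."""
    h, w = frame.shape
    fg = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            px = float(frame[y, x])
            g = model[y, x]                  # view: K rows of (weight, mean, var)
            matched = None
            for i in range(K):
                if abs(px - g[i, 1]) <= D * np.sqrt(g[i, 2]):
                    matched = i
                    break
            if matched is None:
                # no Gaussian matches: replace the least-weighted one
                i = int(np.argmin(g[:, 0]))
                g[i] = (0.05, px, 15.0 ** 2)
            else:
                rho = ALPHA / max(g[matched, 0], 1e-6)   # rho = alpha / omega
                g[matched, 1] = (1 - rho) * g[matched, 1] + rho * px
                g[matched, 2] = (1 - rho) * g[matched, 2] + rho * (px - g[matched, 1]) ** 2
            g[:, 0] *= (1 - ALPHA)                       # decay all weights
            if matched is not None:
                g[matched, 0] += ALPHA                   # reward the matched one
            g[:, 0] /= g[:, 0].sum()                     # normalize the weights
            # sort by weight/sigma; first M with cumulative weight > TAU = background
            order = np.argsort(-(g[:, 0] / np.sqrt(g[:, 2])))
            cum, bg = 0.0, set()
            for i in order:
                bg.add(int(i))
                cum += g[i, 0]
                if cum > TAU:
                    break
            fg[y, x] = matched not in bg
    return fg
```

A production implementation would vectorize this loop or use a library background subtractor; the sketch only mirrors the update equations of step 2.1.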
In step S103, the color and texture features of the foreground image of the vehicle to be detected are extracted.
In the embodiment of the invention, color features are extracted through a color histogram on the basis of color layering, and texture features are extracted from the detection image, implemented as follows:
3.1 Color space conversion.
In the embodiment of the invention, color layering is carried out in the HSV color space, so the image is first converted from the RGB color space to the HSV color space:
H = arccos{ [(R − G) + (R − B)] / (2 √((R − G)² + (R − B)(G − B))) } if B ≤ G; H = 2π − arccos{ [(R − G) + (R − B)] / (2 √((R − G)² + (R − B)(G − B))) } if B > G   (1)
S = (max(R, G, B) − min(R, G, B)) / max(R, G, B)   (2)
V = max(R, G, B) / 255   (3)
3.2 Color layering calculation.
Color layering maps the color space onto a certain subset, thereby improving image retrieval speed. A typical color image system contains about 2^24 colors, while the colors the human eye can actually distinguish are limited; therefore the color space needs to be layered for image processing. The layering dimension is very important: the higher it is, the higher the retrieval precision, but the lower the retrieval speed.
Color layering is divided into equal-interval and unequal-interval layering. If the equal-interval layering dimension is too low, precision drops greatly; if it is too high, computation becomes complex. Through analysis and experiment, the embodiment of the invention selects unequal-interval color layering, implemented as follows:
According to human perception, the hue H is divided into 8 parts and the saturation S and brightness V into 3 parts each, quantizing the color space by the subjective perception characteristics of human vision:
H = 0 if h ∈ [316°, 20°]; 1 if h ∈ [21°, 40°]; 2 if h ∈ [41°, 75°]; 3 if h ∈ [76°, 155°]; 4 if h ∈ [156°, 190°]; 5 if h ∈ [191°, 270°]; 6 if h ∈ [271°, 295°]; 7 if h ∈ [296°, 315°]   (4)
S = 0 if s ∈ [0, 0.2]; 1 if s ∈ (0.2, 0.7]; 2 if s ∈ (0.7, 1]   (5)
V = 0 if v ∈ [0, 0.2]; 1 if v ∈ (0.2, 0.7]; 2 if v ∈ (0.7, 1]   (6)
By this method the color space is divided into 72 colors.
3.3 Component merging:
Y = H·Q_S·Q_V + S·Q_V + V   (7)
where Q_S and Q_V are the quantization levels of S and V, respectively. With Q_S = 3 and Q_V = 3 in the experiment, this is in fact Y = 9H + 3S + V.
In this way the three components H, S, V are merged into Y, whose value range is [0, 1, ..., 71].
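The conversion and 72-bin quantization of formulas (1)–(7) can be sketched as follows. The function names are illustrative, and the half-open treatment of the bin boundaries in (5)–(6) is an assumption:

```python
import math

def rgb_to_hsv_patent(r, g, b):
    """RGB in [0, 255] -> (H in radians, S, V) per formulas (1)-(3)."""
    mx, mn = max(r, g, b), min(r, g, b)
    num = (r - g) + (r - b)
    den = 2.0 * math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    h = math.acos(max(-1.0, min(1.0, num / den))) if den else 0.0
    if b > g:
        h = 2 * math.pi - h          # branch of formula (1) for B > G
    s = (mx - mn) / mx if mx else 0.0
    v = mx / 255.0
    return h, s, v

def quantize_72(h, s, v):
    """Map (H, S, V) to one of 72 bins via Y = 9H + 3S + V, formulas (4)-(7)."""
    deg = math.degrees(h)
    if deg >= 316 or deg <= 20:  H = 0   # unequal-interval hue bins of formula (4)
    elif deg <= 40:   H = 1
    elif deg <= 75:   H = 2
    elif deg <= 155:  H = 3
    elif deg <= 190:  H = 4
    elif deg <= 270:  H = 5
    elif deg <= 295:  H = 6
    else:             H = 7
    S = 0 if s <= 0.2 else (1 if s <= 0.7 else 2)
    V = 0 if v <= 0.2 else (1 if v <= 0.7 else 2)
    return 9 * H + 3 * S + V

# e.g. pure red falls in hue bin 0 with full saturation and brightness
```

The 72-bin index can then be accumulated into the color histogram that serves as the color feature.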
3.4 Texture features from the gray-level co-occurrence matrix.
The color image is first converted into a gray-level image. For an image with N gray levels, the co-occurrence matrix M = (m_{hk}) is an N×N matrix, where the element m_{hk} at position (h, k) is the number of pixel pairs, at the given displacement, in which one pixel has gray level h and the other has gray level k.
The four feature quantities extracted from the texture co-occurrence matrix are:
Contrast: CON = Σ_h Σ_k (h − k)² m_{hk}   (8)
Energy: ASM = Σ_h Σ_k (m_{hk})²   (9)
Entropy: ENT = −Σ_h Σ_k m_{hk} lg(m_{hk})   (10)
Correlation: COR = [Σ_h Σ_k h·k·m_{hk} − μ_x μ_y] / (σ_x σ_y)   (11)
where m_x(h) = Σ_k m_{hk} is the sum of each row of the matrix M, m_y(k) = Σ_h m_{hk} is the sum of each column, and μ_x, μ_y, σ_x, σ_y are the means and standard deviations of m_x and m_y, respectively.
The concrete steps are as follows:
a) dividing the gray scale of the image into 64 gray levels;
b) constructing gray-level co-occurrence matrices in four directions: M(1, 0), M(0, 1), M(1, 1), M(1, −1);
c) calculating the four texture feature quantities on each co-occurrence matrix;
taking the mean and standard deviation of each feature quantity, μ_CON, σ_CON, μ_ASM, σ_ASM, μ_ENT, σ_ENT, μ_COR, σ_COR, as the eight components of the texture feature.
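The eight-component texture vector of step 3.4 can be sketched as below. The means μ_x, μ_y are taken here as the means of the marginal distributions m_x, m_y (the standard Haralick reading of formula (11)); the displacement handling and function names are illustrative:

```python
import numpy as np

def glcm(gray, dx, dy, levels=64):
    """Normalized gray-level co-occurrence matrix for displacement (dx, dy)."""
    h, w = gray.shape
    m = np.zeros((levels, levels))
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            m[gray[y, x], gray[y + dy, x + dx]] += 1
    return m / m.sum()

def texture_features(m):
    """Contrast, energy, entropy, correlation per formulas (8)-(11)."""
    levels = m.shape[0]
    h, k = np.indices((levels, levels))
    con = ((h - k) ** 2 * m).sum()                 # formula (8)
    asm = (m ** 2).sum()                           # formula (9)
    nz = m[m > 0]
    ent = -(nz * np.log10(nz)).sum()               # formula (10), lg = log10
    mx = m.sum(axis=1)                             # row sums m_x
    my = m.sum(axis=0)                             # column sums m_y
    i = np.arange(levels)
    mu_x, mu_y = (i * mx).sum(), (i * my).sum()
    sig_x = np.sqrt(((i - mu_x) ** 2 * mx).sum())
    sig_y = np.sqrt(((i - mu_y) ** 2 * my).sum())
    cor = ((h * k * m).sum() - mu_x * mu_y) / (sig_x * sig_y) if sig_x * sig_y else 0.0
    return con, asm, ent, cor                      # formula (11) in cor

def eight_component_texture(gray, levels=64):
    """Mean and std of each feature over the four directions -> 8 components."""
    gray = gray.astype(np.int64) * levels // 256   # step a): 64 gray levels
    feats = np.array([texture_features(glcm(gray, dx, dy, levels))
                      for dx, dy in [(1, 0), (0, 1), (1, 1), (1, -1)]])  # step b)
    return np.concatenate([feats.mean(axis=0), feats.std(axis=0)])
```

A perfectly uniform image, for instance, yields zero contrast and maximal energy in every direction, so its eight-component vector degenerates as expected.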
In step S104, whether the foreground image of the vehicle to be detected contains dropped objects is judged according to the constructed AdaBoost classification model, the AdaBoost decision rule, and the color and texture features of the foreground image of the vehicle to be detected.
In the embodiment of the invention, vehicle dropped objects refer to fluids and bulk loads such as garbage, dregs, sand and gravel, earthwork, and mortar.
In the embodiment of the invention, AdaBoost specifically solves two problems: how to process the training samples, and how to merge weak classifiers into a strong classifier.
Fig. 2 shows the flow of constructing the AdaBoost classification model provided by the embodiment of the invention, detailed as follows:
In the embodiment of the invention, the main idea of the AdaBoost algorithm is to make the weights of all samples in the sample set form a distribution. The initial weight of every training sample is the same; in each iteration round, if a sample is not correctly classified, its weight is increased, otherwise it is decreased. In this way AdaBoost puts more attention on the samples that are difficult to classify. As for the organization of the weak classifiers, the strong classifier is expressed as a linear weighted combination of weak classifiers, with higher-accuracy weak classifiers carrying higher weights.
Let D_t(i) be the weight of the i-th training sample in the t-th iteration. A weak learner finds a weak hypothesis h_t: X → {−1, +1} under the distribution D_t, whose error rate is ε_t = Σ_{i: h_t(x_i) ≠ y_i} D_t(i).
In step S201, m sample pictures (x_1, y_1), ..., (x_m, y_m) are input.
In step S202, the color and texture features of the samples are extracted.
In step S203, the sample weights are initialized to D_1(i) = 1/m.
In step S204, a weak hypothesis h_t: X → {−1, +1} under the distribution D_t is found, and its classification error ε_t is computed according to formula (12):
ε_t = Σ_{i: h_t(x_i) ≠ y_i} D_t(i)   (12)
In step S205, the classifier weight α_t is calculated according to formula (13):
α_t = (1/2) ln((1 − ε_t)/ε_t)   (13)
In step S206, the sample weights D_{t+1}(i) are updated according to formula (14):
D_{t+1}(i) = D_t(i) exp(−α_t y_i h_t(x_i)) / Z_t   (14)
In step S207, it is judged whether the loop has been executed T times, i.e. whether the number of classification rounds has reached T; if so, step S208 is executed, otherwise the flow returns to step S204.
In step S208, the final strong classifier is obtained.
In the embodiment of the invention, in steps S201–S208, m is the number of samples; x_i ∈ X; y_i ∈ Y = {+1, −1}; ε_t is the classification error of the classifier in the t-th iteration; Z_t is a normalization factor that keeps the sample weights a distribution; h_t: X → {−1, +1} is the weak hypothesis (classifier) for feature t under the distribution D_t; and α_t is the parameter chosen by AdaBoost as the weight of the weak classifier.
In the embodiment of the invention, the final hypothesis H is the combination of the T weak hypotheses: the final strong classifier is a linear combination of the weak classifiers, each weak hypothesis h_t carrying the corresponding weight α_t.
It can be seen from the above algorithm that once a weak classifier has been selected, AdaBoost chooses a parameter α_t as its weight. Note that when ε_t ≤ 1/2, α_t ≥ 0, and α_t increases as ε_t decreases; that is, the lower the classification error rate of a weak classifier, the larger its weight.
In the embodiment of the invention, using the AdaBoost learning algorithm for judgment can achieve the goal of forming an effective classifier with a small number of features.
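Steps S201–S208 can be sketched with decision stumps standing in for the weak classifiers (the text does not fix a weak learner; stumps over generic feature vectors are an assumption, and the brute-force threshold search is purely illustrative):

```python
import numpy as np

def train_stump(X, y, D):
    """Weak learner: best single-feature threshold under the weights D."""
    n, d = X.shape
    best = (0, 0.0, 1, 1.0)                       # (feature, threshold, polarity, error)
    for j in range(d):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(pol * X[:, j] < pol * thr, 1, -1)
                err = D[pred != y].sum()          # weighted error, formula (12)
                if err < best[3]:
                    best = (j, thr, pol, err)
    return best

def adaboost_fit(X, y, T):
    """Steps S201-S208: T rounds producing a list of weighted stumps."""
    n = X.shape[0]
    D = np.full(n, 1.0 / n)                       # step S203: D_1(i) = 1/m
    model = []
    for _ in range(T):
        j, thr, pol, err = train_stump(X, y, D)   # step S204
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)     # formula (13)
        pred = np.where(pol * X[:, j] < pol * thr, 1, -1)
        D *= np.exp(-alpha * y * pred)            # formula (14) numerator
        D /= D.sum()                              # Z_t normalization
        model.append((alpha, j, thr, pol))
    return model

def adaboost_predict(model, X):
    """Final strong classifier: sign of the weighted vote of weak hypotheses."""
    score = sum(a * np.where(p * X[:, j] < p * thr, 1, -1)
                for a, j, thr, p in model)
    return np.where(score >= 0, 1, -1)
```

The sketch follows the weight-update logic of formulas (12)–(14) exactly; misclassified samples gain weight through the exp(−α_t y_i h_t(x_i)) factor, so later rounds concentrate on the hard samples.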
Fig. 3 shows the flow of constructing the AdaBoost decision rule provided by the embodiment of the invention, detailed as follows:
Each weak learning algorithm selects the single rectangular feature that best separates the positive and negative examples, and determines an optimal threshold for that feature so that the number of misclassified samples is minimized. A weak classifier function h_j(x) thus consists of a feature f_j, a threshold θ_j, and a sign p_j indicating the direction of the inequality:
h_j(x) = 1 if p_j f_j(x) < p_j θ_j, and h_j(x) = 0 otherwise.   (15)
Here x is a subwindow of 64×64 pixels in an image.
The concrete operation steps are:
In step S301, a group of sample pictures (x_1, y_1), ..., (x_n, y_n) is given.
In step S302, the color and texture features of the samples are extracted.
In step S303, the weights of the samples with y_i = −1 and y_i = 1 are initialized to ω_{1,i} = 1/(2m) and ω_{1,i} = 1/(2l), respectively.
In step S304, the weights w_{t,i} are normalized according to formula (16), so that w_t obeys a probability distribution:
w_{t,i} ← w_{t,i} / Σ_{j=1}^{n} w_{t,j}   (16)
In step S305, a classifier is trained for each feature: for each feature j, a classifier h_j is trained that, strictly using only that feature, has an error ε_j estimated by formula (17):
ε_j = Σ_i w_i |h_j(x_i) − y_i|   (17)
In step S306, the classifier with the minimum error ε_t is selected.
In step S307, the sample weights are updated according to formula (18):
w_{t+1,i} = w_{t,i} β_t^{1−e_i}   (18)
β_t = ε_t / (1 − ε_t)   (19)
where e_i = 0 if the sample is classified correctly, and e_i = 1 otherwise.
In step S308, it is judged whether the loop has run T times, i.e. whether the number of classification rounds equals the number of classifiers; if so, step S309 is executed, otherwise the flow returns to step S304.
In step S309, the final strong classifier is obtained:
h(x) = 1 if Σ_{t=1}^{T} α_t h_t(x) ≥ (1/2) Σ_{t=1}^{T} α_t, and h(x) = 0 otherwise.   (20)
In the embodiment of the invention, in steps S301–S309, n is the total number of samples; m and l are the numbers of negative and positive samples, respectively; y_i = −1 and y_i = 1 denote negative and positive samples, respectively; x_i ∈ X; y_i ∈ Y = {+1, −1}; ω_t is the sample weight vector in the t-th iteration; h_j is the classifier for feature j; ε_t is the classification error of the classifier in the t-th iteration; β_t = ε_t / (1 − ε_t); and α_t = log(1/β_t).
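Steps S301–S309 can be sketched as below, with one precomputed feature column standing in for each feature f_j. Labels are taken as 0/1 so that formula (17) matches the {0, 1}-valued h_j of formula (15); this reading, and the brute-force threshold search, are assumptions:

```python
import numpy as np

def vj_boost(F, y, T):
    """Boosting rounds per formulas (15)-(19).
    F: (n, d) matrix of feature values; y: 0 (negative) / 1 (positive)."""
    n, d = F.shape
    m, l = (y == 0).sum(), (y == 1).sum()
    w = np.where(y == 0, 1.0 / (2 * m), 1.0 / (2 * l))  # step S303
    chosen = []
    for _ in range(T):
        w = w / w.sum()                                  # formula (16)
        best = None
        for j in range(d):
            for thr in np.unique(F[:, j]):
                for p in (1, -1):
                    h = (p * F[:, j] < p * thr).astype(float)  # formula (15)
                    err = (w * np.abs(h - y)).sum()            # formula (17)
                    if best is None or err < best[0]:
                        best = (err, j, thr, p)
        err, j, thr, p = best                            # step S306: min error
        beta = max(err, 1e-10) / max(1 - err, 1e-10)     # formula (19)
        h = (p * F[:, j] < p * thr).astype(float)
        e = np.abs(h - y)                                # e_i: 0 correct, 1 wrong
        w = w * beta ** (1 - e)                          # formula (18)
        chosen.append((np.log(1 / max(beta, 1e-10)), j, thr, p))
    return chosen

def vj_classify(chosen, F):
    """Final strong classifier, formula (20)."""
    score = sum(a * (p * F[:, j] < p * thr) for a, j, thr, p in chosen)
    half = 0.5 * sum(a for a, _, _, _ in chosen)
    return (score >= half).astype(int)
```

Note how formula (18) only shrinks the weights of correctly classified samples (e_i = 0) by the factor β_t < 1, which is the concrete difference in weight updating between Fig. 3 and the Fig. 2 scheme.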
In embodiments of the present invention, extract color characteristic and the textural characteristics of detected image, classify by strong classifier.Yardstick according to convergent-divergent embodies different testing results.When dropping thing when existing, detected value is 1 in embodiments of the present invention, otherwise detected value is 0, gets the mean value of detected value, if the mean value of detected value, judges then that dropping thing exists greater than 0.5.
In embodiments of the present invention, Fig. 2 shows the generic AdaBoost training procedure and Fig. 3 shows the concrete procedure of the embodiment; the difference between them is the way the weights are updated.
Fig. 4 shows the structure of the vehicle littered-object detection device provided by the embodiment of the invention; for convenience of explanation, only the parts relevant to the embodiment are shown.
The vehicle littered-object detection device can be used in an intelligent traffic monitoring system. It may be a software unit, a hardware unit, or a combined software/hardware unit running within the system, or it may be integrated into the system as an independent component, wherein:
Video data acquiring unit 41 decodes the live video stream to obtain the video data of the preset virtual detection area.
Video data acquiring unit 41 comprises a decoding relation table update module 411, a decode component lookup module 412, a decoder module 413 and a decoding sending module 414, wherein:
Decoding relation table update module 411 updates the decode components and the decoding relation table.
Decode component lookup module 412 looks up the corresponding decode component according to the decoding relation table.
Decoder module 413 builds a complete decoding chain from the decode components and decodes.
Decoding sending module 414 sends the decoded video frame data of the virtual detection area.
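A minimal sketch of the table-driven lookup performed by these modules; the table contents and component names below are hypothetical, chosen only to illustrate the mechanism:

```python
# Hypothetical decoding relation table: stream format -> decode components.
DECODING_RELATION_TABLE = {
    "h264":  ["demuxer", "h264_decoder", "yuv_converter"],
    "mpeg4": ["demuxer", "mpeg4_decoder", "yuv_converter"],
}

def build_decoding_chain(stream_format):
    """Look up the corresponding decode components (module 412) and
    assemble them into a complete decoding chain (module 413)."""
    components = DECODING_RELATION_TABLE.get(stream_format)
    if components is None:
        raise KeyError("no decode component registered for " + stream_format)
    return " -> ".join(components)
```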
In embodiments of the present invention, the preset virtual detection area is set according to the traffic lane.
Foreground image interception unit 42 obtains the foreground image of the vehicle to be detected when a vehicle enters the preset virtual detection area.
Foreground image interception unit 42 comprises a background model configuration module 421, a moving region configuration module 422 and a test image interception module 423, wherein:
Background model configuration module 421 models the moving background of the video image with a multi-modal Gaussian background model and determines the background distributions.
In the embodiment of the invention, background model configuration module 421 performs multi-modal Gaussian background modeling on the video images captured by a fixed camera, establishing at least 2 Gaussian models per pixel. In the present embodiment, 3 Gaussian models are established for each pixel, specifically:
Suppose the pixel value of the input frame t is I_t; μ_{i,t−1} is the mean of the pixel values of the i-th Gaussian distribution of frame (t−1), the mean being equal to the sum of the pixel values divided by their number; σ_{i,t−1} is the standard deviation of the i-th Gaussian distribution of frame (t−1); and D is a preset parameter, obtained from practical experience, such that a pixel matches the distribution when |I_t − μ_{i,t−1}| ≤ D·σ_{i,t−1}. The parameters of the matched distribution are updated as

μ_{i,t} = (1 − ρ) μ_{i,t−1} + ρ I_t,
σ²_{i,t} = (1 − ρ) σ²_{i,t−1} + ρ (I_t − μ_{i,t})²,

where ρ = α/ω_{i,t}, α is the learning rate with 0 ≤ α ≤ 1, ρ is the parameter learning rate, and ω_{i,t} is the weight of the i-th Gaussian distribution of frame t.
All computed weights are normalized, and the Gaussian distribution functions are sorted in descending order of ω_{i,t}/σ_{i,t}. Let i_1, i_2, …, i_k denote the Gaussian distributions arranged in that order. If the first M Gaussian distributions satisfy

Σ_{k=1}^{M} ω_{i_k,t} > τ,

then these M Gaussian distributions are taken to be the background distributions, where τ is a weight threshold that can be set according to the actual conditions, typically τ = 0.7.
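The background-selection rule above can be sketched as follows for one pixel; representing each Gaussian as a (mean, standard deviation, weight) tuple is an encoding assumed only for illustration:

```python
def select_background_distributions(gaussians, tau=0.7):
    """Normalise the weights, sort the Gaussians by omega/sigma in
    descending order, and keep the first M whose cumulative weight
    exceeds the threshold tau (typically 0.7)."""
    total = sum(w for _, _, w in gaussians)
    ranked = sorted(((mu, s, w / total) for mu, s, w in gaussians),
                    key=lambda g: g[2] / g[1], reverse=True)
    background, cumulative = [], 0.0
    for g in ranked:
        background.append(g)
        cumulative += g[2]
        if cumulative > tau:
            break
    return background
```

Distributions with a large weight and a small variance (stable, frequently observed pixel values) rank first, which is why ω/σ is the sorting key.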
Moving region configuration module 422 uses the background distributions to obtain the moving region, and performs foreground segmentation and morphological processing on the moving region.
In the embodiment of the invention, after the background distributions have been determined, the current frame and its corresponding background model are subtracted to obtain the moving region of the current frame; binarization and morphological processing are then applied to the moving region, so that the segmented moving target region is more complete and self-contained.
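The subtraction-and-binarization step can be sketched on gray-level arrays; the threshold value is illustrative, and the morphological operations (erosion/dilation) are omitted for brevity:

```python
def moving_region_mask(frame, background_mean, threshold=25):
    """Subtract the background model from the current frame and binarise:
    1 marks a foreground (moving) pixel, 0 a background pixel."""
    return [[1 if abs(p - b) > threshold else 0
             for p, b in zip(frow, brow)]
            for frow, brow in zip(frame, background_mean)]
```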
Test image interception module 423 intercepts the test image from the moving region.
Because the dump truck is moving and sandy soil and other objects are dropped behind it, background modeling first detects the moving object region; if the moving region is elongated, the image of that region can be intercepted as the test image.
Feature extraction unit 43 extracts the color and texture features of the foreground image of the vehicle to be detected.
In embodiments of the present invention, feature extraction unit 43 extracts the color features of the positive and negative sample images and the texture features of the gray-level co-occurrence matrix.
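The gray-level co-occurrence texture features can be sketched in pure Python; the horizontal (1, 0) neighbour offset and the three statistics shown are common choices for GLCM features, not ones mandated by the patent:

```python
def glcm_features(img, levels=8):
    """Build the gray-level co-occurrence matrix for horizontally adjacent
    pixel pairs, normalise it to joint probabilities, and derive three
    classic texture statistics.  `img` is a 2-D list of integer gray
    levels in [0, levels)."""
    glcm = [[0.0] * levels for _ in range(levels)]
    pairs = 0
    for row in img:
        for a, b in zip(row, row[1:]):
            glcm[a][b] += 1
            pairs += 1
    glcm = [[c / pairs for c in row] for row in glcm]   # joint probabilities
    contrast = sum(p * (i - j) ** 2
                   for i, r in enumerate(glcm) for j, p in enumerate(r))
    energy = sum(p * p for r in glcm for p in r)
    homogeneity = sum(p / (1 + abs(i - j))
                      for i, r in enumerate(glcm) for j, p in enumerate(r))
    return {"contrast": contrast, "energy": energy, "homogeneity": homogeneity}
```

In a production system a library such as scikit-image would compute the co-occurrence matrix over several offsets and angles; this sketch only fixes the idea of counting neighbouring gray-level pairs.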
AdaBoost classification model construction unit 44 constructs the AdaBoost classification model using the extracted sample features.
AdaBoost classification model construction unit 44 comprises a first sample input module 441, a first feature extraction module 442, a first weight initialization module 443, a weak classifier configuration module 444, a weight computation module 445, a first weight update module 446, a first classification count judging module 447 and a final classifier configuration module 448, wherein:
First sample input module 441 inputs m sample pictures (x_1, y_1), …, (x_m, y_m).
First feature extraction module 442 extracts the color and texture feature vectors of the sample pictures.
First weight initialization module 443 initializes the sample weights as D_1(i) = 1/m.
Weak classifier configuration module 444 finds a weak hypothesis h_t: X → {−1, +1} corresponding to the distribution D_t, and obtains its classification error

ε_t = Σ_{i: h_t(x_i)≠y_i} D_t(i).

Weight computation module 445 calculates the classifier weight according to the formula

α_t = (1/2) ln((1 − ε_t)/ε_t).

First weight update module 446 updates the sample weights according to the formula

D_{t+1}(i) = D_t(i)·exp(−α_t y_i h_t(x_i)) / Z_t.

First classification count judging module 447 judges whether the classification count equals the number of classifiers.
Final classifier configuration module 448 obtains the final strong classifier.
In the above device, m is the number of samples; x_i ∈ X; y_i ∈ Y = {+1, −1}; D_t(i) is the weight of the i-th training sample in the t-th iteration; ε_t is the classification error of the classifier in the t-th iteration; Z_t is a normalization factor chosen so that the sample weights form a distribution; h_t is the classifier for iteration t, h_t: X → {−1, +1} being the weak hypothesis corresponding to distribution D_t; and α_t is the parameter selected by AdaBoost as the weight of the weak classifier.
In embodiments of the present invention, the main idea of the AdaBoost algorithm is to make the weights of all samples in the sample set form a distribution. The initial weight of every training sample is identical; in each round of iteration, the weight of a sample that is not correctly classified is increased, and otherwise it is decreased. In this way AdaBoost concentrates its attention on the hard-to-classify samples. As for the organization of the weak classifiers, the strong classifier is expressed as a linearly weighted combination of the weak classifiers, with more accurate weak classifiers receiving higher weights.
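This weighting scheme is the classic discrete AdaBoost; a pure-Python sketch follows, where the pool of candidate weak classifiers is supplied by the caller (the one-dimensional threshold stumps in the usage line are illustrative, not part of the patent):

```python
import math

def adaboost_train(weak_pool, X, y, T):
    """Discrete AdaBoost over a caller-supplied pool of weak classifiers
    (callables returning -1 or +1).  Labels y are in {-1, +1}."""
    m = len(X)
    D = [1.0 / m] * m                          # D_1(i) = 1/m
    ensemble = []
    for _ in range(T):
        # pick the weak hypothesis minimising eps_t = sum of D_t(i) over errors
        def weighted_error(h):
            return sum(Di for Di, xi, yi in zip(D, X, y) if h(xi) != yi)
        h_t = min(weak_pool, key=weighted_error)
        eps_t = min(max(weighted_error(h_t), 1e-10), 1 - 1e-10)  # clamp (assumption)
        # alpha_t = (1/2) ln((1 - eps_t) / eps_t)
        a_t = 0.5 * math.log((1 - eps_t) / eps_t)
        # D_{t+1}(i) = D_t(i) exp(-a_t y_i h_t(x_i)) / Z_t
        D = [Di * math.exp(-a_t * yi * h_t(xi)) for Di, xi, yi in zip(D, X, y)]
        Z = sum(D)                             # normalisation factor Z_t
        D = [Di / Z for Di in D]
        ensemble.append((a_t, h_t))
    return ensemble

def adaboost_predict(ensemble, x):
    """Sign of the linearly weighted vote of the weak classifiers."""
    return 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1

# Usage with illustrative one-dimensional threshold stumps:
stumps = [lambda v, t=t: 1 if v > t else -1 for t in (-1.5, 0.0, 1.5)]
model = adaboost_train(stumps, [-2.0, -1.0, 1.0, 2.0], [-1, -1, 1, 1], T=3)
```

Misclassified samples keep a larger share of the weight after the exp(−α_t y_i h_t(x_i))/Z_t update, which is exactly the "attention on hard samples" behaviour described above.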
AdaBoost decision rule construction unit 45 constructs the AdaBoost decision rule using the extracted sample features.
AdaBoost decision rule construction unit 45 comprises a second sample input module 451, a second feature extraction module 452, a second weight initialization module 453, a normalization module 454, a classifier training module 455, a minimum-error classifier configuration module 456, a second weight update module 457, a second classification count judging module 458 and a strong classifier configuration module 459, wherein:
Second sample input module 451 inputs sample pictures (x_1, y_1), …, (x_n, y_n), wherein y_i = −1, 1 denote a negative sample and a positive sample respectively.
In embodiments of the present invention, to construct the classifier, positive and negative samples are first collected, for example about 1000 images containing littered objects as positive samples and about 1500 images without littered objects as negative samples.
Second feature extraction module 452 extracts the color and texture feature vectors of the sample pictures.
Second weight initialization module 453 initializes the weights of the samples with y_i = −1 and y_i = 1 respectively as

ω_{1,i} = 1/(2m) and ω_{1,i} = 1/(2l),

where m and l are respectively the numbers of negative and positive samples.
Normalization module 454 normalizes the sample weights ω_t, the normalization formula being

ω_{t,i} ← ω_{t,i} / Σ_{j=1}^{n} ω_{t,j},

so that ω_t obeys a probability distribution.
Classifier training module 455 trains a classifier for each feature and calculates its error, estimated as ε_j = Σ_i ω_i |h_j(x_i) − y_i|.
Minimum-error classifier configuration module 456 selects the classifier with the minimum error ε_t among the trained classifiers. Second weight update module 457 updates the sample weights:

ω_{t+1,i} = ω_{t,i} β_t^{1−e_i},

where e_i = 0 if the sample is classified correctly, and e_i = 1 otherwise.
Second classification count judging module 458 judges whether the classification count equals the number of classifiers.
Strong classifier configuration module 459 obtains the final strong classifier:

h(x) = 1 if Σ_{t=1}^{T} α_t h_t(x) ≥ (1/2) Σ_{t=1}^{T} α_t, and 0 otherwise.
In the above device, n is the total sample number; y_i = −1, 1 denote a negative and a positive sample respectively; x_i ∈ X; y_i ∈ Y = {+1, −1}; ω_t is the sample weight vector in the t-th iteration; h_j is the classifier for feature j; ε_t is the classification error of the classifier in the t-th iteration; parameter β_t = ε_t/(1 − ε_t); parameter α_t = log(1/β_t).
Littered-object judging unit 46 judges, according to the constructed AdaBoost classification model, the AdaBoost decision rule, and the color and texture features of the foreground image of the vehicle to be detected, whether a littered object is present in that foreground image.
In embodiments of the present invention, the color and texture features of the detection image are extracted and classified by the strong classifier. Different detection results are obtained at the different image scales. When a littered object is present the detected value is 1, otherwise it is 0; the mean of the detected values is taken, and if that mean exceeds 0.5 the littered object is judged to exist.
In embodiments of the present invention, vehicle littered objects refer to fluids and bulk loads such as rubbish, dregs, sand and gravel, earthwork and mortar.
In embodiments of the present invention, the video stream is obtained in real time; if a vehicle enters the set detection area, detection is performed according to the AdaBoost decision rule to judge whether the vehicle has littered objects such as sandy soil, achieving fast and efficient detection and accurate monitoring while saving manpower and material resources.
The above are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall be included within its scope of protection.

Claims (11)

1. A vehicle littered-object detection method, characterized in that the method comprises the steps of:
decoding a live video stream to obtain the video data of a preset virtual detection area;
when a vehicle enters the preset virtual detection area, obtaining the foreground image of the vehicle to be detected;
extracting the color and texture features of the foreground image of the vehicle to be detected;
judging, according to the constructed AdaBoost classification model, the AdaBoost decision rule, and the color and texture features of the foreground image of the vehicle to be detected, whether a littered object is present in that foreground image.
2. the method for claim 1 is characterized in that, the described step of obtaining default virtual detection zone video data from live video stream decoding is specially:
Upgrade decode component and decoding relation table;
According to described decoding relation table, search the homographic solution Code Element;
According to described decode component, make up complete decoding link and decode;
Send the video requency frame data in the virtual detection zone of decoding generation.
3. the method for claim 1 is characterized in that, described when vehicle enters default virtual detection zone, the step of obtaining vehicle foreground image to be measured is specially:
Adopt multi-modal Gaussian Background model to the video image motion background modeling, determine background distributions;
Utilize described background distributions, obtain the moving region, and foreground segmentation and morphology processing are carried out in described moving region;
From described moving region, intercept test pattern.
4. the method for claim 1 is characterized in that, described AdaBoost disaggregated model is constructed by following step:
4.1 input m samples pictures (x 1, y 1) ..., (x m, y m);
4.2 extract color and the texture feature vector of described samples pictures;
4.3 the described sample weights of initialization is D 1(i)=1/m;
4.4 find one corresponding to distribution D tWeak hypothesis h t: X → and 1 ,+1}, and obtain its error in classification &epsiv; t = &Sigma; i : h t ( x i ) &NotEqual; y i D t ( i ) ;
4.5 according to formula
Figure FDA00002049506800022
Calculate the sorter weight;
4.6 according to formula
Figure FDA00002049506800023
Upgrade described sample weights;
Whether equal the sorter number 4.7 judge described classification number of times, if the classification number of times equals the sorter number, then enter step 4.8, otherwise return step 4.4;
4.8 obtain final strong classifier.
Wherein, m is number of samples; x i∈ X; y i∈ Y={+1 ,-1}; D t(i) be the weight of i training sample in the t time iteration; ε tIt is the error in classification of sorter in the t time iteration; Z tNormalized factor, so that described sample weights becomes a distribution; h tSorter for character pair t; h t: X → { 1 ,+1} is corresponding to distribution D tWeak hypothesis; α tFor the parameter of AdaBoost selection, as the weight of Weak Classifier.
5. the method for claim 1 is characterized in that, described AdaBoost decision rule is constructed by following step:
5.1 input samples pictures (x 1, y 1) ..., (x n, y n);
5.2 extract color and the texture feature vector of described samples pictures;
5.3 respectively with described y iThe weight of=-1 and 1 sample is initialized as
Figure FDA00002049506800025
5.4 the described sample weights ω of normalization t, the normalization computing formula is
Figure FDA00002049506800026
5.5 according to described features training sorter, and calculate its error, its error is according to following formula estimation ε j=∑ iω i| h j(x i)-y i|;
5.6 select error ε in the described sorter tMinimum sorter;
5.7 according to formula
Figure FDA00002049506800027
Upgrade described sample weights, wherein, if classification is correctly then e i=0: otherwise, e i=1;
Whether equal the sorter number 5.8 judge described classification number of times, if the classification number of times equals the sorter number, then enter step 5.9, otherwise return step 5.4;
5.9 obtain final strong classifier be: h ( x ) = 1 &Sigma; t = 1 T &alpha; t h t ( x ) &GreaterEqual; 1 2 &Sigma; t = 1 T &alpha; t 0 otherwise .
Wherein, n is total sample number; M and l are respectively the quantity of negative data and positive example sample; y i=-1,1 is expressed as respectively negative data and positive example sample; x i∈ X; y i∈ Y={+1 ,-1}; ω tBe the t time sample weights corresponding to iteration; h jSorter for character pair j; ε tIt is the error in classification of sorter in the t time iteration; Parameter &beta; t = &epsiv; t 1 - &epsiv; t ; Parameter &alpha; t = log 1 &beta; t .
6. A vehicle littered-object detection device, characterized in that the device comprises:
a video data acquiring unit, for decoding a live video stream to obtain the video data of a preset virtual detection area;
a foreground image interception unit, for obtaining the foreground image of the vehicle to be detected when a vehicle enters the preset virtual detection area;
a feature extraction unit, for extracting the color and texture features of the foreground image of the vehicle to be detected;
a littered-object judging unit, for judging, according to the constructed AdaBoost classification model, the AdaBoost decision rule, and the color and texture features of the foreground image of the vehicle to be detected, whether a littered object is present in that foreground image;
an AdaBoost classification model construction unit, for constructing the AdaBoost classification model using the extracted sample features;
an AdaBoost decision rule construction unit, for constructing the AdaBoost decision rule using the extracted sample features.
7. The device of claim 6, characterized in that the video data acquiring unit specifically comprises:
a decoding relation table update module, for updating the decode components and the decoding relation table;
a decode component lookup module, for looking up the corresponding decode component according to the decoding relation table;
a decoder module, for building a complete decoding chain from the decode components and decoding;
a decoding sending module, for sending the decoded video frame data of the virtual detection area.
8. The device of claim 6, characterized in that the foreground image interception unit specifically comprises:
a background model configuration module, for modeling the moving background of the video image with a multi-modal Gaussian background model and determining the background distributions;
a moving region configuration module, for using the background distributions to obtain the moving region and performing foreground segmentation and morphological processing on it;
a test image interception module, for intercepting the test image from the moving region.
9. The device of claim 6, characterized in that the AdaBoost classification model construction unit specifically comprises:
a first sample input module, for inputting m sample pictures (x_1, y_1), …, (x_m, y_m), where x_i ∈ X and y_i ∈ Y = {+1, −1};
a first feature extraction module, for extracting the color and texture feature vectors of the sample pictures;
a first weight initialization module, for initializing the sample weights as D_1(i) = 1/m;
a weak classifier configuration module, for finding a weak hypothesis h_t: X → {−1, +1} corresponding to the distribution D_t and obtaining its classification error ε_t = Σ_{i: h_t(x_i)≠y_i} D_t(i), where D_t(i) is the weight of the i-th training sample in the t-th iteration;
a weight computation module, for calculating the classifier weight according to the formula α_t = (1/2) ln((1 − ε_t)/ε_t);
a first weight update module, for updating the sample weights according to the formula D_{t+1}(i) = D_t(i)·exp(−α_t y_i h_t(x_i)) / Z_t, where Z_t is a normalization factor chosen so that the sample weights form a distribution;
a first classification count judging module, for judging whether the classification count equals the number of classifiers;
a final classifier configuration module, for obtaining the final strong classifier;
wherein m is the number of samples; ε_t is the classification error of the classifier in the t-th iteration; h_t is the classifier for iteration t, h_t: X → {−1, +1} being the weak hypothesis corresponding to distribution D_t; and α_t is the parameter selected by AdaBoost as the weight of the weak classifier.
10. The device of claim 6, characterized in that the AdaBoost decision rule construction unit specifically comprises:
a second sample input module, for inputting sample pictures (x_1, y_1), …, (x_n, y_n), where y_i = −1, 1 denote a negative sample and a positive sample respectively;
a second feature extraction module, for extracting the color and texture feature vectors of the sample pictures;
a second weight initialization module, for initializing the weights of the samples with y_i = −1 and y_i = 1 respectively as ω_{1,i} = 1/(2m) and ω_{1,i} = 1/(2l), where m and l are respectively the numbers of negative and positive samples;
a normalization module, for normalizing the sample weights ω_t, the normalization formula being ω_{t,i} ← ω_{t,i} / Σ_{j=1}^{n} ω_{t,j};
a classifier training module, for training a classifier for each feature and calculating its error, estimated as ε_j = Σ_i ω_i |h_j(x_i) − y_i|;
a minimum-error classifier configuration module, for selecting the classifier with the minimum error ε_t among the trained classifiers;
a second weight update module, for updating the sample weights as ω_{t+1,i} = ω_{t,i} β_t^{1−e_i}, where e_i = 0 if the sample is classified correctly, e_i = 1 otherwise, and β_t = ε_t/(1 − ε_t);
a second classification count judging module, for judging whether the classification count equals the number of classifiers;
a strong classifier configuration module, for obtaining the final strong classifier h(x) = 1 if Σ_{t=1}^{T} α_t h_t(x) ≥ (1/2) Σ_{t=1}^{T} α_t and 0 otherwise, where α_t = log(1/β_t);
wherein n is the total sample number; x_i ∈ X; y_i ∈ Y = {+1, −1}; ω_t is the sample weight vector in the t-th iteration; h_j is the classifier for feature j; and ε_t is the classification error of the classifier in the t-th iteration.
11. An intelligent traffic monitoring system, characterized in that the intelligent traffic monitoring system comprises the vehicle littered-object detection device of any one of claims 6 to 10.
CN201210302637.7A 2012-08-23 2012-08-23 Method and device for detecting littered objects of vehicle and intelligent traffic monitoring system Active CN102867183B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210302637.7A CN102867183B (en) 2012-08-23 2012-08-23 Method and device for detecting littered objects of vehicle and intelligent traffic monitoring system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210302637.7A CN102867183B (en) 2012-08-23 2012-08-23 Method and device for detecting littered objects of vehicle and intelligent traffic monitoring system

Publications (2)

Publication Number Publication Date
CN102867183A true CN102867183A (en) 2013-01-09
CN102867183B CN102867183B (en) 2015-04-15

Family

ID=47446047

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210302637.7A Active CN102867183B (en) 2012-08-23 2012-08-23 Method and device for detecting littered objects of vehicle and intelligent traffic monitoring system

Country Status (1)

Country Link
CN (1) CN102867183B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103914688A (en) * 2014-03-27 2014-07-09 北京科技大学 Urban road obstacle recognition system
CN104599263A (en) * 2014-12-23 2015-05-06 安科智慧城市技术(中国)有限公司 Image detecting method and device
CN105390021A (en) * 2015-11-16 2016-03-09 北京蓝卡科技股份有限公司 Parking spot state detection method and parking spot state detection device
CN106297278A (en) * 2015-05-18 2017-01-04 杭州海康威视数字技术股份有限公司 A kind of method and system shedding thing vehicle for inquiry
CN107124327A (en) * 2017-04-11 2017-09-01 千寻位置网络有限公司 The method that the reverse-examination of JT808 car-mounted terminal simulators is surveyed
CN107358207A (en) * 2017-07-14 2017-11-17 重庆大学 A kind of method for correcting facial image
CN107918762A (en) * 2017-10-24 2018-04-17 江西省高速公路投资集团有限责任公司 A kind of highway drops thing rapid detection system and method
CN108873895A (en) * 2018-06-11 2018-11-23 北京航空航天大学 Drop intelligent patrol detection vehicle in road surface
WO2019079965A1 (en) * 2017-10-24 2019-05-02 江西省高速公路投资集团有限责任公司 Rapid detection system and method for article dropped on road
CN109978465A (en) * 2019-03-29 2019-07-05 江苏满运软件科技有限公司 Source of goods recommended method, device, electronic equipment, storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101398893A (en) * 2008-10-10 2009-04-01 北京科技大学 Adaboost arithmetic improved robust human ear detection method
CN102324183A (en) * 2011-09-19 2012-01-18 华中科技大学 Vehicle detection and grasp shoot method based on compound virtual coil

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101398893A (en) * 2008-10-10 2009-04-01 北京科技大学 Adaboost arithmetic improved robust human ear detection method
CN102324183A (en) * 2011-09-19 2012-01-18 华中科技大学 Vehicle detection and grasp shoot method based on compound virtual coil

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
叶丽燕 (Ye Liyan) et al., "基于Adaboost的轿车尾部检测方法" [An AdaBoost-based method for detecting the rear of cars], 《计算机仿真》 [Computer Simulation], vol. 28, no. 1, 31 January 2011 (2011-01-31), pages 327-330 *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103914688A (en) * 2014-03-27 2014-07-09 北京科技大学 Urban road obstacle recognition system
CN103914688B (en) * 2014-03-27 2018-02-02 北京科技大学 A kind of urban road differentiating obstacle
CN104599263A (en) * 2014-12-23 2015-05-06 安科智慧城市技术(中国)有限公司 Image detecting method and device
CN104599263B (en) * 2014-12-23 2017-08-15 安科智慧城市技术(中国)有限公司 A kind of method and device of image detection
CN106297278A (en) * 2015-05-18 2017-01-04 杭州海康威视数字技术股份有限公司 A kind of method and system shedding thing vehicle for inquiry
CN106297278B (en) * 2015-05-18 2019-12-20 杭州海康威视数字技术股份有限公司 Method and system for querying a projectile vehicle
CN105390021A (en) * 2015-11-16 2016-03-09 北京蓝卡科技股份有限公司 Parking spot state detection method and parking spot state detection device
CN107124327B (en) * 2017-04-11 2019-04-02 千寻位置网络有限公司 The method that JT808 car-mounted terminal simulator reverse-examination is surveyed
CN107124327A (en) * 2017-04-11 2017-09-01 千寻位置网络有限公司 The method that the reverse-examination of JT808 car-mounted terminal simulators is surveyed
CN107358207A (en) * 2017-07-14 2017-11-17 重庆大学 A kind of method for correcting facial image
WO2019079965A1 (en) * 2017-10-24 2019-05-02 江西省高速公路投资集团有限责任公司 Rapid detection system and method for article dropped on road
CN107918762A (en) * 2017-10-24 2018-04-17 江西省高速公路投资集团有限责任公司 A kind of highway drops thing rapid detection system and method
CN108873895A (en) * 2018-06-11 2018-11-23 北京航空航天大学 Drop intelligent patrol detection vehicle in road surface
CN109978465A (en) * 2019-03-29 2019-07-05 江苏满运软件科技有限公司 Source of goods recommended method, device, electronic equipment, storage medium

Also Published As

Publication number Publication date
CN102867183B (en) 2015-04-15

Similar Documents

Publication Publication Date Title
CN102867183B (en) Method and device for detecting littered objects of vehicle and intelligent traffic monitoring system
CN112380952B (en) Power equipment infrared image real-time detection and identification method based on artificial intelligence
CN111444821B (en) Automatic identification method for urban road signs
CN110059554B (en) Multi-branch target detection method based on traffic scene
CN103020978B (en) SAR (synthetic aperture radar) image change detection method combining multi-threshold segmentation with fuzzy clustering
CN103049763B (en) Context-constraint-based target identification method
CN103839065B (en) Extraction method for dynamic crowd gathering characteristics
CN104166841A (en) Rapid detection identification method for specified pedestrian or vehicle in video monitoring network
CN114998852A (en) Intelligent detection method for road pavement diseases based on deep learning
CN109657610A (en) A kind of land use change survey detection method of high-resolution multi-source Remote Sensing Images
CN106778687A (en) Method for viewing points detecting based on local evaluation and global optimization
CN105354595A (en) Robust visual image classification method and system
CN106408030A (en) SAR image classification method based on middle lamella semantic attribute and convolution neural network
CN109948593A (en) Based on the MCNN people counting method for combining global density feature
CN104182985A (en) Remote sensing image change detection method
CN105930794A (en) Indoor scene identification method based on cloud computing
CN103853724A (en) Multimedia data sorting method and device
US20220315243A1 (en) Method for identification and recognition of aircraft take-off and landing runway based on pspnet network
CN103093243B (en) The panchromatic remote sensing image clouds of high-resolution sentences method
CN104820841A (en) Hyper-spectral classification method based on low-order mutual information and spectral context band selection
CN112560675A (en) Bird visual target detection method combining YOLO and rotation-fusion strategy
CN103617413A (en) Method for identifying object in image
CN104751175A (en) Multi-label scene classification method of SAR (Synthetic Aperture Radar) image based on incremental support vector machine
CN105469117A (en) Image recognition method and device based on robust characteristic extraction
CN103679214A (en) Vehicle detection method based on online area estimation and multi-feature decision fusion

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 518000 Guangdong province Shenzhen city Futian District District Shennan Road Press Plaza room 1306

Patentee after: ANKE ROBOT CO.,LTD.

Address before: 518000 Guangdong province Shenzhen city Futian District District Shennan Road Press Plaza room 1306

Patentee before: ANKE SMART CITY TECHNOLOGY (PRC) Co.,Ltd.

CP01 Change in the name or title of a patent holder
TR01 Transfer of patent right

Effective date of registration: 20171026

Address after: 518000 Guangdong province Shenzhen city Futian District District Shennan Road Press Plaza room 1306

Co-patentee after: SHANGHAI QINGTIAN ELECTRONIC TECHNOLOGY Co.,Ltd.

Patentee after: ANKE ROBOT CO.,LTD.

Address before: 518000 Guangdong province Shenzhen city Futian District District Shennan Road Press Plaza room 1306

Patentee before: ANKE ROBOT CO.,LTD.

TR01 Transfer of patent right