CN108108724A - Vehicle detector training method based on automatic learning of multi-subregion image features - Google Patents

Vehicle detector training method based on automatic learning of multi-subregion image features

Info

Publication number
CN108108724A
CN108108724A (application CN201810053698.1A)
Authority
CN
China
Prior art keywords
sample
classifier
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810053698.1A
Other languages
Chinese (zh)
Other versions
CN108108724B (en)
Inventor
陈卫刚
王勋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Gongshang University
Original Assignee
Zhejiang Gongshang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Gongshang University
Priority to CN201810053698.1A
Publication of CN108108724A
Application granted
Publication of CN108108724B
Legal status: Active (granted)


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/285 Selection of pattern recognition techniques, e.g. of classifiers in a multi-classifier system
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G06V10/507 Summing image-intensity values; Histogram projection analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 Detecting or categorising vehicles

Abstract

The invention discloses a vehicle detector training method based on automatic learning of multi-subregion image features, relating to the technical field of computer vision. The RealBoost algorithm is used to select features that perform well for vehicle detection; each feature corresponds to a weak classifier, multiple weak classifiers are combined into a strong classifier, and multiple strong classifiers are assembled into a vehicle detector in cascade form. The advantage of this representation is that it not only extracts the appearance features of the image but also implicitly models the geometric relationships among the subregions that generate these features.

Description

Vehicle detector training method based on automatic learning of multi-subregion image features
Technical field
The present invention relates to the technical field of computer vision, and more particularly to a vehicle detector training method based on automatic learning of multi-subregion image features.
Background technology
With the rapid development of sensor and electronics technology, advanced driver assistance systems (ADAS) have become an important development direction for the automotive industry. Vehicle detection plays a vital role in vision-based ADAS and is the foundation of applications such as vision-based ranging and forward collision avoidance.
Classifiers for object detection are trained by learning, usually extracting the appearance features of a whole region as the basis for classification. If the region is large, it inevitably contains distracting background content; if the region is small, the accompanying loss of resolution easily degrades the region's separability.
In an image region containing a vehicle, some parts carry appearance information that distinguishes the vehicle well from other targets, while other parts contain hardly any valuable appearance content. Based on this observation, the present invention divides the image region into subregions and represents it as a set of several subregions; the features of the subregions are concatenated and reduced in dimension to serve as the feature of the region. Under this representation, the number of different subregion combinations determines the kinds of features that can be extracted. The RealBoost algorithm selects those features that perform well for vehicle detection; each feature corresponds to a weak classifier, multiple weak classifiers are combined into a strong classifier, and multiple strong classifiers are assembled into a vehicle detector in cascade form. The advantage of this representation is that it not only extracts the appearance features of the image but also implicitly models the geometric relationships among the subregions that generate these features.
Summary of the invention
The object of the present invention is to solve the problems in the prior art by providing a vehicle detector training method based on automatic learning of multi-subregion image features; the trained detector takes images captured by a vehicle-mounted camera as input and detects vehicle objects in those images.
The above technical problem is solved by the following technical solutions:
A vehicle detector training method based on automatic learning of multi-subregion image features, the method comprising:
S1: Divide a training sample image into subregions and represent the image region by a set of several of them, denoted R = (r(x1,y1), r(x2,y2), ..., r(xm,ym); w, h), where R denotes the image region, r(xk,yk) denotes the k-th subregion of the region, (xk,yk) are the X and Y coordinates of the top-left corner of the k-th subregion, and m is the number of subregions in the region; all subregions in the same set have identical width w and height h;
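The set representation R = (r(x1,y1), ..., r(xm,ym); w, h) above can be sketched as a small helper class; the class and method names are invented for this illustration and are not part of the patent:

```python
from typing import List, Tuple

class RegionSet:
    """Set-of-subregions representation R = (r(x1,y1), ..., r(xm,ym); w, h):
    every subregion shares the same width w and height h, and each is
    identified by the (x, y) coordinates of its top-left corner."""

    def __init__(self, corners: List[Tuple[int, int]], w: int, h: int):
        self.corners = corners  # [(x_k, y_k)] for k = 1..m
        self.w = w
        self.h = h

    @property
    def m(self) -> int:
        """Number of subregions in the set."""
        return len(self.corners)

    def crop_all(self, image):
        """Extract each subregion's pixel block from an image given as a
        list of rows (row index = Y, column index = X)."""
        return [[row[x:x + self.w] for row in image[y:y + self.h]]
                for (x, y) in self.corners]
```

A 30-row by 36-column image (the embodiment's sample size) cropped this way yields m blocks of h rows by w columns each.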
S2: Compute the histogram of oriented gradients (HOG) feature of each subregion of the image region, sort the subregions by top-left corner position, concatenate the HOGs of the subregions into one vector in sorted order, and apply normalization and dimensionality reduction to the concatenated vector to obtain the feature of the image region;
S3: If the image region contains M subregions in total, taking m of them to form a subregion set yields C(M, m) = M!/(m!(M−m)!) different combinations, each combination corresponding to one weak classifier; the RealBoost algorithm selects weak classifiers whose vehicle-detection performance meets a preset standard and combines them into a strong classifier;
S4: Assemble multiple strong classifiers into a vehicle detector in cascade form, each stage of which corresponds to one strong classifier.
On the basis of the above technical solution, each step may be implemented in the following preferred manner.
Computing the HOG feature of each subregion of the image region in S2 comprises:
Let Gx and Gy be the X- and Y-direction gradient images of the input image, with Gx(u, v) and Gy(u, v) the X- and Y-direction gradient magnitudes of pixel (u, v). The gradient magnitude G(u, v) and gradient direction α(u, v) of pixel (u, v) are computed as follows:
G(u, v) = √(Gx(u, v)² + Gy(u, v)²)
α(u, v) = tan⁻¹(Gy(u, v)/Gx(u, v))
The gradient direction is uniformly quantized into b levels, and the HOG feature of each subregion is a vector of b elements; the i-th element of the vector is the sum of the gradient magnitudes of all pixels covered by the subregion whose gradient direction falls in the i-th quantization level.
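A minimal sketch of the per-subregion HOG described above, assuming NumPy and using the full signed angle from arctan2 for the uniform quantization into b levels (the patent's tan⁻¹ form and any binning details beyond the text are assumptions):

```python
import numpy as np

def subregion_hog(gx: np.ndarray, gy: np.ndarray, b: int = 9) -> np.ndarray:
    """b-bin HOG of one subregion: bin i accumulates the gradient
    magnitudes of all pixels whose quantized direction equals i."""
    mag = np.sqrt(gx ** 2 + gy ** 2)              # G(u, v)
    ang = np.arctan2(gy, gx)                      # direction in (-pi, pi]
    # uniformly quantize the direction into b levels
    bins = np.floor((ang + np.pi) / (2 * np.pi) * b).astype(int)
    bins = np.clip(bins, 0, b - 1)
    hog = np.zeros(b)
    np.add.at(hog, bins.ravel(), mag.ravel())     # magnitude-weighted votes
    return hog
```

For an 8 × 8 subregion this returns a length-b vector whose elements sum to the total gradient magnitude of the block.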
Concatenating the HOGs of the subregions into one vector and applying normalization and dimensionality reduction to the concatenated vector in S2 comprises the following steps:
S21: Given m subregions, the HOG of each subregion being a vector of b elements, denote the concatenated vector by H; H then contains b × m elements. Apply Min-Max normalization to H as follows:
H(j) ← (H(j) − min(H)) / (max(H) − min(H))
where min(H) and max(H) denote the smallest and largest element values of H, respectively, and H(j) is the j-th element of H;
S22: For each weak classifier, compute the HOG of each subregion in the subregion combination defining that weak classifier, concatenate the subregion HOGs, and apply the Min-Max normalization of S21 to obtain the column vector Hi; then compute the covariance matrix:
S = (1/Np) Σᵢ (Hi − μ)(Hi − μ)ᵀ
where Np is the number of positive samples and μ is the mean vector computed over all positive samples;
S23: Compute the eigenvalues of the covariance matrix S, sort them in descending order, and take in turn the d eigenvalues λ0, λ1, ..., λd−1 that exceed the threshold; take the eigenvector ci corresponding to each λi, form the matrix C with c0, c1, ..., cd−1 as column vectors, and reduce the dimension of Hi as follows:
Hi = CᵀHi
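Steps S22 and S23 amount to a PCA-style projection learned from the positive samples. A sketch assuming NumPy, under the assumption that the d retained eigenvalues are the d largest (function names are illustrative):

```python
import numpy as np

def fit_projection(H_pos: np.ndarray, d: int) -> np.ndarray:
    """Learn the projection matrix C of S22-S23 from positive-sample
    feature vectors (one normalized HOG vector per row of H_pos)."""
    mu = H_pos.mean(axis=0)                 # mean vector over all positives
    X = H_pos - mu
    S = X.T @ X / len(H_pos)                # covariance matrix
    vals, vecs = np.linalg.eigh(S)          # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:d]      # indices of the d largest
    return vecs[:, order]                   # columns c_0 ... c_{d-1}

def reduce_dim(H: np.ndarray, C: np.ndarray) -> np.ndarray:
    """Dimensionality reduction H_i = C^T H_i."""
    return C.T @ H
```

Because `eigh` returns orthonormal eigenvectors, CᵀC is the d × d identity, so the projection preserves the geometry of the retained directions.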
Selecting weak classifiers whose vehicle-detection performance meets a preset standard with the RealBoost algorithm and combining them into a strong classifier in S3 comprises the following steps:
S31: Iterate t from 1 to T, where T is the preset maximum number of weak classifiers a strong classifier may contain; each round of iteration selects one weak classifier;
S32: Using the positive and negative samples of the training set as input, compute the weighted misclassification loss of each weak classifier; select the weak classifier with the smallest misclassification loss value as the optimal weak classifier and add it to the current strong classifier;
S33: Update the weight of each sample as follows:
wi ← wi · exp(−yi · ft(xi)) / Z
where xi denotes the i-th sample; yi is the label of the sample, +1 for positive samples and −1 for negative samples; wi is the weight of sample xi; ft(xi) is the output produced by the optimal weak classifier chosen in round t when classifying sample xi; and Z is a normalization factor computed as:
Z = Σᵢ wi · exp(−yi · ft(xi))
S34: Classify the test samples with the current strong classifier. If the detection rate exceeds a preset first threshold and the false-alarm rate is below a preset second threshold, terminate the iteration and output the strong classifier F(x); otherwise continue to the next iteration. The strong classifier F(x) is:
F(x) = sign(Σₜ ft(x) + q)
where sign(·) denotes the sign operation and q is a constant with a preset value of 0. The detection rate η and false-alarm rate e are computed as:
η = Npp / Np,  e = Nfp / Nf
where Np is the total number of positive samples, Npp the number of positive samples correctly detected, Nf the total number of negative samples, and Nfp the number of negative samples mistakenly detected as positive.
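The detection rate and false-alarm rate defined above can be computed directly; the function name and the ±1 prediction interface are illustrative:

```python
def rates(predictions, labels):
    """Detection rate eta = Npp/Np and false-alarm rate e = Nfp/Nf,
    with labels in {+1, -1} and predictions the sign outputs F(x)."""
    Np = sum(1 for y in labels if y == +1)
    Nf = sum(1 for y in labels if y == -1)
    Npp = sum(1 for p, y in zip(predictions, labels) if y == +1 and p == +1)
    Nfp = sum(1 for p, y in zip(predictions, labels) if y == -1 and p == +1)
    return Npp / Np, Nfp / Nf
```

With predictions [+1, +1, −1, +1] against labels [+1, +1, +1, −1], two of three positives are caught (η = 2/3) and the single negative is a false alarm (e = 1).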
When the optimal weak classifier with the smallest misclassification loss value is selected in S32, the misclassification loss is computed as follows:
For a candidate weak classifier, let H be the vector formed from its HOG feature by the normalization and dimensionality reduction described in S2, and regard it as a multivariate random vector H = (h0, h1, ..., hd−1), where d is the dimension of the vector after reduction. The value range of each variable hi is divided into L intervals, where interval 0 corresponds to hi ≤ −1, interval L−1 corresponds to hi > +1, and the n-th interval corresponds to the value range:
−1 + 2(n−1)/(L−2) < hi ≤ −1 + 2n/(L−2)
where 0 < n < L−1;
Compute the sum of the weights of the training samples falling in each cell of intervals as follows:
P(a0, a1, ..., ad−1) = Σᵢ wi · Πⱼ δ(Tr(Hi(j)) − aj)
where aj ∈ [0, L−1] denotes one of the L intervals; δ is the Kronecker function; Hi denotes the vector formed from training sample xi by the concatenation, normalization, and dimensionality reduction corresponding to the candidate weak classifier; Hi(j) denotes the j-th variable of that vector; and Tr(·) is a mapping function: if the value of Hi(j) falls within the u-th of the L intervals, Tr(·) maps it to u;
The weak classifier corresponding to the candidate feature is:
f(H) = V(a0, a1, ..., ad−1), with aj = Tr(H(j))
where V(a0, a1, ..., ad−1) is computed as:
V(a0, a1, ..., ad−1) = (1/2) ln((Ppositive(a0, a1, ..., ad−1) + ε) / (Pnegative(a0, a1, ..., ad−1) + ε))
where ε is a small positive constant, and Ppositive(a0, a1, ..., ad−1) and Pnegative(a0, a1, ..., ad−1) are the sums of weights computed from the positive and negative samples, respectively;
If a weak classifier misclassifies sample xi, then sign(f(xi)) ≠ sign(yi); the misclassification loss produced by the candidate weak classifier over all sample classifications is computed as:
Loss = Σᵢ wi · 1[sign(f(xi)) ≠ sign(yi)]
where 1[·] equals 1 when the condition inside holds and 0 otherwise.
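A minimal sketch of the piecewise-constant weak classifier described above: sample weights are accumulated per quantized cell (a0, ..., ad−1), and the cell output is the RealBoost half-log-ratio of positive to negative weight. The smoothing constant eps and the exact interval boundaries are assumptions not fixed by the text:

```python
import math
from collections import defaultdict

def make_Tr(L):
    """Map a variable value to one of L intervals: interval 0 for v <= -1,
    interval L-1 for v > +1, and L-2 uniform intervals over (-1, +1]."""
    def Tr(v):
        if v <= -1.0:
            return 0
        if v > 1.0:
            return L - 1
        return min(L - 2, max(1, math.ceil((v + 1.0) / 2.0 * (L - 2))))
    return Tr

def train_weak(features, labels, weights, L=32, eps=1e-6):
    """Fit a piecewise-constant weak classifier: accumulate the weight
    sums P_pos and P_neg per quantized cell (a_0, ..., a_{d-1}), then
    output the half-log-ratio for the cell a query falls in."""
    Tr = make_Tr(L)
    P_pos, P_neg = defaultdict(float), defaultdict(float)
    for H, y, w in zip(features, labels, weights):
        cell = tuple(Tr(v) for v in H)
        (P_pos if y == +1 else P_neg)[cell] += w

    def f(H):
        cell = tuple(Tr(v) for v in H)
        return 0.5 * math.log((P_pos[cell] + eps) / (P_neg[cell] + eps))
    return f
```

Cells dominated by positive weight yield a positive output and cells dominated by negative weight a negative one, which is what makes sign(f) usable as a classification.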
Assembling multiple strong classifiers into a vehicle detector in cascade form in S4 comprises the following steps:
S41: After a new strong classifier is trained, if the existing cascade classifier is empty, this strong classifier becomes the stage-0 classifier; if the existing cascade classifier already contains k stages, the strong classifier is added as the (k+1)-th stage of the cascade;
S42: Detect test images with the cascade classifier to which the new strong classifier has been added. If the detection rate exceeds a preset third threshold and the false-detection rate is below a preset fourth threshold, or the total number of cascade stages has reached a preset number, output the cascade classifier and terminate training; otherwise, collect the misclassified positive and negative samples and add them to the positive and negative sample sets respectively, delete from the negative sample set the negative samples correctly rejected by the strong classifiers trained so far, and train a new strong classifier with the updated positive and negative sample sets.
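The cascade of S41 and S42 evaluates stages in order and rejects early; a sketch with stages modeled as callables returning a signed score (an illustrative interface, not the patent's own):

```python
def cascade_detect(stages, x):
    """Cascade detection rule: a window is declared a vehicle only if
    every strong-classifier stage accepts it; any stage rejecting it
    stops evaluation early, which is what makes the cascade fast."""
    for F in stages:        # stage 0 first, then stage 1, ...
        if F(x) <= 0:       # non-positive score: reject the window
            return False
    return True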
Compared with the prior art, the advantage of the present invention is that it not only extracts the appearance features of the image but also implicitly models the geometric relationships among the subregions that generate these features.
Description of the drawings
Fig. 1 is a schematic diagram of a positive sample image used for training;
Fig. 2 illustrates representing an image region by a set of subregions;
Fig. 3 is a flow diagram of the vehicle detector training method based on automatic learning of multi-subregion image features according to an embodiment of the present invention;
Fig. 4 is a flow diagram of selecting the optimal weak classifier.
Specific embodiment
The present invention is further elaborated and illustrated below with reference to the accompanying drawings and specific embodiments.
In the present invention, a vehicle detector training method based on automatic learning of multi-subregion image features comprises the following steps:
S1: Divide a training sample image into subregions and represent the image region by a set of several of them, denoted R = (r(x1,y1), r(x2,y2), ..., r(xm,ym); w, h), where R denotes the image region, r(xk,yk) denotes the k-th subregion of the region, (xk,yk) are the X and Y coordinates of the top-left corner of the k-th subregion, and m is the number of subregions in the region; all subregions in the same set have identical width w and height h;
S2: Compute the HOG feature of each subregion of the image region, sort the subregions by top-left corner position, concatenate the HOGs of the subregions into one vector in sorted order, and apply normalization and dimensionality reduction to the concatenated vector to obtain the feature of the image region;
Wherein, the HOG feature of each subregion of the image region is computed as follows:
Let Gx and Gy be the X- and Y-direction gradient images of the input image, with Gx(u, v) and Gy(u, v) the X- and Y-direction gradient magnitudes of pixel (u, v); the gradient magnitude G(u, v) and gradient direction α(u, v) are computed as follows:
G(u, v) = √(Gx(u, v)² + Gy(u, v)²)
α(u, v) = tan⁻¹(Gy(u, v)/Gx(u, v))
The gradient direction is uniformly quantized into b levels, and the HOG feature of each subregion is a vector of b elements; the i-th element of the vector is the sum of the gradient magnitudes of all pixels covered by the subregion whose gradient direction falls in the i-th quantization level.
Wherein, concatenating the HOGs of the subregions into one vector and applying normalization and dimensionality reduction to the concatenated vector comprises the following steps:
S21: Given m subregions, the HOG of each subregion being a vector of b elements, denote the concatenated vector by H; H then contains b × m elements. Apply Min-Max normalization to H as follows:
H(j) ← (H(j) − min(H)) / (max(H) − min(H))
where min(H) and max(H) denote the smallest and largest element values of H, respectively, and H(j) is the j-th element of H;
S22: For each weak classifier, compute the HOG of each subregion in the subregion combination defining that weak classifier, concatenate the subregion HOGs, and apply the Min-Max normalization of S21 to obtain the column vector Hi; then compute the covariance matrix:
S = (1/Np) Σᵢ (Hi − μ)(Hi − μ)ᵀ
where Np is the number of positive samples and μ is the mean vector computed over all positive samples;
S23: Compute the eigenvalues of the covariance matrix S, sort them in descending order, and take in turn the d eigenvalues λ0, λ1, ..., λd−1 that exceed the threshold; take the eigenvector ci corresponding to each λi, form the matrix C with c0, c1, ..., cd−1 as column vectors, and reduce the dimension of Hi as follows:
Hi = CᵀHi
S3: If the image region contains M subregions in total, taking m of them to form a subregion set yields C(M, m) = M!/(m!(M−m)!) different combinations, each combination corresponding to one weak classifier; the RealBoost algorithm selects weak classifiers whose vehicle-detection performance meets a preset standard and combines them into a strong classifier;
Wherein, selecting weak classifiers whose vehicle-detection performance meets a preset standard with the RealBoost algorithm and combining them into a strong classifier comprises the following steps:
S31: Iterate t from 1 to T, where T is the preset maximum number of weak classifiers a strong classifier may contain; each round of iteration selects one weak classifier;
S32: Using the positive and negative samples of the training set as input, compute the weighted misclassification loss of each weak classifier; select the weak classifier with the smallest misclassification loss value as the optimal weak classifier and add it to the current strong classifier. The misclassification loss is computed as follows:
For a candidate weak classifier, let H be the vector formed from its HOG feature by the normalization and dimensionality reduction described in S2, and regard it as a multivariate random vector H = (h0, h1, ..., hd−1), where d is the dimension of the vector after reduction. The value range of each variable hi is divided into L intervals, where interval 0 corresponds to hi ≤ −1, interval L−1 corresponds to hi > +1, and the n-th interval corresponds to the value range:
−1 + 2(n−1)/(L−2) < hi ≤ −1 + 2n/(L−2)
where 0 < n < L−1;
Compute the sum of the weights of the training samples falling in each cell of intervals as follows:
P(a0, a1, ..., ad−1) = Σᵢ wi · Πⱼ δ(Tr(Hi(j)) − aj)
where aj ∈ [0, L−1] denotes one of the L intervals; δ is the Kronecker function; Hi denotes the vector formed from training sample xi by the concatenation, normalization, and dimensionality reduction corresponding to the candidate weak classifier; Hi(j) denotes the j-th variable of that vector; and Tr(·) is a mapping function: if the value of Hi(j) falls within the u-th of the L intervals, Tr(·) maps it to u;
The weak classifier corresponding to the candidate feature is:
f(H) = V(a0, a1, ..., ad−1), with aj = Tr(H(j))
where V(a0, a1, ..., ad−1) is computed as:
V(a0, a1, ..., ad−1) = (1/2) ln((Ppositive(a0, a1, ..., ad−1) + ε) / (Pnegative(a0, a1, ..., ad−1) + ε))
where ε is a small positive constant, and Ppositive(a0, a1, ..., ad−1) and Pnegative(a0, a1, ..., ad−1) are the sums of weights computed from the positive and negative samples, respectively;
If a weak classifier misclassifies sample xi, then sign(f(xi)) ≠ sign(yi); the misclassification loss produced by the candidate weak classifier over all sample classifications is computed as:
Loss = Σᵢ wi · 1[sign(f(xi)) ≠ sign(yi)]
where 1[·] equals 1 when the condition inside holds and 0 otherwise.
S33: Update the weight of each sample as follows:
wi ← wi · exp(−yi · ft(xi)) / Z
where xi denotes the i-th sample; yi is the label of the sample, +1 for positive samples and −1 for negative samples; wi is the weight of sample xi; ft(xi) is the output produced by the optimal weak classifier chosen in round t when classifying sample xi; and Z is a normalization factor computed as:
Z = Σᵢ wi · exp(−yi · ft(xi))
S34: Classify the test samples with the current strong classifier. If the detection rate exceeds a preset first threshold and the false-alarm rate is below a preset second threshold, terminate the iteration and output the strong classifier F(x); otherwise continue to the next iteration. The strong classifier F(x) is:
F(x) = sign(Σₜ ft(x) + q)
where sign(·) denotes the sign operation and q is a constant with a preset value of 0. The detection rate η and false-alarm rate e are computed as:
η = Npp / Np,  e = Nfp / Nf
where Np is the total number of positive samples, Npp the number of positive samples correctly detected, Nf the total number of negative samples, and Nfp the number of negative samples mistakenly detected as positive.
S4: Assemble multiple strong classifiers into a vehicle detector in cascade form, each stage of which corresponds to one strong classifier. This specifically comprises the following steps:
S41: After a new strong classifier is trained, if the existing cascade classifier is empty, this strong classifier becomes the stage-0 classifier; if the existing cascade classifier already contains k stages, the strong classifier is added as the (k+1)-th stage of the cascade;
S42: Detect test images with the cascade classifier to which the new strong classifier has been added. If the detection rate exceeds a preset third threshold and the false-detection rate is below a preset fourth threshold, or the total number of cascade stages has reached a preset number, output the cascade classifier and terminate training; otherwise, collect the misclassified positive and negative samples and add them to the positive and negative sample sets respectively, delete from the negative sample set the negative samples correctly rejected by the strong classifiers trained so far, and train a new strong classifier with the updated positive and negative sample sets.
Embodiment
Based on the above method, this embodiment illustrates with concrete examples the implementation of the vehicle detector training method based on automatic learning of multi-subregion image features.
Fig. 1 is a schematic diagram of a positive sample image used to train the vehicle detector in an embodiment of the present invention. Referring to Fig. 1, this embodiment marks vehicle regions manually; the marking standard requires a positive sample image to contain the left and right edges of the vehicle with a 2% extension on each side, and the whole of the vehicle's top and bottom with a 2% extension above and below. The marked vehicle regions are extracted and saved as positive sample images, and all sample images are scaled to a uniform size; this embodiment sets that size to 30 pixels high and 36 pixels wide. Negative sample images are road or roadside natural scene images containing no vehicles.
The sample image region of uniform size is divided into subregions. Referring to Fig. 2, several subregions r1, r2, r3 are taken and the image region is represented by the set of these subregions, denoted: R = (r(x1,y1), r(x2,y2), ..., r(xm,ym); w, h). In this expression, r(xk,yk) denotes the k-th subregion in the subregion set, whose top-left corner has X and Y coordinates (xk,yk). The number of subregions in the set is m, and all subregions in the same set have identical width w and height h.
Representing the image region by a set of subregions, if the image region contains M subregions in total and m of them form a set, there are C(M, m) different combinations. The number of subregions depends on factors such as the size of the image region, the size of a subregion, and the overlap between subregions. This embodiment takes subregions of size 8 × 8, with adjacent subregions overlapping by 5 pixels in both X and Y; the number of subregions per set is 2 or 3. With these settings, and positive samples of height 30 and width 36 pixels, the number of subregions is easily computed to be 80. By the combination formula, choosing any 2 of the 80 subregions yields about 3000 different combinations, and choosing 3 yields about 80000.
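The subregion and combination counts quoted above can be checked in a few lines (the stride of 3 follows from the 8-pixel side length minus the 5-pixel overlap):

```python
from math import comb

# Subregion grid of the embodiment: a 36-wide by 30-high sample window,
# 8 x 8 subregions, 5-pixel overlap => stride 3 in both X and Y.
nx = (36 - 8) // 3 + 1      # positions along X
ny = (30 - 8) // 3 + 1      # positions along Y
M = nx * ny                 # total number of subregions
pairs = comb(M, 2)          # sets of 2 subregions
triples = comb(M, 3)        # sets of 3 subregions
```

This gives M = 80, 3160 pairs, and 82160 triples, matching the "about 3000" and "about 80000" figures in the text.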
The image region is expressed as a set of several subregions, and the image features used for vehicle detection come from the subregions in the set. Specifically, this embodiment computes the HOG feature of each subregion, sorts the subregions by top-left corner position from top to bottom and left to right, concatenates the HOGs of the subregions into one vector in sorted order, applies dimensionality reduction to the concatenated vector, and uses the reduced vector as the feature of the image region.
Some subregions and combinations of subregions in a vehicle image carry appearance features at those positions that clearly distinguish the vehicle from other objects, while others show almost no texture variation. Clearly, extracting features from those single or multiple subregions with characteristic vehicle appearance, and designing the classifier from the extracted features, helps improve the performance of the vehicle detector. This embodiment puts subregion combinations in one-to-one correspondence with weak classifiers: each combination corresponds to one weak classifier. The RealBoost algorithm selects those weak classifiers with good discriminative power for vehicle detection, several weak classifiers build a strong classifier, and multiple strong classifiers form a vehicle detector in cascade form.
As shown in Fig. 3, the vehicle detector training method flow based on automatic learning of multi-subregion image features according to the present invention may comprise the following steps:
Step 301: initialize the cascade classifier to empty;
The vehicle detector of this embodiment is a cascade classifier, each stage of which contains one strong classifier; the first strong classifier added is the stage-0 classifier, the last added is the stage-(K−1) classifier, where K is the preset maximum total number of stages;
Step 302: input the positive and negative sample sets, with Np positive samples and Nf negative samples; initialize the weight of each sample, each positive sample having weight 1/(2Np) and each negative sample 1/(2Nf); initialize the strong classifier to be trained to contain 0 weak classifiers;
Step 303: iterate t from 1 to T, where T is the preset maximum number of weak classifiers a strong classifier may contain; each round of iteration selects one weak classifier;
Step 304: compute the weighted misclassification loss of each candidate weak classifier; select the weak classifier with the smallest misclassification loss value as the optimal weak classifier and add it to the current strong classifier;
Step 305: update the weight of each sample by the following formula,
wi ← wi · exp(−yi · ft(xi)) / Z    (5)
where ft(·) denotes the optimal weak classifier chosen in the t-th iteration, which, when classifying correctly, outputs a positive real number for a positive sample and a real number not greater than 0 for a negative sample; xi denotes a sample and yi its label, with yi = +1 if xi is a positive sample and yi = −1 otherwise; Z is a normalization factor computed as follows:
Z = Σᵢ wi · exp(−yi · ft(xi));
Step 306: classify the test samples with the current strong classifier and compute the detection rate η and the false-alarm rate e. The detection rate is computed as
η = Npp / Np
where Npp is the number of positive samples recognized as positive among all positive samples; the false-alarm rate is computed as
e = Nfp / Nf
where Nfp is the number of negative samples mistakenly recognized as positive among all negative samples;
Step 307: if the detection rate η exceeds the preset threshold ηS and the false-alarm rate e is below the preset threshold eS, go to step 308; otherwise go to step 303. One embodiment of the present invention sets ηS and eS to 0.995 and 0.3, respectively;
Step 308, terminate the iteration and output the strong classifier

F(x) = \mathrm{sign}\left(\sum_{t=1}^{T'} f_t(x) - q\right)

where sign(·) is the sign function: if the value is positive, the strong classifier judges the sample to be a positive sample, otherwise a negative sample; T' is the number of weak classifiers actually contained in the strong classifier, and q is a constant with default value 0;
Step 309, add the strong classifier output by step 308 to the cascade classifier: if the existing cascade classifier is empty, this strong classifier becomes the 0th-stage classifier; if the existing cascade classifier already contains k stages, it is added as the (k+1)-th stage;
Step 310, detect the test images with the current cascade classifier;
Step 311, if the detection rate is greater than the preset threshold η_T and the false alarm rate is less than the preset threshold e_T, or the total number of stages of the cascade classifier has reached the preset number, output the cascade classifier and terminate the training process; otherwise go to step 312. The present embodiment sets η_T and e_T to 0.995 and 0.5 × 10^{-5}, respectively;
Step 312, collect the misclassified positive and negative samples and add them to the positive and negative sample sets, respectively; delete from the negative sample set the negative samples correctly classified by the strong classifiers trained so far; go to step 302.
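Steps 302–312 form an outer loop: train a stage, append it, test the cascade, and bootstrap the negative set from the false alarms. The control flow can be sketched on toy 1-D data, with a single threshold standing in for a whole boosted stage (everything here — data, names, the threshold stand-in — is illustrative, not the patent's method):

```python
import numpy as np

rng = np.random.default_rng(0)
pos = rng.normal(+1.0, 0.3, 200)    # toy 1-D "vehicle" scores
neg = rng.normal(-1.0, 0.3, 1000)   # toy 1-D "background" scores

def train_stage(pos_cur):
    # Stand-in for steps 302-308: a threshold passing (almost) all
    # current positives; a real stage is a boosted strong classifier.
    return np.quantile(pos_cur, 0.005)

def cascade_pass(cascade, x):
    # A sample is accepted only if every stage accepts it.
    return all(x >= thr for thr in cascade)

def train_cascade(pos, neg, eta_T=0.995, e_T=0.01, max_stages=10):
    cascade, neg_cur = [], neg
    while neg_cur.size and len(cascade) < max_stages:
        thr = train_stage(pos)                                   # steps 302-308
        cascade.append(thr)                                      # step 309
        eta = np.mean([cascade_pass(cascade, x) for x in pos])   # step 310
        e = np.mean([cascade_pass(cascade, x) for x in neg])
        if eta > eta_T and e < e_T:                              # step 311
            break
        neg_cur = neg_cur[neg_cur >= thr]                        # step 312: keep false alarms
    return cascade

cascade = train_cascade(pos, neg)
```

The key design point carried over from the patent is step 312: each new stage only ever sees the negatives that all earlier stages failed to reject, so later stages specialize on progressively harder background samples.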
In the step 304, the weighted misclassification loss of each candidate weak classifier is computed and the one with the minimum loss is selected as the optimal weak classifier. Fig. 4 shows the specific steps, which may include:
Step 401, compute the gradient magnitude map and gradient direction map of the sample image. The present embodiment computes the X- and Y-direction gradient images of the sample image as

G_x(u, v) = I(u+1, v) - I(u-1, v)   (6)

G_y(u, v) = I(u, v+1) - I(u, v-1)   (7)

where G_x(u, v) and G_y(u, v) are the values of the X- and Y-direction gradient images at pixel (u, v), respectively, and I is the sample image. The gradient magnitude G(u, v) and gradient direction α(u, v) of pixel (u, v) are computed as

G(u, v) = \sqrt{G_x^2(u, v) + G_y^2(u, v)}   (8)

α(u, v) = \tan^{-1}(G_y(u, v) / G_x(u, v))   (9)
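Equations (6)–(9) of step 401 translate directly into NumPy (a sketch assuming rows index v and columns index u; `arctan2` is used in place of tan⁻¹(G_y/G_x) to avoid division by zero where G_x = 0 — a common substitution, not the patent's exact form):

```python
import numpy as np

def gradients(I):
    """Central differences of Eqs. (6)-(7) plus the magnitude and
    direction of Eqs. (8)-(9); one-pixel borders are left at zero."""
    I = I.astype(np.float64)
    Gx = np.zeros_like(I)
    Gy = np.zeros_like(I)
    Gx[:, 1:-1] = I[:, 2:] - I[:, :-2]   # I(u+1, v) - I(u-1, v)
    Gy[1:-1, :] = I[2:, :] - I[:-2, :]   # I(u, v+1) - I(u, v-1)
    G = np.sqrt(Gx**2 + Gy**2)
    alpha = np.arctan2(Gy, Gx)
    return G, alpha
```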
Step 402, compute the features corresponding to the different subregion combinations. The present embodiment represents an image region by a set of subregions, and different subregion combinations correspond to different weak classifiers. For a candidate weak classifier, the feature of a sample image is computed as follows. First, the HOG features of each subregion are computed: the gradient directions obtained from formula (9), expressed as angles, are uniformly quantized into b levels, so the HOG feature of each subregion is a vector of b elements whose i-th element is the sum of the gradient magnitudes of all pixels in the subregion whose gradient direction falls in the i-th quantization level. Second, the m subregions in the subregion set are sorted from top to bottom and left to right by the X- and Y-coordinates of their upper-left corners, and the HOGs of the subregions are concatenated in that order into a vector of b × m elements, denoted H. Min-Max normalization is applied to the concatenated vector H as

H^{(i)} = \frac{H^{(i)} - \min(H)}{\max(H) - \min(H)}   (10)

where min(H) and max(H) are the minimum and maximum element values of H, respectively, and H^{(i)} is the i-th element of H. Finally, dimensionality reduction is applied to H, and the reduced vector is taken as the feature of the image region;
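Step 402's per-subregion histogram, concatenation, and Min-Max normalization (Eq. (10)) might look like the following sketch, assuming α comes from `arctan2` and thus lies in (-π, π]; the bin-edge convention and function names are mine:

```python
import numpy as np

def subregion_hog(G, alpha, x, y, w, h, b=9):
    """b-bin HOG of one w-by-h subregion with upper-left corner (x, y):
    quantize direction uniformly into b levels and accumulate the
    gradient magnitude of each pixel into its level's bin."""
    g = G[y:y+h, x:x+w].ravel()
    q = ((alpha[y:y+h, x:x+w].ravel() + np.pi) / (2 * np.pi) * b).astype(int) % b
    return np.bincount(q, weights=g, minlength=b)

def region_feature(G, alpha, subregions, w, h, b=9):
    """Sort subregions top-to-bottom, left-to-right by upper-left
    corner, concatenate their HOGs, then Min-Max normalize (Eq. (10))."""
    order = sorted(subregions, key=lambda p: (p[1], p[0]))
    H = np.concatenate([subregion_hog(G, alpha, x, y, w, h, b) for x, y in order])
    span = H.max() - H.min()
    return (H - H.min()) / span if span > 0 else np.zeros_like(H)
```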
Step 403, determine the corresponding weak classifier from the feature. For a specific candidate weak classifier, each sample yields a feature value, in vector form, corresponding to that candidate; the reduced feature is regarded as a multivariate random vector H = (h_0, h_1, ..., h_{d-1}), where d is the dimension after reduction; the present embodiment takes d = 2 or 3. The value range of each variable h_i is divided into L intervals, where interval 0 corresponds to h_i ≤ -1, interval L-1 corresponds to h_i > +1, and the n-th interval corresponds to

\left(-1 + \frac{2(n-1)}{L-2},\; -1 + \frac{2n}{L-2}\right]   (11)

where 0 < n < L-1;
The present embodiment computes the sum of the weights of the training samples in each cell as

P(a_0, a_1, \ldots, a_{d-1}) = \sum_{x_i} w_i\, \delta(a_0 - Tr(H_{x_i}(h_0)))\, \delta(a_1 - Tr(H_{x_i}(h_1))) \cdots \delta(a_{d-1} - Tr(H_{x_i}(h_{d-1})))   (12)

where a_j ∈ [0, L-1] indexes one of the L intervals, δ is the Kronecker function, H_{x_i} is the vector formed for training sample x_i by the concatenation, normalization, and dimensionality reduction corresponding to the candidate weak classifier, H_{x_i}(h_j) is the j-th variable of that vector, and Tr(·) is a mapping function: if the value of H_{x_i}(h_j) falls within the u-th of the L intervals, Tr(·) maps it to u.

If the reduced dimension d = 2, P(a_0, a_1) of formula (12) is the sum of the weights of all samples whose variable h_0 falls in interval a_0 and whose variable h_1 falls in interval a_1; if d = 3, P(a_0, a_1, a_2) is the sum of the weights of the samples whose h_0, h_1, and h_2 fall in intervals a_0, a_1, and a_2, respectively.
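The interval mapping Tr(·) of step 403 and the weighted histogram of formula (12) could be implemented as follows (a sketch; the half-open interval convention follows formula (11), and the function names are mine):

```python
import numpy as np

def tr(v, L):
    """Map a variable value to one of L intervals: interval 0 for
    v <= -1, interval L-1 for v > +1, otherwise the n-th interval
    (-1 + 2(n-1)/(L-2), -1 + 2n/(L-2)] of Eq. (11)."""
    if v <= -1.0:
        return 0
    if v > 1.0:
        return L - 1
    n = int(np.ceil((v + 1.0) * (L - 2) / 2.0))
    return min(max(n, 1), L - 2)

def weighted_hist(w, feats, L):
    """P(a_0, ..., a_{d-1}) of Eq. (12): sum of sample weights per
    cell of the L^d grid, where d = feats.shape[1]."""
    d = feats.shape[1]
    P = np.zeros((L,) * d)
    for wi, f in zip(w, feats):
        P[tuple(tr(v, L) for v in f)] += wi
    return P
```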
The candidate weak classifier is determined as

f(x) = \sum_{a_0}\sum_{a_1}\cdots\sum_{a_{d-1}} V(a_0, a_1, \ldots, a_{d-1})\, Tr(H_x(h_0, h_1, \ldots, h_{d-1}))   (13)

where V(a_0, a_1, ..., a_{d-1}) is computed as

V(a_0, a_1, \ldots, a_{d-1}) = \frac{1}{2}\ln\left(\frac{P_{positive}(a_0, a_1, \ldots, a_{d-1})}{P_{negative}(a_0, a_1, \ldots, a_{d-1})}\right)   (14)

where P_positive(a_0, a_1, ..., a_{d-1}) and P_negative(a_0, a_1, ..., a_{d-1}) are the sums of weights of formula (12) computed over the positive and negative samples, respectively;
Step 404, compute the misclassification loss each candidate weak classifier incurs when classifying the samples, and select the one with the minimum loss as the optimal weak classifier. First, if a weak classifier f misclassifies a sample x_i, then sign(f(x_i)) ≠ sign(y_i); the misclassification loss of a candidate weak classifier over all samples is computed as

E = \sum_{i:\, \mathrm{sign}(f(x_i)) \neq \mathrm{sign}(y_i)} w_i \exp(-y_i f(x_i))   (15)

where x_i and y_i are the sample and its label, y_i = +1 if x_i is a positive sample and y_i = -1 otherwise, and f(x_i) is the classification output of the weak classifier for the sample; an output whose sign disagrees with the label means a misclassification. Second, the weak classifier with the minimum misclassification loss is selected as the optimal weak classifier:

f^* = \arg\min_f E(f)   (16)
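Given the positive- and negative-weight histograms of formula (12), the value table V of formula (14), the table lookup that produces the weak-classifier output, and the misclassification loss E of formula (15) can be sketched as follows (the ε smoothing for empty cells is my addition, not specified in the patent):

```python
import numpy as np

def value_table(P_pos, P_neg, eps=1e-9):
    """V = 0.5 * ln(P_positive / P_negative) per grid cell (Eq. (14));
    eps keeps cells with zero weight finite."""
    return 0.5 * np.log((P_pos + eps) / (P_neg + eps))

def weak_outputs(V, cells):
    """f(x_i): look up each sample's grid cell in the table V."""
    return np.array([V[c] for c in cells])

def misclass_loss(w, y, f_out):
    """Eq. (15): sum of w_i * exp(-y_i f(x_i)) over the samples whose
    output sign disagrees with their label."""
    mis = np.sign(f_out) != np.sign(y)
    return float(np.sum(w[mis] * np.exp(-y[mis] * f_out[mis])))
```

Selecting the optimal weak classifier of step 404 then reduces to taking the candidate that minimizes `misclass_loss` over the training set.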
The dimensionality reduction of the vector in step 402, which takes the reduced vector as the feature value of the image region, is optionally realized in the present embodiment by principal component analysis, and may specifically include the following steps:

First, for a candidate weak classifier, let H_i be the column vector formed from the i-th positive sample image after Min-Max normalization by formula (10); the covariance matrix is computed as

S = \frac{1}{N_p} \sum_{i=1}^{N_p} (H_i - \mu)(H_i - \mu)^T   (17)

where N_p is the number of positive samples and μ is the mean vector computed over all positive samples.

Second, compute the eigenvalues of the covariance matrix S, sort all eigenvalues of S in descending order, and take the d largest eigenvalues λ_0, λ_1, ..., λ_{d-1} together with the eigenvector c_i corresponding to each λ_i; form the matrix C with c_0, c_1, ..., c_{d-1} as columns, and reduce H_i as

H_i = C^T H_i   (18)

where an eigenvalue λ of the matrix S is a scalar satisfying

S c = λ c   (19)

and c is the eigenvector corresponding to the eigenvalue λ.
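The PCA reduction of formulas (17)–(19) amounts to a few lines of NumPy (a sketch; `eigh` is the appropriate eigensolver here because the covariance matrix S is symmetric):

```python
import numpy as np

def pca_matrix(H_pos, d):
    """Eq. (17): covariance of the positive-sample vectors (rows of
    H_pos); Eqs. (18)-(19): matrix C whose columns are the d leading
    eigenvectors of that covariance."""
    mu = H_pos.mean(axis=0)
    X = H_pos - mu
    S = X.T @ X / H_pos.shape[0]
    vals, vecs = np.linalg.eigh(S)      # eigenvalues in ascending order
    C = vecs[:, ::-1][:, :d]            # reorder so the d largest come first
    return C

def project(H, C):
    """H_i = C^T H_i (Eq. (18)), applied to a batch of row vectors."""
    return H @ C
```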
The embodiment described above is a preferred scheme of the present invention and is not intended to limit the invention. Those of ordinary skill in the relevant technical field can make various changes and modifications without departing from the spirit and scope of the present invention. Therefore, all technical solutions obtained by means of equivalent substitution or equivalent transformation fall within the protection scope of the present invention.

Claims (6)

  1. A vehicle detector training method based on automatic learning of multi-subregion image features, characterized in that the method comprises:
    S1: dividing a training sample image into subregions, and representing an image region by a set of several of the subregions, denoted R = (r(x_1, y_1), r(x_2, y_2), ..., r(x_m, y_m); w, h), where R is the image region, r(x_k, y_k) is the k-th subregion of the image region, (x_k, y_k) are the X- and Y-coordinates of the upper-left corner of the k-th subregion, m is the number of subregions in the image region, and all subregions in the same set have the same width and height, w and h respectively;
    S2: computing the HOG features of each subregion in the image region, sorting the subregions by the positions of their upper-left corners, concatenating the HOGs of the subregions in the sorted order into one vector, and applying normalization and dimensionality reduction to the concatenated vector to obtain the feature of the image region;
    S3: if the image region contains M subregions in total, taking m of them to form a subregion set, there being \binom{M}{m} different combinations, each combination corresponding to one weak classifier; selecting, with the RealBoost algorithm, weak classifiers whose vehicle-detection performance meets a preset standard and combining them into one strong classifier;
    S4: forming a vehicle detector from multiple strong classifiers in cascade form, each stage of which corresponds to one strong classifier.
  2. The vehicle detector training method based on automatic learning of multi-subregion image features according to claim 1, characterized in that computing the HOG features of each subregion in the image region in S2 comprises:
    letting G_x and G_y be the X- and Y-direction gradient images of the input image, and G_x(u, v) and G_y(u, v) the X- and Y-direction gradient magnitudes of pixel (u, v) in the input image; the gradient magnitude G(u, v) and gradient direction α(u, v) of pixel (u, v) are computed as:
    G(u, v) = \sqrt{G_x^2(u, v) + G_y^2(u, v)}
    α(u, v) = \tan^{-1}(G_y(u, v) / G_x(u, v))
    the gradient direction is uniformly quantized into b levels; the HOG feature of each subregion is a vector of b elements whose i-th element is the sum of the gradient magnitudes of all pixels in the subregion whose gradient direction falls in the i-th quantization level.
  3. The vehicle detector training method based on automatic learning of multi-subregion image features according to claim 1, characterized in that concatenating the HOGs of the subregions into one vector and applying normalization and dimensionality reduction to the concatenated vector in S2 comprises the following steps:
    S21: with m subregions, the HOG of each subregion being a vector of b elements, the concatenated vector is denoted H and contains b × m elements; Min-Max normalization is applied to H as follows:
    H^{(j)} = \frac{H^{(j)} - \min(H)}{\max(H) - \min(H)}
    where min(H) and max(H) are the minimum and maximum element values of H, respectively, and H^{(j)} is the j-th element of H;
    S22: for each weak classifier, computing the HOGs of the subregions according to the subregion combination defined by that weak classifier, concatenating the HOGs of the subregions and applying the Min-Max normalization of S21 to obtain the column vector H_i, and computing the covariance matrix:
    S = \frac{1}{N_p} \sum_{i=1}^{N_p} (H_i - \mu)(H_i - \mu)^T
    where N_p is the number of positive samples and μ is the mean vector computed over all positive samples;
    S23: computing the eigenvalues of the covariance matrix S, sorting all eigenvalues of S in descending order, taking the d eigenvalues λ_0, λ_1, ..., λ_{d-1} greater than a threshold together with the eigenvector c_i corresponding to each λ_i, forming the matrix C with c_0, c_1, ..., c_{d-1} as columns, and reducing H_i as:
    H_i = C^T H_i
  4. The vehicle detector training method based on automatic learning of multi-subregion image features according to claim 1, characterized in that selecting, with the RealBoost algorithm, weak classifiers whose vehicle-detection performance meets a preset standard and combining them into one strong classifier in S3 comprises the following steps:
    S31: iterating t from 1 to T, where T is the preset maximum number of weak classifiers allowed in one strong classifier, each round of iteration selecting one weak classifier;
    S32: taking the positive and negative samples of the training sample set as input, computing the weighted misclassification loss of each weak classifier, selecting the one with the minimum loss as the optimal weak classifier, and adding it to the current strong classifier;
    S33: updating the weight of each sample as follows:
    w_i = \frac{w_i \exp(-y_i f_t(x_i))}{Z}
    where x_i is the i-th sample, y_i is the label of the sample, y_i = +1 and -1 for positive and negative samples respectively, w_i is the weight of sample x_i, f_t(x_i) is the classification output produced by the optimal weak classifier chosen in the t-th iteration for sample x_i, and Z is a normalizing constant computed as:
    Z = \sum_i w_i \exp(-y_i f_t(x_i));
    S34: classifying the test samples with the current strong classifier; if the detection rate is greater than a preset first threshold and the false alarm rate is less than a preset second threshold, terminating the iteration and outputting the strong classifier F(x); otherwise continuing with the next iteration; the strong classifier F(x) is:
    F(x) = \mathrm{sign}\left(\sum_{t=1}^{T} f_t(x) - q\right)
    where sign(·) takes the sign, q is a constant with default value 0, and the detection rate η and the false alarm rate e are computed respectively as:
    η = \frac{N_{pp}}{N_p}
    e = \frac{N_{fp}}{N_f}
    where N_p is the number of all positive samples, N_pp is the number of positive samples correctly detected, N_f is the number of all negative samples, and N_fp is the number of negative samples mistakenly detected as positive.
  5. The vehicle detector training method based on automatic learning of multi-subregion image features according to claim 4, characterized in that, when selecting the optimal weak classifier with the minimum misclassification loss in S32, the misclassification loss is computed as follows:
    for a candidate weak classifier, the vector H formed from its HOG feature after the normalization and dimensionality reduction of S2 is regarded as a multivariate random vector H = (h_0, h_1, ..., h_{d-1}) composed of multiple variables, where d is the dimension after reduction; the value range of each variable h_i is divided into L intervals, where interval 0 corresponds to h_i ≤ -1, interval L-1 corresponds to h_i > +1, and the n-th interval corresponds to:
    \left(-1 + \frac{2(n-1)}{L-2},\; -1 + \frac{2n}{L-2}\right]
    where 0 < n < L-1;
    the sum of the weights of the training samples in each cell is computed as follows:
    P(a_0, a_1, \ldots, a_{d-1}) = \sum_{x_i} w_i\, \delta(a_0 - Tr(H_{x_i}(h_0)))\, \delta(a_1 - Tr(H_{x_i}(h_1))) \cdots \delta(a_{d-1} - Tr(H_{x_i}(h_{d-1})))
    where a_j ∈ [0, L-1] indexes one of the L intervals, δ is the Kronecker function, H_{x_i} is the vector formed for training sample x_i by the concatenation, normalization, and dimensionality reduction corresponding to the candidate weak classifier, H_{x_i}(h_j) is the j-th variable of that vector, and Tr(·) is a mapping function: if the value of H_{x_i}(h_j) falls within the u-th of the L intervals, Tr(·) maps it to u;
    the weak classifier corresponding to the candidate feature is:
    f(x) = \sum_{a_0}\sum_{a_1}\cdots\sum_{a_{d-1}} V(a_0, a_1, \ldots, a_{d-1})\, Tr(H_x(h_0, h_1, \ldots, h_{d-1}))
    where V(a_0, a_1, ..., a_{d-1}) is computed as follows:
    V(a_0, a_1, \ldots, a_{d-1}) = \frac{1}{2}\ln\left(\frac{P_{positive}(a_0, a_1, \ldots, a_{d-1})}{P_{negative}(a_0, a_1, \ldots, a_{d-1})}\right)
    where P_positive(a_0, a_1, ..., a_{d-1}) and P_negative(a_0, a_1, ..., a_{d-1}) are the sums of weights computed over the positive and negative samples, respectively;
    if a weak classifier misclassifies a sample x_i, then sign(f(x_i)) ≠ sign(y_i); the misclassification loss of the candidate weak classifier over all sample classifications is computed as follows:
    E = \sum_{i:\, \mathrm{sign}(f(x_i)) \neq \mathrm{sign}(y_i)} w_i \exp(-y_i f(x_i)).
  6. The vehicle detector training method based on automatic learning of multi-subregion image features according to claim 1, characterized in that forming the vehicle detector from multiple strong classifiers in cascade form in S4 comprises the following steps:
    S41: after a new strong classifier is trained, if the existing cascade classifier is empty, the strong classifier becomes the 0th-stage classifier; if the existing cascade classifier already contains k stages, the strong classifier is added as the (k+1)-th stage of the cascade classifier;
    S42: detecting the test images with the cascade classifier to which the new strong classifier has been added; if the detection rate is greater than a preset third threshold and the false detection rate is less than a preset fourth threshold, or the total number of stages of the cascade classifier has reached the preset number, outputting the cascade classifier and terminating the training process; otherwise, collecting the misclassified positive and negative samples, adding them to the positive and negative sample sets respectively, deleting from the negative sample set the negative samples correctly detected by the strong classifiers trained so far, and training a new strong classifier with the updated positive and negative sample sets.
CN201810053698.1A 2018-01-19 2018-01-19 Vehicle detector training method based on multi-subregion image feature automatic learning Active CN108108724B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810053698.1A CN108108724B (en) 2018-01-19 2018-01-19 Vehicle detector training method based on multi-subregion image feature automatic learning


Publications (2)

Publication Number Publication Date
CN108108724A true CN108108724A (en) 2018-06-01
CN108108724B CN108108724B (en) 2020-05-08

Family

ID=62218661


Country Status (1)

Country Link
CN (1) CN108108724B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1731417A (en) * 2005-08-19 2006-02-08 清华大学 Method of robust human face detection in complicated background image
CN101398893A (en) * 2008-10-10 2009-04-01 北京科技大学 Adaboost arithmetic improved robust human ear detection method
CN101609509A (en) * 2008-06-20 2009-12-23 中国科学院计算技术研究所 A kind of image object detection method and system based on pre-classifier
CN102024149A (en) * 2009-09-18 2011-04-20 北京中星微电子有限公司 Method of object detection and training method of classifier in hierarchical object detector
CN102480494A (en) * 2010-11-23 2012-05-30 金蝶软件(中国)有限公司 File updating method, device and system
CN102542303A (en) * 2010-12-24 2012-07-04 富士通株式会社 Device and method for generating classifier of specified object in detection image
CN107221000A (en) * 2017-04-11 2017-09-29 天津大学 Acupoint Visualization Platform and its image processing method based on augmented reality


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王志强: "基于视频图像的人脸检测方法研究与实现", 《中国优秀硕士学位论文全文数据库 信息科技辑》 *

Also Published As

Publication number Publication date
CN108108724B (en) 2020-05-08


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant