CN103544487B - Preceding vehicle recognition method based on monocular vision - Google Patents

Preceding vehicle recognition method based on monocular vision

Info

Publication number
CN103544487B
CN103544487B, CN201310535448.9A
Authority
CN
China
Prior art keywords
vehicle
image
region
feature
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310535448.9A
Other languages
Chinese (zh)
Other versions
CN103544487A (en)
Inventor
陈军 (Chen Jun)
袁江 (Yuan Jiang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yangzhou Auspicious Control Vehicle Electronics Co Ltd
Original Assignee
Yangzhou Auspicious Control Vehicle Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yangzhou Auspicious Control Vehicle Electronics Co Ltd filed Critical Yangzhou Auspicious Control Vehicle Electronics Co Ltd
Priority to CN201310535448.9A priority Critical patent/CN103544487B/en
Publication of CN103544487A publication Critical patent/CN103544487A/en
Application granted granted Critical
Publication of CN103544487B publication Critical patent/CN103544487B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The present invention provides a preceding vehicle recognition method based on monocular vision. (1) From the original image acquired by the vehicle-mounted camera, the edges of the image are extracted with the Canny edge detector, the influence of noise points is removed by morphological filtering, the edge map is projected onto the horizontal direction, and a region of interest containing the preceding vehicle is obtained from the projection features. (2) The vehicle-bottom shadow region is extracted; it is validated according to the geometry of the vehicle-bottom shadow and, combined with the edge features, a candidate vehicle region is determined. (3) The small colour image of each candidate vehicle region of different shape class is converted to greyscale, normalized, and transformed with the dual-tree complex wavelet transform to obtain its feature vector. (4) The dimension of the feature vector is reduced with a two-dimensional independent component analysis algorithm, and the reduced feature is fed into a support vector machine with a radial basis function kernel to decide whether the region is a vehicle region. The method can accurately detect vehicles on the road ahead and can provide reliable, real-time road environment information for driverless vehicles.

Description

Preceding vehicle recognition method based on monocular vision
Technical field
The invention belongs to the field of intelligent transportation, and in particular relates to a method for judging whether there is a vehicle ahead.
Background technique
Real-time perception and recognition of the environment is one of the key problems of unmanned driving and active vehicle safety. The road ahead often contains vehicles close to the ego vehicle, which can easily cause rear-end collisions. The present invention studies the fast localization and judgment of preceding vehicles in dynamic scenes: during driving, the vehicles appearing ahead are detected and analysed so that timely road environment information can be provided to a driverless vehicle or to the driver, helping them observe traffic rules. By drawing on the visual cognition mechanism of humans and the latest research results of computer vision and pattern recognition theory, a fast, automatic and robust preceding-vehicle judgment system is designed, which provides strong support for the related theory and applications of driverless vehicles and has important theoretical significance and practical value.
Summary of the invention
The invention discloses a method that, from the acquired dynamic environment images of the road ahead, distinguishes vehicle regions from non-vehicle regions according to vehicle features, thereby solving the problem of dynamically locating preceding vehicles.
The technical solution adopted by the present invention to solve the above technical problem is as follows.
A preceding vehicle recognition method based on monocular vision is provided, comprising the following steps:
(1) From the original image acquired by the vehicle-mounted camera, extract the edges of the image with the Canny edge detector, remove the influence of noise points with morphological filtering, project the edge map onto the horizontal direction, and obtain the region of interest of the preceding vehicle from the projection features;
(2) Extract the vehicle-bottom shadow region from the primitive features of the image processed with morphological operators, judge the vehicle-bottom shadow region according to the geometry of the vehicle-bottom shadow and, combined with the edge features, determine the candidate vehicle region;
(3) Convert the small colour image of each candidate vehicle region of different shape class to greyscale, normalize it, and apply the dual-tree complex wavelet transform to obtain its feature vector;
(4) dimension that feature vector is reduced with two-dimentional Independent Component Analysis Algorithm, be sent into the support of radial basis function core to Classify in amount machine, judges whether it is vehicle region.
The Canny edge extraction of step (1) is as follows. Let the two-dimensional image be f(x, y) and the two-dimensional Gaussian function be
G(x, y) = (1 / 2πσ²) · exp(−(x² + y²) / 2σ²)
where σ is the scale parameter of the Gaussian filter. Its derivative along an arbitrary direction n is
∂G/∂n = n · ∇G
where n = (cos α, sin α)ᵀ is the unit direction vector of the gradient and |∂G/∂n| is the derivative modulus of the Gaussian function. The gradient of the image convolved with the Gaussian function is ∇(G * f), with gradient modulus |∇(G * f)|. A point (x, y) whose edge strength is the maximum of its centred 3 × 3 neighbourhood and not below a restriction threshold is retained, so that the edge regions of the image are segmented out. The connected regions therein are then filled and projected onto the vertical and horizontal directions, and the region of interest of the preceding vehicle is obtained from the geometric shape information of vehicles.
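As an illustration of step (1), the following is a minimal sketch using OpenCV and NumPy; the Canny thresholds, the morphological kernel size and the projection threshold are illustrative assumptions, not values taken from the patent.

import cv2
import numpy as np

def edge_roi(frame_bgr, low=50, high=150, proj_thresh=20):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Canny smooths with a Gaussian internally and keeps local gradient maxima above the thresholds
    edges = cv2.Canny(gray, low, high)
    # Morphological closing fills small gaps and suppresses isolated noise responses
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    edges = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)
    # Project edge pixels onto the horizontal direction (one count per image row)
    row_proj = (edges > 0).sum(axis=1)
    rows = np.where(row_proj > proj_thresh)[0]
    if rows.size == 0:
        return None
    # Band of rows that may contain the preceding vehicle
    return rows.min(), rows.max()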
The algorithm of step (2) for determining the vehicle region is as follows. Convert the original image to greyscale and compute its histogram; then, according to a threshold t, divide the pixels into two classes ξ1 and ξ2, whose occurrence probabilities are
P1(t) = ∑_{i=0..t} pi,  P2(t) = ∑_{i=t+1..L−1} pi
where L is the number of grey levels of the greyscale image and pi is the probability that grey value i occurs. The means of the two classes ξ1 and ξ2 are respectively
μ1(t) = ∑_{i=0..t} i·pi / P1(t),  μ2(t) = ∑_{i=t+1..L−1} i·pi / P2(t).
Let μ0 denote the mean of the whole greyscale image; for a given image this grey mean is uniquely determined and is computed as
μ0 = ∑_{i=0..L−1} i·pi.
The between-class variance of ξ1 and ξ2 is
σB²(t) = P1(t)·(μ1(t) − μ0)² + P2(t)·(μ2(t) − μ0)².
Clearly σB² is a function of t; the value t′ that maximizes σB²(t′) can be found by traversal, and t′ is the globally optimal threshold. The image is segmented with this optimal threshold to obtain a binary image containing the vehicle-bottom shadow, and the vehicle-bottom shadow region is judged as follows:
Label the connected regions in the binary image as Ci, with bounding rectangle of size wi × hi and area Areai. Compute the occupancy ratio ari = Areai / (wi·hi) and the aspect ratio whri = wi / hi. A region Ci′ satisfying
Slow ≤ Areai ≤ Shigh,  αlow ≤ ari ≤ αhigh,  βlow ≤ whri ≤ βhigh
is regarded as a vehicle-bottom shadow region of interest, where Slow, Shigh, αlow, αhigh and βlow, βhigh are the lower and upper thresholds on area, occupancy ratio and aspect ratio respectively.
Combining the vehicle localization results of step (1) and step (2), the region of interest of the vehicle is located and judged accordingly.
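A corresponding sketch of step (2), assuming OpenCV; the area, occupancy-ratio and aspect-ratio bounds are placeholders for the patent's Slow/Shigh, αlow/αhigh and βlow/βhigh thresholds, whose concrete values the text does not give.

import cv2

def shadow_candidates(gray_roi,
                      area_bounds=(80, 5000),
                      occupancy_bounds=(0.3, 1.0),
                      aspect_bounds=(1.5, 8.0)):
    # Otsu's method picks the threshold t' that maximizes the between-class variance;
    # the inverted binarization turns the dark under-vehicle shadow into foreground
    _, binary = cv2.threshold(gray_roi, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    n, _, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    candidates = []
    for i in range(1, n):                      # label 0 is the background
        x, y, w, h, area = stats[i]
        occupancy = area / float(w * h)        # ar_i in the patent
        aspect = w / float(h)                  # whr_i in the patent
        if (area_bounds[0] <= area <= area_bounds[1]
                and occupancy_bounds[0] <= occupancy <= occupancy_bounds[1]
                and aspect_bounds[0] <= aspect <= aspect_bounds[1]):
            candidates.append((x, y, w, h))
    return candidates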
The algorithm of step (3) is as follows. First, template libraries of vehicle and non-vehicle regions are established, and each detected candidate vehicle region image is converted to greyscale and normalized. Two real discrete wavelet transforms ψh(t) and ψg(t) are used to process the image in parallel, giving the real and imaginary parts of the complex wavelet transform ψ(t) = ψh(t) + jψg(t). On each scale, the rows and columns of the image are filtered with the two one-dimensional dual-tree complex wavelets, yielding a dual-tree complex wavelet transform cluster in the six directions ±15°, ±45° and ±75°. On each scale and in each direction the dual-tree complex wavelet produces a complex-coefficient subband (Rd,sc(x, y), Id,sc(x, y)), where Rd,sc(x, y) is the real coefficient and Id,sc(x, y) is the imaginary coefficient. The subband amplitude
Md,sc(x, y) = sqrt(Rd,sc(x, y)² + Id,sc(x, y)²)
is used as the image feature, where d ∈ {0, …, 5} indexes the six direction subbands ±15°, ±45°, ±75° and sc ∈ {0, …, 2} indexes the scale. Take S = {Md,sc(x, y): d ∈ {0, …, 5}, sc ∈ {0, …, 2}} as the set of images represented by the dual-tree complex wavelet transform; each wavelet image is transformed into a row feature vector vd,sc and normalized, so that the dual-tree complex wavelet transform feature vector of the image is obtained.
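A sketch of the feature extraction in step (3), using the open-source dtcwt Python package as a stand-in for the dual-tree complex wavelet transform described above; the 64 × 64 normalization and three scales follow the embodiment, while the function name and the per-subband normalization details are assumptions.

import cv2
import numpy as np
import dtcwt

def dtcwt_feature(candidate_bgr, size=64, nlevels=3):
    gray = cv2.cvtColor(candidate_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, (size, size), interpolation=cv2.INTER_LINEAR)
    transform = dtcwt.Transform2d()
    pyramid = transform.forward(gray.astype(float), nlevels=nlevels)
    parts = []
    # pyramid.highpasses[sc] is a complex array of shape (H, W, 6): six orientations per scale
    for band in pyramid.highpasses:
        for d in range(band.shape[2]):
            m = np.abs(band[:, :, d]).ravel()   # subband amplitude |M_{d,sc}(x, y)| as a row vector
            norm = np.linalg.norm(m)
            parts.append(m / norm if norm > 0 else m)
    return np.concatenate(parts)                # concatenated, normalized feature vector

With a 64 × 64 input and three scales this gives 6 × (32² + 16² + 8²) = 8064 values, matching the feature length quoted in the embodiment.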
The algorithm of step (4) is as follows:
1) Compute the optimization mapping vectors and the average feature of the sample library.
For the dual-tree complex wavelet transform sample features xi of the vehicle templates, i ∈ {1, 2, …, L}, where L is the number of images in the vehicle template library, reshape each into a k × n two-dimensional matrix χi ∈ R^(k×n). Assume each component is a linear combination of P unknown independent components, P ≤ L; the covariance matrix is
Σ = (1/L) · ∑_{i=1..L} (χi − x̄)ᵀ(χi − x̄)
where Σ is the covariance matrix and x̄ is the average feature of the training sample images.
Perform singular value decomposition on Σ, satisfying Σ = V Λ Uᵀ, where the diagonal matrix Λ = diag(λ1, λ2, …, λn) satisfies λj ≥ λj+1 and U = [u1, u2, …, un] is the orthogonal matrix formed by the eigenvectors corresponding to the eigenvalues. Take the r = 10 largest eigenvalues Λr = diag(λ1, λ2, …, λr) and the corresponding eigenvectors Ur = [u1, u2, …, ur]. Construct the whitening matrix Q = Ur·Λr^(−1/2). Construct the weight matrix W = (w1, …, wn)ᵀ, whose weight vectors wi are updated by the following steps:
(a) choose a random initial weight vector wi(old);
(b) let u = zᵀ·wi(old), where z denotes the whitened sample data;
(c) wi(new) = E[z·g(u)] − E[g′(u)]·wi(old);
(d) normalize wi(new) = wi(new) / ||wi(new)||;
(e) if wi(new) has not converged, i.e. |wi(new)ᵀ·wi(old)| is not yet sufficiently close to 1, set wi(old) = wi(new) and return to step (b); otherwise the update of wi terminates, and wi = wi(new);
wi(t) is the weight vector of the t-th iteration; g(u) = tanh(u), g′(u) = 1 − (tanh(u))².
2) After the weight matrix has been constructed, the optimization mapping vectors S = {s1, …, sr} are obtained from the whitening matrix and the weight matrix and are used to reduce the dimension of the sample-library features. For the dual-tree complex wavelet sample feature vectors xi of the vehicles in the template library, i ∈ {1, 2, …, L}, where L is the number of images in the vehicle template library, reshape each into the k × n two-dimensional matrix χi ∈ R^(k×n) and let Ym = χi·sm, m = 1, 2, …, r; this yields the r independent principal components Y1, …, Yr of the sample image feature vector xi, which together form the reduced-dimension feature of the sample feature vector xi, i ∈ {1, 2, …, L}.
3) Reduce the dimension of the dual-tree complex wavelet transform feature of the vehicle region to be identified: reshape the extracted vehicle-image dual-tree complex wavelet transform feature vector x into the k × n two-dimensional matrix χ and let Ym = χ·sm, m = 1, 2, …, r; this yields the r independent principal components Y1, …, Yr of the image feature vector x, which together form the reduced-dimension feature of x.
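A sketch of steps 1)–3), interpreting the two-dimensional independent component analysis as whitening of the column covariance followed by a FastICA-style fixed-point iteration with g = tanh; the function names and the deflation scheme are assumptions made for illustration, not the patent's exact procedure.

import numpy as np

def fit_2dica(features, k, n, r=10, max_iter=200, tol=1e-6):
    mats = [x.reshape(k, n) for x in features]            # chi_i in R^{k x n}
    mean = np.mean(mats, axis=0)                           # average feature x-bar
    centered = [m - mean for m in mats]
    # Column covariance averaged over the sample library
    cov = sum(c.T @ c for c in centered) / (len(mats) * k)
    eigval, eigvec = np.linalg.eigh(cov)
    order = np.argsort(eigval)[::-1][:r]                   # r largest eigenvalues
    lam, U = eigval[order], eigvec[:, order]
    Q = U @ np.diag(1.0 / np.sqrt(lam))                    # whitening matrix
    Z = np.vstack([c @ Q for c in centered])               # whitened data, one row per whitened sample row
    W = np.zeros((r, r))
    for i in range(r):
        w = np.random.randn(r)
        w /= np.linalg.norm(w)
        for _ in range(max_iter):
            u = Z @ w
            # Fixed-point update: E[z g(u)] - E[g'(u)] w, with g = tanh
            w_new = (Z * np.tanh(u)[:, None]).mean(axis=0) - (1 - np.tanh(u) ** 2).mean() * w
            # Deflation keeps the new vector orthogonal to the ones already found
            w_new -= W[:i].T @ (W[:i] @ w_new)
            w_new /= np.linalg.norm(w_new)
            converged = abs(abs(w_new @ w) - 1) < tol
            w = w_new
            if converged:
                break
        W[i] = w
    S = Q @ W.T                                            # mapping vectors s_1 ... s_r (columns)
    return mean, S

def project_2dica(x, mean, S, k, n):
    chi = x.reshape(k, n) - mean
    return (chi @ S).ravel()                               # r independent components, flattened

With k = 84, n = 96 and r = 10 as in the embodiment, project_2dica returns an 840-dimensional feature.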
4) Vehicle judgment:
For the extracted features, a support vector machine with a radial basis function kernel is selected for classification. Suppose the vehicle-region and non-vehicle-region samples belong to two classes w1 and w2; the linear discriminant function between them is d(x) = wᵀx + b, where b/||w|| is the vertical distance from the origin to the hyperplane and ||w|| is the 2-norm in Euclidean space. If the samples of w1 and w2 are separable, then every i-th sample xi satisfies the constraint
yi(⟨xi·w⟩ + b) − 1 ≥ 0.
The points that satisfy the above equation with equality determine the scale factors w and b; these points lie on the two hyperplanes H1: ⟨xi·w⟩ + b = 1 and H2: ⟨xi·w⟩ + b = −1, the margin between the two data sets reduces to 2/||w||, and the largest margin between the two data sets is obtained by minimizing ||w||²/2. Introduce positive Lagrange multipliers ai, i = 1, …, l; the objective function is
L(w, b, a) = ||w||²/2 − ∑_{i=1..l} ai·[yi(⟨xi·w⟩ + b) − 1].
Solving it yields the optimal Lagrange multipliers ai*, i = 1, …, l, and hence, for the vehicle and non-vehicle regions, the optimal separating hyperplanes {wi,j, bi,j}, i ∈ {1, 2}, j ∈ {1, 2}, with discriminant function
f(x) = sgn{⟨x·wi,j⟩ + bi,j}
which is used to judge whether the region is a vehicle region. Define Tc as the accumulator of the vehicle-class decision, of length 2, indicating the vehicle and non-vehicle types and initialized to 0. Each feature to be classified is fed into the set of separating hyperplanes to obtain fi,j(x), and the votes of the pairwise decisions are accumulated into Tc to judge whether the region is a vehicle: let position_max be the position of max(Tc) in the accumulator Tc, position_max ∈ {1, 2}.
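A sketch of the vehicle judgment in step 4), using scikit-learn's RBF-kernel SVC in place of the hand-derived optimisation; because only two classes are involved here, the pairwise hyperplanes {wi,j, bi,j} and the vote accumulator Tc collapse to a single decision function.

import numpy as np
from sklearn.svm import SVC

def train_vehicle_classifier(vehicle_feats, nonvehicle_feats):
    X = np.vstack([vehicle_feats, nonvehicle_feats])
    y = np.hstack([np.ones(len(vehicle_feats)), np.zeros(len(nonvehicle_feats))])
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")   # radial basis function kernel
    clf.fit(X, y)
    return clf

def is_vehicle(clf, feature):
    # the sign of the decision function plays the role of f(x) = sgn{<x, w> + b}
    return clf.decision_function([feature])[0] > 0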
Compared with the prior art, the present invention has the following advantages: 1. the region of interest of the vehicle is obtained from the edge features of the vehicle and the projection characteristics of the connected regions; 2. a histogram of the image is computed, the threshold with the largest between-class difference separates the shadow region from the non-shadow region, and the region of interest of the vehicle is then located from the geometric characteristics of the vehicle-bottom shadow; combining the localization results of the two methods, the region of interest of the vehicle is located more accurately; 3. the dual-tree complex wavelet transform has approximate scale and rotation invariance and can better represent the texture features of vehicle images; 4. two-dimensional independent component analysis can effectively extract the most important feature information of the image and is suitable for image feature dimensionality reduction, while the support vector machine handles well the problems of high feature dimension and relatively small sample size and achieves good classification performance. The method can accurately detect vehicles on the road ahead and can provide reliable, real-time road environment information for driverless vehicles.
Detailed description of the invention
Fig. 1 is the system flow chart of the invention.
Specific embodiment
A template database of multiple vehicle types and non-vehicle types is established. Each vehicle image in the template library is converted to greyscale and normalized to a 64 × 64 image with bilinear interpolation. The rows and columns of each image are then filtered with the dual-tree complex wavelet whose filter coefficients are h0, h1, g0, g1, h0o, h1o, giving the real and imaginary parts of the dual-tree complex wavelet transform of the image; its amplitudes are computed to form the dual-tree complex wavelet image set of the image, which, after normalization and vectorization, serves as the feature of the template image. The feature sample set of the template library is reduced in dimension with the two-dimensional independent component analysis method of step (4), giving the features of maximum discrimination, where the two classes are the vehicle region and the non-vehicle region and i = 1, 2, …, 300 is the number of samples per class in the template library. The features are fed into the support vector machine with radial basis function kernel for sample training, and the training result is obtained. Table 1 lists the filter bank of the two-dimensional dual-tree complex wavelet transform; a sketch of this training flow follows the table.
Table 1

h0, g1        h1, g0        h0o           h1o
 0.05113      -0.00618      -0.00176      -7.06E-05
-0.01398      -0.00169       0             0
-0.10984      -0.10023       0.022266      0.001341902
 0.26384       0.000874     -0.04688      -0.001883371
 0.766628      0.563656     -0.04824      -0.007156808
 0.563656      0.766628      0.296875      0.023856027
 0.000874      0.26384       0.555469      0.055643136
-0.10023      -0.10984       0.296875     -0.051688058
-0.00169      -0.01398      -0.04824      -0.299757603
-0.00618       0.05113      -0.04688       0.559430804
                             0.022266     -0.299757603
                             0            -0.051688058
                            -0.00176       0.055643136
                                           0.023856027
                                          -0.007156808
                                          -0.001883371
                                           0.001341902
                                           0
                                          -7.06E-05
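Under the same assumptions, the training flow described above can be sketched by chaining the earlier example functions (dtcwt_feature, fit_2dica, project_2dica, train_vehicle_classifier); only the 64 × 64 normalization, the 84 × 96 reshape and r = 10 come from the text, the rest is illustrative.

def build_classifier(vehicle_images, nonvehicle_images, k=84, n=96, r=10):
    images = vehicle_images + nonvehicle_images
    feats = [dtcwt_feature(img) for img in images]               # 8064-dim DT-CWT features
    mean, S = fit_2dica(feats, k, n, r=r)                        # 2D-ICA mapping learned from the library
    reduced = [project_2dica(f, mean, S, k, n) for f in feats]   # 840-dim reduced features
    n_veh = len(vehicle_images)
    clf = train_vehicle_classifier(reduced[:n_veh], reduced[n_veh:])
    return mean, S, clf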
The present invention is further described below with reference to the drawings; the numbered steps are followed by a compact sketch of the resulting runtime loop:
(1) Open the camera.
(2) Acquire an image.
(3) Let the two-dimensional image be f(x, y) and the two-dimensional Gaussian function be
G(x, y) = (1 / 2πσ²) · exp(−(x² + y²) / 2σ²).
Its derivative along an arbitrary direction n is ∂G/∂n = n · ∇G, where n = (cos α, sin α)ᵀ is the unit direction vector of the gradient and |∂G/∂n| is the derivative modulus of the Gaussian function. The gradient of the image convolved with the Gaussian function is ∇(G * f), with gradient modulus |∇(G * f)|. Points whose edge strength is the maximum of their centred 3 × 3 neighbourhood and not below a restriction threshold are retained, so that the edge regions of the image are segmented out.
(4) The connected regions therein are then filled and projected onto the vertical and horizontal directions, and the region of interest of the preceding vehicle is obtained from the geometric shape information of vehicles.
(5) Convert the original image to greyscale and compute its histogram; then, according to a threshold t, divide the pixels into two classes ξ1 and ξ2 with occurrence probabilities P1(t) = ∑_{i=0..t} pi and P2(t) = ∑_{i=t+1..L−1} pi, where L is the number of grey levels and pi the probability of grey value i. The class means μ1(t) and μ2(t) are computed as in step (2) of the summary, and μ0 = ∑_{i=0..L−1} i·pi is the mean of the whole greyscale image, which is uniquely determined for a given image. The between-class variance is σB²(t) = P1(t)·(μ1(t) − μ0)² + P2(t)·(μ2(t) − μ0)²; the value t′ that maximizes it is found by traversal and is the globally optimal threshold. The image is segmented with this threshold to obtain a binary image containing the vehicle-bottom shadow, and the vehicle-bottom shadow region is judged as follows.
(6) Label the connected regions in the binary image as Ci, with bounding rectangle of size wi × hi and area Areai; compute the occupancy ratio ari = Areai / (wi·hi) and the aspect ratio whri = wi / hi. A region that satisfies the area, occupancy-ratio and aspect-ratio bounds is regarded as a vehicle-bottom shadow region of interest Ci′.
In conjunction with the vehicle localization results of the two methods, the region of interest of the vehicle is located and judged accordingly.
(7) The detected region of interest of the vehicle image is converted to a greyscale image Gray, and Gray is normalized to a 64 × 64 image with bilinear interpolation. The image is transformed with the two-dimensional dual-tree complex wavelet, with filter coefficients h0, h1, g0, g1, h0o, h1o as listed in Table 1, giving the set of dual-tree complex wavelet magnitude images over 3 scales and 6 directions, S = {Md,sc(x, y): d ∈ {0, …, 5}, sc ∈ {0, …, 2}}. Each magnitude image is sampled point by point and transformed into a row feature vector vd,sc, which is normalized, so that the dual-tree complex wavelet transform feature vector of the image, with 8064 features in total, is obtained.
(8) The extracted dual-tree complex wavelet feature vector x is reshaped into an 84 × 96 two-dimensional matrix χ, and the dimension of the two-dimensional matrix is reduced with the method of step (4) of the summary of the invention, giving the feature vectors Y1, …, Y10 corresponding to the 10 largest eigenvalues; combining these vectors gives an 840-dimensional feature.
(9) The reduced feature is fed into the radial basis function kernel support vector machine described in step (4) 4) of the summary of the invention for classification. The region-of-interest feature is fed into the set of separating hyperplanes; an accumulator Tc whose length equals the number of classes is defined for the vehicle decision and initialized to 0. The results fi,j(x) are obtained, and the pairwise classification results between every two classes are accumulated into the corresponding class vote. Let position_max be the position of max(Tc) in the accumulator Tc, position_max ∈ {1, 2}; if the result is 1 the region is a vehicle, otherwise it is an interference region.
(10) Determine whether recognition has ended; if so, go to step (11), otherwise go to step (2).
(11) Close the camera.
(12) End.
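Read end to end, steps (1)–(12) amount to the runtime loop sketched below, again chaining the example functions from the earlier sketches; cv2.VideoCapture(0) stands in for the vehicle-mounted camera, and taking the image patch directly above each shadow candidate is an assumed heuristic, not a rule from the patent.

import cv2

def run(mean, S, clf, k=84, n=96):
    cap = cv2.VideoCapture(0)                      # (1) open the camera
    while True:
        ok, frame = cap.read()                     # (2) acquire an image
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        band = edge_roi(frame)                     # (3)-(4) edge projection gives the ROI band
        if band is not None:
            top, bottom = band
            # (5)-(6) vehicle-bottom shadow candidates inside the ROI band
            for (x, y, w, h) in shadow_candidates(gray[top:bottom, :]):
                y0 = max(0, top + y - 2 * h)       # assumed: vehicle body sits above the shadow
                roi = frame[y0: top + y, x: x + w]
                if roi.size == 0:
                    continue
                feat = project_2dica(dtcwt_feature(roi), mean, S, k, n)   # (7)-(8)
                if is_vehicle(clf, feat):          # (9) SVM decision
                    cv2.rectangle(frame, (x, y0), (x + w, top + y), (0, 255, 0), 2)
        cv2.imshow("preceding vehicle", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):      # (10) stop condition
            break
    cap.release()                                  # (11) close the camera
    cv2.destroyAllWindows()                        # (12) end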

Claims (3)

1. A preceding vehicle recognition method based on monocular vision, characterised by comprising the following steps:
(1) from the original image acquired by the vehicle-mounted camera, extracting the edges of the image with the Canny edge detector, removing the influence of noise points with morphological filtering, projecting the edge map onto the horizontal direction, and obtaining the region of interest of the preceding vehicle from the projection features; letting the two-dimensional image be f(x, y), the two-dimensional Gaussian function is
G(x, y) = (1 / 2πσ²) · exp(−(x² + y²) / 2σ²)
and its derivative along an arbitrary direction is ∂G/∂n = n · ∇G, where σ is the scale parameter of the Gaussian filter, |∂G/∂n| is the derivative modulus of the Gaussian function, n = (cos α, sin α)ᵀ is the unit direction vector of the gradient, and α is the gradient direction angle of the Gaussian filter; the gradient of the image convolved with the Gaussian function is ∇(G * f); points whose edge strength is the maximum of their centred 3 × 3 neighbourhood and not below a restriction threshold are retained, so that the edge regions of the image are segmented out; the connected regions therein are then filled and projected onto the vertical and horizontal directions, and the region of interest of the preceding vehicle is obtained from the geometric shape information of vehicles;
(2) extracting the vehicle-bottom shadow region from the primitive features of the image processed with morphological operators, judging the vehicle-bottom shadow region according to the geometry of the vehicle-bottom shadow and, combined with the edge features, judging the vehicle region; the algorithm for judging the vehicle region is specifically: converting the original image to greyscale, computing its histogram and then dividing the pixels into two classes ξ1 and ξ2 according to a threshold t, with occurrence probabilities
P1(t) = ∑_{i=0..t} Pi,  P2(t) = ∑_{i=t+1..L−1} Pi
where L is the number of image grey levels and Pi is the probability that the current pixel value i occurs; the means of the two classes ξ1 and ξ2 are respectively μ1(t) = ∑_{i=0..t} i·Pi / P1(t) and μ2(t) = ∑_{i=t+1..L−1} i·Pi / P2(t); letting μ0 denote the mean of the whole greyscale image, which is uniquely determined for a given image, μ0 = ∑_{i=0..L−1} i·Pi; the between-class variance of ξ1 and ξ2 is σB²(t) = P1(t)·(μ1(t) − μ0)² + P2(t)·(μ2(t) − μ0)², which is a function of t; the value t′ that maximizes it is found by traversal and is the globally optimal threshold; the image is segmented with this optimal threshold to obtain a binary image containing the vehicle-bottom shadow, and the vehicle-bottom shadow region is judged as follows:
labelling the connected regions in the binary image as Ci, with bounding rectangle of size wi × hi and area Areai; computing the occupancy ratio ari = Areai / (wi·hi) and the aspect ratio whri = wi / hi; a region satisfying the following condition is regarded as a vehicle-bottom shadow region of interest Ci′:
Slow ≤ Areai ≤ Shigh,  αlow ≤ ari ≤ αhigh,  βlow ≤ whri ≤ βhigh
where Shigh and Slow respectively represent the upper and lower area thresholds for vehicle-bottom shadow region judgment, αhigh and αlow respectively represent the upper and lower occupancy-ratio thresholds for vehicle-bottom shadow region judgment, and βhigh and βlow respectively represent the upper and lower aspect-ratio thresholds for vehicle-bottom shadow region judgment; combining the vehicle localization results of step (1) and step (2), the region of interest of the vehicle is located and judged accordingly;
(3) converting the small colour image of each candidate vehicle region of different shape class to greyscale, normalizing it and applying the dual-tree complex wavelet transform to obtain its feature vector;
(4) reducing the dimension of the feature vector with a two-dimensional independent component analysis algorithm, feeding it into a support vector machine with a radial basis function kernel for classification, and judging whether the region is a vehicle region.
2. The preceding vehicle recognition method based on monocular vision according to claim 1, characterised in that the method of step (3) is specifically: first establishing template libraries of vehicle and non-vehicle regions, and converting each detected candidate vehicle region image to greyscale and normalizing it; using two real discrete wavelet transforms ψh(t) and ψg(t) to process the image in parallel, obtaining the real and imaginary parts of the complex wavelet transform ψ(t) = ψh(t) + jψg(t), where the parameter j denotes the imaginary unit; on each scale, filtering the rows and columns of the image with the two one-dimensional dual-tree complex wavelets to obtain a dual-tree complex wavelet transform cluster in the six directions ±15°, ±45° and ±75°; on each scale and in each direction the dual-tree complex wavelet produces a complex-coefficient subband (Rd,sc(x, y), Id,sc(x, y)), where Rd,sc(x, y) is the real coefficient and Id,sc(x, y) is the imaginary coefficient; the subband amplitude
Md,sc(x, y) = sqrt(Rd,sc(x, y)² + Id,sc(x, y)²)
is used as the image feature, where d ∈ {0, …, 5} indexes the six direction subbands ±15°, ±45°, ±75° and sc ∈ {0, …, 2} indexes the scale; taking S = {Md,sc(x, y): d ∈ {0, …, 5}, sc ∈ {0, …, 2}} as the set of images represented by the dual-tree complex wavelet, each wavelet image is transformed into a row feature vector vd,sc and normalized, so that the dual-tree complex wavelet transform feature vector of the image is obtained.
3. The preceding vehicle recognition method based on monocular vision according to claim 1, characterised in that the method of step (4) is specifically:
(1) computing the optimization mapping vectors and the average feature of the sample library:
for the dual-tree complex wavelet transform sample features xi of the vehicle templates, i ∈ {1, 2, …, M}, where M is the number of images in the vehicle template library, reshaping each into a k × n two-dimensional matrix χi ∈ R^(k×n); assuming each component is a linear combination of P unknown independent components, P ≤ M, the covariance matrix is
Σ = (1/M) · ∑_{i=1..M} (χi − x̄)ᵀ(χi − x̄)
where Σ is the covariance matrix and x̄ is the average feature of the training sample images;
performing singular value decomposition on Σ, satisfying Σ = V Λ Uᵀ, where the diagonal matrix Λ = diag(λ1, λ2, …, λn) satisfies λj ≥ λj+1, U = [u1, u2, …, un] is the orthogonal matrix formed by the eigenvectors corresponding to the eigenvalues, and V = Σ U Λ⁻¹; taking the r = 10 largest eigenvalues Λr = diag(λ1, λ2, …, λr) and the corresponding eigenvectors Ur = [u1, u2, …, ur]; constructing the whitening matrix Q = Ur·Λr^(−1/2); constructing the weight matrix W = (w1, w2, …, wn), whose weight vectors wi are updated by the following steps:
(a) choosing a random initial weight vector wi(old);
(b) letting u = zᵀ·wi(old), where z denotes the whitened sample data;
(c) wi(new) = E[z·g(u)] − E[g′(u)]·wi(old);
(d) normalizing wi(new) = wi(new) / ||wi(new)||;
(e) if wi(new) has not converged, i.e. |wi(new)ᵀ·wi(old)| is not yet sufficiently close to 1, setting wi(old) = wi(new) and returning to step (b); otherwise the update of wi terminates, and wi = wi(new);
wi(t) is the weight vector of the t-th iteration; g(u) = tanh(u), g′(u) = 1 − (tanh(u))²;
(2) after the weight matrix has been constructed, the optimization mapping vectors S = {s1, …, sr} are obtained from the whitening matrix and the weight matrix and are used to reduce the dimension of the sample-library features; for the dual-tree complex wavelet sample feature vectors xi of the vehicle templates, i ∈ {1, 2, …, M}, where M is the number of images in the vehicle template library, each is reshaped into the k × n two-dimensional matrix χi ∈ R^(k×n); letting Ym = χi·sm, m = 1, 2, …, r, the r independent principal components Y1, …, Yr of the dual-tree complex wavelet transform sample feature vector xi of the vehicle template are obtained, which together form the reduced-dimension feature of the sample feature vector xi;
(3) reducing the dimension of the dual-tree complex wavelet transform feature of the vehicle region to be identified: the extracted vehicle-image dual-tree complex wavelet transform feature vector x is reshaped into the k × n two-dimensional matrix χ; letting Y′m = χ·sm, m = 1, 2, …, r, the r independent principal components Y′1, …, Y′r of the image feature vector x are obtained, which together form the reduced-dimension feature of x;
(4) vehicle judgment:
for the extracted features, a support vector machine with a radial basis function kernel is selected for classification; supposing the feature weights of the vehicle-region and non-vehicle-region images belong to two classes w1 and w2, the linear discriminant function between them is d(x) = wᵀx + b, where b/||w|| is the vertical distance from the origin to the hyperplane and ||w|| is the 2-norm in Euclidean space; if the samples of w1 and w2 are separable, every i-th sample xi satisfies the constraint yi(⟨xi·w⟩ + b) − 1 ≥ 0, where b is the distance parameter of the support vector machine classifier; the points satisfying the above equation determine the scale factors w and b; these points lie on the two hyperplanes H1: ⟨xi·w⟩ + b = 1 and H2: ⟨xi·w⟩ + b = −1, the margin between the two data sets reduces to 2/||w||, and the largest margin between the two data sets is obtained by minimizing ||w||²/2; introducing positive Lagrange multipliers ai, i = 1, …, M, the objective function is
L(w, b, a) = ||w||²/2 − ∑_{i=1..M} ai·[yi(⟨xi·w⟩ + b) − 1];
solving it yields the optimal Lagrange multipliers ai*, i = 1, …, M, and hence bi* = yi − ⟨xi·w*⟩, where yi is the label of the i-th sample and bi* is the resulting distance parameter of the support vector machine classifier; thus, for the vehicle and non-vehicle regions, the optimal separating hyperplanes {wi,j, bi,j*}, i ∈ {1, 2}, j ∈ {1, 2}, are obtained, with discriminant function f(x) = sgn{⟨x·wi,j⟩ + bi,j*}, which is used to judge whether the region is a vehicle region; defining Tc as the accumulator of the vehicle-class decision, of length 2, indicating the vehicle and non-vehicle types and initialized to 0; each feature to be classified is fed into the set of separating hyperplanes to obtain fi,j(x), and the votes are accumulated into Tc to judge whether the region is a vehicle: let position_max be the position of max(Tc) in the accumulator Tc, position_max ∈ {1, 2}.
CN201310535448.9A 2013-11-01 2013-11-01 Preceding vehicle recognition method based on monocular vision Active CN103544487B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310535448.9A CN103544487B (en) 2013-11-01 2013-11-01 Preceding vehicle recognition method based on monocular vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310535448.9A CN103544487B (en) 2013-11-01 2013-11-01 Preceding vehicle recognition method based on monocular vision

Publications (2)

Publication Number Publication Date
CN103544487A CN103544487A (en) 2014-01-29
CN103544487B true CN103544487B (en) 2019-11-22

Family

ID=49967922

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310535448.9A Active CN103544487B (en) Preceding vehicle recognition method based on monocular vision

Country Status (1)

Country Link
CN (1) CN103544487B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6178280B2 (en) * 2014-04-24 2017-08-09 日立建機株式会社 Work machine ambient monitoring device
CN104077611B (en) * 2014-07-14 2017-06-09 南京原觉信息科技有限公司 Indoor scene monocular vision space recognition method under class ground gravitational field environment
CN107133591B (en) * 2017-05-05 2020-07-21 深圳前海华夏智信数据科技有限公司 Parking space detection method and device based on structured light
CN107169984A (en) * 2017-05-11 2017-09-15 南宁市正祥科技有限公司 A kind of underbody shadow detection method
CN107133596A (en) * 2017-05-11 2017-09-05 南宁市正祥科技有限公司 Front truck moving vehicle detection method based on underbody shade
CN107944428B (en) * 2017-12-15 2021-07-30 北京工业大学 Indoor scene semantic annotation method based on super-pixel set
CN109815812B (en) * 2018-12-21 2020-12-04 辽宁石油化工大学 Vehicle bottom edge positioning method based on horizontal edge information accumulation
CN109917359B (en) * 2019-03-19 2022-10-14 福州大学 Robust vehicle distance estimation method based on vehicle-mounted monocular vision

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101794515A (en) * 2010-03-29 2010-08-04 河海大学 Target detection system and method based on covariance and binary-tree support vector machine
CN102542260A (en) * 2011-12-30 2012-07-04 中南大学 Method for recognizing road traffic sign for unmanned vehicle

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101794515A (en) * 2010-03-29 2010-08-04 河海大学 Target detection system and method based on covariance and binary-tree support vector machine
CN102542260A (en) * 2011-12-30 2012-07-04 中南大学 Method for recognizing road traffic sign for unmanned vehicle

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on an adaptive edge detection method based on Canny theory; Zhang Lingyan (张玲艳); China Master's Theses Full-text Database, Information Science and Technology; No. 8, 2009-08-15; pp. 23-26 of the text *
Research on video-based vehicle detection algorithms for intelligent vehicles; Ou Zhifang (欧志芳); China Master's Theses Full-text Database, Information Science and Technology; No. 6, 2013-06-15; abstract, pp. 10 and 25-34 of the text *

Also Published As

Publication number Publication date
CN103544487A (en) 2014-01-29

Similar Documents

Publication Publication Date Title
CN103544487B (en) Preceding vehicle recognition method based on monocular vision
CN107657279B (en) Remote sensing target detection method based on small amount of samples
Munroe et al. Multi-class and single-class classification approaches to vehicle model recognition from images
US7853072B2 (en) System and method for detecting still objects in images
CN105894047B (en) A kind of face classification system based on three-dimensional data
CN104361343B (en) Vehicle type recognition method and its device
CN105740886B (en) A kind of automobile logo identification method based on machine learning
CN103632132A (en) Face detection and recognition method based on skin color segmentation and template matching
CN106682641A (en) Pedestrian identification method based on image with FHOG- LBPH feature
Elmikaty et al. Car detection in aerial images of dense urban areas
CN114359876B (en) Vehicle target identification method and storage medium
Zaarane et al. Real-time vehicle detection using cross-correlation and 2D-DWT for feature extraction
Chen et al. Vehicle detection based on multifeature extraction and recognition adopting RBF neural network on ADAS system
Wang et al. Real-time vehicle classification based on eigenface
CN104966064A (en) Pedestrian ahead distance measurement method based on visual sense
CN106803102B (en) Self-adaptive regional pooling object detection method based on SVR model
Dalka et al. Vehicle classification based on soft computing algorithms
Wang et al. A novel fine-grained method for vehicle type recognition based on the locally enhanced PCANet neural network
Chen et al. Context-aware lane marking detection on urban roads
Kataoka et al. Extended feature descriptor and vehicle motion model with tracking-by-detection for pedestrian active safety
Badura et al. Automatic car make recognition in low-quality images
CN108364027B (en) Rapid forward multi-vehicle-type vehicle detection method
Wang et al. Component-model based detection and recognition of road vehicles
Ktata et al. License plate localization using Gabor filters and neural networks
Wang et al. On-road vehicle detection through part model learning and probabilistic inference

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
CB02 Change of applicant information

Address after: 225000, Room 3006, Shuangqiao business hall, No. 413 Yangzijiang North Road, Hanjiang District, Yangzhou, Jiangsu Province

Applicant after: Yangzhou Auspicious Control Vehicle Electronics Co., Ltd

Address before: 211400, Science and Technology Pioneering Service Center, No. 9 Tai Tai Road, Yizheng Economic Development Zone, Yangzhou, Jiangsu

Applicant before: Yangzhou Auspicious Control Vehicle Electronics Co., Ltd

COR Change of bibliographic data
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Inventor after: Chen Jun

Inventor after: Yuan Jiang

Inventor before: Chen Jun

COR Change of bibliographic data
GR01 Patent grant
GR01 Patent grant