CN109977766A - A fine-grained vehicle recognition method based on local features - Google Patents
A fine-grained vehicle recognition method based on local features
- Publication number
- CN109977766A (application number CN201910122389.XA)
- Authority
- CN
- China
- Prior art keywords
- training
- feature
- image
- local
- salient
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/54—Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/08—Detecting or categorising vehicles
Abstract
The invention discloses a fine-grained vehicle recognition method based on local features. The steps are as follows: pre-train a convolutional neural network to obtain a coarsely trained network; on the coarsely trained network, use backpropagation to compute the matrix of partial derivatives of the loss function with respect to a training image I (this matrix is the image gradient of I), then generate a saliency map of the same size as I by gradient visualization; in the generated saliency map, use a threshold-segmentation algorithm to extract the local information that contributes to classification; feed the local information into the coarsely trained network to obtain the global feature and the local features of I, fuse the global feature with the local features, and train a classifier on the fused features; feed the vehicle image to be recognized into the trained classifier to obtain the corresponding vehicle recognition result. The invention improves vehicle recognition accuracy under a restricted viewing angle.
Description
Technical field
The invention belongs to the field of image classification technology, and in particular relates to a vehicle model recognition method.
Background technique
With the rapid development of China's economy, continuing population growth, and the acceleration of urbanization, motor vehicle ownership in China is also rising quickly. Data released by the Ministry of Public Security on July 6th, 2018 show that, as of the end of June, the national motor vehicle fleet had reached 319 million, of which private cars accounted for 180 million, and both continue to grow rapidly. Such a huge vehicle fleet brings traffic problems that cannot be ignored. The first is the problem of fake-plate vehicles: lawbreakers illegally obtain or forge the license plate, color, and model of a genuine registered vehicle so that a vehicle acquired through illegal channels (such as smuggling) appears "legal" on the surface. Fake-plate vehicles seriously infringe the rights of the original owners, since penalties incurred by the fake-plate vehicle are recorded against the original owner and cause economic loss; in addition, fake-plate vehicles are often used by offenders as tools of crime. Another problem that cannot be ignored is license-plate occlusion. Covering the license plate is very harmful: because the plate is blocked, drivers are no longer restrained by traffic cameras, and violations such as running red lights, driving against traffic, and illegal lane changes become more prominent, seriously threatening the lives and property of others, easily causing traffic accidents, disrupting normal traffic-management order, and greatly inconveniencing traffic authorities. It can thus be seen that license-plate information is only one part of vehicle information, and relying on it alone makes problems such as fake plates and occluded plates very difficult to solve. For this reason, recognizing vehicle models under a restricted viewing angle is the focus of this research.
The shooting angle of checkpoint vehicle images is essentially fixed: they are all frontal photographs of the vehicle, and the background is relatively uniform. Thanks to the widespread deployment of highway checkpoint monitoring systems, a large number of vehicle samples can be obtained, which provides a data basis for fine-grained vehicle recognition. Under this restricted angle, a vehicle sample contains only front-face information; other discriminative parts such as the two sides and the rear of the vehicle cannot be used, which makes feature extraction for vehicles more difficult.
Summary of the invention
In order to solve the technical problems raised in the background above, the invention proposes a fine-grained vehicle recognition method based on local features.
In order to achieve the above technical purpose, the technical solution of the invention is as follows:
A fine-grained vehicle recognition method based on local features, comprising the following steps:
(1) Pre-train a convolutional neural network to obtain a coarsely trained network;
(2) On the coarsely trained network, use backpropagation to compute the matrix of partial derivatives of the loss function with respect to a training image I; this matrix is the image gradient of I; then generate a saliency map by gradient visualization, the saliency map having the same size as I;
(3) In the saliency map generated in step (2), use a threshold-segmentation algorithm to extract the local information that contributes to classification;
(4) Feed the local information obtained in step (3) into the coarsely trained network to obtain the global feature and the local features of I; fuse the global feature with the local features, and train a classifier on the fused features;
(5) Feed the vehicle image to be recognized into the trained classifier to obtain the corresponding vehicle recognition result.
Further, in step (2), the final Softmax layer of the coarsely trained network is regarded as the loss function S(I, c), which denotes the score of training image I on class c, where I is a 3-dimensional tensor with height, width, and channel dimensions. At I = I0, a first-order Taylor expansion approximates S(I, c) by a linear function of I:

S(I, c) ≈ wᵀI + b

where w denotes the weight vector, b denotes the bias of the model, and w is the gradient of S(I, c) at the given image I0:

w = ∂S(I, c)/∂I |_(I=I0)

If the coarsely trained network has z hidden layers, denoted h^(i) with 1 ≤ i ≤ z, the chain rule gives:

∂S/∂I = (∂S/∂h^(z)) · (∂h^(z)/∂h^(z-1)) ⋯ (∂h^(1)/∂I)

This expression is computed by backpropagation through the coarsely trained network; the magnitude of w indicates the contribution of the corresponding pixel to class c, and the larger the magnitude, the greater the pixel's contribution to class c.
Further, in step (2), let M_I0 be the saliency map of image I0; the value pixel(i, j) of M_I0 at position (i, j) is computed as:

pixel(i, j) = max_d |w_h(i,j,d)|

where h(i, j, d) denotes the index of the element in w, and w_h(i,j,d) is the weight of pixel (i, j) in channel d.
Further, the detailed procedure of step (3) is as follows:
First, compute the average saliency map M_ave of the entire training set:

M_ave = (1/K) Σ_{i=1..K} M_Ii

where K is the number of samples in the training set and M_Ii is the saliency map of image Ii;
Then use a threshold-segmentation algorithm to divide the pixels of M_ave by a segmentation threshold T into two classes C0 and C1, where C0 is the target part and C1 is the background part; the goal of the threshold-segmentation algorithm is to find an optimal segmentation threshold T* that maximizes the gap between C0 and C1;
Let N0(T) and N1(T) denote the numbers of pixels in C0 and C1 respectively, and N the number of pixels in M_ave; the probabilities w0(T) and w1(T) that a pixel belongs to C0 and C1 are:

w0(T) = N0(T)/N,  w1(T) = N1(T)/N

Let μ0(T) and μ1(T) denote the mean pixel values of C0 and C1; the mean pixel value μ of M_ave is then:

μ = w0(T)*μ0(T) + w1(T)*μ1(T)

The between-class variance σ²(T) of C0 and C1 is:

σ²(T) = w0(T)*(μ0(T) - μ)² + w1(T)*(μ1(T) - μ)²

The optimal segmentation threshold T* maximizes the between-class variance of C0 and C1, that is:

T* = argmax_{pmin ≤ T ≤ pmax} σ²(T)

where pmin and pmax are respectively the minimum and maximum pixel values in M_ave.
In M_ave, pixels whose value exceeds T* are set to 1 and all others to 0, which yields the binary saliency map M_bin, from which the local features are extracted.
Further, in step (4), the local features are fused with the global feature by the following formula:

f = W0*f0 + Σ_{i=1..M} Wi*fi

where f denotes the fused feature, f0 denotes the global feature, W0 is the weight coefficient of f0, fi is the i-th local feature, Wi is the weight coefficient of fi, M is the number of local features, 0 ≤ W0, Wi ≤ 1, and W0 + Σ_{i=1..M} Wi = 1.
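The weighted fusion above can be sketched directly. In the illustration below, random vectors stand in for real CNN features, and the weights 0.4/0.2/0.2/0.2 are arbitrary values chosen only to satisfy the sum-to-one constraint:

```python
import numpy as np

def fuse_features(f0, local_feats, w0, w_locals):
    """Linear fusion f = W0*f0 + sum_i Wi*fi; the weights must sum to 1."""
    assert np.isclose(w0 + sum(w_locals), 1.0)
    f = w0 * f0
    for wi, fi in zip(w_locals, local_feats):
        f = f + wi * fi
    return f

# Toy 4096-dimensional features standing in for CNN activations of the
# whole image (f0) and the local regions (f1..f3).
rng = np.random.default_rng(3)
f0, f1, f2, f3 = (rng.normal(size=4096) for _ in range(4))
f = fuse_features(f0, [f1, f2, f3], 0.4, [0.2, 0.2, 0.2])
```

The fused vector keeps the dimensionality of its inputs, so the downstream classifier sees a single feature of the usual size.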
Further, in step (4), the obtained fused features are used as input to train CS-SVM classifiers. If there are L vehicle classes in total, then during training the samples of each class in turn are taken as one class and all remaining samples as the other class, thereby obtaining L CS-SVM classifiers.
The above technical scheme brings the following beneficial effects:
The invention makes full use of the front-face information of the vehicle, obtains local features containing sufficient discriminative information in an unsupervised manner, and fuses them with the global feature, improving vehicle recognition accuracy under a restricted viewing angle. Experiments verify that the invention reaches an accuracy of 98.41% on the CompCars dataset.
Brief description of the drawings
Fig. 1 is the overall flowchart of the invention;
Fig. 2 shows visualization examples of saliency maps in the invention;
Fig. 3 is a bar chart of σ²(T) for different values of T in the embodiment;
Fig. 4 is a schematic diagram of the manually labeled rectangles in the embodiment;
Fig. 5 is a schematic diagram of the local regions located according to Fig. 4.
Specific embodiment
The technical solution of the invention is described in detail below with reference to the drawings.
A fine-grained vehicle recognition method based on local features proceeds, as shown in Fig. 1, in the following steps.
Step 1: Pre-train a convolutional neural network to obtain a coarsely trained network.
In this embodiment, AlexNet is selected as the convolutional neural network; other convolutional neural network architectures may also be used.
Step 2: On the coarsely trained network, use backpropagation to compute the matrix of partial derivatives of the loss function with respect to training image I; this matrix is the image gradient of I. Then generate a saliency map of the same size as I by gradient visualization. A preferred implementation of this step is as follows:
The final Softmax layer of the convolutional neural network is regarded as the loss function S(I, c), which denotes the score of image I on class c, where I is a 3-dimensional tensor (height m, width n, channel number d). The key to fine-grained image classification is finding discriminative local information; concretely, given an image I0 and its class c0, find the pixels of I0 that contribute positively to S(I0, c0). Because a convolutional neural network is a complex model, S(I, c) is highly nonlinear, so at I = I0 a first-order Taylor expansion approximates S(I, c) by a linear function of I:

S(I, c) ≈ wᵀI + b

where w denotes the weight vector and b the bias of the model; w and b are three-dimensional. In fact, w is the gradient of S(I, c) at I0:

w = ∂S(I, c)/∂I |_(I=I0)

Suppose the CNN has z hidden layers, denoted h^(i) (1 ≤ i ≤ z); the chain rule gives:

∂S/∂I = (∂S/∂h^(z)) · (∂h^(z)/∂h^(z-1)) ⋯ (∂h^(1)/∂I)

This expression is computed by backpropagation through the convolutional neural network. The magnitude of w indicates the contribution of the corresponding pixel to class c: the larger the magnitude, the greater the pixel's contribution to class c.
Let h(i, j, d) denote the index of an element in w and M_I0 the saliency map of image I0. The weight of pixel (i, j) in channel d is w_h(i,j,d), and pixel(i, j) denotes the value of M_I0 at position (i, j); then

pixel(i, j) = max_d |w_h(i,j,d)|

where 1 ≤ i ≤ m and 1 ≤ j ≤ n. Clearly, M_I0 has the same size as I0, and every element of M_I0 is non-negative. Panels (a), (b), (c), and (d) of Fig. 2 show four visualization examples of saliency maps.
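The saliency computation can be illustrated in a few lines. In the sketch below, a random array stands in for the gradient w (in practice w comes from backpropagation through the trained CNN); the map simply takes, at each position, the maximum absolute weight across channels:

```python
import numpy as np

def saliency_map(w):
    """Collapse a gradient tensor w of shape (m, n, d) into an (m, n)
    saliency map: pixel(i, j) = max over channels d of |w[i, j, d]|."""
    return np.abs(w).max(axis=2)

# Toy stand-in for the image gradient dS/dI at I0.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 5, 3))   # height m=4, width n=5, channels d=3
M = saliency_map(w)
```

By construction M has the same spatial size as the image and all its elements are non-negative, as stated above.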
Step 3: In the saliency map generated in step 2, use a threshold-segmentation algorithm to extract the local information that contributes to classification. A preferred implementation of this step is as follows:
First, the average saliency map of the entire training set is obtained:

M_ave = (1/K) Σ_{i=1..K} M_Ii

where K is the number of samples in the training set and M_Ii is the saliency map of image Ii. The brighter parts of the average saliency map M_ave are the more discriminative parts. The pixels of M_ave are divided by a threshold T into two classes C0 and C1, where C0 is the target part and C1 the background part. The invention seeks an optimal threshold T* that maximizes the gap between the two classes.
Let N0(T) and N1(T) denote the numbers of pixels in C0 and C1 respectively, and N the number of pixels in M_ave. Let w0(T) and w1(T) denote the probabilities that a pixel belongs to C0 and C1 respectively:

w0(T) = N0(T)/N,  w1(T) = N1(T)/N

Let μ0(T) and μ1(T) denote the mean pixel values of C0 and C1; the mean pixel value of M_ave is μ, so:

μ = w0(T)*μ0(T) + w1(T)*μ1(T)

The between-class variance of C0 and C1 is denoted σ²(T):

σ²(T) = w0(T)*(μ0(T) - μ)² + w1(T)*(μ1(T) - μ)²

σ²(T) can equivalently be written as:

σ²(T) = w0(T)*w1(T)*(μ0(T) - μ1(T))²

The optimal threshold T* maximizes the between-class variance:

T* = argmax_{pmin ≤ T ≤ pmax} σ²(T)

where pmin and pmax are respectively the minimum and maximum pixel values in M_ave.
For M_ave on the CompCars dataset, pmin is 0.0019 and pmax is 0.18298. A list of candidate thresholds is built over the interval [0.002, 0.18] with a step of 0.001, and the optimal threshold T* is found by enumeration over this list. Fig. 3 shows σ²(T) for the different values of T. In M_ave, pixels whose value exceeds T* are set to 1 and all others to 0, which yields the binary saliency map M_bin. The white regions of M_bin are the discriminative local regions; they are enclosed manually with rectangles, the selection principle being that each rectangle should contain as much white area as possible. Fig. 4 shows the three manually chosen rectangles A, B, and C. A rectangle is denoted by the coordinates of its upper-left and lower-right corners: the coordinates of rectangle A are [(47, 143), (82, 175)], those of rectangle B are [(150, 145), (183, 178)], and those of rectangle C are [(150, 145), (183, 178)]. Given an image, its discriminative regions correspond one-to-one to the positions of the rectangles in Fig. 4, as shown in Fig. 5.
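The enumerative search for T* can be sketched as follows. This is a minimal illustration on a synthetic average saliency map (the real M_ave comes from the training set); it uses the equivalent form σ²(T) = w0*w1*(μ0 - μ1)² given above:

```python
import numpy as np

def otsu_threshold(m_ave, t_min=0.002, t_max=0.18, step=0.001):
    """Enumerate candidate thresholds and return the T maximizing the
    between-class variance sigma^2(T) = w0*w1*(mu0 - mu1)^2."""
    p = m_ave.ravel()
    n = p.size
    best_t, best_var = t_min, -1.0
    for t in np.arange(t_min, t_max + step / 2, step):
        fg = p > t                        # C0: target part
        n0 = int(fg.sum())
        if n0 == 0 or n0 == n:            # skip degenerate splits
            continue
        w0, w1 = n0 / n, (n - n0) / n
        var = w0 * w1 * (p[fg].mean() - p[~fg].mean()) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Synthetic average saliency map: dim background plus one bright patch.
rng = np.random.default_rng(1)
m_ave = rng.uniform(0.0, 0.04, size=(32, 32))
m_ave[10:20, 10:20] += 0.12               # discriminative region
t_star = otsu_threshold(m_ave)
m_bin = (m_ave > t_star).astype(np.uint8)  # binary saliency map M_bin
```

On this toy map the search lands between the two brightness modes, so M_bin isolates exactly the bright patch.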
Step 4: Feed the local information obtained in step 3 into the coarsely trained network to obtain the global feature and the local features of training image I; fuse the global feature with the local features, and train a classifier on the fused features.
After the local regions are obtained, the invention feeds them into the CNN model, forcing the model to focus on the key areas of the image and thereby improving the accuracy of fine-grained image recognition.
As shown in Fig. 5, three target regions P1, P2, and P3 are obtained for an input image I. The three target regions are used as model inputs and participate in training; note that each region must be resized to 227 × 227 before being fed into the network. The global feature and the local features of image I are then obtained: f0 denotes the global feature of the image and fi the feature of target region Pi, i ∈ {1, 2, 3}. A 4096-dimensional feature f is obtained by linear combination:

f = W0*f0 + W1*f1 + W2*f2 + W3*f3

where W0, W1, W2, W3 are the feature weight coefficients, 0 ≤ W0, W1, W2, W3 ≤ 1, and W0 + W1 + W2 + W3 = 1.
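Preparing a region input can be sketched as follows, assuming the rectangle coordinates are (x, y) pairs as in Fig. 4 and using a simple nearest-neighbor resize; a production pipeline would use proper interpolation:

```python
import numpy as np

def crop_and_resize(img, rect, size=227):
    """Crop img (H, W, C) to rect [(x1, y1), (x2, y2)], given as the
    upper-left and lower-right corners, then resize the patch to
    size x size with nearest-neighbor sampling."""
    (x1, y1), (x2, y2) = rect
    patch = img[y1:y2, x1:x2]              # rows are y, columns are x
    h, w = patch.shape[:2]
    rows = np.arange(size) * h // size     # nearest source row per output row
    cols = np.arange(size) * w // size
    return patch[rows][:, cols]

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)
rect_a = [(47, 143), (82, 175)]            # rectangle A from Fig. 4
p1 = crop_and_resize(img, rect_a)
```

The resized 227 × 227 patch matches the input size expected by AlexNet.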
After feature f is obtained, it is used as input to train one-vs-rest SVMs. Suppose there are L classes in total; during training, the samples of each class in turn are taken as the positive class and all remaining samples as the negative class, which yields L CS-SVM classifiers.
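The one-vs-rest training scheme can be sketched as follows. Since the text does not detail the CS-SVM itself, a nearest-centroid scorer stands in for it here; the point of the sketch is the decomposition into L binary classifiers, shown on synthetic data:

```python
import numpy as np

def train_one_vs_rest(X, y, num_classes):
    """One scorer per class: class-c samples form the positive set and
    all remaining samples the negative set. A nearest-centroid scorer
    stands in for the CS-SVM, which the text does not specify further."""
    scorers = []
    for c in range(num_classes):
        pos = X[y == c].mean(axis=0)
        neg = X[y != c].mean(axis=0)
        scorers.append((pos, neg))
    return scorers

def predict(scorers, x):
    # Higher score = closer to the positive centroid than to the negative one.
    scores = [np.linalg.norm(x - neg) - np.linalg.norm(x - pos)
              for pos, neg in scorers]
    return int(np.argmax(scores))

# Three well-separated synthetic "vehicle model" feature clusters.
rng = np.random.default_rng(4)
centers = [0.0, 10.0, 25.0]
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(20, 8)) for c in centers])
y = np.repeat(np.arange(3), 20)
classifiers = train_one_vs_rest(X, y, 3)
```

At prediction time, the class whose binary scorer responds most strongly is taken as the result, mirroring the L-classifier scheme above.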
Step 5: Feed the vehicle image to be recognized into the trained classifier to obtain the corresponding vehicle recognition result.
The embodiment merely illustrates the technical idea of the invention and does not limit the scope of protection of the invention; any change made to the technical scheme on the basis of the technical idea proposed by the invention falls within the scope of protection of the invention.
Claims (6)
1. A fine-grained vehicle recognition method based on local features, characterized by comprising the following steps:
(1) pre-train a convolutional neural network to obtain a coarsely trained network;
(2) on the coarsely trained network, use backpropagation to compute the matrix of partial derivatives of the loss function with respect to a training image I; this matrix is the image gradient of I; then generate a saliency map of the same size as I by gradient visualization;
(3) in the saliency map generated in step (2), use a threshold-segmentation algorithm to extract the local information that contributes to classification;
(4) feed the local information obtained in step (3) into the coarsely trained network to obtain the global feature and the local features of I; fuse the global feature with the local features, and train a classifier on the fused features;
(5) feed the vehicle image to be recognized into the trained classifier to obtain the corresponding vehicle recognition result.
2. The fine-grained vehicle recognition method based on local features according to claim 1, characterized in that, in step (2), the final Softmax layer of the coarsely trained network is regarded as the loss function S(I, c), which denotes the score of training image I on class c, where I is a 3-dimensional tensor with height, width, and channel dimensions; at I = I0, a first-order Taylor expansion approximates S(I, c) by a linear function of I:

S(I, c) ≈ wᵀI + b

where w denotes the weight vector, b denotes the bias of the model, and w is the gradient of S(I, c) at the given image I0:

w = ∂S(I, c)/∂I |_(I=I0)

If the coarsely trained network has z hidden layers, denoted h^(i) with 1 ≤ i ≤ z, the chain rule gives:

∂S/∂I = (∂S/∂h^(z)) · (∂h^(z)/∂h^(z-1)) ⋯ (∂h^(1)/∂I)

This expression is computed by backpropagation through the coarsely trained network; the magnitude of w indicates the contribution of the corresponding pixel to class c, and the larger the magnitude, the greater the pixel's contribution to class c.
3. The fine-grained vehicle recognition method based on local features according to claim 2, characterized in that, in step (2), let M_I0 be the saliency map of image I0; the pixel value pixel(i, j) of M_I0 at position (i, j) is computed as:

pixel(i, j) = max_d |w_h(i,j,d)|

where h(i, j, d) denotes the index of the element in w, and w_h(i,j,d) is the weight of pixel (i, j) in channel d.
4. The fine-grained vehicle recognition method based on local features according to claim 1, characterized in that the detailed procedure of step (3) is as follows:
First, compute the average saliency map M_ave of the entire training set:

M_ave = (1/K) Σ_{i=1..K} M_Ii

where K is the number of samples in the training set and M_Ii is the saliency map of image Ii;
Then use a threshold-segmentation algorithm to divide the pixels of M_ave by a segmentation threshold T into two classes C0 and C1, where C0 is the target part and C1 is the background part; the goal of the threshold-segmentation algorithm is to find an optimal segmentation threshold T* that maximizes the gap between C0 and C1;
Let N0(T) and N1(T) denote the numbers of pixels in C0 and C1 respectively, and N the number of pixels in M_ave; the probabilities w0(T) and w1(T) that a pixel belongs to C0 and C1 are:

w0(T) = N0(T)/N,  w1(T) = N1(T)/N

Let μ0(T) and μ1(T) denote the mean pixel values of C0 and C1; the mean pixel value μ of M_ave is then:

μ = w0(T)*μ0(T) + w1(T)*μ1(T)

The between-class variance σ²(T) of C0 and C1 is:

σ²(T) = w0(T)*(μ0(T) - μ)² + w1(T)*(μ1(T) - μ)²

The optimal segmentation threshold T* maximizes the between-class variance of C0 and C1, that is:

T* = argmax_{pmin ≤ T ≤ pmax} σ²(T)

where pmin and pmax are respectively the minimum and maximum pixel values in M_ave;
In M_ave, pixels whose value exceeds T* are set to 1 and all others to 0, which yields the binary saliency map M_bin, from which the local features are extracted.
5. The fine-grained vehicle recognition method based on local features according to claim 1, characterized in that, in step (4), the local features are fused with the global feature by the following formula:

f = W0*f0 + Σ_{i=1..M} Wi*fi

where f denotes the fused feature, f0 denotes the global feature, W0 is the weight coefficient of f0, fi is the i-th local feature, Wi is the weight coefficient of fi, M is the number of local features, 0 ≤ W0, Wi ≤ 1, and W0 + Σ_{i=1..M} Wi = 1.
6. The fine-grained vehicle recognition method based on local features according to claim 1, characterized in that, in step (4), the obtained fused features are used as input to train CS-SVM classifiers; if there are L vehicle classes in total, then during training the samples of each class in turn are taken as one class and all remaining samples as the other class, thereby obtaining L CS-SVM classifiers.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910122389.XA CN109977766A (en) | 2019-02-18 | 2019-02-18 | A fine-grained vehicle recognition method based on local features |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910122389.XA CN109977766A (en) | 2019-02-18 | 2019-02-18 | A fine-grained vehicle recognition method based on local features |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109977766A true CN109977766A (en) | 2019-07-05 |
Family
ID=67077074
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910122389.XA Pending CN109977766A (en) | A fine-grained vehicle recognition method based on local features | 2019-02-18 | 2019-02-18 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109977766A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103413320A (en) * | 2013-08-30 | 2013-11-27 | 上海海事大学 | Port contaminant saliency detection method |
CN105005761A (en) * | 2015-06-16 | 2015-10-28 | 北京师范大学 | Panchromatic high-resolution remote sensing image road detection method in combination with significance analysis |
CN105335710A (en) * | 2015-10-22 | 2016-02-17 | 合肥工业大学 | Fine vehicle model identification method based on multi-stage classifier |
CN105469090A (en) * | 2015-11-19 | 2016-04-06 | 南京航空航天大学 | Frequency-domain-residual-error-based small target detection method and apparatus in infrared image |
CN106384100A (en) * | 2016-09-28 | 2017-02-08 | 武汉大学 | Component-based fine vehicle model recognition method |
CN107665353A (en) * | 2017-09-15 | 2018-02-06 | 平安科技(深圳)有限公司 | Model recognizing method, device, equipment and computer-readable recording medium based on convolutional neural networks |
US20180181864A1 (en) * | 2016-12-27 | 2018-06-28 | Texas Instruments Incorporated | Sparsified Training of Convolutional Neural Networks |
- 2019-02-18: application CN201910122389.XA filed in China; published as CN109977766A (status: active, Pending)
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103413320A (en) * | 2013-08-30 | 2013-11-27 | 上海海事大学 | Port contaminant saliency detection method |
CN105005761A (en) * | 2015-06-16 | 2015-10-28 | 北京师范大学 | Panchromatic high-resolution remote sensing image road detection method in combination with significance analysis |
CN105335710A (en) * | 2015-10-22 | 2016-02-17 | 合肥工业大学 | Fine vehicle model identification method based on multi-stage classifier |
CN105469090A (en) * | 2015-11-19 | 2016-04-06 | 南京航空航天大学 | Frequency-domain-residual-error-based small target detection method and apparatus in infrared image |
CN106384100A (en) * | 2016-09-28 | 2017-02-08 | 武汉大学 | Component-based fine vehicle model recognition method |
US20180181864A1 (en) * | 2016-12-27 | 2018-06-28 | Texas Instruments Incorporated | Sparsified Training of Convolutional Neural Networks |
CN107665353A (en) * | 2017-09-15 | 2018-02-06 | 平安科技(深圳)有限公司 | Model recognizing method, device, equipment and computer-readable recording medium based on convolutional neural networks |
Non-Patent Citations (1)
Title |
---|
YE ZHOU et al.: "A Novel Part-Based Model for Fine-Grained Vehicle Recognition", International Conference on Cloud Computing and Security * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20190705 |