CN105654066A - Vehicle identification method and device - Google Patents

Vehicle identification method and device

Info

Publication number
CN105654066A
CN105654066A
Authority
CN
China
Prior art keywords
vehicle
deep learning
learning network
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610073180.5A
Other languages
Chinese (zh)
Inventor
丁鹏 (Ding Peng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Deepglint Information Technology Co ltd
Original Assignee
Beijing Deepglint Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Deepglint Information Technology Co ltd filed Critical Beijing Deepglint Information Technology Co ltd
Priority to CN201610073180.5A
Publication of CN105654066A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/243 Classification techniques relating to the number of classes
    • G06F18/2431 Multiple classes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/30 Noise filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a vehicle identification method and device. The vehicle identification method comprises the following steps: acquiring a vehicle image to be identified; identifying the vehicle image to be identified with a pre-trained deep learning network, wherein the network structure of the deep learning network comprises a convolutional layer, a pooling layer and a fully connected layer, the pooling layer is connected after the convolutional layer, the fully connected layer is connected after the pooling layer, and each output node of the last fully connected layer is a vehicle attribute probability of the vehicle image; and determining vehicle attribute information of the vehicle image to be identified according to the vehicle attribute probabilities. Because the scheme identifies vehicles with a deep learning network, and a deep learning network is able to characterize and distinguish objects, the accuracy is higher than that of the existing approach of classifying with manually defined features, and the false alarm rate and the miss rate can be reduced at the same time.

Description

Vehicle identification method and device
Technical field
The present application relates to the technical field of computer vision, and in particular to a vehicle identification method and device.
Background art
At present, identifying specific content in an image usually comprises the following steps:
In the first step, the position of the object of interest in the image is detected. For example, to identify a vehicle, a detector is first used to find the vehicle in the image, and the output of the detector is the coordinates of the vehicle in the image.
In the second step, the vehicle is cropped from the original image according to the coordinates, and the cropped image is fed into a classifier; the output of the classifier is the recognition result for the vehicle.
In the second step, the pixel values of the input image are usually converted into manually defined features (human-engineered features), such as scale-invariant feature transform (SIFT) features or histogram of oriented gradients (HOG) features, and these features are then fed into a classifier for classification to obtain the recognition result for the object. With this approach, because the classification algorithm classifies on the basis of manually defined features, the model used generally contains only one hidden layer for feature extraction, and such features are often insufficient to characterize and distinguish objects, so the recognition accuracy is relatively low.
The deficiency of the prior art is that:
the accuracy of identifying objects with the existing approach is relatively low.
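For context only, the hand-crafted-feature pipeline described above might look like the following Python sketch; the data is synthetic and the library calls merely illustrate the prior-art approach (hand-crafted HOG features fed to a shallow classifier), they are not part of the patent:

```python
import numpy as np
from skimage.feature import hog          # histogram of oriented gradients
from sklearn.svm import LinearSVC        # a typical shallow classifier

# Synthetic stand-ins for cropped grayscale vehicle images and their class labels.
images = np.random.rand(20, 64, 64)
labels = np.random.randint(0, 2, size=20)

# Step 1 of the prior art: convert raw pixels into manually defined features (HOG here).
features = np.array([hog(img, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for img in images])

# Step 2: feed the features into a classifier and predict.
clf = LinearSVC().fit(features, labels)
print(clf.predict(features[:1]))
```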
Summary of the invention
The embodiments of the present application propose a vehicle identification method and device, to solve the technical problem that the accuracy of object identification methods in the prior art is relatively low.
An embodiment of the present application provides a vehicle identification method, comprising the following steps:
obtaining a vehicle image to be identified;
identifying the vehicle image to be identified with a deep learning network obtained by pre-training, wherein the network structure of the deep learning network comprises convolutional layers, pooling layers and fully connected layers, a pooling layer is connected after a convolutional layer, a fully connected layer is connected after a pooling layer, and each output node of the last fully connected layer is a vehicle attribute probability of the vehicle image;
determining vehicle attribute information of the vehicle image to be identified according to the vehicle attribute probabilities.
An embodiment of the present application provides a vehicle identification device, comprising:
an acquisition module, configured to obtain a vehicle image to be identified;
a training module, configured to train a deep learning network, wherein the network structure of the deep learning network comprises convolutional layers, pooling layers and fully connected layers, a pooling layer is connected after a convolutional layer, a fully connected layer is connected after a pooling layer, and each output node of the last fully connected layer is a vehicle attribute probability of the vehicle image;
an identification module, configured to identify the vehicle image to be identified with the deep learning network obtained by pre-training;
a determination module, configured to determine vehicle attribute information of the vehicle image to be identified according to the vehicle attribute probabilities.
The beneficial effects are as follows:
With the vehicle identification method and device provided by the embodiments of the present application, after the vehicle image to be identified is obtained, there is no need for the user to manually define features for classification; the vehicle image to be identified can be identified directly with the pre-trained deep learning network, and the vehicle attribute probabilities are obtained after the image passes through the convolutional layers, pooling layers and fully connected layers in sequence, so that the vehicle attribute information is determined. Because the scheme provided by the embodiments of the present application identifies vehicles with a deep learning network, and a deep learning network is sufficient to characterize and distinguish objects, its accuracy is higher than that of the existing approach of classifying with manually defined features, so that the false alarm rate and the miss rate are reduced at the same time.
Brief description of the drawings
Specific embodiments of the present application are described below with reference to the accompanying drawings, in which:
Fig. 1 is a schematic flowchart of the implementation of the vehicle identification method in an embodiment of the present application;
Fig. 2 is a schematic structural diagram of the deep learning network in an embodiment of the present application;
Fig. 3 is a schematic structural diagram of the vehicle identification device in an embodiment of the present application.
Detailed description of the embodiments
In order to make the technical solutions and advantages of the present application clearer, exemplary embodiments of the present application are described in more detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present application, not an exhaustive list of all embodiments. Where there is no conflict, the embodiments in this description and the features in the embodiments can be combined with each other.
During the course of the invention, the inventor noticed that the existing approach also has the following shortcomings:
1) False alarms and misses are in conflict: the model hyperparameters can be adjusted manually so that the false alarm rate decreases while the miss rate rises, and vice versa. Because the accuracy of the existing approach is not high, no matter how the parameters are adjusted, the false alarm rate and the miss rate of the results are both difficult to reduce at the same time;
2) existing algorithms are based on manually defined features, and manual involvement is needed when features are extracted after the image is input;
3) the prior art mostly uses shallow models, which cannot characterize well the features of the objects to be classified.
In view of the deficiencies of the prior art, the embodiments of the present application propose a vehicle identification method and device, which are described below.
Fig. 1 is a schematic flowchart of the implementation of the vehicle identification method in an embodiment of the present application. As shown in the figure, the vehicle identification method may include the following steps:
Step 101: obtain a vehicle image to be identified;
Step 102: identify the vehicle image to be identified with a deep learning network obtained by pre-training, wherein the network structure of the deep learning network comprises convolutional layers, pooling layers and fully connected layers, a pooling layer is connected after a convolutional layer, a fully connected layer is connected after a pooling layer, and each output node of the last fully connected layer is a vehicle attribute probability of the vehicle image;
Step 103: determine vehicle attribute information of the vehicle image to be identified according to the vehicle attribute probabilities.
In a specific implementation, the vehicle image to be identified may be obtained first. The image may contain a vehicle with certain attributes, and the attributes may be the vehicle make, model, year and so on; for example, the vehicle image to be identified may contain an Audi-A4-2012.
The vehicle image to be identified is then identified with the deep learning network obtained by pre-training. Deep learning is a branch of neural networks that has found growing application in recent years in fields such as computer vision and speech recognition. It is a kind of deep neural network approach that addresses the training problem: by combining low-level features it forms more abstract high-level representations of attribute categories or features, in order to discover distributed feature representations of the data.
The deep learning network in the embodiments of the present application may include three kinds of layers: convolutional layers, pooling layers and fully connected layers, where:
the convolutional layer (convolution) strengthens the features of the original signal and reduces noise through the convolution operation; the specific convolution calculation can be implemented with the prior art;
the pooling layer (pooling) exploits the locality of images to reduce the number of features by sampling, and may include max pooling, average pooling, random pooling and other modes; the specific implementation can use the prior art;
in the fully connected layer (fully connected), each neuron is connected to every neuron of the next layer, the same as in a traditional multi-layer perceptron (MLP) neural network, and it performs ordinary classification.
Taking the vehicle image to be identified as input, the convolution operation is applied from the input layer to the convolutional layer; each neuron of the convolutional layer can be connected to a local receptive field of a certain size in the input layer, and the features of the vehicle image to be identified are obtained after convolution. The process from the convolutional layer to the pooling layer may be called pooling, and its purpose is to reduce the number of features of the previous layer. The features obtained after the convolutional layers and pooling layers can be classified by the fully connected layers, and the final result is output after the computation of the fully connected layers.
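As a toy illustration of the pooling idea described above (keeping the largest value in each local window so that the feature map shrinks), consider the following Python sketch; the feature map values are made up and the routine is only a minimal example of max pooling, not the patent's implementation:

```python
import numpy as np

def max_pool(feature_map, size=2, stride=2):
    """Max pooling: keep the largest value in each size x size window."""
    h, w = feature_map.shape
    out_h, out_w = (h - size) // stride + 1, (w - size) // stride + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            window = feature_map[i * stride:i * stride + size, j * stride:j * stride + size]
            out[i, j] = window.max()
    return out

feature_map = np.arange(16).reshape(4, 4)   # a made-up 4x4 feature map
print(max_pool(feature_map))                # 2x2 map of local maxima: [[5, 7], [13, 15]]
```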
Each output node of the last fully connected layer is a vehicle attribute probability of the vehicle image, that is, each output node outputs the probability that the vehicle belongs to a certain attribute. For example, the first output node gives the probability that the vehicle is an Audi-A4-2012, the second output node gives the probability that the vehicle is an Audi-A3-2010, and so on. The attribute information of the vehicle is finally determined according to the vehicle attribute probabilities, that is, which make, model and year the vehicle belongs to is determined according to the magnitude of the probabilities. In a specific implementation, the make, model and year with the largest vehicle attribute probability may be taken as the result; for example, if the first output node gives a probability of 90% that the vehicle is an Audi-A4-2012, and the probabilities output by the other nodes are all less than 90%, then it can be determined that the vehicle is an Audi-A4-2012.
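A rough sketch of this last step is given below; the class list and raw scores are hypothetical, and the snippet only illustrates turning the outputs of the last fully connected layer into probabilities with a softmax and picking the most probable class:

```python
import numpy as np

# Hypothetical raw scores from the last fully connected layer,
# one score per vehicle make-model-year class.
classes = ["Audi-A4-2012", "Audi-A3-2010", "BMW-320-2013"]
fc_output = np.array([4.1, 1.2, 0.3])

# Softmax turns the scores into probabilities that sum to 1.
probs = np.exp(fc_output - fc_output.max())
probs /= probs.sum()

# The predicted vehicle attribute is the class with the largest probability.
best = int(np.argmax(probs))
print(classes[best], float(probs[best]))  # here "Audi-A4-2012" gets the highest probability
```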
With the vehicle identification method and device provided by the embodiments of the present application, after the vehicle image to be identified is obtained, there is no need for the user to manually define features for classification; the vehicle image to be identified can be identified directly with the pre-trained deep learning network, and the vehicle attribute probabilities are obtained after the image passes through the convolutional layers, pooling layers and fully connected layers in sequence, so that the vehicle attribute information is determined. Because the scheme provided by the embodiments of the present application identifies vehicles with a deep learning network, and a deep learning network is sufficient to characterize and distinguish objects, its accuracy is higher than that of the existing approach of classifying with manually defined features, so that the false alarm rate and the miss rate are reduced at the same time.
In implementation, the training of the deep learning network may specifically include:
obtaining labeled vehicle image samples, where the labels include the attribute information of the vehicles;
classifying the vehicle image samples with a deep learning network whose parameters are initialized in advance;
back-propagating, layer by layer, the difference between the output of the deep learning network and the attribute information of the vehicles into the deep learning network, to train the parameters of the deep learning network.
In a specific implementation, a number of vehicle image samples may be obtained. These samples may include vehicle images of various makes and models; after the vehicle images are obtained, the vehicle body and other attributes can be annotated on each image, and the make, model and year of the vehicle can be labeled. For example, vehicle images of nearly 2,000 make-model classes are obtained and annotated manually: a bounding box is drawn in each image and the make, model and year of the vehicle (such as Audi-A4-2012) are given; the number of annotated images may be more than 200,000.
The vehicle images are then classified with the deep learning network whose parameters are initialized in advance. Each output node of the last fully connected layer of the deep learning network corresponds to the probability that the image belongs to the corresponding vehicle class; this operation may also be called softmax. The output probabilities may form an array.
Finally, the output probabilities are compared with the ground-truth labels (which may be an array of the same length as the output probabilities), and the difference between the two is calculated. In a specific implementation, a cross entropy loss function can be used to measure the difference between the model output and the true values. This difference is back-propagated layer by layer into the deep learning network to train the parameters of the model.
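A minimal sketch of how such a difference can be measured is shown below; the probabilities and label are hypothetical and only illustrate the cross entropy loss mentioned above, not the patent's actual training code:

```python
import numpy as np

def cross_entropy(probs, label_index):
    """Cross entropy between the network's output probabilities and a one-hot
    ground-truth label, given here as the index of the true class."""
    return -np.log(probs[label_index] + 1e-12)

# Hypothetical softmax output for a 3-class toy example and its true label.
probs = np.array([0.7, 0.2, 0.1])
loss = cross_entropy(probs, label_index=0)
print(loss)  # small loss, since the true class already has the highest probability
```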
The embodiments of the present application continuously train the parameters of the deep learning network model, so that the deep learning network can identify vehicles accurately; no manual involvement is needed, and the goal of classification is achieved automatically by machine deep learning.
In implementation, classifying the vehicle image samples with the deep learning network whose parameters are initialized in advance may specifically be: performing convolution calculations on the vehicle image samples with convolution kernels whose parameters are initialized in advance, and obtaining the vehicle attribute probabilities of the vehicle image samples through the pooling operations of the pooling layers and the fully connected operations of the fully connected layers;
back-propagating, layer by layer, the difference between the output of the deep learning network and the attribute information of the vehicles into the deep learning network to train the parameters of the deep learning network may specifically be: when there is a difference between the vehicle attribute probabilities and the attribute information of the vehicle, adjusting the parameters in the convolution kernels until the output vehicle attribute probabilities are consistent with the attribute information of the vehicle.
In a specific implementation, the vehicles in the vehicle image samples can be labeled, and the labels may specifically be the attribute information of the vehicles; when the parameters of the deep learning network are trained, the labels can be used as the reference standard.
Classifying the vehicle image samples with the deep learning network whose parameters are initialized in advance may specifically be: performing convolution calculations on the vehicle image samples with convolution kernels whose parameters are initialized in advance, and obtaining the vehicle attribute probabilities of the vehicle image samples through the pooling operations of the pooling layers and the fully connected operations of the fully connected layers. In a specific implementation, a convolution kernel may be of size 1x1, 3x3, 5x5 and so on, and a convolution kernel contains multiple values; for example, a 3x3 convolution kernel contains 9 values. The convolution kernels in the embodiments of the present application can be initialized with parameters in advance.
Convolution calculations are performed with the convolution kernels carrying the initial parameters, and the vehicle attribute probabilities of the vehicle image samples are finally output after the pooling layers and fully connected layers.
Whether the output vehicle attribute probabilities are consistent with the attribute information of the vehicle is then judged; when there is a difference between the vehicle attribute probabilities of the vehicle image and the attribute information of the vehicle, the parameters in the convolution kernels are adjusted until the output vehicle attribute probabilities are consistent with the attribute information of the vehicle.
For example, suppose the output vehicle attribute probability is largest for Audi-A4-2012, but the pre-labeled vehicle attribute information is Audi-A3-2010; then the parameters in the convolution kernels are readjusted and the recognition through the convolutional layers, pooling layers and fully connected layers is carried out again, until the output vehicle attribute probability is largest for Audi-A3-2010, consistent with the true vehicle attribute information, which completes the training of the convolution kernel parameters.
In implementation, training the parameters of the deep learning network may specifically use the deep network training tool caffe, with the following caffe parameters: the base learning rate ranges from 0.0001 to 0.01, the momentum ranges from 0.9 to 0.99, and the weight decay coefficient ranges from 0.0001 to 0.001.
In a specific implementation, the deep network training tool caffe can be used for model training. caffe is a clear and efficient deep learning framework; besides the network structure file, a solver file can also be defined when this tool is used, and the solver file specifies the method for optimizing (that is, training) the model, namely the back-propagation algorithm for the parameters.
When caffe is used, the parameters can be set as follows:
base learning rate (base_lr): 0.0001 to 0.01;
momentum: 0.9 to 0.99;
weight decay coefficient (weight_decay): 0.0001 to 0.001.
The embodiments of the present application use a deep network training tool to train the model; the inventor found that the training effect is best when the base learning rate ranges from 0.0001 to 0.01, the momentum ranges from 0.9 to 0.99, and the weight decay coefficient ranges from 0.0001 to 0.001.
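For illustration only, a minimal pycaffe-style training sketch under the above assumptions is given below; the file name solver.prototxt is hypothetical, and the specific values mentioned in the comment simply pick one point inside the ranges stated above, so this is a sketch rather than the patent's actual configuration:

```python
import caffe

# Assumed file name; the network structure file and solver file referred to in the
# text would be prepared beforehand. A solver.prototxt would contain fields such as
# net, base_lr: 0.001, momentum: 0.9, weight_decay: 0.0005 (values chosen from the
# ranges given above as an example).
caffe.set_mode_gpu()
solver = caffe.SGDSolver('solver.prototxt')  # reads the solver definition
solver.solve()                               # runs the back-propagation training loop
```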
In implementation, the network structure of the deep learning network may specifically include 5 convolutional layers, 5 pooling layers and 3 fully connected layers: a pooling layer is connected after each convolutional layer, the next convolutional layer is connected after that pooling layer, 3 fully connected layers are connected in sequence after the last pooling layer, and the number of outputs of the last fully connected layer is the number of vehicle attribute classes.
The deep learning network described in the embodiments of the present application uses 5 convolutional layers, each followed by a pooling layer, and then 3 fully connected layers; the number of outputs of the last fully connected layer is the number of classes. The deep learning network designed in the embodiments of the present application improves the accuracy of classification and recognition while keeping the computational load of the deep learning network moderate, and overcomes the problem that feature extraction in shallow networks is insufficient.
In implementation, identifying the vehicle image to be identified with the deep learning network obtained by pre-training may specifically be:
in the convolutional layers, performing convolution calculations on the vehicle image to be identified with the convolution kernels obtained by pre-training, and outputting one or more feature maps;
in the pooling layers, performing pooling operations on the output of the convolutional layers;
in the fully connected layers, performing fully connected operations on the output of the previous layer, where the number of nodes of the last fully connected layer is the same as the number of vehicle attribute classes;
classifying the output of the last fully connected layer to obtain the vehicle attribute probabilities.
In a specific implementation, after the vehicle image to be identified is obtained, convolution calculations are performed in the convolutional layers on the vehicle image to be identified with the convolution kernels obtained by pre-training, and one or more feature maps are output; pooling operations are performed in the pooling layers on the output of the convolutional layers; fully connected operations are performed in the fully connected layers on the output of the previous layer, where the number of nodes of the last fully connected layer is the same as the number of vehicle classes; finally, the output of the last fully connected layer is classified to obtain the vehicle attribute probabilities.
The convolution kernels obtained by pre-training can correspond to certain vehicle attributes, and the vehicle image to be identified can be convolved with multiple convolution kernels respectively, so that the probabilities that the vehicle image to be identified belongs to different kinds of vehicle attributes are finally output.
In implementation, after the vehicle image to be identified is obtained and before the vehicle image to be identified is identified with the deep learning network obtained by pre-training, the method may further include:
preprocessing the vehicle image to be identified;
where the preprocessing includes at least one of the following operations: rotation, histogram equalization, white balance, mirroring, random cropping, centering, normalization, and resizing (resize).
In a specific implementation, after the vehicle image to be identified is obtained, the vehicle image to be identified may first be preprocessed, for example by data augmentation, centering, normalization, resizing and other preprocessing, and then stored in a data format that the deep learning network can read, such as h5 or LMDB.
The data augmentation may include rotation, histogram equalization, white balance, mirroring, random cropping and so on; the parameter used for centering may be in the range 100 to 150; the parameter used for normalization may be in the range 100 to 150; the resized image size may be in the range 100 to 256 pixels.
By first preprocessing the vehicle image to be identified before image identification, the embodiments of the present application can eliminate irrelevant information in the image and strengthen the useful real information, thereby improving the reliability of subsequent identification.
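As an illustration of two of the augmentation operations listed above (mirroring and random cropping), the short Python sketch below operates on a hypothetical image array; it is only a sketch of the kind of preprocessing described, not the patent's implementation:

```python
import numpy as np

def mirror(img):
    """Horizontal mirroring of an H x W x C image array."""
    return img[:, ::-1, :]

def random_crop(img, size):
    """Randomly crop a size x size patch from an H x W x C image array."""
    h, w = img.shape[:2]
    top = np.random.randint(0, h - size + 1)
    left = np.random.randint(0, w - size + 1)
    return img[top:top + size, left:left + size, :]

# Hypothetical 128 x 128 RGB image.
image = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)
augmented = random_crop(mirror(image), 118)
print(augmented.shape)  # (118, 118, 3)
```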
To facilitate the implementation of the present application, an example is given below.
The embodiments of the present application can use a deep learning network for vehicle make and model classification, and the specific operation can include the following four steps:
Step 1: annotate the data
About 2,000 vehicle make-model classes are annotated manually; the annotation includes drawing a bounding box in the original image to locate the vehicle, and giving the make, model and year of the vehicle, for example Audi-A4-2012. The amount of annotated data is more than 200,000 images.
Step 2: preprocess
The annotated data are sorted into the corresponding files; the original bounding-box images are preprocessed by data augmentation, centering, normalization, resizing and so on, and then stored in a data format that the deep neural network can read, such as h5 or a Lightning Memory-Mapped Database (LMDB).
The data augmentation can include rotation, histogram equalization, white balance, mirroring, random cropping and so on; the parameter used for centering can be in the range 100 to 150; the parameter used for normalization can be in the range 100 to 150; the resized image size can be in the range 100 to 256 pixels.
Step 3: design the deep learning network
The network structure can include three components: convolutional layers, pooling layers and fully connected layers. The functions of these three basic structures can be found in the prior art and are not repeated here.
Fig. 2 is a schematic structural diagram of the deep learning network in an embodiment of the present application. As shown in the figure, the embodiment of the present application uses 5 convolutional layers, each convolutional layer can be followed by a pooling layer, and then three fully connected layers are connected; the number of outputs of the last fully connected layer is the number of classes, and each output node of the fully connected layer corresponds to the probability that the image belongs to the corresponding vehicle class (this operation is called softmax). Finally, the output probabilities (which may be an array) can be compared with the true annotated classes (an array of the same length as the output probabilities), and a cross entropy loss function can be used to measure the difference between the model output and the true values. This difference can be back-propagated layer by layer into the network to train the parameters of the model.
Step 4: train the model
The embodiments of the present application can use an existing deep network training tool for model training, such as caffe (http://caffe.berkeleyvision.org/). During use, a solver file can be defined; the solver file specifies the method for optimizing (training) the model, that is, the back-propagation algorithm for the parameters. The key parameters can include the base learning rate, the momentum and the weight decay coefficient (weight_decay); the base learning rate may range from 0.0001 to 0.01, the momentum may range from 0.9 to 0.99, and the weight decay coefficient may range from 0.0001 to 0.001.
In a specific implementation, the vehicle identification process in the embodiments of the present application can be run as a batch job, identifying multiple vehicle images to be identified at the same time, as follows:
Step 1: input the vehicle images to be identified; suppose a batch of the data set includes 256 images in total.
Step 2: apply data augmentation to each image, which may specifically be:
each image is resized to 128x128 pixels, and the pixel values on each RGB channel are centered and rescaled, specifically:
centering: 128 is subtracted from each pixel value;
rescaling: the value after the above subtraction is multiplied by 0.01;
then a 118x118 portion can be randomly selected from the processed image; finally, the 256 input images of 128x128 become 256 images of 118x118.
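A compact sketch of this per-image pipeline (resize to 128x128, subtract 128, multiply by 0.01, take a random 118x118 crop) is given below; OpenCV is used only as an assumed convenience for resizing, and the input batch is synthetic, so this is a sketch of the described steps rather than the patent's code:

```python
import numpy as np
import cv2  # assumed available for resizing

def preprocess(img, crop=118):
    """Resize to 128x128, center by subtracting 128, rescale by 0.01,
    and take a random 118x118 crop, as described in the batch example."""
    img = cv2.resize(img, (128, 128)).astype(np.float32)
    img = (img - 128.0) * 0.01
    top = np.random.randint(0, 128 - crop + 1)
    left = np.random.randint(0, 128 - crop + 1)
    return img[top:top + crop, left:left + crop, :]

# Hypothetical batch of 256 RGB images of arbitrary size.
batch = [np.random.randint(0, 256, (200, 300, 3), dtype=np.uint8) for _ in range(256)]
processed = np.stack([preprocess(img) for img in batch])
print(processed.shape)  # (256, 118, 118, 3)
```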
Step 3: identify the vehicles with the deep learning network.
In the first convolutional layer, the vehicle image to be identified is convolved with the convolution kernels; the kernel size can be 7x7, the stride of each move during sliding can be 2 pixels, the number of output feature maps can be 24, and the number of convolution kernel parameters is 24*7*7*3=3528.
In the first pooling layer, the pooling region size (kernel size) can be 3x3, and the stride can be 2 pixels.
In the second convolutional layer, the output of the previous layer is convolved with the convolution kernels; the kernel size can be 5x5, the stride during sliding can be 1 pixel, there can be 64 feature maps in total, and the number of convolution kernel parameters involved can be 64*5*5*24=38400.
In the second pooling layer, the pooling region size (kernel size) can be 3x3, and the stride can be 2 pixels.
In the third convolutional layer, the output of the previous layer is convolved with the convolution kernels; the kernel size can be 3x3, the stride during sliding can be 1 pixel, there can be 96 feature maps in total, and the number of convolution kernel parameters involved can be 96*3*3*64=55296.
In the third pooling layer, the pooling region size (kernel size) can be 3x3, and the stride can be 2 pixels.
In the fourth convolutional layer, the output of the previous layer is convolved with the convolution kernels; the kernel size can be 3x3, the stride during sliding can be 1 pixel, there can be 96 feature maps in total, and the number of convolution kernel parameters involved can be 96*3*3*96=82944.
In the fourth pooling layer, the pooling region size (kernel size) can be 3x3, and the stride can be 2 pixels.
In the fifth convolutional layer, the output of the previous layer is convolved with the convolution kernels; the kernel size can be 3x3, the stride during sliding can be 1 pixel, there can be 64 feature maps in total, and the number of convolution kernel parameters involved can be 64*3*3*96=55296.
In the fifth pooling layer, the pooling region size (kernel size) can be 3x3, and the stride can be 2 pixels.
In the first fully connected layer, the number of nodes can be 1024, and the number of parameters involved can be 1024*64*5*5=1638400.
In the second fully connected layer, the number of nodes can be 1024, and the number of parameters involved can be 1024*1024=1048576.
In the third fully connected layer, the number of nodes can be N (where N is the number of classes and can represent N vehicle make-model classes; for example, N can be 1500), so the number of parameters involved can be N*1024 (for N=1500, 1500*1024=1536000).
Finally, softmax classification is performed: the value of each output node of the third fully connected layer is converted into a probability value between 0 and 1, corresponding to the probabilities of the N kinds of vehicles.
In a specific implementation, a non-linear transformation can be connected after each convolutional layer, and a non-linear transformation and a dropout layer for avoiding over-fitting can be connected after each fully connected layer.
The total number of parameters involved in the convolution kernels and fully connected layers can be:
total number of parameters involved = 3528+38400+55296+82944+55296+1638400+1048576+1536000 = 4458440 (about 4.5 million parameters).
The model provided by the embodiments of the present application can distinguish nearly 2,000 vehicle make-model-year classes, with an accuracy of more than 90% on the test set.
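The layer sizes and parameter counts listed in Step 3 can be reproduced with the small Python sketch below; it only tabulates the architecture described above (assuming a 3-channel input and N=1500 classes) and checks the arithmetic, it does not build or run the network:

```python
# (kernel_h, kernel_w, input_channels, output_channels) for the 5 convolutional layers,
# followed by (inputs, outputs) for the 3 fully connected layers, as described in Step 3.
conv_layers = [(7, 7, 3, 24), (5, 5, 24, 64), (3, 3, 64, 96), (3, 3, 96, 96), (3, 3, 96, 64)]
fc_layers = [(64 * 5 * 5, 1024), (1024, 1024), (1024, 1500)]  # N = 1500 classes assumed

conv_params = [kh * kw * cin * cout for kh, kw, cin, cout in conv_layers]
fc_params = [n_in * n_out for n_in, n_out in fc_layers]

print(conv_params)                        # [3528, 38400, 55296, 82944, 55296]
print(fc_params)                          # [1638400, 1048576, 1536000]
print(sum(conv_params) + sum(fc_params))  # 4458440, about 4.5 million parameters
```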
The embodiments of the present application use a deep network. Because a deep network has the advantage of extracting object features layer by layer, and the high-level feature information is formed by linear and non-linear transformations of the low-level feature information, it can extract the essential features that characterize the objects to be classified better than existing shallow networks, thereby improving the model performance and solving the problem in the prior art that feature extraction in shallow networks is insufficient. The scheme also adopts an end-to-end model that is completely data-driven: the input is the original image, the output is the classification result, and the features of the intermediate layers are learned entirely from the data without manual involvement. In addition, the technical solution provided by the embodiments of the present application improves the recognition accuracy to a certain extent and reduces false alarms and misses.
Based on the same inventive concept, an embodiment of the present application also provides a vehicle identification device. Since the principle by which this device solves the problem is similar to that of the vehicle identification method, the implementation of the device can refer to the implementation of the method, and repeated parts are not described again.
Fig. 3 is a schematic structural diagram of the vehicle identification device in an embodiment of the present application. As shown in the figure, the vehicle identification device may include:
an acquisition module 301, configured to obtain a vehicle image to be identified;
a training module 302, configured to train a deep learning network, wherein the network structure of the deep learning network comprises convolutional layers, pooling layers and fully connected layers, a pooling layer is connected after a convolutional layer, a fully connected layer is connected after a pooling layer, and each output node of the last fully connected layer is a vehicle attribute probability of the vehicle image;
an identification module 303, configured to identify the vehicle image to be identified with the deep learning network obtained by pre-training;
a determination module 304, configured to determine vehicle attribute information of the vehicle image to be identified according to the vehicle attribute probabilities.
In implementation, the training module may specifically include:
an acquiring unit, configured to obtain labeled vehicle image samples, where the labels include the attribute information of the vehicles;
a classification unit, configured to classify the vehicle image samples with a deep learning network whose parameters are initialized in advance;
a training unit, configured to back-propagate, layer by layer, the difference between the output of the deep learning network and the attribute information of the vehicles into the deep learning network, to train the parameters of the deep learning network.
In implementation, the classification unit may specifically be configured to perform convolution calculations on the vehicle image samples with convolution kernels whose parameters are initialized in advance, and to obtain the vehicle attribute probabilities of the vehicle image samples through the pooling operations of the pooling layers and the fully connected operations of the fully connected layers;
the training unit may specifically be configured to, when there is a difference between the vehicle attribute probabilities and the attribute information of the vehicle, adjust the parameters in the convolution kernels until the output vehicle attribute probabilities are consistent with the attribute information of the vehicle.
In implementation, the training unit may specifically be configured to back-propagate, layer by layer, the difference between the output of the deep learning network and the attribute information of the vehicles into the deep learning network, and to use the deep network training tool caffe for training, with the following caffe parameters: the base learning rate ranges from 0.0001 to 0.01, the momentum ranges from 0.9 to 0.99, and the weight decay coefficient ranges from 0.0001 to 0.001.
In implementation, the network structure of the deep learning network may specifically include 5 convolutional layers, 5 pooling layers and 3 fully connected layers: a pooling layer is connected after each convolutional layer, the next convolutional layer is connected after that pooling layer, 3 fully connected layers are connected in sequence after the last pooling layer, and the number of outputs of the last fully connected layer is the number of vehicle attribute classes.
In implementation, the identification module may specifically include:
a convolution unit, configured to perform, in the convolutional layers, convolution calculations on the vehicle image to be identified with the convolution kernels obtained by pre-training, and to output one or more feature maps;
a pooling unit, configured to perform, in the pooling layers, pooling operations on the output of the convolutional layers;
a fully connected unit, configured to perform, in the fully connected layers, fully connected operations on the output of the previous layer, where the number of nodes of the last fully connected layer is the same as the number of vehicle attribute classes;
a classification unit, configured to classify the output of the last fully connected layer to obtain the vehicle attribute probabilities.
In implementation, the device may further include:
a preprocessing module 305, configured to, after the vehicle image to be identified is obtained and before the vehicle image to be identified is identified with the deep learning network obtained by pre-training, preprocess the vehicle image to be identified, where the preprocessing includes at least one of the following operations: rotation, histogram equalization, white balance, mirroring, random cropping, centering, normalization, and resizing (resize).
For convenience of description, the parts of the device described above are divided into various modules or units described separately by function. Of course, when the present application is implemented, the functions of the modules or units can be realized in one or more pieces of software or hardware.
Those skilled in the art should understand that the embodiments of the present application can be provided as a method, a system or a computer program product. Therefore, the present application can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the present application can take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage and the like) containing computer-usable program code.
The present application is described with reference to flowcharts and/or block diagrams of the method, device (system) and computer program product according to the embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or other programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce a device for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to work in a specific way, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device, and the instruction device realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be loaded onto a computer or other programmable data processing device, so that a series of operation steps are performed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although the preferred embodiments of the present application have been described, those skilled in the art can make further changes and modifications to these embodiments once they know the basic inventive concept. Therefore, the appended claims are intended to be construed as including the preferred embodiments and all changes and modifications that fall within the scope of the present application.

Claims (14)

1. A vehicle identification method, characterized by comprising the following steps:
obtaining a vehicle image to be identified;
identifying the vehicle image to be identified with a deep learning network obtained by pre-training, wherein the network structure of the deep learning network comprises convolutional layers, pooling layers and fully connected layers, a pooling layer is connected after a convolutional layer, a fully connected layer is connected after a pooling layer, and each output node of the last fully connected layer is a vehicle attribute probability of the vehicle image;
determining vehicle attribute information of the vehicle image to be identified according to the vehicle attribute probabilities.
2. The method of claim 1, characterized in that the training of the deep learning network specifically includes:
obtaining labeled vehicle image samples, where the labels include the attribute information of the vehicles;
classifying the vehicle image samples with a deep learning network whose parameters are initialized in advance;
back-propagating, layer by layer, the difference between the output of the deep learning network and the attribute information of the vehicles into the deep learning network, to train the parameters of the deep learning network.
3. The method of claim 2, characterized in that classifying the vehicle image samples with the deep learning network whose parameters are initialized in advance is specifically: performing convolution calculations on the vehicle image samples with convolution kernels whose parameters are initialized in advance, and obtaining the vehicle attribute probabilities of the vehicle image samples through the pooling operations of the pooling layers and the fully connected operations of the fully connected layers;
and back-propagating, layer by layer, the difference between the output of the deep learning network and the attribute information of the vehicles into the deep learning network to train the parameters of the deep learning network is specifically: when there is a difference between the vehicle attribute probabilities and the attribute information of the vehicle, adjusting the parameters in the convolution kernels until the output vehicle attribute probabilities are consistent with the attribute information of the vehicle.
4. The method of claim 2, characterized in that training the parameters of the deep learning network specifically uses the deep network training tool caffe, with the following caffe parameters: the base learning rate ranges from 0.0001 to 0.01, the momentum ranges from 0.9 to 0.99, and the weight decay coefficient ranges from 0.0001 to 0.001.
5. The method of claim 1, characterized in that the network structure of the deep learning network specifically includes 5 convolutional layers, 5 pooling layers and 3 fully connected layers: a pooling layer is connected after each convolutional layer, the next convolutional layer is connected after that pooling layer, 3 fully connected layers are connected in sequence after the last pooling layer, and the number of outputs of the last fully connected layer is the number of vehicle attribute classes.
6. The method of claim 1, characterized in that identifying the vehicle image to be identified with the deep learning network obtained by pre-training is specifically:
in the convolutional layers, performing convolution calculations on the vehicle image to be identified with the convolution kernels obtained by pre-training, and outputting one or more feature maps;
in the pooling layers, performing pooling operations on the output of the convolutional layers;
in the fully connected layers, performing fully connected operations on the output of the previous layer, where the number of nodes of the last fully connected layer is the same as the number of vehicle attribute classes;
classifying the output of the last fully connected layer to obtain the vehicle attribute probabilities.
7. The method of claim 1, characterized in that, after the vehicle image to be identified is obtained and before the vehicle image to be identified is identified with the deep learning network obtained by pre-training, the method further includes:
preprocessing the vehicle image to be identified;
wherein the preprocessing includes at least one of the following operations: rotation, histogram equalization, white balance, mirroring, random cropping, centering, normalization, and resizing (resize).
8. A vehicle identification device, characterized by comprising:
an acquisition module, configured to obtain a vehicle image to be identified;
a training module, configured to train a deep learning network, wherein the network structure of the deep learning network comprises convolutional layers, pooling layers and fully connected layers, a pooling layer is connected after a convolutional layer, a fully connected layer is connected after a pooling layer, and each output node of the last fully connected layer is a vehicle attribute probability of the vehicle image;
an identification module, configured to identify the vehicle image to be identified with the deep learning network obtained by pre-training;
a determination module, configured to determine vehicle attribute information of the vehicle image to be identified according to the vehicle attribute probabilities.
9. The device of claim 8, characterized in that the training module specifically includes:
an acquiring unit, configured to obtain labeled vehicle image samples, where the labels include the attribute information of the vehicles;
a classification unit, configured to classify the vehicle image samples with a deep learning network whose parameters are initialized in advance;
a training unit, configured to back-propagate, layer by layer, the difference between the output of the deep learning network and the attribute information of the vehicles into the deep learning network, to train the parameters of the deep learning network.
10. The device of claim 9, characterized in that the classification unit is specifically configured to perform convolution calculations on the vehicle image samples with convolution kernels whose parameters are initialized in advance, and to obtain the vehicle attribute probabilities of the vehicle image samples through the pooling operations of the pooling layers and the fully connected operations of the fully connected layers; and the training unit is specifically configured to, when there is a difference between the vehicle attribute probabilities and the attribute information of the vehicle, adjust the parameters in the convolution kernels until the output vehicle attribute probabilities are consistent with the attribute information of the vehicle.
11. The device of claim 9, characterized in that the training unit is specifically configured to back-propagate, layer by layer, the difference between the output of the deep learning network and the attribute information of the vehicles into the deep learning network, and to use the deep network training tool caffe for training, with the following caffe parameters: the base learning rate ranges from 0.0001 to 0.01, the momentum ranges from 0.9 to 0.99, and the weight decay coefficient ranges from 0.0001 to 0.001.
12. The device of claim 8, characterized in that the network structure of the deep learning network specifically includes 5 convolutional layers, 5 pooling layers and 3 fully connected layers: a pooling layer is connected after each convolutional layer, the next convolutional layer is connected after that pooling layer, 3 fully connected layers are connected in sequence after the last pooling layer, and the number of outputs of the last fully connected layer is the number of vehicle attribute classes.
13. The device of claim 8, characterized in that the identification module specifically includes:
a convolution unit, configured to perform, in the convolutional layers, convolution calculations on the vehicle image to be identified with the convolution kernels obtained by pre-training, and to output one or more feature maps;
a pooling unit, configured to perform, in the pooling layers, pooling operations on the output of the convolutional layers;
a fully connected unit, configured to perform, in the fully connected layers, fully connected operations on the output of the previous layer, where the number of nodes of the last fully connected layer is the same as the number of vehicle attribute classes;
a classification unit, configured to classify the output of the last fully connected layer to obtain the vehicle attribute probabilities.
14. The device of claim 8, characterized by further comprising:
a preprocessing module, configured to, after the vehicle image to be identified is obtained and before the vehicle image to be identified is identified with the deep learning network obtained by pre-training, preprocess the vehicle image to be identified, wherein the preprocessing includes at least one of the following operations: rotation, histogram equalization, white balance, mirroring, random cropping, centering, normalization, and resizing (resize).
CN201610073180.5A 2016-02-02 2016-02-02 Vehicle identification method and device Pending CN105654066A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610073180.5A CN105654066A (en) 2016-02-02 2016-02-02 Vehicle identification method and device

Publications (1)

Publication Number Publication Date
CN105654066A true CN105654066A (en) 2016-06-08

Family

ID=56488228

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610073180.5A Pending CN105654066A (en) 2016-02-02 2016-02-02 Vehicle identification method and device

Country Status (1)

Country Link
CN (1) CN105654066A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104463241A (en) * 2014-10-31 2015-03-25 北京理工大学 Vehicle type recognition method in intelligent transportation monitoring system
CN105046196A (en) * 2015-06-11 2015-11-11 西安电子科技大学 Front vehicle information structured output method based on concatenated convolutional neural networks
CN105184271A (en) * 2015-09-18 2015-12-23 苏州派瑞雷尔智能科技有限公司 Automatic vehicle detection method based on deep learning
CN105574550A (en) * 2016-02-02 2016-05-11 北京格灵深瞳信息技术有限公司 Vehicle identification method and device

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106127248A (en) * 2016-06-24 2016-11-16 平安科技(深圳)有限公司 License plate classification method and system based on deep learning
KR102173610B1 (en) * 2016-06-24 2020-11-04 핑안 테크놀로지 (션젼) 컴퍼니 리미티드 Vehicle license plate classification method, system, electronic device and media based on deep learning
WO2017220032A1 (en) * 2016-06-24 2017-12-28 平安科技(深圳)有限公司 Vehicle license plate classification method and system based on deep learning, electronic apparatus, and storage medium
US10528841B2 (en) * 2016-06-24 2020-01-07 Ping An Technology (Shenzhen) Co., Ltd. Method, system, electronic device, and medium for classifying license plates based on deep learning
KR20190021187A (en) * 2016-06-24 2019-03-05 핑안 테크놀로지 (션젼) 컴퍼니 리미티드 Vehicle license plate classification methods, systems, electronic devices and media based on deep learning
CN106295637A (en) * 2016-07-29 2017-01-04 电子科技大学 A vehicle identification method based on deep learning and reinforcement learning
CN106295637B (en) * 2016-07-29 2019-05-03 电子科技大学 A vehicle identification method based on deep learning and reinforcement learning
TWI596552B (en) * 2016-08-02 2017-08-21 Nat Chin-Yi Univ Of Tech Car models recognition system and method
CN107719374A (en) * 2016-08-12 2018-02-23 通用汽车环球科技运作有限责任公司 A method for obtaining subjective vehicle dynamics
CN106709528A (en) * 2017-01-10 2017-05-24 深圳大学 Method and device of vehicle reidentification based on multiple objective function deep learning
US10740927B2 (en) 2017-02-16 2020-08-11 Ping An Technology (Shenzhen) Co., Ltd. Method and device for vehicle identification
WO2018149302A1 (en) * 2017-02-16 2018-08-23 平安科技(深圳)有限公司 Vehicle identification method and apparatus
CN108733719A (en) * 2017-04-24 2018-11-02 优信拍(北京)信息科技有限公司 A vehicle position recognition method, device, equipment and computer-readable medium
CN106980846A (en) * 2017-05-01 2017-07-25 刘至键 An auto parts identification device based on a deep convolutional network
CN107092906A (en) * 2017-05-01 2017-08-25 刘至键 A kind of Chinese traditional medicinal materials recognition device based on deep learning
CN108957024A (en) * 2017-05-22 2018-12-07 阿里巴巴集团控股有限公司 A speed measurement method, apparatus and electronic device
CN108009598A (en) * 2017-12-27 2018-05-08 北京诸葛找房信息技术有限公司 Floor plan recognition methods based on deep learning
CN107967468A (en) * 2018-01-19 2018-04-27 刘至键 An auxiliary control system based on a driverless car
CN108416440A (en) * 2018-03-20 2018-08-17 上海未来伙伴机器人有限公司 A kind of training method of neural network, object identification method and device
CN110895692A (en) * 2018-09-13 2020-03-20 浙江宇视科技有限公司 Vehicle brand identification method and device and readable storage medium
CN109325547A (en) * 2018-10-23 2019-02-12 苏州科达科技股份有限公司 Non-motor vehicle image multi-tag classification method, system, equipment and storage medium
CN110991349A (en) * 2019-12-05 2020-04-10 中国科学院重庆绿色智能技术研究院 Lightweight vehicle attribute identification method based on metric learning
CN110991349B (en) * 2019-12-05 2023-02-10 中国科学院重庆绿色智能技术研究院 Lightweight vehicle attribute identification method based on metric learning
CN111126185A (en) * 2019-12-09 2020-05-08 南京莱斯电子设备有限公司 Deep learning vehicle target identification method for road intersection scene
CN111126185B (en) * 2019-12-09 2023-09-05 南京莱斯电子设备有限公司 Deep learning vehicle target recognition method for road gate scene
CN112311605A (en) * 2020-11-06 2021-02-02 北京格灵深瞳信息技术有限公司 Cloud platform and method for providing machine learning service
CN112311605B (en) * 2020-11-06 2023-12-22 北京格灵深瞳信息技术股份有限公司 Cloud platform and method for providing machine learning service

Similar Documents

Publication Publication Date Title
CN105654066A (en) Vehicle identification method and device
CN105574550A (en) Vehicle identification method and device
CN101271515B (en) Image detection device capable of recognizing multi-angle targets
CN112801146B (en) Target detection method and system
CN105590099B (en) A multi-person activity recognition method based on an improved convolutional neural network
CN105740910A (en) Vehicle object detection method and device
CN109325395A (en) Image recognition method, convolutional neural network model training method and device
US9330336B2 (en) Systems, methods, and media for on-line boosting of a classifier
CN111339935B (en) Optical remote sensing picture classification method based on interpretable CNN image classification model
CN109684922A (en) A multi-model recognition method for finished dishes based on convolutional neural networks
CN109948616A (en) Image detecting method, device, electronic equipment and computer readable storage medium
CN114998220B (en) Tongue image detection and positioning method based on improved Tiny-YOLO v4 natural environment
CN106408037A (en) Image recognition method and apparatus
CN112633354B (en) Pavement crack detection method, device, computer equipment and storage medium
CN113095370A (en) Image recognition method and device, electronic equipment and storage medium
CN113761259A (en) Image processing method and device and computer equipment
CN108615401B (en) Deep learning-based indoor non-uniform light parking space condition identification method
CN109284700B (en) Method, storage medium, device and system for detecting multiple faces in image
CN110852358A (en) Vehicle type distinguishing method based on deep learning
CN116843650A (en) SMT welding defect detection method and system integrating AOI detection and deep learning
CN111914902A (en) Traditional Chinese medicine identification and surface defect detection method based on deep neural network
CN111222558B (en) Image processing method and storage medium
CN117011274A (en) Automatic glass bottle detection system and method thereof
CN111144425B (en) Method and device for detecting shot screen picture, electronic equipment and storage medium
CN111444816A (en) Multi-scale dense pedestrian detection method based on fast RCNN

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100092 Beijing Haidian District Yongtaizhuang North Road No. 1 Tiandi Adjacent to Block B, Building 1, Fengji Industrial Park

Applicant after: BEIJING DEEPGLINT INFORMATION TECHNOLOGY CO., LTD.

Address before: 100091 No. 6 Yudai Road, Haidian District, Beijing

Applicant before: BEIJING DEEPGLINT INFORMATION TECHNOLOGY CO., LTD.

RJ01 Rejection of invention patent application after publication

Application publication date: 20160608