CN105654067A - Vehicle detection method and device


Info

Publication number
CN105654067A
CN105654067A (Application CN201610073326.6A)
Authority
CN
China
Prior art keywords
vehicle
image
convolution kernel
training
characteristic image
Prior art date
Legal status
Pending
Application number
CN201610073326.6A
Other languages
Chinese (zh)
Inventor
张德兵
Current Assignee
BEIJING DEEPGLINT INFORMATION TECHNOLOGY Co Ltd
Original Assignee
BEIJING DEEPGLINT INFORMATION TECHNOLOGY Co Ltd
Priority date
Filing date
Publication date
Application filed by BEIJING DEEPGLINT INFORMATION TECHNOLOGY Co Ltd filed Critical BEIJING DEEPGLINT INFORMATION TECHNOLOGY Co Ltd
Priority to CN201610073326.6A priority Critical patent/CN105654067A/en
Publication of CN105654067A publication Critical patent/CN105654067A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54 Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a vehicle detection method and device. The vehicle detection method comprises the following steps: acquiring a surveillance image in the current scene; extracting highly expressive visual features from the surveillance image using a deep convolutional neural network model, and outputting a feature image; and performing convolution between a pre-trained convolution kernel and the feature image to obtain a detection result, wherein the detection result includes the probability that a vehicle is present and the relative position of the vehicle. According to the embodiments of the invention, features of the surveillance image are extracted using the deep convolutional neural network model and the feature image is output; subsequently only the convolution kernel and the feature image need to be computed. The original surveillance image does not need to be pre-processed, the whole original surveillance image does not need to be scanned with a sliding window, and multi-scale feature computation does not need to be performed on the original surveillance image, so computation time is saved and detection efficiency and detection accuracy are greatly improved.

Description

Vehicle detection method and device
Technical field
The present application relates to the technical field of computer vision, and in particular to a vehicle detection method and device.
Background art
Accurate and robust vehicle detection has important application value for security surveillance, particularly in checkpoint and electronic-police scenarios; every vehicle passing on the road is a target of great interest in the security and surveillance field.
Most existing vehicle detection techniques work as follows: first the surveillance picture is pre-processed, for example by histogram equalization or contrast adjustment; then a series of predefined features is extracted, for example edges, textures and gradient features in each direction; finally, using these features, windows of different sizes, aspect ratios or positions in the pre-processed image are classified one by one, and a window is regarded as a vehicle if its score exceeds a certain threshold.
Detecting vehicles in this way requires pre-processing the picture first, extracting features from the whole picture, and then scaling the processed picture to different sizes, classifying, performing multi-scale computation and so on, which consumes a large amount of time and makes detection efficiency extremely low.
The deficiencies of the prior art are:
existing vehicle detection schemes involve a large amount of computation, detect slowly, support only a single scene, and have low detection accuracy.
Summary of the invention
The embodiments of the present application propose a vehicle detection method and device, to solve the technical problems in the prior art that vehicle detection involves a large amount of computation, detects slowly, supports only a single scene, and has low detection accuracy.
An embodiment of the present application provides a vehicle detection method, comprising the following steps:
obtaining a surveillance image in the current scene;
extracting features from the surveillance image using a deep convolutional neural network model, and outputting a feature image;
performing convolution between a pre-trained convolution kernel and the feature image to obtain a detection result, the detection result including the probability that a vehicle is present and the relative position of the vehicle.
An embodiment of the present application provides a vehicle detection device, comprising:
an acquisition module, configured to obtain a surveillance image in the current scene;
a feature extraction module, configured to extract features from the surveillance image using a deep convolutional neural network model and output a feature image;
a training module, configured to pre-train the convolution kernel;
a detection module, configured to perform convolution between the pre-trained convolution kernel and the feature image to obtain a detection result, the detection result including the probability that a vehicle is present and the relative position of the vehicle.
The beneficial effects are as follows:
With the vehicle detection method and device provided by the embodiments of the present application, after the surveillance image in the current scene is obtained, features can be extracted from the surveillance image using a deep convolutional neural network model and a feature image is output; the trained convolution kernel is then convolved with the feature image to obtain the detection result, which includes the probability that a vehicle is present and the relative position of the vehicle. Because the embodiments of the present application use a deep convolutional neural network model to extract features from the surveillance image and output a feature image, only the convolution kernel and the feature image need to be computed subsequently. The method is applicable to multiple detection scenarios such as checkpoints or electronic police, requires no pre-processing of the original surveillance image, no sliding-window scanning of the whole original surveillance image, and no multi-scale feature computation on the original image, thereby saving computation time and greatly improving detection efficiency and detection accuracy.
Brief description of the drawings
The specific embodiments of the present application are described below with reference to the accompanying drawings, in which:
Fig. 1 shows a schematic flowchart of the implementation of the vehicle detection method in an embodiment of the present application;
Fig. 2 shows a schematic diagram of the vehicle detection process in an embodiment of the present application;
Fig. 3 shows a schematic structural diagram of the vehicle detection device in an embodiment of the present application.
Detailed description of the embodiments
To make the technical solutions and advantages of the present application clearer, exemplary embodiments of the present application are described in more detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present application, not an exhaustive list of all embodiments. Where there is no conflict, the embodiments in this description and the features in the embodiments may be combined with each other.
In view of the deficiencies of the prior art, the embodiments of the present application propose a vehicle detection method and device that can support a variety of complex scenes simultaneously, support vehicles of arbitrary size and aspect ratio, and achieve fast, high-precision real-time detection on CPU/GPU, as explained below.
Fig. 1 shows a schematic flowchart of the implementation of the vehicle detection method in an embodiment of the present application. As shown in the figure, the vehicle detection method may include the following steps:
Step 101: obtain a surveillance image in the current scene;
Step 102: extract features from the surveillance image using a deep convolutional neural network model, and output a feature image;
Step 103: use the pre-trained convolution kernel as a sliding window and convolve it with the feature image to obtain a detection result, the detection result including the probability that a vehicle is present and the relative position of the vehicle.
In a specific implementation, a surveillance camera may be installed in advance in the scene to be monitored, and the camera returns the surveillance image of the current scene. In practice, the camera may return a surveillance video, and the surveillance video may contain a number of frames of surveillance images.
The surveillance image in the embodiments of the present application may be an RGB image; one image can be split into three images by colour channel, and in a specific implementation the subsequent operations may be performed on the three images respectively.
Deep learning shows great advantages in image problems; its deep network structure and end-to-end optimization make it possible for a machine to learn automatically from data. The convolutional neural network (CNN) is one of the more popular deep learning methods; generally, the basic structure of a convolutional neural network may include two kinds of layers, one being feature extraction layers and the other feature mapping layers.
The embodiments of the present application use a deep convolutional neural network model to perform feature extraction on the surveillance image and output a feature image; this output is the feature learned by the neural network for detection. The convolutional neural network model may adopt an existing model, for example the existing GoogleNet model, which is formed by stacking convolutional layers and Inception modules.
The convolution kernel in the embodiments of the present application can be understood as a numerical representation obtained by sample training on part or all of the features of a vehicle (front, lights, rear, wheels, body, etc.). The convolution kernel may be of sizes such as 1*1, 3*3 or 5*5 and contains multiple values; for example, a 3*3 convolution kernel contains 9 values.
Assume that one feature image is obtained after features are extracted from the surveillance image. A convolution kernel of a certain size (say 3*3) can be convolved with this feature image, finally giving one output (its size may be 18*31); each position of this output can represent the probability that a vehicle of a fixed size and aspect ratio exists at that position of the feature image.
Another convolution kernel is then convolved with the feature image to obtain the relative offset of the top-left x coordinate; similarly, the relative offsets of the top-left y, bottom-right x and bottom-right y coordinates each require one convolution kernel to be convolved with the feature image.
Therefore, for the detection of vehicles of one fixed aspect ratio and scale, when 1 feature image is extracted, 5 convolution kernels of a certain size (say 3*3) can be used (the parameters in the kernels amount to 5*3*3 in total); after the kernels are convolved with the feature image, 5 maps are obtained (the output size is 5*18*31, i.e. 5 maps of size 18*31).
The first map can represent the probability that a vehicle exists at each position, all values lying between 0 and 1; the other four maps can represent the relative offsets of the top-left x coordinate, top-left y coordinate, bottom-right x coordinate and bottom-right y coordinate of the vehicle's relative position, respectively.
If vehicles of multiple scales and multiple aspect ratios are to be detected, assuming that scale scales and ratio aspect ratios need to be handled, the embodiments of the present application can use 5*scale*ratio convolution kernels for the convolution, finally outputting 5*scale*ratio maps; each output map can represent the output of one attribute (the probability of a vehicle, or one of the four coordinate offsets) for a certain scale and aspect ratio.
Here scale may be a positive integer, for example 7, with areas of 50*50*2^i, where i may be 0, 1, 2, 3, 4, 5, 6, etc.; ratio may be a positive integer, for example 5, with aspect ratios of 3:1, 2:1, 1:1, 1:2, 1:3, etc.
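As a rough illustration of this detection head, a minimal sketch assuming PyTorch follows; the single 18*31 feature image and the values scale=7, ratio=5 come from the example above, while the layer and variable names are illustrative. A standard multi-channel convolution plays the role of the 5*scale*ratio trained kernels (with k feature images, in_channels would simply become k and the per-channel results are summed automatically).

```python
import torch
import torch.nn as nn

SCALE, RATIO = 7, 5   # 7 scales and 5 aspect ratios, as in the example above

# One output channel per attribute of each (scale, ratio) combination:
# the probability of a vehicle plus the 4 corner offsets.
detection_head = nn.Conv2d(in_channels=1, out_channels=5 * SCALE * RATIO,
                           kernel_size=3, padding=1)

feature_image = torch.randn(1, 1, 18, 31)      # one feature image of assumed size 18*31
out = detection_head(feature_image)            # shape: (1, 5*7*5, 18, 31)

out = out.view(1, SCALE * RATIO, 5, 18, 31)    # split per (scale, ratio) combination
prob = torch.sigmoid(out[:, :, 0])             # probability of a vehicle at each position
offsets = out[:, :, 1:]                        # top-left x/y and bottom-right x/y offsets
```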
In the embodiments of the present application, the original surveillance image is condensed into a feature image after features are extracted by the deep convolutional neural network; performing the subsequent scanning and detection operations on this feature image greatly reduces the sliding and detection time of the convolution kernel compared with operating on the original surveillance image.
Taking a sliding window of size 3*3 as an example, the embodiments of the present application only need to scan with the 3*3 sliding window; the surveillance image does not need to be converted to different scales for scanning and detection. That is, the embodiments of the present application do not need to change with the scale of the object, which reduces computational complexity and improves detection efficiency.
In a specific implementation, the parameters in the convolution kernel may be trained together with the parameters in the convolutional neural network model, or the convolution kernel may be trained first and the model afterwards; the order of kernel training and model training is not limited by the present application.
The vehicle detection method provided by the embodiments of the present application requires no pre-processing of the surveillance image and can accept image input of arbitrary size and arbitrary aspect ratio. The embodiments of the present application can extract highly expressive visual features from the acquired image and condense them into a feature image, and subsequently only this feature image needs to be examined. The method is applicable to multiple detection scenarios such as checkpoints or electronic police, requires no pre-processing of the original surveillance image, no sliding-window scanning of the whole original surveillance image, and no multi-scale feature computation on the original image, thereby saving computation time and greatly improving detection efficiency and detection accuracy.
The vehicle detection method provided by the embodiments of the present application can detect not only vehicles but also targets such as tricycles, bicycles and pedestrians; it is only necessary to pre-train convolution kernels for these targets.
In implementation, the surveillance image may be a greyscale image, an RGB image, an RGBD image or a YUV image.
In a specific implementation, the surveillance image acquired by the embodiments of the present application may be in multiple formats such as greyscale, RGB, RGBD or YUV, where an RGBD image denotes an image including depth information. When vehicles occlude each other, the occluded vehicle can still be detected accurately.
In implementation, extracting features from the surveillance image using the deep convolutional neural network model and outputting a feature image may specifically be: using the deep convolutional neural network model to extract different features from the surveillance image and output k feature images;
and performing convolution between the pre-trained convolution kernel and the feature image may specifically be: convolving the 5*k*scale*ratio pre-trained convolution kernels with the k feature images respectively, where scale is the number of different vehicle scales and ratio is the number of different vehicle aspect ratios.
After feature extraction by the convolutional neural network model, k feature images may be output; each feature image may have a size such as 18*31, and the total size of the feature images may be k*18*31.
There may be k*scale*ratio convolution kernels for detection. In a specific implementation, scale may have 7 values and ratio may have 5 values; different scales are responsible for detecting vehicles of different sizes, for example vehicles of size 50*50*2^0 or 50*50*2^6, and different ratios are responsible for detecting vehicles of different aspect ratios, for example 1:3, 1:2, 1:1, 2:1 and 3:1.
Assume the embodiments of the present application use the deep convolutional neural network model to extract k feature images from the surveillance image. Then k convolution kernels can be convolved with the k feature images respectively and the results summed, giving an output of a certain size, where each position represents the probability that a vehicle of a certain size and aspect ratio exists at that position of the feature image. When computing the relative offset of the top-left x coordinate, another k convolution kernels can be used; similarly, the relative offsets of the top-left y, bottom-right x and bottom-right y can each be computed with k convolution kernels.
Therefore, for the detection of vehicles of one fixed aspect ratio and scale, when k feature images are extracted, the embodiments of the present application can use k*5 convolution kernels of a certain size (say 3*3) (the parameters in the kernels may amount to k*5*3*3 in total) and sum the results; after the kernels are convolved with the corresponding feature images, 5 maps are output (the output size is 5*18*31, i.e. 5 maps of size 18*31), representing, for the current scale and aspect ratio, the probability of a vehicle, the relative offset of the top-left x, the relative offset of the top-left y, the relative offset of the bottom-right x, and the relative offset of the bottom-right y, respectively.
If vehicles of multiple scales and multiple aspect ratios are to be detected, assuming that scale scales and ratio aspect ratios need to be handled, the embodiments of the present application can use k*5*scale*ratio convolution kernels for the convolution, finally outputting 5*scale*ratio maps; each output map can represent the output of one attribute (the probability, or one of the four coordinate offsets) for a certain scale and aspect ratio.
In a specific implementation, conventional techniques generally extract some basic features from the original surveillance image, such as texture, colour, line segments and gradients in each direction, but these features are not directly related to the final vehicle detection task. The embodiments of the present application fully draw on the efficient feature representation capability of deep learning models. The advantage of deep learning is that, for the task itself, it can automatically learn the most compact and effective features from the data and its annotations. The embodiments of the present application can use the deep convolutional neural network model to automatically learn, from the surveillance image, richer features with more expressive and discriminative power; these features contain all the information that can distinguish a vehicle from a non-vehicle, for example the presence of a licence plate, wheels or windows, the angle of the vehicle, or the positional relationships between the various parts of the vehicle.
In implementation, the convolution may specifically be a fully convolutional computation.
In the prior art, convolution is performed by taking the convolution kernel as a sliding window at one position of the feature image, moving to the next position after the computation there finishes, and so on, until the convolution at all positions of the feature image is completed.
The embodiments of the present application may instead use fully convolutional computation, that is, the convolution kernel acting as a sliding window is convolved with all positions of the feature image at once; obviously, compared with the prior art, the scheme provided by the embodiments of the present application has higher detection efficiency.
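To make the difference concrete, here is a small self-contained illustration, assuming PyTorch; it checks that one fully convolutional call over the whole feature image gives the same scores as an explicit position-by-position sliding window.

```python
import torch
import torch.nn.functional as F

feature_image = torch.randn(1, 1, 18, 31)   # single-channel feature image (assumed size)
kernel = torch.randn(1, 1, 3, 3)            # one trained 3*3 detection kernel

# Fully convolutional: every position of the feature image is scored in one call.
scores_full = F.conv2d(feature_image, kernel, padding=1)

# Equivalent (but much slower) explicit sliding window over single positions.
padded = F.pad(feature_image, (1, 1, 1, 1))
scores_loop = torch.zeros(1, 1, 18, 31)
for y in range(18):
    for x in range(31):
        window = padded[:, :, y:y + 3, x:x + 3]
        scores_loop[0, 0, y, x] = (window * kernel).sum()

assert torch.allclose(scores_full, scores_loop, atol=1e-5)
```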
In implementation, there may be multiple pre-trained convolution kernels, and convolving the pre-trained convolution kernels as sliding windows with the feature image may specifically be:
convolving the multiple pre-trained convolution kernels as sliding windows with the feature image in parallel.
In a specific implementation, there may be multiple convolution kernels, which can be divided into several classes by size, for example 1*1, 3*3 or 5*5; these convolution kernels are each slid over the feature image to complete the convolution.
The embodiments of the present application can pre-train multiple convolution kernels according to the features of real vehicles, training continually on the lights, rear, wheels, body or door handles in vehicle samples from different scenes, environments, distances, angles and aspect ratios, to obtain convolution kernels usable for vehicle detection. For example, with a convolution kernel obtained by training on the door handles in vehicle samples from different scenes, environments, distances, angles and aspect ratios, when a door handle appears in the surveillance image the kernel can quickly and accurately detect the presence of the door handle and output its position in the image, thereby determining that the image contains a vehicle and where the vehicle is.
The embodiments of the present application use multiple convolution kernels for the computation, and after aggregation a more accurate vehicle probability and vehicle position can be obtained. For example, some convolution kernels may determine from a door handle that there is a vehicle in the surveillance image and others may determine it from a wheel; when multiple convolution kernels all detect the presence of a vehicle in the surveillance image, the accuracy of the detection result is higher.
Because the embodiments of the present application can convolve multiple convolution kernels with the feature image simultaneously during detection, if the GPU has, say, 10 cores, those 10 cores can all be used in parallel, further improving detection efficiency; the prior art can only process serially, one window at a time, and cannot split the work into sub-tasks of equal computation, so it cannot make full use of the many cores of a GPU. In a specific implementation, a GPU may typically have 200 to 4000 cores; the number of GPU cores is not limited by the present application.
In implementation, if the same vehicle produces responses at different positions when the convolution kernels are convolved, a non-maximum suppression method may be used to merge the multiple output relative positions of the vehicle.
In a specific implementation, the convolution kernel may be smaller or larger than the vehicle, and convolution kernels of different sizes may all produce a response to the same vehicle, i.e. output a vehicle probability greater than a preset threshold; in this case the vehicle may be covered by multiple boxes, and the embodiments of the present application can merge these boxes using non-maximum suppression.
In addition, the vehicle may produce responses from convolution kernels corresponding to different aspect ratios, i.e. output a vehicle probability greater than the preset threshold; in this case too the vehicle may be covered by multiple boxes, and the embodiments of the present application can merge these boxes using non-maximum suppression.
In the embodiments of the present application, when the convolution kernel produces several strong responses to the same vehicle during scanning, the output vehicle relative positions can be merged; in other words, if the same vehicle produces multiple responses at different positions when scanned by the convolution kernel (a response being an output vehicle probability greater than the preset threshold), the output vehicle relative positions (the boxes corresponding to the vehicle) can be merged.
Non-maximum suppression can be regarded as a local-maximum search problem; it can be simply understood as suppressing the elements that are not maxima (in a specific implementation, the grey value of a non-maximum pixel can be set to 0) and searching for local maxima.
In a specific implementation, when the intersection-over-union of any two boxes is greater than a preset second threshold, the two boxes are considered to cover the same vehicle and can be merged according to the above non-maximum suppression method.
In the embodiments of the present application, if the same vehicle is covered by multiple boxes during detection, these boxes can be merged using non-maximum suppression.
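A minimal, framework-free sketch of the non-maximum suppression described above, in plain Python; the box format (x1, y1, x2, y2) and the 0.5 overlap threshold are illustrative assumptions rather than values fixed by the text.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def non_max_suppression(boxes, scores, overlap_threshold=0.5):
    """Keep the highest-scoring box and suppress boxes that overlap it too much."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= overlap_threshold]
    return [boxes[i] for i in keep]
```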
In implementation, the method may further comprise:
calculating the absolute position of the vehicle in the surveillance image according to the relative position of the vehicle and a preset relative reference box.
In the embodiments of the present application, after the vehicle probability and the relative position of the vehicle are obtained, relative reference boxes of multiple scales and multiple aspect ratios can be used to compute the absolute position coordinates of the vehicle in the original surveillance image.
In a specific implementation, relative reference boxes of 7 scales may be used, and each scale may have 5 aspect ratios. For example, the areas of the relative reference boxes may grow in equal proportion from 50*50 to 400*400 (the ratio of areas is 2, the ratio of side lengths is the square root of 2), namely 50*50, 71*71, 100*100, 141*141, 200*200, 282*282, 400*400, and the aspect ratios may be 3:1, 2:1, 1:1, 1:2, 1:3. Assuming a reference box with area 200*200 and aspect ratio 2:1, the size of this reference box is 282*141.
Assume the relative position of the vehicle (i.e. relative to the relative reference box) is (lx, ly, rx, ry), that is, the top-left coordinate of the relative position of the vehicle is (lx, ly) and the bottom-right coordinate is (rx, ry), and a relative reference box of area a and aspect ratio b is used; the computation is as follows.
Assume a relative reference box of size x*y is located at position (w, h) of the image, where x = sqrt(a*b) and y = sqrt(a/b); the absolute position of the relative reference box is then:
top-left corner: [w - x/2, h - y/2], bottom-right corner: [w + x/2, h + y/2];
and the absolute position of the vehicle in the original surveillance image is computed as:
top-left corner [w - x/2 + lx, h - y/2 + ly], bottom-right corner [w - x/2 + rx, h - y/2 + ry].
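The same computation expressed as a short Python sketch; the variable names follow the formulas above (a is the reference-box area, b the aspect ratio, (w, h) the reference-box position, and (lx, ly, rx, ry) the relative position output by the detector).

```python
from math import sqrt

def absolute_box(w, h, a, b, lx, ly, rx, ry):
    """Map a relative detection back to absolute image coordinates using a
    relative reference box of area a and aspect ratio b located at (w, h)."""
    x = sqrt(a * b)          # reference-box width
    y = sqrt(a / b)          # reference-box height
    ref_left, ref_top = w - x / 2, h - y / 2
    # vehicle corners = reference-box top-left corner plus the predicted offsets
    return (ref_left + lx, ref_top + ly, ref_left + rx, ref_top + ry)

# Example with the sizes from the text: area 200*200, aspect ratio 2:1 -> box of about 282*141.
print(absolute_box(w=300, h=300, a=200 * 200, b=2.0, lx=10, ly=5, rx=260, ry=120))
```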
The length and width in the embodiments of the present application do not necessarily represent the length and width of the vehicle body; they may specifically be the size of the vehicle as it appears in the picture. For example, assume the height of a vehicle is 0.5, its length 1.2 and its width 0.7; if the vehicle faces the camera head-on, the detected length is 0.7 and the detected width is 0.5, whereas if the front of the vehicle is on the left and the rear on the right, the detected length is 1.2 and the detected width is 0.5.
In implementation, the method may further comprise:
verifying the detection result with a verification model, the verification model being obtained by training on positive examples and negative examples.
In a specific implementation, assuming the detection result determines that there is a vehicle in the current scene, a verification model can further be used to check whether this result is correct, so as to reduce errors. The verification model can be obtained by training on positive and negative examples, where positive and negative examples may be generated in a 1:1 ratio.
In a specific implementation, the verification model may likewise be trained as a convolutional neural network, and the structure of the verification model may be:
an input image (assumed to be 3*128*128);
a convolutional layer performing convolution Conv(7, 2, 32): kernel size 7*7, stride 2 pixels, 32 feature maps in total;
a pooling layer using max pooling Pooling(3, 2): pooling window size 3*3, stride 2 pixels;
a convolutional layer performing convolution Conv(3, 1, 64): kernel size 3*3, stride 1 pixel, 64 feature maps in total;
a pooling layer using max pooling Pooling(3, 2): pooling window size 3*3, stride 2 pixels;
a convolutional layer performing convolution Conv(3, 1, 128): kernel size 3*3, stride 1 pixel, 128 feature maps in total;
a convolutional layer performing convolution Conv(3, 1, 128): kernel size 3*3, stride 1 pixel, 128 feature maps in total;
a convolutional layer performing convolution Conv(3, 1, 128): kernel size 3*3, stride 1 pixel, 128 feature maps in total;
a pooling layer using max pooling Pooling(3, 2): pooling window size 3*3, stride 2 pixels;
a fully connected layer FC(512), with 512 nodes;
a fully connected layer FC(512), with 512 nodes;
a final output of two numbers, Output(2): one the probability that there is a vehicle, the other the probability that there is no vehicle.
Here, Conv(a, b, c) indicates that the kernel size is a*a, the stride is b pixels, and there are c feature maps in total; Pooling(a, b) indicates max pooling with a window of size a*a and a stride of b pixels; FC(a) indicates a fully connected layer with a nodes.
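A sketch of this verification network, assuming PyTorch; the padding choices and ReLU activations are assumptions added so that the layer sizes work out for a 3*128*128 input, and are not specified in the text.

```python
import torch
import torch.nn as nn

class VerificationNet(nn.Module):
    """Conv(7,2,32) - Pool - Conv(3,1,64) - Pool - 3x Conv(3,1,128) - Pool - FC(512) - FC(512) - Output(2)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 7, stride=2, padding=3), nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2, padding=1),
            nn.Conv2d(32, 64, 3, stride=1, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2, padding=1),
            nn.Conv2d(64, 128, 3, stride=1, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 128, 3, stride=1, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 128, 3, stride=1, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2, padding=1),
        )
        self.classifier = nn.Sequential(
            nn.Linear(128 * 8 * 8, 512), nn.ReLU(inplace=True),
            nn.Linear(512, 512), nn.ReLU(inplace=True),
            nn.Linear(512, 2),      # vehicle / no-vehicle scores (probabilities after softmax)
        )

    def forward(self, x):           # x: (batch, 3, 128, 128)
        x = self.features(x)
        return self.classifier(x.flatten(1))

logits = VerificationNet()(torch.randn(1, 3, 128, 128))   # shape (1, 2)
```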
The verification model can be trained in the same way as the relative reference boxes above; the back-propagation (BP) algorithm can be used to adjust the parameters so that the output on the test set is as close as possible to the true annotations.
The embodiments of the present application can collect a large number of real vehicle samples in advance. A positive example may be any sample whose intersection-over-union with the circumscribed rectangle of some real vehicle exceeds a preset first threshold; a negative example may be any sample whose intersection-over-union with every vehicle circumscribed rectangle in the image is below the preset first threshold but whose intersection-over-union with at least one circumscribed rectangle is greater than a preset second threshold.
Here, intersection-over-union refers to the area of the intersection of two boxes divided by the area of their union. One of the boxes may be the real annotation box, i.e. the circumscribed rectangle of the vehicle, and the other may be a box obtained by applying a certain perturbation to that circumscribed rectangle. In a specific implementation, when the intersection-over-union of the two boxes is greater than the preset first threshold (for example 0.5), the perturbed box can be considered a positive example; otherwise it may be considered a negative example.
Assuming the preset first threshold is 0.5, in the embodiments of the present application a positive example may be any sample whose intersection-over-union with a vehicle circumscribed rectangle in the image is greater than 0.5, and a negative example may be any sample whose intersection-over-union with all vehicle circumscribed rectangles in the image is below 0.5 and whose intersection-over-union with at least one circumscribed rectangle is greater than 0.1.
In a specific implementation, because the proportion of negative examples is far too large, in order to obtain high-quality negative examples the embodiments of the present application restrict negative examples to those whose intersection-over-union is below the preset first threshold and that have at least one intersection-over-union greater than the second threshold.
The above approach may still leave some negative examples on pure background unaccounted for; the embodiments of the present application can add, as negative examples, some positions that do not intersect any box in the image but are easily mistaken for detections, thereby further improving the discriminative power of the model.
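A rough sketch of this sampling rule in plain Python, reusing the iou helper from the non-maximum suppression sketch above; the 0.5 and 0.1 thresholds follow the example in the text.

```python
def classify_sample(candidate, ground_truth_boxes,
                    first_threshold=0.5, second_threshold=0.1):
    """Label a perturbed candidate box as a positive example, a negative example,
    or neither, using intersection-over-union against all annotated vehicle boxes."""
    overlaps = [iou(candidate, gt) for gt in ground_truth_boxes]
    if not overlaps:
        return "ignore"                       # pure-background negatives are added separately
    if max(overlaps) > first_threshold:
        return "positive"
    if max(overlaps) > second_threshold:      # below 0.5 against all boxes, above 0.1 against one
        return "negative"
    return "ignore"
```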
In implementation, the training process of the convolution kernel may specifically be:
obtaining a number of vehicle sample images, in which each vehicle is annotated with a circumscribed rectangle;
extracting the features of the sample images using the deep convolutional neural network model, and outputting feature images;
detecting the vehicles in the sample images according to the convolution kernel and the feature images, and adjusting the parameters in the convolution kernel according to the detection result and the circumscribed rectangles, until the vehicle positions in the detection result are close to or coincide with the circumscribed rectangles.
In a specific implementation, after a number of vehicle sample images are obtained, a circumscribed rectangle can be annotated for each vehicle in every image to serve as the reference for subsequent model detection.
After the features of the sample images are extracted with the deep convolutional neural network model and the feature images are output, a convolution kernel with preset initial parameters can be convolved with the feature images to detect whether there is a vehicle in the sample images and, if so, its position; the detection result is then compared with the actually annotated circumscribed rectangles, and if they are inconsistent, the values in the convolution kernel are adjusted and the computation is repeated, until the detection result continually approaches the real vehicle positions. The trained convolution kernel is thus obtained and is confirmed to detect correctly whether there is a vehicle and where it is.
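A compressed sketch of such a training loop, assuming PyTorch; the backbone, detection head, data loader and the particular loss functions are placeholders standing in for the components described above, not choices stated in the text.

```python
import torch
import torch.nn as nn

def train_detector(backbone, detection_head, data_loader, epochs=10, lr=0.001):
    """Adjust kernel (and backbone) parameters until predicted boxes approach the annotated rectangles."""
    params = list(backbone.parameters()) + list(detection_head.parameters())
    optimizer = torch.optim.SGD(params, lr=lr, momentum=0.9, weight_decay=0.0005)
    prob_loss = nn.BCEWithLogitsLoss()      # assumed loss for the vehicle-probability map
    offset_loss = nn.SmoothL1Loss()         # assumed loss for the four corner-offset maps
    for _ in range(epochs):
        for image, target_prob, target_offsets in data_loader:
            out = detection_head(backbone(image))        # (batch, 5, H, W) for one scale/ratio
            prob_logit, offsets = out[:, :1], out[:, 1:]
            loss = prob_loss(prob_logit, target_prob) + offset_loss(offsets, target_offsets)
            optimizer.zero_grad()
            loss.backward()                              # back-propagation adjusts the kernel values
            optimizer.step()
```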
The end-to-end vehicle detection method based on a fully convolutional deep neural network proposed by the embodiments of the present application has the following advantages:
1) no pre-processing of the image is required;
2) image input of arbitrary size and arbitrary aspect ratio can be accepted;
3) daytime, night, vehicle-front, vehicle-rear and other situations can be handled simultaneously;
4) not only vehicles can be detected; detection can easily be extended to targets such as tricycles, bicycles and pedestrians;
5) the vehicle outline can be computed accurately, with precision up to 1 pixel, whereas traditional methods typically only support detection boxes of fixed aspect ratio;
6) the method runs on both CPU and GPU with high detection efficiency.
To facilitate the implementation of the present application, an example is given below.
The embodiments of the present application may include four steps: data collection and annotation, network model design, offline training of the detector, and online detection.
1. Data collection and annotation
To achieve unified detection of vehicles in different scenes (checkpoints at different angles, electronic police scenes, etc.), different environments (daytime, night, rain and snow, etc.), at different distances (far, near), from different angles (front, rear, side) and with different aspect ratios, a large amount of real data covering the above situations can first be collected and accurately annotated.
The annotation form may be to mark a circumscribed rectangle for each vehicle; the final annotation result may be a surveillance image containing many rectangles, each corresponding one-to-one to a vehicle. If vehicles occlude each other, the corresponding rectangles may overlap.
2. Network model design
Assume the input picture size is M*N (here 600*1000). The first half of the network model of the embodiments of the present application may adopt the classical convolutional neural network model GoogleNet (formed by stacking convolutional layers and Inception modules); the final output of this part is of size 18x31 (this output is the feature learned by the neural network for detection), and the sliding window is then simulated by convolution.
3. Offline training of the detector
(1) Setting the model training parameters (an illustrative configuration sketch follows this list):
learning rate: 0.001;
mini-batch size: 100, containing 50 positive examples and 50 negative examples;
flip operation: each image is flipped left-right, thereby expanding the training set;
momentum: 0.9;
weight decay: 0.0005.
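An illustrative translation of these settings into code, assuming PyTorch; the sampling helpers and data format are hypothetical, and only the numeric values listed above are taken from the text.

```python
import random
import torch

LEARNING_RATE = 0.001
BATCH_SIZE = 100        # 50 positive + 50 negative examples per mini-batch
MOMENTUM = 0.9
WEIGHT_DECAY = 0.0005

def maybe_flip(image):
    """Left-right flip; the text flips every image once to double the training set,
    approximated here by a random per-sample flip."""
    return torch.flip(image, dims=[-1]) if random.random() < 0.5 else image

def make_minibatch(positive_pool, negative_pool):
    """Compose one mini-batch with the 1:1 positive/negative ratio described above."""
    samples = (random.sample(positive_pool, BATCH_SIZE // 2)
               + random.sample(negative_pool, BATCH_SIZE // 2))
    random.shuffle(samples)
    images = torch.stack([maybe_flip(s["image"]) for s in samples])
    labels = torch.tensor([s["label"] for s in samples])
    return images, labels

# optimizer = torch.optim.SGD(model.parameters(), lr=LEARNING_RATE,
#                             momentum=MOMENTUM, weight_decay=WEIGHT_DECAY)
```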
(2) Positive and negative sample selection mechanism
The detection task is in essence a binary classification task whose result is either vehicle or no vehicle; the embodiments of the present application can generate positive and negative training examples, and positive and negative examples may be generated in a 1:1 ratio.
Here, a positive example may refer to any sample whose intersection-over-union with some real vehicle box (the area of the intersection of the two boxes divided by the area of their union) is greater than 0.5, and a negative example may refer to any sample whose intersection-over-union with all vehicles in the image is below 0.5 and whose intersection-over-union with at least one box is greater than 0.1.
4. Online detection
Fig. 2 shows a schematic diagram of the vehicle detection process in an embodiment of the present application; as shown in the figure, the vehicle detection process may include the following steps:
Step 1: use an RGB camera to acquire a video image of the surveillance scene (assumed to be 600*1000), or a video in which every frame is a video image;
Step 2: use the deep convolutional neural network model to extract features from the video image and output a feature image (18*31);
A specific implementation may be as follows (a structural code sketch follows this list):
input the video image (600*1000);
a convolutional layer performing convolution Conv1(7, 2): kernel size 7*7, stride 2 pixels;
a pooling layer using max pooling Pooling(3, 2): pooling window size 3*3, stride 2 pixels;
a convolutional layer performing convolution Conv2(5, 2): kernel size 5*5, stride 2 pixels;
a pooling layer using max pooling Pooling(3, 2): pooling window size 3*3, stride 2 pixels;
correlation aggregation using the Inception3 module of GoogleNet;
a pooling layer using max pooling Pooling(3, 2): pooling window size 3*3, stride 2 pixels;
correlation aggregation using the Inception4 module of GoogleNet;
finally obtaining the feature image.
Each time the image passes through a convolutional layer Conv or a pooling layer Pooling, its size is roughly halved.
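A structural sketch of this feature-extraction pipeline, assuming PyTorch; the channel counts are assumptions, and the Inception modules of GoogleNet are represented by plain convolutional blocks as placeholders since their internals are not given here.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch, kernel, stride):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel, stride=stride, padding=kernel // 2),
        nn.ReLU(inplace=True),
    )

# Placeholder for GoogleNet's Inception modules (their internals are not listed in the text).
def inception_placeholder(in_ch, out_ch):
    return conv_block(in_ch, out_ch, 3, 1)

backbone = nn.Sequential(
    conv_block(3, 64, 7, 2),                 # Conv1(7, 2)
    nn.MaxPool2d(3, stride=2, padding=1),     # Pooling(3, 2)
    conv_block(64, 128, 5, 2),                # Conv2(5, 2)
    nn.MaxPool2d(3, stride=2, padding=1),     # Pooling(3, 2)
    inception_placeholder(128, 256),          # Inception3 stand-in
    nn.MaxPool2d(3, stride=2, padding=1),     # Pooling(3, 2)
    inception_placeholder(256, 512),          # Inception4 stand-in
)

features = backbone(torch.randn(1, 3, 600, 1000))
# Each stride-2 stage roughly halves the spatial size; with these padding choices the
# output here is about 19*32, close to the 18*31 feature image described in the text.
print(features.shape)
```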
The feature image can be understood as the result of condensing the features of the video image.
Step 3: use the pre-trained convolution kernel (assumed to be 3*3, corresponding to a kernel size of 48*48 in the original video image) to simulate a sliding window and convolve it with the feature image, obtaining 5 parameters;
here the first parameter represents the probability that there is a vehicle (assumed to be 80%), and the 2nd to 5th parameters represent the position of the vehicle (assumed to be A, B, C, D).
Finally, the sliding position of the convolution kernel stops at the position of the detected vehicle in the image; the 2nd to 5th parameters are the top-left and bottom-right coordinates of the convolution kernel. The finally computed convolution kernel position has top-left coordinate (A, B) and bottom-right coordinate (C, D), and this convolution kernel position, with top-left coordinate (A, B) and bottom-right coordinate (C, D), is the position of the vehicle.
In actual detection, many candidate boxes may be output, and the same vehicle may be covered by multiple boxes; in this case, the embodiments of the present application can fuse the boxes. In a specific implementation, a non-maximum suppression strategy can be adopted to merge the multiple boxes corresponding to the same vehicle into one box, obtaining the position where the vehicle is most likely to be.
Step 4: use the relative reference boxes to compute the position of the vehicle in the original image, regressing the precise vehicle position from the approximate convolution kernel position.
The vehicle detection method provided by the embodiments of the present application can use an end-to-end deep fully convolutional neural network to recognise vehicles; a single model is widely applicable to various situations and can automatically handle multiple scales, multiple angles, complex environments and other conditions.
Based on the same inventive concept, the embodiments of the present application further provide a vehicle detection device; because the principle by which this device solves the problem is similar to that of the vehicle detection method, the implementation of the device can refer to the implementation of the method, and repeated parts are not described again.
Fig. 3 shows a schematic structural diagram of the vehicle detection device in an embodiment of the present application; as shown in the figure, the vehicle detection device may include:
an acquisition module, configured to obtain a surveillance image in the current scene;
a feature extraction module, configured to extract features from the surveillance image using a deep convolutional neural network model and output a feature image;
a training module, configured to train the convolution kernel;
a detection module, configured to use the trained convolution kernel as a sliding window and convolve it with the feature image to obtain a detection result, the detection result including the probability that a vehicle is present and the relative position of the vehicle.
In implementation, the feature extraction module may specifically be configured to extract different features from the surveillance image using the deep convolutional neural network model and output k feature images; the detection module may specifically be configured to convolve the 5*k*scale*ratio pre-trained convolution kernels with the k feature images respectively to obtain a detection result including the probability that a vehicle is present and the relative position of the vehicle, where scale is the number of vehicle scales and ratio is the number of vehicle aspect ratios.
In implementation, the convolution may specifically be a fully convolutional computation.
In implementation, the detection module may specifically be configured to convolve the multiple trained convolution kernels as sliding windows with the feature image in parallel.
In implementation, the detection module may be further configured to, if the same vehicle produces responses at different positions during convolution, merge the multiple output relative positions of the vehicle using non-maximum suppression.
In implementation, the device may further comprise:
a computing module, configured to compute the absolute position of the vehicle in the surveillance image according to the relative position of the vehicle and the preset relative reference box.
In implementation, the device may further comprise:
a verification module, configured to verify the detection result with a verification model, the verification model being obtained by training on positive examples and negative examples.
In implementation, the training module may specifically include:
an acquisition unit, configured to obtain a number of vehicle sample images in advance, in which each vehicle is annotated with a circumscribed rectangle;
a feature extraction unit, configured to extract the features of the sample images using the deep convolutional neural network model and output feature images;
a training unit, configured to detect the vehicles in the sample images according to the convolution kernel and the feature images, and to adjust the parameters in the convolution kernel according to the detection result and the circumscribed rectangles until the vehicle positions in the detection result are close to or coincide with the circumscribed rectangles.
For convenience of description, the parts of the device described above are divided by function into various modules or units. Of course, when implementing the present application, the functions of the modules or units may be realised in one or more pieces of software or hardware.
The embodiments of the present application adopt a fully convolutional neural network, so the results of all sliding windows can be obtained by a single forward computation, and, based on relative reference boxes of selected different sizes and ratios, vehicles of multiple scales and aspect ratios can be predicted simultaneously without additional computation. Compared with traditional methods, a deep network has a great advantage in automatically extracting features relevant to the task, so more accurate detection results can be obtained. Moreover, the deep fully convolutional neural network incorporates multi-scale convolutional Inception modules, and because the model is deeper and the receptive field is larger, the background information around the vehicle can be used naturally when predicting the vehicle, further improving detection performance.
Those skilled in the art should understand that the embodiments of the present application may be provided as a method, a system or a computer program product. Therefore, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
The present application is described with reference to flowcharts and/or block diagrams of the method, the device (system) and the computer program product according to the embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for realising the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus which realises the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps is performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for realising the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
Although preferred embodiments of the present application have been described, those skilled in the art can make further changes and modifications to these embodiments once they learn of the basic inventive concept. Therefore, the appended claims are intended to be construed as including the preferred embodiments and all changes and modifications falling within the scope of the present application.

Claims (16)

1. a vehicle checking method, it is characterised in that comprise the steps:
Obtain the monitoring image in current scene;
Utilize degree of depth convolutional neural networks model that described monitoring image is extracted feature, output characteristic image;
Convolution kernel training in advance obtained and described characteristic image carry out convolutional calculation, obtain testing result, and described testing result includes the probability of car and the relative position of vehicle.
2. the method for claim 1, it is characterized in that, described utilize degree of depth convolutional neural networks model to described monitoring image extract feature, output characteristic image, particularly as follows: utilize degree of depth convolutional neural networks model that described monitoring image is extracted different features, export k characteristic image;
Described convolution kernel training in advance obtained and described characteristic image carry out convolutional calculation, particularly as follows: 5*k*scale*ratio convolution kernel training in advance obtained and described k characteristic image carry out convolutional calculation respectively; Wherein, described scale is the kind of vehicle scale, and described ratio is the kind of vehicle length-width ratio.
3. the method for claim 1, it is characterised in that described convolutional calculation is specially full convolutional calculation.
4. the method for claim 1, it is characterised in that the convolution kernel that described training in advance obtains is multiple, described convolution kernel training in advance obtained and described characteristic image carry out convolutional calculation, particularly as follows:
The convolution kernel the plurality of training in advance obtained is parallel carries out convolutional calculation with described characteristic image.
5. method as claimed in claim 4, it is characterised in that if same car produces response at diverse location when carrying out convolutional calculation, adopt non-maximum suppressing method to be merged by multiple relative positions of the described vehicle of output.
6. the method for claim 1, it is characterised in that farther include:
Relative position according to described vehicle and the relative reference frame pre-set, calculate described vehicle absolute position in described monitoring image.
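One non-limiting reading of this claim is that the preset relative reference frame consists of the feature-map stride plus an anchor box centred on each cell. Under that assumption (the stride and anchor size below are illustrative values), the relative position and predicted offsets map to an absolute image-space box as follows:

```python
import math

def to_absolute(cell_row, cell_col, offsets, stride=16, anchor_w=64, anchor_h=64):
    """Map a feature-map cell and predicted offsets to an image-space box.

    offsets = (dx, dy, dw, dh) relative to the anchor; the stride and anchor
    size define the assumed reference frame and are illustrative values.
    """
    cx = (cell_col + 0.5) * stride + offsets[0] * anchor_w   # absolute centre x
    cy = (cell_row + 0.5) * stride + offsets[1] * anchor_h   # absolute centre y
    w = anchor_w * math.exp(offsets[2])                      # absolute width
    h = anchor_h * math.exp(offsets[3])                      # absolute height
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)
```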
7. the method for claim 1, it is characterised in that farther include:
Utilizing Knowledge Verification Model that described testing result is verified, described Knowledge Verification Model is by being trained obtaining to positive example and negative example.
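A non-limiting sketch of such a verification step, assuming the verification model is a simple binary classifier over flattened crops of the detected regions; scikit-learn's LogisticRegression is used purely as a stand-in for whatever model the positive and negative examples would actually train.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_verification_model(positive_crops, negative_crops):
    """Train a binary verifier from positive and negative example crops.

    positive_crops / negative_crops: lists of equally sized 2D arrays.
    LogisticRegression is an illustrative stand-in, not the patented model.
    """
    X = np.array([c.ravel() for c in positive_crops + negative_crops])
    y = np.array([1] * len(positive_crops) + [0] * len(negative_crops))
    model = LogisticRegression(max_iter=1000)
    model.fit(X, y)
    return model

def verify(model, crop, threshold=0.5):
    """Accept or reject a detection by scoring its crop with the verifier."""
    return model.predict_proba(crop.ravel()[None, :])[0, 1] > threshold
```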
8. the method for claim 1, it is characterised in that the training process of described convolution kernel particularly as follows:
Obtaining some vehicle sample images, in described sample image, each car is labeled with boundary rectangle frame;
Utilize the feature of sample image, output characteristic image described in degree of depth convolutional neural networks model extraction;
Detect the vehicle in described sample image according to convolution kernel and described characteristic image, adjust the parameter in described convolution kernel according to testing result and described boundary rectangle frame, until the position of vehicle is close with described boundary rectangle frame or overlap in described testing result.
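A non-limiting sketch of such a training loop, assuming a single kernel, a target response map derived from the labeled bounding rectangles (1.0 at feature-map cells covered by a rectangle, 0.0 elsewhere), a squared-error loss and plain gradient descent; none of these choices are prescribed by the claim.

```python
import numpy as np
from scipy.signal import correlate2d

def train_kernel(feature_maps, target_maps, kernel_shape=(3, 3), lr=1e-3, epochs=200):
    """Fit one detection kernel so its response map matches the targets.

    feature_maps: list of 2D characteristic images extracted from the samples.
    target_maps: list of 2D arrays of the matching 'valid' output size, built
        from the labeled bounding rectangles (illustrative target encoding).
    """
    kernel = np.random.randn(*kernel_shape) * 0.01
    for _ in range(epochs):
        for feat, target in zip(feature_maps, target_maps):
            pred = correlate2d(feat, kernel, mode="valid")   # detection response map
            error = pred - target                            # mismatch vs. labeled rectangles
            grad = correlate2d(feat, error, mode="valid")    # d(squared-error loss)/d(kernel)
            kernel -= lr * grad                              # adjust kernel parameters
    return kernel
```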
9. A vehicle detection apparatus, characterised in that it comprises:
an acquisition module for obtaining a monitoring image of a current scene;
a feature extraction module for extracting features from the monitoring image with a deep convolutional neural network model and outputting a characteristic image;
a training module for training a convolution kernel;
a detection module for performing a convolution calculation between the trained convolution kernel and the characteristic image to obtain a detection result, the detection result comprising the probability that a vehicle is present and the relative position of the vehicle.
10. The apparatus of claim 9, characterised in that the feature extraction module is specifically used to extract different features from the monitoring image with the deep convolutional neural network model and output k characteristic images; the detection module is specifically used to perform convolution calculations between 5*k*scale*ratio pre-trained convolution kernels and the k characteristic images respectively to obtain the detection result, the detection result comprising the probability that a vehicle is present and the relative position of the vehicle; wherein scale is the number of vehicle scale categories and ratio is the number of vehicle aspect-ratio categories.
11. The apparatus of claim 9, characterised in that the convolution calculation is specifically a fully convolutional calculation.
12. The apparatus of claim 9, characterised in that the detection module is specifically used to perform convolution calculations between the multiple trained convolution kernels and the characteristic image in parallel.
13. The apparatus of claim 12, characterised in that the detection module is further used to merge, by non-maximum suppression, the multiple relative positions of the vehicle that are output if the same vehicle produces responses at different positions during the convolution calculation.
14. The apparatus of claim 9, characterised in that it further comprises:
a calculation module for calculating the absolute position of the vehicle in the monitoring image according to the relative position of the vehicle and a preset relative reference frame.
15. The apparatus of claim 9, characterised in that it further comprises:
a verification module for verifying the detection result with a verification model, the verification model being obtained by training on positive and negative examples.
16. The apparatus of claim 9, characterised in that the training module specifically comprises:
an acquisition unit for obtaining a number of vehicle sample images in advance, each vehicle in the sample images being labeled with a bounding rectangle;
a feature extraction unit for extracting features from the sample images with the deep convolutional neural network model and outputting characteristic images;
a training unit for detecting the vehicles in the sample images according to the convolution kernel and the characteristic images, and adjusting the parameters of the convolution kernel according to the detection result and the bounding rectangles, until the positions of the vehicles in the detection result are close to or coincide with the bounding rectangles.
CN201610073326.6A 2016-02-02 2016-02-02 Vehicle detection method and device Pending CN105654067A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610073326.6A CN105654067A (en) 2016-02-02 2016-02-02 Vehicle detection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610073326.6A CN105654067A (en) 2016-02-02 2016-02-02 Vehicle detection method and device

Publications (1)

Publication Number Publication Date
CN105654067A true CN105654067A (en) 2016-06-08

Family

ID=56488163

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610073326.6A Pending CN105654067A (en) 2016-02-02 2016-02-02 Vehicle detection method and device

Country Status (1)

Country Link
CN (1) CN105654067A (en)

Cited By (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106355573A (en) * 2016-08-24 2017-01-25 北京小米移动软件有限公司 Target object positioning method and device in pictures
CN106407931A (en) * 2016-09-19 2017-02-15 杭州电子科技大学 Novel deep convolution neural network moving vehicle detection method
CN106446929A (en) * 2016-07-18 2017-02-22 浙江工商大学 Vehicle type detection method based on edge gradient potential energy
CN106651955A (en) * 2016-10-10 2017-05-10 北京小米移动软件有限公司 Method and device for positioning object in picture
CN106778773A (en) * 2016-11-23 2017-05-31 北京小米移动软件有限公司 The localization method and device of object in picture
CN106845430A (en) * 2017-02-06 2017-06-13 东华大学 Pedestrian detection and tracking based on acceleration region convolutional neural networks
CN106897747A (en) * 2017-02-28 2017-06-27 深圳市捷顺科技实业股份有限公司 A kind of method and device for differentiating vehicle color based on convolutional neural networks model
CN106909943A (en) * 2017-02-28 2017-06-30 深圳市捷顺科技实业股份有限公司 A kind of method and device for differentiating vehicle color based on convolutional neural networks model
CN107139179A (en) * 2017-05-26 2017-09-08 西安电子科技大学 A kind of intellect service robot and method of work
CN107545238A (en) * 2017-07-03 2018-01-05 西安邮电大学 Underground coal mine pedestrian detection method based on deep learning
CN107729363A (en) * 2017-09-06 2018-02-23 上海交通大学 Based on GoogLeNet network model birds population identifying and analyzing methods
CN107766789A (en) * 2017-08-21 2018-03-06 浙江零跑科技有限公司 A kind of vehicle detection localization method based on vehicle-mounted monocular camera
CN107798328A (en) * 2016-08-30 2018-03-13 合肥君正科技有限公司 A kind of destination object searching method and device
CN107972026A (en) * 2016-10-25 2018-05-01 深圳光启合众科技有限公司 Robot, mechanical arm and its control method and device
CN108229386A (en) * 2017-12-29 2018-06-29 百度在线网络技术(北京)有限公司 For detecting the method, apparatus of lane line and medium
CN108509961A (en) * 2017-02-27 2018-09-07 北京旷视科技有限公司 Image processing method and device
CN108648192A (en) * 2018-05-17 2018-10-12 杭州依图医疗技术有限公司 A kind of method and device of detection tubercle
CN108764293A (en) * 2018-04-28 2018-11-06 重庆交通大学 A kind of vehicle checking method and system based on image
CN108875902A (en) * 2017-12-04 2018-11-23 北京旷视科技有限公司 Neural network training method and device, vehicle detection estimation method and device, storage medium
CN109033964A (en) * 2018-06-22 2018-12-18 顺丰科技有限公司 It is a kind of judgement vehicle to departure from port event method, system and equipment
CN109063701A (en) * 2018-08-08 2018-12-21 合肥英睿系统技术有限公司 Labeling method, device, equipment and the storage medium of target in a kind of infrared image
CN109509345A (en) * 2017-09-15 2019-03-22 富士通株式会社 Vehicle detection apparatus and method
CN109543627A (en) * 2018-11-27 2019-03-29 西安电子科技大学 A kind of method, apparatus and computer equipment judging driving behavior classification
CN109615858A (en) * 2018-12-21 2019-04-12 深圳信路通智能技术有限公司 A kind of intelligent parking behavior judgment method based on deep learning
CN109686088A (en) * 2018-12-29 2019-04-26 重庆同济同枥信息技术有限公司 A kind of traffic video alarm method, equipment and system
CN109712127A (en) * 2018-12-21 2019-05-03 云南电网有限责任公司电力科学研究院 A kind of electric transmission line fault detection method for patrolling video flowing for machine
US10296828B2 (en) 2017-04-05 2019-05-21 Here Global B.V. Learning a similarity measure for vision-based localization on a high definition (HD) map
CN109803067A (en) * 2017-11-16 2019-05-24 富士通株式会社 Video concentration method, video enrichment facility and electronic equipment
CN109934417A (en) * 2019-03-26 2019-06-25 国电民权发电有限公司 Boiler coke method for early warning based on convolutional neural networks
CN110659545A (en) * 2018-06-29 2020-01-07 比亚迪股份有限公司 Training method of vehicle recognition model, vehicle recognition method and device and vehicle
CN110920626A (en) * 2019-12-10 2020-03-27 中国科学院深圳先进技术研究院 Data-driven electric drive vehicle attachment stability identification method and device
CN110971969A (en) * 2019-12-09 2020-04-07 北京字节跳动网络技术有限公司 Video dubbing method and device, electronic equipment and computer readable storage medium
CN110969065A (en) * 2018-09-30 2020-04-07 北京四维图新科技股份有限公司 Vehicle detection method and device, front vehicle anti-collision early warning equipment and storage medium
CN111383458A (en) * 2018-12-30 2020-07-07 浙江宇视科技有限公司 Vehicle violation detection method, device, equipment and storage medium
WO2020155828A1 (en) * 2019-02-01 2020-08-06 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN111626081A (en) * 2019-02-27 2020-09-04 顺丰科技有限公司 Method and device for determining state of loading and unloading port and storage medium
CN111895931A (en) * 2020-07-17 2020-11-06 嘉兴泊令科技有限公司 Coal mine operation area calibration method based on computer vision
CN112580402A (en) * 2019-09-30 2021-03-30 广州汽车集团股份有限公司 Monocular vision pedestrian distance measurement method and system, vehicle and medium thereof
CN113111709A (en) * 2021-03-10 2021-07-13 北京爱笔科技有限公司 Vehicle matching model generation method and device, computer equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104036323A (en) * 2014-06-26 2014-09-10 叶茂 Vehicle detection method based on convolutional neural network
CN105046196A (en) * 2015-06-11 2015-11-11 西安电子科技大学 Front vehicle information structured output method base on concatenated convolutional neural networks
CN105184271A (en) * 2015-09-18 2015-12-23 苏州派瑞雷尔智能科技有限公司 Automatic vehicle detection method based on deep learning

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104036323A (en) * 2014-06-26 2014-09-10 叶茂 Vehicle detection method based on convolutional neural network
CN105046196A (en) * 2015-06-11 2015-11-11 西安电子科技大学 Front vehicle information structured output method base on concatenated convolutional neural networks
CN105184271A (en) * 2015-09-18 2015-12-23 苏州派瑞雷尔智能科技有限公司 Automatic vehicle detection method based on deep learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
罗斌 (Luo Bin): "License plate localization in complex environments using a fully convolutional neural network based on corner regression", Journal of Data Acquisition and Processing (《数据采集与处理》) *

Cited By (56)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106446929A (en) * 2016-07-18 2017-02-22 浙江工商大学 Vehicle type detection method based on edge gradient potential energy
CN106446929B (en) * 2016-07-18 2019-02-22 浙江工商大学 Type of vehicle detection method based on edge gradient potential energy
CN106355573A (en) * 2016-08-24 2017-01-25 北京小米移动软件有限公司 Target object positioning method and device in pictures
CN106355573B (en) * 2016-08-24 2019-10-25 北京小米移动软件有限公司 The localization method and device of object in picture
CN107798328A (en) * 2016-08-30 2018-03-13 合肥君正科技有限公司 A kind of destination object searching method and device
CN106407931A (en) * 2016-09-19 2017-02-15 杭州电子科技大学 Novel deep convolution neural network moving vehicle detection method
CN106407931B (en) * 2016-09-19 2019-11-22 杭州电子科技大学 A kind of depth convolutional neural networks moving vehicle detection method
CN106651955A (en) * 2016-10-10 2017-05-10 北京小米移动软件有限公司 Method and device for positioning object in picture
CN107972026A (en) * 2016-10-25 2018-05-01 深圳光启合众科技有限公司 Robot, mechanical arm and its control method and device
CN106778773A (en) * 2016-11-23 2017-05-31 北京小米移动软件有限公司 The localization method and device of object in picture
CN106845430A (en) * 2017-02-06 2017-06-13 东华大学 Pedestrian detection and tracking based on acceleration region convolutional neural networks
CN108509961A (en) * 2017-02-27 2018-09-07 北京旷视科技有限公司 Image processing method and device
CN106909943A (en) * 2017-02-28 2017-06-30 深圳市捷顺科技实业股份有限公司 A kind of method and device for differentiating vehicle color based on convolutional neural networks model
CN106897747A (en) * 2017-02-28 2017-06-27 深圳市捷顺科技实业股份有限公司 A kind of method and device for differentiating vehicle color based on convolutional neural networks model
US10296828B2 (en) 2017-04-05 2019-05-21 Here Global B.V. Learning a similarity measure for vision-based localization on a high definition (HD) map
CN107139179B (en) * 2017-05-26 2020-05-29 西安电子科技大学 Intelligent service robot and working method
CN107139179A (en) * 2017-05-26 2017-09-08 西安电子科技大学 A kind of intellect service robot and method of work
CN107545238A (en) * 2017-07-03 2018-01-05 西安邮电大学 Underground coal mine pedestrian detection method based on deep learning
CN107766789A (en) * 2017-08-21 2018-03-06 浙江零跑科技有限公司 A kind of vehicle detection localization method based on vehicle-mounted monocular camera
CN107766789B (en) * 2017-08-21 2020-05-29 浙江零跑科技有限公司 Vehicle detection positioning method based on vehicle-mounted monocular camera
CN107729363A (en) * 2017-09-06 2018-02-23 上海交通大学 Based on GoogLeNet network model birds population identifying and analyzing methods
CN107729363B (en) * 2017-09-06 2021-08-17 上海交通大学 Bird population identification analysis method based on GoogLeNet network model
CN109509345A (en) * 2017-09-15 2019-03-22 富士通株式会社 Vehicle detection apparatus and method
CN109803067A (en) * 2017-11-16 2019-05-24 富士通株式会社 Video concentration method, video enrichment facility and electronic equipment
CN108875902A (en) * 2017-12-04 2018-11-23 北京旷视科技有限公司 Neural network training method and device, vehicle detection estimation method and device, storage medium
CN108229386A (en) * 2017-12-29 2018-06-29 百度在线网络技术(北京)有限公司 For detecting the method, apparatus of lane line and medium
CN108764293A (en) * 2018-04-28 2018-11-06 重庆交通大学 A kind of vehicle checking method and system based on image
CN108648192A (en) * 2018-05-17 2018-10-12 杭州依图医疗技术有限公司 A kind of method and device of detection tubercle
CN109033964B (en) * 2018-06-22 2022-03-15 顺丰科技有限公司 Method, system and equipment for judging arrival and departure events of vehicles
CN109033964A (en) * 2018-06-22 2018-12-18 顺丰科技有限公司 It is a kind of judgement vehicle to departure from port event method, system and equipment
CN110659545A (en) * 2018-06-29 2020-01-07 比亚迪股份有限公司 Training method of vehicle recognition model, vehicle recognition method and device and vehicle
CN110659545B (en) * 2018-06-29 2023-11-14 比亚迪股份有限公司 Training method of vehicle identification model, vehicle identification method, device and vehicle
CN109063701A (en) * 2018-08-08 2018-12-21 合肥英睿系统技术有限公司 Labeling method, device, equipment and the storage medium of target in a kind of infrared image
CN110969065A (en) * 2018-09-30 2020-04-07 北京四维图新科技股份有限公司 Vehicle detection method and device, front vehicle anti-collision early warning equipment and storage medium
CN110969065B (en) * 2018-09-30 2023-11-28 北京四维图新科技股份有限公司 Vehicle detection method and device, front vehicle anti-collision early warning device and storage medium
CN109543627B (en) * 2018-11-27 2023-08-01 西安电子科技大学 Method and device for judging driving behavior category and computer equipment
CN109543627A (en) * 2018-11-27 2019-03-29 西安电子科技大学 A kind of method, apparatus and computer equipment judging driving behavior classification
CN109615858A (en) * 2018-12-21 2019-04-12 深圳信路通智能技术有限公司 A kind of intelligent parking behavior judgment method based on deep learning
CN109712127A (en) * 2018-12-21 2019-05-03 云南电网有限责任公司电力科学研究院 A kind of electric transmission line fault detection method for patrolling video flowing for machine
CN109686088A (en) * 2018-12-29 2019-04-26 重庆同济同枥信息技术有限公司 A kind of traffic video alarm method, equipment and system
CN111383458B (en) * 2018-12-30 2021-07-27 浙江宇视科技有限公司 Vehicle violation detection method, device, equipment and storage medium
CN111383458A (en) * 2018-12-30 2020-07-07 浙江宇视科技有限公司 Vehicle violation detection method, device, equipment and storage medium
WO2020155828A1 (en) * 2019-02-01 2020-08-06 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
TWI728621B (en) * 2019-02-01 2021-05-21 大陸商北京市商湯科技開發有限公司 Image processing method and device, electronic equipment, computer readable storage medium and computer program
CN111626081A (en) * 2019-02-27 2020-09-04 顺丰科技有限公司 Method and device for determining state of loading and unloading port and storage medium
CN111626081B (en) * 2019-02-27 2024-03-26 顺丰科技有限公司 Method and device for determining state of loading and unloading port and storage medium
CN109934417B (en) * 2019-03-26 2023-04-07 国电民权发电有限公司 Boiler coking early warning method based on convolutional neural network
CN109934417A (en) * 2019-03-26 2019-06-25 国电民权发电有限公司 Boiler coke method for early warning based on convolutional neural networks
CN112580402A (en) * 2019-09-30 2021-03-30 广州汽车集团股份有限公司 Monocular vision pedestrian distance measurement method and system, vehicle and medium thereof
CN110971969A (en) * 2019-12-09 2020-04-07 北京字节跳动网络技术有限公司 Video dubbing method and device, electronic equipment and computer readable storage medium
CN110920626B (en) * 2019-12-10 2021-06-04 中国科学院深圳先进技术研究院 Data-driven electric drive vehicle attachment stability identification method and device
CN110920626A (en) * 2019-12-10 2020-03-27 中国科学院深圳先进技术研究院 Data-driven electric drive vehicle attachment stability identification method and device
CN111895931B (en) * 2020-07-17 2021-11-26 嘉兴泊令科技有限公司 Coal mine operation area calibration method based on computer vision
CN111895931A (en) * 2020-07-17 2020-11-06 嘉兴泊令科技有限公司 Coal mine operation area calibration method based on computer vision
CN113111709A (en) * 2021-03-10 2021-07-13 北京爱笔科技有限公司 Vehicle matching model generation method and device, computer equipment and storage medium
CN113111709B (en) * 2021-03-10 2023-12-29 北京爱笔科技有限公司 Vehicle matching model generation method, device, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
CN105654067A (en) Vehicle detection method and device
CN105740910A (en) Vehicle object detection method and device
CN108805016B (en) Head and shoulder area detection method and device
CN105574550A (en) Vehicle identification method and device
CN103679205B (en) Assume based on shade and the Foregut fermenters method of layering HOG symmetrical feature checking
CN109087337B (en) Long-time target tracking method and system based on hierarchical convolution characteristics
CN103886279B (en) Real-time rider detection using synthetic training data
CN114049572A (en) Detection method for identifying small target
CN109543498B (en) Lane line detection method based on multitask network
CN110909656B (en) Pedestrian detection method and system integrating radar and camera
CN112307984A (en) Safety helmet detection method and device based on neural network
CN113052071B (en) Method and system for rapidly detecting distraction behavior of driver of hazardous chemical substance transport vehicle
CN116665095B (en) Method and system for detecting motion ship, storage medium and electronic equipment
CN109284752A (en) A kind of rapid detection method of vehicle
CN115937736A (en) Small target detection method based on attention and context awareness
CN116168328A (en) Thyroid nodule ultrasonic inspection system and method
CN112446292B (en) 2D image salient object detection method and system
CN112733671A (en) Pedestrian detection method, device and readable storage medium
CN108830166B (en) Real-time bus passenger flow volume statistical method
Yang et al. Research on Target Detection Algorithm for Complex Scenes
CN116778262B (en) Three-dimensional target detection method and system based on virtual point cloud
CN116823819B (en) Weld surface defect detection method, system, electronic equipment and storage medium
CN115496977B (en) Target detection method and device based on multi-mode sequence data fusion
US20230245466A1 (en) Vehicle Lidar System and Object Classification Method Therewith
CN112712061B (en) Method, system and storage medium for recognizing multidirectional traffic police command gestures

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100092 Beijing Haidian District Yongtaizhuang North Road No. 1 Tiandi Adjacent to Block B, Building 1, Fengji Industrial Park

Applicant after: BEIJING DEEPGLINT INFORMATION TECHNOLOGY CO., LTD.

Address before: 100091 No. 6 Yudai Road, Haidian District, Beijing

Applicant before: BEIJING DEEPGLINT INFORMATION TECHNOLOGY CO., LTD.

RJ01 Rejection of invention patent application after publication

Application publication date: 20160608