CN109284752A - Rapid vehicle detection method - Google Patents
Rapid vehicle detection method
- Publication number
- CN109284752A CN109284752A CN201810883971.3A CN201810883971A CN109284752A CN 109284752 A CN109284752 A CN 109284752A CN 201810883971 A CN201810883971 A CN 201810883971A CN 109284752 A CN109284752 A CN 109284752A
- Authority
- CN
- China
- Prior art keywords
- layer
- size
- vehicle
- convolutional layer
- length
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/08—Detecting or categorising vehicles
Abstract
The invention discloses a rapid vehicle detection method comprising: step 1) constructing and training a vehicle detection convolutional neural network; step 2) acquiring and processing an image, feeding it to the network trained in step 1), and obtaining the vehicle detection result from the network output. The method of the invention achieves high detection accuracy with a small computational load and realizes rapid detection on a CPU alone, without requiring a GPU.
Description
Technical field
The present invention relates to the fields of computer vision and deep learning, and in particular to a rapid vehicle detection method.
Background technique
In today's rapidly developing society, transportation is a key area affecting people's daily lives. Although it has made significant progress, it still cannot fully meet people's demand for transport, and increasingly prominent traffic problems have become a thorny global issue: congestion and blockage keep worsening, while traffic accidents and environmental pollution draw growing public attention. Obtaining accurate, real-time information on the number and distribution of vehicles on the road is a basic requirement of intelligent-transportation scene perception systems. It mainly involves three stages: detecting targets of interest, recognizing the detected targets, and tracking moving targets. Recognition and tracking are both built on detection results, so vehicle detection performance plays a crucial role in such systems.
Traditional vehicle detection methods include background subtraction, frame differencing, and optical flow, which extract vehicles from the image using differences in color and shape between vehicle and background. They usually involve many manually specified thresholds, generalize poorly, and give unstable detection results. A second class comprises algorithms based on statistical models, such as SVM and AdaBoost, which use large numbers of positive and negative samples together with a learning algorithm to obtain a discriminant function for identifying targets in an image. Since their introduction, statistical-model-based vehicle detection methods have been widely applied, but their detection performance still falls considerably short of practical requirements.
In recent years, methods based on deep learning have developed greatly, providing algorithmic support for the evolution of computer science toward intelligence. The basic idea of deep learning is to construct deep artificial neural networks that simulate the learning mechanism of the human brain, "automatically" learning features of the target object in an unsupervised manner. The learned features have a hierarchical structure, from fine detail to abstract concepts, and thus characterize the data itself more essentially. Deep learning methods have achieved breakthrough success in many fields: the handwritten-digit recognition systems of several U.S. banks, Google's integrated image classification and speech recognition project Google Brain, and Microsoft's fully automatic simultaneous interpretation system are all realized with deep learning. In particular, deep learning algorithms based on convolutional neural networks have reached world-leading levels in multiple areas of image processing.
Traditional vehicle detection methods based on motion information are strongly affected by environmental changes, vulnerable to background noise, and low in detection accuracy. Machine-learning-based methods are limited by the number of training samples, lack the ability to detect vehicles in unusual poses or under occlusion, and generalize poorly. Detection methods based on deep learning, which have developed rapidly in recent years, offer a qualitative leap in detection accuracy over conventional methods, and the huge public data sets used for training guarantee their generalization ability. However, because the convolutional neural networks they commonly use are highly complex, most deep-learning detection methods need a high-performance GPU to reach acceptable processing speed, which limits their large-scale adoption.
At present, vehicle detection algorithms based on convolutional neural networks can already achieve high detection accuracy, but they still suffer from slow detection and heavy consumption of computing resources, usually requiring a GPU hardware platform for real-time processing. GPU hardware platforms, however, are expensive, making actual deployment costs prohibitive for users. Users urgently need a high-performance vehicle detection method capable of real-time analysis without a GPU.
Summary of the invention
The object of the present invention is to overcome the problem that existing deep-learning-based vehicle detection techniques cannot reach the detection speed required in practical applications. The invention proposes a rapid vehicle detection method that increases detection speed while maintaining detection accuracy.
To achieve the above goal, the present invention provides a rapid vehicle detection method, the method comprising:
Step 1) constructing and training a vehicle detection convolutional neural network;
Step 2) acquiring and processing an image, feeding it to the vehicle detection convolutional neural network trained in step 1), and obtaining the vehicle detection result from the output.
As an improvement of the above method, step 1) specifically includes:
Step 1-1) the vehicle detection convolutional neural network comprises, connected in sequence: an input layer, a first convolutional layer, a first down-sampling layer, a second convolutional layer, a third convolutional layer, a fourth convolutional layer, a second down-sampling layer, a fifth convolutional layer, a sixth convolutional layer, a seventh convolutional layer, a third down-sampling layer, an eighth convolutional layer, a fourth down-sampling layer, a ninth convolutional layer, a first fully connected layer, a second fully connected layer, and an output layer;
Step 1-2) each training sample in the training set is fed into the vehicle detection convolutional neural network, and the network parameters are trained iteratively using the classification results and the training labels.
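The iterative parameter update of step 1-2) can be illustrated with a deliberately simplified sketch: a single linear neuron trained by stochastic gradient descent on a squared-error loss. The loop shape, learning rate, and toy data are all illustrative stand-ins, not the patent's actual network or loss.

```python
# Minimal sketch of an iterative training loop in the spirit of step 1-2),
# shown on a single-neuron stand-in. All names and values are illustrative.

def train(samples, labels, lr=0.1, epochs=500):
    """Iteratively fit weight w and bias b with squared-error gradient steps."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = w * x + b          # forward pass (network output)
            err = pred - y            # compare output with training label
            w -= lr * err * x         # gradient step on the parameters
            b -= lr * err
    return w, b

# Toy data generated by y = 2x + 1; training recovers w ≈ 2, b ≈ 1.
w, b = train([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```

The real network is trained the same way in outline: forward pass, comparison with the label, and a parameter update, repeated over the training set.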
As an improvement of the above method, in the vehicle detection convolutional neural network of step 1-1):
The input layer receives a color image of size 448*448;
The first convolutional layer has 16 convolution kernels, each of size 7 × 7, with stride 2 and automatic zero padding at the boundary; its 7 × 7 kernels scan all pixels of the input image, producing 16 feature maps of size 224 × 224;
The first down-sampling layer applies max pooling to the previous layer, with window size 4 × 4 and stride 4; it contains 16 down-sampled images, each of size 56 × 56;
The second convolutional layer contains 4 convolution kernels of size 1 × 1 with stride 1 and automatic zero padding when scanning, producing feature maps of size 56 × 56;
The third convolutional layer contains 4 convolution kernels of size 1 × 1 with stride 1 and automatic zero padding when scanning, producing feature maps of size 56 × 56;
The fourth convolutional layer contains 8 convolution kernels of size 3 × 3 with stride 1 and automatic zero padding when scanning, producing feature maps of size 56 × 56;
The second down-sampling layer applies max pooling to the previous layer, with window size 2 × 2 and stride 2; it contains 8 down-sampled images, each of size 28 × 28;
The fifth convolutional layer contains 8 convolution kernels of size 1 × 1 with stride 1 and automatic zero padding when scanning, producing feature maps of size 28 × 28;
The sixth convolutional layer contains 8 convolution kernels of size 1 × 1 with stride 1 and automatic zero padding when scanning, producing feature maps of size 28 × 28;
The seventh convolutional layer contains 16 convolution kernels of size 3 × 3 with stride 1 and automatic zero padding when scanning, producing feature maps of size 28 × 28;
The third down-sampling layer applies max pooling to the previous layer, with window size 2 × 2 and stride 2; it contains 16 down-sampled images, each of size 14 × 14;
The eighth convolutional layer contains 32 convolution kernels of size 3 × 3 with stride 1 and automatic zero padding when scanning, producing feature maps of size 14 × 14;
The fourth down-sampling layer applies max pooling to the previous layer, with window size 2 × 2 and stride 2; it contains 32 down-sampled images, each of size 7 × 7;
The ninth convolutional layer contains 64 convolution kernels of size 3 × 3 with stride 1 and automatic zero padding when scanning, producing feature maps of size 7 × 7;
The first fully connected layer consists of 256 neurons, activated with the ReLU function;
The second fully connected layer consists of 4096 neurons, activated with the Leaky ReLU function;
The output layer consists of 891 neurons, activated with the ReLU function.
As an improvement of the above method, the first through ninth convolutional layers all activate their neurons with the Leaky ReLU function.
As an improvement of the above method, step 2) specifically includes:
Step 2-1) a region of interest for vehicle detection is delimited in the acquired image; the region of interest is a rectangle determined by four parameters (x, y, w, h), where x and y are the coordinates of the top-left corner, w is the rectangle width, and h is the rectangle height;
Step 2-2) the image in the region of interest is normalized to size 448*448;
Step 2-3) the 448*448 image obtained in step 2-2) is divided into S*S grid cells; the vehicle detection convolutional neural network constructed in step 1) classifies the image content in each cell; each cell outputs B rectangular boxes, each box comprising 5 parameters: (x, y), the bounding-box center relative to the cell, (w, h), the width and height of the bounding box, and P_o, the confidence probability that the box contains a vehicle; each cell also outputs C class-conditional probabilities P_c; here S=9, B=2, and C=1;
Step 2-3) for each cell, the product P_o·P_c of the class-conditional probability P_c and the vehicle confidence probability P_o is taken as the confidence probability of the corresponding class;
Step 2-4) whether the class confidence probability exceeds the threshold, set to 0.6, is judged; if so, the rectangular box is considered a detected vehicle and the method proceeds to step 2-5); otherwise no vehicle is detected;
Step 2-5) overlapping boxes suspected of covering the same target are removed from the rectangular boxes with the non-maximum suppression algorithm, yielding the final vehicle detection boxes.
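The confidence test described in the two steps above can be sketched as follows; the function name is illustrative, and only the 0.6 threshold comes from the text.

```python
# Sketch of the class-confidence test: a box is kept as a vehicle detection
# only if the product P_o * P_c exceeds the threshold (0.6 in the invention).

THRESHOLD = 0.6

def is_vehicle(p_o, p_c, threshold=THRESHOLD):
    """p_o: box confidence; p_c: class-conditional probability."""
    return p_o * p_c > threshold

kept = is_vehicle(0.9, 0.8)       # 0.72 > 0.6 -> box kept
dropped = is_vehicle(0.7, 0.7)    # 0.49 <= 0.6 -> box discarded
```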
As an improvement of the above method, the normalization of step 2-2) comprises:
For each point (i, j) of the image in the region of interest, the value at the corresponding coordinates (distI, distJ) after interpolation is F(i+u, j+v), where v is the row offset, u is the column offset, and R(·) denotes the convolution interpolation kernel.
The advantage of the present invention is that it proposes a vehicle detection method based on a lightweight vehicle detection convolutional neural network; the method achieves high detection accuracy with a small computational load and realizes rapid detection on a CPU without a GPU.
Detailed description of the invention
Fig. 1 is a structural diagram of the convolutional neural network for rapid vehicle detection provided by the present invention.
Specific embodiment
The invention will now be further described with reference to the accompanying drawing.
The present invention provides a rapid vehicle detection method realized with a purpose-built convolutional neural network, whose structure is shown in Fig. 1. The method is described in detail below.
Step 1) constructing and training the vehicle detection convolutional neural network.
The network takes as input a color image of size 448*448.
The vehicle detection convolutional neural network constructed by the present invention comprises 9 convolutional layers, 4 down-sampling layers, two fully connected layers, and one output layer.
In a convolutional layer, each neuron is connected only to a local receptive field of the previous layer. Convolutional layer C1 has 16 convolution kernels, each of size 7 × 7, with stride 2 and automatic zero padding at the boundary; its 7 × 7 kernels scan all pixels of the input image, producing 16 feature maps of size 224 × 224. Convolutional layer C2 has 4 kernels of size 1 × 1 with stride 1 and automatic zero padding when scanning, producing feature maps of size 56 × 56. Convolutional layer C3 likewise has 4 kernels of size 1 × 1 with stride 1 and zero padding, producing 56 × 56 feature maps. Convolutional layer C4 has 8 kernels of size 3 × 3 with stride 1 and zero padding, producing 56 × 56 feature maps. Convolutional layer C5 has 8 kernels of size 1 × 1 with stride 1 and zero padding, producing 28 × 28 feature maps. Convolutional layer C6 has 8 kernels of size 1 × 1 with stride 1 and zero padding, producing 28 × 28 feature maps. Convolutional layer C7 has 16 kernels of size 3 × 3 with stride 1 and zero padding, producing 28 × 28 feature maps. Convolutional layer C8 has 32 kernels of size 3 × 3 with stride 1 and zero padding, producing 14 × 14 feature maps. Convolutional layer C9 has 64 kernels of size 3 × 3 with stride 1 and zero padding, producing 7 × 7 feature maps. The convolutional layers activate their neurons with the Leaky ReLU function.
In a down-sampling layer, each neuron is connected only to a local patch of the preceding convolutional layer. Down-sampling layers S1, S2, S3, and S4 apply max pooling (Max Pooling) to the previous layer. S1 down-samples with a 4 × 4 window and stride 4; S2, S3, and S4 use 2 × 2 windows with stride 2. Each image in a down-sampling layer corresponds one-to-one with a feature map of the preceding convolutional layer: S1 therefore contains 16 down-sampled images of size 56 × 56, S2 contains 8 images of size 28 × 28, S3 contains 16 images of size 14 × 14, and S4 contains 32 images of size 7 × 7. The down-sampling step makes full use of the local correlation of the image: by reducing the spatial resolution it reduces the amount of data to process while retaining useful information.
In a fully connected layer, each neuron connects to every neuron of the preceding layer. Of the two fully connected layers, the first consists of 256 neurons activated with the ReLU function, and the second consists of 4096 neurons activated with the Leaky ReLU function.
In the output layer, each neuron connects to every neuron of the preceding layer. The output layer consists of 891 neurons activated with the ReLU function.
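The feature-map sizes quoted above can be checked with the standard output-size arithmetic for stride-s "same"-padded convolution (ceil(n/s)) and non-overlapping max pooling (n // k); the sketch below only reproduces this spatial-size bookkeeping, not the network itself.

```python
# Sanity check of the spatial sizes along the described pipeline.
import math

def conv(size, stride):
    # zero-padded ("same") convolution: output = ceil(input / stride)
    return math.ceil(size / stride)

def pool(size, window):
    # non-overlapping max pooling: window == stride
    return size // window

s = 448            # input image
s = conv(s, 2)     # C1: 7x7, stride 2   -> 224
s = pool(s, 4)     # S1: 4x4, stride 4   -> 56
s = conv(s, 1)     # C2..C4: stride 1    -> 56
s = pool(s, 2)     # S2: 2x2, stride 2   -> 28
s = conv(s, 1)     # C5..C7: stride 1    -> 28
s = pool(s, 2)     # S3: 2x2, stride 2   -> 14
s = conv(s, 1)     # C8: stride 1        -> 14
s = pool(s, 2)     # S4: 2x2, stride 2   -> 7
s = conv(s, 1)     # C9: stride 1        -> 7
print(s)           # -> 7, matching the 7x7 maps fed to the fully connected layers
```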
Step 2) an image is acquired and processed, fed to the vehicle detection convolutional neural network, and the vehicle position is obtained from the output. This specifically includes:
Step 2-1) a region of interest for vehicle detection is delimited in the image; the region of interest is a rectangle determined by four parameters (x, y, w, h), where x and y are the coordinates of the top-left corner, w is the rectangle width, and h is the rectangle height. Subsequent vehicle detection is performed only on the image within the region of interest.
The convolutional neural network constructed by the present invention takes a fixed-size input of 448*448. Current traffic surveillance video commonly uses resolutions such as 1280*720 and 1920*1080, so frame sizes are all larger than 448*448.
Step 2-2) the image in the region of interest is normalized to size 448*448.
Specifically: for each point (i, j) of the original image, the value at the corresponding coordinates (distI, distJ) after interpolation is F(i+u, j+v), where v is the row offset, u is the column offset, and R(·) denotes the convolution interpolation kernel.
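The expression of the convolution interpolation kernel R(·) is not reproduced in this text; a common choice for such convolution interpolation is the Keys bicubic kernel (a = -0.5), sketched below as an assumption together with one-dimensional interpolation of a sample row.

```python
# Keys cubic convolution kernel (a = -0.5) and 1-D interpolation over four
# neighbors. The kernel choice is an assumption; the patent does not fix it.

def R(x, a=-0.5):
    """Cubic convolution interpolation kernel."""
    x = abs(x)
    if x <= 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def interp_row(row, pos):
    """Interpolate row at fractional position pos = i + u, using neighbors i-1..i+2."""
    i = int(pos)
    u = pos - i
    total = 0.0
    for m in range(-1, 3):
        idx = min(max(i + m, 0), len(row) - 1)  # clamp at the image border
        total += row[idx] * R(m - u)
    return total
```

Two-dimensional resizing applies the same weighting along rows and columns; at integer positions the kernel reproduces the original samples exactly.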
Step 2-3) for detection, the 448*448 image obtained in the previous step is first divided into S*S grid cells. The vehicle detection convolutional neural network constructed in step 1 classifies the image content in each cell. Each cell outputs B rectangular boxes, each box comprising 5 parameters: (x, y), the bounding-box center relative to the cell, (w, h), the width and height of the bounding box, and P_o, the confidence probability that the box contains an object. In addition, each cell outputs the conditional probabilities P_c of C classes, i.e. the probability that the target in the cell belongs to a given class; this probability reflects how well the bounding box fits the target. During detection, each cell expresses the confidence of each class as the product P_o·P_c of the class-conditional probability and the object confidence. The final number of output parameters N is:
N = S*S*(B*5+C)
In the present invention S=9, B=2, and C=1 are set, so N=891.
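The output-layer size follows directly from the formula above with the chosen values:

```python
# N = S*S*(B*5 + C) with the values set in the present invention.
S, B, C = 9, 2, 1
N = S * S * (B * 5 + C)   # 81 cells, 11 numbers per cell
print(N)                  # -> 891, the number of output-layer neurons
```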
Step 2-4) a threshold on the cell's class confidence probability is set as needed; in the present invention it is 0.6. When the class confidence exceeds the threshold, the bounding box is considered a real detected target; rectangular boxes below the threshold are discarded directly.
Step 2-5) finally, the non-maximum suppression (Non-Maximum Suppression, NMS) algorithm removes overlapping boxes suspected of covering the same target, yielding the final target detection boxes.
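The suppression step above can be sketched as a greedy NMS pass; the corner box format (x1, y1, x2, y2, score) and the 0.5 IoU threshold are assumptions, since the text does not fix them.

```python
# Greedy non-maximum suppression over (x1, y1, x2, y2, score) boxes.

def iou(a, b):
    """Intersection-over-union of two corner-format boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, iou_thresh=0.5):
    """Keep the highest-scoring box, drop boxes overlapping it, repeat."""
    boxes = sorted(boxes, key=lambda b: b[4], reverse=True)
    kept = []
    for box in boxes:
        if all(iou(box, k) < iou_thresh for k in kept):
            kept.append(box)
    return kept

# Two heavily overlapping boxes collapse into one; the distant box survives.
dets = [(0, 0, 10, 10, 0.9), (1, 1, 11, 11, 0.8), (20, 20, 30, 30, 0.7)]
final = nms(dets)
```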
Finally, it should be noted that the above embodiments merely illustrate, and do not limit, the technical solution of the present invention. Although the invention has been described in detail with reference to the embodiments, those skilled in the art should understand that modifications or equivalent substitutions of the technical solution of the present invention that do not depart from its spirit and scope shall all be covered by the scope of the claims of the present invention.
Claims (6)
1. A rapid vehicle detection method, the method comprising:
step 1) constructing and training a vehicle detection convolutional neural network;
step 2) acquiring and processing an image, feeding it to the vehicle detection convolutional neural network trained in step 1), and obtaining the vehicle detection result from the output.
2. The rapid vehicle detection method according to claim 1, characterized in that step 1) specifically includes:
Step 1-1) the vehicle detection convolutional neural network comprises, connected in sequence: an input layer, a first convolutional layer, a first down-sampling layer, a second convolutional layer, a third convolutional layer, a fourth convolutional layer, a second down-sampling layer, a fifth convolutional layer, a sixth convolutional layer, a seventh convolutional layer, a third down-sampling layer, an eighth convolutional layer, a fourth down-sampling layer, a ninth convolutional layer, a first fully connected layer, a second fully connected layer, and an output layer;
Step 1-2) each training sample in the training set is fed into the vehicle detection convolutional neural network, and the network parameters are trained iteratively using the classification results and the training labels.
3. The rapid vehicle detection method according to claim 2, characterized in that in the vehicle detection convolutional neural network of step 1-1):
The input layer receives a color image of size 448*448;
The first convolutional layer has 16 convolution kernels, each of size 7 × 7, with stride 2 and automatic zero padding at the boundary; its 7 × 7 kernels scan all pixels of the input image, producing 16 feature maps of size 224 × 224;
The first down-sampling layer applies max pooling to the previous layer, with window size 4 × 4 and stride 4; it contains 16 down-sampled images, each of size 56 × 56;
The second convolutional layer contains 4 convolution kernels of size 1 × 1 with stride 1 and automatic zero padding when scanning, producing feature maps of size 56 × 56;
The third convolutional layer contains 4 convolution kernels of size 1 × 1 with stride 1 and automatic zero padding when scanning, producing feature maps of size 56 × 56;
The fourth convolutional layer contains 8 convolution kernels of size 3 × 3 with stride 1 and automatic zero padding when scanning, producing feature maps of size 56 × 56;
The second down-sampling layer applies max pooling to the previous layer, with window size 2 × 2 and stride 2; it contains 8 down-sampled images, each of size 28 × 28;
The fifth convolutional layer contains 8 convolution kernels of size 1 × 1 with stride 1 and automatic zero padding when scanning, producing feature maps of size 28 × 28;
The sixth convolutional layer contains 8 convolution kernels of size 1 × 1 with stride 1 and automatic zero padding when scanning, producing feature maps of size 28 × 28;
The seventh convolutional layer contains 16 convolution kernels of size 3 × 3 with stride 1 and automatic zero padding when scanning, producing feature maps of size 28 × 28;
The third down-sampling layer applies max pooling to the previous layer, with window size 2 × 2 and stride 2; it contains 16 down-sampled images, each of size 14 × 14;
The eighth convolutional layer contains 32 convolution kernels of size 3 × 3 with stride 1 and automatic zero padding when scanning, producing feature maps of size 14 × 14;
The fourth down-sampling layer applies max pooling to the previous layer, with window size 2 × 2 and stride 2; it contains 32 down-sampled images, each of size 7 × 7;
The ninth convolutional layer contains 64 convolution kernels of size 3 × 3 with stride 1 and automatic zero padding when scanning, producing feature maps of size 7 × 7;
The first fully connected layer consists of 256 neurons, activated with the ReLU function;
The second fully connected layer consists of 4096 neurons, activated with the Leaky ReLU function;
The output layer consists of 891 neurons, activated with the ReLU function.
4. The rapid vehicle detection method according to claim 3, characterized in that the first through ninth convolutional layers all activate their neurons with the Leaky ReLU function.
5. The rapid vehicle detection method according to any one of claims 2-4, characterized in that step 2) specifically includes:
Step 2-1) a region of interest for vehicle detection is delimited in the acquired image; the region of interest is a rectangle determined by four parameters (x, y, w, h), where x and y are the coordinates of the top-left corner, w is the rectangle width, and h is the rectangle height;
Step 2-2) the image in the region of interest is normalized to size 448*448;
Step 2-3) the 448*448 image obtained in step 2-2) is divided into S*S grid cells; the vehicle detection convolutional neural network constructed in step 1) classifies the image content in each cell; each cell outputs B rectangular boxes, each box comprising 5 parameters: (x, y), the bounding-box center relative to the cell, (w, h), the width and height of the bounding box, and P_o, the confidence probability that the box contains a vehicle; each cell also outputs C class-conditional probabilities P_c; here S=9, B=2, and C=1;
Step 2-3) for each cell, the product P_o·P_c of the class-conditional probability P_c and the vehicle confidence probability P_o is taken as the confidence probability of the corresponding class;
Step 2-4) whether the class confidence probability exceeds the threshold, set to 0.6, is judged; if so, the rectangular box is considered a detected vehicle and the method proceeds to step 2-5); otherwise no vehicle is detected;
Step 2-5) overlapping boxes suspected of covering the same target are removed from the rectangular boxes with the non-maximum suppression algorithm, yielding the final vehicle detection boxes.
6. The rapid vehicle detection method according to claim 5, characterized in that the normalization of step 2-2) comprises:
For each point (i, j) of the image in the region of interest, the value at the corresponding coordinates (distI, distJ) after interpolation is F(i+u, j+v), where v is the row offset, u is the column offset, and R(·) denotes the convolution interpolation kernel.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810883971.3A CN109284752A (en) | 2018-08-06 | 2018-08-06 | A kind of rapid detection method of vehicle |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109284752A (en) | 2019-01-29 |
Family
ID=65182898
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810883971.3A Pending CN109284752A (en) | 2018-08-06 | 2018-08-06 | A kind of rapid detection method of vehicle |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109284752A (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105205114A (en) * | 2015-09-06 | 2015-12-30 | 重庆邮电大学 | Wi-Fi (wireless fidelity) positioning fingerprint database construction method based on image processing |
CN106920214A (en) * | 2016-07-01 | 2017-07-04 | 北京航空航天大学 | Spatial target images super resolution ratio reconstruction method |
CN106529468A (en) * | 2016-11-07 | 2017-03-22 | 重庆工商大学 | Finger vein identification method and system based on convolutional neural network |
CN107609491A (en) * | 2017-08-23 | 2018-01-19 | 中国科学院声学研究所 | A kind of vehicle peccancy parking detection method based on convolutional neural networks |
CN107705577A (en) * | 2017-10-27 | 2018-02-16 | 中国科学院声学研究所 | A kind of real-time detection method and system based on lane line demarcation vehicle peccancy lane change |
Non-Patent Citations (1)
Title |
---|
SI-QI ZHAO et al.: "A Fast Vehicle Detection Method for Road Monitoring", 2017 2nd International Conference on Computer, Mechatronics and Electronic Engineering (CMEE 2017) * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111598030A (en) * | 2020-05-21 | 2020-08-28 | 山东大学 | Method and system for detecting and segmenting vehicle in aerial image |
CN111598030B (en) * | 2020-05-21 | 2023-06-16 | 山东大学 | Method and system for detecting and segmenting vehicle in aerial image |
CN112380962A (en) * | 2020-11-11 | 2021-02-19 | 成都摘果子科技有限公司 | Animal image identification method and system based on deep learning |
CN112587149A (en) * | 2020-11-11 | 2021-04-02 | 上海数创医疗科技有限公司 | Atrial premature beat target detection device based on convolutional neural network |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110287960A | | Method for detecting and recognizing curved text in natural scene images |
CN108304873A | | Target detection method and system based on high-resolution optical satellite remote-sensing imagery |
US20190138849A1 | | Rotation variant object detection in deep learning |
CN108764063A | | Feature-pyramid-based system and method for identifying time-critical targets in remote-sensing images |
CN109902806A | | Method for determining object bounding boxes in noisy images based on convolutional neural networks |
CN110782420A | | Small-target feature representation enhancement method based on deep learning |
CN105574550A | | Vehicle identification method and device |
CN104299006A | | License plate recognition method based on deep neural networks |
CN107016357A | | Video pedestrian detection method based on temporal convolutional neural networks |
CN104657717B | | Pedestrian detection method based on hierarchical kernel sparse representation |
CN108305260B | | Method, device and equipment for detecting corner points in an image |
CN104809443A | | License plate detection method and system based on convolutional neural networks |
CN111160249A | | Multi-class target detection method for optical remote-sensing images based on cross-scale feature fusion |
CN109409384A | | Image recognition method, device, medium and equipment based on fine-grained images |
CN105654066A | | Vehicle identification method and device |
CN104504395A | | Method and system for classifying pedestrians and vehicles based on neural networks |
CN111046880A | | Infrared target image segmentation method and system, electronic device and storage medium |
CN108960404B | | Image-based crowd counting method and device |
CN110929593A | | Real-time pedestrian saliency detection method based on detail discrimination |
CN109543632A | | Deep-network pedestrian detection method guided by shallow-layer feature fusion |
CN109978882A | | Medical imaging object detection method based on multi-modal fusion |
CN106408030A | | SAR image classification method based on mid-level semantic attributes and convolutional neural networks |
CN113160062B | | Infrared image target detection method, device, equipment and storage medium |
CN108647695A | | Soft-image saliency detection method based on covariance convolutional neural networks |
CN109002752A | | Rapid pedestrian detection method for complex common scenes based on deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20190129 |