CN107220603A - Vehicle checking method and device based on deep learning - Google Patents
- Publication number: CN107220603A
- Application number: CN201710353521.9A
- Authority
- CN
- China
- Prior art keywords
- vehicle
- sample image
- image
- vehicle sample
- detected
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/584—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/08—Detecting or categorising vehicles
Abstract
The invention discloses a vehicle detection method and device based on deep learning. The method includes: obtaining vehicle sample images and pre-processing the vehicle sample images; building a deep convolutional neural network model from the pre-processed vehicle sample images; and detecting a vehicle image to be detected using the deep convolutional neural network model and outputting the detection result. The technical scheme of this embodiment builds a deep convolutional neural network model from pre-processed vehicle sample images, then uses the model to detect the vehicle image to be detected and outputs the detection result. The method avoids excessive repeated computation, thereby improving detection speed and achieving a better vehicle recognition effect.
Description
Technical field
The present invention relates to the technical field of computer vision, and more particularly to a vehicle detection method and device based on deep learning.
Background technology
Computer vision is an important interdisciplinary field spanning artificial intelligence and image processing. Early computer vision tasks were typically solved in two steps: hand-designing features, then building a shallow learning system. With the development of artificial intelligence, deep learning was formally proposed in 2006. Deep learning originated from multilayer artificial neural networks and has since been successfully applied to fields such as computer vision, natural language processing and intelligent search. Existing deep learning networks mainly include convolutional neural networks, deep belief networks and stacked auto-encoders. Because of the tight coupling between its inter-layer connections and spatial information, the convolutional neural network is widely used in image processing.
Vehicle detection in image processing is typically realized by detecting moving vehicles in real time with a background-modeling algorithm: the vehicle motion region in the image is determined, and the final vehicle image is then obtained using shadow, headlight and/or window information, completing the detection process. In one prior-art vehicle detection method, a detection image is first input, its Haar-like features are extracted, a cascade classifier then performs detection and recognition on those Haar-like features, and the vehicle position in the image is finally determined from the recognized vehicle features. This method lacks adaptability to different monitoring scenes and can only detect vehicles from a single viewing angle; if the camera's shooting angle changes, its detection performance drops substantially.
Summary of the invention
In view of this, the purpose of the embodiments of the present invention is to provide a deep-learning-based vehicle detection method and device that learn the essential characteristics of vehicles from a large number of vehicle and non-vehicle samples taken in different scenes, thereby achieving a better vehicle recognition effect.
To achieve these goals, an embodiment of the invention provides a vehicle detection method based on deep learning, including:
obtaining vehicle sample images and pre-processing the vehicle sample images, wherein the vehicle sample images include vehicle sample images containing a vehicle image and vehicle sample images containing only a background image;
building a deep convolutional neural network model from the pre-processed vehicle sample images;
detecting a vehicle image to be detected using the deep convolutional neural network model, and outputting the detection result.
Preferably, building the deep convolutional neural network model from the pre-processed vehicle sample images includes:
dividing the pre-processed vehicle sample images into grids with a preset side length through the network structure;
calculating the predicted bounding boxes and corresponding confidences of each grid cell;
filtering the bounding boxes according to a preset filtering rule;
performing non-maximum suppression on the filtered bounding boxes.
Preferably, building the deep convolutional neural network model from the pre-processed vehicle sample images includes:
optimizing the deep convolutional neural network model.
Preferably, optimizing the deep convolutional neural network model includes:
evaluating the performance of the deep convolutional neural network model with a preset algorithm.
Preferably, when obtaining vehicle sample images containing vehicle image information, the method also includes:
obtaining vehicle sample images of preset categories;
transforming the vehicle sample images with preset image-processing methods to form new vehicle sample images.
Preferably, the preset categories include at least one of the following: front/rear vehicle sample images, side vehicle sample images and oblique vehicle sample images; the preset image-processing methods include at least one of the following: scale transformation, translation, rotation and horizontal flipping.
Preferably, pre-processing the vehicle sample images includes:
performing a Gabor-filter convolution on each vehicle sample image according to a preset sample-image size, and then normalizing the result.
Preferably, after detecting the vehicle image to be detected using the deep convolutional neural network model, the method also includes:
measuring the accuracy of the deep convolutional neural network model.
Preferably, before detecting the vehicle image to be detected using the deep convolutional neural network model, the method also includes:
obtaining the vehicle image to be detected.
An embodiment of the invention also provides a vehicle detection device based on deep learning, including:
a first acquisition module, configured to obtain vehicle sample images and pre-process the vehicle sample images;
a building module, configured to build a deep convolutional neural network model from the pre-processed vehicle sample images;
a detection module, configured to detect the vehicle image to be detected using the deep convolutional neural network model and output the detection result.
Compared with the prior art, the embodiments of the invention have the following advantages: the technical scheme of this embodiment builds a deep convolutional neural network model from pre-processed vehicle sample images and then uses the model to detect the vehicle image to be detected and output the detection result. The method avoids excessive repeated computation, thereby improving detection speed and achieving a better vehicle recognition effect.
Brief description of the drawings
Fig. 1 is a flow chart of embodiment one of the vehicle detection method based on deep learning of the present invention;
Fig. 2 is a flow chart of embodiment two of the vehicle detection method based on deep learning of the present invention;
Fig. 3 is a schematic diagram of embodiment one of the vehicle detection device based on deep learning of the present invention.
Detailed description of the embodiments
The embodiments of the present invention are described in further detail below with reference to the accompanying drawings and examples. The following embodiments illustrate the invention but do not limit its scope.
Fig. 1 is a flow chart of embodiment one of the vehicle detection method based on deep learning of the present invention. As shown in Fig. 1, the method of this embodiment may include the following steps:
S101: obtain vehicle sample images, and pre-process the vehicle sample images.
The vehicle sample images include vehicle sample images containing a vehicle image and vehicle sample images containing only a background image, i.e. no vehicle image. Including background-only samples improves the accuracy of vehicle-feature extraction.
Because the vehicle sample images obtained in practice are inconsistent in size and angle, this embodiment first pre-processes the vehicle sample images (for example, normalizes them) to make them easier to use in later steps.
S102: build a deep convolutional neural network model from the pre-processed vehicle sample images.
This embodiment builds a deep convolutional neural network model so that the characteristic information of vehicles can be recognized and vehicles can be detected against complicated backgrounds.
The vehicle features extracted in this embodiment include single features such as HOG, Gabor and STRIP features, or composite features combining them (e.g. HOG + Gabor features, HOG + Haar-like features).
The modeling method of this embodiment avoids the problems of prior-art background-modeling algorithms, such as the Gaussian mixture model, the ViBe algorithm and direct background-image setting, which are easily affected by external conditions such as illumination and weather, reducing accuracy, and which must run continuously, consuming large amounts of data and energy.
S103: detect the vehicle image to be detected using the deep convolutional neural network model, and output the detection result.
Specifically, when the deep convolutional neural network model of this embodiment is built, the convolutional neural network learns the essential features of vehicles from a large number of scene images, both vehicle sample images containing a vehicle image and those containing none. These learned features are more separable than hand-designed features, improving both the efficiency and the accuracy of vehicle detection.
The technical scheme of this embodiment builds a deep convolutional neural network model from pre-processed vehicle sample images, then uses the model to detect the vehicle image to be detected and outputs the detection result. The method avoids excessive repeated computation, thereby improving detection speed and achieving a better vehicle recognition effect.
Fig. 2 is a flow chart of embodiment two of the vehicle detection method based on deep learning of the present invention. Building on embodiment one above, this embodiment introduces the technical scheme of the invention in more detail. As shown in Fig. 2, the method of this embodiment may include the following steps:
S201: obtain vehicle sample images, and pre-process the vehicle sample images.
Specifically, when obtaining vehicle sample images containing a vehicle image, step S201 includes: (a) obtaining vehicle sample images of preset categories; (b) transforming the vehicle sample images with preset image-processing methods to form new vehicle sample images.
The preset categories include at least one of the following: front/rear vehicle sample images, side vehicle sample images and oblique vehicle sample images; the preset image-processing methods include at least one of the following: scale transformation, translation, rotation and horizontal flipping.
When building the deep convolutional neural network model, as many vehicle sample images as possible should be obtained to train the network fully. For example, 3600 vehicle sample images covering most viewing angles of vehicles can be obtained from the network or from actually captured video. In practice, because a convolutional neural network is restricted to a fixed input size and cannot handle multiple viewing angles simultaneously, the vehicle sample data set is divided into three categories: front/rear vehicle sample images, side vehicle sample images and oblique vehicle sample images. The 1200 side vehicle sample images are normalized to 78 pixels wide by 36 pixels high, with the vehicle image centred and surrounded by a 6-pixel background border; the 1200 oblique vehicle sample images are normalized to 48 pixels wide by 36 pixels high, with a 5-pixel background border; and the 1200 front/rear vehicle sample images are normalized to 28 pixels wide by 24 pixels high, with a 4-pixel background border.
When obtaining vehicle sample images containing only a background image, patches can be cropped at random from a certain number of images, e.g. 100, that contain no vehicle; the number of crops equals the number of vehicle sample images containing a vehicle image. The background images can be obtained from the network or any other source, and their content is not specifically limited.
Meanwhile, to strengthen the robustness of the detection system, the vehicle sample images containing a vehicle image can be subjected to small random scale transformations (e.g. by a factor in [0.9, 1.1]), translations (e.g. by [-2, +2] pixels) and rotations (e.g. by [-15, +15] degrees); the side and oblique vehicle samples can also be flipped horizontally. This yields 6600 side vehicle sample images, 7200 oblique vehicle sample images and 3600 front/rear vehicle sample images.
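As a sketch, the random transformation parameters described above can be sampled as follows. The ranges come from this embodiment; the function name `sample_augmentation` and the view labels are illustrative, not from the patent:

```python
import random

def sample_augmentation(view, rng=random.Random(0)):
    """Sample one set of small random augmentation parameters.

    Ranges follow the text: scale in [0.9, 1.1], translation in
    [-2, +2] pixels, rotation in [-15, +15] degrees.  Only side and
    oblique views may additionally be flipped horizontally.
    Note: the shared default `rng` keeps the sketch deterministic.
    """
    params = {
        "scale": rng.uniform(0.9, 1.1),           # scale factor
        "dx": rng.randint(-2, 2),                 # translation, pixels
        "dy": rng.randint(-2, 2),
        "rotation_deg": rng.uniform(-15.0, 15.0), # rotation, degrees
    }
    params["flip"] = view in ("side", "oblique") and rng.random() < 0.5
    return params
```

Each sampled parameter set defines one new vehicle sample image derived from an original.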
S202: according to the preset vehicle sample image size, convolve each vehicle sample image with a Gabor filter, then normalize the result.
Specifically, the Gabor filter is defined as follows:

G(x, y) = exp( -(u²/δᵤ² + v²/δᵥ²) / 2 ) · cos(2πωu)

u = x cos θ + y sin θ
v = -x sin θ + y cos θ

where θ is the orientation of the filter, δᵤ and δᵥ are the standard deviations of the Gaussian envelope along the u and v axes respectively, the u axis is parallel to θ, the v axis is perpendicular to θ, and ω is the frequency of the sinusoid.
It can be seen that the Gabor filter is frequency-selective.
In this embodiment, the sample pictures can be normalized using a moment-based image normalization method.
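A minimal sketch of the Gabor kernel used in step S202 follows. The function name and the pure-Python sampling are illustrative; the kernel values follow the formula above (a Gaussian envelope along the rotated u, v axes, modulated by a cosine of frequency ω):

```python
import math

def gabor_kernel(size, theta, omega, delta_u, delta_v):
    """Sample the Gabor filter on a size x size grid centred at the origin.

    theta   = filter orientation
    omega   = sinusoid frequency
    delta_u = Gaussian std. dev. along the orientation (u axis)
    delta_v = Gaussian std. dev. across the orientation (v axis)
    Returns a nested list; convolving an image with it gives the
    pre-processing step described in the text.
    """
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            u = x * math.cos(theta) + y * math.sin(theta)
            v = -x * math.sin(theta) + y * math.cos(theta)
            envelope = math.exp(-0.5 * ((u / delta_u) ** 2 + (v / delta_v) ** 2))
            row.append(envelope * math.cos(2 * math.pi * omega * u))
        kernel.append(row)
    return kernel
```

The kernel peaks at the centre (u = v = 0) and is symmetric along the orientation axis, as expected from the even cosine carrier.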
S203: divide each pre-processed vehicle sample image into a grid with a preset side length through the network structure.
For example, the input pre-processed vehicle sample image is divided into an S × S grid, where S is a positive integer.
S204: calculate the predicted bounding boxes and corresponding confidences of each grid cell.
Specifically, for each grid cell, the predicted bounding boxes and the confidence of each bounding box are calculated.
For example, the vehicle sample image is divided into an S × S grid. If the centre of a vehicle to be detected falls within a grid cell, that grid cell is responsible for detecting that vehicle. For each grid cell, B predicted bounding boxes and their confidences are calculated; the confidence scores how likely the bounding box is to contain the vehicle to be detected, while also giving the model's prediction quality for that box. The confidence is calculated as follows:

confidence = Pr(Obj) × IOU(pred, truth)

where Pr(Obj) = 1 if a vehicle to be detected falls in the grid cell and Pr(Obj) = 0 otherwise, and IOU(pred, truth) is the intersection-over-union of the predicted bounding box and the ground-truth bounding box. If there is no vehicle to be detected in a grid cell, its confidence is 0.
Each bounding box comprises 5 predicted values: x, y, w, h and the confidence. The (x, y) coordinates represent the centre of the box relative to the grid cell, w is the width of the bounding box, h is its height, and the confidence represents the agreement between the predicted box and the ground-truth box. Meanwhile, each grid cell predicts conditional class probabilities C, representing the likelihood that the grid cell contains a given target class; each grid cell predicts only one set of class probabilities regardless of the number of boxes, expressed as C = Pr(Classᵢ | Obj).
For example, with an input size of 448 × 448, S = 7, B = 2 and C = 20 (i.e. the data set contains 20 classes), the final prediction is an S × S × (B × 5 + C) = 7 × 7 × 30 tensor.
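The output size follows directly from the grid layout and can be checked with a one-line helper (the function name is illustrative):

```python
def output_tensor_shape(S, B, C):
    """S*S grid cells, each predicting B boxes of 5 values plus C classes."""
    return (S, S, B * 5 + C)

shape = output_tensor_shape(7, 2, 20)
# 7 x 7 x 30, i.e. 1470 predicted values in total
total_values = shape[0] * shape[1] * shape[2]
```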
When building the deep convolutional neural network model, the class information predicted by each grid cell is multiplied by the confidence predicted for each bounding box, giving the class-specific confidence of each bounding box:

Pr(Classᵢ | Obj) × Pr(Obj) × IOU(pred, truth) = Pr(Classᵢ) × IOU(pred, truth)

The first term on the left is the class information predicted by each grid cell, and the second and third terms are the confidence predicted for each bounding box. The product is the probability that the predicted box belongs to a certain class, weighted by how well the box fits the object.
S205: filter the bounding boxes according to a preset filtering rule.
Specifically, predicted bounding boxes whose class-specific confidence falls below a preset threshold are filtered out, and only boxes scoring above the threshold are retained, so that non-maximum suppression can then be applied to the remaining predicted boxes.
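A sketch of this filtering step, assuming boxes are given as (coords, class probabilities, box confidence) tuples; the threshold value and helper names are not fixed by the patent:

```python
def class_confidences(class_probs, box_confidence):
    """Class-specific confidence: Pr(Class_i | Obj) * Pr(Obj) * IOU,
    where box_confidence already equals Pr(Obj) * IOU."""
    return [p * box_confidence for p in class_probs]

def filter_boxes(boxes, threshold):
    """Keep boxes whose best class-specific confidence clears the threshold."""
    kept = []
    for coords, class_probs, conf in boxes:
        scores = class_confidences(class_probs, conf)
        if max(scores) >= threshold:
            kept.append((coords, scores))
    return kept
```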
The network structure contains 20 convolutional layers and 2 fully connected layers. The convolutional layers extract the features of the vehicle sample images, and the fully connected layers predict the output probabilities. The deep convolutional neural network model of this embodiment can use many convolutional layers because one convolutional layer learns only local features, and the higher the layer, the more global the learned features become; the fully connected layers mainly convert the high-dimensional features into a low-dimensional output. In essence, a single convolutional neural network directly regresses each predicted bounding box and predicts the probability of the corresponding class.
S206: perform non-maximum suppression on the filtered bounding boxes.
Specifically, non-maximum suppression mainly includes the following steps:
(c) Select the highest-scoring result, then partition the remaining detected bounding boxes by computing their IOU with it. For example, suppose the N detection results are represented as B = {bᵢ = (xᵢˡᵗ, yᵢˡᵗ, xᵢʳᵇ, yᵢʳᵇ), i ∈ [1, 2, ..., N]}. Each time, select the bounding box with the current maximum score, denoted bₖ, compute its IOU with each remaining box and classify: when IOU(bᵢ, bₖ) ≥ 0.5, the boxes are taken to predict the same object and form a first-class bounding box set Bₖ₁ = {bᵢ | IOU(bᵢ, bₖ) ≥ 0.5, bᵢ ∈ B}, whose boxes (with bₖ as one element) no longer participate in further partitioning; when 0.3 < IOU(bᵢ, bₖ) < 0.5, the boxes are taken to share a sub-region and form a second-class bounding box set Bₖ₂ = {bᵢ | 0.3 < IOU(bᵢ, bₖ) < 0.5, bᵢ ∈ B}, whose boxes do participate in further partitioning. The steps are repeated until all detection results are divided into k groups, giving first- and second-class bounding box sets for each physical object, denoted {(Bₖ₁, Bₖ₂), k ∈ [1, 2, ..., k]}.
(d) For each first-class bounding box set Bₖ₁, compute the mean bounding box, compute and record the quaternary error array between each box and the mean box, then compute the true difference between the object's ground-truth bounding box and the mean bounding box.
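The grouping in step (c) can be sketched as follows. The IOU thresholds 0.5 and 0.3 come from the text; the data layout and function name are assumptions:

```python
def group_detections(boxes, scores):
    """Partition detections as in step (c).

    boxes:  list of (x1, y1, x2, y2); scores: matching confidences.
    Boxes with IOU >= 0.5 against the current top-scoring box join its
    first-class set and are removed from further partitioning; boxes
    with 0.3 < IOU < 0.5 join the second-class set but stay candidates.
    Returns a list of (first_class_indices, second_class_indices).
    """
    def iou(a, b):
        ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
        iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = ix * iy
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union > 0 else 0.0

    remaining = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    groups = []
    while remaining:
        k = remaining[0]  # current highest-scoring box
        first = [i for i in remaining if iou(boxes[i], boxes[k]) >= 0.5]
        second = [i for i in remaining if 0.3 < iou(boxes[i], boxes[k]) < 0.5]
        groups.append((first, second))
        remaining = [i for i in remaining if i not in first]
    return groups
```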
S207: optimize the deep convolutional neural network model.
Step S207 includes: (e) evaluating the performance of the deep convolutional neural network model with a preset algorithm.
After steps S203 to S206 are completed, the vehicle positions and classes in the vehicle sample image have been determined; the performance of the deep convolutional neural network model is then evaluated with a preset algorithm. For example, a loss function can be used to judge the performance of the deep convolutional neural network model, yielding the optimized vehicle detection system.
The loss function comprises the coordinate prediction, the confidence prediction for bounding boxes containing a target to be detected, the confidence prediction for bounding boxes without a target to be detected, and the class prediction. Its formula is as follows:

L = λ_coord Σᵢ₌₀^{S²} Σⱼ₌₀^{B} 𝟙ᵢⱼᵒᵇʲ [ (xᵢ − x̂ᵢ)² + (yᵢ − ŷᵢ)² ]
  + λ_coord Σᵢ₌₀^{S²} Σⱼ₌₀^{B} 𝟙ᵢⱼᵒᵇʲ [ (√wᵢ − √ŵᵢ)² + (√hᵢ − √ĥᵢ)² ]
  + Σᵢ₌₀^{S²} Σⱼ₌₀^{B} 𝟙ᵢⱼᵒᵇʲ (Cᵢ − Ĉᵢ)²
  + λ_noobj Σᵢ₌₀^{S²} Σⱼ₌₀^{B} 𝟙ᵢⱼⁿᵒᵒᵇʲ (Cᵢ − Ĉᵢ)²
  + Σᵢ₌₀^{S²} 𝟙ᵢᵒᵇʲ Σ_{c ∈ classes} (pᵢ(c) − p̂ᵢ(c))²

where λ_coord is the loss weight given to the coordinate prediction, λ_noobj is the loss weight given to the confidence of grid cells containing no target to be detected, 𝟙ᵢⱼᵒᵇʲ judges whether the j-th bounding box in the i-th grid cell is responsible for the target to be detected, and 𝟙ᵢᵒᵇʲ judges whether the centre of the target to be detected falls in the i-th grid cell.
In this embodiment, λ_coord = 5 and λ_noobj = 0.5 can be set.
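The per-box terms of this loss can be sketched as follows for a single predicted box; the dict-based layout and function name are illustrative, and the full loss sums this over all S² × B boxes:

```python
import math

def cell_box_loss(pred, truth, responsible,
                  lambda_coord=5.0, lambda_noobj=0.5):
    """Loss contribution of one predicted box (a simplified sketch).

    pred / truth are dicts with keys x, y, w, h, c (confidence) and
    p (class probabilities); `responsible` plays the role of the
    indicator in the formula above.
    """
    if responsible:
        coord = lambda_coord * ((pred["x"] - truth["x"]) ** 2
                                + (pred["y"] - truth["y"]) ** 2
                                + (math.sqrt(pred["w"]) - math.sqrt(truth["w"])) ** 2
                                + (math.sqrt(pred["h"]) - math.sqrt(truth["h"])) ** 2)
        conf = (pred["c"] - truth["c"]) ** 2
        cls = sum((pc - tc) ** 2 for pc, tc in zip(pred["p"], truth["p"]))
        return coord + conf + cls
    # boxes without a target only contribute a down-weighted confidence term
    return lambda_noobj * (pred["c"] - truth["c"]) ** 2
```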
In one application scenario, to improve the accuracy of the output, the model is trained for 140 epochs in total when building the deep convolutional neural network model, with a decay of 0.0005. In the first epoch, the learning rate is slowly increased from 0.0001 to 0.001, because starting with a high learning rate can cause the model to diverge; the rate is held at 0.001 through epoch 75, then dropped to 0.0001 for the next 30 epochs, and set to 0.00001 for the last 35 epochs.
S208: obtain the vehicle image to be detected.
For example, the vehicle image to be detected can be obtained from a monitoring camera, a DVR or a local video.
S209: detect the vehicle image to be detected using the deep convolutional neural network model, and output the detection result.
Specifically, the optimized vehicle detection system obtained in step S207 can be used to perform vehicle detection on the vehicle image to be detected, mark the detection result in the image, and output the result, completing the vehicle detection.
S210: measure the accuracy of the deep convolutional neural network model.
To further refine the vehicle detection system, this embodiment can also measure the accuracy of the deep convolutional neural network model. For example, after detection is completed, the vehicle detection system is tested on the Caltech101 data set. Caltech101 contains only side-view vehicle test pictures, each with a single vehicle and a simple scene. With an average of 0.24 false alarms per picture, the detection system reached 93.1% detection accuracy on Caltech101, demonstrating that the detection method of the invention improves both the detection speed and the detection accuracy of the system.
The technical scheme of this embodiment optimizes the trained convolutional neural network: the sample pictures are first processed with a Gabor filter and then normalized, the frequency and orientation representation of the filter being close to that of the human visual system. The vehicle sample image is then fed directly into the network structure, which outputs the position and class at the output layer, avoiding the repeated computation caused by the many candidate boxes (proposals) in other deep learning detection methods. This improves the detection speed of the system and can meet practical engineering demands.
Fig. 3 is a schematic diagram of embodiment one of the vehicle detection device based on deep learning of the present invention. As shown in Fig. 3, the vehicle detection device of this embodiment may include a first acquisition module 31, a building module 32 and a detection module 33.
The first acquisition module 31 is configured to obtain vehicle sample images and pre-process the vehicle sample images; the building module 32 is configured to build a deep convolutional neural network model from the pre-processed vehicle sample images; and the detection module 33 is configured to detect the vehicle image to be detected using the deep convolutional neural network model and output the detection result.
The vehicle detection device based on deep learning of this embodiment detects vehicles with the above modules through the same mechanism as the vehicle detection method based on deep learning of the embodiment shown in Fig. 1; for details, refer to the description of that embodiment, which is not repeated here.
The above examples are only exemplary embodiments of the present invention and do not limit it; the protection scope of the invention is defined by the claims. Those skilled in the art can make various modifications or equivalent substitutions within the essence and protection scope of the invention, and such modifications or equivalent substitutions shall also be regarded as falling within the protection scope of the invention.
Claims (10)
1. A vehicle detection method based on deep learning, characterised by including:
obtaining vehicle sample images and pre-processing the vehicle sample images, wherein the vehicle sample images include vehicle sample images containing a vehicle image and vehicle sample images containing only a background image;
building a deep convolutional neural network model from the pre-processed vehicle sample images;
detecting a vehicle image to be detected using the deep convolutional neural network model, and outputting a detection result.
2. The method according to claim 1, characterised in that building the deep convolutional neural network model from the pre-processed vehicle sample images includes:
dividing the pre-processed vehicle sample images into grids with a preset side length through the network structure;
calculating the predicted bounding boxes and corresponding confidences of each grid cell;
filtering the bounding boxes according to a preset filtering rule;
performing non-maximum suppression on the filtered bounding boxes.
3. The method according to claim 1, characterised in that building the deep convolutional neural network model from the pre-processed vehicle sample images includes:
optimizing the deep convolutional neural network model.
4. The method according to claim 3, characterised in that optimizing the deep convolutional neural network model includes:
evaluating the performance of the deep convolutional neural network model with a preset algorithm.
5. The method according to claim 1, characterised in that when obtaining vehicle sample images containing vehicle image information, the method also includes:
obtaining vehicle sample images of preset categories;
transforming the vehicle sample images with preset image-processing methods to form new vehicle sample images.
6. The method according to claim 5, characterised in that the preset categories include at least one of the following: front/rear vehicle sample images, side vehicle sample images and oblique vehicle sample images; and the preset image-processing methods include at least one of the following: scale transformation, translation, rotation and horizontal flipping.
7. The method according to claim 1, wherein preprocessing the vehicle sample image comprises:
resizing the vehicle sample image to a preset vehicle sample image size, performing a convolution with a Gabor filter, and then normalizing the result.
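Claim 7's preprocessing (a Gabor-filter convolution followed by normalization) might look like the following sketch. The kernel parameters, kernel size, and the min-max normalization to [0, 1] are assumptions; the patent names the operations but specifies none of their parameters.

```python
import numpy as np

def gabor_kernel(ksize=7, sigma=2.0, theta=0.0, lam=4.0, gamma=0.5):
    """Real part of a Gabor kernel; all parameter values are illustrative."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # Rotate coordinates by theta, then modulate a Gaussian with a cosine wave.
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / lam)

def preprocess(image, ksize=7):
    """Convolve a grayscale image with a Gabor kernel, then normalize to [0, 1]."""
    k = gabor_kernel(ksize)
    half = ksize // 2
    padded = np.pad(image.astype(float), half)
    out = np.zeros(image.shape, dtype=float)
    h, w = image.shape
    for i in range(h):          # naive sliding-window convolution for clarity
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + ksize, j:j + ksize] * k)
    lo, hi = out.min(), out.max()
    return (out - lo) / (hi - lo) if hi > lo else out
```

In practice the convolution would be done by a library routine rather than a Python loop; the loop is kept here only to make the operation explicit.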
8. The method according to claim 1, wherein, after detecting the vehicle image to be detected using the deep convolutional neural network model, the method further comprises:
measuring the accuracy of the deep convolutional neural network model.
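Claim 8's accuracy check could, for example, score detections against ground-truth boxes by IoU matching. This recall-style metric and its 0.5 IoU threshold are illustrative assumptions; the patent does not disclose which evaluation algorithm is preset.

```python
def box_iou(a, b):
    """IoU of two boxes given as [x1, y1, x2, y2]."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def detection_accuracy(preds, truths, iou_thresh=0.5):
    """Fraction of ground-truth boxes matched by at least one prediction."""
    if not truths:
        return 1.0
    hits = sum(1 for t in truths
               if any(box_iou(p, t) >= iou_thresh for p in preds))
    return hits / len(truths)
```

A fuller evaluation would also count false positives (precision) and average over confidence thresholds, but the matching rule above is the common core of such metrics.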
9. The method according to claim 1, wherein, before detecting the vehicle image to be detected using the deep convolutional neural network model, the method further comprises:
acquiring the vehicle image to be detected.
10. A vehicle detection apparatus based on deep learning, comprising:
a first acquisition module, configured to acquire vehicle sample images and preprocess the vehicle sample images;
a construction module, configured to build a deep convolutional neural network model from the preprocessed vehicle sample images; and
a detection module, configured to detect the vehicle image to be detected using the deep convolutional neural network model and output a detection result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710353521.9A CN107220603A (en) | 2017-05-18 | 2017-05-18 | Vehicle checking method and device based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107220603A true CN107220603A (en) | 2017-09-29 |
Family
ID=59944220
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710353521.9A Pending CN107220603A (en) | 2017-05-18 | 2017-05-18 | Vehicle checking method and device based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107220603A (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105069472A (en) * | 2015-08-03 | 2015-11-18 | 电子科技大学 | Vehicle detection method based on adaptive convolutional neural networks |
CN106845430A (en) * | 2017-02-06 | 2017-06-13 | 东华大学 | Pedestrian detection and tracking based on accelerated region-based convolutional neural networks |
Non-Patent Citations (4)
Title |
---|
TONI SCHENK et al., "Digital Photogrammetry: Background, Fundamentals, Automatic Orientation Procedures", Wuhan University Press, 30 September 2009 * |
ZHANG Qi et al., "Research on target detection algorithms for TV guidance", Technology and Economic Guide * |
ZENG Xiangyang, "Intelligent Underwater Target Recognition", National Defense Industry Press, 31 March 2016 * |
WANG Lina et al., "Information Hiding Technology and Applications", Wuhan University Press, 31 May 2012 * |
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108154518A (en) * | 2017-12-11 | 2018-06-12 | 广州华多网络科技有限公司 | Image processing method and apparatus, storage medium and electronic device |
CN108154518B (en) * | 2017-12-11 | 2020-09-08 | 广州华多网络科技有限公司 | Image processing method and device, storage medium and electronic equipment |
CN108009509A (en) * | 2017-12-12 | 2018-05-08 | 河南工业大学 | Vehicle target detection method |
CN108510472A (en) * | 2018-03-08 | 2018-09-07 | 北京百度网讯科技有限公司 | Method and apparatus for handling image |
US11928866B2 (en) | 2018-03-12 | 2024-03-12 | Waymo Llc | Neural networks for object detection and characterization |
CN111886603A (en) * | 2018-03-12 | 2020-11-03 | 伟摩有限责任公司 | Neural network for target detection and characterization |
CN111886603B (en) * | 2018-03-12 | 2024-03-15 | 伟摩有限责任公司 | Neural network for target detection and characterization |
CN108509954A (en) * | 2018-04-23 | 2018-09-07 | 合肥湛达智能科技有限公司 | Dynamic multi-license-plate recognition method for real-time traffic scenes |
CN108764293A (en) * | 2018-04-28 | 2018-11-06 | 重庆交通大学 | Image-based vehicle detection method and system |
CN109029363A (en) * | 2018-06-04 | 2018-12-18 | 泉州装备制造研究所 | Target ranging method based on deep learning |
CN109145756A (en) * | 2018-07-24 | 2019-01-04 | 湖南万为智能机器人技术有限公司 | Object detection method based on machine vision and deep learning |
CN109766775A (en) * | 2018-12-18 | 2019-05-17 | 四川大学 | Vehicle detection system based on deep convolutional neural networks |
CN110399803A (en) * | 2019-07-01 | 2019-11-01 | 北京邮电大学 | Vehicle detection method and device |
CN110321853A (en) * | 2019-07-05 | 2019-10-11 | 杭州巨骐信息科技股份有限公司 | Distribution cable external force damage prevention system based on video intelligent detection |
CN110321853B (en) * | 2019-07-05 | 2021-05-11 | 杭州巨骐信息科技股份有限公司 | Distributed cable external-damage-prevention system based on video intelligent detection |
CN110533098A (en) * | 2019-08-28 | 2019-12-03 | 长安大学 | Method for identifying loading type of green traffic vehicle compartment based on convolutional neural network |
CN110533098B (en) * | 2019-08-28 | 2022-03-29 | 长安大学 | Method for identifying loading type of green traffic vehicle compartment based on convolutional neural network |
CN111144273A (en) * | 2019-12-24 | 2020-05-12 | 苏州奥易克斯汽车电子有限公司 | Non-motor vehicle detection method |
CN117351439A (en) * | 2023-12-06 | 2024-01-05 | 山东博安智能科技股份有限公司 | Dynamic monitoring management system for intelligent expressway overrun vehicle |
CN117351439B (en) * | 2023-12-06 | 2024-02-20 | 山东博安智能科技股份有限公司 | Dynamic monitoring management system for intelligent expressway overrun vehicle |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107220603A (en) | Vehicle checking method and device based on deep learning | |
CN109978893B (en) | Training method, device, equipment and storage medium of image semantic segmentation network | |
CN110348376B (en) | Pedestrian real-time detection method based on neural network | |
CN108694386B (en) | Lane line detection method based on parallel convolution neural network | |
CN108805093A (en) | Fall detection algorithm for escalator passengers based on deep learning | |
CN110188807A (en) | Tunnel pedestrian target detection method based on cascaded super-resolution network and improved Faster R-CNN | |
CN111160249A (en) | Multi-class target detection method of optical remote sensing image based on cross-scale feature fusion | |
CN111709285A (en) | Epidemic situation protection monitoring method and device based on unmanned aerial vehicle and storage medium | |
CN107085696A (en) | Vehicle localization and type recognition method based on checkpoint images | |
CN110298265A (en) | Specific objective detection method in a kind of elevator based on YOLO neural network | |
CN111126399A (en) | Image detection method, device and equipment and readable storage medium | |
CN110084165A (en) | The intelligent recognition and method for early warning of anomalous event under the open scene of power domain based on edge calculations | |
CN104517095B (en) | A kind of number of people dividing method based on depth image | |
CN104134364B (en) | Real-time traffic sign identification method and system with self-learning capacity | |
CN112818871B (en) | Target detection method of full fusion neural network based on half-packet convolution | |
CN110532850B (en) | Fall detection method based on video joint points and hybrid classifier | |
CN110399820B (en) | Visual recognition analysis method for roadside scene of highway | |
CN112287827A (en) | Complex environment pedestrian mask wearing detection method and system based on intelligent lamp pole | |
CN109935080A (en) | The monitoring system and method that a kind of vehicle flowrate on traffic route calculates in real time | |
CN113326735B (en) | YOLOv 5-based multi-mode small target detection method | |
CN116343330A (en) | Abnormal behavior identification method for infrared-visible light image fusion | |
CN114612937A (en) | Single-mode enhancement-based infrared and visible light fusion pedestrian detection method | |
CN112580434B (en) | Face false detection optimization method and system based on depth camera and face detection equipment | |
CN116824335A (en) | YOLOv5 improved algorithm-based fire disaster early warning method and system | |
CN109961425A (en) | Water quality recognition method for dynamic water | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20170929 |
|