CN107609602A - Driving scene classification method based on convolutional neural networks - Google Patents
Driving scene classification method based on convolutional neural networks
- Publication number
- CN107609602A (application CN201710894156.2A)
- Authority
- CN
- China
- Prior art keywords
- convolutional neural
- neural networks
- scene
- traffic scene
- traffic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a driving scene classification method based on convolutional neural networks, comprising the following steps: acquiring road environment video images; dividing traffic scenes into categories and establishing a traffic scene recognition database; extracting sample pictures of different driving scenes from the traffic scene recognition database, performing feature extraction and multiple rounds of convolution training on the sample pictures with a deep convolutional neural network, rasterizing the pixel values and concatenating them into a vector that is input to a conventional neural network to obtain the convolutional neural network output, thereby realizing deep learning of different kinds of driving scenes; optimizing the parameters of the network structure of the constructed convolutional neural network to obtain a trained convolutional neural network classifier, adjusting the traffic scene recognition model accordingly, and selecting the best-performing configuration as the standard traffic scene recognition model; and acquiring traffic scene images to be classified in real time and inputting them into the traffic scene recognition model to recognize the road environment scene.
Description
Technical field
The present invention relates to the technical field of intelligent vehicles, and in particular to a method for classifying the driving scenes of an intelligent vehicle using convolutional neural network technology.
Background technology
In recent years, vehicle intelligence technology has developed rapidly. Within the classification standards for vehicle intelligence, driver assistance and partially automated driving have entered the industrialization stage, while conditionally and highly automated driving are entering the test and validation stage. Image processing and recognition are key technologies by which intelligent driver assistance systems and autonomous vehicles perceive their environment, and they are widely applied. A vehicle-mounted forward-facing vision sensor can accurately capture information about many kinds of road environments. By perceiving this environmental information a vehicle can recognize different road scenes; for each road scene the vehicle can then autonomously switch to a different driving mode and adapt the decision and control scheme of the system, adjusting its own driving state and executing the operating commands appropriate to the road conditions, so as to achieve efficient, energy-saving, and environmentally friendly driving.
However, traffic scenes are complex: the features of different traffic scene classes are closely spaced, while the spacing within each class is large. Features must be extracted before traffic scene recognition can be performed, and because traffic scene images are highly variable and complex, explicit feature extraction is not easy.
Deep learning, as a relatively new method, is widely used in every field of machine learning. Since its introduction into image recognition, deep learning has developed rapidly. Image recognition algorithms based on deep learning are characterized by features that are learned automatically from large amounts of data rather than designed by hand. Among these methods, the convolutional neural network is a deep learning method developed on the basis of the conventional multilayer neural network and specially designed for image classification and recognition. A convolutional neural network uses a structure specialized for image recognition and can be trained quickly; it can therefore make effective use of a multilayer network structure, and this multilayer structure offers a great advantage in recognition accuracy. Using convolutional neural networks to solve the driving scene classification problem of intelligent vehicles is therefore quite feasible.
Summary of the invention
The present invention provides a driving scene classification method based on a deep convolutional network. Its object is to provide basic technical support for the driving mode switching and autonomous driving control decision-making of intelligent driver assistance systems and autonomous vehicles.
The object of the present invention is achieved by the following scheme:
A driving scene classification method based on convolutional neural networks comprises the following steps:
Step 1: road environment video image acquisition: a vehicle-mounted camera is used to capture images of the road environment in front of and around the vehicle while it is driving;
Step 2: traffic scene category division and establishment of a traffic scene recognition database: based on the data collected by the vehicle-mounted camera, the images of the three classes of traffic scene are classified according to the characteristics of urban road scenes, rural road scenes, and highway scenes, and a traffic scene recognition database is established;
Step 3: deep convolutional neural network learning: sample pictures of different driving scenes are extracted from the traffic scene recognition database established in Step 2; a deep convolutional neural network performs feature extraction and multiple rounds of convolution training on the sample pictures; the pixel values are rasterized and concatenated into a vector that is input to a conventional neural network to obtain the convolutional neural network output, realizing deep learning of different kinds of driving scenes;
Step 4: the parameters of the network structure of the convolutional neural network built in Step 3 are optimized to obtain a trained convolutional neural network classifier; traffic scene recognition images are taken from the traffic scene recognition database established in Step 2 and, after image preprocessing, are input to the trained convolutional neural network classifier model, which outputs classification results; the correct recognition rate of the model for each class of traffic scene is measured, the traffic scene recognition model is adjusted accordingly, and the best-performing configuration is selected as the standard traffic scene recognition model;
Step 5: traffic scene images to be classified are acquired in real time and standardized; each picture to be detected is input to the traffic scene recognition model trained and optimized in Step 4, and the road environment scene is recognized in real time.
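The five steps above can be read as a single pipeline. The following is a minimal Python sketch of that pipeline; the helper names (build_cnn, the model's fit/evaluate/predict methods) and the train/test split are illustrative assumptions, not details given in the patent.

```python
# Minimal sketch of the five-step pipeline; all helper names are assumptions.
SCENE_CLASSES = ["urban road", "rural road", "highway"]

def build_database(frames, labels):
    """Steps 1-2: pair camera frames with their manually assigned scene labels."""
    return [(frame, SCENE_CLASSES.index(label)) for frame, label in zip(frames, labels)]

def train_recognition_model(database, build_cnn):
    """Steps 3-4: train candidate CNN structures and keep the best-performing one."""
    split = int(0.8 * len(database))                 # assumed train/test split
    train_set, test_set = database[:split], database[split:]
    best_model, best_acc = None, -1.0
    for num_stages in (2, 3):                        # candidate network structures
        model = build_cnn(num_stages=num_stages, num_classes=len(SCENE_CLASSES))
        model.fit(train_set)
        acc = model.evaluate(test_set)               # correct recognition rate
        if acc > best_acc:
            best_model, best_acc = model, acc
    return best_model

def recognise_in_real_time(model, camera_frames):
    """Step 5: classify standardized frames as they arrive."""
    for frame in camera_frames:
        yield SCENE_CLASSES[model.predict(frame)]
```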
Further, in Step 3 the process of performing feature extraction and multiple rounds of convolution training on the sample pictures with the deep convolutional neural network is: the input image is convolved with trainable filters and additive biases, then sub-sampled; a second convolution is then performed, followed by a second sub-sampling, and the sampled results are concatenated.
Further, the sub-sampling process is: every group of m neighboring pixels is summed into one pixel, weighted by a scalar, and a bias is added; a feature map is then produced through the sigmoid activation function.
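A minimal NumPy sketch of the sub-sampling operation just described, assuming a 2x2 window (m = 4); the scalar weight beta and bias b are trainable in the method, so the default values here are placeholders.

```python
import numpy as np

def subsample(feature_map, beta=0.25, b=0.0, window=2):
    """Sum each non-overlapping window x window block of pixels, weight the sum by a
    scalar, add a bias, and squash with the sigmoid activation to produce the feature map."""
    h = feature_map.shape[0] - feature_map.shape[0] % window
    w = feature_map.shape[1] - feature_map.shape[1] % window
    blocks = feature_map[:h, :w].reshape(h // window, window, w // window, window)
    summed = blocks.sum(axis=(1, 3))                   # every m = window*window pixels summed
    return 1.0 / (1.0 + np.exp(-(beta * summed + b)))  # scalar weighting, bias, sigmoid
```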
Compared with the prior art, the present invention has the following advantages:
The basic structure of a convolutional neural network consists mainly of two kinds of layers. The first is the feature extraction layer: the input of each neuron is connected to a local receptive field of the previous layer, from which the local feature is extracted. The second is the feature mapping layer: each computational layer of the network is composed of multiple feature maps, each feature map is a plane, and all neurons in a plane have equal weights. Because the neurons on one mapping plane share weights, the number of free parameters of the network is reduced. Owing to these characteristics, a convolutional neural network can be used to recognize two-dimensional images that are displaced, scaled, or otherwise irregularly distorted. A convolutional neural network learns from the training data, avoiding explicit feature extraction and instead learning features implicitly from the training data; since the neuron weights on the same mapping plane are identical, the network can also learn in parallel. These properties give it unique advantages in image processing, and weight sharing reduces the complexity of the network.
A convolutional neural network has three distinctive design features: local receptive fields, weight sharing, and sub-sampling. In traffic scene recognition, the task is mainly to find the differences among images of urban road scenes, rural road scenes, and highway scenes. Herein, a traffic scene database is established, the number of network layers of the convolutional neural network and the number of feature mapping layers are varied, recognition is carried out on the traffic scene test set of the database, and the parameters with the highest recognition accuracy are selected for traffic scene recognition.
Brief description of the drawings
Fig. 1 is the overall flow chart of the driving scene classification method based on convolutional neural networks according to the present invention;
Fig. 2 is the architecture of the deep convolutional neural network;
Fig. 3 shows the convolution and sub-sampling processes in the deep convolutional neural network;
Fig. 4 (a) is the up-sampling function;
Fig. 4 (b) is the down-sampling function;
Fig. 5 is the structure chart of the deep convolutional neural network;
Fig. 6 is a schematic diagram of the deep convolutional neural network training process under different traffic scenes.
Embodiment
The primary object of the present invention is a driving scene classification method based on convolutional neural networks that achieves automatic recognition and classification of the road environment while the vehicle is in motion, that is, a method by which the vehicle can continuously perceive and judge the surrounding driving scene information as it drives, enabling it to pass through the corresponding road environment more safely and efficiently.
To achieve the above object, the driving scene classification method based on convolutional neural networks provided by the invention comprises the following steps:
Step 1: road environment video image acquisition. A vehicle-mounted camera is used to capture images of the road environment in front of and around the vehicle while it is driving.
Step 2: traffic scene category division and establishment of a traffic scene recognition database.
A large amount of data is collected with the vehicle-mounted camera; the images of the three classes of traffic scene are classified according to the characteristics of urban road scenes, rural road scenes, and highway scenes, and a traffic scene recognition database is established.
Step 3: deep convolutional neural network learning.
Sample pictures of different driving scenes are extracted from the traffic scene recognition database established in Step 2, and the deep convolutional neural network performs feature extraction and multiple rounds of convolution training on them, realizing deep learning of the different kinds of driving scenes. The input image is convolved with trainable filters and additive biases to produce feature maps; after multiple convolutions, the pixel values are rasterized and concatenated into a vector that is input to a conventional neural network to produce the output.
Convolution is also applied to the test set. At each stage of training, a certain number of pictures are selected for training: a first convolution is performed, then sub-sampling, then a second convolution and a second sub-sampling; the sampled results are concatenated, the classification result is computed and compared with the experimental labels, and the parameters are adjusted.
When neural networks are used for pattern recognition, the mainstream approach is the supervised learning network; unsupervised learning networks are used more for cluster analysis. In pattern recognition with a supervised learning network, since the class of every sample is known, the partition of the sample space no longer follows the natural distribution of the samples; instead, an appropriate way of dividing the space, or a classification boundary, is sought according to the distribution of samples of the same class in the space and the degree of separation between samples of different classes, so that samples of different classes lie in different regions. This requires a long and complex learning process that continually adjusts the position of the classification boundary partitioning the sample space, so that as few samples as possible are assigned to regions of the wrong class.
A convolutional network is essentially a mapping from input to output. It can learn a large number of mapping relations between inputs and outputs without any precise mathematical expression between input and output; as long as the convolutional network is trained on known patterns, it acquires the ability to map between input-output pairs.
Convolutional neural networks are a kind of deep neural network and are widely applied in areas such as face detection and speech detection. Each layer is composed of multiple two-dimensional planes, and each plane is composed of multiple independent neurons.
The convolutional neural network is shown in Fig. 2: the input image is convolved with three trainable filters and additive biases, and after filtering three feature maps are generated in layer C1. Each group of four pixels in these feature maps then undergoes mean-pooling and the addition of a bias, and the three feature maps of layer S2 are obtained through a sigmoid function. The maps of the S2 layer are filtered again to obtain layer C3, and the C3 maps undergo mean-pooling again to produce layer S4. Finally, these pixel values are rasterized and concatenated into a vector that is input to a conventional neural network, which produces the output.
In general, layers C1 and C3 are feature extraction layers: the input of each neuron is connected to the local receptive field of the previous layer, and the local feature is extracted; once a local feature has been extracted, its positional relation to the other features is determined as well. Layers S2 and S4 are feature mapping layers: each computational layer of the convolutional network is composed of multiple feature maps, each feature map is a plane, and all neurons in a plane share equal weights. The feature mapping structure uses the sigmoid function, whose influence-function kernel is small, as the activation function of the convolutional network, so that the feature maps are shift-invariant.
Furthermore, since the neurons on one mapping plane share weights, the number of free parameters of the network is reduced, which lowers the complexity of selecting the network parameters. Each feature extraction layer (C layer) in the convolutional neural network is followed by a computational layer (S layer) that computes local averages and performs a second extraction; this characteristic two-stage feature extraction structure gives the network a high tolerance to distortion of the input samples during recognition.
As shown in Fig. 3, the first layer is the input layer, which receives pictures of size 480x280; the second layer is a convolutional layer with 5x5 convolution kernels; the third layer is a sub-sampling layer that performs 2x2 average sampling on the input maps; the fourth layer is a convolutional layer with 5x5 convolution kernels; and the fifth layer is a sub-sampling layer that performs 2x2 average sub-sampling on the input maps.
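A sketch of this five-layer structure in tf.keras is given below. The patent fixes the 480x280 input size, the 5x5 kernels, and the 2x2 average sub-sampling, but the number of feature maps per layer is left to the parameter optimization of Step 4 and the classifier head is not specified here, so the values 6 and 12 and the three-way softmax output are assumptions.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(280, 480, 1)),                # layer 1: 480x280 input picture
    tf.keras.layers.Conv2D(6, (5, 5), activation="sigmoid"),   # layer 2: 5x5 convolution
    tf.keras.layers.AveragePooling2D(pool_size=(2, 2)),        # layer 3: 2x2 average sub-sampling
    tf.keras.layers.Conv2D(12, (5, 5), activation="sigmoid"),  # layer 4: 5x5 convolution
    tf.keras.layers.AveragePooling2D(pool_size=(2, 2)),        # layer 5: 2x2 average sub-sampling
    tf.keras.layers.Flatten(),                                 # rasterize into a vector
    tf.keras.layers.Dense(3, activation="softmax"),            # urban / rural / highway
])
model.compile(optimizer="sgd", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```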
The convolutional neural network training process can be summarized as follows.
Convolution process: an input image (in the first stage the raw input image, in later stages a convolutional feature map) is convolved with a trainable filter f_x and a bias b_x is added, giving the convolutional layer C_{x+1}. Sub-sampling process: every group of four neighboring pixels is summed into one pixel, weighted by a scalar W_{x+1}, a bias b_{x+1} is added, and the result is passed through a sigmoid activation function to produce a feature map S_{x+1} that is roughly four times smaller.
The convolution operation can thus be regarded as a mapping from one plane to the next, and the S layers can be regarded as blurring filters that perform a second feature extraction. The spatial resolution decreases from one hidden layer to the next while the number of planes per layer increases, which allows more feature information to be detected.
The specific convolutional neural network training process includes the following.
The training performed by the convolutional network is supervised, so its sample set consists of vector pairs of the form (input vector, ideal output vector). All these vector pairs should be derived from the actual running results of the system that the network is to simulate, and they can be collected from the actual running system. Before training starts, all weights should be initialized with different small random numbers. The "small random numbers" ensure that the network does not enter saturation because the weights are too large, which would cause training to fail; "different" ensures that the network can learn normally. In fact, if the weight matrix were initialized with identical values, the network would be unable to learn.
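A short sketch of the initialization requirement just stated; the scale 0.01 and the normal distribution are assumptions, the essential points being that the values are small and not identical.

```python
import numpy as np

def init_weights(shape, scale=0.01, rng=np.random.default_rng(0)):
    """Small, mutually different random numbers: small enough not to saturate the
    sigmoid, and not all equal, otherwise the network cannot learn."""
    return rng.normal(loc=0.0, scale=scale, size=shape)

kernel = init_weights((5, 5))   # e.g. one 5x5 convolution kernel
```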
1) Convolutional layer
In a convolutional layer, the features of the previous layer are convolved with a learnable convolution kernel and passed through an activation function to obtain the output feature map. Each output map may combine the convolutions of several input maps:
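Formula (1) appears only as an image in the published document; a reconstruction consistent with the variable definitions that follow (and with the standard formulation of a CNN convolutional layer) is:

```latex
x_j^{\ell} = f\Bigl(\sum_{i \in M_j} x_i^{\ell-1} * k_{ij}^{\ell} + b_j^{\ell}\Bigr) \tag{1}
```

where f is the activation function and * denotes convolution.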
In formula (1), M_j denotes the selected set of input maps, k is the convolution kernel, and b is the bias coefficient; each output map has its own bias b.
Assume that each convolutional layer l is followed by a down-sampling layer l+1. For backpropagation, in order to compute the updates of the weights attached to each neuron of layer l, the sensitivity δ of each neural node of layer l must first be obtained, as in formula (2). To obtain this sensitivity, the sensitivities of the nodes of the next layer (the nodes of layer l+1 connected to the node of interest in the current layer l) are summed to give δ^{l+1}; this sum is multiplied by the corresponding connection weights W (the weights connecting the node of interest in layer l to the nodes of layer l+1), and then by the derivative of the activation function f evaluated at the input u of the neural node of the current layer l. This back-propagated computation solves formula (2) for δ^l and yields the sensitivity δ^l of each neural node of the current layer l.
Because of the down-sampling, the sensitivity corresponding to one pixel (neural node) of the sampling layer corresponds to a block of pixels (of the size of the sampling window) in the output map of the convolutional layer above it. Therefore each node of a map in layer l is connected to the corresponding node of the matching map in layer l+1.
To compute the sensitivities of layer l efficiently, the sensitivity map corresponding to the down-sampling layer must be up-sampled (each pixel in a feature map has a corresponding sensitivity, so the sensitivities also form a map), so that the sensitivity map has the same size as the map of the convolutional layer; the partial derivative of the activation values of the map of layer l is then multiplied element by element with the sensitivity map up-sampled from layer l+1, i.e. formula (2).
The weights of a down-sampling layer map all take the same value β, which is a constant, so the result of the previous step only needs to be multiplied by the weight coefficient β to complete the computation of the layer-l sensitivities.
The same computation is repeated for each feature map j in the convolutional layer. Clearly, it must be matched with the map of the corresponding sub-sampling layer:
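Formula (2) is likewise not reproduced in the text; a reconstruction consistent with the description above (the constant down-sampling weight β, the up-sampled next-layer sensitivities, and the element-wise product with the derivative of the activation at the layer's input u) is:

```latex
\delta_j^{\ell} = \beta_j^{\ell+1}\,\Bigl(f'\bigl(u_j^{\ell}\bigr) \circ \operatorname{up}\bigl(\delta_j^{\ell+1}\bigr)\Bigr) \tag{2}
```

where ∘ denotes element-wise multiplication.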
In formula (2), up(·) denotes an up-sampling function that enlarges the input by a factor of two in the horizontal and vertical directions respectively, i.e. each pixel is copied twice horizontally and twice vertically, as shown in Fig. 4.
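A minimal NumPy sketch of this up-sampling function (a Kronecker product with an all-ones block), with the factor n = 2 used above:

```python
import numpy as np

def up(x, n=2):
    """Copy each pixel n times horizontally and n times vertically."""
    return np.kron(x, np.ones((n, n), dtype=x.dtype))

# Example: a 2x2 sensitivity map becomes 4x4, matching the size of the convolutional map.
print(up(np.array([[1, 2], [3, 4]])))
```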
2) Sub-sampling layer
In the sub-sampling layer, every group of m neighboring pixels is summed into one pixel, weighted by a scalar, and a bias is added; a feature map is then produced through the sigmoid activation function. The concrete operation is as follows:
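The sub-sampling formula is also not reproduced in the text; a reconstruction consistent with the variable definitions that follow is:

```latex
x_j^{\ell} = f\Bigl(\beta_j^{\ell}\,\operatorname{down}\bigl(x_j^{\ell-1}\bigr) + b_j^{\ell}\Bigr) \tag{3}
```

where f is the sigmoid activation function.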
In formula (3), x is the input image; down(·) is a down-sampling function that averages each distinct 2x2 block of the input image; β is the weight coefficient; and b is the bias coefficient.
As shown in Fig. 4 (a) and Fig. 4 (b), and following the description above, the output image is reduced by a factor of two in both dimensions relative to the input image. For a sub-sampling layer with N input maps there are exactly N output maps, only each output map is four times smaller.
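A minimal NumPy sketch of the down() function described above, averaging each distinct 2x2 block:

```python
import numpy as np

def down(x, n=2):
    """Replace each distinct n x n block of the input with its average, so that
    both image dimensions shrink by a factor of n."""
    h, w = x.shape[0] - x.shape[0] % n, x.shape[1] - x.shape[1] % n
    return x[:h, :w].reshape(h // n, n, w // n, n).mean(axis=(1, 3))

img = np.arange(16.0).reshape(4, 4)
print(down(img))        # 2x2 output: each value is the mean of one 2x2 block
```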
Convolution is applied to the screened pictures. At each stage of training, 50 pictures are selected from the database for training: a first convolution is performed, then sub-sampling, then a second convolution and a second sub-sampling; the sampled results are concatenated, the classification result is computed and compared with the experimental labels, and the parameters are adjusted.
As shown in Fig. 5, the first layer is the input layer, which receives pictures of size 480x280; the second layer is a convolutional layer with 5x5 convolution kernels; the third layer is a sub-sampling layer that performs 2x2 average sub-sampling on the input maps; the fourth layer is a convolutional layer with 5x5 convolution kernels; and the fifth layer is a sub-sampling layer that performs 2x2 average sub-sampling on the input maps.
Step 4: the parameters of the network structure of the constructed convolutional neural network are optimized. The trained network classifiers use different convolutional neural networks and different numbers of feature mapping layers in the convolutional-layer parameters; deep convolutional neural network training is carried out for each configuration, the parameters are adjusted, and the recognition error rate on the test set is computed. Traffic scene recognition images are taken from the database, preprocessed so that they are standardized, and input to the trained convolutional network classifier model, which outputs classification results; the correct recognition rate of the model for each class of traffic scene is measured, the traffic scene recognition model is adjusted accordingly, and the best-performing configuration is selected as the standard traffic scene recognition model.
A specific embodiment is as follows: actual scene test experiments are carried out to compare and verify the established network classifiers, and the recognition performance of the algorithm is observed through the parameters of the deep convolutional neural network. The test sample pictures are 480x280 in size. When the deep convolutional neural network recognizes traffic scene pictures, the number of convolutional layers and the number of feature mapping layers are adjusted, and the recognition error rates of the algorithm on the test samples are compared; the best result is selected as the final structural parameters of the deep convolutional neural network.
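A hedged sketch of this parameter selection: the candidate numbers of convolution/sub-sampling stages and feature maps, and the build_cnn helper with its fit/evaluate interface, are assumptions used only to illustrate the search over structures by test error rate.

```python
import itertools

def select_structure(build_cnn, train_set, test_set,
                     stage_options=(2, 3), feature_map_options=(6, 12, 24)):
    """Train one candidate network per (stages, feature maps) pair and keep the
    structure with the lowest recognition error rate on the test set."""
    best = None
    for stages, maps in itertools.product(stage_options, feature_map_options):
        model = build_cnn(num_stages=stages, feature_maps=maps)
        model.fit(train_set)                       # deep convolutional training
        error = 1.0 - model.evaluate(test_set)     # evaluate() assumed to return accuracy
        if best is None or error < best[0]:
            best = (error, stages, maps, model)
    return best                                    # final structural parameters and model
```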
Step 5: traffic scene images to be classified are acquired in real time and standardized; each picture to be detected is input to the trained and optimized network classifier model, and the road environment scene is recognized in real time.
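A hedged sketch of this real-time step, assuming the tf.keras-style model sketched earlier and using OpenCV only to illustrate frame capture and standardization; the class names and the 480x280 resizing follow the earlier description.

```python
import cv2
import numpy as np

SCENE_CLASSES = ["urban road", "rural road", "highway"]

def recognise_stream(model, device=0):
    cap = cv2.VideoCapture(device)                       # vehicle-mounted camera
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        img = cv2.resize(gray, (480, 280)).astype(np.float32) / 255.0   # standardization
        probs = model.predict(img[np.newaxis, ..., np.newaxis], verbose=0)
        print(SCENE_CLASSES[int(np.argmax(probs))])      # recognized road environment scene
    cap.release()
```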
The method of the invention is not limited to dividing scenes into only the three categories of urban, highway, and rural; it can be extended to the recognition of traffic scenes such as urban ring road entrances and exits, intersections, expressways, ramps, and roundabouts.
Claims (3)
1. A driving scene classification method based on convolutional neural networks, characterized in that it comprises the following steps:
Step 1: road environment video image acquisition: using a vehicle-mounted camera to capture images of the road environment in front of and around the vehicle while it is driving;
Step 2: traffic scene category division and establishment of a traffic scene recognition database: based on the data collected by the vehicle-mounted camera, classifying the images into three classes of traffic scene according to the characteristics of urban road scenes, rural road scenes, and highway scenes, and establishing a traffic scene recognition database;
Step 3: deep convolutional neural network learning: extracting sample pictures of different driving scenes from the traffic scene recognition database established in Step 2, performing feature extraction and multiple rounds of convolution training on the sample pictures with a deep convolutional neural network, rasterizing the pixel values and concatenating them into a vector that is input to a conventional neural network to obtain the convolutional neural network output, thereby realizing deep learning of different kinds of driving scenes;
Step 4: optimizing the parameters of the network structure of the convolutional neural network built in Step 3 to obtain a trained convolutional neural network classifier; taking traffic scene recognition images from the traffic scene recognition database established in Step 2 and, after image preprocessing, inputting them to the trained convolutional neural network classifier model to output classification results; measuring the correct recognition rate of the model for each class of traffic scene, adjusting the traffic scene recognition model accordingly, and selecting the best-performing configuration as the standard traffic scene recognition model;
Step 5: acquiring traffic scene images to be classified in real time and standardizing them, inputting each picture to be detected into the traffic scene recognition model trained and optimized in Step 4, and recognizing the road environment scene in real time.
2. The driving scene classification method based on convolutional neural networks according to claim 1, characterized in that the process in Step 3 of performing feature extraction and multiple rounds of convolution training on the sample pictures with the deep convolutional neural network is: convolving the input image with trainable filters and additive biases, then sub-sampling; then performing a second convolution followed by a second sub-sampling, and concatenating the sampled results.
3. The driving scene classification method based on convolutional neural networks according to claim 2, characterized in that the sub-sampling process is: summing every group of m neighboring pixels into one pixel, weighting by a scalar, adding a bias, and then producing a feature map through the sigmoid activation function.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710894156.2A CN107609602A (en) | 2017-09-28 | 2017-09-28 | Driving scene classification method based on convolutional neural networks |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107609602A (en) | 2018-01-19 |
Family
ID=61059069
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710894156.2A (published as CN107609602A, pending) | Driving scene classification method based on convolutional neural networks | 2017-09-28 | 2017-09-28 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107609602A (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105138963A (en) * | 2015-07-31 | 2015-12-09 | 小米科技有限责任公司 | Picture scene judging method, picture scene judging device and server |
CN105488534A (en) * | 2015-12-04 | 2016-04-13 | 中国科学院深圳先进技术研究院 | Method, device and system for deeply analyzing traffic scene |
CN106446914A (en) * | 2016-09-28 | 2017-02-22 | 天津工业大学 | Road detection based on superpixels and convolution neural network |
Non-Patent Citations (2)
Title |
---|
Chang Liang et al., "Convolutional Neural Networks in Image Understanding" (图像理解中的卷积神经网络), Acta Automatica Sinica (自动化学报) *
Yang Jingchao, "Research on Classification Methods for Complex Road Scenes Based on a Single Image in Driver Assistance" (辅助驾驶中基于单幅图像的复杂道路场景分类方法研究), China Masters' Theses Full-text Database, Information Science and Technology (中国优秀硕士学位论文全文数据库 信息科技辑) *
Cited By (62)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110494863B (en) * | 2018-03-15 | 2024-02-09 | 辉达公司 | Determining drivable free space of an autonomous vehicle |
CN110494863A (en) * | 2018-03-15 | 2019-11-22 | 辉达公司 | Determine autonomous vehicle drives free space |
CN108513674A (en) * | 2018-03-26 | 2018-09-07 | 深圳市锐明技术股份有限公司 | A kind of detection alarm method, storage medium and the server of Chinese herbaceous peony accumulated snow and icing |
CN108701396A (en) * | 2018-03-26 | 2018-10-23 | 深圳市锐明技术股份有限公司 | A kind of detection alarm method, storage medium and the server of Chinese herbaceous peony accumulated snow and icing |
CN112154490A (en) * | 2018-03-28 | 2020-12-29 | 罗伯特·博世有限公司 | In-vehicle system for estimating scene inside vehicle cabin |
CN108545021A (en) * | 2018-04-17 | 2018-09-18 | 济南浪潮高新科技投资发展有限公司 | A kind of auxiliary driving method and system of identification special objective |
CN108765033A (en) * | 2018-06-08 | 2018-11-06 | Oppo广东移动通信有限公司 | Transmitting advertisement information method and apparatus, storage medium, electronic equipment |
CN109001211A (en) * | 2018-06-08 | 2018-12-14 | 苏州赛克安信息技术有限公司 | Welds seam for long distance pipeline detection system and method based on convolutional neural networks |
CN108765033B (en) * | 2018-06-08 | 2021-01-12 | Oppo广东移动通信有限公司 | Advertisement information pushing method and device, storage medium and electronic equipment |
CN108921044A (en) * | 2018-06-11 | 2018-11-30 | 大连大学 | Driver's decision feature extracting method based on depth convolutional neural networks |
CN108921200A (en) * | 2018-06-11 | 2018-11-30 | 百度在线网络技术(北京)有限公司 | Method, apparatus, equipment and medium for classifying to Driving Scene data |
CN113642633A (en) * | 2018-06-11 | 2021-11-12 | 百度在线网络技术(北京)有限公司 | Method, apparatus, device and medium for classifying driving scene data |
US11783590B2 (en) | 2018-06-11 | 2023-10-10 | Apollo Intelligent Driving Technology (Beijing) Co., Ltd. | Method, apparatus, device and medium for classifying driving scenario data |
CN109582993A (en) * | 2018-06-20 | 2019-04-05 | 长安大学 | Urban transportation scene image understands and multi-angle of view gunz optimization method |
CN109582993B (en) * | 2018-06-20 | 2022-11-25 | 长安大学 | Urban traffic scene image understanding and multi-view crowd-sourcing optimization method |
CN108920805B (en) * | 2018-06-25 | 2022-04-05 | 大连大学 | Driver behavior modeling system with state feature extraction function |
CN108920805A (en) * | 2018-06-25 | 2018-11-30 | 大连大学 | Driving behavior modeling with state feature extraction functions |
CN109145798B (en) * | 2018-08-13 | 2021-10-22 | 浙江零跑科技股份有限公司 | Driving scene target identification and travelable region segmentation integration method |
CN109145798A (en) * | 2018-08-13 | 2019-01-04 | 浙江零跑科技有限公司 | A kind of Driving Scene target identification and travelable region segmentation integrated approach |
CN110376594A (en) * | 2018-08-17 | 2019-10-25 | 北京京东尚科信息技术有限公司 | A kind of method and system of the intelligent navigation based on topological diagram |
CN109389046A (en) * | 2018-09-11 | 2019-02-26 | 昆山星际舟智能科技有限公司 | Round-the-clock object identification and method for detecting lane lines for automatic Pilot |
US11724700B2 (en) | 2018-09-12 | 2023-08-15 | Huawei Technologies Co., Ltd. | Intelligent driving method and intelligent driving system |
CN110893860A (en) * | 2018-09-12 | 2020-03-20 | 华为技术有限公司 | Intelligent driving method and intelligent driving system |
CN109341691A (en) * | 2018-09-30 | 2019-02-15 | 百色学院 | Intelligent indoor positioning system and its localization method based on icon-based programming |
CN111179585A (en) * | 2018-11-09 | 2020-05-19 | 上海汽车集团股份有限公司 | Site testing method and device for automatic driving vehicle |
CN109747659A (en) * | 2018-11-26 | 2019-05-14 | 北京汽车集团有限公司 | The control method and device of vehicle drive |
CN109747659B (en) * | 2018-11-26 | 2021-07-02 | 北京汽车集团有限公司 | Vehicle driving control method and device |
CN109902600A (en) * | 2019-02-01 | 2019-06-18 | 清华大学 | A kind of road area detection method |
CN109993082A (en) * | 2019-03-20 | 2019-07-09 | 上海理工大学 | The classification of convolutional neural networks road scene and lane segmentation method |
CN109993082B (en) * | 2019-03-20 | 2021-11-05 | 上海理工大学 | Convolutional neural network road scene classification and road segmentation method |
CN110046655A (en) * | 2019-03-26 | 2019-07-23 | 天津大学 | A kind of audio scene recognition method based on integrated study |
CN110046655B (en) * | 2019-03-26 | 2023-03-31 | 天津大学 | Audio scene recognition method based on ensemble learning |
CN110192232A (en) * | 2019-04-18 | 2019-08-30 | 京东方科技集团股份有限公司 | Traffic information processing equipment, system and method |
CN110162601B (en) * | 2019-05-22 | 2020-12-25 | 吉林大学 | Biomedical publication contribution recommendation system based on deep learning |
CN110162601A (en) * | 2019-05-22 | 2019-08-23 | 吉林大学 | A kind of biomedical publication submission recommender system based on deep learning |
CN110232335A (en) * | 2019-05-24 | 2019-09-13 | 国汽(北京)智能网联汽车研究院有限公司 | Driving Scene classification method and electronic equipment |
CN110705483A (en) * | 2019-10-08 | 2020-01-17 | Oppo广东移动通信有限公司 | Driving reminding method, device, terminal and storage medium |
CN110705483B (en) * | 2019-10-08 | 2022-11-18 | Oppo广东移动通信有限公司 | Driving reminding method, device, terminal and storage medium |
CN111079800B (en) * | 2019-11-29 | 2023-06-23 | 上海汽车集团股份有限公司 | Acceleration method and acceleration system for intelligent driving virtual test |
CN111079800A (en) * | 2019-11-29 | 2020-04-28 | 上海汽车集团股份有限公司 | Acceleration method and acceleration system for intelligent driving virtual test |
CN110956146B (en) * | 2019-12-04 | 2024-04-12 | 新奇点企业管理集团有限公司 | Road background modeling method and device, electronic equipment and storage medium |
CN110956146A (en) * | 2019-12-04 | 2020-04-03 | 新奇点企业管理集团有限公司 | Road background modeling method and device, electronic equipment and storage medium |
CN111062307A (en) * | 2019-12-12 | 2020-04-24 | 天地伟业技术有限公司 | Scene recognition and classification method based on Tiny-Darknet |
CN111160424A (en) * | 2019-12-16 | 2020-05-15 | 南方电网科学研究院有限责任公司 | NFC equipment fingerprint authentication method and system based on CNN image identification |
CN111231823A (en) * | 2020-03-16 | 2020-06-05 | 连云港杰瑞电子有限公司 | Special vehicle-mounted lighting system for special vehicle and light adjusting method thereof |
CN111273676A (en) * | 2020-03-24 | 2020-06-12 | 广东工业大学 | End-to-end automatic driving method and system |
CN111273676B (en) * | 2020-03-24 | 2023-04-18 | 广东工业大学 | End-to-end automatic driving method and system |
CN112512890A (en) * | 2020-04-02 | 2021-03-16 | 华为技术有限公司 | Abnormal driving behavior recognition method |
WO2021253741A1 (en) * | 2020-06-15 | 2021-12-23 | 重庆长安汽车股份有限公司 | Scenario identification-based vehicle adaptive sensor system |
CN111599183A (en) * | 2020-07-22 | 2020-08-28 | 中汽院汽车技术有限公司 | Automatic driving scene classification and identification system and method |
CN111599183B (en) * | 2020-07-22 | 2020-10-27 | 中汽院汽车技术有限公司 | Automatic driving scene classification and identification system and method |
CN111858342A (en) * | 2020-07-23 | 2020-10-30 | 深圳慕智科技有限公司 | Fuzzy test data generation method based on intelligent traffic image input feature recognition |
CN111985378A (en) * | 2020-08-13 | 2020-11-24 | 中国第一汽车股份有限公司 | Road target detection method, device and equipment and vehicle |
CN112633095A (en) * | 2020-12-14 | 2021-04-09 | 公安部交通管理科学研究所 | Video point location filing information verification method |
CN113052135A (en) * | 2021-04-22 | 2021-06-29 | 淮阴工学院 | Lane line detection method and system based on deep neural network Lane-Ar |
CN113449589B (en) * | 2021-05-16 | 2022-11-15 | 桂林电子科技大学 | Method for calculating driving strategy of unmanned vehicle in urban traffic scene |
CN113449589A (en) * | 2021-05-16 | 2021-09-28 | 桂林电子科技大学 | Method for calculating driving strategy of unmanned automobile in urban traffic scene |
WO2023279396A1 (en) * | 2021-07-09 | 2023-01-12 | 华为技术有限公司 | Operational design domain identification method and apparatus |
CN113610970A (en) * | 2021-08-30 | 2021-11-05 | 上海智能网联汽车技术中心有限公司 | Automatic driving system, device and method |
CN114580890A (en) * | 2022-03-02 | 2022-06-03 | 博雷顿科技有限公司 | Collaborative decision analysis neural network algorithm system design method and system and vehicle |
CN114580890B (en) * | 2022-03-02 | 2024-07-26 | 博雷顿科技股份公司 | Collaborative decision analysis neural network algorithm system design method, collaborative decision analysis neural network algorithm system design system and vehicle |
CN118334612A (en) * | 2024-06-14 | 2024-07-12 | 成都航空职业技术学院 | Dynamic target detection method under complex traffic scene based on deep learning |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20180119 |