CN106408015A - Road fork identification and depth estimation method based on convolutional neural network - Google Patents
- Publication number: CN106408015A (application CN201610818250.5A)
- Authority: CN (China)
- Prior art keywords: fork, road, sample, convolutional neural, detection
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F18/24 — Pattern recognition; analysing; classification techniques
- G06F18/214 — Generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06V10/40 — Image or video recognition or understanding; extraction of image or video features
Abstract
The invention discloses a road fork identification and depth estimation method based on a convolutional neural network, comprising the following steps: S1, collecting fork samples and non-fork samples in various outdoor scenes; S2, preprocessing the fork samples and the non-fork samples; S3, training a CNN fork detector; S4, acquiring detection images and preprocessing them; S5, building an image pyramid from each detection image; S6, extracting features from the image pyramid; S7, scanning the feature maps to form feature vectors; S8, classifying the features; and S9, merging detection windows and outputting the result. The convolutional neural network learns the essential features of forks from samples; features are extracted by scanning windows over the detection images and classified by the network's classifier, and the depth of each fork is obtained through a linear regression layer of the network. The method effectively improves the robustness and interference resistance of road fork detection.
Description
Technical field
The present invention relates to the fields of computer vision and pattern recognition, and in particular to a road fork identification and depth estimation method based on convolutional neural networks.
Background technology
The basic task of outdoor mobile robot navigation is to determine, through sensors, the robot's position relative to its environment so that the robot can plan paths. Among existing navigation techniques, vision-based methods have become one of the most promising research directions because of their rich sensor information and their similarity to human perception. Among vision methods, monocular algorithms strike a good balance between robustness and efficiency and require only relatively inexpensive equipment, so they have been studied most widely.

To date, the development of robot navigation has gone through roughly three stages. The first stage focused on structured environments: map-based methods were common in small-scale settings but largely remained at the laboratory stage, while applications in highway environments did not build a global map and instead relied mostly on lane-line features for local relative position estimation, targeting the intelligent transportation field. The research of the second stage turned to complex structured roads, urban environments and unstructured (wild) environments; machine learning theory, more sophisticated filtering techniques and sensor fusion were introduced into these detection systems, and fusion at both the sensor and algorithm levels became the research focus of the period. However, because the problems faced are more diverse and complex, no "standard" algorithmic framework emerged as it had in the first stage. The basic idea of the third stage is "three-dimensional reconstruction from a single image": analyzing the multi-scale features of a single image, reconstructing the 3D scene with machine-learning methods, and finally identifying the traversable area around the mobile robot.
Current target detection methods based on target modeling are generally divided into an offline training stage and an online detection stage. In offline training, features are extracted from the foreground targets and the background in the training samples to build appearance models of the target or background, and a classifier is then trained to obtain a classifier model. In online detection, a sliding window is scanned over the test sample at multiple scales; the same feature representation is used to build an appearance model for each window, which is then classified with the trained classifier model to decide whether the window contains a foreground target. Unlike methods based on background modeling, detection methods based on foreground target modeling are not limited to a particular scene, apply more broadly, and do not require the detection results to be segmented again.

Image feature representations can be divided into hand-designed features and learned features. Hand-designed features exploit human prior knowledge and apply it to tasks such as target detection and recognition. Such methods are relatively easy to implement and cheap to compute, but they depend heavily on human knowledge and experience and cannot capture the most essential properties of the image or object model. Learned feature representations, mainly obtained through unsupervised learning, let the machine automatically learn from samples the features that characterize them most essentially. Classifier design tailors a classifier to the real needs of a particular target detection problem. Support vector machines (SVM), dynamic Bayesian networks and nearest-neighbor classifiers are the most widely used classifiers, but as image classification research has progressed and the dimensionality of image representations has grown, a single traditional classifier can no longer meet the requirements. A common practice is to combine multiple classifiers into a stronger classifier with better classification performance; typical methods include Bagging, Boosting and random forests.
Hand-designed features and complex classifiers have two obvious shortcomings: first, hand-designed features are usually low-level features with weak separability; second, the more complex the classifier, the more time fork detection takes. In addition, outdoor scenes are complex and changeable, with variations in illumination and viewpoint across scenes. A fork detection technique that is both accurate and fast in complex scenes is therefore needed.
The goal of depth estimation from a monocular image is to assign a depth to every pixel of the image. According to the degree of automation, monocular depth estimation algorithms fall into two classes, semi-automatic and fully automatic, the difference being whether manual assistance is required. Semi-automatic algorithms typically require manually annotating partial depths in an image or video sequence and then propagating these sparse annotations to the whole image. Their aim is to recover high-accuracy depth, but because they require manual input they are time-consuming and rarely meet the needs of most computer vision tasks. Fully automatic algorithms need no auxiliary input and estimate depth directly by a designed algorithm, so they are more generally applicable. Summarizing the progress of researchers in depth estimation over the last two decades, monocular depth estimation algorithms can be divided into two classes: those based on depth cues and those based on machine learning.

Common depth cues include motion, linear perspective, focus, occlusion, texture, shading, haze and so on. Depth estimation methods based on depth cues have strict conditions of use; fusing multiple cues can broaden the applicable scene range to some extent, but the estimation accuracy remains low. Depth estimation algorithms based on statistical models are not restricted to specific scene conditions and generalize better, so they are attracting increasingly wide study. Such algorithms mainly use supervised machine learning: a large, representative set of training images with corresponding depths, prepared in advance, is fed into a defined model for learning, and after training the model can compute the depth of an actual test image.
From the above analysis of conventional target detection methods and monocular depth estimation methods, image-processing approaches and detection methods based on hand-designed features both have certain defects. In the current era of big data, deep learning methods can better accomplish the tasks of fork detection and depth estimation.
Content of the invention
It is an object of the present invention to overcome the deficiencies of the prior art by providing a convolutional-neural-network-based road fork identification and depth estimation method that obtains fork depth through the linear regression of a convolutional neural network, effectively improves the robustness and interference resistance of fork detection, achieves a detection accuracy that meets practical engineering requirements, and can be widely applied to the detection of various road forks.
The object of the present invention is achieved through the following technical solution: a road fork identification and depth estimation method based on convolutional neural networks, comprising two phases, offline training and online detection. The offline training comprises the following steps:
S1, collecting fork samples and non-fork samples in various outdoor scenes, and classifying the fork samples;
S2, preprocessing the fork samples and the non-fork samples;
S3, training a CNN fork detector.
The online detection comprises the following steps:
S4, acquiring detection images, and preprocessing the acquired detection images;
S5, building an image pyramid from each detection image;
S6, extracting features from the image pyramid: the feature extractor of the CNN fork detector extracts features from the whole detection image, forming multiple feature maps through repeated convolution and down-sampling;
S7, scanning the feature maps to form feature vectors;
S8, classifying the features: the classifier of the CNN fork detector classifies each feature vector; if the classifier output exceeds a set threshold, the region of the detection image corresponding to the window is judged to contain a fork, otherwise it is judged to be background;
S9, merging the detection windows and outputting the result.
Further, step S1 is implemented as follows: a large number of fork samples are photographed with a mobile phone; the fork sample set covers different fork shapes, various viewing angles of the fork and various distances to the fork.
Further, step S2 comprises the following sub-steps:
S21, scaling the fork samples and non-fork samples to a set sample size;
S22, randomly applying horizontal flips, translations, scale changes, rotations and some color changes to each fork sample to enlarge the fork sample set;
S23, normalizing all samples.
Further, step S3 is implemented as follows: the CNN fork detector is trained with the back-propagation (BP) algorithm; each iteration computes the network error and updates the weights using mini-batches, and training stops when the accuracy on the validation set no longer improves, yielding the CNN fork detector.
Further, step S7 comprises the following sub-steps:
S71, scanning, with a set window size, all the feature maps generated by the last down-sampling layer simultaneously;
S72, concatenating the feature values inside the window into a feature vector.
Further, step S9 comprises the following sub-steps:
S91, after all pyramid levels have been processed, merging all overlapping detections using non-maximum suppression;
S92, drawing the fork detection results on the detection image and outputting them, completing fork detection; if a fork is detected, its depth is estimated by regression over the fork region.
Further, the CNN fork detector is a multi-layer model divided into two parts, a feature extractor and a classifier: the feature extractor comprises the input layer, convolutional layers and down-sampling layers at the front of the network, and the classifier comprises the fully connected layers at the back of the network.
The beneficial effects of the invention are as follows: the invention applies convolutional neural networks to fork detection and depth estimation. In the training stage, the convolutional neural network learns the essential features of forks from a large number of fork and non-fork samples from different outdoor scenes; these features have higher separability than hand-designed ones. In the detection stage, features are extracted by scanning windows over the detection image and classified by the network's classifier, and fork depth is obtained through the network's linear regression. The method effectively improves the robustness and interference resistance of fork detection; the resulting CNN fork detector's accuracy meets practical engineering requirements, and the method can be widely applied to the detection of various forks.
Brief description
Fig. 1 is a flow chart of the fork identification and depth estimation method of the present invention;
Fig. 2 is the network structure diagram of the CNN fork detector of the present invention.
Specific embodiment
The present invention uses convolutional neural networks for fork detection and depth estimation. In the training stage, a large number of fork and non-fork samples are first collected, and the fork samples are annotated with depth information for training the fork classifier and the fork depth estimator. The training samples are then preprocessed: scaled, pixel-value ranges normalized, and the training set augmented. Finally the CNN fork detector is trained with the BP algorithm. Through repeated training, the CNN fork detector learns the essential features of forks and can perform accurate fork detection. In the detection stage, the detection image is first preprocessed and, for multi-scale detection, an image pyramid is built from it; the feature extractor of the CNN fork detector then extracts features from the detection image, forming multiple feature maps; next, a window scans the feature maps and the feature values inside the window are concatenated into a feature vector, which the classifier of the CNN fork detector classifies to decide whether it is a fork; finally, all overlapping detections are merged with non-maximum suppression and the results are output. The invention uses a convolutional neural network to extract image features, which effectively improves the robustness and interference resistance of fork detection, so that the detector's accuracy meets practical engineering requirements. The technical solution of the invention is further described below with reference to the accompanying drawings.
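The multi-scale step of this pipeline can be sketched as follows, using the six pyramid scales given later in the embodiment (0.5 to 1.0 in steps of 0.1). Nearest-neighbour resampling and the helper names are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def build_pyramid(img, scales=(0.5, 0.6, 0.7, 0.8, 0.9, 1.0)):
    """Build the 6-level image pyramid used for multi-scale fork detection.
    Nearest-neighbour resampling keeps the sketch dependency-free; a real
    pipeline would use proper interpolation."""
    h, w = img.shape[:2]
    levels = []
    for s in scales:
        nh, nw = int(h * s), int(w * s)
        rows = (np.arange(nh) / s).astype(int)  # map each output row to a source row
        cols = (np.arange(nw) / s).astype(int)
        levels.append(img[rows][:, cols])
    return levels

img = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a detection image
pyr = build_pyramid(img)
print(pyr[0].shape[:2], pyr[-1].shape[:2])  # (240, 320) (480, 640)
```

Each pyramid level is then fed to the same detector, so a fixed detection window effectively covers forks at several apparent sizes.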
As shown in Fig. 1, the road fork identification and depth estimation method based on convolutional neural networks comprises two phases, offline training and online detection. The offline training comprises the following steps:
S1, collecting fork samples and non-fork samples in various outdoor scenes, and classifying the fork samples. Specifically, a large number of fork samples are photographed with a mobile phone; the fork sample set covers different fork shapes, various viewing angles of the fork and various distances to the fork.
Training the CNN fork detector requires a large number of samples. Since prior research on fork detection is scarce and no related data set exists, 1000 fork samples were taken with a mobile phone, covering different fork shapes and most viewing angles, and the depth of each fork was measured with an infrared rangefinder. The original images were normalized to 480 pixels wide and 640 pixels high. The fork depths fall into 4 categories: 5 m, 10 m, 15 m and 20 m.
S2, preprocessing the fork samples and the non-fork samples, comprising the following sub-steps:
S21, scaling the fork samples and non-fork samples to a set sample size;
S22, randomly applying horizontal flips, translations, scale changes, rotations and some color changes to each fork sample to enlarge the fork sample set;
S23, normalizing all samples.
To increase the sample count, windows of different sizes are moved over the original images to obtain fork and non-fork samples, the window size being chosen according to the depth information. The images obtained from these windows are finally normalized to the same size, 224×224. All samples are RGB color images; the pixel values need not be changed.
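The augmentation and window normalization of S21–S23 can be sketched as below. The crop range, flip probability and nearest-neighbour resize are illustrative assumptions; the patent only states which transformations are applied, not their parameters:

```python
import numpy as np

def resize_nn(img, size=(224, 224)):
    """Nearest-neighbour resize of an HxWx3 uint8 image (illustrative only)."""
    h, w = img.shape[:2]
    rows = (np.arange(size[0]) * h / size[0]).astype(int)
    cols = (np.arange(size[1]) * w / size[1]).astype(int)
    return img[rows][:, cols]

def augment(img, rng):
    """One random augmentation pass: a random crop stands in for the
    translation and scale changes, followed by a horizontal flip."""
    h, w = img.shape[:2]
    ch = rng.integers(int(0.8 * h), h + 1)   # crop height (assumed 80-100%)
    cw = rng.integers(int(0.8 * w), w + 1)   # crop width
    y = rng.integers(0, h - ch + 1)
    x = rng.integers(0, w - cw + 1)
    crop = img[y:y + ch, x:x + cw]
    if rng.random() < 0.5:
        crop = crop[:, ::-1]                 # horizontal flip
    return resize_nn(crop)                   # normalize to the 224x224 input size

rng = np.random.default_rng(0)
sample = rng.integers(0, 256, (480, 640, 3), dtype=np.uint8)  # stand-in photo
out = augment(sample, rng)
print(out.shape)  # (224, 224, 3)
```

Repeating `augment` on each original photo multiplies the effective training-set size, which is the point of step S22.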
S3, training the CNN fork detector. Specifically, the CNN fork detector is trained with the BP algorithm; each iteration computes the network error and updates the weights using mini-batches, and training stops when the accuracy on the validation set no longer improves, yielding the CNN fork detector.
The CNN fork detector is a multi-layer model that learns features automatically and in a supervised manner from a large number of samples. Its input is the raw pixels of an image; its outputs are the image's classification label and depth value. The CNN fork detector network comprises 5 convolutional layers, 3 fully connected layers and 1 softmax layer. The network structure is shown in Fig. 2.
The first convolutional layer filters the 224×224×3 input image with 96 kernels of size 11×11×3 at a stride of 4 pixels (the stride is the distance between the receptive-field centers of neighboring neurons in the same kernel map). The second convolutional layer takes the (response-normalized and pooled) output of the first as its input and filters it with 256 kernels of size 5×5×48. The third, fourth and fifth convolutional layers are connected in sequence without any intervening pooling or normalization layers. The third convolutional layer has 384 kernels of size 3×3×256 connected to the (normalized, pooled) output of the second; the fourth has 384 kernels of size 3×3×192, and the fifth has 256 kernels of size 3×3×192. Each fully connected layer has 4096 neurons. In Fig. 2, dropout is applied to the first two fully connected layers; without dropout the network exhibits substantial overfitting, while dropout roughly doubles the number of iterations required for convergence. ReLU is used as the activation function.
Besides fork detection, the network model must also perform depth estimation. A linear regression layer is therefore attached to the final layer of the convolutional neural network: after the high-level features are extracted, a simple linear regression converts them into the corresponding depth value.
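The architecture just described (5 convolutional layers, 3 fully connected layers with dropout on the first two, ReLU activations, a softmax classification head and a linear-regression depth head) can be sketched in PyTorch as below. This is a single-path simplification: the 5×5×48 and 3×3×192 kernel depths in the text reflect a two-branch layout that is merged here, and the exact placement of pooling and normalization is an assumption:

```python
import torch
import torch.nn as nn

class ForkDetector(nn.Module):
    """Sketch of the described CNN fork detector (not the patented weights):
    feature extractor = conv/pool/norm front end, classifier = FC back end,
    plus a linear-regression depth head on the last hidden layer."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 96, 11, stride=4), nn.ReLU(),
            nn.LocalResponseNorm(5), nn.MaxPool2d(3, 2),
            nn.Conv2d(96, 256, 5, padding=2), nn.ReLU(),
            nn.LocalResponseNorm(5), nn.MaxPool2d(3, 2),
            nn.Conv2d(256, 384, 3, padding=1), nn.ReLU(),
            nn.Conv2d(384, 384, 3, padding=1), nn.ReLU(),
            nn.Conv2d(384, 256, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(3, 2),
        )
        self.fc = nn.Sequential(
            nn.Flatten(),
            # 5x5 is the spatial size after the conv stack for a 224x224
            # input with no padding on the first convolution
            nn.Dropout(0.5), nn.Linear(256 * 5 * 5, 4096), nn.ReLU(),
            nn.Dropout(0.5), nn.Linear(4096, 4096), nn.ReLU(),
        )
        self.classifier = nn.Linear(4096, num_classes)  # softmax applied below
        self.depth = nn.Linear(4096, 1)                 # linear regression head

    def forward(self, x):
        h = self.fc(self.features(x))
        return torch.softmax(self.classifier(h), dim=1), self.depth(h)

model = ForkDetector().eval()
cls, depth = model(torch.zeros(1, 3, 224, 224))
print(cls.shape, depth.shape)  # torch.Size([1, 2]) torch.Size([1, 1])
```

Both heads share the 4096-dimensional features of the last fully connected layer, matching the text's "extract high-level features, then regress depth" description.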
The CNN fork detector is trained with the BP algorithm, updating the network parameters according to the error between the network output and the sample labels. During training, the learning rate is set to 0.01; each iteration inputs 128 samples and updates the parameters with the mean error. The number of iterations is determined by the performance on the validation set: training stops when the validation accuracy no longer improves.
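The training procedure can be sketched as a mini-batch loop with early stopping on validation accuracy. The `patience` parameter and the tiny synthetic demo at the bottom are illustrative assumptions; the text only specifies the learning rate (0.01), the batch size (128) and the stopping criterion:

```python
import torch
import torch.nn as nn

def train(model, train_loader, val_loader, lr=0.01, patience=1):
    """Mini-batch BP training that stops when validation accuracy
    stops improving for `patience` consecutive epochs."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()          # mean error over the batch
    best_acc, stale = 0.0, 0
    while stale < patience:
        model.train()
        for x, y in train_loader:            # 128-sample batches in the patent
            opt.zero_grad()
            loss_fn(model(x), y).backward()  # BP: output error drives the update
            opt.step()
        model.eval()
        with torch.no_grad():
            hits = sum((model(x).argmax(1) == y).sum().item()
                       for x, y in val_loader)
            total = sum(len(y) for _, y in val_loader)
        acc = hits / total
        if acc > best_acc:
            best_acc, stale = acc, 0         # validation improved: keep training
        else:
            stale += 1                       # no improvement: move toward stopping
    return best_acc

# synthetic demo: a linearly separable toy problem stands in for fork images
torch.manual_seed(0)
toy = nn.Sequential(nn.Flatten(), nn.Linear(4, 2))
x = torch.randn(64, 4)
y = (x.sum(1) > 0).long()
loader = [(x[i:i + 16], y[i:i + 16]) for i in range(0, 64, 16)]
best = train(toy, loader, loader)
print("stopped with best validation accuracy", best)
```

Because accuracy is bounded and must strictly improve to reset the counter, the loop is guaranteed to terminate.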
The CNN fork detector is divided into two parts, a feature extractor and a classifier: the feature extractor comprises the input layer, convolutional layers and down-sampling layers at the front of the network; the classifier comprises the fully connected layers at the back of the network.
The detection pipeline of the present invention differs from conventional target detection pipelines in the order of the feature-extraction and window-scanning steps. Conventional techniques mostly scan windows pixel by pixel over the detection image at a set window size and then extract features from the sub-image in each window. In the present invention, because the convolutional feature extractor is not limited by image size (once the convolution kernels have been learned, features can be extracted from an image of any size), features are first extracted from the whole detection image, forming multiple feature maps; the windows are then scanned simultaneously over these feature maps, and the feature values inside each window are concatenated and passed to the classifier.
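This extract-once-then-scan order can be sketched as below: the window slides over the stack of feature maps rather than over the raw image. The window size, stride and feature-map dimensions are illustrative assumptions:

```python
import numpy as np

def scan_feature_maps(fmaps, win=6, stride=1):
    """Scan a window over a stack of feature maps (C x H x W) and
    concatenate the values inside the window into one feature vector
    per position, returning the vectors and their window origins."""
    c, h, w = fmaps.shape
    vecs, boxes = [], []
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            vecs.append(fmaps[:, y:y + win, x:x + win].ravel())
            boxes.append((y, x))
    return np.stack(vecs), boxes

# stand-in for the 256 feature maps produced by the last conv layer
fmaps = np.random.default_rng(0).standard_normal((256, 13, 13))
vecs, boxes = scan_feature_maps(fmaps)
print(vecs.shape)  # one 256*6*6 vector per window position: (64, 9216)
```

Each row of `vecs` would then be fed to the fully connected classifier, and `boxes` maps each score back to a region of the original image.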
The online detection comprises the following steps:
S4, acquiring detection images, and preprocessing the acquired detection images;
S5, building an image pyramid from each detection image; the pyramid uses 6 scales, [0.5, 0.6, 0.7, 0.8, 0.9, 1.0], to facilitate multi-scale fork detection;
S6, extracting features from the image pyramid: the feature extractor of the CNN fork detector extracts features from the whole detection image, forming multiple feature maps through repeated convolution and down-sampling;
S7, scanning the feature maps to form feature vectors, comprising the following sub-steps:
S71, scanning, with a set window size, all the feature maps generated by the last down-sampling layer simultaneously;
S72, concatenating the feature values inside the window into a feature vector;
S8, classifying the features: the classifier of the CNN fork detector classifies each feature vector; if the classifier output exceeds the set threshold (0.8 in this embodiment), the region of the detection image corresponding to the window is judged to contain a fork, otherwise it is judged to be background;
S9, merging the detection windows and outputting the result, comprising the following sub-steps:
S91, after all pyramid levels have been processed, merging all overlapping detections using non-maximum suppression;
S92, drawing the fork detection results on the detection image and outputting them, completing fork detection; if a fork is detected, its depth is obtained by regression over the fork region.
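The non-maximum suppression of step S91 can be sketched as the standard greedy procedure below. The IoU threshold is an illustrative assumption; the patent only states that overlapping detections are merged:

```python
import numpy as np

def nms(boxes, scores, iou_thr=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    drop boxes overlapping it by more than iou_thr, repeat.
    Boxes are (x1, y1, x2, y2)."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        # intersection of the best box with all remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_thr]    # suppress heavy overlaps
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # the two overlapping boxes merge: [0, 2]
```

Running NMS after all pyramid levels have been scanned collapses the intersecting detections of the same fork into a single output window.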
The collected data set of 1000 fork images was used for testing: at an average of 0.25 false alarms per image, the CNN fork detector reached a detection accuracy of 89.6%, demonstrating that the detection method of the present invention has high accuracy and practicality.
Those of ordinary skill in the art will appreciate that the embodiments described here are intended to help the reader understand the principles of the present invention, and that the scope of protection is not limited to these specific statements and embodiments. Those of ordinary skill in the art can make various other concrete variations and combinations based on the technical teachings disclosed herein without departing from the essence of the invention, and such variations and combinations remain within the scope of the present invention.
Claims (7)
1. A road fork identification and depth estimation method based on convolutional neural networks, characterized by comprising two phases, offline training and online detection; the offline training comprising the following steps:
S1, collecting fork samples and non-fork samples in various outdoor scenes, and classifying the fork samples;
S2, preprocessing the fork samples and the non-fork samples;
S3, training a CNN fork detector;
the online detection comprising the following steps:
S4, acquiring detection images, and preprocessing the acquired detection images;
S5, building an image pyramid from each detection image;
S6, extracting features from the image pyramid: the feature extractor of the CNN fork detector extracts features from the whole detection image, forming multiple feature maps through repeated convolution and down-sampling;
S7, scanning the feature maps to form feature vectors;
S8, classifying the features: the classifier of the CNN fork detector classifies each feature vector; if the classifier output exceeds a set threshold, the region of the detection image corresponding to the window is judged to contain a fork, otherwise it is judged to be background;
S9, merging the detection windows and outputting the result.
2. The road fork identification and depth estimation method based on convolutional neural networks according to claim 1, characterized in that step S1 is implemented as follows: a large number of fork samples are photographed with a mobile phone; the fork sample set covers different fork shapes, various viewing angles of the fork and various distances to the fork.
3. The road fork identification and depth estimation method based on convolutional neural networks according to claim 2, characterized in that step S2 comprises the following sub-steps:
S21, scaling the fork samples and non-fork samples to a set sample size;
S22, randomly applying horizontal flips, translations, scale changes, rotations and some color changes to each fork sample to enlarge the fork sample set;
S23, normalizing all samples.
4. The road fork identification and depth estimation method based on convolutional neural networks according to claim 3, characterized in that step S3 is implemented as follows: the CNN fork detector is trained with the BP algorithm; each iteration computes the network error and updates the weights using mini-batches, and training stops when the accuracy on the validation set no longer improves, yielding the CNN fork detector.
5. The road fork identification and depth estimation method based on convolutional neural networks according to claim 4, characterized in that step S7 comprises the following sub-steps:
S71, scanning, with a set window size, all the feature maps generated by the last down-sampling layer simultaneously;
S72, concatenating the feature values inside the window into a feature vector.
6. The road fork identification and depth estimation method based on convolutional neural networks according to claim 5, characterized in that step S9 comprises the following sub-steps:
S91, after all pyramid levels have been processed, merging all overlapping detections using non-maximum suppression;
S92, drawing the fork detection results on the detection image and outputting them, completing fork detection; if a fork is detected, its depth is estimated by regression over the fork region.
7. The road fork identification and depth estimation method based on a convolutional neural network according to claim 1, characterized in that the CNN road-fork detector is a multilayer model divided into two parts: a feature extractor and a classifier; the feature extractor comprises the input layer, convolutional layers and down-sampling layers at the front end of the network, and the classifier comprises the fully connected layers at the back end of the network.
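The two-part structure of claim 7, a front-end feature extractor (input, convolutional and down-sampling layers) feeding a back-end fully connected classifier, can be illustrated with a toy numpy forward pass. Layer sizes, the ReLU non-linearity, and the sigmoid output are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernel):
    """Valid convolution of one map with one kernel (front-end conv layer)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(img[y:y + kh, x:x + kw] * kernel)
    return out

def pool2x2(fm):
    """2x2 max down-sampling layer."""
    h, w = fm.shape
    return fm[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def fully_connected(vec, weights, bias):
    """Back-end fully connected classifier layer with sigmoid scores."""
    return 1.0 / (1.0 + np.exp(-(weights @ vec + bias)))

# input layer -> conv -> down-sampling (feature extractor) -> FC (classifier)
image = rng.random((8, 8))
kernel = rng.standard_normal((3, 3))
features = pool2x2(np.maximum(conv2d(image, kernel), 0)).reshape(-1)  # ReLU + pool
scores = fully_connected(features, rng.standard_normal((2, features.size)), np.zeros(2))
```

In the patent's arrangement the extractor's output feeds the fully connected layers, which here produce two class scores (road fork / background).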
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610818250.5A CN106408015A (en) | 2016-09-13 | 2016-09-13 | Road fork identification and depth estimation method based on convolutional neural network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610818250.5A CN106408015A (en) | 2016-09-13 | 2016-09-13 | Road fork identification and depth estimation method based on convolutional neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106408015A true CN106408015A (en) | 2017-02-15 |
Family
ID=57999067
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610818250.5A Pending CN106408015A (en) | 2016-09-13 | 2016-09-13 | Road fork identification and depth estimation method based on convolutional neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106408015A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102750702A (en) * | 2012-06-21 | 2012-10-24 | 东华大学 | Monocular infrared image depth estimation method based on optimized BP (Back Propagation) neural network model |
CN104036323A (en) * | 2014-06-26 | 2014-09-10 | 叶茂 | Vehicle detection method based on convolutional neural network |
CN105069472A (en) * | 2015-08-03 | 2015-11-18 | 电子科技大学 | Vehicle detection method based on convolutional neural network self-adaption |
2016-09-13: CN application CN201610818250.5A filed, published as CN106408015A (en), status: active, Pending
Non-Patent Citations (1)
Title |
---|
TIAN HU: "Depth Estimation from Monocular Images" (单目图像的深度估计), China Doctoral Dissertations Full-text Database (Electronic Journal), Information Science and Technology Series *
Cited By (38)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106981080A (en) * | 2017-02-24 | 2017-07-25 | 东华大学 | Night unmanned vehicle scene depth method of estimation based on infrared image and radar data |
CN108509828A (en) * | 2017-02-28 | 2018-09-07 | 深圳市朗驰欣创科技股份有限公司 | A kind of face identification method and face identification device |
CN106951473A (en) * | 2017-03-06 | 2017-07-14 | 浙江大学 | Towards the construction method of the deep vision question answering system of dysopia personage |
CN106951473B (en) * | 2017-03-06 | 2019-11-26 | 浙江大学 | The construction method of deep vision question answering system towards dysopia personage |
CN108734329A (en) * | 2017-04-21 | 2018-11-02 | 北京微影时代科技有限公司 | A kind of method and device at prediction film next day box office |
CN107204010A (en) * | 2017-04-28 | 2017-09-26 | 中国科学院计算技术研究所 | A kind of monocular image depth estimation method and system |
CN107204010B (en) * | 2017-04-28 | 2019-11-19 | 中国科学院计算技术研究所 | A kind of monocular image depth estimation method and system |
CN108932523A (en) * | 2017-05-26 | 2018-12-04 | 日东电工株式会社 | Image classification and classification data manufacturing system and method, storage medium |
CN108932523B (en) * | 2017-05-26 | 2024-04-09 | 日东电工株式会社 | Image classification and classified data creation system and method, and storage medium |
CN107330437A (en) * | 2017-07-03 | 2017-11-07 | 贵州大学 | Feature extracting method based on the real-time detection model of convolutional neural networks target |
CN107330437B (en) * | 2017-07-03 | 2021-01-08 | 贵州大学 | Feature extraction method based on convolutional neural network target real-time detection model |
CN107766881B (en) * | 2017-09-30 | 2020-06-26 | 中国地质大学(武汉) | Way finding method and device based on basic classifier and storage device |
CN107766881A (en) * | 2017-09-30 | 2018-03-06 | 中国地质大学(武汉) | A kind of method for searching based on fundamental classifier, equipment and storage device |
CN108174289A (en) * | 2017-12-28 | 2018-06-15 | 泰康保险集团股份有限公司 | A kind of image data processing method, device, medium and electronic equipment |
CN108122562A (en) * | 2018-01-16 | 2018-06-05 | 四川大学 | A kind of audio frequency classification method based on convolutional neural networks and random forest |
CN108615244A (en) * | 2018-03-27 | 2018-10-02 | 中国地质大学(武汉) | A kind of image depth estimation method and system based on CNN and depth filter |
CN108615244B (en) * | 2018-03-27 | 2019-11-15 | 中国地质大学(武汉) | A kind of image depth estimation method and system based on CNN and depth filter |
CN108573492B (en) * | 2018-04-02 | 2020-04-03 | 电子科技大学 | Real-time radar detection area detection method |
CN108573492A (en) * | 2018-04-02 | 2018-09-25 | 电子科技大学 | A kind of real time radar search coverage detection method |
CN108877267A (en) * | 2018-08-06 | 2018-11-23 | 武汉理工大学 | A kind of intersection detection method based on vehicle-mounted monocular camera |
CN109241893B (en) * | 2018-08-27 | 2021-08-06 | 广州大学 | Road selection method and device based on artificial intelligence technology and readable storage medium |
CN109241893A (en) * | 2018-08-27 | 2019-01-18 | 广州大学 | Road selection method, device and readable storage medium storing program for executing based on artificial intelligence technology |
CN109945802A (en) * | 2018-10-11 | 2019-06-28 | 宁波深浅优视智能科技有限公司 | A kind of structural light three-dimensional measurement method |
CN109945802B (en) * | 2018-10-11 | 2021-03-09 | 苏州深浅优视智能科技有限公司 | Structured light three-dimensional measurement method |
CN109635723A (en) * | 2018-12-11 | 2019-04-16 | 讯飞智元信息科技有限公司 | A kind of occlusion detection method and device |
CN109685842A (en) * | 2018-12-14 | 2019-04-26 | 电子科技大学 | A kind of thick densification method of sparse depth based on multiple dimensioned network |
CN109612513B (en) * | 2018-12-17 | 2021-10-15 | 安徽农业大学 | Online anomaly detection method for large-scale high-dimensional sensor data |
CN109612513A (en) * | 2018-12-17 | 2019-04-12 | 安徽农业大学 | A kind of online method for detecting abnormality towards extensive higher-dimension sensing data |
CN109889724A (en) * | 2019-01-30 | 2019-06-14 | 北京达佳互联信息技术有限公司 | Image weakening method, device, electronic equipment and readable storage medium storing program for executing |
CN110175574A (en) * | 2019-05-28 | 2019-08-27 | 中国人民解放军战略支援部队信息工程大学 | A kind of Road network extraction method and device |
CN110334628A (en) * | 2019-06-26 | 2019-10-15 | 华中科技大学 | A kind of outdoor monocular image depth estimation method based on structuring random forest |
CN110334628B (en) * | 2019-06-26 | 2021-07-27 | 华中科技大学 | Outdoor monocular image depth estimation method based on structured random forest |
CN110852243A (en) * | 2019-11-06 | 2020-02-28 | 中国人民解放军战略支援部队信息工程大学 | Improved YOLOv 3-based road intersection detection method and device |
CN111209864A (en) * | 2020-01-07 | 2020-05-29 | 上海交通大学 | Target identification method for power equipment |
CN111209864B (en) * | 2020-01-07 | 2023-05-26 | 上海交通大学 | Power equipment target identification method |
CN112150804B (en) * | 2020-08-31 | 2021-10-19 | 中国地质大学(武汉) | City multi-type intersection identification method based on MaskRCNN algorithm |
CN112150804A (en) * | 2020-08-31 | 2020-12-29 | 中国地质大学(武汉) | City multi-type intersection identification method based on MaskRCNN algorithm |
CN112837361A (en) * | 2021-03-05 | 2021-05-25 | 浙江商汤科技开发有限公司 | Depth estimation method and device, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106408015A (en) | Road fork identification and depth estimation method based on convolutional neural network | |
CN110163187B (en) | F-RCNN-based remote traffic sign detection and identification method | |
CN109285139A (en) | A kind of x-ray imaging weld inspection method based on deep learning | |
CN108734143A (en) | A kind of transmission line of electricity online test method based on binocular vision of crusing robot | |
CN110598736A (en) | Power equipment infrared image fault positioning, identifying and predicting method | |
CN111444939B (en) | Small-scale equipment component detection method based on weak supervision cooperative learning in open scene of power field | |
CN108509954A (en) | A kind of more car plate dynamic identifying methods of real-time traffic scene | |
CN111783590A (en) | Multi-class small target detection method based on metric learning | |
CN109800736A (en) | A kind of method for extracting roads based on remote sensing image and deep learning | |
CN109800628A (en) | A kind of network structure and detection method for reinforcing SSD Small object pedestrian detection performance | |
CN107945153A (en) | A kind of road surface crack detection method based on deep learning | |
CN106845374A (en) | Pedestrian detection method and detection means based on deep learning | |
CN105160310A (en) | 3D (three-dimensional) convolutional neural network based human body behavior recognition method | |
CN105574550A (en) | Vehicle identification method and device | |
CN111612059A (en) | Construction method of multi-plane coding point cloud feature deep learning model based on pointpilars | |
CN112560675B (en) | Bird visual target detection method combining YOLO and rotation-fusion strategy | |
CN114399672A (en) | Railway wagon brake shoe fault detection method based on deep learning | |
CN106408030A (en) | SAR image classification method based on middle lamella semantic attribute and convolution neural network | |
CN113255589B (en) | Target detection method and system based on multi-convolution fusion network | |
CN113609896A (en) | Object-level remote sensing change detection method and system based on dual-correlation attention | |
CN109002752A (en) | A kind of complicated common scene rapid pedestrian detection method based on deep learning | |
CN117593304B (en) | Semi-supervised industrial product surface defect detection method based on cross local global features | |
CN113313094B (en) | Vehicle-mounted image target detection method and system based on convolutional neural network | |
CN108256462A (en) | A kind of demographic method in market monitor video | |
CN109086803A (en) | A kind of haze visibility detection system and method based on deep learning and the personalized factor |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20170215 |