CN107545238A - Underground coal mine pedestrian detection method based on deep learning - Google Patents
- Publication number: CN107545238A
- Application number: CN201710532800.1A
- Authority: CN (China)
- Prior art keywords: layer, group, convolutional, deep learning, coal mine
- Legal status: Pending (the legal status is an assumption and not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
- Landscapes: Image Analysis (AREA)
Abstract
The invention discloses a deep-learning-based method for detecting pedestrians in underground coal mines. The method is as follows: establish a deep-learning convolutional neural network model for underground pedestrian detection; convert the acquired underground surveillance video into pictures and input them into the model; extract the low-level and high-level features of pedestrians in the surveillance video; and fuse the low-level and high-level features to produce the final detection result. The method detects quickly and with high accuracy, and satisfies real-time requirements.
Description
Technical field
The invention belongs to the field of artificial intelligence, and in particular relates to a deep-learning-based method for detecting pedestrians in underground coal mines.
Background technology
At present, all kinds of public places are equipped with effective camera surveillance, which can monitor accidents of all kinds and safeguard the people's property and social stability. Underground, however, illumination is dark, the background is cluttered and the environment is complex, so little visual information in the surveillance video can be exploited; unlike under natural-light conditions, traditional image-processing techniques based on hand-crafted features such as HOG, Haar and LBP cannot achieve pedestrian detection in underground coal mines.
Summary of the invention
The technical problem to be solved by the invention, in view of the above deficiencies of the prior art, is to provide a deep-learning-based underground coal mine pedestrian detection method that is fast, highly accurate and satisfies real-time requirements.
To solve the above technical problem, the technical solution adopted by the invention is as follows: establish a deep-learning convolutional neural network model for underground pedestrian detection; convert the acquired underground surveillance video into pictures and input them into the model; extract the low-level and high-level features of pedestrians in the surveillance video; and fuse the low-level and high-level features to produce the final detection result.
Further, the deep-learning convolutional neural network model comprises n convolutional layers and 6 down-sampling layers, divided into the following seven groups. The first and second groups each consist of one convolutional layer followed by one down-sampling layer; the third group consists of three convolutional layers followed by one down-sampling layer; the fourth and fifth groups each consist of several convolutional layers followed by one down-sampling layer, the fourth group containing fewer convolutional layers than the fifth; the sixth group consists of several convolutional layers; and the seventh group consists of one or more convolutional layers followed by one down-sampling layer. The first through sixth groups are connected in sequence; meanwhile the output of the fourth group serves as the input of the seventh group, and the output of the seventh group is added to the output of the sixth group to form the output of the whole model, where 23 ≤ n ≤ 150 and n is an integer.
Further, the deep-learning convolutional neural network model comprises 23 convolutional layers and 6 down-sampling layers, divided into the following seven groups. The first and second groups each consist of one convolutional layer followed by one down-sampling layer; the third group consists of three convolutional layers followed by one down-sampling layer; the fourth group consists of three convolutional layers followed by one down-sampling layer; the fifth group consists of five convolutional layers followed by one down-sampling layer; the sixth group consists of nine convolutional layers; and the seventh group consists of one convolutional layer followed by one down-sampling layer. The first through sixth groups are connected in sequence; meanwhile the output of the fourth group serves as the input of the seventh group, and the output of the seventh group is added to the output of the sixth group to form the output of the whole model.
Further, in the first, second and third groups, each convolutional layer is formed of 3×3 convolution kernels with stride 1.
Further, in the fourth, fifth and sixth groups, the convolutional layers are formed of 3×3, stride-1 convolution kernels and 1×1, stride-1 convolution kernels.
Further, in the seventh group, the convolutional layer is formed of 30 convolution kernels of size 1×1 with stride 1.
Further, before pictures are input into the deep-learning convolutional neural network model, a training data set is chosen, and the model is trained with minimisation of the loss function over the targets in the images as the training objective, yielding a trained deep-learning convolutional neural network model.
The deep-learning-based underground coal mine pedestrian detection method of the invention has the following advantages: 1. Detection is fast and satisfies real-time requirements. 2. Features of different levels are fused, improving the accuracy rate, so detection accuracy is high. 3. Deep learning, the most prominent technique of recent years, replaces hand-crafted features: image features are obtained with a convolutional neural network.
Brief description of the drawings
Fig. 1 is a structural diagram of the deep-learning model of the invention.
Fig. 2 is a schematic diagram of the convolution calculation process of the invention.
Fig. 3 illustrates the maximum-pooling method used for down-sampling in the invention.
Detailed description of the embodiments
The deep-learning-based underground coal mine pedestrian detection method of the invention is as follows: establish a deep-learning convolutional neural network model for underground pedestrian detection; convert the acquired underground surveillance video into pictures and input them into the model; extract the low-level and high-level features of pedestrians in the surveillance video; and fuse the low-level and high-level features to produce the final detection result.
As shown in Fig. 1, the above deep-learning convolutional neural network model comprises n convolutional layers and 6 down-sampling layers, divided into the following seven groups. The first and second groups each consist of one convolutional layer followed by one down-sampling layer; the third group consists of three convolutional layers followed by one down-sampling layer; the fourth and fifth groups each consist of several convolutional layers followed by one down-sampling layer, the fourth group containing fewer convolutional layers than the fifth; the sixth group consists of several convolutional layers; and the seventh group consists of one or more convolutional layers followed by one down-sampling layer. The first through sixth groups are connected in sequence; meanwhile the output of the fourth group serves as the input of the seventh group, and the output of the seventh group is added to the output of the sixth group to form the output of the whole model, where 23 ≤ n ≤ 150 and n is an integer.
In actual processing of underground coal-mine pictures, the best results are obtained when n = 23, i.e. the deep-learning convolutional neural network model comprises 23 convolutional layers and 6 down-sampling layers, divided into the following seven groups. The first and second groups each consist of one convolutional layer followed by one down-sampling layer; the third group consists of three convolutional layers followed by one down-sampling layer; the fourth group consists of three convolutional layers followed by one down-sampling layer; the fifth group consists of five convolutional layers followed by one down-sampling layer; the sixth group consists of nine convolutional layers; and the seventh group consists of one convolutional layer followed by one down-sampling layer. The first through sixth groups are connected in sequence; meanwhile the output of the fourth group serves as the input of the seventh group, and the output of the seventh group is added to the output of the sixth group to form the output of the whole model.
In the first, second and third groups of the deep-learning convolutional neural network model, each convolutional layer is formed of 3×3 convolution kernels with stride 1. In the fourth, fifth and sixth groups, the convolutional layers are formed of 3×3, stride-1 convolution kernels and 1×1, stride-1 convolution kernels. In the seventh group, the convolutional layer is formed of 30 convolution kernels of size 1×1 with stride 1.
In the deep-learning-based underground coal mine pedestrian detection method, before pictures are input into the deep-learning convolutional neural network model, a training data set is chosen and the model is trained with minimisation of the loss function over the targets in the images as the objective, to obtain the trained deep-learning convolutional neural network model. The loss function constructed over the targets in the picture is as follows:

$$\begin{aligned}
Loss ={}& \lambda_{coord}\sum_{i=0}^{s^2}\sum_{j=0}^{B} L_{ij}^{obj}\left[(x_i-x_i')^2+(y_i-y_i')^2\right]\\
&+\lambda_{coord}\sum_{i=0}^{s^2}\sum_{j=0}^{B} L_{ij}^{obj}\left[\left(\sqrt{\omega_i}-\sqrt{\omega_i'}\right)^2+\left(\sqrt{h_i}-\sqrt{h_i'}\right)^2\right]\\
&+\sum_{i=0}^{s^2}\sum_{j=0}^{B} L_{ij}^{obj}\left(C_i-C_i'\right)^2
+\lambda_{noobj}\sum_{i=0}^{s^2}\sum_{j=0}^{B} L_{ij}^{noobj}\left(C_i-C_i'\right)^2\\
&+\sum_{i=0}^{s^2} L_i^{obj}\sum_{c\in classes}\left(p_i(c)-p_i'(c)\right)^2
\end{aligned}$$

where: s² indicates that the picture has been divided into an s × s grid, s² cells in total; B is the number of bounding boxes predicted per grid cell; x_i, y_i, ω_i, h_i are the parameters of the bounding box predicted for the target in the i-th cell, namely the coordinates of the target centre, its width and its height; x_i', y_i', ω_i', h_i' are the corresponding actual (labelled) values; λ_coord and λ_noobj are weight coefficients; C_i is the confidence score for whether the i-th cell contains a target, and C_i' the corresponding ground-truth value; L_ij^obj indicates whether the target falls into the j-th bounding box of the i-th cell, and L_ij^noobj that it does not; L_i^obj indicates whether the centre point of a target falls in cell i; i ranges from 0 to s² and j from 0 to B; classes is the set of target categories, i.e. the number of classes contained in the data set during model training; c denotes a specific target category; p_i(c) is the probability of target category c predicted by the i-th cell; p_i'(c) is the value the i-th cell is responsible for predicting for category c. In the loss function, symbols without a prime are predicted values and symbols with a prime are actual values.
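As an illustrative sketch only, the five terms of the loss function described above can be written out in plain Python for a toy setting: a 2×2 grid, one predicted box per cell (B = 1) and a single class. The dictionary field names and the weight values λ_coord = 5 and λ_noobj = 0.5 are assumptions of this sketch, not values stated in the patent.

```python
import math

LAMBDA_COORD, LAMBDA_NOOBJ = 5.0, 0.5  # assumed weight coefficients

def yolo_loss(pred, truth):
    """Sum-of-squared-error loss over grid cells, mirroring the five terms
    of the loss above. Each cell dict holds: obj (0/1 ground-truth indicator),
    x, y, w, h (box parameters), conf (confidence C) and p (class probability)."""
    loss = 0.0
    for p, t in zip(pred, truth):
        if t["obj"]:
            # localisation terms (square roots on w, h damp large-box errors)
            loss += LAMBDA_COORD * ((p["x"] - t["x"]) ** 2 + (p["y"] - t["y"]) ** 2)
            loss += LAMBDA_COORD * ((math.sqrt(p["w"]) - math.sqrt(t["w"])) ** 2
                                    + (math.sqrt(p["h"]) - math.sqrt(t["h"])) ** 2)
            loss += (p["conf"] - t["conf"]) ** 2   # confidence, object present
            loss += (p["p"] - t["p"]) ** 2         # class-probability term
        else:
            # confidence term for cells with no object, down-weighted
            loss += LAMBDA_NOOBJ * (p["conf"] - t["conf"]) ** 2
    return loss
```

A perfect prediction gives zero loss; a spurious confidence of 1 in an empty cell contributes λ_noobj × 1².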
The detailed picture-processing procedure is as follows.
The input picture is first resized to a resolution of 416×416 and then divided into a 13×13 grid. If the centre of a target falls in a grid cell, that cell is responsible for detecting the target. Each grid cell predicts 5 bounding boxes and the confidence of the category represented by those boxes. The confidence expresses both whether the box contains a category and how accurately that category is predicted. Confidence is defined as the product of the detection probability of the target and the IOU:

Confidence = P(Object) × IOU

P(Object) is the probability that a target object is present in the grid cell: 1 if a target is present, 0 otherwise. The Intersection Over Union (IOU) is the overlap ratio between the box produced by the model and the labelled box, i.e. the intersection of the detection result and the marked ground-truth rectangle divided by their union.
If no target to be detected is present in a grid cell, its confidence score is 0; otherwise it equals the IOU between the detection result and the true extent.
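The IOU and confidence definitions above can be sketched as small Python functions; the corner-coordinate box representation (x1, y1, x2, y2) is an assumption of this sketch.

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)   # intersection area
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter                      # union area
    return inter / union if union > 0 else 0.0

def confidence(p_object, iou_value):
    """Confidence = P(Object) * IOU, as defined in the text."""
    return p_object * iou_value
```

For two unit-overlap 2×2 boxes the IOU is 1/7 (intersection 1, union 4 + 4 − 1).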
Each bounding box consists of 5 parameters, namely (x, y, w, h, confidence). Here x, y are the coordinates of the box centre, normalised relative to its grid cell; w, h are the width and height of the box; confidence is as defined above. Each grid cell also predicts the probabilities of the C target categories, i.e. the probability that the cell contains a target of each class. The class probability is multiplied by the confidence:

p_i(c) × Confidence

and this product is used as the confidence score of each box.
Since the picture is divided into a 13×13 grid, each cell predicts 5 bounding boxes, each box has 5 parameters, and the data set contains 1 target category (detecting people), the final prediction contains 13×13×5×(5+1) = 5070 parameters. Because each cell predicts several bounding boxes, during training of the network model only the box that best detects each target is made responsible for predicting it: when choosing a box, the predicted box whose IOU with the true target extent is largest is selected to predict that target. In this way each object is detected by one specific bounding box.
When a target in the image is large, or lies on the boundary between cells, several cells may localise it; the duplicate detections are then removed by non-maximum suppression (non-maximal suppression).
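A minimal greedy sketch of the non-maximum suppression step, under the same assumed (x1, y1, x2, y2) box representation; the scores and the 0.5 overlap threshold below are illustrative values, not figures from the patent.

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-maximum suppression: repeatedly keep the highest-scoring
    remaining box and drop every box overlapping it by more than the threshold.
    Returns the indices of the kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= iou_threshold]
    return keep
```

Two near-identical detections of the same pedestrian collapse to the higher-scoring one, while a distant detection survives.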
The picture input to the convolutional neural network model has resolution 416×416, and the final output is 30 feature maps of size 13×13. The detailed process is as follows.
First group: the input picture passes through a convolutional layer of 32 convolution kernels of size 3×3 with stride 1, then through a down-sampling layer of size 2×2 with stride 2, giving 32 feature maps of size 208×208.
Second group: the output of the first group passes through a convolutional layer of 64 kernels of size 3×3, stride 1, then through a 2×2, stride-2 down-sampling layer, giving 64 feature maps of size 104×104.
Third group: the output of the second group passes in turn through a convolutional layer of 128 kernels of size 3×3, stride 1; a convolutional layer of 64 kernels of size 3×3, stride 1; and a convolutional layer of 128 kernels of size 3×3, stride 1; then through a 2×2, stride-2 down-sampling layer, giving 128 feature maps of size 52×52.
Fourth group: the output of the third group passes in turn through a convolutional layer of 256 kernels of size 3×3, stride 1; a convolutional layer of 128 kernels of size 1×1, stride 1; and a convolutional layer of 256 kernels of size 3×3, stride 1; then through a 2×2, stride-2 down-sampling layer, giving 256 feature maps of size 26×26.
Fifth group: the feature maps from the fourth group pass in turn through convolutional layers of 512 kernels of size 3×3, stride 1; 512 kernels of size 1×1, stride 1; 512 kernels of size 3×3, stride 1; 256 kernels of size 1×1, stride 1; and 512 kernels of size 3×3, stride 1; then through a 2×2, stride-2 down-sampling layer, giving 512 feature maps of size 13×13.
Sixth group: the feature maps from the fifth group pass in turn through convolutional layers of 1024 kernels of size 3×3, stride 1; 512 kernels of size 1×1, stride 1; 1024 kernels of size 3×3, stride 1; and 512 kernels of size 1×1, stride 1; then through 4 further convolutional layers of 1024 kernels of size 3×3, stride 1; and finally through a convolutional layer of 30 kernels of size 1×1, stride 1, giving 30 feature maps of size 13×13.
Seventh group: the feature maps from the fourth group pass through a convolutional layer of 30 kernels of size 1×1, stride 1, and a down-sampling layer; the resulting feature maps are added to those of the sixth group as the final output of the whole network.
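The spatial sizes traced through the groups above can be checked with a short sketch. It assumes, as the sizes quoted in the text imply, that every 3×3 and 1×1 convolution is "same"-padded (resolution unchanged) and that every down-sampling layer halves the resolution; the helper names are illustrative.

```python
def trace_sizes(size, groups):
    """Trace the spatial resolution through a sequence of groups.
    Each (n_convs, has_pool) entry applies n_convs 'same'-padded
    convolutions (size unchanged) and, if has_pool, one 2x2 stride-2
    down-sampling that halves the resolution."""
    sizes = [size]
    for _n_convs, has_pool in groups:
        if has_pool:
            size //= 2  # convolutions leave size unchanged; pooling halves it
        sizes.append(size)
    return sizes

# Groups 1-6 of the main branch as described above: (conv layers, pooled?).
MAIN_BRANCH = [(1, True), (1, True), (3, True), (3, True), (5, True), (9, False)]
# The shortcut branch (group 7) takes the 26x26 output of group 4 and
# down-samples it once to 13x13 before the element-wise addition.
```

Tracing 416 through the main branch reproduces the sequence 416 → 208 → 104 → 52 → 26 → 13 quoted in the text.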
The convolution calculation process is shown in Fig. 2: an input picture of size 5×5 convolved with a 3×3 convolution kernel generates a 3×3 feature map, where the number in each square represents a pixel value. Each value of the feature map is obtained by multiplying the pixel values of the input picture over the region covered by the kernel by the corresponding kernel entries and summing. For example, the element in the first row, first column of the feature map is the top-left 3×3 region of the input picture multiplied element-wise by the kernel and summed:
1×1 + 1×0 + 1×1 + 0×0 + 1×1 + 1×0 + 0×1 + 0×0 + 1×1 = 4;
similarly, the value in the first row, second column is:
1×1 + 1×0 + 0×1 + 1×0 + 1×1 + 1×0 + 0×1 + 1×0 + 1×1 = 3.
The kernel then traverses the whole input to produce the output feature map.
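The two values worked out above (4, then 3) can be reproduced with a plain stride-1 "valid" convolution. Since Fig. 2 itself is not reproduced here, the 5×5 binary picture and 3×3 kernel below are assumed values chosen to be consistent with those two results.

```python
def conv2d_valid(image, kernel):
    """'Valid' 2-D convolution (no padding, stride 1): slide the kernel over
    the image, multiply overlapping entries element-wise and sum them."""
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(image) - kh + 1, len(image[0]) - kw + 1
    return [[sum(image[i + u][j + v] * kernel[u][v]
                 for u in range(kh) for v in range(kw))
             for j in range(ow)] for i in range(oh)]

# Assumed 5x5 binary input picture and 3x3 kernel (illustrative, not from
# the patent figure); they reproduce the values 4 and 3 computed in the text.
IMAGE = [[1, 1, 1, 0, 0],
         [0, 1, 1, 1, 0],
         [0, 0, 1, 1, 1],
         [0, 0, 1, 1, 0],
         [0, 1, 1, 0, 0]]
KERNEL = [[1, 0, 1],
          [0, 1, 0],
          [1, 0, 1]]
```

Convolving the 5×5 picture with the 3×3 kernel yields a 3×3 feature map whose first two entries are 4 and 3.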
Down-sampling uses the maximum-pooling method, whose process is shown in Fig. 3. Working on the picture pixels, a region size is first set; then, for each window position on the picture or feature map, the pixel with the maximum value is extracted as the value of that region. Traversing the entire image or feature map yields a new, smaller picture or feature map, which is the result of the sampling.
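The pooling step just described can be sketched as a short function; the 2×2 window and stride-2 defaults match the down-sampling layers used in the network above.

```python
def max_pool(feature_map, window=2, stride=2):
    """Max pooling: take the maximum value inside each window position,
    moving the window by `stride` pixels, so a 2x2/stride-2 pool halves
    each spatial dimension."""
    h, w = len(feature_map), len(feature_map[0])
    return [[max(feature_map[i + u][j + v]
                 for u in range(window) for v in range(window))
             for j in range(0, w - window + 1, stride)]
            for i in range(0, h - window + 1, stride)]
```

A 4×4 map reduces to a 2×2 map holding the maximum of each quadrant.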
Claims (7)
1. A deep-learning-based underground coal mine pedestrian detection method, characterised in that the method is as follows: establish a deep-learning convolutional neural network model for underground pedestrian detection; convert the acquired underground surveillance video into pictures and input them into the model; extract the low-level and high-level features of pedestrians in the surveillance video; and fuse the low-level and high-level features to produce the final detection result.
2. The deep-learning-based underground coal mine pedestrian detection method according to claim 1, characterised in that the model comprises n convolutional layers and 6 down-sampling layers, divided into the following seven groups: the first and second groups each consist of one convolutional layer followed by one down-sampling layer; the third group consists of three convolutional layers followed by one down-sampling layer; the fourth and fifth groups each consist of several convolutional layers followed by one down-sampling layer, the fourth group containing fewer convolutional layers than the fifth; the sixth group consists of several convolutional layers; and the seventh group consists of one or more convolutional layers followed by one down-sampling layer; the first through sixth groups are connected in sequence, the output of the fourth group serves as the input of the seventh group, and the output of the seventh group is added to the output of the sixth group as the output of the whole model, where 23 ≤ n ≤ 150 and n is an integer.
3. The deep-learning-based underground coal mine pedestrian detection method according to claim 2, characterised in that the model comprises 23 convolutional layers and 6 down-sampling layers, divided into the following seven groups: the first and second groups each consist of one convolutional layer followed by one down-sampling layer; the third group of three convolutional layers followed by one down-sampling layer; the fourth group of three convolutional layers followed by one down-sampling layer; the fifth group of five convolutional layers followed by one down-sampling layer; the sixth group of nine convolutional layers; and the seventh group of one convolutional layer followed by one down-sampling layer; the first through sixth groups are connected in sequence, the output of the fourth group serves as the input of the seventh group, and the output of the seventh group is added to the output of the sixth group as the output of the whole model.
4. The deep-learning-based underground coal mine pedestrian detection method according to claim 2 or 3, characterised in that in the first, second and third groups each convolutional layer is formed of 3×3 convolution kernels with stride 1.
5. The deep-learning-based underground coal mine pedestrian detection method according to claim 4, characterised in that in the fourth, fifth and sixth groups the convolutional layers are formed of 3×3, stride-1 convolution kernels and 1×1, stride-1 convolution kernels.
6. The deep-learning-based underground coal mine pedestrian detection method according to claim 5, characterised in that in the seventh group the convolutional layer is formed of 30 convolution kernels of size 1×1 with stride 1.
7. The deep-learning-based underground coal mine pedestrian detection method according to claim 1, 2 or 3, characterised in that before pictures are input into the deep-learning convolutional neural network model a training data set is chosen, and the model is trained with minimisation of the loss function over the targets in the images as the objective, yielding a trained deep-learning convolutional neural network model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710532800.1A CN107545238A (en) | 2017-07-03 | 2017-07-03 | Underground coal mine pedestrian detection method based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107545238A true CN107545238A (en) | 2018-01-05 |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109635717A (en) * | 2018-12-10 | 2019-04-16 | 天津工业大学 | A kind of mining pedestrian detection method based on deep learning |
CN109961014A (en) * | 2019-02-25 | 2019-07-02 | 中国科学院重庆绿色智能技术研究院 | A kind of coal mine conveying belt danger zone monitoring method and system |
CN111008544A (en) * | 2018-10-08 | 2020-04-14 | 阿里巴巴集团控股有限公司 | Traffic monitoring and unmanned driving assistance system and target detection method and device |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104063719A (en) * | 2014-06-27 | 2014-09-24 | 深圳市赛为智能股份有限公司 | Method and device for pedestrian detection based on depth convolutional network |
CN105095833A (en) * | 2014-05-08 | 2015-11-25 | 中国科学院声学研究所 | Network constructing method for human face identification, identification method and system |
CN105654067A (en) * | 2016-02-02 | 2016-06-08 | 北京格灵深瞳信息技术有限公司 | Vehicle detection method and device |
CN106372630A (en) * | 2016-11-23 | 2017-02-01 | 华南理工大学 | Face direction detection method based on deep learning |
CN106780448A (en) * | 2016-12-05 | 2017-05-31 | 清华大学 | A kind of pernicious sorting technique of ultrasonic Benign Thyroid Nodules based on transfer learning Yu Fusion Features |
Legal Events

Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
RJ01 | Rejection of invention patent application after publication

Application publication date: 2018-01-05