CN107092926A - Service robot object recognition algorithm based on deep learning - Google Patents
- Publication number
- CN107092926A
- Authority
- CN
- China
- Prior art keywords
- object identification
- network
- service robot
- layer
- deep learning
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Probability & Statistics with Applications (AREA)
- Image Analysis (AREA)
Abstract
The present invention provides a service robot object recognition algorithm based on deep learning. Step one: collect images of the objects the service robot is to recognize and build an image data set comprising a training set and a validation set. Step two: design a convolutional neural network structure and train it under a deep learning framework to obtain an object recognition model. Step three: test with the object recognition model to achieve object recognition in complex indoor environments; the service robot determines the class of a target object from the image captured by its camera, completing object recognition. The present invention realizes the object recognition function of a service robot in complex indoor environments with good real-time performance and high accuracy.
Description
Technical field
The present invention relates to a service robot object recognition method based on deep learning, and belongs to the field of service robot object recognition.
Background technology
Object recognition is a major problem in machine vision research, and indoor object recognition is a skill an intelligent service robot requires to complete its service tasks. Because object types are numerous and their features diverse, and because indoor environments are complicated and changeable owing to illumination, occlusion, viewing angle and similar factors, there is still no efficient universal method for indoor object recognition, so the problem has received extensive attention. Object recognition determines, by feature matching or model recognition, whether an object to be recognized is present in an acquired image. Traditional feature-matching methods generally first extract the image features of the object, then describe the extracted features, and finally match the described features. Although specific hand-crafted features achieve good results on specific recognition problems, such manual feature extraction relies heavily on experience; in complex scenes, feature matching has high complexity and poor robustness; and because the method proceeds step by step it is time-consuming and lacks real-time performance. The method designed by the present invention solves the problems of dependence on manual feature extraction, poor robustness in complex scenes, and lack of real-time performance, and establishes a complete object recognition system. It provides an important reference for service robot object recognition and can be applied directly in homes, offices, airports, hotels and other settings.
Content of the invention
The object of the present invention is to design a service robot object recognition algorithm based on deep learning, with good real-time performance and high accuracy, that realizes the object recognition function of a service robot in complex indoor environments.
This object is achieved through the following steps:
Step one: collect images of the objects the service robot is to recognize and build an image data set comprising a training set and a validation set.
Step two: design a convolutional neural network structure and train it under a deep learning framework to obtain an object recognition model.
Step three: test with the object recognition model to achieve object recognition in complex indoor environments; the service robot determines the class of a target object from the image captured by its camera, completing object recognition.
The present invention also includes the following structural features:
1. Step one includes:
(1) Collect images of the objects to be recognized by downloading and by camera capture; the objects comprise four classes (cup, key, pen and USB flash drive), with 150 images collected per class.
(2) Normalize the collected images to a unified size and format.
(3) Divide the images of each class into a validation set and a training set in a 1:4 ratio, and attach the corresponding labels.
(4) For the validation set and the training set respectively, generate a text file pairing each file path one-to-one with its label, for later use.
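The split and labeling steps above can be sketched in plain Python. This is a hedged illustration, not the patent's actual tooling: the class names and the label scheme are assumptions (only cup = 0 is given in the description), and the function takes file lists as input so it stays self-contained.

```python
import random

# Sketch of steps (3)-(4): split each class's images 1:4 into a validation
# set and a training set, producing the "path label" lines for the list
# files. Class-to-label mapping is an assumption apart from cup -> 0.
CLASS_LABELS = {"cup": 0, "key": 1, "pen": 2, "usb_drive": 3}

def split_and_label(images_by_class, seed=0):
    """images_by_class: dict mapping class name -> list of image file names."""
    rng = random.Random(seed)
    train, val = [], []
    for name, files in images_by_class.items():
        label = CLASS_LABELS[name]
        files = sorted(files)
        rng.shuffle(files)
        n_val = len(files) // 5          # 1:4 ratio -> one fifth for validation
        for i, f in enumerate(files):
            line = f"{name}/{f} {label}"
            (val if i < n_val else train).append(line)
    return train, val
```

With 150 images per class, each class contributes 30 validation and 120 training entries, matching the 1:4 ratio stated in the text.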
2. Step two specifically includes:
(1) Design an original network structure with eight layers: five convolution modules, two fully connected layers and one output classification layer, each convolution module comprising a convolutional layer, an activation function layer, a pooling layer and a normalization layer.
(2) Optimize the original network structure in three respects, namely convolution kernel size, parameter setting and the number of network layers, to obtain the optimized convolutional neural network structure used to train the object recognition model:
a. Convolution kernel size: analyze the influence of the first convolutional layer's kernel size on recognition accuracy, and set the kernel size that yields the highest accuracy as the first convolutional layer's kernel size in the optimized network.
b. Parameter setting: analyze the influence of the dropout ratio on recognition accuracy, vary the dropout ratio of the Dropout layers, and use the combination of ratios that yields the highest accuracy as the Dropout ratios of the optimized network.
c. Number of network layers: analyze the influence of the number of layers on recognition accuracy, modify the original structure to design variants with different numbers of convolutional layers, and set the number of layers that yields the highest accuracy as the depth of the optimized network.
(3) Build the optimized network structure under the Caffe framework.
(4) Feed the data set into the convolutional neural network structure and train on a GeForce GTX 1080 GPU.
(5) Obtain the object recognition model after 20,000 training iterations.
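The kernel-size and layer-count choices above all turn on how a convolutional or pooling layer changes the feature-map size. A minimal sketch of the standard output-size formula follows; the stride and padding values in the walk-through are illustrative assumptions, since the patent does not publish them.

```python
def conv_out(size, kernel, stride=1, pad=0):
    # Standard formula: floor((size - kernel + 2*pad) / stride) + 1
    return (size - kernel + 2 * pad) // stride + 1

# Illustrative walk through the first stages of an 8-layer network on a
# 500*500 input, with assumed strides/pads:
s = conv_out(500, 5, stride=2, pad=2)   # conv1 with the optimized 5*5 kernel
s = conv_out(s, 3, stride=2)            # overlapping 3*3 max-pool, stride 2
```

Note that a 3*3 kernel with padding 1 and stride 1 leaves the size unchanged, which is why the added layers in the depth experiments use a size-1 border.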
Compared with the prior art, the beneficial effects of the invention are as follows. The present invention collects images of the objects the service robot is to recognize and builds a data set, designs a convolutional neural network structure, trains it under a deep learning framework to obtain an object recognition model, and tests with that model, realizing object recognition in complex indoor environments: the service robot determines the class of a target object from the image captured by its camera, completing the object recognition task.
The present invention designs a service robot object recognition algorithm based on deep learning, with good real-time performance and high accuracy, covering data set design, convolutional neural network structure design and the recognition test method. Traditional feature-matching recognition first extracts image features, then describes them, and finally matches the features to determine the object class. Against the problem that traditional methods must rely heavily on manual experience when extracting features, the proposed deep-learning-based method needs no manual feature extraction: the deep network model extracts features automatically, from low level to high level. Against the poor robustness of traditional methods in complex scenes, caused by the specificity of the chosen features, the features extracted by the proposed algorithm are not one specific kind but a combination of features such as color and shape, and are therefore more robust in complex environments. Against the poor real-time performance of traditional methods, the proposed algorithm extracts features and classifies automatically; this end-to-end mode saves the time consumed by step-by-step processing and greatly improves real-time performance. The present invention realizes the object recognition function of a service robot, is suitable for target recognition tasks in complex indoor environments, and can be widely applied to target recognition in homes, offices and other complex settings; the algorithm's real-time performance, robustness and high accuracy ensure that the robot can complete recognition tasks for multiple targets in complex environments.
Brief description of the drawings
Fig. 1 is the convolutional neural network structure diagram designed by the present invention;
Fig. 2 is the flow chart of the deep-learning-based object recognition algorithm of the present invention.
Embodiment
The present invention is described in further detail below with reference to the accompanying drawings and embodiments.
The present invention proposes a service robot object recognition method based on deep learning, intended to let a service robot recognize objects efficiently and accurately in complex environments. First, collect images of the objects the service robot is to recognize and build an image data set comprising a training set and a validation set. Then design the original convolutional neural network structure, including convolutional layers, down-sampling layers, fully connected layers and so on, for training the classification model. Next, optimize the designed convolutional neural network structure in three respects, namely the convolution kernel size of the first convolutional layer, parameter setting, and determination of the number of network layers, and, according to the recognition accuracy under different settings, select the best-performing combination of the three settings as the final improvement. Using the improved network structure, train under a deep learning framework to obtain the model, then write a script that calls the object recognition model for testing: for each input picture, it outputs the corresponding class and its confidence. This realizes service robot object recognition in complex indoor environments: the service robot determines the class of a target object from the image captured by its camera, completing the object recognition task.
In some embodiments, the data set is built as follows:
(1) Collect images of every object class by downloading relevant pictures from various websites and by camera capture, including color and black-and-white pictures under different angles and illumination conditions, with differing background complexity, differing numbers of objects per picture, and differing proportions of the object within the picture; 150 images each for the four classes pen, key, cup and USB flash drive.
(2) To keep the network input size consistent, normalize the original images to a unified size and format, such as 500*500*3 in JPG format.
(3) Divide the images of each class into a validation set and a training set in a 1:4 ratio, and attach the corresponding labels; if the object in picture 0001.jpg is a cup, for example, then the label of 0001.jpg is 0.
(4) Generate, for the prepared data set, a text file pairing each file path one-to-one with its label, for later use.
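Step (4)'s "file path paired one-to-one with its label" text file can be sketched as follows. This is a hedged illustration assuming the one-pair-per-line format commonly used by Caffe list files; the file name and the second sample entry are assumptions, and only the cup = 0 label follows the text.

```python
import os
import tempfile

# Sketch of step (4): write a list file with one "path label" pair per line.
def write_list_file(pairs, path):
    with open(path, "w") as fh:
        for img, label in pairs:
            fh.write(f"{img} {label}\n")

# Assumed example entries; cup -> 0 follows the description's example.
out = os.path.join(tempfile.gettempdir(), "train.txt")
write_list_file([("0001.jpg", 0), ("0002.jpg", 2)], out)
```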
In some embodiments, the convolutional neural network structure is designed as follows:
(1) First design an original network structure with 8 layers: 5 convolution modules (a convolutional layer followed by an activation function layer, a pooling layer and a normalization layer constitutes one convolution module), 2 fully connected layers and 1 output classification layer.
(2) Analyze the influence of the first convolutional layer's kernel size on recognition accuracy, and set the kernel size that yields the highest accuracy as the first convolutional layer's kernel size in the optimized network.
(3) Analyze the influence of the dropout ratio on recognition accuracy, vary the dropout ratio of the Dropout layers, and use the combination of ratios that yields the highest accuracy as the Dropout ratios of the optimized network.
(4) Analyze the influence of the number of network layers on recognition accuracy, modify the original structure to design variants with different numbers of convolutional layers, and set the number of layers that yields the highest accuracy as the depth of the optimized network.
(5) Optimize the original network structure according to the measures in the above three respects, producing the final network structure used to train the object recognition model.
In some embodiments, the recognition test method is as follows:
(1) Build the designed network structure under the Caffe framework.
(2) Feed the prepared data set into the convolutional neural network and train on a GeForce GTX 1080 GPU.
(3) Obtain the object recognition model after 20,000 training iterations.
(4) Call the obtained object recognition model from a test script which, for each input picture, outputs its class and confidence in real time.
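The final step above, turning the trained model's output into a printed class and confidence, can be sketched like this. It is a hedged illustration: the forward pass itself (a Caffe model call in the patent's setup) is omitted, and the class order and example scores are assumptions.

```python
import math

# Assumed class order; only the set of four classes comes from the text.
CLASSES = ["cup", "key", "pen", "usb_drive"]

def softmax(logits):
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits):
    # Sketch of test step (4): map the network's output scores to the
    # class name and confidence that the test script reports.
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return CLASSES[best], probs[best]
```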
The present invention is described step by step below with reference to the accompanying drawings.
(1) Building the data set
The present invention establishes an image data set for service robot object recognition in complex indoor environments. The images are gathered partly through network downloads and partly by camera capture. They include both color and black-and-white pictures taken under different angles and illumination conditions, with differing background complexity, differing numbers of objects per image, and differing proportions of the object within the image. To keep the network input size consistent, the original images are normalized to a unified size. For the prepared data set, text files pairing each file path one-to-one with its label are generated for later use. The data set covers four classes in total (cup, key, pen and USB flash drive), with 150 images per class, divided into a validation set and a training set in a 1:4 ratio.
(2) Designing the convolutional neural network structure
The original convolutional neural network structure designed by the present invention is shown in Fig. 1. The network has 8 layers: 5 convolution modules (a convolutional layer followed by an activation function layer, a pooling layer and a normalization layer constitutes one convolution module), 2 fully connected layers and 1 output classification layer. A max-pooling layer (Max-Pooling, i.e. a down-sampling layer) directly follows the first, second and fifth convolutional layers, and the first and second max-pooling layers are each followed by a norm (normalization) layer. The two fully connected layers come after the five convolution modules, and the last layer is a classification layer with 4 outputs, for sorting pictures into the four classes cup, pen, key and USB flash drive. Except for the output classification layer, every layer uses the ReLU activation function, which, compared with the traditional sigmoid function, simplifies computation, shortens training time, and avoids the vanishing-gradient problem. The max-pooling layers use overlapping pooling, which to some extent alleviates the overfitting that would otherwise easily occur. To prevent overfitting in the fully connected layers, a regularization method, Dropout, is used here: during training the weights of some randomly chosen nodes in the network are ignored, retained but not updated, so that the ignored nodes are temporarily treated as not being part of the network structure without being deleted.
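The Dropout behavior just described, randomly ignoring units during training while keeping them in the network, can be sketched in a few lines. This is a hedged illustration: the scaling of surviving units by 1/(1 - ratio) (inverted dropout) is a common convention assumed here, not something the text specifies.

```python
import random

def dropout(activations, ratio, training=True, rng=random):
    # During training each unit is zeroed ("ignored") with probability
    # `ratio`; survivors are scaled by 1/(1 - ratio) so that inference
    # (training=False) needs no rescaling.
    if not training or ratio == 0.0:
        return list(activations)          # inference: identity
    keep = 1.0 - ratio
    return [a / keep if rng.random() >= ratio else 0.0 for a in activations]
```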
Because the first convolutional layer is closest to the original image, its parameters are the most sensitive and all subsequent operations depend on its output; a smaller kernel extracts more fine detail, while a larger kernel captures more structural information. To analyze the influence of the first convolutional layer's kernel size on recognition accuracy, a set of comparison tests was run on the raw data set: the first convolutional layer conv1 of the test models was given kernels of five sizes, 3*3, 5*5, 7*7, 11*11 and 15*15, and, to reduce the influence of other factors, the rest of the network structure was kept identical. The result was that the 5*5 kernel gave the highest recognition accuracy.
To analyze the influence of the Dropout layers' parameter dropout_ratio on recognition accuracy, a set of comparison tests was run in which the Dropout ratios of the Dropout layers were varied: in five groups of experiments, the parameters of the drop6 and drop7 layers were varied over the range 0.5 to 0.7 with a minimum step of 0.05. The result was that recognition accuracy was highest with the drop6 layer's dropout_ratio = 0.55 and the drop7 layer's dropout_ratio = 0.5.
To analyze the influence of the number of network layers on recognition accuracy, four variants of the original structure differing only in the number of convolutional layers were derived, with 3, 4, 6 and 7 convolutional layers respectively. In the structures that add layers, a border of size 1 is added to the picture so that the feature map size is unchanged, and the added convolutional layers use kernels of size 3, because 3 is the minimum size guaranteed to capture the features of the upper, lower, left, right and center pixels. The result was that recognition accuracy was highest when one convolutional layer, conv6, was added to the original structure.
According to the analysis results in the above three respects, the original network structure was improved as follows: one convolutional layer, conv6, with a 3*3 kernel is added after convolutional layer 5; the kernel size of the first convolutional layer is changed to 5*5; the dropout_ratio of the drop6 layer is changed to 0.55; the other parts are identical to the original network.
(3) Recognition algorithm flow
Referring to Fig. 2, the images of the objects to be recognized are first collected and made into an image data set. The designed network structure is then built under the Caffe framework, the data set is fed into the convolutional neural network, training is carried out on a GeForce GTX 1080 GPU, and the object recognition model is obtained after 20,000 training iterations. A script is written that calls the object recognition model and, for the image information captured by the camera carried by the service robot, outputs in real time the class and confidence of the object in the image.
Claims (3)
1. A service robot object recognition algorithm based on deep learning, characterized by comprising the following steps:
Step one: collecting images of the objects the service robot is to recognize and building an image data set comprising a training set and a validation set;
Step two: designing a convolutional neural network structure and training it under a deep learning framework to obtain an object recognition model;
Step three: testing with the object recognition model to achieve object recognition in complex indoor environments, wherein the service robot determines the class of a target object from the image captured by its camera, completing object recognition.
2. The service robot object recognition algorithm based on deep learning according to claim 1, characterized in that step one includes:
(1) collecting images of the objects to be recognized by downloading and by camera capture, the objects comprising four classes (cup, key, pen and USB flash drive), with 150 images collected per class;
(2) normalizing the collected images to a unified size and format;
(3) dividing the images of each class into a validation set and a training set in a 1:4 ratio, and attaching the corresponding labels;
(4) for the validation set and the training set respectively, generating a text file pairing each file path one-to-one with its label, for later use.
3. The service robot object recognition algorithm based on deep learning according to claim 2, characterized in that step two specifically includes:
(1) designing an original network structure with eight layers: five convolution modules, two fully connected layers and one output classification layer, each convolution module comprising a convolutional layer, an activation function layer, a pooling layer and a normalization layer;
(2) optimizing the original network structure in three respects, namely convolution kernel size, parameter setting and the number of network layers, to obtain the optimized convolutional neural network structure used to train the object recognition model:
a. convolution kernel size: analyzing the influence of the first convolutional layer's kernel size on recognition accuracy, and setting the kernel size that yields the highest accuracy as the first convolutional layer's kernel size in the optimized network;
b. parameter setting: analyzing the influence of the dropout ratio on recognition accuracy, varying the dropout ratio of the Dropout layers, and using the combination of ratios that yields the highest accuracy as the Dropout ratios of the optimized network;
c. number of network layers: analyzing the influence of the number of layers on recognition accuracy, modifying the original structure to design variants with different numbers of convolutional layers, and setting the number of layers that yields the highest accuracy as the depth of the optimized network;
(3) building the optimized network structure under the Caffe framework;
(4) feeding the data set into the convolutional neural network structure and training on a GeForce GTX 1080 GPU;
(5) obtaining the object recognition model after 20,000 training iterations.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710202158.0A CN107092926A (en) | 2017-03-30 | 2017-03-30 | Service robot object recognition algorithm based on deep learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710202158.0A CN107092926A (en) | 2017-03-30 | 2017-03-30 | Service robot object recognition algorithm based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107092926A true CN107092926A (en) | 2017-08-25 |
Family
ID=59649246
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710202158.0A Pending CN107092926A (en) | 2017-03-30 | 2017-03-30 | Service robot object recognition algorithm based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107092926A (en) |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107844758A (en) * | 2017-10-24 | 2018-03-27 | 量子云未来(北京)信息科技有限公司 | Intelligent film pre-review method, computer equipment and readable storage medium |
CN108038440A (en) * | 2017-12-07 | 2018-05-15 | 河海大学 | A kind of Hyperspectral Remote Sensing Imagery Classification method based on overlapping pool |
CN108267172A (en) * | 2018-01-25 | 2018-07-10 | 神华宁夏煤业集团有限责任公司 | Mining intelligent robot inspection system |
CN108341184A (en) * | 2018-03-01 | 2018-07-31 | 安徽省星灵信息科技有限公司 | A kind of intelligent sorting dustbin |
CN108921218A (en) * | 2018-06-29 | 2018-11-30 | 炬大科技有限公司 | A kind of target object detection method and device |
CN108940919A (en) * | 2018-06-14 | 2018-12-07 | 华东理工大学 | Garbage sorting robot based on wireless transmission and deep learning |
CN109688395A (en) * | 2018-12-29 | 2019-04-26 | 北京中科寒武纪科技有限公司 | Operation method, device and Related product |
TWI666594B (en) * | 2017-09-01 | 2019-07-21 | 潘品睿 | Indoor object management system and indoor object management method |
CN110116415A (en) * | 2019-06-12 | 2019-08-13 | 中北大学 | Bottle and can garbage recognition and sorting robot based on deep learning |
CN110533099A (en) * | 2019-08-28 | 2019-12-03 | 上海零眸智能科技有限公司 | A kind of item identification method of the multi-cam acquisition image based on deep learning |
CN110532320A (en) * | 2019-08-01 | 2019-12-03 | 立旃(上海)科技有限公司 | Training data management method and device based on block chain |
CN110574040A (en) * | 2018-02-14 | 2019-12-13 | 深圳市大疆创新科技有限公司 | Automatic snapshot method and device, unmanned aerial vehicle and storage medium |
CN110727272A (en) * | 2019-11-11 | 2020-01-24 | 广州赛特智能科技有限公司 | Path planning and scheduling system and method for multiple robots |
CN111104523A (en) * | 2019-12-20 | 2020-05-05 | 西南交通大学 | Audio-visual cooperative learning robot based on voice assistance and learning method |
CN111401297A (en) * | 2020-04-03 | 2020-07-10 | 天津理工大学 | Triphibian robot target recognition system and method based on edge calculation and neural network |
CN111602183A (en) * | 2018-05-22 | 2020-08-28 | 日本金钱机械株式会社 | Unlocking system |
US10970859B2 (en) * | 2018-12-05 | 2021-04-06 | Ankobot (Shenzhen) Smart Technologies Co., Ltd. | Monitoring method and device for mobile target, monitoring system and mobile robot |
CN113076965A (en) * | 2020-01-06 | 2021-07-06 | 广州中国科学院先进技术研究所 | Cloud-based service robot scene classification system and method |
CN115755910A (en) * | 2022-11-17 | 2023-03-07 | 珠海城市职业技术学院 | Control method and system of home service robot |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160180195A1 (en) * | 2013-09-06 | 2016-06-23 | Toyota Jidosha Kabushiki Kaisha | Augmenting Layer-Based Object Detection With Deep Convolutional Neural Networks |
CN105913087A (en) * | 2016-04-11 | 2016-08-31 | 天津大学 | Object identification method based on optimal pooled convolutional neural network |
CN106228177A (en) * | 2016-06-30 | 2016-12-14 | 浙江大学 | Daily life subject image recognition methods based on convolutional neural networks |
- 2017-03-30: CN CN201710202158.0A patent/CN107092926A/en active Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160180195A1 (en) * | 2013-09-06 | 2016-06-23 | Toyota Jidosha Kabushiki Kaisha | Augmenting Layer-Based Object Detection With Deep Convolutional Neural Networks |
CN105913087A (en) * | 2016-04-11 | 2016-08-31 | 天津大学 | Object identification method based on optimal pooled convolutional neural network |
CN106228177A (en) * | 2016-06-30 | 2016-12-14 | 浙江大学 | Daily life subject image recognition methods based on convolutional neural networks |
Non-Patent Citations (1)
Title |
---|
黄斌 (Huang Bin) et al.: "基于深度卷积神经网络的物体识别算法" (Object recognition algorithm based on deep convolutional neural networks), 《计算机应用》 (Journal of Computer Applications) * |
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI666594B (en) * | 2017-09-01 | 2019-07-21 | 潘品睿 | Indoor object management system and indoor object management method |
CN107844758A (en) * | 2017-10-24 | 2018-03-27 | 量子云未来(北京)信息科技有限公司 | Intelligent film pre-review method, computer equipment and readable storage medium |
CN108038440A (en) * | 2017-12-07 | 2018-05-15 | 河海大学 | A kind of Hyperspectral Remote Sensing Imagery Classification method based on overlapping pool |
CN108267172A (en) * | 2018-01-25 | 2018-07-10 | 神华宁夏煤业集团有限责任公司 | Mining intelligent robot inspection system |
CN110574040A (en) * | 2018-02-14 | 2019-12-13 | 深圳市大疆创新科技有限公司 | Automatic snapshot method and device, unmanned aerial vehicle and storage medium |
CN108341184A (en) * | 2018-03-01 | 2018-07-31 | 安徽省星灵信息科技有限公司 | A kind of intelligent sorting dustbin |
CN111602183A (en) * | 2018-05-22 | 2020-08-28 | 日本金钱机械株式会社 | Unlocking system |
CN111602183B (en) * | 2018-05-22 | 2022-03-18 | 日本金钱机械株式会社 | Unlocking system |
CN108940919A (en) * | 2018-06-14 | 2018-12-07 | 华东理工大学 | Garbage sorting robot based on wireless transmission and deep learning |
CN108921218A (en) * | 2018-06-29 | 2018-11-30 | 炬大科技有限公司 | A kind of target object detection method and device |
CN108921218B (en) * | 2018-06-29 | 2022-06-24 | 炬大科技有限公司 | Target object detection method and device |
US10970859B2 (en) * | 2018-12-05 | 2021-04-06 | Ankobot (Shenzhen) Smart Technologies Co., Ltd. | Monitoring method and device for mobile target, monitoring system and mobile robot |
CN109688395A (en) * | 2018-12-29 | 2019-04-26 | 北京中科寒武纪科技有限公司 | Operation method, device and Related product |
CN110116415A (en) * | 2019-06-12 | 2019-08-13 | 中北大学 | A kind of bottle and can waste identification and sorting robot based on deep learning |
CN110532320A (en) * | 2019-08-01 | 2019-12-03 | 立旃(上海)科技有限公司 | Training data management method and device based on block chain |
CN110532320B (en) * | 2019-08-01 | 2023-06-27 | 立旃(上海)科技有限公司 | Training data management method and device based on block chain |
CN110533099A (en) * | 2019-08-28 | 2019-12-03 | 上海零眸智能科技有限公司 | A kind of item identification method of the multi-cam acquisition image based on deep learning |
CN110533099B (en) * | 2019-08-28 | 2024-01-09 | 上海零眸智能科技有限公司 | Article identification method for acquiring images by multiple cameras based on deep learning |
CN110727272A (en) * | 2019-11-11 | 2020-01-24 | 广州赛特智能科技有限公司 | Path planning and scheduling system and method for multiple robots |
CN110727272B (en) * | 2019-11-11 | 2023-04-18 | 广州赛特智能科技有限公司 | Path planning and scheduling system and method for multiple robots |
CN111104523A (en) * | 2019-12-20 | 2020-05-05 | 西南交通大学 | Audio-visual cooperative learning robot based on voice assistance and learning method |
CN113076965A (en) * | 2020-01-06 | 2021-07-06 | 广州中国科学院先进技术研究所 | Cloud-based service robot scene classification system and method |
CN111401297A (en) * | 2020-04-03 | 2020-07-10 | 天津理工大学 | Triphibian robot target recognition system and method based on edge computing and neural network |
CN115755910A (en) * | 2022-11-17 | 2023-03-07 | 珠海城市职业技术学院 | Control method and system of home service robot |
CN115755910B (en) * | 2022-11-17 | 2023-07-25 | 珠海城市职业技术学院 | Control method and system for home service robot |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107092926A (en) | Service robot object recognition algorithm based on deep learning | |
CN108717524B (en) | Gesture recognition system based on dual-camera mobile phone and artificial intelligence system | |
CN104063686B (en) | Crop leaf diseases image interactive diagnostic system and method | |
CN109614996A (en) | Weak visible light and infrared image fusion recognition method based on generative adversarial network | |
CN107016405A (en) | A kind of insect image classification method based on classification prediction convolutional neural networks | |
CN106991666B (en) | A kind of disease image recognition method suitable for multi-size image information | |
CN107239790A (en) | A kind of service robot target detection and localization method based on deep learning | |
CN106997475B (en) | A kind of pest image recognition method based on parallel convolutional neural network | |
CN108629368B (en) | Multi-modal foundation cloud classification method based on joint depth fusion | |
CN109886153B (en) | Real-time face detection method based on deep convolutional neural network | |
CN109508756B (en) | Foundation cloud classification method based on multi-cue multi-mode fusion depth network | |
CN109325495A (en) | A kind of crop image segmentation system and method based on deep neural network modeling | |
CN108665005A (en) | A method for improving CNN image recognition performance using DCGAN | |
CN107909008A (en) | Video target tracking method based on multi-channel convolutional neural network and particle filter | |
CN111611889B (en) | Miniature insect pest recognition device in farmland based on improved convolutional neural network | |
CN108447048B (en) | Convolutional neural network image feature processing method based on attention layer | |
CN110516723A (en) | A kind of multimodal ground-based cloud image recognition method based on deep tensor fusion | |
CN112818827A (en) | Image recognition-based method for judging stage temperature control point in tobacco leaf baking process | |
CN111652326A (en) | Improved fruit maturity identification method and identification system based on MobileNet v2 network | |
CN108229589A (en) | A kind of ground-based cloud image classification method based on transfer learning | |
CN114359727A (en) | Tea disease identification method and system based on lightweight optimization Yolo v4 | |
CN113221655A (en) | Face spoofing detection method based on feature space constraint | |
CN111178177A (en) | Cucumber disease identification method based on convolutional neural network | |
CN108509939A (en) | A kind of birds recognition methods based on deep learning | |
CN112001370A (en) | Crop pest and disease identification method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20170825 |