CN106780605A - Method for detecting the grasp position of an object by a robot based on deep learning - Google Patents

Method for detecting the grasp position of an object by a robot based on deep learning

Info

Publication number
CN106780605A
CN106780605A (application CN201611181461.9A)
Authority
CN
China
Prior art keywords
grasp region
input
grasp position
candidate grasp region
deep learning
Prior art date: 2016-12-20
Legal status
Pending
Application number
CN201611181461.9A
Other languages
Chinese (zh)
Inventor
高靖
李超
曹雏清
Current Assignee
Wuhu Hit Robot Technology Research Institute Co Ltd
Original Assignee
Wuhu Hit Robot Technology Research Institute Co Ltd
Priority date: 2016-12-20
Filing date: 2016-12-20
Publication date: 2017-05-31
Application filed by Wuhu Hit Robot Technology Research Institute Co Ltd
Priority to CN201611181461.9A
Publication of CN106780605A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/20 — Special algorithmic details
    • G06T2207/20081 — Training; Learning
    • G06T2207/20084 — Artificial neural networks [ANN]

Abstract

The present invention is applicable to the field of robotic grasping and provides a method for detecting the grasp position of an object by a robot based on deep learning. The method comprises the following steps: acquiring an RGB-D image containing the object with a sensor; dividing the target region of the RGB-D image into candidate grasp regions; enlarging each candidate grasp region to the input size required by the neural network while keeping its aspect ratio unchanged; constructing an input vector from the enlarged candidate grasp region; whitening the input vector and feeding the whitened input vector into the trained neural network; obtaining a score for each candidate grasp region and determining the highest-scoring candidate grasp region as the grasp position. By acquiring the RGB-D image of an object, the grasp position of that object can be determined, and through that position the robot can grasp arbitrary objects without human intervention.

Description

Method for detecting the grasp position of an object by a robot based on deep learning
Technical field
The invention belongs to the field of robotic grasping, and in particular relates to a method for detecting the grasp position of an object by a robot based on deep learning.
Background technology
In order to care for people with limited mobility, such as the elderly and the disabled, grasping familiar objects in the home environment, such as teacups, beverage bottles and books, has become an indispensable key capability of home-service robots. Unlike industrial robots grasping workpieces in a structured environment, intelligent grasping by service robots in the home environment faces many challenges, for example a dynamic environment, illumination changes, tens or even hundreds of target objects, complex backgrounds, and mutual occlusion between objects.
At present, robot grasp detection techniques include the following: manually designing grasp features of the object, building a grasp model from those features, and detecting the grasp position. Existing methods based on hand-designed grasp features are time-consuming and require a large amount of human effort, and for objects the robot has never seen they cannot detect the grasp position accurately, so the grasping action cannot be performed.
Summary of the invention
The embodiment of the present invention provides a method for detecting the grasp position of an object by a robot based on deep learning, which aims to solve the problems that existing methods based on hand-designed grasp features are time-consuming and require a large amount of human effort, and that for objects the robot has never seen the grasp position cannot be detected accurately, so the grasping action cannot be performed.
The present invention is achieved as follows: a method for detecting the grasp position of an object by a robot based on deep learning, the method comprising the following steps:
S1. acquiring an RGB-D image containing the object with a sensor;
S2. dividing the target region of the RGB-D image into candidate grasp regions;
S3. enlarging the candidate grasp region to the input size required by the neural network while keeping its aspect ratio unchanged;
S4. constructing an input vector from the enlarged candidate grasp region;
S5. whitening the input vector and feeding the whitened input vector into the trained neural network;
S6. obtaining the score of each candidate grasp region and determining the highest-scoring candidate grasp region as the grasp position.
In the embodiment of the present invention, the RGB-D image of the object is acquired, the target region of the RGB-D image is divided into candidate grasp regions, each region is enlarged to the input size required by the neural network, an input vector is constructed from the enlarged candidate grasp region, and the constructed input vector is fed into the neural network to obtain a score for each candidate grasp region; the highest-scoring candidate grasp region is determined as the grasp position of the object. By acquiring the RGB-D image of an object, the grasp position of that object can be determined, and through that position the robot can grasp arbitrary objects without human intervention.
Brief description of the drawings
Fig. 1 is a flowchart of the method for detecting the grasp position of an object by a robot based on deep learning provided in an embodiment of the present invention.
Specific embodiments
In order to make the purpose, technical scheme and advantages of the present invention clearer, the present invention is further elaborated below in conjunction with the drawings and embodiments. It should be appreciated that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
Fig. 1 is a flowchart of the method for detecting the grasp position of an object by a robot based on deep learning provided in an embodiment of the present invention; the method comprises the following steps:
S1. acquiring an RGB-D image containing the object with a sensor;
The embodiment of the present invention uses a Microsoft Kinect sensor to obtain a high-resolution RGB image and a depth image of the object to be grasped. The RGB image contains the surface color and texture information of the target, and the depth image contains its spatial shape information; each pixel value in the depth image gives the distance from the sensor to the object. The pixels of the RGB image and the depth image correspond one to one, together constituting the RGB-D image.
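As a minimal sketch of this step, the two pixel-aligned frames can be stacked into a single RGB-D array; the file names and the use of OpenCV are assumptions, since the patent only specifies a Kinect sensor producing aligned RGB and depth images.

```python
import cv2
import numpy as np

rgb = cv2.imread("rgb.png")                            # H x W x 3, uint8 (BGR order in OpenCV)
depth = cv2.imread("depth.png", cv2.IMREAD_UNCHANGED)  # H x W, e.g. uint16 millimeters

assert rgb.shape[:2] == depth.shape[:2], "RGB and depth must be pixel-aligned"

# Stack into a single H x W x 4 RGB-D array for the later processing steps.
rgbd = np.dstack([rgb.astype(np.float32), depth.astype(np.float32)])
```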
S2. dividing the target region of the RGB-D image into candidate grasp regions;
In the embodiment of the present invention, the target region of the RGB-D image is extracted by background subtraction, a sliding window is set within the target region, and candidate grasp regions are extracted by moving the sliding window; the size of the sliding window is the size of a candidate grasp region. The embodiment of the present invention uses a Baxter two-armed robot whose end effector is a gripper, so the sliding window is rectangular and its size is determined according to the size of the gripper: the window is set to a 30 pixel × 10 pixel rectangle, and each candidate grasp region is therefore a 30 pixel × 10 pixel rectangle.
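A minimal sketch of this candidate extraction, assuming depth-based background subtraction against an empty-scene depth map and a stride of 5 pixels (the patent specifies neither):

```python
import numpy as np

def candidate_regions(rgbd, background_depth, stride=5,
                      win_h=10, win_w=30, depth_thresh=20.0):
    """Yield (row, col, patch) for every 30 x 10 candidate grasp rectangle."""
    depth = rgbd[:, :, 3]
    # Background subtraction: pixels whose depth differs from the empty-scene
    # depth map by more than a threshold are taken as the target region.
    mask = np.abs(depth - background_depth) > depth_thresh
    rows, cols = np.nonzero(mask)
    r0, r1, c0, c1 = rows.min(), rows.max(), cols.min(), cols.max()
    # Slide the fixed-size window over the bounding box of the target region.
    for r in range(r0, r1 - win_h + 2, stride):
        for c in range(c0, c1 - win_w + 2, stride):
            yield r, c, rgbd[r:r + win_h, c:c + win_w]
```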
S3. enlarging the candidate grasp region to the input size required by the neural network while keeping its aspect ratio unchanged;
In the embodiment of the present invention, according to the neural network's requirement on the input sample size, the candidate grasp region is enlarged to the required input size by zero-value filling or by border extension while keeping its aspect ratio unchanged. In the embodiment of the present invention, the neural network requires input samples of 32 pixels × 32 pixels.
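A minimal sketch of the zero-filling variant, assuming the region is centered and resized with bilinear interpolation (the patent fixes neither choice):

```python
import cv2
import numpy as np

def enlarge_to_input(patch, out_size=32):
    """Scale so the longer side reaches out_size, keep aspect ratio, zero-fill the rest."""
    h, w = patch.shape[:2]
    scale = out_size / max(h, w)
    new_h, new_w = int(round(h * scale)), int(round(w * scale))
    resized = cv2.resize(patch, (new_w, new_h), interpolation=cv2.INTER_LINEAR)
    out = np.zeros((out_size, out_size, patch.shape[2]), dtype=patch.dtype)
    r0, c0 = (out_size - new_h) // 2, (out_size - new_w) // 2
    out[r0:r0 + new_h, c0:c0 + new_w] = resized   # everything outside is 0 filling
    return out
```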
S4. constructing an input vector from the enlarged candidate grasp region;
In the embodiment of the present invention, a 7-channel input vector is constructed for each candidate grasp region. The 7 channels comprise: the surface normals of the depth data along the x, y and z axes (three channels), the vectors of the Y, U and V channels obtained by converting the color image to YUV (three channels), and the vector obtained by converting the depth image (one channel).
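A minimal sketch of building these 7 channels; estimating the normals from the depth gradient with Sobel filters is an assumption, since the patent does not specify the normal estimator:

```python
import cv2
import numpy as np

def build_input_vector(rgbd32):              # rgbd32: 32 x 32 x 4 (BGR + depth)
    bgr = rgbd32[:, :, :3].astype(np.uint8)
    depth = rgbd32[:, :, 3].astype(np.float32)

    # Surface normals from the depth gradient: n ~ (-dz/dx, -dz/dy, 1), normalized.
    dzdx = cv2.Sobel(depth, cv2.CV_32F, 1, 0, ksize=3)
    dzdy = cv2.Sobel(depth, cv2.CV_32F, 0, 1, ksize=3)
    norm = np.sqrt(dzdx ** 2 + dzdy ** 2 + 1.0)
    nx, ny, nz = -dzdx / norm, -dzdy / norm, 1.0 / norm

    yuv = cv2.cvtColor(bgr, cv2.COLOR_BGR2YUV).astype(np.float32)

    channels = [nx, ny, nz, yuv[:, :, 0], yuv[:, :, 1], yuv[:, :, 2], depth]
    return np.concatenate([c.ravel() for c in channels])   # 7 x 32 x 32 = 7168 elements
```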
In the embodiment of the present invention, if step S3 enlarges the candidate grasp region to the input size required by the neural network by zero-value filling, the input vector after filling contains a large number of zero values, and different candidate grasp regions contain different numbers of filled zeros, which would ultimately distort the output score of the neural network. To eliminate the influence of the filled zero values on the elements of the input vector, each element is multiplied by a zoom factor that scales its value. The zoom factor is computed as follows:

$$\psi_i^{(t)} = \sum_{r=1}^{R} S_{r,i}\,\Psi_r^{(t)}, \qquad \Psi_r^{(t)} = \frac{\sum_{i=1}^{N} S_{r,i}}{\sum_{i=1}^{N} S_{r,i}\,\mu_i^{(t)}}$$

where $\psi_i^{(t)}$ is the zoom factor of the $i$-th element of the input vector of the $t$-th sample, $\Psi_r^{(t)}$ is the amplification factor of channel $r$, $S_{r,i} = 1$ when the $i$-th element $x_i$ of the input vector belongs to channel $r$ and $S_{r,i} = 0$ otherwise, and $\mu_i^{(t)} = 1$ when $x_i$ is not a 0 filling value and $\mu_i^{(t)} = 0$ otherwise.
Additionally, as a preferred embodiment of the present invention, considering that too large a zoom factor would distort the input data, the zoom factor is capped at a value $C$, i.e. $\Psi_r^{(t)} \leq C$, and the value of $C$ is 4.
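A minimal sketch of this scaling; `channel_ids` labels each of the 7168 elements with its channel (for the layout above, `np.repeat(np.arange(7), 1024)`), and `pad_mask` marks the 0 filling positions:

```python
import numpy as np

def apply_zoom_factors(x, pad_mask, channel_ids, n_channels=7, cap=4.0):
    """Multiply each element by its channel's amplification factor, capped at C = 4."""
    mu = (~pad_mask).astype(np.float64)       # mu_i = 1 for real data, 0 for filling
    psi = np.ones_like(x, dtype=np.float64)
    for r in range(n_channels):
        sel = channel_ids == r                # S_{r,i} = 1 for elements of channel r
        factor = sel.sum() / max(mu[sel].sum(), 1.0)
        psi[sel] = min(factor, cap)           # Psi_r^(t), capped at C
    return x * psi
```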
S5. whitening the constructed input vector and feeding the whitened input vector into the trained neural network;
In the embodiment of the present invention, in order to reduce the redundancy of the input, the constructed input vector is whitened. The whitening process comprises: subtracting from the input vector of each channel its respective mean value, and then dividing by the standard deviation of the combined vector composed of the input vectors of the 7 channels.
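A minimal sketch of this whitening step, reusing the `channel_ids` labeling from above:

```python
import numpy as np

def whiten(x, channel_ids, n_channels=7):
    """Per-channel mean removal, then division by the std of the full 7-channel vector."""
    x = x.astype(np.float64).copy()
    for r in range(n_channels):
        sel = channel_ids == r
        x[sel] -= x[sel].mean()               # subtract each channel's own mean
    return x / x.std()                        # shared standard deviation
```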
S6. obtaining the score of each candidate grasp region and determining the highest-scoring candidate region as the grasp position.
In the embodiment of the present invention, the RGB-D image of the object is acquired, the target region of the RGB-D image is divided into candidate grasp regions, each region is enlarged to the input size required by the neural network, an input vector is constructed from the enlarged candidate grasp region, and the constructed input vector is fed into the neural network to obtain a score for each candidate grasp region; the highest-scoring candidate grasp region is determined as the grasp position of the object. By acquiring the RGB-D image of an object, the grasp position of that object can be determined, and through that position the robot can grasp arbitrary objects without human intervention.
In the embodiment of the present invention, the following steps are also included before step S1:
S7. building the neural network;
In the embodiment of the present invention, the built neural network consists of an input layer of 7168 neurons (32 × 32 pixels × 7 channels), sparse-autoencoder hidden layers of 200 neurons, and a sigmoid output layer.
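A minimal numpy sketch of a forward pass through this architecture, assuming two 200-neuron hidden layers (matching the two sparse autoencoders trained in S82) and no bias terms, since the patent's formulas show weight terms only:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.01, size=(7168, 200))   # first hidden layer (pretrained in S82)
W2 = rng.normal(scale=0.01, size=(200, 200))    # second hidden layer (pretrained in S82)
W3 = rng.normal(scale=0.01, size=(200, 1))      # sigmoid output layer

def score(x):
    """Score one whitened 7168-element input vector."""
    h1 = sigmoid(x @ W1)                        # h_j = sigma(sum_i x_i W_ij)
    h2 = sigmoid(h1 @ W2)
    return float(sigmoid(h2 @ W3))              # grasp score in (0, 1)
```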
S8. training the built neural network offline.
In the embodiment of the present invention, given the inputs and outputs of the samples, the optimal weights W are obtained by training, and W is then used to compute the predicted output for a given input.
In the embodiment of the present invention, the offline training of the built neural network specifically comprises the following steps:
S81. preprocessing the given samples using steps S1–S5;
S82. feeding the preprocessed given samples into the neural network together with their given output results, and training the two hidden-layer sparse autoencoders with 200 iterations of unsupervised training to initialize the hidden-layer weights;
The formula for the hidden-layer weights W* with which the sparse autoencoder is initialized when the cost function is optimal is as follows:
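In the form standard for sparse autoencoders and consistent with the symbol definitions below, the optimization can be written as

$$W^* = \arg\min_W \sum_t \left\|\hat{x}^{(t)} - x^{(t)}\right\|_2^2 + \lambda \sum_t \sum_j g\!\left(h_j^{(t)}\right) + \beta f(W), \qquad h_j^{(t)} = \sigma\!\left(\sum_{i=1}^{N} x_i^{(t)} W_{i,j}\right)$$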
where $\hat{x}^{(t)}$ is the reconstruction of the input vector $x^{(t)}$, $g(h)$ is the sparsity penalty function, $\lambda$ is the coefficient of the sparsity penalty, $f(W)$ is the regularization function, $\beta$ is the coefficient of the regularization function, $x_i^{(t)}$ is the $i$-th element of the input vector of the $t$-th sample, $W_{i,j}$ is the weight of the $i$-th element on the $j$-th hidden neuron, $h_j^{(t)}$ is the output of the input vector of the $t$-th sample on the $j$-th hidden neuron, $\sigma$ is the sigmoid activation function, and $W^*$ are the hidden-layer weights with which the sparse autoencoder is initialized when the cost function is optimal.
The sparse autoencoders in the embodiment of the present invention comprise the sparse autoencoder of the first hidden layer and the standard sparse autoencoder of the second hidden layer.
When the sparse autoencoder is the first-hidden-layer sparse autoencoder, a regularization method combining L2 and L1 is used; the regularization function is $f_1(W) = \varepsilon_1 \|W\|_2^2 + \varepsilon_2 \|W\|_1$, where $\|W\|_1$ is the regularization term corresponding to L1 and $\|W\|_2^2$ the term corresponding to L2, the L1 regularization coefficient is $\varepsilon_2 = 0.0003$ and the L2 regularization coefficient is $\varepsilon_1 = 0.001$; a small offset of 0.00001 is added to $f_1(W)$, which avoids interference from zero values in the input vector. The sparsity penalty coefficient of this layer is 3, and the output of the first-hidden-layer sparse autoencoder is a real number between 0 and 1.
When the sparse autoencoder is the second-hidden-layer standard sparse autoencoder, an L1 regularization method is used; the regularization function is $f_2(W) = \varepsilon_2 \|W\|_1$ with L1 regularization coefficient $\varepsilon_2 = 0.0003$. The sparsity penalty coefficient of this layer is 3, and the output of the standard sparse autoencoder is 0 or 1.
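A minimal sketch of the two regularizers with the stated coefficients; reading the 0.00001 offset as a smoothing term inside the L1 norm is an assumption:

```python
import numpy as np

EPS1, EPS2 = 0.001, 0.0003                     # L2 and L1 regularization coefficients

def f1(W):
    """First hidden layer: combined L2 + (smoothed) L1 regularization."""
    return EPS1 * np.sum(W ** 2) + EPS2 * np.sum(np.sqrt(W ** 2 + 1e-5))

def f2(W):
    """Second hidden layer: plain L1 regularization."""
    return EPS2 * np.sum(np.abs(W))
```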
S83. passing the preprocessed samples through 10 iterations of back-propagation training to globally optimize the parameters of the whole network.
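A minimal sketch of the S81–S83 training schedule: 200 unsupervised iterations of layer-wise autoencoder pretraining (reconstruction loss only here; the sparsity and regularization terms of the cost function above are omitted for brevity), followed by 10 supervised back-propagation iterations over the whole network. Learning rates and tied decoder weights are assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pretrain_autoencoder(X, n_hidden=200, iters=200, lr=0.01, seed=0):
    """Unsupervised tied-weight autoencoder pretraining of one hidden layer."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.01, size=(X.shape[1], n_hidden))
    for _ in range(iters):
        H = sigmoid(X @ W)                    # encode
        E = H @ W.T - X                       # reconstruction error (tied decoder)
        dH = (E @ W) * H * (1 - H)
        W -= lr * (X.T @ dH + E.T @ H) / len(X)
    return W

def finetune(W1, W2, W3, X, y, iters=10, lr=0.01):
    """Global supervised back-propagation through both hidden layers and the output."""
    for _ in range(iters):
        H1 = sigmoid(X @ W1)
        H2 = sigmoid(H1 @ W2)
        p = sigmoid(H2 @ W3)[:, 0]            # predicted grasp scores
        d3 = (p - y)[:, None]                 # cross-entropy + sigmoid gradient
        d2 = (d3 @ W3.T) * H2 * (1 - H2)
        d1 = (d2 @ W2.T) * H1 * (1 - H1)
        W3 -= lr * H2.T @ d3 / len(X)
        W2 -= lr * H1.T @ d2 / len(X)
        W1 -= lr * X.T @ d1 / len(X)
    return W1, W2, W3
```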
The foregoing describes merely preferred embodiments of the present invention and is not intended to limit the invention; any modification, equivalent substitution and improvement made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (9)

1. A method for detecting the grasp position of an object by a robot based on deep learning, characterized in that the method comprises the following steps:
S1. acquiring an RGB-D image containing the object with a sensor;
S2. dividing the target region of the RGB-D image into candidate grasp regions;
S3. enlarging the candidate grasp region to the input size required by the neural network while keeping its aspect ratio unchanged;
S4. constructing an input vector from the enlarged candidate grasp region;
S5. whitening the input vector and feeding the whitened input vector into the trained neural network;
S6. obtaining the score of each candidate grasp region and determining the highest-scoring candidate grasp region as the grasp position.
2. The method for detecting the grasp position of an object by a robot based on deep learning as claimed in claim 1, characterized in that the candidate grasp regions are extracted by moving a sliding window set within the target region of the RGB-D image;
the size of the sliding window is the size of the candidate grasp region.
3. The method for detecting the grasp position of an object by a robot based on deep learning as claimed in claim 1, characterized in that the candidate grasp region is enlarged to the input size required by the neural network by zero-value filling or by border extension.
4. The method for detecting the grasp position of an object by a robot based on deep learning as claimed in claim 3, characterized in that, when the candidate grasp region is enlarged to the input size required by the neural network by zero-value filling, the following is further included after step S4:
multiplying each element of the input vector by a zoom factor to scale the values of the elements of the input vector;
the zoom factor is computed as follows:

$$\psi_i^{(t)} = \sum_{r=1}^{R} S_{r,i}\,\Psi_r^{(t)}$$

$$\Psi_r^{(t)} = \frac{\sum_{i=1}^{N} S_{r,i}}{\sum_{i=1}^{N} S_{r,i}\,\mu_i^{(t)}}$$

where $\psi_i^{(t)}$ is the zoom factor of the $i$-th element of the input vector of the $t$-th sample, $\Psi_r^{(t)}$ is the amplification factor of channel $r$, $S_{r,i} = 1$ when the $i$-th element $x_i$ of the input vector belongs to channel $r$ and $S_{r,i} = 0$ otherwise, and $\mu_i^{(t)} = 1$ when $x_i$ is not a 0 filling value and $\mu_i^{(t)} = 0$ otherwise.
5. The method for detecting the grasp position of an object by a robot based on deep learning as claimed in claim 4, characterized in that the zoom factor satisfies $\Psi_r^{(t)} \leq C$, and the value of C is 4.
6. The method for detecting the grasp position of an object by a robot based on deep learning as claimed in claim 1, characterized in that the following is further included before step S1:
S7. building a neural network;
S8. training the built neural network offline.
The neural network consists of an input layer of 7168 neurons, sparse-autoencoder hidden layers of 200 neurons, and a sigmoid output layer.
7. The method for detecting the grasp position of an object by a robot based on deep learning as claimed in claim 6, characterized in that step S8 specifically comprises the following steps:
S81. preprocessing the given samples using steps S1–S5;
S82. feeding the preprocessed given samples into the neural network together with their given output results, and training the two hidden-layer sparse autoencoders with 200 iterations of unsupervised training to initialize the hidden-layer weights;
S83. passing the preprocessed samples through 10 iterations of back-propagation training to globally optimize the parameters of the whole network.
8. The method for detecting the grasp position of an object by a robot based on deep learning as claimed in claim 7, characterized in that the hidden-layer weights $W^*$ with which the sparse autoencoder is initialized when the cost function is optimal are computed as follows:

$$W^* = \arg\min_W \sum_t \left\|\hat{x}^{(t)} - x^{(t)}\right\|_2^2 + \lambda \sum_t \sum_j g\!\left(h_j^{(t)}\right) + \beta f(W)$$

$$h_j^{(t)} = \sigma\!\left(\sum_{i=1}^{N} x_i^{(t)} W_{i,j}\right)$$

where $\hat{x}^{(t)}$ is the reconstruction of the input vector $x^{(t)}$, $g(h)$ is the sparsity penalty function, $\lambda$ is the coefficient of the sparsity penalty, $f(W)$ is the regularization function, $\beta$ is the coefficient of the regularization function, $x_i^{(t)}$ is the $i$-th element of the input vector of the $t$-th sample, $W_{i,j}$ is the weight of the $i$-th element on the $j$-th hidden neuron, $h_j^{(t)}$ is the output of the input vector of the $t$-th sample on the $j$-th hidden neuron, and $\sigma$ is the sigmoid activation function.
9. The method for detecting the grasp position of an object by a robot based on deep learning as claimed in claim 8, characterized in that, when the sparse autoencoder is the first-hidden-layer sparse autoencoder, the first-hidden-layer sparse autoencoder uses a regularization method combining L2 and L1, with regularization function $f_1(W) = \varepsilon_1 \|W\|_2^2 + \varepsilon_2 \|W\|_1$, where $\|W\|_1$ is the regularization term corresponding to L1 and $\|W\|_2^2$ the term corresponding to L2, the L1 regularization coefficient is $\varepsilon_2 = 0.0003$, the L2 regularization coefficient is $\varepsilon_1 = 0.001$, a small offset of 0.00001 is added to $f_1(W)$, and the coefficient of the sparsity penalty function of the first-hidden-layer sparse autoencoder is $\lambda_1 = 3$;
when the sparse autoencoder is the second-hidden-layer standard sparse autoencoder, the standard sparse autoencoder uses an L1 regularization method, with regularization function $f_2(W) = \varepsilon_2 \|W\|_1$, L1 regularization coefficient $\varepsilon_2 = 0.0003$, and the coefficient of the sparsity penalty function of the second-hidden-layer standard sparse autoencoder is $\lambda_2 = 3$.
CN201611181461.9A 2016-12-20 2016-12-20 Method for detecting the grasp position of an object by a robot based on deep learning Pending CN106780605A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611181461.9A CN106780605A (en) 2016-12-20 2016-12-20 Method for detecting the grasp position of an object by a robot based on deep learning

Publications (1)

Publication Number Publication Date
CN106780605A true CN106780605A (en) 2017-05-31

Family

ID=58890864

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611181461.9A Pending CN106780605A (en) Method for detecting the grasp position of an object by a robot based on deep learning

Country Status (1)

Country Link
CN (1) CN106780605A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105598965A (en) * 2015-11-26 2016-05-25 哈尔滨工业大学 Robot under-actuated hand autonomous grasping method based on stereoscopic vision
CN105718959A (en) * 2016-01-27 2016-06-29 中国石油大学(华东) Object recognition method based on autoencoders
CN106094516A (en) * 2016-06-08 2016-11-09 南京大学 Robot adaptive grasping method based on deep reinforcement learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
IAN LENZ et al.: "Deep Learning for Detecting Robotic Grasps", 《百度学术》 *

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110691676A (en) * 2017-06-19 2020-01-14 谷歌有限责任公司 Robotic grasping prediction using neural networks and geometry-aware object representations
US11554483B2 (en) 2017-06-19 2023-01-17 Google Llc Robotic grasping prediction using neural networks and geometry aware object representation
CN107679477A (en) * 2017-09-27 2018-02-09 深圳市未来媒体技术研究院 Face depth and surface normal prediction method based on dilated convolutional neural networks
CN107479501A (en) * 2017-09-28 2017-12-15 广州智能装备研究院有限公司 3D part suction method based on deep learning
CN108126914A (en) * 2017-11-24 2018-06-08 上海发那科机器人有限公司 Deep-learning-based method for robotic sorting of multiple randomly piled objects in a bin
CN108280856A (en) * 2018-02-09 2018-07-13 哈尔滨工业大学 Unknown object grasp pose estimation method based on a mixed-information-input network model
CN108280856B (en) * 2018-02-09 2021-05-07 哈尔滨工业大学 Unknown object grabbing pose estimation method based on mixed information input network model
CN108805004A (en) * 2018-04-12 2018-11-13 深圳市商汤科技有限公司 Functional area detection method and device, electronic equipment, storage medium, program
CN108805004B (en) * 2018-04-12 2021-09-14 深圳市商汤科技有限公司 Functional area detection method and device, electronic equipment and storage medium
CN108908334A (en) * 2018-07-20 2018-11-30 汕头大学 Intelligent grasping system and method based on deep learning
US11878433B2 (en) 2018-12-12 2024-01-23 Cloudminds Robotics Co., Ltd. Method for detecting grasping position of robot in grasping object
JP7085726B2 2022-06-17 達闥機器人股▲分▼有限公司 Method for detecting the grasp position of a target object of a robot
JP2021517681A (en) * 2018-12-12 2021-07-26 達闥机器人有限公司 Method for detecting the grasp position of a target object of a robot
CN109508707A (en) * 2019-01-08 2019-03-22 中国科学院自动化研究所 Method for obtaining grasp points for stable robotic grasping of an object based on monocular vision
CN109531584A (en) * 2019-01-31 2019-03-29 北京无线电测量研究所 Robot arm control method and device based on deep learning
CN111428731A (en) * 2019-04-04 2020-07-17 深圳市联合视觉创新科技有限公司 Multi-class target identification and positioning method, device and equipment based on machine vision
CN111428731B (en) * 2019-04-04 2023-09-26 深圳市联合视觉创新科技有限公司 Multi-category identification positioning method, device and equipment based on machine vision
CN110208211B (en) * 2019-07-03 2021-10-22 南京林业大学 Near infrared spectrum noise reduction method for pesticide residue detection
CN110208211A (en) * 2019-07-03 2019-09-06 南京林业大学 Near-infrared spectrum noise reduction method for pesticide residue detection
CN111310637B (en) * 2020-02-11 2022-11-11 山西大学 Robot target grabbing detection method based on scale invariant network
CN111310637A (en) * 2020-02-11 2020-06-19 山西大学 Robot target grabbing detection method based on scale invariant network
CN111324095A (en) * 2020-02-27 2020-06-23 金陵科技学院 Unmanned loading system for dry bulk materials using an intelligent industrial robot
CN116945210A (en) * 2023-07-12 2023-10-27 深圳市永顺创能技术有限公司 Robot intelligent control system based on machine vision
CN116945210B (en) * 2023-07-12 2024-03-15 深圳市永顺创能技术有限公司 Robot intelligent control system based on machine vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170531