CN109531584A - Mechanical arm control method and device based on deep learning - Google Patents

Mechanical arm control method and device based on deep learning

Info

Publication number
CN109531584A
CN109531584A (application CN201910098680.8A)
Authority
CN
China
Prior art keywords
image
mechanical arm
grasp
acquisition
deep learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910098680.8A
Other languages
Chinese (zh)
Inventor
吕泽杉
李竹奇
李源
韩华涛
高景
高景一
耿金鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Radio Measurement
Original Assignee
Beijing Institute of Radio Measurement
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Radio Measurement filed Critical Beijing Institute of Radio Measurement
Priority to CN201910098680.8A priority Critical patent/CN109531584A/en
Publication of CN109531584A publication Critical patent/CN109531584A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1628Programme controls characterised by the control loop
    • B25J9/163Programme controls characterised by the control loop learning, adaptive, model based, rule based expert control
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/06Recognition of objects for industrial automation

Abstract

The present invention discloses a mechanical arm control method and device based on deep learning, comprising: acquiring an image of the object to be grasped; calculating an optimal grasp process from the acquired image; and controlling the mechanical arm to grasp the object according to the optimal grasp process. The invention acquires planar and depth three-dimensional images, extracts features through a convolutional neural network, and quantizes the grasp strategy as a grasp rectangle: the grasp information includes the spatial position of the object, its coordinate pose, and the size matched to the gripper. All candidate rectangles are scored and ranked, and the best one is selected. Two neural networks of different sizes are designed: the small network performs first-round screening, and the large network ranks the remaining candidate grasp strategies, improving the robustness and accuracy of the mechanical arm's grasping task.

Description

Mechanical arm control method and device based on deep learning
Technical field
The present invention relates to the field of electromechanical control, and more particularly to a mechanical arm control method and device based on deep learning.
Background technique
For learning tasks in which a mechanical arm carries a vision system, the grasping problem of the end effector — including but not limited to two-finger parallel-jaw grippers — can be divided into two sub-problems for "hand-eye" cooperation: visual perception and grasp planning. The visual perception module estimates information about the object to be grasped, including its position, pose, and size. Grasp planning mainly concerns the trajectory and angle with which the mechanical arm grasps the object. If the size and height of the grasped article match the end effector's form, travel range, and tooling height, a machine learning algorithm cooperating with the grasp strategy can complete grasp tasks that are not highly sensitive to optical depth information. However, planar vision alone can only provide the object's position, pose, size, and similar information, and a mechanical arm grasping on the basis of this information alone carries some uncertainty of success. At present, grasping remains challenging, mainly because of uncertainty in the shape of the grasped object and the pose of the gripper, encoder noise, coupling between the arm's axes, and uncertainty in the calibration between the camera and the arm's end effector.
Acquisition with only a monocular RGB camera yields in-plane coordinates from calibration principles, which, together with the arm body, suffices for some simple tasks such as planar positioning and fixed-space motion, and, in cooperation with the end effector, for related tasks. Among learning tasks pairing a mechanical arm with a vision system, the most studied is the end-effector grasping problem; for a two-finger parallel-jaw gripper, the "hand-eye" grasping problem can be divided into two sub-problems: visual perception and grasp planning. The visual perception module estimates the object's position, pose, and size; grasp planning concerns the trajectory and angle of the grasp. However, obtaining depth information over the whole field of view in this way is laborious, and such methods are not strongly adaptable or robust to the environment.
Accordingly, it is desirable to provide a mechanical arm control method and device based on deep learning.
Summary of the invention
The purpose of the present invention is to provide a mechanical arm control method and device based on deep learning that acquires planar and depth three-dimensional images and extracts features through a convolutional neural network, improving the robustness and accuracy of the mechanical arm's grasping task.
In order to achieve the above objectives, the present invention adopts the following technical solutions:
A mechanical arm control method based on deep learning, comprising:
acquiring an image of the object to be grasped;
calculating an optimal grasp process from the acquired image;
controlling the mechanical arm to grasp the object according to the optimal grasp process.
Further, acquiring the image of the object to be grasped includes:
acquiring an RGB color image of the object;
acquiring a D spatial depth image of the object.
Further, when acquiring images, the camera and the mechanical arm are calibrated in turn in the plane and in depth, determining the coordinate systems and transfer matrices.
Further, calculating the optimal grasp process includes:
performing labeling or feature learning of grasp key information for each image of the object to be grasped through a neural network algorithm;
setting a convolutional neural network algorithm to evaluate and rank the quality of the key information, determining the grasp strategy.
Further, the grasp strategy includes controlling the mechanical arm to grasp according to the object's position, pose, and size information.
Further, controlling the mechanical arm to grasp the object includes controlling the mechanical arm to translate, move vertically, and rotate, the rotation having multiple degrees of freedom.
The present invention also provides a mechanical arm control device based on deep learning, comprising:
an image acquisition module for acquiring an image of the object to be grasped;
a deep learning training and computing module for calculating an optimal grasp process from the acquired image;
a control module for controlling the mechanical arm to grasp the object according to the optimal grasp process.
Further, the image acquisition module includes:
an RGB color camera for acquiring an RGB color image of the object;
a D depth camera for acquiring a D spatial depth image of the object.
Further, the deep learning training and computing module is configured to:
perform labeling or feature learning of grasp key information for each image of the object to be grasped through a neural network algorithm;
set a convolutional neural network algorithm to evaluate and rank the quality of the key information, determining the grasp strategy.
Further, the control module is configured to control the mechanical arm to translate, move vertically, and rotate, the rotation having multiple degrees of freedom.
The beneficial effects of the present invention are as follows:
The technical solution of the present invention acquires planar and depth three-dimensional images and extracts features through a convolutional neural network, quantizing the grasp strategy as a grasp rectangle: the grasp information includes the spatial position of the object, its coordinate pose, and the size matched to the gripper, all represented by the quantized rectangle. All candidate rectangles are scored and ranked, and the best one is selected. Two neural networks of different sizes are designed: the small network performs first-round screening, and the large network ranks the remaining candidate grasp strategies, improving the robustness and accuracy of the mechanical arm's grasping task.
Detailed description of the invention
Specific embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
Fig. 1 is a flowchart of the mechanical arm control method based on deep learning according to the present invention;
Fig. 2 is a schematic diagram of the mechanical arm control method based on deep learning according to the present invention;
Fig. 3 is an example diagram of the mechanical arm control method based on deep learning according to the present invention;
Fig. 4 is a schematic diagram of the mechanical arm control device based on deep learning according to the present invention.
Specific embodiment
To illustrate the present invention more clearly, it is further described below with reference to preferred embodiments and the accompanying drawings. Similar components are indicated with identical reference numerals in the drawings. Those skilled in the art will appreciate that the content specifically described below is illustrative rather than restrictive and should not limit the scope of the invention.
As shown in Fig. 1, the invention discloses a deep learning mechanical arm control method. In leading research, a neural network algorithm is generally used to learn key grasp information from manual labels or user-defined features on a given image; an algorithm is then set to evaluate and rank the quality of the grasp information; finally, the optimal grasp information (including object position, pose, and size) is supplied to the mechanical arm, which then executes the grasp task. This process is similar to the object detection problem in deep learning for images. The main framework of the method is: first acquire a full-field image with the camera; apply preprocessing and segmentation to obtain single images of the parts to be grasped; finally, control the mechanical arm to grasp using the output of a convolutional neural network (CNN) model trained offline.
The method comprises: S1, acquiring an image of the object to be grasped;
S2, calculating an optimal grasp process from the acquired image;
S3, controlling the mechanical arm to grasp the object according to the optimal grasp process.
Specifically, as shown in Fig. 2, S1: acquiring an image of the object to be grasped.
When acquiring the object image, the main data are the object's RGB color image and the D spatial depth image giving the camera-to-object distance. An ordinary RGB camera can be used, or a binocular camera, which is more accurate in measuring distance and determining position. After acquiring the images, a calibration experiment is carried out between the RGB color camera and the mechanical arm to determine the transfer matrix between the two planar coordinate systems; a second calibration experiment between the D depth camera and the mechanical arm then determines the transfer matrix in the depth direction, facilitating secondary positioning during grasping.
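The two-stage calibration described above yields transfer matrices relating the camera and arm coordinate systems. As a minimal sketch (the matrix values and the function name are hypothetical, not from the patent), a combined plane-plus-depth calibration result can be applied as a single homogeneous transform:

```python
# Hypothetical calibration result: a 4x4 homogeneous transfer matrix mapping
# camera-frame coordinates to mechanical-arm base-frame coordinates. In the
# method above it would combine the planar calibration (in-plane rotation and
# translation) with the depth calibration (offset along the depth direction).
T_BASE_CAM = [
    [0.0, -1.0, 0.0, 0.30],  # 90-degree rotation about z, plus an x offset
    [1.0,  0.0, 0.0, 0.05],
    [0.0,  0.0, 1.0, 0.10],  # depth-direction offset from the second calibration
    [0.0,  0.0, 0.0, 1.00],
]

def camera_to_base(p_cam):
    """Map a 3-D point seen by the camera into the arm base frame."""
    x, y, z = p_cam
    p = (x, y, z, 1.0)  # homogeneous coordinates
    return [sum(row[i] * p[i] for i in range(4)) for row in T_BASE_CAM[:3]]

p_base = camera_to_base([0.10, 0.20, 0.50])
print([round(v, 6) for v in p_base])  # -> [0.1, 0.15, 0.6]
```

A real system would estimate `T_BASE_CAM` from the calibration experiments rather than hard-code it.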
S2: calculating an optimal grasp process from the acquired image.
During pre-training, the RGB image of the target object is first converted to a grayscale image with MATLAB. An experience replay mechanism is then used so that the correlation between successive photos is as small as possible, meeting the requirement that the neural network's input data be mutually independent; the images fed to the network are finally obtained by random sampling. Dimensionality reduction of the data is achieved through deep learning, and a target Q-value network technique is used to continually adjust the weight matrices of the neural network, finally yielding a convergent network.
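The experience-replay and target-network steps can be sketched as follows; the class and function names are illustrative, and a real implementation would store image tensors rather than scalar placeholders:

```python
import random
from collections import deque

# Sketch of the experience-replay mechanism: transitions go into a ring
# buffer and training minibatches are drawn by random sampling, breaking the
# temporal correlation between consecutive frames so the network's inputs
# are close to independent.
class ReplayBuffer:
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # Random sampling is what decorrelates the minibatch.
        return random.sample(self.buffer, batch_size)

def update_target(online, target, tau=0.01):
    """Slowly track the online weights, as in target Q-value techniques."""
    return [(1.0 - tau) * t + tau * w for w, t in zip(online, target)]

buf = ReplayBuffer(capacity=1000)
for t in range(50):  # fake sequential transitions standing in for frames
    buf.push(t, t % 4, float(t % 2), t + 1)
batch = buf.sample(8)
print(len(batch))  # -> 8
```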
When determining the optimal grasp process, as shown in Fig. 3, if the size and height of the grasped article match the end effector's form, travel range, and tooling height, the machine learning algorithm cooperating with the grasp strategy can complete grasp tasks that are not highly sensitive to optical depth information. The basic idea of estimating the object-pose grasp is:
1) generate a large number of candidate grasp strategies;
2) assess each strategy's probability of a successful grasp;
3) given a large amount of grasp training data, train a classifier or regression model to detect certain parts of the picture or point cloud;
4) because this method's detection is independent of the target body itself, grasp information can also be generated for new objects.
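The steps above can be caricatured as a generate-score-rank pipeline; the scoring function below is a stand-in for the trained classifier or regression model, and all names and the scoring heuristic are assumptions for illustration:

```python
import math
import random

# Toy pipeline: generate candidate grasp rectangles, score each one, keep
# the best. A learned model would replace the hand-written scorer.
def generate_candidates(n, seed=0):
    """Step 1: propose n grasp rectangles (x, y, w, h, angle)."""
    rng = random.Random(seed)
    return [(rng.uniform(0, 1), rng.uniform(0, 1),
             0.10, 0.05, rng.uniform(0, math.pi)) for _ in range(n)]

def score(candidate):
    """Step 2 stand-in: pretend grasps near the image centre succeed more."""
    x, y, _w, _h, _angle = candidate
    return 1.0 - abs(x - 0.5) - abs(y - 0.5)

candidates = generate_candidates(100)
best = max(candidates, key=score)  # rank all candidates, keep the best
print(len(candidates))  # -> 100
```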
The feature selection in the first step must guarantee sufficient speed and robustness while producing an outstanding set of candidate grasp strategies, which would be very difficult with manual engineering. Deep learning offers a solution: design two trained networks of different sizes, using the small network for first-round screening and the large network to rank the candidate grasp strategies. Sparse autoencoders in deep learning learn features from unlabeled data, providing a feature description better than the raw data. Features are extracted through a convolutional neural network; the Alex-Net structure widely used in object recognition is adopted, consisting mainly of five convolutional layers and two fully connected layers, with some normalization and max-pooling layers inserted between the convolutional layers. The final fully connected output layer of the whole network has 6 neurons, corresponding to the 6 parameters of the grasp rectangle: four parameters give the rectangle's position, height, and width, and — since the grasp angle has two-fold rotational symmetry — two additional parameters give the sine and cosine of twice the angle.
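Assuming the 6-neuron output layer described above, the grasp rectangle can be decoded as in this sketch; the angle is recovered from the sine and cosine of twice its value, reflecting the rectangle's two-fold rotational symmetry (the function and variable names are illustrative):

```python
import math

# Decode the network's 6-neuron output (x, y, h, w, sin 2θ, cos 2θ) into a
# grasp rectangle. Halving the atan2 result undoes the doubled angle that
# encodes the two-fold rotational symmetry (θ and θ + π are the same grasp).
def decode_grasp(outputs):
    x, y, h, w, sin2t, cos2t = outputs
    theta = 0.5 * math.atan2(sin2t, cos2t)
    return (x, y, h, w, theta)

rect = decode_grasp([0.4, 0.6, 0.05, 0.10, 1.0, 0.0])  # sin 2θ = 1, cos 2θ = 0
print(round(rect[4], 6))  # -> 0.785398 (π/4)
```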
The necessary conditions for a rectangle-measured grasp strategy to count as correct:
1. the grasp angle is within 30 degrees of the ground-truth angle;
2. the Jaccard similarity index between the predicted grasp and the ground truth exceeds 25%.
Here the Jaccard similarity index is J(A, B) = |A ∩ B| / |A ∪ B|. Compared with measuring a strategy at a single point, the rectangle has better resolving power for judging the quality of a grasp strategy.
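The two necessary conditions can be checked as in the sketch below. For simplicity the Jaccard index is computed for axis-aligned rectangles (x_min, y_min, x_max, y_max), whereas the patent's grasp rectangles are rotated; the set formula J = |A ∩ B| / |A ∪ B| is the same (names are illustrative):

```python
import math

# Jaccard index for axis-aligned rectangles: intersection area over union area.
def jaccard(a, b):
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def is_correct(pred_angle, true_angle, pred_rect, true_rect):
    """Angle within 30 degrees (mod the two-fold symmetry) and Jaccard > 25%."""
    diff = abs(pred_angle - true_angle) % math.pi
    diff = min(diff, math.pi - diff)
    return diff <= math.radians(30) and jaccard(pred_rect, true_rect) > 0.25

print(round(jaccard((0, 0, 2, 2), (1, 0, 3, 2)), 4))  # -> 0.3333
```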
S3: controlling the mechanical arm to grasp the object according to the optimal grasp process.
The mechanical arm body and control module mainly include a multi-degree-of-freedom mechanical body, the joint drivers, and a control computer. The mechanical arm can translate, move vertically, and rotate; its rotational degrees of freedom are not limited to six, and the scheme applies generally to drivers and controllers from any manufacturer. The control computer issues numerical control instructions and outputs drive currents to the drivers via an Ethernet/CAN bus, driving each joint motor. Force or position sensors installed on each joint feed signals back to the controller. The system sends the control strategy to the control module over a wired or wireless communication link to control the arm's motion state and achieve accurate grasping of the target object.
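The joint-level feedback described above — the control computer commands drive currents while joint sensors report positions back — can be caricatured with a proportional control law and a toy plant (the gain, names, and plant model are all assumptions, not from the patent):

```python
# One joint of the feedback loop: the controller turns the position error
# reported by the joint sensor into a drive current, and a toy plant
# integrates that current into motion.
def step_joint(position, target, kp=0.5):
    """One control cycle: drive current proportional to the feedback error."""
    current = kp * (target - position)
    return position + current  # toy plant: position integrates the current

pos = 0.0
for _ in range(20):  # 20 control cycles toward a 1.0 rad joint target
    pos = step_joint(pos, target=1.0)
print(round(pos, 4))  # -> 1.0 (converged)
```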
As shown in Fig. 4, the present invention also provides a mechanical arm control device based on deep learning, comprising:
Image acquisition module 1: for acquiring an image of the object to be grasped; it specifically includes:
RGB color camera 11: for acquiring an RGB color image of the object;
D depth camera 12: for acquiring a D spatial depth image of the object.
Deep learning training and computing module 2: for calculating the optimal grasp process from the acquired image. Specifically, a neural network algorithm performs labeling or feature learning of the grasp key information for each image of the object to be grasped, and a CNN algorithm is set to evaluate and rank the quality of the key information and determine the grasp strategy.
Control module 3: for controlling the mechanical arm to grasp the object according to the optimal grasp process; specifically, controlling the mechanical arm to translate, move vertically, and rotate, the rotation having multiple degrees of freedom.
Through the cooperation of these three modules, a versatile, low-cost, structurally simple random grasping method suitable for various workpieces is finally realized. The principle of the invention is simple, convenient, and practical, greatly reducing development time and cost.
Those skilled in the art will understand that embodiments of the present application may be provided as a method, an apparatus (device), or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
The present application is described with reference to flowcharts and/or block diagrams of methods, apparatus (devices), and computer program products according to embodiments of the application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations thereof, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing device to produce a machine, such that the instructions executed by the processor create means for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
Obviously, the above embodiments of the present invention are merely examples given to clearly illustrate the invention and do not limit its embodiments. On the basis of the above description, those of ordinary skill in the art can make other variations or changes in different forms; all embodiments cannot be exhausted here, and any obvious change or variation derived from the technical solution of the present invention remains within its scope of protection.

Claims (10)

1. A mechanical arm control method based on deep learning, characterized by comprising:
acquiring an image of the object to be grasped;
calculating an optimal grasp process from the acquired image;
controlling the mechanical arm to grasp the object according to the optimal grasp process.
2. The method according to claim 1, characterized in that acquiring the image of the object to be grasped comprises:
acquiring an RGB color image of the object;
acquiring a D spatial depth image of the object.
3. The method according to claim 2, characterized in that, when acquiring images, the camera and the mechanical arm are calibrated in turn in the plane and in depth, determining the coordinate systems and transfer matrices.
4. The method according to claim 1, characterized in that calculating the optimal grasp process comprises:
performing labeling or feature learning of grasp key information for each image of the object to be grasped through a neural network algorithm;
setting a convolutional neural network algorithm to evaluate and rank the quality of the key information, determining the grasp strategy.
5. The method according to claim 4, characterized in that the grasp strategy comprises controlling the mechanical arm to grasp according to the object's position, pose, and size information.
6. The method according to claim 1, characterized in that controlling the mechanical arm to grasp the object comprises controlling the mechanical arm to translate, move vertically, and rotate, the rotation having multiple degrees of freedom.
7. A mechanical arm control device based on deep learning, characterized by comprising:
an image acquisition module for acquiring an image of the object to be grasped;
a deep learning training and computing module for calculating an optimal grasp process from the acquired image;
a control module for controlling the mechanical arm to grasp the object according to the optimal grasp process.
8. The device according to claim 7, characterized in that the image acquisition module comprises:
an RGB color camera for acquiring an RGB color image of the object;
a D depth camera for acquiring a D spatial depth image of the object.
9. The device according to claim 7, characterized in that the deep learning training and computing module is configured to:
perform labeling or feature learning of grasp key information for each image of the object to be grasped through a neural network algorithm;
set a convolutional neural network algorithm to evaluate and rank the quality of the key information, determining the grasp strategy.
10. The device according to claim 7, characterized in that the control module is configured to control the mechanical arm to translate, move vertically, and rotate, the rotation having multiple degrees of freedom.
CN201910098680.8A 2019-01-31 2019-01-31 Mechanical arm control method and device based on deep learning Pending CN109531584A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910098680.8A CN109531584A (en) 2019-01-31 2019-01-31 Mechanical arm control method and device based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910098680.8A CN109531584A (en) 2019-01-31 2019-01-31 Mechanical arm control method and device based on deep learning

Publications (1)

Publication Number Publication Date
CN109531584A true CN109531584A (en) 2019-03-29

Family

ID=65838866

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910098680.8A Pending CN109531584A (en) 2019-01-31 2019-01-31 Mechanical arm control method and device based on deep learning

Country Status (1)

Country Link
CN (1) CN109531584A (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110125930A * 2019-04-18 2019-08-16 华中科技大学 Mechanical arm grasp control method based on machine vision and deep learning
CN110276805A * 2019-06-28 2019-09-24 联想(北京)有限公司 Data processing method and electronic equipment
CN110342252A * 2019-07-01 2019-10-18 芜湖启迪睿视信息技术有限公司 Automatic article grasping method and automatic grasping device
CN111024145A (en) * 2019-12-25 2020-04-17 北京航天计量测试技术研究所 Method and device for calibrating handheld digital meter
CN111055275A (en) * 2019-12-04 2020-04-24 深圳市优必选科技股份有限公司 Action simulation method and device, computer readable storage medium and robot
CN111151463A (en) * 2019-12-24 2020-05-15 北京无线电测量研究所 Mechanical arm sorting and grabbing system and method based on 3D vision
CN111524184A (en) * 2020-04-21 2020-08-11 湖南视普瑞智能科技有限公司 Intelligent unstacking method and system based on 3D vision
WO2020207017A1 (en) * 2019-04-11 2020-10-15 上海交通大学 Method and device for collaborative servo control of uncalibrated movement vision of robot in agricultural scene
CN112223288A (en) * 2020-10-09 2021-01-15 南开大学 Visual fusion service robot control method
CN113232019A (en) * 2021-05-13 2021-08-10 中国联合网络通信集团有限公司 Mechanical arm control method and device, electronic equipment and storage medium
CN113799138A * 2021-10-09 2021-12-17 中山大学 Mechanical arm grasping method based on a grasp-generating convolutional neural network
CN114435827A * 2021-12-24 2022-05-06 北京无线电测量研究所 Smart warehouse system
EP4044125A4 (en) * 2020-10-20 2023-08-02 Samsung Electronics Co., Ltd. Electronic apparatus and control method thereof

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106780605A * 2016-12-20 2017-05-31 芜湖哈特机器人产业技术研究院有限公司 Method for detecting object grasp positions based on a deep learning robot
KR20170101455A (en) * 2016-02-29 2017-09-06 성균관대학교산학협력단 Training method of robot with 3d camera using artificial intelligence deep learning network based big data platform
CN107139179A * 2017-05-26 2017-09-08 西安电子科技大学 Intelligent service robot and working method
CN108010078A * 2017-11-29 2018-05-08 中国科学技术大学 Object grasp detection method based on three-stage convolutional neural networks
US20180147723A1 (en) * 2016-03-03 2018-05-31 Google Llc Deep machine learning methods and apparatus for robotic grasping
CN108171748A * 2018-01-23 2018-06-15 哈工大机器人(合肥)国际创新研究院 Visual recognition and localization method for intelligent robotic grasping applications
CN108399639A * 2018-02-12 2018-08-14 杭州蓝芯科技有限公司 Fast automatic grasping and sorting method based on deep learning
CN108510062A * 2018-03-29 2018-09-07 东南大学 Rapid detection method for robot grasp poses of irregular objects based on cascaded convolutional neural networks
WO2018236753A1 (en) * 2017-06-19 2018-12-27 Google Llc Robotic grasping prediction using neural networks and geometry aware object representation

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20170101455A (en) * 2016-02-29 2017-09-06 성균관대학교산학협력단 Training method of robot with 3d camera using artificial intelligence deep learning network based big data platform
US20180147723A1 (en) * 2016-03-03 2018-05-31 Google Llc Deep machine learning methods and apparatus for robotic grasping
CN106780605A * 2016-12-20 2017-05-31 芜湖哈特机器人产业技术研究院有限公司 Method for detecting object grasp positions based on a deep learning robot
CN107139179A * 2017-05-26 2017-09-08 西安电子科技大学 Intelligent service robot and working method
WO2018236753A1 * 2017-06-19 2018-12-27 Google Llc Robotic grasping prediction using neural networks and geometry aware object representation
CN108010078A * 2017-11-29 2018-05-08 中国科学技术大学 Object grasp detection method based on three-stage convolutional neural networks
CN108171748A * 2018-01-23 2018-06-15 哈工大机器人(合肥)国际创新研究院 Visual recognition and localization method for intelligent robotic grasping applications
CN108399639A * 2018-02-12 2018-08-14 杭州蓝芯科技有限公司 Fast automatic grasping and sorting method based on deep learning
CN108510062A * 2018-03-29 2018-09-07 东南大学 Rapid detection method for robot grasp poses of irregular objects based on cascaded convolutional neural networks

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020207017A1 (en) * 2019-04-11 2020-10-15 上海交通大学 Method and device for collaborative servo control of uncalibrated movement vision of robot in agricultural scene
CN110125930A * 2019-04-18 2019-08-16 华中科技大学 Mechanical arm grasp control method based on machine vision and deep learning
CN110125930B * 2019-04-18 2021-05-11 华中科技大学 Mechanical arm grabbing control method based on machine vision and deep learning
CN110276805A * 2019-06-28 2019-09-24 联想(北京)有限公司 Data processing method and electronic equipment
CN110342252A * 2019-07-01 2019-10-18 芜湖启迪睿视信息技术有限公司 Automatic article grasping method and automatic grasping device
CN111055275B (en) * 2019-12-04 2021-10-29 深圳市优必选科技股份有限公司 Action simulation method and device, computer readable storage medium and robot
CN111055275A (en) * 2019-12-04 2020-04-24 深圳市优必选科技股份有限公司 Action simulation method and device, computer readable storage medium and robot
CN111151463A (en) * 2019-12-24 2020-05-15 北京无线电测量研究所 Mechanical arm sorting and grabbing system and method based on 3D vision
CN111024145A (en) * 2019-12-25 2020-04-17 北京航天计量测试技术研究所 Method and device for calibrating handheld digital meter
CN111524184A (en) * 2020-04-21 2020-08-11 湖南视普瑞智能科技有限公司 Intelligent unstacking method and system based on 3D vision
CN111524184B (en) * 2020-04-21 2024-01-16 湖南视普瑞智能科技有限公司 Intelligent unstacking method and unstacking system based on 3D vision
CN112223288A (en) * 2020-10-09 2021-01-15 南开大学 Visual fusion service robot control method
CN112223288B (en) * 2020-10-09 2021-09-14 南开大学 Visual fusion service robot control method
EP4044125A4 (en) * 2020-10-20 2023-08-02 Samsung Electronics Co., Ltd. Electronic apparatus and control method thereof
CN113232019A (en) * 2021-05-13 2021-08-10 中国联合网络通信集团有限公司 Mechanical arm control method and device, electronic equipment and storage medium
CN113799138A * 2021-10-09 2021-12-17 中山大学 Mechanical arm grasping method based on a grasp-generating convolutional neural network
CN114435827A * 2021-12-24 2022-05-06 北京无线电测量研究所 Smart warehouse system

Similar Documents

Publication Publication Date Title
CN109531584A (en) Mechanical arm control method and device based on deep learning
Wang et al. 3d shape perception from monocular vision, touch, and shape priors
Li et al. Slip detection with combined tactile and visual information
Xu et al. Densephysnet: Learning dense physical object representations via multi-step dynamic interactions
Calandra et al. The feeling of success: Does touch sensing help predict grasp outcomes?
Sahbani et al. An overview of 3D object grasp synthesis algorithms
Fan et al. Learning collision-free space detection from stereo images: Homography matrix brings better data augmentation
Alonso et al. Current research trends in robot grasping and bin picking
US8731719B2 (en) Robot with vision-based 3D shape recognition
CN112297013B (en) Robot intelligent grabbing method based on digital twin and deep neural network
CN108972494A (en) Humanoid manipulator grasp control system and its data processing method
Toussaint et al. Integrated motor control, planning, grasping and high-level reasoning in a blocks world using probabilistic inference
Zhang et al. Robotic grasp detection based on image processing and random forest
CN114952809B (en) Workpiece identification and pose detection method, system and mechanical arm grabbing control method
CN110271000A (en) Object grasping method based on elliptical surface contact
Aleotti et al. Perception and grasping of object parts from active robot exploration
Khan et al. PackerRobo: Model-based robot vision self supervised learning in CART
Iscimen et al. Smart robot arm motion using computer vision
Gulde et al. RoPose: CNN-based 2D pose estimation of industrial robots
Devgon et al. Orienting novel 3D objects using self-supervised learning of rotation transforms
Deng et al. A human–robot collaboration method using a pose estimation network for robot learning of assembly manipulation trajectories from demonstration videos
Zhang et al. Learning-based six-axis force/torque estimation using gelstereo fingertip visuotactile sensing
Ku et al. Associating grasp configurations with hierarchical features in convolutional neural networks
CN117021099A (en) Human-computer interaction method oriented to any object and based on deep learning and image processing
CN113436293B (en) Intelligent captured image generation method based on condition generation type countermeasure network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20190329