CN112372641A - Family service robot article grabbing method based on visual feedforward and visual feedback - Google Patents

Family service robot article grabbing method based on visual feedforward and visual feedback

Info

Publication number: CN112372641A
Authority: CN (China)
Prior art keywords: article, camera, mechanical arm, visual, tail end
Priority date: 2020-08-06
Filing date: 2020-08-06
Publication date: 2021-02-19 (CN112372641A); 2023-06-02 (CN112372641B, grant)
Legal status: Granted; Active
Application number: CN202010783580.1A
Other languages: Chinese (zh)
Other versions: CN112372641B (en)
Inventors: 陈殿生 (Chen Diansheng), 田蔚瀚 (Tian Weihan), 李逸飞 (Li Yifei), 王敏 (Wang Min)
Current Assignee: Beihang University
Original Assignee: Beihang University
Application filed by Beihang University; priority to CN202010783580.1A

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
        • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
            • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
                • B25J 11/00 Manipulators not otherwise provided for
                • B25J 9/00 Programme-controlled manipulators
                    • B25J 9/16 Programme controls
                        • B25J 9/1602 Programme controls characterised by the control system, structure, architecture
                            • B25J 9/161 Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
                        • B25J 9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
                            • B25J 9/1697 Vision controlled systems
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
        • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
            • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
                • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
                    • Y02P 90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Manipulator (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an article grabbing method for a home service robot based on visual feedforward and visual feedback. The invention involves article recognition and positioning, article pose estimation, mechanical arm control, and related technologies, is applied mainly to home service robots, and realizes the robot's article grabbing function. In the method, the global camera recognizes and roughly positions an article to complete the visual feedforward; the camera at the end of the mechanical arm then recognizes and precisely positions the article to complete the visual feedback; and the mechanical arm is controlled to grab the article accurately according to its rough and precise positions. The method meets the requirements of diverse article grabbing tasks in a household scene and greatly improves the effectiveness and adaptability of the home service robot in executing such tasks.

Description

Family service robot article grabbing method based on visual feedforward and visual feedback
Technical Field
The invention belongs to the technical field of hand-eye coordination control. It provides a grabbing operation method for a home service robot arm-hand system based on the combination of visual feedforward and visual feedback, and relates to article recognition and positioning, article pose estimation, mechanical arm control, and related technologies.
Background
A service robot is a semi-autonomous or fully autonomous robot that performs service work beneficial to human well-being. Service robots have a wide range of applications, mainly covering maintenance, repair, transportation, cleaning, security, rescue, monitoring, reception, and delivery. In carrying out such work, the robot inevitably needs to grab articles, and during a grabbing operation it must first acquire information about the article in order to accomplish the grabbing task.
At present, robot grabbing operations generally adopt one of two schemes: grabbing from a limited set of articles, or grabbing an article in a preset pose. With a limited set of articles, the grabbed objects are one or several fixed articles of simple, uniform shape, and grabbing can be completed without accurate positioning by fitting the article with an adapted grab handle. With a preset pose, the position and pose of the grabbed article are known in advance, and the robot completes the task simply by grabbing according to this known information and the set pose. However, both modes can only grab specific articles, whereas the articles in a household scene are of many kinds and in random poses, making it difficult to grab the various articles in the scene. Therefore, research on a robot article grabbing method that combines article recognition and positioning with pose estimation is of great significance for grabbing diverse articles in complex environments.
Disclosure of Invention
The invention aims to provide a home service robot article grabbing method based on visual feedforward and visual feedback, to overcome the limitations of existing article grabbing methods.
In order to achieve the purpose, the technical scheme provided by the invention comprises the following steps:
step 1: performing internal reference calibration and hand-eye calibration on the global camera and the camera at the tail end of the mechanical arm to obtain a conversion relation between an internal reference matrix of the camera and a coordinate system;
step 2: collecting information of articles in family scene, labeling data of collected result to obtain article data set, and establishing attitude estimation database
And step 3: training an article data set in a family scene through an artificial neural network article recognition algorithm, and recognizing articles in the family scene in a global camera color picture by using a training result to obtain the types of the articles in a scene field and the pixel coordinates of the central point of each article image in a captured color picture;
and 4, step 4: selecting an identification image to be captured according to task requirements, aligning pixel coordinates of the object in the image to coordinates of a depth camera coordinate system of the global camera, calculating and obtaining rough coordinates of the object in a world coordinate system according to a coordinate conversion matrix calibrated by hands and eyes, and finishing vision feedforward;
and 5: controlling the tail end of the mechanical arm to move to the front upper part of the article according to the rough position coordinates of the article obtained by visual feedforward, and enabling the camera at the tail end of the mechanical arm to face the article to be grabbed, so that the article is in a better visual field range of the camera at the tail end of the mechanical arm;
step 6: the camera at the tail end of the mechanical arm captures information of an object in a field, calculates the normal characteristic of depth information, performs window-dividing matching on the normal characteristic and a template in an attitude estimation database, performs attitude estimation according to the optimal matching result to obtain the attitude of the object in a picture, and obtains the accurate coordinate and attitude information of the object in a world coordinate system through calculation of a conversion matrix calibrated by hands and eyes to finish visual feedback;
and 7: and controlling the gripper at the tail end of the mechanical arm to move to a gripping point according to the accurate coordinates and posture information of the object obtained by visual feedback and by combining the preset geometric gripping point and gripping mode of the object, thereby finishing the object gripping task.
The beneficial effect of the above technical scheme is as follows. In the home service robot article grabbing method based on visual feedforward and visual feedback, the global camera completes the visual feedforward over the household scene and the arm-end camera completes the visual feedback on the article, realizing accurate grabbing of diverse articles in a complex household scene. A home service robot deployed in a household carries out service tasks according to user requirements, and these tasks frequently involve article grabbing operations; the grabbing method provided by the invention can greatly improve the effectiveness and adaptability of the home service robot in executing them.
Drawings
In order to illustrate the technical solution of the present invention more clearly, the drawings used in the embodiments are briefly introduced below. The drawings described here depict embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flowchart illustrating the execution of a robot grabbing task based on visual feedforward and visual feedback according to an embodiment of the present invention;
FIG. 2 is a schematic layout of the cameras and the mechanical arm in an embodiment of the present invention, in which 1 denotes the arm-end camera, 2 the mechanical arm, and 3 the global camera;
FIG. 3 is a diagram of a matching template for a pop can in a pose estimation database according to an embodiment of the present invention;
FIG. 4 is a diagram of the global camera's visual feedforward article recognition effect in an embodiment of the present invention;
FIG. 5 is a schematic diagram illustrating an alignment solution of color pixel coordinates of a global camera and three-dimensional coordinates of depth according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating the pose estimation effect of the arm-end camera's visual feedback in an embodiment of the present invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and examples. The following examples are intended to illustrate the invention but are not intended to limit the scope of the invention.
This embodiment provides a home service robot article grabbing method based on visual feedforward and visual feedback which, as shown in fig. 1, includes the following steps:
step 1: performing internal reference calibration and hand-eye calibration on the global camera and the camera at the tail end of the mechanical arm to obtain a coordinate system conversion relation participating in the camera, acquiring information of articles in a family scene and establishing a posture estimation database;
in the embodiment, internal reference calibration needs to be performed on the global camera and the camera at the tail end of the mechanical arm, calibration is completed by matching with a calibration plate, and the calibration result is a 3 × 3 matrix; and after the internal reference calibration is finished, the hand-eye calibration is required. As shown in fig. 2, the arrangement of the cameras and the mechanical arm is such that the global camera is in eye-to-hand configuration, the camera at the tail end of the mechanical arm is in eye-in-hand configuration, and the transformation matrix T of the coordinate system of the global camera and the coordinate system of the base of the mechanical arm can be obtained according to the calibration result of the hand and the eye respectivelycrConversion matrix T between coordinate system of camera at tail end of mechanical arm and tail end of mechanical armce
Step 2: acquiring information of articles in a family scene, performing data annotation on an acquisition result to obtain an article data set, and establishing a posture estimation database;
in this embodiment, a data labeling tool is used to label multiple pictures of various articles in a family scene, and the labeling result is stored, so as to establish a data set of various articles in the family scene. The RGBD camera is used for three-dimensional scanning modeling of the article to obtain a grid model of the corresponding article in the STL format, as shown in fig. 3, a pop can template is used as template data for storage to establish a posture estimation database.
Step 3: Train on the household article data set with an artificial-neural-network article recognition algorithm, and use the training result to recognize household articles in the global camera's color picture, obtaining the types of articles present in the scene and the pixel coordinates of each article's image center point in the captured color picture;
and (3) training the family scene article data set established in the step (2) through an artificial neural network article identification algorithm to obtain a training result of the data set, identifying the articles in the family scene in the global camera color picture by taking the training result as an article identification model, displaying the identification result in a frame selection mode, taking the frame selection central point as the central point of the identified articles and solving the pixel coordinates (u, v) of the point in the captured color picture, wherein the identification result is shown in fig. 4.
Step 4: Select the recognized article to be grabbed according to the task requirements, align the article's pixel coordinates in the image to the coordinate system of the global camera's depth camera, and compute the rough coordinates of the article in the world coordinate system through the hand-eye-calibrated coordinate transformation matrix, completing the visual feedforward;
the pixel coordinates (u, v) of the center point of the article in the frame are obtained according to the frame selection result of the article to be grabbed, and according to the imaging principle, as shown in fig. 5, the following formula (1) can be obtained:
$$z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \end{bmatrix} \tag{1}$$
where u_0, v_0, f_x, f_y are the intrinsic parameters of the color camera, (u, v) is any pixel in the image coordinate system, z_c is the z-axis value of the camera coordinates provided by the depth camera, and (x_w, y_w, z_w) are the rough coordinates of the article in the camera coordinate system. Through the coordinate transformation matrix T_cr, the rough coordinates of the article in the world coordinate system are obtained, completing the visual feedforward.
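A direct transcription of this step into code, inverting formula (1) and then applying T_cr; representing T_cr as a 4x4 homogeneous matrix is an assumption about its form:

```python
import numpy as np

def pixel_to_world(u, v, z_c, K, T_cr):
    """Back-project the center pixel (u, v) with depth z_c into the camera
    frame via formula (1), then map it into the world (robot base) frame
    with the hand-eye transformation T_cr; returns the coarse position."""
    fx, fy, u0, v0 = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    x = (u - u0) * z_c / fx
    y = (v - v0) * z_c / fy
    p_cam = np.array([x, y, z_c, 1.0])
    return (T_cr @ p_cam)[:3]
```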
Step 5: According to the rough position coordinates obtained by visual feedforward, move the end of the mechanical arm to a position above and in front of the article, with the arm-end camera facing the article to be grabbed, so that the article lies in a favorable field of view of the arm-end camera;
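A minimal sketch of this pre-grasp placement; the offsets and the approach direction are illustrative assumptions, and `arm.move_to` stands in for whatever Cartesian motion interface the mechanical arm exposes:

```python
import numpy as np

HOVER_BACK = 0.10  # m behind the article along the approach axis (assumed)
HOVER_UP = 0.15    # m above the article (assumed)

def pre_grasp_position(p_world, approach=np.array([1.0, 0.0, 0.0])):
    """Hover above and in front of the coarse article position so that the
    arm-end camera faces the article to be grabbed."""
    return p_world - HOVER_BACK * approach + np.array([0.0, 0.0, HOVER_UP])

# arm.move_to(pre_grasp_position(p_coarse))  # hypothetical motion call
```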
Step 6: Capture information on the article in the field with the arm-end camera, compute the normal features of the depth information, match them window by window against the templates in the pose estimation database, and perform pose estimation on the best match to obtain the article's pose in the picture; then compute the article's accurate coordinates and pose in the world coordinate system through the hand-eye-calibrated transformation matrix, completing the visual feedback;
the method comprises the steps of completing information capture of an object to be grabbed in a visual field through a camera at the tail end of a mechanical arm, calculating normal features of depth information, structuring the normal features into 5 directions based on a linemod algorithm to carry out extended computation and storage, calculating a matching degree in a window-dividing mode by using a computation storage result and a cosine value of a template in a posture estimation database as a matching object, carrying out posture estimation according to an optimal matching degree result to obtain the posture of the object to be grabbed in a depth picture, and converting a coordinate transformation matrix TceThe accurate coordinates and the corresponding posture information of the object under the world coordinate system can be obtained, the visual feedback is completed, and the recognition result is shown in fig. 6.
Step 7: According to the accurate coordinates and pose obtained by visual feedback, and combining the article's preset geometric grab point and grab mode, control the gripper at the end of the mechanical arm to move to the grab point, completing the article grabbing task.
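To close the loop, a sketch of how the preset geometric grab point could be combined with the pose from visual feedback; the 4x4 pose matrix, the example grab point, and the `arm`/`gripper` calls are all assumptions:

```python
import numpy as np

def grab_point_in_world(T_obj_world, grab_point_obj):
    """Map a grab point defined in the article's own frame into the world
    frame using the 4x4 article pose recovered by visual feedback (step 6)."""
    p = np.append(np.asarray(grab_point_obj, dtype=float), 1.0)
    return (T_obj_world @ p)[:3]

# Hypothetical usage for the pop can of fig. 3: grab 10 cm above its base.
# target = grab_point_in_world(T_can, [0.0, 0.0, 0.10])
# arm.move_to(target); gripper.close()
```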

Claims (4)

1. A home service robot article grabbing method based on visual feedforward and visual feedback, characterized by comprising the following steps:
step 1: performing internal reference calibration and hand-eye calibration on the global camera and the camera at the tail end of the mechanical arm to obtain a conversion relation between an internal reference matrix of the camera and a coordinate system;
step 2: collecting information of articles in family scene, labeling data of collected result to obtain article data set, and establishing attitude estimation database
And step 3: training an article data set in a family scene through an artificial neural network article recognition algorithm, and recognizing articles in the family scene in a global camera color picture by using a training result to obtain the types of the articles in a scene field and the pixel coordinates of the central point of each article image in a captured color picture;
and 4, step 4: selecting an identification image to be captured according to task requirements, aligning pixel coordinates of the object in the image to coordinates of a depth camera coordinate system of the global camera, calculating and obtaining rough coordinates of the object in a world coordinate system according to a coordinate conversion matrix calibrated by hands and eyes, and finishing vision feedforward;
and 5: controlling the tail end of the mechanical arm to move to the front upper part of the article according to the rough position coordinates of the article obtained by visual feedforward, and enabling the camera at the tail end of the mechanical arm to face the article to be grabbed, so that the article is in a better visual field range of the camera at the tail end of the mechanical arm;
step 6: the camera at the tail end of the mechanical arm captures information of an object in a field, calculates the normal characteristic of depth information, performs window-dividing matching on the normal characteristic and a template in an attitude estimation database, performs attitude estimation according to the optimal matching result to obtain the attitude of the object in a picture, and obtains the accurate coordinate and attitude information of the object in a world coordinate system through calculation of a conversion matrix calibrated by hands and eyes to finish visual feedback;
and 7: controlling a gripper at the tail end of a mechanical arm to move to a grabbing point according to the accurate coordinates and posture information of the object obtained by visual feedback and in combination with a preset geometric grabbing point and grabbing mode of the object, and completing an object grabbing task; .
2. The robot article grabbing method of claim 1, characterized in that: the pose estimation database of step 2 is established by building mesh models of the household articles through three-dimensional scanning and modeling and storing them collectively in STL file format, providing templates for subsequent matching.
3. The robot article grabbing method of claim 1, characterized in that: the visual feedforward of steps 3 and 4 comprises recognizing the various articles in the household scene through an artificial-neural-network algorithm and roughly positioning them through a series of coordinate transformations of the pixel coordinates of the bounding-box center point.
4. The robot article grabbing method of claim 1, characterized in that: in step 6, the arm-end camera captures information on the article to be grabbed in its field of view; the normal features of the depth information are computed and, based on the linemod algorithm, structured into 5 quantized directions for expanded computation and storage; the stored result is matched window by window against the templates in the pose estimation database, with the cosine of the normal angle as the matching score; pose estimation is performed on the best match to obtain the pose of the article to be grabbed in the depth picture; and the accurate coordinates and corresponding pose of the article in the world coordinate system are obtained through the coordinate transformation matrix.
CN202010783580.1A 2020-08-06 2020-08-06 Household service robot article grabbing method based on visual feedforward and visual feedback Active CN112372641B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010783580.1A CN112372641B (en) 2020-08-06 2020-08-06 Household service robot article grabbing method based on visual feedforward and visual feedback

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010783580.1A CN112372641B (en) 2020-08-06 2020-08-06 Household service robot article grabbing method based on visual feedforward and visual feedback

Publications (2)

Publication Number Publication Date
CN112372641A true CN112372641A (en) 2021-02-19
CN112372641B CN112372641B (en) 2023-06-02

Family

ID=74586010

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010783580.1A Active CN112372641B (en) 2020-08-06 2020-08-06 Household service robot article grabbing method based on visual feedforward and visual feedback

Country Status (1)

Country Link
CN (1) CN112372641B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113084808A (en) * 2021-04-02 2021-07-09 上海智能制造功能平台有限公司 Monocular vision-based 2D plane grabbing method for mobile mechanical arm
CN113370223B (en) * 2021-04-19 2022-09-02 中国人民解放军火箭军工程大学 Following type explosive-handling robot device and control method
CN115319739A (en) * 2022-08-02 2022-11-11 中国科学院沈阳自动化研究所 Workpiece grabbing method based on visual mechanical arm

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160151915A1 (en) * 2014-11-28 2016-06-02 Fanuc Corporation Cooperation system having machine tool and robot
CN108399639A (en) * 2018-02-12 2018-08-14 杭州蓝芯科技有限公司 Fast automatic crawl based on deep learning and arrangement method
CN109483554A (en) * 2019-01-22 2019-03-19 清华大学 Robotic Dynamic grasping means and system based on global and local vision semanteme
CN110211180A (en) * 2019-05-16 2019-09-06 西安理工大学 A kind of autonomous grasping means of mechanical arm based on deep learning
CN110744544A (en) * 2019-10-31 2020-02-04 昆山市工研院智能制造技术有限公司 Service robot vision grabbing method and service robot
US20200094406A1 (en) * 2017-05-31 2020-03-26 Preferred Networks, Inc. Learning device, learning method, learning model, detection device and grasping system
CN111055281A (en) * 2019-12-19 2020-04-24 杭州电子科技大学 ROS-based autonomous mobile grabbing system and method

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160151915A1 (en) * 2014-11-28 2016-06-02 Fanuc Corporation Cooperation system having machine tool and robot
US20200094406A1 (en) * 2017-05-31 2020-03-26 Preferred Networks, Inc. Learning device, learning method, learning model, detection device and grasping system
CN108399639A (en) * 2018-02-12 2018-08-14 杭州蓝芯科技有限公司 Fast automatic crawl based on deep learning and arrangement method
CN109483554A (en) * 2019-01-22 2019-03-19 清华大学 Robotic Dynamic grasping means and system based on global and local vision semanteme
CN110211180A (en) * 2019-05-16 2019-09-06 西安理工大学 A kind of autonomous grasping means of mechanical arm based on deep learning
CN110744544A (en) * 2019-10-31 2020-02-04 昆山市工研院智能制造技术有限公司 Service robot vision grabbing method and service robot
CN111055281A (en) * 2019-12-19 2020-04-24 杭州电子科技大学 ROS-based autonomous mobile grabbing system and method

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113084808A (en) * 2021-04-02 2021-07-09 上海智能制造功能平台有限公司 Monocular vision-based 2D plane grabbing method for mobile mechanical arm
CN113084808B (en) * 2021-04-02 2023-09-22 上海智能制造功能平台有限公司 Monocular vision-based 2D plane grabbing method for mobile mechanical arm
CN113370223B (en) * 2021-04-19 2022-09-02 中国人民解放军火箭军工程大学 Following type explosive-handling robot device and control method
CN115319739A (en) * 2022-08-02 2022-11-11 中国科学院沈阳自动化研究所 Workpiece grabbing method based on visual mechanical arm

Also Published As

Publication number Publication date
CN112372641B (en) 2023-06-02

Similar Documents

Publication Title
CN107160364B (en) Industrial robot teaching system and method based on machine vision
CN111046948B (en) Point cloud simulation and deep learning workpiece pose identification and robot feeding method
CN111347411B (en) Two-arm cooperative robot three-dimensional visual recognition grabbing method based on deep learning
Schröder et al. Real-time hand tracking using synergistic inverse kinematics
CN111958604A (en) Efficient special-shaped brush monocular vision teaching grabbing method based on CAD model
CN112509063A (en) Mechanical arm grabbing system and method based on edge feature matching
CN110909644A (en) Method and system for adjusting grabbing posture of mechanical arm end effector based on reinforcement learning
CN107662195A (en) A kind of mechanical hand principal and subordinate isomery remote operating control system and control method with telepresenc
CN110605711B (en) Method, device and system for controlling cooperative robot to grab object
CN112372641A (en) Family service robot figure article grabbing method based on visual feedforward and visual feedback
CN114571153A (en) Weld joint identification and robot weld joint tracking method based on 3D point cloud
CN109146939A (en) A kind of generation method and system of workpiece grabbing template
CN113715016A (en) Robot grabbing method, system and device based on 3D vision and medium
CN111251292A (en) Workpiece assembling method and device based on visual positioning and storage medium
CN115213896A (en) Object grabbing method, system and equipment based on mechanical arm and storage medium
CN109087343A (en) A kind of generation method and system of workpiece grabbing template
Shin et al. Integration of deep learning-based object recognition and robot manipulator for grasping objects
CN114851201A (en) Mechanical arm six-degree-of-freedom vision closed-loop grabbing method based on TSDF three-dimensional reconstruction
CN115464657A (en) Hand-eye calibration method of rotary scanning device driven by motor
CN112109072A (en) Method for measuring and grabbing accurate 6D pose of large sparse feature tray
CN113894774A (en) Robot grabbing control method and device, storage medium and robot
Kragic et al. Model based techniques for robotic servoing and grasping
Kita et al. A method for handling a specific part of clothing by dual arms
Li et al. Learning complex assembly skills from kinect based human robot interaction
CN115861780B (en) Robot arm detection grabbing method based on YOLO-GGCNN

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant