CN111645080A - Intelligent service robot hand-eye cooperation system and operation method

Intelligent service robot hand-eye cooperation system and operation method

Info

Publication number
CN111645080A
CN111645080A (application CN202010379426.8A)
Authority
CN
China
Prior art keywords
grabbed
grabbing
robot
arm
humanoid
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010379426.8A
Other languages
Chinese (zh)
Inventor
韦文
覃立万
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN202010379426.8A priority Critical patent/CN111645080A/en
Publication of CN111645080A publication Critical patent/CN111645080A/en
Pending legal-status Critical Current


Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00Manipulators not otherwise provided for
    • B25J11/008Manipulators for service tasks
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02Sensing devices
    • B25J19/021Optical sensing devices
    • B25J19/023Optical sensing devices including video camera means
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1602Programme controls characterised by the control system, structure, architecture
    • B25J9/1607Calculation of inertia, jacobian matrixes and inverses
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)

Abstract

The invention belongs to the field of mobile robots and artificial intelligence, and particularly relates to an intelligent service robot hand-eye cooperation system and an operation method. The system comprises a sensor module, an algorithm module and a grabbing module. The sensor module mainly comprises a depth camera and is used for acquiring RGB data and three-dimensional point cloud data of the object to be grabbed. The algorithm module processes the RGB data and the three-dimensional point cloud data to obtain the shape, pose and category of the object to be grabbed, and provides the grabbing module with data such as the distance, direction, force and angle required for grabbing. The grabbing module comprises a humanoid palm and a humanoid arm and completes the grabbing action. Through the cooperation of the depth camera, the algorithm modules and the grabbing module, the intelligent service robot hand-eye cooperation system and operation method provided by the invention can accurately grab and identify objects of various shapes and poses.

Description

Intelligent service robot hand-eye cooperation system and operation method
Technical Field
The invention belongs to the field of mobile robots and artificial intelligence, and particularly relates to an intelligent service robot hand-eye cooperation system and an operation method.
Background
As robots are used more and more widely across industries, robot technology has developed rapidly. The vision system is an important part of a robot; in grabbing operations its role is to identify and locate the target object and to provide the robot with the object's category, position and posture. The accuracy of the object's position and posture is critical, since it directly affects the success rate of grabbing. At present, many robots still use 2D vision, which only provides the position of an object in a plane, cannot provide depth or posture information, and cannot meet the requirements of grabbing in three-dimensional space. Second, many existing robots grab with suction cups or planar clamps, which lack the flexibility of a humanoid palm: suction cups can handle only a limited variety of objects, are unsuitable for delicate grabbing tasks, and require additional equipment such as a vacuum pump. Third, many robots do not consider the three-dimensional position and posture of the object when grabbing, do not judge whether the object can be grabbed at all, and do not recognize the grabbed object in real time. Such a robot does not know what the object is and cannot provide more advanced services.
Disclosure of Invention
The invention aims to design an intelligent service robot hand-eye cooperation system and an operation method. The system uses a depth camera to obtain depth point cloud data of the object to be grabbed and, through algorithmic processing, obtains accurate position and posture information of the object. At the same time, the object is recognized in real time by a deep learning algorithm, and different grabbing modes are intelligently adopted according to its category. After grabbing is completed, higher-level functions such as classification can be provided based on the recognition result.
In order to achieve the purpose, the invention is realized by the following technical scheme:
An intelligent service robot hand-eye cooperation system comprises a sensor module, an algorithm module and a grabbing module. The sensor module mainly comprises a depth camera and is used for acquiring RGB data and three-dimensional point cloud data of the object to be grabbed. The algorithm module processes the RGB data and the three-dimensional point cloud data to obtain the shape, pose and category of the object to be grabbed, and provides the grabbing module with data such as the distance, direction, force and angle required for grabbing. The grabbing module comprises a humanoid palm and a humanoid arm and is used for completing the grabbing action.
In the intelligent service robot hand-eye cooperation system, the RGB data are analyzed by a deep learning algorithm and used to identify the object category in real time; the point cloud data are analyzed by a filtering algorithm to obtain accurate pose information and size information of the object, which are further used to judge whether the object can be grabbed.
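As a concrete illustration of the point-cloud branch described above, the sketch below removes statistical outliers from a segmented object cloud and estimates its pose and size with a principal-component fit. It is only an assumption of how such a filtering step might look: the function name, the numpy-based implementation and the outlier threshold are illustrative and not taken from the patent.

```python
import numpy as np

def filter_and_estimate_pose(points, k=1.5):
    """Filter an object point cloud and estimate its centroid, principal axes
    (orientation) and extents (size).  `points` is an (N, 3) array of 3-D
    points already segmented as belonging to the object to be grabbed."""
    # Simple statistical filter: drop points whose distance to the centroid is
    # far above the mean (stands in for the unspecified "filtering algorithm").
    centroid = points.mean(axis=0)
    dists = np.linalg.norm(points - centroid, axis=1)
    cleaned = points[dists < dists.mean() + k * dists.std()]

    # Principal-component analysis gives the object's principal axes (pose).
    centroid = cleaned.mean(axis=0)
    centered = cleaned - centroid
    _, eigvecs = np.linalg.eigh(centered.T @ centered / len(centered))
    axes = eigvecs[:, ::-1]                # columns reordered, largest axis first

    # Extents along each principal axis approximate the object's outer size.
    proj = centered @ axes
    extents = proj.max(axis=0) - proj.min(axis=0)
    return centroid, axes, extents
```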
In the intelligent service robot hand-eye cooperation system, different types of objects to be grabbed are handled with different grabbing modes so as to obtain a better grabbing effect; the orientation of the humanoid palm at the end of the robot arm is adjusted using the pose information of the object, so that the object can be grabbed whether it is tilted or upright.
In the intelligent service robot hand-eye cooperation system, the three-dimensional point cloud data are further used to adjust the relative position of the robot and the object to be grabbed: when the object is outside the motion space of the robot arm, the robot is driven so that the object falls within the arm's range of motion.
In the intelligent service robot hand-eye cooperation system, the depth camera is installed centrally below the robot's neck; from this position the camera can see both the left and the right side of the robot, providing a reference for deciding whether to grab with the left hand or the right hand.
In the intelligent service robot hand-eye cooperation system, through structural calculation, the image coordinates of the depth camera and the robot coordinates can be converted into each other, eliminating the series of steps otherwise required to calibrate between the robot arm coordinate system and the depth camera coordinate system.
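Because the camera is rigidly mounted on the robot structure, the camera-to-robot transform can be written down from the mechanical drawings rather than calibrated. A minimal sketch of that idea follows; the 4x4 transform values are placeholders standing in for the real structural dimensions, which the patent does not give.

```python
import numpy as np

# Homogeneous transform from the depth-camera frame to the robot base frame,
# derived from the structural design of the neck mount.  The numbers below are
# placeholders, not the patent's actual dimensions.
T_BASE_CAM = np.array([
    [1.0, 0.0,  0.0, 0.00],
    [0.0, 0.0, -1.0, 0.05],
    [0.0, 1.0,  0.0, 1.20],   # camera assumed ~1.2 m above the base
    [0.0, 0.0,  0.0, 1.00],
])

def camera_to_base(p_cam):
    """Convert a 3-D point in the depth-camera frame into the robot base frame
    using the fixed, structure-derived transform (no calibration step)."""
    p = np.append(np.asarray(p_cam, dtype=float), 1.0)   # homogeneous coordinates
    return (T_BASE_CAM @ p)[:3]
```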
In the intelligent service robot hand-eye cooperation system, the algorithm module uses a forward and inverse kinematics algorithm to model the humanoid arm of the robot, and the joint angles of the humanoid arm are obtained from the Cartesian coordinates of the object to be grabbed at the arm's end.
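A hedged sketch of that kinematics step is given below: it models a 6-degree-of-freedom chain with Denavit-Hartenberg parameters and solves the inverse problem numerically with damped least squares. The link parameters are invented for illustration, only the palm position is solved for, and the patent itself does not specify the solution method.

```python
import numpy as np

# Illustrative DH parameters (a, alpha, d) for a 6-DOF humanoid arm; these are
# placeholders, not the patent's actual link dimensions.
DH = [(0.0, np.pi/2, 0.10), (0.30, 0.0, 0.0), (0.05, np.pi/2, 0.0),
      (0.0, -np.pi/2, 0.25), (0.0, np.pi/2, 0.0), (0.0, 0.0, 0.08)]

def forward(q):
    """Forward kinematics: joint angles -> Cartesian position of the palm."""
    T = np.eye(4)
    for (a, alpha, d), theta in zip(DH, q):
        ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
        T = T @ np.array([[ct, -st * ca,  st * sa, a * ct],
                          [st,  ct * ca, -ct * sa, a * st],
                          [0.0,      sa,       ca,      d],
                          [0.0,     0.0,      0.0,    1.0]])
    return T[:3, 3]

def inverse(target, q0=None, iters=200, damping=0.05):
    """Damped-least-squares inverse kinematics for the palm position.
    Returns joint angles reaching `target`, or None if it stays unreachable."""
    q = np.zeros(6) if q0 is None else np.array(q0, dtype=float)
    for _ in range(iters):
        err = np.asarray(target, dtype=float) - forward(q)
        if np.linalg.norm(err) < 1e-4:
            return q
        # Numerical Jacobian of the palm position w.r.t. the joint angles.
        J = np.zeros((3, 6))
        for i in range(6):
            dq = np.zeros(6)
            dq[i] = 1e-5
            J[:, i] = (forward(q + dq) - forward(q)) / 1e-5
        q += J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(3), err)
    return None   # target appears to lie outside the arm's motion space
```

The same routine doubles as the reachability test used later in the operation method: a return value of None means the Cartesian target lies outside the arm's motion space.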
In the intelligent service robot hand-eye cooperation system, the humanoid arm has 6 degrees of freedom, each with a range of motion similar to that of a human arm; the humanoid palm also has 6 degrees of freedom, each with a range of motion of up to 180 degrees.
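One possible way to encode those ranges is a simple joint-limit table, sketched below. Only the 180-degree palm range comes from the text; the arm limits are illustrative guesses meant to mimic a human arm.

```python
import numpy as np

# Illustrative joint-limit table (radians).  The arm values are assumptions;
# only the 180-degree palm range is stated in the description.
ARM_LIMITS = [(-np.pi, np.pi / 3), (-np.pi / 2, np.pi / 2), (-np.pi / 2, np.pi / 2),
              (0.0, 2.4), (-np.pi / 2, np.pi / 2), (-np.pi / 4, np.pi / 4)]
PALM_LIMITS = [(0.0, np.pi)] * 6     # each palm degree of freedom spans 180 degrees

def within_limits(q, limits):
    """True if every joint angle lies inside its allowed range."""
    return all(lo <= qi <= hi for qi, (lo, hi) in zip(q, limits))
```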
In the intelligent service robot hand-eye cooperation system, the humanoid arm can bear an object up to a set mass, and both the humanoid arm and the humanoid palm are under protection control, so that they are protected when the mass of the object to be grabbed is too large.
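That protection control can be pictured as a pre-grasp check on the estimated mass and size of the object. The 2 kg threshold appears later in the description; the width limit below is an assumed figure used only for illustration.

```python
MAX_PAYLOAD_KG = 2.0        # protection threshold quoted later in the description
MAX_GRASP_WIDTH_M = 0.12    # assumed palm opening; not specified in the patent

def grasp_allowed(estimated_mass_kg, estimated_width_m):
    """Protection control: refuse the grasp when the object would overload the
    humanoid arm or exceed what the humanoid palm can close around."""
    if estimated_mass_kg > MAX_PAYLOAD_KG:
        return False          # too heavy: protect arm and palm, give up grabbing
    if estimated_width_m > MAX_GRASP_WIDTH_M:
        return False          # too large for the palm
    return True
```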
An operation method of an intelligent service robot hand-eye cooperation system comprises the following specific steps:
firstly, turning on the depth camera and acquiring images of the object to be grabbed and its surroundings;
secondly, identifying the object to be grabbed in the camera image by using a deep learning algorithm;
thirdly, accurately calculating the grabbing position and grabbing attitude of the object to be grabbed by using an image processing algorithm;
fourthly, converting the image coordinate system of the object to be grabbed into the robot coordinate system by using the installation position information of the camera;
and fifthly, modeling the 6-degree-of-freedom mechanical arm and judging, by an inverse kinematics solution, whether the object to be grabbed is within the arm's motion space: 1) if not, usually because the robot is too far from the object, the robot is driven towards the object, and steps two to five are repeated during the approach until the object lies within the arm's motion space, at which point the robot stops moving; 2) if so, the inverse kinematics solution converts the Cartesian coordinates of the object into the joint angles of the arm, the arm is then driven to the grabbing position, and each finger is moved through different angles according to the type of object to be grabbed so as to execute the grabbing task.
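Taken together, the five steps above can be sketched as the control loop below. The `robot`, `camera` and `detector` objects and their methods are assumed interfaces used only to tie the earlier sketches (`filter_and_estimate_pose`, `camera_to_base`, `inverse`) together; they are not defined in the patent.

```python
def grasp_object(robot, camera, detector):
    """Sketch of the five-step operation method; all interfaces are assumed."""
    while True:
        rgb, cloud = camera.capture()                    # step 1: depth camera images
        obj = detector.recognize(rgb)                    # step 2: deep learning category
        if obj is None:
            continue                                     # nothing graspable in view yet
        centroid, axes, extents = filter_and_estimate_pose(cloud[obj.mask])  # step 3
        target = camera_to_base(centroid)                # step 4: camera -> robot frame
        q = inverse(target)                              # step 5: inverse kinematics
        if q is None:                                    # outside the arm's motion space
            robot.move_towards(target)                   # drive closer and re-observe
            continue
        robot.arm.move_to(q)                             # drive the arm to the grasp pose
        robot.palm.close(style=obj.category)             # finger angles depend on category
        return obj
```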
In summary, the intelligent service robot hand-eye cooperation system and operation method provided by the invention can, through the cooperation of the depth camera, the algorithm modules and the grabbing module, accurately grab and identify objects of various shapes and poses, and have the following innovative points:
(1) The data of the object to be grabbed acquired by the depth camera comprise two types, RGB data and three-dimensional point cloud data; the shape and pose information of the object is obtained through the algorithms, and the category of the object is obtained through a deep learning algorithm, so that the object is identified.
(2) The various data of the object obtained through the algorithms are analyzed to determine whether the object can be grabbed. When its size is too large, grabbing is abandoned; when its mass is too large (more than 2 kg), the humanoid arm and the humanoid palm are placed under protection control and grabbing is likewise abandoned.
(3) According to the recognized type of the object to be grabbed, the grabbing mode is adjusted so as to obtain a better grabbing effect.
(4) The humanoid palm has 6 degrees of freedom, each with a range of motion of up to 180 degrees, so a wider range of objects can be grabbed and more complex, fine grabbing tasks are supported. The joint angles of the humanoid arm are calculated from the data of the object to be grabbed, so that the orientation of the humanoid palm at the end of the arm is adjusted and the object can be grabbed whether it is tilted or upright.
The foregoing is a summary of the present application and thus contains, by necessity, simplifications, generalizations and omissions of detail; those skilled in the art will appreciate that the summary is illustrative of the application and is not intended to be in any way limiting. Other aspects, features and advantages of the devices and/or methods and/or other subject matter described in this specification will become apparent as the description proceeds. The summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Drawings
The above-described and other features of the present application will become more fully apparent from the following description and appended claims, taken in conjunction with the accompanying drawings. It is to be understood that these drawings depict only several embodiments of the present application and are therefore not to be considered limiting of its scope; with reference to the drawings, the application will be described more specifically and in more detail.
Fig. 1 is a drawing illustrating a grasping example of the hand-eye cooperation system of the intelligent service robot of the present invention.
Fig. 2 is a flowchart of an operation method of the intelligent service robot hand-eye cooperation system of the invention.
Description of reference numerals: 1-an object to be grabbed, 2-a humanoid palm, 3-a humanoid arm and 4-a depth camera.
Detailed Description
In the following detailed description, reference is made to the accompanying drawings, which form a part hereof. In the drawings, the same/similar reference numerals generally refer to the same/similar parts unless otherwise specified in the specification. The illustrative embodiments described in the detailed description, drawings, and claims should not be considered limiting of the application. Other embodiments of, and changes to, the present application may be made without departing from the spirit or scope of the subject matter presented in the present application. It should be readily understood that the aspects of the present application, as generally described in the specification and illustrated in the figures herein, could be arranged, substituted, combined, designed in a wide variety of different configurations, and that all such modifications are expressly contemplated and made part of this application.
Referring to fig. 1, the invention provides an intelligent service robot hand-eye cooperation system, which comprises a sensor module, an algorithm module and a grabbing module. The sensor module mainly comprises a depth camera and is used for acquiring RGB data and three-dimensional point cloud data of the object to be grabbed. The algorithm module processes the RGB data and the three-dimensional point cloud data to obtain the shape, pose and category of the object to be grabbed, and provides the grabbing module with data such as the distance, direction, force and angle required for grabbing. In addition, the algorithm module models the humanoid arm of the robot with a forward and inverse kinematics algorithm and obtains the joint angles of the humanoid arm from the Cartesian coordinates of the object to be grabbed at the arm's end. The grabbing module comprises a humanoid palm and a humanoid arm and is used for completing the grabbing action.
The RGB data are analyzed by a deep learning algorithm and used to identify the object category in real time; the point cloud data are analyzed by a filtering algorithm to obtain accurate pose and size information of the object, which is further used to judge whether the object can be grabbed. Different types of objects to be grabbed are handled with different grabbing modes so as to obtain a better grabbing effect; the orientation of the humanoid palm at the end of the arm is adjusted using the pose information of the object, so that the object can be grabbed whether it is tilted or upright. The three-dimensional point cloud data are also used to adjust the relative position of the robot and the object: when the object is outside the motion space of the robot arm, the robot is driven so that the object falls within the arm's range of motion. The depth camera is installed centrally below the robot's neck; from this position it can simultaneously see the left and right sides of the robot, providing a reference for deciding whether to grab with the left hand or the right hand. In addition, through structural calculation, the image coordinates of the depth camera and the robot coordinates can be converted into each other, which eliminates the series of steps otherwise required to calibrate between the robot arm coordinate system and the depth camera coordinate system. In traditional calibration, a number of points must be sampled, the coordinates of each point obtained in both the robot arm coordinate system and the camera coordinate system, and the mapping between the two coordinate systems then computed. The humanoid arm has 6 degrees of freedom, each with a range of motion similar to that of a human arm; the humanoid palm also has 6 degrees of freedom, each with a range of motion of up to 180 degrees. The humanoid arm can bear objects up to a set mass, and both the humanoid arm and the humanoid palm are under protection control, so that they are protected when the mass of the object to be grabbed is too large.
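For contrast, the traditional calibration mentioned above, which the fixed camera mounting makes unnecessary, typically samples corresponding points in the camera frame and the arm frame and fits a rigid transform between them. The least-squares (SVD) sketch below is an assumption of what such a step would involve; it is not part of the patent's procedure.

```python
import numpy as np

def estimate_rigid_transform(p_cam, p_robot):
    """Classical calibration step the fixed mounting avoids: given N corresponding
    points in the camera frame and the robot-arm frame, recover the rotation R and
    translation t mapping camera points to arm points by a least-squares (SVD) fit."""
    p_cam, p_robot = np.asarray(p_cam, dtype=float), np.asarray(p_robot, dtype=float)
    c_cam, c_rob = p_cam.mean(axis=0), p_robot.mean(axis=0)
    H = (p_cam - c_cam).T @ (p_robot - c_rob)       # cross-covariance of the point sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                        # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_rob - R @ c_cam
    return R, t
```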
Referring to fig. 2, an operation method of the intelligent service robot hand-eye cooperation system includes the following specific steps:
firstly, turning on the depth camera and acquiring images of the object to be grabbed and its surroundings;
secondly, identifying the object to be grabbed in the camera image by using a deep learning algorithm;
thirdly, accurately calculating the grabbing position and grabbing attitude of the object to be grabbed by using an image processing algorithm;
fourthly, converting the image coordinate system of the object to be grabbed into the robot coordinate system by using the installation position information of the camera;
and fifthly, modeling the 6-degree-of-freedom mechanical arm and judging, by an inverse kinematics solution, whether the object to be grabbed is within the arm's motion space: 1) if not, usually because the robot is too far from the object, the robot is driven towards the object, and steps two to five are repeated during the approach until the object lies within the arm's motion space, at which point the robot stops moving; 2) if so, the inverse kinematics solution converts the Cartesian coordinates of the object into the joint angles of the arm, the arm is then driven to the grabbing position, and each finger is moved through different angles according to the type of object to be grabbed so as to execute the grabbing task.
Implementation example: grabbing a bottle of mineral water.
1) The main control system assigns a grabbing task to the robot: grab a bottle of mineral water on a table beside the robot;
2) the robot automatically moves to within a certain range in front of the mineral water bottle by means of its navigation and positioning system;
3) the robot acquires RGB data and three-dimensional point cloud data of the mineral water through the depth camera at its neck;
4) the RGB data are processed by the deep learning algorithm, which identifies the current object as a bottle of ordinary mineral water; the three-dimensional point cloud data are processed by the image processing (filtering) algorithm to obtain the outer dimensions and pose data of the bottle, locating the pose of the object to be grabbed;
5) the robot adjusts its position relative to the mineral water so that the bottle lies within the grabbing space of the robot arm;
6) according to the category of the object (mineral water), a suitable grabbing mode is selected; the 6-degree-of-freedom mechanical arm is modeled, the Cartesian coordinates of the bottle are converted into the joint angles of the arm by the inverse kinematics solution, the arm is driven to the grabbing position, and the fingers are moved through angles suited to this type of object so as to execute the grabbing task. During grabbing, the mass of the mineral water is measured as 0.4 kg, well within the robot's grabbing mass range, so the task is not interrupted and the grab is completed.
The experiment shows that the robot successfully grabs the bottle of mineral water and identifies it.
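Mapped onto the earlier sketches, the mineral-water example would look roughly like the hypothetical snippet below; only the 0.4 kg mass and the 2 kg limit come from the text, and everything else reuses the assumed interfaces and helper functions introduced above.

```python
# Hypothetical walk-through of the mineral-water example, continuing the assumed
# `camera`, `detector` and `robot` interfaces from the pipeline sketch above.
rgb, cloud = camera.capture()                         # neck-mounted depth camera
bottle = detector.recognize(rgb)                      # deep learning: "mineral water"
centroid, axes, extents = filter_and_estimate_pose(cloud[bottle.mask])
if grasp_allowed(0.4, extents.max()):                 # 0.4 kg is well under the 2 kg limit
    q = inverse(camera_to_base(centroid))             # Cartesian target -> joint angles
    robot.arm.move_to(q)
    robot.palm.close(style=bottle.category)           # finger angles suited to a bottle
```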
In summary, the intelligent service robot hand-eye cooperation system and operation method provided by the invention can, through the cooperation of the depth camera, the algorithm modules and the grabbing module, accurately grab and identify objects of various shapes and poses, and have the following innovative points: 1) the data of the object to be grabbed acquired by the depth camera comprise two types, RGB data and three-dimensional point cloud data; the shape and pose information of the object is obtained through the algorithms, and the category of the object is obtained through a deep learning algorithm, so that the object is identified; 2) the various data of the object obtained through the algorithms are analyzed to determine whether the object can be grabbed: when its size is too large, grabbing is abandoned; when its mass is too large (more than 2 kg), the humanoid arm and the humanoid palm are placed under protection control and grabbing is likewise abandoned; 3) according to the recognized type of the object to be grabbed, the grabbing mode is adjusted so as to obtain a better grabbing effect; 4) the humanoid palm has 6 degrees of freedom, each with a range of motion of up to 180 degrees, so a wider range of objects can be grabbed and more complex, fine grabbing tasks are supported; the joint angles of the humanoid arm are calculated from the data of the object to be grabbed, so that the orientation of the humanoid palm at the end of the arm is adjusted and the object can be grabbed whether it is tilted or upright.
The foregoing detailed description has set forth various embodiments of the devices and/or methods of the present application via block diagrams, flowcharts, and/or examples. Insofar as such block diagrams, flowcharts, and/or examples contain one or more functions and/or operations, it will be understood by those within the art that each function and/or operation can be implemented, individually and/or collectively, by a wide range of hardware, software, firmware, or virtually any combination thereof. However, those skilled in the art will recognize that some aspects of the embodiments described in this specification can be equivalently implemented, in whole or in part, in integrated circuits, as one or more computer programs running on one or more computers, as one or more programs running on one or more processors (e.g., microprocessors), as firmware, or as virtually any combination thereof, and that designing the circuitry and/or writing the code for the software and/or firmware in accordance with the teachings disclosed herein is well within the ability of those skilled in the art. In addition, those skilled in the art will appreciate that the mechanisms of the subject matter described in this specification are capable of being distributed as a program product in a variety of forms, and that an illustrative embodiment of the subject matter described in this specification applies regardless of the particular type of signal bearing medium used to actually carry out the distribution. Examples of signal bearing media include, but are not limited to: recordable-type media such as floppy disks, hard disks, compact disks (CDs), digital video disks (DVDs), digital tape and computer memory; and transmission-type media such as digital and/or analog communication media (e.g., fiber optic cables, waveguides, wired communication links, wireless communication links, etc.).
Those skilled in the art will recognize that it is common within the art to describe devices and/or methods in the manner described in this specification and then to perform engineering practices to integrate the described devices and/or methods into a data processing system. That is, at least a portion of the devices and/or methods described herein may be integrated into a data processing system through a reasonable amount of experimentation. Those skilled in the art will recognize that a typical data processing system will typically include one or more of the following: a system unit housing, a video display device, memory such as volatile and non-volatile memory, processors such as microprocessors and digital signal processors, computing entities such as operating systems, drivers, graphical user interfaces, and applications, one or more interaction devices such as a touch pad or screen, and/or a control system including feedback loops and control motors (e.g., feedback to detect position and/or velocity; control motors to move and/or adjust components and/or size). A typical data processing system may be implemented using any suitable commercially available components such as those typically found in data computing/communication and/or network computing/communication systems.
With respect to substantially any plural and/or singular terms used in this specification, those skilled in the art may interpret the plural as singular and/or the singular as plural as appropriate from a context and/or application. Various singular/plural combinations may be explicitly stated in this specification for the sake of clarity.
Various aspects and embodiments of the present application are disclosed herein, and other aspects and embodiments of the present application will be apparent to those skilled in the art. The various aspects and embodiments disclosed in this application are presented by way of example only, and not by way of limitation, and the true scope and spirit of the application is to be determined by the following claims.

Claims (10)

1. An intelligent service robot hand-eye cooperation system, characterized by comprising a sensor module, an algorithm module and a grabbing module; the sensor module mainly comprises a depth camera and is used for acquiring RGB data and three-dimensional point cloud data of an object to be grabbed; the algorithm module processes the RGB data and the three-dimensional point cloud data to obtain the shape, pose and category of the object to be grabbed and provides the grabbing module with data such as the distance, direction, force and angle required for grabbing; the grabbing module comprises a humanoid palm and a humanoid arm and is used for completing the grabbing action.
2. The intelligent service robot hand-eye cooperation system according to claim 1, wherein the RGB data are analyzed by a deep learning algorithm and used to identify the object category in real time; the point cloud data are analyzed by a filtering algorithm to obtain accurate pose information and size information of the object, which are further used to judge whether the object can be grabbed.
3. The intelligent service robot hand-eye cooperation system according to claim 2, wherein different types of objects to be grabbed are handled with different grabbing modes so as to obtain a better grabbing effect; the orientation of the humanoid palm at the end of the robot arm is adjusted using the pose information of the object to be grabbed, so that the object can be grabbed whether it is tilted or upright.
4. The intelligent service robot hand-eye cooperation system according to claim 2, wherein the three-dimensional point cloud data are further used to adjust the relative position of the robot and the object to be grabbed, and when the object is outside the motion space of the robot arm, the robot is driven so that the object falls within the arm's range of motion.
5. The intelligent service robot hand-eye cooperation system according to claim 1, wherein the depth camera is installed centrally below the neck of the robot, and from this position the depth camera can simultaneously see the left and right sides of the robot, providing a reference for deciding whether to grab with the left hand or the right hand.
6. The intelligent service robot hand-eye cooperation system according to claim 5, wherein, through structural calculation, the image coordinates of the depth camera and the robot coordinates can be converted into each other, eliminating the series of steps otherwise required to calibrate between the robot arm coordinate system and the depth camera coordinate system.
7. The intelligent service robot hand-eye cooperation system according to claim 1, wherein the algorithm module uses a forward and inverse kinematics algorithm to model the humanoid arm of the robot, and the joint angles of the humanoid arm are obtained from the Cartesian coordinates of the object to be grabbed at the arm's end.
8. The intelligent service robot hand-eye cooperation system according to claim 1, wherein the humanoid arm has 6 degrees of freedom, each with a range of motion similar to that of a human arm; the humanoid palm also has 6 degrees of freedom, each with a range of motion of up to 180 degrees.
9. The intelligent service robot hand-eye cooperation system according to claim 1, wherein the humanoid arm can bear an object up to a set mass, and both the humanoid arm and the humanoid palm are under protection control, so that they are protected when the mass of the object to be grabbed is too large.
10. An operation method of an intelligent service robot hand-eye cooperation system comprises the following specific steps:
firstly, turning on the depth camera and acquiring images of the object to be grabbed and its surroundings;
secondly, identifying the object to be grabbed in the camera image by using a deep learning algorithm;
thirdly, accurately calculating the grabbing position and grabbing attitude of the object to be grabbed by using an image processing algorithm;
fourthly, converting the image coordinate system of the object to be grabbed into the robot coordinate system by using the installation position information of the camera;
and fifthly, modeling the 6-degree-of-freedom mechanical arm and judging, by an inverse kinematics solution, whether the object to be grabbed is within the arm's motion space: 1) if not, usually because the robot is too far from the object, the robot is driven towards the object, and steps two to five are repeated during the approach until the object lies within the arm's motion space, at which point the robot stops moving; 2) if so, the inverse kinematics solution converts the Cartesian coordinates of the object into the joint angles of the arm, the arm is then driven to the grabbing position, and each finger is moved through different angles according to the type of object to be grabbed so as to execute the grabbing task.
CN202010379426.8A 2020-05-08 2020-05-08 Intelligent service robot hand-eye cooperation system and operation method Pending CN111645080A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010379426.8A CN111645080A (en) 2020-05-08 2020-05-08 Intelligent service robot hand-eye cooperation system and operation method


Publications (1)

Publication Number Publication Date
CN111645080A true CN111645080A (en) 2020-09-11

Family

ID=72352268

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010379426.8A Pending CN111645080A (en) 2020-05-08 2020-05-08 Intelligent service robot hand-eye cooperation system and operation method

Country Status (1)

Country Link
CN (1) CN111645080A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105598965A (en) * 2015-11-26 2016-05-25 哈尔滨工业大学 Robot under-actuated hand autonomous grasping method based on stereoscopic vision
CN106826838A (en) * 2017-04-01 2017-06-13 西安交通大学 A kind of interactive biomimetic manipulator control method based on Kinect space or depth perception sensors
CN108638054A (en) * 2018-04-08 2018-10-12 河南科技学院 A kind of intelligence explosive-removal robot five-needle pines blister rust control method
CN109129419A (en) * 2018-08-29 2019-01-04 王令剑 A kind of anti-overload industrial machine human arm
CN109732628A (en) * 2018-12-26 2019-05-10 南京熊猫电子股份有限公司 Robot end's intelligent grabbing control device and its intelligent grabbing method
CN110202583A (en) * 2019-07-09 2019-09-06 华南理工大学 A kind of Apery manipulator control system and its control method based on deep learning
CN110340893A (en) * 2019-07-12 2019-10-18 哈尔滨工业大学(威海) Mechanical arm grasping means based on the interaction of semantic laser
CN111080693A (en) * 2019-11-22 2020-04-28 天津大学 Robot autonomous classification grabbing method based on YOLOv3


Similar Documents

Publication Publication Date Title
US20210205986A1 (en) Teleoperating Of Robots With Tasks By Mapping To Human Operator Pose
CN107160364B (en) Industrial robot teaching system and method based on machine vision
CN108453742B (en) Kinect-based robot man-machine interaction system and method
CN114080583B (en) Visual teaching and repetitive movement manipulation system
Rogalla et al. Using gesture and speech control for commanding a robot assistant
Wang et al. Robot manipulator self-identification for surrounding obstacle detection
CN111515945A (en) Control method, system and device for mechanical arm visual positioning sorting and grabbing
JP2012218119A (en) Information processing apparatus, method for controlling the same, and program
CN111462154A (en) Target positioning method and device based on depth vision sensor and automatic grabbing robot
US11254003B1 (en) Enhanced robot path planning
CN112454333B (en) Robot teaching system and method based on image segmentation and surface electromyogram signals
CN106020494B (en) Three-dimensional gesture recognition method based on mobile tracking
CN114131616B (en) Three-dimensional virtual force field visual enhancement method applied to mechanical arm control
Vinayavekhin et al. Towards an automatic robot regrasping movement based on human demonstration using tangle topology
CN114770461A (en) Monocular vision-based mobile robot and automatic grabbing method thereof
Ogawara et al. Acquiring hand-action models in task and behavior levels by a learning robot through observing human demonstrations
CN111645080A (en) Intelligent service robot hand-eye cooperation system and operation method
CN113681560B (en) Method for operating articulated object by mechanical arm based on vision fusion
CN211890823U (en) Four-degree-of-freedom mechanical arm vision servo control system based on RealSense camera
Infantino et al. Visual control of a robotic hand
Zhao et al. Intuitive robot teaching by hand guided demonstration
Hu et al. Manipulator arm interactive control in unknown underwater environment
Amat et al. Virtual exoskeleton for telemanipulation
Jabalameli et al. Near Real-Time Robotic Grasping of Novel Objects in Cluttered Scenes
Verma et al. Application of markerless image-based arm tracking to robot-manipulator teleoperation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200911