CN113829357A - Teleoperation method, device, system and medium for robot arm


Info

Publication number
CN113829357A
Authority
CN
China
Prior art keywords
hand
operator
robot
gesture
image
Prior art date
Legal status
Granted
Application number
CN202111240203.4A
Other languages
Chinese (zh)
Other versions
CN113829357B (en)
Inventor
高庆 (Gao Qing)
陈勇全 (Chen Yongquan)
池楚亮 (Chi Chuliang)
王启文 (Wang Qiwen)
沈文心 (Shen Wenxin)
Current Assignee
Chinese University of Hong Kong Shenzhen
Original Assignee
Chinese University of Hong Kong Shenzhen
Priority date
Filing date
Publication date
Application filed by Chinese University of Hong Kong Shenzhen
Priority to CN202111240203.4A
Publication of CN113829357A
Application granted
Publication of CN113829357B
Legal status: Active

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J9/161 Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems
    • B25J13/00 Controls for manipulators
    • B25J13/08 Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • B25J13/085 Force or torque sensors

Abstract

The application discloses a teleoperation method, device, system and medium for a robot arm, relating to the field of robot teleoperation. The method comprises the following steps: acquiring a hand image of an operator at the current moment, and performing image recognition on the hand image to obtain the hand position and gesture information of the operator at the current moment; determining an expected position parameter of the robot hand at the next moment based on the operator hand position at the current moment and a preset position mapping relation, and determining an expected gesture of the robot hand at the next moment based on the gesture information of the operator at the current moment and a preset gesture mapping relation; and performing position control on the robot hand using the expected position parameter and gesture control on the robot fingers using the expected gesture. In this scheme the hand image is detected and recognized by vision to determine the position and gesture of the robot hand, thereby achieving the aim of teleoperating the robot arm.

Description

Teleoperation method, device, system and medium for robot arm
Technical Field
The invention relates to the field of teleoperation of robots, in particular to a method, a device, a system and a medium for teleoperation of a robot arm.
Background
Teleoperation of a robot is a technique that organically combines the decision-making capability of a person with the accuracy of a robot. Because the degree of robot intelligence is currently limited, robot teleoperation technology allows an operator to remotely control a robot to complete dangerous and difficult tasks, and it has great application value in special fields such as aerospace, medical treatment and explosion prevention. Traditional robot teleoperation interfaces such as the mouse, keyboard and joystick control only the position and posture of a conventional mechanical arm, so they can realize only simple teleoperation and are difficult to operate; teleoperation methods based on wearable interactive devices such as data gloves and wristbands restrict the natural motion of the operator's hand and can resolve only basic static gestures.
In summary, how to realize teleoperation of a robot's mechanical arm and manipulator that is convenient to set up, natural, flexible and simple to operate is a problem to be solved in the field.
Disclosure of Invention
In view of the above, an object of the present invention is to provide a teleoperation method, device, system and medium for a robot arm, which enable teleoperation of the mechanical arm and manipulator of a robot. The specific scheme is as follows:
in a first aspect, the present application discloses a method for teleoperation of a robot arm, comprising:
acquiring a hand image of an operator at the current time, and performing image recognition on the hand image to obtain the hand position and the gesture information of the operator at the current time;
determining an expected position parameter of the robot hand at the next moment based on the operator hand position at the current moment and a preset position mapping relation, and determining an expected gesture of the robot hand at the next moment based on the gesture information of the operator at the current moment and a preset gesture mapping relation;
and carrying out position control on the robot hand by utilizing the expected position parameters and carrying out gesture control on the robot finger by utilizing the expected gesture.
Optionally, the acquiring a hand image of an operator at the current time, and performing image recognition on the hand image to obtain a hand position and gesture information of the operator at the current time includes:
acquiring images of the hands of the operator at the current moment to obtain a hand color image and a hand depth image of the operator at the current moment;
detecting the two-dimensional hand position of the operator in the hand color image, and determining the three-dimensional hand position of the operator at the current moment based on the two-dimensional hand position and the hand depth image;
and cutting an image area corresponding to the two-dimensional position of the hand in the hand color image to obtain a corresponding cut hand image, and identifying gesture information in the cut hand image to obtain the gesture information of the operator at the current moment.
Optionally, the detecting the two-dimensional hand position of the operator in the human hand color image, and determining the three-dimensional hand position of the operator at the current time based on the two-dimensional hand position and the human hand depth image include:
performing target detection on a hand area in the human hand color image by using a preset target detection network to obtain a two-dimensional hand position of the operator;
and mapping the depth information on the image area corresponding to the two-dimensional hand position in the hand depth image to the two-dimensional hand position based on the corresponding relation of the pixel points between the hand color image and the hand depth image so as to obtain the three-dimensional hand position of the operator at the current moment.
Optionally, the recognizing the gesture information in the cut hand image to obtain the gesture information of the operator at the current time includes:
inputting the cut hand image into a trained gesture recognition model to obtain the gesture information of the operator at the current moment output by the gesture recognition model; the trained gesture recognition model is obtained by utilizing a preset training set to train a model to be trained, which is constructed based on a convolutional neural network algorithm, wherein the preset training set comprises historical hand images and corresponding gesture labels.
Optionally, the determining an expected position parameter of the robot hand at the next moment based on the operator hand position at the current moment and the preset position mapping relation includes:
determining a first position distance between the operator hand position at the current moment and the operator hand position at the previous moment;
and determining a second position distance which the robot hand needs to move at the next moment by using the first position distance and a position distance mapping relation constructed based on a preset distance scaling.
Optionally, the determining an expected position parameter of the robot hand at the next moment based on the operator hand position at the current moment and the preset position mapping relation includes:
according to a position coordinate mapping relation established based on a preset coordinate value scaling ratio, mapping position coordinates of the hand position of the operator in a camera coordinate system at the current moment to a robot coordinate system to obtain expected position coordinates of the robot hand in the robot coordinate system at the next moment; the camera coordinate system is a three-dimensional space coordinate system constructed based on a camera used for collecting the human hand image.
Optionally, in the process of performing position control on the robot hand by using the expected position parameter and performing gesture control on the robot finger by using the expected gesture, the method further includes:
acquiring information of a surrounding environment by using a visual sensor which is arranged on the robot in advance to obtain visual sensing information, and acquiring information by using a force sensor which is arranged on the robot in advance to obtain force sensing information;
and fine-adjusting the pose of the robot in the position control and gesture control processes based on the visual sensing information and the force sensing information.
In a second aspect, the present application discloses a robotic arm teleoperation device, comprising:
the image acquisition module is used for acquiring a hand image of an operator at the current moment;
the image recognition module is used for carrying out image recognition on the hand image so as to obtain the hand position and the gesture information of the operator at the current moment;
the position determining module is used for determining an expected position parameter of the robot hand at the next moment based on the operator hand position at the current moment and a preset position mapping relation;
the gesture determining module is used for determining an expected gesture of the robot hand at the next moment based on the gesture information of the operator at the current moment and a preset gesture mapping relation;
and the control module is used for carrying out position control on the robot hand by utilizing the expected position parameters and carrying out gesture control on the robot finger by utilizing the expected gesture.
In a third aspect, the present application discloses a robotic system comprising a teleoperation device and a robot; the teleoperation device comprises a depth camera, a memory and a processor; wherein:
the depth camera is used for collecting hand images of an operator at the current moment;
the memory is used for storing a computer program;
the processor is configured to execute the computer program to implement the following steps:
carrying out image recognition on the hand image to obtain the hand position and gesture information of the operator at the current moment; determining an expected position parameter of the robot hand at the next moment based on the operator hand position at the current moment and a preset position mapping relation, and determining an expected gesture of the robot hand at the next moment based on the gesture information of the operator at the current moment and the preset gesture mapping relation; and carrying out position control on the robot hand using the expected position parameter and gesture control on the robot fingers using the expected gesture.
Optionally, the manipulator of the robot is a five-finger dexterous hand with 6 degrees of freedom, and the mechanical arm of the robot is a mechanical arm with 6 degrees of freedom.
Optionally, the robot further comprises:
the visual sensor is used for acquiring information of the surrounding environment to obtain visual sensing information;
a force sensor for acquiring force sensing information;
and the processor, when executing the computer program, further implements the steps of:
and fine-adjusting the pose of the robot in the position control and gesture control processes based on the visual sensing information and the force sensing information.
In a fourth aspect, the present application discloses a computer readable storage medium for storing a computer program; wherein the computer program realizes the steps of the robot arm teleoperation method disclosed in the foregoing when being executed by a processor.
Therefore, in the present application a hand image of the operator at the current moment is first collected and subjected to image recognition to obtain the hand position and gesture information of the operator at the current moment; an expected position parameter of the robot hand at the next moment is determined based on the operator hand position at the current moment and a preset position mapping relation, and an expected gesture of the robot hand at the next moment is determined based on the gesture information of the operator at the current moment and a preset gesture mapping relation; position control is then performed on the robot hand using the expected position parameter, and gesture control is performed on the robot fingers using the expected gesture. In this way, vision-based image recognition of the hand image at the current moment determines the hand position and gesture information of the operator, so the recognition steps are simple and the operator's gestures remain flexible; the operator hand position at the current moment is transformed according to the preset position mapping relation to determine the expected position parameter of the robot hand at the next moment, which makes setup simpler and more convenient; and gesture control of the robot fingers with the expected gesture lets the robot hand complete the corresponding gesture, realizing efficient gesture teleoperation.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only embodiments of the present invention, and other drawings can be obtained from them by those of ordinary skill in the art without creative effort.
Fig. 1 is a flowchart of a teleoperation method of a robot arm disclosed in the present application;
FIG. 2 is a diagram illustrating exemplary gesture information disclosed herein;
FIG. 3 is a schematic illustration of a specific robot arm teleoperation disclosed herein;
FIG. 4 is a flow chart of a particular method of teleoperation of a robotic arm as disclosed herein;
FIG. 5 is a flowchart of a specific method for obtaining a three-dimensional position of a human hand disclosed in the present application;
FIG. 6 is a flow chart of a particular method of teleoperation of a robotic arm as disclosed herein;
FIG. 7 is a flow chart of a particular method of teleoperation of a robotic arm as disclosed herein;
fig. 8 is a schematic structural diagram of a teleoperation device for a robot arm disclosed in the present application;
fig. 9 is a schematic structural diagram of a teleoperational device disclosed in the present application;
fig. 10 is a schematic view of a particular robotic hand-arm apparatus disclosed herein.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Traditional robot teleoperation interfaces such as the mouse, keyboard and joystick control only the position and posture of a conventional mechanical arm, so they can realize only simple teleoperation and are difficult to operate; teleoperation methods based on wearable interactive devices such as data gloves and wristbands restrict the natural motion of the operator's hand and can resolve only basic static gestures.
Therefore, the present application correspondingly provides a robot arm teleoperation scheme that achieves teleoperation of the robot's mechanical arm and manipulator which is convenient to set up, natural, flexible and easy to operate.
Referring to fig. 1, an embodiment of the present invention discloses a method for teleoperation of a robot arm, including:
step S11: the method comprises the steps of collecting a hand image of an operator at the current time, and carrying out image recognition on the hand image to obtain the hand position and the gesture information of the operator at the current time.
In this embodiment, a hand image of the operator at the current moment is first collected, and image recognition is performed on the collected hand image to obtain the hand position and gesture information of the operator at the current moment, where the gesture information represents a teleoperation gesture. As shown in fig. 2, for example, an operator gesture picture in which the five fingers are closed and the four finger joints other than the thumb are bent toward the palm corresponds to the gesture information "pinch", and a gesture picture in which the five fingers are relaxed corresponds to the gesture information "relax".
Step S12: determining expected position parameters of the robot hand at the next moment based on the operator hand position at the current moment and a preset position mapping relation, and determining expected gestures of the robot hand at the next moment based on the gesture information of the operator at the current moment and the preset gesture mapping relation.
In this embodiment, the gesture information of the operator at the current moment is mapped to the robot hand, so that the robot hand can subsequently be controlled to move into the corresponding gesture posture. For example, when the gesture information is "pinch", the robot hand gesture recorded for "pinch" in the preset gesture mapping relation is mapped to the robot hand; when the gesture information is "relax", the robot hand gesture recorded for "relax" in the preset gesture mapping relation is mapped to the robot hand. It should be noted that in this embodiment the time difference between the current moment and the next moment is small: the robot performs the action corresponding to the operator within about 100 milliseconds after the operator's hand moves. It should further be noted that the preset position mapping relation may specifically include a position distance mapping relation or a position coordinate mapping relation, and a sketch of the gesture mapping follows.
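As a hedged illustration of the preset gesture mapping relation, the following minimal sketch maps the two example gesture labels from fig. 2 to joint targets for a 6-degree-of-freedom five-finger dexterous hand. The joint-angle values and the function interface are assumptions for illustration, not values given by the patent; a real dexterous hand defines its own command format.

```python
# Minimal sketch of the preset gesture mapping relation from step S12.
# Each gesture label maps to target bend angles (radians) for the 6 degrees
# of freedom of the five-finger dexterous hand described in the embodiments:
# 5 finger-bend joints plus 1 thumb-rotation joint. Values are illustrative.
PRESET_GESTURE_MAP = {
    "pinch": [1.2, 1.2, 1.2, 1.2, 0.9, 0.5],  # four fingers bent toward palm
    "relax": [0.0, 0.0, 0.0, 0.0, 0.0, 0.0],  # five fingers relaxed
}

def desired_gesture(gesture_label: str) -> list[float]:
    """Look up the expected gesture of the robot hand at the next moment."""
    return PRESET_GESTURE_MAP[gesture_label]
```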
Step S13: and carrying out position control on the robot hand by utilizing the expected position parameters and carrying out gesture control on the robot finger by utilizing the expected gesture.
In this embodiment, while performing position control on the robot hand using the expected position parameter and gesture control on the robot fingers using the expected gesture, the method may further include: acquiring information of the surrounding environment using a visual sensor preset on the robot to obtain visual sensing information, and acquiring information using a force sensor preset on the robot to obtain force sensing information; and fine-tuning the pose of the robot during position control and gesture control based on the visual sensing information and the force sensing information. The number and positions of the force sensors and vision sensors can be determined according to the specific requirements of the actual situation. For example, for particularly fine operations, several vision sensors can be arranged on the robot arm, or installed independently outside the robot, and a force sensor can be arranged at the tip of each finger and on the palm of the robot, so that more accurate visual and force sensing information is acquired. As shown in fig. 3, the operator 11 and the robot arm 13 are located on the same side; a camera 12 in front of the operator 11 acquires a hand image of the operator 11 at the current moment, and a grasped object 14 is placed in front of the robot arm 13. When the robot grasps the object, the vision sensor acquires information of the surrounding environment to obtain visual sensing information, and the force sensors installed on the five-finger dexterous hand acquire force sensing information; while moving to the expected position based on the expected position parameter, the mechanical arm and the five-finger dexterous hand use the visual and force sensing information to finely control the pose and grasping force, so that they are fine-tuned to achieve efficient and precise teleoperation.
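A minimal sketch of this sensor-based fine-tuning follows, assuming simple proportional corrections; the sensor interfaces, gains and force threshold are illustrative assumptions rather than parameters specified in the embodiment.

```python
# Hedged sketch of pose fine-tuning: visual sensing information and force
# sensing information are fused to make small corrections while the arm
# tracks the expected position. All constants below are assumed values.
import numpy as np

FORCE_LIMIT = 5.0    # assumed contact-force threshold, newtons
VISION_GAIN = 0.2    # proportional gain on the visually measured offset
FORCE_GAIN = 0.001   # compliance gain, metres per newton

def fine_tune_pose(expected_pos, visual_offset, contact_force):
    """Return a slightly corrected target position for the robot hand.

    expected_pos  -- (3,) position from the teleoperation mapping
    visual_offset -- (3,) object-pose error reported by the vision sensor
    contact_force -- (3,) force vector from the fingertip/palm force sensors
    """
    expected_pos = np.asarray(expected_pos, dtype=float)
    correction = VISION_GAIN * np.asarray(visual_offset, dtype=float)
    force = np.asarray(contact_force, dtype=float)
    # Back off along the contact-force direction when grasping too hard.
    if np.linalg.norm(force) > FORCE_LIMIT:
        correction -= FORCE_GAIN * force
    return expected_pos + correction
```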
Therefore, according to the method, a hand image of the operator at the current moment is first collected and subjected to image recognition to obtain the hand position and gesture information of the operator at the current moment; an expected position parameter of the robot hand at the next moment is determined based on the operator hand position at the current moment and a preset position mapping relation, and an expected gesture of the robot hand at the next moment is determined based on the gesture information of the operator at the current moment and a preset gesture mapping relation; position control is then performed on the robot hand using the expected position parameter, and gesture control is performed on the robot fingers using the expected gesture. The vision-based recognition keeps the operation steps simple and the operator's gestures flexible, the preset position mapping relation makes setup simple and convenient, and the expected gesture drives the robot fingers to complete the corresponding gesture, realizing efficient gesture teleoperation.
Referring to fig. 4, an embodiment of the present invention discloses a specific method for teleoperation of a robot arm, including:
step S21: and acquiring images of the hands of the operator at the current moment to obtain a hand color image and a hand depth image of the operator at the current moment.
In this embodiment, the image of the operator's hand at the current moment is acquired in real time using a depth camera, which may include, but is not limited to, a RealSense D435i depth camera: the color camera module on the depth camera acquires a two-dimensional color image of the operator's hand at the current moment, and the infrared camera module on the depth camera acquires the hand depth information. It is understood that the image information of the human hand may include palm position information and finger pose information.
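As a hedged sketch of this acquisition step, the following uses Intel's pyrealsense2 library to read an aligned color/depth frame pair from a D435i; the stream resolutions and frame rate are common defaults, not values specified in the embodiment.

```python
# Sketch: acquire the aligned human hand color image and depth image from a
# RealSense D435i with pyrealsense2. Depth pixels are aligned to the color
# image so that the later pixel-point correspondence holds directly.
import pyrealsense2 as rs
import numpy as np

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)
align = rs.align(rs.stream.color)  # align depth pixels to the color image

try:
    frames = align.process(pipeline.wait_for_frames())
    color_image = np.asanyarray(frames.get_color_frame().get_data())
    depth_image = np.asanyarray(frames.get_depth_frame().get_data())
finally:
    pipeline.stop()
```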
Step S22: and detecting the two-dimensional hand position of the operator in the hand color image, and determining the three-dimensional hand position of the operator at the current moment based on the two-dimensional hand position and the hand depth image.
In this embodiment, the detecting of the two-dimensional hand position of the operator in the hand color image and the determining of the three-dimensional hand position of the operator at the current moment based on the two-dimensional hand position and the hand depth image include: performing target detection on the hand area in the hand color image using a preset target detection network to obtain the two-dimensional hand position of the operator; and mapping the depth information of the image area corresponding to the two-dimensional hand position in the hand depth image onto the two-dimensional hand position, based on the pixel-point correspondence between the hand color image and the hand depth image, to obtain the three-dimensional hand position of the operator at the current moment. The preset target detection network may be any one of the YOLO (unified real-time object detection), SSD (Single Shot MultiBox Detector), Faster R-CNN and CenterNet networks. As shown in fig. 5, taking the CenterNet network as an example, the acquisition of the two-dimensional and three-dimensional hand positions in this embodiment proceeds as follows: first, the center point of the hand is detected in the hand color image; then the width and height of the hand are predicted by a bounding-box regression algorithm together with the center-point offset, giving the actual extent of the hand, from which the two-dimensional hand position of the operator is obtained. To obtain the three-dimensional hand position of the operator at the current moment, the depth information of the image area corresponding to the two-dimensional hand position in the hand depth image is mapped onto the two-dimensional hand position based on the pixel-point correspondence between the hand depth image and the hand color image.
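A sketch of the 2D-to-3D mapping follows, using the pinhole camera model on the aligned depth image. The intrinsic parameters are placeholders (with pyrealsense2 they can be read from the stream profile, or rs.rs2_deproject_pixel_to_point can be used directly), so this is an assumption-laden illustration, not the patent's exact procedure.

```python
# Sketch: map the detected two-dimensional hand position to a
# three-dimensional position using the aligned depth image and the
# pinhole model. fx, fy, cx, cy are camera intrinsics (placeholders).
import numpy as np

def hand_3d_position(bbox, depth_image, fx, fy, cx, cy, depth_scale=0.001):
    """bbox = (x_min, y_min, x_max, y_max) from the detection network."""
    u = int((bbox[0] + bbox[2]) / 2)        # hand centre pixel, column
    v = int((bbox[1] + bbox[3]) / 2)        # hand centre pixel, row
    z = depth_image[v, u] * depth_scale     # depth in metres
    # Deproject pixel (u, v) with depth z into camera coordinates.
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])
```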
Step S23: and cutting an image area corresponding to the two-dimensional position of the hand in the hand color image to obtain a corresponding cut hand image, and identifying gesture information in the cut hand image to obtain the gesture information of the operator at the current moment.
In this embodiment, the recognizing of the gesture information in the cut hand image to obtain the gesture information of the operator at the current moment includes inputting the cut hand image into a trained gesture recognition model to obtain the gesture information of the operator at the current moment output by the gesture recognition model; the trained gesture recognition model is obtained by training a to-be-trained model, constructed based on a convolutional neural network algorithm, with a preset training set comprising historical hand images and corresponding gesture labels. For example, the cut hand image is input into an Inception-ResNet-v2 network model trained with the preset training set to obtain the corresponding gesture information of the operator at the current moment.
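The following hedged sketch shows the inference side of this step: crop the hand region from the color image and classify it with a trained CNN. A generic PyTorch model stands in for the Inception-ResNet-v2 network named in the embodiment, and the label set is just the two example gestures from fig. 2.

```python
# Sketch: cut the hand image from the color frame and classify the gesture.
# The model object, input size and label list are illustrative assumptions.
import torch
import torchvision.transforms as T

GESTURE_LABELS = ["pinch", "relax"]  # illustrative subset of gesture labels
preprocess = T.Compose([T.ToPILImage(), T.Resize((224, 224)), T.ToTensor()])

def recognize_gesture(color_image, bbox, model):
    x0, y0, x1, y1 = [int(v) for v in bbox]
    crop = color_image[y0:y1, x0:x1]       # the cut hand image
    batch = preprocess(crop).unsqueeze(0)  # shape: 1 x 3 x 224 x 224
    with torch.no_grad():
        logits = model(batch)
    return GESTURE_LABELS[int(logits.argmax(dim=1))]
```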
Step S24: and determining expected position parameters of the robot hand at the next moment based on the three-dimensional position of the operator hand at the current moment and a preset position mapping relation, and determining expected gestures of the robot hand at the next moment based on the gesture information of the operator at the current moment and the preset gesture mapping relation.
Step S25: and carrying out position control on the robot hand by utilizing the expected position parameters and carrying out gesture control on the robot finger by utilizing the expected gesture.
For more specific working processes of the steps S24 and S25, reference may be made to corresponding contents disclosed in the foregoing embodiments, and details are not repeated here.
Therefore, the two-dimensional hand position of the operator is obtained through the target detection network and mapped against the hand depth image to obtain the three-dimensional hand position; the expected position parameter of the robot hand at the next moment is then determined from the three-dimensional hand position at the current moment, which simplifies the position determination, while recognizing gesture information with the trained gesture recognition model speeds up the gesture recognition step and improves its accuracy.
Referring to fig. 6, an embodiment of the present invention discloses a specific method for teleoperation of a robot arm, including:
step S31: the method comprises the steps of collecting a hand image of an operator at the current time, and carrying out image recognition on the hand image to obtain the hand position and the gesture information of the operator at the current time.
Step S32: and determining an expected position parameter of the robot hand at the next moment according to the position of the operator hand at the current moment and a position distance mapping relation constructed based on a preset distance scaling, and determining an expected gesture of the robot hand at the next moment based on the gesture information of the operator at the current moment and the preset gesture mapping relation.
In this embodiment, the determining of the expected position parameter of the robot hand at the next moment, according to the operator hand position at the current moment and a position distance mapping relation constructed based on a preset distance scaling, includes: determining a first position distance between the operator hand position at the current moment and the operator hand position at the previous moment; and determining, from the first position distance and the position distance mapping relation constructed based on the preset distance scaling, a second position distance that the robot hand needs to move at the next moment. For example, if the robot hand is expected to move a second position distance of 1 meter, which is relatively large, the position distance mapping relation can be set in advance so that the second position distance is 10 times the first position distance; according to this mapping relation, the operator's hand then needs to move only 0.1 meter for the robot hand to be controlled to move 1 meter.
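A minimal sketch of this position distance mapping relation follows; the 10x factor is the example value from the text, and the incremental vector formulation is an assumption for illustration.

```python
# Sketch: the robot hand's displacement at the next moment is the operator's
# hand displacement between two moments, scaled by a preset factor.
import numpy as np

DISTANCE_SCALE = 10.0  # preset distance scaling from the 10x example above

def next_robot_displacement(hand_pos_now, hand_pos_prev):
    """Map the first position distance to the second position distance."""
    first_distance = np.asarray(hand_pos_now) - np.asarray(hand_pos_prev)
    # Robot hand moves 1 m for every 0.1 m of operator hand movement.
    return DISTANCE_SCALE * first_distance
```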
Step S33: and carrying out position control on the robot hand by utilizing the expected position parameters and carrying out gesture control on the robot finger by utilizing the expected gesture.
For more specific working processes of the steps S31 and S33, reference may be made to corresponding contents disclosed in the foregoing embodiments, and details are not repeated here.
Therefore, with the position distance mapping relation constructed based on the preset distance scaling, the distance the robot hand needs to move is scaled from the movement of the operator's hand according to the preset position mapping relation, and the operator does not need to move over the full distance the robot hand is required to move; this makes the operation convenient and speeds up human-robot cooperation.
Referring to fig. 7, the embodiment of the present invention discloses a specific method for teleoperation of a robot arm, and compared with the previous embodiment, the present embodiment further describes and optimizes the technical solution, including:
step S41: the method comprises the steps of collecting a hand image of an operator at the current time, and carrying out image recognition on the hand image to obtain the hand position and the gesture information of the operator at the current time.
Step S42: and determining an expected position parameter of the robot hand at the next moment according to the position of the operator hand at the current moment and a position coordinate mapping relation constructed based on a preset coordinate value scaling ratio, and determining an expected gesture of the robot hand at the next moment based on the gesture information of the operator at the current moment and the preset gesture mapping relation.
In this embodiment, the determining of the expected position parameter of the robot hand at the next moment, according to the operator hand position at the current moment and a position coordinate mapping relation constructed based on a preset coordinate value scaling, includes: mapping the position coordinate of the operator's hand in the camera coordinate system at the current moment into the robot coordinate system according to the position coordinate mapping relation, so as to obtain the expected position coordinate of the robot hand in the robot coordinate system at the next moment; the camera coordinate system is a three-dimensional space coordinate system constructed based on the camera used for collecting the hand image. It can be understood that this mapping relation also includes the coordinate transformation between the camera coordinate system, in which the operator's hand position is measured at the current moment, and the robot coordinate system. The preset coordinate value scaling relation is:

P_R = α P_H

where P_H is the position coordinate of the operator's hand in the robot coordinate system, P_R is the expected position coordinate of the robot hand in the robot coordinate system, and α is the scaling factor. For example, when the position coordinate of the operator's hand in the robot coordinate system at the current moment is (5, 5, 5) and the scaling factor α is set to 3, the expected position coordinate of the robot hand in the robot coordinate system at the next moment is (15, 15, 15).
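A sketch of this coordinate mapping follows: the hand position measured in the camera coordinate system is first transformed into the robot coordinate system, then scaled by α. The extrinsic rotation and translation come from an external camera-to-robot calibration and are placeholders here, as is the α value taken from the example above.

```python
# Sketch of the position coordinate mapping relation P_R = alpha * P_H,
# including the camera-to-robot coordinate transformation it relies on.
import numpy as np

R_CAM_TO_ROBOT = np.eye(3)    # assumed extrinsic rotation (calibration)
T_CAM_TO_ROBOT = np.zeros(3)  # assumed extrinsic translation (calibration)
ALPHA = 3.0                   # scaling factor from the example in the text

def expected_robot_position(hand_pos_camera):
    # Transform the hand position from the camera frame to the robot frame.
    p_h = R_CAM_TO_ROBOT @ np.asarray(hand_pos_camera) + T_CAM_TO_ROBOT
    return ALPHA * p_h        # P_R = alpha * P_H

# With the identity transform above, a hand position of (5, 5, 5) in the
# robot frame and alpha = 3 yields the expected position (15, 15, 15).
```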
Step S43: and carrying out position control on the robot hand by utilizing the expected position parameters and carrying out gesture control on the robot finger by utilizing the expected gesture.
For more specific working processes of the steps S41 and S43, reference may be made to corresponding contents disclosed in the foregoing embodiments, and details are not repeated here.
Therefore, according to the method, the position coordinate of the operator's hand in the camera coordinate system at the current moment is first obtained, and is then mapped into the robot coordinate system through the position coordinate mapping relation constructed based on the preset coordinate value scaling, yielding the expected position coordinate of the robot hand in the robot coordinate system at the next moment; the operator's hand movement is thus converted into a scaled target position for the robot hand in a single mapping step.
Referring to fig. 8, an embodiment of the present application discloses a robot arm teleoperation device, including:
the image acquisition module 21 is used for acquiring a hand image of an operator at the current moment;
the image recognition module 22 is configured to perform image recognition on the hand image to obtain a position of the hand of the operator and gesture information of the operator at the current time;
the position determining module 23 is configured to determine an expected position parameter of the robot hand at the next moment based on the operator hand position at the current moment and a preset position mapping relation;
the gesture determining module 24 is configured to determine an expected gesture of the robot hand at a next moment based on the gesture information of the operator at the current moment and a preset gesture mapping relationship;
a control module 25 for performing position control on the robot hand using the desired position parameters and performing gesture control on the robot finger using the desired gesture.
Therefore, the image acquisition module and the image recognition module obtain the hand position and gesture information of the operator at the current moment based on vision, which reduces the complexity of the teleoperation process, does not constrain the operator's natural motion, and improves the flexibility of teleoperation. The position determining module can scale the distance the robot hand needs to move based on the preset position mapping relation, so the operator's hand and the robot hand need not move equal distances, making operation more convenient. The gesture determining module determines the expected gesture of the robot hand at the next moment based on the preset gesture mapping relation, so that the control module can control the robot using the information from the position determining module and the gesture determining module to complete the teleoperation, improving the speed at which teleoperation is completed.
In some embodiments, the image capturing module 21 includes:
the color image acquisition unit is used for acquiring images of the hands of the operator at the current moment to obtain a color image of the hands of the operator at the current moment;
and the depth image acquisition unit is used for acquiring images of the hand of the operator at the current moment so as to obtain the hand depth image of the operator at the current moment.
In some embodiments, the image recognition module 22 includes:
and the hand position acquisition module is used for detecting the two-dimensional position of the hand of the operator in the hand color image and determining the three-dimensional position of the hand of the operator at the current moment based on the two-dimensional position of the hand and the hand depth image.
In some embodiments, a hand position acquisition module comprises:
and the two-dimensional position acquisition unit is used for carrying out target detection on a hand area in the human hand color image so as to obtain the two-dimensional position of the hand of the operator.
In some embodiments, a hand position acquisition module comprises:
and the three-dimensional position acquisition unit is used for mapping the depth information on the image area corresponding to the two-dimensional hand position in the hand depth image to the two-dimensional hand position based on the corresponding relation of the pixel points between the hand color image and the hand depth image so as to obtain the three-dimensional hand position of the operator at the current moment.
In some embodiments, the image recognition module 22 includes:
and the gesture information acquisition module is used for cutting an image area corresponding to the two-dimensional hand position in the hand color image to obtain a corresponding cut hand image, and identifying gesture information in the cut hand image to obtain the gesture information of the operator at the current moment.
In some embodiments, the gesture information obtaining module includes:
the gesture information output unit is used for inputting the cut hand images to the trained gesture recognition model so as to obtain the gesture information of the operator at the current moment output by the gesture recognition model; the trained gesture recognition model is obtained by utilizing a preset training set to train a model to be trained, which is constructed based on a convolutional neural network algorithm, wherein the preset training set comprises historical hand images and corresponding gesture labels.
In some embodiments, the position determining module 23 includes:
and the position distance mapping unit is used for determining a first position distance between the position of the operator hand at the current moment and the position of the operator hand at the previous moment, and determining a second position distance which the robot hand needs to move at the next moment by using the first position distance and a position distance mapping relation constructed based on a preset distance scaling.
In some embodiments, the position determining module 23 includes:
the position coordinate mapping unit is used for mapping the position coordinate of the hand position of the operator in the camera coordinate system at the current moment to the robot coordinate system according to the position coordinate mapping relation established based on the preset coordinate value scaling ratio to obtain the expected position coordinate of the robot hand in the robot coordinate system at the next moment; the camera coordinate system is a three-dimensional space coordinate system constructed based on a camera used for collecting the human hand image.
In some embodiments, the control module 25 includes:
the vision sensing information acquisition module is used for acquiring information of the surrounding environment by using a vision sensor which is arranged on the robot in advance so as to obtain vision sensing information;
the force sensing information acquisition module is used for acquiring information by utilizing a force sensor which is arranged on the robot in advance so as to obtain force sensing information;
and the fine adjustment module is used for fine adjusting the pose of the robot in the position control and gesture control processes based on the visual sensing information and the force sensing information.
Further, the embodiment of the application also provides a robot system, which comprises a teleoperation device and a robot, wherein the teleoperation device comprises a depth camera, a memory and a processor.
The teleoperation device may be an electronic device as shown in fig. 9. Fig. 9 is a schematic diagram of a teleoperational device 30 according to an embodiment of the present invention, which should not be construed as limiting the scope of the present application.
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 30 may specifically include: at least one processor 31, at least one memory 32, a power supply 33, a communication interface 34, an input-output interface 35, a communication bus 36, and a depth camera 37. The memory 32 is used for storing a computer program, which is loaded and executed by the processor 31 to implement the following steps: performing image recognition on the hand image to obtain the hand position and gesture information of the operator at the current moment; determining an expected position parameter of the robot hand at the next moment based on the operator hand position at the current moment and a preset position mapping relation, and determining an expected gesture of the robot hand at the next moment based on the gesture information of the operator at the current moment and the preset gesture mapping relation; and performing position control on the robot hand using the expected position parameter and gesture control on the robot fingers using the expected gesture.
In this embodiment, the power supply 33 is used to provide operating voltage for each hardware device on the computer device 30; the communication interface 34 can create a data transmission channel between the computer device 30 and an external device, and the communication protocol followed by the communication interface is any communication protocol applicable to the technical solution of the present application, and is not specifically limited herein; the input/output interface 35 is configured to obtain external input data or output data to the outside, and a specific interface type thereof may be selected according to specific application requirements, which is not specifically limited herein.
The processor 31 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like. The processor 31 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 31 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 31 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed on the display screen. In some embodiments, the processor 31 may further include an AI (Artificial Intelligence) processor for processing a calculation operation related to machine learning.
In addition, the storage 32 is used as a carrier for storing resources, and may be a read-only memory, a random access memory, a magnetic disk, an optical disk, or the like, wherein the resources stored thereon include an operating system 321, a computer program 322, data 323, and the like, and the storage may be a transient storage or a permanent storage.
The operating system 321 is used for managing and controlling hardware devices and computer programs 322 on the computer device 30, so as to implement operations and processing of the mass data 323 in the memory 32 by the processor 31, which may be Windows, Unix, Linux, and the like. The computer program 322 may further comprise a computer program that can be used to perform other specific tasks in addition to the computer program that can be used to perform the method for teleoperation of a robot arm performed by the computer device 30 disclosed in any of the previous embodiments. The data 323 may include data received by the computer device and transmitted from an external device, or may include data collected by the input/output interface 35 itself.
The depth camera 37 may be configured to acquire a hand image of an operator at a current time; it should be noted that the depth camera 37 may be a RealSense D435i depth camera, and performs image acquisition on the human hand of the operator at the current time by using the RealSense D435i depth camera to obtain a human hand color image and a human hand depth image of the operator at the current time.
Those skilled in the art will appreciate that the configuration shown in fig. 9 is not limiting of the electronic device 30, which may include more or fewer components than those shown.
Fig. 10 provides a schematic diagram of a robot hand-arm apparatus. In the robot system, the apparatus includes a vision sensor 41, a manipulator 42 and a UR5 mechanical arm 43, and further includes force sensors installed at different positions of the apparatus according to different requirements, for performing the corresponding teleoperation according to information transmitted by the teleoperation equipment. The vision sensor 41 is a RealSense camera, in this embodiment specifically a RealSense D435i depth camera installed at the end of the mechanical arm; it acquires information of the surrounding environment to obtain visual sensing information and is mainly used to detect the pose of the grasped object. The manipulator 42 of the robot is a five-finger dexterous hand with 6 degrees of freedom, comprising 5 bending degrees of freedom for the 5 fingers and 1 rotation degree of freedom for the thumb. The mechanical arm 43 of the robot has 6 degrees of freedom, comprising translational degrees of freedom along the three orthogonal coordinate axes x, y and z and rotational degrees of freedom about those same axes, so that basic path planning and motion control functions can be realized. The robot further comprises the vision sensor for acquiring information of the surrounding environment to obtain visual sensing information and the force sensors for acquiring force sensing information; when executing the computer program, the processor further implements the step of fine-tuning the pose of the robot during position control and gesture control based on the visual sensing information and the force sensing information.
Further, an embodiment of the present application further discloses a storage medium, in which a computer program is stored, and when the computer program is loaded and executed by a processor, the steps of the robot arm teleoperation method disclosed in any of the foregoing embodiments are implemented.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above detailed descriptions of the method, apparatus, system and medium for teleoperation of a robot arm provided by the present invention, and the specific examples applied herein have been provided to explain the principles and embodiments of the present invention, and the descriptions of the above embodiments are only used to help understanding the method and its core ideas of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (12)

1. A method for teleoperation of a robot arm is characterized by comprising the following steps:
acquiring a hand image of an operator at the current time, and performing image recognition on the hand image to obtain the hand position and the gesture information of the operator at the current time;
determining an expected position parameter of the robot hand at the next moment based on the operator hand position at the current moment and a preset position mapping relation, and determining an expected gesture of the robot hand at the next moment based on the gesture information of the operator at the current moment and a preset gesture mapping relation;
and carrying out position control on the robot hand by utilizing the expected position parameters and carrying out gesture control on the robot finger by utilizing the expected gesture.
2. The method for teleoperation of a robot arm according to claim 1, wherein the acquiring a hand image of an operator at a current time and performing image recognition on the hand image to obtain a hand position and gesture information of the operator at the current time comprises:
acquiring images of the hands of the operator at the current moment to obtain a hand color image and a hand depth image of the operator at the current moment;
detecting the two-dimensional hand position of the operator in the hand color image, and determining the three-dimensional hand position of the operator at the current moment based on the two-dimensional hand position and the hand depth image;
and cutting an image area corresponding to the two-dimensional position of the hand in the hand color image to obtain a corresponding cut hand image, and identifying gesture information in the cut hand image to obtain the gesture information of the operator at the current moment.
3. The method of claim 2, wherein the detecting the two-dimensional position of the operator's hand in the color image of the human hand and determining the three-dimensional position of the operator's hand at the current time based on the two-dimensional position of the hand and the depth image of the human hand comprises:
performing target detection on a hand area in the human hand color image by using a preset target detection network to obtain a two-dimensional hand position of the operator;
and mapping the depth information on the image area corresponding to the two-dimensional hand position in the hand depth image to the two-dimensional hand position based on the corresponding relation of the pixel points between the hand color image and the hand depth image so as to obtain the three-dimensional hand position of the operator at the current moment.
4. The method of claim 2, wherein the recognizing gesture information in the cropped hand image to obtain the gesture information of the operator at the current moment comprises:
inputting the cropped hand image into a trained gesture recognition model to obtain the gesture information of the operator at the current moment output by the gesture recognition model; wherein the trained gesture recognition model is obtained by training, with a preset training set, a to-be-trained model constructed based on a convolutional neural network algorithm, and the preset training set comprises historical hand images and corresponding gesture labels.
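Claim 4 specifies only that the recognizer is a convolutional-network model trained on historical hand images with gesture labels. A minimal PyTorch sketch of such a classifier, with an assumed 3x64x64 cropped-hand input and an arbitrary illustrative layer layout:

import torch
import torch.nn as nn

class GestureNet(nn.Module):
    """Toy CNN gesture classifier; the layer sizes are illustrative, not claimed."""

    def __init__(self, num_gestures: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_gestures)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))  # logits over the gesture labels

logits = GestureNet()(torch.randn(1, 3, 64, 64))  # one cropped hand image -> gesture scores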
5. The method of claim 1, wherein the determining the expected position parameter of the robot hand at the next moment based on the operator hand position at the current moment and a preset position mapping relation comprises:
determining a first position distance between the operator hand position at the current moment and the operator hand position at the previous moment;
and determining, by using the first position distance and a position distance mapping relation constructed based on a preset distance scaling ratio, a second position distance that the robot hand needs to move at the next moment.
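Claim 5 describes a relative (incremental) mapping: the robot moves by the operator's hand displacement scaled by the preset ratio. A one-function sketch, with scale standing in for the preset distance scaling ratio:

import numpy as np

def scaled_displacement(prev_hand: np.ndarray, curr_hand: np.ndarray, scale: float) -> np.ndarray:
    """Second position distance the robot hand should move at the next moment."""
    return scale * (curr_hand - prev_hand)   # first position distance, scaled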
6. The method of claim 1, wherein the determining the expected position parameter of the robot hand at the next moment based on the operator hand position at the current moment and a preset position mapping relation comprises:
mapping, according to a position coordinate mapping relation established based on a preset coordinate value scaling ratio, the position coordinates of the operator's hand in a camera coordinate system at the current moment into a robot coordinate system, so as to obtain expected position coordinates of the robot hand in the robot coordinate system at the next moment; wherein the camera coordinate system is a three-dimensional space coordinate system constructed based on the camera used for acquiring the hand image.
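Claim 6, by contrast, maps absolute coordinates from the camera coordinate system into the robot coordinate system. A sketch assuming a fixed, pre-calibrated camera-to-robot rotation R and translation t (an assumption; the claim specifies neither), with the preset coordinate value scaling ratio applied first:

import numpy as np

def camera_to_robot(p_camera: np.ndarray, scale: float,
                    R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Expected position coordinates of the robot hand in the robot coordinate system."""
    return R @ (scale * p_camera) + t   # scale, rotate, then translate the camera-frame point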
7. The method of any one of claims 1 to 6, wherein the performing position control on the robot hand by utilizing the expected position parameter and performing gesture control on the robot fingers by utilizing the expected gesture further comprises:
acquiring information about the surrounding environment by using a visual sensor arranged on the robot in advance to obtain visual sensing information, and acquiring information by using a force sensor arranged on the robot in advance to obtain force sensing information;
and fine-tuning the pose of the robot during the position control and gesture control processes based on the visual sensing information and the force sensing information.
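Claim 7 leaves the fusion rule open. Purely as an assumed illustration, an admittance-style correction is one common choice: the measured contact force and a vision-derived alignment offset each contribute a small adjustment to the commanded position, with hypothetical gains as tuning parameters.

import numpy as np

def fine_adjust(position_cmd: np.ndarray, contact_force: np.ndarray,
                visual_offset: np.ndarray,
                force_gain: float = 1e-3, visual_gain: float = 0.5) -> np.ndarray:
    """Fine-tune the commanded position using force and visual sensing information."""
    compliance = force_gain * contact_force   # yield slightly in the direction of contact forces
    alignment = visual_gain * visual_offset   # nudge toward the visually observed target
    return position_cmd + compliance + alignment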
8. A teleoperation device for a robot arm, characterized by comprising:
the image acquisition module is used for acquiring a hand image of an operator at the current moment;
the image recognition module is used for carrying out image recognition on the hand image so as to obtain the hand position and the gesture information of the operator at the current moment;
the position determining module is used for determining an expected position parameter of the robot hand at the next moment based on the operator hand position at the current moment and a preset position mapping relation;
the gesture determining module is used for determining an expected gesture of the robot hand at the next moment based on the gesture information of the operator at the current moment and a preset gesture mapping relation;
and the control module is used for performing position control on the robot hand by utilizing the expected position parameter and performing gesture control on the robot fingers by utilizing the expected gesture.
9. A robot system, comprising a teleoperation device and a robot; the teleoperation device comprises a depth camera, a memory, and a processor; wherein:
the depth camera is used for collecting hand images of an operator at the current moment;
the memory is used for storing a computer program;
the processor is configured to execute the computer program to implement the following steps:
performing image recognition on the hand image to obtain the hand position and gesture information of the operator at the current moment; determining an expected position parameter of the robot hand at the next moment based on the operator hand position at the current moment and a preset position mapping relation, and determining an expected gesture of the robot hand at the next moment based on the gesture information of the operator at the current moment and a preset gesture mapping relation; and performing position control on the robot hand by utilizing the expected position parameter and performing gesture control on the robot fingers by utilizing the expected gesture.
10. The robot system according to claim 9, wherein the robot hand is a five-fingered dexterous hand having 6 degrees of freedom, and the robot arm has 6 degrees of freedom.
11. The robot system according to claim 9, wherein the robot further comprises:
the visual sensor is used for acquiring information about the surrounding environment to obtain visual sensing information;
a force sensor for acquiring force sensing information;
and the processor, when executing the computer program, further implements the steps of:
and fine-tuning the pose of the robot during the position control and gesture control processes based on the visual sensing information and the force sensing information.
12. A computer-readable storage medium for storing a computer program; wherein the computer program, when executed by a processor, implements the steps of the method for teleoperation of a robot arm according to any one of claims 1 to 7.
CN202111240203.4A 2021-10-25 2021-10-25 Remote operation method, device, system and medium for robot arm Active CN113829357B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111240203.4A CN113829357B (en) 2021-10-25 2021-10-25 Remote operation method, device, system and medium for robot arm


Publications (2)

Publication Number Publication Date
CN113829357A true CN113829357A (en) 2021-12-24
CN113829357B CN113829357B (en) 2023-10-03

Family

ID=78965881

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111240203.4A Active CN113829357B (en) 2021-10-25 2021-10-25 Remote operation method, device, system and medium for robot arm

Country Status (1)

Country Link
CN (1) CN113829357B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102350700A (en) * 2011-09-19 2012-02-15 华南理工大学 Method for controlling a robot based on vision
CN103112007A (en) * 2013-02-06 2013-05-22 华南理工大学 Human-machine interaction method based on hybrid sensors
CN104589356A (en) * 2014-11-27 2015-05-06 北京工业大学 Dexterous hand teleoperation control method based on Kinect human hand motion capture
CN108828996A (en) * 2018-05-31 2018-11-16 四川文理学院 Robot arm remote control system and method based on visual information
CN111694428A (en) * 2020-05-25 2020-09-22 电子科技大学 Gesture and trajectory remote control robot system based on Kinect
US20210086364A1 (en) * 2019-09-20 2021-03-25 Nvidia Corporation Vision-based teleoperation of dexterous robotic system


Also Published As

Publication number Publication date
CN113829357B (en) 2023-10-03

Similar Documents

Publication Publication Date Title
US20210205986A1 (en) Teleoperating Of Robots With Tasks By Mapping To Human Operator Pose
Kang et al. Toward automatic robot instruction from perception-temporal segmentation of tasks from human hand motion
WO2023056670A1 (en) Mechanical arm autonomous mobile grabbing method under complex illumination conditions based on visual-tactile fusion
CN114080583B (en) Visual teaching and repetitive movement manipulation system
Tölgyessy et al. Foundations of visual linear human–robot interaction via pointing gesture navigation
WO2011065035A1 (en) Method of creating teaching data for robot, and teaching system for robot
WO2011065034A1 (en) Method for controlling action of robot, and robot system
CN107030692B (en) Manipulator teleoperation method and system based on perception enhancement
Lu et al. Immersive manipulation of virtual objects through glove-based hand gesture interaction
Zubrycki et al. Using integrated vision systems: three gears and leap motion, to control a 3-finger dexterous gripper
CN105319991A (en) Kinect visual information-based robot environment identification and operation control method
CN105930775A (en) Face orientation identification method based on sensitivity parameter
CN102830798A (en) Mark-free hand tracking method of single-arm robot based on Kinect
Chen et al. A human–robot interface for mobile manipulator
CN115070781B (en) Object grabbing method and two-mechanical-arm cooperation system
Kofman et al. Robot-manipulator teleoperation by markerless vision-based hand-arm tracking
CN113505694A (en) Human-computer interaction method and device based on sight tracking and computer equipment
CN115576426A (en) Hand interaction method for mixed reality flight simulator
Kang et al. A robot system that observes and replicates grasping tasks
Raheja et al. Controlling a remotely located robot using hand gestures in real time: A DSP implementation
CN113829357B (en) Remote operation method, device, system and medium for robot arm
Kang et al. Grasp recognition and manipulative motion characterization from human hand motion sequences
Barber et al. Sketch-based robot programming
Amatya et al. Real time kinect based robotic arm manipulation with five degree of freedom
KR20230100101A (en) Robot control system and method for robot setting and robot control using the same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant