CN113829357B - Remote operation method, device, system and medium for robot arm - Google Patents

Remote operation method, device, system and medium for robot arm

Info

Publication number
CN113829357B
CN113829357B (application CN202111240203.4A)
Authority
CN
China
Prior art keywords
hand
operator
robot
gesture
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111240203.4A
Other languages
Chinese (zh)
Other versions
CN113829357A (en)
Inventor
高庆
陈勇全
池楚亮
王启文
沈文心
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Guochuang Guishen Intelligent Robot Co ltd
Original Assignee
Chinese University of Hong Kong Shenzhen
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chinese University of Hong Kong Shenzhen filed Critical Chinese University of Hong Kong Shenzhen
Priority to CN202111240203.4A
Publication of CN113829357A
Application granted
Publication of CN113829357B
Legal status: Active
Anticipated expiration

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J9/161 Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems
    • B25J13/00 Controls for manipulators
    • B25J13/08 Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • B25J13/085 Force or torque sensors

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Human Computer Interaction (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Manipulator (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a teleoperation method, device, system and medium for a robot arm, relating to the field of robot teleoperation. The method comprises the following steps: acquiring a hand image of an operator at the current moment, and performing image recognition on the hand image to obtain the operator's hand position and gesture information at the current moment; determining a desired position parameter of the robot hand at the next moment based on the operator's hand position at the current moment and a preset position mapping relation, and determining a desired gesture of the robot hand at the next moment based on the operator's gesture information at the current moment and a preset gesture mapping relation; and performing position control on the robot hand using the desired position parameter and gesture control on the robot fingers using the desired gesture. In this scheme, the hand image is detected and recognized visually, and the position and gesture of the robot hand are determined accordingly, achieving teleoperation of the robot arm.

Description

Remote operation method, device, system and medium for robot arm
Technical Field
The application relates to the field of robot teleoperation, and in particular to a teleoperation method, device, system and medium for a robot arm.
Background
Robot teleoperation is a technique that combines human decision-making ability with the accuracy of a robot. Because the degree of intelligence of existing robots is limited, an operator can remotely control a robot to complete dangerous and difficult tasks through teleoperation, a technology of great application value in special fields such as aerospace, medical treatment and explosive disposal. Traditional teleoperation interfaces such as the mouse, keyboard and joystick only control the position and posture of the mechanical arm, so they realize only simple teleoperation and are difficult to operate; teleoperation methods based on wearable interactive devices such as data gloves and wristbands limit the flexibility of hand movement and can only analyze basic static gestures.
In summary, how to realize convenient, free, flexible and simple teleoperation of both the mechanical arm and the mechanical hand of a robot is a problem to be solved in the field.
Disclosure of Invention
In view of the above, an object of the present invention is to provide a teleoperation method, apparatus, system and medium for a robot arm, which can realize teleoperation of both the mechanical arm and the mechanical hand of the robot. The specific scheme is as follows:
In a first aspect, the application discloses a teleoperation method for a robot arm, comprising the following steps:
acquiring a hand image of an operator at the current moment, and carrying out image recognition on the hand image to obtain the hand position of the operator and the gesture information of the operator at the current moment;
determining expected position parameters of the robot hand at the next moment based on the position of the operator hand at the current moment and a preset position mapping relation, and determining expected gestures of the robot hand at the next moment based on the gesture information of the operator at the current moment and a preset gesture mapping relation;
and performing position control on the robot hand by using the expected position parameters and performing gesture control on the robot finger by using the expected gestures.
Optionally, the collecting a hand image of the operator at the current moment and performing image recognition on the hand image to obtain the hand position of the operator and the gesture information of the operator at the current moment includes:
image acquisition is carried out on the hands of the operator at the current moment to obtain a hand color image and a hand depth image of the operator at the current moment;
detecting the two-dimensional hand position of the operator in the hand color image, and determining the three-dimensional hand position of the operator at the current time based on the two-dimensional hand position and the hand depth image;
Cutting an image area corresponding to the two-dimensional hand position in the hand color image to obtain a corresponding cut hand image, and identifying gesture information in the cut hand image to obtain operator gesture information at the current moment.
Optionally, the detecting the two-dimensional hand position of the operator in the color image of the human hand, and determining the three-dimensional hand position of the operator at the current time based on the two-dimensional hand position and the depth image of the human hand includes:
performing target detection on a hand area in the hand color image by using a preset target detection network to obtain a hand two-dimensional position of the operator;
and mapping depth information on an image area corresponding to the hand two-dimensional position in the hand depth image to the hand two-dimensional position based on the pixel point corresponding relation between the hand color image and the hand depth image so as to obtain the hand three-dimensional position of the operator at the current moment.
Optionally, the identifying the gesture information in the cut hand image to obtain the gesture information of the operator at the current moment includes:
Inputting the cut hand image into a trained gesture recognition model to obtain operator gesture information at the current moment output by the gesture recognition model; the trained gesture recognition model is obtained by training a model to be trained constructed based on a convolutional neural network algorithm by using a preset training set, and the preset training set comprises historical hand images and corresponding gesture labels.
Optionally, the determining, based on the mapping relationship between the hand position of the operator and the preset position at the current time, the expected position parameter of the hand of the robot at the next time includes:
determining a first position distance between the hand position of the operator at the current time and the hand position of the operator at the previous time;
and determining a second position distance to be moved by the robot hand at the next moment by using the first position distance and a position distance mapping relation constructed based on a preset distance scaling.
Optionally, the determining, based on the mapping relationship between the hand position of the operator and the preset position at the current time, the expected position parameter of the hand of the robot at the next time includes:
according to a position coordinate mapping relation constructed based on a preset coordinate value scaling proportion, mapping the position coordinate of the hand position of the operator in a camera coordinate system at the current moment to a robot coordinate system to obtain the expected position coordinate of the hand of the robot in the robot coordinate system at the next moment; the camera coordinate system is a three-dimensional spatial coordinate system constructed based on a camera used to acquire the image of the human hand.
Optionally, in the process of performing position control on the robot hand by using the desired position parameter and performing gesture control on the robot finger by using the desired gesture, the method further includes:
acquiring information of surrounding environment by utilizing a vision sensor which is arranged on the robot in advance to obtain vision sensing information, and acquiring information by utilizing a force sensor which is arranged on the robot in advance to obtain force sensing information;
and fine tuning the pose of the robot in the process of the position control and the gesture control based on the visual sensing information and the force sensing information.
In a second aspect, the present application discloses a teleoperation device for a robotic arm, comprising:
the image acquisition module is used for acquiring the hand image of the operator at the current moment;
the image recognition module is used for carrying out image recognition on the hand image so as to obtain the hand position of the operator and the gesture information of the operator at the current moment;
the position determining module is used for determining expected position parameters of the robot hand at the next moment based on the position of the hand of the operator at the current moment and a preset position mapping relation;
The gesture determining module is used for determining expected gestures of the robot hand at the next moment based on the gesture information of the operator at the current moment and a preset gesture mapping relation;
and the control module is used for carrying out position control on the robot hand by utilizing the expected position parameter and carrying out gesture control on the robot finger by utilizing the expected gesture.
In a third aspect, the present application discloses a robotic system comprising a teleoperation device and a robot; the teleoperation device comprises a depth camera, a memory and a processor; wherein:
the depth camera is used for acquiring a hand image of an operator at the current moment;
the memory is used for storing a computer program;
the processor is configured to execute the computer program to implement the steps of:
performing image recognition on the hand image to obtain the hand position of the operator and the gesture information of the operator at the current moment; determining expected position parameters of the robot hand at the next moment based on the position of the operator hand at the current moment and a preset position mapping relation, and determining expected gestures of the robot hand at the next moment based on the gesture information of the operator at the current moment and a preset gesture mapping relation; and performing position control on the robot hand by using the expected position parameters and performing gesture control on the robot finger by using the expected gestures.
Optionally, the hand of the robot is a five-finger dexterous hand with 6 degrees of freedom, and the arm of the robot is a mechanical arm with 6 degrees of freedom.
Optionally, the robot further includes:
the visual sensor is used for collecting information of surrounding environment to obtain visual sensing information;
a force sensor for acquiring force sensing information;
and, when executing the computer program, the processor further implements the steps of:
and fine tuning the pose of the robot in the process of the position control and the gesture control based on the visual sensing information and the force sensing information.
In a fourth aspect, the present application discloses a computer-readable storage medium for storing a computer program; wherein the computer program when executed by the processor implements the steps of the previously disclosed robot arm teleoperation method.
In this way, a hand image of the operator at the current moment is acquired and recognized to obtain the operator's hand position and gesture information at the current moment; a desired position parameter of the robot hand at the next moment is determined based on the operator's hand position at the current moment and a preset position mapping relation, and a desired gesture of the robot hand at the next moment is determined based on the operator's gesture information at the current moment and a preset gesture mapping relation; the robot hand is then position-controlled with the desired position parameter and the robot fingers are gesture-controlled with the desired gesture. Because the hand image at the current moment is recognized visually to determine the operator's hand position and gesture information, the recognition steps are simple and the operator's gestures remain flexible; because the operator's hand position at the current moment is transformed through the preset position mapping relation to determine the desired position parameter at the next moment, setup is simpler and more convenient; and because the robot fingers are gesture-controlled with the desired gesture, the robot hand can complete the corresponding gestures, realizing efficient gesture teleoperation.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present application, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a robot arm teleoperation method disclosed in the present application;
FIG. 2 is a schematic diagram of specific gesture information disclosed in the present application;
FIG. 3 is a schematic diagram of a specific robot arm teleoperation scenario disclosed in the present application;
FIG. 4 is a flowchart of a specific robot arm teleoperation method disclosed in the present application;
FIG. 5 is a flowchart of a method for acquiring the three-dimensional position of a human hand disclosed in the present application;
FIG. 6 is a flowchart of a specific robot arm teleoperation method disclosed in the present application;
FIG. 7 is a flowchart of a specific robot arm teleoperation method disclosed in the present application;
FIG. 8 is a schematic structural diagram of a robot arm teleoperation device disclosed in the present application;
FIG. 9 is a structural diagram of a teleoperation device disclosed in the present application;
FIG. 10 is a schematic diagram of a specific robot structure disclosed in the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Traditional teleoperation interfaces such as the mouse, keyboard and joystick only control the position and posture of the mechanical arm, so they realize only simple teleoperation and are difficult to operate; teleoperation methods based on wearable interactive devices such as data gloves and wristbands limit the flexibility of hand movement and can only analyze basic static gestures.
Therefore, the application correspondingly provides a robot arm teleoperation scheme that realizes convenient, free, flexible and simple teleoperation of both the mechanical arm and the mechanical hand of the robot.
Referring to fig. 1, the embodiment of the invention discloses a teleoperation method for a robot arm, which comprises the following steps:
step S11: and acquiring a hand image of an operator at the current moment, and carrying out image recognition on the hand image to obtain the hand position of the operator and the gesture information of the operator at the current moment.
In this embodiment, an image of the operator's hand at the current moment is first collected, and image recognition is performed on the collected hand image to obtain the operator's hand position and gesture information at the current moment, where the gesture information represents a teleoperation gesture. As shown in fig. 2, for example, an operator gesture picture in which the five fingers are gathered and the four finger joints other than the thumb are bent toward the palm corresponds to the gesture information "pinch"; an operator gesture picture with the five fingers relaxed corresponds to the gesture information "relax".
Step S12: and determining expected position parameters of the robot hand at the next moment based on the position of the operator hand at the current moment and a preset position mapping relation, and determining expected gestures of the robot hand at the next moment based on the gesture information of the operator at the current moment and a preset gesture mapping relation.
In this embodiment, the operator's gesture information at the current moment is mapped to the robot hand so as to subsequently control the robot hand to move into the corresponding gesture. For example, when the gesture information is "pinch", the hand gesture corresponding to "pinch" recorded in the preset gesture mapping relation is mapped to the robot hand; when the gesture information is "relax", the hand gesture corresponding to "relax" recorded in the preset gesture mapping relation is mapped to the robot hand. It should be noted that in this embodiment the time difference between the current moment and the next moment is small: the robot can perform the action corresponding to the operator within 100 ms of the operator's hand moving. It should further be noted that the preset position mapping relation may specifically include a position distance mapping relation or a position coordinate mapping relation.
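By way of an illustrative sketch only (the application does not disclose concrete joint values), the preset gesture mapping relation can be thought of as a lookup table from the recognized gesture label to a target joint configuration of the five-finger dexterous hand. In the Python sketch below, the gesture labels come from this embodiment, while the joint names and angle values are hypothetical placeholders:

```python
# Hypothetical sketch of a preset gesture mapping relation: each recognized
# operator gesture label maps to a target joint configuration (radians) of the
# five-finger dexterous hand. Joint names and angles are illustrative only.
GESTURE_MAP = {
    # five fingers gathered, four non-thumb finger joints bent toward the palm
    "pinch": {"thumb": 0.6, "index": 1.2, "middle": 1.2, "ring": 1.2, "little": 1.2},
    # five fingers relaxed / extended
    "relax": {"thumb": 0.0, "index": 0.0, "middle": 0.0, "ring": 0.0, "little": 0.0},
}

def desired_hand_gesture(operator_gesture: str) -> dict:
    """Return the desired joint configuration of the robot hand at the next moment."""
    return GESTURE_MAP[operator_gesture]
```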
Step S13: and performing position control on the robot hand by using the expected position parameters and performing gesture control on the robot finger by using the expected gestures.
In this embodiment, in the process of performing position control on the robot hand using the desired position parameter and gesture control on the robot fingers using the desired gesture, the method may further include collecting information about the surrounding environment with a vision sensor arranged on the robot in advance to obtain visual sensing information, and collecting information with a force sensor arranged on the robot in advance to obtain force sensing information; and fine-tuning the pose of the robot during the position control and the gesture control based on the visual sensing information and the force sensing information. The number and positions of the force sensors and vision sensors can be determined according to the specific requirements of the actual situation. For example, for particularly fine operation, several vision sensors can be arranged on the robot arm (or set up independently outside the robot), and force sensors can be installed at the tip of each finger and at the palm of the robot hand, so as to obtain more accurate visual and force sensing information. As shown in fig. 3, the operator 11 and the robot arm 13 are located on the same side; the camera 12 in front of the operator 11 collects a hand image of the operator 11 at the current moment, and the object 14 to be gripped is placed in front of the robot arm 13. When the robot grips the object, the vision sensor collects information about the surrounding environment to obtain visual sensing information, and force sensing information is collected by the force sensors installed on the five-finger dexterous hand. While moving to the desired position based on the desired position parameter, the mechanical arm and the five-finger dexterous hand use the visual and force sensing information to finely control the pose and gripping force, so that they can be fine-tuned and efficient, accurate teleoperation is achieved.
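One plausible way to realize the fine-tuning described above is a simple proportional correction of the teleoperated target position, driven by the error between a desired contact force and the force measured by the hand-mounted force sensors. The sketch below is an assumption-laden illustration, not the application's own control law; the gain value and the sensor interfaces are placeholders:

```python
import numpy as np

K_FORCE = 0.002  # m/N, illustrative gain; tuning is application-specific

def fine_tune_position(desired_pos, force_reading, force_target):
    """Nudge the teleoperated target position using fingertip force feedback.

    desired_pos:   (3,) target from the position mapping relation, metres
    force_reading: (3,) force measured by the hand-mounted force sensor, N
    force_target:  (3,) desired contact force, N
    """
    correction = K_FORCE * (np.asarray(force_target) - np.asarray(force_reading))
    return np.asarray(desired_pos) + correction
```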
Therefore, the hand image of the operator at the current moment is acquired, and the hand image is subjected to image recognition to obtain the hand position of the operator and the gesture information of the operator at the current moment; determining expected position parameters of the robot hand at the next moment based on the position of the operator hand at the current moment and a preset position mapping relation, and determining expected gestures of the robot hand at the next moment based on the gesture information of the operator at the current moment and a preset gesture mapping relation; and performing position control on the robot hand by using the expected position parameters and performing gesture control on the robot finger by using the expected gestures. Therefore, the hand image at the current moment is subjected to image recognition based on vision, and the hand position of the operator and the gesture information of the operator at the current moment are determined, so that the recognition operation steps are simple and the gesture of the operator is flexible; based on a preset position mapping relation, the hand position of the operator at the current moment is correspondingly transformed, and expected position parameters of the hand of the robot at the next moment are determined, so that the setting is simpler and more convenient; and performing gesture control on the robot finger by using the expected gesture, so that the robot hand can complete corresponding gestures, and efficient gesture teleoperation is realized.
Referring to fig. 4, the embodiment of the invention discloses a specific teleoperation method for a robot arm, which comprises the following steps:
step S21: and acquiring images of hands of the operator at the current moment to obtain color images of the hands and the depth images of the hands of the operator at the current moment.
In this embodiment, the image of the operator's hand at the current moment is acquired in real time with a depth camera, which may include, but is not limited to, a RealSense D435i depth camera: the color camera module on the depth camera acquires a two-dimensional color image of the operator's hand at the current moment, and the infrared camera module acquires the depth information of the hand. It is understood that the hand image information may include palm position information and finger pose information.
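For reference, aligned color and depth frames can be grabbed from a RealSense D435i with Intel's pyrealsense2 library roughly as follows; the stream resolutions and frame rate are illustrative choices, not values specified by the application:

```python
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)
align = rs.align(rs.stream.color)  # register depth pixels to the color image

frames = align.process(pipeline.wait_for_frames())
color_image = np.asanyarray(frames.get_color_frame().get_data())  # hand color image
depth_frame = frames.get_depth_frame()                            # hand depth image
```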
Step S22: and detecting the two-dimensional hand position of the operator in the hand color image, and determining the three-dimensional hand position of the operator at the current time based on the two-dimensional hand position and the hand depth image.
In this embodiment, detecting the two-dimensional hand position of the operator in the hand color image and determining the three-dimensional hand position of the operator at the current moment based on the two-dimensional hand position and the hand depth image includes performing target detection on the hand region in the hand color image using a preset target detection network to obtain the two-dimensional hand position of the operator, and mapping the depth information of the image region corresponding to the two-dimensional hand position in the hand depth image onto the two-dimensional hand position based on the pixel correspondence between the hand color image and the hand depth image, so as to obtain the three-dimensional hand position of the operator at the current moment. The preset target detection network may be any one of YOLO (You Only Look Once, a unified real-time target detector), SSD (Single Shot MultiBox Detector), Faster R-CNN and CenterNet. As shown in fig. 5, taking the CenterNet network as an example, the acquisition of the two-dimensional and three-dimensional hand positions in this embodiment is described in detail. First, the center point of the human hand in the hand color image is detected; the width and height of the hand are then predicted through a bounding-box regression algorithm together with the center point offset, giving the actual extent of the hand; finally, the two-dimensional hand position of the operator is obtained from these steps. To obtain the three-dimensional hand position of the operator at the current moment, the depth information of the image region corresponding to the two-dimensional hand position in the hand depth image is mapped onto the two-dimensional hand position based on the pixel correspondence between the hand depth image and the hand color image.
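Once the target detection network has produced the 2D hand center, the lift to a 3D position in the camera coordinate system can be done with the camera intrinsics and the aligned depth frame; with pyrealsense2 this might look like the following sketch (the detector supplying center_u, center_v is assumed to exist):

```python
import pyrealsense2 as rs

def hand_3d_position(depth_frame, center_u, center_v):
    """Lift the detected 2D hand center (pixel coordinates) to a 3D position
    in the camera coordinate system, using a depth frame already aligned to
    the color image."""
    intr = depth_frame.profile.as_video_stream_profile().get_intrinsics()
    depth_m = depth_frame.get_distance(center_u, center_v)  # metres
    # Returns [X, Y, Z] in the camera coordinate system
    return rs.rs2_deproject_pixel_to_point(intr, [center_u, center_v], depth_m)
```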
Step S23: cutting an image area corresponding to the two-dimensional hand position in the hand color image to obtain a corresponding cut hand image, and identifying gesture information in the cut hand image to obtain operator gesture information at the current moment.
In this embodiment, identifying the gesture information in the cut hand image to obtain the operator's gesture information at the current moment includes inputting the cut hand image into a trained gesture recognition model to obtain the operator's gesture information at the current moment output by the model; the trained gesture recognition model is obtained by training, with a preset training set, a model constructed based on a convolutional neural network algorithm, where the preset training set comprises historical hand images and corresponding gesture labels. For example, the cut hand image is input into an Inception-ResNet-V2 network model trained on the preset training set to obtain the corresponding operator gesture information at the current moment.
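As a hedged illustration of this inference step, the sketch below builds an Inception-ResNet-V2 classifier with the timm library and classifies a cut hand image; the label set, the preprocessing and the absence of trained weights are assumptions for illustration, not details disclosed by the application:

```python
import timm
import torch
from torchvision import transforms

GESTURES = ["pinch", "relax"]  # labels from this embodiment; real sets may differ

# Inception-ResNet-V2 backbone as named in the embodiment; in practice the
# weights would come from training on the preset set of historical hand images.
model = timm.create_model("inception_resnet_v2", pretrained=False,
                          num_classes=len(GESTURES))
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((299, 299)),  # Inception-ResNet-V2 input size
    transforms.ToTensor(),
])

def recognize_gesture(cropped_hand_pil):
    """Classify a cut hand image (PIL image) into an operator gesture label."""
    with torch.no_grad():
        logits = model(preprocess(cropped_hand_pil).unsqueeze(0))
    return GESTURES[int(logits.argmax(dim=1))]
```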
Step S24: and determining expected position parameters of the robot hand at the next moment based on the three-dimensional position of the operator hand at the current moment and a preset position mapping relation, and determining expected gestures of the robot hand at the next moment based on the gesture information of the operator at the current moment and the preset gesture mapping relation.
Step S25: and performing position control on the robot hand by using the expected position parameters and performing gesture control on the robot finger by using the expected gestures.
For more specific working procedures in the steps S24 and S25, reference may be made to the corresponding contents disclosed in the foregoing embodiments, and no further description is given here.
In this way, the two-dimensional hand position of the operator is obtained through the target detection network, and the three-dimensional hand position is obtained by mapping it against the hand depth image, so that the desired position parameter of the robot hand at the next moment is determined from the three-dimensional hand position at the current moment. The position determination is therefore simpler, and because the gesture information is identified with a trained gesture recognition model, the gesture recognition step is both faster and more accurate.
Referring to fig. 6, the embodiment of the application discloses a specific teleoperation method for a robot arm, which comprises the following steps:
step S31: and acquiring a hand image of an operator at the current moment, and carrying out image recognition on the hand image to obtain the hand position of the operator and the gesture information of the operator at the current moment.
Step S32: and determining expected position parameters of the robot hand at the next moment according to the position of the operator hand at the current moment and a position distance mapping relation constructed based on a preset distance scaling, and determining expected gestures of the robot hand at the next moment based on the gesture information of the operator at the current moment and the preset gesture mapping relation.
In this embodiment, determining the desired position parameter of the robot hand at the next moment from the operator's hand position at the current moment and a position distance mapping relation constructed based on a preset distance scaling includes determining a first position distance between the operator's hand position at the current moment and the operator's hand position at the previous moment; and determining, using the first position distance and the position distance mapping relation, a second position distance that the robot hand is to move at the next moment. For example, when the robot hand is expected to move a relatively large second position distance of 1 meter, the position distance mapping relation can be preset so that the second position distance is 10 times the first position distance; the operator then only needs to move 0.1 meter for the robot hand to be controlled to move 1 meter according to the position distance mapping relation.
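A minimal sketch of this position distance mapping, assuming the fixed scaling factor of 10 from the example above, might be:

```python
import numpy as np

SCALE = 10.0  # preset distance scaling: robot moves 10x the operator's motion

def second_position_distance(hand_pos_now, hand_pos_prev):
    """Map the operator's hand displacement between two moments (the first
    position distance) to the displacement the robot hand should travel at
    the next moment (the second position distance)."""
    first_distance = np.asarray(hand_pos_now) - np.asarray(hand_pos_prev)
    return SCALE * first_distance
```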
Step S33: and performing position control on the robot hand by using the expected position parameters and performing gesture control on the robot finger by using the expected gestures.
The more specific working procedures of the steps S31 and S33 may refer to the corresponding contents disclosed in the foregoing embodiments, and will not be described herein.
In this way, the position distance mapping relation constructed based on the preset distance scaling scales the position parameter of the robot hand according to the preset position mapping relation, and the operator does not need to move over the full distance the robot hand is required to move; operation is therefore convenient and the speed of human-robot cooperation is improved.
Referring to fig. 7, the embodiment of the application discloses a specific teleoperation method for a robot arm, and compared with the previous embodiment, the embodiment further describes and optimizes a technical scheme, and includes:
step S41: and acquiring a hand image of an operator at the current moment, and carrying out image recognition on the hand image to obtain the hand position of the operator and the gesture information of the operator at the current moment.
Step S42: and determining expected position parameters of the robot hand at the next moment according to the position of the operator hand at the current moment and a position coordinate mapping relation constructed based on a scaling proportion of preset coordinate values, and determining expected gestures of the robot hand at the next moment based on the gesture information of the operator at the current moment and the preset gesture mapping relation.
In this embodiment, determining the desired position parameter of the robot hand at the next moment from the operator's hand position at the current moment and a position coordinate mapping relation constructed based on a preset coordinate value scaling includes mapping the position coordinate of the operator's hand in the camera coordinate system at the current moment into the robot coordinate system according to the position coordinate mapping relation, so as to obtain the desired position coordinate of the robot hand in the robot coordinate system at the next moment; the camera coordinate system is a three-dimensional spatial coordinate system constructed based on the camera used to acquire the hand image. It can be understood that the position coordinate mapping relation constructed from the operator's hand position at the current moment and the preset coordinate value scaling further includes the coordinate conversion relation between the camera coordinate system, in which the operator's hand is located at the current moment, and the robot coordinate system. The preset coordinate value scaling relation is as follows:
P_R = α · P_H
where P_H is the position coordinate of the operator's hand expressed in the robot coordinate system, P_R is the desired position coordinate of the robot hand in the robot coordinate system, and α is a scaling factor. For example, when the position coordinate of the operator's hand in the robot coordinate system at the current moment moves to (5, 5) and the scaling factor α is set to 3, the desired position coordinate of the robot hand in the robot coordinate system at the next moment is (15, 15).
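Putting the coordinate conversion and the scaling together, a sketch of the position coordinate mapping might look as follows; the camera-to-robot transform is a placeholder that would come from hand-eye calibration, which the application does not detail:

```python
import numpy as np

ALPHA = 3.0                 # preset coordinate value scaling factor
T_ROBOT_CAMERA = np.eye(4)  # camera-to-robot homogeneous transform (placeholder;
                            # obtained from hand-eye calibration in practice)

def desired_robot_position(p_cam):
    """Map the operator's hand position from the camera coordinate system to
    the desired robot hand position: first transform into the robot frame,
    then apply P_R = alpha * P_H."""
    p_h = (T_ROBOT_CAMERA @ np.append(p_cam, 1.0))[:3]  # hand in robot frame
    return ALPHA * p_h
```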
Step S43: and performing position control on the robot hand by using the expected position parameters and performing gesture control on the robot finger by using the expected gestures.
The more specific working procedures of the steps S41 and S43 may refer to the corresponding contents disclosed in the foregoing embodiments, and will not be described herein.
In this way, the position coordinate of the operator's hand in the camera coordinate system at the current moment is acquired and, based on the position coordinate mapping relation constructed from the preset coordinate value scaling, mapped into the robot coordinate system to obtain the desired position coordinate of the robot hand in the robot coordinate system at the next moment.
Referring to fig. 8, an embodiment of the application discloses a teleoperation device for a robot arm, which comprises:
an image acquisition module 21, configured to acquire a hand image of an operator at a current moment;
The image recognition module 22 is configured to perform image recognition on the hand image to obtain the hand position of the operator and the gesture information of the operator at the current moment;
a position determining module 23, configured to determine a desired position parameter of the robot hand at a next moment based on the operator hand position at the current moment and a preset position mapping relationship;
a gesture determining module 24, configured to determine, based on the operator gesture information at the current time and a preset gesture mapping relationship, a desired gesture of the robot hand at a next time;
and the control module 25 is used for performing position control on the robot hand by using the expected position parameters and performing gesture control on the robot finger by using the expected gestures.
In this way, the image acquisition module and the image recognition module obtain the operator's hand position and gesture information at the current moment visually, which reduces the complexity of the teleoperation process, does not constrain the operator's natural movement, and improves the flexibility of teleoperation. The position determination module can scale the distance the operator's hand moves based on the preset position mapping relation, so the operator's hand need not move the same distance as the robot, making operation more convenient. The gesture determination module determines the desired gesture of the robot hand at the next moment based on the preset gesture mapping relation, and the control module uses the information from the position and gesture determination modules to control the robot to complete the teleoperation, improving the speed at which teleoperation is completed.
In some embodiments, the image acquisition module 21 includes:
the color image acquisition unit is used for acquiring images of hands of an operator at the current moment so as to obtain color images of the hands of the operator at the current moment;
the depth image acquisition unit is used for acquiring images of hands of the operator at the current moment so as to obtain the depth images of the hands of the operator at the current moment.
In some embodiments, the image recognition module 22 includes:
the hand position acquisition module is used for detecting the two-dimensional hand position of the operator in the hand color image and determining the three-dimensional hand position of the operator at the current moment based on the two-dimensional hand position and the hand depth image.
In some embodiments, the hand position acquisition module includes:
and the two-dimensional position acquisition unit is used for carrying out target detection on the hand area in the hand color image so as to obtain the two-dimensional hand position of the operator.
In some embodiments, the hand position acquisition module includes:
the three-dimensional position obtaining unit is used for mapping depth information on an image area corresponding to the hand two-dimensional position in the hand depth image to the hand two-dimensional position based on the pixel point corresponding relation between the hand color image and the hand depth image so as to obtain the hand three-dimensional position of the operator at the current moment.
In some embodiments, the image recognition module 22 includes:
the gesture information acquisition module is used for cutting an image area corresponding to the two-dimensional hand position in the hand color image to obtain a corresponding cut hand image, and identifying gesture information in the cut hand image to obtain the gesture information of an operator at the current moment.
In some specific embodiments, the gesture information acquisition module includes:
the gesture information output unit is used for inputting the cut hand image into the trained gesture recognition model so as to obtain the gesture information of the operator at the current moment output by the gesture recognition model; the trained gesture recognition model is obtained by training a model to be trained constructed based on a convolutional neural network algorithm by using a preset training set, and the preset training set comprises historical hand images and corresponding gesture labels.
In some embodiments, the location determination module 23 includes:
the position distance mapping unit is used for determining a first position distance between the hand position of the operator at the current moment and the hand position of the operator at the previous moment, and determining a second position distance to be moved by the robot hand at the next moment by utilizing the first position distance and a position distance mapping relation constructed based on a preset distance scaling scale.
In some embodiments, the location determination module 23 includes:
the position coordinate mapping unit is used for mapping the position coordinate of the hand position of the operator in the camera coordinate system at the current moment to the robot coordinate system according to the position coordinate mapping relation constructed based on the preset coordinate value scaling ratio, so as to obtain the expected position coordinate of the hand of the robot in the robot coordinate system at the next moment; the camera coordinate system is a three-dimensional spatial coordinate system constructed based on a camera used to acquire the image of the human hand.
In some embodiments, the control module 25 includes:
the visual sensing information acquisition module is used for acquiring information of surrounding environment by utilizing a visual sensor preset on the robot so as to obtain visual sensing information;
the force sensing information acquisition module is used for acquiring information by utilizing a force sensor preset on the robot so as to obtain force sensing information;
and the fine adjustment module is used for carrying out fine adjustment on the pose of the robot in the process of the position control and the gesture control based on the visual sensing information and the force sensing information.
Further, the embodiment of the application also provides a robot system which comprises a teleoperation device and a robot, wherein the teleoperation device comprises a depth camera, a memory and a processor.
The teleoperated device may be an electronic device as shown in fig. 9. Fig. 9 is a schematic diagram of a teleoperational device 30 according to an example of the present application, which is not to be considered as limiting the scope of use of the present application.
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 30 may specifically include: at least one processor 31, at least one memory 32, a power supply 33, a communication interface 34, an input output interface 35, a communication bus 36, and a depth camera 37. Wherein the memory 32 is used for storing a computer program, which is loaded and executed by the processor 31 for realizing the following steps: performing image recognition on the hand image to obtain the hand position of the operator and the gesture information of the operator at the current moment; determining expected position parameters of the robot hand at the next moment based on the position of the operator hand at the current moment and a preset position mapping relation, and determining expected gestures of the robot hand at the next moment based on the gesture information of the operator at the current moment and a preset gesture mapping relation; and performing position control on the robot hand by using the expected position parameters and performing gesture control on the robot finger by using the expected gestures.
In this embodiment, the power supply 33 is used to provide an operating voltage for each hardware device on the computer device 30; the communication interface 34 can create a data transmission channel between the computer device 30 and an external device, and the communication protocol to be followed is any communication protocol applicable to the technical solution of the present application, which is not specifically limited herein; the input/output interface 35 is used for acquiring external input data or outputting external output data, and the specific interface type thereof may be selected according to the specific application requirement, which is not limited herein.
Processor 31 may include one or more processing cores, such as a 4-core or 8-core processor. The processor 31 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array) or a PLA (Programmable Logic Array). The processor 31 may also comprise a main processor and a coprocessor: the main processor, also called CPU (Central Processing Unit), processes data in the awake state, while the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 31 may integrate a GPU (Graphics Processing Unit) responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 31 may also include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
The memory 32 may be a carrier for storing resources, such as a read-only memory, a random access memory, a magnetic disk, or an optical disk, and the resources stored thereon include an operating system 321, a computer program 322, and data 323, and the storage may be temporary storage or permanent storage.
The operating system 321 is used for managing and controlling the hardware devices of the computer device 30 and the computer program 322, so as to realize the processor 31's operation on and processing of the mass data 323 in the memory 32; it may be Windows, Unix, Linux or the like. In addition to the computer program capable of performing the robot arm teleoperation method executed by the computer device 30 as disclosed in any of the foregoing embodiments, the computer program 322 may further comprise computer programs capable of performing other specific tasks. The data 323 may include data received by the computer device and transmitted from external devices, data collected through its own input/output interface 35, and the like.
Wherein, the depth camera 37 can be used for collecting the hand image of the operator at the current moment; it should be noted that the depth camera 37 may be a RealSense D435i depth camera, and uses the RealSense D435i depth camera to perform image acquisition on the hand of the operator at the current time, so as to obtain a color image of the hand and a depth image of the hand of the operator at the current time.
Those skilled in the art will appreciate that the structure shown in fig. 9 does not limit the electronic device 30, which may include more or fewer components than shown.
Fig. 10 provides a schematic diagram of the robot. In the robot system, the robot includes a vision sensor 41, a manipulator 42, a UR5 mechanical arm 43, and force sensors mounted at different positions of the robot according to different requirements, and it performs the corresponding teleoperation according to the information transferred by the teleoperation device. The vision sensor 41 is a RealSense camera, in this embodiment specifically a RealSense D435i depth camera, mounted at the end of the mechanical arm; it collects information about the surrounding environment to obtain visual sensing information and is mainly used to detect the pose of the object to be grasped. The manipulator 42 of the robot is a five-finger dexterous hand with 6 degrees of freedom, comprising 5 bending degrees of freedom for the 5 fingers and 1 rotational degree of freedom for the thumb; the arm of the robot is a mechanical arm with 6 degrees of freedom, comprising translational degrees of freedom along the three rectangular coordinate axes x, y and z and rotational degrees of freedom about those same axes, so that basic path planning and motion control functions can be realized. The robot further comprises a vision sensor for collecting information about the surrounding environment to obtain visual sensing information, a force sensor for acquiring force sensing information, and a processor which, when executing the computer program, further fine-tunes the pose of the robot during the position control and gesture control based on the visual sensing information and the force sensing information.
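For concreteness only: a desired pose produced by the mapping relations above could be sent to a UR5 mechanical arm with, for instance, the open-source ur_rtde library, as sketched below. The robot IP address and the pose values are placeholders, and the application does not prescribe any particular control interface:

```python
from rtde_control import RTDEControlInterface  # ur_rtde library

rtde_c = RTDEControlInterface("192.168.1.10")  # placeholder robot IP

# Desired TCP pose in the robot base frame: x, y, z in metres plus an
# axis-angle rotation rx, ry, rz in radians (illustrative values only).
desired_pose = [0.3, -0.2, 0.4, 0.0, 3.14, 0.0]
rtde_c.moveL(desired_pose, 0.25, 0.5)  # linear move: 0.25 m/s, 0.5 m/s^2
```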
Further, the embodiment of the application also discloses a storage medium, wherein the storage medium stores a computer program, and when the computer program is loaded and executed by a processor, the steps of the robot arm teleoperation method disclosed in any embodiment are realized.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software modules may be disposed in Random Access Memory (RAM), memory, read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
Finally, it is further noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The teleoperation method, device, system and medium for a robot arm provided by the invention have been described in detail above. Specific examples have been used to explain the principle and implementation of the invention, and the description of these embodiments is only intended to help understand the method and its core idea. Meanwhile, since those skilled in the art may vary the specific embodiments and the application scope in accordance with the ideas of the invention, this description should not be construed as limiting the invention.

Claims (11)

1. A method of teleoperation of a robotic arm, comprising:
acquiring a hand image of an operator at the current moment, and carrying out image recognition on the hand image to obtain the hand position of the operator and the gesture information of the operator at the current moment;
determining expected position parameters of the robot hand at the next moment based on the position of the operator hand at the current moment and a preset position mapping relation, and determining expected gestures of the robot hand at the next moment based on the gesture information of the operator at the current moment and a preset gesture mapping relation;
performing position control on the robot hand by using the expected position parameters and performing gesture control on the robot finger by using the expected gesture;
The determining the expected position parameter of the robot hand at the next moment based on the mapping relation between the hand position of the operator at the current moment and the preset position comprises the following steps:
determining a first position distance between the hand position of the operator at the current time and the hand position of the operator at the previous time; and determining a second position distance to be moved by the robot hand at the next moment by using the first position distance and a position distance mapping relation constructed based on a preset distance scaling.
2. The method for teleoperation of a robotic arm according to claim 1, wherein the steps of acquiring a hand image of an operator at a current time and performing image recognition on the hand image to obtain the hand position and the gesture information of the operator at the current time include:
image acquisition is carried out on the hands of the operator at the current moment to obtain a hand color image and a hand depth image of the operator at the current moment;
detecting the two-dimensional hand position of the operator in the hand color image, and determining the three-dimensional hand position of the operator at the current time based on the two-dimensional hand position and the hand depth image;
Cutting an image area corresponding to the two-dimensional hand position in the hand color image to obtain a corresponding cut hand image, and identifying gesture information in the cut hand image to obtain operator gesture information at the current moment.
3. The method of claim 2, wherein detecting the operator's two-dimensional hand position in the hand color image and determining the operator's three-dimensional hand position at the current moment based on the two-dimensional hand position and the hand depth image comprises:
performing target detection on the hand region in the hand color image using a preset target detection network to obtain the operator's two-dimensional hand position; and
mapping the depth information of the image area in the hand depth image that corresponds to the two-dimensional hand position onto the two-dimensional hand position, based on the pixel correspondence between the hand color image and the hand depth image, to obtain the operator's three-dimensional hand position at the current moment.
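
The claim only requires attaching aligned depth values to the detected two-dimensional position. One common concrete realization, assumed here rather than specified by the patent, is to back-project the detected pixel through a pinhole camera model with known intrinsics; the FX, FY, CX, CY values below are illustrative.

```python
import numpy as np

FX, FY = 615.0, 615.0   # focal lengths in pixels (assumed)
CX, CY = 320.0, 240.0   # principal point (assumed)

def pixel_to_camera_point(u, v, depth_m):
    """Back-project pixel (u, v) with its aligned depth (metres) into a
    3-D point in the camera coordinate system."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return np.array([x, y, depth_m])

print(pixel_to_camera_point(350, 260, 0.8))
```
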
4. The method of teleoperation of a robotic arm according to claim 2, wherein recognizing the gesture information in the cropped hand image to obtain the operator's gesture information at the current moment comprises:
inputting the cropped hand image into a trained gesture recognition model to obtain the operator's gesture information at the current moment output by the gesture recognition model; wherein the trained gesture recognition model is obtained by training, with a preset training set, a model to be trained that is constructed based on a convolutional neural network algorithm, the preset training set comprising historical hand images and corresponding gesture labels.
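
The patent does not disclose the network architecture, input resolution, or number of gesture classes, so the small PyTorch classifier below is only an assumed example of the kind of convolutional model claim 4 describes.

```python
import torch
import torch.nn as nn

class GestureNet(nn.Module):
    """Minimal CNN gesture classifier; 3x64x64 input and 6 classes are assumptions."""
    def __init__(self, num_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)               # (N, 64, 8, 8) for a 64x64 input
        return self.classifier(x.flatten(1))

model = GestureNet()
crop = torch.rand(1, 3, 64, 64)            # a normalised cropped hand image
gesture_logits = model(crop)
print(gesture_logits.argmax(dim=1))        # predicted gesture label index
```
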
5. The method according to claim 1, wherein determining the expected position parameter of the robot hand at the next moment based on the operator's hand position at the current moment and the preset position mapping relation comprises:
mapping the position coordinates of the operator's hand in a camera coordinate system at the current moment into a robot coordinate system, according to a position-coordinate mapping relation constructed based on a preset coordinate-value scaling ratio, to obtain the expected position coordinates of the robot hand in the robot coordinate system at the next moment; wherein the camera coordinate system is a three-dimensional spatial coordinate system constructed based on the camera used to capture the hand image.
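
One plausible reading of this scaled coordinate mapping, assuming a rigid camera-to-robot transform (R, t) obtained from calibration together with the preset scale S (all values below are illustrative, not from the patent), is:

```python
import numpy as np

S = 0.5                          # assumed preset coordinate-value scaling ratio
R = np.eye(3)                    # camera-to-robot rotation (assumed calibration)
t = np.array([0.2, 0.0, 0.1])    # camera-to-robot translation in metres (assumed)

def camera_to_robot(p_cam):
    """Map a hand position in the camera frame to the expected robot-hand
    position in the robot frame, applying the preset scaling."""
    return S * (R @ np.asarray(p_cam)) + t

print(camera_to_robot([0.10, -0.05, 0.80]))
```
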
6. The method according to any one of claims 1 to 5, wherein performing position control on the robot hand using the expected position parameter and performing gesture control on the robot fingers using the expected gesture further comprises:
acquiring information about the surrounding environment using a vision sensor pre-mounted on the robot to obtain visual sensing information, and acquiring force sensing information using a force sensor pre-mounted on the robot; and
fine-tuning the pose of the robot during the position control and the gesture control based on the visual sensing information and the force sensing information.
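
As an assumed illustration of the force-based part of this fine-tuning (the patent names the behavior but not the control law), a simple compliance correction might shift the commanded position away from large contact forces; the gain and limit below are invented for the sketch.

```python
import numpy as np

FORCE_GAIN = 0.001      # metres of correction per newton (assumed)
CONTACT_LIMIT = 10.0    # clip corrections when the contact force is large (assumed)

def fine_tune_position(desired_pos, measured_force):
    """Nudge the commanded hand position away from excessive contact forces."""
    f = np.clip(np.asarray(measured_force), -CONTACT_LIMIT, CONTACT_LIMIT)
    return np.asarray(desired_pos) - FORCE_GAIN * f

# A 5 N contact force along z pulls the commanded position back by 5 mm.
print(fine_tune_position([0.40, 0.00, 0.30], [0.0, 0.0, 5.0]))
```
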
7. A robotic arm teleoperation device, comprising:
an image acquisition module, configured to acquire a hand image of an operator at the current moment;
an image recognition module, configured to perform image recognition on the hand image to obtain the operator's hand position and the operator's gesture information at the current moment;
a position determining module, configured to determine an expected position parameter of a robot hand at the next moment based on the operator's hand position at the current moment and a preset position mapping relation;
a gesture determining module, configured to determine an expected gesture of the robot hand at the next moment based on the operator's gesture information at the current moment and a preset gesture mapping relation; and
a control module, configured to perform position control on the robot hand using the expected position parameter and to perform gesture control on the robot fingers using the expected gesture;
wherein the position determining module comprises:
a position-distance mapping unit, configured to determine a first position distance between the operator's hand position at the current moment and the operator's hand position at the previous moment, and to determine, from the first position distance and a position-distance mapping relation constructed based on a preset distance scaling ratio, a second position distance that the robot hand is to move at the next moment.
8. A robotic system, comprising a teleoperation device and a robot; the teleoperation device comprises a depth camera, a memory, and a processor; wherein:
the depth camera is configured to acquire a hand image of an operator at the current moment;
the memory is configured to store a computer program; and
the processor is configured to execute the computer program to implement the following steps:
performing image recognition on the hand image to obtain the operator's hand position and the operator's gesture information at the current moment; determining an expected position parameter of a robot hand at the next moment based on the operator's hand position at the current moment and a preset position mapping relation, and determining an expected gesture of the robot hand at the next moment based on the operator's gesture information at the current moment and a preset gesture mapping relation; and performing position control on the robot hand using the expected position parameter and gesture control on the robot fingers using the expected gesture;
wherein, when executing the computer program, the processor specifically implements the following steps: determining a first position distance between the operator's hand position at the current moment and the operator's hand position at the previous moment, and determining, from the first position distance and a position-distance mapping relation constructed based on a preset distance scaling ratio, a second position distance that the robot hand is to move at the next moment.
9. The robotic system of claim 8, wherein the robot hand of the robot is a five-fingered dexterous hand having 6 degrees of freedom, and the robot arm of the robot is a robotic arm having 6 degrees of freedom.
10. The robotic system of claim 8, wherein the robot further comprises:
a vision sensor, configured to collect information about the surrounding environment to obtain visual sensing information; and
a force sensor, configured to acquire force sensing information;
and wherein, when executing the computer program, the processor further implements the following step:
fine-tuning the pose of the robot during the position control and the gesture control based on the visual sensing information and the force sensing information.
11. A computer-readable storage medium storing a computer program; wherein the computer program, when executed by a processor, implements the steps of the robot arm teleoperation method according to any one of claims 1 to 6.
CN202111240203.4A 2021-10-25 2021-10-25 Remote operation method, device, system and medium for robot arm Active CN113829357B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111240203.4A CN113829357B (en) 2021-10-25 2021-10-25 Remote operation method, device, system and medium for robot arm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111240203.4A CN113829357B (en) 2021-10-25 2021-10-25 Remote operation method, device, system and medium for robot arm

Publications (2)

Publication Number Publication Date
CN113829357A CN113829357A (en) 2021-12-24
CN113829357B (en) 2023-10-03

Family

ID=78965881

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111240203.4A Active CN113829357B (en) 2021-10-25 2021-10-25 Remote operation method, device, system and medium for robot arm

Country Status (1)

Country Link
CN (1) CN113829357B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114851200A (en) * 2022-05-18 2022-08-05 Shanghai Maritime University Method and system for controlling a mechanical arm to assemble blades based on gesture recognition
CN114924573A (en) * 2022-06-27 2022-08-19 Electric Power Research Institute of State Grid Shaanxi Electric Power Co., Ltd. Mobile robot
CN118305818B (en) * 2024-06-07 2024-08-13 Yantai University Bionic manipulator control method and system based on two-hand interaction pose estimation

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102350700A (en) * 2011-09-19 2012-02-15 South China University of Technology Method for controlling a robot based on visual sense
CN103112007A (en) * 2013-02-06 2013-05-22 South China University of Technology Human-machine interaction method based on mixed sensors
CN104589356A (en) * 2014-11-27 2015-05-06 Beijing University of Technology Dexterous hand teleoperation control method based on Kinect human hand motion capture
CN108828996A (en) * 2018-05-31 2018-11-16 Sichuan University of Arts and Science Mechanical arm remote control system and method based on visual information
CN111694428A (en) * 2020-05-25 2020-09-22 University of Electronic Science and Technology of China Gesture and track remote control robot system based on Kinect

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210086364A1 (en) * 2019-09-20 2021-03-25 Nvidia Corporation Vision-based teleoperation of dexterous robotic system

Also Published As

Publication number Publication date
CN113829357A (en) 2021-12-24

Similar Documents

Publication Publication Date Title
CN113829357B (en) Remote operation method, device, system and medium for robot arm
US20210205986A1 (en) Teleoperating Of Robots With Tasks By Mapping To Human Operator Pose
CN114080583B (en) Visual teaching and repetitive movement manipulation system
Kang et al. Toward automatic robot instruction from perception-temporal segmentation of tasks from human hand motion
CN111694428B (en) Gesture and track remote control robot system based on Kinect
CN107030692B (en) Manipulator teleoperation method and system based on perception enhancement
CN108958471B (en) Simulation method and system for virtual hand-operated object in virtual space
CN108509026A (en) Tele-Maintenance Support System and method based on enhancing interactive mode
CN102830798A (en) Mark-free hand tracking method of single-arm robot based on Kinect
CN115070781B (en) Object grabbing method and two-mechanical-arm cooperation system
CN105319991A (en) Kinect visual information-based robot environment identification and operation control method
Chen et al. A human–robot interface for mobile manipulator
Kofman et al. Robot-manipulator teleoperation by markerless vision-based hand-arm tracking
Kang et al. A robot system that observes and replicates grasping tasks
Xue et al. Gesture- and vision-based automatic grasping and flexible placement in teleoperation
Palm et al. Recognition of human grasps by time-clustering and fuzzy modeling
Kang et al. Grasp recognition and manipulative motion characterization from human hand motion sequences
Barber et al. Sketch-based robot programming
Du et al. A novel natural mobile human-machine interaction method with augmented reality
CN115958595A (en) Mechanical arm guiding method and device, computer equipment and storage medium
CN113561172B (en) Dexterous hand control method and device based on binocular vision acquisition
Abiko et al. Linkage of Virtual Object and Physical Object for Teaching to Caregiver-Robot.
Ehrenmann et al. A multisensor system for observation of user actions in programming by demonstration
Li et al. Gesture-Based Human-Robot Interaction Framework for Teleoperation Control of Agricultural Robot
Sahiwala et al. Virtual Mouse using Coordinate Mapping

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right
Effective date of registration: 20240619
Address after: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)
Patentee after: Hong Kong Zhongda (Shenzhen) Asset Management Co.,Ltd.
Country or region after: China
Address before: No. 2001, Longxiang Avenue, Longgang District, Shenzhen, Guangdong 518000
Patentee before: THE CHINESE University OF HONGKONG SHENZHEN
Country or region before: China
TR01 Transfer of patent right
Effective date of registration: 20240708
Address after: Building B805, Yunzhong City, Phase 6, Vanke Yuncheng, Dashi 2nd Road, Xili Community, Xili Street, Nanshan District, Shenzhen City, Guangdong Province 518000
Patentee after: Shenzhen Guochuang Guishen Intelligent Robot Co.,Ltd.
Country or region after: China
Address before: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)
Patentee before: Hong Kong Zhongda (Shenzhen) Asset Management Co.,Ltd.
Country or region before: China