CN111612889B - Robot action recognition method and action recognition system
Robot action recognition method and action recognition system
- Publication number: CN111612889B (application CN202010414463.8A)
- Authority: CN (China)
- Prior art keywords: teaching, robot, action, image, angle
- Prior art date: 2020-05-15
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06V40/20—Movements or behaviour, e.g. gesture recognition (recognition of human-related patterns in image or video data)
- H04N23/80—Camera processing pipelines; Components thereof
- Y02P90/02—Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]
Abstract
The invention relates to the technical field of robots, and in particular to an action recognition method and an action recognition system for a robot. The action recognition method comprises the following steps: Step S1, shooting teaching actions at multiple angles in a preset teaching field to generate a multi-angle action image; Step S2, performing 3D modeling processing on the preset teaching field to form a 3D model; Step S3, setting the initial state and the final state of each teaching action to assist the robot in dividing each teaching action; Step S4, mapping, by the robot, the two-dimensional picture coordinates of each teaching action into three-dimensional picture coordinates to form a three-dimensional picture coordinate set; and Step S5, fitting the teaching actions with the mechanical arm of the robot, so that the mechanical arm can simulate the teaching actions. The technical scheme of the invention has the beneficial effect that, by fitting the teaching actions with the mechanical arm of the robot, the mechanical arm can imitate each teaching action of an operator.
Description
Technical Field
The present invention relates to the field of robots, and in particular to an action recognition method and an action recognition system for a robot.
Background
At present, as various institutions study humanoid robot technology, a multi-axis mechanical-arm robot must be preset with a number of basic actions to form a basic action library. In the prior art, these basic actions are usually taught by coding each one individually; the process is cumbersome and does not reflect a robot's ability to learn taught actions autonomously.
Solving this problem is therefore a significant challenge for those skilled in the art.
Disclosure of Invention
In order to solve the above problems in the prior art, an action recognition method and an action recognition system for a robot are provided.
The specific technical scheme is as follows:
The invention provides an action recognition method for a robot, in which a preset teaching field is provided; the method comprises the following steps:
Step S1, shooting each teaching action of an operator at multiple angles in the preset teaching field to generate a multi-angle action image, and transmitting the multi-angle action image into the robot;
Step S2, controlling the robot through a terminal device to perform 3D modeling processing on the preset teaching field so as to form a 3D model;
Step S3, setting the initial state and the final state of each teaching action in the multi-angle action image through an image setting method so as to assist the robot in dividing and recording each teaching action;
Step S4, mapping, by the robot according to the 3D model, the two-dimensional picture coordinates of each teaching action in the multi-angle action image into three-dimensional picture coordinates so as to form a three-dimensional picture coordinate set;
and Step S5, fitting the teaching actions by the mechanical arm of the robot according to the three-dimensional picture coordinate set, so that the mechanical arm of the robot can simulate each teaching action of the operator.
Preferably, in the step S1, the operator pauses for a preset time after finishing each teaching action, so that the individual teaching actions can be distinguished.
Preferably, in the step S2, the robot includes a radar device and an image capturing device, and the 3D modeling processing of the preset teaching field is performed through the radar device and the image capturing device.
Preferably, the step S3 includes:
step S30, setting the initial state and the final state of each teaching action in the multi-angle action image through the image setting method;
step S31, the robot divides each teaching action according to the set initial state and the set final state of each teaching action.
Preferably, the image setting method performs the setting according to key frames of the image.
Preferably, the mechanical arm is a multi-axis transmission structure.
The invention also provides an action recognition system for a robot, the action recognition system comprising:
the shooting module is used for shooting each teaching action of an operator at multiple angles in a preset teaching field so as to generate a multi-angle action image and transmitting the multi-angle action image into the robot;
the modeling processing module is used for controlling the robot to perform 3D modeling processing on the preset teaching field through terminal equipment so as to form a 3D model;
the setting module is connected with the shooting module and is used for setting the initial state and the final state of each teaching action in the multi-angle action image through an image setting method so as to assist the robot in dividing and recording each teaching action;
the mapping module is respectively connected with the modeling processing module and the setting module and is used for mapping the two-dimensional picture coordinates of each teaching action in the multi-angle action image into three-dimensional picture coordinates according to the 3D model by the robot so as to form a three-dimensional picture coordinate set;
and the fitting module is connected with the mapping module; through it, the mechanical arm of the robot fits the teaching actions according to the three-dimensional picture coordinate set, so that the mechanical arm simulates each teaching action of the operator.
Preferably, the setting module includes:
a setting unit configured to set the initial state and the final state of each teaching action in the multi-angle action image through the image setting method;
a dividing unit connected with the setting unit and used by the robot to divide each teaching action according to the set initial state and final state of each teaching action.
The technical scheme of the invention has the following beneficial effects: a multi-angle action image is formed by shooting a number of teaching actions of an operator at multiple angles and is transmitted to the robot; the robot maps the two-dimensional picture coordinates in the multi-angle action image into three-dimensional picture coordinates to form a three-dimensional picture coordinate set; and the mechanical arm of the robot fits the teaching actions according to the three-dimensional picture coordinate set, so that the mechanical arm can reproduce each teaching action of the operator. The process is simple, and the basic actions of the robot no longer need to be taught by coding each one individually.
Drawings
Embodiments of the present invention will now be described more fully with reference to the accompanying drawings. The drawings, however, are for illustration and description only and are not intended as a definition of the limits of the invention.
FIG. 1 is a step diagram of an action recognition method according to an embodiment of the present invention;
FIG. 2 is a diagram of the sub-steps of step S3 according to an embodiment of the present invention;
FIG. 3 is a functional block diagram of an action recognition system of an embodiment of the present invention;
fig. 4 is a block diagram of a setup module of an action recognition system according to an embodiment of the present invention.
Detailed Description
The embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. It is apparent that the embodiments described are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art on the basis of these embodiments, without inventive effort, fall within the scope of the invention.
It should be noted that, without conflict, the embodiments of the present invention and features of the embodiments may be combined with each other.
The invention is further described below with reference to the drawings and specific examples, which are not intended to be limiting.
The invention provides an action recognition method for a robot, in which a preset teaching field is provided, characterized by comprising the following steps:
Step S1, shooting each teaching action of an operator at multiple angles in a preset teaching field to generate a multi-angle action image, and transmitting the multi-angle action image into a robot;
Step S2, controlling the robot through a terminal device to perform 3D modeling processing on the preset teaching field so as to form a 3D model;
Step S3, setting the initial state and the final state of each teaching action in the multi-angle action image through an image setting method so as to assist the robot in dividing and recording each teaching action;
Step S4, mapping, by the robot according to the 3D model, the two-dimensional picture coordinates of each teaching action in the multi-angle action image into three-dimensional picture coordinates so as to form a three-dimensional picture coordinate set;
and Step S5, fitting the teaching actions by the mechanical arm of the robot according to the three-dimensional picture coordinate set, so that the mechanical arm of the robot simulates each teaching action of the operator.
With the action recognition method provided above, as shown in FIG. 1, an operator first demonstrates each teaching action in the preset teaching field, pausing for a preset time (for example, 3 s) after finishing each action so that the individual actions can be distinguished. A camera shoots each teaching action of the operator from multiple angles, forming a multi-angle action image that is transmitted into the robot, for example into a storage device inside the robot.
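For illustration only (not part of the disclosed method), the pause-based separation of teaching actions could be sketched as below. The function name, the thresholds, and the use of the mean inter-frame difference as a motion measure are assumptions of this sketch; the patent leaves the segmentation mechanism unspecified.

```python
import numpy as np

def split_actions_by_pause(frames, motion_thresh=2.0, min_pause=90):
    """Split a recorded frame sequence into per-action segments.

    A boundary is assumed wherever the mean absolute inter-frame
    difference stays below `motion_thresh` for at least `min_pause`
    consecutive transitions (90 transitions ~ 3 s at 30 fps).
    Returns (start_frame, end_frame) index pairs, one per action.
    """
    f = np.asarray(frames, dtype=np.float32)
    # motion[i] measures how much frame i+1 differs from frame i
    motion = np.abs(np.diff(f, axis=0)).reshape(len(f) - 1, -1).mean(axis=1)
    moving = motion >= motion_thresh

    segments, start, gap = [], None, 0
    for i, m in enumerate(moving):
        if m:
            if start is None:
                start = i                 # motion begins: open a segment
            gap = 0
        elif start is not None:
            gap += 1
            if gap >= min_pause:          # pause confirmed: close the segment
                segments.append((start, i - gap + 1))
                start, gap = None, 0
    if start is not None:                 # recording ended mid-action
        segments.append((start, len(frames) - 1))
    return segments
```

With a 30 fps recording and the 3 s pause mentioned above, `min_pause=90` corresponds to exactly one inter-action pause.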
In this embodiment, the robot is controlled by the terminal device: the radar device and the camera device arranged in the head of the robot perform the 3D modeling processing on the preset teaching field while no personnel are present, forming a 3D model. This makes it easier to distinguish the preset teaching field itself from each teaching action of the operator.
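The patent does not specify how the radar and camera data are fused. As a minimal sketch of one standard ingredient of such modeling, a registered depth image can be back-projected into a point cloud with pinhole-camera geometry; the function name and intrinsics below are assumptions:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a registered depth image into a 3D point cloud.

    `depth` is an HxW array of metric depths; (fx, fy, cx, cy) are
    the pinhole intrinsics of the camera the depths are aligned to.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth
    x = (u - cx) * z / fx                            # pinhole back-projection
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]                  # drop invalid zero-depth pixels
```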
Further, the initial state and the final state of each teaching action in the multi-angle action image are set through image key frames, to assist the robot in dividing and recording each teaching action. For example, an action image at any one angle is selected from the multi-angle action image, and several key frames are set manually as the start position and the end position of a single teaching action; the robot determines the initial state and the final state of the single teaching action from these key frames, which makes it easy for the robot to divide the teaching actions.
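A minimal sketch of this keyframe-driven division, assuming the manually set key frames arrive as (start, end) index pairs (the data layout is an assumption, not stated in the patent):

```python
def segment_by_keyframes(frames, keyframes):
    """Cut a frame sequence into one clip per teaching action.

    `keyframes` holds manually marked (start_idx, end_idx) pairs,
    one per action: start_idx shows the initial state, end_idx the
    final state of that action.
    """
    clips = []
    for start, end in keyframes:
        if not 0 <= start < end < len(frames):
            raise ValueError(f"bad keyframe pair ({start}, {end})")
        clips.append(frames[start:end + 1])  # inclusive of the final-state frame
    return clips

# e.g. two actions marked in a 600-frame recording:
# clips = segment_by_keyframes(frames, [(0, 240), (330, 599)])
```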
Further, the robot maps the two-dimensional picture coordinates of each teaching action in the multi-angle action image into three-dimensional picture coordinates according to the 3D model, forming a three-dimensional picture coordinate set, and outputs the corresponding parameters for the end positions of the joints of the mechanical arm.
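The 2D-to-3D mapping itself is not spelled out in the patent; one standard way to realize it, given calibrated cameras for each shooting angle, is linear (DLT) triangulation. The sketch below assumes known 3x4 projection matrices for each view:

```python
import numpy as np

def triangulate(points_2d, projections):
    """Linear (DLT) triangulation of one 3D point from N calibrated views.

    points_2d   : list of (u, v) pixel coordinates, one per camera
    projections : list of 3x4 projection matrices P = K [R | t]
    Returns the 3D point in the common (3D-model) coordinate frame.
    """
    rows = []
    for (u, v), P in zip(points_2d, projections):
        rows.append(u * P[2] - P[0])   # each view contributes two linear
        rows.append(v * P[2] - P[1])   # constraints on the homogeneous point
    _, _, vt = np.linalg.svd(np.stack(rows))
    X = vt[-1]                          # null-space vector of the system
    return X[:3] / X[3]                 # de-homogenize
```

Applying `triangulate` to each tracked joint in each frame, across the synchronized views, would yield the three-dimensional picture coordinate set described above.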
Further, the mechanical arm of the robot fits the teaching actions according to the three-dimensional picture coordinate set, the arm being controlled to imitate the teaching actions of the operator's limbs as closely as possible, so that the mechanical arm reproduces each teaching action of the operator. The relative position relationship and the torque information of the robot in the final state of each teaching action are recorded and stored in the storage device of the robot as one piece of preset action information.
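How the mechanical arm "fits" a 3D trajectory is likewise left open. For a planar two-link analogue of the arm, the joint angles that reach each point of the trajectory can be computed in closed form; the link lengths and the planar simplification are assumptions of this sketch:

```python
import math

def two_link_ik(x, y, l1=0.4, l2=0.3):
    """Closed-form inverse kinematics for a planar two-link arm.

    Returns the shoulder and elbow angles (radians) that place the
    end effector at (x, y); raises if the target is out of reach.
    """
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)                          # elbow angle ("down" solution)
    k1 = l1 + l2 * math.cos(theta2)
    k2 = l2 * math.sin(theta2)
    theta1 = math.atan2(y, x) - math.atan2(k2, k1)  # shoulder angle
    return theta1, theta2

# fitting a taught trajectory = solving IK for every point of it:
# joint_trajectory = [two_link_ik(x, y) for x, y, _z in coordinate_set]
```

A multi-axis arm in 3D would typically use a numerical inverse-kinematics solver instead, but the per-point fitting idea is the same.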
In this way, a multi-angle action image is formed by shooting a number of teaching actions of the operator at multiple angles and is transmitted to the robot; the robot maps the two-dimensional picture coordinates in the multi-angle action image into three-dimensional picture coordinates to form a three-dimensional picture coordinate set; and the mechanical arm of the robot fits the teaching actions according to the three-dimensional picture coordinate set, so that the mechanical arm can reproduce each teaching action of the operator. The process is simple, and the basic actions of the robot no longer need to be taught by coding each one individually.
In a preferred embodiment, in step S1, the operator pauses for a preset time after finishing each teaching action; the pause may, for example, be 3 s and is used to distinguish the individual teaching actions.
In a preferred embodiment, in step S2, the robot includes a radar device and an image pickup device, and performs the 3D modeling processing on the preset teaching field through the radar device and the image pickup device.
In this embodiment, performing 3D modeling processing through a radar device and an imaging device is a known prior-art technique, so it is not described in detail here.
In a preferred embodiment, step S3 comprises:
step S30, setting the initial state and the final state of each teaching action in the multi-angle action image by an image setting method;
step S31, the robot divides each teaching action according to the set initial state and the set final state of each teaching action.
Specifically, as shown in FIG. 2, in this embodiment the initial state and the final state of each teaching action in the multi-angle action image are set through image key frames, to assist the robot in dividing and recording each teaching action. For example, an action image at any one angle is selected from the multi-angle action images, and several key frames are set manually as the start position and the end position of a single teaching action; the robot determines the initial state and the final state of the single teaching action from these key frames, which makes it easy for the robot to divide the teaching actions.
In a preferred embodiment, the image setting method performs the setting according to the key frames of the image.
In a preferred embodiment, the mechanical arm is a multi-axis transmission structure comprising three or seven rotary joints, and the robot can rotate each rotary joint freely so as to perform the teaching action to be executed.
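For reference, the forward kinematics of such a serial chain of rotary joints, reduced to the plane for brevity (the reduction is an assumption of this sketch), accumulates the joint rotations link by link:

```python
import math

def forward_kinematics(joint_angles, link_lengths):
    """Planar forward kinematics for a serial arm with rotary joints.

    Each joint angle is relative to the previous link; returns the
    (x, y) position of every joint end, with the base at the origin.
    """
    x = y = phi = 0.0
    points = [(x, y)]
    for theta, length in zip(joint_angles, link_lengths):
        phi += theta                      # accumulate rotations along the chain
        x += length * math.cos(phi)
        y += length * math.sin(phi)
        points.append((x, y))
    return points
```

Checking `forward_kinematics(fitted_angles, link_lengths)` against the triangulated trajectory is one way to verify how closely a fitted pose reproduces the taught one.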
The invention also provides an action recognition system for the robot, wherein the action recognition system adopts the action recognition method described above and comprises:
a shooting module 1, configured to perform multi-angle shooting on each teaching action of an operator in a preset teaching field, so as to generate a multi-angle action image and transmit the multi-angle action image into the robot;
a modeling processing module 2 for controlling the robot to perform 3D modeling processing on a preset teaching field through a terminal device so as to form a 3D model;
the setting module 3 is connected with the shooting module 1 and is used for setting the initial state and the final state of each teaching action in the multi-angle action image through an image setting method so as to assist the robot in dividing and recording each teaching action;
the mapping module 4 is respectively connected with the modeling processing module 2 and the setting module 3 and is used for mapping the two-dimensional picture coordinates of each teaching action in the multi-angle action image into three-dimensional picture coordinates according to the 3D model by the robot so as to form a three-dimensional picture coordinate set;
and the fitting module 5 is connected with the mapping module 4; through it, the mechanical arm of the robot fits the teaching actions according to the three-dimensional picture coordinate set, so that the mechanical arm simulates each teaching action of the operator.
With the action recognition system provided above, as shown in FIG. 3, the operator first demonstrates each teaching action in the preset teaching field, pausing for a preset time (for example, 3 s) after finishing each action so that the individual actions can be distinguished. The shooting module 1 shoots each teaching action of the operator from multiple angles with a camera, forming a multi-angle action image that is transmitted into the robot, for example into a storage device inside the robot.
In this embodiment, the modeling processing module 2 controls the robot through the terminal device: the radar device and the camera device arranged in the head of the robot perform the 3D modeling processing on the preset teaching field while no personnel are present, forming a 3D model that makes it easier to distinguish the preset teaching field from each teaching action of the operator.
Further, the setting module 3 sets the initial state and the final state of each teaching action in the multi-angle action image through image key frames, to assist the robot in dividing and recording each teaching action. For example, an action image at any one angle is selected from the multi-angle action images, and several key frames are set manually as the start position and the end position of a single teaching action; the robot determines the initial state and the final state of the single teaching action from these key frames, which makes it easy for the robot to divide the teaching actions.
Further, in the mapping module 4, the robot maps the two-dimensional picture coordinates of each teaching action in the multi-angle action image into three-dimensional picture coordinates according to the 3D model, forming a three-dimensional picture coordinate set, and outputs the corresponding parameters for the end positions of the joints of the mechanical arm.
Further, in the fitting module 5, the mechanical arm of the robot fits the teaching actions according to the three-dimensional picture coordinate set, the arm being controlled to imitate the teaching actions of the operator's limbs as closely as possible, so that the mechanical arm reproduces each teaching action of the operator. The relative position relationship and the torque information of each teaching action in its final state are recorded and stored in the storage device of the robot as one piece of preset action information.
In this way, a multi-angle action image is formed by shooting a number of teaching actions of the operator at multiple angles and is transmitted to the robot; the robot maps the two-dimensional picture coordinates in the multi-angle action image into three-dimensional picture coordinates to form a three-dimensional picture coordinate set; and the mechanical arm of the robot fits the teaching actions according to the three-dimensional picture coordinate set, so that the mechanical arm can reproduce each teaching action of the operator. The process is simple, and the basic actions of the robot no longer need to be taught by coding each one individually.
In a preferred embodiment, the setting module 3 comprises:
a setting unit 30 for setting the initial state and the final state of each teaching action in the multi-angle action image through the image setting method;
a dividing unit 31 connected with the setting unit 30 and used by the robot to divide each teaching action according to the set initial state and final state of each teaching action.
Specifically, as shown in FIG. 4, the setting unit 30 sets the initial state and the final state of each teaching action in the multi-angle action image through image key frames, so that the dividing unit 31 can assist the robot in dividing and recording each teaching action. For example, an action image at any one angle is selected from the multi-angle action image, and several key frames are set manually as the start position and the end position of a single teaching action; the robot determines the initial state and the final state of the single teaching action from these key frames, which makes it easy for the robot to divide the teaching actions.
The technical scheme of the invention has the following beneficial effects: a multi-angle action image is formed by shooting a number of teaching actions of an operator at multiple angles and is transmitted to the robot; the robot maps the two-dimensional picture coordinates in the multi-angle action image into three-dimensional picture coordinates to form a three-dimensional picture coordinate set; and the mechanical arm of the robot fits the teaching actions according to the three-dimensional picture coordinate set, so that the mechanical arm can reproduce each teaching action of the operator. The process is simple, and the basic actions of the robot no longer need to be taught by coding each one individually.
The foregoing description covers only the preferred embodiments of the present invention and does not limit the scope of the invention. Those skilled in the art will appreciate that equivalent substitutions and obvious variations made using the description and drawings of the present invention are intended to fall within the scope of the invention.
Claims (8)
1. An action recognition method of a robot, in which a preset teaching field is provided, characterized by comprising the following steps:
Step S1, shooting each teaching action of an operator at multiple angles in the preset teaching field to generate a multi-angle action image, and transmitting the multi-angle action image into the robot;
Step S2, controlling the robot through a terminal device to perform 3D modeling processing on the preset teaching field so as to form a 3D model;
Step S3, setting the initial state and the final state of each teaching action in the multi-angle action image through an image setting method so as to assist the robot in dividing and recording each teaching action;
Step S4, mapping, by the robot according to the 3D model, the two-dimensional picture coordinates of each teaching action in the multi-angle action image into three-dimensional picture coordinates so as to form a three-dimensional picture coordinate set;
and Step S5, fitting the teaching actions by the mechanical arm of the robot according to the three-dimensional picture coordinate set, so that the mechanical arm of the robot simulates each teaching action of the operator.
2. The method according to claim 1, wherein in the step S1, the operator pauses for a predetermined time after each teaching action is completed, so as to distinguish each teaching action.
3. The method according to claim 1, wherein in the step S2, the robot includes a radar device and an imaging device, and performs the 3D modeling processing on the preset teaching field through the radar device and the imaging device.
4. The method according to claim 1, wherein the step S3 includes:
step S30, setting the initial state and the final state of each teaching action in the multi-angle action image through the image setting method;
step S31, the robot divides each teaching action according to the set initial state and the set final state of each teaching action.
5. The method according to claim 4, wherein the image setting method is set based on a key frame of an image.
6. The action recognition method of a robot according to claim 1, wherein the mechanical arm is a multi-axis transmission structure.
7. An action recognition system of a robot, using the action recognition method of a robot according to any one of claims 1 to 6, the action recognition system comprising:
the shooting module is used for shooting each teaching action of an operator at multiple angles in a preset teaching field so as to generate a multi-angle action image and transmitting the multi-angle action image into the robot;
the modeling processing module is used for controlling the robot to perform 3D modeling processing on the preset teaching field through terminal equipment so as to form a 3D model;
the setting module is connected with the shooting module and is used for setting the initial state and the final state of each teaching action in the multi-angle action image through an image setting method so as to assist the robot in dividing and recording each teaching action;
the mapping module is respectively connected with the modeling processing module and the setting module and is used for mapping the two-dimensional picture coordinates of each teaching action in the multi-angle action image into three-dimensional picture coordinates according to the 3D model by the robot so as to form a three-dimensional picture coordinate set;
and a fitting module connected with the mapping module, through which the mechanical arm of the robot fits the teaching actions according to the three-dimensional picture coordinate set, so that the mechanical arm of the robot simulates each teaching action of the operator.
8. The motion recognition system of a robot of claim 7, wherein the setup module comprises:
a setting unit configured to set the initial state and the final state of each teaching action in the multi-angle action image through the image setting method;
a dividing unit connected with the setting unit and used by the robot to divide each teaching action according to the set initial state and final state of each teaching action.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010414463.8A (CN111612889B) | 2020-05-15 | 2020-05-15 | Robot action recognition method and action recognition system
Publications (2)
Publication Number | Publication Date |
---|---|
CN111612889A CN111612889A (en) | 2020-09-01 |
CN111612889B (en) | 2023-05-05
Family
ID=72201933
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010414463.8A (granted as CN111612889B, active) | Robot action recognition method and action recognition system | 2020-05-15 | 2020-05-15
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111612889B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113408621B (en) * | 2021-06-21 | 2022-10-14 | 中国科学院自动化研究所 | Rapid simulation learning method, system and equipment for robot skill learning |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH06259536A (en) * | 1991-11-27 | 1994-09-16 | Yoshihiko Nomura | Three-dimensional correcting method for image pickup position and posture and three-dimensional position correcting method for robot |
CN107544311A * | 2017-10-20 | 2018-01-05 | 高井云 | Auxiliary device and method for hand-held teaching of an industrial robot |
Non-Patent Citations (1)
Title |
---|
Design and implementation of a three-dimensional teaching model for industrial robots; 杨胜安; 汪明; 赵永国; Journal of Shandong Jianzhu University (01); full text *
Also Published As
Publication number | Publication date |
---|---|
CN111612889A (en) | 2020-09-01 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |