CN112257655A - Method for robot to recognize human body sewing action - Google Patents


Info

Publication number
CN112257655A
CN112257655A
Authority
CN
China
Prior art keywords
sewing
robot
needle
motion
human body
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011240809.3A
Other languages
Chinese (zh)
Other versions
CN112257655B (en)
Inventor
王晓华
王皞燚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Polytechnic University
Original Assignee
Xian Polytechnic University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Polytechnic University
Priority to CN202011240809.3A
Publication of CN112257655A
Application granted
Publication of CN112257655B
Active (current legal status)
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 - Movements or behaviour, e.g. gesture recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 - Gesture based interaction, e.g. based on a set of recognized hand gestures

Abstract

The invention discloses a method for a robot to recognize human body sewing actions, which comprises the following steps: step 1, a modular multi-robot sewing system is set up, comprising an interconnected two-hand sewing system, stereoscopic vision system and visual servo system; step 2, human sewing actions are collected as robot learning samples to generate training data; step 3, the motion primitives obtained from demonstration are encoded with a Gaussian mixture model for task learning; step 4, the stereoscopic vision system is established and the motion pose of the needle is detected during the task; step 5, the visual servo system is established, and the motion of the robot is guided and adjusted through closed-loop visual feedback. The invention improves the accuracy of sewing-gesture recognition, improves the response speed and solves the problem of poor dynamic real-time performance of current methods.

Description

Method for robot to recognize human body sewing action
Technical Field
The invention belongs to the technical field of robot vision recognition, and relates to a method for recognizing human body sewing actions by a robot.
Background
Robot vision recognition technology is a key part of intelligent robot systems. It uses a robot to realize the visual function of a human, i.e. perception of the objective three-dimensional world, recognizing environmental targets mainly from information such as color and shape. Robot vision recognition is widely applied in intelligent robot systems; the vision recognition system must acquire images accurately, respond to external changes in real time, and track externally moving objects in real time. However, existing methods have low accuracy in recognizing sewing actions and large time delays in the dynamic recognition process, so the robot cannot achieve a good learning effect. Research on recognizing human sewing actions can provide technical support for the robot to learn information about the sewing environment, to plan the pose of the end effector, to plan the trajectory of the sewing process, and so on.
Disclosure of Invention
The invention aims to provide a method for identifying human body sewing actions by a robot, which solves the problems of low accuracy of identifying the sewing actions and large time delay in the dynamic identification process in the prior art.
The technical scheme adopted by the invention is that the method for identifying the human body sewing action by the robot is implemented according to the following steps:
step 1, a modular multi-robot sewing system is set up, comprising an interconnected two-hand sewing system, stereoscopic vision system and visual servo system;
step 2, collecting human body sewing actions as a robot learning sample to generate training data;
step 3, using Gaussian mixture model coding to perform task learning on the motion elements obtained by demonstration;
step 4, establishing a stereoscopic vision system, and detecting the motion posture of the needle during the task;
step 5, establishing a visual servo system, and guiding and adjusting the motion of the robot through closed-loop visual feedback.
The present invention is also characterized in that,
the stereoscopic vision system in the step 1 comprises two cameras with different angles, visual information is obtained through the cameras, the visual information is controlled and fed back to the double-hand sewing system through the visual servo system, the double-hand sewing system comprises two robots provided with sewing needle drivers, and the two robots are used for simulating the sewing action of human hands.
A bar code marker is mounted on the needle head of the sewing needle driver; it is used to visually track the target and to record the six-degree-of-freedom pose information of the sewing needle driver.
In step 2, a human demonstrates the sewing gesture actions and the stereoscopic vision system records them; samples are collected from these human demonstrations, and the sewing process is demonstrated to the two-hand sewing system multiple times to generate the training data.
The two-hand sewing system uses two robots to build a sewing model that imitates the sewing actions of human hands, tracks the target motion and records the pose information. One robot of the two-hand sewing system carries a motorized sewing needle driver, which delivers the needle head to the sewing position and performs the stitch according to the actions learned from the human demonstration, repeating this cycle continuously. The other robot carries a core rod; the sewing needle driver is fixed to one side of the core rod so that the needle can be re-grasped, and the core rod is placed in the required posture to ensure that the robot sews at the same position in the local frame and to control the sewing position.
The specific steps of step 3 are as follows:
After low-pass filtering the training data obtained in step 2, each demonstration is divided into a series of motion primitives according to the open/closed state of the sewing needle driver and its manner of connection with the sewing needle, and each motion primitive is encoded with a Gaussian mixture model \omega. The encoded elements comprise a timestamp t and six-degree-of-freedom pose information h, and the probability that a given point (t, h) belongs to \omega is computed as the weighted sum of the component probabilities of that point, as shown below:

P(t, h) = \sum_{k=1}^{K} \pi_k \, p_k(t, h)    (1)

where \pi_k and p_k are the prior probability and conditional probability density of Gaussian component \omega_k, whose mean \mu_k and covariance \Sigma_k are defined as:

\mu_k = \begin{bmatrix} \mu_{t,k} \\ \mu_{h,k} \end{bmatrix}, \qquad \Sigma_k = \begin{bmatrix} \Sigma_{tt,k} & \Sigma_{th,k} \\ \Sigma_{ht,k} & \Sigma_{hh,k} \end{bmatrix}    (2)

To determine the number of Gaussian components, five-fold cross-validation is used. At each time step the pose \hat{h}_t is queried, with mean \hat{\mu}_t and covariance \hat{\Sigma}_t, and the reference trajectory of each motion primitive is retrieved with the Gaussian mixture model, as shown below:

\hat{h}_t \sim \mathcal{N}(\hat{\mu}_t, \hat{\Sigma}_t)    (3)

where:

\beta_k(t) = \pi_k \, \mathcal{N}(t \mid \mu_{t,k}, \Sigma_{tt,k}) \Big/ \sum_{j=1}^{K} \pi_j \, \mathcal{N}(t \mid \mu_{t,j}, \Sigma_{tt,j})    (4)

\hat{\mu}_t = \sum_{k=1}^{K} \beta_k(t) \left( \mu_{h,k} + \Sigma_{ht,k} \Sigma_{tt,k}^{-1} (t - \mu_{t,k}) \right)    (5)

\hat{\Sigma}_t = \sum_{k=1}^{K} \beta_k(t)^2 \left( \Sigma_{hh,k} - \Sigma_{ht,k} \Sigma_{tt,k}^{-1} \Sigma_{th,k} \right)    (6)

According to the differences between the individual renditions within each motion primitive, the speed at which the task is reproduced in different task contexts is varied, the reference trajectory learned by the system is further optimized, and the sewing target trajectory is obtained.
In step 4, the posture of the needle is tracked and monitored by a stereoscopic vision system consisting of two cameras, which avoids sewing failure caused by accumulated deviation and yields the actual sewing trajectory.
The specific steps for establishing the stereoscopic vision system are as follows:
step 4.1, detecting the sewing needle in each stereo image with a needle detection algorithm to obtain a feature image;
step 4.2, enhancing the curvilinear structures in the feature image;
step 4.3, projecting the three-dimensional points of the expected ideal needle pose model onto the image plane;
step 4.4, detecting small straight segments and comparing the difference between the real and ideal needle poses; segments that are close to the projected needle and similarly oriented are regarded as part of the sewing needle;
step 4.5, combining these segments to create a continuous curve representing the sewing needle detected in the image.
By deploying a servo system based on closed-loop vision, the sewing target trajectory obtained in step 3 and the actual trajectory obtained in step 4 are compared, and feedback control is performed.
The motion of the robot is controlled using the needle pose information obtained in step 4, with "observation" and "movement" performed simultaneously, so that the needle is moved to the sewing position and pierces the fabric; the needle pose is converted into the needle-driver pose as follows:

{}^{s}x_d = {}^{s}x_n \cdot ({}^{d}H_n)^{-1}    (7)

where {}^{d}H_n is detected during the task and denotes the relative pose between the needle n and the needle driver d; {}^{s}x_n denotes the sequence of needle poses during the sewing process; {}^{s}x_d denotes the needle-driver poses corresponding to those needle poses. For different needle poses, the robot adjusts its trajectory to ensure that the same stitches are produced.
The invention has the beneficial effects that:
(1) and the Gaussian mixture model is used for coding, the time stamp and the needle posture are used as joint information, a better task learning effect is obtained, and the accuracy of sewing gesture recognition is improved.
(2) By adopting the method taking the 'object as the center', namely taking the end position of the needle as a research object and the needle driver as a tool, the robot uses the same tool to manipulate the same object in the task without mapping the motion of the human to the body of the robot, thereby improving the efficiency of the robot for learning the sewing action of the human.
(3) The motion of the robot is controlled by adopting a mode of simultaneously carrying out observation and movement, the response speed is improved, and the problem of poor dynamic real-time performance of the current method is solved.
Drawings
FIG. 1 is a flow chart of a method for identifying human body sewing actions by a robot according to the invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
FIG. 1 shows the flow chart of the method for a robot to recognize human body sewing actions according to the invention; the concrete implementation steps are as follows:
step 1, a modular multi-robot sewing system is set up, comprising an interconnected two-hand sewing system, stereoscopic vision system and visual servo system;
the invention applies the method of identifying human body sewing actions by the robot to the existing sewing robot intelligent system, and is a method of identifying human body sewing actions by the robot based on stereoscopic vision.
A bar code marker is mounted on the needle head of the sewing needle driver; it is used to visually track the target and to record the six-degree-of-freedom pose information of the sewing needle driver.
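The disclosure specifies only that a marker on the needle driver is tracked to recover its six-degree-of-freedom pose; it does not describe the detection pipeline. The fragment below is a minimal illustrative sketch of marker-based pose recovery from a single camera using OpenCV's ArUco module (contrib build, pre-4.7 API). The marker dictionary, marker size and camera intrinsics are assumptions for illustration, not part of the patent.

```python
# Illustrative sketch only: 6-DOF pose of the needle driver from a fiducial
# marker, using OpenCV's ArUco module (opencv-contrib, pre-4.7 API names).
# Marker dictionary, marker size and intrinsics are assumed values.
import cv2
import numpy as np

# Assumed intrinsics; in practice these come from stereo calibration.
camera_matrix = np.array([[900.0, 0.0, 640.0],
                          [0.0, 900.0, 360.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

MARKER_LENGTH_M = 0.01  # assumed 10 mm marker on the needle driver
aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector_params = cv2.aruco.DetectorParameters_create()

def driver_pose_from_frame(frame):
    """Return (rvec, tvec) of the needle-driver marker, or None if not seen."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict,
                                              parameters=detector_params)
    if ids is None or len(ids) == 0:
        return None
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, MARKER_LENGTH_M, camera_matrix, dist_coeffs)
    # Rotation as a Rodrigues vector plus translation in the camera frame.
    return rvecs[0].ravel(), tvecs[0].ravel()
```

With two calibrated cameras, the same per-camera estimate can be fused or triangulated to obtain the pose in a common frame, which is what the stereoscopic vision system of step 1 assumes.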
Step 2, collecting human body sewing actions as a robot learning sample to generate training data;
step 2, demonstrating sewing gesture actions by human beings, recording by a stereoscopic vision system, collecting samples by adopting human body action demonstration, and demonstrating a sewing process to a two-hand sewing system for many times to generate training data.
The two-hand sewing system uses two robots to build a sewing model that imitates the sewing actions of human hands, tracks the target motion and records the pose information. One robot of the two-hand sewing system carries a motorized sewing needle driver, which delivers the needle head to the sewing position and performs the stitch according to the actions learned from the human demonstration, repeating this cycle continuously. The other robot carries a core rod; the sewing needle driver is fixed to one side of the core rod so that the needle can be re-grasped, and the core rod is placed in the required posture to ensure that the robot sews at the same position in the local frame and to control the sewing position.
Step 3, using Gaussian mixture model coding to perform task learning on the motion elements obtained by demonstration;
the specific steps of step 3 are as follows: after low pass filtering the training data obtained in step 2, depending on the open and closed state of the sewing needle drive, and the manner of connection to the sewing needle,
dividing each demonstration into a series of motion primitives, and encoding each motion primitive by using a Gaussian mixture model omega, wherein the encoding elements comprise a timestamp t and six-degree-of-freedom attitude information h, and the probability that a given point t and h belong to omega is calculated as the weighted probability sum of the point, as shown in the following formula:
Figure BDA0002768298740000061
wherein, pikAnd pkIs a gaussian component omegakConditional probability density, mean μkSum covariance ∑kIs defined as:
Figure BDA0002768298740000062
to determine the number of Gaussian components, quintupling cross-validation is used, with query poses at each time step
Figure BDA0002768298740000063
Mean value of
Figure BDA0002768298740000064
Sum covariance
Figure BDA0002768298740000065
And searching the reference track of each motion element by using a Gaussian mixture model, wherein the reference track is shown as the following formula:
Figure BDA0002768298740000066
wherein:
Figure BDA0002768298740000067
Figure BDA0002768298740000068
Figure BDA0002768298740000071
according to the difference between different deductions in each motion element, the speed of the task for copying in different task contexts is changed, the reference track learned by the system is further optimized, and the sewing target track is obtained.
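No implementation is given in the disclosure; as a rough sketch of the encoding and retrieval described in step 3, the fragment below fits a Gaussian mixture to (t, h) samples of one motion primitive with scikit-learn and retrieves a reference trajectory by Gaussian mixture regression. The library choice, the toy one-dimensional pose and the fixed component count are assumptions for illustration (the patent selects the number of components by five-fold cross-validation).

```python
# Sketch: encode one motion primitive as a GMM over (timestamp, pose) and
# retrieve a reference trajectory by Gaussian mixture regression (GMR).
# scikit-learn and the toy 1-D "pose" are illustrative assumptions.
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

# Demonstration data: column 0 = timestamp t, remaining columns = pose h
# (a single pose dimension here to keep the sketch short).
t = np.linspace(0.0, 1.0, 200)
h = np.sin(2 * np.pi * t)[:, None] + 0.05 * np.random.randn(200, 1)
data = np.hstack([t[:, None], h])

K = 5  # assumed component count; the patent picks K by 5-fold cross-validation
gmm = GaussianMixture(n_components=K, covariance_type="full").fit(data)

def gmr_query(gmm, t_query):
    """Conditional mean of the pose given a timestamp (standard GMR)."""
    mu_t = gmm.means_[:, 0]            # per-component time means
    mu_h = gmm.means_[:, 1:]           # per-component pose means
    s_tt = gmm.covariances_[:, 0, 0]   # per-component time variance
    s_ht = gmm.covariances_[:, 1:, 0]  # per-component pose-time covariance
    # responsibilities beta_k(t_query)
    beta = gmm.weights_ * norm.pdf(t_query, loc=mu_t, scale=np.sqrt(s_tt))
    beta /= beta.sum()
    # per-component conditional expectation, then weighted sum
    cond = mu_h + s_ht * ((t_query - mu_t) / s_tt)[:, None]
    return beta @ cond

reference_trajectory = np.array([gmr_query(gmm, tq) for tq in t])
```

Querying the model at every time step produces the reference trajectory of the primitive, which step 5 later compares against the trajectory actually observed by the stereoscopic vision system.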
Step 4, establishing a stereoscopic vision system, and detecting the motion posture of the needle during the task;
and 4, tracking and monitoring the posture of the needle by adopting a stereoscopic vision system consisting of two cameras, avoiding the sewing failure caused by the accumulation of deviation and obtaining the actual sewing track.
The working process of establishing the stereoscopic vision system is as follows:
step 4.1, detecting the sewing needle in each stereo image with a needle detection algorithm to obtain a feature image;
step 4.2, enhancing the curvilinear structures in the feature image;
step 4.3, projecting the three-dimensional points of the expected ideal needle pose model onto the image plane;
step 4.4, detecting small straight segments and comparing the difference between the real and ideal needle poses; segments that are close to the projected needle and similarly oriented are regarded as part of the sewing needle;
step 4.5, combining these segments to create a continuous curve representing the sewing needle detected in the image.
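Steps 4.3 to 4.5 are described only at the level of operations; the sketch below illustrates, under assumed OpenCV primitives and thresholds, how projected points of the ideal needle model might be compared with detected line segments by distance and orientation and chained into a single curve. It is one plausible reading of the steps, not the patent's algorithm.

```python
# Sketch of steps 4.3-4.5: project the expected needle model into the image,
# detect short straight segments, keep those close to and aligned with the
# projected needle, and order them into one curve. Thresholds, Canny and
# HoughLinesP are assumptions for illustration.
import cv2
import numpy as np

def detect_needle_curve(feature_img, needle_model_pts, rvec, tvec,
                        camera_matrix, dist_coeffs,
                        max_dist_px=8.0, max_angle_deg=15.0):
    # Step 4.3: project the 3-D ideal needle pose model onto the image plane.
    proj, _ = cv2.projectPoints(needle_model_pts, rvec, tvec,
                                camera_matrix, dist_coeffs)
    proj = proj.reshape(-1, 2)

    # Step 4.4: detect small straight segments in the (already enhanced) image.
    edges = cv2.Canny(feature_img, 50, 150)
    segs = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=20,
                           minLineLength=5, maxLineGap=3)
    if segs is None:
        return None
    kept = []
    for x1, y1, x2, y2 in segs.reshape(-1, 4):
        mid = np.array([(x1 + x2) / 2.0, (y1 + y2) / 2.0])
        dists = np.linalg.norm(proj - mid, axis=1)
        i = int(dists.argmin())
        # local direction of the projected needle near this segment
        ref_dir = proj[min(i + 1, len(proj) - 1)] - proj[max(i - 1, 0)]
        seg_dir = np.array([x2 - x1, y2 - y1], dtype=float)
        cos_ang = abs(ref_dir @ seg_dir) / (
            np.linalg.norm(ref_dir) * np.linalg.norm(seg_dir) + 1e-9)
        ang = np.degrees(np.arccos(np.clip(cos_ang, 0.0, 1.0)))
        if dists.min() < max_dist_px and ang < max_angle_deg:
            kept.append((i, mid))

    # Step 4.5: order the kept segments along the projected needle (a curve).
    if not kept:
        return None
    kept.sort(key=lambda item: item[0])
    return np.array([mid for _, mid in kept])
```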
Step 5, a visual servo system is established, and the motion of the robot is guided and adjusted through closed-loop visual feedback. By deploying a servo system based on closed-loop vision, the sewing target trajectory obtained in step 3 and the actual trajectory obtained in step 4 are compared, and feedback control is performed.
The motion of the robot is controlled using the needle pose information obtained in step 4, with "observation" and "movement" performed simultaneously, so that the needle is moved to the sewing position and pierces the fabric; the needle pose is converted into the needle-driver pose as follows:

{}^{s}x_d = {}^{s}x_n \cdot ({}^{d}H_n)^{-1}    (7)

where {}^{d}H_n is detected during the task and denotes the relative pose between the needle n and the needle driver d; {}^{s}x_n denotes the sequence of needle poses during the sewing process; {}^{s}x_d denotes the needle-driver poses corresponding to those needle poses. For different needle poses, the robot adjusts its trajectory to ensure that the same stitches are produced.
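Equation (7) is a composition of rigid-body transforms; a minimal sketch, assuming all poses are represented as 4x4 homogeneous matrices in a common reference frame s, is:

```python
# Minimal sketch of equation (7): given the desired needle pose s_x_n in a
# reference frame s, and the needle-to-driver relative pose d_H_n measured
# during the task, compute the needle-driver pose s_x_d = s_x_n * inv(d_H_n).
# Representing poses as 4x4 homogeneous matrices is an assumption.
import numpy as np

def make_pose(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    H = np.eye(4)
    H[:3, :3] = R
    H[:3, 3] = t
    return H

def driver_pose_from_needle_pose(s_x_n: np.ndarray, d_H_n: np.ndarray) -> np.ndarray:
    """s_x_d = s_x_n * (d_H_n)^(-1), with 4x4 homogeneous matrices."""
    return s_x_n @ np.linalg.inv(d_H_n)

# Example (assumed numbers): needle 5 mm in front of the driver jaws along z.
d_H_n = make_pose(np.eye(3), np.array([0.0, 0.0, 0.005]))
s_x_n = make_pose(np.eye(3), np.array([0.10, 0.02, 0.30]))   # desired needle pose
s_x_d = driver_pose_from_needle_pose(s_x_n, d_H_n)           # pose command for the robot
```

Because {}^{d}H_n is re-detected during the task, re-evaluating this product for each new needle pose yields the adjusted driver trajectory described above.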
The method of the invention for a robot to recognize human body sewing actions adopts a two-hand sewing system and a stereoscopic vision system, takes the tip of the sewing needle as the object under study and the sewing needle driver as the tool, and adopts an "object-centered" method: human skill is expressed by the motion of the tool and the object; the human controls the tool to manipulate the object, and what is recorded and learned is the motion of the tool and the object rather than of the human hands. During task replication the robot uses the same tool to manipulate the same object, so the manipulation skills of the human can easily be transferred to the robot without mapping human motion onto the robot. The target sewing gestures are tracked and learned, a Gaussian mixture model is used for encoding, joint information of the timestamp and the needle pose is established, and the accuracy of action recognition is improved; a visual servo system is added, and through closed-loop visual feedback the robot adjusts its motion trajectory according to the pose information of the needle.

Claims (10)

1. A method for identifying human body sewing actions by a robot is characterized by comprising the following steps:
step 1, a modular multi-robot sewing system is set up, comprising an interconnected two-hand sewing system, stereoscopic vision system and visual servo system;
step 2, collecting human body sewing actions as a robot learning sample to generate training data;
step 3, using Gaussian mixture model coding to perform task learning on the motion elements obtained by demonstration;
step 4, establishing a stereoscopic vision system, and detecting the motion posture of the needle during the task;
step 5, establishing a visual servo system, and guiding and adjusting the motion of the robot through closed-loop visual feedback.
2. The method for human body sewing action recognition by the robot as claimed in claim 1, wherein the stereoscopic vision system in step 1 comprises two cameras with different angles, the cameras are used for obtaining visual information, the visual information is controlled and fed back to the two-hand sewing system by the visual servo system, the two-hand sewing system comprises two robots provided with sewing needle drivers, and the two robots are used for simulating the sewing action of human hands.
3. The method for human body sewing action recognition by the robot as claimed in claim 2, wherein a bar code mark is installed on the needle head of the sewing needle driver for visual tracking of the target and recording the six-degree-of-freedom attitude information of the sewing needle driver.
4. The method for human body sewing motion recognition by a robot as claimed in claim 2, wherein step 2 is performed by demonstrating the sewing gesture motion by a human, the stereoscopic vision system recording and adopting the human body motion demonstration to collect the sample, and the sewing process is demonstrated to the two-hand sewing system for a plurality of times to generate the training data.
5. The method for human body sewing action recognition by the robot as claimed in claim 4, wherein the two-hand sewing system uses two robots to build a sewing model to simulate the sewing action of human hands, track the target motion and record the posture information; one robot in the two-hand sewing system is provided with a motorized sewing needle driver, and the needle head is sent to a sewing position and sewing is executed according to the actions learned in human demonstration and continuously circulates; the other robot is provided with a core rod, a sewing needle driver is fixed on one side of the core rod to grab the needle again, and the core rod is placed in a required posture to ensure that the robot sews at the same position under the local frame and control the sewing position.
6. The method for identifying human body sewing actions by a robot as claimed in claim 2, wherein the specific steps of the step 3 are as follows:
after the training data obtained in step 2 is low-pass filtered, each demonstration is divided into a series of motion primitives according to the open/closed state of the sewing needle driver and its manner of connection with the sewing needle, and each motion primitive is encoded with a Gaussian mixture model \omega; the encoded elements comprise a timestamp t and six-degree-of-freedom pose information h, and the probability that a given point (t, h) belongs to \omega is computed as the weighted sum of the component probabilities of that point, as shown below:

P(t, h) = \sum_{k=1}^{K} \pi_k \, p_k(t, h)    (1)

where \pi_k and p_k are the prior probability and conditional probability density of Gaussian component \omega_k, whose mean \mu_k and covariance \Sigma_k are defined as:

\mu_k = \begin{bmatrix} \mu_{t,k} \\ \mu_{h,k} \end{bmatrix}, \qquad \Sigma_k = \begin{bmatrix} \Sigma_{tt,k} & \Sigma_{th,k} \\ \Sigma_{ht,k} & \Sigma_{hh,k} \end{bmatrix}    (2)

to determine the number of Gaussian components, five-fold cross-validation is used; at each time step the pose \hat{h}_t is queried, with mean \hat{\mu}_t and covariance \hat{\Sigma}_t, and the reference trajectory of each motion primitive is retrieved with the Gaussian mixture model, as shown below:

\hat{h}_t \sim \mathcal{N}(\hat{\mu}_t, \hat{\Sigma}_t)    (3)

where:

\beta_k(t) = \pi_k \, \mathcal{N}(t \mid \mu_{t,k}, \Sigma_{tt,k}) \Big/ \sum_{j=1}^{K} \pi_j \, \mathcal{N}(t \mid \mu_{t,j}, \Sigma_{tt,j})    (4)

\hat{\mu}_t = \sum_{k=1}^{K} \beta_k(t) \left( \mu_{h,k} + \Sigma_{ht,k} \Sigma_{tt,k}^{-1} (t - \mu_{t,k}) \right)    (5)

\hat{\Sigma}_t = \sum_{k=1}^{K} \beta_k(t)^2 \left( \Sigma_{hh,k} - \Sigma_{ht,k} \Sigma_{tt,k}^{-1} \Sigma_{th,k} \right)    (6)

according to the differences between the individual renditions within each motion primitive, the speed at which the task is reproduced in different task contexts is varied, the reference trajectory learned by the system is further optimized, and the sewing target trajectory is obtained.
7. The method for human body sewing motion recognition by a robot as claimed in claim 6, wherein step 4 employs a stereoscopic vision system composed of two cameras to track and monitor the needle posture, so as to avoid sewing failure due to accumulated deviation and obtain the actual sewing track.
8. The method for identifying human body sewing actions by a robot as claimed in claim 7, wherein the specific steps for establishing the stereoscopic vision system are as follows:
step 4.1, detecting the sewing needle in each stereo image with a needle detection algorithm to obtain a feature image;
step 4.2, enhancing the curvilinear structures in the feature image;
step 4.3, projecting the three-dimensional points of the expected ideal needle pose model onto the image plane;
step 4.4, detecting small straight segments and comparing the difference between the real and ideal needle poses, wherein segments that are close to the projected needle and similarly oriented are regarded as part of the sewing needle;
step 4.5, combining these segments to create a continuous curve representing the sewing needle detected in the image.
9. The method for human body sewing motion recognition by a robot according to claim 8, wherein the target sewing track and the actual sewing track obtained in the steps 3 and 4 are compared by deploying a servo system based on closed-loop vision to perform feedback control.
10. The method for recognizing human body sewing action by robot as claimed in claim 9, wherein the movement of the robot is controlled by using the needle attitude information obtained in step 4 in a manner of simultaneous "observation" and "movement" to realize the function of moving the needle to the sewing position and piercing the fabric, and the needle attitude is converted into the needle driver attitude in the following manner:
{}^{s}x_d = {}^{s}x_n \cdot ({}^{d}H_n)^{-1}    (7)

where {}^{d}H_n is detected during the task and denotes the relative pose between the needle n and the needle driver d; {}^{s}x_n denotes the sequence of needle poses during the sewing process; {}^{s}x_d denotes the needle-driver poses corresponding to those needle poses; for different needle poses, the robot adjusts its trajectory to ensure that the same stitches are produced.
CN202011240809.3A 2020-11-09 2020-11-09 Method for robot to recognize human body sewing action Active CN112257655B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011240809.3A CN112257655B (en) 2020-11-09 2020-11-09 Method for robot to recognize human body sewing action

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011240809.3A CN112257655B (en) 2020-11-09 2020-11-09 Method for robot to recognize human body sewing action

Publications (2)

Publication Number Publication Date
CN112257655A (en) 2021-01-22
CN112257655B CN112257655B (en) 2022-05-03

Family

ID=74266536

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011240809.3A Active CN112257655B (en) 2020-11-09 2020-11-09 Method for robot to recognize human body sewing action

Country Status (1)

Country Link
CN (1) CN112257655B (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103282153A (en) * 2011-01-10 2013-09-04 弗罗纽斯国际有限公司 Method for teaching/testing a motion sequence of a welding robot, welding robot and control system for same
CN104005180A (en) * 2014-06-12 2014-08-27 新杰克缝纫机股份有限公司 Visual positioning method and system for sewing
CN106087262A (en) * 2016-06-24 2016-11-09 芜湖固高自动化技术有限公司 The research and development method of robot sewing system and operational approach and system
CN110524548A (en) * 2019-08-02 2019-12-03 珞石(北京)科技有限公司 A kind of robot based on closed-loop control and sewing machine speed Synergistic method
US10745839B1 (en) * 2019-12-05 2020-08-18 Softwear Automation, Inc. Unwrinkling systems and methods
CN111424380A (en) * 2020-03-31 2020-07-17 山东大学 Robot sewing system and method based on skill learning and generalization
CN111645072A (en) * 2020-05-26 2020-09-11 山东大学 Robot sewing method and system based on multi-mode dictionary control strategy

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王晓华, 王育合, 王文杰, 王进, 陶庆: "缝纫机器人工作空间分析" [Workspace Analysis of a Sewing Robot], 机械科学与技术 [Mechanical Science and Technology] *

Also Published As

Publication number Publication date
CN112257655B (en) 2022-05-03

Similar Documents

Publication Publication Date Title
JP5209751B2 (en) Robot drive system, robot drive method, and robot drive program
Billing et al. A formalism for learning from demonstration
CN109590986B (en) Robot teaching method, intelligent robot and storage medium
CN109571487A A robot demonstration learning method based on vision
CN109108942A Robotic arm motion control method and system based on vision real-time teaching and adaptive DMPs
US20210023703A1 (en) System and method for augmenting a visual output from a robotic device
Kober et al. Learning movement primitives for force interaction tasks
WO2021069129A1 (en) Device and method for controlling a robot device
CN111872934A (en) Mechanical arm control method and system based on hidden semi-Markov model
Hueser et al. Learning of demonstrated grasping skills by stereoscopic tracking of human head configuration
CN109993770A (en) A kind of method for tracking target of adaptive space-time study and state recognition
Chen et al. Transferable active grasping and real embodied dataset
Mühlig et al. Human-robot interaction for learning and adaptation of object movements
Vidaković et al. Learning from demonstration based on a classification of task parameters and trajectory optimization
CN112257655B (en) Method for robot to recognize human body sewing action
CN109676583B (en) Deep learning visual acquisition method based on target posture, learning system and storage medium
Wang et al. Modelling of human haptic skill: A framework and preliminary results
CN113134839B (en) Robot precision flexible assembly method based on vision and force position image learning
CN109685828B (en) Deep learning tracking acquisition method based on target posture, learning system and storage medium
Zhu Robot Learning Assembly Tasks from Human Demonstrations
Hüser et al. Visual programming by demonstration of grasping skills in the context of a mobile service robot using 1D-topology based self-organizing-maps
Tidemann et al. Self-organizing multiple models for imitation: Teaching a robot to dance the YMCA
Hung et al. An approach to learn hand movements for robot actions from human demonstrations
Steil et al. Learning issues in a multi-modal robot-instruction scenario
Kojo et al. Gesture recognition for humanoids using proto-symbol space

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant