CN112257655B - Method for robot to recognize human body sewing action - Google Patents

Method for robot to recognize human body sewing action

Info

Publication number
CN112257655B
CN112257655B
Authority
CN
China
Prior art keywords
sewing
needle
robot
motion
posture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011240809.3A
Other languages
Chinese (zh)
Other versions
CN112257655A (en)
Inventor
王晓华
王皞燚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Polytechnic University
Original Assignee
Xi'an Polytechnic University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Polytechnic University
Priority to CN202011240809.3A priority Critical patent/CN112257655B/en
Publication of CN112257655A publication Critical patent/CN112257655A/en
Application granted granted Critical
Publication of CN112257655B publication Critical patent/CN112257655B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Psychiatry (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Social Psychology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Sewing Machines And Sewing (AREA)
  • Image Analysis (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a method for a robot to recognize human sewing actions, comprising the following steps: step 1, building a modular multi-robot sewing system that comprises an interconnected two-hand sewing system, a stereoscopic vision system and a visual servo system; step 2, collecting human sewing actions as robot learning samples to generate training data; step 3, encoding the motion primitives obtained from demonstration with a Gaussian mixture model for task learning; step 4, establishing the stereoscopic vision system and detecting the motion posture of the needle during the task; and step 5, establishing the visual servo system and guiding and adjusting the motion of the robot through closed-loop visual feedback. The invention improves the accuracy of sewing gesture recognition, shortens the response time, and solves the poor dynamic real-time performance of current methods.

Description

Method for robot to recognize human body sewing action
Technical Field
The invention belongs to the technical field of robot vision recognition, and relates to a method for recognizing human body sewing actions by a robot.
Background
Robot vision recognition technology is a key part of intelligent robot systems. It gives a robot the visual capability of a human, namely the ability to perceive the objective three-dimensional world, mainly recognizing targets in the environment from information such as color and shape. The technology is widely applied in intelligent robot systems: a vision recognition system must acquire images accurately, respond to external changes in real time, and track externally moving objects in real time. However, existing methods recognize sewing actions with low accuracy and introduce large time delays during dynamic recognition, so the robot cannot achieve a good learning effect. Research on recognizing human sewing actions can provide technical support for the robot to learn information about the sewing environment, plan the posture of the end effector, plan the trajectory of the sewing process, and so on.
Disclosure of Invention
The invention aims to provide a method for a robot to recognize human sewing actions, which solves the problems of low recognition accuracy and large time delay in the dynamic recognition process in the prior art.
The technical scheme adopted by the invention is a method for a robot to recognize human sewing actions, implemented according to the following steps:
step 1, building a modular multi-robot sewing system that comprises an interconnected two-hand sewing system, a stereoscopic vision system and a visual servo system;
step 2, collecting human sewing actions as robot learning samples to generate training data;
step 3, encoding the motion primitives obtained from demonstration with a Gaussian mixture model for task learning;
step 4, establishing the stereoscopic vision system and detecting the motion posture of the needle during the task;
step 5, establishing the visual servo system and guiding and adjusting the motion of the robot through closed-loop visual feedback.
The present invention is also characterized in that,
the stereoscopic vision system in the step 1 comprises two cameras with different angles, visual information is obtained through the cameras, the visual information is controlled and fed back to the double-hand sewing system through the visual servo system, the double-hand sewing system comprises two robots provided with sewing needle drivers, and the two robots are used for simulating the sewing action of human hands.
A bar code marker is mounted on the needle head of the sewing needle driver; it is used to visually track the target and to record the six-degree-of-freedom pose information of the sewing needle driver.
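For illustration, the 6-DoF pose of such a marker can be recovered from a single camera view with a PnP solver. The sketch below is a minimal example assuming a square planar marker of known side length and a pre-calibrated camera; the marker size, intrinsic parameters, and corner-detection step are assumptions made only for the example, not values taken from the patent.

```python
import numpy as np
import cv2

# Assumed square marker side length in metres (not specified in the patent).
MARKER_SIZE = 0.01

# Assumed pinhole intrinsics from a prior calibration (placeholder values).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
DIST = np.zeros(5)  # assume negligible lens distortion

# 3D corners of the marker in its own frame (z = 0 plane).
OBJ_PTS = MARKER_SIZE / 2.0 * np.array(
    [[-1, 1, 0], [1, 1, 0], [1, -1, 0], [-1, -1, 0]], dtype=np.float64)

def marker_pose(image_corners):
    """Return a 4x4 homogeneous pose of the marker in the camera frame.

    image_corners: (4, 2) pixel coordinates of the marker corners,
    e.g. from a marker/bar-code detector (detection itself is omitted here).
    """
    ok, rvec, tvec = cv2.solvePnP(OBJ_PTS, image_corners.astype(np.float64), K, DIST)
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = tvec.ravel()
    return T
```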
In step 2, a human demonstrates the sewing gesture actions and the stereoscopic vision system records them; samples are collected from the human demonstrations, and the sewing process is demonstrated to the two-hand sewing system many times to generate the training data.
The two-hand sewing system uses two robots to build a sewing model that imitates the sewing actions of human hands, tracks the target motion and records the pose information. One robot in the two-hand sewing system carries a motorized sewing needle driver; following the actions learned from the human demonstration, it brings the needle head to the sewing position, performs the stitch, and repeats this cycle continuously. The other robot carries a core rod; the sewing needle driver regrasps the needle against one side of the core rod, and the core rod is placed in the required posture to ensure that the robot sews at the same position in the local frame and to control the sewing position.
The specific steps of step 3 are as follows:
After low-pass filtering the training data obtained in step 2, each demonstration is divided into a series of motion primitives according to the open and closed states of the sewing needle driver and the way it connects with the needle, and each motion primitive is encoded with a Gaussian mixture model Ω. The encoded elements comprise the timestamp t and the six-degree-of-freedom pose information h, and the probability that a given point (t, h) belongs to Ω is computed as the weighted sum of the probabilities of that point, as shown in the following formula:

P(t, h) = Σ_{k=1}^{K} π_k p_k(t, h)   (1)

wherein π_k is the mixture weight and p_k is the conditional probability density of the Gaussian component Ω_k, whose mean μ_k and covariance Σ_k are defined as:

μ_k = (μ_{t,k}, μ_{h,k}),   Σ_k = [[Σ_{tt,k}, Σ_{th,k}], [Σ_{ht,k}, Σ_{hh,k}]]   (2)

To determine the number of Gaussian components, five-fold cross-validation is used. At each time step t̂ the pose is queried, with mean μ̂_h and covariance Σ̂_hh, and the reference trajectory of each motion primitive is retrieved from the Gaussian mixture model as shown in the following formula:

μ̂_h = Σ_{k=1}^{K} β_k μ̂_{h,k},   Σ̂_hh = Σ_{k=1}^{K} β_k^2 Σ̂_{hh,k}   (3)

wherein:

μ̂_{h,k} = μ_{h,k} + Σ_{ht,k} Σ_{tt,k}^(-1) (t̂ - μ_{t,k})   (4)

Σ̂_{hh,k} = Σ_{hh,k} - Σ_{ht,k} Σ_{tt,k}^(-1) Σ_{th,k}   (5)

β_k = π_k N(t̂; μ_{t,k}, Σ_{tt,k}) / Σ_{j=1}^{K} π_j N(t̂; μ_{t,j}, Σ_{tt,j})   (6)
according to the difference between different deductions in each motion element, the speed of the task for copying in different task contexts is changed, the reference track learned by the system is further optimized, and the sewing target track is obtained.
In step 4, a stereoscopic vision system consisting of two cameras tracks and monitors the pose of the needle, which prevents sewing failures caused by accumulated deviation and yields the actual sewing trajectory.
The specific steps for establishing the stereoscopic vision system are as follows:
step 4.1, detecting the sewing needle in each stereo image with a needle detection algorithm to obtain a feature image;
step 4.2, enhancing the curvilinear structures in the feature image;
step 4.3, projecting the three-dimensional points of the expected ideal pose model of the sewing needle onto the image plane;
step 4.4, detecting short straight segments and comparing the real pose of the needle against the ideal pose; segments that are close to the projected needle and similarly oriented are taken as part of the sewing needle;
step 4.5, combining these segments to create a continuous curve that represents the sewing needle detected in the image.
In step 5, a servo system based on closed-loop vision is deployed to compare the sewing target trajectory obtained in step 3 with the actual trajectory obtained in step 4 and to perform feedback control.
The motion of the robot is controlled with the needle pose information obtained in step 4, with "observation" and "movement" performed simultaneously, so that the needle is moved to the sewing position and pierces the fabric; the needle pose is converted into the needle driver pose as follows:
^s x_d = ^s x_n · (^d H_n)^(-1)   (7)

wherein ^d H_n is detected during the task and denotes the relative pose between the needle n and the needle driver d; ^s x_n denotes the sequence of needle poses during the sewing process; and ^s x_d denotes the needle driver pose corresponding to the needle pose during the sewing process. For different needle poses, the robot adjusts its trajectory to ensure that the same stitches are produced.
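A minimal sketch of the pose conversion in equation (7), representing each pose as a 4x4 homogeneous transform: the driver pose in the reference frame s is obtained from the observed needle pose and the needle-to-driver transform measured during the task. The matrix representation and parameter names are assumptions made for illustration.

```python
import numpy as np

def driver_pose_from_needle(s_x_n, d_H_n):
    """Equation (7): s_x_d = s_x_n · (d_H_n)^-1.

    s_x_n: 4x4 pose of the needle n in the reference frame s.
    d_H_n: 4x4 relative pose of the needle n in the driver frame d,
           measured during the task.
    Returns the 4x4 pose of the needle driver d in frame s.
    """
    return s_x_n @ np.linalg.inv(d_H_n)

def driver_trajectory(needle_poses, d_H_n):
    """Convert a sequence of needle poses into the corresponding driver poses."""
    return [driver_pose_from_needle(T, d_H_n) for T in needle_poses]
```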
The invention has the beneficial effects that:
(1) A Gaussian mixture model is used for encoding, with the timestamp and the needle pose as joint information, which yields a better task learning effect and improves the accuracy of sewing gesture recognition.
(2) An "object-centered" approach is adopted: the end position of the needle is taken as the object of study and the needle driver as the tool, so the robot uses the same tool to manipulate the same object in the task without mapping human motion onto the robot body, improving the efficiency with which the robot learns human sewing actions.
(3) The motion of the robot is controlled with "observation" and "movement" performed simultaneously, which increases the response speed and solves the poor dynamic real-time performance of current methods.
Drawings
FIG. 1 is a flow chart of a method for identifying human body sewing actions by a robot according to the invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
FIG. 1 shows the flow chart of the method for a robot to recognize human sewing actions according to the invention; the concrete implementation steps are as follows:
Step 1, building a modular multi-robot sewing system that comprises an interconnected two-hand sewing system, a stereoscopic vision system and a visual servo system.
The invention applies the method for a robot to recognize human sewing actions to an existing intelligent sewing robot system; it is a stereoscopic-vision-based method.
A bar code marker is mounted on the needle head of the sewing needle driver; it is used to visually track the target and to record the six-degree-of-freedom pose information of the sewing needle driver.
Step 2, collecting human body sewing actions as a robot learning sample to generate training data;
step 2, demonstrating sewing gesture actions by human beings, recording by a stereoscopic vision system, collecting samples by adopting human body action demonstration, and demonstrating a sewing process to a two-hand sewing system for many times to generate training data.
The two-hand sewing system uses two robots to build a sewing model to simulate the sewing actions of human hands, track the target motion and record the posture information; one robot in the two-hand sewing system is provided with a motorized sewing needle driver, and the needle head is sent to a sewing position and sewing is executed according to the actions learned in human demonstration and continuously circulates; the other robot is provided with a core rod, a sewing needle driver is fixed on one side of the core rod to grab the needle again, and the core rod is placed in a required posture to ensure that the robot sews at the same position under the local frame and control the sewing position.
Step 3, using Gaussian mixture model coding to perform task learning on the motion elements obtained by demonstration;
the specific steps of step 3 are as follows: after low pass filtering the training data obtained in step 2, depending on the open and closed state of the sewing needle drive, and the manner of connection to the sewing needle,
dividing each demonstration into a series of motion primitives, and encoding each motion primitive by using a Gaussian mixture model omega, wherein the encoding elements comprise a timestamp t and six-degree-of-freedom attitude information h, and the probability that a given point t and h belong to omega is calculated as the weighted probability sum of the point, as shown in the following formula:
Figure BDA0002768298740000061
wherein, pikAnd pkIs a gaussian component omegakConditional probability density, mean μkSum covariance ∑kIs defined as:
Figure BDA0002768298740000062
to determine the number of Gaussian components, quintupling cross-validation is used, with query poses at each time step
Figure BDA0002768298740000063
Mean value of
Figure BDA0002768298740000064
Sum covariance
Figure BDA0002768298740000065
And searching the reference track of each motion element by using a Gaussian mixture model, wherein the reference track is shown as the following formula:
Figure BDA0002768298740000066
wherein:
Figure BDA0002768298740000067
Figure BDA0002768298740000068
Figure BDA0002768298740000071
according to the difference between different deductions in each motion element, the speed of the task for copying in different task contexts is changed, the reference track learned by the system is further optimized, and the sewing target track is obtained.
Step 4, establishing a stereoscopic vision system, and detecting the motion posture of the needle during the task;
and 4, tracking and monitoring the posture of the needle by adopting a stereoscopic vision system consisting of two cameras, avoiding the sewing failure caused by the accumulation of deviation and obtaining the actual sewing track.
The working process of establishing the stereoscopic vision system is as follows (a sketch of this detection pipeline is given after the list):
step 4.1, detecting the sewing needle in each stereo image with a needle detection algorithm to obtain a feature image;
step 4.2, enhancing the curvilinear structures in the feature image;
step 4.3, projecting the three-dimensional points of the expected ideal pose model of the sewing needle onto the image plane;
step 4.4, detecting short straight segments and comparing the real pose of the needle against the ideal pose; segments that are close to the projected needle and similarly oriented are taken as part of the sewing needle;
step 4.5, combining these segments to create a continuous curve that represents the sewing needle detected in the image.
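The sketch below strings the five sub-steps together for one camera image: edge-based feature extraction, detection of short straight segments, filtering of segments by distance and orientation against the projected ideal needle, and assembly of the kept segments into an ordered point set. The edge detector, Hough parameters, and thresholds are assumptions chosen only to make the example concrete.

```python
import numpy as np
import cv2

def detect_needle(gray, projected_needle, dist_thresh=8.0, angle_thresh=0.35):
    """Return an ordered set of image points lying on the sewing needle.

    gray: single-channel camera image.
    projected_needle: (M, 2) image-plane projection of the ideal needle
                      pose model (step 4.3), used to filter segments.
    """
    # Steps 4.1-4.2: feature image with enhanced curvilinear structures.
    feat = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)

    # Step 4.4: detect short straight segments.
    segs = cv2.HoughLinesP(feat, 1, np.pi / 180, threshold=20,
                           minLineLength=10, maxLineGap=3)
    if segs is None:
        return np.empty((0, 2))

    ref_dir = projected_needle[-1] - projected_needle[0]
    ref_angle = np.arctan2(ref_dir[1], ref_dir[0])
    kept = []
    for x1, y1, x2, y2 in segs[:, 0, :]:
        mid = np.array([(x1 + x2) / 2.0, (y1 + y2) / 2.0])
        # Keep segments close to the projected needle and similarly oriented.
        dist = np.min(np.linalg.norm(projected_needle - mid, axis=1))
        angle = np.arctan2(y2 - y1, x2 - x1)
        d_angle = np.abs(np.arctan2(np.sin(angle - ref_angle),
                                    np.cos(angle - ref_angle)))
        d_angle = min(d_angle, np.pi - d_angle)  # ignore segment direction
        if dist < dist_thresh and d_angle < angle_thresh:
            kept.append([x1, y1])
            kept.append([x2, y2])

    # Step 4.5: order the kept endpoints along the needle to form a curve.
    if not kept:
        return np.empty((0, 2))
    pts = np.array(kept, dtype=np.float64)
    axis = ref_dir / (np.linalg.norm(ref_dir) + 1e-9)
    order = np.argsort(pts @ axis)
    return pts[order]
```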
Step 5, establishing the visual servo system and guiding and adjusting the motion of the robot through closed-loop visual feedback. A servo system based on closed-loop vision is deployed to compare the sewing target trajectory obtained in step 3 with the actual trajectory obtained in step 4 and to perform feedback control.
The motion of the robot is controlled with the needle pose information obtained in step 4, with "observation" and "movement" performed simultaneously, so that the needle is moved to the sewing position and pierces the fabric; the needle pose is converted into the needle driver pose as follows:

^s x_d = ^s x_n · (^d H_n)^(-1)   (7)

wherein ^d H_n is detected during the task and denotes the relative pose between the needle n and the needle driver d; ^s x_n denotes the sequence of needle poses during the sewing process; and ^s x_d denotes the needle driver pose corresponding to the needle pose during the sewing process. For different needle poses, the robot adjusts its trajectory to ensure that the same stitches are produced.
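A minimal sketch of the "observe while moving" servo loop: at each cycle the stereo system provides the current needle pose, it is converted to a driver pose with equation (7), compared against the target trajectory, and a proportionally scaled correction is sent to the robot. The controller gain, the robot and vision interfaces, and the termination test are assumptions, not details given in the patent.

```python
import numpy as np

def servo_loop(target_driver_poses, observe_needle_pose, send_driver_pose,
               d_H_n, gain=0.5, tol=1e-3, max_iters=200):
    """Closed-loop visual servoing of the needle driver along a target trajectory.

    target_driver_poses: list of 4x4 target driver poses (sewing target trajectory).
    observe_needle_pose: callable returning the current 4x4 needle pose
                         from the stereo vision system (assumed interface).
    send_driver_pose:    callable commanding a 4x4 driver pose to the robot
                         (assumed interface).
    d_H_n: 4x4 needle-to-driver transform measured during the task.
    """
    for target in target_driver_poses:
        for _ in range(max_iters):
            s_x_n = observe_needle_pose()             # "observe"
            s_x_d = s_x_n @ np.linalg.inv(d_H_n)      # equation (7)
            err = target[:3, 3] - s_x_d[:3, 3]        # translational error
            if np.linalg.norm(err) < tol:
                break
            cmd = target.copy()                       # track the target orientation
            cmd[:3, 3] = s_x_d[:3, 3] + gain * err    # proportional position step
            send_driver_pose(cmd)                     # "move" while still observing
```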
In summary, the method for a robot to recognize human sewing actions according to the invention adopts a two-hand sewing system and a stereoscopic vision system, takes the tip of the sewing needle as the object of study and the sewing needle driver as the tool, and follows an object-centered approach: human skill is expressed by the motion of the tool and the object, the human controls the tool to manipulate the object, and it is the motion of the tool and the object, rather than of the human hands, that is recorded and learned. During task reproduction the robot uses the same tool to manipulate the same object, so human manipulation skills can be transferred to the robot easily, without mapping human motion onto the robot. The target sewing gesture is tracked and learned, a Gaussian mixture model is used for encoding, joint information linking the timestamp and the needle pose is established, and the accuracy of action recognition is improved; a visual servo system is added, and through closed-loop visual feedback the robot adjusts its motion trajectory according to the pose information of the needle.

Claims (6)

1. A method for a robot to recognize human body sewing actions, characterized by comprising the following steps:
step 1, building a modular multi-robot sewing system that comprises an interconnected two-hand sewing system, a stereoscopic vision system and a visual servo system;
the stereoscopic vision system in step 1 comprises two cameras at different angles; visual information is acquired by the cameras and fed back, through the visual servo system, to control the two-hand sewing system; the two-hand sewing system comprises two robots equipped with sewing needle drivers, which are used to imitate the sewing actions of human hands;
step 2, collecting human body sewing actions as robot learning samples to generate training data;
step 3, encoding the motion primitives obtained from demonstration with a Gaussian mixture model for task learning; the specific steps of step 3 are as follows:
after low-pass filtering the training data obtained in step 2, each demonstration is divided into a series of motion primitives according to the open and closed states of the sewing needle driver and the way it connects with the needle, and each motion primitive is encoded with a Gaussian mixture model Ω; the encoded elements comprise the timestamp t and the six-degree-of-freedom pose information h, and the probability that a given point (t, h) belongs to Ω is computed as the weighted sum of the probabilities of that point, as shown in the following formula:

P(t, h) = Σ_{k=1}^{K} π_k p_k(t, h)   (1)

wherein k denotes the k-th Gaussian component; π_k is the mixture weight and p_k is the conditional probability density of the Gaussian component Ω_k, whose mean μ_k and covariance Σ_k are defined as:

μ_k = (μ_{t,k}, μ_{h,k}),   Σ_k = [[Σ_{tt,k}, Σ_{th,k}], [Σ_{ht,k}, Σ_{hh,k}]]   (2)

μ_{t,k} and μ_{h,k} respectively denote the means of the k-th Gaussian component for the time t and the pose h; Σ_{tt,k} and Σ_{th,k} respectively denote, with mean μ_{t,k}, the variances for the query conditions time t and pose h; Σ_{hh,k} and Σ_{ht,k} respectively denote, with mean μ_{h,k}, the variances for the query conditions pose h and time t;

to determine the number of Gaussian components, five-fold cross-validation is used; at each time step t̂ the pose is queried, with mean μ̂_h and covariance Σ̂_hh, and the reference trajectory of each motion primitive is retrieved from the Gaussian mixture model as shown in the following formula:

μ̂_h = Σ_{k=1}^{K} β_k μ̂_{h,k},   Σ̂_hh = Σ_{k=1}^{K} β_k^2 Σ̂_{hh,k}   (3)

wherein:

μ̂_{h,k} = μ_{h,k} + Σ_{ht,k} Σ_{tt,k}^(-1) (t̂ - μ_{t,k})   (4)

Σ̂_{hh,k} = Σ_{hh,k} - Σ_{ht,k} Σ_{tt,k}^(-1) Σ_{th,k}   (5)

β_k = π_k N(t̂; μ_{t,k}, Σ_{tt,k}) / Σ_{j=1}^{K} π_j N(t̂; μ_{t,j}, Σ_{tt,j})   (6)

t̂ is the time step; μ̂_h is the mean corresponding to the pose trajectory of the motion primitive, and Σ̂_hh is the covariance corresponding to the pose trajectory of the motion primitive at time step t̂; K is the total number of Gaussian components; β_k is the mixture weight of the k-th Gaussian component; μ̂_{h,k} is the mean corresponding to the pose h of the k-th Gaussian component at time step t̂; Σ̂_{hh,k} is the variance of the k-th Gaussian component for the query condition pose h at time step t̂ with mean μ̂_{h,k};
changing the speed at which the task is reproduced in different task contexts according to the differences between the individual reproductions within each motion primitive, and further refining the reference trajectory learned by the system to obtain the sewing target trajectory;
step 4, establishing a stereoscopic vision system, and detecting the motion posture of the needle during the task;
in step 4, a stereoscopic vision system consisting of two cameras is adopted to track and monitor the pose of the needle, so that sewing failures caused by accumulated deviation are avoided and the actual sewing trajectory is obtained;
the specific steps for establishing the stereoscopic vision system are as follows:
step 4.1, detecting the sewing needle in each stereo image with a needle detection algorithm to obtain a feature image;
step 4.2, enhancing the curvilinear structures in the feature image;
step 4.3, projecting the three-dimensional points of the expected ideal pose model of the sewing needle onto the image plane;
step 4.4, detecting short straight segments and comparing the real pose of the needle with the ideal pose, wherein segments that are close to the projected needle and similarly oriented are taken as part of the sewing needle;
step 4.5, combining these segments to create a continuous curve representing the sewing needle detected in the image;
and step 5, establishing a visual servo system, and guiding and adjusting the motion of the robot through closed-loop visual feedback.
2. The method for a robot to recognize human body sewing actions according to claim 1, wherein a bar code marker is mounted on the needle head of the sewing needle driver for visually tracking the target and recording the six-degree-of-freedom pose information of the sewing needle driver.
3. The method for a robot to recognize human body sewing actions according to claim 1, wherein in step 2 a human demonstrates the sewing gesture actions, the stereoscopic vision system records them to collect samples from the human demonstrations, and the sewing process is demonstrated to the two-hand sewing system many times to generate the training data.
4. The method for a robot to recognize human body sewing actions according to claim 3, wherein the two-hand sewing system uses two robots to build a sewing model that imitates the sewing actions of human hands, tracks the target motion and records the pose information; one robot in the two-hand sewing system carries a motorized sewing needle driver and, following the actions learned from the human demonstration, brings the needle head to the sewing position, performs the stitch, and repeats this cycle continuously; the other robot carries a core rod, the sewing needle driver regrasps the needle against one side of the core rod, and the core rod is placed in the required posture to ensure that the robot sews at the same position in the local frame and to control the sewing position.
5. The method for a robot to recognize human body sewing actions according to claim 1, wherein feedback control is performed by deploying a servo system based on closed-loop vision to compare the sewing target trajectory obtained in step 3 with the actual trajectory obtained in step 4.
6. The method for a robot to recognize human body sewing actions according to claim 5, wherein the needle pose detected in step 4 is used to control the motion of the robot with "observation" and "movement" performed simultaneously, so that the needle is moved to the sewing position and pierces the fabric, and the needle pose is converted into the needle driver pose as follows:

^s x_d = ^s x_n · (^d H_n)^(-1)   (7)

wherein ^d H_n is detected during the task and denotes the relative pose between the needle n and the needle driver d; ^s x_n denotes the sequence of needle poses during the sewing process; and ^s x_d denotes the needle driver pose corresponding to the needle pose during the sewing process; for different needle poses the robot adjusts its trajectory to ensure that the same stitches are produced.
CN202011240809.3A 2020-11-09 2020-11-09 Method for robot to recognize human body sewing action Active CN112257655B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011240809.3A CN112257655B (en) 2020-11-09 2020-11-09 Method for robot to recognize human body sewing action

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011240809.3A CN112257655B (en) 2020-11-09 2020-11-09 Method for robot to recognize human body sewing action

Publications (2)

Publication Number Publication Date
CN112257655A CN112257655A (en) 2021-01-22
CN112257655B true CN112257655B (en) 2022-05-03

Family

ID=74266536

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011240809.3A Active CN112257655B (en) 2020-11-09 2020-11-09 Method for robot to recognize human body sewing action

Country Status (1)

Country Link
CN (1) CN112257655B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103282153A (en) * 2011-01-10 2013-09-04 弗罗纽斯国际有限公司 Method for teaching/testing a motion sequence of a welding robot, welding robot and control system for same

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104005180B (en) * 2014-06-12 2016-06-22 杰克缝纫机股份有限公司 A kind of vision positioning method for sewing and system
CN106087262B (en) * 2016-06-24 2019-01-18 芜湖固高自动化技术有限公司 The research and development method of robot sewing system and its operating method and system
CN110524548B (en) * 2019-08-02 2021-03-19 珞石(北京)科技有限公司 Robot and sewing machine speed cooperation method based on closed-loop control
US10745839B1 (en) * 2019-12-05 2020-08-18 Softwear Automation, Inc. Unwrinkling systems and methods
CN111424380B (en) * 2020-03-31 2021-04-30 山东大学 Robot sewing system and method based on skill learning and generalization
CN111645072B (en) * 2020-05-26 2021-09-24 山东大学 Robot sewing method and system based on multi-mode dictionary control strategy

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103282153A (en) * 2011-01-10 2013-09-04 弗罗纽斯国际有限公司 Method for teaching/testing a motion sequence of a welding robot, welding robot and control system for same

Also Published As

Publication number Publication date
CN112257655A (en) 2021-01-22

Similar Documents

Publication Publication Date Title
Das et al. Model-based inverse reinforcement learning from visual demonstrations
Calinon et al. Goal-directed imitation in a humanoid robot
Billard et al. Survey: Robot programming by demonstration
Billing et al. A formalism for learning from demonstration
CN109571487A (en) A kind of robotic presentation learning method of view-based access control model
CN109108942A (en) The mechanical arm motion control method and system of the real-time teaching of view-based access control model and adaptive DMPS
Kober et al. Learning movement primitives for force interaction tasks
WO2021069129A1 (en) Device and method for controlling a robot device
CN114127806A (en) System and method for enhancing visual output from a robotic device
CN111872934A (en) Mechanical arm control method and system based on hidden semi-Markov model
Hueser et al. Learning of demonstrated grasping skills by stereoscopic tracking of human head configuration
CN109993770A (en) A kind of method for tracking target of adaptive space-time study and state recognition
Radosavovic et al. Robot learning with sensorimotor pre-training
Chen et al. Transferable active grasping and real embodied dataset
Mühlig et al. Human-robot interaction for learning and adaptation of object movements
Reinhart et al. Representation and generalization of bi-manual skills from kinesthetic teaching
CN112257655B (en) Method for robot to recognize human body sewing action
CN113134839B (en) Robot precision flexible assembly method based on vision and force position image learning
Calderon et al. Robot imitation from human body movements
Zhu Robot Learning Assembly Tasks from Human Demonstrations
Ito et al. Visualization of focal cues for visuomotor coordination by gradient-based methods: A recurrent neural network shifts the attention depending on task requirements
Tidemann et al. Self-organizing multiple models for imitation: Teaching a robot to dance the YMCA
Steil et al. Learning issues in a multi-modal robot-instruction scenario
Yamazaki et al. Assembly manipulation understanding based on 3D object pose estimation and human motion estimation
CN113119073A (en) Mechanical arm system based on computer vision and machine learning and oriented to 3C assembly scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant