CN109685828B - Deep learning tracking acquisition method based on target posture, learning system and storage medium


Info

Publication number
CN109685828B
CN109685828B
Authority
CN
China
Prior art keywords
teaching
tracking
target
learning
function
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811467812.1A
Other languages
Chinese (zh)
Other versions
CN109685828A (en)
Inventor
刘培超
刘主福
郎需林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Yuejiang Technology Co Ltd
Original Assignee
Shenzhen Yuejiang Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Yuejiang Technology Co Ltd filed Critical Shenzhen Yuejiang Technology Co Ltd
Priority to CN201811467812.1A
Publication of CN109685828A
Application granted
Publication of CN109685828B
Status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1628 Programme controls characterised by the control loop
    • B25J9/163 Programme controls characterised by the control loop learning, adaptive, model based, rule based expert control
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/28 Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Manipulator (AREA)

Abstract

The invention relates to the technical field of robots and discloses a deep learning tracking acquisition method based on target posture, a learning system, and a storage medium, used for controlling a robot to learn teaching actions. The deep learning tracking acquisition method based on target posture comprises the following steps: selecting a plurality of reference points of the teaching action, tracking the motion of each reference point, and recording teaching tracking data; analyzing each piece of teaching tracking data and fitting it, through the relationship between movement and time, to at least two functions: a posture function describing the change of the target's posture over time, and a displacement function describing the change of the target's position over time; and generating a control program so that the robot can reproduce the teaching action according to the posture function and the displacement function. By collecting the teaching action of the target to generate the drive program, the invention reduces the need for manual participation and offers a high degree of intelligence, high imitation fidelity, and other advantages.

Description

Deep learning tracking acquisition method based on target posture, learning system and storage medium
Technical Field
The invention relates to the technical field of robots, in particular to a deep learning tracking acquisition method based on target posture, a learning system, and a storage medium.
Background
A Robot is a high-tech product in which a program or rule outline is preset; after receiving a signal or instruction, the robot can, to a certain extent, make judgments and take actions such as moving, grasping, or swinging a limb. The task of a robot is mainly to assist or even replace human work in certain settings. However, the judgments about actions and information involved in a real working scene are extremely complicated and difficult to write into the robot as a program in advance. How a robot can learn by itself from existing knowledge to improve its adaptability and level of intelligence, namely robot learning, has therefore become a very popular research focus in the robot industry.
In the prior art, the process by which a robot imitates a human teaching action mainly includes: 1. digitally collecting the coordinates of several key points of the teaching action; 2. inversely solving those points into a robot control program. Both steps require substantial manual involvement. In step 1 in particular, not only must the key points be selected, but the teaching action must also be simplified, for example into "move from point A to point B, then rise or fall at point B". The more the teaching action is simplified, the lower the fidelity of the robot's imitation; the less it is simplified, the larger the computation required for point extraction. In the end it is difficult for the robot to imitate a human teaching action with high fidelity.
Disclosure of Invention
The invention aims to provide a deep learning tracking acquisition method based on target posture, a learning system, and a storage medium, so as to solve the problems in the prior art that, when a robot imitates human teaching actions, the action fidelity is low, the point-extraction computation is large, manual participation is heavy, and the degree of intelligence is low.
The invention provides a deep learning tracking acquisition method based on target posture, used for controlling a robot to learn teaching actions, comprising the following steps: selecting a plurality of reference points of the teaching action, tracking the motion of each reference point, and recording teaching tracking data; analyzing each piece of teaching tracking data and fitting it, through the relationship between movement and time, to at least two functions: a posture function describing the change of the target's posture over time, and a displacement function describing the change of the target's position over time; and generating a control program so that the robot can reproduce the teaching action according to the posture function and the displacement function.
The invention also provides a learning system for controlling a robot to learn the teaching action of a target, the robot having an execution end, the system comprising: a motion tracking unit having a plurality of motion trackers, each of which continuously tracks a corresponding reference point during the teaching action and records teaching tracking data; a data analysis unit which receives the teaching tracking data and analyzes it to obtain the motion functions of the teaching action; and a drive control unit which receives the motion functions, generates a drive program, and controls the execution end to perform the imitation action.
The invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above deep learning tracking acquisition method based on target posture.
Compared with the prior art, the invention simplifies the teaching action of the target into at least two descriptive functions: a displacement function describing displacement over time, and a posture function describing posture over time. After the action is simplified in this way, the point-extraction computation is reduced, and the drive program is generated by collecting the teaching action of the target, which lowers the need for manual participation and yields a high degree of intelligence, high imitation fidelity, and other advantages.
Drawings
Fig. 1 is a schematic flow chart of a target posture-based deep learning tracking acquisition method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of calculating a swing angle of a posture function in the target posture-based deep learning tracking acquisition method according to the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the description of the present invention, it is to be understood that terms such as "length", "width", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", and "outer" indicate orientations or positional relationships based on those illustrated in the drawings. They are used merely for convenience and simplicity of description and do not indicate or imply that the devices or elements referred to must have a particular orientation or be constructed and operated in a particular orientation, and thus are not to be construed as limiting the present invention.
In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
In the present invention, unless otherwise expressly stated or limited, the terms "mounted", "connected", "fixed", and the like are to be construed broadly: for example, a connection may be fixed, detachable, or integral; it may be mechanical or electrical; and it may be direct, indirect through an intermediate medium, or internal between two elements. The specific meanings of these terms in the present invention can be understood by those skilled in the art according to the specific situation.
The implementation of this embodiment is described in detail below with reference to the drawings. For convenience of description, a spatial coordinate system (x, y, z) is established, in which the x axis and the y axis lie in the horizontal plane and are perpendicular to each other, and the z axis lies in the vertical direction.
This embodiment provides a deep learning tracking acquisition method based on target posture, used for learning the teaching action of a target, comprising the following steps:
101. Select a plurality of reference points of the teaching action, track the motion of each reference point, and record teaching tracking data. The target in this embodiment may be the whole of a human, an animal, or another mechanical device, or a specific part, such as a human hand, a bird's wing, or a human fingertip. Specifically, this embodiment takes as an example the calligraphy action of a person writing a certain Chinese character with a writing brush: the writing brush is the target, and the motion of the brush during writing is the teaching action. While the writer holds the brush and writes the character, the writing process of the brush is tracked from multiple directions, a plurality of reference points on the brush are marked, and each reference point is tracked and recorded as teaching tracking data, which should include the moving trajectory of the reference point and a time axis. It is understood that in other embodiments the hand, for instance, may be selected as the target instead.
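For illustration only (the class and field names below are assumptions, not part of the disclosure), the teaching tracking data of step 101 can be pictured as one time-stamped trajectory per reference point; a minimal Python sketch:

```python
from dataclasses import dataclass, field

@dataclass
class ReferencePointTrack:
    """Teaching tracking data for one reference point: a moving
    trajectory paired with a time axis, as required by step 101."""
    name: str                                        # e.g. "A", "B", "C"
    timestamps: list = field(default_factory=list)   # seconds since start
    positions: list = field(default_factory=list)    # (x, y, z) samples

    def record(self, t, xyz):
        self.timestamps.append(t)
        self.positions.append(xyz)

# One track per reference point marked on the writing brush.
tracks = {p: ReferencePointTrack(p) for p in ("A", "B", "C")}
tracks["A"].record(0.0, (0.10, 0.20, 0.05))
tracks["A"].record(0.5, (0.11, 0.21, 0.04))
```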
102. Analyze each piece of teaching tracking data and fit it, through the relationship between movement and time, to at least two functions: a posture function describing the change of the target's posture over time, and a displacement function describing the change of the target's position over time. In this step, the teaching tracking data obtained in step 101 are processed and analyzed. In this embodiment, as shown in fig. 1, three points A, B, and C on the writing brush are collected as reference points, where point B is the rotation centre of the brush during writing. A certain interval is set as the analysis unit; for example, with t = 0.5 s, the position change of each reference point every 0.5 s is analyzed, and the data are fitted to at least two functions: a posture function and a displacement function. During the action, the change of the target's own posture, for example a rotation by a certain angle about the vertical direction, is described by the posture function. In the displacement function, the target is treated as a particle, and the function describes the change of the target's position, such as moving from point A to point B and then rising to point C. In other embodiments the number of functions may be increased, for example with an action function describing that at some specific time t an output signal triggers a specified operation such as welding or pressing. It should be understood that if the teaching action involves no change of posture but only a change of displacement, the posture function is fitted as a constant function with value 0; conversely, if it involves only a change of posture and no displacement, the displacement function is fitted as a constant function with value 0. Both cases are evidently covered by having at least the two functions. The position information is recorded together with the time point corresponding to it.
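The patent does not prescribe a particular fitting technique; as a sketch under the assumption that a smoothing interpolant is acceptable, the displacement function of step 102 could be fitted as follows, using synthetic samples for point B at the 0.5 s analysis unit. The posture function (angle versus time) could be fitted the same way from the swing angles computed below.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Synthetic teaching trace for rotation-centre point B, one sample
# every 0.5 s (the analysis unit chosen in step 102).
t = np.arange(0.0, 3.0, 0.5)                # time axis (s)
pos_B = np.column_stack([0.02 * t,          # x drifts slowly
                         0.01 * np.sin(t),  # y wiggles
                         0.05 - 0.01 * t])  # z descends toward the paper

# Displacement function: the target treated as a particle,
# position as a callable function of time t.
displacement = CubicSpline(t, pos_B)
print(displacement(1.25))                   # interpolated (x, y, z) at t = 1.25 s
```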
As shown in fig. 1 and fig. 2, in the present embodiment, point B is the rotation centre; that is, if the displacement of the writing brush is ignored, point B can be regarded as stationary during writing, so the change of point B over time can be used as the displacement function. From the change of the position of point A within time t and the relative distance between points A and B (the arm length l1 in the figure), the swing angle, i.e. the posture change, can be calculated. The specific calculation can take various forms. For example, let the distance between points A and B be l1 and the distance between points B and C be l2. The brush captured at two instants separated by time t is simplified into two line segments, t1 and t2, whose B points coincide. The distance X1 between the A points of t1 and t2 is computed; from X1 and l1 the angle α can be calculated through the law of cosines, and the change of α with respect to time t is then the posture function. In the same way, the distance X2 between the C points of t1 and t2, together with l2, gives the angle β by the law of cosines; β should theoretically equal α, so the two can serve as mutual verification data in the calculation.
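A minimal sketch of that law-of-cosines step, with illustrative numbers (the specific lengths are assumptions): the two snapshots form an isosceles triangle with sides l1, l1 and base X1, so cos(α) = (l1² + l1² − X1²) / (2·l1²) = 1 − X1²/(2·l1²).

```python
import math

def swing_angle(arm_len, chord):
    """Swing angle between two snapshots whose B points coincide:
    isosceles triangle with sides arm_len (l1) and base chord (X1);
    apex angle from the law of cosines."""
    cos_a = 1.0 - chord ** 2 / (2.0 * arm_len ** 2)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))

l1, l2 = 0.12, 0.08         # assumed A-B and B-C distances on the brush (m)
X1, X2 = 0.0200, 0.0133     # measured chords between the A points and C points

alpha = swing_angle(l1, X1)
beta = swing_angle(l2, X2)  # should roughly equal alpha: cross-check data
print(alpha, beta)          # about 9.5 degrees each
```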
The posture function and the displacement function both contain the same variable: time t. In addition, the speed and acceleration of the robot at a specific position/time can be obtained from the increments per unit time, serving as reference data for controlling the robot.
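For example, under the assumption that the sampled positions are available as arrays, the per-unit-time increments can be turned into speed and acceleration with finite differences; a sketch:

```python
import numpy as np

# Speed and acceleration recovered from position increments per unit
# time (the 0.5 s analysis interval used above).
t = np.arange(0.0, 3.0, 0.5)
x = 0.02 * t                 # one coordinate of the displacement function
v = np.gradient(x, t)        # speed at each sample
a = np.gradient(v, t)        # acceleration at each sample
print(v, a)                  # reference data for controlling the robot
```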
During the writing process of the brush, the displacement function records the pen's movement along the three spatial coordinate directions as time t changes. The coordinate changes on the x axis and y axis describe aspects of the writing such as the rough stroke direction, the character size, and the writing range. The coordinate change on the z axis can be used approximately as a function describing stroke thickness: taking the paper surface as z = 0, the closer the z coordinate is to 0, the greater the pressure on the pen tip, the thicker the stroke, and the greater the corresponding writing force; the larger the z coordinate, the smaller the pressure on the pen tip and the thinner the stroke. The parts of the displacement function whose z coordinate exceeds a threshold indicate that the pen tip has left the paper at those moments; they are marked as invalid writing operations and recorded only as displacement operations that position the moving pen.
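A sketch of that pen-up/pen-down rule (the threshold value is an assumption made for illustration):

```python
import numpy as np

Z_LIFT = 0.01   # assumed threshold (m): above this, the tip has left the paper

def classify_strokes(z):
    """Split samples into valid writing (z below threshold; thicker the
    closer to the paper at z = 0) and invalid pen moves kept only to
    position the pen."""
    writing = z <= Z_LIFT
    thickness = np.where(writing, np.clip(1.0 - z / Z_LIFT, 0.0, 1.0), 0.0)
    return writing, thickness

z = np.array([0.002, 0.004, 0.015, 0.012, 0.003, 0.001])
writing, thickness = classify_strokes(z)
print(writing)  # [ True  True False False  True  True]
```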
The posture function records, as time t changes, the pen's own rotation about the three axes x, y, and z. It can be used to describe the change of the pen holder's posture during writing; in terms of calligraphy, the posture change of the pen can be understood as the brushwork of the pen tip.
103. Generate a control program so that the robot can reproduce the teaching action according to the posture function and the displacement function. The robot imitates the action by running the drive program and moving in the desired manner: the movement of the execution end over time follows the displacement function, and while moving along the displacement function the execution end's own posture change follows the posture function, so that the teaching action of the target is imitated.
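A control loop of this kind might look as follows; everything here, the DummyRobot class and the move_to interface in particular, is an illustrative assumption rather than a real robot API:

```python
import numpy as np

class DummyRobot:
    """Stand-in for the execution end; a real driver would go here."""
    def move_to(self, position, orientation):
        pass  # send position/orientation setpoints to the manipulator

def run_control_program(displacement, posture, robot, dt=0.01, duration=3.0):
    # Step 103: the execution end's position follows the displacement
    # function while its own orientation follows the posture function.
    for t in np.arange(0.0, duration, dt):
        robot.move_to(position=displacement(t), orientation=posture(t))

run_control_program(lambda t: (0.02 * t, 0.0, 0.05),  # toy displacement function
                    lambda t: (0.0, 0.0, 5.0 * t),    # toy posture (deg about z)
                    DummyRobot())
```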
As can be seen from the above process, the deep learning tracking acquisition method based on target posture provided in this embodiment first determines a plurality of reference points, tracks the motion of each reference point during the teaching action, and acquires teaching tracking data; the teaching action of the target is then described by two functions of the time variable. Since the posture function records only the change of the target's own posture with respect to time, and the displacement function treats the target as a particle and records only the change of its position with respect to time, the action data are simplified. The data are inversely solved and fitted into the two functions, a control program is generated from them, and by running the control program the robot can imitate the action process of the target. Because the action data are simplified, the point-extraction computation needed to imitate a complex teaching action is reduced, so a teaching action can be imitated with higher fidelity; and because no manual judgment is needed to simplify the action, the imitation learning process requires little manual participation and has a high degree of intelligence.
Preferably, as shown in fig. 1, after step 103, the following steps are further included:
104. Drive the robot to perform the imitation action according to the control program.
105. Track the imitation action process from multiple directions and record imitation tracking data.
106. Compare the imitation tracking data with the teaching tracking data, and correct the control program.
Because the generated control program is based only on data acquisition and automatic calculation, the executed action does not necessarily fully meet the imitation requirement. The control program is therefore trial-run in step 104, imitation tracking data are recorded in step 105 in the same manner as in step 101, the imitation tracking data are compared with the teaching tracking data, and the control program is then revised, forming a closed control loop, namely the robot learning process.
The comparison can be performed in various ways. For example, the process of steps 101 to 103 is repeated with the execution end of the robot as the acquisition target of the teaching action, a new displacement function and a new posture function are generated, and they are compared with the displacement function and posture function generated from the original action data to determine whether any deviation exceeds a threshold. Alternatively, the imitation image information is compared directly with the teaching image: the transparency is adjusted, the two images are superimposed, and the errors in the images are compared to judge the similarity. If an error exceeding the threshold is found, the direction and magnitude of the correction are determined, and the control program is revised accordingly.
Steps 103 to 106 may be repeated. After multiple trial runs, acquisitions, and comparisons, that is, multiple iterations of learning, the action error converges to within the threshold and the learning process is judged to be complete.
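The closed loop of steps 103 to 106 can be sketched as follows; the tolerance, the toy trajectories, and the simple proportional correction are all assumptions made only to show the convergence test:

```python
import numpy as np

THRESHOLD = 0.005   # assumed convergence tolerance (m)
MAX_ITERS = 20

def deviation(teach, sim):
    """Maximum point-wise distance between the teaching and imitation
    trajectories sampled on the same time axis (step 106)."""
    return float(np.max(np.linalg.norm(teach - sim, axis=1)))

teach = np.array([[0.00, 0.0, 0.0], [0.01, 0.0, 0.0], [0.02, 0.0, 0.0]])
sim = teach + 0.02                     # toy initial imitation error

for iteration in range(MAX_ITERS):     # steps 103-106 repeated
    err = deviation(teach, sim)
    if err <= THRESHOLD:                # error converged: learning complete
        break
    sim = sim + 0.5 * (teach - sim)     # toy correction toward the teaching data
print(iteration, err)
```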
This embodiment also provides a learning system for controlling the robot to learn the teaching action. The robot comprises a motion tracking unit, a data analysis unit, a drive control unit, and an execution end. The motion tracking unit has a plurality of trackers and can continuously track the reference points and record teaching tracking data during the teaching action; the data analysis unit receives the teaching tracking data and analyzes it to obtain the motion functions of the teaching action, namely the displacement function and posture function described above; and the drive control unit receives the motion functions and generates a control program to control the execution end to perform the imitation action.
By collecting the teaching action, the learning system in this embodiment can deconstruct and analyze the motion functions by itself and then generate the control program; after the control program is run, the execution end performs the imitation action so as to imitate the teaching action. Because the action data are simplified, the point-extraction computation needed to imitate a complex teaching action is reduced, a teaching action can be imitated with higher fidelity, no manual judgment is needed to simplify the action, the imitation learning process requires little manual participation, and the degree of intelligence is high.
Preferably, the trackers are a plurality of tracking cameras. Each tracking camera selects a corresponding reference point for rotational tracking, and the rotation information of the tracking camera is recorded as teaching tracking data. A tracking camera is a special camera that can rotate to adjust its shooting angle and, by means of a control program and image recognition, capture the motion of a target so that the target always stays within the shooting range. In this embodiment the reference point is the target that the tracking camera must capture; as the teaching action proceeds, the camera rotates to follow the reference point, and its rotation angle and speed form the rotation information, which is recorded as teaching tracking data. From the distance between the tracking camera and the target, the displacement data of the reference point can be recovered.
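As a sketch of that last step, under the assumption that each camera's pan/tilt angles and its distance to the reference point are known, the reference point's position can be recovered by a spherical-to-Cartesian conversion (a calibrated multi-camera triangulation would replace the assumed known distance in practice):

```python
import math

def camera_ray_to_point(pan, tilt, distance, cam_pos=(0.0, 0.0, 0.0)):
    """Recover a reference point's position from a tracking camera's
    rotation information (pan/tilt, radians) and its distance to the
    target."""
    x = cam_pos[0] + distance * math.cos(tilt) * math.cos(pan)
    y = cam_pos[1] + distance * math.cos(tilt) * math.sin(pan)
    z = cam_pos[2] + distance * math.sin(tilt)
    return (x, y, z)

print(camera_ray_to_point(math.radians(30), math.radians(10), 1.5))
```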
Preferably, the motion tracking unit has an effective acquisition space, and the tracking cameras are distributed in the space around the effective acquisition space and aimed at it. The teaching action should be performed within the effective acquisition space to facilitate tracking and data acquisition.
Furthermore, a marker is arranged on the target so that the tracking camera can identify it conveniently. The marker may be a dot of a special colour, a pasted pattern of a special shape, or an attached component that emits special light or a special electromagnetic signal. In other embodiments, the reference points may be handled purely as digital information within the system, as virtual concepts, without points actually marked on the target.
Preferably, the action of the execution end always remains within the effective acquisition space. In practical applications, the execution end may be moved into the effective acquisition space after the teaching action is completed, or several effective acquisition spaces may be set up so that the teaching action and the imitation action of the execution end are captured separately.
Preferably, the learning system further includes a learning unit for correcting the control program according to the action of the execution end; its learning principle is the same as the detect, modify, and re-run process described above and is not repeated here.
This embodiment also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above deep learning tracking acquisition method based on target posture.
The present invention is not limited to the above preferred embodiments, and any modifications, equivalent substitutions and improvements made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (9)

1. A deep learning tracking acquisition method based on target posture, used for controlling a robot to learn teaching actions, characterized by comprising the following steps:
selecting a plurality of reference points of the teaching action within an effective acquisition space, rotationally tracking the motion of each reference point with a plurality of tracking cameras arranged around the effective acquisition space, and recording teaching tracking data, the rotation information of the tracking cameras being recorded as the teaching tracking data;
analyzing each piece of said teaching tracking data and fitting it, through the relationship between movement and time, to at least two functions: a posture function describing the change of the target's posture over time, and a displacement function describing the change of the target's position over time;
and generating a control program so that the robot can reproduce the teaching action according to the posture function and the displacement function.
2. The deep learning tracking acquisition method based on target posture as claimed in claim 1, further comprising, after generating the control program, the following steps:
driving the robot to perform the imitation action according to the control program;
tracking the imitation action process from multiple directions and recording imitation tracking data;
and comparing the imitation tracking data with the teaching tracking data, and correcting the control program.
3. The deep learning tracking acquisition method based on target posture as claimed in claim 2, wherein after multiple iterations of learning the action error converges to within a threshold and the learning process is judged to be complete.
4. A learning system for controlling a robot to learn the teaching action of a target, the robot having an execution end, comprising:
a motion tracking unit having a plurality of motion trackers, each tracker continuously tracking a corresponding reference point during the teaching action and recording teaching tracking data, the reference point being located within an effective acquisition space; the motion trackers are tracking cameras arranged around the effective acquisition space, and the rotation information of the tracking cameras is recorded as teaching tracking data;
a data analysis unit which receives the teaching tracking data and analyzes it to obtain the motion functions of the teaching action;
and a drive control unit which receives the motion functions, generates a drive program, and controls the execution end to perform the imitation action.
5. The learning system as claimed in claim 4, wherein the motion tracking unit has an effective acquisition space, and the tracking cameras are distributed in the space around the effective acquisition space.
6. The learning system of claim 4, further comprising a marker mounted on the target for recognition by the tracking cameras.
7. The learning system of claim 5, wherein the actions of the execution end are always located within the effective acquisition space.
8. The learning system of claim 6, further comprising a learning section for modifying the control program according to the action of the execution end.
9. A storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the deep learning tracking acquisition method based on target posture according to any one of claims 1 to 3.
CN201811467812.1A 2018-12-03 2018-12-03 Deep learning tracking acquisition method based on target posture, learning system and storage medium Active CN109685828B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811467812.1A CN109685828B (en) 2018-12-03 2018-12-03 Deep learning tracking acquisition method based on target posture, learning system and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811467812.1A CN109685828B (en) 2018-12-03 2018-12-03 Deep learning tracking acquisition method based on target posture, learning system and storage medium

Publications (2)

Publication Number Publication Date
CN109685828A CN109685828A (en) 2019-04-26
CN109685828B (en) 2021-04-16

Family

ID=66186057

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811467812.1A Active CN109685828B (en) 2018-12-03 2018-12-03 Deep learning tracking acquisition method based on target posture, learning system and storage medium

Country Status (1)

Country Link
CN (1) CN109685828B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110039540A (en) * 2019-05-27 2019-07-23 聊城大学 A kind of service robot paths planning method that multiple target optimizes simultaneously

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106182003A (en) * 2016-08-01 2016-12-07 清华大学 A kind of mechanical arm teaching method, Apparatus and system
CN107225573A (en) * 2017-07-05 2017-10-03 上海未来伙伴机器人有限公司 The method of controlling operation and device of robot
CN107544311A (en) * 2017-10-20 2018-01-05 高井云 Industrial machine human hand holds the servicing unit and method of teaching
CN107616837B (en) * 2017-10-27 2020-02-07 清华大学 Visual servo control intramedullary nail distal locking screw nailing method and system

Also Published As

Publication number Publication date
CN109685828A (en) 2019-04-26

Similar Documents

Publication Publication Date Title
CN109590986B (en) Robot teaching method, intelligent robot and storage medium
CN109993073B (en) Leap Motion-based complex dynamic gesture recognition method
Riley et al. Enabling real-time full-body imitation: a natural way of transferring human movement to humanoids
CN111872934B (en) Mechanical arm control method and system based on hidden semi-Markov model
JP2022542241A (en) Systems and methods for augmenting visual output from robotic devices
CN113524157A (en) Robot system, method, robot arm, and storage medium for configuring copy function
JPWO2003019475A1 (en) Robot device, face recognition method, and face recognition device
KR102353637B1 (en) Method and apparatus of analyzing golf motion
CN109590987B (en) Semi-intelligent teaching learning method, intelligent robot and storage medium
CN106020494B (en) Three-dimensional gesture recognition method based on mobile tracking
CN107363834A (en) A kind of mechanical arm grasping means based on cognitive map
Skoglund et al. Programming by demonstration of pick-and-place tasks for industrial manipulators using task primitives
CN109676583B (en) Deep learning visual acquisition method based on target posture, learning system and storage medium
CN112109074A (en) Robot target image capturing method
CN108044625A (en) A kind of robot arm control method based on the virtual gesture fusions of more Leapmotion
CN109685828B (en) Deep learning tracking acquisition method based on target posture, learning system and storage medium
CN111208730A (en) Rapid terminal sliding mode impedance control algorithm
US20230173660A1 (en) Robot teaching by demonstration with visual servoing
Arsenic Developmental learning on a humanoid robot
Vecerik et al. Robotap: Tracking arbitrary points for few-shot visual imitation
Gutzeit et al. The besman learning platform for automated robot skill learning
CN113246131B (en) Motion capture method and device, electronic equipment and mechanical arm control system
Yu et al. Gamma: Generalizable articulation modeling and manipulation for articulated objects
CN117340929A (en) Flexible clamping jaw grabbing and disposing device and method based on three-dimensional point cloud data
CN116749233A (en) Mechanical arm grabbing system and method based on visual servoing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant