CN110253583A - Human body posture robot teaching method and device based on wearable teaching clothes video - Google Patents

Human body posture robot teaching method and device based on wearable teaching clothes video

Info

Publication number
CN110253583A
CN110253583A (application CN201910590614.2A)
Authority
CN
China
Prior art keywords
teaching
robot
human body
video
multidimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910590614.2A
Other languages
Chinese (zh)
Other versions
CN110253583B (en)
Inventor
彭云峰
郭秀萍
郭燕妮
翟雪迎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology Beijing USTB
Original Assignee
University of Science and Technology Beijing USTB
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Science and Technology Beijing USTB filed Critical University of Science and Technology Beijing USTB
Priority to CN201910590614.2A
Publication of CN110253583A
Application granted
Publication of CN110253583B
Active legal status
Anticipated expiration legal status

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/0081Programme-controlled manipulators with master teach-in means
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1628Programme controls characterised by the control loop
    • B25J9/163Programme controls characterised by the control loop learning, adaptive, model based, rule based expert control

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Numerical Control (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a human body posture robot teaching method and device based on wearable teaching clothes video. A video of the demonstrator's teaching process is acquired; video object recognition and a video frame synchronization and fusion algorithm are then used to obtain the spatio-temporal point evolution sequence of the human skeleton joints and the multidimensional dynamics parameters, which are then converted into robot local operation parameters or executable code. Through a simple demonstration by a real person, the robot can learn the action process and its key points and reproduce the user's personalized work task, achieving the inventive goal of letting the user operate and use a service robot in a personalized way without having to master complex programming.

Description

Human body posture robot teaching method and device based on wearable teaching clothes video
Technical field
The invention belongs to the technical field of computer vision recognition of human body posture and, more specifically, relates to a human body posture robot teaching method and device based on wearable teaching clothes video.
Background technique
The actions of a robot are usually realized by professional programmers who write execution functions and logic programs; the program is downloaded to the robot, and the robot then executes it, as with assembly robots on a production line. In the future, however, service robots will see wide demand in application fields with rich individual business needs, such as housekeeping robots and entertainment performance robots, yet ordinary users usually cannot master complex programming, so the use of service robots is subject to certain limitations.
Summary of the invention
The object of the invention is to overcome the deficiencies of the prior art and to provide a human body posture robot teaching method and device based on wearable teaching clothes video, so that a user can operate and use a service robot in a personalized way without needing to master complex programming.
To achieve the above object, the human body posture robot teaching method based on wearable teaching clothes video according to the invention is characterized by comprising the following steps:
(1) Wearable teaching clothes video acquisition
The demonstrator wears the teaching clothes and performs the demonstration; the demonstration process video of the demonstrator is acquired with a monocular or multi-camera system, and this video is the wearable teaching clothes video;
(2) Fine recognition of human body posture
Using video object recognition and a monocular/multi-camera video frame synchronization and fusion algorithm, the wearable teaching clothes video is processed to obtain the spatio-temporal point evolution sequence of the skeleton joints (multidimensional spatio-temporal posture data) that accurately characterizes the posture change process of the demonstrator, together with the multidimensional dynamics parameters;
(3) Conversion to robot local operation parameters
The spatio-temporal point evolution sequence of the skeleton joints (multidimensional spatio-temporal posture data) and the multidimensional dynamics parameters obtained in step (2) are converted into robot local operation parameters or executable code (a robot operation parameter instruction sequence).
A human body posture robot teaching apparatus based on wearable teaching clothes video is characterized by comprising:
Teaching clothes, made of stretchable material so that they can be worn close to the body, with mutually distinguishable joint color-band rings of graded width or graded color at the positions close to the limb joints, and with color-band strips on the clothes that connect the joint color-band rings along the key muscles (tendons) of the body, used to finely characterize the skeleton joint motion and muscle deformation of the demonstrator;
A teaching video acquisition module: depending on the application scenario, this may be a camera mounted on the robot body, or one or more external cameras independent of the robot and arranged around the demonstrator, used to acquire the continuous video of the demonstrator wearing the teaching clothes, i.e. the wearable teaching clothes video; the cameras connect to video capture cards by wire or wirelessly, and these cards may be installed in the robot body or in a local computer or edge computing server independent of the robot; the module has a multi-channel interface supporting multiple cameras and performs time synchronization and sampling of the multiple video streams to form image sequences;
A fine human body posture recognition module: a group of computer vision object recognition and image fusion algorithms, running on the robot host or on a local/edge computing server, which processes the wearable teaching clothes video acquired by the teaching video acquisition module and obtains the spatio-temporal point evolution sequence of the skeleton joints (multidimensional spatio-temporal posture data) that accurately characterizes the posture change process of the demonstrator, together with the multidimensional dynamics parameters;
A robot operation number conversion module: a group of functional software modules, running on the robot host or on a local or edge computing server, which converts the obtained spatio-temporal point evolution sequence of the skeleton joints (multidimensional spatio-temporal posture data) and the multidimensional dynamics parameters into robot local operation parameters or executable code; if the processing is done on a local or edge computing server, the robot local operation parameters or executable code are downloaded to the robot with a provided download tool.
The object of the present invention is achieved in this way.
The human body posture robot teaching method and device based on wearable teaching clothes video according to the invention acquire the demonstration process video of the demonstrator, obtain the spatio-temporal point evolution sequence of the skeleton joints and the multidimensional dynamics parameters through video object recognition and a video frame synchronization and fusion algorithm, and then convert them into robot local operation parameters or executable code. In this way, through a simple demonstration by a real person, the robot can learn the action process and its key points and reproduce the user's personalized work task, achieving the inventive goal of letting the user operate and use a service robot in a personalized way without having to master complex programming.
Description of the drawings
Fig. 1 is a flow chart of a specific embodiment of the human body posture robot teaching method based on wearable teaching clothes video according to the invention;
Fig. 2 is a schematic diagram of a specific embodiment of the teaching clothes;
Fig. 3 is a schematic diagram of a specific embodiment of the human body posture robot teaching apparatus based on wearable teaching clothes video according to the invention.
Specific embodiment
Specific embodiments of the invention are described below with reference to the accompanying drawings so that those skilled in the art can better understand the invention. It should be noted in particular that, in the following description, detailed descriptions of known functions and designs are omitted where they might obscure the main content of the invention.
Embodiment
Fig. 1 is a flow chart of a specific embodiment of the human body posture robot teaching method based on wearable teaching clothes video according to the invention.
In this embodiment, as shown in Fig. 1, the human body posture robot teaching method based on wearable teaching clothes video according to the invention comprises the following steps:
Step S1: wearable teaching clothes video acquisition
The demonstrator wears the teaching clothes and performs the demonstration; the demonstration process video of the demonstrator is acquired with a monocular or multi-camera system, and this video is the wearable teaching clothes video.
In this embodiment, as shown in Fig. 2, the teaching clothes worn by the demonstrator are made of a highly elastic fabric that fits closely to the body surface; the positions on the clothes that lie close to the limb joints carry mutually distinguishable joint color-band rings of graded width or graded color, and the clothes carry color-band strips that connect the joint color-band rings along the key muscles (tendons) of the body, used to finely characterize the skeleton joint motion and muscle deformation of the demonstrator. Meanwhile, the demonstration process video of the demonstrator is captured synchronously with low-cost monocular or multiple cameras.
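The joint color-band rings can be located in each video frame by color segmentation. The following Python sketch, given only for illustration, assumes hypothetical HSV ranges for two ring colors and returns the pixel centroid of each ring; it is one plausible front end for the recognition step described below, not the patented algorithm itself.

```python
import cv2
import numpy as np

# Hypothetical HSV ranges for two joint ring colors (assumed values for the example).
RING_COLORS = {
    "elbow": ((35, 80, 80), (85, 255, 255)),    # greenish ring
    "wrist": ((100, 80, 80), (130, 255, 255)),  # bluish ring
}

def detect_ring_centroids(frame_bgr):
    """Return the pixel centroid of each colored joint ring found in a frame."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    centroids = {}
    for name, (lo, hi) in RING_COLORS.items():
        mask = cv2.inRange(hsv, np.array(lo, dtype=np.uint8), np.array(hi, dtype=np.uint8))
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        m = cv2.moments(mask)
        if m["m00"] > 0:  # ring visible in this frame
            centroids[name] = (m["m10"] / m["m00"], m["m01"] / m["m00"])
    return centroids
```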
Step S2: fine recognition of human body posture
Using video object recognition and a monocular/multi-camera video frame synchronization and fusion algorithm, the wearable teaching clothes video is processed to obtain the spatio-temporal point evolution sequence of the skeleton joints that accurately characterizes the posture change process of the demonstrator, together with the multidimensional dynamics parameters.
Based on the bone and muscle motion laws and principles of human morphology, a multidimensional, multi-degree-of-freedom spatio-temporal coordinate system of linked motion is established. In this multidimensional spatio-temporal coordinate system, the spatial displacement and distortion changes of the color-band strips and color-band rings in the video are analyzed and, based on the principles of human kinematics, the geometric state measures of the joints, bones and muscles during the demonstrated action are computed and deduced, together with the kinematic and dynamic mathematical model expression and its model parameters. In this way the evolution of the human posture is finely depicted, together with the states and deformations of the bone rigid bodies, joint deformable bodies and muscle deformable bodies during this process, and the spatio-temporal point evolution sequence of the skeleton joints and the multidimensional dynamics parameters are obtained.
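As a hedged illustration of the geometric state measures mentioned above, the sketch below computes a joint angle and its angular velocity from three tracked marker points per frame (for example shoulder, elbow and wrist ring centroids). The three-point angle convention and the finite-difference velocity are assumptions made for the example, not the exact model of the invention.

```python
import numpy as np

def joint_angle(p_prox, p_joint, p_dist):
    """Angle (radians) at p_joint formed by the segments to p_prox and p_dist."""
    u = np.asarray(p_prox, float) - np.asarray(p_joint, float)
    v = np.asarray(p_dist, float) - np.asarray(p_joint, float)
    cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.arccos(np.clip(cos_a, -1.0, 1.0)))

def angle_sequence_and_velocity(tracked, dt):
    """tracked: list of (shoulder, elbow, wrist) point tuples per frame; dt: frame period (s)."""
    angles = np.array([joint_angle(s, e, w) for s, e, w in tracked])
    velocity = np.gradient(angles, dt)  # finite-difference angular velocity
    return angles, velocity
```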
In this embodiment, the multidimensional spatio-temporal coordinate system takes a certain point or group of points of the human skeletal system or body form as reference anchor points, and may also construct reference anchor points from certain fixed points of the workbench or environment at the demonstration site (for example, a fixed article on the demonstration workbench, and the shoulder joint or elbow joint of the demonstrator, can simultaneously serve as coordinate origin anchors of the multidimensional spatio-temporal coordinate system). The multidimensional spatio-temporal coordinate system fusing the human form anchor points and the work site anchor points has dimension N (N >= 3). Joint motion is characterized by free motion in two orthogonal dimensions, a transverse-section degree of freedom and a vertical-section degree of freedom. Introducing the acquisition frame rate F of the wearable teaching clothes video of the demonstration process, and the number of joints M (M >= 1) considered in the specific teaching service demand, an N x M x 2 x F multidimensional spatio-temporal coordinate system is constructed. (Note: the anchor points of the invention are to be understood as the origin of the human global coordinate system, and the local origins such as the shoulder and hip relative to that origin, used to evaluate the rigid-body and deformable-body motion scales of the human posture in the multidimensional spatio-temporal coordinate system of the invention.)
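A minimal sketch of packing anchor-referenced joint data into the N x M x 2 x F layout named above. It assumes one fused anchor point per frame and one observed 3D point per joint for each of the two sectional degrees of freedom; that packing convention is an illustrative assumption, not the only possible one.

```python
import numpy as np

def to_anchor_frame(point_xyz, anchor_xyz):
    """Express one 3D point relative to a chosen anchor point."""
    return np.asarray(point_xyz, float) - np.asarray(anchor_xyz, float)

def build_spacetime_tensor(frames, anchor_per_frame, n_dims=3):
    """Pack anchor-relative joint coordinates into an (N, M, 2, F) array.

    frames: list of dicts {joint_name: (transverse_point, sagittal_point)},
            one dict per video frame, each point an (x, y, z) tuple.
    anchor_per_frame: list of (x, y, z) anchor points, one per frame.
    """
    joint_names = sorted(frames[0].keys())
    F, M = len(frames), len(joint_names)
    tensor = np.zeros((n_dims, M, 2, F))
    for f, (frame, anchor) in enumerate(zip(frames, anchor_per_frame)):
        for m, name in enumerate(joint_names):
            for d, point in enumerate(frame[name]):  # d = 0: transverse, d = 1: sagittal
                tensor[:, m, d, f] = to_anchor_frame(point, anchor)[:n_dims]
    return tensor
```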
The wearable teaching video frame rate F is a variable; for example, F = 30 means that 30 frame images are acquired per second. The F frames may be acquired at equal time intervals, or at unequal intervals according to how fast the details of the posture evolution change, which is reflected in whether the sampling instants on the time axis are uniformly spaced.
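A small sketch of such non-uniform sampling: a frame is kept whenever the tracked keypoints have moved more than a threshold since the last kept frame, so fast posture changes are sampled densely and slow ones sparsely. The displacement metric and the threshold are illustrative assumptions.

```python
import numpy as np

def adaptive_sample(frame_keypoints, timestamps, min_move=5.0):
    """Keep frames whose keypoints moved at least `min_move` pixels since the last kept frame.

    frame_keypoints: (F, K, 2) array of K tracked 2D keypoints per frame.
    timestamps: length-F array of capture times in seconds.
    Returns the indices of the kept frames and their timestamps; frame 0 is always kept.
    """
    kept = [0]
    for i in range(1, len(frame_keypoints)):
        move = np.linalg.norm(frame_keypoints[i] - frame_keypoints[kept[-1]], axis=1).mean()
        if move >= min_move:
            kept.append(i)
    return np.asarray(kept), np.asarray(timestamps)[kept]
```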
The multidimensional spatio-temporal coordinate system establishes a multidimensional storage model that finely depicts the evolution of human posture. In this model, the computer storage device holds the positions, joint angles and velocities of the joint deformable bodies identified from the video sequence, the projected distances of the bone rigid bodies, and the angles, velocities and projected distances of the muscle deformable bodies; these variable values may be absolute values in the described spatio-temporal coordinate system or relative values between adjacent image periods of the time sequence. From a mathematical point of view, the storage model is a multi-layer nested matrix whose atomic elements may be multidimensional vectors; from a computer processing point of view, it may be a multi-layer nested array or list structure, or a data table form of a database.
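The nested storage model could be represented, for example, with nested data structures such as the Python dataclasses below; the field names are hypothetical and simply mirror the quantities listed above.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class JointState:
    """Per-frame state of one joint (deformable body) and its adjacent bone and muscle."""
    position: List[float]        # joint position in the spatio-temporal frame
    joint_angle: float           # joint angle (rad)
    joint_velocity: float        # joint angular velocity (rad/s)
    bone_projection: float       # projected distance of the bone rigid body
    muscle_angle: float          # muscle deformable-body angle
    muscle_velocity: float       # muscle deformable-body velocity
    relative: bool = False       # True if values are relative to the previous frame

@dataclass
class PostureFrame:
    timestamp: float
    joints: dict = field(default_factory=dict)   # joint name -> JointState

@dataclass
class PostureSequence:
    """Multi-layer nested storage model: a time-ordered list of posture frames."""
    frames: List[PostureFrame] = field(default_factory=list)
```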
The multidimensional dynamics parameters are a group of equation systems, mixing linear, nonlinear and differential equations, that characterize the relationships between the associated joint deformable bodies, bone rigid bodies and muscle deformable bodies constrained by the principles of human morphology and the human posture mechanism. The changes of the equation system reflect the association and constraint between the moving rigid bodies and deformable bodies in the time-sequence evolution of the human posture. The multidimensional dynamics parameters can be obtained by analysis and statistical extraction from the teaching video image sequence with reference to human morphology and human kinematics theory, and are in turn used to reversely correct poorly estimated parameter values from the teaching video processing.
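For illustration only, the sketch below fits one such constraint relationship, a linear coupling between two joint-angle trajectories, by least squares and then uses the fitted relation to flag and correct outlying estimates, in the spirit of the reverse correction mentioned above. The linear form and the outlier tolerance are assumptions made for the example, not the equation system of the invention.

```python
import numpy as np

def fit_linear_coupling(theta_a, theta_b):
    """Fit theta_b ~ k * theta_a + c over a demonstration, returning (k, c)."""
    A = np.vstack([theta_a, np.ones_like(theta_a)]).T
    (k, c), *_ = np.linalg.lstsq(A, theta_b, rcond=None)
    return k, c

def correct_outliers(theta_a, theta_b, k, c, tol=0.3):
    """Replace theta_b samples that violate the fitted coupling by more than tol (rad)."""
    predicted = k * np.asarray(theta_a, float) + c
    theta_b = np.asarray(theta_b, float).copy()
    bad = np.abs(theta_b - predicted) > tol
    theta_b[bad] = predicted[bad]
    return theta_b, bad
```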
Step S3: conversion to robot local operation parameters. The spatio-temporal point evolution sequence of the skeleton joints and the multidimensional dynamics parameters obtained in step (2) are converted into robot local operation parameters or executable code.
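As a hedged sketch of this conversion, the code below turns a joint-angle sequence into a time-stamped list of joint target commands; the record fields are hypothetical, since the concrete robot instruction format is left to manual conversion or to the adaptation and compilation middleware described below.

```python
def to_operation_sequence(joint_angles, timestamps, joint_names):
    """Convert demonstrated joint angles into a robot operation parameter instruction list.

    joint_angles: iterable of per-frame angle tuples, one value per joint in joint_names.
    timestamps: matching capture times in seconds.
    """
    commands = []
    for t, angles in zip(timestamps, joint_angles):
        commands.append({
            "time": float(t),                                        # seconds from start
            "targets": {name: float(a) for name, a in zip(joint_names, angles)},
        })
    return commands
```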
For the two typical service types of the teaching robot, namely the posture-twin service and the same-scene work service, the robot local operation parameters or executable code are designed separately.
In the posture-twin service, when the robot local operation parameter conversion is processed, the difference between the height and body shape of the demonstrator and the body shape and size of the robot is not considered; only under the constraint of the robot's motion capability (or the operation threshold of the robot's mechanical form), the stored data of the human posture evolution sequence and the multidimensional dynamics parameters obtained by video recognition of the demonstrator are reproduced as the robot's local operation number logic, and can be converted into robot-executable software code manually or by compilation middleware adapted to the robot. This approach is suitable for machine reproduction of the demonstrated posture.
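A minimal sketch of the motion-capability constraint in the posture-twin case: the demonstrated angle trajectory of one joint is clamped to hypothetical robot joint-range and speed limits before being emitted as local operation parameters.

```python
import numpy as np

def clamp_to_robot_limits(angles, dt, angle_limits, max_velocity):
    """Clamp a demonstrated angle trajectory to robot joint range and speed limits.

    angles: (F,) demonstrated angles for one joint (rad); dt: frame period (s);
    angle_limits: (lo, hi) in rad; max_velocity: rad/s.
    """
    lo, hi = angle_limits
    out = np.clip(np.asarray(angles, float), lo, hi)
    max_step = max_velocity * dt
    for i in range(1, len(out)):
        step = np.clip(out[i] - out[i - 1], -max_step, max_step)  # limit per-frame motion
        out[i] = out[i - 1] + step
    return out
```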
In the same-scene work service, when the robot local operation parameter conversion is processed, the difference between the height and body shape of the demonstrator and the body shape and size of the robot must be considered. With the fixed points of the on-site workbench and of the manipulated object as the reference system, and under the constraint of the robot's motion capability, the goal is to complete the work task on the same workbench and on the same operation object as the demonstrator; the human posture evolution sequence data and the multidimensional dynamics parameters learned from the demonstration are converted into robot local operation numbers, and can be converted into robot-executable software code manually or by compilation middleware adapted to the robot. Deformable and flexible functional mechanical components of the robot, if involved, are handled by the robot local operation parameter conversion of the posture-twin service.
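A hedged sketch of the workbench-referenced conversion: the demonstrated hand trajectory is re-expressed in a frame anchored to a fixed workbench point, so the positions on the shared workbench are preserved even though the robot's body differs from the demonstrator's, and each target is then checked against the robot's reach. The simple radius test stands in for the full motion-capability constraint and is an assumption of the example.

```python
import numpy as np

def to_workbench_frame(hand_traj, workbench_anchor):
    """Express a demonstrated hand trajectory relative to a fixed workbench point,
    so object positions on the shared workbench are preserved for the robot."""
    return np.asarray(hand_traj, float) - np.asarray(workbench_anchor, float)

def check_reachability(targets_wb, robot_base_wb, robot_reach):
    """Flag workbench-frame targets that lie within the robot's reachable radius."""
    dist = np.linalg.norm(np.asarray(targets_wb, float) - np.asarray(robot_base_wb, float), axis=1)
    return dist <= robot_reach
```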
As shown in Fig. 3, the corresponding human body posture robot teaching apparatus based on wearable teaching clothes video comprises:
Teaching clothes, made of stretchable material so that they can be worn close to the body, with mutually distinguishable joint color-band rings of graded width or graded color at the positions close to the limb joints, and with color-band strips on the clothes that connect the joint color-band rings along the key muscles (tendons) of the body, used to finely characterize the skeleton joint motion and muscle deformation of the demonstrator;
A teaching video acquisition module: depending on the application scenario, this may be a camera mounted on the robot body, or one or more external cameras independent of the robot and arranged around the demonstrator, used to acquire the continuous video of the demonstrator wearing the teaching clothes, i.e. the wearable teaching clothes video; the cameras connect to video capture cards by wire or wirelessly, and these cards may be installed in the robot body or in a local computer or edge computing server independent of the robot; the module has a multi-channel interface supporting multiple cameras and performs time synchronization and sampling of the multiple video streams to form image sequences;
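A minimal sketch of the time synchronization and sampling performed by the multi-channel interface, assuming each camera stream provides per-frame timestamps; frames from all streams are grouped onto a common sampling grid, with a tolerance that is an illustrative assumption.

```python
import numpy as np

def synchronize_streams(stream_timestamps, period, tol=0.01):
    """Group frames from several video streams onto a common sampling grid.

    stream_timestamps: one array of frame timestamps (seconds) per camera.
    period: target sampling period in seconds, e.g. 1.0 / 30.
    Returns, per sampling instant, a tuple of frame indices (one per stream),
    with None where a stream has no frame within `tol` seconds of that instant.
    """
    streams = [np.asarray(ts, dtype=float) for ts in stream_timestamps]
    t_end = min(ts[-1] for ts in streams)
    groups = []
    for t in np.arange(0.0, t_end + 1e-9, period):
        idxs = []
        for ts in streams:
            i = int(np.argmin(np.abs(ts - t)))
            idxs.append(i if abs(ts[i] - t) <= tol else None)
        groups.append(tuple(idxs))
    return groups
```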
A fine human body posture recognition module: a group of computer vision object recognition and image fusion algorithms, running on the robot host or on a local/edge computing server, which processes the wearable teaching clothes video acquired by the teaching video acquisition module and obtains the spatio-temporal point evolution sequence of the skeleton joints (multidimensional spatio-temporal posture data) that accurately characterizes the posture change process of the demonstrator, together with the multidimensional dynamics parameters;
A robot operation number conversion module: a group of functional software modules, running on the robot host or on a local or edge computing server, which converts the obtained spatio-temporal point evolution sequence of the skeleton joints (multidimensional spatio-temporal posture data) and the multidimensional dynamics parameters into robot local operation parameters or executable code; if the processing is done on a local or edge computing server, the provided download tool downloads the robot local operation parameters or executable code (the robot operation parameter instruction sequence) to the robot, which then reproduces the demonstrated operation.
Although illustrative specific embodiments of the invention have been described above to help those skilled in the art understand the invention, it should be clear that the invention is not limited to the scope of these specific embodiments. To a person of ordinary skill in the art, various changes are apparent as long as they remain within the spirit and scope of the invention as defined and determined by the appended claims, and all innovations and creations that make use of the inventive concept fall within the scope of protection.

Claims (7)

1. A human body posture robot teaching method based on wearable teaching clothes video, characterized by comprising the following steps:
(1) Wearable teaching clothes video acquisition
The demonstrator wears the teaching clothes and performs the demonstration; the demonstration process video of the demonstrator is acquired with a monocular or multi-camera system, and this video is the wearable teaching clothes video;
(2) Fine recognition of human body posture
Using video object recognition and a monocular/multi-camera video frame synchronization and fusion algorithm, the wearable teaching clothes video is processed to obtain the spatio-temporal point evolution sequence of the skeleton joints (multidimensional spatio-temporal posture data) that accurately characterizes the posture change process of the demonstrator, together with the multidimensional dynamics parameters;
(3) Conversion to robot local operation parameters
The spatio-temporal point evolution sequence of the skeleton joints (multidimensional spatio-temporal posture data) and the multidimensional dynamics parameters obtained in step (2) are converted into robot local operation parameters or executable code (a robot operation parameter instruction sequence).
2. The human body posture robot teaching method according to claim 1, characterized in that the teaching clothes worn by the demonstrator are made of a highly elastic fabric that fits closely to the body surface; the positions on the clothes that lie close to the limb joints carry mutually distinguishable joint color-band rings of graded width or graded color, and the clothes carry color-band strips that connect the joint color-band rings along the key muscles (tendons) of the body, used to finely characterize the skeleton joint motion and muscle deformation of the demonstrator.
3. The human body posture robot teaching method according to claim 1, characterized in that step (2), based on the bone and muscle motion laws and principles of human morphology, establishes a multidimensional, multi-degree-of-freedom spatio-temporal coordinate system of linked motion; in this multidimensional spatio-temporal coordinate system, the spatial displacement and distortion changes of the color-band strips and color-band rings in the video are analyzed and, based on the principles of human kinematics, the geometric state measures of the joints, bones and muscles during the demonstrated action are computed and deduced, together with the kinematic and dynamic mathematical model expression and its model parameters, so that the evolution of the human posture, as well as the states and deformations of the bone rigid bodies, joint deformable bodies and muscle deformable bodies during this process, is finely depicted, and the spatio-temporal point evolution sequence of the skeleton joints and the multidimensional dynamics parameters are obtained.
4. The human body posture robot teaching method according to claim 3, characterized in that the multidimensional spatio-temporal coordinate system takes a certain point or group of points of the human skeletal system or body form as reference anchor points, and may also construct reference anchor points from certain fixed points of the workbench or environment at the demonstration site (for example, a fixed article on the demonstration workbench, and the shoulder joint or elbow joint of the demonstrator, can simultaneously serve as coordinate origin anchors of the multidimensional spatio-temporal coordinate system); in addition, the multidimensional spatio-temporal coordinate system fusing the human form anchor points and the work site anchor points has dimension N (N >= 3); joint motion is characterized by free motion in two orthogonal dimensions, a transverse-section degree of freedom and a vertical-section degree of freedom.
5. The human body posture robot teaching method according to claim 1, characterized in that the multidimensional dynamics parameters are a group of equation systems, mixing linear, nonlinear and differential equations, that characterize the relationships between the associated joint deformable bodies, bone rigid bodies and muscle deformable bodies constrained by the principles of human morphology and the human posture mechanism; the changes of the equation system reflect the association and constraint between the moving rigid bodies and deformable bodies in the time-sequence evolution of the human posture; the multidimensional dynamics parameters can be obtained by analysis and statistical extraction from the teaching video image sequence with reference to human morphology and human kinematics theory, and are in turn used to reversely correct poorly estimated parameter values from the teaching video processing.
6. The human body posture robot teaching method according to claim 1, characterized in that in step (3), for the two typical service types of the teaching robot, namely the posture-twin service and the same-scene work service, the robot local operation parameters or executable code are designed separately.
In the posture-twin service, when the robot local operation parameter conversion is processed, the difference between the height and body shape of the demonstrator and the body shape and size of the robot is not considered; only under the constraint of the robot's motion capability (or the operation threshold of the robot's mechanical form), the stored data of the human posture evolution sequence and the multidimensional dynamics parameters obtained by video recognition of the demonstrator are reproduced as the robot's local operation number logic, and can be converted into robot-executable software code manually or by compilation middleware adapted to the robot; this approach is suitable for machine reproduction of the demonstrated posture.
In the same-scene work service, when the robot local operation parameter conversion is processed, the difference between the height and body shape of the demonstrator and the body shape and size of the robot must be considered; with the fixed points of the on-site workbench and of the manipulated object as the reference system, and under the constraint of the robot's motion capability, the goal is to complete the work task on the same workbench and on the same operation object as the demonstrator; the human posture evolution sequence data and the multidimensional dynamics parameters learned from the demonstration are converted into robot local operation numbers, and can be converted into robot-executable software code manually or by compilation middleware adapted to the robot; deformable and flexible functional mechanical components of the robot, if involved, are handled by the robot local operation parameter conversion of the posture-twin service.
7. A human body posture robot teaching apparatus based on wearable teaching clothes video, characterized by comprising:
Teaching clothes, made of stretchable material so that they can be worn close to the body, with mutually distinguishable joint color-band rings of graded width or graded color at the positions close to the limb joints, and with color-band strips on the clothes that connect the joint color-band rings along the key muscles (tendons) of the body, used to finely characterize the skeleton joint motion and muscle deformation of the demonstrator;
A teaching video acquisition module: depending on the application scenario, this may be a camera mounted on the robot body, or one or more external cameras independent of the robot and arranged around the demonstrator, used to acquire the continuous video of the demonstrator wearing the teaching clothes, i.e. the wearable teaching clothes video; the cameras connect to video capture cards by wire or wirelessly, and these cards may be installed in the robot body or in a local computer or edge computing server independent of the robot; the module has a multi-channel interface supporting multiple cameras and performs time synchronization and sampling of the multiple video streams to form image sequences;
A fine human body posture recognition module: a group of computer vision object recognition and image fusion algorithms, running on the robot host or on a local/edge computing server, which processes the wearable teaching clothes video acquired by the teaching video acquisition module and obtains the spatio-temporal point evolution sequence of the skeleton joints (multidimensional spatio-temporal posture data) that accurately characterizes the posture change process of the demonstrator, together with the multidimensional dynamics parameters;
A robot operation number conversion module: a group of functional software modules, running on the robot host or on a local or edge computing server, which converts the obtained spatio-temporal point evolution sequence of the skeleton joints (multidimensional spatio-temporal posture data) and the multidimensional dynamics parameters into robot local operation parameters or executable code; if the processing is done on a local or edge computing server, the robot local operation parameters or executable code are downloaded to the robot with a provided download tool.
CN201910590614.2A 2019-07-02 2019-07-02 Human body posture robot teaching method and device based on wearable teaching clothes video Active CN110253583B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910590614.2A CN110253583B (en) 2019-07-02 2019-07-02 Human body posture robot teaching method and device based on wearable teaching clothes video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910590614.2A CN110253583B (en) 2019-07-02 2019-07-02 Human body posture robot teaching method and device based on wearable teaching clothes video

Publications (2)

Publication Number Publication Date
CN110253583A (en) 2019-09-20
CN110253583B CN110253583B (en) 2021-01-26

Family

ID=67923900

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910590614.2A Active CN110253583B (en) 2019-07-02 2019-07-02 Human body posture robot teaching method and device based on wearable teaching clothes video

Country Status (1)

Country Link
CN (1) CN110253583B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10264059A (en) * 1997-03-27 1998-10-06 Trinity Ind Corp Teaching device of painting robot
CN105328701A (en) * 2015-11-12 2016-02-17 东北大学 Teaching programming method for series mechanical arms
CN108274448A (en) * 2018-01-31 2018-07-13 佛山智能装备技术研究院 A kind of the robot teaching method and teaching system of human body interaction
CN108127669A (en) * 2018-02-08 2018-06-08 华南理工大学 A kind of robot teaching system and implementation based on action fusion
CN109676615A (en) * 2019-01-18 2019-04-26 合肥工业大学 A kind of spray robot teaching method and device using arm electromyography signal and motion capture signal

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111292402A (en) * 2020-02-13 2020-06-16 腾讯科技(深圳)有限公司 Data processing method, device, equipment and computer readable storage medium
CN111292402B (en) * 2020-02-13 2023-03-07 腾讯科技(深圳)有限公司 Data processing method, device, equipment and computer readable storage medium
CN111695523A (en) * 2020-06-15 2020-09-22 浙江理工大学 Double-current convolutional neural network action identification method based on skeleton space-time and dynamic information
CN111695523B (en) * 2020-06-15 2023-09-26 浙江理工大学 Double-flow convolutional neural network action recognition method based on skeleton space-time and dynamic information
CN111708635A (en) * 2020-06-16 2020-09-25 深圳天海宸光科技有限公司 Video intelligent grading processing system and method
CN111860243A (en) * 2020-07-07 2020-10-30 华中师范大学 Robot action sequence generation method
CN111730601A (en) * 2020-07-20 2020-10-02 季华实验室 Wearable demonstrator demonstration control method and device and electronic equipment
CN112037312A (en) * 2020-11-04 2020-12-04 成都市谛视科技有限公司 Real-time human body posture inverse kinematics solving method and device
CN112037312B (en) * 2020-11-04 2021-02-09 成都市谛视科技有限公司 Real-time human body posture inverse kinematics solving method and device
CN113878595A (en) * 2021-10-27 2022-01-04 上海清芸机器人有限公司 Humanoid entity robot system based on raspberry group

Also Published As

Publication number Publication date
CN110253583B (en) 2021-01-26

Similar Documents

Publication Publication Date Title
CN110253583A Human body posture robot teaching method and device based on wearable teaching clothes video
CN108762495B (en) Virtual reality driving method based on arm motion capture and virtual reality system
CN106648116B (en) Virtual reality integrated system based on motion capture
CN108268129B (en) Method and apparatus for calibrating a plurality of sensors on a motion capture glove and motion capture glove
CN101579238B (en) Human motion capture three dimensional playback system and method thereof
CN107330967B (en) Rider motion posture capturing and three-dimensional reconstruction system based on inertial sensing technology
CN104756045B (en) For tracking the wearable sensor of body part connected by joint
CN105701790B (en) For determining method and system of the video camera relative to the posture of at least one object of true environment
Cerulo et al. Teleoperation of the SCHUNK S5FH under-actuated anthropomorphic hand using human hand motion tracking
Molet et al. Human motion capture driven by orientation measurements
CN107115114A (en) Human Stamina evaluation method, apparatus and system
Yuan et al. SLAC: 3D localization of human based on kinetic human movement capture
US20200178851A1 (en) Systems and methods for tracking body movement
US20170000389A1 (en) Biomechanical information determination
CN103930944A (en) Adaptive tracking system for spatial input devices
CN104376309A (en) Method for structuring gesture movement basic element models on basis of gesture recognition
JPWO2009116597A1 (en) Posture grasping device, posture grasping program, and posture grasping method
CN110327048A (en) A kind of human upper limb posture reconstruction system based on wearable inertial sensor
CN108098780A (en) A kind of new robot apery kinematic system
CN105637531A (en) Recognition of gestures of a human body
Mihcin et al. Investigation of wearable motion capture system towards biomechanical modelling
CN112256125B (en) Laser-based large-space positioning and optical-inertial-motion complementary motion capture system and method
CN206924405U (en) A kind of wearable optical inertial catches equipment and system
CN112711332B (en) Human body motion capture method based on attitude coordinates
Molet et al. An architecture for immersive evaluation of complex human tasks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant