CN116160441A - Robot teleoperation collision prevention method based on human arm motion prediction - Google Patents

Robot teleoperation collision prevention method based on human arm motion prediction

Info

Publication number
CN116160441A
CN116160441A (application CN202211650919.6A)
Authority
CN
China
Prior art keywords
model
robot
arm
human arm
motion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211650919.6A
Other languages
Chinese (zh)
Inventor
周世宁
朱蓉军
刘振
黄琦
程栋梁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei Hebin Intelligent Robot Co ltd
Original Assignee
Hefei Hebin Intelligent Robot Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei Hebin Intelligent Robot Co ltd
Priority to CN202211650919.6A
Publication of CN116160441A
Legal status: Pending

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 - Programme-controlled manipulators
    • B25J9/16 - Programme controls
    • B25J9/1656 - Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664 - Programme controls characterised by motion, path, trajectory planning
    • B25J9/1666 - Avoiding collision or forbidden zones
    • B25J9/1602 - Programme controls characterised by the control system, structure, architecture
    • B25J9/1605 - Simulation of manipulator lay-out, design, modelling of manipulator
    • B25J9/1694 - Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 - Vision controlled systems
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 - Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

The invention discloses a robot teleoperation collision prevention method based on human arm motion prediction, comprising the following steps: establishing a human arm model, which predicts the motion trajectory of the slave-end human arm and transmits it to a three-dimensional graphic visualization scene; establishing a robot model, which simulates the motion trajectory of the teleoperated robot from the teleoperation master-end data and sends it to the three-dimensional graphic visualization scene; and establishing the three-dimensional graphic visualization scene, which displays the trajectory predicted by the human arm model and the trajectory simulated by the robot model. In this method, the human arm model completes the simulation of the human arm motion trajectory from the feedback of the slave-end vision module, so the trajectories of the human arm and the robot that the master-end operator sees in the three-dimensional graphic visualization scene are the real-time trajectories of the slave-end human arm and robot. The operator's actions can therefore remain continuous, and the safety threat that time delay poses to the slave-end environment is avoided.

Description

Robot teleoperation collision prevention method based on human arm motion prediction
Technical Field
The invention relates to the technical field of teleoperation, in particular to a robot teleoperation collision prevention method based on human arm motion prediction.
Background
The network-based robot teleoperation technology plays an extremely important role in projects such as telemedicine, aerospace exploration and deep-sea exploration. The teleoperation master end is the operator's side, and the slave end is the side of the controlled robot. When the slave-end robot must be operated in cooperation with a person at the slave end, the physical distance between the master-end operator and the slave-end robot and its cooperating person introduces network time delay, which makes it hard for the operator to perceive the slave-end environment well. The system then falls into a 'move-and-wait' state: motion becomes discontinuous, or the control system becomes unstable and cannot be operated, posing a serious safety threat to the slave-end environment. Existing methods cannot solve the safety problems that dynamic objects, such as a moving human arm, pose to a robot teleoperation system.
How to solve the problem of discontinuous operator motion caused by visual delay is therefore the technical problem this application addresses.
Disclosure of Invention
In view of the defects of the prior art, the invention aims to provide a robot teleoperation collision prevention method based on human arm motion prediction, which simulates the motion of the human arm and the robot in real time in a three-dimensional graphic visualization scene and thereby solves the safety problems that dynamic objects, such as a moving human arm, pose to a robot teleoperation system.
In order to achieve the above purpose, the following technical solution is provided:
a teleoperation collision prevention method of a robot based on human arm motion prediction comprises the following steps:
st10, establishing a human arm model, wherein the human arm model is used for predicting the motion trail of the slave human arm and transmitting the motion trail to a three-dimensional graphic visualization scene.
St20, establishing a robot model, wherein the robot model simulates a motion track of a teleoperation robot according to teleoperation main end data and sends the motion track to a three-dimensional graphic visualization scene.
St30, establishing a three-dimensional graphic visual scene, wherein the three-dimensional graphic visual scene is used for displaying a motion trail predicted by the human arm model and displaying a motion trail simulated by the robot model.
In summary, the above technical solution has the following beneficial effects. The human arm model completes the simulation of the human arm motion trajectory from the feedback of the slave-end vision module; correspondingly, the slave end is equipped with a vision module for detecting human arm motion, such as a vision sensor or an infrared sensor, coupled with the human arm model. To reduce the motion discontinuity caused by network time delay, the human arm model predicts the human arm motion trajectory from the vision module's feedback, so the master-end operator sees the predicted human arm trajectory in the three-dimensional graphic visualization scene, while the robot model simulates its trajectory from the teleoperation master-end data. The trajectories of the human arm and the robot that the master-end operator sees through the three-dimensional graphic visualization scene are thus the real-time trajectories of the slave-end human arm and robot, so the operator's actions can remain continuous and the safety threat that time delay poses to the slave-end environment is avoided.
Drawings
FIG. 1 is a schematic diagram of the module framework of the robot teleoperation collision prevention method based on human arm motion prediction;
FIG. 2 is a schematic diagram of a human arm motion prediction trajectory;
FIG. 3 is a schematic view of an arm angle;
FIG. 4 is a schematic diagram of the 12-dimensional multi-joint pose representation;
FIG. 5 is a schematic diagram of the 8-dimensional representation employed in the present invention;
FIG. 6 is a schematic flow chart of a teleoperation collision avoidance method of a robot based on human arm motion prediction;
FIG. 7 is a diagram of a hand trajectory prediction result;
FIG. 8 is a diagram of the arm angle prediction result;
FIG. 9 is a graph of the relative error of the hand motion trajectories.
Reference numerals: 10. human arm model; 11. kinematic model; 12. hand trajectory prediction model; 13. anthropomorphic arm configuration prediction model; 20. robot model; 30. three-dimensional graphic visualization scene; 40. video communication module.
Detailed Description
The invention will be described in further detail below with reference to the drawings by means of specific embodiments. In the following embodiments, numerous specific details are set forth in order to provide a better understanding of the present application. However, one skilled in the art will readily recognize that some of these features may be omitted in various situations, or replaced by other materials or methods. In some instances, certain operations related to the present application are not shown or described in the specification to avoid obscuring the core of the present application; a detailed description of those operations is not necessary for a person skilled in the art, who can fully understand them from the description herein and general knowledge in the art.
Furthermore, the described features, operations, or characteristics of the description may be combined in any suitable manner in various embodiments. Also, various steps or acts in the method descriptions may be interchanged or modified in a manner apparent to those of ordinary skill in the art. Thus, the various orders in the description and drawings are for clarity of description of only certain embodiments, and are not meant to be required orders unless otherwise indicated.
A robot teleoperation collision prevention method based on human arm motion prediction comprises the following steps:
St10, a human arm model 10 is built; the human arm model 10 predicts the motion trajectory of the slave-end human arm and transmits it to a three-dimensional graphic visualization scene 30.
As shown in fig. 1, the human arm model 10 comprises a kinematic model 11, a hand trajectory prediction model 12 and an anthropomorphic arm configuration prediction model 13. The kinematic model 11 connects the bones of the human arm model 10 rotationally through joints; the hand trajectory prediction model 12 predicts the hand motion trajectory; the anthropomorphic arm configuration prediction model 13 calculates the human arm configuration corresponding to each hand trajectory point from the hand motion trajectory and the skeleton data.
The kinematic model 11 is a 7-degree-of-freedom kinematic model of the typical SRS (spherical-revolute-spherical) structure: shoulder swing and lift, upper-arm rotation, elbow bending, forearm rotation, and wrist bending and swing. It connects the bones of the human arm as rigid links through rotary joints comprising the shoulder joint S, the elbow joint E and the wrist joint W.
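For reference, the sketch below enumerates this 7-DoF chain in Python; the joint ordering follows the text (three shoulder DoF, one elbow DoF, three wrist DoF), while the exact naming is an illustrative assumption:

```python
# Minimal sketch of the 7-DoF SRS arm chain described above.
# q3 is the upper-arm rotation referred to in the arm-angle definition later.
SRS_JOINTS = [
    ("q1", "shoulder swing"),     # S: spherical shoulder joint,
    ("q2", "shoulder lift"),      #    3 degrees of freedom
    ("q3", "upper-arm rotation"),
    ("q4", "elbow bending"),      # E: revolute elbow joint, 1 degree of freedom
    ("q5", "forearm rotation"),   # W: spherical wrist joint,
    ("q6", "wrist bending"),      #    3 degrees of freedom
    ("q7", "wrist swing"),
]
```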
St11, the hand motion trajectory is predicted by the hand trajectory prediction model 12.
Predicting the hand motion trajectory comprises the following steps:
St111, collecting hand motion trajectory data and forming an observation dataset.
Specifically, the slave end is equipped with a vision module for detecting human arm motion, such as a vision sensor or an infrared sensor coupled with the human arm model 10 (for example, a Vicon infrared motion capture system), which acquires the on-site scan image and the motion state of the human body in real time and transmits them to the hand trajectory prediction model 12 over the network.
St112, the hand trajectory prediction model 12 predicts the hand motion trajectory from the observation dataset and forms a prediction dataset. In the hand trajectory data, plain symbols denote observations and hatted symbols denote predictions. The observation dataset consists of the hand trajectory points of the N frames before a given moment:

$X_k = [x_{k-N+1}; \dots; x_k] \in \mathbb{R}^{6N}$  (1)

In formula (1), $X_k$ is the observation dataset at the k-th moment; $[\cdot; \dots; \cdot]$ is the time-series input of the model; N is a natural number constant giving the range (number of frames) of the observation dataset; 6 indicates that each hand trajectory point has 6 dimensions, so the dataset has 6N dimensions in total.
The prediction dataset consists of the hand trajectory points of the M frames after that moment:

$\hat{X}_k = [\hat{x}_{k+1}; \dots; \hat{x}_{k+M}] \in \mathbb{R}^{6M}$  (2)

In formula (2), $\hat{X}_k$ is the prediction dataset at the k-th moment; $[\cdot; \dots; \cdot]$ is the time-series output of the model; M is a natural number constant giving the range (number of frames) of the prediction dataset; $\hat{x}_{k+M}$ is the M-th predicted frame at the k-th moment.
The prediction horizon of the prediction dataset is between 200 ms and 600 ms, preferably 400 ms. Since the maximum measured time delay of the remote ultrasound robot is 200 ms, the motion prediction time is set to 400 ms: the observation dataset of the preceding 1000 ms is taken as input, and the prediction dataset of the following 400 ms is output.
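Concretely, with the 40 ms capture period used in the experiment below, these horizons correspond to N = 25 observed frames and M = 10 predicted frames. A minimal Python sketch of the windowing (the array layout of 6 pose values per frame is an assumption for illustration):

```python
import numpy as np

FRAME_MS = 40                 # capture period: one frame every 40 ms (see experiment)
N = 1000 // FRAME_MS          # 25 observed frames  = 1000 ms of history
M = 400 // FRAME_MS           # 10 predicted frames = 400 ms horizon

def observation_window(track: np.ndarray, k: int) -> np.ndarray:
    """X_k of formula (1): the N hand-pose frames up to and including time k.
    track has shape (T, 6): x, y, z, alpha, beta, gamma per frame."""
    return track[k - N + 1 : k + 1]
```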
St113, the hand trajectory prediction model 12 merges each newly obtained prediction back into the observation dataset as a new observation dataset, and predicts the hand motion trajectory from it. Specifically, at a given moment, after the hand trajectory prediction model 12 predicts one frame, that frame is combined with the N-frame observation dataset and used as the model input at that moment, until the full M-frame prediction dataset is completed. Owing to the time-varying nature of human arm motion, the hand trajectory prediction model 12 can be represented by a time-varying function f:

$\hat{X}_k = f(X_k)$  (3)
as shown in fig. 2, the motion of the arm is complex and highly nonlinear, and has a strong time relation, so that the RNN (Recurrent Neural Network ) hand trajectory prediction model 12 structure of N-to-1 is used to obtain N frames of historical hand motion trajectories, and predict and output a frame of hand motion trajectories. The hand trajectory prediction model 12 adds a newly predicted frame of hand motion trajectory as input in real time, then predicts the next frame, and loops the prediction until a complete M-frame prediction is obtained. The advantage of this hand trajectory prediction model 12 is that it allows for greater flexibility in online adaptation, which allows for adaptive updating of the model once new observations are made.
St12, the human arm configuration of the predicted hand motion trajectory is calculated by the anthropomorphic arm configuration prediction model 13, yielding the predicted motion trajectory of the slave-end human arm. The hand motion trajectory consists of several frames of hand trajectory points; the method first predicts the hand trajectory and then calculates the arm configuration for each frame to complete the arm prediction. A complete one-frame prediction requires 8 dimensions of data in total: the 6 dimensions of the hand trajectory point plus the upper-arm length and the lower-arm length. As shown in FIG. 4, $s_k$, $e_k$, $w_k$ and $h_k$ denote the observed spatial position sequences of the shoulder joint S, the elbow joint E, the wrist joint W and the hand, and $\hat{s}_k$, $\hat{e}_k$, $\hat{w}_k$ and $\hat{h}_k$ the corresponding predicted sequences. If one frame were predicted from the shoulder, elbow, wrist and hand coordinates, 12 dimensions would have to be computed, which makes training and computation expensive and prediction accuracy poor, cannot directly output the joint angles of the human arm, and additionally requires extra hardware at the slave end to obtain the positions of the shoulder joint S, elbow joint E and wrist joint W in real time. As shown in FIG. 5, the method instead reduces the input and output dimensions through the anthropomorphic arm configuration prediction model 13: it requires only the spatial position x, y, z of the hand and the attitude angles alpha, beta and gamma of the end rotating about the x, y and z axes, while accounting for the influence of the upper- and lower-arm lengths on prediction accuracy. The computation is therefore smaller, the prediction time shorter and the prediction accuracy higher. Further, the hand trajectory prediction model 12 is built on a recurrent neural network (RNN), and the anthropomorphic arm configuration prediction model 13 on a multilayer perceptron (MLP).
Skeleton data comprise the upper-arm length, the lower-arm length and the arm angle. As shown in FIG. 3, a seven-degree-of-freedom human arm corresponds to infinitely many arm configurations, that is, infinitely many inverse solutions, for a fixed hand pose. The plane determined by the shoulder joint S, the elbow joint E and the wrist joint W is the arm plane; for the inverse solution whose angle q3 is 0, the elbow joint lies at E0, and the plane S-E0-W is the reference plane. The arm angle is defined as the angle between the reference plane and the arm plane.
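For illustration, the sketch below computes this arm angle from the joint positions with NumPy, following the standard swivel-angle construction; the reference elbow position E0 (the q3 = 0 solution) is assumed to be available:

```python
import numpy as np

def arm_angle(s: np.ndarray, e: np.ndarray, w: np.ndarray, e0: np.ndarray) -> float:
    """Signed angle between the reference plane S-E0-W and the arm plane S-E-W,
    measured about the shoulder-wrist axis. s, e, w are the shoulder, elbow and
    wrist positions; e0 is the elbow position of the q3 = 0 inverse solution."""
    axis = (w - s) / np.linalg.norm(w - s)          # shoulder-wrist axis

    def radial(p: np.ndarray) -> np.ndarray:
        # component of (p - s) perpendicular to the S-W axis, normalized
        v = (p - s) - np.dot(p - s, axis) * axis
        return v / np.linalg.norm(v)

    r_ref, r_arm = radial(e0), radial(e)
    cos_psi = np.clip(np.dot(r_ref, r_arm), -1.0, 1.0)
    psi = np.arccos(cos_psi)
    # sign: positive if the rotation from reference plane to arm plane follows the axis
    return psi if np.dot(np.cross(r_ref, r_arm), axis) >= 0.0 else -psi
```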
St121, the anthropomorphic arm configuration prediction model 13 predicts the arm angle from the hand motion trajectory, the upper-arm length and the lower-arm length, and then obtains the human arm configuration corresponding to the hand trajectory through the arm-angle inverse kinematics. The upper-arm length, lower-arm length and hand trajectory data are assembled into the input vector $m = [x, y, z, \alpha, \beta, \gamma, l_1, l_2]^T$, from which the arm angle of each hand trajectory frame is predicted; the anthropomorphic arm configuration prediction model 13 can thus be expressed as

$\hat{\psi}_k = \mathrm{MLP}(m_k)$
Specifically, the anthropomorphic arm configuration prediction model 13 predicts, through the MLP, the arm angle corresponding to each frame of the hand motion trajectory; its mathematical model is:

$\hat{\psi}_k = \mathrm{MLP}([h_k, l_1, l_2]^T)$  (4)

In formula (4), $\hat{\psi}_k$ is the arm angle predicted for the hand trajectory point at the k-th moment; $l_1$ and $l_2$ are the upper-arm and lower-arm lengths respectively; and MLP denotes the learned mapping from $[h_k, l_1, l_2]^T$ to the arm angle $\hat{\psi}_k$. The learning methods that may be used include, but are not limited to, Gaussian processes, linear regression and neural networks; the anthropomorphic arm configuration prediction model 13 is built by learning the mapping between the vector m (the hand pose at the end of the human arm together with the upper- and lower-arm lengths) and the arm angle $\hat{\psi}$.
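A minimal sketch of such an MLP in PyTorch is shown below; the hidden-layer width is an assumption, while the activation pattern (linear first hidden layer, ReLU second) and the Adam optimizer follow the experiment section later in the description:

```python
import torch.nn as nn

class ArmConfigMLP(nn.Module):
    """Sketch of formula (4): m = [x, y, z, alpha, beta, gamma, l1, l2] -> arm angle."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(8, hidden),       # first hidden layer, linear activation
            nn.Linear(hidden, hidden),  # second hidden layer ...
            nn.ReLU(),                  # ... with ReLU activation
            nn.Linear(hidden, 1),       # predicted arm angle (rad)
        )

    def forward(self, m):               # m: (batch, 8)
        return self.net(m)
```

Once the arm angle $\hat{\psi}_k$ is predicted, the closed-form inverse kinematics of the SRS arm, parameterized by that angle, recovers the full joint configuration, as St121 states.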
St20, a robot model 20 is established; the robot model 20 simulates the motion trajectory of the teleoperated robot according to the teleoperation master-end data and sends it to the three-dimensional graphic visualization scene 30.
As shown in fig. 6, the teleoperation master-end data comprise the Cartesian pose data of the robot. The robot model 20 inverse-solves the Cartesian pose data in real time using the robot's inverse kinematics to obtain the robot joint angles, which are then taken as input to drive the motion of the simulated teleoperated robot.
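As a hedged illustration of this update cycle, the sketch below assumes a generic IK solver object and simulation scene; `ik_solver`, `sim` and their methods are placeholder names, not a specific library API:

```python
def update_robot_model(pose, ik_solver, sim):
    """One St20 update cycle for the simulated slave robot. `pose` is the
    Cartesian pose received from the master end."""
    q = ik_solver.solve(pose.position, pose.orientation,
                        seed=sim.joint_angles())    # seed with the current state
    if q is not None:                               # skip unreachable poses
        sim.set_joint_angles(q)                     # drive the virtual robot
```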
St30, a three-dimensional graphic visualization scene 30 is created; it displays the motion trajectory predicted by the human arm model 10 and the motion trajectory simulated by the robot model 20.
St31, the person and the robot are modeled in three-dimensional software and assembled by means of nodes, and the models are imported into the three-dimensional graphic visualization scene 30 to form a virtual robot and a virtual human arm.
St32, the data of the human arm model 10 are imported into the three-dimensional graphic visualization scene 30 in real time to control the real-time motion of the human model.
St33, the data of the robot model 20 are imported into the three-dimensional graphic visualization scene 30 in real time to control the real-time motion of the robot model.
The three-dimensional graphic visualization scene 30 is connected with a video communication module 40, which collects the environment video of the slave end. Combining the predicted and displayed three-dimensional visualization of the human arm and the robot with the ordinary video communication scene enriches the visual telepresence of the teleoperation system's master-end operator, so that the operator can foresee a collision in advance, avoid it effectively, and issue safe control commands to the slave-end robot, eliminating the potential collision hazard caused by visual feedback lagging behind due to time delay.
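A speculative sketch of how the master-end cycle could tie these pieces together; every object and method name here is an illustrative assumption, not part of the patented method:

```python
def master_render_loop(scene, arm_model, robot_model, video, collision_checker):
    """Master-end cycle: draw the predicted arm and the simulated robot in the
    3-D scene alongside the slave-end video, and check the predicted horizon
    for collisions so the operator is warned early."""
    while True:
        arm_traj = arm_model.predicted_trajectory()      # M future frames (St10)
        robot_traj = robot_model.simulated_trajectory()  # from master data (St20)
        scene.draw(arm_traj, robot_traj)                 # St30 visualization
        scene.show_video(video.latest_frame())           # slave-end environment
        if collision_checker.intersects(arm_traj, robot_traj):
            scene.warn_operator()                        # predicted collision ahead
```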
The human arm model 10 completes the simulation of the human arm motion trajectory from the feedback of the slave-end vision module; correspondingly, the slave end is equipped with a vision module for detecting human arm motion, such as a vision sensor or an infrared sensor, coupled with the human arm model 10. To reduce the motion discontinuity caused by network time delay, the human arm model 10 predicts the human arm motion trajectory from the vision module's feedback, so the master-end operator sees the predicted human arm trajectory in the three-dimensional graphic visualization scene 30, while the robot model 20 simulates its motion trajectory from the teleoperation master-end data. The trajectories of the human arm and the robot that the master-end operator sees through the three-dimensional graphic visualization scene 30 are thus the real-time trajectories of the slave-end human arm and robot, so the operator's actions can remain continuous and the safety threat that time delay poses to the slave-end environment is avoided.
The present invention evaluates the prediction effect through the following experiment. A Vicon infrared motion capture system was set up to acquire motion trajectory data sequences of the right hand and arm of a patient during scanning, with an acquisition period of 40 ms (1 frame). Sixteen subjects were randomly selected, 8 men and 8 women, aged 18-38 years; the male subjects' heights ranged from 165 to 193 cm and their weights from 52 to 101 kg, while the female subjects' heights ranged from 151 to 180 cm and their weights from 39 to 70 kg.
The scanning bed was placed at the center of the infrared motion capture system's field of view; each subject, wearing a black motion-capture suit, lay on the operating bed with reflective markers fixed on the shoulder, elbow, wrist and hand, and simulated, while lying down, the arm movements that may occur during scanning. During acquisition the start and end points of each action could be adjusted freely, each action lasted 1-3 seconds, and 20 groups of arm motion data with different start and end points were collected per subject, giving 320 groups of arm motion trajectory data sequences from the 16 subjects. Static joint data were also collected for learning the arm angle corresponding to each hand pose of the human arm and for training the anthropomorphic arm configuration prediction model 13: the subject gripped the handle at the end of a robot arm, dragged it under zero-force control to change the hand pose while keeping the most comfortable arm configuration, and the arm configuration at each grip position, including the shoulder, elbow, wrist and hand positions and the upper- and lower-arm lengths, was recorded; 80 groups of data were determined per subject, 1280 groups in total. Linear was chosen as the activation function of the first hidden layer of the prediction model and ReLU as that of the second hidden layer, Adam was chosen as the optimizer, training ran for 3000 iterations on the static joint data training set, and the anthropomorphic arm configuration prediction model 13 was obtained by training with Python 3.6.
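Under the assumption that the model follows the ArmConfigMLP sketch given earlier, the training loop described here (Adam, 3000 iterations, 1280 static-joint samples) could look like this; the random tensors stand in for the real collected data:

```python
import torch
import torch.nn as nn

m_train = torch.randn(1280, 8)     # placeholder for the real static-joint inputs
psi_train = torch.randn(1280, 1)   # placeholder for the measured arm angles

model = ArmConfigMLP()             # the sketch defined above
opt = torch.optim.Adam(model.parameters())
loss_fn = nn.MSELoss()

for step in range(3000):           # 3000 iterations, as in the experiment
    opt.zero_grad()
    loss = loss_fn(model(m_train), psi_train)
    loss.backward()
    opt.step()
```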
The hand trajectory prediction data and the change in the observed data are shown in FIG. 7: the maximum absolute error of the predicted hand trajectory position is 3.95 mm, and the predicted trajectory is essentially consistent with the observed trajectory.
The hand trajectory prediction dataset is input into the anthropomorphic arm configuration prediction model 13 to obtain a comparison of the predicted and true values, as shown in FIG. 8; the average error between the predicted and true arm angles is 0.0532 rad.
Compared with this model, the 12-dimensional human arm model 10 used for prediction has higher input and output dimensions, and the same training parameters do not yield an optimal result; the number of hidden-layer neurons of that model was therefore adjusted to 1024, RMSE was used as the loss function, and after 10000 iterations the loss settled at about 0.2. With the trained model, prediction on the input observation dataset gives a maximum absolute error of the hand trajectory prediction position of 4.841 mm. The relative motion error of the m-th frame is defined as the ratio of the error at that moment to the motion amplitude and is used to measure the prediction model's performance; the result is shown in FIG. 9. For predictions within 10 frames, the relative motion error of the human arm model 10 stays within 8 percent, and even for predictions beyond 25 frames it stays within 10 percent, which is superior to the 12-dimensional human arm model 10.
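The relative-error metric can be sketched as follows; the amplitude definition (bounding-box diagonal of the true trajectory) is an assumption, since the patent does not define it precisely:

```python
import numpy as np

def relative_motion_error(pred: np.ndarray, truth: np.ndarray) -> np.ndarray:
    """Relative motion error per predicted frame: position error divided by
    the motion amplitude of the true trajectory (the metric of FIG. 9).
    pred and truth have shape (M, 3): predicted vs. true hand positions."""
    err = np.linalg.norm(pred - truth, axis=-1)                      # per-frame error
    amplitude = np.linalg.norm(truth.max(axis=0) - truth.min(axis=0))
    return err / amplitude
```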
By combining human arm kinematics, the invention simplifies the input and output dimensions of the prediction model, takes the upper- and lower-arm lengths as additional prediction inputs, and optimizes the influence of different arm-length parameters on motion prediction accuracy. Compared with the 12-dimensional prediction scheme, it achieves a longer effective human arm motion prediction: over a 400 ms prediction horizon, the average absolute error of the predicted arm angle is 0.052 rad and the relative error of the predicted hand trajectory position is 8 percent. On this basis, a human-robot collision prevention method based on human arm motion prediction is proposed: from the predicted arm motion state of the patient, visual telepresence and a three-dimensional graphic visualization scene 30 with real-time collision detection are constructed, human-robot collisions caused by delay-induced misoperation are avoided, and the safety of the teleoperation system is improved.
The above is only a preferred embodiment of the present invention; the scope of protection is not limited to the above examples, and all technical solutions falling under the concept of the invention belong to its scope of protection. It should be noted that modifications and adaptations that may occur to those skilled in the art without departing from the principles of the invention are also within its scope of protection.

Claims (10)

1. A robot teleoperation collision prevention method based on human arm motion prediction, characterized by comprising the following steps:
establishing a human arm model (10), wherein the human arm model (10) is used for predicting the motion trajectory of the slave-end human arm and transmitting the motion trajectory to a three-dimensional graphic visualization scene (30);
establishing a robot model (20), wherein the robot model (20) simulates the motion trajectory of the teleoperated robot according to teleoperation master-end data and sends the motion trajectory to the three-dimensional graphic visualization scene (30);
establishing the three-dimensional graphic visualization scene (30), wherein the three-dimensional graphic visualization scene (30) is used for displaying the motion trajectory predicted by the human arm model (10) and the motion trajectory simulated by the robot model (20).
2. The robot teleoperation collision prevention method based on human arm motion prediction according to claim 1, characterized in that the human arm model (10) comprises a kinematic model (11), a hand trajectory prediction model (12) and an anthropomorphic arm configuration prediction model (13);
the kinematic model (11) is used for rotationally connecting bones of the human arm model (10) through joints;
the hand trajectory prediction model (12) is used for predicting the hand motion trajectory;
the anthropomorphic arm configuration prediction model (13) is used for calculating the human arm configuration corresponding to each hand trajectory point according to the hand motion trajectory and skeleton data;
and after the hand motion trajectory is predicted by the hand trajectory prediction model (12), the human arm configuration of the predicted hand motion trajectory is calculated by the anthropomorphic arm configuration prediction model (13), thereby obtaining the predicted motion trajectory of the slave-end human arm.
3. The robot teleoperation collision prevention method based on human arm motion prediction according to claim 2, wherein the kinematic model (11) is a 7-degree-of-freedom kinematic model of the SRS structure.
4. The robot teleoperation collision prevention method based on human arm motion prediction according to claim 2, wherein predicting the hand motion trajectory comprises the following process:
collecting hand motion trajectory data and forming an observation dataset;
the hand trajectory prediction model (12) predicts the hand motion trajectory from the observation dataset and forms a prediction dataset.
5. The robot teleoperation collision prevention method based on human arm motion prediction according to claim 4, wherein the hand trajectory prediction model (12) combines each newly obtained prediction with the observation dataset as a new observation dataset, and predicts the hand motion trajectory from the new observation dataset.
6. The robot teleoperation collision prevention method based on human arm motion prediction according to claim 4, wherein the prediction horizon of the prediction dataset is between 200 ms and 600 ms.
7. The robot teleoperation collision prevention method based on human arm motion prediction according to claim 2, wherein the skeleton data comprise the upper-arm length, the lower-arm length and the arm angle,
and the anthropomorphic arm configuration prediction model (13) predicts the arm angle according to the hand motion trajectory, the upper-arm length and the lower-arm length, and then obtains the human arm configuration corresponding to the hand trajectory through the arm-angle inverse kinematics.
8. The robot teleoperation collision prevention method based on human arm motion prediction according to claim 1, wherein the teleoperation master-end data comprise Cartesian pose data of the robot, and the robot model (20) inverse-solves the Cartesian pose data in real time using the robot's inverse kinematics to obtain the robot joint angles, which are then taken as input to drive the motion of the simulated teleoperated robot.
9. The robot teleoperation collision prevention method based on human arm motion prediction according to claim 1, characterized in that the person and the robot are modeled by three-dimensional software, both are assembled by means of nodes, and the models are imported into the three-dimensional graphic visualization scene (30);
the data of the human arm model (10) are imported into the three-dimensional graphic visualization scene (30) in real time to control the real-time motion of the human model;
the data of the robot model (20) are imported into the three-dimensional graphic visualization scene (30) in real time to control the real-time motion of the robot model.
10. The robot teleoperation collision prevention method based on human arm motion prediction according to claim 9, characterized in that the three-dimensional graphic visualization scene (30) comprises a video communication module (40), the video communication module (40) being used for acquiring the environment video of the slave end.
CN202211650919.6A 2022-12-21 2022-12-21 Robot teleoperation collision prevention method based on human arm motion prediction Pending CN116160441A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211650919.6A CN116160441A (en) 2022-12-21 2022-12-21 Robot teleoperation collision prevention method based on human arm motion prediction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211650919.6A CN116160441A (en) 2022-12-21 2022-12-21 Robot teleoperation collision prevention method based on human arm motion prediction

Publications (1)

Publication Number Publication Date
CN116160441A true CN116160441A (en) 2023-05-26

Family

ID=86412384

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211650919.6A Pending CN116160441A (en) 2022-12-21 2022-12-21 Robot teleoperation collision prevention method based on human arm motion prediction

Country Status (1)

Country Link
CN (1) CN116160441A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117733874A (en) * 2024-02-20 2024-03-22 中国科学院自动化研究所 Robot state prediction method and device, electronic equipment and storage medium
CN117733874B (en) * 2024-02-20 2024-05-14 中国科学院自动化研究所 Robot state prediction method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
US11337652B2 (en) System and method for measuring the movements of articulated rigid bodies
CN107263449B (en) Robot remote teaching system based on virtual reality
Lakhal et al. Hybrid approach for modeling and solving of kinematics of a compact bionic handling assistant manipulator
CN107225573A (en) The method of controlling operation and device of robot
CN105252532A (en) Method of cooperative flexible attitude control for motion capture robot
Wen et al. Force-guided high-precision grasping control of fragile and deformable objects using semg-based force prediction
CN109807887A (en) Flexible arm Intellisense and control method and system based on deep neural network
CN111152220B (en) Mechanical arm control method based on man-machine fusion
CN112959330B (en) Robot double-arm motion man-machine corresponding device and method based on master-slave dynamic motion elements
Natale et al. Learning precise 3d reaching in a humanoid robot
Rosado et al. A Kinect-based motion capture system for robotic gesture imitation
CN112894820A (en) Flexible mechanical arm remote operation man-machine interaction device and system
CN116160441A (en) Robot teleoperation collision prevention method based on human arm motion prediction
Kawaharazuka et al. Hardware Design and Learning-Based Software Architecture of Musculoskeletal Wheeled Robot Musashi-W for Real-World Applications
Infantino et al. A cognitive architecture for robotic hand posture learning
JP2019093537A (en) Deep learning system, deep learning method, and robot
Su et al. Machine learning driven human skill transferring for control of anthropomorphic manipulators
Leng et al. Flexible online planning based residual space object de-spinning for dual-arm space-borne maintenance
Pan et al. A Study of Intelligent Rehabilitation Robot Imitation of Human Behavior Based on Kinect
Jiang et al. Deep learning based human-robot co-manipulation for a mobile manipulator
Aslan et al. End-to-end learning from demonstation for object manipulation of robotis-Op3 humanoid robot
Petrenko et al. The Study of the Problems of the Master-Slave Teleoperation Control Anthropomorphic Manipulator
Xu et al. Design of a human-robot interaction system for robot teleoperation based on digital twinning
RU2813444C1 (en) Mixed reality human-robot interaction system
CN114055461B (en) Robot force and position synchronous teleoperation control method and device based on myoelectric interface

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination