CN113386128B - Body potential interaction method for multi-degree-of-freedom robot - Google Patents

Body potential interaction method for multi-degree-of-freedom robot Download PDF

Info

Publication number
CN113386128B
CN113386128B CN202110512320.5A CN202110512320A CN113386128B CN 113386128 B CN113386128 B CN 113386128B CN 202110512320 A CN202110512320 A CN 202110512320A CN 113386128 B CN113386128 B CN 113386128B
Authority
CN
China
Prior art keywords
coordinate system
robot
coordinates
coordinate
space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110512320.5A
Other languages
Chinese (zh)
Other versions
CN113386128A (en
Inventor
张平
陈佳新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN202110512320.5A priority Critical patent/CN113386128B/en
Publication of CN113386128A publication Critical patent/CN113386128A/en
Application granted granted Critical
Publication of CN113386128B publication Critical patent/CN113386128B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1602Programme controls characterised by the control system, structure, architecture
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00Manipulators not otherwise provided for
    • B25J11/0005Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Manipulator (AREA)

Abstract

The invention provides a body potential interaction method for a multi-degree-of-freedom robot, which comprises the following steps: obtaining the pixel coordinates of the human skeleton key points by adopting a human skeleton key point identification algorithm, and obtaining the three-dimensional space coordinates of each human skeleton key point according to the pixel coordinates of the human skeleton key points; detecting whether an abnormal error that the shoulder is shielded by the arm exists in the interaction process; correcting the human body space posture, and performing coordinate reconstruction on the space coordinates of the key points; normalizing the coordinate of the upper wrist part of the arm relative to the shoulder part to obtain the normalized coordinate, and meanwhile, establishing a local space coordinate system on the palm to obtain an attitude angle Eler (psi, theta, gamma) of the local coordinate system on the palm relative to the coordinate system taking the shoulder part as an origin; and combining the normalized coordinates, the length of the connecting rod of the robot and the palm posture to obtain joint angles of joints of the robot so as to drive the robot to move. The working space of the whole mechanical arm can be covered under the condition that people do not exceed the effective visual field of the sensor in the interaction process.

Description

Body potential interaction method for multi-degree-of-freedom robot
Technical Field
The invention belongs to the field of human-computer interaction, and particularly relates to a body potential interaction method for a multi-degree-of-freedom robot.
Background
With the continuous and deep promotion of industrial 4.0 development plans in many countries in the world, the intelligent requirement of industrial production on robots is higher and higher, and natural and efficient advanced human-robot interaction interfaces are widely regarded by the society.
The man-machine interaction is a process of collecting information of a person by using equipment and transmitting the intention of the person to a machine; a human-computer interaction interface is an algorithm or program that translates human intent into instructions that a machine can execute. According to different interaction modes, human-computer interaction is achieved through voice interaction, sensor wearing interaction, gamepad interaction, baton interaction, brain wave interaction and visual interaction. From the naturalness of the interaction process and the complexity of the design of the interaction system, the interaction process based on the posture can not only effectively avoid the interference of environmental noise, but also reduce the constraint brought by the sensor worn by people.
In the traditional human-computer interaction process, the number of interaction semantics defined based on characteristics is always limited, so that the diversity requirement in the complex interaction process is difficult to meet; when the dynamic body potential is used for interaction, although complex interaction requirements can be achieved by tracking the three-dimensional motion tracks of key points of human skeleton, the limitation of the effective visual field of the sensor and the size difference of the multi-degree-of-freedom robots with different structures limit the use of the interaction mode, the interaction process has to be interrupted and recovered when a person exceeds the effective visual field of the sensor, the process restricts the activity range of the person on one hand, and the probability of failure of the interaction process is increased on the other hand. While multiple sensor data is utilized by researchers to expand the single sensor field of view, this increases system complexity while increasing cost.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a body potential interaction method based on a depth sensor and a human skeleton key point identification algorithm.
In order to achieve the purpose of the invention, the body potential interaction method for the multi-degree-of-freedom robot comprises the following steps:
obtaining the pixel coordinates of the human skeleton key points by adopting a human skeleton key point identification algorithm, and obtaining the three-dimensional space coordinates of each human skeleton key point according to the pixel coordinates of the human skeleton key points;
detecting whether an abnormal error that the shoulder is shielded by the arm exists in the interaction process, and if so, recovering or marking the key point of the shoulder as an invalid point;
correcting the space posture of the human body, and reconstructing the coordinates of the space coordinates of the key points, wherein the coordinates are reconstructed by establishing a space rectangular coordinate system O.x ' y ' z ' taking the left shoulder key point as the origin of coordinates and other skeleton key points piThe coordinate system is used as a reference system to perform coordinate reconstruction to obtain pi';
Normalizing the coordinate of the upper wrist part of the arm relative to the shoulder part to obtain a normalized coordinateNp7Meanwhile, a local space coordinate system is established on the palm, and the attitude angle Eler (psi, theta, gamma) of the local coordinate system on the palm relative to a coordinate system taking the shoulder as an origin is obtained, wherein the Eler (psi, theta, gamma) represents the space attitude of the palm;
and combining the normalized coordinates, the length of the connecting rod of the robot and the space posture of the palm to obtain joint angles of joints of the robot so as to drive the robot to move.
Further, the human bone key point identification algorithm is Open Pose.
Further, the obtaining of the three-dimensional space coordinates of each human skeleton key point according to the pixel coordinates of the human skeleton key point includes:
filtering by using the size of a preset window to obtain an effective value of a bone key point pixel coordinate;
and carrying out pixel level alignment on the collected depth map and the RGB video frame at the same moment to obtain a three-dimensional coordinate corresponding to each pixel point by taking the camera as the origin of a space coordinate system.
Further, the detecting whether an abnormal error that the left shoulder is shielded by the left arm exists in the interactive process, and if so, recovering or marking the key point of the shoulder as an invalid point includes:
calculating left arm p6 p7Direction vector of
Figure GDA0003198953650000021
P7Point of direction p5Vector of (2)
Figure GDA0003198953650000022
P6Point of direction p5Vector of (2)
Figure GDA0003198953650000023
Calculating p2p4At p2p3Projection square value of
Figure GDA0003198953650000024
Wherein p is3Is the skeletal key point, p, at the junction of the right forearm and the upper arm4Is a skeletal key point at the joint of the right forearm and the right wrist;
by calculating p2And a straight line p3 p4Is detected p2Whether or not it is shielded
Figure GDA0003198953650000031
Wherein x isa、ya、za、xb、yb、zbRespectively represent vectors
Figure GDA0003198953650000032
The x, y, z components of (a).
Passing only p5And a straight line p6 p7Is not enough to judge whether occlusion actually occurs, p must be increased5At right angles to
Figure GDA0003198953650000033
And each pass through p6、p7The constraint between planes of (a):
will be provided with
Figure GDA0003198953650000034
Is substituted into the process
Figure GDA0003198953650000035
Is a normal vector and passes through p6The spatial plane equation of (a):
Figure GDA0003198953650000036
xn、yn、znrepresenting a vector
Figure GDA0003198953650000037
The x, y, z components of (a);
will be provided with
Figure GDA0003198953650000038
Is substituted into the process
Figure GDA0003198953650000039
Is a normal vector and passes through p7The spatial plane equation of (a):
Figure GDA00031989536500000310
get s1,s2If the sign is negative, the left shoulder key point p is indicated5Between two planes, then
s=s1·s2
When in use
Figure GDA00031989536500000311
Established, left shoulder key point p5Occlusion occurs.
Further, in the correction of the human body spatial posture, the human body posture is corrected in such a manner that a spatial vector between the shoulders is parallel to the x-axis of the camera coordinate system O · xyz.
Further, a space rectangular coordinate system O.x ' y ' z ' with the left shoulder key point as the coordinate origin is established, and other skeleton key points piAll using the coordinate system as a reference system to perform coordinate reconstruction to obtain pi', includes:
the left shoulder is taken as the origin,
Figure GDA00031989536500000312
is the x' axis, perpendicular to the o · xz plane parallel to the sensing coordinate system
Figure GDA00031989536500000313
The direction of the pointing sensor is a y ' axis, and the direction opposite to the y axis of the sensor is a reconstructed coordinate system O.x ' y ' z ' of a z ' axis;
p2point of direction p5Of the space vector v
v=p2-p5=[x y z]T (7)
Angle between v and yoz plane
Figure GDA0003198953650000041
θxCorresponding rotation matrix
Figure GDA0003198953650000042
v is through R (theta)x) Space vector parallel to yoz plane after rotation transformation
v'=R(θx)×v (10)
Angle between space vector v' and xoy plane
Figure GDA0003198953650000043
θzCorresponding rotation matrix
Figure GDA0003198953650000044
v' by R (θ)x) Space vector parallel to yoz plane after rotation transformation
v”=R(θz)×v' (13)
After rotational transformation, p2New space coordinates
p2'=p5+v” (14)
Total rotational transformation
R=R(θz)×R(θx) (15)
For skeletal key point piIts reconstructed coordinates pi'
pi'=p5+vi'
Wherein v isi'=R×vi
vi=pi-p5
Where R is a rotation matrix from the camera coordinate system to the coordinate system with the shoulder as the origin.
Further, the coordinates of the upper wrist part of the arm relative to the shoulder part are normalized to obtain coordinatesNp7The method comprises the following steps:
respectively obtaining the large arms p5'p6' Length dist1Arm p of the arm6'p7' Length dist2And palm to shoulder p5'p7Distance dist of3The calculation formula is as follows:
Figure GDA0003198953650000051
Figure GDA0003198953650000052
Np7is the normalized coordinates of the hand in the coordinate system O.x ' y ' z ' space unit sphere (the coordinate inner product on the sphere is 1) with the left shoulder as the origin, and sacle is the adaptive scaling factor.
Further, the establishing a local space coordinate system on the palm to obtain the posture represented by the posture angle Eler (ψ, θ, γ) of the local coordinate system on the palm with respect to the coordinate system with the shoulder as the origin includes:
at key point p on the palm of the hand30' Direction p32' vector of
Figure GDA0003198953650000053
As the O · x axis of the local coordinate system,
Figure GDA0003198953650000054
and p31' Direction p33' vector of
Figure GDA0003198953650000055
O.xy plane as local coordinate system, over p31' vector of
Figure GDA0003198953650000056
And is
Figure GDA0003198953650000057
And is
Figure GDA00031989536500000517
To be provided with
Figure GDA0003198953650000058
As the O · z axis, there are:
Figure GDA0003198953650000059
x, y, z are the coordinate components of each vector;
solving the normal vector of the O.xz plane
Figure GDA00031989536500000510
And is
Figure GDA00031989536500000511
Will vector
Figure GDA00031989536500000512
Normalization:
Figure GDA00031989536500000513
r11、r21、r31is a vector
Figure GDA00031989536500000514
Normalized three coordinate components, r12、r22、r32Is a vector
Figure GDA00031989536500000515
Normalized three coordinate components, r13、r23、r33Is a vector
Figure GDA00031989536500000516
Normalizing the three coordinate components;
Rhis OhX 'y' z 'is a rotation matrix at O x' y 'z' and its attitude angle Eler (ψ, θ, γ) is calculated by the following formula:
Figure GDA0003198953650000061
ψ denotes an angle of rotating the coordinate system about the x-axis, θ denotes an angle of rotating the coordinate axis about the y-axis, and γ denotes an angle of rotating the coordinate axis about the z-axis. atan2 is an inverse trigonometric function, and the tangent angle is calculated.
Further, before the normalized coordinates, the length of the connecting rod of the robot and the palm posture are combined to obtain the joint angle, the filtering operation is carried out on the normalized coordinates and the palm posture angle.
Further, the obtaining of joint angles of joints of the robot by combining the normalized coordinates, the length of the connecting rod of the robot and the palm posture so as to drive the robot to move includes:
the ROS inverse kinematics solver obtains joint angles of joints of the robot according to the attitude angle Eler (psi, theta and gamma) of the palm and the tail end position of the robot; wherein the robot end position PeThe calculation formula of (a) is as follows:
PeNp7·L
in the formula (I), the compound is shown in the specification,Np7for normalized wrist atThe shoulder is the coordinate in the coordinate system of the origin, and L is the total length of the robot connecting rod. Eler (psi, theta, gamma) as the pose of the robot tip, PeAs the terminal position, in a human-computer interaction system, the human-computer interaction system firstly calculates each joint angle of the robot for the target position posture through an ROS self-contained inverse kinematics solver, and then controls the robot to move through a network socket connection.
Compared with the prior art, the invention can realize the following beneficial effects:
(1) the human-computer interaction system simultaneously controls the position and the posture of the multi-degree-of-freedom robot by utilizing the human arm and the palm.
(2) A space triangle formed by the human arms is utilized to form a unique space position coordinate of the palm relative to the shoulder in the maximum working space, the coordinate is mapped to the mechanical arms with different sizes after normalization processing, and the working space of the whole mechanical arm can be covered under the condition that the human does not exceed the effective visual field of the sensor in the interaction process.
(3) Compared with the problem that the scaling factor is difficult to determine by tracking the dynamic gesture in the prior art, the method provided by the invention has the advantages of strong stability, self-adaptive adjustment of the scaling factor and wide practicability.
(4) The gesture of the palm in the space is mapped to the gesture of the TCP of the mechanical arm, so that the intention of a person can be quickly and efficiently transferred to the robot.
(5) The invention corrects the human posture in advance, and uses the connecting line between two shoulders as a reference to reconstruct a coordinate system of the human posture, so that the corrected human posture faces the sensor no matter how, when a person does not depart from the effective visual field of the sensor, the relative position of each key point in a local coordinate system constructed by the human body per se can not be changed, and the comfort of the person is greatly improved.
(6) Because the relative position relation between the robot and the human is not needed to be calibrated in advance, the efficiency of human-computer interaction is improved.
(7) For the self-shielding problem of the arm, a shielding detection algorithm is adopted for detection and recovery, normal work under a complex environment is guaranteed, and the system is high in anti-interference performance.
Drawings
FIG. 1 is a schematic diagram of the system of the present invention.
Fig. 2 is a schematic flow chart of a body potential interaction method for a multiple degree of freedom robot according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of a mapping relationship between a robot motion space and a human arm motion space.
Fig. 4 is a schematic diagram of human posture pre-correction.
FIG. 5 is a schematic diagram of effective values calculated by pixel coordinate sliding window mean filtering of bone key points.
FIG. 6 is a schematic diagram of occlusion detection and automatic recovery.
Fig. 7 is a schematic view of controlling the end of the robot by using the posture.
Fig. 8 is a schematic diagram of key points of human bones and finger joints.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, the human-computer interaction system interface comprises a posture recognition panel, a virtual simulation panel, a video monitoring panel and a control panel. The system is developed based on an open-source robot operating system ROS, and the functions of virtual simulation, collision detection, motion planning and inverse motion solving of the robot are achieved by using the simulation function of the ROS and a motion planner Moveit.
The body potential interaction method facing the multi-degree-of-freedom robot provided by the invention utilizes the human arm and the palm to simultaneously control the position and the posture of the multi-degree-of-freedom robot, as shown in figure 2, when the system works, the human body video is collected by utilizing the depth camera, and the depth camera can simultaneously obtain the color image of each frame of image of the video and the depth image of the color image; inputting the color image into a human skeleton key point recognition algorithm to extract skeleton key points, then obtaining the depth information of each key point from the depth map, and obtaining the space coordinates of the skeleton key points after occlusion detection and recovery; reconstructing a coordinate system by taking the obtained human skeleton key points as reference by using a connecting line of two shoulders; and finally, after the space coordinates of the wrist relative to the shoulder are normalized, mapping the normalized space coordinates into the space coordinates of the tail end of the robot relative to the base, and mapping the gesture Eler (psi, theta and gamma) of the palm into the gesture of the tail end of the robot, so as to realize human-computer interaction. According to the method, a space triangle formed by human arms is utilized to form a unique space position coordinate of a palm relative to a shoulder in a maximum working space, the coordinate is mapped to mechanical arms with different sizes after normalization processing, the working space of the whole mechanical arm can be covered under the condition that a human does not exceed the effective visual field of a sensor in the interaction process, as shown in figure 3, the robot extends along with the extension of the human wrist, and meanwhile, the tail end of the robot is close to the edge of the working space when the human extends to the limit. Meanwhile, the gesture of the palm in the space is mapped to the gesture of the tail end of the mechanical arm, so that the intention of a person can be quickly and efficiently transferred to the robot. In order to solve the problem that the coordinates of the palm relative to the shoulders are not fixed due to the position change of the human body and the sensor, the gesture pre-correction method is adopted, the space vector between the two shoulders is parallel to the x axis of the camera coordinate system O.xyz, as shown in the attached figure 4, the corrected human body gesture faces the sensor no matter how, when the human body does not depart from the effective visual field of the sensor, the relative position in the local coordinate system constructed by the human body per se can not be changed, and the comfort of the human body is greatly improved. In consideration of the self-shielding problem of the arm, the shielding detection algorithm is adopted for detection, the historical coordinates of the key point and the coordinates of other non-shielded positions are automatically recovered, and the system is high in interference resistance.
Specifically, the body potential interaction method for the multi-degree-of-freedom robot provided by the invention comprises the following steps:
and step S1, obtaining the pixel coordinates of the human skeleton key points by adopting a human skeleton key point recognition algorithm, and obtaining the three-dimensional space coordinates of each human skeleton key point according to the pixel coordinates of the human skeleton key points, wherein the human skeleton key points comprise skeleton key points positioned on the human body and on the palm.
Step S1 includes the following steps:
step S11: acquiring a human body video by using an image acquisition sensor to obtain a color image and a depth image of each frame of image of the video;
in one embodiment of the invention, the image capture sensor is a depth camera.
Step 12: inputting the color image into a human skeleton key point identification algorithm to extract a skeleton key point pixel coordinate;
in one embodiment of the present invention, the adopted human bone key point identification algorithm is Open pos, but in other embodiments, other algorithms for identifying bone key points may be adopted.
In one embodiment of the present invention, the skeleton key point information collected by the human skeleton key point identification algorithm includes three elements, k ═ px py score, where px and py are pixel coordinates corresponding to the skeleton key point identification in each video frame, and score is the confidence level of the key point.
Due to the differences of different ambient light, different image acquisition sensors, different human-computer interaction personnel and the like, the confidence degrees of the identified key points are different, so that the traditional method for judging whether the segmentation based on the fixed threshold is effective identification information is not applicable any more; in addition, although the automatic threshold segmentation algorithm based on the frequency domain can better identify the effectiveness of the joint points, the processing process of the method is relatively complex, and the calculation amount is large. In the invention, after the key points of the human skeleton are identified, the confidence degrees of the key points are relatively close, and the difference between the key points with wrong identification and the key points with correct identification is large, therefore, an adaptive threshold segmentation method for the key points with necessary identification is adopted, and the method takes the key points with necessary identification as the reference and takes the upper and lower preset threshold value spaces as the threshold value intervals of effectiveness (the upper and lower 20% threshold value spaces are taken as the threshold value intervals of effectiveness by the embodiment of data analysis) on the basis of the traditional threshold value segmentation method based on fixed values. In one embodiment of the present invention, a left shoulder and a right shoulder are usedSkeletal key point p of right shoulder5,p2As the key points that must be identified (the subsequent steps will perform coordinate system reconstruction and pose correction with the connecting line of these two points). The calculation process is as follows:
marking whether each key point is effectively identified: validpatrix [ [ false ] [ false ] [ false ] ]
The confidence averages of the bone keypoints that have to be identified are selected as reference confidence, and in one embodiment of the invention, the keypoints p of the left shoulder and the right shoulder are selected5,p2Confidence average of points as reference confidence s:
Figure GDA0003198953650000091
wherein, scoreiRepresenting a key point kiThe degree of confidence of (a) is,iindex number of key point;
Figure GDA0003198953650000092
judging whether each key point is effectively identified, if so, judging whether each key point is effectively identified according to the ValidMatrix]For false, the invalid point is removed.
In one embodiment of the present invention, a human color image is collected by a depth camera, and 25 skeletal key points and 20 finger joint key points of the right hand are identified by a human skeletal key point identification algorithm, as shown in fig. 8, and the serial numbers and specific positions of the key points are shown in tables 1 and 2. Selection of skeletal keypoints p on the left arm of 25 joints among the skeletal keypoints of the human body6,p7Skeletal key point p on the left shoulder5Key point p of bone on neck1Bone key point p on the right shoulder2Skeletal key point p on the right arm3,p4As the most important key points, these key points are the basis of the subsequent steps; in one embodiment of the present invention, the key points obtained by the human skeleton recognition algorithm are all located at the joints of the skeleton, and if the width of the limb is considered, the key points are located at the center, which are determined by the human skeleton key point recognition algorithm.
TABLE 1 human skeleton key point sequence number and position
Figure GDA0003198953650000093
Figure GDA0003198953650000101
TABLE 2 palm skeleton Key Point sequence number and location
Figure GDA0003198953650000102
Figure GDA0003198953650000111
Step S13: and (3) carrying out moving average filtering by using a fixed-size window to obtain effective values of the pixel coordinates x and y of the bone key points (the effective values refer to values of the pixel coordinates after noise of the bone key point identification algorithm is filtered).
In one embodiment of the present invention, the noise of the human skeleton key point identification algorithm when the human body is still fluctuates approximately up and down with 30 video frames as periodicity, so a sliding window with the size of 30 is adopted for the sliding mean filtering, as shown in fig. 5. The specific process of filtering is as follows:
step S131: a window of a predetermined size is allocated to each bone key point, and the predetermined size of the predetermined window is 30 in this embodiment.
Figure GDA0003198953650000112
Figure GDA0003198953650000113
Representing a key point kiIs taken as the jth input value of
Figure GDA0003198953650000114
Calculating the coordinates of the pixel points by the px and py in the middle;
step S132: configuring WINDOWs of sliding filters for all skeletal key points;
WINDOW=[window0 ... windowi ...]
windowiis a key point kiThe sliding filter window of (1);
step S133: if the number of the acquired image frames is less than 30, the image quality is improved
Figure GDA0003198953650000115
sumiFor the sum of the elements in the ith filtering window, i indicates that this is the filtering window configured for the ith bone keypoint. If the number of the acquired image frames is more than 30, the original images are summed
Figure GDA0003198953650000116
Adding new input data
Figure GDA0003198953650000117
Then subtract the first input number
Figure GDA0003198953650000118
I.e. the update process is
Figure GDA0003198953650000121
In the process of the sliding mean filtering, a method of adding the sum in the current window to the data to be inserted and subtracting the earliest inserted element in the window is adopted, so that the defect of repeated sum in the filtering process can be effectively reduced, and the method is realized by using a circular queue with a fixed size.
Step S134: calculating to obtain a key point piAverage of all numbers in the sliding window:
Figure GDA0003198953650000122
according to the data of the analysis practical experiment and the characteristics of the periodic function, the sum of the function values of the periodic up-and-down fluctuation noise in a complete period is zero, because sum is accumulated by 3The sum of 0 frames, the result of a single frame is obtained by averaging. The method aims to reduce the noise of the human skeleton key point identification algorithm.
Step S14: and converting the coordinates of the key points of the human skeleton from two-dimensional pixel coordinates into three-dimensional space coordinates.
In one embodiment of the invention, the depth map and the RGB video frame at the same time, which are acquired by the image acquisition sensor, are subjected to pixel level alignment to obtain a three-dimensional coordinate p (x, y, z) corresponding to each pixel point by using the camera as an origin of a spatial coordinate system. Converting the two-dimensional pixel coordinates into three-dimensional coordinates by a depth map of a depth camera: p is a radical ofi=remap[pxi pyi]Remap is a development interface (API) provided by the depth camera development kit (SDK) and functions to convert pixel coordinates into spatial coordinates with the camera as the origin.
The spatial coordinates of each skeletal keypoint are: p ═ x y z ]
Step S2, detecting an abnormal error that the shoulder is blocked by the arm during the interaction process, and automatically recovering or marking as an invalid point, as shown in fig. 6.
In this step, first, the occlusion detection algorithm is used to detect the left shoulder key point p5Whether the shoulder is shielded or not, and if the shoulder is detected to be shielded, the right shoulder key point p is utilized2Recovering the depth information; if the right shoulder keypoint p2And if the left shoulder key point is also shielded, selecting the historical coordinates of the left shoulder key point
Figure GDA0003198953650000123
If the coordinate value at the current moment is recovered, the right shoulder key point p cannot be recovered if the two methods are exhausted5Depth information, then the left shoulder key point p5Marking as an invalid point, and discarding the data acquired by the image of the frame. The occlusion detection algorithm proceeds as follows:
calculating left arm p6 p7Direction vector of (2)
Figure GDA0003198953650000124
P7Point of direction p5Vector of (2)
Figure GDA0003198953650000125
P6Point of direction p5Vector of (2)
Figure GDA0003198953650000126
Calculating p2p4(line between two key points) at p2p3Projection square value of (A)
Figure GDA0003198953650000131
By calculating p2And a straight line p3 p4Is detected p2Whether or not to be shielded
Figure GDA0003198953650000132
Wherein x isa、ya、za、xb、yb、zbRespectively represent vectors
Figure GDA0003198953650000133
The x, y, z components of (a).
Passing only p5And a straight line p6 p7Is not enough to determine whether occlusion actually occurs, p must be increased5At right angles to
Figure GDA0003198953650000134
And each pass through p6、p7Is used as a constraint between planes.
Will be provided with
Figure GDA0003198953650000135
Is substituted into the process
Figure GDA0003198953650000136
Is a normal vector and passes through p6The spatial plane equation of (a):
Figure GDA0003198953650000137
xn、yn、znrepresenting a vector
Figure GDA0003198953650000138
The x, y, z components of (a);
will be provided with
Figure GDA0003198953650000139
Is substituted into the process
Figure GDA00031989536500001310
Is a normal vector and passes through p7The spatial plane equation of (a):
Figure GDA00031989536500001311
get s1,s2If the sign is negative, the left shoulder key point p is indicated5Between two planes, then
s=s1·s2
When in use
Figure GDA00031989536500001312
Established, left shoulder key point p5Occlusion occurs. threshold is adjustable, depending on the sensor noise level, and represents the spatial distance of the left wrist from the spatial line of the left forearm, and further, the image is left-right inverted from the real world due to the mirror image of the image. In one embodiment of the present invention, the threshold value is 50mm, and in other embodiments, other numbers may be used according to the actual situationThe value is obtained.
And S3, correcting the human body space posture, and performing coordinate reconstruction on the space coordinates of the key points.
The preliminarily obtained three-dimensional coordinate system of the key points of the human skeleton takes the sensor as an origin, and when the human body and the sensor are in different orientations, the human body postures are different in the camera coordinate system, as shown in the example before the correction of fig. 4. In one embodiment of the invention, the method of pre-correction is utilized, namely, the human body is positioned at the left shoulder p5(x, y, z) is directed to the right shoulder p2The connecting line of (x, y, z) is finally parallel to the x coordinate axis O-x of the camera coordinate system through rotating transformation R, namely, the human body posture is corrected in a mode that the space vector between two shoulders is parallel to the x axis of the camera coordinate system O.xyz.
When the arm of the person changes the posture, the left and right shoulders p2,p5The relative position does not change and the coordinate system needs to be established on a stable reference. The three-dimensional coordinates pointing to other skeleton key points from the left shoulder are subjected to the same rotation transformation, and finally the left shoulder is established as the origin,
Figure GDA0003198953650000141
is the x' axis, perpendicular to the o · xz plane parallel to the sensor coordinate system
Figure GDA0003198953650000142
And the pointing direction is the y 'axis, and the opposite direction to the sensor y axis is the z' axis, as shown in fig. 7. The process is as follows:
p2point of direction p5Of the space vector v
v=p2-p5=[x y z]T (7)
Angle between v and yoz plane
Figure GDA0003198953650000143
θxCorresponding rotation matrix
Figure GDA0003198953650000144
v is through R (theta)x) Space vector parallel to yoz plane after rotation transformation
v'=R(θx)×v (10)
Angle between space vector v' and xoy plane
Figure GDA0003198953650000145
θzCorresponding rotation matrix
Figure GDA0003198953650000146
v' by R (θ)x) Space vector parallel to yoz plane after rotation transformation
v”=R(θz)×v' (13)
After rotational transformation, p2New spatial position
p2'=p5+v” (14)
Total rotational transformation
R=R(θz)×R(θx) (15)
(15) R of formula is a rotation matrix from the camera coordinate system to the coordinate system with the shoulder as the origin.
For skeletal key point piIts reconstructed coordinates pi'
pi'=p5+vi'
Wherein v isi'=R×vi
vi=pi-p5
viIs a skeletal key point piAnd the left shoulder p5Vector of (v)i' is a space vector transformed by rotation.
Step S4, putting the palm of the arm on the palmNormalizing the coordinates of the shoulder to obtain normalized coordinatesNp7Meanwhile, a local spatial coordinate system is established on the palm, and the attitude Euler (ψ, θ, γ) of the coordinate system established in S3 in Euler angles is solved.
In one embodiment of the present invention, step S4 includes the following steps:
step S41: respectively obtaining the big arms p5'p6' Length dist1Arm p of the arm6'p7' Length dist2And palm to shoulder p5'p7Distance dist of3The calculation formula is as follows:
Figure GDA0003198953650000151
x6'、y6'、z6is' is p6'coordinate component, x, of the space coordinate system O.x' y 'z' after reconstruction5'、y5'、z5Is' is p5'coordinate components of the space coordinate system O · x' y 'z' after reconstruction;
Figure GDA0003198953650000152
Np7is the normalized coordinates of the hand in the space unit sphere (the coordinate inner product on the sphere is 1) of the coordinate system O.x ' y ' z ' with the left shoulder as the origin, sacle is an adaptive scaling factor, and the hand can be conveniently converted to other coordinate systems by the factor.
Step S42: to solve the pose of the palm in the O.x ' y ' z ' coordinate system, a local spatial coordinate system O is established on the palmhX ' y ' z '. At key point p on the palm of the hand30' Direction p32' vector of
Figure GDA0003198953650000161
As the O · x axis of the local coordinate system,
Figure GDA0003198953650000162
and p31' Direction p33' vector of
Figure GDA0003198953650000163
O.xy plane as local coordinate system, over p31' vector of
Figure GDA0003198953650000164
And is
Figure GDA0003198953650000165
And is
Figure GDA0003198953650000166
To be provided with
Figure GDA0003198953650000167
As the O · z axis, there are:
Figure GDA0003198953650000168
xc、yc、zcis a vector
Figure GDA0003198953650000169
Three coordinate components of (2), xd、yd、zdIs a vector
Figure GDA00031989536500001610
Three coordinate components of (a);
solving the normal vector of the O.xz plane
Figure GDA00031989536500001611
And is
Figure GDA00031989536500001612
Will vector
Figure GDA00031989536500001613
Normalization:
Figure GDA00031989536500001614
r11、r21、r31is a vector
Figure GDA00031989536500001615
Normalized three coordinate components, r12、r22、r32Is a vector
Figure GDA00031989536500001616
Normalized three coordinate components, r13、r23、r33Is a vector
Figure GDA00031989536500001617
Normalizing the three coordinate components;
Rhis OhThe rotation matrix of x 'y' z 'at O · x' y 'z', the attitude angle Eler (ψ, θ, γ), i.e., the spatial attitude of the palm, is calculated by the following equation:
Figure GDA00031989536500001618
in the formula, ψ represents an angle of rotating the coordinate system about the x-axis, θ represents an angle of rotating the coordinate axis about the y-axis, and γ represents an angle of rotating the coordinate axis about the z-axis. atan2 is an inverse trigonometric function and is calculated to obtain the tangent angle.
Step S5, for the result obtained in step S4
Figure GDA00031989536500001619
Filtering the trace points, so that the adverse effect of noise on the space coordinate of the wrist is reduced, and the influence of accumulated errors of the sensor is reduced; and (4) performing filtering processing on the Eler (psi, theta and gamma) obtained in the step (S4) to reduce the hand local coordinate system posture jitter.
Step S6, filtering the processed product in step S5
Figure GDA00031989536500001620
Multiplication byThe sum L of the length of the connecting rod of the robot and the palm posture form a space pose ps(x,y,z,ψ,θ,γ)。
With the above ps(x, y, z, psi, theta and gamma) are input into a human-computer interaction system, the human-computer interaction system calculates each joint angle of the robot for the target position posture through an ROS (reactive species ROS) self-contained inverse kinematics solver, and then the robot is controlled to move through a network socket connection.
Calculating the total length of the connecting rod of the robot through the dynamic data of the robot
Figure GDA0003198953650000171
lidxThe length of the first idx connecting rod of the robot is shown, dof is the degree of freedom of the robot, and idx represents the serial number of the connecting rod of the robot.
End position of robot
PeNp7·L (22)
Np7The coordinates of the wrist in the coordinate system with the shoulder as the origin after normalization represent one coordinate in the sphere of the spatial unit.
The body potential interaction mode has the great advantages that each limb of a person can express rich semantics, rich spatial relations exist among the limbs of the person, and the motion process of the arm of the person is very similar to that of the arm of the robot. The human-computer interaction system simultaneously controls the position and the posture of the multi-degree-of-freedom robot by utilizing the human arm and the palm. A space triangle formed by the human arms is utilized to form a unique space position coordinate of the palm relative to the shoulder in the maximum working space, the coordinate is mapped to the mechanical arms with different sizes after normalization processing, and the working space of the whole mechanical arm can be covered under the condition that the human does not exceed the effective visual field of the sensor in the interaction process. Compared with the problem that the scaling factor is difficult to determine by tracking the dynamic gesture in the prior art, the method provided by the invention has the advantages of strong stability, self-adaptive adjustment of the scaling factor and wide practicability. The gesture of the palm in the space is mapped to the gesture of the mechanical arm TCP, so that the intention of a person can be quickly and efficiently transferred to the robot. The invention adopts a human posture pre-correction method, and uses the connecting line between two shoulders as a reference to reconstruct a coordinate system of the human posture, the corrected human posture faces to the sensor no matter how, when a person does not depart from the effective visual field of the sensor, the relative position of each key point in a local coordinate system constructed by the human body per se can not be changed, and the comfort of the person is greatly improved. Because the relative position relation among the sensor, the robot and the human is not required to be calibrated in advance, the efficiency of human-computer interaction is improved. For the self-shielding problem of the arm, a shielding detection algorithm is adopted for detection and recovery, normal work under a complex environment is guaranteed, and the system is high in anti-interference performance.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A body potential interaction method for a multi-degree-of-freedom robot is characterized by comprising the following steps:
obtaining the pixel coordinates of the human skeleton key points by adopting a human skeleton key point identification algorithm, and obtaining the three-dimensional space coordinates of each human skeleton key point according to the pixel coordinates of the human skeleton key points;
detecting whether an abnormal error that the shoulder is shielded by the arm exists in the interaction process, and if so, recovering or marking the key point of the shoulder as an invalid point;
correcting the space posture of the human body, and reconstructing the coordinates of the space coordinates of the key points, wherein the coordinates are reconstructed by establishing a space rectangular coordinate system O.x ' y ' z ' taking the left shoulder key point as the origin of coordinates and other skeleton key points piAll using the coordinate systemReconstructing coordinates of the reference system to obtain pi';
Normalizing the coordinate of the upper wrist part of the arm relative to the shoulder part to obtain a normalized coordinateNp7Establishing a local space coordinate system on the palm, and obtaining attitude angles Eler (psi, theta, gamma) of the local coordinate system on the palm relative to a coordinate system taking the shoulder as an origin, wherein the attitude angles Eler (psi, theta, gamma) represent the space attitude of the palm;
and combining the normalized coordinates, the length of the connecting rod of the robot and the space posture of the palm to obtain joint angles of joints of the robot so as to drive the robot to move.
2. The body potential interaction method oriented to the multi-degree-of-freedom robot as claimed in claim 1, wherein the human skeleton key point recognition algorithm is Open Pose.
3. The body potential interaction method facing the multi-degree-of-freedom robot as claimed in claim 1, wherein the obtaining of the three-dimensional space coordinates of each human skeleton key point from the pixel coordinates of the human skeleton key point comprises:
filtering by using the size of a preset window to obtain an effective value of a bone key point pixel coordinate;
and carrying out pixel level alignment on the collected depth map and the RGB video frame at the same moment to obtain a three-dimensional coordinate corresponding to each pixel point by taking the camera as the origin of a space coordinate system.
4. The method as claimed in claim 1, wherein the detecting whether there is an abnormal error that a left shoulder is blocked by a left arm during the interaction process, and if so, recovering or marking a key point of the shoulder as an invalid point comprises:
calculating left arm p6 p7Direction vector of
Figure RE-FDA0003154290950000011
P7Point of direction p5Vector of (2)
Figure RE-FDA0003154290950000012
P6Point of direction p5Vector of (2)
Figure RE-FDA0003154290950000021
Calculating p2p4At p2p3Projection square value of
Figure RE-FDA0003154290950000022
By calculating p2And a straight line p3 p4Is detected p2Whether or not it is shielded
Figure RE-FDA0003154290950000023
Wherein x isa、ya、za、xb、yb、zbRespectively represent vectors
Figure RE-FDA0003154290950000024
The x, y, z components of (a);
p5at right angles to
Figure RE-FDA0003154290950000025
And each pass through p6、p7The constraint between planes of (a):
will be provided with
Figure RE-FDA0003154290950000026
Is substituted into the process
Figure RE-FDA0003154290950000027
Is a normal vector and passes through p6The spatial plane equation of (a):
Figure RE-FDA0003154290950000028
xn、yn、znrepresenting a vector
Figure RE-FDA0003154290950000029
The x, y, z components of (a);
will be provided with
Figure RE-FDA00031542909500000210
Is substituted into the process
Figure RE-FDA00031542909500000211
Is a normal vector and passes through p7The spatial plane equation of (a):
Figure RE-FDA00031542909500000212
get s1,s2If the sign is negative, the left shoulder key point p is indicated5Between two planes, then
s=s1·s2
When in use
Figure RE-FDA00031542909500000213
Established, left shoulder key point p5Occlusion occurs, and threshold represents the spatial distance of the left wrist from the spatial line where the left forearm is located.
5. The method according to claim 1, wherein the correction of the human body posture is performed such that a space vector between the shoulders is parallel to an x-axis of a camera coordinate system O · xyz.
6. The body potential interaction method for the multi-degree-of-freedom robot as claimed in claim 1, wherein the spatial rectangular coordinate system O · x ' y ' z ' with the left shoulder key point as the origin is established, and other skeleton key points piThe coordinate system is used as a reference system to perform coordinate reconstruction to obtain pi', includes:
the left shoulder is taken as the origin,
Figure FDA0003060761830000031
is the x' axis, perpendicular to the o · xz plane parallel to the sensing coordinate system
Figure FDA0003060761830000032
The direction pointing to the sensor is a y ' axis, and the direction opposite to the y axis of the sensor is used as a reconstructed coordinate system O.x ' y ' z ' of a z ' axis;
p2point of direction p5Of the space vector v
v=p2-p5=[x y z]T (7)
Angle between v and yoz plane
Figure FDA0003060761830000033
θxCorresponding rotation matrix
Figure FDA0003060761830000034
v is through R (theta)x) Space vector parallel to yoz plane after rotation transformation
v'=R(θx)×v (10)
Angle between space vector v' and xoy plane
Figure FDA0003060761830000035
θzCorresponding rotation matrix
Figure FDA0003060761830000036
v' by R (θ)x) Space vector parallel to yoz plane after rotation transformation
v”=R(θz)×v' (13)
After rotational transformation, p2New space coordinates
p2'=p5+v” (14)
Total rotational transformation
R=R(θz)×R(θx) (15)
For skeletal key point piIts reconstructed coordinates pi'
pi'=p5+vi'
Wherein v isi'=R×vi
vi=pi-p5
Where R is a rotation matrix from the camera coordinate system to the coordinate system with the shoulder as the origin.
7. The method as claimed in claim 1, wherein the coordinates of the wrist of the arm with respect to the shoulder are normalized to obtain the coordinatesNp7The method comprises the following steps:
respectively obtaining the large arms p5'p6' Length dist1Arm p of the arm6'p7' Length dist2And palm to shoulder p5'p7Distance dist of3The calculation formula is as follows:
Figure FDA0003060761830000041
Figure FDA0003060761830000042
Np7is the normalized coordinates of the hand in the unit sphere of space of the coordinate system O.x ' y ' z ' with the left shoulder as the origin, x6'、y6'、z6Is' is p6'coordinate component, x, of the space coordinate system O.x' y 'z' after reconstruction5'、y5'、z5Is' is p5'the coordinate component of the space coordinate system O · x' y 'z' after reconstruction, sacle, is the adaptive scaling factor.
8. The method according to claim 1, wherein the establishing a local spatial coordinate system on the palm to obtain the attitude represented by an attitude angle Eler (ψ, θ, γ) of the local coordinate system on the palm with respect to a coordinate system with the shoulder as an origin comprises:
at key point p on the palm of the hand30' Direction p32' vector of
Figure FDA0003060761830000043
As the O · x axis of the local coordinate system,
Figure FDA0003060761830000044
and p31' Direction p33' vector of
Figure FDA0003060761830000045
O.xy plane as local coordinate system, over p31' vector of
Figure FDA0003060761830000046
And is
Figure FDA0003060761830000047
And is
Figure FDA0003060761830000048
To be provided with
Figure FDA0003060761830000049
As the O · z axis, there are:
Figure FDA00030607618300000410
x, y, z are the coordinate components of each vector;
solving the normal vector of the O.xz plane
Figure FDA00030607618300000411
And is
Figure FDA00030607618300000412
Will vector
Figure FDA00030607618300000413
Normalization:
Figure FDA0003060761830000051
Figure FDA0003060761830000052
r11、r21、r31is a vector
Figure FDA0003060761830000053
Normalized three coordinate components, r12、r22、r32Is a vector
Figure FDA0003060761830000054
Normalized three coordinate components, r13、r23、r33Is a vector
Figure FDA0003060761830000055
Normalizing the three coordinate components;
Rhis OhThe rotation matrix of x 'y' z 'at O · x' y 'z', its attitude angle Eler (ψ, θ, γ) is calculated by the following equation:
Figure FDA0003060761830000056
ψ represents an angle of rotating the coordinate system about the x-axis, θ represents an angle of rotating the coordinate axis about the y-axis, γ represents an angle of rotating the coordinate axis about the z-axis, atan2 is an inverse trigonometric function, and a tangent angle is calculated.
9. The method as claimed in claim 1, further comprising performing a filtering operation on the normalized coordinates and the palm pose angle before combining the normalized coordinates, the length of the robot link, and the palm pose to obtain the joint angles of the joints of the robot.
10. The method for interacting the body potentials of the robot with multiple degrees of freedom according to any one of claims 1 to 9, wherein the joint angles of the joints of the robot are obtained by combining the normalized coordinates, the length of the connecting rod of the robot and the palm posture of the hand so as to drive the robot to move, and the method comprises the following steps:
the ROS inverse kinematics solver obtains joint angles of joints of the robot according to the attitude angle Eler (psi, theta and gamma) of the palm and the tail end position of the robot; wherein the robot end position PeThe calculation formula of (a) is as follows:
PeNp7·L
in the formula (I), the compound is shown in the specification,Np7for normalizing the rear handThe coordinate of the wrist in a coordinate system with the shoulder as the origin, and L is the total length of the robot connecting rod.
CN202110512320.5A 2021-05-11 2021-05-11 Body potential interaction method for multi-degree-of-freedom robot Active CN113386128B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110512320.5A CN113386128B (en) 2021-05-11 2021-05-11 Body potential interaction method for multi-degree-of-freedom robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110512320.5A CN113386128B (en) 2021-05-11 2021-05-11 Body potential interaction method for multi-degree-of-freedom robot

Publications (2)

Publication Number Publication Date
CN113386128A CN113386128A (en) 2021-09-14
CN113386128B true CN113386128B (en) 2022-06-10

Family

ID=77616921

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110512320.5A Active CN113386128B (en) 2021-05-11 2021-05-11 Body potential interaction method for multi-degree-of-freedom robot

Country Status (1)

Country Link
CN (1) CN113386128B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114327047B (en) * 2021-12-01 2024-04-30 北京小米移动软件有限公司 Device control method, device control apparatus, and storage medium
CN115331153B (en) * 2022-10-12 2022-12-23 山东省第二人民医院(山东省耳鼻喉医院、山东省耳鼻喉研究所) Posture monitoring method for assisting vestibule rehabilitation training
CN118288297B (en) * 2024-06-06 2024-08-16 北京人形机器人创新中心有限公司 Robot motion control method, system, electronic equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106313049A (en) * 2016-10-08 2017-01-11 华中科技大学 Somatosensory control system and control method for apery mechanical arm
CN107160364A (en) * 2017-06-07 2017-09-15 华南理工大学 A kind of industrial robot teaching system and method based on machine vision
CN107363813A (en) * 2017-08-17 2017-11-21 北京航空航天大学 A kind of desktop industrial robot teaching system and method based on wearable device
CN107953331A (en) * 2017-10-17 2018-04-24 华南理工大学 A kind of human body attitude mapping method applied to anthropomorphic robot action imitation
CN110480634A (en) * 2019-08-08 2019-11-22 北京科技大学 A kind of arm guided-moving control method for manipulator motion control
CN111738092A (en) * 2020-05-28 2020-10-02 华南理工大学 Method for recovering shielded human body posture sequence based on deep learning
CN112149455A (en) * 2019-06-26 2020-12-29 北京京东尚科信息技术有限公司 Method and device for detecting human body posture
JP2021068438A (en) * 2019-10-21 2021-04-30 ダッソー システムズDassault Systemes Computer-implemented method for making skeleton of modeled body take posture

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106313049A (en) * 2016-10-08 2017-01-11 华中科技大学 Somatosensory control system and control method for apery mechanical arm
CN107160364A (en) * 2017-06-07 2017-09-15 华南理工大学 A kind of industrial robot teaching system and method based on machine vision
CN107363813A (en) * 2017-08-17 2017-11-21 北京航空航天大学 A kind of desktop industrial robot teaching system and method based on wearable device
CN107953331A (en) * 2017-10-17 2018-04-24 华南理工大学 A kind of human body attitude mapping method applied to anthropomorphic robot action imitation
CN112149455A (en) * 2019-06-26 2020-12-29 北京京东尚科信息技术有限公司 Method and device for detecting human body posture
CN110480634A (en) * 2019-08-08 2019-11-22 北京科技大学 A kind of arm guided-moving control method for manipulator motion control
JP2021068438A (en) * 2019-10-21 2021-04-30 ダッソー システムズDassault Systemes Computer-implemented method for making skeleton of modeled body take posture
CN111738092A (en) * 2020-05-28 2020-10-02 华南理工大学 Method for recovering shielded human body posture sequence based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
李瑞.图像和深度图中的动作识别与手势姿态估计.《中国博士学位论文全文数据库 信息科技辑》.2019, *
王志红.基于视觉手势识别的机械手操控系统的研究.《中国优秀硕士学位论文全文数据库 信息科技辑》.2017, *

Also Published As

Publication number Publication date
CN113386128A (en) 2021-09-14

Similar Documents

Publication Publication Date Title
CN113386128B (en) Body potential interaction method for multi-degree-of-freedom robot
CN106909216B (en) Kinect sensor-based humanoid manipulator control method
Lee et al. Model-based analysis of hand posture
Heap et al. Towards 3D hand tracking using a deformable model
CN108972494A (en) A kind of Apery manipulator crawl control system and its data processing method
Kang et al. Toward automatic robot instruction from perception-temporal segmentation of tasks from human hand motion
Triesch et al. Robotic gesture recognition
Schröder et al. Real-time hand tracking using synergistic inverse kinematics
CN113505694B (en) Man-machine interaction method and device based on sight tracking and computer equipment
JP4765075B2 (en) Object position and orientation recognition system using stereo image and program for executing object position and orientation recognition method
Guo Research of hand positioning and gesture recognition based on binocular vision
CN102830798A (en) Mark-free hand tracking method of single-arm robot based on Kinect
CN112966628A (en) Visual angle self-adaptive multi-target tumble detection method based on graph convolution neural network
CN115576426A (en) Hand interaction method for mixed reality flight simulator
CN114495273A (en) Robot gesture teleoperation method and related device
CN108115671B (en) Double-arm robot control method and system based on 3D vision sensor
JP7171294B2 (en) Information processing device, information processing method and program
Chaudhary et al. A vision-based method to find fingertips in a closed hand
CN117333635A (en) Interactive two-hand three-dimensional reconstruction method and system based on single RGB image
Sun et al. Visual hand tracking on depth image using 2-D matched filter
CN109214295B (en) Gesture recognition method based on data fusion of Kinect v2 and Leap Motion
Ehlers et al. Self-scaling Kinematic Hand Skeleton for Real-time 3D Hand-finger Pose Estimation.
Triesch et al. Robotic gesture recognition by cue combination
Liang et al. Hand pose estimation by combining fingertip tracking and articulated ICP
Fujiki et al. Real-time 3D hand shape estimation based on inverse kinematics and physical constraints

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant