CN113386128B - Body potential interaction method for multi-degree-of-freedom robot - Google Patents


Info

Publication number
CN113386128B
CN113386128B
Authority
CN
China
Prior art keywords
coordinate system
coordinate
space
coordinates
robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110512320.5A
Other languages
Chinese (zh)
Other versions
CN113386128A (en)
Inventor
张平 (Zhang Ping)
陈佳新 (Chen Jiaxin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN202110512320.5A priority Critical patent/CN113386128B/en
Publication of CN113386128A publication Critical patent/CN113386128A/en
Application granted granted Critical
Publication of CN113386128B publication Critical patent/CN113386128B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1602: Programme controls characterised by the control system, structure, architecture
    • B25J11/00: Manipulators not otherwise provided for
    • B25J11/0005: Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)

Abstract

The invention provides a body posture interaction method for a multi-degree-of-freedom robot, comprising the following steps: obtain the pixel coordinates of the human skeleton key points with a skeleton key-point recognition algorithm, and derive the three-dimensional spatial coordinates of each key point from those pixel coordinates; detect whether the shoulder is occluded by the arm during interaction; correct the spatial posture of the human body and reconstruct the spatial coordinates of the key points; normalize the coordinates of the wrist relative to the shoulder, and establish a local coordinate system on the palm to obtain the Euler angles Eler(ψ, θ, γ) of the palm frame relative to the shoulder-origin frame; finally, combine the normalized coordinates, the robot link lengths and the palm attitude to obtain the joint angles that drive the robot. With this method, the whole workspace of the robot arm can be covered while the operator stays within the effective field of view of the sensor throughout the interaction.

Description

A body posture interaction method for multi-degree-of-freedom robots

Technical Field

The invention belongs to the field of human-computer interaction, and in particular relates to a body posture interaction method for multi-degree-of-freedom robots.

Background

As many countries continue to advance their Industry 4.0 development plans, industrial production places ever higher demands on robot intelligence, and natural, efficient human-robot interaction interfaces have received broad attention.

Human-computer interaction is the process of collecting information about a person with a device and conveying that person's intent to a machine; a human-computer interaction interface is an algorithm or program that converts human intent into instructions the machine can execute. Depending on the modality, interaction can be performed by voice, by worn sensors, by gamepad, by baton, by brain waves, or by vision. Weighing the naturalness of the interaction against the complexity of the system design, posture-based interaction both avoids interference from environmental noise and frees the operator from the constraints of worn sensors.

In traditional human-computer interaction, the number of interaction semantics defined by features is always limited and cannot satisfy the diverse needs of complex interaction. With dynamic posture interaction, complex interaction requirements can be met by tracking the three-dimensional trajectories of the human skeleton key points, but the limited effective field of view of the sensor and the size differences among multi-degree-of-freedom robots restrict this mode of interaction: whenever the operator leaves the sensor's effective field of view, the interaction has to be interrupted and then recovered, which both confines the operator's range of motion and increases the probability that the interaction fails. Researchers have combined data from multiple sensors to enlarge the field of view of a single sensor, but this raises both cost and system complexity.

Summary of the Invention

To solve the problems in the prior art, the present invention provides a body posture interaction method based on a depth sensor and a human skeleton key-point recognition algorithm.

To this end, the body posture interaction method for multi-degree-of-freedom robots provided by the present invention comprises the following steps:

use a human skeleton key-point recognition algorithm to obtain the pixel coordinates of the skeleton key points, and obtain the three-dimensional spatial coordinates of each key point from those pixel coordinates;

detect whether the shoulder is occluded by the arm during interaction; if so, recover the occluded point or mark the shoulder key point as invalid;

correct the spatial posture of the human body and reconstruct the spatial coordinates of the key points: establish a rectangular spatial coordinate system O·x'y'z' with the left-shoulder key point as origin, and reconstruct every other skeleton key point pi in this reference frame to obtain pi';

normalize the coordinates of the wrist relative to the shoulder to obtain Np7, and establish a local coordinate system on the palm to obtain the Euler angles Eler(ψ, θ, γ) of the palm frame relative to the shoulder-origin frame; Eler(ψ, θ, γ) represents the spatial attitude of the palm;

combine the normalized coordinates, the robot link lengths and the palm attitude to obtain the joint angle of each robot joint, and drive the robot's motion accordingly.

Further, the human skeleton key-point recognition algorithm is OpenPose.

Further, obtaining the three-dimensional spatial coordinates of each skeleton key point from its pixel coordinates includes:

filtering with a preset window size to obtain valid values of the key-point pixel coordinates;

aligning the depth map and the RGB video frame captured at the same instant at the pixel level, so that each pixel yields a three-dimensional coordinate in the coordinate system with the camera as origin.
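As a concrete sketch of these two sub-steps, the following Python fragment applies a sliding-window mean filter to a key-point pixel coordinate and back-projects a filtered pixel with its depth value into the camera-origin frame. The function names are hypothetical, and the pinhole intrinsics fx, fy, cx, cy are assumed to come from the depth camera's calibration; the patent itself does not give an implementation.

```python
from collections import deque

class WindowFilter:
    """Sliding-window mean filter over a scalar pixel coordinate."""
    def __init__(self, size=5):
        self.buf = deque(maxlen=size)

    def update(self, value):
        # Append the newest sample and return the mean over the window.
        self.buf.append(value)
        return sum(self.buf) / len(self.buf)

def pixel_to_camera(px, py, depth, fx, fy, cx, cy):
    """Back-project pixel (px, py) with depth (metres) into the 3-D
    coordinate system whose origin is the camera (pinhole model)."""
    x = (px - cx) * depth / fx
    y = (py - cy) * depth / fy
    return (x, y, depth)
```

In practice one filter instance would be kept per coordinate of each tracked key point, so that jitter in the 2-D detections is smoothed before the depth lookup.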

Further, detecting whether the left shoulder is occluded by the left arm during interaction and, if so, recovering the point or marking the shoulder key point as invalid, includes:

Compute the direction vector n of the left forearm p6p7:

n = p7 - p6 (1)

the vector a from p7 to p5:

a = p5 - p7 (2)

and the vector b from p6 to p5:

b = p5 - p6 (3)

The squared projection of p2p4 onto p2p3 is computed as

proj² = ((p4 - p2)·(p3 - p2))² / |p3 - p2|²

where p3 is the skeleton key point at the junction of the right forearm and upper arm, and p4 is the skeleton key point at the junction of the right forearm and the right wrist.

Whether p2 is occluded is detected by computing the spatial distance between p2 and the straight line p3p4:

d = |a' × b'| / |p4 - p3| (4)

where a' = p2 - p3 and b' = p2 - p4; with xa, ya, za and xb, yb, zb denoting the x, y, z components of these two vectors, |a' × b'| = √((ya·zb - za·yb)² + (za·xb - xa·zb)² + (xa·yb - ya·xb)²).

The spatial distance between p5 and the straight line p6p7 alone is not sufficient to judge whether occlusion has really occurred; a further constraint requires p5 to lie between the two planes that are perpendicular to n and pass through p6 and p7 respectively:

Substituting p5 into the equation of the spatial plane with normal vector n through p6 gives

s1 = xn(x5 - x6) + yn(y5 - y6) + zn(z5 - z6) (5)

where xn, yn, zn are the x, y, z components of n.

Substituting p5 into the equation of the spatial plane with normal vector n through p7 gives

s2 = xn(x5 - x7) + yn(y5 - y7) + zn(z5 - z7) (6)

Take the signs of s1 and s2: if the product

s = s1·s2

is negative, the left-shoulder key point p5 lies between the two planes. If this condition holds and the distance from p5 to the line p6p7 is also below the preset threshold, the left-shoulder key point p5 is judged to be occluded.
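A minimal Python sketch of this occlusion test follows. The function name and the distance threshold are illustrative assumptions; the patent does not state a concrete threshold value.

```python
import numpy as np

def shoulder_occluded(p5, p6, p7, d_thresh=0.05):
    """Flag the left-shoulder key point p5 as occluded when it lies close
    to the forearm line p6-p7 AND between the two planes that are normal
    to the forearm direction and pass through p6 and p7."""
    p5, p6, p7 = (np.asarray(p, dtype=float) for p in (p5, p6, p7))
    n = p7 - p6                      # forearm direction vector
    b = p5 - p6                      # vector from p6 to p5
    a = p5 - p7                      # vector from p7 to p5
    # Point-to-line distance from p5 to the line through p6 with direction n.
    d = np.linalg.norm(np.cross(b, n)) / np.linalg.norm(n)
    s1 = np.dot(n, b)                # sign w.r.t. the plane through p6
    s2 = np.dot(n, a)                # sign w.r.t. the plane through p7
    return bool(d < d_thresh and s1 * s2 < 0)
```

When the product of the two plane substitutions is negative, the shoulder projects onto the forearm segment itself rather than onto its extension, which is exactly the case in which a small point-to-line distance indicates real self-occlusion.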

Further, in correcting the spatial posture of the human body, the posture is corrected so that the spatial vector between the two shoulders is parallel to the x axis of the camera coordinate system O·xyz.

Further, establishing the rectangular spatial coordinate system O·x'y'z' with the left-shoulder key point as origin, and reconstructing every other skeleton key point pi in this reference frame to obtain pi', includes:

Take the left shoulder as the origin; the vector from p5 to p2 defines the x' axis; the y' axis is perpendicular to this vector in a plane parallel to the o·xz plane of the sensor coordinate system and points toward the sensor; the z' axis points opposite to the sensor's y axis. This yields the reconstructed coordinate system O·x'y'z'.

The spatial vector v from p5 to p2 is

v = p2 - p5 = [x y z]^T (7)

The rotation angle about the x axis that brings v into the xoy plane is

θx = atan2(z, y) (8)

with the corresponding rotation matrix

R(θx) = [1 0 0; 0 cosθx sinθx; 0 -sinθx cosθx] (9)

After the rotation R(θx), v becomes a spatial vector parallel to the xoy plane:

v' = R(θx) × v (10)

The angle between v' and the x axis within the xoy plane is

θz = atan2(y', x') (11)

with the corresponding rotation matrix

R(θz) = [cosθz sinθz 0; -sinθz cosθz 0; 0 0 1] (12)

After the rotation R(θz), v' becomes a vector aligned with the x axis:

v'' = R(θz) × v' (13)

After the rotation transformations, the new spatial coordinate of p2 is

p2' = p5 + v'' (14)

The total rotation transformation is

R = R(θz) × R(θx) (15)

For a skeleton key point pi, its reconstructed coordinate pi' is

pi' = p5 + vi'

where vi' = R × vi and vi = pi - p5; R is the rotation matrix from the camera coordinate system to the coordinate system with the shoulder as origin.
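Equations (7) to (15) can be sketched in Python as follows. Since the patent's equation images are not reproduced, the exact angle conventions are an assumption: R(θx) is taken to zero the z component of v, and R(θz) then aligns the result with the x axis.

```python
import numpy as np

def rot_x(t):
    # Rotation matrix of the form used in eq. (9).
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])

def rot_z(t):
    # Rotation matrix of the form used in eq. (12).
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

def shoulder_rotation(p2, p5):
    """Total rotation R = R(theta_z) x R(theta_x) (eq. 15) that aligns the
    shoulder vector v = p2 - p5 (eq. 7) with the x' axis."""
    v = np.asarray(p2, float) - np.asarray(p5, float)
    th_x = np.arctan2(v[2], v[1])          # eq. (8)
    v1 = rot_x(th_x) @ v                   # eq. (10): z component becomes 0
    th_z = np.arctan2(v1[1], v1[0])        # eq. (11)
    return rot_z(th_z) @ rot_x(th_x)

def reconstruct_point(p_i, p5, R):
    """pi' = p5 + R (pi - p5): a key point expressed in the shoulder frame."""
    p_i, p5 = np.asarray(p_i, float), np.asarray(p5, float)
    return p5 + R @ (p_i - p5)
```

Applying the same rotation R to every key point preserves the relative geometry of the body while making the shoulder line the x' axis of the new frame.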

Further, normalizing the coordinates of the wrist relative to the shoulder to obtain Np7 includes:

Compute the length dist1 of the upper arm p5'p6', the length dist2 of the forearm p6'p7', and the distance dist3 from the palm to the shoulder p5'p7':

dist1 = |p6' - p5'|, dist2 = |p7' - p6'|, dist3 = |p7' - p5'|

and then

Np7 = scale · (p7' - p5'), with scale = 1 / (dist1 + dist2)

Np7 is the normalized coordinate of the hand inside the unit sphere of the shoulder-origin coordinate system O·x'y'z' (the inner product of a coordinate on the sphere with itself is 1), and scale is the adaptive scaling factor; when the arm is fully extended, dist3 = dist1 + dist2 and Np7 lies on the sphere.
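The adaptive normalization can be sketched as follows. The function name is hypothetical; the inputs are the reconstructed shoulder, elbow and wrist points p5', p6', p7'.

```python
import numpy as np

def normalize_wrist(p5r, p6r, p7r):
    """Scale the wrist offset by the total arm length so that |Np7| = 1
    when the arm is fully extended (adaptive scaling factor)."""
    p5r, p6r, p7r = (np.asarray(p, dtype=float) for p in (p5r, p6r, p7r))
    dist1 = np.linalg.norm(p6r - p5r)   # upper arm length
    dist2 = np.linalg.norm(p7r - p6r)   # forearm length
    scale = 1.0 / (dist1 + dist2)       # adaptive scaling factor
    return (p7r - p5r) * scale          # Np7
```

Because the scale is recomputed from the operator's own arm segments in every frame, the mapping adapts automatically to operators of different sizes.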

Further, establishing a local coordinate system on the palm and obtaining the attitude represented by the Euler angles Eler(ψ, θ, γ) of the palm frame relative to the shoulder-origin frame includes:

Take the vector a from palm key point p30' to p32' as the Oh·x axis of the local coordinate system; a together with the vector b from p31' to p33' spans the Oh·xy plane; and take the vector through p31' given by the cross product of a and b as the Oh·z axis:

z = a × b = [ya·zb - za·yb, za·xb - xa·zb, xa·yb - ya·xb]^T

where xa, ya, za and xb, yb, zb are the coordinate components of a and b.

The normal vector y of the Oh·xz plane is obtained as y = z × a; the vectors a, y, z are then normalized and assembled column-wise into

Rh = [r11 r12 r13; r21 r22 r23; r31 r32 r33]

where r11, r21, r31 are the three normalized components of the x-axis vector, r12, r22, r32 those of the y-axis vector, and r13, r23, r33 those of the z-axis vector.

Rh is the rotation matrix of Oh·x'y'z' relative to O·x'y'z'; its Euler angles Eler(ψ, θ, γ) are computed by:

ψ = atan2(r32, r33), θ = atan2(-r31, √(r32² + r33²)), γ = atan2(r21, r11)

ψ is the angle of rotation of the coordinate system about the x axis, θ the angle about the y axis, and γ the angle about the z axis; atan2 is the two-argument inverse tangent.
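The palm-frame construction and the Euler-angle extraction can be sketched together in Python. The Z-Y-X angle convention shown here is an assumption consistent with the atan2 formulas above; the function name is hypothetical.

```python
import numpy as np

def palm_euler(p30, p31, p32, p33):
    """Build the palm frame from palm key points and return the Euler
    angles (psi, theta, gamma) of its rotation matrix Rh."""
    p30, p31, p32, p33 = (np.asarray(p, dtype=float)
                          for p in (p30, p31, p32, p33))
    x = p32 - p30            # local x axis (a)
    b = p33 - p31            # second vector spanning the xy plane
    z = np.cross(x, b)       # local z axis
    y = np.cross(z, x)       # local y axis (normal of the xz plane)
    # Column-wise assembly of the normalized axes into Rh.
    Rh = np.stack([v / np.linalg.norm(v) for v in (x, y, z)], axis=1)
    psi = np.arctan2(Rh[2, 1], Rh[2, 2])                         # about x
    theta = np.arctan2(-Rh[2, 0], np.hypot(Rh[2, 1], Rh[2, 2]))  # about y
    gamma = np.arctan2(Rh[1, 0], Rh[0, 0])                       # about z
    return psi, theta, gamma
```

When the palm axes coincide with the shoulder-frame axes, Rh is the identity and all three angles are zero, which is a convenient sanity check for the construction.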

Further, before the joint angles are obtained from the normalized coordinates, the robot link lengths and the palm attitude, the normalized coordinates and the palm attitude angles are also filtered.

Further, combining the normalized coordinates, the robot link lengths and the palm attitude to obtain the joint angle of each robot joint, and driving the robot accordingly, includes:

The ROS inverse kinematics solver obtains the joint angles of the robot from the palm attitude Eler(ψ, θ, γ) and the robot end-effector position, where the end position Pe is computed as:

Pe = Np7 · L

where Np7 is the normalized wrist coordinate in the shoulder-origin coordinate system and L is the total link length of the robot. Eler(ψ, θ, γ) is taken as the attitude of the robot end-effector and Pe as its position; the human-computer interaction system first computes the joint angles of the robot for the target pose with the inverse kinematics solver built into ROS, and then controls the robot's motion through a network socket connection.
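A minimal sketch of this final mapping follows. The actual inverse-kinematics call into ROS/MoveIt and the socket link are deliberately omitted, and the function name is hypothetical.

```python
def end_effector_target(np7, L, euler):
    """Map the normalized wrist coordinate to an end-effector pose:
    Pe = Np7 * L, paired with the palm Euler angles as the attitude.
    The returned (position, attitude) pair would be handed to an IK
    solver such as MoveIt."""
    pe = tuple(c * L for c in np7)   # scale into the robot workspace
    return pe, euler
```

Because |Np7| never exceeds 1, the scaled target Pe always lies within a sphere of radius L around the robot base, matching the reach of the arm.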

Compared with the prior art, the beneficial effects that the present invention can achieve are at least the following:

(1) The human-computer interaction system of the present invention uses the human arm and palm to control the position and attitude of a multi-degree-of-freedom robot simultaneously.

(2) The spatial triangle formed by the human arm yields a unique spatial position of the palm relative to the shoulder within the arm's maximum workspace; normalizing this coordinate and mapping it onto robot arms of different sizes allows the whole workspace of the robot arm to be covered while the operator stays within the sensor's effective field of view.

(3) Compared with the traditional difficulty of determining a scaling factor when tracking dynamic gestures, the method of the present invention is stable, adapts the scaling factor automatically, and is widely applicable.

(4) Mapping the spatial attitude of the palm to the attitude of the robot arm's TCP conveys the operator's intent to the robot quickly and efficiently.

(5) The present invention pre-corrects the human posture, reconstructing the coordinate system with the line between the two shoulders as reference; however the corrected body faces the sensor, as long as the operator stays within the sensor's effective field of view the relative positions of the key points in the body's local coordinate system do not change, which greatly improves the operator's comfort.

(6) Since the relative positions of the sensor, the robot and the operator need not be calibrated in advance, the efficiency of human-computer interaction is improved.

(7) For the self-occlusion of the arm, an occlusion detection algorithm performs detection and recovery, ensuring normal operation even in complex environments; the system is highly robust to interference.

Brief Description of the Drawings

FIG. 1 is a schematic diagram of the system structure of the present invention.

FIG. 2 is a schematic flowchart of a body posture interaction method for multi-degree-of-freedom robots provided by an embodiment of the present invention.

FIG. 3 is a schematic diagram of the mapping between the robot's motion space and the motion space of the human arm.

FIG. 4 is a schematic diagram of human posture pre-correction.

FIG. 5 is a schematic diagram of computing valid values of key-point pixel coordinates by sliding-window mean filtering.

FIG. 6 is a schematic diagram of occlusion detection and automatic recovery.

FIG. 7 is a schematic diagram of controlling the robot end-effector by body posture.

FIG. 8 is a schematic diagram of the human skeleton key points and knuckle key points.

Detailed Description

To make the purposes, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on these embodiments, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.

As shown in FIG. 1, the human-computer interaction system interface comprises posture recognition, virtual simulation, video monitoring and a control panel. The system is developed on the open-source Robot Operating System (ROS) and uses the ROS simulation facilities and the motion planner MoveIt to provide virtual simulation, collision detection, motion planning and inverse kinematics for the robot.

The body posture interaction method for multi-degree-of-freedom robots provided by the present invention uses the human arm and palm to control the position and attitude of the robot simultaneously, as shown in FIG. 2. In operation, a depth camera captures video of the human body, yielding for every frame both a color image and its depth map. The color image is fed to the skeleton key-point recognition algorithm to extract the key points, the depth of each key point is read from the depth map, and occlusion detection and recovery yield the spatial coordinates of the skeleton key points. The coordinate system of the obtained key points is then reconstructed with the line between the two shoulders as reference. Finally, the spatial coordinate of the wrist relative to the shoulder is normalized and mapped to the coordinate of the robot end-effector relative to its base, and the palm attitude Eler(ψ, θ, γ) is mapped to the attitude of the end-effector, realizing the human-robot interaction.

In this method, the spatial triangle formed by the human arm yields a unique position of the palm relative to the shoulder within the maximum workspace; after normalization this coordinate is mapped onto robot arms of different sizes, so that the whole workspace of the arm is covered while the operator stays within the sensor's effective field of view. As shown in FIG. 3, as the operator's wrist extends, the robot extends too, and when the hand reaches its limit the robot end-effector likewise approaches the edge of its workspace. At the same time, mapping the spatial attitude of the palm to the attitude of the end of the robot arm conveys the operator's intent to the robot quickly and efficiently. To solve the problem that the palm-to-shoulder coordinate is not fixed when the relative position of the operator and the sensor changes, the present invention adopts a posture pre-correction that makes the spatial vector between the two shoulders parallel to the x axis of the camera coordinate system O·xyz, as shown in FIG. 4; however the corrected body faces the sensor, as long as the operator remains within the sensor's effective field of view the relative positions in the body's local coordinate system do not change, greatly improving comfort. For the self-occlusion of the arm, an occlusion detection algorithm performs detection, and the affected point is recovered automatically from its historical coordinates and the coordinates of the other, unoccluded points, making the system highly robust to interference.

Specifically, the body posture interaction method for multi-degree-of-freedom robots provided by the present invention comprises the following steps:

Step S1: use the human skeleton key-point recognition algorithm to obtain the pixel coordinates of the skeleton key points, and obtain the three-dimensional spatial coordinates of each key point from those pixel coordinates; the skeleton key points include points on the body and points on the palm.

Step S1 comprises the following sub-steps:

Step S11: capture video of the human body with an image acquisition sensor, obtaining a color image and a depth image for every frame.

In one embodiment of the present invention, the image acquisition sensor is a depth camera.

Step S12: feed the color image to the human skeleton key-point recognition algorithm to extract the pixel coordinates of the key points.

In one embodiment of the present invention, the skeleton key-point recognition algorithm is OpenPose; in other embodiments, other key-point recognition algorithms may of course be used.

In one embodiment of the present invention, the key-point information produced by the algorithm consists of three elements k = [px py score], where px and py are the pixel coordinates of the key point in the video frame and score is the confidence of the key point.

Because of differences in ambient lighting, image-acquisition sensors, and the people interacting with the system, the confidence of the recognized keypoints varies, so the traditional fixed-threshold test for deciding whether a keypoint is validly recognized no longer applies. An automatic threshold-segmentation algorithm in the frequency domain can judge joint validity well, but its processing is complicated and computationally expensive. In the present invention, when the human skeleton keypoints are recognized their confidences are close to one another, whereas wrongly recognized keypoints generally differ greatly from correctly recognized ones. Therefore an adaptive threshold-segmentation method anchored on the keypoints that must be recognized is adopted: building on the traditional fixed-value threshold method, the confidence of the must-recognize keypoints is taken as the reference, and a preset band above and below it is taken as the validity interval (after data analysis, this embodiment uses a band of 20% above and below). In one embodiment of the present invention, the skeleton keypoints p5 and p2 of the left and right shoulders are used as the keypoints that must be recognized (later steps use the line connecting these two points for coordinate-system reconstruction and posture correction). The calculation proceeds as follows:

Mark whether each keypoint has been validly recognized: ValidMatrix = [[false] [false] [false] ... [false]]

The average confidence of the skeleton keypoints that must be recognized is chosen as the reference confidence. In one embodiment of the present invention, the average confidence of the left-shoulder and right-shoulder keypoints p5 and p2 is taken as the reference confidence s:

s = (score_2 + score_5) / 2

where score_i denotes the confidence of keypoint k_i and i is the keypoint's index number;

ValidMatrix[i] = true, if 0.8·s ≤ score_i ≤ 1.2·s; otherwise false

Each keypoint is judged valid or not by this test; if ValidMatrix[i] is false, the point is removed as invalid.
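A minimal sketch of this adaptive validity test, assuming the 20% band and the shoulder indices 2 and 5 from the embodiment above (the function name is illustrative):

```python
def valid_matrix(scores, must_ids=(2, 5), band=0.2):
    """Adaptive threshold segmentation: a keypoint is kept only if its
    confidence lies within +/- band of the must-recognize reference s."""
    s = sum(scores[i] for i in must_ids) / len(must_ids)  # reference confidence
    lo, hi = (1.0 - band) * s, (1.0 + band) * s
    return [lo <= sc <= hi for sc in scores]

# indices 2 and 5 are the shoulders; index 3 was mis-detected
scores = [0.80, 0.78, 0.82, 0.15, 0.79, 0.78]
valid = valid_matrix(scores)
```

The mis-detected point (confidence 0.15, far below the reference 0.80) is rejected while the others pass.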

In one embodiment of the present invention, a color image of the human body is collected by a depth camera, and 25 human skeleton keypoints and 20 knuckle keypoints of the right hand are recognized by the human-skeleton keypoint recognition algorithm, as shown in FIG. 8; the serial number and specific location of each keypoint are shown in Table 1 and Table 2. Among the 25 human skeleton keypoints, the keypoints p6, p7 on the left arm, p5 on the left shoulder, p1 on the neck, p2 on the right shoulder, and p3, p4 on the right arm are selected as the most important keypoints; they are the basis of the subsequent steps. In one embodiment of the present invention, the keypoints obtained by the skeleton recognition algorithm are all located at the joints of the skeleton and, taking the width of the limb into account, at its center; this is determined by the keypoint recognition algorithm itself.

Table 1: Serial numbers and locations of the human skeleton keypoints

[Table 1 appears as an image in the original publication; its contents are not recoverable from the text.]

Table 2: Serial numbers and locations of the palm skeleton keypoints

[Table 2 appears as an image in the original publication; its contents are not recoverable from the text.]

Step S13: moving-average filtering with a fixed-size window is applied to obtain the effective values of the keypoint pixel coordinates x, y (the effective value is the pixel coordinate after the noise of the skeleton keypoint recognition algorithm has been filtered out).

In one embodiment of the present invention, when the human body is stationary the noise of the skeleton keypoint recognition algorithm fluctuates up and down approximately periodically with a period of about 30 video frames, so a sliding window of size 30 is used for sliding-mean filtering, as shown in FIG. 5. The filtering proceeds as follows:

Step S131: configure a window of preset size for each skeleton keypoint; in this embodiment the preset size is 30.

window_i = [ k_i^0  k_i^1  ...  k_i^j  ... ]

where k_i^j denotes the j-th input value of the keypoint k_i; the px, py in k_i^j are taken to compute the pixel-point coordinates;

Step S132: configure the sliding filter window WINDOW for all skeleton keypoints:

WINDOW = [ window_0 ... window_i ... ]

where window_i is the sliding filter window of the keypoint k_i;

Step S133: if fewer than 30 image frames have been collected, then

sum_i = k_i^0 + k_i^1 + ... + k_i^j + ...

where sum_i is the sum of the elements in the i-th filter window, and i indicates that this is the filter window configured for the i-th skeleton keypoint. If more than 30 image frames have been collected, the newly input datum k_i^j is added to the previous sum sum_i and the earliest input k_i^(j−30) is subtracted, i.e. the update is

sum_i = sum_i + k_i^j − k_i^(j−30)

In the sliding-mean filtering process, adding the incoming datum to the current window sum and subtracting the earliest element inserted into the window effectively avoids re-summing the window repeatedly; this is implemented with a fixed-size circular queue.

Step S134: compute the average of all the numbers in the sliding window of the keypoint p_i:

mean_i = sum_i / 30

According to analysis of the actual experimental data and the properties of periodic functions, noise that fluctuates periodically up and down sums to zero over one complete period; since sum_i accumulates 30 frames, averaging it yields the single-frame result. This step reduces the noise of the human-skeleton keypoint recognition algorithm itself.
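Steps S131-S134 can be sketched as follows, with the window size of 30 from the embodiment shrunk to 3 in the usage example (the class and method names are illustrative):

```python
from collections import deque

class SlidingMeanFilter:
    """Fixed-size moving-average filter backed by a circular queue:
    the running sum is updated incrementally (add newest, drop oldest)
    instead of being re-summed on every frame."""
    def __init__(self, size=30):
        self.window = deque(maxlen=size)
        self.total = 0.0

    def update(self, value):
        if len(self.window) == self.window.maxlen:
            self.total -= self.window[0]   # subtract the earliest input
        self.window.append(value)          # deque discards the oldest itself
        self.total += value                # add the new input
        return self.total / len(self.window)

f = SlidingMeanFilter(size=3)
outputs = [f.update(v) for v in (1.0, 2.0, 3.0, 4.0)]
```

Until the window fills, the mean is taken over the frames seen so far; afterwards it is always the mean of the last three inputs.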

Step S14: convert the human skeleton keypoint coordinates from two-dimensional pixel coordinates to three-dimensional space coordinates.

In one embodiment of the present invention, the depth map and the RGB video frame collected by the image-acquisition sensor at the same moment are aligned at the pixel level, giving for each pixel the three-dimensional coordinate p(x, y, z) with the camera as the origin of the space coordinate system. The two-dimensional pixel coordinates are converted into three-dimensional coordinates through the depth map of the depth camera: p_i = remap[px_i py_i], where remap is an API provided by the depth camera's SDK whose function is to convert pixel coordinates into space coordinates with the camera as origin.

The space coordinates of each skeleton keypoint are: p = [x y z]
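The remap call is SDK-specific, but under a standard pinhole camera model the back-projection it performs can be sketched as follows (fx, fy, cx, cy are camera intrinsics; the function name and sample values are illustrative, not part of any vendor API):

```python
def remap_pinhole(px, py, depth, fx, fy, cx, cy):
    """Back-project a pixel (px, py) with depth z (e.g. in mm) to
    camera-frame coordinates under the standard pinhole model."""
    z = depth
    x = (px - cx) * z / fx   # horizontal offset from the principal point
    y = (py - cy) * z / fy   # vertical offset from the principal point
    return (x, y, z)

# principal point (320, 240), focal lengths 500 px, depth 1000 mm
p = remap_pinhole(420, 240, 1000.0, 500.0, 500.0, 320.0, 240.0)
```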

Step S2: detect the abnormal error in which the shoulder is occluded by the arm during interaction, and automatically recover it or mark it as an invalid point, as shown in FIG. 6.

In this step, the occlusion detection algorithm is first used to check whether the left-shoulder keypoint p5 is occluded; if it is, the depth information of the right-shoulder keypoint p2 is used to recover it. If the right-shoulder keypoint p2 is also occluded, the historical coordinate of the left-shoulder keypoint from the preceding frames is taken as the coordinate value at the current moment. If neither of these two methods can recover the depth information of the left-shoulder keypoint p5, then p5 is marked as an invalid point and the data collected in this image frame is discarded. The occlusion detection algorithm proceeds as follows:

Compute the direction vector of the left forearm p6p7:

a = p7 − p6

the vector from p7 to p5:

b = p5 − p7

and the vector from p6 to p5:

c = p5 − p6

Compute the squared projection of b on a (the line connecting the two keypoints):

proj^2 = (xa·xb + ya·yb + za·zb)^2 / (xa^2 + ya^2 + za^2)

Detect whether p5 is occluded by computing the spatial distance between p5 and the straight line p6p7:

d = sqrt(xb^2 + yb^2 + zb^2 − proj^2)

where xa, ya, za and xb, yb, zb denote the x, y, z components of the vectors a and b respectively. (Occlusion of the right shoulder p2 by the right arm is detected symmetrically, using the projection of p2p4 on p2p3 and the spatial distance between p2 and the straight line p3p4.)

The spatial distance between p5 and the straight line p6p7 alone is not sufficient to judge whether occlusion has really occurred; the constraint that p5 lies between the two planes that are perpendicular to a and pass through p6 and p7 respectively must be added.

Substituting p5 into the equation of the space plane with normal vector a passing through p6:

s1 = xn(x5 − x6) + yn(y5 − y6) + zn(z5 − z6)

where xn, yn, zn denote the x, y, z components of the normal vector a;

Substituting p5 into the equation of the space plane with normal vector a passing through p7:

s2 = xn(x5 − x7) + yn(y5 − y7) + zn(z5 − z7)

Take the signs of s1 and s2: if their product is negative, the left-shoulder keypoint p5 lies between the two planes. Let

s = s1·s2

When

s < 0 and d < threshhold

holds, the left-shoulder keypoint p5 is occluded. threshhold is adjustable and depends on the sensor noise level; it bounds the spatial distance between the left shoulder and the straight line on which the left forearm lies. In addition, because of the mirror relationship between the image and the real world, the image is left-right reversed with respect to the real world. In one embodiment of the present invention threshhold is 50 mm; of course, in other embodiments other values may be used as appropriate.
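A minimal numeric sketch of this occlusion test for the left-arm keypoints (the function name and sample coordinates are illustrative; units in mm, threshold 50 mm as in the embodiment):

```python
import numpy as np

def shoulder_occluded(p5, p6, p7, threshold=50.0):
    """The shoulder p5 is considered occluded when it is close to the
    forearm line p6-p7 AND lies between the two planes perpendicular to
    the forearm that pass through p6 and p7."""
    a = p7 - p6                                   # forearm direction
    b = p5 - p7                                   # p7 -> p5
    proj_sq = np.dot(a, b) ** 2 / np.dot(a, a)    # squared projection of b on a
    d = np.sqrt(np.dot(b, b) - proj_sq)           # distance of p5 to the line
    s1 = np.dot(a, p5 - p6)                       # signed side of plane through p6
    s2 = np.dot(a, p5 - p7)                       # signed side of plane through p7
    return bool(s1 * s2 < 0 and d < threshold)

p6 = np.array([0.0, 0.0, 0.0])      # elbow
p7 = np.array([100.0, 0.0, 0.0])    # wrist
occluded = shoulder_occluded(np.array([50.0, 10.0, 0.0]), p6, p7)   # near the line
clear = shoulder_occluded(np.array([50.0, 200.0, 0.0]), p6, p7)     # far from it
```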

S3: correct the spatial posture of the human body and reconstruct the space coordinates of the keypoints.

The three-dimensional coordinate system of the skeleton keypoints obtained so far takes the sensor as its origin, so when the person faces the sensor in different orientations the body posture differs in the camera coordinate system, as in the pre-correction example of FIG. 4. In one embodiment of the present invention a pre-correction method is used: the line connecting the left shoulder p5(x, y, z) to the right shoulder p2(x, y, z) is brought, through a rotation transform R, parallel to the x coordinate axis o-x of the camera coordinate system; that is, the body posture is corrected so that the space vector between the two shoulders is parallel to the x axis of the camera coordinate system O·xyz.

When the human arm changes posture, the relative positions of the left and right shoulders p2, p5 do not change, so the coordinate system must be built on a stable reference. The three-dimensional coordinates pointing from the left shoulder to every other skeleton keypoint are given the same rotation transform, finally establishing the reconstructed coordinate system O·x'y'z' with the left shoulder as origin, the vector from p5 to p2 as the x' axis, the direction perpendicular to that vector in the plane parallel to the o·xz plane of the sensor coordinate system and pointing toward the sensor as the y' axis, and the direction opposite to the sensor y axis as the z' axis, as shown in FIG. 7. The process is as follows:

The space vector v pointing from p5 to p2:

v = p2 − p5 = [x y z]^T (7)

The rotation angle θx about the x axis that brings v into the xoy plane:

θx = arctan(z / y) (8)

The rotation matrix corresponding to θx:

R(θx) = [ 1, 0, 0; 0, cosθx, sinθx; 0, −sinθx, cosθx ] (9)

The space vector obtained from v through the rotation R(θx), parallel to the xoy plane:

v' = R(θx) × v (10)

The rotation angle θz about the z axis that aligns v' = [x' y' 0]^T with the x axis:

θz = arctan(y' / x') (11)

The rotation matrix corresponding to θz:

R(θz) = [ cosθz, sinθz, 0; −sinθz, cosθz, 0; 0, 0, 1 ] (12)

The space vector obtained from v' through the rotation R(θz), parallel to the x axis:

v'' = R(θz) × v' (13)

After the rotation transform, the new spatial position of p2:

p2' = p5 + v'' (14)

The total rotation transform:

R = R(θz) × R(θx) (15)

R in (15) is the rotation matrix from the camera coordinate system to the coordinate system with the shoulder as origin.

For a skeleton keypoint pi, its reconstructed coordinate pi':

pi' = p5 + vi'

where vi' = R × vi

vi = pi − p5

vi is the vector from the left shoulder p5 to the skeleton keypoint pi, and vi' is that vector after the rotation transform.
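A minimal sketch of this posture pre-correction, assuming the rotation order R = R(θz)×R(θx) of eq. (15): first rotate about x to null the z component of the shoulder vector, then about z to null its y component (names are illustrative):

```python
import numpy as np

def shoulder_rotation(p2, p5):
    """Build the rotation R that maps the shoulder vector v = p2 - p5
    parallel to the camera x axis."""
    v = p2 - p5
    tx = np.arctan2(v[2], v[1])                        # theta_x
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(tx), np.sin(tx)],
                   [0, -np.sin(tx), np.cos(tx)]])
    v1 = Rx @ v                                        # now in the xoy plane
    tz = np.arctan2(v1[1], v1[0])                      # theta_z
    Rz = np.array([[np.cos(tz), np.sin(tz), 0],
                   [-np.sin(tz), np.cos(tz), 0],
                   [0, 0, 1]])
    return Rz @ Rx

p5 = np.array([0.0, 0.0, 0.0])
p2 = np.array([3.0, 4.0, 0.0])
R = shoulder_rotation(p2, p5)
v2 = R @ (p2 - p5)        # corrected shoulder vector
```

After the correction the shoulder vector lies on the x axis with its length (here 5) preserved.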

Step S4: normalize the coordinates of the palm on the arm relative to the shoulder to obtain the normalized coordinate Np7; at the same time, establish a local space coordinate system on the palm and solve the posture Euler(ψ, θ, γ), expressed in Euler angles, of this local coordinate system in the coordinate system established in S3.

In one embodiment of the present invention, step S4 includes the following steps:

Step S41: obtain the length dist1 of the upper arm p5'p6', the length dist2 of the forearm p6'p7', and the distance dist3 from the palm to the shoulder p5'p7', computed as:

dist1 = sqrt((x6'−x5')^2 + (y6'−y5')^2 + (z6'−z5')^2)
dist2 = sqrt((x7'−x6')^2 + (y7'−y6')^2 + (z7'−z6')^2) (16)
dist3 = sqrt((x7'−x5')^2 + (y7'−y5')^2 + (z7'−z5')^2)

where x6', y6', z6' are the coordinate components of p6' in the reconstructed space coordinate system O·x'y'z', and x5', y5', z5' are the coordinate components of p5' in that system;

scale = dist1 + dist2,  Np7 = p7' / scale (17)

Np7 is the normalized coordinate of the hand in the space unit sphere of the coordinate system O·x'y'z' with the left shoulder as origin (the inner product of a coordinate on the sphere with itself is 1), and scale is the adaptive scaling factor through which the coordinate can conveniently be converted into other coordinate systems.

Step S42: to solve the posture of the palm in the O·x'y'z' coordinate system, a local space coordinate system Oh·x'y'z' is established on the palm. The vector c pointing from the palm keypoint p30' to p32' is taken as the O·x axis of the local coordinate system; c together with the vector d pointing from p31' to p33' spans the O·xy plane of the local coordinate system; and the vector through p31' perpendicular to both c and d is taken as the O·z axis, so that:

e = c × d = [ yc·zd − zc·yd,  zc·xd − xc·zd,  xc·yd − yc·xd ]^T (18)

where xc, yc, zc are the three coordinate components of the vector c and xd, yd, zd are the three coordinate components of the vector d;

The normal vector f of the O·xz plane is solved:

f = e × c (19)

and the vectors c, f, e are normalized to form

Rh = [ r11, r12, r13; r21, r22, r23; r31, r32, r33 ]

where r11, r21, r31 are the three coordinate components of c after normalization, r12, r22, r32 are the three coordinate components of f after normalization, and r13, r23, r33 are the three coordinate components of e after normalization;

Rh is the rotation matrix of Oh·x'y'z' in O·x'y'z'; the attitude angle Euler(ψ, θ, γ), i.e. the spatial posture of the palm, is computed by:

ψ = atan2(r32, r33)
θ = atan2(−r31, sqrt(r32^2 + r33^2)) (20)
γ = atan2(r21, r11)

where ψ denotes the angle by which the coordinate system is rotated about the x axis, θ the angle about the y axis, and γ the angle about the z axis; atan2 is the inverse trigonometric function that yields the angle from a tangent ratio.
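A sketch of building the palm frame and extracting its Euler angles, assuming the standard Z-Y-X (roll-pitch-yaw) extraction from the rotation matrix (function name and sample keypoints are illustrative):

```python
import numpy as np

def palm_euler(p30, p31, p32, p33):
    """Build the palm rotation matrix from keypoint vectors and extract
    the rotations psi, theta, gamma about the x, y, z axes."""
    c = p32 - p30                  # local x axis
    d = p33 - p31                  # second in-plane vector
    e = np.cross(c, d)             # local z axis
    f = np.cross(e, c)             # local y axis (normal of the xz plane)
    R = np.column_stack([v / np.linalg.norm(v) for v in (c, f, e)])
    psi = np.arctan2(R[2, 1], R[2, 2])
    theta = np.arctan2(-R[2, 0], np.hypot(R[2, 1], R[2, 2]))
    gamma = np.arctan2(R[1, 0], R[0, 0])
    return psi, theta, gamma

# palm axes aligned with the reference frame -> all angles zero
angles = palm_euler(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 0.0]),
                    np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))
```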

Step S5: filter the trajectory points Np7 obtained in step S4 to reduce the adverse effect of noise on the wrist space coordinates and the influence of accumulated sensor error; filter the Euler(ψ, θ, γ) obtained in S4 to reduce jitter in the posture of the local hand coordinate system.

Step S6: multiply the filtered Np7 from step S5 by the sum L of the robot link lengths, and combine it with the palm posture into a space pose ps(x, y, z, ψ, θ, γ).

The above ps(x, y, z, ψ, θ, γ) is input into the human-robot interaction system, which first computes the robot joint angles for the target position and posture with the inverse-kinematics solver built into ROS, and then controls the robot's motion through a network socket connection.

The total robot link length is computed from the robot's dynamics data:

L = Σ_{idx=1}^{dof} l_idx (21)

where l_idx is the length of the idx-th link of the robot, dof is the robot's number of degrees of freedom, and idx is the serial number of the robot link.

The robot end position:

Pe = Np7 · L (22)

where Np7 is the normalized coordinate of the wrist in the coordinate system with the shoulder as origin, representing a coordinate in the space unit sphere.

The body-posture interaction mode of the present invention has great advantages: every human limb can express rich semantics, rich spatial relationships are embodied between the limbs, and the motion of the human arm is very similar in character to the motion of a robot arm. The human-robot interaction system of the present invention uses the human arm and palm to control the position and posture of a multi-degree-of-freedom robot simultaneously. The spatial triangle formed by the arm yields a unique spatial position coordinate of the palm relative to the shoulder within the maximum workspace; after normalization, this coordinate is mapped onto robot arms of different sizes, so that during interaction the whole workspace of the robot arm can be covered without the person leaving the effective field of view of the sensor. Compared with the difficulty of determining a scaling factor when tracking dynamic gestures in the traditional way, the method of the present invention is highly stable, adjusts the scaling factor adaptively, and is widely applicable. Mapping the posture of the palm in space to the posture of the robot-arm TCP allows human intent to be conveyed to the robot quickly and efficiently.
The present invention adopts a human-posture pre-correction method that reconstructs the coordinate system of the body posture with the line between the two shoulders as reference; however the corrected posture faces the sensor, as long as the person does not leave the sensor's effective field of view, the relative positions of the keypoints in the body's own local coordinate system do not change, which greatly improves human comfort. Since the relative positions of sensor, robot, and person need not be calibrated in advance, the efficiency of human-robot interaction is improved. For the self-occlusion of the arm, an occlusion detection algorithm performs detection and recovery, ensuring normal operation even in complex environments and strong resistance to interference.

The above description of the disclosed embodiments enables those skilled in the art to implement or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A body-posture interaction method for a multi-degree-of-freedom robot, characterized by comprising the following steps:
using a human-skeleton keypoint recognition algorithm to obtain the pixel coordinates of the human skeleton keypoints, and obtaining the three-dimensional space coordinates of each keypoint from those pixel coordinates;
detecting whether the abnormal error of the shoulder being occluded by the arm occurs during interaction and, if it does, recovering it or marking the shoulder keypoint as an invalid point;
correcting the spatial posture of the human body and reconstructing the space coordinates of the keypoints, the coordinate reconstruction being: establishing a space rectangular coordinate system O·x'y'z' with the left-shoulder keypoint as coordinate origin, and reconstructing the coordinates of every other skeleton keypoint pi with this coordinate system as reference to obtain pi';
normalizing the coordinates of the wrist on the arm relative to the shoulder to obtain the normalized coordinate Np7, and establishing a local space coordinate system on the palm to obtain the attitude angle Euler(ψ, θ, γ) of the palm's local coordinate system relative to the coordinate system with the shoulder as origin, the attitude angle Euler(ψ, θ, γ) representing the spatial posture of the palm;
combining the normalized coordinate, the robot link lengths, and the spatial posture of the palm to obtain the joint angle of each robot joint so as to drive the robot's motion.
2. The body-posture interaction method for a multi-degree-of-freedom robot according to claim 1, characterized in that the human-skeleton keypoint recognition algorithm is OpenPose.
3. The body-posture interaction method for a multi-degree-of-freedom robot according to claim 1, characterized in that obtaining the three-dimensional space coordinates of each keypoint from the pixel coordinates of the human skeleton keypoints comprises:
filtering with a window of preset size to obtain the effective values of the keypoint pixel coordinates;
aligning the depth map and the RGB video frame collected at the same moment at the pixel level, obtaining for each pixel the corresponding three-dimensional coordinate with the camera as origin of the space coordinate system.
4. The body-posture interaction method for a multi-degree-of-freedom robot according to claim 1, characterized in that detecting whether the abnormal error of the left shoulder being occluded by the left arm occurs during interaction and, if it does, recovering it or marking the shoulder keypoint as an invalid point, comprises:
computing the direction vector of the left forearm p6p7:
a = p7 − p6

the vector from p7 to p5:

b = p5 − p7

the vector from p6 to p5:

c = p5 − p6

computing the squared projection of b on a:

proj^2 = (xa·xb + ya·yb + za·zb)^2 / (xa^2 + ya^2 + za^2)

detecting whether p5 is occluded by computing the spatial distance between p5 and the straight line p6p7:

d = sqrt(xb^2 + yb^2 + zb^2 − proj^2)

where xa, ya, za and xb, yb, zb denote the x, y, z components of the vectors a and b respectively;
adding the constraint that p5 lies between the two planes that are perpendicular to a and pass through p6 and p7 respectively:
substituting p5 into the equation of the space plane with normal vector a passing through p6:

s1 = xn(x5 − x6) + yn(y5 − y6) + zn(z5 − z6)

where xn, yn, zn denote the x, y, z components of the normal vector a;
substituting p5 into the equation of the space plane with normal vector a passing through p7:

s2 = xn(x5 − x7) + yn(y5 − y7) + zn(z5 − z7)

taking the signs of s1 and s2: if their product is negative, the left-shoulder keypoint p5 lies between the two planes; letting

s = s1·s2

when

s < 0 and d < threshhold

holds, the left-shoulder keypoint p5 is occluded, where threshhold denotes the bound on the spatial distance between the left shoulder and the straight line on which the left forearm lies.
5. The body-posture interaction method for a multi-degree-of-freedom robot according to claim 1, wherein, in correcting the human spatial posture, the posture is corrected so that the space vector between the two shoulders is parallel to the x axis of the camera coordinate system O·xyz.
6. The body-posture interaction method for a multi-degree-of-freedom robot according to claim 1, wherein establishing a spatial Cartesian coordinate system O·x'y'z' with the left-shoulder key point as the coordinate origin, and reconstructing the coordinates of every other skeleton key point pi in that reference frame to obtain pi', comprises:
taking the left shoulder as the origin,
Figure FDA0003060761830000031
as the x' axis, the direction perpendicular to
Figure FDA0003060761830000032
in the plane parallel to the o·xz plane of the sensor coordinate system and pointing toward the sensor as the y' axis, and the direction opposite to the sensor y axis as the z' axis, which gives the reconstructed coordinate system O·x'y'z';
the space vector v from p2 to p5:
v = p2 - p5 = [x y z]^T    (7)
the angle between v and the yoz plane:
Figure FDA0003060761830000033
the rotation matrix corresponding to θx:
Figure FDA0003060761830000034
the space vector parallel to the yoz plane obtained by rotating v through R(θx):
v' = R(θx) × v    (10)
the angle between the space vector v' and the xoy plane:
Figure FDA0003060761830000035
the rotation matrix corresponding to θz:
Figure FDA0003060761830000036
the space vector obtained by rotating v' through R(θz):
v'' = R(θz) × v'    (13)
after the rotation transformations, the new space coordinate of p2:
p2' = p5 + v''    (14)
the total rotation transformation:
R = R(θz) × R(θx)    (15)
and for any skeleton key point pi, its reconstructed coordinate pi' is
pi' = p5 + vi', where vi' = R × vi and vi = pi - p5,
in which R is the rotation matrix from the camera coordinate system to the coordinate system with the shoulder as origin.
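Equations (7)-(15) amount to applying one combined rotation R = R(θz)·R(θx) to each key point's offset from the left shoulder. A minimal sketch, assuming the standard right-handed rotation matrices about the x and z axes (the patent's matrix images are not legible, so these forms are assumptions):

```python
import numpy as np

def rot_x(a):
    """Standard rotation matrix about the x axis (assumed form of R(theta_x))."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_z(a):
    """Standard rotation matrix about the z axis (assumed form of R(theta_z))."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rebuild_keypoints(p5, keypoints, theta_x, theta_z):
    """p_i' = p5 + R @ (p_i - p5) with R = R(theta_z) @ R(theta_x),
    following equations (14)-(15) of the claim."""
    R = rot_z(theta_z) @ rot_x(theta_x)
    return [p5 + R @ (p - p5) for p in keypoints]
```

Because R is orthonormal, reconstruction preserves each key point's distance to the shoulder origin, which is what makes the later arm-length normalization well defined.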
7. The body-posture interaction method for a multi-degree-of-freedom robot according to claim 1, wherein normalizing the coordinates of the wrist relative to the shoulder to obtain the coordinate Np7 comprises:
computing the upper-arm length dist1 = |p5'p6'|, the forearm length dist2 = |p6'p7'|, and the palm-to-shoulder distance dist3 = |p5'p7'| as follows:
Figure FDA0003060761830000041
Figure FDA0003060761830000042
where Np7 is the coordinate of the normalized hand within the unit sphere of the coordinate system O·x'y'z' with the left shoulder as origin, x6', y6', z6' are the coordinate components of p6' in the reconstructed coordinate system O·x'y'z', x5', y5', z5' are the coordinate components of p5' in the reconstructed coordinate system O·x'y'z', and scale is the adaptive scaling factor.
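The normalization can be sketched as below. The published formula images do not show how the adaptive scaling factor is defined, so this sketch assumes scale = dist1 + dist2 (the maximum arm reach), which maps every reachable wrist position into the unit sphere around the shoulder:

```python
import numpy as np

def normalize_wrist(p5r, p6r, p7r):
    """Sketch of claim 7 in the reconstructed frame O.x'y'z'.
    p5r, p6r, p7r: reconstructed shoulder, elbow, wrist coordinates.
    Returns (N_p7, (dist1, dist2, dist3)); the scale choice is an
    assumption, not the patent's published formula."""
    dist1 = np.linalg.norm(p6r - p5r)   # upper-arm length
    dist2 = np.linalg.norm(p7r - p6r)   # forearm length
    dist3 = np.linalg.norm(p7r - p5r)   # shoulder-to-wrist distance
    scale = dist1 + dist2               # assumed adaptive scaling factor
    return (p7r - p5r) / scale, (dist1, dist2, dist3)
```

With this choice, a fully extended arm (dist3 = dist1 + dist2) lands exactly on the unit sphere, and any bent pose lands strictly inside it.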
8. The body-posture interaction method for a multi-degree-of-freedom robot according to claim 1, wherein establishing a local spatial coordinate system on the palm and obtaining the attitude of the palm local coordinate system relative to the shoulder-origin coordinate system, expressed by the attitude angles Euler(ψ, θ, γ), comprises:
taking the vector
Figure FDA0003060761830000043
from palm key point p30' to p32' as the O·x axis of the local coordinate system;
Figure FDA0003060761830000044
together with the vector
Figure FDA0003060761830000045
from p31' to p33' spans the O·xy plane of the local coordinate system; the vector
Figure FDA0003060761830000046
through p31', with
Figure FDA0003060761830000047
and
Figure FDA0003060761830000048
via
Figure FDA0003060761830000049
serves as the O·z axis, so that:
Figure FDA00030607618300000410
where x, y, z are the coordinate components of each vector;
finding the normal vector
Figure FDA00030607618300000411
of the O·xz plane, with
Figure FDA00030607618300000412
and normalizing the vector
Figure FDA00030607618300000413
:
Figure FDA0003060761830000051
Figure FDA0003060761830000052
where r11, r21, r31 are the three normalized coordinate components of the vector
Figure FDA0003060761830000053
, r12, r22, r32 are the three normalized coordinate components of the vector
Figure FDA0003060761830000054
, and r13, r23, r33 are the three normalized coordinate components of the vector
Figure FDA0003060761830000055
;
Rh is the rotation matrix of Oh·x'y'z' in O·x'y'z', and its attitude angles Euler(ψ, θ, γ) are computed by:
Figure FDA0003060761830000056
where ψ is the rotation angle of the coordinate system about the x axis, θ the rotation angle about the y axis, and γ the rotation angle about the z axis; atan2 is the two-argument inverse tangent function used to obtain each angle.
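The palm-frame construction and angle extraction can be sketched as follows. The cross-product construction of the axes and the Z-Y-X Euler convention are assumptions: the claim's formula images are not legible, and other conventions would yield different (ψ, θ, γ) values.

```python
import numpy as np

def palm_rotation(p30, p31, p32, p33):
    """Sketch of claim 8: build an orthonormal palm frame R_h.
    x axis: p30 -> p32; z axis: normal of the plane spanned by the x axis
    and the vector p31 -> p33; y axis completes the right-handed frame.
    Columns of the result are the (r11..r31), (r12..r32), (r13..r33) sets."""
    x = p32 - p30
    v = p33 - p31
    z = np.cross(x, v)          # normal to the palm O.xy plane
    y = np.cross(z, x)          # completes the frame
    cols = [a / np.linalg.norm(a) for a in (x, y, z)]
    return np.column_stack(cols)

def euler_zyx(R):
    """Attitude angles via atan2, assuming the common Z-Y-X convention."""
    psi = np.arctan2(R[2, 1], R[2, 2])                         # about x
    theta = np.arctan2(-R[2, 0], np.hypot(R[2, 1], R[2, 2]))   # about y
    gamma = np.arctan2(R[1, 0], R[0, 0])                       # about z
    return psi, theta, gamma
```

Using atan2 (rather than plain arctan) keeps each angle in the correct quadrant, which is why the claim names it explicitly.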
9. The body-posture interaction method for a multi-degree-of-freedom robot according to claim 1, wherein, before combining the normalized coordinates, the robot link lengths and the palm attitude to obtain the joint angle of each robot joint, the method further comprises filtering the normalized coordinates and the palm attitude angles.
10. The body-posture interaction method for a multi-degree-of-freedom robot according to any one of claims 1-9, wherein combining the normalized coordinates, the robot link length and the palm attitude to obtain the joint angle of each robot joint so as to drive the robot, comprises:
a ROS inverse-kinematics solver obtains the joint angle of each robot joint from the palm attitude angles Euler(ψ, θ, γ) and the robot end-effector position, where the end-effector position Pe is computed as
Pe = Np7 · L
in which Np7 is the normalized wrist coordinate in the coordinate system with the shoulder as origin, and L is the total length of the robot links.
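Claims 9 and 10 together can be sketched as a filter step followed by the Pe = Np7·L mapping fed to the IK solver. The claim does not specify the filter, so the exponential smoothing below is an assumed choice, and the IK call itself is left to ROS:

```python
import numpy as np

def smooth(prev, new, alpha=0.5):
    """Assumed filter for the normalized coordinates and attitude angles
    (claim 9 requires filtering but does not name the filter type)."""
    return alpha * np.asarray(new) + (1 - alpha) * np.asarray(prev)

def end_effector_position(n_p7, link_total):
    """P_e = N_p7 * L: scale the unit-sphere wrist coordinate by the total
    robot link length to get the target position for the ROS IK solver."""
    return np.asarray(n_p7) * link_total
```

Because Np7 lies inside the unit sphere, Pe always lies inside the robot's reachable sphere of radius L, so the IK request is never geometrically infeasible on position alone.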
CN202110512320.5A 2021-05-11 2021-05-11 Body potential interaction method for multi-degree-of-freedom robot Active CN113386128B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110512320.5A CN113386128B (en) 2021-05-11 2021-05-11 Body potential interaction method for multi-degree-of-freedom robot


Publications (2)

Publication Number Publication Date
CN113386128A CN113386128A (en) 2021-09-14
CN113386128B true CN113386128B (en) 2022-06-10

Family

ID=77616921

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110512320.5A Active CN113386128B (en) 2021-05-11 2021-05-11 Body potential interaction method for multi-degree-of-freedom robot

Country Status (1)

Country Link
CN (1) CN113386128B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114327047B (en) * 2021-12-01 2024-04-30 北京小米移动软件有限公司 Device control method, device control apparatus, and storage medium
CN114187343A (en) * 2021-12-16 2022-03-15 杭州萤石软件有限公司 3D data acquisition method, device and electronic device
CN114550284A (en) * 2022-01-13 2022-05-27 北京信息科技大学 Human body action standardized descriptor based on human body posture estimation
CN115331153B (en) * 2022-10-12 2022-12-23 山东省第二人民医院(山东省耳鼻喉医院、山东省耳鼻喉研究所) Posture monitoring method for assisting vestibule rehabilitation training
CN118288297B (en) * 2024-06-06 2024-08-16 北京人形机器人创新中心有限公司 Robot motion control method, system, electronic equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106313049A (en) * 2016-10-08 2017-01-11 华中科技大学 Somatosensory control system and control method for apery mechanical arm
CN107160364A (en) * 2017-06-07 2017-09-15 华南理工大学 A kind of industrial robot teaching system and method based on machine vision
CN107363813A (en) * 2017-08-17 2017-11-21 北京航空航天大学 A kind of desktop industrial robot teaching system and method based on wearable device
CN107953331A (en) * 2017-10-17 2018-04-24 华南理工大学 A kind of human body attitude mapping method applied to anthropomorphic robot action imitation
CN110480634A (en) * 2019-08-08 2019-11-22 北京科技大学 A kind of arm guided-moving control method for manipulator motion control
CN111738092A (en) * 2020-05-28 2020-10-02 华南理工大学 A Deep Learning-Based Method for Restoring Occluded Human Pose Sequences
CN112149455A (en) * 2019-06-26 2020-12-29 北京京东尚科信息技术有限公司 Method and device for detecting human body posture
JP2021068438A (en) * 2019-10-21 2021-04-30 ダッソー システムズDassault Systemes Computer-implemented method for making skeleton of modeled body take posture


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Li Rui. Action recognition and hand pose estimation in images and depth maps. China Doctoral Dissertations Full-text Database, Information Science and Technology. 2019. *
Wang Zhihong. Research on a manipulator control system based on visual gesture recognition. China Masters' Theses Full-text Database, Information Science and Technology. 2017. *

Also Published As

Publication number Publication date
CN113386128A (en) 2021-09-14

Similar Documents

Publication Publication Date Title
CN113386128B (en) Body potential interaction method for multi-degree-of-freedom robot
CN106909216B (en) Kinect sensor-based humanoid manipulator control method
US11331806B2 (en) Robot control method and apparatus and robot using the same
CN110480634B (en) An arm-guided motion control method for robotic arm motion control
Lee et al. Model-based analysis of hand posture
WO2019218457A1 (en) Virtual reality driving method based on arm motion capture, and virtual reality system
CN106346485A (en) Non-contact control method of bionic manipulator based on learning of hand motion gestures
CN102350700A (en) Method for controlling robot based on visual sense
JP4765075B2 (en) Object position and orientation recognition system using stereo image and program for executing object position and orientation recognition method
CN110471526A (en) A kind of human body attitude estimates the unmanned aerial vehicle (UAV) control method in conjunction with gesture identification
Aristidou et al. Motion capture with constrained inverse kinematics for real-time hand tracking
CN113505694A (en) Human-computer interaction method and device based on sight tracking and computer equipment
Knoop et al. Modeling joint constraints for an articulated 3D human body model with artificial correspondences in ICP
CN117333635A (en) Interactive two-hand three-dimensional reconstruction method and system based on single RGB image
CN115240224A (en) Gesture feature extraction method based on three-dimensional hand key point and image feature fusion
WO2022074886A1 (en) Posture detection device, posture detection method, and sleeping posture determination method
Luck et al. Development and analysis of a real-time human motion tracking system
CN107363831B (en) Teleoperation robot control system and method based on vision
Stroppa et al. Real-time 3D tracker in robot-based neurorehabilitation
Fujiki et al. Real-time 3D hand shape estimation based on inverse kinematics and physical constraints
Liang et al. Hand pose estimation by combining fingertip tracking and articulated ICP
Infantino et al. Visual control of a robotic hand
Sigalas et al. Robust model-based 3d torso pose estimation in rgb-d sequences
Ehlers et al. Self-scaling Kinematic Hand Skeleton for Real-time 3D Hand-finger Pose Estimation.
Xu et al. Design of a human-robot interaction system for robot teleoperation based on digital twinning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant