WO2022227664A1 - Robot posture control method, robot, storage medium and computer program - Google Patents

Robot posture control method, robot, storage medium and computer program

Info

Publication number
WO2022227664A1
Authority
WO
WIPO (PCT)
Prior art keywords
robot
joint
posture
target
rotation angle
Prior art date
Application number
PCT/CN2021/142242
Other languages
French (fr)
Chinese (zh)
Inventor
彭飞
Original Assignee
达闼机器人股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 达闼机器人股份有限公司
Publication of WO2022227664A1 publication Critical patent/WO2022227664A1/en

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1602Programme controls characterised by the control system, structure, architecture

Definitions

  • Embodiments of the present invention relate to the field of robots, and in particular, to a method for controlling robot posture, a robot, a storage medium, and a computer program.
  • Robot actions, such as shaking hands, raising a hand or shaking the head, are generated from action sequences.
  • At present, a robot's action sequence can be generated by manually debugging the robot's motion according to the target positions of that motion; alternatively, human posture perception can be realized through motion capture equipment. For example,
  • sensor data or 2D video processing technology can be used to detect and track the human skeleton to perceive the human posture, and the robot's action sequence is then set according to the perceived posture.
  • Manual debugging requires designing the motion of every joint one by one to generate the robot's action sequence and form its posture. Because the joints must be debugged individually, and the motion of each joint affects the motion of the other joints, debugging takes a long time and the process is complicated. Capturing human motion with motion capture equipment, meanwhile, requires additional hardware, which makes the generation of the robot's action sequences inflexible and costly.
  • The purpose of the embodiments of the present invention is to provide a robot posture control method, a robot, a storage medium and a computer program, which can quickly generate a target posture matching a target object, enrich the robot's actions, and reduce the cost of teaching the robot new action postures.
  • the embodiments of the present disclosure provide a method for controlling the posture of a robot, including: obtaining a three-dimensional skeleton model corresponding to the target posture according to the target posture of the target object in the image data;
  • the three-dimensional skeleton model is mapped into the joint space of the robot, and the posture feature of the robot and the skeleton posture feature of the three-dimensional skeleton model are obtained; the posture feature of the robot is adjusted to a target position matching the skeleton posture feature,
  • and the rotation angle information of each joint of the robot is obtained; the motion of the corresponding joint of the robot is controlled according to the rotation angle information of each joint to form the target posture.
  • obtaining the three-dimensional skeleton model corresponding to the target posture according to the target posture of the target object in the image data includes: inputting the image data into a preset first neural network model to obtain two-dimensional skeleton data of the target posture, where the first neural network model is used to identify the target posture of the target object in the image data and generate corresponding two-dimensional skeleton data based on the target posture; and inputting the two-dimensional skeleton data into a preset second neural network model to obtain the three-dimensional skeleton model corresponding to the target posture.
  • adjusting the posture feature of the robot to a target position matching the skeleton posture feature and acquiring the rotation angle information of each joint of the robot includes: dividing the posture feature of the robot into a plurality of mapping parts, and performing the following processing for each mapping part: transforming the position of the vector where each joint in the mapping part is located into the position of the vector where the corresponding key point in the skeleton posture feature is located, and obtaining the rotation angle of the joint as the rotation angle information of the joint.
  • the plurality of mapping parts include: a trunk part, a limb part, a head part and a waist part.
  • the method further includes: taking the rotation angle information of each joint as the motion data of the current frame; performing collision detection on the motion data of the current frame; and, if no collision is detected,
  • executing the step of controlling the motion of the corresponding joint according to the rotation angle information of each joint to form the target posture.
  • controlling the motion of the corresponding joint of the robot according to the rotation angle information of each joint to form the target posture includes: filtering the rotation angle information of each joint, and
  • controlling the rotation of the corresponding joint according to the processed rotation angle information to form the target posture.
  • before mapping the three-dimensional skeleton model into the joint space of the robot, the method further includes:
  • normalizing the three-dimensional skeleton model.
  • the method further includes: collecting video data of the target object, and obtaining the image data from the video data.
  • an embodiment of the present disclosure also provides a robot posture control device, including: a model acquisition module, configured to obtain a three-dimensional skeleton model corresponding to the target posture according to the target posture of a target object in image data; a posture acquisition module, configured to map the three-dimensional skeleton model into the joint space of the robot and obtain the posture feature of the robot and the skeleton posture feature of the three-dimensional skeleton model; a rotation angle acquisition module, configured to adjust the posture feature of the robot to a target position matching the skeleton posture feature and obtain the rotation angle information of each joint of the robot; and a motion control module, configured to control the motion of the corresponding joint of the robot according to the rotation angle information of each joint to form the target posture.
  • the model acquisition module is specifically configured to: input the image data into a preset first neural network model to obtain the two-dimensional skeleton data of the target posture, where the first neural network model is used to identify the target posture of the target object in the image data and generate corresponding two-dimensional skeleton data based on the target posture; and input the two-dimensional skeleton data into a preset second neural network model to obtain the three-dimensional skeleton model corresponding to the target posture.
  • the rotation angle acquisition module is specifically configured to: divide the posture feature of the robot into a plurality of mapping parts, and perform the following processing for each mapping part: transform the position of the vector where each joint in the mapping part is located into the position of the vector where the corresponding key point in the skeleton posture feature is located, and obtain the rotation angle of the joint as the rotation angle information of the joint.
  • the plurality of mapping parts include: a trunk part, a limb part, a head part and a waist part.
  • the device further includes: a motion data module, configured to take the rotation angle information of each joint as the motion data of the current frame; a collision detection module, configured to perform collision detection on the motion data of the current frame; and a motion control
  • sub-module, configured to execute, if no collision is detected, the step of controlling the motion of the corresponding joint according to the rotation angle information of each joint to form the target posture.
  • the motion control module is specifically configured to: filter the rotation angle information of each joint; and control the rotation of the corresponding joint according to the processed rotation angle information of each joint to form the target posture.
  • the apparatus further includes: a normalization module, configured to perform normalization processing on the three-dimensional skeleton model.
  • the device further comprises: a video acquisition module for acquiring video data of the target object; an image acquisition module for acquiring the image data from the video data.
  • an embodiment of the present disclosure also provides a robot, comprising: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can execute the above-mentioned method for controlling the posture of the robot.
  • an embodiment of the present disclosure also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above-mentioned method for controlling the posture of the robot.
  • an embodiment of the present disclosure also provides a computer program, including instructions, which, when executed on a computer, cause the computer to execute the above-mentioned method for controlling the posture of a robot.
  • The limbs of the robot move in three-dimensional space. A three-dimensional skeleton model corresponding to the target posture is obtained from the target posture of the target object in the image data, and the model is mapped into the joint space of the robot, so that the three-dimensional skeleton model and the joint space share the same coordinate system. The rotation angle information of each joint of the robot is then obtained, so that the posture formed by controlling the robot's motion based on this information corresponds to the target posture; the target object in the image data thus guides the robot's
  • motion without additional motion capture sensors uploading the target object's movements in real time, reducing the cost of controlling the robot to move according to the target posture. There is also no need to manually debug each joint, which simplifies the complicated steps of generating the target posture and lets the robot learn the action of the target posture faster.
  • FIG. 1 is a flowchart of a method for controlling a robot posture in an embodiment of the present disclosure
  • FIG. 2 is a schematic diagram of a two-dimensional human skeleton provided in an embodiment of the present disclosure
  • FIG. 3 is a flow chart of acquiring rotation angle information of each joint provided in an embodiment of the present disclosure
  • FIG. 4 is a flowchart of performing collision detection on the rotation angle information of each joint provided in an embodiment of the present disclosure;
  • FIG. 5 is a flowchart of filtering processing of the rotation angle information of each joint provided in an embodiment of the present disclosure
  • FIG. 6 is a flowchart of normalizing a three-dimensional skeleton model provided in an embodiment of the present disclosure
  • FIG. 7 is a flowchart of obtaining a three-dimensional skeleton model provided in an embodiment of the present disclosure.
  • FIG. 8 is a flowchart of acquiring image data provided in an embodiment of the present disclosure.
  • FIG. 9 is a flowchart, provided in an embodiment of the present disclosure, of acquiring the rotation angle information of each joint and performing collision detection on that information;
  • FIG. 10 is a flowchart of the robot posture control method of FIG. 9 with filtering processing added;
  • FIG. 11 is a flowchart of the robot posture control method of FIG. 10 with normalization of the three-dimensional skeleton model added;
  • FIG. 12 is a flowchart of the robot posture control method of FIG. 11 with the steps of acquiring the three-dimensional skeleton model added;
  • FIG. 13 is a flowchart of the robot posture control method of FIG. 12 with the steps of acquiring image data added;
  • FIG. 14 is a schematic diagram of a control device for a robot posture provided in another embodiment of the present disclosure.
  • FIG. 15 is a schematic structural diagram of a robot in another embodiment of the present disclosure.
  • The flow of the robot posture control method in an embodiment of the present disclosure is shown in FIG. 1:
  • Step 101 According to the target posture of the target object in the image data, obtain a three-dimensional skeleton model corresponding to the target posture.
  • the image data may be an image captured by a robot, or image data extracted from video data, for example, a frame of image in the video data is used as the image data.
  • the target object can be a human body, an animal, etc.
  • the robot in this embodiment is a multi-joint robot, such as a humanoid robot or an animal-shaped robot.
  • the action posture of the target object can be extracted as the target posture by identifying the target object in the image data.
  • Two-dimensional skeleton data corresponding to the target pose is acquired, and a three-dimensional skeleton model of the target object can be constructed based on the two-dimensional skeleton data.
  • the multi-joint robot takes a humanoid robot as an example, and the target object is a human body.
  • the skeleton of the human body is composed of 17 three-dimensional joint points, such as the two-dimensional human skeleton shown in Figure 2.
  • the serial numbers 0 to 16 in Figure 2 represent: 0 pelvis center, 1 right hip joint, 2 right knee joint, 3 right ankle joint, 4 left hip joint, 5 left knee joint, 6 left ankle joint, 7 spine midpoint, 8 cervical vertebra midpoint, 9 head, 10 crown of the head, 11 left shoulder joint, 12 left elbow joint, 13 left wrist joint, 14 right shoulder joint, 15 right elbow joint and 16 right wrist joint.
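  • For reference, the numbering above can be kept as a lookup table. A minimal sketch in Python (the index-to-name map simply restates Figure 2; the constant name is illustrative):

```python
# Keypoint indices of the 17-joint skeleton in Figure 2 (Human3.6M layout).
H36M_KEYPOINTS = {
    0: "pelvis center", 1: "right hip", 2: "right knee", 3: "right ankle",
    4: "left hip", 5: "left knee", 6: "left ankle", 7: "spine midpoint",
    8: "cervical midpoint", 9: "head", 10: "crown of head",
    11: "left shoulder", 12: "left elbow", 13: "left wrist",
    14: "right shoulder", 15: "right elbow", 16: "right wrist",
}
```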
  • Step 102 Map the three-dimensional skeleton model into the joint space of the robot, and obtain the pose feature of the robot and the skeleton pose feature of the three-dimensional skeleton model.
  • for a manipulator with n joints, all of its link positions can be determined by a set of n joint variables.
  • the set of joint variables is called the n x 1 joint vector, and the space composed of all joint vectors is called the joint space.
  • the joint space of the robot can be determined according to the robotic arm of the robot. For example, the robotic arm of the robot can have 7 degrees of freedom, then based on the robotic arm with 7 degrees of freedom, a 7 x 1 joint vector can be constructed, and the space composed of all joint vectors is The joint space corresponding to the current robot arm.
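  • As an illustration, under this definition the configuration of a 7-degree-of-freedom arm is a single point in a 7-dimensional joint space. A minimal sketch (the joint ordering is an assumption; it is fixed by the actual kinematic chain):

```python
import numpy as np

# A 7 x 1 joint vector for a 7-DoF arm: one joint variable (in radians) per joint.
q = np.array([0.0, 0.3, -0.1, 1.2, 0.0, 0.5, 0.0])
assert q.shape == (7,)  # one point in the arm's joint space
```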
  • the joint space corresponding to the torso of the robot may also be used as the joint space of the robot.
  • when the spatial angle between adjacent parts in the 3D skeleton model is the same as the spatial angle between the corresponding adjacent parts of the robot, the robot can present the same target pose as the 3D skeleton model.
  • the 3D skeleton model and the robot are not in the same coordinate system.
  • the 3D skeleton model is mapped to the joint space of the robot, and the pose feature of the robot is obtained.
  • the pose feature includes the position of the vector where each joint of the robot is located.
  • the skeleton pose feature includes the vector formed by each key point in the skeleton.
  • the magnitude of the vector where joint 11 is located, as shown in Fig. 2, may be the coordinate difference between joint point 11 and joint point 12, and its direction may be from joint point 11 pointing to joint point 12.
  • Step 103 Adjust the posture feature of the robot to a target position matching the posture feature of the skeleton, and obtain the rotation angle information of each joint of the robot.
  • the posture feature of the robot includes the vector where each joint of the robot is located, and each such vector has a corresponding target vector, namely the vector where the corresponding key point in the skeleton posture feature is located. The vector of the joint is transformed so that it coincides with the target vector, and the rotation angle of the joint during this transformation is obtained as the rotation angle information of the joint.
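  • To make the vector-matching step concrete, the sketch below recovers the axis and angle of the rotation that takes a joint's current vector onto its target vector. This is a generic axis-angle construction, not the patent's exact per-joint procedure (which works within each mapping part, as described later):

```python
import numpy as np

def alignment_rotation(joint_vec, target_vec):
    """Axis and angle of the rotation taking joint_vec onto target_vec."""
    u = joint_vec / np.linalg.norm(joint_vec)
    w = target_vec / np.linalg.norm(target_vec)
    axis = np.cross(u, w)
    norm = np.linalg.norm(axis)
    angle = np.arctan2(norm, np.dot(u, w))  # in [0, pi]
    axis = axis / norm if norm > 1e-9 else np.array([1.0, 0.0, 0.0])
    return axis, angle
```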
  • Step 104 Control the motion of the corresponding joint of the robot according to the rotation angle information of each joint to form a target posture.
  • the target pose can be formed by controlling each joint of the robot to move according to the corresponding joint angle.
  • The limbs of the robot move in three-dimensional space. A three-dimensional skeleton model corresponding to the target posture is obtained from the target posture of the target object in the image data, and the model is mapped into the joint space of the robot, so that the three-dimensional skeleton model and the joint space share the same coordinate system. The rotation angle information of each joint of the robot is then obtained, so that the posture formed by controlling the robot's motion based on this information corresponds to the target posture; the target object in the image data thus guides the robot's
  • motion without additional motion capture sensors uploading the target object's movements in real time, reducing the cost of controlling the robot to move according to the target posture. There is also no need to manually debug each joint, which simplifies the complicated steps of generating the target posture and lets the robot learn the action of the target posture faster.
  • step 103 may be performed through the following sub-steps, the flow of which is shown in FIG. 3.
  • Sub-step 1031 Divide the pose feature of the robot into multiple mapping parts.
  • the robot pose feature can be divided into several mapping parts according to the positions of the joint points. Since the joint angles need to be obtained, an RPY (roll-pitch-yaw) system can be constructed based on the joint point positions. The divided mapping parts include: the trunk, the limbs, the head and the waist.
  • the trunk consists of: 0 center of pelvis, 11 left shoulder joint and 14 right shoulder joint.
  • the limbs include: left upper limb, left lower limb, right upper limb and right lower limb.
  • the left upper limb is: 11 left shoulder joint, 12 left elbow joint, 13 left wrist joint.
  • the left lower limb is: 4 left hip joint, 5 left knee joint, 6 left ankle joint.
  • the right upper limb is: 14 right shoulder joint, 15 right elbow joint and 16 right wrist joint.
  • the right lower limb is: 1 right hip joint, 2 right knee joint, 3 right ankle joint.
  • the head includes: 8 cervical vertebra midpoint, 9 head, and 10 crown of the head.
  • the waist includes: 0 pelvic center, 1 right hip joint, 4 left hip joint.
  • By dividing the mapping parts, it is convenient to obtain the Euler angles of the joints in each mapping part, thereby facilitating the calculation of the joint rotation angles, reducing the time for calculating the rotation angle information of the joints, and improving the calculation speed. The resulting part-to-keypoint table is sketched below.
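```python
# Mapping parts and their keypoint indices, following the division in the text.
MAPPING_PARTS = {
    "trunk": [0, 11, 14],
    "left_upper_limb": [11, 12, 13],
    "left_lower_limb": [4, 5, 6],
    "right_upper_limb": [14, 15, 16],
    "right_lower_limb": [1, 2, 3],
    "head": [8, 9, 10],
    "waist": [0, 1, 4],
}
```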
  • Sub-step 1032 Perform the following processing for each mapping part: transform the position of the vector of each joint in the mapping part into the position of the vector of the corresponding key point in the skeleton pose feature, and obtain the rotation angle of the joint as the rotation angle information of the joint.
  • the line segments of the three-dimensional skeleton model can be converted into vectors to obtain the vectors of the key points in the skeleton pose feature.
  • the rotation angle of each joint is solved by Euclidean geometry. Taking the left upper limb as an example, in conjunction with Figure 2, the detailed calculation is as follows:
  • the positive direction of the x-axis of the coordinate system is the vertical direction when the person stands upright,
  • the positive direction of the y-axis is the person's right side, and the positive direction of the z-axis is the person's front. All other joints are converted into this coordinate system. The movement of the human limbs is relative to the torso, and the robot imitating the human motion also moves relative to its torso, so the robot's torso can be used as the base coordinate system for the entire calculation.
  • the space vector v corresponding to the key point in the skeleton pose feature is known, and the respective rotation angles of the two left-shoulder joints a and b are to be obtained.
  • the robot's shoulder corresponds to these two joints, which control the front-back swing and the left-right swing of the upper arm, respectively.
  • the robot's shoulder joint does not swing back and forth about the horizontal axis but about an axis inclined 20° from the horizontal, so the x-axis rotation is performed around the vector (cos20°, sin20°, 0).
  • there are usually multiple groups of solutions, and the group within the joint limits is selected from them as the final output solution.
  • the rotation angle information of joint d is the angle between the key point "11→12" vector and the "12→13" vector.
  • the rotation angle information of joint c can be obtained by reverse calculation: the two vectors are rotated by −b radians around the z-axis and then by −a radians around the vector (cos20°, sin20°, 0), so that the vector v coincides with the −x vector
  • and the key point "12→13" vector reaches a new position; the new vector is projected onto the y-z plane, and the angle it makes there between the y and z axes is the rotation angle information of joint c. A sketch of the angle computations for this limb follows.
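  • The patent does not spell out closed-form expressions for a and b, so the sketch below solves them numerically under stated assumptions: the arm's rest direction points along −x (arm hanging down, x vertical), joint b rotates about the z-axis, and joint a rotates about the tilted axis (cos20°, sin20°, 0). The elbow angle d follows the text directly, as the angle between the "11→12" and "12→13" vectors:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as R

TILT_AXIS = np.array([np.cos(np.radians(20)), np.sin(np.radians(20)), 0.0])
Z_AXIS = np.array([0.0, 0.0, 1.0])

def shoulder_angles(v, rest_dir=np.array([-1.0, 0.0, 0.0])):
    """Numerically solve a (swing about TILT_AXIS) and b (swing about z) so
    that rotating rest_dir by b, then by a, aligns it with the unit arm
    vector v. rest_dir and the rotation order are assumptions."""
    v = v / np.linalg.norm(v)
    def residual(x):
        a, b = x
        rot = R.from_rotvec(a * TILT_AXIS) * R.from_rotvec(b * Z_AXIS)
        return rot.apply(rest_dir) - v
    return least_squares(residual, x0=[0.0, 0.0]).x  # [a, b] in radians

def elbow_angle(p11, p12, p13):
    """Joint d: angle between the 11->12 and 12->13 bone vectors."""
    u, w = p12 - p11, p13 - p12
    c = np.dot(u, w) / (np.linalg.norm(u) * np.linalg.norm(w))
    return np.arccos(np.clip(c, -1.0, 1.0))
```

  • In line with the text, when several (a, b) groups reproduce v, the group within the joint limits would be kept as the final output.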
  • the rotation angles of the three joints of the head are likewise solved by Euclidean geometry.
  • the joint structure of the head is a typical roll-pitch-yaw (RPY) system with three mutually perpendicular axes.
  • the Euler angles can be restored by constructing a rotation matrix from key points 8, 9, and 10.
  • the "9 ⁇ 10" vector is used as the z-axis vector to form the third column of the rotation, and the "8 ⁇ 9” vector is cross-multiplied by the "9 ⁇ 10" vector to obtain the result as the y vector, which forms the second column of the rotation matrix, and then the y cross-multiplies
  • the z vector gets the x vector, as the first column of the rotation matrix. The method of solving the Euler angles by the rotation matrix will not be repeated here.
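  • A sketch of the column construction just described, with unit normalization added (an assumption, to keep the matrix orthonormal) and a standard rotation-matrix-to-Euler conversion; the "xyz" axis order is also an assumption:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def head_euler(p8, p9, p10):
    """Roll-pitch-yaw of the head from keypoints 8, 9 and 10."""
    z = p10 - p9                      # "9->10" vector: third column
    z = z / np.linalg.norm(z)
    y = np.cross(p9 - p8, z)          # "8->9" x "9->10": second column
    y = y / np.linalg.norm(y)
    x = np.cross(y, z)                # first column
    return R.from_matrix(np.column_stack([x, y, z])).as_euler("xyz")
```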
  • the structure of the waist is similar to that of the head; it is also a typical RPY system with three mutually perpendicular axes.
  • the calculation of the rotation angle information of the waist joints is similar to the calculation of the rotation angle information of each joint in the head, and will not be repeated here.
  • the data structure of the human skeleton in this example adopts the Human3.6M skeleton model.
  • the three-dimensional skeleton model is divided into multiple mapping parts, and each mapping part is mapped separately.
  • each mapping part has a simpler structure and is easier to map into the joint space, reducing the difficulty of the mapping.
  • the limbs in the three-dimensional skeleton model and the joint space are then in the same space, which makes it easier to calculate the rotation angle information of the joints.
  • the Euler angle calculation method is adopted, and the calculation is simple and fast.
  • the robot posture control method can also perform the following steps; the process is shown in FIG. 4:
  • Step 104-1 Use the rotation angle information of each joint as the motion data of the current frame.
  • Step 104-2 Perform collision detection on the motion data of the current frame.
  • the collision detection can be performed on the rotation angle information of each joint.
  • the collision detection model can be preset, and the collision detection model can be used to simulate the operation of each limb of the robot.
  • the MoveIt program can be used for collision detection: the robot's URDF file is imported into MoveIt, where URDF (Unified Robot Description Format) is the robot model description format;
  • based on the rotation angle information of each joint, the movement of each limb of the robot can be simulated to judge whether the limbs collide with each other. If no collision is detected, the current rotation angle information of each joint is legal and can be applied to the robot. A sketch of such a check follows.
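  • A hedged sketch of this per-frame check using MoveIt's state-validity service under ROS 1. It assumes a running move_group loaded with the robot's URDF; the planning-group name "upper_body" is a placeholder for the real group:

```python
import rospy
from moveit_msgs.msg import RobotState
from moveit_msgs.srv import GetStateValidity, GetStateValidityRequest
from sensor_msgs.msg import JointState

def frame_is_collision_free(joint_names, joint_angles, group="upper_body"):
    """Ask MoveIt whether the joint angles of the current frame are valid."""
    rospy.wait_for_service("/check_state_validity")
    check = rospy.ServiceProxy("/check_state_validity", GetStateValidity)
    state = RobotState()
    state.joint_state = JointState(name=joint_names, position=joint_angles)
    req = GetStateValidityRequest(robot_state=state, group_name=group)
    return check(req).valid  # True: no collision, the angles may be sent
```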
  • Step 104-3 If no collision is detected, determine to execute the step of controlling the motion of the corresponding joint according to the rotation angle information of each joint to form the target posture.
  • the joint rotation angle is sent to the robot for execution, so as to achieve the effect of the actual robot tracking the posture of the target object.
  • collision detection is performed on the rotation angle information of each joint, and the information is applied to the robot only if no collision is detected (or the detected collision probability is below a preset threshold), which ensures both the accuracy of the robot's posture and the safety of the robot.
  • step 104 can also be performed through the following sub-steps; the flow is shown in FIG. 5:
  • Step 1041 Perform filtering processing on the rotation angle information of each joint.
  • the rotation angle information of each joint is obtained from the recognized skeleton, and the recognized skeleton inevitably contains noise and jitter.
  • the rotation angle information of the joints derived from a noisy skeleton will therefore also contain noise.
  • a sliding-window filtering method can be used for the filtering processing to remove the burr noise in the joint motion, as sketched below.
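```python
from collections import deque
import numpy as np

class SlidingWindowFilter:
    """Moving average over the most recent `size` frames of joint angles.
    The window size is a tuning parameter, not specified in the source."""
    def __init__(self, size=5):
        self.window = deque(maxlen=size)

    def step(self, angles):
        self.window.append(np.asarray(angles, dtype=float))
        return np.mean(self.window, axis=0)  # smoothed angles for this frame
```

  • Feeding each frame's rotation angle vector through step() suppresses burr noise before the angles are sent to the joints.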
  • Step 1042 Control the rotation of the corresponding joint according to the processed rotation angle information of each joint to form a target pose.
  • filtering the rotation angle information of each joint suppresses noise and jitter and improves the accuracy of the rotation angle information.
  • before step 102, the following sub-step may be performed; the process is shown in FIG. 6:
  • Sub-step 102-1 Normalize the three-dimensional skeleton model.
  • because the segment lengths of the recognized skeleton are uneven, they are not directly suitable for the robot's limbs.
  • the 3D skeleton model can therefore be normalized: the skeleton is converted from line segments to vectors by subtracting adjacent keypoints, and each vector is then unitized, so that the resulting skeleton consists of unit vectors, as sketched below.
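```python
import numpy as np

# child -> parent keypoint index, following the Figure 2 / Human3.6M numbering.
# This parent table is an assumption for illustration; the patent does not
# list the bone topology explicitly.
PARENTS = {1: 0, 2: 1, 3: 2, 4: 0, 5: 4, 6: 5, 7: 0, 8: 7, 9: 8,
           10: 9, 11: 8, 12: 11, 13: 12, 14: 8, 15: 14, 16: 15}

def normalize_skeleton(points):
    """points: (17, 3) array of 3D keypoints -> dict of unit bone vectors."""
    bones = {}
    for child, parent in PARENTS.items():
        v = points[child] - points[parent]              # line segment -> vector
        bones[(parent, child)] = v / np.linalg.norm(v)  # unitize
    return bones
```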
  • step 101 may be performed through the following sub-steps; the flow is shown in FIG. 7:
  • Step 1011 Input the image data into the preset first neural network model to obtain the two-dimensional skeleton data of the target posture.
  • the first neural network model is used to identify the target posture of the target object in the image data and generate corresponding two-dimensional skeleton data based on the target posture.
  • Skeleton extraction is mainly completed by two neural networks.
  • the first neural network accepts image input to complete the recognition of target objects in image data and the extraction of two-dimensional skeleton data.
  • the two-dimensional skeleton data includes each joint point of the target object and the position of each joint point.
  • Step 1012 Input the two-dimensional skeleton data into a preset second neural network model to obtain a three-dimensional skeleton model corresponding to the target pose.
  • the input of the second neural network is the 2D skeleton nodes output by the first neural network, and its output is the 3D skeleton model.
  • since neural network models have strong learning capability, using two neural network models to obtain the three-dimensional skeleton model can improve the accuracy and applicability of the three-dimensional skeleton model. The two-stage pipeline is sketched below.
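```python
# Schematic of the two-stage extraction with PyTorch-style modules. `net2d`
# and `net3d` are placeholders; the patent does not name the architectures,
# so any 2D pose estimator plus 2D-to-3D lifting network of these shapes fits.
import torch

def extract_3d_skeleton(image, net2d, net3d):
    """image: (1, 3, H, W) tensor -> (17, 3) tensor of 3D keypoints."""
    with torch.no_grad():
        kps_2d = net2d(image)   # first network: image -> (1, 17, 2) joints
        kps_3d = net3d(kps_2d)  # second network: 2D joints -> (1, 17, 3)
    return kps_3d.squeeze(0)
```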
  • before step 1011, the following steps may also be performed; the flow is shown in FIG. 8:
  • Step 1011-1 Collect video data of the target object.
  • the video data of the target object can be collected in real time.
  • the video data of the target object collected by other devices may also be acquired in real time.
  • Step 1011-2 Obtain image data from video data.
  • Image data can be obtained from the video data, and steps 101 to 104 are performed on the image data, so that the robot can present the target pose of the target object.
  • the robot can track and learn the pose of the target object.
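  • A small sketch of this collect-then-extract step with OpenCV; the capture source (camera index 0) and the frame-skip rate are assumptions:

```python
import cv2

def frames(source=0, skip=2):
    """Yield every `skip`-th frame of the stream as the image data."""
    cap = cv2.VideoCapture(source)
    i = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        if i % skip == 0:
            yield frame
        i += 1
    cap.release()
```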
  • FIG. 9 is a flowchart of acquiring the rotation angle information of each joint and performing collision detection on that information.
  • Step 101 Acquire a three-dimensional skeleton model corresponding to the target posture according to the target posture of the target object in the image data.
  • Step 102 Map the three-dimensional skeleton model into the joint space of the robot, and obtain the pose feature of the robot and the skeleton pose feature of the three-dimensional skeleton model.
  • Sub-step 1031 Divide the pose feature of the robot into multiple mapping parts.
  • Sub-step 1032 Perform the following processing for each mapping part: transform the position of the vector of each joint in the mapping part into the position of the vector of the corresponding key point in the skeleton pose feature, and obtain the rotation angle of the joint as the rotation angle information of the joint.
  • Step 104-1 Use the rotation angle information of each joint as the motion data of the current frame.
  • Step 104-2 Perform collision detection on the motion data of the current frame.
  • Step 104-3 If no collision is detected, determine to execute the step of controlling the motion of the corresponding joint according to the rotation angle information of each joint to form the target posture.
  • Step 104 Control the motion of the corresponding joint of the robot according to the rotation angle information of each joint to form a target posture.
  • FIG. 10 is the flowchart of FIG. 9 with filtering processing added.
  • Step 101 Acquire a three-dimensional skeleton model corresponding to the target posture according to the target posture of the target object in the image data.
  • Step 102 Map the three-dimensional skeleton model into the joint space of the robot, and obtain the pose feature of the robot and the skeleton pose feature of the three-dimensional skeleton model.
  • Sub-step 1031 Divide the pose feature of the robot into multiple mapping parts.
  • Sub-step 1032 Perform the following processing for each mapping part: transform the position of the vector of each joint in the mapping part into the position of the vector of the corresponding key point in the skeleton pose feature, and obtain the rotation angle of the joint as the rotation angle information of the joint.
  • Step 104-1 Use the rotation angle information of each joint as the motion data of the current frame.
  • Step 104-2 Perform collision detection on the motion data of the current frame.
  • Step 104-3 If no collision is detected, determine to execute the step of controlling the motion of the corresponding joint according to the rotation angle information of each joint to form the target posture.
  • Step 1041 Perform filtering processing on the rotation angle information of each joint.
  • Step 1042 Control the rotation of the corresponding joint according to the processed rotation angle information of each joint to form a target pose.
  • FIG. 11 is the flowchart of FIG. 10 with normalization of the three-dimensional skeleton model added.
  • Step 101 Acquire a three-dimensional skeleton model corresponding to the target posture according to the target posture of the target object in the image data.
  • Sub-step 102-1 Normalize the three-dimensional skeleton model.
  • Step 102 Map the three-dimensional skeleton model into the joint space of the robot, and obtain the pose feature of the robot and the skeleton pose feature of the three-dimensional skeleton model.
  • Sub-step 1031 Divide the pose feature of the robot into multiple mapping parts.
  • Sub-step 1032 Perform the following processing for each mapping part: transform the position of the vector of each joint in the mapping part into the position of the vector of the corresponding key point in the skeleton pose feature, and obtain the rotation angle of the joint as the rotation angle information of the joint.
  • Step 104-1 Use the rotation angle information of each joint as the motion data of the current frame.
  • Step 104-2 Perform collision detection on the motion data of the current frame.
  • Step 104-3 If no collision is detected, determine to execute the step of controlling the motion of the corresponding joint according to the rotation angle information of each joint to form the target posture.
  • Step 1041 Perform filtering processing on the rotation angle information of each joint.
  • Step 1042 Control the rotation of the corresponding joint according to the processed rotation angle information of each joint to form a target pose.
  • FIG. 12 is the flowchart of FIG. 11 with the steps of acquiring the three-dimensional skeleton model added.
  • Step 1011 Input the image data into the preset first neural network model to obtain the two-dimensional skeleton data of the target posture.
  • the first neural network model is used to identify the target posture of the target object in the image data and generate corresponding two-dimensional skeleton data based on the target posture.
  • Step 1012 Input the two-dimensional skeleton data into a preset second neural network model to obtain a three-dimensional skeleton model corresponding to the target pose.
  • Sub-step 102-1 Normalize the three-dimensional skeleton model.
  • Step 102 Map the three-dimensional skeleton model into the joint space of the robot, and obtain the pose feature of the robot and the skeleton pose feature of the three-dimensional skeleton model.
  • Sub-step 1031 Divide the pose feature of the robot into multiple mapping parts.
  • Sub-step 1032 Perform the following processing for each mapping part: transform the position of the vector of each joint in the mapping part into the position of the vector of the corresponding key point in the skeleton pose feature, and obtain the rotation angle of the joint as the rotation angle information of the joint.
  • Step 104-1 Use the rotation angle information of each joint as the motion data of the current frame.
  • Step 104-2 Perform collision detection on the motion data of the current frame.
  • Step 104-3 If no collision is detected, determine to execute the step of controlling the motion of the corresponding joint according to the rotation angle information of each joint to form the target posture.
  • Step 1041 Perform filtering processing on the rotation angle information of each joint.
  • Step 1042 Control the rotation of the corresponding joint according to the processed rotation angle information of each joint to form a target pose.
  • FIG. 13 is the flowchart of FIG. 12 with the steps of acquiring image data added.
  • Step 1011-1 Collect video data of the target object.
  • Step 1011-2 Obtain image data from video data.
  • Step 1011 Input the image data into the preset first neural network model to obtain the two-dimensional skeleton data of the target posture.
  • the first neural network model is used to identify the target posture of the target object in the image data and generate corresponding two-dimensional skeleton data based on the target posture.
  • Step 1012 Input the two-dimensional skeleton data into a preset second neural network model to obtain a three-dimensional skeleton model corresponding to the target pose.
  • Sub-step 102-1 Normalize the three-dimensional skeleton model.
  • Step 102 Map the three-dimensional skeleton model into the joint space of the robot, and obtain the pose feature of the robot and the skeleton pose feature of the three-dimensional skeleton model.
  • Sub-step 1031 Divide the pose feature of the robot into multiple mapping parts.
  • Sub-step 1032 Perform the following processing for each mapping part: transform the position of the vector of each joint in the mapping part into the position of the vector of the corresponding key point in the skeleton pose feature, and obtain the rotation angle of the joint as the rotation angle information of the joint.
  • Step 104-1 Use the rotation angle information of each joint as the motion data of the current frame.
  • Step 104-2 Perform collision detection on the motion data of the current frame.
  • Step 104-3 If no collision is detected, determine to execute the step of controlling the motion of the corresponding joint according to the rotation angle information of each joint to form the target posture.
  • Step 1041 Perform filtering processing on the rotation angle information of each joint.
  • Step 1042 Control the rotation of the corresponding joint according to the processed rotation angle information of each joint to form a target pose.
  • FIG. 14 is a schematic diagram of a control device for a robot posture provided in another embodiment of the present disclosure.
  • the robot posture control device includes: a model acquisition module 201, a posture acquisition module 202, a rotation angle acquisition module 203 and a motion control module 204, wherein:
  • the model obtaining module 201 is configured to obtain a three-dimensional skeleton model corresponding to the target pose according to the target pose of the target object in the image data.
  • the image data may be an image captured by a robot, or image data extracted from video data, for example, a frame of image in the video data is used as the image data.
  • the target object can be a human body, an animal, etc.
  • the robot in this embodiment is a multi-joint robot, such as a humanoid robot or an animal-shaped robot.
  • the action posture of the target object can be extracted as the target posture by identifying the target object in the image data.
  • Two-dimensional skeleton data corresponding to the target pose is acquired, and a three-dimensional skeleton model of the target object can be constructed based on the two-dimensional skeleton data.
  • the model acquisition module is specifically configured to: input the image data into a preset first neural network model to obtain the two-dimensional skeleton data of the target posture, where the first neural network model is used to identify the target posture of the target object in the image data and generate corresponding two-dimensional skeleton data based on the target posture; and input the two-dimensional skeleton data into a preset second neural network model to obtain the
  • three-dimensional skeleton model corresponding to the target posture.
  • the attitude acquisition module 202 is used to map the three-dimensional skeleton model into the joint space of the robot, and obtain the attitude features of the robot and the skeleton attitude features of the three-dimensional skeleton model.
  • the space formed by all the joint vectors is the joint space corresponding to the current robot arm, and the joint space corresponding to the torso of the robot may also be used as the joint space of the robot.
  • the space angle between adjacent parts in the 3D skeleton model is the same as the space angle between adjacent parts corresponding to the robot, and the robot can present the same target pose as the 3D skeleton model.
  • the 3D skeleton model and the robot are not in the same coordinate system.
  • the 3D skeleton model is mapped to the joint space of the robot, and the pose feature of the robot is obtained.
  • the pose feature includes the position of the vector where each joint of the robot is located.
  • the skeleton pose feature includes the vector formed by each key point in the skeleton.
  • the magnitude of the vector where joint 11 is located, as shown in FIG. 2, may be the coordinate difference between joint point 11 and joint point 12,
  • and its direction may be from joint point 11 pointing to joint point 12.
  • the rotation angle acquisition module 203 is configured to adjust the posture feature of the robot to a target position matching the skeleton posture feature, and acquire the rotation angle information of each joint of the robot.
  • the posture feature of the robot includes the vectors where each joint of the robot is located, and the vector where each joint is located has a corresponding target vector, and the target vector is the vector where the corresponding key points in the skeleton posture feature are located. Transform the vector of the joint so that the vector of the joint coincides with the target vector, and obtain the rotation angle of the joint during the transformation process as the rotation angle information of the joint.
  • the rotation angle acquisition module is specifically used for: dividing the posture feature of the robot into a plurality of mapping parts; performing the following processing for each mapping part: transforming the position of the vector of each joint in the mapping part into the The position of the vector where the corresponding key point in the skeleton pose feature is located, and the rotation angle of the joint is obtained as the rotation angle information of the joint.
  • the plurality of mapping parts include: trunk, limbs, head and waist.
  • the motion control module 204 is configured to control the motion of the corresponding joint of the robot according to the rotation angle information of each joint to form the target posture.
  • the motion control module is specifically configured to: filter the rotation angle information of each joint; and control the rotation of the corresponding joint according to the processed rotation angle information of each joint to form the target posture.
  • control device for the robot posture further includes:
  • a motion data module, used for taking the rotation angle information of each joint as the motion data of the current frame
  • a collision detection module for performing collision detection on the motion data of the current frame
  • the motion control sub-module is configured to determine and execute the step of controlling the motion of the corresponding joint according to the rotation angle information of each joint to form the target posture if no collision is detected.
  • the device also includes:
  • the normalization module is used for normalizing the three-dimensional skeleton model.
  • the device also includes:
  • a video collection module used for collecting the video data of the target object
  • An image acquisition module configured to acquire the image data from the video data.
  • Another embodiment of the present disclosure further provides a robot, whose structural block diagram is shown in FIG. 15. The robot includes: at least one processor 301; and a memory 302 communicatively connected to the at least one processor 301; wherein the memory 302 stores instructions executable by the at least one processor 301, and the instructions are executed by the at least one processor 301 so that the at least one processor 301 can execute the above-mentioned method for controlling the posture of the robot.
  • the memory 302 and the processor 301 are connected by a bus, and the bus may include any number of interconnected buses and bridges, and the bus links one or more processors 301 and various circuits of the memory 302 together.
  • the bus may also link together various other circuits, such as peripherals, voltage regulators, and power management circuits, which are well known in the art and therefore will not be described further herein.
  • the bus interface provides the interface between the bus and the transceiver.
  • a transceiver may be a single element or multiple elements, such as multiple receivers and transmitters, providing a means for communicating with various other devices over a transmission medium.
  • the data processed by the processor is transmitted over the wireless medium through the antenna; the antenna also receives data and transmits it to the processor.
  • Processor 301 is responsible for managing the bus and general processing, and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions, while memory 302 may be used to store data used by the processor when performing operations.
  • an embodiment of the present disclosure further provides a computer-readable storage medium storing a computer program, and when the computer program is executed by a processor, the above-mentioned method for controlling the attitude of the robot is implemented.
  • an embodiment of the present disclosure also provides a computer program, including instructions, which, when run on a computer, cause the computer to execute the above-mentioned method for controlling the posture of a robot.
  • the program is stored in a storage medium and includes several instructions to make a device (which may be a single-chip microcomputer, a chip, etc.) or a processor execute all or part of the steps of the methods described in the various embodiments of the present disclosure.
  • the aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

A robot posture control method. The method comprises: according to a target posture of a target object in image data, acquiring a three-dimensional skeleton model corresponding to the target posture; mapping the three-dimensional skeleton model into a joint space of a robot, so as to acquire a posture feature of the robot and a skeleton posture feature of the three-dimensional skeleton model; adjusting the posture feature of the robot to a position that matches the skeleton posture feature, so as to acquire rotation angle information of each joint of the robot; and according to the rotation angle information of each joint, controlling the motion of the corresponding joint of the robot, so as to form the target posture. By means of the method, an additional motion capture sensor is not needed to upload a motion of a target object in real time, thereby reducing the cost of controlling a robot to move according to a target posture. The present application further relates to a robot posture control apparatus, a robot, a computer-readable storage medium and a computer program.

Description

机器人姿态的控制方法、机器人、存储介质及计算机程序Robot attitude control method, robot, storage medium and computer program
交叉引用cross reference
本申请要求于2021年04月25日提交中国专利局、申请号为202110450270.2,发明名称为“机器人姿态的控制方法、机器人及存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。This application claims the priority of the Chinese patent application filed on April 25, 2021 with the application number 202110450270.2 and the title of the invention is "Control Method for Robot Pose, Robot and Storage Medium", the entire contents of which are incorporated by reference in in this application.
技术领域technical field
本发明实施例涉及机器人领域,特别涉及一种机器人姿态的控制方法、机器人、存储介质及计算机程序。Embodiments of the present invention relate to the field of robots, and in particular, to a method for controlling robot posture, a robot, a storage medium, and a computer program.
背景技术Background technique
随着科学技术的发展,大量的智能机器人的运动控制系统被设计和制造出来并应用于社会的生产和生活中,以提高社会生产力和提升人们的生活品质。机器人动作由一列的序列生成,如:握手、举手或摆头等;目前机器人的动作序列生成可以由人工根据机器人运动的目标位置,调试机器人运动;或者通过动作捕捉设备实现人体姿态感知,例如,可以利用传感器数据,或者2D视频处理技术,检测并跟踪人体骨架,实现人体姿态感知;根据感知的人体姿态设置机器人的动作序列。With the development of science and technology, a large number of motion control systems of intelligent robots have been designed and manufactured and applied in social production and life to improve social productivity and improve people's quality of life. Robot actions are generated by a sequence of sequences, such as shaking hands, raising hands or shaking heads, etc. At present, the action sequence generation of the robot can be manually adjusted according to the target position of the robot motion; or the human body posture perception can be realized through motion capture equipment, for example, The sensor data or 2D video processing technology can be used to detect and track the human skeleton to realize human posture perception; set the robot's action sequence according to the perceived human posture.
然而,由人工进行调试的方式需要人工逐个设计每个关节的运动,生成机器人的动作序列,形成该机器人的姿态,由于需要逐个调试,且每个关节的运动会影响其他关节的运动,故调试时间长,过程复杂;而通过动作捕捉设备捕捉人体动作,需要额外的设备捕捉人体动作,导致机器动作序列的生成不灵活,成本高。However, the manual debugging method needs to manually design the motion of each joint one by one, generate the action sequence of the robot, and form the posture of the robot. Since it needs to be debugged one by one, and the movement of each joint will affect the movement of other joints, the debugging time The process is long and the process is complicated; while capturing human movements through motion capture equipment requires additional equipment to capture human movements, resulting in inflexible generation of machine action sequences and high costs.
发明内容SUMMARY OF THE INVENTION
本发明实施方式的目的在于提供一种机器人姿态的控制方法、机器人、存储介质及计算机程序,可以快速生成与目标对象匹配的目标姿态,丰富机器人的动作,简化机器人学习动作姿态的成本。The purpose of the embodiments of the present invention is to provide a robot posture control method, robot, storage medium and computer program, which can quickly generate a target posture that matches the target object, enrich the robot's actions, and simplify the cost of the robot's learning action posture.
为解决上述技术问题,第一方面,本公开的实施方式提供了一种机器人姿态的控制方法,包括:根据图像数据中目标对象的目标姿态,获取与所述目标姿态对应的三维骨架模型;将所述三维骨架模型映射至所述机器 人的关节空间内,获取所述机器人的姿态特征和所述三维骨架模型的骨架姿态特征;将所述机器人的姿态特征调整为与所述骨架姿态特征匹配的目标位置,获取所述机器人的各关节的转角信息;根据各所述关节的转角信息控制所述机器人的对应所述关节的运动,形成所述目标姿态。In order to solve the above technical problems, in the first aspect, the embodiments of the present disclosure provide a method for controlling the posture of a robot, including: obtaining a three-dimensional skeleton model corresponding to the target posture according to the target posture of the target object in the image data; The three-dimensional skeleton model is mapped into the joint space of the robot, and the posture features of the robot and the skeleton posture features of the three-dimensional skeleton model are obtained; the posture features of the robot are adjusted to match the skeleton posture features. The target position is obtained, and the rotation angle information of each joint of the robot is obtained; the movement of the corresponding joint of the robot is controlled according to the rotation angle information of each joint to form the target posture.
进一步的,所述根据图像数据中目标对象的目标姿态,获取与所述目标姿态对应的三维骨架模型,包括:将所述图像数据输入预设的第一神经网络模型中,获得所述目标姿态的二维骨架数据,所述第一神经网络模型用于识别所述图像数据中所述目标对象的目标姿态,并基于所述目标姿态生成对应的二维骨架数据;将所述二维骨架数据输入预设的第二神经网络模型中,获得与所述目标姿态对应的三维骨架模型。Further, obtaining the three-dimensional skeleton model corresponding to the target pose according to the target pose of the target object in the image data includes: inputting the image data into a preset first neural network model, and obtaining the target pose the two-dimensional skeleton data, the first neural network model is used to identify the target posture of the target object in the image data, and generate corresponding two-dimensional skeleton data based on the target posture; Inputting the preset second neural network model to obtain a three-dimensional skeleton model corresponding to the target pose.
进一步的,将所述机器人的姿态特征调整为与所述骨架姿态特征匹配的目标位置,获取所述机器人的各关节的转角信息,包括:将所述机器人的姿态特征划分为多个映射部分;针对每个映射部分进行如下处理:将所述映射部分中每个关节所在向量的位置变换为所述骨架姿态特征中对应的关键点所在向量的位置,获取所述关节的转动角度作为所述关节的转角信息。Further, adjusting the posture feature of the robot to a target position matching the skeleton posture feature, and acquiring the rotation angle information of each joint of the robot, includes: dividing the posture feature of the robot into a plurality of mapping parts; The following processing is performed for each mapping part: the position of the vector of each joint in the mapping part is transformed into the position of the vector of the corresponding key point in the skeleton pose feature, and the rotation angle of the joint is obtained as the joint corner information.
进一步的,所述多个映射部分包括:躯干部、四肢部、头部和腰部。Further, the plurality of mapping parts include: a trunk part, a limb part, a head part and a waist part.
进一步的,所述方法还包括:将各所述关节的转角信息作为当前帧的运动数据;对当前帧的所述运动数据进行碰撞检测;若未检测到碰撞,则确定执行根据各所述关节的转角信息控制对应所述关节的运动,形成所述目标姿态的步骤。Further, the method further includes: taking the corner information of each joint as the motion data of the current frame; performing collision detection on the motion data of the current frame; if no collision is detected, determining to execute the motion data according to each joint. The rotation angle information of the control corresponds to the movement of the joint to form the step of forming the target posture.
进一步的,所述根据各所述关节的转角信息控制所述机器人的对应所述关节的运动,形成所述目标姿态,包括:对各所述关节的转角信息进行滤波处理;按照处理后的各所述关节的转角信息控制对应的所述关节旋转,形成所述目标姿态。Further, the controlling the movement of the robot corresponding to the joints according to the rotation angle information of the joints to form the target posture includes: filtering the rotation angle information of the joints; The rotation angle information of the joint controls the rotation of the corresponding joint to form the target posture.
进一步的,在将所述三维骨架模型映射至所述机器人的关节空间内,获取所述机器人的姿态特征和所述三维骨架模型的骨架姿态特征之前,所述方法还包括:对所述三维骨架模型进行归一化处理。Further, before the three-dimensional skeleton model is mapped into the joint space of the robot, and the posture features of the robot and the skeleton posture features of the three-dimensional skeleton model are acquired, the method further includes: mapping the three-dimensional skeleton. The model is normalized.
进一步的,在将所述图像数据输入预设的第一神经网络模型中,获得所述目标姿态的二维骨架数据之前,所述方法还包括:采集所述目标对象的视频数据;从所述视频数据中获取所述图像数据。Further, before the two-dimensional skeleton data of the target pose is obtained by inputting the image data into the preset first neural network model, the method further includes: collecting video data of the target object; The image data is obtained from video data.
另一方面,本公开的实施方式还提供了一种机器人姿态的控制装置,包括:模型获取模块,用于根据图像数据中目标对象的目标姿态,获取与所述目标姿态对应的三维骨架模型;姿态获取模块,用于将所述三维骨架模型映射至所述机器人的关节空间内,获取所述机器人的姿态特征和所述三维骨架模型的骨架姿态特征;转角获取模块,用于将所述机器人的姿态特征调整为与所述骨架姿态特征匹配的目标位置,获取所述机器人的各关节的转角信息;控制运动模块,用于根据各所述关节的转角信息控制所述机器人的对应所述关节的运动,形成所述目标姿态。On the other hand, an embodiment of the present disclosure also provides a robot posture control device, including: a model acquisition module, configured to acquire a three-dimensional skeleton model corresponding to the target posture according to the target posture of the target object in the image data; an attitude acquisition module, used to map the three-dimensional skeleton model into the joint space of the robot, and acquire the attitude features of the robot and the skeleton attitude characteristics of the three-dimensional skeleton model; the corner acquisition module is used to map the robot The posture feature of the robot is adjusted to the target position matching the skeleton posture feature, and the rotation angle information of each joint of the robot is obtained; the control motion module is used to control the corresponding joint of the robot according to the rotation angle information of each joint. movement to form the target pose.
Further, the model acquisition module is specifically configured to: input the image data into a preset first neural network model to obtain two-dimensional skeleton data of the target posture, where the first neural network model is used to recognize the target posture of the target object in the image data and generate corresponding two-dimensional skeleton data based on the target posture; and input the two-dimensional skeleton data into a preset second neural network model to obtain the three-dimensional skeleton model corresponding to the target posture.
Further, the rotation angle acquisition module is specifically configured to: divide the posture features of the robot into a plurality of mapping parts, and perform the following processing for each mapping part: transform the position of the vector in which each joint of the mapping part lies to the position of the vector in which the corresponding key point of the skeleton posture features lies, and acquire the rotation angle of the joint as the rotation angle information of the joint.
Further, the plurality of mapping parts include: a trunk part, limb parts, a head part and a waist part.
Further, the device further includes: a motion data module configured to take the rotation angle information of each joint as motion data of a current frame; a collision detection module configured to perform collision detection on the motion data of the current frame; and a motion control sub-module configured to, if no collision is detected, determine to execute the step of controlling the motion of the corresponding joints according to the rotation angle information of each joint to form the target posture.
Further, the motion control module is specifically configured to: filter the rotation angle information of each joint, and control the rotation of the corresponding joints according to the filtered rotation angle information of each joint to form the target posture.
Further, the device further includes a normalization module configured to normalize the three-dimensional skeleton model.
Further, the device further includes: a video collection module configured to collect video data of the target object; and an image acquisition module configured to acquire the image data from the video data.
In another aspect, an embodiment of the present disclosure further provides a robot, including: at least one processor; and a memory communicatively connected to the at least one processor, where the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the above method for controlling the posture of a robot.
In another aspect, an embodiment of the present disclosure further provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the above method for controlling the posture of a robot.
In another aspect, an embodiment of the present disclosure further provides a computer program including instructions that, when run on a computer, cause the computer to execute the above method for controlling the posture of a robot.
In the embodiments of the present disclosure, the limbs of the robot move in three-dimensional space. A three-dimensional skeleton model corresponding to the target posture of the target object in the image data is acquired and mapped into the joint space of the robot, so that the three-dimensional skeleton model and the joint space are in the same coordinate system, and the rotation angle information of each joint of the robot is acquired, so that the posture formed by controlling the motion of the robot based on the rotation angle information of each joint corresponds to the target posture. Since the motion of the robot is guided by the target object in the image data, no additional motion capture sensor is needed to upload the motion of the target object in real time, which reduces the cost of controlling the robot to move according to the target posture; and since there is no need to manually debug each joint, the complex steps of generating the target posture are simplified, so that the robot can learn the motion of the target posture more quickly.
Description of drawings
One or more embodiments are illustrated by the figures in the corresponding drawings. These illustrations do not constitute a limitation on the embodiments; elements with the same reference numerals in the drawings denote similar elements, and unless otherwise stated, the figures are not drawn to scale.
FIG. 1 is a flowchart of a method for controlling the posture of a robot according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a two-dimensional human skeleton according to an embodiment of the present disclosure;
FIG. 3 is a flowchart of acquiring the rotation angle information of each joint according to an embodiment of the present disclosure;
FIG. 4 is a flowchart of performing collision detection on the rotation angle information of each joint according to an embodiment of the present disclosure;
FIG. 5 is a flowchart of filtering the rotation angle information of each joint according to an embodiment of the present disclosure;
FIG. 6 is a flowchart of normalizing the three-dimensional skeleton model according to an embodiment of the present disclosure;
FIG. 7 is a flowchart of acquiring the three-dimensional skeleton model according to an embodiment of the present disclosure;
FIG. 8 is a flowchart of acquiring the image data according to an embodiment of the present disclosure;
FIG. 9 is a flowchart of acquiring the rotation angle information of the joints and performing collision detection on that information according to an embodiment of the present disclosure;
FIG. 10 is a flowchart of the method of FIG. 9 with filtering added;
FIG. 11 is a flowchart of the method of FIG. 10 with normalization of the three-dimensional skeleton model added;
FIG. 12 is a flowchart of the method of FIG. 11 with acquisition of the three-dimensional skeleton model added;
FIG. 13 is a flowchart of the method of FIG. 12 with acquisition of the image data added;
FIG. 14 is a schematic diagram of a device for controlling the posture of a robot according to another embodiment of the present disclosure;
FIG. 15 is a schematic structural diagram of a robot according to another embodiment of the present disclosure.
Detailed description
To make the objectives, technical solutions and advantages of the embodiments of the present invention clearer, the embodiments of the present invention are described in detail below with reference to the drawings. Those of ordinary skill in the art can understand that many technical details are set forth in the embodiments so that readers can better understand the present disclosure; even without these technical details and the various changes and modifications based on the following embodiments, however, the technical solutions claimed in the present disclosure can still be implemented.
The division into the following embodiments is for convenience of description and shall not constitute any limitation on the specific implementation of the present invention; the embodiments may be combined with and refer to each other on the premise that they do not contradict each other.
The flow of the method for controlling the posture of a robot in an embodiment of the present disclosure is shown in FIG. 1:
Step 101: Acquire, according to the target posture of the target object in the image data, a three-dimensional skeleton model corresponding to the target posture.
Specifically, the image data may be an image captured by the robot, or image data extracted from video data, for example, one frame of the video data is taken as the image data. The target object may be a human body, an animal, or the like. The robot in this embodiment is a multi-joint robot, such as a humanoid robot or an animal-shaped robot.
The target object in the image data can be recognized, and the action posture of the target object can be extracted as the target posture. Two-dimensional skeleton data corresponding to the target posture is acquired, and a three-dimensional skeleton model of the target object can be constructed based on the two-dimensional skeleton data.
In this example, a humanoid robot is taken as an example of the multi-joint robot, and the corresponding target object is a human body. The human skeleton is usually composed of 17 three-dimensional joint points, as in the two-dimensional human skeleton shown in FIG. 2, where the numbers 0 to 16 respectively denote: 0 pelvis center, 1 right hip joint, 2 right knee joint, 3 right ankle joint, 4 left hip joint, 5 left knee joint, 6 left ankle joint, 7 spine midpoint, 8 cervical-spine midpoint, 9 head, 10 crown of the head, 11 left shoulder joint, 12 left elbow joint, 13 left wrist joint, 14 right shoulder joint, 15 right elbow joint and 16 right wrist joint.
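For illustration, this 17-point skeleton can be written down directly as a small data structure. The following is a minimal sketch (the variable names and the parent table are illustrative; the patent only fixes the joint numbering):

```python
import numpy as np

# Index -> joint name, following the numbering of FIG. 2 (a Human3.6M-style layout).
JOINT_NAMES = [
    "pelvis", "r_hip", "r_knee", "r_ankle",        # 0-3
    "l_hip", "l_knee", "l_ankle",                  # 4-6
    "spine_mid", "neck_mid", "head", "head_top",   # 7-10
    "l_shoulder", "l_elbow", "l_wrist",            # 11-13
    "r_shoulder", "r_elbow", "r_wrist",            # 14-16
]

# Parent of each joint point; -1 marks the root (pelvis center). This tree is
# an assumption consistent with the adjacencies used later in the text.
PARENTS = [-1, 0, 1, 2, 0, 4, 5, 0, 7, 8, 9, 8, 11, 12, 8, 14, 15]

# A pose is then simply a (17, 3) array of joint coordinates.
pose = np.zeros((17, 3))
```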
Step 102: Map the three-dimensional skeleton model into the joint space of the robot, and acquire the posture features of the robot and the skeleton posture features of the three-dimensional skeleton model.
Specifically, for a manipulator with n degrees of freedom, the positions of all of its links can be determined by a set of n joint variables. This set of joint variables is called an n x 1 joint vector, and the space composed of all joint vectors is called the joint space. The joint space of the robot can be determined according to the robot's arm. For example, if the robot's arm has 7 degrees of freedom, a 7 x 1 joint vector can be constructed based on that arm, and the space composed of all such joint vectors is the joint space corresponding to the arm. The joint space corresponding to the trunk of the robot may also be taken as the joint space of the robot.
When the spatial angle between adjacent parts of the three-dimensional skeleton model is the same as the spatial angle between the corresponding adjacent parts of the robot, the robot presents the same target posture as the three-dimensional skeleton model. However, the three-dimensional skeleton model and the robot are not in the same coordinate system. In this example, the three-dimensional skeleton model is mapped into the joint space of the robot, and the posture features of the robot are acquired; the posture features include the position of the vector in which each joint of the robot lies, and the skeleton posture features of the three-dimensional skeleton model include the vectors formed by the key points of the skeleton. For example, the magnitude of the vector in which joint 11 shown in FIG. 2 lies may be the coordinate difference between joint point 11 and joint point 12, and its direction may point from joint point 11 to joint point 12.
Step 103: Adjust the posture features of the robot to target positions matching the skeleton posture features, and acquire the rotation angle information of each joint of the robot.
Specifically, the posture features of the robot include the vector in which each joint of the robot lies, and each joint's vector has a corresponding target vector, namely the vector in which the corresponding key point of the skeleton posture features lies. The joint's vector is transformed so that it coincides with the target vector, and the angle rotated during the transformation is acquired as the rotation angle information of the joint.
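As a concrete illustration of this step, a minimal sketch follows. It assumes both vectors are already expressed in the same frame and uses a generic axis-angle computation; the patent itself does not prescribe a particular formula:

```python
import numpy as np

def rotation_to_align(v_joint, v_target):
    # Returns (axis, angle) of the rotation taking v_joint onto v_target;
    # the angle is what is recorded as the joint's rotation angle information.
    a = v_joint / np.linalg.norm(v_joint)
    b = v_target / np.linalg.norm(v_target)
    axis = np.cross(a, b)
    s, c = np.linalg.norm(axis), np.dot(a, b)
    angle = np.arctan2(s, c)          # robust even for nearly parallel vectors
    if s > 1e-9:
        axis = axis / s
    return axis, angle
```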
Step 104: Control the motion of the corresponding joints of the robot according to the rotation angle information of each joint to form the target posture.
The target posture is formed by controlling each joint of the robot to move according to the corresponding joint rotation angle.
In the embodiments of the present disclosure, the limbs of the robot move in three-dimensional space. A three-dimensional skeleton model corresponding to the target posture of the target object in the image data is acquired and mapped into the joint space of the robot, so that the three-dimensional skeleton model and the joint space are in the same coordinate system, and the rotation angle information of each joint of the robot is acquired, so that the posture formed by controlling the motion of the robot based on the rotation angle information of each joint corresponds to the target posture. Since the motion of the robot is guided by the target object in the image data, no additional motion capture sensor is needed to upload the motion of the target object in real time, which reduces the cost of controlling the robot to move according to the target posture; and since there is no need to manually debug each joint, the complex steps of generating the target posture are simplified, so that the robot can learn the motion of the target posture more quickly.
In one embodiment, step 103 may be implemented by the following sub-steps, the flow of which is shown in FIG. 3.
Sub-step 1031: Divide the posture features of the robot into a plurality of mapping parts.
Specifically, the posture features of the robot may be divided into several mapping parts according to the positions of the joint points. Since the joint angles need to be acquired, an RPY (roll-pitch-yaw) system can be constructed based on the positions of the joint points. The divided mapping parts include: the trunk, the limbs, the head and the waist. The trunk includes: 0 pelvis center, 11 left shoulder joint and 14 right shoulder joint. The limbs include: the left upper limb, the left lower limb, the right upper limb and the right lower limb. The left upper limb consists of: 11 left shoulder joint, 12 left elbow joint and 13 left wrist joint. The left lower limb consists of: 4 left hip joint, 5 left knee joint and 6 left ankle joint. The right upper limb consists of: 14 right shoulder joint, 15 right elbow joint and 16 right wrist joint. The right lower limb consists of: 1 right hip joint, 2 right knee joint and 3 right ankle joint. The head includes: 8 cervical-spine midpoint, 9 head and 10 crown of the head. The waist includes: 0 pelvis center, 1 right hip joint and 4 left hip joint.
Dividing the posture features into mapping parts makes it easy to obtain the Euler angles of the joints in each mapping part, which in turn facilitates calculating the joint rotation angles, reduces the time for calculating the rotation angle information and increases the calculation speed.
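As an illustration, the grouping described above can be written down as a lookup table. A minimal sketch (the part names are illustrative; the joint indices are those of FIG. 2):

```python
# Joint indices of FIG. 2 grouped into the mapping parts described above.
MAPPING_PARTS = {
    "trunk":     [0, 11, 14],
    "left_arm":  [11, 12, 13],
    "left_leg":  [4, 5, 6],
    "right_arm": [14, 15, 16],
    "right_leg": [1, 2, 3],
    "head":      [8, 9, 10],
    "waist":     [0, 1, 4],
}
```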
Sub-step 1032: Perform the following processing for each mapping part: transform the position of the vector in which each joint of the mapping part lies to the position of the vector in which the corresponding key point of the skeleton posture features lies, and acquire the rotation angle of the joint as the rotation angle information of the joint.
By subtracting adjacent key points, the line segments of the three-dimensional skeleton model can be converted into vectors, yielding the vectors in which the key points of the skeleton posture features lie. The rotation angle of each joint is then obtained by Euclidean geometry. Taking the left upper limb as an example, and with reference to FIG. 2, the calculation process is as follows:
A coordinate system is constructed with the normal vector of the plane formed by joints 0, 11 and 14 as the z-axis and the vector from joint 0 to joint 8 as the x-axis. The positive x-axis of this coordinate system is the vertical direction when a person stands upright, the positive y-axis is the person's right, and the positive z-axis is the person's front. All other joints are converted into this coordinate system. Human limb motion is based on the torso, and the motion of a robot imitating human motion is likewise based on the torso, so the trunk of the robot can serve as the base coordinate system for the entire calculation.
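A minimal sketch of constructing this torso frame from the key points is given below. Orthonormalizing the axes by cross products is an assumption; the patent leaves the exact construction implicit:

```python
import numpy as np

def torso_frame(kp):
    # kp: (17, 3) array of key points. Returns a 3x3 rotation matrix whose
    # columns are the torso x/y/z axes described above.
    x_raw = kp[8] - kp[0]                          # joint 0 -> joint 8
    z = np.cross(kp[11] - kp[0], kp[14] - kp[0])   # normal of plane (0, 11, 14)
    y = np.cross(z, x_raw)                         # completes a right-handed frame
    x = np.cross(y, z)                             # x_raw re-orthogonalized against z
    x, y, z = (v / np.linalg.norm(v) for v in (x, y, z))
    return np.stack([x, y, z], axis=1)

# Every other joint is then expressed in this frame as R.T @ (kp[i] - kp[0]).
```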
The joint rotation angles for the key-point vector "11→12" are calculated. In this example, the spatial vector v of the corresponding key points in the skeleton posture features is known, and the respective rotation angles of the two left-shoulder joints a and b are sought; the robot's shoulder (the position of key point 11 in FIG. 2) corresponds to two joints that respectively control the forward-backward swing and the left-right swing of the upper arm. After the x-axis is first rotated by a radians about the vector (cos20°, sin20°, 0) and then by b radians about the z-axis, it coincides with v. In this example, the robot's shoulder joint does not swing back and forth about the horizontal axis, but about a direction inclined 20° from the horizontal axis, which is why the x-axis rotates about the vector (cos20°, sin20°, 0). The calculation usually yields multiple sets of solutions, and the set lying within the joint limits is selected as the final output.
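Since the closed-form geometric solution is not spelled out in full here, the sketch below solves the same constraint numerically; this is one possible reading, and the solver and initial guess are assumptions:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as R

U = np.array([np.cos(np.radians(20)), np.sin(np.radians(20)), 0.0])  # tilted shoulder axis
X = np.array([1.0, 0.0, 0.0])
Z = np.array([0.0, 0.0, 1.0])

def shoulder_angles(v):
    # Solve R_z(b) * R_U(a) * x == v for the two shoulder joint angles (a, b),
    # where v is the unit '11 -> 12' bone vector in the torso frame.
    v = v / np.linalg.norm(v)
    def residual(ab):
        a, b = ab
        w = R.from_rotvec(b * Z).apply(R.from_rotvec(a * U).apply(X))
        return w - v
    sol = least_squares(residual, x0=[0.0, 0.0])
    return sol.x  # one of possibly several solutions; joint limits pick the final one
```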
After the rotation angles of joint a and joint b corresponding to the key-point vector "11→12" are obtained, the rotation angle information of joints c and d corresponding to the key-point vector "12→13" is calculated. The rotation angle information of joint d is the angle between the "11→12" vector and the "12→13" vector. The rotation angle information of joint c can be obtained by inverse calculation: the two vectors are rotated by -b radians about the z-axis and then by -a radians about the vector (cos20°, sin20°, 0). In this way, the vector v coincides with the -x vector, while the key-point vector "12→13" reaches a new position; the vector at the new position is projected onto the y-z plane, and the angle formed between its y and z components gives the rotation angle information of joint c.
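Continuing the sketch, the elbow pair can be computed as described: joint d from the angle between the two bone vectors, and joint c by undoing the shoulder rotations and projecting into the y-z plane. The atan2 convention for the projected angle is our reading of the passage, not something the patent states explicitly:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

U = np.array([np.cos(np.radians(20)), np.sin(np.radians(20)), 0.0])
Z = np.array([0.0, 0.0, 1.0])

def elbow_angles(v_upper, v_fore, a, b):
    # Joint d: angle between the '11 -> 12' and '12 -> 13' bone vectors.
    cos_d = np.dot(v_upper, v_fore) / (np.linalg.norm(v_upper) * np.linalg.norm(v_fore))
    d = np.arccos(np.clip(cos_d, -1.0, 1.0))
    # Joint c: rotate by -b about z, then by -a about U, and measure the angle
    # of the transformed forearm vector projected onto the y-z plane.
    undo = R.from_rotvec(-a * U) * R.from_rotvec(-b * Z)
    w = undo.apply(v_fore)
    c = np.arctan2(w[2], w[1])
    return c, d
```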
The rotation angles of the three joints of the head are likewise obtained by Euclidean geometry.
The joint structure of the head is a typical roll-pitch-yaw system with three mutually perpendicular axes. A rotation matrix is constructed from key points 8, 9 and 10, from which the Euler angles can be recovered. The "9→10" vector serves as the z-axis vector and forms the third column of the rotation matrix; the cross product of the "8→9" vector and the "9→10" vector gives the y vector, which forms the second column of the rotation matrix; then the cross product of y and z gives the x vector, which serves as the first column of the rotation matrix. The way Euler angles are solved from a rotation matrix is not repeated here.
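A minimal sketch of this construction follows; the Euler convention passed to as_euler is an assumption, since the patent only says the angles can be recovered from the matrix:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def head_euler(kp):
    # Build the head rotation matrix from key points 8, 9 and 10.
    z = kp[10] - kp[9]                        # '9 -> 10' vector, third column
    y = np.cross(kp[9] - kp[8], z)            # '8 -> 9' x '9 -> 10', second column
    x = np.cross(y, z)                        # y x z, first column
    x, y, z = (v / np.linalg.norm(v) for v in (x, y, z))
    Rm = np.stack([x, y, z], axis=1)
    return R.from_matrix(Rm).as_euler("xyz")  # roll-pitch-yaw of the head joints
```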
The structure of the waist is similar to that of the head; it is also a typical RPY system with three mutually perpendicular axes. The calculation of the rotation angle information of the waist joints is similar to that of the head joints and is not repeated here.
The data structure of the human skeleton in this example adopts the Human3.6M skeleton model.
In this embodiment, the three-dimensional skeleton model is divided into a plurality of mapping parts, and mapping is performed for each mapping part separately. Since each mapping part has a simpler structure than the entire three-dimensional skeleton model, it is easier to map into the joint space, which reduces the difficulty of the mapping. Meanwhile, the limbs of the three-dimensional skeleton model and the joint space are in the same space, which is more favorable for calculating the rotation angle information of the joints; and the Euler-angle calculation method keeps the calculation simple and fast.
In one embodiment, the method for controlling the posture of the robot may further perform the following steps, the flow of which is shown in FIG. 4:
Step 104-1: Take the rotation angle information of each joint as the motion data of the current frame.
Step 104-2: Perform collision detection on the motion data of the current frame.
Since the postures of the recognized three-dimensional skeleton model are varied, collision detection can be performed on the rotation angle information of each joint to avoid self-interference between the robot's limbs when the joint information is applied to the robot. A collision detection model can be preset to simulate the motion of the robot's limbs. For example, the MoveIt program can be used for collision detection: the URDF file of the robot (URDF being the robot model description format) is imported into MoveIt, the rotation angle information of each joint is input, the motion of the robot's limbs is simulated according to that information, and whether the limbs collide with each other is judged. If no collision is detected, the current rotation angle information of each joint is valid and can be applied to the robot.
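A hedged sketch of such a per-frame check in a ROS/MoveIt setup is given below; it relies on the state-validity service that move_group exposes once the robot model is loaded, and the group name and wiring are illustrative:

```python
import rospy
from moveit_msgs.srv import GetStateValidity, GetStateValidityRequest
from sensor_msgs.msg import JointState

def frame_collides(joint_names, joint_angles, group="whole_body"):
    # Ask move_group whether this joint configuration is valid, i.e. free of
    # (self-)collisions; returns True when the frame must be discarded.
    rospy.wait_for_service("/check_state_validity")
    check = rospy.ServiceProxy("/check_state_validity", GetStateValidity)
    req = GetStateValidityRequest()
    req.group_name = group
    req.robot_state.joint_state = JointState(name=joint_names, position=joint_angles)
    return not check(req).valid
```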
Step 104-3: If no collision is detected, determine to execute the step of controlling the motion of the corresponding joints according to the rotation angle information of each joint to form the target posture.
The joint rotation angles are sent to the robot for execution, achieving the effect of the actual robot tracking the posture of the target object.
In this embodiment, collision detection is performed on the rotation angle information of each joint, and the rotation angle information is applied to the robot only when the detected collision probability is lower than a preset threshold, which ensures the accuracy of the robot's posture and also guarantees the safety of the robot.
In one embodiment, step 104 may further be implemented by the following sub-steps, the flow of which is shown in FIG. 5:
Step 1041: Filter the rotation angle information of each joint.
The rotation angle information of each joint is obtained based on the recognized skeleton, and the recognized skeleton inevitably contains noise and jitter. Rotation angle information derived from a noisy skeleton therefore also carries noise. The rotation angle information of the joints is filtered; sliding window filtering can be used to remove spike noise from the joint motion.
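A minimal sketch of such a sliding window filter, here a plain moving average over the last few frames (the window size is an assumption):

```python
from collections import deque
import numpy as np

class SlidingWindowFilter:
    # Moving average over the most recent `size` frames of joint angles.
    def __init__(self, size=5):
        self.window = deque(maxlen=size)

    def update(self, angles):
        self.window.append(np.asarray(angles, dtype=float))
        return np.mean(self.window, axis=0)  # smoothed angles for this frame
```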
Step 1042: Control the rotation of the corresponding joints according to the filtered rotation angle information of each joint to form the target posture.
In this embodiment, the rotation angle information of each joint is filtered to eliminate noise and jitter and to improve the accuracy of the rotation angle information.
In one embodiment, the following sub-step may also be performed before step 102, the flow of which is shown in FIG. 6:
Sub-step 102-1: Normalize the three-dimensional skeleton model.
Specifically, after the three-dimensional skeleton model of the target posture is obtained, the lengths of the skeleton segments are uneven and are not suited to the robot's limbs. To facilitate mapping the three-dimensional skeleton model into the joint space, the model can be normalized. For example, by subtracting adjacent key points, the skeleton is converted from line segments into vectors; the vectors are then normalized to unit length, so that the resulting skeleton consists of unit vectors.
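A minimal sketch of this normalization, reusing the hypothetical parent table from the earlier sketch:

```python
import numpy as np

def normalize_skeleton(kp, parents):
    # Convert the skeleton's line segments into unit bone vectors by
    # subtracting each joint point from its parent and normalizing.
    bones = {}
    for j, p in enumerate(parents):
        if p < 0:
            continue  # skip the root
        v = kp[j] - kp[p]
        bones[(p, j)] = v / np.linalg.norm(v)
    return bones
```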
In this embodiment, normalizing the three-dimensional skeleton model eliminates the problem that the limb sizes of different target objects differ from those of the robot, which facilitates subsequently mapping the three-dimensional skeleton model into the joint space.
In one embodiment, step 101 may be implemented by the following sub-steps, the flow of which is shown in FIG. 7:
Step 1011: Input the image data into a preset first neural network model to obtain the two-dimensional skeleton data of the target posture, where the first neural network model is used to recognize the target posture of the target object in the image data and generate corresponding two-dimensional skeleton data based on the target posture.
Skeleton extraction is mainly completed by two neural networks. The first neural network takes an image as input and completes the recognition of the target object in the image data and the extraction of its two-dimensional skeleton data. The two-dimensional skeleton data includes the joint points of the target object and the position of each joint point.
Step 1012: Input the two-dimensional skeleton data into a preset second neural network model to obtain the three-dimensional skeleton model corresponding to the target posture.
The input of the second neural network is the two-dimensional skeleton nodes output by the first neural network, and its output is the three-dimensional skeleton model.
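Put together, the two-stage lifting reads as below; net2d and net3d stand in for the preset first and second neural network models, whose architectures the patent does not fix:

```python
def image_to_skeleton3d(image, net2d, net3d):
    # Stage 1: image -> 2D key points; Stage 2: 2D key points -> 3D skeleton.
    kp2d = net2d(image)   # e.g. a (17, 2) array of pixel coordinates
    kp3d = net3d(kp2d)    # e.g. a (17, 3) array of joint coordinates
    return kp3d
```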
In this embodiment, since neural network models have strong learning capacity, using two neural network models to obtain the three-dimensional skeleton model can improve the accuracy and applicability of the three-dimensional skeleton model.
In one embodiment, the following steps may also be performed before step 1011 is executed, the flow of which is shown in FIG. 8:
Step 1011-1: Collect video data of the target object.
Specifically, the video data of the target object can be collected in real time, or the video data of the target object collected by other devices can be acquired in real time.
Step 1011-2: Acquire the image data from the video data.
Image data can be acquired from the video data, and steps 101 to 104 are performed on the image data, so that the robot can present the target posture of the target object. For continuous image data, the robot can achieve tracking and learning of the target object's posture.
The above embodiments can be combined with and refer to each other. For example, the following are examples of such combinations, but are not limited thereto; the embodiments can be combined arbitrarily into a new embodiment on the premise that they do not contradict each other.
In one embodiment, FIG. 9 shows a flowchart of acquiring the rotation angle information of the joints and performing collision detection on the rotation angle information.
Step 101: Acquire, according to the target posture of the target object in the image data, the three-dimensional skeleton model corresponding to the target posture.
Step 102: Map the three-dimensional skeleton model into the joint space of the robot, and acquire the posture features of the robot and the skeleton posture features of the three-dimensional skeleton model.
Sub-step 1031: Divide the posture features of the robot into a plurality of mapping parts.
Sub-step 1032: Perform the following processing for each mapping part: transform the position of the vector in which each joint of the mapping part lies to the position of the vector in which the corresponding key point of the skeleton posture features lies, and acquire the rotation angle of the joint as the rotation angle information of the joint.
Step 104-1: Take the rotation angle information of each joint as the motion data of the current frame.
Step 104-2: Perform collision detection on the motion data of the current frame.
Step 104-3: If no collision is detected, determine to execute the step of controlling the motion of the corresponding joints according to the rotation angle information of each joint to form the target posture.
Step 104: Control the motion of the corresponding joints of the robot according to the rotation angle information of each joint to form the target posture.
In one embodiment, FIG. 10 shows the flowchart of FIG. 9 with filtering added.
Step 101: Acquire, according to the target posture of the target object in the image data, the three-dimensional skeleton model corresponding to the target posture.
Step 102: Map the three-dimensional skeleton model into the joint space of the robot, and acquire the posture features of the robot and the skeleton posture features of the three-dimensional skeleton model.
Sub-step 1031: Divide the posture features of the robot into a plurality of mapping parts.
Sub-step 1032: Perform the following processing for each mapping part: transform the position of the vector in which each joint of the mapping part lies to the position of the vector in which the corresponding key point of the skeleton posture features lies, and acquire the rotation angle of the joint as the rotation angle information of the joint.
Step 104-1: Take the rotation angle information of each joint as the motion data of the current frame.
Step 104-2: Perform collision detection on the motion data of the current frame.
Step 104-3: If no collision is detected, determine to execute the step of controlling the motion of the corresponding joints according to the rotation angle information of each joint to form the target posture.
Step 1041: Filter the rotation angle information of each joint.
Step 1042: Control the rotation of the corresponding joints according to the filtered rotation angle information of each joint to form the target posture.
In one embodiment, FIG. 11 shows the flowchart of FIG. 10 with normalization of the three-dimensional skeleton model added.
Step 101: Acquire, according to the target posture of the target object in the image data, the three-dimensional skeleton model corresponding to the target posture.
Sub-step 102-1: Normalize the three-dimensional skeleton model.
Step 102: Map the three-dimensional skeleton model into the joint space of the robot, and acquire the posture features of the robot and the skeleton posture features of the three-dimensional skeleton model.
Sub-step 1031: Divide the posture features of the robot into a plurality of mapping parts.
Sub-step 1032: Perform the following processing for each mapping part: transform the position of the vector in which each joint of the mapping part lies to the position of the vector in which the corresponding key point of the skeleton posture features lies, and acquire the rotation angle of the joint as the rotation angle information of the joint.
Step 104-1: Take the rotation angle information of each joint as the motion data of the current frame.
Step 104-2: Perform collision detection on the motion data of the current frame.
Step 104-3: If no collision is detected, determine to execute the step of controlling the motion of the corresponding joints according to the rotation angle information of each joint to form the target posture.
Step 1041: Filter the rotation angle information of each joint.
Step 1042: Control the rotation of the corresponding joints according to the filtered rotation angle information of each joint to form the target posture.
In one embodiment, FIG. 12 shows the flowchart of FIG. 11 with acquisition of the three-dimensional skeleton model added.
Step 1011: Input the image data into the preset first neural network model to obtain the two-dimensional skeleton data of the target posture, where the first neural network model is used to recognize the target posture of the target object in the image data and generate corresponding two-dimensional skeleton data based on the target posture.
Step 1012: Input the two-dimensional skeleton data into the preset second neural network model to obtain the three-dimensional skeleton model corresponding to the target posture.
Sub-step 102-1: Normalize the three-dimensional skeleton model.
Step 102: Map the three-dimensional skeleton model into the joint space of the robot, and acquire the posture features of the robot and the skeleton posture features of the three-dimensional skeleton model.
Sub-step 1031: Divide the posture features of the robot into a plurality of mapping parts.
Sub-step 1032: Perform the following processing for each mapping part: transform the position of the vector in which each joint of the mapping part lies to the position of the vector in which the corresponding key point of the skeleton posture features lies, and acquire the rotation angle of the joint as the rotation angle information of the joint.
Step 104-1: Take the rotation angle information of each joint as the motion data of the current frame.
Step 104-2: Perform collision detection on the motion data of the current frame.
Step 104-3: If no collision is detected, determine to execute the step of controlling the motion of the corresponding joints according to the rotation angle information of each joint to form the target posture.
Step 1041: Filter the rotation angle information of each joint.
Step 1042: Control the rotation of the corresponding joints according to the filtered rotation angle information of each joint to form the target posture.
In one embodiment, FIG. 13 shows the flowchart of FIG. 12 with acquisition of the image data added.
Step 1011-1: Collect video data of the target object.
Step 1011-2: Acquire the image data from the video data.
Step 1011: Input the image data into the preset first neural network model to obtain the two-dimensional skeleton data of the target posture, where the first neural network model is used to recognize the target posture of the target object in the image data and generate corresponding two-dimensional skeleton data based on the target posture.
Step 1012: Input the two-dimensional skeleton data into the preset second neural network model to obtain the three-dimensional skeleton model corresponding to the target posture.
Sub-step 102-1: Normalize the three-dimensional skeleton model.
Step 102: Map the three-dimensional skeleton model into the joint space of the robot, and acquire the posture features of the robot and the skeleton posture features of the three-dimensional skeleton model.
Sub-step 1031: Divide the posture features of the robot into a plurality of mapping parts.
Sub-step 1032: Perform the following processing for each mapping part: transform the position of the vector in which each joint of the mapping part lies to the position of the vector in which the corresponding key point of the skeleton posture features lies, and acquire the rotation angle of the joint as the rotation angle information of the joint.
Step 104-1: Take the rotation angle information of each joint as the motion data of the current frame.
Step 104-2: Perform collision detection on the motion data of the current frame.
Step 104-3: If no collision is detected, determine to execute the step of controlling the motion of the corresponding joints according to the rotation angle information of each joint to form the target posture.
Step 1041: Filter the rotation angle information of each joint.
Step 1042: Control the rotation of the corresponding joints according to the filtered rotation angle information of each joint to form the target posture.
FIG. 14 is a schematic diagram of a device for controlling the posture of a robot according to another embodiment of the present disclosure. The device includes: a model acquisition module 201, a posture acquisition module 202, a rotation angle acquisition module 203 and a motion control module 204, where:
The model acquisition module 201 is configured to acquire, according to the target posture of the target object in the image data, a three-dimensional skeleton model corresponding to the target posture.
Specifically, the image data may be an image captured by the robot, or image data extracted from video data, for example, one frame of the video data is taken as the image data. The target object may be a human body, an animal, or the like. The robot in this embodiment is a multi-joint robot, such as a humanoid robot or an animal-shaped robot. The target object in the image data can be recognized, and the action posture of the target object can be extracted as the target posture. Two-dimensional skeleton data corresponding to the target posture is acquired, and a three-dimensional skeleton model of the target object can be constructed based on the two-dimensional skeleton data.
The model acquisition module is specifically configured to: input the image data into a preset first neural network model to obtain the two-dimensional skeleton data of the target posture, where the first neural network model is used to recognize the target posture of the target object in the image data and generate corresponding two-dimensional skeleton data based on the target posture; and input the two-dimensional skeleton data into a preset second neural network model to obtain the three-dimensional skeleton model corresponding to the target posture.
The posture acquisition module 202 is configured to map the three-dimensional skeleton model into the joint space of the robot, and acquire the posture features of the robot and the skeleton posture features of the three-dimensional skeleton model.
Specifically, the space composed of all joint vectors is the joint space corresponding to the robot's arm; the joint space corresponding to the trunk of the robot may also be taken as the joint space of the robot. When the spatial angle between adjacent parts of the three-dimensional skeleton model is the same as the spatial angle between the corresponding adjacent parts of the robot, the robot presents the same target posture as the three-dimensional skeleton model. However, the three-dimensional skeleton model and the robot are not in the same coordinate system. In this example, the three-dimensional skeleton model is mapped into the joint space of the robot, and the posture features of the robot are acquired; the posture features include the positions of the vectors in which the joints of the robot lie, and the skeleton posture features of the three-dimensional skeleton model include the vectors formed by the key points of the skeleton. For example, the magnitude of the vector in which joint 11 shown in FIG. 2 lies may be the coordinate difference between joint point 11 and joint point 12, and its direction may point from joint point 11 to joint point 12.
The rotation angle acquisition module 203 is configured to adjust the posture features of the robot to target positions matching the skeleton posture features, and acquire the rotation angle information of each joint of the robot.
Specifically, the posture features of the robot include the vector in which each joint of the robot lies; each joint's vector has a corresponding target vector, namely the vector in which the corresponding key point of the skeleton posture features lies. The joint's vector is transformed so that it coincides with the target vector, and the angle rotated during the transformation is acquired as the rotation angle information of the joint.
The rotation angle acquisition module is specifically configured to: divide the posture features of the robot into a plurality of mapping parts, and perform the following processing for each mapping part: transform the position of the vector in which each joint of the mapping part lies to the position of the vector in which the corresponding key point of the skeleton posture features lies, and acquire the rotation angle of the joint as the rotation angle information of the joint. The plurality of mapping parts include: the trunk, the limbs, the head and the waist.
The motion control module 204 is configured to control the motion of the corresponding joints of the robot according to the rotation angle information of each joint to form the target posture.
Specifically, the posture features of the robot include the vector in which each joint of the robot lies; each joint's vector has a corresponding target vector, namely the vector in which the corresponding key point of the skeleton posture features lies. The joint's vector is transformed so that it coincides with the target vector, and the angle rotated during the transformation is acquired as the rotation angle information of the joint.
The motion control module is specifically configured to: filter the rotation angle information of each joint, and control the rotation of the corresponding joints according to the filtered rotation angle information of each joint to form the target posture.
In addition, in this embodiment, the device for controlling the posture of the robot further includes:
a motion data module configured to take the rotation angle information of each joint as the motion data of the current frame;
a collision detection module configured to perform collision detection on the motion data of the current frame; and
a motion control sub-module configured to, if no collision is detected, determine to execute the step of controlling the motion of the corresponding joints according to the rotation angle information of each joint to form the target posture.
The device further includes:
a normalization module configured to normalize the three-dimensional skeleton model.
The device further includes:
a video collection module configured to collect video data of the target object; and
an image acquisition module configured to acquire the image data from the video data.
Another embodiment of the present disclosure further provides a robot, whose structural block diagram is shown in FIG. 15, including: at least one processor 301; and a memory 302 communicatively connected to the at least one processor 301, where the memory 302 stores instructions executable by the at least one processor 301, and the instructions are executed by the at least one processor 301 to enable the at least one processor 301 to execute the above method for controlling the posture of a robot.
The memory 302 and the processor 301 are connected by a bus. The bus may include any number of interconnected buses and bridges, and links the various circuits of the one or more processors 301 and the memory 302 together. The bus may also link various other circuits such as peripherals, voltage regulators and power management circuits, which are well known in the art and are therefore not described further herein. A bus interface provides an interface between the bus and a transceiver. The transceiver may be a single element or multiple elements, such as multiple receivers and transmitters, and provides a unit for communicating with various other devices over a transmission medium. Data processed by the processor 301 is transmitted over a wireless medium through an antenna; the antenna also receives data and passes it to the processor 301.
The processor 301 is responsible for managing the bus and general processing, and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management and other control functions. The memory 302 may be used to store data used by the processor 301 in performing operations.
In addition, an embodiment of the present disclosure further provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the above method for controlling the posture of a robot.
In addition, an embodiment of the present disclosure further provides a computer program including instructions that, when run on a computer, cause the computer to execute the above method for controlling the posture of a robot.
Those skilled in the art can understand that all or part of the steps of the methods in the above embodiments can be completed by instructing relevant hardware through a program. The program is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
Those of ordinary skill in the art can understand that the above embodiments are specific examples for implementing the present invention, and that in practical applications various changes in form and detail can be made without departing from the spirit and scope of the present invention.

Claims (19)

  1. A method for controlling the posture of a robot, comprising:
    acquiring, according to a target posture of a target object in image data, a three-dimensional skeleton model corresponding to the target posture;
    mapping the three-dimensional skeleton model into a joint space of the robot, and acquiring posture features of the robot and skeleton posture features of the three-dimensional skeleton model;
    adjusting the posture features of the robot to target positions matching the skeleton posture features, and acquiring rotation angle information of each joint of the robot; and
    controlling the motion of the corresponding joints of the robot according to the rotation angle information of each joint to form the target posture.
  2. The method for controlling the posture of a robot according to claim 1, wherein obtaining, according to the target posture of the target object in the image data, the three-dimensional skeleton model corresponding to the target posture comprises:
    inputting the image data into a preset first neural network model to obtain two-dimensional skeleton data of the target posture, wherein the first neural network model is used to identify the target posture of the target object in the image data and to generate the corresponding two-dimensional skeleton data based on the target posture; and
    inputting the two-dimensional skeleton data into a preset second neural network model to obtain the three-dimensional skeleton model corresponding to the target posture.
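The two-stage pipeline recited above is commonly realized as a 2D keypoint detector followed by a 2D-to-3D lifting network. A minimal sketch, assuming two pre-trained models behind a simple predict() interface; the KeypointModel class, the keypoint count, and the shapes are illustrative assumptions, not the applicant's models:

```python
import numpy as np

class KeypointModel:
    """Placeholder for a pre-trained network exposing a predict() method."""
    def __init__(self, out_dim: int):
        self.out_dim = out_dim
    def predict(self, x: np.ndarray) -> np.ndarray:
        return np.zeros((17, self.out_dim))  # K = 17 keypoints, assumed

pose_2d_net = KeypointModel(out_dim=2)   # first network: image -> 2D skeleton data
lifting_net = KeypointModel(out_dim=3)   # second network: 2D skeleton -> 3D skeleton

def image_to_3d_skeleton(image: np.ndarray) -> np.ndarray:
    keypoints_2d = pose_2d_net.predict(image)        # (K, 2) pixel coordinates
    skeleton_3d = lifting_net.predict(keypoints_2d)  # (K, 3) camera-frame coordinates
    return skeleton_3d
```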
  3. The method for controlling the posture of a robot according to claim 1, wherein adjusting the posture feature of the robot to the target position matching the skeleton posture feature and obtaining the rotation angle information of each joint of the robot comprises:
    dividing the posture feature of the robot into a plurality of mapping parts; and
    performing the following processing for each mapping part: transforming the position of the vector of each joint in the mapping part into the position of the vector of the corresponding key point in the skeleton posture feature, and obtaining the rotation angle of the joint as the rotation angle information of the joint.
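One way to picture the per-joint transformation recited above is as vector alignment: rotate each joint's link vector onto the corresponding key-point vector of the skeleton and record the angle. A minimal sketch, assuming both vectors are already expressed in a common frame:

```python
import numpy as np

def rotation_to_align(link_vec: np.ndarray, target_vec: np.ndarray):
    """Angle (rad) and axis rotating link_vec onto target_vec (both 3-vectors)."""
    a = link_vec / np.linalg.norm(link_vec)
    b = target_vec / np.linalg.norm(target_vec)
    cos_theta = np.clip(np.dot(a, b), -1.0, 1.0)
    angle = np.arccos(cos_theta)            # rotation angle for this joint
    axis = np.cross(a, b)
    n = np.linalg.norm(axis)
    axis = axis / n if n > 1e-9 else np.array([1.0, 0.0, 0.0])  # parallel case
    return angle, axis

# Example: upper-arm link pointing down, key-point vector pointing forward.
angle, axis = rotation_to_align(np.array([0.0, 0.0, -1.0]), np.array([1.0, 0.0, 0.0]))
print(np.degrees(angle))  # 90.0
```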
  4. The method for controlling the posture of a robot according to claim 3, wherein the plurality of mapping parts comprise: a torso, limbs, a head, and a waist.
  5. The method for controlling the posture of a robot according to claim 1, further comprising:
    taking the rotation angle information of each joint as motion data of a current frame;
    performing collision detection on the motion data of the current frame; and
    if no collision is detected, determining to execute the step of controlling, according to the rotation angle information of each joint, the movement of the corresponding joint to form the target posture.
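The per-frame gating recited above can be sketched as a check that runs before any joint command is issued. The sketch below substitutes a deliberately crude proxy (a joint-limit check) for the collision detector, which the claim leaves unspecified:

```python
def violates_limits(frame_angles: dict, limits: dict) -> bool:
    """Crude stand-in for collision detection: any angle outside its limit fails."""
    return any(not (limits[j][0] <= a <= limits[j][1]) for j, a in frame_angles.items())

def step(frame_angles: dict, limits: dict, send_command) -> bool:
    # Treat the current joint rotation angles as this frame's motion data.
    if violates_limits(frame_angles, limits):
        return False                 # collision (proxy) detected: skip execution
    for joint, angle in frame_angles.items():
        send_command(joint, angle)   # only runs when the frame passes the check
    return True

ok = step({"elbow": 1.2}, {"elbow": (-2.0, 2.0)}, lambda j, a: None)
print(ok)  # True
```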
  6. The method for controlling the posture of a robot according to any one of claims 1 to 5, wherein controlling, according to the rotation angle information of each joint, the movement of the corresponding joint of the robot to form the target posture comprises:
    filtering the rotation angle information of each joint; and
    controlling rotation of the corresponding joint according to the filtered rotation angle information of each joint to form the target posture.
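The claim does not name a particular filter; a common minimal choice for smoothing per-joint angle streams is a first-order exponential moving average, as sketched here:

```python
class AngleFilter:
    """First-order low-pass (EMA) over a stream of joint angles, one state per joint."""
    def __init__(self, alpha: float = 0.3):
        self.alpha = alpha
        self.state: dict[str, float] = {}

    def __call__(self, raw_angles: dict) -> dict:
        for joint, angle in raw_angles.items():
            prev = self.state.get(joint, angle)
            self.state[joint] = (1 - self.alpha) * prev + self.alpha * angle
        return dict(self.state)

f = AngleFilter(alpha=0.5)
print(f({"elbow": 1.0}))  # {'elbow': 1.0}  first frame passes through
print(f({"elbow": 0.0}))  # {'elbow': 0.5}  jitter is damped before commanding joints
```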
  7. The method for controlling the posture of a robot according to any one of claims 1 to 5, wherein before mapping the three-dimensional skeleton model into the joint space of the robot and obtaining the posture feature of the robot and the skeleton posture feature of the three-dimensional skeleton model, the method further comprises:
    normalizing the three-dimensional skeleton model.
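The normalization is likewise unspecified; one plausible reading, sketched below, translates the skeleton so that its root key point sits at the origin and scales by a reference bone length, so that subjects of different sizes map consistently onto the robot. The root and reference indices are illustrative assumptions:

```python
import numpy as np

def normalize_skeleton(kpts: np.ndarray, root: int = 0, ref_a: int = 0, ref_b: int = 1):
    """Center (K, 3) keypoints on the root and scale by a reference bone length."""
    centered = kpts - kpts[root]                        # root-relative coordinates
    scale = np.linalg.norm(centered[ref_b] - centered[ref_a])
    return centered / scale if scale > 1e-9 else centered

skel = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.5], [0.2, 0.0, 1.5]])
print(normalize_skeleton(skel))  # root at origin, reference bone length 1
```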
  8. The method for controlling the posture of a robot according to claim 2, wherein before inputting the image data into the preset first neural network model to obtain the two-dimensional skeleton data of the target posture, the method further comprises:
    collecting video data of the target object; and
    obtaining the image data from the video data.
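Obtaining image data from video data is routinely done by sampling frames from a capture stream; a sketch using OpenCV follows (the file path is a placeholder, and the sampling stride is an illustrative choice):

```python
import cv2  # pip install opencv-python

def frames_from_video(path: str, stride: int = 5):
    """Yield every `stride`-th frame from a video file as a BGR image array."""
    cap = cv2.VideoCapture(path)   # `path` is a placeholder, e.g. a camera recording
    idx = 0
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break              # end of stream
            if idx % stride == 0:
                yield frame
            idx += 1
    finally:
        cap.release()

# for image in frames_from_video("target_object.mp4"):
#     ...feed `image` to the posture pipeline...
```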
  9. A device for controlling the posture of a robot, comprising:
    a model obtaining module, configured to obtain, according to a target posture of a target object in image data, a three-dimensional skeleton model corresponding to the target posture;
    a posture obtaining module, configured to map the three-dimensional skeleton model into a joint space of the robot, and to obtain a posture feature of the robot and a skeleton posture feature of the three-dimensional skeleton model;
    a rotation angle obtaining module, configured to adjust the posture feature of the robot to a target position matching the skeleton posture feature, and to obtain rotation angle information of each joint of the robot; and
    a motion control module, configured to control, according to the rotation angle information of each joint, movement of the corresponding joint of the robot to form the target posture.
  10. The device according to claim 9, wherein the model obtaining module is specifically configured to:
    input the image data into a preset first neural network model to obtain two-dimensional skeleton data of the target posture, wherein the first neural network model is used to identify the target posture of the target object in the image data and to generate the corresponding two-dimensional skeleton data based on the target posture; and
    input the two-dimensional skeleton data into a preset second neural network model to obtain the three-dimensional skeleton model corresponding to the target posture.
  11. The device according to claim 9, wherein the rotation angle obtaining module is specifically configured to:
    divide the posture feature of the robot into a plurality of mapping parts; and
    perform the following processing for each mapping part: transform the position of the vector of each joint in the mapping part into the position of the vector of the corresponding key point in the skeleton posture feature, and obtain the rotation angle of the joint as the rotation angle information of the joint.
  12. The device according to claim 11, wherein the plurality of mapping parts comprise: a torso, limbs, a head, and a waist.
  13. The device according to claim 9, further comprising:
    a motion data module, configured to take the rotation angle information of each joint as motion data of a current frame;
    a collision detection module, configured to perform collision detection on the motion data of the current frame; and
    a motion control sub-module, configured to, if no collision is detected, determine to execute the step of controlling, according to the rotation angle information of each joint, the movement of the corresponding joint to form the target posture.
  14. The device according to any one of claims 9 to 13, wherein the motion control module is specifically configured to:
    filter the rotation angle information of each joint; and
    control rotation of the corresponding joint according to the filtered rotation angle information of each joint to form the target posture.
  15. The device according to any one of claims 9 to 13, further comprising:
    a normalization module, configured to normalize the three-dimensional skeleton model.
  16. The device according to claim 10, further comprising:
    a video collection module, configured to collect video data of the target object; and
    an image obtaining module, configured to obtain the image data from the video data.
  17. A robot, comprising:
    at least one processor; and
    a memory communicatively connected to the at least one processor, wherein
    the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the method for controlling the posture of a robot according to any one of claims 1 to 8.
  18. A computer-readable storage medium storing a computer program, wherein when the computer program is executed by a processor, the method for controlling the posture of a robot according to any one of claims 1 to 8 is implemented.
  19. A computer program comprising instructions that, when run on a computer, cause the computer to execute the method for controlling the posture of a robot according to any one of claims 1 to 8.
PCT/CN2021/142242 2021-04-25 2021-12-28 Robot posture control method, robot, storage medium and computer program WO2022227664A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110450270.2A CN113146634A (en) 2021-04-25 2021-04-25 Robot attitude control method, robot and storage medium
CN202110450270.2 2021-04-25

Publications (1)

Publication Number Publication Date
WO2022227664A1 (en)

Family

ID=76870561

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/142242 WO2022227664A1 (en) 2021-04-25 2021-12-28 Robot posture control method, robot, storage medium and computer program

Country Status (2)

Country Link
CN (1) CN113146634A (en)
WO (1) WO2022227664A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113146634A (en) * 2021-04-25 2021-07-23 达闼机器人有限公司 Robot attitude control method, robot and storage medium
CN113569828B (en) * 2021-09-27 2022-03-08 南昌嘉研科技有限公司 Human body posture recognition method, system, storage medium and equipment
TWI831531B (en) * 2022-12-20 2024-02-01 國立成功大學 Method and system of simulating human posture by using robot

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104615983A (en) * 2015-01-28 2015-05-13 中国科学院自动化研究所 Behavior identification method based on recurrent neural network and human skeleton movement sequences
CN104952105A (en) * 2014-03-27 2015-09-30 联想(北京)有限公司 Method and apparatus for estimating three-dimensional human body posture
JP2018008347A (en) * 2016-07-13 2018-01-18 東芝機械株式会社 Robot system and operation region display method
CN107953331A (en) * 2017-10-17 2018-04-24 华南理工大学 A kind of human body attitude mapping method applied to anthropomorphic robot action imitation
CN111002289A (en) * 2019-11-25 2020-04-14 华中科技大学 Robot online teaching method and device, terminal device and storage medium
CN112164091A (en) * 2020-08-25 2021-01-01 南京邮电大学 Mobile device human body pose estimation method based on three-dimensional skeleton extraction
CN113146634A (en) * 2021-04-25 2021-07-23 达闼机器人有限公司 Robot attitude control method, robot and storage medium

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9031691B2 (en) * 2013-03-04 2015-05-12 Disney Enterprises, Inc. Systemic derivation of simplified dynamics for humanoid robots
US9342888B2 (en) * 2014-02-08 2016-05-17 Honda Motor Co., Ltd. System and method for mapping, localization and pose correction of a vehicle based on images
CN105252532B (en) * 2015-11-24 2017-07-04 山东大学 The method of the flexible gesture stability of motion capture robot collaboration
CN106625658A (en) * 2016-11-09 2017-05-10 华南理工大学 Method for controlling anthropomorphic robot to imitate motions of upper part of human body in real time
CN108098780A (en) * 2016-11-24 2018-06-01 广州映博智能科技有限公司 A kind of new robot apery kinematic system
CN108053469A (en) * 2017-12-26 2018-05-18 清华大学 Complicated dynamic scene human body three-dimensional method for reconstructing and device under various visual angles camera
CN109145788B (en) * 2018-08-08 2020-07-07 北京云舶在线科技有限公司 Video-based attitude data capturing method and system
KR20210015211A (en) * 2019-08-01 2021-02-10 엘지전자 주식회사 Method of cloud slam in realtime and robot and cloud server implementing thereof
CN110480634B (en) * 2019-08-08 2020-10-02 北京科技大学 Arm guide motion control method for mechanical arm motion control
CN111208783B (en) * 2019-12-30 2021-09-17 深圳市优必选科技股份有限公司 Action simulation method, device, terminal and computer storage medium
CN111300421A (en) * 2020-03-17 2020-06-19 北京理工大学 Mapping method applied to simulation of actions of both hands of humanoid robot
CN112580582B (en) * 2020-12-28 2023-03-24 达闼机器人股份有限公司 Action learning method, action learning device, action learning medium and electronic equipment
CN112975993B (en) * 2021-02-22 2022-11-25 北京国腾联信科技有限公司 Robot teaching method, device, storage medium and equipment

Also Published As

Publication number Publication date
CN113146634A (en) 2021-07-23

Legal Events

Code 121: Ep: the epo has been informed by wipo that ep was designated in this application (ref document number: 21939123; country of ref document: EP; kind code of ref document: A1)
Code NENP: non-entry into the national phase (ref country code: DE)
Code 122: Ep: pct application non-entry in european phase (ref document number: 21939123; country of ref document: EP; kind code of ref document: A1)