CN113146634A - Robot attitude control method, robot and storage medium - Google Patents

Robot attitude control method, robot and storage medium

Info

Publication number
CN113146634A
Authority
CN
China
Prior art keywords
robot
joint
posture
target
pose
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110450270.2A
Other languages
Chinese (zh)
Inventor
彭飞 (Peng Fei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cloudminds Robotics Co Ltd
Original Assignee
Cloudminds Robotics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cloudminds Robotics Co Ltd filed Critical Cloudminds Robotics Co Ltd
Priority to CN202110450270.2A
Publication of CN113146634A
Priority to PCT/CN2021/142242 (WO2022227664A1)

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1602Programme controls characterised by the control system, structure, architecture

Abstract

The embodiment of the invention relates to the field of robots, and discloses a robot posture control method, a robot and a storage medium. The method comprises the following steps: acquiring a three-dimensional skeleton model corresponding to a target posture according to the target posture of a target object in image data; mapping the three-dimensional skeleton model into the joint space of the robot to obtain posture features of the robot and skeleton posture features of the three-dimensional skeleton model; adjusting the posture features of the robot to positions matching the skeleton posture features, and acquiring rotation angle information of each joint of the robot; and controlling the corresponding joints of the robot to move according to the rotation angle information of each joint to form the target posture. With the method in this embodiment, a target posture matching the target object can be generated quickly, the robot's repertoire of actions is enriched, and the cost of teaching the robot new action postures is reduced.

Description

Robot attitude control method, robot and storage medium
Technical Field
The embodiment of the invention relates to the field of robots, in particular to a robot posture control method, a robot and a storage medium.
Background
With the development of science and technology, a large number of intelligent robot motion control systems have been designed, manufactured and applied in social production and daily life to improve productivity and quality of life. A robot action is generated from an action sequence, for example holding, lifting or waving a hand. Currently, the action sequence of a robot can be tuned manually according to the target positions of the robot's motion; alternatively, human posture sensing can be achieved with motion capture equipment, for example by detecting and tracking human skeletons using sensor data or 2D video processing, and the robot's action sequence is then set according to the perceived human posture.
However, manual debugging requires designing the motion of each joint one by one to generate the robot's action sequence and form its posture; because each joint must be debugged individually and the motion of one joint affects the other joints, debugging takes a long time and the process is cumbersome. Capturing human motion with motion capture devices requires extra equipment, so generating the robot's action sequence is inflexible and costly.
Disclosure of Invention
An object of embodiments of the present invention is to provide a robot posture control method, a robot and a storage medium, which can quickly generate a target posture matching a target object, enrich the robot's actions, and reduce the cost of teaching the robot action postures.
In order to solve the above technical problem, in a first aspect, an embodiment of the present application provides a robot posture control method, including: acquiring a three-dimensional skeleton model corresponding to a target posture according to the target posture of a target object in image data; mapping the three-dimensional skeleton model into the joint space of the robot to obtain posture features of the robot and skeleton posture features of the three-dimensional skeleton model; adjusting the posture features of the robot to positions matching the skeleton posture features, and acquiring rotation angle information of each joint of the robot; and controlling the corresponding joints of the robot to move according to the rotation angle information of each joint to form the target posture.
In a second aspect, an embodiment of the present application further provides a robot, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions, when executed by the at least one processor, enable the at least one processor to perform the robot posture control method described above.
In a third aspect, the present application further provides a computer-readable storage medium, which stores a computer program, and the computer program, when executed by a processor, implements the above-mentioned robot posture control method.
In the embodiment of the application, the limbs of the robot move in three-dimensional space. A three-dimensional skeleton model corresponding to the target posture is obtained from the target posture of the target object in the image data, and the model is mapped into the joint space of the robot so that the two are in the same coordinate system; the rotation angle information of each joint of the robot is then obtained, and the posture formed by controlling the robot to move based on this information corresponds to the target posture. The robot's movement is thus guided by the target object in the image data: no extra motion capture sensor is needed to upload the target object's motion in real time, which reduces the cost of controlling the robot to move according to the target posture; and since each joint need not be debugged manually, the complicated steps of generating the target posture are simplified and the robot learns the action of the target posture more quickly.
Drawings
One or more embodiments are illustrated by way of example in the accompanying drawings, in which like reference numerals refer to similar elements; the figures are not to scale unless otherwise specified.
FIG. 1 is a flowchart of a robot posture control method in an embodiment of the application;
FIG. 2 is a schematic illustration of a two-dimensional human skeleton provided in one embodiment;
FIG. 3 is a flowchart of obtaining rotation angle information of joints provided in one embodiment;
FIG. 4 is a flowchart of collision detection on the rotation angle information of joints in one embodiment;
FIG. 5 is a flowchart of filtering the rotation angle information of joints in one embodiment;
FIG. 6 is a flowchart of normalization of the three-dimensional skeleton model in one embodiment;
FIG. 7 is a flowchart of obtaining the three-dimensional skeleton model in one embodiment;
FIG. 8 is a flowchart of acquiring image data in one embodiment;
FIG. 9 is a flowchart of obtaining rotation angle information of joints and performing collision detection on it in one embodiment;
FIG. 10 is a flowchart of the robot posture control method after the filtering process is added to FIG. 9;
FIG. 11 is a flowchart of the robot posture control method after normalization of the three-dimensional skeleton model is added to FIG. 10;
FIG. 12 is a flowchart of the robot posture control method after acquisition of the three-dimensional skeleton model is added to FIG. 11;
FIG. 13 is a flowchart of the robot posture control method after acquisition of image data is added to FIG. 12;
FIG. 14 is a schematic structural diagram of a robot in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, embodiments of the present invention will be described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will appreciate that numerous technical details are set forth in the embodiments in order to provide a better understanding of the present application; however, the technical solutions claimed in the present application can be implemented without these technical details and with various changes and modifications based on the following embodiments.
The following division into embodiments is for convenience of description and should not limit the specific implementations of the present invention; the embodiments may be combined with and refer to each other where there is no contradiction.
The flow of the robot posture control method in an embodiment of the present application is shown in FIG. 1:
Step 101: acquire a three-dimensional skeleton model corresponding to the target posture according to the target posture of the target object in the image data.
Specifically, the image data may be an image taken by the robot or image data extracted from video data; for example, one frame of the video data may serve as the image data. The target object may be a human body, an animal, or the like. The robot in the present embodiment is a multi-joint robot, for example a humanoid robot or an animal-shaped robot.
By recognizing the target object in the image data, the action posture of the target object can be extracted as the target posture. Two-dimensional skeleton data corresponding to the target posture is then acquired, and a three-dimensional skeleton model of the target object is constructed based on the two-dimensional skeleton data.
In this example, the multi-joint robot is exemplified by a humanoid robot, and the target object is a human body. Generally, the skeleton of a human body is described by 17 three-dimensional joint points, as in the two-dimensional skeleton shown in FIG. 2, where reference numbers 0 to 16 respectively denote: 0 pelvis center, 1 right hip joint, 2 right knee joint, 3 right ankle joint, 4 left hip joint, 5 left knee joint, 6 left ankle joint, 7 spine midpoint, 8 cervical spine midpoint, 9 head, 10 crown of the head, 11 left shoulder joint, 12 left elbow joint, 13 left wrist joint, 14 right shoulder joint, 15 right elbow joint and 16 right wrist joint.
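For concreteness, the 17-joint indexing above can be written down as a lookup table. The sketch below (Python, with assumed English joint names) is illustrative only and is not part of the claimed method:

```python
# Human3.6M-style 17-joint skeleton used in this example.
# Index-to-name mapping follows the list above (names are assumed
# English equivalents of the labels in FIG. 2).
H36M_JOINTS = [
    "pelvis_center",       # 0
    "right_hip",           # 1
    "right_knee",          # 2
    "right_ankle",         # 3
    "left_hip",            # 4
    "left_knee",           # 5
    "left_ankle",          # 6
    "spine_mid",           # 7
    "cervical_spine_mid",  # 8
    "head",                # 9
    "head_top",            # 10 (crown of the head)
    "left_shoulder",       # 11
    "left_elbow",          # 12
    "left_wrist",          # 13
    "right_shoulder",      # 14
    "right_elbow",         # 15
    "right_wrist",         # 16
]
```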
Step 102: and mapping the three-dimensional skeleton model into a joint space of the robot to obtain the posture characteristics of the robot and the skeleton posture characteristics of the three-dimensional skeleton model.
Specifically, for a manipulator arm with n degrees of freedom, all of its link positions can be determined by a set of n joint variables. This set of joint variables is called an n × 1 joint vector, and the space made up of all such joint vectors is called the joint space. The joint space of the robot may be determined from its arm: for example, if the arm has 7 degrees of freedom, a 7 × 1 joint vector can be constructed based on it, and the space formed by all such joint vectors is the joint space corresponding to that arm. The joint space corresponding to the trunk of the robot may be taken as the joint space of the robot.
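As a small illustration of the joint-space notion, the sketch below represents a 7-degree-of-freedom arm as a 7 × 1 joint vector; the joint limits are assumed values for illustration, not data from any particular robot:

```python
import numpy as np

# A 7-DOF arm is fully described by a 7x1 joint vector q; the joint
# space is the set of all such vectors within the joint limits.
q_min = np.deg2rad([-170, -120, -170, -120, -170, -120, -175]).reshape(7, 1)
q_max = -q_min

def in_joint_space(q: np.ndarray) -> bool:
    """True if the 7x1 joint vector q lies within the joint limits."""
    return bool(np.all(q >= q_min) and np.all(q <= q_max))

# Example: the zero configuration is inside the joint space.
assert in_joint_space(np.zeros((7, 1)))
```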
When the spatial angle between adjacent parts in the three-dimensional skeleton model equals the angle between the corresponding adjacent parts of the robot, the robot presents the same target posture as the three-dimensional skeleton model. However, the three-dimensional skeleton model and the robot are not in the same coordinate system. In this example, the three-dimensional skeleton model is therefore mapped into the joint space of the robot, and posture features of the robot and skeleton posture features of the three-dimensional skeleton model are obtained: the posture features include the vectors of the robot's joints, and the skeleton posture features include the vectors formed by the key points in the skeleton. For example, the magnitude of the vector at joint 11 in FIG. 2 may be the coordinate difference between joint 11 and joint 12, and its direction may point from joint 11 to joint 12.
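A minimal sketch of the bone-vector construction just described, assuming the keypoints are stored as a (17, 3) coordinate array:

```python
import numpy as np

def bone_vector(joints_3d: np.ndarray, parent: int, child: int) -> np.ndarray:
    """Bone vector from `parent` to `child` (e.g. joint 11 -> joint 12):
    its magnitude is the coordinate difference between the two joints,
    and it points from the parent joint toward the child joint."""
    return joints_3d[child] - joints_3d[parent]

# Example: the left-upper-arm vector at joint 11 (shoulder -> elbow).
# left_upper_arm = bone_vector(joints_3d, 11, 12)
```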
Step 103: and adjusting the posture characteristic of the robot to a position matched with the skeleton posture characteristic, and acquiring the corner information of each joint of the robot.
Specifically, the posture features of the robot comprise the vectors of the robot's joints; the vector of each joint has a corresponding target vector, which is the vector of the corresponding key point in the skeleton posture features. The vector of the joint is transformed so that it coincides with the target vector, and the angle rotated through during the transformation is taken as the rotation angle information of the joint.
Step 104: and controlling the motion of the corresponding joint of the robot according to the corner information of each joint to form a target posture.
And controlling each joint of the robot to move according to the corresponding joint angle, so that the target posture can be formed.
In the embodiment of the application, the limbs of the robot move in three-dimensional space. A three-dimensional skeleton model corresponding to the target posture is obtained from the target posture of the target object in the image data, and the model is mapped into the joint space of the robot so that the two are in the same coordinate system; the rotation angle information of each joint of the robot is then obtained, and the posture formed by controlling the robot to move based on this information corresponds to the target posture. The robot's movement is thus guided by the target object in the image data: no extra motion capture sensor is needed to upload the target object's motion in real time, which reduces the cost of controlling the robot to move according to the target posture; and since each joint need not be debugged manually, the complicated steps of generating the target posture are simplified and the robot learns the action of the target posture more quickly.
In one embodiment, step 103 may be carried out through the following sub-steps, the flow of which is shown in FIG. 3.
Substep 1031: divide the posture features of the robot into a plurality of mapping parts.
Specifically, the posture features of the robot may be divided into several mapping parts according to the joint positions. Because the joint angles need to be acquired, an RPY system can be constructed based on the positions of the joint points. The divided mapping parts comprise: the trunk, the four limbs, the head and the waist. The trunk comprises: 0 pelvis center, 11 left shoulder joint and 14 right shoulder joint. The four limbs comprise the left upper limb, the left lower limb, the right upper limb and the right lower limb. The left upper limb is: 11 left shoulder joint, 12 left elbow joint and 13 left wrist joint. The left lower limb is: 4 left hip joint, 5 left knee joint and 6 left ankle joint. The right upper limb is: 14 right shoulder joint, 15 right elbow joint and 16 right wrist joint. The right lower limb is: 1 right hip joint, 2 right knee joint and 3 right ankle joint. The head comprises: 8 cervical spine midpoint, 9 head and 10 crown of the head. The waist comprises: 0 pelvis center, 1 right hip joint and 4 left hip joint.
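This grouping might be stored as a simple lookup table; the sketch below is one possible representation, with the keypoint indices taken from FIG. 2:

```python
# Keypoint indices (FIG. 2) grouped into the mapping parts above.
MAPPING_PARTS = {
    "trunk":            [0, 11, 14],   # pelvis center, L/R shoulder
    "left_upper_limb":  [11, 12, 13],  # shoulder, elbow, wrist
    "left_lower_limb":  [4, 5, 6],     # hip, knee, ankle
    "right_upper_limb": [14, 15, 16],
    "right_lower_limb": [1, 2, 3],
    "head":             [8, 9, 10],    # cervical mid, head, crown
    "waist":            [0, 1, 4],     # pelvis center, R/L hip
}
```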
Dividing the model into mapping parts makes it convenient to acquire the Euler angles of all joints in each part, which simplifies calculating the joint rotation angles, reduces the time needed to compute the rotation angle information, and increases the calculation speed.
Substep 1032: the following processing is performed for each mapping part: and transforming the position of the vector of each joint in the mapping part into the position of the vector of the corresponding key point in the skeleton posture characteristic, and acquiring the rotation angle of the joint as the corner information of the joint.
By subtracting adjacent key points, the line segments of the three-dimensional skeleton model can be converted into vectors, giving the vectors of the key points in the skeleton posture features. The rotation angle of each joint is then obtained by Euclidean geometric calculation. The calculation is described below, taking the left upper limb as an example and referring to FIG. 2:
A coordinate system is constructed by taking the normal vector of the plane formed by joints 0, 11 and 14 as the z-axis and the vector from joint 0 to joint 8 as the x-axis; the positive x-direction is the vertical direction when the person stands, the positive y-direction is the person's right, and the positive z-direction is the person's front. All other joints are converted into this coordinate system. A person's limbs move relative to the trunk, and a robot imitating the human body likewise moves relative to its trunk, so the robot's trunk can serve as the base coordinate system for the entire calculation.
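A sketch of this base-frame construction, under the stated reading (z from the normal of the 0-11-14 plane, x from the 0 → 8 vector, y completing a right-handed frame); the re-orthogonalization step is an added numerical precaution, not something the description specifies:

```python
import numpy as np

def trunk_frame(j: np.ndarray) -> np.ndarray:
    """Base coordinate system from keypoints (FIG. 2 indices).
    Returns a 3x3 matrix whose columns are the x, y, z axes."""
    z = np.cross(j[11] - j[0], j[14] - j[0])  # normal of plane (0, 11, 14)
    z /= np.linalg.norm(z)
    x = j[8] - j[0]                           # pelvis center -> cervical mid
    x = x - np.dot(x, z) * z                  # keep x perpendicular to z
    x /= np.linalg.norm(x)
    y = np.cross(z, x)                        # completes right-handed frame
    return np.column_stack([x, y, z])

# Express all other keypoints in this frame:
# j_local = (j - j[0]) @ trunk_frame(j)
```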
First, calculate the joint angles for the key-point vector "11 → 12". In this example, the spatial vector v corresponding to the key point in the skeleton posture features is known, and the rotation angles of the two left-shoulder joints a and b are to be obtained; the shoulder of the robot (the position of key point 11 in FIG. 2) corresponds to two joints, which respectively control the front-back swing and the left-right swing of the upper arm. The x-axis can be made to coincide with v by rotating it a radians around the vector (cos 20°, sin 20°, 0) and then b radians around the z-axis. The shoulder joint of this robot does not swing back and forth about the horizontal axis but about a direction inclined 20° from it, which is why the x-axis rotates around (cos 20°, sin 20°, 0). The calculation typically yields multiple sets of solutions, and the one within the joint constraints is selected as the final output.
After the angles of joints a and b corresponding to the key-point vector "11 → 12" are obtained, the rotation angle information of joints c and d corresponding to the key-point vector "12 → 13" is calculated. The rotation angle information of joint d is the angle between the "11 → 12" direction and the "12 → 13" vector. The rotation angle information of joint c is obtained by reverse calculation: both vectors are rotated -b radians around the z-axis and then -a radians around the vector (cos 20°, sin 20°, 0), so that v coincides with the x-axis and the "12 → 13" key-point vector reaches a new position; the vector at this new position is projected onto the y-z plane, and its angle within that plane gives the rotation angle information of joint c.
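The description solves the shoulder angles geometrically; an equivalent numerical sketch is given below, which searches for the angles a and b such that rotating the x-axis by a about (cos 20°, sin 20°, 0) and then by b about the z-axis reproduces the observed bone direction v. Selection among multiple solutions by joint constraints is omitted here:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as R

TILT_AXIS = np.array([np.cos(np.deg2rad(20)), np.sin(np.deg2rad(20)), 0.0])
X_AXIS = np.array([1.0, 0.0, 0.0])

def shoulder_angles(v: np.ndarray):
    """Solve a, b so that Rz(b) * R_tilt(a) maps the x-axis onto v."""
    v = v / np.linalg.norm(v)

    def residual(ab):
        a, b = ab
        rot = (R.from_rotvec(b * np.array([0.0, 0.0, 1.0]))
               * R.from_rotvec(a * TILT_AXIS))  # tilt-axis first, then z
        return rot.apply(X_AXIS) - v

    sol = least_squares(residual, x0=[0.1, 0.1])
    return float(sol.x[0]), float(sol.x[1])
```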
The rotation angles of the three joints of the head are likewise obtained by Euclidean geometric calculation.
The joint structure of the head is a very typical roll-pitch-yaw system with three mutually perpendicular axes. A rotation matrix is constructed from key points 8, 9 and 10, and the Euler angles are recovered from it. The "9 → 10" vector, taken as the z-axis vector, forms the third column of the rotation matrix; the cross product of the "8 → 9" vector with the "9 → 10" vector gives the y-vector, forming the second column; and the cross product of y with z gives the x-vector, forming the first column. The manner of solving Euler angles from a rotation matrix is not described in detail here.
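A sketch of this rotation-matrix construction; the Euler axis order passed to the conversion is an assumption, since the description does not fix it:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def head_euler(j: np.ndarray) -> np.ndarray:
    """Rotation matrix from keypoints 8, 9, 10 and its Euler angles."""
    z = j[10] - j[9]                           # "9 -> 10" as the z-axis
    z /= np.linalg.norm(z)
    y = np.cross(j[9] - j[8], j[10] - j[9])    # "8 -> 9" cross "9 -> 10"
    y /= np.linalg.norm(y)
    x = np.cross(y, z)                         # y cross z gives the x-axis
    rot = np.column_stack([x, y, z])           # columns 1..3 of the matrix
    return R.from_matrix(rot).as_euler("xyz")  # axis order is an assumption
```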
The structure of the waist is similar to that of the head, a typical RPY system with three mutually perpendicular axes; the calculation of the rotation angle information of the waist joints is similar to that of the head joints and is not repeated.
The skeleton data structure of the human body in this example adopts the Human3.6M skeleton model.
In this embodiment, the three-dimensional skeleton model is divided into several mapping parts and each part is mapped separately. Each mapping part has a simpler structure than the whole model and is easier to map into the joint space, which reduces the mapping difficulty. With each limb and the joint space located in the same space, calculating the rotation angle information of the joints is easier, and the Euler-angle calculation itself is simple and fast.
In one embodiment, the robot posture control method may further include the following steps, the flow of which is shown in FIG. 4:
Step 104-1: take the rotation angle information of each joint as the motion data of the current frame.
Step 104-2: perform collision detection on the motion data of the current frame.
Because the recognized three-dimensional skeleton model can take many different postures, collision detection may be performed on the rotation angle information of each joint to avoid self-interference between the robot's limbs when the joint information is applied to the robot body. A collision detection model can be preset to simulate the motion of each of the robot's limbs. For example, MoveIt can be used for collision detection: the robot's URDF file (a robot model description format) is imported into MoveIt, the rotation angle information of each joint is input, the motion of each limb is simulated according to that information, and it is judged whether the limbs collide. If no collision is detected, the current rotation angle information of each joint is legal and can be applied to the robot, as sketched below.
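As one possible realization of this check, MoveIt exposes a state-validity service that reports whether a given joint configuration is collision-free against the loaded robot model. The sketch below assumes a ROS 1 setup with MoveIt already running; the planning-group name is a placeholder:

```python
import rospy
from moveit_msgs.msg import RobotState
from moveit_msgs.srv import GetStateValidity, GetStateValidityRequest
from sensor_msgs.msg import JointState

def frame_is_collision_free(joint_names, joint_angles, group="whole_body"):
    """Ask MoveIt whether this frame's joint angles self-collide.
    `group` is a placeholder planning-group name from the robot's SRDF."""
    rospy.wait_for_service("/check_state_validity")
    check = rospy.ServiceProxy("/check_state_validity", GetStateValidity)

    state = RobotState()
    state.joint_state = JointState(name=joint_names, position=joint_angles)

    req = GetStateValidityRequest(robot_state=state, group_name=group)
    return check(req).valid  # True: legal, safe to send to the robot
```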
Step 104-3: and if the collision is not detected, determining to execute the step of controlling the motion of the corresponding joint according to the corner information of each joint to form the target posture.
And the joint rotation angle is sent into the robot to be executed, so that the effect of tracking the posture of the target object by the actual robot is achieved.
In the embodiment, collision detection is performed on the corner information of each joint, and if the probability of collision is smaller than the preset threshold value, the corner information of each shutdown is applied to the robot, so that the accuracy of the posture of the robot is ensured, and the safety of the robot is also ensured.
In one embodiment, step 104 may also be carried out through the following sub-steps, the flow of which is shown in FIG. 5:
Step 1041: filter the rotation angle information of each joint.
The rotation angle information of each joint is obtained from the recognized skeleton, and noise and jitter inevitably exist in that skeleton, so rotation angle information based on a noisy skeleton is also noisy. The rotation angle information of the joints is filtered by sliding-window filtering to remove spike noise from the joint motion.
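A minimal sliding-window (moving-average) filter over the per-frame joint angles might look as follows; the window length is an assumed tuning parameter:

```python
from collections import deque

import numpy as np

class SlidingWindowFilter:
    """Moving-average filter removing spike noise from joint angles."""

    def __init__(self, window: int = 5):
        self.buf = deque(maxlen=window)

    def __call__(self, angles) -> np.ndarray:
        """Push one frame of joint angles, return the smoothed frame."""
        self.buf.append(np.asarray(angles, dtype=float))
        return np.mean(self.buf, axis=0)

# f = SlidingWindowFilter(window=5)
# smoothed = f(raw_joint_angles)   # call once per frame
```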
Step 1042: and controlling the corresponding joints to rotate according to the processed corner information of each joint to form the target posture.
In the embodiment, the corner information of each joint is filtered, noise and jitter are eliminated, and the accuracy of the corner information is improved.
In one embodiment, the following sub-step may be performed before step 102; the flow is shown in FIG. 6:
Substep 102-1: normalize the three-dimensional skeleton model.
Specifically, after the three-dimensional skeleton model of the target posture is acquired, its bone lengths are uneven and do not directly suit the robot's limbs. To facilitate mapping the three-dimensional skeleton model into the joint space, it may be normalized: each bone is converted from a line segment into a vector by subtracting adjacent key points, and each vector is then normalized to unit length, so that the resulting skeleton consists of unit vectors.
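A sketch of this normalization, assuming a Human3.6M-style kinematic tree (the parent table below is an assumption consistent with FIG. 2):

```python
import numpy as np

# Parent of each keypoint (FIG. 2 numbering); -1 marks the root.
PARENT = [-1, 0, 1, 2, 0, 4, 5, 0, 7, 8, 9, 8, 11, 12, 8, 14, 15]

def normalize_skeleton(joints_3d: np.ndarray) -> np.ndarray:
    """Convert every bone (line segment) into a unit vector."""
    bones = np.zeros_like(joints_3d)
    for child, parent in enumerate(PARENT):
        if parent < 0:
            continue                      # the root has no bone
        v = joints_3d[child] - joints_3d[parent]
        bones[child] = v / np.linalg.norm(v)
    return bones
```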
In this embodiment, normalizing the three-dimensional skeleton model resolves the mismatch between the limb sizes of different target objects and the limb sizes of the robot, and facilitates the subsequent mapping of the model into the joint space.
In one embodiment, step 101 may be carried out through the following sub-steps, the flow of which is shown in FIG. 7:
Step 1011: input the image data into a preset first neural network model to obtain two-dimensional skeleton data of the target posture; the first neural network model is used to recognize the target posture of the target object in the image data and to generate the corresponding two-dimensional skeleton data based on it.
Skeleton extraction is completed mainly by two neural networks. The first neural network receives the picture input and completes the recognition of the target object in the image data and the extraction of the two-dimensional skeleton data; the two-dimensional skeleton data includes the joint points of the target object and their positions.
Step 1012: input the two-dimensional skeleton data into a preset second neural network model to obtain the three-dimensional skeleton model corresponding to the target posture.
The input of the second neural network is the two-dimensional skeleton nodes output by the first neural network, and its output is the three-dimensional skeleton model.
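The two-stage pipeline can be sketched as follows. The patent does not name specific network architectures, so the two models appear here as opaque callables supplied by the implementation:

```python
from typing import Callable

import numpy as np

# Assumed shapes: 17 keypoints, following the skeleton in FIG. 2.
Pose2DNet = Callable[[np.ndarray], np.ndarray]  # image -> (17, 2) keypoints
Lift3DNet = Callable[[np.ndarray], np.ndarray]  # (17, 2) -> (17, 3) skeleton

def image_to_skeleton(image: np.ndarray,
                      net_2d: Pose2DNet,
                      net_3d: Lift3DNet) -> np.ndarray:
    """Two-stage skeleton extraction: step 1011, then step 1012."""
    keypoints_2d = net_2d(image)   # first network: 2D skeleton data
    return net_3d(keypoints_2d)    # second network: 3D skeleton model
```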
In this embodiment, because neural network models have strong learning capability, obtaining the three-dimensional skeleton model with two neural network models improves its accuracy and applicability.
In one embodiment, before step 1011 is executed, the following steps may also be performed; the flow is shown in FIG. 8:
Step 1011-1: acquire video data of the target object.
Specifically, video data of the target object may be captured in real time, or video data of the target object captured in real time by other equipment may be acquired.
Step 1011-2: acquire image data from the video data.
Image data may be extracted from the video data, and steps 101 to 104 are performed on the image data so that the robot presents the target posture of the target object. If the robot processes continuous image data, it can track and learn the posture of the target object.
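A sketch of frame extraction with OpenCV; the stride parameter is an assumption for matching the robot's control rate:

```python
import cv2

def frames_from_video(source, stride=1):
    """Yield frames from a camera index or video file as image data;
    `stride` subsamples the stream."""
    cap = cv2.VideoCapture(source)
    i = 0
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if i % stride == 0:
                yield frame          # one frame serves as the image data
            i += 1
    finally:
        cap.release()

# for image in frames_from_video("target.mp4", stride=5):
#     ...  # run steps 101 to 104 on each image
```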
The above embodiments can be combined with and refer to one another. The following embodiments are examples of such combinations, but the combinations are not limited thereto; the embodiments may be combined into new embodiments wherever there is no contradiction.
In one embodiment, a flowchart of obtaining the rotation angle information of the joints and performing collision detection on it is shown in FIG. 9.
Step 101: acquire a three-dimensional skeleton model corresponding to the target posture according to the target posture of the target object in the image data.
Step 102: map the three-dimensional skeleton model into the joint space of the robot to obtain the posture features of the robot and the skeleton posture features of the three-dimensional skeleton model.
Substep 1031: divide the posture features of the robot into a plurality of mapping parts.
Substep 1032: perform the following processing for each mapping part: transform the position of the vector of each joint in the mapping part to the position of the vector of the corresponding key point in the skeleton posture features, and take the angle the joint rotates through as the rotation angle information of the joint.
Step 104-1: take the rotation angle information of each joint as the motion data of the current frame.
Step 104-2: perform collision detection on the motion data of the current frame.
Step 104-3: if no collision is detected, determine to execute the step of controlling the corresponding joints to move according to the rotation angle information of each joint to form the target posture.
Step 104: control the corresponding joints of the robot to move according to the rotation angle information of each joint to form the target posture.
In one embodiment, a flowchart after the filtering process is added to FIG. 9 is shown in FIG. 10.
Step 101: acquire a three-dimensional skeleton model corresponding to the target posture according to the target posture of the target object in the image data.
Step 102: map the three-dimensional skeleton model into the joint space of the robot to obtain the posture features of the robot and the skeleton posture features of the three-dimensional skeleton model.
Substep 1031: divide the posture features of the robot into a plurality of mapping parts.
Substep 1032: perform the following processing for each mapping part: transform the position of the vector of each joint in the mapping part to the position of the vector of the corresponding key point in the skeleton posture features, and take the angle the joint rotates through as the rotation angle information of the joint.
Step 104-1: take the rotation angle information of each joint as the motion data of the current frame.
Step 104-2: perform collision detection on the motion data of the current frame.
Step 104-3: if no collision is detected, determine to execute the step of controlling the corresponding joints to move according to the rotation angle information of each joint to form the target posture.
Step 1041: filter the rotation angle information of each joint.
Step 1042: control the corresponding joints to rotate according to the processed rotation angle information of each joint to form the target posture.
In one embodiment, FIG. 11 shows a flowchart after normalization of the three-dimensional skeleton model is added to FIG. 10.
Step 101: acquire a three-dimensional skeleton model corresponding to the target posture according to the target posture of the target object in the image data.
Substep 102-1: normalize the three-dimensional skeleton model.
Step 102: map the three-dimensional skeleton model into the joint space of the robot to obtain the posture features of the robot and the skeleton posture features of the three-dimensional skeleton model.
Substep 1031: divide the posture features of the robot into a plurality of mapping parts.
Substep 1032: perform the following processing for each mapping part: transform the position of the vector of each joint in the mapping part to the position of the vector of the corresponding key point in the skeleton posture features, and take the angle the joint rotates through as the rotation angle information of the joint.
Step 104-1: take the rotation angle information of each joint as the motion data of the current frame.
Step 104-2: perform collision detection on the motion data of the current frame.
Step 104-3: if no collision is detected, determine to execute the step of controlling the corresponding joints to move according to the rotation angle information of each joint to form the target posture.
Step 1041: filter the rotation angle information of each joint.
Step 1042: control the corresponding joints to rotate according to the processed rotation angle information of each joint to form the target posture.
In one embodiment, a flowchart after acquisition of the three-dimensional skeleton model is added to FIG. 11 is shown in FIG. 12.
Step 1011: input the image data into a preset first neural network model to obtain two-dimensional skeleton data of the target posture; the first neural network model is used to recognize the target posture of the target object in the image data and to generate the corresponding two-dimensional skeleton data based on it.
Step 1012: input the two-dimensional skeleton data into a preset second neural network model to obtain the three-dimensional skeleton model corresponding to the target posture.
Substep 102-1: normalize the three-dimensional skeleton model.
Step 102: map the three-dimensional skeleton model into the joint space of the robot to obtain the posture features of the robot and the skeleton posture features of the three-dimensional skeleton model.
Substep 1031: divide the posture features of the robot into a plurality of mapping parts.
Substep 1032: perform the following processing for each mapping part: transform the position of the vector of each joint in the mapping part to the position of the vector of the corresponding key point in the skeleton posture features, and take the angle the joint rotates through as the rotation angle information of the joint.
Step 104-1: take the rotation angle information of each joint as the motion data of the current frame.
Step 104-2: perform collision detection on the motion data of the current frame.
Step 104-3: if no collision is detected, determine to execute the step of controlling the corresponding joints to move according to the rotation angle information of each joint to form the target posture.
Step 1041: filter the rotation angle information of each joint.
Step 1042: control the corresponding joints to rotate according to the processed rotation angle information of each joint to form the target posture.
In one embodiment, a flowchart after the step of acquiring image data is added to FIG. 12 is shown in FIG. 13.
Step 1011-1: acquire video data of the target object.
Step 1011-2: acquire image data from the video data.
Step 1011: input the image data into a preset first neural network model to obtain two-dimensional skeleton data of the target posture; the first neural network model is used to recognize the target posture of the target object in the image data and to generate the corresponding two-dimensional skeleton data based on it.
Step 1012: input the two-dimensional skeleton data into a preset second neural network model to obtain the three-dimensional skeleton model corresponding to the target posture.
Substep 102-1: normalize the three-dimensional skeleton model.
Step 102: map the three-dimensional skeleton model into the joint space of the robot to obtain the posture features of the robot and the skeleton posture features of the three-dimensional skeleton model.
Substep 1031: divide the posture features of the robot into a plurality of mapping parts.
Substep 1032: perform the following processing for each mapping part: transform the position of the vector of each joint in the mapping part to the position of the vector of the corresponding key point in the skeleton posture features, and take the angle the joint rotates through as the rotation angle information of the joint.
Step 104-1: take the rotation angle information of each joint as the motion data of the current frame.
Step 104-2: perform collision detection on the motion data of the current frame.
Step 104-3: if no collision is detected, determine to execute the step of controlling the corresponding joints to move according to the rotation angle information of each joint to form the target posture.
Step 1041: filter the rotation angle information of each joint.
Step 1042: control the corresponding joints to rotate according to the processed rotation angle information of each joint to form the target posture.
In an embodiment of the present application, a structural block diagram of the robot is shown in FIG. 14. The robot comprises: at least one processor 201; and a memory 202 communicatively coupled to the at least one processor 201; the memory 202 stores instructions executable by the at least one processor 201, and the instructions are executed by the at least one processor 201 so that the at least one processor 201 can execute the robot posture control method.
The memory 202 and the processor 201 are connected by a bus, which may include any number of interconnected buses and bridges that link one or more of the various circuits of the processor 201 and the memory 202. The bus may also link various other circuits such as peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further herein. A bus interface provides an interface between the bus and the transceiver. The transceiver may be one element or a plurality of elements, such as a plurality of receivers and transmitters, providing a means for communicating with various other apparatus over a transmission medium. The data processed by the processor is transmitted over a wireless medium via an antenna, which further receives the data and transmits the data to the processor.
The processor 201 is responsible for managing the bus and general processing and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. And the memory 202 may be used to store data used by the processor in performing operations.
The embodiment of the present application further provides a computer-readable storage medium, which stores a computer program, and the computer program is executed by a processor to implement the robot posture control method.
Those skilled in the art can understand that all or part of the steps in the methods of the foregoing embodiments may be implemented by a program instructing related hardware. The program is stored in a storage medium and includes several instructions to enable a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples of carrying out the invention, and that in practice various changes in form and details may be made therein without departing from the spirit and scope of the invention.

Claims (10)

1. A robot posture control method is characterized by comprising the following steps:
acquiring a three-dimensional skeleton model corresponding to a target posture according to the target posture of a target object in image data;
mapping the three-dimensional skeleton model into the joint space of the robot to obtain posture features of the robot and skeleton posture features of the three-dimensional skeleton model;
adjusting the posture features of the robot to positions matching the skeleton posture features, and acquiring rotation angle information of each joint of the robot;
and controlling the corresponding joints of the robot to move according to the rotation angle information of each joint to form the target posture.
2. The robot posture control method according to claim 1, wherein the adjusting the posture features of the robot to positions matching the skeleton posture features and acquiring rotation angle information of each joint of the robot comprises:
dividing the posture features of the robot into a plurality of mapping parts;
and performing the following processing for each mapping part: transforming the position of the vector of each joint in the mapping part to the position of the vector of the corresponding key point in the skeleton posture features, and taking the angle the joint rotates through as the rotation angle information of the joint.
3. The robot posture control method according to claim 1, further comprising:
taking the rotation angle information of each joint as the motion data of the current frame;
performing collision detection on the motion data of the current frame;
and if no collision is detected, determining to execute the step of controlling the corresponding joints to move according to the rotation angle information of each joint to form the target posture.
4. The method according to any one of claims 1 to 3, wherein the controlling the corresponding joints of the robot to move according to the rotation angle information of each joint to form the target posture comprises:
filtering the rotation angle information of each joint;
and controlling the corresponding joints to rotate according to the processed rotation angle information of each joint to form the target posture.
5. The robot posture control method according to claim 1 or 2, wherein before the mapping of the three-dimensional skeleton model into the joint space of the robot to obtain the posture features of the robot and the skeleton posture features of the three-dimensional skeleton model, the method further comprises:
normalizing the three-dimensional skeleton model.
6. The method according to claim 5, wherein the acquiring a three-dimensional skeleton model corresponding to the target posture according to the target posture of the target object in the image data comprises:
inputting the image data into a preset first neural network model to obtain two-dimensional skeleton data of the target posture, wherein the first neural network model is used for identifying the target posture of the target object in the image data and generating corresponding two-dimensional skeleton data based on the target posture;
and inputting the two-dimensional skeleton data into a preset second neural network model to obtain a three-dimensional skeleton model corresponding to the target posture.
7. The robot posture control method according to claim 2, wherein the mapping parts comprise: the trunk, the four limbs, the head and the waist.
8. The robot posture control method according to claim 6, wherein before the inputting of the image data into the preset first neural network model to obtain the two-dimensional skeleton data of the target posture, the method further comprises:
collecting video data of the target object;
and acquiring the image data from the video data.
9. A robot, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the robot posture control method according to any one of claims 1 to 8.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the robot posture control method according to any one of claims 1 to 8.
CN202110450270.2A 2021-04-25 2021-04-25 Robot attitude control method, robot and storage medium Pending CN113146634A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110450270.2A CN113146634A (en) 2021-04-25 2021-04-25 Robot attitude control method, robot and storage medium
PCT/CN2021/142242 WO2022227664A1 (en) 2021-04-25 2021-12-28 Robot posture control method, robot, storage medium and computer program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110450270.2A CN113146634A (en) 2021-04-25 2021-04-25 Robot attitude control method, robot and storage medium

Publications (1)

Publication Number Publication Date
CN113146634A 2021-07-23

Family

ID=76870561

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110450270.2A Pending CN113146634A (en) 2021-04-25 2021-04-25 Robot attitude control method, robot and storage medium

Country Status (2)

Country Link
CN (1) CN113146634A (en)
WO (1) WO2022227664A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113569828A (en) * 2021-09-27 2021-10-29 南昌嘉研科技有限公司 Human body posture recognition method, system, storage medium and equipment
WO2022227664A1 (en) * 2021-04-25 2022-11-03 达闼机器人股份有限公司 Robot posture control method, robot, storage medium and computer program

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140249670A1 (en) * 2013-03-04 2014-09-04 Disney Enterprises, Inc., A Delaware Corporation Systemic derivation of simplified dynamics for humanoid robots
JP2015148601A (en) * 2014-02-08 2015-08-20 本田技研工業株式会社 System and method for mapping, localization and pose correction
CN105252532A (en) * 2015-11-24 2016-01-20 山东大学 Method of cooperative flexible attitude control for motion capture robot
CN106625658A (en) * 2016-11-09 2017-05-10 华南理工大学 Method for controlling anthropomorphic robot to imitate motions of upper part of human body in real time
CN107953331A (en) * 2017-10-17 2018-04-24 华南理工大学 A kind of human body attitude mapping method applied to anthropomorphic robot action imitation
CN108053469A (en) * 2017-12-26 2018-05-18 清华大学 Complicated dynamic scene human body three-dimensional method for reconstructing and device under various visual angles camera
CN108098780A (en) * 2016-11-24 2018-06-01 广州映博智能科技有限公司 A kind of new robot apery kinematic system
CN109145788A (en) * 2018-08-08 2019-01-04 北京云舶在线科技有限公司 Attitude data method for catching and system based on video
CN110480634A (en) * 2019-08-08 2019-11-22 北京科技大学 A kind of arm guided-moving control method for manipulator motion control
US20200004266A1 (en) * 2019-08-01 2020-01-02 Lg Electronics Inc. Method of performing cloud slam in real time, and robot and cloud server for implementing the same
CN111208783A (en) * 2019-12-30 2020-05-29 深圳市优必选科技股份有限公司 Action simulation method, device, terminal and computer storage medium
CN111300421A (en) * 2020-03-17 2020-06-19 北京理工大学 Mapping method applied to simulation of actions of both hands of humanoid robot
CN112580582A (en) * 2020-12-28 2021-03-30 达闼机器人有限公司 Action learning method, action learning device, action learning medium and electronic equipment
CN112975993A (en) * 2021-02-22 2021-06-18 北京国腾联信科技有限公司 Robot teaching method, device, storage medium and equipment

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104952105B (en) * 2014-03-27 2018-01-23 联想(北京)有限公司 A kind of 3 D human body Attitude estimation method and apparatus
CN104615983B (en) * 2015-01-28 2018-07-31 中国科学院自动化研究所 Activity recognition method based on recurrent neural network and human skeleton motion sequence
JP2018008347A (en) * 2016-07-13 2018-01-18 東芝機械株式会社 Robot system and operation region display method
CN111002289B (en) * 2019-11-25 2021-08-17 华中科技大学 Robot online teaching method and device, terminal device and storage medium
CN112164091B (en) * 2020-08-25 2022-08-16 南京邮电大学 Mobile device human body pose estimation method based on three-dimensional skeleton extraction
CN113146634A (en) * 2021-04-25 2021-07-23 达闼机器人有限公司 Robot attitude control method, robot and storage medium

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140249670A1 (en) * 2013-03-04 2014-09-04 Disney Enterprises, Inc., A Delaware Corporation Systemic derivation of simplified dynamics for humanoid robots
JP2015148601A (en) * 2014-02-08 2015-08-20 本田技研工業株式会社 System and method for mapping, localization and pose correction
CN105252532A (en) * 2015-11-24 2016-01-20 山东大学 Method of cooperative flexible attitude control for motion capture robot
CN106625658A (en) * 2016-11-09 2017-05-10 华南理工大学 Method for controlling anthropomorphic robot to imitate motions of upper part of human body in real time
CN108098780A (en) * 2016-11-24 2018-06-01 广州映博智能科技有限公司 A kind of new robot apery kinematic system
CN107953331A (en) * 2017-10-17 2018-04-24 华南理工大学 A kind of human body attitude mapping method applied to anthropomorphic robot action imitation
CN108053469A (en) * 2017-12-26 2018-05-18 清华大学 Complicated dynamic scene human body three-dimensional method for reconstructing and device under various visual angles camera
CN109145788A (en) * 2018-08-08 2019-01-04 北京云舶在线科技有限公司 Attitude data method for catching and system based on video
US20200004266A1 (en) * 2019-08-01 2020-01-02 Lg Electronics Inc. Method of performing cloud slam in real time, and robot and cloud server for implementing the same
CN110480634A (en) * 2019-08-08 2019-11-22 北京科技大学 A kind of arm guided-moving control method for manipulator motion control
CN111208783A (en) * 2019-12-30 2020-05-29 深圳市优必选科技股份有限公司 Action simulation method, device, terminal and computer storage medium
CN111300421A (en) * 2020-03-17 2020-06-19 北京理工大学 Mapping method applied to simulation of actions of both hands of humanoid robot
CN112580582A (en) * 2020-12-28 2021-03-30 达闼机器人有限公司 Action learning method, action learning device, action learning medium and electronic equipment
CN112975993A (en) * 2021-02-22 2021-06-18 北京国腾联信科技有限公司 Robot teaching method, device, storage medium and equipment

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022227664A1 (en) * 2021-04-25 2022-11-03 达闼机器人股份有限公司 Robot posture control method, robot, storage medium and computer program
CN113569828A (en) * 2021-09-27 2021-10-29 南昌嘉研科技有限公司 Human body posture recognition method, system, storage medium and equipment
CN113569828B (en) * 2021-09-27 2022-03-08 南昌嘉研科技有限公司 Human body posture recognition method, system, storage medium and equipment

Also Published As

Publication number Publication date
WO2022227664A1 (en) 2022-11-03

Similar Documents

Publication Publication Date Title
CN111460875B (en) Image processing method and apparatus, image device, and storage medium
CN111402290B (en) Action restoration method and device based on skeleton key points
CN109636831B (en) Method for estimating three-dimensional human body posture and hand information
CN105252532B (en) The method of the flexible gesture stability of motion capture robot collaboration
CN110480634B (en) Arm guide motion control method for mechanical arm motion control
KR101711736B1 (en) Feature extraction method for motion recognition in image and motion recognition method using skeleton information
CN107833271A (en) A kind of bone reorientation method and device based on Kinect
Tao et al. A novel sensing and data fusion system for 3-D arm motion tracking in telerehabilitation
CN109079794B (en) Robot control and teaching method based on human body posture following
CN113146634A (en) Robot attitude control method, robot and storage medium
US20220092302A1 (en) Skeleton recognition method, computer-readable recording medium storing skeleton recognition program, skeleton recognition system, learning method, computer-readable recording medium storing learning program, and learning device
CN112847336B (en) Action learning method and device, storage medium and electronic equipment
US20220410000A1 (en) Skeleton model updating apparatus, skeleton model updating method, and program
CN113103230A (en) Human-computer interaction system and method based on remote operation of treatment robot
JP2022501732A (en) Image processing methods and devices, image devices and storage media
CN112580582A (en) Action learning method, action learning device, action learning medium and electronic equipment
Wu et al. An unsupervised real-time framework of human pose tracking from range image sequences
CN109531578B (en) Humanoid mechanical arm somatosensory control method and device
CN115205737B (en) Motion real-time counting method and system based on transducer model
CN115890671A (en) SMPL parameter-based multi-geometry human body collision model generation method and system
CN113971835A (en) Control method and device of household appliance, storage medium and terminal device
CN116206026B (en) Track information processing method, track information processing device, computer equipment and readable storage medium
CN111857367B (en) Virtual character control method and device
WO2022138339A1 (en) Training data generation device, machine learning device, and robot joint angle estimation device
Pan et al. A Study of Intelligent Rehabilitation Robot Imitation of Human Behavior Based on Kinect

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 200245 Building 8, No. 207, Zhongqing Road, Minhang District, Shanghai

Applicant after: Dayu robot Co.,Ltd.

Address before: 200245 2nd floor, building 2, no.1508, Kunyang Road, Minhang District, Shanghai

Applicant before: Dalu Robot Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20210723
