CN113492404A - Humanoid robot action mapping control method based on machine vision - Google Patents
- Publication number
- CN113492404A (application number CN202110427319.2A)
- Authority
- CN
- China
- Prior art keywords
- coordinate system
- camera
- robot
- angle
- pixel
- Prior art date
- Legal status
- Granted
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1664—Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
Landscapes
- Engineering & Computer Science (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Manipulator (AREA)
Abstract
The invention provides a humanoid robot action mapping control method based on machine vision, comprising three parts: human body action data acquisition, conversion between visual data and action data, and robot control. The action mapping control method offers good control flexibility and gives a humanoid robot the ability to imitate human behavior in real time without pre-programming; the robot can even be trained by imitating actions. This improves the robot's autonomy and adaptability and equips it to complete complex tasks quickly and accurately.
Description
Technical Field
The invention relates to the technical field of humanoid robot action mapping, in particular to a humanoid robot action mapping control method based on machine vision.
Background
In recent years, with the continuous development of artificial intelligence and robotics, humanoid robots have gained unmatched advantages in human-computer interaction thanks to their human-like appearance, and they play an increasingly important role in teaching, scientific research, industry, and other fields. However, conventional humanoid robots based on pre-programming are difficult to control automatically, costly, and limited in intelligence, which greatly hinders their application and popularization. It is therefore imperative to explore more intelligent and convenient control modes for humanoid robots.
Disclosure of Invention
In view of the above, the present invention provides a machine-vision-based action mapping control method for a humanoid robot, which uses the robot's following of human body motion as its means and treats the human motion, the robot, and the action target as a closed loop to realize human-robot interaction. The specific technical scheme is as follows:
a humanoid robot action mapping control method based on machine vision is characterized by comprising three parts, namely human body action data acquisition, visual data and action data conversion and robot control;
acquiring a depth map and an RGB map through an RGB-D camera, obtaining the pixel positions (u, v) of all joint points of the human body in the RGB map, and extracting the pixel value at the same pixel position in the depth map, which is the depth value d of that point;
calculating coordinates of the key points in a pixel coordinate system, an image coordinate system, a camera coordinate system and a world coordinate system by using a pinhole model;
the conversion between visual data and action data is performed in a human-machine action mapping model: to map human body actions to robot actions, a space vector method is adopted, the three-dimensional coordinates of the human body's joints are used to calculate the angle of each joint, and the joint angles are mapped onto the humanoid robot, realizing human-machine matching and transmitting data for robot control;
matching the three-dimensional coordinates acquired by the depth camera with the human skeleton model by using a space vector method, performing operation between the coordinates according to a joint connection structure of the skeleton model so as to convert the three-dimensional coordinates into space vectors, and obtaining joint angles of all joints through the operation of outer products of the space vectors;
the robot control obtains relevant data of the human arm through the depth camera; the data are analyzed and processed by an upper computer to obtain joint values, the human arm is mapped to the robot arm, and the steering engine value data are transmitted to a control unit, which generates control signals. When a steering engine receives a PWM signal from the controller, the pulse width is compared with the pulse width generated by an internal circuit to obtain two pulses: one is output to the driving circuit, and the other controls the driving direction. The rotation of the motor drives a potentiometer whose position changes the internally generated pulse width until it equals the pulse width of the external signal; when the motor stops rotating, the steering angle is fixed. Each steering engine is thus controlled at the corresponding angle, realizing control of the robot.
Further, the symbols used in the pinhole model are as follows: (u, v) are coordinates in the pixel coordinate system; (x, y) are coordinates in the image coordinate system; (x_c, y_c, z_c) are coordinates in the camera coordinate system; (x_0, y_0, z_0) are coordinates in the world coordinate system. In the pixel coordinate system, the coordinates of a point in the image are expressed in pixels: taking pixel values as elements, a picture is stored in the computer as a matrix, and the pixel coordinates of a point are the row and column indices of the corresponding element in that matrix;
the coordinates of a point in the image coordinate system are expressed in physical length (e.g. m); the correspondence between the pixel coordinate system and the image coordinate system is as follows:

O1 is the intersection of the camera optical axis and the image plane, called the image principal point. Taking the image principal point as the origin of the image coordinate system, its pixel coordinates are (u_0, v_0). If the physical size of each pixel along the x and y axes is dx and dy, the relationship between the two coordinate systems is:

in matrix form:

the camera coordinate system takes the camera optical center O as its origin; the X_c and Y_c axes are parallel to the x and y axes of the image coordinate system, and the Z_c axis is the optical axis of the camera, perpendicular to the image plane. The intersection of the Z_c axis with the image plane is O1, the image principal point, and OO1 is the camera focal length;

the correspondence between the image coordinate system and the camera coordinate system is as follows:
further, the correspondence between the camera coordinate system and the world coordinate system is as follows:
according to the conversion relation between the coordinate systems in the pinhole model, the mapping relation from one point on the image to one point in the space can be established by combining the key point pixel position and the depth value obtained by the depth camera;
according to the kinematics principle, the coordinate transformation matrix expression is as follows:
the mapping relation from one point on the image to one point in the space can be established through the formula.
To solve for the camera intrinsic parameters f_x, f_y, c_x, c_y, camera calibration is required; (c_x, c_y) is the principal point (the projection of the optical center onto the image).

M is called the intrinsic parameter matrix; each value in the matrix depends only on the camera's internal parameters and does not change with the position of the object, and S is the scaling factor of the depth map. A point (u, v, d) and its corresponding spatial coordinates (x, y, z) satisfy:
d=z*s。
further, the yaw angle is calculated as follows:
the yaw degree of freedom corresponds to the motion of the robot's elbow joint. In the plane formed by the elbow, wrist, and shoulder, two vectors ES (elbow to shoulder) and EW (elbow to wrist) are formed, and the angle between them in that plane is the yaw angle.

Using the idea of the space vector method, the component of ES projected onto the XOY plane is taken, and the angle between this component and the positive direction of the Y coordinate axis is solved; this is the yaw joint angle. The calculation process for the yaw angle of the left arm is as follows:

The yaw angle of the right arm is formed and calculated in exactly the same way as that of the left arm, so it can be computed with the same formula.

Similarly, the component of the relevant vector projected onto the XOY plane is taken and its angle with the positive Y axis is solved to obtain the pitch joint angle. The calculation process for the pitch angle of the left arm is as follows:

For the pitch angle of the right arm, the projection onto the XOY plane and the solution of the included angle are exactly the same as for the left arm, so the same algorithm applies;
the roll degree of freedom determines the state of the whole arm rotating about the line connecting the two shoulders; it corresponds to the degree of freedom with which the human shoulder controls the arm to swing back and forth, and to the steering engine for shoulder rotation on the humanoid robot;

since the roll degree of freedom does not correspond one-to-one to a shoulder degree of freedom, the roll angle cannot be obtained directly from the shoulder-connecting vector;

instead, the outer product of the vector ES and the vector EW is taken, and the angle between this outer product and the positive direction of the Y coordinate axis gives the roll degree of freedom. The calculation method is as follows:

when the arm lies on the front side or the rear side of the body at the same inclination, the calculated θ3 is identical, so the two actions cannot be distinguished; in this case the front-back relationship along the z axis (the depth direction) between the left shoulder and the left elbow must be determined and used to distinguish them;

the calculation for the right arm is similar, but the direction obtained by taking the outer product of the two vectors is exactly opposite to that of the left arm; to keep the operations consistent, the calculations of the other joint angles are kept unchanged, and the vector outer product operation for the right arm is modified as follows:
In actual operation, the steering engine value data always fluctuate, so the data need to be filtered using an amplitude-limiting filtering method;

the main idea of the amplitude-limiting filtering method is as follows: based on empirical judgment, determine the maximum deviation allowed between two successive samples (denoted A), and judge each newly detected value: if the difference between the current value and the previous value is at most A, the current value is valid; if it is greater than A, the current value is invalid and is discarded, and the previous value is used in its place. This method can effectively suppress pulse interference caused by accidental factors while preserving the existing precision. Different steering engines have different mapping relations between steering engine values and joint angles; after filtering, the joint angle is converted into a steering engine value according to the characteristics of the humanoid robot's steering engines.
By adopting the technical scheme, the method has the following beneficial effects:
The action mapping control method adopted by the invention offers good control flexibility and gives the humanoid robot the ability to imitate human behavior in real time without pre-programming; the robot can even be trained by imitating actions. This improves the robot's autonomy and adaptability and equips it to complete complex tasks quickly and accurately.
Drawings
FIG. 1 is a schematic diagram of the relationship between four coordinate systems according to the present invention;
FIG. 2 is a diagram illustrating a correspondence between a pixel coordinate system and an image coordinate system;
FIG. 3 is a schematic diagram of the correspondence between the image coordinate system and the camera coordinate system according to the present invention;
FIG. 4 is a schematic diagram of a four coordinate system transformation sequence according to the present invention;
FIG. 5 is a schematic view of the calculation of the yaw degree of freedom of the present invention;
FIG. 6 is a flow chart of the robot motion control of the present invention;
fig. 7 is a schematic view showing states of a yaw degree of freedom, a roll degree of freedom, and a pitch degree of freedom of the robot according to the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
Example 1: as shown in fig. 1-7: a humanoid robot action mapping control method based on machine vision is disclosed, wherein the scheme is mainly completed by three parts, namely human body action data acquisition, conversion of visual data and action data and robot control.
Collecting human body action data:
firstly, a depth map and an RGB map are acquired through an RGB-D camera, and the pixel positions (u, v) of all joint points of the human body in the RGB map are acquired. Then, the pixel value of the same pixel position is extracted from the depth map, which is the depth value D of the point.
The pinhole model is the most widely used model in camera imaging studies. Under this model, the spatial coordinates and image coordinates of an object have a linear relationship, so solving for the camera parameters reduces to solving a system of linear equations. The model involves four coordinate systems: the world coordinate system, the camera coordinate system, the image coordinate system, and the pixel coordinate system. The coordinates of a key point in any of them can be calculated through the relations among the four coordinate systems in the pinhole model.
Notation used in the pinhole model: (u, v) are coordinates in the pixel coordinate system; (x, y) are coordinates in the image coordinate system; (x_c, y_c, z_c) are coordinates in the camera coordinate system; (x_0, y_0, z_0) are coordinates in the world coordinate system. Pixel coordinate system: the coordinates of a point in the image are expressed in pixels; taking pixel values as elements, the image is stored in the computer as a matrix, and the pixel coordinates of a point are the row and column indices of the corresponding element in that matrix.
Image coordinate system: the coordinates of a point in the image are expressed in physical length (e.g. cm); its positional relationship with the pixel coordinate system is shown in fig. 2. The correspondence between the pixel coordinate system and the image coordinate system is as follows:

In fig. 2, O1 is the intersection of the camera optical axis and the image plane, called the image principal point. Taking the image principal point as the origin of the image coordinate system, its pixel coordinates are (u_0, v_0). If the physical size of each pixel along the x and y axes is dx and dy, the relationship between the two coordinate systems is:

In matrix form:
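The original equation images did not survive extraction. Under the symbols defined above, the standard pinhole-model relation between pixel and image coordinates (a reconstruction, not the patent's original figure) is:

```latex
u = \frac{x}{dx} + u_0, \qquad v = \frac{y}{dy} + v_0
```

and, in homogeneous matrix form:

```latex
\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= \begin{bmatrix} \dfrac{1}{dx} & 0 & u_0 \\[2pt] 0 & \dfrac{1}{dy} & v_0 \\[2pt] 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
```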
Camera coordinate system: the camera coordinate system takes the camera optical center O as its origin; the X_c and Y_c axes are parallel to the x and y axes of the image coordinate system, and the Z_c axis is the optical axis of the camera, perpendicular to the image plane. The intersection of the Z_c axis with the image plane is O1, the image principal point, and OO1 is the camera focal length.
The correspondence between the image coordinate system and the camera coordinate system is:
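The equation image is missing here. By similar triangles with focal length f = OO1, the standard perspective-projection relation consistent with the surrounding text (a reconstruction) is:

```latex
x = f\,\frac{x_c}{z_c}, \qquad y = f\,\frac{y_c}{z_c},
\qquad\text{i.e.}\qquad
z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
= \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix}
```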
The correspondence between the camera coordinate system and the world coordinate system is:
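The original formula image is likewise missing. The camera and world coordinate systems are related by a rigid-body transform with rotation matrix R and translation vector t (a standard reconstruction; R and t are not given in the surviving text):

```latex
\begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix}
= \begin{bmatrix} R & t \\ 0^{\mathsf T} & 1 \end{bmatrix}
\begin{bmatrix} x_0 \\ y_0 \\ z_0 \\ 1 \end{bmatrix}
```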
According to the conversion relations between the coordinate systems in the pinhole model, and combining the key point pixel position and the depth value obtained by the depth camera, the mapping from a point on the image to a point in space can be established. The coordinates of the key points in any coordinate system can then be found; the coordinate system conversion order is shown in fig. 4.
According to the kinematics principle, the coordinate transformation matrix expression is:
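Composing the relations above gives the usual projection equation with the intrinsic matrix M referred to below (reconstructed, since the original expression image is unavailable):

```latex
z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= M \begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix},
\qquad
M = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}
```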
The mapping relation from one point on the image to one point in the space can be established through the formula.
To solve for the camera intrinsic parameters f_x, f_y, c_x, c_y, camera calibration is required; (c_x, c_y) is the principal point (the projection of the optical center onto the image).
M is called the intrinsic parameter matrix; each value in the matrix depends only on the camera's internal parameters and does not change with the position of the object, and S is the scaling factor of the depth map. A point (u, v, d) then has the corresponding spatial coordinates (x, y, z):
d=z*s
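The back-projection just described can be sketched in a few lines. This is an illustrative implementation, not the patent's own code, and the intrinsic values in the example are hypothetical:

```python
def pixel_to_camera(u, v, d, fx, fy, cx, cy, s):
    """Back-project a pixel (u, v) with raw depth value d into camera
    coordinates (x, y, z), using intrinsics f_x, f_y, c_x, c_y and the
    depth-map scaling factor s (so that d = z * s)."""
    z = d / s
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return x, y, z

# Hypothetical intrinsics; a depth map stored in millimetres gives s = 1000.
x, y, z = pixel_to_camera(400, 300, 1500, 525.0, 525.0, 319.5, 239.5, 1000.0)
print(x, y, z)  # z = 1.5 m; x and y follow from the pinhole relations
```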
Transformation of visual data and motion data
In the human-machine action mapping model, to map human body actions to robot actions, the joint angles are calculated by the space vector method: the three-dimensional coordinates of the human body's joints are used to obtain the angle of each joint, the angles are mapped onto the humanoid robot to realize human-machine matching, and the data are transmitted for robot control.
Using the space vector method, the three-dimensional coordinates acquired by the depth camera are first matched to a human skeleton model, and operations between the coordinates are performed according to the joint connection structure of the skeleton model, converting the coordinates into space vectors.
The yaw degree of freedom corresponds to the motion of the robot's elbow joint. In the plane formed by the elbow, wrist, and shoulder, two vectors ES (elbow to shoulder) and EW (elbow to wrist) are formed, and the angle between them in that plane is the yaw angle.

Using the idea of the space vector method, the component of ES projected onto the XOY plane is taken, and the angle between this component and the positive direction of the Y coordinate axis is solved; this is the yaw joint angle. The yaw angle of the left arm is calculated as:
The yaw angle of the right arm is formed and calculated in exactly the same way as that of the left arm, so it can be computed with the same formula.
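The projection-and-angle step above can be sketched as follows. This is an illustrative reading of the space-vector method; the axis convention is an assumption, since the original figures and formulas are unavailable:

```python
import math

def yaw_angle(shoulder, elbow):
    """Yaw joint angle: project the elbow-to-shoulder vector ES onto the
    XOY plane and take the angle between that component and the positive
    Y axis (the construction described in the text)."""
    es = tuple(s - e for s, e in zip(shoulder, elbow))
    px, py = es[0], es[1]              # component of ES in the XOY plane
    norm = math.hypot(px, py)
    cos_t = py / norm                  # dot product with the unit +Y axis
    return math.acos(max(-1.0, min(1.0, cos_t)))

print(yaw_angle((0.0, 0.3, 0.0), (0.0, 0.0, 0.0)))  # 0.0: ES lies along +Y
```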
Similarly, the component of the relevant vector projected onto the XOY plane is taken and its angle with the positive direction of the Y coordinate axis is solved; this is the pitch joint angle. The calculation process for the pitch angle of the left arm is:

For the pitch angle of the right arm, the projection onto the XOY plane and the solution of the included angle are exactly the same as for the left arm, so the same algorithm applies.
The roll degree of freedom determines the state of the whole arm rotating about the line connecting the two shoulders; it corresponds to the degree of freedom with which the human shoulder controls the arm to swing back and forth, and to the steering engine for shoulder rotation on the humanoid robot.

Since the roll degree of freedom does not correspond one-to-one to a shoulder degree of freedom, the roll angle cannot be obtained directly from the shoulder-connecting vector.

Instead, the outer product of the vector ES and the vector EW is taken, and the angle between this outer product and the positive direction of the Y coordinate axis gives the roll degree of freedom. The calculation method is as follows:
When the arm lies on the front side or the rear side of the body at the same inclination, the calculated θ3 is identical, so the two actions cannot be distinguished; in this case the front-back relationship along the z axis (the depth direction) between the left shoulder and the left elbow must be determined and used to distinguish them.
The right arm is calculated similarly, but the direction obtained by taking the outer product of the two vectors is exactly opposite to that of the left arm. To keep the operations consistent, the calculations of the other joint angles are kept unchanged, and the vector outer product operation for the right arm is modified to:
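The roll computation via the outer product, including the operand swap for the right arm, can be sketched as follows. This is an illustrative reading of the method; the axis convention and the exact right-arm modification are assumptions, since the original formulas are unavailable:

```python
import math

def roll_angle(shoulder, elbow, wrist, right_arm=False):
    """Roll angle: the angle between the outer (cross) product of ES and
    EW and the positive Y axis. For the right arm the operand order is
    reversed so both arms yield a consistently oriented normal."""
    es = [s - e for s, e in zip(shoulder, elbow)]   # elbow -> shoulder
    ew = [w - e for w, e in zip(wrist, elbow)]      # elbow -> wrist
    a, b = (ew, es) if right_arm else (es, ew)
    n = (a[1] * b[2] - a[2] * b[1],                 # cross product a x b
         a[2] * b[0] - a[0] * b[2],
         a[0] * b[1] - a[1] * b[0])
    norm = math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2)
    cos_t = n[1] / norm                             # angle with the +Y axis
    return math.acos(max(-1.0, min(1.0, cos_t)))
```

With shoulder (0, 0, 1), elbow at the origin, and wrist (1, 0, 0), the left-arm product gives 0 while the reversed right-arm product gives π for the same geometry, which is why the operand swap is needed.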
In actual operation, the calculated steering engine values always fluctuate, so a filtering operation is needed. The scheme adopts an amplitude-limiting filtering method.

The main idea of the amplitude-limiting filtering method is as follows: based on empirical judgment, determine the maximum deviation allowed between two successive samples (denoted A), and judge each newly detected value: if the difference between the current value and the previous value is at most A, the current value is valid; if it is greater than A, the current value is invalid and is discarded, and the previous value is used in its place. This method effectively suppresses pulse interference caused by accidental factors while preserving the existing precision.
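The amplitude-limiting filter described above can be sketched directly. The threshold name A comes from the text; the function name is illustrative:

```python
def limit_filter(samples, max_dev):
    """Amplitude-limiting filter: accept a new sample only if it differs
    from the last accepted value by at most max_dev (the threshold A);
    otherwise discard it and reuse the previous value."""
    out = []
    prev = None
    for v in samples:
        if prev is not None and abs(v - prev) > max_dev:
            v = prev                    # spike rejected, previous value kept
        out.append(v)
        prev = v
    return out

# A single spike (90) in an otherwise smooth stream of servo values:
print(limit_filter([10, 12, 90, 14, 15], max_dev=5))  # [10, 12, 12, 14, 15]
```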
The steering engine values and joint angles of different steering engines have different mapping relations. After filtering, the joint angle is converted into a steering engine value according to the characteristics of the humanoid robot steering engine.
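The angle-to-steering-engine-value conversion depends on the particular servo, as the text notes, so the sketch below is only a hypothetical linear mapping; the angle range and pulse-width values are assumptions, not taken from the patent:

```python
import math

def angle_to_servo(angle, angle_min, angle_max, servo_min, servo_max):
    """Linearly map a joint angle (radians) onto a steering-engine command
    value, clamped to the servo's travel range."""
    t = (angle - angle_min) / (angle_max - angle_min)
    t = max(0.0, min(1.0, t))          # clamp to the servo's travel
    return round(servo_min + t * (servo_max - servo_min))

# e.g. mapping [0, pi] onto a 500-2500 microsecond pulse-width range:
print(angle_to_servo(math.pi / 2, 0.0, math.pi, 500, 2500))  # 1500
```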
Robot control
After the human arm is mapped to the robot arm, the relevant arm data are obtained through the sensor and analyzed by the upper computer to obtain joint values. The joint value data are transmitted to the control unit, which generates control signals. When the steering engine receives a PWM signal from the controller, the pulse width is compared with the pulse width generated by an internal circuit to obtain two pulses: one is output to the driving circuit, and the other controls the driving direction. The rotation of the motor drives a potentiometer whose position changes the internally generated pulse width until it equals the pulse width of the external signal. When the motor stops rotating, the rudder angle is fixed. Each steering engine is controlled at the corresponding angle, realizing control of the robot.
Having thus described the basic principles and principal features of the invention, it will be appreciated by those skilled in the art that the invention is not limited by the embodiments described above, which are given by way of illustration only, but that various changes and modifications may be made therein without departing from the spirit and scope of the invention as defined by the appended claims and their equivalents.
Claims (4)
1. A humanoid robot action mapping control method based on machine vision is characterized by comprising three parts, namely human body action data acquisition, visual data and action data conversion and robot control;
acquiring a depth map and an RGB map through an RGB-D camera, obtaining the pixel positions (u, v) of all joint points of the human body in the RGB map, and extracting the pixel value at the same pixel position in the depth map, which is the depth value d of that point;
calculating coordinates of the key points in a pixel coordinate system, an image coordinate system, a camera coordinate system and a world coordinate system by using a pinhole model;
the conversion between visual data and action data is performed in a human-machine action mapping model: to map human body actions to robot actions, a space vector method is adopted, the three-dimensional coordinates of the human body's joints are used to calculate the angle of each joint, and the joint angles are mapped onto the humanoid robot, realizing human-machine matching and transmitting data for robot control;
matching the three-dimensional coordinates acquired by the depth camera with the human skeleton model by using a space vector method, performing operation between the coordinates according to a joint connection structure of the skeleton model so as to convert the three-dimensional coordinates into space vectors, and obtaining joint angles of all joints through the operation of outer products of the space vectors;
the robot control obtains relevant data of the human arm through the depth camera; an upper computer analyzes and processes the data to obtain joint values, the human arm is mapped to the robot arm, and the steering engine value data are transmitted to a control unit, which generates control signals. When a steering engine receives a PWM signal from the controller, the pulse width is compared with the pulse width generated by an internal circuit to obtain two pulses: one is output to the driving circuit, and the other controls the driving direction. The rotation of the motor drives a potentiometer whose position changes the internally generated pulse width until it equals the pulse width of the external signal; when the motor stops rotating, the rudder angle is fixed. Each steering engine is controlled at the corresponding angle, realizing control of the robot.
2. The humanoid robot action mapping control method based on machine vision as claimed in claim 1, wherein the symbols used in the pinhole model are as follows: (u, v) are coordinates in the pixel coordinate system; (x, y) are coordinates in the image coordinate system; (x_c, y_c, z_c) are coordinates in the camera coordinate system; (x_0, y_0, z_0) are coordinates in the world coordinate system. In the pixel coordinate system, the coordinates of a point in the image are expressed in pixels: taking pixel values as elements, a picture is stored in the computer as a matrix, and the pixel coordinates of a point are the row and column indices of the corresponding element in that matrix;
the coordinates of a point in the image coordinate system are expressed in physical length (e.g. m); the correspondence between the pixel coordinate system and the image coordinate system is as follows:

O1 is the intersection of the camera optical axis and the image plane, called the image principal point. Taking the image principal point as the origin of the image coordinate system, its pixel coordinates are (u_0, v_0). If the physical size of each pixel along the x and y axes is dx and dy, the relationship between the two coordinate systems is:

in matrix form:

the camera coordinate system takes the camera optical center O as its origin; the X_c and Y_c axes are parallel to the x and y axes of the image coordinate system, and the Z_c axis is the optical axis of the camera, perpendicular to the image plane. The intersection of the Z_c axis with the image plane is O1, the image principal point, and OO1 is the camera focal length;

the correspondence between the image coordinate system and the camera coordinate system is as follows:
3. The humanoid robot action mapping control method based on machine vision as claimed in claim 1, wherein the correspondence between the camera coordinate system and the world coordinate system is as follows:
according to the conversion relation between the coordinate systems in the pinhole model, the mapping relation from one point on the image to one point in the space can be established by combining the key point pixel position and the depth value obtained by the depth camera;
according to the kinematics principle, the coordinate transformation matrix expression is as follows:
the mapping relation from one point on the image to one point in the space can be established through the formula.
To solve for the camera intrinsic parameters f_x, f_y, c_x, c_y, camera calibration is required; (c_x, c_y) is the principal point (the projection of the optical center onto the image).

M is called the intrinsic parameter matrix; each value in the matrix depends only on the camera's internal parameters and does not change with the position of the object, and S is the scaling factor of the depth map. A point (u, v, d) and its corresponding spatial coordinates (x, y, z) satisfy:
d=z*s。
4. the method for controlling mapping of actions of a humanoid robot based on machine vision as claimed in claim 1, wherein the yaw angle is calculated as follows:
the yaw freedom degree corresponds to the motion of the elbow joint of the robot, and two vectors are formed in a plane formed by the elbow, the wrist and the shoulderAndthe two form an angle formed in the plane as a yaw angle.
By utilizing the thought of a space vector method, the component of ES projection in an XOY plane is taken, the positive direction included angle between the component and a Y coordinate axis is solved, namely the yaw joint angle, and the calculation process of the yaw angle of the left arm is as follows:
the yaw angle forming principle of the right arm is completely consistent with the calculation principle and the left arm, so that the yaw angle forming principle can also be calculated by using the formula.
By utilizing the thought of a space vector method, the component of ES projection in an XOY plane is taken, the positive direction included angle between the component and a Y coordinate axis is solved, namely the pitch joint angle, and the calculation process of the pitch angle of the left arm is as follows:
for the pitch angle of the right arm, taking the component in the XOY plane and solving the included angle are exactly the same as for the left arm, so the same algorithm can be used;
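The projection-and-angle step described for the yaw and pitch joints can be sketched as follows; the keypoint layout (x, y, z arrays in the camera frame) and the degree output are assumptions for illustration, not taken from the patent:

```python
import numpy as np

def projection_angle_xoy(elbow, shoulder):
    """Angle between the XOY-plane projection of the elbow->shoulder
    vector ES and the positive Y coordinate axis, as described for the
    yaw/pitch joint angles.  Keypoints are (x, y, z) camera-frame
    coordinates (a hypothetical layout)."""
    es = np.asarray(shoulder, float) - np.asarray(elbow, float)
    proj = es.copy()
    proj[2] = 0.0                        # keep only the XOY-plane component
    y_axis = np.array([0.0, 1.0, 0.0])   # positive Y direction
    cos_a = proj @ y_axis / np.linalg.norm(proj)
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
```

For instance, an elbow at the origin and a shoulder at (1, 1, 0) give a 45° angle between the projected vector and the Y axis.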
the roll degree of freedom determines how the whole arm rotates about the straight line through the two shoulders; it corresponds to the human shoulder's freedom to swing the arm forward and backward, and to the steering engine that rotates the shoulder of the humanoid robot;
since the roll degree of freedom does not correspond to a single shoulder vector, it cannot be computed directly from the vector connecting the shoulders;
instead, the roll degree of freedom is calculated with the space vector method: take the outer product of vector ES and vector EW and solve for its angle with the positive direction of the Y coordinate axis. The calculation is as follows:
when the arm lies on the front and the rear side of the body at the same angle, the calculated θ₃ is identical, so the two motions cannot be distinguished; in this case the front-back relationship along the z axis (the depth direction) between the left shoulder and the left elbow must be used to tell them apart;
the calculation for the right arm is similar, but the outer product of the two vectors points in exactly the opposite direction to that of the left arm. To keep the operations consistent, the preceding calculations are kept unchanged, and the vector outer product for the right arm is modified as follows:
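The roll computation and its two side-specific details (swapping the cross-product operand order for the right arm, and using the depth relation between shoulder and elbow to disambiguate front/back poses) can be sketched as below; everything beyond the patent's verbal description, including the keypoint format, is an assumption:

```python
import numpy as np

def roll_angle(shoulder, elbow, wrist, side="left"):
    """Roll joint angle theta_3: angle between the outer (cross) product
    of ES and EW and the positive Y axis.  For the right arm the operand
    order is swapped so the product points the same way as on the left
    arm.  The shoulder/elbow depth comparison used to disambiguate
    front/back poses is returned as a flag."""
    s, e, w = (np.asarray(p, float) for p in (shoulder, elbow, wrist))
    es, ew = s - e, w - e
    n = np.cross(es, ew) if side == "left" else np.cross(ew, es)
    y_axis = np.array([0.0, 1.0, 0.0])
    cos_a = n @ y_axis / np.linalg.norm(n)
    theta3 = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    in_front = bool(e[2] < s[2])   # elbow ahead of shoulder along depth axis
    return theta3, in_front
```

With the operand swap, a mirrored right-arm pose yields the same θ₃ as the left arm, which is the consistency the text asks for.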
In actual operation the steering engine values always fluctuate, so the data must be filtered with an amplitude-limiting filtering method;
the main idea of the amplitude-limiting filtering method is as follows: based on empirical judgment, determine the maximum deviation A allowed between two successive samples, and check every newly detected value. If the difference between the current value and the previous one is less than or equal to A, the current value is valid; if it is greater than A, the current value is invalid, is discarded, and is replaced by the previous value. Under the existing accuracy, this method effectively suppresses pulse interference caused by accidental factors. Different steering engines have different mappings between steering engine value and joint angle; after filtering, the joint angle is converted into a steering engine value according to the characteristics of the humanoid robot's steering engines.
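The amplitude-limiting filter described above can be sketched directly; the list-based interface is an illustrative choice, not from the patent:

```python
def limit_filter(samples, a):
    """Amplitude-limiting filter: if a new sample deviates from the
    previously accepted value by more than the maximum allowed deviation
    A, it is treated as pulse interference, discarded, and replaced by
    the previous value."""
    out, prev = [], None
    for x in samples:
        if prev is not None and abs(x - prev) > a:
            x = prev          # invalid sample: keep the previous value
        out.append(x)
        prev = x
    return out
```

For example, with A = 5 the spike in [10, 11, 50, 12] is rejected and replaced, giving [10, 11, 11, 12].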
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110427319.2A CN113492404B (en) | 2021-04-21 | 2021-04-21 | Humanoid robot action mapping control method based on machine vision |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110427319.2A CN113492404B (en) | 2021-04-21 | 2021-04-21 | Humanoid robot action mapping control method based on machine vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113492404A true CN113492404A (en) | 2021-10-12 |
CN113492404B CN113492404B (en) | 2022-09-30 |
Family
ID=77997599
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110427319.2A Active CN113492404B (en) | 2021-04-21 | 2021-04-21 | Humanoid robot action mapping control method based on machine vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113492404B (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106078752A (en) * | 2016-06-27 | 2016-11-09 | 西安电子科技大学 | Method is imitated in a kind of anthropomorphic robot human body behavior based on Kinect |
CN107901041A (en) * | 2017-12-15 | 2018-04-13 | 中南大学 | A kind of robot vision servo control method based on image blend square |
CN107953331A (en) * | 2017-10-17 | 2018-04-24 | 华南理工大学 | A kind of human body attitude mapping method applied to anthropomorphic robot action imitation |
CN108098761A (en) * | 2016-11-24 | 2018-06-01 | 广州映博智能科技有限公司 | A kind of the arm arm device and method of novel robot crawl target |
CN108098780A (en) * | 2016-11-24 | 2018-06-01 | 广州映博智能科技有限公司 | A kind of new robot apery kinematic system |
CN109693235A (en) * | 2017-10-23 | 2019-04-30 | 中国科学院沈阳自动化研究所 | A kind of Prosthetic Hand vision tracking device and its control method |
CN110202583A (en) * | 2019-07-09 | 2019-09-06 | 华南理工大学 | A kind of Apery manipulator control system and its control method based on deep learning |
CN111152218A (en) * | 2019-12-31 | 2020-05-15 | 浙江大学 | Action mapping method and system of heterogeneous humanoid mechanical arm |
KR20210042644A (en) * | 2019-10-10 | 2021-04-20 | 한국기술교육대학교 산학협력단 | Control system and method for humanoid robot |
Also Published As
Publication number | Publication date |
---|---|
CN113492404B (en) | 2022-09-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109571487B (en) | Robot demonstration learning method based on vision | |
CN111360827B (en) | Visual servo switching control method and system | |
CN112132894A (en) | Mechanical arm real-time tracking method based on binocular vision guidance | |
Li et al. | A mobile robot hand-arm teleoperation system by vision and imu | |
Sun et al. | A review of robot control with visual servoing | |
CN110900581A (en) | Four-degree-of-freedom mechanical arm vision servo control method and device based on RealSense camera | |
JP2022542241A (en) | Systems and methods for augmenting visual output from robotic devices | |
CN110815189B (en) | Robot rapid teaching system and method based on mixed reality | |
CN109940626B (en) | Control method of eyebrow drawing robot system based on robot vision | |
Li et al. | Visual servoing of wheeled mobile robots without desired images | |
Gratal et al. | Visual servoing on unknown objects | |
CN109291048A (en) | A kind of grinding and polishing industrial robot real-time online programing system and method | |
CN115741732A (en) | Interactive path planning and motion control method of massage robot | |
CN113119073A (en) | Mechanical arm system based on computer vision and machine learning and oriented to 3C assembly scene | |
CN114536346A (en) | Mechanical arm accurate path planning method based on man-machine cooperation and visual detection | |
Natale et al. | Learning precise 3d reaching in a humanoid robot | |
Song et al. | On-line stable evolutionary recognition based on unit quaternion representation by motion-feedforward compensation | |
CN109693235B (en) | Human eye vision-imitating tracking device and control method thereof | |
Gao et al. | Kinect-based motion recognition tracking robotic arm platform | |
CN113492404B (en) | Humanoid robot action mapping control method based on machine vision | |
CN110722547B (en) | Vision stabilization of mobile robot under model unknown dynamic scene | |
CN211890823U (en) | Four-degree-of-freedom mechanical arm vision servo control system based on RealSense camera | |
Bai et al. | Kinect-based hand tracking for first-person-perspective robotic arm teleoperation | |
Bodenstedt et al. | Learned partial automation for shared control in tele-robotic manipulation | |
Zhang et al. | Teaching-playback of robot manipulator based on human gesture recognition and motion tracking |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||