CN114952854A - Human body collision object docking method and system based on man-machine cooperation - Google Patents

Human body collision object docking method and system based on man-machine cooperation

Info

Publication number
CN114952854A
CN114952854A (application CN202210680195.3A)
Authority
CN
China
Prior art keywords
human body
posture
coordinate system
collision object
human
Prior art date
Legal status
Granted
Application number
CN202210680195.3A
Other languages
Chinese (zh)
Other versions
CN114952854B (en)
Inventor
周乐来
王畅聪
李贻斌
田新诚
宋锐
Current Assignee
Shandong University
Original Assignee
Shandong University
Priority date
Filing date
Publication date
Application filed by Shandong University
Priority to CN202210680195.3A
Publication of CN114952854A
Application granted
Publication of CN114952854B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1628 Programme controls characterised by the control loop
    • B25J9/163 Programme controls characterised by the control loop learning, adaptive, model based, rule based expert control
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664 Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • B25J9/1666 Avoiding collision or forbidden zones
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a human body collision object docking method and system based on man-machine cooperation. The method comprises: acquiring human body posture information by different measuring methods to obtain key point information of the human body collision objects; unifying the coordinate systems of the obtained human body posture information to obtain the position, posture, shape and size of each human body collision object relative to a global coordinate system, and importing them into the virtual planning space of the robot; and avoiding, by the robot, the collision objects according to the information obtained in the virtual planning space, and storing the sequence formed by the human body posture information. The human body key point information is determined according to the characteristics of the collision object posture information obtained by the different measuring methods, the positions of the human body collision objects are obtained, and the posture information obtained by the different methods is unified after coordinate system conversion, forming a unified robot interface for human body collision objects and providing convenience for consistency in different working spaces.

Description

Human body collision object docking method and system based on man-machine cooperation
Technical Field
The invention relates to the technical field of human-computer cooperation, in particular to a human body collision object docking method and system based on human-computer cooperation.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
The scene in which humans and robots cooperate in the same working space to complete work tasks is called human-machine cooperation; it combines the highly repeatable operations that robots excel at with the decision-making that humans excel at. During human-machine cooperation, in order to prevent the cooperative robot from colliding with the human body, the human body is regarded as a collision object, and the robot needs to acquire the position and posture of this human body collision object.
At present, a robot can acquire the position and posture of a human body collision object in various ways, for example using a depth camera, a monocular camera combined with deep learning, or inertial measurement units. Although these achieve the purpose of collision avoidance, the types of data acquired by the different methods are inconsistent and the coordinate systems are not unified, so the various human body posture acquisition methods lack a unified docking method and are difficult to use in combination with one another.
Disclosure of Invention
In order to solve the technical problems described in the background, the invention provides a human body collision object docking method and system based on human-computer cooperation.
In order to achieve the purpose, the invention adopts the following technical scheme:
the invention provides a human body collision object docking method based on man-machine cooperation, which comprises the following steps:
acquiring human body posture information according to different measuring methods to obtain key point information of a human body colliding object;
unifying a coordinate system according to the obtained human body posture information to obtain the position, posture, shape and size of the human body collision object relative to a global coordinate system, and introducing the position, posture, shape and size into a virtual planning space of the robot;
and the robot executes the avoidance of the collided object according to the information obtained in the virtual planning space and stores a sequence formed by the posture information of the human body.
Acquiring human body posture information according to different measuring methods to obtain key point information of the human body collision objects comprises: selecting at least one of depth camera measurement, a monocular camera combined with a deep learning method, an inertial measurement unit, or a stored BVH file to obtain the human body posture information.
If the human body posture is obtained by selecting a depth camera measurement method or a monocular camera combined with a deep learning method, the key point information is the coordinate positions of the head, the neck, the spine, the waist, the left shoulder, the left elbow, the left wrist, the right shoulder, the right elbow, the right wrist, the left hip, the left knee, the left ankle, the right hip, the right knee and the right ankle respectively.
If the spine key point is not available, the line L1 connecting the left and right shoulder key points K1, K2 and the line L2 connecting the left and right hip key points K3, K4 are taken as the axes of cylinders C1 and C2 respectively, and the intersection curve of C1 and C2 is denoted I1; the plane S1 passes through the midpoint K5 of L1 and is normal to L1, and S1 and I1 have zero to two intersection points. When there are 2 intersection points, the point with the higher confidence is taken as the spine key point; when there is one intersection point, that intersection point is selected as the spine key point; when there is no intersection point, the point at the corresponding proportional position on the line L3 connecting the midpoint of L1 and the midpoint of L2 is used as the spine key point.
After each key point is obtained, the two key points associated with a human body collision object are connected: the connecting line is taken as the z axis, x and y axes perpendicular to the z axis (and to each other) define the posture of the human body collision object, and the midpoint of the connecting line between the key points is taken as the position of the human body collision object.
If inertial measurement units are used to measure the human body posture, the attitude quaternion $q_{i,init}$ of each inertial measurement unit at the initial moment is inverted to obtain $q_{i,init}^{-1}$. The attitude quaternion of the inertial measurement unit relative to the initial moment is then the Shuster product of the acquired attitude quaternion $q_i$ and $q_{i,init}^{-1}$. Assuming that the inertial measurement unit and the human body do not rotate relative to each other during movement, the relative relationship between the inertial measurement unit coordinate system and the human joint coordinate system is fixed; after a calibration posture is performed at the initial moment to calibrate the human body posture, the obtained relative rotation posture of the human body is the rotation posture of the inertial measurement unit relative to its initial attitude, $q_{i,fin}$.
Unifying a coordinate system according to the obtained human body posture information to obtain the position, the posture, the shape and the size of the human body collision object relative to a global coordinate system, and specifically comprising the following steps:
setting G: global coordinate system, C: camera or reference coordinate system, H: coordinate systems of all joints of the human body;
the position and the posture of the human body acquired by the sensor are
Figure BDA0003698057210000033
The coordinate system to which the robot refers is a global coordinate system, by
Figure BDA0003698057210000041
Converting the coordinate system to the same coordinate system;
the coordinate system referred by the gesture acquired by the monocular camera, the depth camera or the BVH is a camera coordinate system, and the pose of the camera relative to the robot coordinate system is acquired by calibrating the hand and the eye
Figure BDA0003698057210000042
The human body posture measured by the inertia measurement unit and the relative global position and posture thereof
Figure BDA0003698057210000043
The pose of the waist key point under the global coordinate system at the initial moment;
for partial poses which need to be shifted or rotated to reach a specified angle, the transformation of the relative coordinate system to the global coordinate system is carried out
Figure BDA0003698057210000044
Wherein R is R And finally obtaining the position, the posture, the shape and the size of each human collision object relative to the global coordinate system for the transformation of the required offset or rotation.
A second aspect of the present invention provides a system for implementing the above method, comprising:
a pose information acquisition module configured to: acquiring human body posture information according to different measuring methods to obtain key point information of a human body colliding object;
a coordinate system conversion module configured to: unifying a coordinate system according to the obtained human body posture information to obtain the position, posture, shape and size of the human body collision object relative to a global coordinate system, and introducing the position, posture, shape and size into a virtual planning space of the robot;
an action save module configured to: and the robot executes the avoidance of the collided object according to the information obtained in the virtual planning space and stores a sequence formed by the posture information of the human body.
A third aspect of the invention provides a computer-readable storage medium.
A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps in the human-computer cooperation based human body collision object docking method as described above.
A fourth aspect of the invention provides a computer apparatus.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps in the human-computer cooperation based human collision object docking method as described above when executing the program.
Compared with the prior art, the above one or more technical schemes have the following beneficial effects:
1. The human body key point information is determined according to the characteristics of the human body collision object posture information obtained by the different measuring methods, the positions of the human body collision objects are obtained, and the human body posture information obtained by the different methods is unified after coordinate system conversion, forming a unified robot interface for human body collision objects and providing convenience for consistency in different working spaces.
2. When the unified human body posture information is imported into the virtual space of the robot to execute collision object avoidance, the human body posture information can be reproduced on site more conveniently, or played back at different frame rates when problems are investigated.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, are included to provide a further understanding of the invention; they illustrate exemplary embodiments of the invention and together with the description serve to explain the invention without limiting it.
FIG. 1 is a schematic diagram of a human impactor docking process provided by one or more embodiments of the invention;
FIG. 2 is a schematic diagram of a manner in which a waist keypoint is determined according to one or more embodiments of the invention;
FIG. 3 is a schematic diagram of the attachment positions of collision objects at the human body joints provided by one or more embodiments of the invention;
FIG. 4 is a diagram illustrating relative coordinates with respect to a global coordinate system provided by one or more embodiments of the invention.
Detailed Description
The invention is further described with reference to the following figures and examples.
It is to be understood that the following detailed description is exemplary and is intended to provide further explanation of the invention as claimed. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit exemplary embodiments according to the invention. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the terms "comprises" and/or "comprising", when used in this specification, specify the presence of the stated features, steps, operations, devices, components, and/or combinations thereof.
As described in the background art, a robot can acquire the position and posture of a human body collision object in various ways, for example with a depth camera, a monocular camera combined with deep learning, or inertial measurement units. Although these achieve the purpose of collision avoidance, the types of data acquired by the different methods are inconsistent and the coordinate systems are not unified, so the various human posture acquisition methods lack a unified docking method and are difficult to use in combination with one another.
Therefore, the following embodiments provide a human-computer cooperation based human body collision object docking method and system. Human body key point information is determined according to the characteristics of the collision object posture information obtained by different measurement methods, the positions of the human body collision objects are obtained, and the posture information obtained by the different methods is unified after coordinate system conversion; when it is imported into the robot's virtual space to execute collision object avoidance, the scene can be reproduced on site more conveniently, or played back at different frame rates when problems are investigated.
Example One:
as shown in fig. 1-4, the human body collision object docking method based on man-machine cooperation comprises the following steps:
(1) acquiring different human body posture information according to different sensors;
(2) adjusting the size, position and shape of the collision objects according to the user's settings;
(3) importing collision objects that the robot can perceive at the corresponding human body posture positions;
(4) selecting, by the robot, a suitable path to avoid the imported collision objects;
(5) recording the imported human body posture sequence, which can be modified or replayed.
Specifically, the method comprises the following steps:
(1) and acquiring different human body posture information according to different sensors.
A human body posture measuring method is selected, such as depth camera measurement, inertial measurement unit measurement, measurement with a monocular camera combined with deep learning, or a stored BVH file.
In this embodiment, the depth camera test environment may use a depth camera such as the Azure Kinect or RealSense; the inertial-unit test environment may use inertial measurement units such as the Xsens MTw Awinda; for the monocular camera combined with deep learning, any camera capable of acquiring RGB images can be used together with any deep learning method that estimates the 3-dimensional human body posture; and any BVH file recording the human body posture can be used.
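As a concrete illustration of the docking idea, the sketch below defines a common Python interface that each measurement back end (depth camera, monocular camera with deep learning, inertial measurement units, BVH playback) could implement so that downstream code sees one uniform collision-object format; all class, field and method names, the segment table and the cylinder radius are illustrative assumptions, not an API defined by the patent.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Dict, Tuple

import numpy as np


@dataclass
class CollisionObject:
    """One human-body collision object expressed in a common format."""
    name: str                 # e.g. "left_upper_arm"
    position: np.ndarray      # 3-vector, metres, in the source frame
    orientation: np.ndarray   # unit quaternion (w, x, y, z), source frame
    shape: str                # e.g. "cylinder" or "sphere"
    size: Tuple[float, ...]   # shape parameters, e.g. (radius, length)


class HumanPoseSource(ABC):
    """Common interface every pose-measurement back end implements."""

    # Key-point pairs that each carry one collision object (illustrative subset).
    SEGMENTS = {"left_upper_arm": ("left_shoulder", "left_elbow"),
                "left_forearm": ("left_elbow", "left_wrist")}

    @abstractmethod
    def read_keypoints(self) -> Dict[str, np.ndarray]:
        """Return the required key points (head, neck, spine, waist, ...)."""

    @abstractmethod
    def source_to_global(self) -> np.ndarray:
        """Return the 4x4 transform from the source frame to the global frame."""

    def collision_objects(self) -> Dict[str, CollisionObject]:
        """Build one cylinder-shaped collision object per key-point pair."""
        kp = self.read_keypoints()
        objects = {}
        for name, (a, b) in self.SEGMENTS.items():
            mid = 0.5 * (kp[a] + kp[b])                      # midpoint = position
            length = float(np.linalg.norm(kp[b] - kp[a]))    # segment length
            objects[name] = CollisionObject(
                name=name, position=mid,
                orientation=np.array([1.0, 0.0, 0.0, 0.0]),  # identity placeholder
                shape="cylinder", size=(0.07, length))       # assumed radius
        return objects
```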
If the human body posture is obtained with a depth camera or with a monocular camera combined with a deep learning method, the key points obtained by different methods differ, and only the required key points are retained, namely the head, neck, spine, waist, left shoulder, left elbow, left wrist, right shoulder, right elbow, right wrist, left hip, left knee, left ankle, right hip, right knee and right ankle.
Some methods do not provide a spine key point; in that case the spine key point can be obtained from the left and right shoulder key points and the left and right hip key points, as shown in Fig. 2. The line L1 connecting the left and right shoulder key points K1, K2 and the line L2 connecting the left and right hip key points K3, K4 are taken as the axes of cylinders C1 and C2 respectively, and the intersection curve of C1 and C2 is denoted I1. The plane S1 passes through the midpoint K5 of L1 and is normal to L1; S1 and I1 usually have zero to two intersection points.
When there are 2 intersection points, the point with the higher confidence is selected as the spine key point;
when there is one intersection point, that intersection point is selected as the spine key point;
when there is no intersection point, the point at the corresponding proportional position on the line L3 connecting the midpoint of L1 and the midpoint of L2 is used as the spine key point.
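A minimal numerical sketch of this spine key point construction follows. The cylinder radii r1 and r2, the proportional ratio and the sampling tolerance are assumed parameters (the patent does not specify them), the intersection points of the plane S1 with the cylinder intersection curve I1 are found by sampling the circle S1 ∩ C1 rather than analytically, and the "higher confidence" choice between two candidates is replaced by a simple geometric stand-in.

```python
import numpy as np


def spine_keypoint(k1, k2, k3, k4, r1=0.15, r2=0.15, ratio=0.3, tol=5e-3):
    """Estimate the spine key point from shoulder (k1, k2) and hip (k3, k4) key points."""
    k1, k2, k3, k4 = map(np.asarray, (k1, k2, k3, k4))
    k5 = 0.5 * (k1 + k2)                      # midpoint of L1 (shoulder line)
    hip_mid = 0.5 * (k3 + k4)                 # midpoint of L2 (hip line)

    z = k2 - k1
    z = z / np.linalg.norm(z)                 # direction of L1 = normal of plane S1
    # Two unit vectors spanning the plane S1.
    seed = np.array([1.0, 0.0, 0.0]) if abs(z[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(z, seed); u /= np.linalg.norm(u)
    v = np.cross(z, u)

    hip_axis = (k4 - k3) / np.linalg.norm(k4 - k3)

    def dist_to_hip_axis(p):
        # Distance from point p to the infinite line through k3 along hip_axis.
        return np.linalg.norm(np.cross(p - k3, hip_axis))

    # Sample the circle S1 ∩ C1 and keep points that also lie (approximately) on C2,
    # i.e. points of the cylinder intersection curve I1 that lie in S1.
    candidates = []
    for theta in np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False):
        p = k5 + r1 * (np.cos(theta) * u + np.sin(theta) * v)
        if abs(dist_to_hip_axis(p) - r2) < tol:
            candidates.append(p)

    if candidates:
        # With two candidates the patent keeps the higher-confidence one; here we
        # simply pick the candidate closest to the torso midline as a stand-in.
        return min(candidates, key=lambda p: np.linalg.norm(p - 0.5 * (k5 + hip_mid)))

    # No intersection: take the proportional point on the line L3 joining the midpoints.
    return k5 + ratio * (hip_mid - k5)
```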
After each key point is obtained, the two key points associated with a human body collision object are connected: the connecting line is taken as the z axis, x and y axes perpendicular to the z axis (and to each other) define the posture of the human body collision object, and the midpoint of the connecting line between the key points is taken as the position of the human body collision object.
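The following sketch shows one way to build such a collision-object pose from a pair of key points: the z axis follows the connecting line, a pair of perpendicular x and y axes completes the frame (the patent does not fix their in-plane direction, so the choice below is arbitrary), and the midpoint gives the position.

```python
import numpy as np


def pose_from_keypoints(p_a, p_b):
    """Return (position, 3x3 rotation) of a collision object spanning p_a -> p_b."""
    p_a, p_b = np.asarray(p_a, float), np.asarray(p_b, float)
    position = 0.5 * (p_a + p_b)              # midpoint of the connecting line

    z = p_b - p_a
    z = z / np.linalg.norm(z)                 # z axis along the connecting line

    # Any vector not parallel to z seeds the perpendicular x axis.
    seed = np.array([1.0, 0.0, 0.0]) if abs(z[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    x = np.cross(seed, z); x /= np.linalg.norm(x)
    y = np.cross(z, x)                        # completes a right-handed frame

    rotation = np.column_stack((x, y, z))     # columns are the body axes
    return position, rotation
```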
When inertial measurement units are used to measure the human body posture, the attitude quaternion $q_{i,init}$ of each inertial measurement unit at the initial moment is inverted to obtain $q_{i,init}^{-1}$. The attitude quaternion of the inertial measurement unit relative to the initial moment is then obtained as the Shuster product of the acquired attitude quaternion $q_i$ and $q_{i,init}^{-1}$.
If the inertial measurement unit and the human body do not rotate relative to each other during movement, the relative relationship between the inertial measurement unit coordinate system and the human joint coordinate system is fixed. After a calibration posture is performed at the initial moment to calibrate the human body posture, the obtained relative rotation posture of the human body is the rotation posture of the inertial measurement unit relative to its initial attitude, $q_{i,fin}$.
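A sketch of this quaternion bookkeeping is given below. It uses the Hamilton product as the primitive and models the Shuster (JPL-convention) product by reversing the operands, which is stated here as an assumption of the sketch; quaternions are stored as (w, x, y, z), also an assumed layout.

```python
import numpy as np


def quat_conj(q):
    """Conjugate of a quaternion stored as (w, x, y, z)."""
    w, x, y, z = q
    return np.array([w, -x, -y, -z])


def quat_mul_hamilton(a, b):
    """Hamilton product a * b for quaternions stored as (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw * bw - ax * bx - ay * by - az * bz,
        aw * bx + ax * bw + ay * bz - az * by,
        aw * by - ax * bz + ay * bw + az * bx,
        aw * bz + ax * by - ay * bx + az * bw,
    ])


def quat_mul_shuster(a, b):
    """Shuster (JPL-convention) product, modelled here as the Hamilton
    product with the operands reversed (an assumption of this sketch)."""
    return quat_mul_hamilton(b, a)


def relative_to_initial(q_i, q_i_init):
    """Attitude of IMU i relative to its initial attitude q_i_init."""
    q_init_inv = quat_conj(q_i_init)          # inverse of a unit quaternion
    return quat_mul_shuster(q_i, q_init_inv)  # Shuster product with q_i
```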
The inertial measurement unit cannot acquire the three-dimensional position of the human body base coordinate system; this position is generally calculated with other sensors and algorithms, and the position of each joint point of the human body is then calculated from the length of each body segment. Since the relative position and posture of each key point with respect to the previous key point are known, the position and posture of each joint point relative to the coordinate system of the position sensor can be obtained along the key point chain; the midpoint of each pair of adjacent key points is then taken as the position of a human body collision object, and finally the position and posture of each human body collision object relative to the coordinate system of the position sensor are retained.
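The chained computation can be sketched as follows: given each key point's rotation and offset relative to its parent (the offset length coming from the measured body-segment lengths), the pose of every key point in the position sensor's frame is accumulated along the chain, and the midpoints of adjacent key points give the collision object positions. The BVH-style parent-frame offset convention and the segment names and lengths in the example are assumptions.

```python
import numpy as np


def accumulate_chain(base_pos, base_rot, chain):
    """Walk a key-point chain.

    base_pos, base_rot : pose of the chain root in the position-sensor frame.
    chain              : list of (R_rel, offset) pairs giving each key point's
                         rotation and translation relative to its parent.
    Returns the key-point positions and the collision-object midpoints.
    """
    positions = [np.asarray(base_pos, float)]
    rot = np.asarray(base_rot, float)
    for R_rel, offset in chain:
        # Child position: parent position plus the offset expressed in the
        # parent's orientation (BVH-style convention, an assumption here).
        positions.append(positions[-1] + rot @ np.asarray(offset, float))
        rot = rot @ np.asarray(R_rel, float)   # orientation of the child key point
    midpoints = [0.5 * (a + b) for a, b in zip(positions[:-1], positions[1:])]
    return positions, midpoints


# Example with placeholder segment lengths (metres) for a waist-to-head chain.
identity = np.eye(3)
chain = [(identity, [0.0, 0.0, 0.25]),   # waist -> spine
         (identity, [0.0, 0.0, 0.25]),   # spine -> neck
         (identity, [0.0, 0.0, 0.15])]   # neck  -> head
pts, mids = accumulate_chain([0.0, 0.0, 1.0], identity, chain)
```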
The coordinates of the human body key points and the positions where the collision objects are attached are shown in Fig. 3, where the circles mark the attachment positions of the collision objects.
(2) Adjusting the size, position and shape of the colliding object according to the user's setting
The coordinate system referenced by the human body position and posture acquired by the sensors differs from the reference frame of the robot arm, so the referenced coordinate systems need to be unified.
The following coordinate system notation is used:
G: global coordinate system;
C: camera or reference coordinate system;
H: coordinate systems of the human body joints;
the position and the posture of the human body acquired by the sensor are
Figure BDA0003698057210000091
The coordinate system referred by the general robot is a global coordinate system, and therefore, the coordinate system is converted into the same coordinate system through the following coordinate system transformation.
Figure BDA0003698057210000092
The coordinate system generally referred to by the gesture acquired by the monocular camera, the depth camera or the BVH is the camera coordinate system, and the pose of the camera relative to the mechanical arm coordinate system only needs to be calibrated by hands and eyes at the moment
Figure BDA0003698057210000093
The determination is performed.
Relative global position and attitude of the human body posture measured by the inertial measurement unit
Figure BDA0003698057210000094
The pose of the waist key point in the global coordinate system at the initial moment is shown.
For the pose of which part needs to be shifted or rotated to reach the designated angle, the transformation of the relative coordinate system to the global coordinate system is carried out
Figure BDA0003698057210000101
Wherein R is R A transformation that requires an offset or rotation. And finally, the position, the posture, the shape and the size of each human collision object relative to the global coordinate system are reserved. The invention stores a plurality of collision object adding points in sequence through different processing modes, wherein each collision object adding point has 1 quaternion data and 1 3-dimensional position data.
The relationship of the relative coordinates with respect to the global coordinate system is shown in fig. 4.
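As a sketch of this coordinate unification, the snippet below builds 4x4 homogeneous transforms from the stored quaternion-plus-position pairs and composes the camera-frame pose into the global frame; the optional correction transform stands in for the offset/rotation R_R of the text, and composing it on the right is an assumption of the sketch.

```python
import numpy as np


def quat_to_matrix(q):
    """Rotation matrix from a unit quaternion stored as (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ])


def make_transform(quat, pos):
    """4x4 homogeneous transform from one stored quaternion + 3D position pair."""
    T = np.eye(4)
    T[:3, :3] = quat_to_matrix(np.asarray(quat, float))
    T[:3, 3] = np.asarray(pos, float)
    return T


def to_global(T_G_C, T_C_H, T_correction=None):
    """Express a human-body pose, given in the camera frame, in the global frame.

    T_G_C : camera pose in the global (robot) frame, from hand-eye calibration.
    T_C_H : collision-object pose in the camera frame.
    T_correction : optional extra offset/rotation (the R_R of the text); applying
                   it on the right is an assumption of this sketch.
    """
    T_G_H = T_G_C @ T_C_H
    if T_correction is not None:
        T_G_H = T_G_H @ T_correction
    return T_G_H
```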
(3) Importing the collision objects perceived by the robot at the corresponding human body posture positions
For the positions, postures, shapes and sizes of the human body collision objects relative to the global coordinate system obtained in step (2), the collision objects are imported into the virtual planning space of the robot arm in real time.
(4) Selecting, by the robot, a suitable avoidance path according to the imported collision objects
The user can select various types of path planning algorithms to avoid the collision objects in the virtual space, for example RRT or PRM; the obstacles are imported into the planning configuration space, and the corresponding path planning algorithm is then used in that configuration space to avoid them.
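A compact sketch of obstacle avoidance in configuration space with an RRT is shown below; the collision check is a user-supplied callback (for example, distance tests against the imported human-body collision objects), and the step size, goal tolerance, iteration count and goal bias are illustrative values, not parameters from the patent.

```python
import numpy as np


def rrt_plan(q_start, q_goal, collision_free, joint_low, joint_high,
             step=0.1, goal_tol=0.2, max_iters=5000, goal_bias=0.1, rng=None):
    """Very small RRT in joint (configuration) space.

    collision_free(q) must return True when configuration q does not touch any
    imported collision object.  Returns a list of configurations or None.
    """
    rng = rng or np.random.default_rng()
    q_start, q_goal = np.asarray(q_start, float), np.asarray(q_goal, float)
    nodes = [q_start]
    parents = [-1]

    for _ in range(max_iters):
        # Sample a random configuration, occasionally biased toward the goal.
        q_rand = q_goal if rng.random() < goal_bias else rng.uniform(joint_low, joint_high)

        # Extend the nearest tree node one step toward the sample.
        i_near = int(np.argmin([np.linalg.norm(q - q_rand) for q in nodes]))
        direction = q_rand - nodes[i_near]
        dist = np.linalg.norm(direction)
        q_new = q_rand if dist <= step else nodes[i_near] + step * direction / dist

        if not collision_free(q_new):
            continue
        nodes.append(q_new)
        parents.append(i_near)

        if np.linalg.norm(q_new - q_goal) < goal_tol:
            # Walk back up the tree to recover the path.
            path, i = [q_goal], len(nodes) - 1
            while i != -1:
                path.append(nodes[i])
                i = parents[i]
            return path[::-1]
    return None
```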
(5) Recording the imported human body posture sequence
The positions and postures of the human body collision objects are stored at a set frame rate. Playback at different frame rates is allowed when the user needs to reproduce the scene on site or investigate a problem.
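The recording and playback step can be sketched as follows: poses are stored at the capture frame rate, and playback at a different frame rate resamples the stored sequence (nearest-sample here; interpolation would also work). The class name, frame-rate figures and dummy data are illustrative assumptions.

```python
import numpy as np


class PoseRecorder:
    """Store collision-object poses at a fixed capture frame rate and replay
    them at a (possibly different) playback frame rate."""

    def __init__(self, capture_fps=30.0):
        self.capture_fps = capture_fps
        self.frames = []                       # one entry per captured frame

    def record(self, frame_poses):
        """frame_poses: mapping object name -> (quaternion, position)."""
        self.frames.append(frame_poses)

    def resample(self, playback_fps=10.0):
        """Yield frames at playback_fps using nearest-sample resampling."""
        if not self.frames:
            return
        duration = len(self.frames) / self.capture_fps
        n_out = max(1, int(round(duration * playback_fps)))
        for k in range(n_out):
            t = k / playback_fps
            idx = min(int(round(t * self.capture_fps)), len(self.frames) - 1)
            yield self.frames[idx]


# Example: record dummy frames at 30 fps and replay at 10 fps.
rec = PoseRecorder(capture_fps=30.0)
for i in range(90):
    rec.record({"torso": (np.array([1.0, 0.0, 0.0, 0.0]),
                          np.array([0.0, 0.0, 1.0 + 0.001 * i]))})
replayed = list(rec.resample(playback_fps=10.0))
```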
The method provides a unified robot interface for human body collision objects across various human posture estimation methods, and provides convenience for consistency in different working spaces.
Example Two:
the embodiment provides a system for implementing the method, which includes:
a pose information acquisition module configured to: acquiring human body posture information according to different measuring methods to obtain key point information of a human body colliding object;
a coordinate system conversion module configured to: unifying a coordinate system according to the obtained human body posture information to obtain the position, posture, shape and size of the human body collision object relative to a global coordinate system, and introducing the position, posture, shape and size into a virtual planning space of the robot;
an action save module configured to: and the robot executes the avoidance of the collided object according to the information obtained in the virtual planning space and stores a sequence formed by the posture information of the human body.
The system provides a uniform robot collision object interface for various different human posture estimation modes, and provides convenience for consistency in different working spaces.
Example Three:
The present embodiment provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps in the human-computer cooperation based human body collision object docking method as set forth in the first embodiment above.
The human body collision object docking method based on human-computer cooperation executed by the computer program in the embodiment provides a uniform robot human body collision object interface for various different human body posture estimation modes, and provides convenience for consistency in different working spaces.
Example Four:
The embodiment provides a computer device, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor executes the program to realize the steps of the human body collision object docking method based on human-computer cooperation as set forth in the first embodiment above.
The human body collision object docking method based on human-computer cooperation executed by the processor provides a uniform robot human body collision object interface for various human body posture estimation modes, and provides convenience for consistency in different working spaces.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a read-only memory or a random access memory.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A human body collision object docking method based on man-machine cooperation, characterized in that the method comprises the following steps:
acquiring human body posture information according to different measuring methods to obtain key point information of a human body colliding object;
unifying a coordinate system according to the obtained human body posture information to obtain the position, posture, shape and size of the human body collision object relative to a global coordinate system, and introducing the position, posture, shape and size into a virtual planning space of the robot;
and the robot executes the avoidance of the collided object according to the information obtained in the virtual planning space and stores a sequence formed by the posture information of the human body.
2. The human-computer cooperation based human body collision object docking method according to claim 1, characterized in that: obtaining human body posture information according to different measuring methods to obtain key point information of a human body collision object, wherein the key point information comprises the following steps: and selecting at least one mode of depth camera measurement, a monocular camera combined with a deep learning method, an inertia measurement unit or a stored BVH file to obtain the human body posture information.
3. The human-computer cooperation based human body collision object docking method according to claim 2, characterized in that: if the human body posture is obtained by selecting a depth camera measurement method or a monocular camera combined with a deep learning method, the key point information is the coordinate positions of the head, the neck, the spine, the waist, the left shoulder, the left elbow, the left wrist, the right shoulder, the right elbow, the right wrist, the left hip, the left knee, the left ankle, the right hip, the right knee and the right ankle respectively.
4. The human-computer cooperation based human body collision object docking method according to claim 3, characterized in that: if there is no spine key point, the line L1 connecting the left and right shoulder key points K1, K2 and the line L2 connecting the left and right hip key points K3, K4 are taken as the axes of cylinders C1 and C2 respectively, and the intersection curve of C1 and C2 is denoted I1; the plane S1 passes through the midpoint K5 of L1 and is normal to L1, and S1 and I1 have zero to two intersection points;
when there are 2 intersection points, the point with the higher confidence is taken as the spine key point;
when there is 1 intersection point, that intersection point is selected as the spine key point;
when there is no intersection point, the point at the corresponding proportional position on the line L3 connecting the midpoint of L1 and the midpoint of L2 is used as the spine key point.
5. The human-computer cooperation based human body collision object docking method according to claim 3, characterized in that: after each key point is obtained, the two key points associated with a human body collision object are connected; the connecting line is taken as the z axis, x and y axes perpendicular to the z axis define the posture of the human body collision object, and the midpoint of the connecting line between the key points is taken as the position of the human body collision object.
6. The human-computer cooperation based human body collision object docking method according to claim 2, characterized in that: if inertial measurement units are used to measure the human body posture, the attitude quaternion $q_{i,init}$ of each inertial measurement unit at the initial moment is inverted to obtain $q_{i,init}^{-1}$; the attitude quaternion of the inertial measurement unit relative to the initial moment is the Shuster product of the acquired attitude quaternion $q_i$ and $q_{i,init}^{-1}$; assuming that the inertial measurement unit and the human body do not rotate relative to each other during movement, the relative relationship between the inertial measurement unit coordinate system and the human joint coordinate system is fixed, and after a calibration posture is performed at the initial moment to calibrate the human body posture, the obtained relative rotation posture of the human body is the rotation posture of the inertial measurement unit relative to its initial attitude, $q_{i,fin}$.
7. The human-computer cooperation based human body collision object docking method according to claim 1, characterized in that: unifying a coordinate system according to the obtained human body posture information to obtain the position, the posture, the shape and the size of the human body collision object relative to a global coordinate system, and specifically comprising the following steps:
setting G: global coordinate system, C: camera or reference coordinate system, H: coordinate systems of each joint of the human body;
the position and posture of the human body acquired by the sensor are expressed as $^{C}T_{H}$; the coordinate system referenced by the robot is the global coordinate system, and the poses are converted into the same coordinate system by $^{G}T_{H} = {}^{G}T_{C}\,{}^{C}T_{H}$;
the coordinate system referenced by the posture acquired from the monocular camera, the depth camera or the BVH file is the camera coordinate system, and the pose of the camera relative to the robot coordinate system, $^{G}T_{C}$, is obtained by hand-eye calibration;
for the human body posture measured by the inertial measurement unit, the relative global position and posture $^{G}T_{H_{0}}$ is the pose of the waist key point in the global coordinate system at the initial moment;
for those poses that need to be offset or rotated to reach a specified angle, the transformation from the relative coordinate system to the global coordinate system is additionally composed with $R_{R}$, the required offset or rotation; the position, posture, shape and size of each human body collision object relative to the global coordinate system are finally obtained.
8. A human body collision object docking system based on man-machine cooperation, characterized in that it comprises:
a pose information acquisition module configured to: acquiring human body posture information according to different measuring methods to obtain key point information of a human body colliding object;
a coordinate system conversion module configured to: unifying a coordinate system according to the obtained human body posture information to obtain the position, posture, shape and size of the human body collision object relative to a global coordinate system, and introducing the position, posture, shape and size into a virtual planning space of the robot;
an action save module configured to: and the robot executes the avoidance of the collided object according to the information obtained in the virtual planning space and stores a sequence formed by the posture information of the human body.
9. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, carries out the steps of the human-computer-collaboration-based human collision object docking method as defined in any one of claims 1 to 7.
10. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor when executing the program implements the steps in the human body collision object docking method based on human-computer cooperation according to any one of claims 1-7.
CN202210680195.3A 2022-06-16 2022-06-16 Human body collision object docking method and system based on man-machine cooperation Active CN114952854B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210680195.3A CN114952854B (en) 2022-06-16 2022-06-16 Human body collision object docking method and system based on man-machine cooperation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210680195.3A CN114952854B (en) 2022-06-16 2022-06-16 Human body collision object docking method and system based on man-machine cooperation

Publications (2)

Publication Number Publication Date
CN114952854A true CN114952854A (en) 2022-08-30
CN114952854B CN114952854B (en) 2024-07-09

Family

ID=82962718

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210680195.3A Active CN114952854B (en) 2022-06-16 2022-06-16 Human body collision object docking method and system based on man-machine cooperation

Country Status (1)

Country Link
CN (1) CN114952854B (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110495889A (en) * 2019-07-04 2019-11-26 平安科技(深圳)有限公司 Postural assessment method, electronic device, computer equipment and storage medium
CN110561432A (en) * 2019-08-30 2019-12-13 广东省智能制造研究所 safety cooperation method and device based on man-machine co-fusion
CN110978064A (en) * 2019-12-11 2020-04-10 山东大学 Human body safety assessment method and system in human-computer cooperation
US20210197384A1 (en) * 2019-12-26 2021-07-01 Ubtech Robotics Corp Ltd Robot control method and apparatus and robot using the same
CN111571582A (en) * 2020-04-02 2020-08-25 夏晶 Human-computer safety monitoring system and monitoring method for moxibustion robot
CN112957033A (en) * 2021-02-01 2021-06-15 山东大学 Human body real-time indoor positioning and motion posture capturing method and system in man-machine cooperation
CN113378799A (en) * 2021-07-21 2021-09-10 山东大学 Behavior recognition method and system based on target detection and attitude detection framework

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHANG HUI, RONG XUEWEN, LI BIN, LI YIBIN: "Leader Recognition Based on 2D Laser Scanner and Pan-Tilt for Quadruped Robots", IEEE, 31 December 2017 (2017-12-31) *
李政源, 马昕, 李贻斌: "基于最优投影平面的立体视觉空间圆位姿高精度测量方法" (High-precision measurement method of spatial circle pose in stereo vision based on the optimal projection plane), 《模式识别与人工智能》 (Pattern Recognition and Artificial Intelligence), vol. 32, no. 1, 1 January 2019 (2019-01-01) *

Also Published As

Publication number Publication date
CN114952854B (en) 2024-07-09

Similar Documents

Publication Publication Date Title
CN111402290B (en) Action restoration method and device based on skeleton key points
Riley et al. Methods for motion generation and interaction with a humanoid robot: Case studies of dancing and catching
CN107833271A (en) A kind of bone reorientation method and device based on Kinect
Borst et al. Realistic virtual grasping
JP5210884B2 (en) Computer-based method for controlling the posture of a physical articulated system and system for positioning an articulated system
JP5210883B2 (en) A method of using a computer to control the movement of a part of a physical multi-joint system, a system for controlling the movement of a part of a physical multi-joint system, A computer-based method for tracking motion, a system for tracking the motion of a human by a physical articulated system separate from a human, and a movement of a part of a physical articulated system separate from a source system Using a computer to control
KR20210011425A (en) Image processing method and device, image device, and storage medium
Riley et al. Enabling real-time full-body imitation: a natural way of transferring human movement to humanoids
Richter et al. Augmented reality predictive displays to help mitigate the effects of delayed telesurgery
US12062245B2 (en) System and method for real-time creation and execution of a human digital twin
CN109108968A (en) Exchange method, device, equipment and the storage medium of robot head movement adjustment
Gratal et al. Visual servoing on unknown objects
Kennedy et al. A novel approach to robotic cardiac surgery using haptics and vision
CN113146634A (en) Robot attitude control method, robot and storage medium
CN114952854A (en) Human body collision object docking method and system based on man-machine cooperation
CN114536351B (en) Redundant double-arm robot teaching method and device, electronic equipment and system
Ogawara et al. Grasp recognition using a 3D articulated model and infrared images
CN115890671A (en) SMPL parameter-based multi-geometry human body collision model generation method and system
JP6205387B2 (en) Method and apparatus for acquiring position information of virtual marker, and operation measurement method
Moeslund et al. Modelling and estimating the pose of a human arm
JP2014161937A (en) Posture detection device, position detection device, robot, robot system, posture detection method and program
Gail et al. Towards bridging the gap between motion capturing and biomechanical optimal control simulations
JP2915846B2 (en) 3D video creation device
CN109671108B (en) Single multi-view face image attitude estimation method capable of rotating randomly in plane
KR102623672B1 (en) Remote control method of motion tracking robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant