CN107392086B - Human body posture assessment device, system and storage device - Google Patents


Info

Publication number
CN107392086B
CN107392086B (application CN201710386839.7A)
Authority
CN
China
Prior art keywords
human body
posture
standard
relative position
reference point
Prior art date
Legal status (assumed; not a legal conclusion)
Active
Application number
CN201710386839.7A
Other languages
Chinese (zh)
Other versions
CN107392086A
Inventor
黄源浩
肖振中
许宏淮
Current Assignee
Shenzhen Orbbec Co Ltd
Original Assignee
Shenzhen Orbbec Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Orbbec Co Ltd filed Critical Shenzhen Orbbec Co Ltd
Priority to CN201710386839.7A
Publication of CN107392086A
Application granted
Publication of CN107392086B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G06V40/23: Recognition of whole body movements, e.g. for sport training

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a human body posture evaluation method, device, and storage device. The method comprises: acquiring a depth image sequence of a human body to be evaluated; identifying each body part of the human body according to the depth image sequence and determining a human body reference point; acquiring the relative position relationship between each body part and the reference point; comparing this relative position relationship with the pre-stored standard relative position relationship of a standard posture; and outputting the comparison result. The device includes a processor and a depth camera connected to the processor. The storage device stores program data executable to implement the above method. The invention can accurately distinguish the positions of the body parts and acquire an accurate relative position relationship, improving the accuracy of the posture evaluation result and, in turn, the training efficiency of each movement.

Description

Human body posture assessment device, system and storage device
Technical Field
The invention relates to the technical field of image processing, in particular to a human posture assessment method, a human posture assessment device and a storage device.
Background
A depth camera captures the depth information of each pixel in the depth image of a scene, where the depth information is the distance from the scene surface to the camera; position information of targets in the scene can therefore be acquired from the depth image.
In training for sports, dancing, body shaping, and the like, it is necessary to capture body postures, turn the postures of the various body parts into data, and evaluate the postures by analyzing that data so as to improve the training effect. The prior art evaluates human body posture from 2D image sequences. In the course of research and practice, the inventors found that 2D image sequences cannot accurately distinguish postures involving occlusion, for example a limb in front of the trunk, so the posture evaluation result is prone to be inaccurate.
Disclosure of Invention
The invention provides a human body posture assessment method, device, and storage device, which can solve the problem of inaccurate human body posture evaluation results in the prior art.
In order to solve the technical problems, the invention adopts a technical scheme that: a method for evaluating human body posture is provided, which comprises the following steps: acquiring a depth image sequence of a human body to be evaluated; identifying all parts of the human body to be evaluated according to the depth image sequence and determining a human body reference point of the human body to be evaluated; acquiring the relative position relation between each part of the body and the human body reference point; comparing the relative position relation with a standard relative position relation of a pre-stored standard posture; and outputting the comparison result.
In order to solve the technical problem, the invention adopts another technical scheme: providing a human body posture evaluation device, wherein the evaluation device comprises a depth camera and a processor, and the depth camera is connected with the processor; the depth camera is used for acquiring a depth image sequence of a human body to be evaluated; the processor is used for identifying each part of the human body to be evaluated according to the depth image sequence and determining a human body reference point of the human body to be evaluated, acquiring the relative position relationship between each body part and the human body reference point, comparing the relative position relationship with the standard relative position relationship of a pre-stored standard posture, and outputting the comparison result.
In order to solve the technical problem, the invention adopts another technical scheme that: there is provided a storage device having stored program data executable to implement the above method.
The invention has the following beneficial effects. Different from the prior art, the invention extracts the body parts and the human body reference point of a posture by processing a depth image sequence, and obtains the evaluation result by comparing the relative position relationship between each body part and the reference point with the standard relative position relationship of the human body in a standard posture. It can accurately distinguish the positions of the body parts and obtain an accurate relative position relationship, improving the accuracy of the posture evaluation result and, in turn, the training efficiency of each movement.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings described below illustrate only some embodiments of the present invention; other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a schematic flow chart of an embodiment of a human body posture assessment method provided by the present invention;
FIG. 2 is a schematic flow chart illustrating a human body posture estimation method according to another embodiment of the present invention;
FIG. 3 is a schematic flow chart of step S22 in FIG. 2;
FIG. 4 is a schematic flow chart of step S23 in FIG. 2;
FIG. 5 is a schematic diagram of the spatial position relationship between the center of mass of the knee joint and the center of the human body according to another embodiment of the human body posture estimation method provided by the present invention;
FIG. 6 is a schematic flow chart of step S26 in FIG. 2;
fig. 7 is a schematic structural diagram of an embodiment of a human body posture estimation device provided by the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a schematic flow chart of a human body posture estimation method according to an embodiment of the present invention. The human posture assessment method shown in fig. 1 comprises the following steps:
and S11, acquiring a depth image sequence of the human body to be evaluated.
In step S11, the human body posture of the human body to be evaluated is the posture to be evaluated. In specific applications, it may be a momentary posture within an action such as a sports movement or a dance movement, or a static posture such as standing or sitting. The depth image sequence may be acquired by a depth camera. A depth image includes not only the pixel information of objects in space but also depth information for each pixel, i.e., the distance from the object to the depth camera. A depth image sequence is a sequence of depth images over a period of time.
And S12, recognizing all parts of the human body to be evaluated according to the depth image sequence and determining human body reference points of the human body to be evaluated.
Specifically, the body parts of the human body to be evaluated may be the head, the shoulder and neck, the trunk, the four limbs, the hands, the feet, and the like, as well as the knee, the elbow, the wrist, the ankle, the hip joint, and the like. The human body reference point of the human body to be evaluated can be the human body centroid or the human body center, and the present embodiment describes the present invention with the human body center as the human body reference point. Of course, in other embodiments, other specific points of the human body may also be selected as the human body reference points.
And S13, acquiring the relative position relation between each part of the body and the human body reference point.
In one embodiment, the relative position relationship may be the Euclidean distance and cosine distance between the centroid of each body part and the human body reference point, for example the Euclidean and cosine distances between the centroid of the head and the center of the body, between the centroid of the hand and the center of the body, and so on.
And S14, comparing the relative position relation with the standard relative position relation of the pre-stored standard posture.
The standard relative position relationship of the standard posture may be obtained in the same manner as the posture to be evaluated in steps S11 to S13. It may be computed and saved before the first body posture evaluation, then recalled in step S14 for comparison.
And S15, outputting the comparison result.
In step S15, the comparison result indicates whether the posture of the human body to be evaluated matches the standard posture. In some embodiments, an adjustment suggestion may also be output, including the part to adjust and the adjustment direction, for example that the left hand needs to move further downward, the foot further upward, or the right arm further to the left. In other embodiments, a movement distance may also be given explicitly, for example that the left hand should move 5 cm downward.
The comparison result may be output by voice or on a display screen; the screen output may be text or images, and these modes may also be combined, thereby prompting the user to adjust his or her posture.
Different from the prior art, the invention extracts the body parts and the human body reference point of the posture by processing the depth image sequence, and obtains the evaluation result by comparing the relative position relationship between each body part and the reference point with the standard relative position relationship of the human body in the standard posture. It can accurately distinguish the positions of the body parts and obtain an accurate relative position relationship, improving the accuracy of the posture evaluation result and the training efficiency of each movement.
Referring to fig. 2, fig. 2 is a schematic flow chart of another embodiment of a human body posture estimation method provided by the present invention.
And S21, acquiring a depth image sequence of the human body to be evaluated.
And S22, recognizing all parts of the human body to be evaluated according to the depth image sequence and determining human body reference points of the human body to be evaluated.
The present embodiment will be described with reference to the end posture of the squat operation as an example. The human body reference point of the present embodiment is the human body center.
Referring to fig. 3, fig. 3 is a schematic flowchart of step S22 in fig. 2. Step S22 further includes:
and S221, removing the background in the depth image series.
For example, one blob (i.e., a connected group of pixels having similar values) may be preliminarily identified in the depth map as the subject's body, and other blobs with significantly different depth values may then be removed. A blob preliminarily identified in this way must generally have some minimum size. However, a simple Euclidean distance between pixel coordinates at the edges of the blob does not give an accurate measure of that size: the size (in pixels) of a blob corresponding to an object of a given physical size grows or shrinks as the object's distance from the camera changes.
Thus, to determine the actual size of an object, the (x, y, depth) coordinates of the object are first transformed into "real world" coordinates (xr, yr, depth) using the following formula:
xr = (x - fovx/2) * pixel_size * depth / reference_depth
yr = (y - fovy/2) * pixel_size * depth / reference_depth
Here, fovx and fovy are the fields of view (in pixels) of the depth map in the x and y directions. The pixel size is the length a pixel subtends at a given distance (the reference depth) from the depth camera. The size of the blob may then be determined by taking the Euclidean distance between the real-world coordinates of its edges.
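As a sketch of this transform (function and variable names are illustrative, not taken from the patent):

```python
def to_real_world(x, y, depth, fovx, fovy, pixel_size, ref_depth):
    # Map a pixel (x, y) with measured depth into metric real-world
    # coordinates. pixel_size is the length one pixel subtends at
    # ref_depth; fovx/fovy are the map's fields of view in pixels.
    xr = (x - fovx / 2) * pixel_size * depth / ref_depth
    yr = (y - fovy / 2) * pixel_size * depth / ref_depth
    return xr, yr, depth
```

Note that xr and yr scale linearly with depth, which is exactly why pixel-space distances alone misjudge object size.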
Thus, the background in the depth image may be removed by identifying, among the blobs in the scene, the blob of the required minimum size with the minimum average depth value. The blob closest to the depth camera is assumed to be the human body; all pixels whose depth exceeds this blob's average depth value by at least some threshold are assumed to belong to background objects, and their depth values are set to zero. The threshold can be determined according to actual needs. Furthermore, in some embodiments, pixels with depth values significantly smaller than the blob's average may also be zeroed. Alternatively, a maximum depth may be preset so that objects beyond it are ignored.
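A minimal sketch of this static-threshold background removal, assuming a list-of-lists depth map in which zero marks invalid pixels (names are illustrative):

```python
def remove_background(depth_map, threshold):
    # Zero every pixel whose depth exceeds the mean depth of the valid
    # (non-zero) pixels by more than `threshold`; those pixels are
    # assumed to belong to background objects.
    valid = [d for row in depth_map for d in row if d > 0]
    mean_depth = sum(valid) / len(valid)
    return [[0 if d > mean_depth + threshold else d for d in row]
            for row in depth_map]
```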
In some embodiments, the depth value may also be determined dynamically, beyond which objects are removed from the depth map. For this reason, it is assumed that objects in the scene are moving. Thus, any pixel that has not changed in depth for some minimum number of frames is assumed to be a background object. Pixels with depth values greater than the static depth value are considered to belong to the background object and are therefore all zeroed out. Initially, all pixels in the scene may be defined as static, or all pixels in the scene may be defined as non-static. In both cases, the actual depth filter can be dynamically generated as soon as the object starts to move.
Of course, the background in the depth image may also be removed by other methods known in the art.
S222, acquiring the contour of the human body in the depth image sequence.
After removing the background, the outer contour of the body can be found in the depth map by an edge detection method. In this embodiment, a two-step thresholding mechanism is used to find the contour of the human body:
First, all pixels in the blob corresponding to the humanoid form are traversed; a pixel is marked as a contour location if it has a valid depth value and the difference in depth value between it and at least one of its four connected neighbours (right, left, top, and bottom) is greater than a first threshold. (The difference between a valid depth value and a zero value is considered infinite.)
Then, after the above step is completed, the blob is traversed again; a pixel not yet marked as a contour location is marked if at least one of its eight connected neighbours is a contour pixel and the difference in depth value between it and at least one of its remaining neighbours is greater than a second threshold, lower than the first.
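The two-pass mechanism might be sketched as follows (a simplified illustration; off-image neighbours are simply skipped, and all names are illustrative):

```python
def find_contour(blob, t1, t2):
    # Pass 1: mark pixels whose depth differs from a 4-connected
    # neighbour by more than t1. Pass 2: grow through pixels that touch
    # an existing contour pixel (8-connectivity) and differ from a
    # 4-connected neighbour by more than t2 (t2 < t1). A valid-vs-zero
    # difference counts as infinite.
    h, w = len(blob), len(blob[0])
    def diff(a, b):
        if (a == 0) != (b == 0):
            return float('inf')
        return abs(a - b)
    n4 = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    n8 = n4 + [(1, 1), (1, -1), (-1, 1), (-1, -1)]
    def neighbours(y, x, offs):
        return [(y + dy, x + dx) for dy, dx in offs
                if 0 <= y + dy < h and 0 <= x + dx < w]
    contour = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if blob[y][x] and any(diff(blob[y][x], blob[ny][nx]) > t1
                                  for ny, nx in neighbours(y, x, n4)):
                contour[y][x] = True
    for y in range(h):
        for x in range(w):
            if blob[y][x] and not contour[y][x]:
                if any(contour[ny][nx] for ny, nx in neighbours(y, x, n8)) \
                   and any(diff(blob[y][x], blob[ny][nx]) > t2
                           for ny, nx in neighbours(y, x, n4)):
                    contour[y][x] = True
    return contour
```

The two-threshold growth is analogous to the hysteresis step of Canny edge detection: the high threshold seeds strong edges, the low one extends them.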
And S223, identifying the trunk of the human body according to the contour.
After finding the outer contour of the human body, various parts of the body, such as the head, torso, and limbs, are identified.
The depth image is first rotated so that the body contour is in a vertical position. The purpose of this rotation is to simplify the calculations in the following steps by aligning the longitudinal axis of the body with the Y coordinate (vertical) axis. Alternatively, the following calculations may be performed relative to the longitudinal axis of the body without the need to make this rotation, as will be appreciated by those skilled in the art.
The 3D axes of the body may be found prior to identifying various parts of the body. Specifically, finding the 3D axis of the body may employ the following method:
the original depth image is down-sampled (down-sample) into a grid of nodes, where one node is taken n pixels apart in the X-direction and Y-direction. The depth value of each node is calculated based on the depth values in the n × n squares centered on the node. If more than half of the pixels in a block have a value of zero, the corresponding node is set to a value of zero. Otherwise, the node is set to the average of the valid depth values in the nxn square.
This down-sampled depth image may then be further "cleaned up" based on the values of neighboring nodes: if most of the neighbors of a given node have a value of zero, then that node is also set to a value of zero (even if it has a valid depth value after the preceding steps).
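The down-sampling step can be sketched as follows (illustrative names; the neighbour clean-up pass described above is omitted):

```python
def downsample(depth, n):
    # Collapse the depth map into a grid of nodes, one per n-by-n
    # square. A node is zero when more than half of its square's pixels
    # are zero; otherwise it is the mean of the valid depth values.
    h, w = len(depth), len(depth[0])
    grid = []
    for gy in range(0, h - n + 1, n):
        row = []
        for gx in range(0, w - n + 1, n):
            square = [depth[y][x]
                      for y in range(gy, gy + n)
                      for x in range(gx, gx + n)]
            if square.count(0) * 2 > len(square):
                row.append(0)
            else:
                valid = [v for v in square if v]
                row.append(sum(valid) / len(valid))
        grid.append(row)
    return grid
```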
Upon completion of the above steps, the longitudinal axis of the remaining nodes in the down-sampled map is found. To do this, a linear least-squares fit can be performed to find the line that best fits the nodes. Alternatively, an ellipse may be fitted around the nodes and its major axis found.
After finding the 3D axis of the body, the torso of the body is identified by measuring the thickness of the body contour in directions parallel and perpendicular to the longitudinal axis. To this end, a bounding box may be defined around the body contour, and the pixel values in this box may then be binarized: pixels with zero depth values are set to 0 and pixels with non-zero depth values are set to 1.
Then, a longitudinal thickness value is calculated for each X value within the box by summing the binary pixel values along the corresponding vertical line, and a transverse thickness value is calculated for each Y value by summing the binary pixel values along the corresponding horizontal line. A threshold is applied to the resulting values to identify along which vertical and horizontal lines the contour is relatively thick.
When the transverse thickness of a certain horizontal area of the outline exceeds an X threshold value and the longitudinal thickness of a certain vertical area exceeds a Y threshold value, the intersection of the horizontal area and the vertical area can be determined as the trunk.
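A compact sketch of the thickness-profile torso search over a binarized silhouette (the returned bounding box and all names are illustrative):

```python
def find_torso(mask, x_thresh, y_thresh):
    # mask: binarized silhouette (1 = body pixel, 0 = background).
    # Column sums give the longitudinal thickness per x value, row sums
    # the transverse thickness per y value; the torso is the
    # intersection of the thick vertical and horizontal bands.
    h, w = len(mask), len(mask[0])
    col = [sum(mask[y][x] for y in range(h)) for x in range(w)]
    row = [sum(mask[y][x] for x in range(w)) for y in range(h)]
    xs = [x for x in range(w) if col[x] > y_thresh]
    ys = [y for y in range(h) if row[y] > x_thresh]
    if not xs or not ys:
        return None
    return min(xs), min(ys), max(xs), max(ys)  # torso bounding box
```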
And S224, identifying each part of the human body according to the trunk.
After the torso is determined, the head and limbs of the body may be identified based on geometric considerations. The arms are the regions connected to the left and right sides of the torso region; the head is the connected region above the torso region; the legs are the connected regions below it. The upper-left and upper-right corners of the torso region may also be preliminarily identified as the shoulders.
In another embodiment, identifying body parts of a human body to be evaluated can be further realized by the following three steps:
firstly, segmenting a human body. In the embodiment, a method combining interframe difference and background difference is adopted to segment a moving human body, one frame in an RGBD image is selected as a background frame in advance, a Gaussian model of each pixel point is established, then an interframe difference method is used for carrying out difference processing on two adjacent frames of images, background points and changed regions (the changed regions in the current frame comprise an exposed region and a moving object) are distinguished, then model fitting is carried out on the changed regions and the corresponding regions of the background frame to distinguish the exposed region and the moving object, and finally a shadow is removed from the moving object, so that the moving object without the shadow is segmented. When updating the background, determining the interframe difference as a background point, and updating according to a certain rule; and if the background difference is determined to be the point of the exposed area, updating the background frame at a higher updating rate, and not updating the area corresponding to the moving object. This method can obtain a more ideal segmentation target.
And (II), extracting and analyzing the contour. After the binarized image is acquired, the contour is extracted with a classical edge detection algorithm, for example the Canny algorithm. The Canny edge detection operator reflects the mathematical characteristics of an optimal edge detector: it has a good signal-to-noise ratio, excellent localization for different edge types, a low probability of multiple responses to a single edge, and maximal suppression of false edge responses. After the segmentation field is obtained with the segmentation algorithm, all moving objects of interest are contained within the segmented regions, so applying the Canny operator only within those regions both greatly limits background interference and effectively improves running speed.
And (III), automatically marking the joints. After the moving target is obtained by the difference method and its contour extracted by the Canny operator, the human body target is further analyzed with the 2D ribbon model of Maylor K. Leung and Yee-Hong Yang. The model divides the front of the body into different regions; for example, the body is constructed with five U-shaped regions representing the head and the four limbs.
Thus, by finding the five U-shaped body end points, the approximate location of the body can be determined. Based on the extracted contour, the required information is obtained by vector contour compression, which preserves the most prominent extremity features and compresses the human contour into a fixed shape, for example one with eight fixed end points: five U-shaped points and three inverted-U-shaped points, so that these salient features facilitate computation on the contour. Here the contour may be compressed using a distance algorithm over adjacent end points, iteratively reducing it to eight end points.
After the compressed contour is obtained, the following algorithm is adopted to automatically label each part of the body:
(1) Determine the U-shaped body end points. Given a reference length M, vectors longer than M are considered part of the body contour and shorter ones are ignored. Starting from some point on the vectorized contour, find a vector longer than M, call it Mi, then find the next such vector Mj, and compare the included angle from Mi to Mj. If the angle lies within a certain range (0 to 90 degrees; a positive angle indicates a convexity), the pair is considered to form a U end point, and the two vectors are recorded. This continues until five U end points are found.
(2) Determine the three inverted-U end points, as in step (1) but with the included-angle condition changed from positive to negative.
(3) The positions of the head, hands, and feet are then easily obtained from the U and inverted-U end points. Each joint point can be determined from the physiological shape of the body: the width and length of the trunk are determined from where the arms meet the body and where the head meets the legs; the neck and waist lie at 0.75 and 0.3 of the trunk height respectively; the elbows lie at the midpoints between shoulders and hands; and the knees at the midpoints between waist and feet. The general position of each body part can thus be defined.
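The proportion-based joint placement above can be sketched as follows (2D points as tuples; the function signature is an illustrative assumption):

```python
def midpoint(a, b):
    return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

def label_joints(torso_top, torso_bottom, shoulder, hand, foot):
    # Neck at 0.75 and waist at 0.3 of the trunk height, elbow midway
    # between shoulder and hand, knee midway between waist and foot.
    trunk_h = torso_top[1] - torso_bottom[1]
    neck = (torso_top[0], torso_bottom[1] + 0.75 * trunk_h)
    waist = (torso_top[0], torso_bottom[1] + 0.3 * trunk_h)
    return {'neck': neck, 'waist': waist,
            'elbow': midpoint(shoulder, hand),
            'knee': midpoint(waist, foot)}
```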
And S225, acquiring the center of the human body, from the torso and the body parts combined, for use as the human body reference point.
Here, the human body center is the geometric center of the human body in the depth image. After the torso and the body parts are identified, the center can be determined from the contour of the whole body in the depth image, i.e., as the median of the outer edge values of the three-dimensional body boundary.
And S23, acquiring the relative position relation between each part of the body and the human body reference point.
The relative position relationship is a relative position relationship from the centroid of each part of the human body to the center of the human body in the posture to be evaluated, and may include, for example, an euclidean distance and a cosine distance from the centroid of each part of the human body to the center of the human body. The standard relative position relationship is the relative position relationship from the mass center of each part of the human body to the center of the human body in the standard posture.
As shown in fig. 4, fig. 4 is a schematic flowchart of step S23 in fig. 2. In this embodiment, step S23 includes:
s231, acquiring a first coordinate value of the center of the human body.
In this embodiment, the first coordinate value of the human body center is a coordinate value of the human body center in a camera coordinate system of the depth camera.
For example, in the end posture of the deep squat motion, the first coordinate value of the human body center point A is (x1, y1, z1).
S232, acquiring the centroid of each body part and a second coordinate value of each centroid.
Specifically, after each part of the body is identified, the centroid of each region of the body can be determined. Wherein the centroid of a region refers to the representative depth or position of the region. To this end, for example, a histogram of depth values within a region may be generated and the depth value having the highest frequency (or an average of two or more depth values having the highest frequencies) may be set as the centroid of the region. After the mass centers of all parts of the body are determined, the coordinates of the mass centers of all parts of the body in the camera coordinate system can be determined.
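A minimal sketch of the histogram-based centroid depth described above (ties are resolved by count order; names are illustrative):

```python
from collections import Counter

def region_centroid_depth(depths):
    # Representative ("centroid") depth of a region: the most frequent
    # valid (non-zero) depth value in the region's histogram.
    counts = Counter(d for d in depths if d > 0)
    return counts.most_common(1)[0][0]
```

An average of the two or three most frequent values, as the text suggests, would be an equally valid variant.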
It is worth mentioning that the centroid in the present invention refers to a centroid obtained by depth image processing, and not to a physical centroid. The centroid of the present invention can be obtained by the centroid method, and can also be obtained by other methods, which is not limited in the present invention.
For example, in the end posture of the deep squat motion, the second coordinate value of the center of mass B of the knee joint is (x2, y2, z2).
And S233, calculating Euclidean distances and cosine distances between the center of mass of each part of the human body and the center of the human body according to the first coordinate value and the second coordinate value to form a vector to be evaluated of the body posture of the human body.
Cosine distance, also called cosine similarity, measures the difference between two individuals by the cosine of the angle between two vectors in a vector space; in machine learning the concept is used to measure the difference between sample vectors. The cosine distance of two vectors is the cosine of the angle between them.
For example, as shown in fig. 5, fig. 5 is a schematic diagram of the spatial position relationship between the center of mass of the knee joint and the center of the human body according to another embodiment of the human body posture estimation method provided by the present invention. After the first coordinate value and the second coordinate value are obtained, the vector of the human body center, OA = (x1, y1, z1), and the vector of the center of mass of the knee joint, OB = (x2, y2, z2), can be obtained.

Specifically, the Euclidean distance between the knee joint and the center of the human body is calculated by the following formula:

d_AB = sqrt((x2 - x1)^2 + (y2 - y1)^2 + (z2 - z1)^2)

and the cosine distance between OA and OB can be calculated by the following formula:

cos(theta) = (OA . OB) / (|OA| |OB|) = (x1*x2 + y1*y2 + z1*z2) / (sqrt(x1^2 + y1^2 + z1^2) * sqrt(x2^2 + y2^2 + z2^2))
Wherein, the Euclidean distance measures the absolute distance between points in space; for example, d_AB measures the absolute distance between point A and point B and is directly related to the position coordinates of the points. The cosine distance measures the included angle between space vectors and reflects a difference in direction rather than in position.

Specifically, the value range of the cosine distance is [−1, 1]. The larger the cosine of the included angle, the smaller the angle between the two vectors; the smaller the cosine, the larger the angle. When the two vectors point in the same direction, the cosine of the included angle takes its maximum value of 1; when they point in completely opposite directions, it takes its minimum value of −1.
Of course, in the process of estimating the human body posture, the Euclidean distances and cosine distances from the centers of mass of the hands, the feet and other body parts to the center of the human body are usually calculated as well. Finally, the Euclidean and cosine distances from the centroids of all the required body parts to the center of the human body, in one-to-one correspondence with those body parts, are assembled into the vector X to be evaluated of the posture of the human body.
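As an illustrative sketch (not taken from the patent itself; the part names and coordinate values below are hypothetical), the distance computations above and the assembly of the vector X to be evaluated could look like this:

```python
import math

def euclidean_distance(a, b):
    # d_AB = sqrt((x1-x2)^2 + (y1-y2)^2 + (z1-z2)^2)
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def cosine_distance(a, b):
    # cos(theta) = (OA . OB) / (|OA| |OB|), with values in [-1, 1]
    dot = sum(ai * bi for ai, bi in zip(a, b))
    norm_a = math.sqrt(sum(ai * ai for ai in a))
    norm_b = math.sqrt(sum(bi * bi for bi in b))
    return dot / (norm_a * norm_b)

def build_pose_vector(body_center, centroids):
    """Concatenate (Euclidean, cosine) distance pairs for each body part,
    in a fixed part order, to form the vector X to be evaluated."""
    x = []
    for part in sorted(centroids):
        c = centroids[part]
        x.append(euclidean_distance(body_center, c))
        x.append(cosine_distance(body_center, c))
    return x

# Example: body center A and two body-part centroids (hypothetical values)
A = (0.0, 1.0, 2.0)
centroids = {"knee": (0.3, 0.5, 2.1), "hand": (0.4, 1.5, 1.9)}
X = build_pose_vector(A, centroids)
```

The same fixed part ordering must be used when the standard vector is built, so that corresponding components of X and the standard vector describe the same body part.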
And S24, acquiring the standard relative position relation between each part of the human body and the center of the human body in the standard posture.
The method similar to steps S22 and S23 may be adopted to identify the centroid and the center of the human body of each part of the human body in the standard posture, obtain a third coordinate of the centroid and a fourth coordinate of the center of the human body of each part of the human body in the standard posture, and calculate the euclidean distance and the cosine distance between the centroid and the center of the human body of each part of the human body in the standard posture through the third coordinate and the fourth coordinate to form the standard vector a of the standard posture.
It is worth mentioning that the acquisition of the standard relative position relationship needs to be consistent with the acquisition of the relative position relationship of the human body to be evaluated. For example, since the body pose to be evaluated selects the body center as the body reference point, the body reference point in the standard pose also needs to be the body center.
And S25, storing the standard relative position relation.
The standard relative positional relationship is saved so that it can be retrieved for comparison in step S26.
Among them, steps S24-S25 may be performed before or after steps S21-S23, or between any two of steps S21-S23, as long as they precede step S26.
And S26, comparing the relative position relation with the standard relative position relation of the pre-stored standard posture.
Step S26 is to compare the posture of the human body to be evaluated with the pre-stored standard posture, and after the data is converted, the relative positional relationship between the body parts of the posture of the human body to be evaluated and the center of the human body is compared with the standard relative positional relationship between the body parts of the human body and the center of the human body in the standard posture.
For example, the Euclidean distance between the center of mass of the knee joint in the standard posture and the center of the human body is d_AB′, and the cosine distance is cos θ′. Then d_AB is compared with d_AB′, and cos θ is compared with cos θ′.
As shown in fig. 6, fig. 6 is a schematic flow chart of step S26 in fig. 2. Specifically, step S26 includes:
and S261, calculating a difference value between the vector to be evaluated and the standard vector.
In step S261, R = X − A is calculated, where each component of R is the deviation of the corresponding Euclidean or cosine distance of the body posture to be evaluated from its value in the standard posture.

For example, R contains the deviation between d_AB and d_AB′ and the deviation between cos θ and cos θ′; likewise, for the other body parts, it contains the deviations of the Euclidean distances between their centers of mass and the center of the human body from the corresponding distances in the standard posture, as well as the deviations of the cosine distances.
And S262, comparing the difference value with a preset threshold value, and judging whether the difference value is smaller than the preset threshold value.
Specifically, in step S262, the eigenvalue (for example, the norm) of the vector R may be calculated and compared with the preset threshold.

In this embodiment, if the eigenvalue of the difference is smaller than the preset threshold, which indicates that the posture of the human body to be evaluated is similar to or consistent with the standard posture, or has reached the requirement of the standard posture, the process goes to step S263. If the eigenvalue of the difference is greater than or equal to the preset threshold, the posture of the human body to be evaluated differs greatly from the standard posture, and the process goes to step S264.
And S263, judging that the body posture is the standard posture according to the comparison result, and entering the step S27.
S264, judging that the body posture is not the standard posture according to the comparison result, and entering the step S27.
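Steps S261-S264 can be sketched as follows (a minimal illustration; reading the "eigenvalue" of R as its Euclidean norm is an assumption, and the vectors and threshold are hypothetical values):

```python
import math

def compare_with_standard(x, a, threshold):
    """S261: difference R = X - A; S262: compare a scalar measure of R
    (here its Euclidean norm, an assumed reading of the patent's
    'eigenvalue') with the preset threshold."""
    r = [xi - ai for xi, ai in zip(x, a)]
    magnitude = math.sqrt(sum(ri * ri for ri in r))
    is_standard = magnitude < threshold  # S263 if True, S264 otherwise
    return is_standard, r

# Hypothetical vectors: X for the posture to be evaluated,
# A for the pre-stored standard posture.
X = [0.82, 0.97, 1.10, 0.97]
A = [0.80, 0.98, 1.05, 0.96]
ok, R = compare_with_standard(X, A, threshold=0.1)  # -> ok is True
```

The returned deviation vector R is kept alongside the pass/fail decision so that, when the posture fails, it can be analyzed to locate which body part deviates most.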
And S27, outputting the comparison result.
In this embodiment, the result is output to prompt the user regardless of whether the body posture to be evaluated meets the requirement of the standard posture.
When the body posture meets the requirement of the standard posture, the user is prompted that the standard posture has been reached. When the body posture does not meet the requirement of the standard posture, the user is prompted that the standard posture has not been reached, and is at the same time given suggestions on how to adjust the body posture toward the standard posture. For example, in the end posture of the deep squat motion of the human body to be evaluated, if the eigenvalue of R is greater than or equal to the preset threshold, R is analyzed to determine which part does not meet the standard posture; if the analysis shows that the hands are not raised high enough, the user is prompted to raise the hands higher.
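A hedged sketch of such an analysis (the mapping from deviation components to prompts is an assumption for illustration; the part labels and messages are made up):

```python
def adjustment_suggestion(r, part_labels):
    """Return a prompt for the quantity in R with the largest absolute
    deviation, with a sign hint for the direction of the correction."""
    worst = max(range(len(r)), key=lambda i: abs(r[i]))
    direction = "too high" if r[worst] > 0 else "too low"
    return f"{part_labels[worst]} is {direction}, please adjust"

# Hypothetical deviation vector and labels, one label per component of R.
labels = ["hand height", "knee angle", "foot spacing"]
R = [0.12, -0.03, 0.01]
msg = adjustment_suggestion(R, labels)
# -> "hand height is too high, please adjust"
```

A real system would likely report every component whose deviation exceeds a per-part tolerance rather than only the worst one; the single-worst rule above is the simplest possible policy.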
The present embodiment is described by taking the end posture of the squat exercise as an example, but it is to be understood that the end posture may be a posture at a specific time point during the exercise in other embodiments.
Referring to fig. 7, fig. 7 is a schematic structural diagram of an embodiment of a human body posture estimation device provided by the present invention.
The invention also provides a human body posture evaluation device which comprises a depth camera 10, a processor 11 and a memory 12, wherein the depth camera 10 and the memory 12 are connected with the processor 11.
The depth camera 10 is used to acquire a sequence of depth images of a human body to be evaluated. The depth image sequence of the human body to be evaluated can be shot by one depth camera 10, or can be shot from different angles by a plurality of depth cameras 10.
The processor 11 is configured to identify each part of the body of the human body to be evaluated according to the depth image sequence and determine a human body reference point of the human body to be evaluated; acquiring the relative position relation between each part of the body and a human body reference point; comparing the relative position relation with a standard relative position relation of a pre-stored standard posture; and outputting the comparison result.
The processor 11 is further configured to obtain a standard relative position relationship between each body part of the human body in the standard posture and a human body reference point.
The memory 12 is used for storing the standard relative positional relationship.
Wherein the relative position relationship is the relative position relationship from the mass center of each part of the human body of the posture to be evaluated to the human body reference point of the human body; the standard relative position relationship is the relative position relationship from the mass center of each part of the human body in the standard posture to the human body reference point of the human body in the standard posture.
The processor 11 is further configured to acquire a first coordinate value of the human body reference point; identify the center of mass of each part of the body and acquire a second coordinate value of each center of mass; and calculate Euclidean distances and cosine distances between the center of mass of each part of the human body and the human body reference point according to the first coordinate value and the second coordinate value, so as to form the vector to be evaluated of the body posture of the human body.
The processor 11 is further configured to calculate euclidean and cosine distances between the centroid of each part of the body of the human body in the standard posture and the reference point of the human body to form a standard vector of the standard posture of the human body.
The processor 11 is further configured to calculate a difference between the vector to be evaluated and the standard vector; and comparing the difference value with a preset threshold value, and if the difference value is smaller than the preset threshold value, judging that the body posture is a standard posture according to the comparison result.
The processor 11 is further configured to determine that the body posture is not the standard posture if the difference is greater than or equal to the preset threshold; and outputting an adjustment suggestion when the comparison result is output.
The processor 11 is also arranged to remove the background in the depth image sequence; acquire the contour of the human body in the depth image sequence; identify the trunk of the human body according to the contour; identify each part of the human body according to the trunk; and acquire the human body reference point by combining the trunk and the parts of the body.
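As a minimal sketch of the first stages of such a pipeline (this is an assumption for illustration, not the patent's actual segmentation algorithm; the depth threshold and array values are hypothetical):

```python
import numpy as np

def foreground_mask(depth, max_depth_mm=2500):
    # Pixels with no reading (0) or farther than max_depth_mm are
    # treated as background and removed.
    return (depth > 0) & (depth < max_depth_mm)

def body_reference_point(depth, max_depth_mm=2500):
    mask = foreground_mask(depth, max_depth_mm)
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    # Image-plane centroid of the silhouette; a real system would also
    # use the depth values to place the reference point in 3D.
    return float(xs.mean()), float(ys.mean())

depth = np.full((4, 4), 4000)   # background at 4 m
depth[1:3, 1:3] = 1500          # a 2x2 "person" at 1.5 m
ref = body_reference_point(depth)  # -> (1.5, 1.5)
```

Contour extraction, trunk identification from the contour, and per-part segmentation would build on this mask; they are omitted here because the patent does not specify their implementation.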
The present invention also provides a storage device storing program data that can be executed to implement the method of human body posture assessment of any of the above embodiments.
For example, the storage device may be a portable storage medium, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk. It is to be understood that the storage device may also be any of various media that can store program code, such as a server.
In conclusion, the present invention can accurately distinguish the positions of the parts of the human body and acquire an accurate relative position relationship, thereby improving the accuracy of the posture evaluation result and, in turn, the training efficiency of each movement.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes performed by the present specification and drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (9)

1. A human posture assessment method is characterized by comprising the following steps:
acquiring a depth image sequence of a human body to be evaluated;
identifying all parts of the human body to be evaluated according to the depth image sequence and determining a human body reference point of the human body to be evaluated;
acquiring the relative position relation between each part of the body and the human body reference point;
comparing the relative position relation with a standard relative position relation of a pre-stored standard posture;
outputting a comparison result;
identifying all parts of the human body to be evaluated according to the depth image sequence and determining the human body reference point of the human body to be evaluated comprises the following steps:
removing the background in the depth image sequence;
acquiring the contour of the human body in the depth image sequence;
identifying a torso of the human body according to the contour;
identifying each part of the human body according to the trunk;
acquiring the human body reference point by combining the trunk and all parts of the body;
wherein the step of identifying the torso of the human body from the contour identifies the torso by finding a 3D axis of the human body, the axis being found by measuring the thickness of the body contour in directions parallel and perpendicular to the longitudinal axis.
2. The evaluation method according to claim 1, wherein the step of comparing the relative positional relationship with a standard relative positional relationship of a pre-stored standard posture is preceded by:
acquiring a standard relative position relation between each part of the human body in the standard posture and the human body reference point;
and saving the standard relative position relation.
3. The evaluation method according to claim 2, wherein the relative positional relationship is a relative positional relationship from a centroid of each part of the body of the human body in the posture to be evaluated to the human body reference point of the human body; the standard relative position relationship is the relative position relationship from the mass center of each part of the human body in the standard posture to the human body reference point of the human body in the standard posture;
the step of obtaining the relative position relationship between each part of the body and the human body reference point comprises the following steps:
acquiring a first coordinate value of the human body reference point;
identifying the mass center of each part of the body and acquiring a second coordinate value of each mass center;
calculating Euclidean distances and cosine distances between the mass centers of all parts of the human body and a human body reference point according to the first coordinate value and the second coordinate value to form a vector to be evaluated of the body posture of the human body;
the step of obtaining the standard relative position relationship between each part of the human body in the standard posture and the human body reference point comprises the following steps:
calculating Euclidean distances and cosine distances between the mass center of each part of the human body in the standard posture and a human body reference point to form a standard vector of the standard posture;
the step of comparing the relative position relationship with a pre-stored standard relative position relationship of a standard posture comprises:
calculating the difference value between the vector to be evaluated and the standard vector;
and comparing the difference value with a preset threshold value, and if the difference value is smaller than the preset threshold value, judging that the body posture is the standard posture according to the comparison result.
4. The assessment method according to claim 3, wherein in the step of comparing the difference value with a preset threshold, if the difference value is greater than or equal to the preset threshold, the body posture is determined not to be the standard posture;
the step of outputting the comparison result further comprises outputting an adjustment suggestion.
5. The human body posture assessment device is characterized by comprising a depth camera and a processor, wherein the depth camera is connected with the processor;
the depth camera is used for acquiring a depth image sequence of a human body to be evaluated;
the processor is used for identifying all parts of the human body to be evaluated according to the depth image sequence and determining a human body reference point of the human body to be evaluated; acquiring the relative position relation between each part of the body and the human body reference point; comparing the relative position relation with a standard relative position relation of a pre-stored standard posture; outputting a comparison result;
the processor is used for identifying all parts of the body of the human body to be evaluated according to the depth image sequence and determining the human body reference point of the human body to be evaluated, and is specifically used for removing the background in the depth image sequence; acquiring the contour of the human body in the depth image sequence; identifying the torso of the human body according to the contour; identifying each part of the human body according to the torso; and acquiring the human body reference point by combining the torso and the parts of the body;
wherein, when identifying the torso of the human body from the contour, the processor identifies the torso by finding a 3D axis of the human body, the axis being found by measuring the thickness of the body contour in directions parallel and perpendicular to the longitudinal axis.
6. The evaluation device of claim 5, further comprising a memory coupled to the processor;
the processor is used for acquiring the standard relative position relation between each part of the human body in the standard posture and the human body reference point;
the memory is used for storing the standard relative position relation.
7. The evaluation apparatus according to claim 6, wherein the relative positional relationship is a relative positional relationship from a centroid of each part of the body of the human body in the posture to be evaluated to the human body reference point of the human body; the standard relative position relationship is the relative position relationship from the mass center of each part of the human body in the standard posture to the human body reference point of the human body in the standard posture;
the processor is used for acquiring a first coordinate value of the human body reference point; identifying the mass center of each part of the body and acquiring a second coordinate value of each mass center; and calculating Euclidean distances and cosine distances between the mass centers of the parts of the human body and the human body reference point according to the first coordinate value and the second coordinate value, so as to form a vector to be evaluated of the body posture of the human body;
the processor is also used for calculating Euclidean distances and cosine distances between the mass center of each part of the human body in the standard posture and a human body reference point so as to form a standard vector of the standard posture;
the processor is further used for calculating a difference value between the vector to be evaluated and the standard vector; and comparing the difference value with a preset threshold value, and if the difference value is smaller than the preset threshold value, judging that the body posture is the standard posture according to the comparison result.
8. The evaluation device of claim 7, wherein the processor is further configured to determine that the body posture is not the standard posture if the difference is greater than or equal to the preset threshold; and outputting an adjustment suggestion when the comparison result is output.
9. A storage device, characterized in that it stores program data which can be executed to implement the method according to any one of claims 1 to 4.
CN201710386839.7A 2017-05-26 2017-05-26 Human body posture assessment device, system and storage device Active CN107392086B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710386839.7A CN107392086B (en) 2017-05-26 2017-05-26 Human body posture assessment device, system and storage device

Publications (2)

Publication Number Publication Date
CN107392086A CN107392086A (en) 2017-11-24
CN107392086B true CN107392086B (en) 2020-11-03

Family

ID=60338372


Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108256433B (en) * 2017-12-22 2020-12-25 银河水滴科技(北京)有限公司 Motion attitude assessment method and system
CN110278389B (en) * 2018-03-13 2022-08-19 上海西门子医疗器械有限公司 X-ray image imaging method, device, system and storage medium
CN108573216A (en) * 2018-03-20 2018-09-25 浙江大华技术股份有限公司 A kind of limbs posture judgment method and device
CN110321754B (en) * 2018-03-28 2024-04-19 西安铭宇信息科技有限公司 Human motion posture correction method and system based on computer vision
CN108537284A (en) * 2018-04-13 2018-09-14 东莞松山湖国际机器人研究院有限公司 Posture assessment scoring method based on computer vision deep learning algorithm and system
CN108846996B (en) * 2018-08-06 2020-01-24 浙江理工大学 Tumble detection system and method
CN109330602B (en) * 2018-11-01 2022-06-24 中山市人民医院 Female body intelligent evaluation detection device and method and storage medium
CN109829442A (en) * 2019-02-22 2019-05-31 焦点科技股份有限公司 A kind of method and system of the human action scoring based on camera
CN111862296B (en) * 2019-04-24 2023-09-29 京东方科技集团股份有限公司 Three-dimensional reconstruction method, three-dimensional reconstruction device, three-dimensional reconstruction system, model training method and storage medium
CN111783702A (en) * 2020-07-20 2020-10-16 杭州叙简科技股份有限公司 Efficient pedestrian tumble detection method based on image enhancement algorithm and human body key point positioning
CN113033552B (en) * 2021-03-19 2024-02-02 北京字跳网络技术有限公司 Text recognition method and device and electronic equipment
CN113398556B (en) * 2021-06-28 2022-03-01 浙江大学 Push-up identification method and system
CN113673492B (en) * 2021-10-22 2022-03-11 科大讯飞(苏州)科技有限公司 Human body posture evaluation method, electronic device and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103390174A (en) * 2012-05-07 2013-11-13 深圳泰山在线科技有限公司 Physical education assisting system and method based on human body posture recognition
CN104167016A (en) * 2014-06-16 2014-11-26 西安工业大学 Three-dimensional motion reconstruction method based on RGB color and depth image
CN106056053A (en) * 2016-05-23 2016-10-26 西安电子科技大学 Human posture recognition method based on skeleton feature point extraction
CN106250867A (en) * 2016-08-12 2016-12-21 南京华捷艾米软件科技有限公司 A kind of skeleton based on depth data follows the tracks of the implementation method of system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104318520A (en) * 2014-09-28 2015-01-28 南通大学 Pixel local area direction detection method
KR102097016B1 (en) * 2015-02-09 2020-04-06 한국전자통신연구원 Apparatus and methdo for analayzing motion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant