CN116052273A - Action comparison method and device based on body state fishbone line - Google Patents

Action comparison method and device based on body state fishbone line

Info

Publication number
CN116052273A
CN116052273A (application CN202310018949.3A)
Authority
CN
China
Prior art keywords
gesture
point
value
auxiliary
action
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310018949.3A
Other languages
Chinese (zh)
Other versions
CN116052273B (en)
Inventor
邸建
闫连富
雷东
卜震
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Titi Technology Co ltd
Original Assignee
Beijing Titi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Titi Technology Co ltd filed Critical Beijing Titi Technology Co ltd
Priority to CN202310018949.3A priority Critical patent/CN116052273B/en
Publication of CN116052273A publication Critical patent/CN116052273A/en
Application granted granted Critical
Publication of CN116052273B publication Critical patent/CN116052273B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 - Movements or behaviour, e.g. gesture recognition
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 - Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G06V10/762 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G06V10/763 - Non-hierarchical techniques, e.g. based on statistics of modelling distributions
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 - Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Human Computer Interaction (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a motion comparison method and device based on a body state fishbone line. The method comprises the following steps: acquiring a set of human skeleton point coordinates of the gesture to be compared in an image pixel coordinate system; performing a validity check on the human skeleton point coordinates; generating a plurality of auxiliary lines from the set of human skeleton point coordinates that pass the validity check; comparing the plurality of auxiliary lines one by one with the corresponding auxiliary lines of a preset standard action gesture, calculating the included angle between each auxiliary line of the gesture to be compared and the corresponding auxiliary line of the standard action gesture, recording the included angles that exceed a preset angle threshold, and outputting the angle difference values; and performing a weighted calculation on the angle difference values to obtain a quantization index of the current action completion degree. By comparing the plurality of auxiliary lines of the gesture to be compared one by one with the corresponding auxiliary lines of the preset standard action gesture, the invention can accurately describe, in two dimensions, the difference between the motion gesture and the standard gesture, and can provide a quantization index for the user.

Description

Action comparison method and device based on body state fishbone line
Technical Field
The invention relates to the technical field of motion comparison, in particular to a motion comparison method and device based on a body state fishbone line.
Background
In the prior art, describing and comparing the small differences between the motion postures of the trunk and legs in some movements is computationally complex, difficult to understand, and impractical, and the difference between the motion posture of the trunk and legs and the standard posture cannot be described in two dimensions in a way that is both accurate and easy to understand.
Disclosure of Invention
In view of the above, the present invention provides a motion comparison method and device based on a body state fishbone line, which overcome the above problems or at least partially solve them.
In one aspect of the invention, a motion comparison method based on a body state fishbone line is provided, the method comprising:
acquiring a set of human skeleton point coordinates of the gesture to be compared in an image pixel coordinate system;
performing a validity check on the human skeleton point coordinates;
generating a plurality of auxiliary lines from the set of human skeleton point coordinates that pass the validity check;
comparing the plurality of auxiliary lines one by one with the corresponding auxiliary lines of a preset standard action gesture, calculating the included angle between each auxiliary line of the gesture to be compared and the corresponding auxiliary line of the standard action gesture, recording the included angles that exceed a preset angle threshold, and outputting the angle difference values;
and performing a weighted calculation on the angle difference values to obtain a quantization index of the current action completion degree.
Further, the human skeletal point coordinates include: left shoulder point, right shoulder point, left hip point, right hip point, left knee point, right knee point, left ankle point, and right ankle point coordinates.
Further, the performing a validity check on the human skeleton point coordinates includes:
checking the validity of the relative positions of the human skeleton point coordinates;
the checking the validity of the relative positions of the human skeleton point coordinates includes:
in the image coordinate system, judging whether the y value of the left shoulder point coordinate is smaller than the y value of the left hip point coordinate, whether the y value of the right shoulder point coordinate is smaller than the y value of the right hip point coordinate, whether the y value of the left knee point coordinate is larger than the y value of the left hip point coordinate, and whether the y value of the right knee point coordinate is larger than the y value of the right hip point coordinate;
and determining that the human skeleton point coordinates pass the relative-position validity check when the y value of the left shoulder point coordinate is smaller than the y value of the left hip point coordinate, the y value of the right shoulder point coordinate is smaller than the y value of the right hip point coordinate, the y value of the left knee point coordinate is larger than the y value of the left hip point coordinate, and the y value of the right knee point coordinate is larger than the y value of the right hip point coordinate.
Further, the performing a validity check on the human skeleton point coordinates further includes: checking the point-distribution validity of the human skeleton point coordinates;
the checking the point-distribution validity of the human skeleton point coordinates includes:
acquiring the target action, in the target sport, to which the gesture to be compared belongs; and judging whether the distribution of the human skeleton point coordinates satisfies the point-location distribution relation corresponding to the target action, and if so, determining that the human skeleton point coordinates pass the point-distribution validity check.
Further, the generating a plurality of auxiliary lines from the set of human skeleton point coordinates that pass the validity check includes:
for the set of human skeleton point coordinates that pass the validity check, connecting and extending the left shoulder point and the right shoulder point, the left hip point and the right hip point, the left knee point and the right knee point, and the left ankle point and the right ankle point, and connecting and extending the midpoint of the two shoulders and the midpoint of the two hips, to generate a plurality of auxiliary lines: a shoulder line, a hip line, a knee line, an ankle line, and a central axis.
Further, the comparing the plurality of auxiliary lines one by one with the corresponding auxiliary lines of the preset standard action gesture, calculating the included angle between each auxiliary line of the gesture to be compared and the corresponding auxiliary line of the standard action gesture, recording the included angles that exceed the preset angle threshold, and outputting the angle difference values includes:
scaling the outline of the figure of the gesture to be compared according to the outline scale of the standard action gesture;
after the gesture to be compared has been scaled to the same outline size as the standard action gesture, translating each auxiliary line of the gesture to be compared and the corresponding auxiliary line of the preset standard action gesture in vector space until the two lines share a point;
and calculating the included angle between each auxiliary line of the gesture to be compared and the corresponding auxiliary line of the standard action gesture, recording as a difference each included angle that exceeds the preset angle threshold, and outputting the angle difference values.
Further, the auxiliary lines of the preset standard action gesture are stored in an action knowledge base;
the action knowledge base is generated as follows: collecting standard actions performed, in different sports, by different people of different heights and weights; obtaining the joint point coordinates through a human body posture key point recognition model; calculating the included angle between the vector of each of the plurality of auxiliary lines and the horizontal vector for the different actions of the different sports; setting different weight proportions for the plurality of auxiliary lines corresponding to the different actions of the different sports; and generating the action knowledge base from the included angles between the vectors of the plurality of auxiliary lines and the horizontal vector for the different actions of the different sports and from the weight proportions set for the corresponding auxiliary lines.
Further, the method further comprises: acquiring a preset training data set, wherein the training data set comprises: the vector of each of the plurality of auxiliary lines obtained from the human skeleton point coordinates corresponding to different actions, the included angle between each vector and the horizontal vector, and the angle difference values, relative to the preset angle threshold, of the included angles between the auxiliary lines of each action gesture and the auxiliary lines of the standard action gesture in the action knowledge base;
checking, against the action knowledge base, the validity of the actions in the training data set on the basis of the included angles between the vectors of the auxiliary lines, obtained from the human skeleton point coordinates corresponding to those actions, and the horizontal vector;
performing a weighted calculation on the angle difference values corresponding to the actions in the training data set that pass the action validity check, according to the weight proportions of the plurality of auxiliary lines corresponding to the different actions of the different sports in the action knowledge base;
and clustering the weighted calculation results with the kmeans clustering algorithm to generate, for the corresponding actions of the different sports, five grade levels ranging from good to poor.
Further, the performing a weighted calculation on the angle difference values to obtain a quantization index of the current action completion degree further includes:
obtaining, from the action knowledge base, the target weight proportion corresponding to each auxiliary line of the target action, in the target sport, to which the gesture to be compared belongs;
performing a weighted calculation on the angle difference values according to the target weight proportions;
and clustering the current weighted calculation result with the kmeans clustering algorithm to obtain the quantization index of the current action completion degree.
In a second aspect of the present invention, a motion comparison device based on a body state fishbone line is provided, the device comprising:
an acquisition module, configured to acquire a set of human skeleton point coordinates of the gesture to be compared in an image pixel coordinate system;
a judging module, configured to perform a validity check on the human skeleton point coordinates;
a computing module, configured to generate a plurality of auxiliary lines from the set of human skeleton point coordinates that pass the validity check;
a comparison module, configured to compare the plurality of auxiliary lines one by one with the corresponding auxiliary lines of a preset standard action gesture, calculate the included angle between each auxiliary line of the gesture to be compared and the corresponding auxiliary line of the standard action gesture, record the included angles that exceed a preset angle threshold, and output the angle difference values;
and a calculation module, configured to perform a weighted calculation on the angle difference values to obtain a quantization index of the current action completion degree.
In another aspect of the invention, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the steps of the above motion comparison method based on a body state fishbone line.
In yet another aspect of the invention, an electronic device is also provided, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the above motion comparison method based on a body state fishbone line.
The embodiments of the invention provide a motion comparison method and device based on a body state fishbone line. Based on several key human body nodes, the motion state is described and displayed with shoulder-line, hip-line, knee-line, ankle-line, and central-axis auxiliary lines; the visual effect of the action angle difference is amplified by extending the skeleton-node lines of the two-dimensional image, so that small action differences become visible and are displayed in a clear, accurate, and easy-to-understand way; and by comparing the plurality of auxiliary lines of the gesture to be compared one by one with the corresponding auxiliary lines of the preset standard action gesture, the difference between the motion gesture and the standard gesture can be accurately described in two dimensions and a quantization index can be provided for the user, which solves the problem that action comparison could not previously be quantized.
The foregoing is merely an overview of the technical solution of the present invention. In order that the technical means of the present invention may be understood more clearly and implemented in accordance with the contents of the specification, and in order that the above and other objects, features, and advantages of the present invention may become more apparent, specific embodiments of the invention are set forth below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
FIG. 1 is a flow chart of a motion comparison method based on a body state fishbone line according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of key nodes of a human body according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an auxiliary line according to an embodiment of the present invention;
fig. 4 is a block diagram of a motion comparison device based on a body state fishbone line according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless expressly stated otherwise, as understood by those skilled in the art. It will be further understood that the terms "comprises" and/or "comprising", when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It will be understood by those skilled in the art that all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs unless defined otherwise. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Fig. 1 schematically shows a flow chart of a motion comparison method based on a body state fishbone line according to an embodiment of the invention.
Referring to fig. 1, the motion comparison method based on a body state fishbone line according to the embodiment of the invention specifically includes the following steps:
s11, acquiring a group of human skeleton point coordinates of the gesture to be compared under an image pixel coordinate system;
further, the human skeletal point coordinates include: left shoulder point, right shoulder point, left hip point, right hip point, left knee point, right knee point, left ankle point, and right ankle point coordinates.
In this embodiment, the coordinates of the skeletal points of the human body are numbered points 1-8 in sequence.
S12, performing a validity check on the human skeleton point coordinates;
Further, the performing a validity check on the human skeleton point coordinates includes:
checking the validity of the relative positions of the human skeleton point coordinates;
the checking the validity of the relative positions of the human skeleton point coordinates includes:
in the image coordinate system, judging whether the y value of the left shoulder point coordinate is smaller than the y value of the left hip point coordinate, whether the y value of the right shoulder point coordinate is smaller than the y value of the right hip point coordinate, whether the y value of the left knee point coordinate is larger than the y value of the left hip point coordinate, and whether the y value of the right knee point coordinate is larger than the y value of the right hip point coordinate;
and determining that the human skeleton point coordinates pass the relative-position validity check when the y value of the left shoulder point coordinate is smaller than the y value of the left hip point coordinate, the y value of the right shoulder point coordinate is smaller than the y value of the right hip point coordinate, the y value of the left knee point coordinate is larger than the y value of the left hip point coordinate, and the y value of the right knee point coordinate is larger than the y value of the right hip point coordinate.
Further, the performing a validity check on the human skeleton point coordinates further includes: checking the point-distribution validity of the human skeleton point coordinates;
the checking the point-distribution validity of the human skeleton point coordinates includes:
acquiring the target action, in the target sport, to which the gesture to be compared belongs; and judging whether the distribution of the human skeleton point coordinates satisfies the point-location distribution relation corresponding to the target action, and if so, determining that the human skeleton point coordinates pass the point-distribution validity check.
In this embodiment, taking golf as an example, the point coordinates of the front-facing standing address posture need to satisfy the following: the wrist joints of both hands are located around the middle of the hip points and below them, the y value of each wrist point coordinate is greater than the y value of the hip point coordinate, the x value of the left-hand wrist point coordinate is less than the x value of the hip point coordinate, and the x value of the right-hand wrist point coordinate is greater than the x value of the right hip point coordinate.
Further, after calculating the vector of each auxiliary line in the plurality of auxiliary lines, the method further comprises calculating an included angle generated by the vector and a horizontal vector in the positive direction of the X-axis of the rectangular coordinate system, and judging the action effectiveness based on the included angle. Specifically, if the included angle formed by the auxiliary lines corresponding to a certain bone key point does not conform to the preset angle range, the gesture is determined to be possibly misidentified, and the identification process needs to be performed again.
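As an illustration of the validity checks described above, the following is a minimal sketch in Python; the point names, the dictionary layout, and the helper names are assumptions made for this example and are not part of the claimed method.

```python
# Minimal sketch of the relative-position validity check and the angle-range check
# (illustrative only; image pixel coordinates are assumed, so y grows downward).
import math

def relative_position_valid(pts):
    """pts maps point names to (x, y) coordinates in the image pixel coordinate system."""
    return (pts["left_shoulder"][1] < pts["left_hip"][1] and
            pts["right_shoulder"][1] < pts["right_hip"][1] and
            pts["left_knee"][1] > pts["left_hip"][1] and
            pts["right_knee"][1] > pts["right_hip"][1])

def line_angle_deg(p, q):
    """Included angle (0-180 degrees) between the line p-q and the positive X axis."""
    return math.degrees(math.atan2(q[1] - p[1], q[0] - p[0])) % 180.0

def angle_plausible(p, q, expected_range):
    """Flags a possible misidentification when the line's angle leaves the preset range."""
    low, high = expected_range
    return low <= line_angle_deg(p, q) <= high
```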
S13, generating a plurality of auxiliary lines from the set of human skeleton point coordinates that pass the validity check;
Further, the generating a plurality of auxiliary lines from the set of human skeleton point coordinates that pass the validity check includes:
for the set of human skeleton point coordinates that pass the validity check, connecting and extending the left shoulder point and the right shoulder point, the left hip point and the right hip point, the left knee point and the right knee point, and the left ankle point and the right ankle point, and connecting and extending the midpoint of the two shoulders and the midpoint of the two hips, to generate a plurality of auxiliary lines: a shoulder line, a hip line, a knee line, an ankle line, and a central axis.
In this embodiment, for example, when connecting knee and ankle points, the left knee must be connected to the left ankle and the right knee to the right ankle.
For the 5 auxiliary lines, the vector of each auxiliary line and the included angle between that vector and the positive X-axis direction of the rectangular coordinate system are calculated in the order of lines 1-5.
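A minimal sketch of this auxiliary-line construction is given below (Python). The point names and the returned dictionary layout are assumptions for this example; the five lines follow the order described above.

```python
import math

# Illustrative sketch: build the five auxiliary lines ("fishbone") from one set of
# validated skeleton points and compute each line's angle with the positive X axis.

def midpoint(p, q):
    return ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)

def line_angle_deg(p, q):
    """Included angle between the vector p->q and the horizontal vector (positive X axis)."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    return math.degrees(math.atan2(dy, dx)) % 180.0  # direction-independent line angle

def build_auxiliary_lines(pts):
    """Returns the 5 auxiliary lines in the order line 1..5:
    shoulder line, hip line, knee line, ankle line, central axis."""
    lines = {
        "shoulder": (pts["left_shoulder"], pts["right_shoulder"]),
        "hip":      (pts["left_hip"],      pts["right_hip"]),
        "knee":     (pts["left_knee"],     pts["right_knee"]),
        "ankle":    (pts["left_ankle"],    pts["right_ankle"]),
        # Central axis: line through the shoulder midpoint and the hip midpoint.
        "central_axis": (midpoint(pts["left_shoulder"], pts["right_shoulder"]),
                         midpoint(pts["left_hip"], pts["right_hip"])),
    }
    return {name: {"endpoints": (p, q), "angle": line_angle_deg(p, q)}
            for name, (p, q) in lines.items()}
```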
S14, comparing the plurality of auxiliary lines one by one with the corresponding auxiliary lines of the preset standard action gesture, calculating the included angle between each auxiliary line of the gesture to be compared and the corresponding auxiliary line of the standard action gesture, recording the included angles that exceed the preset angle threshold, and outputting the angle difference values;
Further, the comparing the plurality of auxiliary lines one by one with the corresponding auxiliary lines of the preset standard action gesture, calculating the included angle between each auxiliary line of the gesture to be compared and the corresponding auxiliary line of the standard action gesture, recording the included angles that exceed the preset angle threshold, and outputting the angle difference values includes:
scaling the outline of the figure of the gesture to be compared according to the outline scale of the standard action gesture;
after the gesture to be compared has been scaled to the same outline size as the standard action gesture, translating each auxiliary line of the gesture to be compared and the corresponding auxiliary line of the preset standard action gesture in vector space until the two lines share a point;
and calculating the included angle between each auxiliary line of the gesture to be compared and the corresponding auxiliary line of the standard action gesture, recording as a difference each included angle that exceeds the preset angle threshold, and outputting the angle difference values.
Further, the auxiliary lines of the preset standard action gesture are stored in an action knowledge base;
the action knowledge base is generated as follows: collecting standard actions performed, in different sports, by different people of different heights and weights; obtaining the joint point coordinates through a human body posture key point recognition model; calculating the included angle between the vector of each of the plurality of auxiliary lines and the horizontal vector for the different actions of the different sports; setting different weight proportions for the plurality of auxiliary lines corresponding to the different actions of the different sports; and generating the action knowledge base from the included angles between the vectors of the plurality of auxiliary lines and the horizontal vector for the different actions of the different sports and from the weight proportions set for the corresponding auxiliary lines.
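To make the organization of such a knowledge base concrete, the following is a minimal sketch of one possible entry; the sport, action name, field names, angle values, weights, and thresholds are all placeholder assumptions for illustration, not data taken from the patent.

```python
# Illustrative sketch of one action knowledge base entry (placeholder values only).
ACTION_KNOWLEDGE_BASE = {
    ("golf", "address"): {
        # Standard included angle (degrees) between each auxiliary line and the horizontal vector.
        "standard_angles": {
            "shoulder": 2.0, "hip": 1.5, "knee": 1.0, "ankle": 0.5, "central_axis": 90.0,
        },
        # Per-line weight proportions used when aggregating angle differences (sum to 1).
        "weights": {
            "shoulder": 0.30, "hip": 0.25, "knee": 0.15, "ankle": 0.10, "central_axis": 0.20,
        },
        # Per-line angle thresholds beyond which a difference is recorded.
        "thresholds": {
            "shoulder": 15.0, "hip": 15.0, "knee": 15.0, "ankle": 15.0, "central_axis": 10.0,
        },
    },
}
```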
Further, the method further comprises: acquiring a preset training data set, wherein the training data set comprises: the vector of each of the plurality of auxiliary lines obtained from the human skeleton point coordinates corresponding to different actions, the included angle between each vector and the horizontal vector, and the angle difference values, relative to the preset angle threshold, of the included angles between the auxiliary lines of each action gesture and the auxiliary lines of the standard action gesture in the action knowledge base;
checking, against the action knowledge base, the validity of the actions in the training data set on the basis of the included angles between the vectors of the auxiliary lines, obtained from the human skeleton point coordinates corresponding to those actions, and the horizontal vector;
performing a weighted calculation on the angle difference values corresponding to the actions in the training data set that pass the action validity check, according to the weight proportions of the plurality of auxiliary lines corresponding to the different actions of the different sports in the action knowledge base;
and clustering the weighted calculation results with the kmeans clustering algorithm to generate, for the corresponding actions of the different sports, five grade levels ranging from good to poor.
In this embodiment, the angle threshold for the included angles of the 4 horizontal lines lies in the range of 0-15°, and the angle threshold for the included angle of the 1 vertical line lies in the range of 0-10°; the included angles that exceed the preset angle threshold are recorded as differences, and the angle difference values are output.
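The following Python sketch illustrates this comparison step, reusing the `build_auxiliary_lines` helper and the knowledge-base layout assumed in the earlier sketches; it is illustrative only, not the claimed implementation.

```python
# Illustrative sketch of step S14: compare each auxiliary line of the gesture under test
# with the corresponding standard line and record differences exceeding the per-line threshold.

def compare_with_standard(aux_lines, kb_entry):
    """aux_lines: output of build_auxiliary_lines(); kb_entry: one ACTION_KNOWLEDGE_BASE value.
    Returns {line_name: angle difference in degrees} for the lines exceeding their threshold."""
    differences = {}
    for name, line in aux_lines.items():
        standard = kb_entry["standard_angles"][name]
        threshold = kb_entry["thresholds"][name]
        # Because each line is reduced to its angle with the horizontal vector, translating
        # the two lines in vector space until they share a point does not change the result.
        diff = abs(line["angle"] - standard)
        diff = min(diff, 180.0 - diff)  # treat lines as undirected
        if diff > threshold:
            differences[name] = diff
    return differences
```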
S15, performing a weighted calculation on the angle difference values to obtain a quantization index of the current action completion degree.
Further, the performing a weighted calculation on the angle difference values to obtain a quantization index of the current action completion degree further includes:
obtaining, from the action knowledge base, the target weight proportion corresponding to each auxiliary line of the target action, in the target sport, to which the gesture to be compared belongs;
performing a weighted calculation on the angle difference values according to the target weight proportions;
and clustering the current weighted calculation result with the kmeans clustering algorithm to obtain the quantization index of the current action completion degree.
In this embodiment, the kmeans clustering algorithm is an iteratively solved cluster analysis algorithm. The data are first divided into K groups, K objects are randomly selected as the initial cluster centers, the distance between each object and each seed cluster center is calculated, and each object is assigned to the cluster center closest to it. The cluster centers and the objects assigned to them represent a cluster. Each time a sample is assigned, the cluster center of that cluster is recalculated from the objects currently in the cluster. This process repeats until a termination condition is met. The termination condition may be that no objects (or fewer than a minimum number of objects) are reassigned to different clusters, that no cluster center changes any further, or that the sum of squared errors reaches a local minimum.
The kmeans clustering algorithm can be viewed as the special case of a Gaussian mixture model solved with the expectation-maximization algorithm in which the covariance of each normal distribution is the identity matrix and the posterior distribution of the hidden variables is a set of Dirac delta functions.
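A minimal sketch of the weighting and kmeans-based grading is given below (Python, using scikit-learn's KMeans); the training scores, the number of samples, and the grade ordering are placeholder assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative sketch of step S15: weight the recorded angle differences and map the score
# to one of five grade levels with kmeans clustering.

def weighted_score(differences, weights):
    """differences: {line_name: angle diff}; weights: per-line weight proportions."""
    return sum(weights[name] * diff for name, diff in differences.items())

# Historical weighted scores from the training data set (placeholder values).
training_scores = np.array([[1.2], [2.5], [4.0], [7.5], [9.1], [13.0], [18.4], [25.0]])
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(training_scores)

# Order the 5 cluster centers so that a lower score maps to a better grade.
order = np.argsort(kmeans.cluster_centers_.ravel())
grade_of_cluster = {cluster: grade for grade, cluster in enumerate(order)}

def completion_grade(score):
    """Returns 0 (best) .. 4 (worst) as the quantization index of action completion."""
    cluster = int(kmeans.predict(np.array([[score]]))[0])
    return grade_of_cluster[cluster]
```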
According to the motion comparison method based on a body state fishbone line provided by the embodiment of the invention, based on several key human body nodes, the motion state is described and displayed with shoulder-line, hip-line, knee-line, ankle-line, and central-axis auxiliary lines; the visual effect of the action angle difference is amplified by extending the skeleton-node lines of the two-dimensional image, so that small action differences become visible and are displayed in a clear, accurate, and easy-to-understand way; and by comparing the plurality of auxiliary lines of the gesture to be compared one by one with the corresponding auxiliary lines of the preset standard action gesture, the difference between the motion gesture and the standard gesture can be accurately described in two dimensions and a quantization index can be provided for the user, which solves the problem that action comparison could not previously be quantized.
FIG. 2 is a schematic diagram of key nodes of a human body according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an auxiliary line according to an embodiment of the present invention;
for the purposes of simplicity of explanation, the methodologies are shown and described as a series of acts, it is to be understood and appreciated by one of ordinary skill in the art that the methodologies are not limited by the order of acts, as some acts may, in accordance with the methodologies, take place in other order or concurrently. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred embodiments, and that the acts are not necessarily required by the embodiments of the invention.
Fig. 4 schematically shows a structural diagram of a motion comparison device based on a body state fishbone line according to an embodiment of the invention. Referring to fig. 4, the motion comparison device based on a body state fishbone line according to the embodiment of the invention specifically includes an acquisition module 401, a judging module 402, a computing module 403, a comparison module 404, and a calculation module 405, wherein:
the acquisition module 401 is configured to acquire a set of human skeleton point coordinates of the gesture to be compared in an image pixel coordinate system;
the judging module 402 is configured to perform a validity check on the human skeleton point coordinates;
the computing module 403 is configured to generate a plurality of auxiliary lines from the set of human skeleton point coordinates that pass the validity check;
the comparison module 404 is configured to compare the plurality of auxiliary lines one by one with the corresponding auxiliary lines of a preset standard action gesture, calculate the included angle between each auxiliary line of the gesture to be compared and the corresponding auxiliary line of the standard action gesture, record the included angles that exceed a preset angle threshold, and output the angle difference values;
and the calculation module 405 is configured to perform a weighted calculation on the angle difference values to obtain a quantization index of the current action completion degree.
Further, the judging module 402 is configured to perform a validity check on the coordinates of the left shoulder point, the right shoulder point, the left hip point, the right hip point, the left knee point, the right knee point, the left ankle point, and the right ankle point.
Further, the judging module 402 is configured to check the validity of the relative positions of the human skeleton point coordinates;
the checking the validity of the relative positions of the human skeleton point coordinates includes:
in the image coordinate system, judging whether the y value of the left shoulder point coordinate is smaller than the y value of the left hip point coordinate, whether the y value of the right shoulder point coordinate is smaller than the y value of the right hip point coordinate, whether the y value of the left knee point coordinate is larger than the y value of the left hip point coordinate, and whether the y value of the right knee point coordinate is larger than the y value of the right hip point coordinate;
and determining that the human skeleton point coordinates pass the relative-position validity check when the y value of the left shoulder point coordinate is smaller than the y value of the left hip point coordinate, the y value of the right shoulder point coordinate is smaller than the y value of the right hip point coordinate, the y value of the left knee point coordinate is larger than the y value of the left hip point coordinate, and the y value of the right knee point coordinate is larger than the y value of the right hip point coordinate.
Further, the judging module 402 is further configured to check the point-distribution validity of the human skeleton point coordinates;
the checking the point-distribution validity of the human skeleton point coordinates includes:
acquiring the target action, in the target sport, to which the gesture to be compared belongs; and judging whether the distribution of the human skeleton point coordinates satisfies the point-location distribution relation corresponding to the target action, and if so, determining that the human skeleton point coordinates pass the point-distribution validity check.
Further, the computing module 403 is configured to, for the set of human skeleton point coordinates that pass the validity check, connect and extend the left shoulder point and the right shoulder point, the left hip point and the right hip point, the left knee point and the right knee point, and the left ankle point and the right ankle point, and connect and extend the midpoint of the two shoulders and the midpoint of the two hips, to generate a plurality of auxiliary lines: a shoulder line, a hip line, a knee line, an ankle line, and a central axis.
Further, the comparison module 404 is configured to scale the outline of the figure of the gesture to be compared according to the outline scale of the standard action gesture; after the gesture to be compared has been scaled to the same outline size as the standard action gesture, translate each auxiliary line of the gesture to be compared and the corresponding auxiliary line of the preset standard action gesture in vector space until the two lines share a point; and calculate the included angle between each auxiliary line of the gesture to be compared and the corresponding auxiliary line of the standard action gesture, record as a difference each included angle that exceeds the preset angle threshold, and output the angle difference values.
Further, the auxiliary lines of the preset standard action gesture are stored in an action knowledge base;
the action knowledge base is generated as follows: collecting standard actions performed, in different sports, by different people of different heights and weights; obtaining the joint point coordinates through a human body posture key point recognition model; calculating the included angle between the vector of each of the plurality of auxiliary lines and the horizontal vector for the different actions of the different sports; setting different weight proportions for the plurality of auxiliary lines corresponding to the different actions of the different sports; and generating the action knowledge base from the included angles between the vectors of the plurality of auxiliary lines and the horizontal vector for the different actions of the different sports and from the weight proportions set for the corresponding auxiliary lines.
Further, the device further includes a configuration module, not shown in the drawings, configured to acquire a preset training data set, wherein the training data set comprises: the vector of each of the plurality of auxiliary lines obtained from the human skeleton point coordinates corresponding to different actions, the included angle between each vector and the horizontal vector, and the angle difference values, relative to the preset angle threshold, of the included angles between the auxiliary lines of each action gesture and the auxiliary lines of the standard action gesture in the action knowledge base; check, against the action knowledge base, the validity of the actions in the training data set on the basis of the included angles between the vectors of the auxiliary lines, obtained from the human skeleton point coordinates corresponding to those actions, and the horizontal vector; perform a weighted calculation on the angle difference values corresponding to the actions in the training data set that pass the action validity check, according to the weight proportions of the plurality of auxiliary lines corresponding to the different actions of the different sports in the action knowledge base; and cluster the weighted calculation results with the kmeans clustering algorithm to generate, for the corresponding actions of the different sports, five grade levels ranging from good to poor.
Further, the calculation module 405 is further configured to obtain, from the action knowledge base, the target weight proportion corresponding to each auxiliary line of the target action, in the target sport, to which the gesture to be compared belongs; perform a weighted calculation on the angle difference values according to the target weight proportions; and cluster the current weighted calculation result with the kmeans clustering algorithm to obtain the quantization index of the current action completion degree.
For the device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments for relevant points.
Furthermore, embodiments of the present invention provide a computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, implements the steps of the method as described above.
In this embodiment, if the modules/units integrated in the motion comparison device based on a body state fishbone line are implemented as software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the present invention may implement all or part of the flow of the method of the above embodiment through a computer program instructing related hardware, where the computer program may be stored in a computer-readable storage medium, and the computer program, when executed by a processor, may implement the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content contained in the computer-readable medium may be adjusted appropriately according to the requirements of legislation and patent practice in the relevant jurisdiction; for example, in certain jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunications signals.
In addition, an embodiment of the invention also provides an electronic device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the above method, for example steps S11-S15 shown in fig. 1. Alternatively, the processor, when executing the computer program, may implement the functions of the modules/units in the above embodiment of the motion comparison device based on a body state fishbone line, for example the acquisition module 401, the judging module 402, the computing module 403, the comparison module 404, and the calculation module 405 shown in fig. 4.
The embodiments of the invention provide a motion comparison method and device based on a body state fishbone line. Based on several key human body nodes, the motion state is described and displayed with shoulder-line, hip-line, knee-line, ankle-line, and central-axis auxiliary lines; the visual effect of the action angle difference is amplified by extending the skeleton-node lines of the two-dimensional image, so that small action differences become visible and are displayed in a clear, accurate, and easy-to-understand way; and by comparing the plurality of auxiliary lines of the gesture to be compared one by one with the corresponding auxiliary lines of the preset standard action gesture, the difference between the motion gesture and the standard gesture can be accurately described in two dimensions and a quantization index can be provided for the user, which solves the problem that action comparison could not previously be quantized.
The device embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement this without inventive effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course may be implemented by means of hardware. Based on this understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments.
Furthermore, those skilled in the art will appreciate that while some embodiments herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, any of the claimed embodiments can be used in any combination.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A motion comparison method based on a body state fishbone line, the method comprising:
acquiring a set of human skeleton point coordinates of the gesture to be compared in an image pixel coordinate system;
performing a validity check on the human skeleton point coordinates;
generating a plurality of auxiliary lines from the set of human skeleton point coordinates that pass the validity check;
comparing the plurality of auxiliary lines one by one with the corresponding auxiliary lines of a preset standard action gesture, calculating the included angle between each auxiliary line of the gesture to be compared and the corresponding auxiliary line of the standard action gesture, recording the included angles that exceed a preset angle threshold, and outputting the angle difference values;
and performing a weighted calculation on the angle difference values to obtain a quantization index of the current action completion degree.
2. The method of claim 1, wherein the human skeletal point coordinates comprise: left shoulder point, right shoulder point, left hip point, right hip point, left knee point, right knee point, left ankle point, and right ankle point coordinates.
3. The method of claim 1, wherein the performing a validity check on the human skeleton point coordinates comprises:
checking the validity of the relative positions of the human skeleton point coordinates;
the checking the validity of the relative positions of the human skeleton point coordinates includes:
in the image coordinate system, judging whether the y value of the left shoulder point coordinate is smaller than the y value of the left hip point coordinate, whether the y value of the right shoulder point coordinate is smaller than the y value of the right hip point coordinate, whether the y value of the left knee point coordinate is larger than the y value of the left hip point coordinate, and whether the y value of the right knee point coordinate is larger than the y value of the right hip point coordinate;
and determining that the human skeleton point coordinates pass the relative-position validity check when the y value of the left shoulder point coordinate is smaller than the y value of the left hip point coordinate, the y value of the right shoulder point coordinate is smaller than the y value of the right hip point coordinate, the y value of the left knee point coordinate is larger than the y value of the left hip point coordinate, and the y value of the right knee point coordinate is larger than the y value of the right hip point coordinate.
4. The method of claim 1, wherein the performing a validity check on the human skeleton point coordinates further comprises: checking the point-distribution validity of the human skeleton point coordinates;
the checking the point-distribution validity of the human skeleton point coordinates includes:
acquiring the target action, in the target sport, to which the gesture to be compared belongs; and judging whether the distribution of the human skeleton point coordinates satisfies the point-location distribution relation corresponding to the target action, and if so, determining that the human skeleton point coordinates pass the point-distribution validity check.
5. The method of claim 1, wherein the generating a plurality of auxiliary lines from the set of human skeleton point coordinates that pass the validity check comprises:
for the set of human skeleton point coordinates that pass the validity check, connecting and extending the left shoulder point and the right shoulder point, the left hip point and the right hip point, the left knee point and the right knee point, and the left ankle point and the right ankle point, and connecting and extending the midpoint of the two shoulders and the midpoint of the two hips, to generate a plurality of auxiliary lines: a shoulder line, a hip line, a knee line, an ankle line, and a central axis.
6. The method of claim 1, wherein the comparing the plurality of auxiliary lines one by one with the corresponding auxiliary lines of the preset standard action gesture, calculating the included angle between each auxiliary line of the gesture to be compared and the corresponding auxiliary line of the standard action gesture, recording the included angles that exceed the preset angle threshold, and outputting the angle difference values comprises:
scaling the outline of the figure of the gesture to be compared according to the outline scale of the standard action gesture;
after the gesture to be compared has been scaled to the same outline size as the standard action gesture, translating each auxiliary line of the gesture to be compared and the corresponding auxiliary line of the preset standard action gesture in vector space until the two lines share a point;
and calculating the included angle between each auxiliary line of the gesture to be compared and the corresponding auxiliary line of the standard action gesture, recording as a difference each included angle that exceeds the preset angle threshold, and outputting the angle difference values.
7. The method of claim 6, wherein
the auxiliary lines of the preset standard action gesture are stored in an action knowledge base;
the action knowledge base is generated as follows: collecting standard actions performed, in different sports, by different people of different heights and weights; obtaining the joint point coordinates through a human body posture key point recognition model; calculating the included angle between the vector of each of the plurality of auxiliary lines and the horizontal vector for the different actions of the different sports; setting different weight proportions for the plurality of auxiliary lines corresponding to the different actions of the different sports; and generating the action knowledge base from the included angles between the vectors of the plurality of auxiliary lines and the horizontal vector for the different actions of the different sports and from the weight proportions set for the corresponding auxiliary lines.
8. The method of claim 7, wherein the method further comprises:
acquiring a preset training data set, wherein the training data set comprises: the vector of each of the plurality of auxiliary lines obtained from the human skeleton point coordinates corresponding to different actions, the included angle between each vector and the horizontal vector, and the angle difference values, relative to the preset angle threshold, of the included angles between the auxiliary lines of each action gesture and the auxiliary lines of the standard action gesture in the action knowledge base;
checking, against the action knowledge base, the validity of the actions in the training data set on the basis of the included angles between the vectors of the auxiliary lines, obtained from the human skeleton point coordinates corresponding to those actions, and the horizontal vector;
performing a weighted calculation on the angle difference values corresponding to the actions in the training data set that pass the action validity check, according to the weight proportions of the plurality of auxiliary lines corresponding to the different actions of the different sports in the action knowledge base;
and clustering the weighted calculation results with the kmeans clustering algorithm to generate, for the corresponding actions of the different sports, five grade levels ranging from good to poor.
9. The method of claim 1, wherein the performing a weighted calculation on the angle difference values to obtain a quantization index of the current action completion degree further comprises:
obtaining, from the action knowledge base, the target weight proportion corresponding to each auxiliary line of the target action, in the target sport, to which the gesture to be compared belongs;
performing a weighted calculation on the angle difference values according to the target weight proportions;
and clustering the current weighted calculation result with the kmeans clustering algorithm to obtain the quantization index of the current action completion degree.
10. An action comparison device based on a body state fishbone line, characterized in that the device comprises:
an acquisition module, configured to acquire a group of human skeleton point coordinates of the gesture to be compared in the image pixel coordinate system;
a judging module, configured to judge the legitimacy of the human skeleton point coordinates;
a computing module, configured to generate a plurality of auxiliary lines from the group of human skeleton point coordinates that pass the legitimacy judgment;
a comparison module, configured to sequentially compare the plurality of auxiliary lines with the corresponding auxiliary lines of a preset standard action gesture, calculate the included angle between each auxiliary line of the gesture to be compared and the corresponding standard action gesture auxiliary line, record the included angles that exceed a preset angle threshold, and output angle difference values; and
a calculation module, configured to perform weighted calculation on the angle difference values to obtain a quantization index of the current action completion degree.
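For orientation only, the claimed modules could be arranged as in the following sketch; the class name, the legitimacy rule, and the threshold value are assumptions for illustration, not the patent's implementation:

```python
from dataclasses import dataclass
from typing import List, Sequence, Tuple

Point = Tuple[float, float]

@dataclass
class FishboneComparator:
    """Illustrative pipeline mirroring the claimed modules (names assumed)."""
    standard_angles: Sequence[float]   # per auxiliary line, from the knowledge base
    weights: Sequence[float]           # per auxiliary line, from the knowledge base
    angle_threshold: float = 5.0       # hypothetical preset angle threshold (degrees)

    def acquire(self, frame) -> List[Point]:
        # Acquisition module: delegate to a pose keypoint model (not shown here).
        raise NotImplementedError("call a human posture key point model here")

    def is_legitimate(self, points: List[Point]) -> bool:
        # Judging module: minimal placeholder check that all coordinates exist
        # and lie inside the image (non-negative pixel coordinates).
        return all(x >= 0 and y >= 0 for x, y in points)

    def compare(self, pose_angles: Sequence[float]) -> List[float]:
        # Comparison module: record only deviations exceeding the threshold.
        return [abs(a - s) if abs(a - s) > self.angle_threshold else 0.0
                for a, s in zip(pose_angles, self.standard_angles)]

    def quantify(self, angle_diffs: Sequence[float]) -> float:
        # Calculation module: weighted sum of the recorded angle differences.
        return sum(d * w for d, w in zip(angle_diffs, self.weights))
```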
CN202310018949.3A 2023-01-06 2023-01-06 Action comparison method and device based on body state fishbone line Active CN116052273B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310018949.3A CN116052273B (en) 2023-01-06 2023-01-06 Action comparison method and device based on body state fishbone line

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310018949.3A CN116052273B (en) 2023-01-06 2023-01-06 Action comparison method and device based on body state fishbone line

Publications (2)

Publication Number Publication Date
CN116052273A true CN116052273A (en) 2023-05-02
CN116052273B CN116052273B (en) 2024-03-08

Family

ID=86129110

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310018949.3A Active CN116052273B (en) 2023-01-06 2023-01-06 Action comparison method and device based on body state fishbone line

Country Status (1)

Country Link
CN (1) CN116052273B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120054708A1 (en) * 2010-08-27 2012-03-01 International Business Machines Corporation Electronic design automation object placement with partially region-constrained objects
CN110495889A (en) * 2019-07-04 2019-11-26 平安科技(深圳)有限公司 Postural assessment method, electronic device, computer equipment and storage medium
WO2020177498A1 (en) * 2019-03-04 2020-09-10 南京邮电大学 Non-intrusive human body thermal comfort detection method and system based on posture estimation
CN111666844A (en) * 2020-05-26 2020-09-15 电子科技大学 Badminton player motion posture assessment method
CN111881887A (en) * 2020-08-21 2020-11-03 董秀园 Multi-camera-based motion attitude monitoring and guiding method and device
CN111914598A (en) * 2019-05-09 2020-11-10 北京四维图新科技股份有限公司 Method, device and equipment for detecting key points of continuous frame human face and storage medium
CN113807289A (en) * 2021-09-24 2021-12-17 杭州晟冠科技有限公司 Human body posture detection method and device, electronic equipment and storage medium
CN114119911A (en) * 2020-08-27 2022-03-01 北京陌陌信息技术有限公司 Human body model neural network training method, device and storage medium
CN114842459A (en) * 2022-03-31 2022-08-02 上海商汤临港智能科技有限公司 Motion detection method, motion detection device, electronic device, and storage medium
CN115019399A (en) * 2022-06-24 2022-09-06 北京工业大学 Human body posture detection method
US20220410000A1 (en) * 2019-07-09 2022-12-29 Sony Interactive Entertainment Inc. Skeleton model updating apparatus, skeleton model updating method, and program

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120054708A1 (en) * 2010-08-27 2012-03-01 International Business Machines Corporation Electronic design automation object placement with partially region-constrained objects
WO2020177498A1 (en) * 2019-03-04 2020-09-10 南京邮电大学 Non-intrusive human body thermal comfort detection method and system based on posture estimation
CN111914598A (en) * 2019-05-09 2020-11-10 北京四维图新科技股份有限公司 Method, device and equipment for detecting key points of continuous frame human face and storage medium
CN110495889A (en) * 2019-07-04 2019-11-26 平安科技(深圳)有限公司 Postural assessment method, electronic device, computer equipment and storage medium
WO2021000401A1 (en) * 2019-07-04 2021-01-07 平安科技(深圳)有限公司 Posture assessment method, electronic apparatus, computer device, and storage medium
US20220410000A1 (en) * 2019-07-09 2022-12-29 Sony Interactive Entertainment Inc. Skeleton model updating apparatus, skeleton model updating method, and program
CN111666844A (en) * 2020-05-26 2020-09-15 电子科技大学 Badminton player motion posture assessment method
CN111881887A (en) * 2020-08-21 2020-11-03 董秀园 Multi-camera-based motion attitude monitoring and guiding method and device
CN114119911A (en) * 2020-08-27 2022-03-01 北京陌陌信息技术有限公司 Human body model neural network training method, device and storage medium
CN113807289A (en) * 2021-09-24 2021-12-17 杭州晟冠科技有限公司 Human body posture detection method and device, electronic equipment and storage medium
CN114842459A (en) * 2022-03-31 2022-08-02 上海商汤临港智能科技有限公司 Motion detection method, motion detection device, electronic device, and storage medium
CN115019399A (en) * 2022-06-24 2022-09-06 北京工业大学 Human body posture detection method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHUNYU WANG et al.: "An Approach to Pose-Based Action Recognition", 2013 IEEE Conference on Computer Vision and Pattern Recognition, vol. 2013, 3 October 2013 (2013-10-03) *
JI YUEPENG: "Research on Golf Swing Action Comparison and Analysis Based on Video Human Pose Estimation" (基于视频人体姿态估计的高尔夫挥杆动作比对分析研究), China Master's Theses Full-text Database, Social Sciences II, vol. 2020, no. 2, 15 February 2020 (2020-02-15), pages 134-437 *

Also Published As

Publication number Publication date
CN116052273B (en) 2024-03-08

Similar Documents

Publication Publication Date Title
CN111881705B (en) Data processing, training and identifying method, device and storage medium
CN110495889B (en) Posture evaluation method, electronic device, computer device, and storage medium
CN111460872B (en) Image processing method and device, image equipment and storage medium
Bianchi et al. Simultaneous topology and stiffness identification for mass-spring models based on fem reference deformations
JP7160932B2 (en) Generating prescriptive analytics using motion identification and motion information
CN105930767A (en) Human body skeleton-based action recognition method
CN111860157A (en) Motion analysis method, device, equipment and storage medium
CN111507184B (en) Human body posture detection method based on parallel cavity convolution and body structure constraint
US20220222975A1 (en) Motion recognition method, non-transitory computer-readable recording medium and information processing apparatus
CN116052273B (en) Action comparison method and device based on body state fishbone line
CN114093032A (en) Human body action evaluation method based on action state information
CN111353345B (en) Method, apparatus, system, electronic device, and storage medium for providing training feedback
CN111353347B (en) Action recognition error correction method, electronic device, and storage medium
US20230145451A1 (en) Monitoring exercise activity in a gym environment
CN116236208A (en) Multi-lead electrocardio electrode patch positioning method based on human body surface characteristics
CN110543845A (en) Face cascade regression model training method and reconstruction method for three-dimensional face
CN116012942A (en) Sign language teaching method, device, equipment and storage medium
CN112257642B (en) Human body continuous motion similarity evaluation method and evaluation device
CN114092863A (en) Human body motion evaluation method for multi-view video image
CN114360060B (en) Human body action recognition and counting method
CN114377373B (en) Method, system and equipment for analyzing take-off characteristics
Scandrett et al. Towards a semi-automatic method for the statistically rigorous ageing of the human face
CN112419112B (en) Method and device for generating academic growth curve, electronic equipment and storage medium
CN111476115B (en) Human behavior recognition method, device and equipment
Juan [Retracted] Convolutional Neural Network and Computer Vision‐Based Action Correction Method for Long‐Distance Running Technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 816, 7th Floor, Building 2, No. 128 South Fourth Ring West Road, Fengtai District, Beijing, 100160

Applicant after: Beijing Titi Technology Co.,Ltd.

Address before: 703B, North Block, 6th Floor, Building 4, Yard 1, Automobile Museum East Road, Fengtai District, Beijing, 100160

Applicant before: Beijing Titi Technology Co.,Ltd.

GR01 Patent grant