CN113398556A - Push-up identification method and system - Google Patents

Push-up identification method and system

Info

Publication number
CN113398556A
Authority
CN
China
Prior art keywords
evaluation
distance
action
push
angle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110721723.0A
Other languages
Chinese (zh)
Other versions
CN113398556B (en)
Inventor
叶生晅
方云浩
季彬浩
皇甫江涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Humpback Technology Co ltd
Zhejiang University ZJU
Original Assignee
Hangzhou Humpback Technology Co ltd
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Humpback Technology Co ltd, Zhejiang University ZJU filed Critical Hangzhou Humpback Technology Co ltd
Priority to CN202110721723.0A priority Critical patent/CN113398556B/en
Publication of CN113398556A publication Critical patent/CN113398556A/en
Application granted granted Critical
Publication of CN113398556B publication Critical patent/CN113398556B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63B: APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B71/00: Games or sports accessories not covered in groups A63B1/00 - A63B69/00
    • A63B71/06: Indicating or scoring devices for games or players, or for other sports activities
    • A63B71/0619: Displays, user interfaces and indicating devices, specially adapted for sport equipment, e.g. display mounted on treadmills
    • A63B2071/065: Visualisation of specific exercise parameters

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a push-up identification method and system. The method comprises a geometric feature extraction step and an evaluation step, the evaluation step specifically comprising: detecting whether the posture is standard based on the angle features, and updating the angle evaluation identifier when the posture is judged not to be standard; detecting whether the movement direction has reversed based on the speed features; when the movement direction has reversed, detecting whether the action is qualified based on the distance features, and updating the distance evaluation identifier when the action is judged unqualified; and when the movement direction is consistent with a preset discrimination direction, detecting whether a discrimination identifier exists: when it does not exist, recording the discrimination identifier; when it exists, outputting an evaluation result based on the obtained evaluation identifiers and resetting them. The invention automatically identifies each complete push-up in a motion image sequence through the discrimination identifier, and automatically evaluates whether each complete push-up is standard through the angle evaluation identifier and the distance evaluation identifier.

Description

Push-up identification method and system
Technical Field
The invention relates to the field of image processing, in particular to a push-up identification method and a push-up identification system.
Background
The push-up is the most basic bodyweight training exercise. It strengthens core stability, builds strength in the chest, triceps brachii and shoulders, and improves shoulder-joint mobility, all without extra fitness equipment, so nowadays people commonly exercise at home with push-ups.
However, the push-up is also one of the exercises most prone to errors, and a non-standard push-up fails to achieve the purpose of the training. Current fitness guidance apps such as Keep can only show a user the standard push-up posture; they cannot supervise the user's actual movement.
Disclosure of Invention
Aiming at the defect in the prior art that only the standard push-up posture is displayed to the user, without supervising whether the user actually moves according to the standard, the invention provides a push-up recognition technique.
To solve this technical problem, the invention adopts the following technical scheme:
a push-up identification method comprises the following steps:
acquiring a plurality of moving images arranged according to time, and extracting geometric features of each moving image, wherein the geometric features comprise angle features, distance features and speed features;
and sequentially evaluating the moving images based on the geometric features, and outputting corresponding evaluation results, wherein the moving images are evaluated according to the following steps:
detecting whether the posture is standard based on the angle features, and updating the angle evaluation identifier when the posture is judged not to be standard;
detecting whether the movement direction has reversed based on the speed features;
when the movement direction has not reversed, evaluating the geometric features of the next frame of moving image;
when the movement direction has reversed:
detecting whether the action is qualified based on the distance features, and updating the distance evaluation identifier when the action is judged unqualified;
comparing the movement direction with a preset discrimination direction;
when the movement direction is inconsistent with the preset discrimination direction, evaluating the geometric features of the next frame of moving image;
and when the movement direction is consistent with the preset discrimination direction, detecting whether a discrimination identifier exists; when it does not exist, recording the discrimination identifier; when it exists, outputting an evaluation result based on the obtained angle evaluation identifier and distance evaluation identifier, and resetting the angle evaluation identifier and the distance evaluation identifier.
As an implementable embodiment:
and extracting the bone points of each moving image to obtain corresponding bone point data, wherein the bone point data comprises the types of the bone points and the corresponding three-dimensional coordinates.
As an implementable embodiment, the angle features include the knee bone point angle and the hip bone point angle;
the knee bone point angle is compared with a preset knee angle threshold and the hip bone point angle with a preset hip angle threshold; when the knee bone point angle is smaller than the knee angle threshold or the hip bone point angle is smaller than the hip angle threshold, the posture is judged not to be standard and the angle evaluation identifier is updated.
As an implementable embodiment:
the method for extracting the distance features comprises the following steps:
extracting three-dimensional coordinates of corresponding bone points from the bone point data based on a preset distance evaluation type, and calculating the distance between the bone points and the ground to obtain corresponding distance characteristics;
whether the action is qualified is detected based on the distance features, and when the action is judged unqualified, the distance evaluation identifier is updated as follows:
the movement direction is obtained based on the speed features, a preset distance threshold is extracted based on the movement direction, the action is judged unqualified when the distance feature fails to match the preset distance threshold, and the distance evaluation identifier is updated.
As an implementation manner, the speed features include the velocity vector of each bone point, extracted as follows:
the bone point data corresponding to the k-th frame and the (k+1)-th frame of moving image are extracted, and the velocity vector of each bone point corresponding to the k-th frame of moving image is calculated.
As an implementable embodiment:
the speed features further comprise a velocity gradient vector for each bone point, obtained by differentiating the corresponding velocity vector with respect to time;
the geometric features further comprise an elbow bone point angle, and before detecting whether the movement direction has reversed based on the speed features, the method further comprises:
judging whether the acceleration direction has reversed based on the velocity gradient vector; when the acceleration direction has reversed, comparing the elbow bone point angle with a preset elbow angle threshold, and updating the angle evaluation identifier when the elbow bone point angle exceeds the elbow angle threshold.
As an implementable manner, after the evaluation result is output based on the obtained angle evaluation identifier and distance evaluation identifier, the method further comprises an evaluation verification step, and the specific steps are as follows:
extracting a time point corresponding to the current frame, and taking the time point as an ending time point of the current push-up action and an initial time point of the next push-up action;
extracting a starting time point of a current push-up action, extracting speed characteristics and distance characteristics of all moving images between the starting time point and an ending time point of the current push-up action, extracting key point speed characteristics from the speed characteristics based on a preset key part, forming an action frame based on the corresponding key point speed characteristics and distance characteristics, and obtaining a first action frame sequence;
extracting a speed feature with the direction of a speed vector or a speed gradient vector reversed from the first action frame sequence to obtain a second action frame sequence, wherein the second action frame sequence comprises 5 action frames ordered in time;
and inputting the second action frame sequence into a pre-constructed recognition model, and outputting a standard or non-standard recognition result by the recognition model.
As an implementation manner, the construction steps of the recognition model are as follows:
acquiring a sample moving image sequence which comprises a plurality of sample moving images which are ordered according to time;
extracting skeleton points of the sample motion image sequence to obtain a sample skeleton point sequence;
extracting geometric features of the sample skeleton point sequence to obtain a first sample frame sequence, wherein the first sample frame sequence comprises a plurality of sample frames which are arranged according to a time sequence, and all the sample frames comprise a key point speed feature and a distance feature;
extracting sample frames with changed motion direction or acceleration direction from the first sample frame sequence to obtain a second sample frame sequence;
splitting the second sample frame sequence into a plurality of sub-sample frame sequences, wherein each sub-sample frame sequence comprises 5 sample frames;
labeling each sub-sample frame sequence with a sample label, wherein the sample label indicates whether the push-up action corresponding to the sub-sample frame sequence is standard;
And training by using the sub-sample frame sequence and the sample label to obtain a recognition model.
As an implementable embodiment:
the recognition model is an HMM model, an LSTM-FCN model or an SVM model.
The invention also provides a push-up recognition system, which comprises:
a feature extraction module, for acquiring a plurality of frames of moving images arranged according to time and extracting the geometric features of each moving image, wherein the geometric features comprise angle features, distance features and speed features;
an evaluation module, for sequentially evaluating the actions in each moving image based on the geometric features and outputting corresponding evaluation results;
the evaluation module comprises an angle judgment unit, a reversal judgment unit, a distance evaluation unit, a direction comparison unit, a discrimination identifier detection unit and an evaluation unit;
the angle judgment unit is used for detecting whether the posture is standard based on the angle features, and updating an angle evaluation identifier when the posture is judged not to be standard, wherein the angle evaluation identifier indicates whether the posture is standard;
the reversal judgment unit is used for detecting whether the movement direction has reversed based on the speed features;
the distance evaluation unit is used for detecting, when the movement direction has reversed, whether the action is qualified based on the distance features, and updating a distance evaluation identifier when the action is judged unqualified, wherein the distance evaluation identifier indicates whether the action is qualified;
the direction comparison unit is used for comparing the movement direction with a preset discrimination direction;
the discrimination identifier detection unit is used for detecting whether a discrimination identifier exists when the movement direction is consistent with the preset discrimination direction;
and the evaluation unit is used for recording the discrimination identifier when it does not exist, and for outputting an evaluation result based on the obtained angle evaluation identifier and distance evaluation identifier and clearing both identifiers when it exists.
Due to the adoption of the technical scheme, the invention has the remarkable technical effects that:
according to the invention, through the design of the distinguishing mark, each complete push-up in the motion image sequence can be automatically identified, and through the design of the angle evaluation mark and the distance evaluation mark, the evaluation and feedback of the posture and the action in the action period of one complete push-up are realized, so that the user is supervised and prompted to correct.
Through the design of the evaluation verification step, an HMM (hidden Markov model) is used to model the push-up process and further identify each detected complete push-up action, improving the accuracy of the push-up evaluation.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a schematic view of an identification process of a push-up identification method of the present invention;
FIG. 2 is a schematic view of a module connection of a push-up recognition system of the present invention;
FIG. 3 is a block diagram of the evaluation module 200 of FIG. 2.
Detailed Description
The present invention will be described in further detail with reference to examples, which are illustrative of the present invention and are not to be construed as being limited thereto.
Embodiment 1, a push-up identification method, comprising the following steps:
S100, acquiring a plurality of moving images arranged according to time, and extracting the geometric features of each moving image, wherein the geometric features comprise angle features, distance features and speed features;
S200, sequentially evaluating each moving image based on the geometric features, and outputting corresponding evaluation results;
as shown in fig. 1, each moving image is evaluated according to the following steps:
S210, detecting whether the posture is standard:
detecting whether the posture is standard based on the angle features, updating the angle evaluation identifier when the posture is judged not to be standard, and performing step S220;
the angle evaluation identifier in this embodiment is a variable flag1 with initial value zero; when the posture is judged not to be standard based on the angle features, the value of flag1 is updated to 1.
S220, whether the motion direction is reversed:
detecting whether the movement direction is overturned based on the speed characteristics;
when the image is not turned over, finishing the evaluation of the current moving image, acquiring the geometric characteristics of the next frame of moving image, and repeating the step;
when the motion direction is reversed, the step S230 is performed;
S230, detecting whether the action is qualified:
detecting whether the action is qualified based on the distance features, updating the distance evaluation identifier when the action is judged unqualified, and performing step S240;
in this embodiment, the distance evaluation identifier is a variable flag2 with initial value zero; when the action is judged unqualified based on the distance features, the value of flag2 is updated to 1;
a reversal of the movement direction indicates that the user has bent the elbows down to the lowest position or straightened them up to the highest position; therefore, in this embodiment the distance between the body and the ground is evaluated at each reversal of the movement direction, so as to judge whether the push-up action is performed fully and is qualified.
S240, whether the point is a judgment point:
acquiring a preset judging direction, and comparing the motion direction with the preset judging direction;
the technicians in the field can set the judging direction according to the actual needs, and the judging direction is used for indicating the starting direction of the push-up action;
the push-up movement comprises a descending action and an upward pushing action, and the action direction of the descending action is set as a judging direction in the embodiment, so that when the descending action is detected, a new push-up action can be determined to be started, and the new push-up action is taken as a judging point for detecting the complete action of the push-up;
when the motion direction is inconsistent with the preset judgment direction, indicating that the current frame is not a judgment point in the period of the current push-up action, ending the evaluation of the current motion image, acquiring the geometric characteristics of the next frame of motion image, and entering step S210;
when the moving direction is consistent with the preset judging direction, a new push-up action is performed, the current frame is a judging point, and the step S250 is executed;
S250, whether the push-up action is complete:
detecting whether a discrimination identifier exists;
when the discrimination identifier does not exist, recording the discrimination identifier;
when the discrimination identifier exists, outputting an evaluation result based on the obtained angle evaluation identifier and distance evaluation identifier, and resetting the angle evaluation identifier and the distance evaluation identifier.
The evaluation result may include, for example, a standard evaluation result and a counting result. As shown in FIG. 1, when a non-standard posture or an unqualified action occurred during the current push-up action, a non-standard evaluation result is output and the current push-up is not counted; otherwise, a standard evaluation result is output and the current push-up is counted.
The discrimination identifier is initially empty. When the push-up starts, the user performs a descending action; the movement direction changes and is consistent with the preset discrimination direction, but no discrimination identifier exists yet, so the discrimination identifier is recorded to mark the start of the push-up action.
After one push-up is completed, the user performs a descending action again; detecting that the discrimination identifier already exists indicates that the user has completed one full push-up and is starting the next one. Therefore the obtained evaluation identifiers are first extracted to judge whether that push-up was standard, the corresponding evaluation result is fed back to the user as a prompt to correct the push-up action, and then the evaluation identifiers are cleared so as to evaluate the next push-up.
The complete push-up action comprises:
the initial state, i.e. the preparation state: both hands support the ground and the body keeps a straight line;
the first movement process: bending the elbows to lower the chest toward the ground; the movement direction is downward, and the descending body first accelerates and then decelerates;
the lowest state: the chest has descended close to the ground, and the height of the buttocks stays consistent with the rest of the body;
the second movement process: pushing the body upward back to the preparation state; the movement direction is upward, and the rising body first accelerates and then decelerates;
the final state, which is the initial state of the next push-up action.
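For concreteness, the following is a minimal Python sketch of the per-frame evaluation loop of steps S210 to S250 described above. It is an illustration only, not the patented implementation: each frame is assumed to carry precomputed judgments, and all field and variable names are assumptions of this sketch.

```python
# A minimal sketch (not the patented implementation) of the per-frame
# evaluation loop of steps S210-S250. Field names are illustrative.
def evaluate_sequence(frames, judge_direction="down"):
    flag1 = 0            # angle evaluation identifier (posture)
    flag2 = 0            # distance evaluation identifier (action)
    in_progress = False  # discrimination identifier
    count = 0
    for f in frames:
        if not f["posture_ok"]:                # S210: angle features
            flag1 = 1
        if not f["reversed"]:                  # S220: speed features
            continue                           # evaluate the next frame
        if not f["action_ok"]:                 # S230: distance features
            flag2 = 1
        if f["direction"] != judge_direction:  # S240: discrimination direction
            continue
        if not in_progress:                    # S250: first descent observed
            in_progress = True                 # record the discrimination identifier
        else:                                  # one complete push-up has elapsed
            if flag1 == 0 and flag2 == 0:
                count += 1                     # standard: count the repetition
            flag1 = flag2 = 0                  # reset the evaluation identifiers
    return count
```

With this structure, a completed repetition is recognized the second time a descending action is detected, exactly when the discrimination identifier is found to already exist.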
The above push-up identification method addresses the difficulty that a user performing push-up training can hardly check by himself whether each action is standard, in which case the aim of the exercise cannot be achieved.
Further, the specific steps of extracting the geometric features of each moving image in step S100 are:
S110, extracting bone points from each moving image to obtain corresponding bone point data, wherein the bone point data comprises the types of the bone points and the corresponding three-dimensional coordinates;
the step specifically comprises:
S111, extracting the 2D bone points of each moving image based on the existing published OpenPose model;
S112, repairing the 2D bone points of each moving image to obtain the corresponding bone point data, comprising the types of the bone points and the corresponding three-dimensional coordinates;
due to the fact that the openpos model is difficult to find the bone points of the shielded part, the extraction effect of the bone points is easily influenced by the environment, when the push-up training is carried out, the body overlapping part is as high as 50%, and the accuracy of the 2D bone points extracted by the openpos model is low, so that the accuracy of push-up recognition can be influenced.
Therefore, in the embodiment, the obtained bone points are repaired and converted into three-dimensional data, so as to solve the defects.
A person skilled in the art can select any one of the existing repair methods to repair 2D bone points according to actual situations, for example, an existing 2D-to-3D conversion module is used for repairing, an optical flow method is used to form a vector diagram for repairing, a linear repair method can also be used, a position of each bone point in two adjacent frames (a previous frame and a next frame) of 2D bone point data and a speed characteristic corresponding to the bone point are obtained, and the bone point is completely repaired, which is not specifically limited in this embodiment.
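As an illustration of the linear repair option, the sketch below fills a bone point missing in one frame from its positions in the previous and next frames. It assumes missing joints are marked as NaN and a (frames x joints x 2) array layout; both are assumptions of this sketch, not specified by the embodiment.

```python
import numpy as np

# Minimal sketch of linear repair: a joint missing in frame t (marked NaN)
# is filled from its positions in frames t-1 and t+1.
def repair_keypoints(kp):                      # kp: (T, J, 2) 2D bone points
    kp = kp.copy()
    for t in range(1, kp.shape[0] - 1):
        missing = np.isnan(kp[t]).any(axis=1)  # joints not found by OpenPose
        # midpoint of the neighbouring frames approximates the missing position
        kp[t, missing] = 0.5 * (kp[t - 1, missing] + kp[t + 1, missing])
    return kp

kp = np.zeros((3, 2, 2))
kp[1, 0] = np.nan                              # joint lost in the middle frame
print(repair_keypoints(kp)[1, 0])              # filled with the neighbour midpoint
```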
In this embodiment, the positions of the neck, shoulders, elbows, wrists, both sides of the hips, knees and feet of the human body in the moving image are extracted as bone points;
S120, constructing geometric features based on the bone point data to obtain the geometric features corresponding to each moving image;
the technical personnel in the field can increase or reduce the positions of the bone points to be extracted according to actual needs, and set the angle characteristics, distance characteristics and speed characteristics to be extracted according to actual conditions, and only the angle characteristics can reflect the posture of the human body, the distance characteristics can reflect the distance between the human body and the ground, and the speed characteristics can reflect the motion direction of the human body.
Further, the specific steps of constructing geometric features based on the bone point data in step S120 are:
S121, extracting angle features:
the angle features comprise the knee bone point angle and the hip bone point angle;
the knee bone point angle is the angle between the line segment connecting the foot bone point and the knee bone point and the line segment connecting the knee bone point and the hip bone point;
the hip bone point angle is the angle between the line segment connecting the knee bone point and the hip bone point and the line segment connecting the hip bone point and the shoulder bone point; in both cases the angle is the non-reflex angle (at most a straight angle) formed by the corresponding three bone points.
During push-up training the human body should keep a straight line from the shoulders to the ankles; in this embodiment, whether the posture of the human body in each frame of moving image is standard is judged through the knee bone point angle and the hip bone point angle during the movement.
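A minimal sketch of the three-point angle computation underlying this posture check follows; the 150-degree thresholds follow the values given later in this embodiment (step S210), while the coordinates and helper name are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the three-point joint angle used for the posture check.
def joint_angle(a, b, c):                      # angle at b, in degrees
    u, v = a - b, c - b
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))  # non-reflex angle

foot = np.array([0.00, 0.0, 0.05])             # illustrative 3D coordinates
knee = np.array([0.45, 0.0, 0.12])
hip = np.array([0.90, 0.0, 0.20])
shoulder = np.array([1.40, 0.0, 0.30])

flag1 = 0
if joint_angle(foot, knee, hip) < 150 or joint_angle(knee, hip, shoulder) < 150:
    flag1 = 1                                  # posture not standard
```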
S122, extracting distance features:
extracting three-dimensional coordinates of corresponding bone points from the bone point data based on a preset distance evaluation type, and calculating the distance between the bone points and the ground to obtain corresponding distance characteristics;
The distance evaluation type can be set by a person skilled in the art according to actual needs; in this embodiment the distance evaluation type is the shoulder, that is, the three-dimensional coordinates of the shoulder bone point are extracted from the obtained bone point data, the distance from the shoulder to the ground is calculated, and this distance is used as the distance feature.
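For illustration, if the ground is taken as the z = 0 plane (an assumption of this sketch, since the embodiment does not fix a coordinate convention), the distance feature reduces to the z-coordinate of the shoulder bone point:

```python
import numpy as np

# Minimal sketch of the distance feature with the ground as the z = 0 plane.
shoulder = np.array([1.40, 0.0, 0.35])   # illustrative 3D coordinates
distance_feature = float(shoulder[2])    # distance from shoulder to ground
```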
S123, extracting speed features:
the speed characteristics comprise speed vectors and speed gradient vectors of all skeleton points;
the speed feature corresponding to the k-th frame of moving image is extracted in the following manner:
and extracting the bone point data corresponding to the k frame of moving image and the k +1 frame of moving image, and calculating to obtain the velocity vector of each bone point, wherein k is more than 0 and less than m, and m is the total number of the moving images.
Let the three-dimensional coordinates of a bone point be (x, y, z), the coordinates of the same bone point in the next frame be (x', y', z'), and the time interval between the two frames be Δt. The velocity vector v corresponding to the bone point is calculated as:

v = ( (x' - x) / Δt, (y' - y) / Δt, (z' - z) / Δt )
In practical applications, one or more bone points are specified and the movement direction is determined from the velocity vector v of the specified bone points. For example, the sign of the z-component v_z of the corresponding velocity vector determines the movement direction, and a change in the sign of v_z (or, more generally, a change in the direction of v) indicates a reversal of the movement direction. When two bone points with velocity vectors v1 and v2 are specified, a change in the direction of either one is judged as a reversal of the movement direction, after which the velocity vector of the specified bone point determines the new movement direction.
Note that when v_z changes from 0 to a positive or a negative value, the direction is still judged to have changed.
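The following sketch illustrates the velocity formula and the reversal test for a single specified bone point such as the shoulder; the frame rate and coordinates are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of velocity extraction and reversal detection for one
# bone point. The sign of the z-component v_z gives the movement
# direction; a sign change between consecutive frames marks a reversal.
def velocity(p_k, p_k1, dt):                 # 3D coords of frames k and k+1
    return (p_k1 - p_k) / dt

def reversed_direction(vz_prev, vz_curr):
    # a change from 0 to a nonzero value also counts as a direction change
    return np.sign(vz_prev) != np.sign(vz_curr)

dt = 1.0 / 30.0                              # assumed 30 fps frame interval
shoulder_prev = np.array([0.0, 0.0, 0.42])
shoulder_curr = np.array([0.0, 0.0, 0.39])   # descending
shoulder_next = np.array([0.0, 0.0, 0.41])   # rising again
v1 = velocity(shoulder_prev, shoulder_curr, dt)
v2 = velocity(shoulder_curr, shoulder_next, dt)
print(reversed_direction(v1[2], v2[2]))      # True: lowest point reached
```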
Further, in step S210 whether the posture is standard is detected based on the angle features; when the posture is judged not to be standard, the angle evaluation identifier is updated as follows:
the knee bone point angle is compared with the preset knee angle threshold and the hip bone point angle with the preset hip angle threshold; when the knee bone point angle is smaller than the knee angle threshold or the hip bone point angle is smaller than the hip angle threshold, the posture is judged not to be standard and the angle evaluation identifier is updated.
The person skilled in the art can set the knee angle threshold and the hip angle threshold by himself or herself according to actual situations, and in this embodiment, the knee angle threshold and the hip angle threshold are both set to 150 °.
Those skilled in the art can set the updating mode of the angle evaluation identifier according to the actual need, for example:
only one angle evaluation identifier is provided, with initial value 0; it is updated to 1 when the knee bone point angle is smaller than the knee angle threshold or the hip bone point angle is smaller than the hip angle threshold; when step S250 detects that the angle evaluation identifier is 1, the posture was not standard at some point during the whole push-up cycle, and corresponding feedback is given to the user.
Alternatively, each angle feature has its own angle evaluation identifier with initial value 0; when an angle is smaller than its preset threshold, the corresponding identifier is updated to 1; in step S250 all angle evaluation identifiers are traversed, and feedback is given to the user based on those with value 1, indicating which posture problem occurred during the push-up cycle, for example the hips not keeping a straight line with the body.
Further, in step S230 whether the action is qualified is detected based on the distance features; when the action is judged unqualified, the distance evaluation identifier is updated as follows:
the movement direction is obtained based on the speed features, a preset distance threshold is extracted based on the movement direction, and when the distance feature fails to match the preset distance threshold, the action is judged unqualified and the distance evaluation identifier is updated.
In this embodiment, a velocity vector of a corresponding bone point in the current frame is calculated based on the three-dimensional coordinates of the bone point in the next frame, so the velocity vector is used to indicate the motion direction of the motion to be performed;
that is, when the user is in the initial state or the final state, the moving direction is downward, and when the user is in the lowest state, the moving direction is upward;
in this embodiment the distance threshold includes a first distance threshold and a second distance threshold, the first being greater than the second; when the movement direction is downward the first distance threshold is extracted, otherwise the second. When the distance from the shoulder bone point to the ground is smaller than the first distance threshold, the action is judged unqualified and the distance evaluation identifier is updated;
when the distance from the shoulder bone point to the ground is larger than the second distance threshold, the action is judged unqualified and the distance evaluation identifier is updated.
The first and second distance thresholds may be set by a person skilled in the art according to actual needs; for example, the distance from the shoulder to the ground when the user begins the push-up may be used as the first distance threshold, or the user's arm length may be obtained and the first distance threshold calculated from it with a preset weight coefficient.
Those skilled in the art can set the updating mode of the distance evaluation identifier according to actual needs, for example:
only one distance evaluation identifier is provided, with initial value 0; it is updated to 1 when the distance feature fails to match the preset distance threshold; when step S250 detects that the distance evaluation identifier is 1, the action was unqualified at some point during the whole push-up cycle, and corresponding feedback is given to the user.
Alternatively, each movement direction corresponds to its own distance evaluation identifier with initial value 0; when the distance feature fails to match the corresponding distance threshold, the corresponding identifier is updated to 1; in step S250 all distance evaluation identifiers are traversed, and feedback is given to the user based on those with value 1.
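A minimal sketch of the distance check at a reversal (step S230), using one distance evaluation identifier; the threshold values are illustrative assumptions, not values from the patent.

```python
# Minimal sketch of the distance check at a reversal of the movement
# direction (step S230). Threshold values are illustrative assumptions.
FIRST_THRESHOLD = 0.40    # minimum shoulder height at the top, metres
SECOND_THRESHOLD = 0.15   # maximum shoulder height at the bottom, metres

def action_qualified(shoulder_height, direction):
    if direction == "down":                        # at the top, about to descend
        return shoulder_height >= FIRST_THRESHOLD  # arms fully extended
    return shoulder_height <= SECOND_THRESHOLD     # at the bottom: low enough

flag2 = 0
if not action_qualified(0.22, "up"):   # bottom reached, but chest too high
    flag2 = 1                          # update the distance evaluation identifier
```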
Further, the speed features also comprise a velocity gradient vector for each bone point, obtained by differentiating the corresponding velocity vector with respect to time;
the velocity gradient vector Δv corresponding to a bone point is:

Δv = (v_{k+1} - v_k) / Δt

where v_k and v_{k+1} are the velocity vectors of the bone point in two consecutive frames, and Δt is the interval time used for calculating the velocity vectors.
In this embodiment, the velocity vector and the velocity gradient vector are three-dimensional vectors, the velocity gradient vector is used to represent the gradient of the velocity vector corresponding to the bone point, the velocity gradient vector is used to identify the motion image frame corresponding to the intermediate state, where the intermediate state refers to a state where the direction of the motion acceleration is reversed in the first motion process or the second motion process, that is, a frame where positive and negative changes occur in the z direction of the velocity gradient vector.
Further, the geometric features also include the elbow bone point angle;
the elbow bone point angle is the angle between the line segment connecting the shoulder bone point and the elbow bone point and the line segment connecting the elbow bone point and the wrist bone point.
Further, step S210 also includes updating the angle evaluation identifier based on the elbow bone point angle, specifically:
judging whether the acceleration direction has reversed based on the velocity gradient vector; when the acceleration direction has reversed, comparing the elbow bone point angle with a preset elbow angle threshold, and updating the angle evaluation identifier when the elbow bone point angle exceeds the elbow angle threshold.
When the value of the z direction of the corresponding velocity gradient vector changes from positive to negative or from negative to positive, it is determined that the direction of the acceleration is reversed, and the corresponding motion image is in the intermediate state.
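A minimal sketch of this intermediate-state check follows; the sample velocities and the 90-degree elbow threshold are illustrative assumptions (the embodiment leaves the elbow angle threshold to be preset).

```python
import numpy as np

# Minimal sketch of the intermediate-state check: when the z-component of
# the velocity gradient changes sign, the elbow bone point angle is
# compared with the elbow angle threshold.
dt = 1.0 / 30.0
v_prev = np.array([0.0, 0.0, -0.20])   # descending, speeding up
v_curr = np.array([0.0, 0.0, -0.50])
v_next = np.array([0.0, 0.0, -0.30])   # descending, slowing down

dv1 = (v_curr - v_prev) / dt           # velocity gradient vectors
dv2 = (v_next - v_curr) / dt
accel_reversed = np.sign(dv1[2]) != np.sign(dv2[2])

ELBOW_THRESHOLD = 90.0                 # assumed threshold, degrees
elbow_angle = 120.0                    # e.g. from the joint_angle helper above
flag1 = 0
if accel_reversed and elbow_angle > ELBOW_THRESHOLD:
    flag1 = 1                          # update the angle evaluation identifier
```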
Further, the evaluation result in step S250 includes a standard evaluation result;
judging whether the posture is standard or not based on the angle evaluation identifier to obtain a corresponding posture evaluation result;
judging whether the action is qualified based on the distance evaluation identifier to obtain a corresponding action evaluation result;
and generating a corresponding standard evaluation result based on the posture evaluation result and the action evaluation result.
embodiment 2, the evaluation verification step is added after step S250 in embodiment 1, that is, an evaluation result is output based on the obtained angle evaluation identifier and distance evaluation identifier, and after the angle evaluation identifier and the distance evaluation identifier, the evaluation verification step for the evaluation result is added, and the rest is the same as that in embodiment 1;
the evaluation and verification steps are specifically as follows:
S310, extracting the time point corresponding to the current frame, and taking it as the ending time point of the current push-up action and the starting time point of the next push-up action;
S320, extracting the starting time point of the current push-up action, extracting the speed features and distance features of all moving images between the starting and ending time points of the current push-up action, extracting key point speed features from the speed features based on preset key parts, and forming an action frame from the corresponding key point speed features and distance features to obtain a first action frame sequence;
In this embodiment the key parts are the shoulders, hips, knees, elbows and wrists; those skilled in the art can designate the key parts according to actual needs;
extracting the velocity vector and the velocity gradient vector of the bone point corresponding to each key part to obtain the corresponding key point speed features;
and taking the key point speed features and the distance feature corresponding to the same moving image as one action frame.
S330, extracting from the first action frame sequence the speed features in which the direction of the velocity vector or of the velocity gradient vector reverses, to obtain a second action frame sequence, wherein the second action frame sequence comprises 5 action frames ordered in time;
When the push-up action is at the initial state, the lowest state or the final state, the z-component of its velocity vector changes sign, and this embodiment judges that the direction of the velocity vector has reversed, so the velocity features corresponding to these three states are obtained. When the push-up action is at an intermediate state, the z-component of the corresponding velocity gradient vector changes sign, and this embodiment judges that the direction of the acceleration has reversed, so the velocity feature corresponding to that state is obtained; since one complete push-up action includes two intermediate states, two corresponding velocity features are obtained.
Based on the time sequence information of the moving images corresponding to the extracted velocity features, the action frames are arranged in time order, yielding the sequence initial state, intermediate state (descending), lowest state, intermediate state (ascending), final state, i.e. the second action frame sequence representing a complete push-up action.
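A minimal sketch of this key-frame extraction follows; the per-frame layout with velocity vector "v" and velocity gradient vector "dv" is an assumption of the sketch. The frames kept at the reversals, together with the frame at which the repetition starts, form the 5-frame second action frame sequence.

```python
import numpy as np

# Minimal sketch of step S330: keep only the action frames in which the
# z-component of the velocity vector or of the velocity gradient vector
# changes sign.
def key_frames(frames):
    kept = []
    for prev, curr in zip(frames, frames[1:]):
        v_flip = np.sign(prev["v"][2]) != np.sign(curr["v"][2])
        dv_flip = np.sign(prev["dv"][2]) != np.sign(curr["dv"][2])
        if v_flip or dv_flip:
            kept.append(curr)
    return kept                    # one key frame per reversal

frames = [{"v": np.array([0, 0, vz]), "dv": np.array([0, 0, az])}
          for vz, az in [(-0.1, -1), (-0.4, -1), (-0.3, 1), (-0.1, 1),
                         (0.2, 1), (0.4, -1), (0.1, -1), (-0.2, -1)]]
print(len(key_frames(frames)))     # reversals found in one repetition
```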
S340, inputting the second action frame sequence into a pre-constructed recognition model, and outputting a standard or nonstandard recognition result by the recognition model.
The recognition model is a binary classification model.
Those skilled in the art can, according to actual needs, verify every evaluation result, verify only when the evaluation result is not standard, or verify only when it is standard, obtain the corresponding recognition result, and count and give feedback based on the obtained recognition result; this embodiment does not specifically limit this.
In this embodiment, through the design of the evaluation verification step, the push-up action can be further identified using machine learning, which improves the accuracy of the action-standard evaluation and guides the user's training more accurately.
Further, the identification model is constructed by the following steps:
acquiring a sample moving image sequence comprising a plurality of sample moving images ordered according to time;
extracting bone points from the sample moving image sequence to obtain a sample bone point sequence;
extracting geometric features of the sample bone point sequence to obtain a first sample frame sequence, wherein the first sample frame sequence comprises a plurality of sample frames arranged in time order, each sample frame comprising a key point speed feature and a distance feature;
extracting the sample frames in which the movement direction or the acceleration direction changes from the first sample frame sequence to obtain a second sample frame sequence;
splitting the second sample frame sequence into a plurality of sub-sample frame sequences, each comprising 5 sample frames;
labeling each sub-sample frame sequence with a sample label, the sample label indicating whether the push-up action corresponding to the sub-sample frame sequence is standard; the label may be standard / not standard, or the state corresponding to each sample frame in the sequence, the states being the initial/final state, the first intermediate state, the lowest state, the second intermediate state and the initial/final state;
and training by using the sub-sample frame sequence and the sample label to obtain a recognition model.
Those skilled in the art can perform model training by using the obtained sample action frame sequence and corresponding sample labels according to the existing conventional model training steps to obtain the corresponding recognition model, and the detailed description of the training steps is not provided in this embodiment.
Further:
the recognition model is an existing public HMM model (hidden Markov model), an LSTM-FCN model (long-short term memory full convolution neural network model) or an SVM model (support vector machine).
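As one possible concrete realization of the SVM option, the sketch below flattens each 5-frame sub-sample frame sequence into a fixed-length vector and trains a binary classifier with scikit-learn; the data shapes, feature count and random data are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

# Minimal sketch of training the SVM variant of the recognition model.
# Each sub-sample frame sequence (5 frames x per-frame features) is
# flattened into one vector; labels are 1 (standard) / 0 (not standard).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5 * 7))    # 200 sequences, 5 frames, 7 features each
y = rng.integers(0, 2, size=200)     # sample labels

model = SVC(kernel="rbf")            # binary classification model
model.fit(X, y)
print(model.predict(X[:1]))          # recognition result for one sequence
```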
Further:
the recognition model is an HMM model; the Baum-Welch algorithm is used for parameter learning of the HMM model so as to maximize the likelihood of the observation sequence, the parameter learning steps being as follows:
E. Calculate the hidden layer parameters:
The hidden layer parameters comprise α_t(j), β_t(i), γ_t(i) and P(O_1, O_2, ..., O_T | λ).
The forward variable α_t(i) is calculated recursively as:

α_t(i) = [ Σ_{j=1..N} α_{t-1}(j) · a_{ji} ] · d_i(O_t), with α_1(i) = π_i · d_i(O_1)

where t denotes the t-th action frame, i denotes an action state, N denotes the number of action states, O_t denotes the observation corresponding to the t-th action frame (the feature data comprising the key point speed feature and the distance feature), a_{ji} denotes the probability of transitioning from state q_j to state q_i (an entry of the transition matrix), d_i(O_t) denotes the probability of the observation O_t in state q_i, and π_i denotes the initial state distribution.
The backward variable β_t(i) is calculated recursively as:

β_t(i) = Σ_{j=1..N} a_{ij} · d_j(O_{t+1}) · β_{t+1}(j), with β_T(i) = 1

where a_{ij} denotes the probability of transitioning from state q_i to state q_j.
The parameter γ_t(i) is calculated as:

γ_t(i) = α_t(i) · β_t(i) / Σ_{j=1..N} α_t(j) · β_t(j)

The likelihood P(O_1, ..., O_T | λ) is calculated as:

P(O_1, O_2, ..., O_T | λ) = Σ_{i=1..N} α_T(i)

where α_T(i) is the value of α_t(i) at t = T, and λ denotes the parameters of the HMM model, obtained by maximum likelihood estimation.
The principle is as follows:
During training, the input sub-sample frame sequence serves as the observation sequence O_1, O_2, ..., O_T; it comprises 5 frames of feature data, each frame corresponding to one observation.
At the same time, each sample frame indicates a state, in turn the initial/final state, the first intermediate state, the lowest state, the second intermediate state and the initial/final state, i.e. 5 action states q_1, q_2, ..., q_N. Each state q_i has an output distribution d_i(O), representing the probability of the observation O in state q_i, and transition probabilities a_{ij}, representing the probability of moving from state q_i to state q_j.
Let λ denote the parameters of the hidden Markov model to be maximized, and let the hidden variable X_t denote the state of the HMM at time t (one of q_1, q_2, ..., q_N). The conditional probability P(O_1, O_2, ..., O_T | λ) measures how well the observed event fits the learned HMM: the higher this probability, the more likely the action is correct. The model parameters λ are estimated by maximum likelihood.
Then γ_t(i) = P(X_t = q_i | O_1, O_2, ..., O_T, λ) gives the distribution of the hidden states given the observation sequence; by computing the conditional probability P(I | O), the most probable state prediction sequence Q = (q_1, q_2, ..., q_N) is obtained. If the order of this state prediction sequence is consistent with the preset push-up action sequence, namely the initial/final state, the first intermediate state, the lowest state, the second intermediate state and the initial/final state in turn, one complete action is judged to have been performed. These quantities are computed with the forward-backward algorithm, through the variables α_t(i) = P(O_1, O_2, ..., O_t, X_t = q_i | λ) and β_t(i) = P(O_{t+1}, ..., O_T | X_t = q_i, λ).
M. Given the hidden layer parameters, update the parameters λ of the HMM model so as to maximize the output likelihood.
In actual use, the input second action frame sequence is used as the observation sequence O, the state sequence I that maximizes P(I | O) is obtained by calculation and compared with the preset push-up action sequence; if they are the same, the action is judged standard, otherwise not standard.
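The forward-backward quantities above can be sketched in a few lines of numpy; the transition matrix, the tabulated emission probabilities d_i(O_t) and all values below are illustrative assumptions, not learned parameters.

```python
import numpy as np

# Minimal sketch of the forward-backward pass for N = 5 states and
# T = 5 action frames. A holds a_ij; D holds d_i(O_t) pre-evaluated per frame.
N, T = 5, 5
A = np.full((N, N), 0.05) + np.eye(N) * 0.55
A = A / A.sum(axis=1, keepdims=True)                   # rows sum to 1
pi = np.full(N, 1.0 / N)                               # initial distribution
D = np.random.default_rng(1).uniform(0.1, 1.0, (T, N)) # emission probabilities

alpha = np.zeros((T, N))
alpha[0] = pi * D[0]
for t in range(1, T):                                  # forward recursion
    alpha[t] = (alpha[t - 1] @ A) * D[t]

beta = np.zeros((T, N))
beta[-1] = 1.0
for t in range(T - 2, -1, -1):                         # backward recursion
    beta[t] = A @ (D[t + 1] * beta[t + 1])

P_O = alpha[-1].sum()                                  # P(O_1..O_T | lambda)
gamma = alpha * beta / P_O                             # gamma_t(i)
states = gamma.argmax(axis=1)                          # most probable states
print(P_O, states)                  # compare order with the preset sequence
```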
Embodiment 3, a push-up recognition system, as shown in fig. 2, includes:
a feature extraction module 100, configured to obtain a plurality of frames of moving images arranged according to time and extract the geometric features of each moving image, where the geometric features include angle features, distance features and speed features;
An evaluation module 200, configured to evaluate the actions in each moving image in sequence based on the geometric features, and output corresponding evaluation results;
As shown in FIG. 3, the evaluation module 200 comprises an angle judgment unit 210, a reversal judgment unit 220, a distance evaluation unit 230, a direction comparison unit 240, a discrimination identifier detection unit 250 and an evaluation unit 260;
the angle judgment unit 210 is configured to detect whether the posture is standard based on the angle features, and to update an angle evaluation identifier indicating whether the posture is standard when the posture is judged not to be standard;
the reversal judgment unit 220 is configured to detect whether the movement direction has reversed based on the speed features;
the distance evaluation unit 230 is configured to detect, when the movement direction has reversed, whether the action is qualified based on the distance features, and to update a distance evaluation identifier when the action is judged unqualified, where the distance evaluation identifier indicates whether the action is qualified;
the direction comparison unit 240 is configured to compare the movement direction with a preset discrimination direction;
the discrimination identifier detection unit 250 is configured to detect whether a discrimination identifier exists when the movement direction is consistent with the preset discrimination direction;
the evaluation unit 260 is configured to record the discrimination identifier when it does not exist, and, when it exists, to output an evaluation result based on the obtained angle evaluation identifier and distance evaluation identifier and to clear both identifiers.
Further, the feature extraction module 100 is configured to perform skeleton point extraction on each moving image to obtain corresponding skeleton point data, where the skeleton point data includes types of skeleton points and corresponding three-dimensional coordinates.
In this embodiment, the feature extraction module 100 includes a bone point extraction unit, a repair unit, and a feature extraction unit;
the skeleton point extracting unit is used for extracting 2D skeleton points of each motion image based on the existing published OpenPose model;
the repairing unit is used for repairing the 2D bone points of each motion image to obtain corresponding bone point data, and the bone point data comprises the types of the bone points and corresponding three-dimensional coordinates;
The feature extraction unit constructs geometric features based on the bone point data to obtain the geometric features corresponding to each moving image; it comprises a first extraction subunit for extracting the angle features, a second extraction subunit for extracting the distance features and a third extraction subunit for extracting the speed features.
Further, a verification module 300 is also included, the verification module 300 comprising:
the time configuration unit is used for extracting a time point corresponding to the current frame, and taking the time point as an ending time point of the current push-up action and a starting time point of the next push-up action;
an action frame construction unit, for extracting the starting time point of the current push-up action, extracting the speed features and distance features of all moving images between the starting and ending time points of the current push-up action, extracting key point speed features from the speed features based on preset key parts, and forming action frames from the corresponding key point speed features and distance features to obtain a first action frame sequence;
an action frame extraction unit, for extracting from the first action frame sequence the speed features in which the direction of the velocity vector or of the velocity gradient vector reverses, to obtain a second action frame sequence comprising 5 action frames ordered in time;
and a recognition unit, for inputting the second action frame sequence into a pre-constructed recognition model, the recognition model outputting a standard or non-standard recognition result.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention has been described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be noted that:
reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. Thus, the appearances of the phrase "one embodiment" or "an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
In addition, it should be noted that the specific embodiments described in this specification may differ in matters such as the shapes and names of components. All equivalent or simple changes to the structures, features, and principles described in the inventive concept of this patent fall within its scope of protection. Those skilled in the art may make various modifications, additions, and substitutions to the specific embodiments described without departing from the scope of the invention as defined in the accompanying claims.

Claims (10)

1. A push-up identification method is characterized by comprising the following steps:
acquiring a plurality of moving images arranged in time order, and extracting geometric features of each moving image, wherein the geometric features comprise angle features, distance features, and speed features;
sequentially evaluating the moving images based on the geometric features and outputting corresponding evaluation results, wherein each moving image is evaluated according to the following steps:
detecting, based on the angle features, whether the posture is standard, and updating an angle evaluation identifier when the posture is judged not to be standard;
detecting, based on the speed features, whether the movement direction has reversed;
when the movement direction has not reversed, evaluating the geometric features of the next frame of moving image;
when the movement direction has reversed:
detecting, based on the distance features, whether the action is qualified, and updating a distance evaluation identifier when the action is judged to be unqualified;
comparing the movement direction with a preset discrimination direction;
when the movement direction does not match the preset discrimination direction, evaluating the geometric features of the next frame of moving image;
when the movement direction matches the preset discrimination direction, detecting whether a discrimination identifier exists; when the discrimination identifier does not exist, recording the discrimination identifier; and when the discrimination identifier exists, outputting an evaluation result based on the obtained angle evaluation identifier and distance evaluation identifier, and resetting the angle evaluation identifier and the distance evaluation identifier.
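
For illustration only, and not part of the claims: the following minimal Python sketch shows one way the per-frame evaluation loop of claim 1 could be realized. Every name, data layout, and threshold below is an invented assumption, not something specified by the patent.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Frame:
    # Invented per-frame geometric features; not the patent's data layout.
    hip_angle: float          # hip skeleton point included angle, degrees
    knee_angle: float         # knee skeleton point included angle, degrees
    chest_height: float       # distance feature: chest point to ground, metres
    vertical_velocity: float  # signed vertical torso speed, m/s

@dataclass
class EvalState:
    angle_flag: bool = False           # posture judged non-standard this rep
    distance_flag: bool = False        # action judged unqualified this rep
    discrimination_flag: bool = False  # a rep boundary was already recorded
    prev_direction: int = 0            # -1 moving down, +1 moving up, 0 unknown

def evaluate_frame(f: Frame, s: EvalState, judge_direction: int = 1,
                   knee_min: float = 160.0, hip_min: float = 160.0,
                   down_height_max: float = 0.15) -> Optional[str]:
    """One iteration of the claim-1 loop; thresholds are illustrative."""
    if f.knee_angle < knee_min or f.hip_angle < hip_min:
        s.angle_flag = True                  # posture not standard
    direction = -1 if f.vertical_velocity < 0 else 1
    reversed_now = s.prev_direction != 0 and direction != s.prev_direction
    s.prev_direction = direction
    if not reversed_now:
        return None                          # go on to the next frame
    if direction == 1 and f.chest_height > down_height_max:
        s.distance_flag = True               # bottom turning point too high
    if direction != judge_direction:
        return None
    if not s.discrimination_flag:
        s.discrimination_flag = True         # record the discrimination identifier
        return None
    verdict = "non-standard" if (s.angle_flag or s.distance_flag) else "standard"
    s.angle_flag = s.distance_flag = False   # reset the evaluation identifiers
    return verdict
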
2. The push-up identification method according to claim 1, characterized in that:
skeleton points are extracted from each moving image to obtain corresponding skeleton point data, wherein the skeleton point data comprise the skeleton point types and the corresponding three-dimensional coordinates.
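
The patent does not name a pose estimator. As one hedged example, MediaPipe Pose can return skeleton point types (as landmark indices) together with three-dimensional coordinates; the file name below is invented.

import cv2
import mediapipe as mp

pose = mp.solutions.pose.Pose(static_image_mode=False)
frame = cv2.imread("pushup_frame.jpg")      # one moving image (path invented)
assert frame is not None
res = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
if res.pose_world_landmarks:
    # Skeleton point data: type (landmark index) plus 3-D coordinates.
    points = {i: (lm.x, lm.y, lm.z)
              for i, lm in enumerate(res.pose_world_landmarks.landmark)}
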
3. The push-up identification method according to claim 2, wherein the angle features comprise a knee skeleton point included angle and a hip skeleton point included angle;
the knee skeleton point included angle is compared with a preset knee included-angle threshold, and the hip skeleton point included angle is compared with a preset hip included-angle threshold; when the knee skeleton point included angle is smaller than the knee included-angle threshold, or the hip skeleton point included angle is smaller than the hip included-angle threshold, the posture is judged not to be standard and the angle evaluation identifier is updated.
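
A common way to compute such an included angle, shown here as an assumed sketch, is the angle at the middle joint between the two limb vectors; the coordinates and the 160-degree threshold are invented.

import numpy as np

def joint_angle(a, b, c):
    """Included angle at joint b (degrees) for skeleton points a-b-c."""
    u = np.asarray(a, float) - np.asarray(b, float)
    v = np.asarray(c, float) - np.asarray(b, float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Hip included angle from shoulder, hip, knee points (coordinates invented):
hip = joint_angle((0.0, 1.0, 0.30), (0.9, 1.0, 0.28), (1.4, 1.0, 0.25))
posture_standard = hip >= 160.0   # illustrative hip included-angle threshold
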
4. The push-up identification method according to claim 2, characterized in that:
the method for extracting the distance features comprises the following steps:
extracting the three-dimensional coordinates of the corresponding skeleton points from the skeleton point data based on a preset distance evaluation type, and calculating the distance between each such skeleton point and the ground to obtain the corresponding distance feature;
the method for detecting whether the action is qualified based on the distance features, and updating the distance evaluation identifier when the action is judged to be unqualified, comprises the following steps:
obtaining the movement direction based on the speed features, selecting a preset distance threshold based on the movement direction, and, when the distance feature fails to match the preset distance threshold, judging that the action is unqualified and updating the distance evaluation identifier.
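
If the ground plane is known (for example, fitted from a depth camera), the distance feature is a point-to-plane distance. The sketch below assumes a plane through the origin with a vertical normal; all numbers and thresholds are invented.

import numpy as np

def distance_to_ground(point, plane_point, plane_normal):
    """Perpendicular distance from a skeleton point to the ground plane."""
    n = np.asarray(plane_normal, float)
    n = n / np.linalg.norm(n)
    return float(abs(np.dot(np.asarray(point, float)
                            - np.asarray(plane_point, float), n)))

chest_height = distance_to_ground((0.8, 0.4, 0.12), (0, 0, 0), (0, 0, 1))
moving_up = True                         # direction taken from the speed features
threshold = 0.15 if moving_up else 0.35  # per-direction thresholds (invented)
# At the bottom reversal the chest must be low; at the top it must be high.
qualified = chest_height <= threshold if moving_up else chest_height >= threshold
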
5. The push-up identification method according to any one of claims 2 to 4, wherein the speed features comprise a velocity vector for each skeleton point, and the velocity vectors are extracted as follows:
extracting the skeleton point data corresponding to the k-th and (k+1)-th frames of moving images, and calculating therefrom the velocity vector of each skeleton point for the k-th frame of moving image.
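
A finite difference over consecutive frames is the natural reading of this step; the sketch assumes 30 frames per second and invented coordinates.

import numpy as np

def velocity_vectors(points_k, points_k1, dt):
    """Velocity vector of each skeleton point between frames k and k+1.
    points_k, points_k1: (N, 3) arrays of 3-D coordinates; dt in seconds."""
    return (np.asarray(points_k1, float) - np.asarray(points_k, float)) / dt

v = velocity_vectors([[0.0, 0.0, 0.30], [0.5, 0.0, 0.32], [1.0, 0.0, 0.31]],
                     [[0.0, 0.0, 0.28], [0.5, 0.0, 0.30], [1.0, 0.0, 0.29]],
                     dt=1 / 30)
# v[:, 2] is negative for every point: the body is moving downward here.
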
6. The push-up identification method according to claim 5, characterized in that:
the speed features further comprise a velocity gradient vector for each skeleton point, the velocity gradient vectors being derived from the corresponding velocity vectors over time;
the geometric features further comprise an elbow skeleton point included angle, and before detecting whether the movement direction has reversed based on the speed features, the method further comprises the following steps:
judging, based on the velocity gradient vectors, whether the acceleration direction has reversed; when the acceleration direction has reversed, comparing the elbow skeleton point included angle with a preset elbow included-angle threshold, and updating the angle evaluation identifier when the elbow skeleton point included angle exceeds the elbow included-angle threshold.
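
One hedged reading of the velocity gradient vector is the per-frame difference of velocity vectors over time; the values, the frame rate, and the 170-degree elbow threshold below are all invented.

import numpy as np

v_k  = np.array([0.0, 0.0, -0.50])   # velocity vector at frame k
v_k1 = np.array([0.0, 0.0, -0.10])   # frame k+1: still down, decelerating
v_k2 = np.array([0.0, 0.0, -0.20])   # frame k+2: accelerating down again
a_k, a_k1 = (v_k1 - v_k) * 30, (v_k2 - v_k1) * 30   # gradients at 30 fps

if np.sign(a_k[2]) != np.sign(a_k1[2]):   # acceleration direction reversed
    elbow_angle = 175.0                   # from a joint-angle computation
    ELBOW_MAX = 170.0                     # invented elbow included-angle threshold
    angle_flag = elbow_angle > ELBOW_MAX  # update the angle evaluation identifier
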
7. The push-up identification method according to claim 6, wherein after outputting the evaluation result based on the obtained angle evaluation identifier and distance evaluation identifier, the method further comprises an evaluation verification step, specifically:
extracting the time point corresponding to the current frame, and taking this time point as both the ending time point of the current push-up action and the starting time point of the next push-up action;
extracting the starting time point of the current push-up action; extracting the speed features and distance features of all moving images between the starting and ending time points of the current push-up action; extracting key-point speed features from the speed features based on preset key parts; and forming action frames from the corresponding key-point speed features and distance features to obtain a first action frame sequence;
extracting from the first action frame sequence the action frames in which the direction of the velocity vector or the velocity gradient vector reverses, to obtain a second action frame sequence comprising 5 action frames ordered in time;
inputting the second action frame sequence into a pre-constructed recognition model, which outputs a standard or non-standard recognition result.
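
A sketch of how the second action frame sequence might be assembled from per-frame features; the field names "v" and "d" and the key-point averaging are assumptions, not the patent's data layout.

import numpy as np

def second_action_sequence(frames, key_idx, n=5):
    """Keep the action frames where the velocity vector or the velocity
    gradient vector flips vertical sign, yielding the 5-frame sequence
    that claim 7 feeds to the recognition model (illustrative)."""
    picked, prev_v, prev_a = [], None, None
    for f in frames:
        v = np.asarray(f["v"], float)[key_idx].mean(axis=0)  # key-point speed
        a = None if prev_v is None else v - prev_v           # velocity gradient
        v_flip = prev_v is not None and np.sign(v[2]) != np.sign(prev_v[2])
        a_flip = (a is not None and prev_a is not None
                  and np.sign(a[2]) != np.sign(prev_a[2]))
        if v_flip or a_flip:
            picked.append((v, f["d"]))                       # one action frame
        prev_v, prev_a = v, a
    return picked[:n]   # the claimed second sequence holds 5 action frames
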
8. The push-up identification method according to claim 7, wherein the recognition model is constructed by the following steps:
acquiring a sample moving image sequence comprising a plurality of sample moving images ordered in time;
extracting skeleton points from the sample moving image sequence to obtain a sample skeleton point sequence;
extracting geometric features from the sample skeleton point sequence to obtain a first sample frame sequence, wherein the first sample frame sequence comprises a plurality of sample frames arranged in time order, each sample frame comprising key-point speed features and distance features;
extracting from the first sample frame sequence the sample frames in which the movement direction or the acceleration direction changes, to obtain a second sample frame sequence;
splitting the second sample frame sequence into a plurality of sub-sample frame sequences, each comprising 5 sample frames;
labeling each sub-sample frame sequence with a sample label, the sample label indicating whether the push-up action corresponding to that sub-sample frame sequence is standard; and
training with the sub-sample frame sequences and the sample labels to obtain the recognition model.
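
The split-and-label step reads as fixed windows of five frames; a minimal sketch, with invented sample data and labels:

def split_into_subsequences(second_sample_frames, size=5):
    """Split the second sample frame sequence into 5-frame sub-sequences,
    each of which is then hand-labelled standard or non-standard."""
    return [second_sample_frames[i:i + size]
            for i in range(0, len(second_sample_frames) - size + 1, size)]

samples = split_into_subsequences(list(range(12)))  # 12 frames -> 2 samples
labels = [1, 0]   # invented: 1 = standard push-up, 0 = non-standard
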
9. The push-up identification method according to claim 8, characterized in that:
the recognition model is an HMM model, an LSTM-FCN model, or an SVM model.
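
As a hedged example of the SVM option, each 5-frame sub-sequence can be flattened into one fixed-length vector; the data below are random stand-ins, not real training samples.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5 * 4))    # 5 frames x 4 features each (assumed)
y = rng.integers(0, 2, size=200)     # 1 = standard, 0 = non-standard (invented)
model = SVC(kernel="rbf").fit(X, y)  # one of the claimed model choices
print(model.predict(X[:3]))
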
10. A push-up identification system, comprising:
a feature extraction module for acquiring a plurality of frames of moving images arranged in time order and extracting geometric features of each moving image, wherein the geometric features comprise angle features, distance features, and speed features;
an evaluation module for sequentially evaluating the actions in the moving images based on the geometric features and outputting corresponding evaluation results;
wherein the evaluation module comprises an angle judgment unit, a reversal judgment unit, a distance evaluation unit, a direction comparison unit, a discrimination identifier detection unit, and an evaluation unit;
the angle judgment unit is configured to detect, based on the angle features, whether the posture is standard, and to update an angle evaluation identifier when the posture is judged not to be standard, the angle evaluation identifier indicating whether the posture is standard;
the reversal judgment unit is configured to detect, based on the speed features, whether the movement direction has reversed;
the distance evaluation unit is configured to detect, based on the distance features, whether the action is qualified when the movement direction has reversed, and to update a distance evaluation identifier when the action is judged to be unqualified, the distance evaluation identifier indicating whether the action is qualified;
the direction comparison unit is configured to compare the movement direction with a preset discrimination direction;
the discrimination identifier detection unit is configured to detect whether a discrimination identifier exists when the movement direction matches the preset discrimination direction;
and the evaluation unit is configured to record the discrimination identifier when it does not exist, and, when it exists, to output an evaluation result based on the obtained angle evaluation identifier and distance evaluation identifier and to reset the angle evaluation identifier and the distance evaluation identifier.
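
For illustration only: a minimal wiring of claim 10's two modules, with the evaluation module's six units assumed to be folded into a single evaluator callable.

from typing import Callable, Iterable, List, Optional

class PushUpRecognitionSystem:
    """Sketch of the claimed system; both callables are assumptions."""
    def __init__(self,
                 feature_extractor: Callable[[Iterable], Iterable],
                 evaluator: Callable[[object], Optional[str]]):
        self.feature_extractor = feature_extractor  # images -> geometric features
        self.evaluator = evaluator                  # features -> verdict or None

    def run(self, images: Iterable) -> List[str]:
        results = []
        for features in self.feature_extractor(images):
            verdict = self.evaluator(features)  # None until a rep completes
            if verdict is not None:
                results.append(verdict)
        return results
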
CN202110721723.0A 2021-06-28 2021-06-28 Push-up identification method and system Active CN113398556B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110721723.0A CN113398556B (en) 2021-06-28 2021-06-28 Push-up identification method and system

Publications (2)

Publication Number Publication Date
CN113398556A true CN113398556A (en) 2021-09-17
CN113398556B CN113398556B (en) 2022-03-01

Family

ID=77679898

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110721723.0A Active CN113398556B (en) 2021-06-28 2021-06-28 Push-up identification method and system

Country Status (1)

Country Link
CN (1) CN113398556B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113813570A (en) * 2021-09-22 2021-12-21 弗瑞尔(北京)科技有限公司 Physical fitness test method, system, electronic equipment and storage medium
CN114259721A (en) * 2022-01-13 2022-04-01 王东华 Training evaluation system and method based on Beidou positioning
CN115171208A (en) * 2022-05-31 2022-10-11 中科海微(北京)科技有限公司 Sit-up posture evaluation method and device, electronic equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014057800A (en) * 2012-09-19 2014-04-03 Nagasakiken Koritsu Daigaku Hojin Motion evaluation support device and motion evaluation support method
CN105597294A (en) * 2014-11-21 2016-05-25 中国移动通信集团公司 Lying-prostrating movement parameter estimation and evaluation method, device and intelligent terminal
CN107392086A (en) * 2017-05-26 2017-11-24 深圳奥比中光科技有限公司 Apparatus for evaluating, system and the storage device of human body attitude
CN110170159A (en) * 2019-06-27 2019-08-27 郭庆龙 A kind of human health's action movement monitoring system
CN112818800A (en) * 2021-01-26 2021-05-18 中国人民解放军火箭军工程大学 Physical exercise evaluation method and system based on human skeleton point depth image
CN112932470A (en) * 2021-01-27 2021-06-11 上海萱闱医疗科技有限公司 Push-up training evaluation method and device, equipment and storage medium

Also Published As

Publication number Publication date
CN113398556B (en) 2022-03-01

Similar Documents

Publication Publication Date Title
CN113398556B (en) Push-up identification method and system
CN109863535B (en) Motion recognition device, storage medium, and motion recognition method
CN114724241A (en) Motion recognition method, device, equipment and storage medium based on skeleton point distance
US9183431B2 (en) Apparatus and method for providing activity recognition based application service
JP6943294B2 (en) Technique recognition program, technique recognition method and technique recognition system
CN111597975B (en) Personnel action detection method and device and electronic equipment
CN109308437B (en) Motion recognition error correction method, electronic device, and storage medium
CN113128336A (en) Pull-up test counting method, device, equipment and medium
US20230149774A1 (en) Handle Motion Counting Method and Terminal
US20220222975A1 (en) Motion recognition method, non-transitory computer-readable recording medium and information processing apparatus
Yang et al. Human exercise posture analysis based on pose estimation
CN114343618A (en) Training motion detection method and device
Rahmadani et al. Human pose estimation for fitness exercise movement correction
Rozaliev et al. Methods and applications for controlling the correctness of physical exercises performance
Parisi et al. Learning human motion feedback with neural self-organization
CN111353345B (en) Method, apparatus, system, electronic device, and storage medium for providing training feedback
CN111353347B (en) Action recognition error correction method, electronic device, and storage medium
CN116306766A (en) Wisdom horizontal bar pull-up examination training system based on skeleton recognition technology
CN115690902A (en) Abnormal posture early warning method for body building action
CN115105821A (en) Gymnastics training auxiliary system based on OpenPose
CN110781857B (en) Motion monitoring method, device, system and storage medium
CN114580471A (en) Human body action recognition method and system
CN110148202B (en) Method, apparatus, device and storage medium for generating image
CN112801005A (en) Pull-up intelligent counting method based on human skeleton key point detection
Richter et al. Motion evaluation by means of joint filtering for assisted physical therapy

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant