CN113255624A - System and method for intelligently identifying completion condition of pull-up action gesture - Google Patents


Info

Publication number
CN113255624A
CN113255624A (application CN202110792695.1A; granted publication CN113255624B)
Authority
CN
China
Prior art keywords
pull
action
stage
counting
joint
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110792695.1A
Other languages
Chinese (zh)
Other versions
CN113255624B (en)
Inventor
林平
李瀚懿
丁观莲
陈天宜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
One Body Technology Co.,Ltd.
Original Assignee
Beijing Yiti Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Yiti Technology Co ltd filed Critical Beijing Yiti Technology Co ltd
Priority to CN202110792695.1A priority Critical patent/CN113255624B/en
Publication of CN113255624A publication Critical patent/CN113255624A/en
Application granted granted Critical
Publication of CN113255624B publication Critical patent/CN113255624B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 - Movements or behaviour, e.g. gesture recognition
    • G06V40/23 - Recognition of whole body movements, e.g. for sport training
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63B - APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B23/00 - Exercising apparatus specially adapted for particular parts of the body
    • A63B23/035 - Exercising apparatus specially adapted for particular parts of the body for limbs, i.e. upper or lower limbs, e.g. simultaneously
    • A63B23/12 - Exercising apparatus specially adapted for particular parts of the body for upper limbs or related muscles, e.g. chest, upper back or shoulder muscles
    • A63B23/1209 - Involving a bending of elbow and shoulder joints simultaneously
    • A63B23/1218 - Chinning, pull-up, i.e. concentric movement
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63B - APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B71/00 - Games or sports accessories not covered in groups A63B1/00 - A63B69/00
    • A63B71/06 - Indicating or scoring devices for games or players, or for other sports activities
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/34 - Smoothing or thinning of the pattern; Morphological operations; Skeletonisation
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63B - APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B2220/00 - Measuring of physical parameters relating to sporting activity
    • A63B2220/05 - Image processing for measuring physical parameters
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63B - APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B2220/00 - Measuring of physical parameters relating to sporting activity
    • A63B2220/17 - Counting, e.g. counting periodical movements, revolutions or cycles, or including further data processing to determine distances or speed

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Orthopedic Medicine & Surgery (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a system and a method for intelligently identifying the completion status of the pull-up action gesture. The method comprises the following steps: setting a key angle formed by the body parts relevant to the action gesture; comparing the key angles in each pair of adjacent frames of the test video and marking each comparison result with an identifier; connecting the identifiers in series to form a result list; traversing the result list with a sliding window and replacing all identifiers in the window with the identifier that occurs most often; and obtaining the completion status of the action gesture from the changes of the identifiers in the result list, and counting. By comparing the key angle between consecutive frames of the video, the invention identifies the body-descending and body-rising stages and thereby the completion status of the action gesture. No recognition model needs to be trained; the video images can be used directly for recognition, which simplifies the algorithm and makes deployment fast and convenient. Oscillating data can also be conveniently removed, improving accuracy.

Description

System and method for intelligently identifying completion condition of pull-up action gesture
Technical Field
The invention relates to the technical field of image intelligent recognition, in particular to a system and a method for intelligently recognizing the completion condition of a pull-up action gesture.
Background
The pull-up mainly tests the development level of upper-limb muscle strength. It plays an important role in developing upper-limb suspension strength, shoulder-girdle strength and grip strength, is the most basic way to exercise the back, and is one of the important reference standards and test items for measuring male physical fitness. At present, the pull-up is a compulsory subject in military training and has also become an important subject in training and examination at schools of all levels.
During pull-up exercise or assessment, it is very important that the action is standard and correctly counted. As AI technology advances, automatic pull-up recognition and counting has begun to replace manual counting. For example, Chinese patent CN111282248A discloses a pull-up detection system and method based on skeleton and facial key points, which extracts the position change between the chin key point and the cross bar, and the change of the included angle between the upper arm (line from the shoulder key point to the elbow key point) and the forearm (line from the elbow key point to the wrist key point), to judge rising, descending and the validity of the completed action. Although this scheme realizes automatic counting, its recognition accuracy needs further improvement.
In view of this, it is necessary to improve the existing intelligent recognition algorithm for pull-up gesture completion to improve the recognition accuracy.
Disclosure of Invention
In view of the above-mentioned drawbacks, the present invention provides a system and a method for intelligently identifying a completion status of a pull-up gesture, so as to solve the problem of inaccurate identification result in the prior art.
Therefore, the invention provides a method for intelligently identifying the completion status of the pull-up action gesture, comprising the following steps:
dividing the standard gesture for completing one pull-up into a body-rising stage and a body-descending stage;
setting the included angle formed by the shoulder joint, elbow joint and wrist joint as the first key angle;
comparing the first key angles in each pair of adjacent frames of the test video, marking the comparison results with identifiers, and connecting the identifiers in series to form a result list;
traversing the result list with a sliding window of at least five frames, and replacing all identifiers in the window with the identifier that occurs most often;
identifying the body-rising and body-descending stages of the pull-up according to the changes of the identifiers in the result list;
and recognizing the pull-up action gesture from the alternation of the identified body-rising and body-descending stages, and counting.
In the above method, preferably, the camera that captures the test video is arranged laterally behind the tester, at a height level with the pull-up grip bar;
the height difference between the wrist and the chin is taken as a first condition;
the distance between the wrist and the shoulder is taken as a second condition, and whether the pull-up is standard is concluded from the first and second conditions;
and correct counting or error counting is performed according to that conclusion.
In the above method, preferably, the identifier is represented by a numeral 0 or 1.
In the above method, preferably, whether the body-rising motion is completed is judged by detecting whether the first key angle is smaller than a first threshold, and a first completion flag is generated when it is;
whether the body-descending motion is completed is judged by detecting whether the first key angle is larger than a second threshold, and a second completion flag is generated when it is;
correct counting or error counting is performed according to the first and second completion flags;
the first threshold is the angle formed by the shoulder, elbow and wrist joints at the highest point of the pull-up; the second threshold is the same angle in the initial pull-up posture.
In the above method, preferably, whether the tester's legs are bent is judged by detecting whether the second key angle is smaller than a third threshold, and correct counting or error counting is performed according to the result; the second key angle is the included angle formed by the shoulder joint, waist joint and knee joint.
In the above method, preferably, the height and the arm length of the tester are obtained from the image of the tester, and the upper and lower thresholds of the second condition are obtained.
In the above method, preferably, when the tester's pull-up gesture returns to the initial stage, a timestamp of the test video is recorded and associated with the count; the counts form a drop-down list, and clicking a count in the list jumps playback to the test video at the associated timestamp.
The invention also provides a system for intelligently identifying the completion status of the pull-up action gesture, comprising an image acquisition device for acquiring the test video and an action gesture recognition device, wherein the action gesture recognition device comprises:
an identification module, used for comparing the first key angles in adjacent frames of the test video, marking the comparison results with identifiers, and connecting the identifiers in series to form a result list; the first key angle is the included angle formed by the shoulder joint, elbow joint and wrist joint;
a rising/descending stage identification module, used for identifying the body-descending or body-rising stage of the pull-up from the changes of the identifiers in the result list; the gesture of one complete pull-up action is divided in advance into a body-descending stage and a body-rising stage;
a correction module, used for traversing the result list with a sliding window of at least five frames and replacing all identifiers in the window with the identifier that occurs most often;
an action gesture recognition module, used for recognizing the pull-up action gesture from the alternation of the identified body-descending and body-rising stages;
and a counting module, which counts according to the action gesture.
In the above system, preferably, whether the body-rising motion of the pull-up is completed is judged by detecting whether the first key angle is smaller than a first threshold, and a first completion flag is generated when it is;
whether the body-descending motion of the pull-up is completed is judged by detecting whether the first key angle is larger than a second threshold, and a second completion flag is generated when it is;
and correct counting or error counting is performed according to the first and second completion flags.
In the above system, preferably, the system further comprises a bar-passing identification module, configured to:
take the height difference between the wrist and the chin as a first condition;
take the distance between the wrist and the shoulder as a second condition, and conclude from the two conditions whether the pull-up is standard;
and perform correct counting or error counting according to that conclusion.
According to the above technical scheme, the system and method for intelligently identifying the completion status of the pull-up action gesture solve the problem that recognition results in the prior art are not accurate enough. Compared with the prior art, the invention has the following beneficial effects:
The key angles of consecutive frames of the video are compared to identify the body-descending and body-rising stages, and thereby the completion status of the pull-up action gesture. No recognition model needs to be trained; the video images are used directly for recognition, which simplifies the algorithm and makes deployment fast and convenient. In particular, traversing the result list with a sliding window and replacing all identifiers in the window with the most frequent identifier reduces the influence of data oscillation and makes the recognition result more accurate.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments of the present invention or the prior art will be briefly described and explained. It is obvious that the drawings in the following description are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
FIG. 1 is a flow chart of a method for intelligently identifying a completion status of a pull-up gesture according to the present invention;
FIG. 2 is a schematic diagram of a standard pull-up gesture;
FIG. 3 is a schematic view of the first key angle formed by the shoulder, elbow and wrist joints;
FIG. 4 is a schematic diagram of the comparison and marking of the first key angle between the (t+1)-th and t-th frame images;
FIG. 5 is a schematic diagram of sliding window calibration data in the present application;
FIG. 6 is a schematic view of the difference in height between the wrist and the chin in the present application;
FIG. 7 is a schematic illustration of the distance between the wrist and the shoulder in the present application;
FIG. 8 is a schematic view of the angle formed by the line connecting the waist joint and the ankle joint and the vertical line in the present application;
FIG. 9 is a schematic view of the angles formed by the waist joint, knee joint and ankle joint of the present application;
FIG. 10 is a schematic view of the angle formed by the line connecting the shoulder joint and the ankle joint and the vertical line in the present application;
FIG. 11 is a schematic view of a system for recognizing completion of pull-up gesture in accordance with the present invention;
FIG. 12 is a schematic diagram of the motion gesture recognition apparatus of the present invention;
fig. 13 is a screenshot example of the present invention in practical application.
Detailed Description
The technical solutions of the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings, and it is to be understood that the embodiments described below are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without any inventive step, are within the scope of the present invention.
The realization principle of the invention is as follows:
setting a key angle formed by the body parts relevant to the pull-up action gesture;
comparing the key angles in each pair of adjacent frames of the test video, and marking the comparison results with identifiers;
connecting the identifiers in series to form a result list, traversing the result list with a sliding window, and replacing all identifiers in the window with the identifier that occurs most often;
and obtaining the completion status of the action gesture from the changes of the identifiers in the result list, and counting.
According to this scheme, the key angles of consecutive frames of the video are compared to identify the body-descending and body-rising stages, and thereby the completion status of the action gesture. No recognition model needs to be trained; the video images are used directly for recognition, which simplifies the algorithm and makes deployment fast and convenient. Oscillation data is eliminated with the sliding window, improving recognition accuracy.
In order to make the technical solution and implementation of the present invention more clearly explained and illustrated, several preferred embodiments for implementing the technical solution of the present invention are described below.
It should be noted that the terms of orientation such as "inside, outside", "front, back" and "left and right" are used herein as reference objects, and it is obvious that the use of the corresponding terms of orientation does not limit the scope of protection of the present invention.
A complete standard pull-up gesture includes: an initial posture, a body-up phase, a peak, a body-down phase and a return to initial posture.
In the initial posture, the hands hold the horizontal bar with palms forward, slightly wider than the shoulders; the feet are off the ground and the arms hang naturally, fully extended.
In the rising stage, the body is pulled up by the contraction of the latissimus dorsi; when the chin passes the horizontal bar, the body pauses and holds for one second, reaching the highest point.
In the descending stage, the latissimus dorsi gradually relaxes, letting the body descend slowly until the arms hang fully again and the initial posture is restored.
Common errors include:
first, pull up the chin without over the bar, and the drop arm does not straighten.
Second, the motion is assisted by swinging the body or by foot and leg movements (e.g., swinging the feet forward and kicking).
Third, the body rises tilted rather than in a balanced state.
Specific example 1
Referring to fig. 1, fig. 1 is a flowchart of a method for intelligently identifying a completion status of a pull-up gesture according to the present invention, the method includes the following steps:
step 110, dividing the standard gesture of finishing one pull-up action into two stages, namely a body ascending stage and a body descending stage.
As shown in fig. 2, the directions along the arrow sequentially are: an initial posture, a body up phase, a peak, a body down phase, and a return to initial posture.
Therefore, from the initial posture back to the initial posture, one pull-up action passes through a body-rising stage and a body-descending stage.
And step 120, setting an included angle formed by the shoulder joint, the elbow joint and the wrist joint as a first key angle.
As shown in fig. 3, the first key angle formed by the shoulder, elbow and wrist joints is angle D.
The first key angle is identified using existing human skeleton recognition technology (a human pose estimation algorithm); the scheme of the invention can adopt any such technology to implement the action gesture recognition algorithm.
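As an illustration only, not the patent's own implementation, the first key angle can be computed from three 2-D keypoints returned by any pose estimator; the coordinate convention and the function name below are assumptions:

```python
import math

def joint_angle(a, b, c):
    """Angle at vertex b, in degrees, formed by keypoints a-b-c.
    For the first key angle: a = shoulder, b = elbow, c = wrist,
    each given as (x, y) pixel coordinates."""
    ba = (a[0] - b[0], a[1] - b[1])
    bc = (c[0] - b[0], c[1] - b[1])
    dot = ba[0] * bc[0] + ba[1] * bc[1]
    norm = math.hypot(*ba) * math.hypot(*bc)
    # clamp to [-1, 1] to guard against floating-point drift
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

# a fully extended arm gives an angle near 180 degrees
print(round(joint_angle((0, 2), (0, 1), (0, 0)), 1))  # prints 180.0
```

Tracking this angle frame by frame yields the per-frame values that the comparison step below operates on.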
Step 130, capture the ongoing pull-up test video, or import and play an existing pull-up test video. Starting from the initial pull-up posture, compare the first key angles in each pair of adjacent frames of the test video in real time, mark each comparison result with an identifier, and connect the identifiers in series to form a result list.
For ease of calculation, the comparison result of the key angle may be marked with the numeral 0 or 1.
If, of two adjacent frames, the first key angle in the later frame is smaller than that in the earlier frame, the comparison result is marked 1, indicating a body-rising stage; conversely, if the key angle in the later frame is larger, the comparison result is marked 0, indicating a body-descending stage.
As shown in FIG. 4, if the key angle D(t+1) in the (t+1)-th frame is smaller than the key angle D(t) in the t-th frame, i.e. D(t+1) < D(t), the comparison result is marked 1, indicating a body-rising stage; if D(t+1) > D(t), the comparison result is marked 0, indicating a body-descending stage.
Therefore, assuming the tester's pull-up action is perfectly standard, the detection environment very good and the data completely accurate, the pull-up gesture passes through one body-rising stage and one body-descending stage, and after marking forms the result list [11111111110000000000].
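A minimal sketch of the marking rule of step 130 (illustrative only; `angles` is an assumed per-frame list of first key angles in degrees):

```python
def label_frames(angles):
    """For each pair of adjacent frames, emit 1 if the key angle
    decreased (body rising) and 0 otherwise (body descending)."""
    return [1 if angles[t + 1] < angles[t] else 0
            for t in range(len(angles) - 1)]

# the angle shrinks while rising and grows while descending
print(label_frames([175, 120, 60, 35, 90, 150, 175]))  # prints [1, 1, 1, 0, 0, 0]
```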
In step 140, the body-rising and body-descending stages of the pull-up are identified from the changes of the identifiers in the result list.
For example, in the result list [11111111110000000000], the first 10 bits are all 1, the last 10 bits are all 0, and the identifier changes from 1 to 0 between the 10th and 11th bits; hence the motion up to the 10th bit is the body-rising stage and the motion from the 11th bit on is the body-descending stage. Each bit position in the result list corresponds to a specific frame of the test video.
In step 150, the pull-up action gesture is recognized from the alternation of the identified body-rising and body-descending stages.
Since the test video is continuous, the result list is also continuous, for example [1111111111000000000011111111110000000000 ...], so the pull-up gestures in the whole test video can be recognized and counted from the alternation of the body-rising and body-descending identifiers.
For example, the continuous result list above counts as 2, indicating that two pull-up actions were completed.
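The counting by alternation can be sketched as follows (an illustrative reading of step 150, not the patent's code): compress the result list into runs and count each rise-then-descent pair as one completed pull-up.

```python
def count_pullups(labels):
    """Count completed pull-ups from a cleaned 0/1 result list:
    one count per body-rising run immediately followed by a
    body-descending run."""
    runs = []
    for v in labels:                 # run-length compress the list
        if not runs or runs[-1] != v:
            runs.append(v)
    count, i = 0, 0
    while i + 1 < len(runs):
        if runs[i] == 1 and runs[i + 1] == 0:
            count += 1               # one rise + one descent = 1 rep
            i += 2
        else:
            i += 1
    return count

print(count_pullups([1] * 10 + [0] * 10 + [1] * 10 + [0] * 10))  # prints 2
```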
During recognition of the test video, factors such as video jitter and ambient-light variation may cause the data to oscillate unexpectedly, producing for example the result list [10101111110010010000]. The invention therefore also provides a method for eliminating such accidental oscillation data, as follows:
traverse the result list with a sliding window spanning a small number of frames, and uniformly replace all identifiers in the window with the identifier that occurs most often. The window size may be set according to the camera frame rate and other conditions and usually covers at least five frames; the step size is half the window size, rounded up to an integer, e.g. a step of 3 frames for a window of five frames.
As shown in fig. 5, in the first position of the sliding window over the result list there are three 1s and two 0s, so the two 0s are replaced with 1s; the result list is thus converted to [11111111110000000000], and the body-rising and body-descending stages are then identified from the converted list.
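A sketch of this majority filter, under the stated window-of-5/step-of-3 parameters (an in-place variant is assumed here; with overlapping windows the stage boundaries may shift by a few frames, which does not affect the alternation-based count):

```python
def smooth(labels, win=5):
    """Majority filter over the result list: slide a window of `win`
    frames with a step of about half the window, replacing every
    identifier in the window with the one that occurs most often."""
    out = list(labels)
    step = max(1, (win + 1) // 2)          # window of 5 -> step of 3
    for start in range(0, max(1, len(out) - win + 1), step):
        window = out[start:start + win]
        major = 1 if sum(window) > len(window) // 2 else 0
        out[start:start + win] = [major] * len(window)
    return out

noisy = [1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0]
print(smooth(noisy))  # one clean rising run followed by one descending run
```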
In the scheme of the invention, a highest-point angle threshold and a lowest-point angle threshold are set for the highest and lowest points of the pull-up action, recorded as the first threshold and the second threshold respectively, and whether the body-descending and body-rising stages are completed correctly is judged by comparing the first key angle with these thresholds.
For example, in the body-rising stage, if the first key angle becomes smaller than or equal to the first (highest-point) threshold, the rising motion is completed; otherwise it is not. If the rising motion is not completed but the body-descending stage then begins, the pull-up gesture is wrong: the correct count is not incremented and the error count is incremented by 1.
In the body-descending stage, if the first key angle becomes larger than or equal to the second (lowest-point) threshold, the descending motion is completed; otherwise it is not. If the descending motion is not completed but the body-rising stage then begins, the pull-up gesture is likewise wrong: the correct count is not incremented and the error count is incremented by 1.
One continuous body-rising stage followed by one continuous body-descending stage constitutes one counting period; when both the rising and the descending motions complete within a period, the correct count is incremented by 1.
The first and second thresholds are typically set to 35 degrees and 175 degrees, respectively; in the scheme of the invention they may also be modified manually to suit different individuals.
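The threshold test can be sketched as follows; the 35/175-degree defaults come from the text, while the function itself is an illustrative assumption. `extreme_angle` is the smallest key angle seen during a rising run, or the largest seen during a descending run.

```python
FIRST_THRESHOLD = 35.0    # key angle at the highest point, degrees
SECOND_THRESHOLD = 175.0  # key angle in the fully extended start posture

def stage_completed(stage, extreme_angle,
                    first=FIRST_THRESHOLD, second=SECOND_THRESHOLD):
    """Check one stage against its threshold: a rising stage must
    bring the key angle down to the first threshold; a descending
    stage must bring it back up to the second threshold."""
    if stage == "rising":
        return extreme_angle <= first
    return extreme_angle >= second

print(stage_completed("rising", 30.0))       # prints True
print(stage_completed("descending", 160.0))  # prints False
```

A stage that fails this check would add to the error count rather than the correct count, per the rule above.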
A main factor behind the poor recognition accuracy of the pull-up action gesture in the prior art is the individual difference between testers. To eliminate this influence, the scheme of the invention adapts the first and second thresholds to each tester. The specific method is as follows:
simulate in advance with a proportional model of human height and upper-arm and forearm lengths, and calculate the most reasonable first and second thresholds for each body type; the model can be obtained by graphical simulation or by big-data modeling;
calculate the tester's height from the skeleton recognition data obtained by the human pose estimation algorithm and the on-site distance between the image acquisition device and the tester;
and automatically match the most reasonable first and second thresholds according to the tester's upper-arm and forearm lengths and proportions obtained by the human pose estimation algorithm.
In this way the judgment of the completion status of the body-rising and body-descending motions becomes more accurate, and the accuracy of action gesture recognition in the scheme of the invention is far higher than in the prior art.
Specific example 2
The embodiment 2 is a further improvement made on the basis of the embodiment 1, and aims to solve the problem of accurate identification of the pull-up and pull-down over-bar standard. The main aim is to do this when the tester leans up over the cross bar by the chin, but the movement is not in place.
For this purpose, in the scheme of the invention, the camera for acquiring the test video is arranged at the lateral rear part of the tester, the height of the camera is horizontal to the holding rod with the pull-up body, the judgment is carried out by combining the following two conditions in the body lifting stage, and the correct counting or the error counting is carried out according to the conclusion whether the pull-up and the pull-down pass the bar.
The first condition is that: the height difference H between the wrist and the chin, as shown in fig. 6.
The second condition is that: the distance S between the wrist and the shoulder, as shown in fig. 7.
And obtaining the height and the arm length of the tester according to the image of the tester, and obtaining the upper threshold and the lower threshold of the second condition. For example: the height can be calculated through the chin and ankle joint skeleton points of a tester, the length of the upper arm and the length of the lower arm can be calculated through the shoulder joint, elbow joint and wrist joint skeleton points, and therefore when the height difference H =0 between the wrist and the chin is further calculated (for convenience of calculation, a negative value is taken when the chin is located below the wrist, and a positive value is taken otherwise), the minimum value Smin of the distance S between the wrist and the shoulder is calculated.
When H is more than or equal to 0, if the distance S is less than or equal to Smin, the action is correct (the standard of pulling up the chin to pass a bar), and 1 is added to the correct counting number; otherwise, the tester is judged to be over the cross bar by bending the chin upwards, the actual action is not in place, the correct counting is not performed, and the error counting is increased by 1.
With this method, whether a pull-up action is performed to standard can be judged more accurately.
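As an illustration only, the judgment in the body-rising stage can be sketched as follows. The function name and return values are hypothetical, not from the patent; H and S are the wrist-chin height difference and wrist-shoulder distance defined above, with Smin derived from the tester's body proportions:

```python
def judge_over_bar(h, s, s_min):
    """Sketch of the Embodiment 2 check. h: wrist-chin height difference
    (negative while the chin is still below the wrist), s: wrist-shoulder
    distance, s_min: minimum wrist-shoulder distance derived from the
    tester's height and arm length."""
    if h < 0:
        return None          # chin has not reached wrist (bar) level yet
    if s <= s_min:
        return "correct"     # chin clears the bar with the body fully pulled up
    return "error"           # chin craned over the bar, action not in place
```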
Specific example 3
Embodiment 3 is a further improvement on the basis of embodiment 1, and aims to detect cases where the tester assists completion of the movement by swinging the body or by bending the legs and kicking.
To this end, in the scheme of the invention, the swing amplitude of the body is judged by the included angle E between the waist-joint-to-ankle-joint line and the vertical; when the amplitude exceeds the set range, the correct count is not incremented and the error count is incremented by 1, as shown in fig. 8.
Further, the leg-kicking motion is judged by the included angle F formed by the waist joint, knee joint and ankle joint; when this angle is outside the set range, the correct count is not incremented and the error count is incremented by 1, as shown in fig. 9.
The allowed range of the angle between the waist-ankle line and the vertical, and the threshold of the waist-knee-ankle angle, can be preset or calculated from data such as the tester's height and the lengths and proportions of the thigh and shank.
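The two angle measurements above can be computed directly from 2-D skeleton points. The snippet below is a minimal sketch; the point format and function names are assumptions, using image coordinates where y grows downward:

```python
import math

def angle_from_vertical(waist, ankle):
    """Included angle E (degrees) between the waist-to-ankle line and the
    vertical; points are (x, y) pairs in image coordinates."""
    dx = ankle[0] - waist[0]
    dy = ankle[1] - waist[1]
    return math.degrees(math.atan2(abs(dx), abs(dy)))

def joint_angle(a, b, c):
    """Included angle F (degrees) at joint b formed by segments b->a and
    b->c, e.g. waist-knee-ankle for the leg-kick check."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    cos = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))
```

The same `angle_from_vertical` computation also applies to the included angle G of Embodiment 4 by substituting the shoulder joint for the waist joint.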
Specific example 4
Embodiment 4 is a further improvement on the basis of embodiment 1, and aims to recognize the case where the tester's body rises tilted rather than in a balanced posture.
To this end, in the scheme of the invention, the included angle G between the shoulder-joint-to-ankle-joint line and the vertical is judged; when the angle exceeds the set range, the correct count is not incremented and the error count is incremented by 1, as shown in fig. 10.
The threshold for the included angle G between the shoulder-ankle line and the vertical can be preset or calculated from data such as the tester's height.
In the above embodiments, when the tester's pull-up posture returns to the initial stage, the timestamp of the test video (the moment on the current video's time axis) is recorded and associated with each count. Drop-down lists are generated for both the correct count and the error count; clicking a count in a drop-down list jumps the test video to the associated timestamp for playback. This function enables playback verification of the action posture: the action video pointed to by a specified count can be replayed quickly, improving verification efficiency.
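A minimal sketch of the timestamp-to-count association follows; the class and method names are illustrative, not from the patent:

```python
class CountPlaybackIndex:
    """Associates each correct/error count with the video timestamp
    recorded when the posture returned to the initial stage."""

    def __init__(self):
        self.counts = {"correct": [], "error": []}

    def record(self, kind, timestamp):
        # kind is "correct" or "error"; timestamp is seconds on the video time axis
        self.counts[kind].append(timestamp)

    def seek_time(self, kind, count_no):
        """Clicking count `count_no` (1-based) in the drop-down list
        returns the timestamp the player should jump to for playback."""
        return self.counts[kind][count_no - 1]
```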
In combination with the above method, the present invention further provides a system for intelligently identifying the completion status of the pull-up action gesture. As shown in fig. 11, the system includes an image acquisition device 10 for acquiring the test video, an action gesture recognition device 20, a display device 30, and a prompt device 40.
The action gesture recognition device 20 implements the above algorithm for intelligently recognizing the completion status of the pull-up action gesture. Specifically, the action gesture recognition device 20 includes:
the identification module 21, configured to compare the first key angles in each pair of adjacent frames of the test video, record an identifier for each comparison result, and concatenate the identifiers into a result list; the first key angle is the included angle formed by the shoulder joint, elbow joint and wrist joint.
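Assuming the per-frame first key angles have already been extracted from the skeleton points, the comparison performed by the identification module can be sketched as follows. The convention that 1 marks a decreasing angle is an assumption, consistent with the digits 1 and 0 named later in the text:

```python
def make_result_list(first_key_angles):
    """Compare the first key angle in each pair of adjacent frames:
    identifier 1 when the angle decreases (arm flexing, body rising),
    identifier 0 otherwise. Returns the concatenated result list."""
    return [1 if cur < prev else 0
            for prev, cur in zip(first_key_angles, first_key_angles[1:])]
```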
The lifting-stage identification module 22 is used to identify the body-descending stage or body-rising stage of the pull-up from the changes of the identifiers in the result list; the posture of completing one pull-up action is divided in advance into a body-descending stage and a body-rising stage.
The action gesture recognition module 23 is configured to recognize the pull-up action gesture from the alternation of the identified body-descending and body-rising stages.
The counting module 25 counts according to the pull-up action gesture.
On this basis, the action gesture recognition device further comprises a correction module 24, configured to traverse the result list with a sliding window and replace all identifiers in the window with the majority identifier; the sliding window covers at least five frames.
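The correction step can be sketched as a majority vote over a five-frame window. The text does not specify whether the window advances by one frame or by its full width; a tumbling window is assumed here:

```python
from collections import Counter

def smooth_identifiers(ids, window=5):
    """Replace all identifiers inside each window with the majority
    identifier, suppressing isolated jitter in the result list."""
    out = list(ids)
    for i in range(0, len(out) - window + 1, window):
        majority = Counter(out[i:i + window]).most_common(1)[0][0]
        out[i:i + window] = [majority] * window
    return out
```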
On this basis, the counting module 25 judges whether the body-rising action of the pull-up is completed by detecting whether the first key angle is smaller than a first threshold, and generates a rise-completion flag when it is; it judges whether the body-descending action of the pull-up is completed by detecting whether the first key angle is larger than a second threshold, and generates a descent-completion flag when it is; and it increments the correct count or the error count according to the rise-completion flag and the descent-completion flag.
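Putting the two thresholds together, the counting logic can be sketched as a small state machine. The threshold values below are placeholders; the patent derives them from the top-of-pull-up posture and the initial posture, and error counting for incomplete cycles is omitted from this sketch:

```python
def count_pull_ups(first_key_angles, first_threshold=60.0, second_threshold=160.0):
    """One correct count per completed cycle: a rise-completion flag is
    set when the angle drops below the first threshold (highest point),
    and the count increments when the angle then exceeds the second
    threshold (arms extended again)."""
    correct = 0
    rise_done = False
    for angle in first_key_angles:
        if angle < first_threshold:
            rise_done = True          # body reached the highest point
        elif angle > second_threshold and rise_done:
            correct += 1              # full cycle: rise then descent
            rise_done = False
    return correct
```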
On this basis, the system further comprises a timestamp association module: when the tester's pull-up posture returns to the initial stage, the timestamp of the test video (the moment on the current video's time axis) is recorded and associated with each count; the counts generate a drop-down list, and clicking a count in the drop-down list jumps the test video to the associated timestamp for playback.
Combining the descriptions of the above specific embodiments, the system and method for intelligently identifying the completion status of the pull-up action gesture provided by the invention have the following advantages over the prior art:
First, the body-rising and body-descending stages are identified by comparing the first key angle across consecutive frames of the video, and the completion of the pull-up action gesture is recognized from them. No recognition model needs to be trained; the video frames are used directly, which simplifies the algorithm and makes deployment quick and convenient.
Second, the comparison result of the first key angle is encoded with the digits 1 and 0. Compared with directly comparing raw angle values as in the prior art, this simplifies subsequent data processing and improves processing efficiency.
Third, based on the 1 and 0 identifiers, unexpected oscillations in the data are eliminated by a sliding-window algorithm, improving recognition accuracy.
Fourth, taking the wrist-chin height difference as the first condition and the wrist-shoulder distance as the second condition, a conclusion on whether the pull-up is standard is obtained from the two conditions, and the correct count or the error count is incremented according to that conclusion. This greatly improves accuracy and prevents the case where a tester cranes the chin over the cross bar without the action being in place.
Fifth, the whole system is highly convenient: it can operate without Internet access and is quick and easy to deploy.
Sixth, false actions in which the tester assists completion by swinging the body or by bending the legs and kicking, as well as rising with the body tilted, can be recognized accurately, further improving accuracy.
Seventh, all thresholds in the scheme can be obtained automatically from data such as the tester's height, arm length and proportions, and leg length and proportions, avoiding result errors caused by individual differences.
Finally, it should also be noted that the terms "comprises," "comprising," or any other variation thereof, as used herein, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The present invention is not limited to the above preferred embodiments; any structural change made under the teaching of the present invention that is the same as or similar to the technical solution of the present invention falls within the protection scope of the present invention.

Claims (10)

1. A method for intelligently identifying the completion condition of a pull-up action gesture is characterized by comprising the following steps:
dividing the standard action posture for finishing one pull-up into a body ascending stage and a body descending stage;
setting an included angle formed by a shoulder joint, an elbow joint and a wrist joint as a first key angle;
comparing the first key angles in each pair of adjacent frames of a test video, identifying the comparison results, and concatenating the identifiers into a result list;
traversing the result list with a sliding window, and replacing all identifiers in the sliding window with the majority identifier;
identifying the body ascending stage and the body descending stage of the pull-up according to the changes of the identifiers in the result list;
and recognizing the action posture of the pull-up according to the alternation of the identified body ascending stage and body descending stage, and counting.
2. The method of claim 1,
the camera for acquiring the test video is arranged to the side and rear of the tester, at a height level with the pull-up grip bar;
taking a height difference between a wrist and a chin as a first condition;
taking the distance between the wrist and the shoulder as a second condition, and obtaining a conclusion whether the pull-up is standard according to the first condition and the second condition;
and performing correct counting or error counting according to the conclusion whether the pull-up is standard or not.
3. The method of claim 1, wherein the identifier is represented by a number 0 or 1.
4. The method of claim 1,
judging whether the body ascending action is completed by detecting whether the first key angle is smaller than a first threshold, and generating a first completion mark when the ascending action is completed;
judging whether the body descending action is completed by detecting whether the first key angle is larger than a second threshold, and generating a second completion mark when the descending action is completed;
performing correct counting or error counting according to the first completion mark and the second completion mark;
the first threshold is the included angle formed by the shoulder joint, elbow joint and wrist joint when the pull-up posture reaches the highest point of the body ascent; the second threshold is the included angle formed by the shoulder joint, elbow joint and wrist joint in the initial pull-up posture.
5. The method according to claim 1, wherein whether the leg of the tester is bent is determined by detecting whether the second key angle is smaller than a third threshold value, and the correct counting or the wrong counting is performed according to the determination result of whether the leg is bent; the second key angle is an included angle formed by a shoulder joint, a waist joint and a knee joint.
6. The method of claim 2, wherein the height and arm length of the tester are obtained from the image of the tester, and the upper and lower thresholds for the second condition are obtained.
7. The method of claim 1, wherein when the pull-up posture of the tester returns to the start stage, a timestamp of the test video is recorded and associated with each count; the counts generate a drop-down list, and by clicking a count in the drop-down list, the test video jumps to the timestamp for playback.
8. A system for intelligently identifying the completion condition of a pull-up action gesture, comprising an image acquisition device for acquiring a test video and an action gesture recognition device, characterized in that the action gesture recognition device comprises:
the identification module, used to compare the first key angles in each pair of adjacent frames of the test video, identify the comparison results, and concatenate the identifiers into a result list; the first key angle is the included angle formed by the shoulder joint, elbow joint and wrist joint;
the lifting stage identification module, used to identify the body descending stage or body ascending stage of the pull-up according to the changes of the identifiers in the result list; the posture of completing one pull-up action is divided in advance into a body descending stage and a body ascending stage;
the correction module, used to traverse the result list with a sliding window and replace all identifiers in the sliding window with the majority identifier, wherein the sliding window covers at least five frames;
the action gesture recognition module, used to recognize the pull-up action gesture according to the alternation of the identified body descending stage or body ascending stage;
and the counting module, which counts according to the action gesture.
9. The system of claim 8,
judging whether the body ascending action of the pull-up is completed by detecting whether the first key angle is smaller than a first threshold, and generating a first completion mark when the ascending action is completed;
judging whether the body descending action of the pull-up is completed by detecting whether the first key angle is larger than a second threshold, and generating a second completion mark when the descending action is completed;
and performing correct counting or error counting according to the first completion mark and the second completion mark.
10. The system of claim 8, further comprising an over-bar identification module, configured to:
taking a height difference between a wrist and a chin as a first condition;
taking the distance between the wrist and the shoulder as a second condition, and obtaining a conclusion whether the pull-up is standard according to the first condition and the second condition;
and performing correct counting or error counting according to the conclusion whether the pull-up is standard or not.
CN202110792695.1A 2021-07-14 2021-07-14 System and method for intelligently identifying completion condition of pull-up action gesture Active CN113255624B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110792695.1A CN113255624B (en) 2021-07-14 2021-07-14 System and method for intelligently identifying completion condition of pull-up action gesture

Publications (2)

Publication Number Publication Date
CN113255624A true CN113255624A (en) 2021-08-13
CN113255624B CN113255624B (en) 2021-09-21

Family

ID=77191221

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110792695.1A Active CN113255624B (en) 2021-07-14 2021-07-14 System and method for intelligently identifying completion condition of pull-up action gesture

Country Status (1)

Country Link
CN (1) CN113255624B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110043626A1 (en) * 2009-08-18 2011-02-24 Wesley Kenneth Cobb Intra-trajectory anomaly detection using adaptive voting experts in a video surveillance system
CN105608467A (en) * 2015-12-16 2016-05-25 西北工业大学 Kinect-based non-contact type student physical fitness evaluation method
CN105894540A (en) * 2016-04-11 2016-08-24 上海斐讯数据通信技术有限公司 Method and system for counting vertical reciprocating movements based on mobile terminal
CN107909081A (en) * 2017-10-27 2018-04-13 东南大学 The quick obtaining and quick calibrating method of image data set in a kind of deep learning
CN108577855A (en) * 2018-05-07 2018-09-28 北京大学 A kind of non-contact type body building monitoring method
CN110163038A (en) * 2018-03-15 2019-08-23 南京硅基智能科技有限公司 A kind of human motion method of counting based on depth convolutional neural networks
CN110909687A (en) * 2019-11-26 2020-03-24 爱菲力斯(深圳)科技有限公司 Action feature validity determination method, computer storage medium, and electronic device
CN111282248A (en) * 2020-05-12 2020-06-16 西南交通大学 Pull-up detection system and method based on skeleton and face key points
CN111368810A (en) * 2020-05-26 2020-07-03 西南交通大学 Sit-up detection system and method based on human body and skeleton key point identification
CN111507301A (en) * 2020-04-26 2020-08-07 腾讯科技(深圳)有限公司 Video processing method, video processing device, computer equipment and storage medium
CN112528957A (en) * 2020-12-28 2021-03-19 北京万觉科技有限公司 Human motion basic information detection method and system and electronic equipment
CN112800905A (en) * 2021-01-19 2021-05-14 浙江光珀智能科技有限公司 Pull-up counting method based on RGBD camera attitude estimation
CN113011344A (en) * 2021-03-23 2021-06-22 安徽一视科技有限公司 Pull-up quantity calculation method based on machine vision

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115138059A (en) * 2022-09-06 2022-10-04 南京市觉醒智能装备有限公司 Pull-up standard counting method, pull-up standard counting system and storage medium of pull-up standard counting system
CN115138059B (en) * 2022-09-06 2022-12-02 南京市觉醒智能装备有限公司 Pull-up standard counting method, pull-up standard counting system and storage medium of pull-up standard counting system

Also Published As

Publication number Publication date
CN113255624B (en) 2021-09-21

Similar Documents

Publication Publication Date Title
JP6733738B2 (en) MOTION RECOGNITION DEVICE, MOTION RECOGNITION PROGRAM, AND MOTION RECOGNITION METHOD
CN105903157B (en) Electronic coach realization method and system
CN111437583B (en) Badminton basic action auxiliary training system based on Kinect
KR20230056118A (en) Exercise program recommendation system according to physical ability
CN109191588A (en) Move teaching method, device, storage medium and electronic equipment
CN111282248A (en) Pull-up detection system and method based on skeleton and face key points
JP6943294B2 (en) Technique recognition program, technique recognition method and technique recognition system
CN113255622B (en) System and method for intelligently identifying sit-up action posture completion condition
CN113255623B (en) System and method for intelligently identifying push-up action posture completion condition
CN113398556B (en) Push-up identification method and system
CN113255624B (en) System and method for intelligently identifying completion condition of pull-up action gesture
EP3786971A1 (en) Advancement manager in a handheld user device
CN113128336A (en) Pull-up test counting method, device, equipment and medium
CN110227243A (en) Table tennis practice intelligent correcting system and its working method
CN111569397B (en) Handle motion counting method and terminal
CN114973401A (en) Standardized pull-up assessment method based on motion detection and multi-mode learning
CN114093032A (en) Human body action evaluation method based on action state information
CN113191200A (en) Push-up test counting method, device, equipment and medium
CN111353345B (en) Method, apparatus, system, electronic device, and storage medium for providing training feedback
CN115068919B (en) Examination method of horizontal bar project and implementation device thereof
CN116271757A (en) Auxiliary system and method for basketball practice based on AI technology
Karunaratne et al. Objectively measure player performance on olympic weightlifting
CN114092863A (en) Human body motion evaluation method for multi-view video image
CN114038054A (en) Pull-up detection device and method
Kahtan et al. Motion analysis-based application for enhancing physical education

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220902

Address after: Room 2310, 23rd Floor, No. 24, Jianguomenwai Street, Chaoyang District, Beijing 100010

Patentee after: One Body Technology Co.,Ltd.

Address before: Room zt1009, science and technology building, No. 45, Zhaitang street, Mentougou District, Beijing 102300 (cluster registration)

Patentee before: Beijing Yiti Technology Co.,Ltd.
