CN113255622B - System and method for intelligently identifying sit-up action posture completion condition - Google Patents

System and method for intelligently identifying sit-up action posture completion condition

Info

Publication number
CN113255622B
CN113255622B (application CN202110792620.3A)
Authority
CN
China
Prior art keywords
sit
stage
action
posture
key
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110792620.3A
Other languages
Chinese (zh)
Other versions
CN113255622A (en)
Inventor
林平
李瀚懿
丁观莲
陈天宜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
One Body Technology Co.,Ltd.
Original Assignee
Beijing Yiti Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Yiti Technology Co., Ltd.
Priority to CN202110792620.3A
Publication of CN113255622A
Application granted
Publication of CN113255622B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/30 ICT specially adapted for therapies or health-improving plans relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Psychiatry (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Epidemiology (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a system and a method for intelligently identifying the completion status of the sit-up action posture. The method comprises the following steps: setting a key angle formed by the relevant body parts for the action posture; comparing the key angles in each pair of adjacent frames of a test video and recording an identifier for each comparison result; and concatenating the identifiers into a result list, from which the completion status of the action posture is obtained according to the changes of the identifiers. By comparing the key angles of successive frames in the video, the invention identifies the body descending and ascending stages and thereby the completion status of the action posture; no recognition model needs to be trained, the video images can be used directly for recognition, the algorithm is simplified, and deployment is rapid and convenient.

Description

System and method for intelligently identifying sit-up action posture completion condition
Technical Field
The invention relates to the technical field of intelligent image recognition, and in particular to a system and a method for intelligently recognizing the completion status of the sit-up action posture.
Background
Sit-ups are one of the most common forms of strength training, since they are simple, easy to perform, and not restricted by venue. They are accordingly a compulsory item in military training and an important item in strength training and assessment at schools of all levels.
During sit-up exercise or examination, judging whether each action is standard, and counting the repetitions, are critical to the exercise or examination result.
With the development of AI technology, automatic recognition and counting of sit-ups has begun to replace manual counting and is gradually coming into wide use. For example, Chinese invention patent CN111368810B discloses a sit-up detection system and method based on human body and skeleton key point recognition: it detects the change of the human body bounding-box shape across consecutive frames, together with the change of the angle between the horizontal and the line connecting the waist and shoulder skeleton key points, to comprehensively judge whether the person under test is lying down, rising, or sitting up during the motion from lying to sitting, thereby detecting that person's sit-ups within a fixed time. However, that scheme has the following problems:
1. It relies on two parameters: the angle V between the horizontal and the line connecting the hip and shoulder skeleton key points of the person under test, and the angle V' between the diagonal and the bottom edge of the human body bounding box. The recognition error of the bounding box is large, so the recognition result is not accurate enough.
2. Its sit-up behavior recognition module is a deep learning model based on an attention mechanism and an LSTM, trained as follows: collect sit-up videos of subjects of different ages, sexes and statures and label them as positive samples, while collecting some non-sit-up videos as negative samples; construct an end-to-end two-layer network by fine-tuning the parameters of a publicly available attention-plus-LSTM architecture; then input a video frame sequence and output whether it ends in sit-up behavior. Training this model requires a large amount of sample data, and the preliminary workload is heavy.
In view of this, it is necessary to improve the existing intelligent recognition algorithms for the sit-up posture completion status, so as to improve recognition accuracy, reduce the preliminary workload, and facilitate system deployment.
Disclosure of Invention
In view of the above defects, the technical problem to be solved by the present invention is to provide a system and a method for intelligently identifying the sit-up posture completion status, so as to solve the prior-art problems that the recognition result is not accurate enough and that a large amount of sample data must be gathered beforehand to train a model, entailing a heavy preliminary workload.
To this end, the invention provides a method for intelligently identifying the completion status of the sit-up action posture, comprising the following steps:
dividing the standard action posture for completing one sit-up into a body ascending stage and a body descending stage;
setting the angle formed by the shoulder joint, waist joint and knee joint as the key angle;
comparing the key angles in each pair of adjacent frames of a test video, recording an identifier for each comparison result, and concatenating the identifiers into a result list;
traversing the result list with a sliding window and replacing all identifiers in the window with the majority identifier, the sliding window covering at least five frames;
identifying the body ascending stage and body descending stage of the sit-up according to the changes of the identifiers in the result list;
and recognizing the sit-up action posture according to the alternation of the identified body ascending and descending stages.
In the above method, preferably, the identifier is the numeral 0 or 1.
In the above method, preferably, whether the body ascending action is completed is judged by detecting whether the key angle is smaller than a first threshold, and a first completion flag is generated when it is;
whether the body descending action is completed is judged by detecting whether the key angle is larger than a second threshold, and a second completion flag is generated when it is;
counting is performed according to the first completion flag and the second completion flag;
the first threshold is the angle formed by the shoulder, waist and knee joints when the body is at the highest point of the sit-up; the second threshold is the angle formed by the shoulder, waist and knee joints in the starting posture of the sit-up.
In the above method, preferably, after one first completion flag is followed consecutively by one second completion flag, the correct count is incremented by 1; otherwise, the error count is incremented by 1.
In the above method, preferably, during the body ascending stage, whether the ascending action is correct is judged by detecting whether the knees are bent, whether the elbows touch the knees, and whether the hands leave the shoulders.
In the above method, preferably, when the tester's sit-up posture returns to the starting stage, a timestamp of the test video is recorded and associated with the current count; the counts form a drop-down list, and clicking a count in the list jumps the test video to the associated timestamp for playback.
The invention also provides a system for intelligently identifying the sit-up posture completion status, comprising an image acquisition device for acquiring the test video and an action posture recognition device, wherein the action posture recognition device comprises:
an identification module for comparing the key angles in each pair of adjacent frames of the test video, recording an identifier for each comparison result, and concatenating the identifiers into a result list, wherein the key angle is the angle formed by the shoulder, waist and knee joints;
a stage recognition module for identifying the body descending stage or body ascending stage of the sit-up according to the changes of the identifiers in the result list, the action posture for completing one sit-up being divided in advance into a body descending stage and a body ascending stage;
a correction module for traversing the result list with a sliding window and replacing all identifiers in the window with the majority identifier, the sliding window covering at least five frames;
and an action posture recognition module for recognizing the sit-up action posture according to the alternation of the identified body descending and ascending stages.
In the above system, preferably, the system further comprises a counting module, which
judges whether the ascending action of the sit-up is completed by detecting whether the key angle is smaller than a first threshold, and generates a first completion flag when it is;
judges whether the descending action of the sit-up is completed by detecting whether the key angle is larger than a second threshold, and generates a second completion flag when it is;
and counts according to the first completion flag and the second completion flag.
In the above system, preferably, the system further comprises a timestamp association module: when the tester's sit-up posture returns to the starting stage, a timestamp of the test video is recorded and associated with the current count; the counts form a drop-down list, and clicking a count in the list jumps the test video to the associated timestamp for playback.
The above technical solution shows that the system and method provided by the invention for intelligently identifying the sit-up posture completion status solve the prior-art problem that the recognition result is not accurate enough. Compared with the prior art, the invention has the following beneficial effects:
the key angles of successive frames in the video are compared to identify the body descending and ascending stages, and thereby the completion status of the sit-up action posture; no recognition model needs to be trained, the video images can be used directly for recognition, the algorithm is simplified, and deployment is rapid and convenient.
In addition, the result list is traversed with a sliding window and all identifiers in the window are replaced with the majority identifier, which reduces the influence of data oscillation and makes the recognition result more accurate.
Drawings
To explain the embodiments of the present invention and the technical solutions of the prior art more clearly, the drawings needed to describe them are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention; a person skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a flowchart of the method for intelligently identifying the sit-up posture completion status according to the present invention;
FIG. 2 is a schematic diagram of a standard sit-up posture completion process;
FIG. 3 is a schematic view of the key angle formed by the shoulder, waist and knee joints;
FIG. 4 is a schematic diagram illustrating the comparison and identification of the key angles in the (t+1)-th and t-th frames;
FIG. 5 is a schematic diagram of the sliding-window correction of data in the present application;
FIG. 6 is a schematic diagram of the system for recognizing the sit-up posture completion status according to the present invention;
FIG. 7 is a schematic diagram of the action posture recognition apparatus according to the present invention;
FIG. 8 is a diagram of a recognition frame in the present invention, showing a count and an AR auxiliary line;
fig. 9 is a screenshot of the present invention in practical application.
Detailed Description
The technical solutions of the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the embodiments described are only some, not all, of the embodiments of the invention. All other embodiments obtained by a person skilled in the art from these embodiments without inventive effort fall within the scope of the present invention.
The realization principle of the invention is as follows:
set a key angle formed by the relevant body parts for the sit-up action posture;
compare the key angles in each pair of adjacent frames of a test video and record an identifier for each comparison result;
and concatenate the identifiers into a result list, from which the completion status of the action posture is obtained according to the changes of the identifiers.
In the solution of the invention, the key angles of successive frames in the video are compared to identify the body descending and ascending stages, and thereby the completion status of the action posture; no recognition model needs to be trained, the video images can be used directly for recognition, the algorithm is simplified, and deployment is rapid and convenient.
To explain and illustrate the technical solution of the present invention and its implementation more clearly, several preferred embodiments are described below.
It should be noted that orientation terms such as "inside, outside", "front, back" and "left, right" are used herein with respect to stated reference objects; obviously, the use of such terms does not limit the scope of protection of the present invention.
A complete standard sit-up posture includes: the starting posture, the body ascending stage, the highest point, the body descending stage, and the return to the starting posture.
Starting posture: the upper body leans back until the scapulae touch the mat, the knees are bent to about 90 degrees, the feet are flat on the mat, and the hands are crossed in front of the chest on the two shoulders, or placed behind the head.
Body ascending stage: tighten the abdominal muscles, slowly raise the head and then the upper body; throughout the motion both soles must stay firmly on the ground and the eyes watch the knees, the abdominal muscles contracting until the upper body forms a 90-degree angle with the thighs or the elbows touch or pass the knees.
Body descending stage: after reaching the highest point and holding briefly, slowly lower the upper body back until it lies on the mat, returning to the starting posture.
The key points of the sit-up action include: the upper body leans back until the scapulae touch the mat; the upper body bends forward with the lower jaw tucked in until both elbows touch the knees or thighs at the same time; and the arms stay crossed in front of the chest with the hands holding the shoulders throughout.
Common errors: the backs of the shoulders do not touch the mat when lying back; the elbows do not touch the knees when bending forward; the hands do not hold the head, or leave the shoulders; the knee joints are not bent to 90 degrees; sitting up by propping on the elbows or thrusting with the hips; and so on. When such wrong postures occur, no count is made.
If the tester rests on the mat, the examination ends.
Specific embodiment 1
Referring to fig. 1, fig. 1 is a flowchart of the method provided by the invention for intelligently recognizing the sit-up posture completion status. The method includes the following steps:
Step 110: divide the standard action posture for completing one sit-up into two stages, a body ascending stage and a body descending stage.
As shown in fig. 2, from top to bottom these are: the starting posture, the body ascending stage, the highest point, the body descending stage, and the return to the starting posture.
Thus, from the starting posture back to the starting posture, the sit-up action passes through one body ascending stage and one body descending stage, and the sit-up action posture is recognized on the basis of recognizing these two stages.
Step 120: set the angle formed by the shoulder joint, waist joint and knee joint as the key angle.
As shown in fig. 3, the key angle formed by the shoulder, waist and knee joints is angle A in fig. 3.
The key angle is identified using existing human skeleton recognition technology (human posture estimation algorithms); that is, the solution of the invention can use human skeleton recognition technology to implement the action posture recognition algorithm.
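To make this step concrete, the following is a minimal sketch of the key-angle computation, assuming 2D keypoint coordinates for the shoulder, waist (hip) and knee have already been obtained from whatever human posture estimation algorithm is used; the function name and the example coordinates are illustrative, not taken from the patent.

```python
import math

def key_angle(a, b, c):
    """Angle in degrees at joint b, formed by the segments b->a and b->c.

    For the patent's key angle, pass the shoulder, waist and knee keypoints.
    """
    v1 = (a[0] - b[0], a[1] - b[1])  # vector from the middle joint to a
    v2 = (c[0] - b[0], c[1] - b[1])  # vector from the middle joint to c
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    # Clamp to [-1, 1] to guard against floating-point drift before acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

# Near the lying (starting) posture the key angle is wide, roughly 156 degrees here:
print(key_angle((-1.0, 0.0), (0.0, 0.0), (0.9, 0.4)))
```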
Step 130: capture a test video of an ongoing sit-up, or import and play an existing sit-up test video. Starting from the initial sit-up posture, compare in real time the key angles in each pair of adjacent frames of the test video, record an identifier for each comparison result, and concatenate the identifiers into a result list.
For ease of calculation, the comparison result of the key angles may be identified with the numeral 0 or 1.
If the key angle in the later of the two adjacent frames is smaller than that in the earlier frame, the comparison result is marked as 1, indicating the body ascending stage; conversely, if the key angle in the later frame is larger than that in the earlier frame, the comparison result is marked as 0, indicating the body descending stage.
As shown in fig. 4, if the key angle A(t+1) in the (t+1)-th frame is smaller than the key angle A(t) in the t-th frame, i.e. A(t+1) < A(t), the comparison result is marked as 1, indicating the body ascending stage; if A(t+1) > A(t), the comparison result is marked as 0, indicating the body descending stage.
Thus, assuming the current person's sit-up action posture is standard, the detection environment is ideal, and the data are completely accurate, the sit-up action posture passes through one body ascending stage and one body descending stage, and identification produces the result list [ 11111111110000000000 ].
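Under that same idealized assumption, the construction of the result list can be sketched as follows; the function name and the sample angle sequence are illustrative only.

```python
def build_result_list(angles):
    """Mark 1 when the key angle shrinks (body ascending), 0 otherwise."""
    return [1 if curr < prev else 0 for prev, curr in zip(angles, angles[1:])]

# One idealized repetition: the angle shrinks toward the top, then grows back.
angles = [120, 110, 95, 80, 60, 50, 62, 85, 100, 118, 121]
print(build_result_list(angles))  # [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
```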
Step 140: identify the body ascending stage and body descending stage of the sit-up according to the changes of the identifiers in the result list.
For example, in the result list [ 11111111110000000000 ], the first 10 bits are all 1, the last 10 bits are all 0, and between the 10th and 11th bits the identifier changes from 1 to 0; therefore the motion up to the 10th bit is the body ascending stage and the motion from the 11th bit onward is the body descending stage. Each bit position in the result list corresponds to a particular frame of the test video.
Step 150: recognize the sit-up action posture according to the alternation of the identified body ascending and descending stages.
Since the test video is a continuous process, the result list is also continuous, for example [ 1111111111000000000011111111110000000000 ... ]; thus, throughout the test video, the sit-up action posture can be recognized, and counted accordingly, from the alternation of the body-ascending and body-descending identifiers.
For example, the continuous result list above would be counted as 2, indicating that two sit-ups have been performed.
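As an illustration, such counting by alternation could look like the sketch below; it treats each run of 1s followed by a run of 0s as one repetition, and the helper name is illustrative (the validity checks against the thresholds, described later, are kept separate).

```python
def count_sit_ups(result_list):
    """Count one sit-up per rising phase (run of 1s) followed by a descent (run of 0s)."""
    count, seen_rise = 0, False
    for prev, curr in zip(result_list, result_list[1:]):
        if prev == 1 and curr == 0:               # rising phase just ended
            seen_rise = True
        elif prev == 0 and curr == 1 and seen_rise:
            count += 1                            # descent ended: one full repetition
            seen_rise = False
    if seen_rise and result_list and result_list[-1] == 0:
        count += 1                                # final descent closes the last repetition
    return count

print(count_sit_ups([1] * 10 + [0] * 10 + [1] * 10 + [0] * 10))  # 2
```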
While recognizing the test video, accidental factors such as video jitter and variation in ambient light may make the data oscillate unexpectedly, producing, for example, the result list [ 10101111110010010000 ]. The invention therefore further provides a method for eliminating such unexpected oscillation data, as follows:
traverse the result list with a sliding window of a small number of frames and uniformly replace all identifiers in the window with the majority identifier. The window size can be set according to the camera frame rate and other conditions and is usually at least five frames; the step size of the sliding window is half the window size, rounded to an integer. For example, with a window of five frames, the step size is 3 frames.
As shown in fig. 5, in the first sliding window over the result list there are three 1s and two 0s, so the two 0s are replaced with 1s. In this way the result list is converted to [ 11111111110000000000 ], and the identification and division of the body ascending and descending stages is then performed on the converted list.
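The correction step might be sketched as below; the patent does not pin down the boundary behaviour of overlapping windows, so this version takes each majority vote over the original identifiers, which is one reasonable reading.

```python
def smooth(result_list, window=5, step=3):
    """Majority-vote each sliding window (window of 5, step of 3, per the text)."""
    out = list(result_list)
    for start in range(0, len(result_list), step):
        chunk = result_list[start:start + window]
        if not chunk:
            break
        majority = 1 if sum(chunk) * 2 > len(chunk) else 0
        out[start:start + window] = [majority] * len(chunk)
    return out

noisy = [1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0]
print(smooth(noisy))  # the oscillations collapse into one run of 1s, then 0s
```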
In the solution of the invention, an included-angle threshold is set for the highest point of the action and another for the lowest point, recorded as the first threshold and the second threshold respectively; whether the body ascending and descending stages are completed correctly is judged by comparing the key angle against these thresholds.
For example, in the body ascending stage, if the key angle becomes less than or equal to the first threshold, the ascending action is completed; otherwise it is not. If the ascending action is not completed and the body descending stage then begins, the sit-up action posture is wrong: no correct count is made and the error count is incremented by 1.
In the body descending stage, if the key angle becomes greater than or equal to the second threshold, the descending action is completed; otherwise it is not. If the descending action is not completed and the body ascending stage then begins, the sit-up action posture is wrong: no correct count is made and the error count is incremented by 1.
One continuous body ascending stage followed by one continuous body descending stage constitutes one counting period; after one completed ascending action followed by one completed descending action, the correct count is incremented by 1.
The first threshold and the second threshold are usually set to 50 degrees and 120 degrees respectively, but in the solution of the invention they can be modified manually to suit different individuals.
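One possible way to combine the thresholds with the stage alternation into correct and error counts is sketched below; the control flow and the sample angle sequence are illustrative, with only the 50 and 120 degree defaults taken from the text.

```python
def count_with_thresholds(angles, first=50.0, second=120.0):
    """A correct count needs the rise to pass `first` and the descent to pass `second`."""
    correct = errors = 0
    phase, extreme, rise_ok = None, None, False
    for prev, curr in zip(angles, angles[1:]):
        new_phase = 'up' if curr < prev else 'down'
        if phase == 'up' and new_phase == 'down':
            rise_ok = extreme <= first            # first completion flag
            if not rise_ok:
                errors += 1                       # highest point never reached
        elif phase == 'down' and new_phase == 'up':
            if rise_ok and extreme >= second:
                correct += 1                      # second flag follows the first: count
            elif rise_ok:
                errors += 1                       # descent not completed
            rise_ok = False
        if new_phase != phase:
            extreme = curr                        # phase changed: restart the extreme
        else:
            extreme = min(extreme, curr) if new_phase == 'up' else max(extreme, curr)
        phase = new_phase
    if phase == 'down' and rise_ok and extreme >= second:
        correct += 1                              # final descent returned to the start posture
    return correct, errors

print(count_with_thresholds([125, 110, 90, 70, 48, 45, 60, 90, 110, 123]))  # (1, 0)
```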
In the solution of the invention, to maximize the accuracy of sit-up posture recognition, these two important parameters, the first threshold and the second threshold, can also be adjusted automatically. The specific method is as follows:
simulate in advance with models of human height and of the lengths and proportions of the upper arm, forearm, trunk, thigh and lower leg, and calculate the most reasonable first and second thresholds for each case; these can be obtained by simulation through drawing, or through big-data modeling.
Calculate the tester's height from the tester's skeleton recognition data obtained by the human posture estimation algorithm and from the on-site distance between the image acquisition device and the tester;
then automatically match the most reasonable first and second thresholds according to the lengths and proportions of the tester's upper arm, forearm, trunk, thigh and lower leg obtained by the human posture estimation algorithm.
In this way the judgment of the completion status of the body ascending and descending actions is more accurate, and the accuracy of action posture recognition in the solution of the invention is far higher than in the prior art.
The solution of the invention can also detect the common errors, judging them by combining the connections of the skeleton key points with the current motion stage of the body. For example, in the body ascending stage, whether the action posture is standard is judged by detecting the bending angle of the knees, whether the elbows touch the knees, whether the hands leave the shoulders, and so on; for each non-standard action posture the error count is incremented by 1.
These detections come down to identifying the corresponding angles and the coincidence of the corresponding skeleton-point connection lines; the algorithm is simpler than the detection described above and is not repeated here.
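By way of illustration, such checks might look like the sketch below, reusing key_angle() from the earlier sketch; the keypoint names and tolerance values are assumptions, not values from the patent.

```python
import math

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def posture_errors(kp, angle_tol=15.0, touch_tol=0.05):
    """Return the list of common errors detected in one frame of keypoints `kp`."""
    errs = []
    # key_angle(a, b, c) measures the angle at b, so this is the knee bend angle.
    if abs(key_angle(kp['hip'], kp['knee'], kp['ankle']) - 90.0) > angle_tol:
        errs.append('knees not bent to about 90 degrees')
    if distance(kp['elbow'], kp['knee']) > touch_tol:
        errs.append('elbows do not touch the knees at the highest point')
    if distance(kp['hand'], kp['shoulder']) > touch_tol:
        errs.append('hands have left the shoulders')
    return errs
```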
In the method of the invention, when the tester's sit-up posture returns to the starting stage, a timestamp of the test video (the moment on the current video's time axis) is recorded and associated with the current count; the counts form a drop-down list, and clicking a count in the list jumps the test video to the associated timestamp for playback.
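A bare-bones sketch of the timestamp association might look like this; the player object and its seek() method stand in for whatever video-playback interface is used and are hypothetical.

```python
timestamps = {}  # count number -> position (seconds) on the video time axis

def on_return_to_start(count, video_position_seconds):
    """Record the moment the tester returns to the starting posture."""
    timestamps[count] = video_position_seconds

def on_count_clicked(count, player):
    """Clicking a count in the drop-down list replays that repetition."""
    player.seek(timestamps[count])  # hypothetical player API
```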
Specific embodiment 2
Embodiment 2 of the present invention provides a system for intelligently identifying the sit-up posture completion status. As shown in fig. 6, the system includes an image acquisition device 10 for acquiring the test video, an action posture recognition device 20, a display device 30 and a prompt device 40.
The action posture recognition device 20 implements the above algorithm for intelligently recognizing the sit-up posture completion status. Specifically, the action posture recognition device 20 includes:
the identification module 21 is configured to compare key angles in two adjacent frames of images in the test video, and identify a comparison result of the key angles in the two frames of images. After identification, the identifications are connected in series to form a result list; wherein, the key angle is the included angle formed by the shoulder, the elbow and the wrist.
A stage recognition module 22 for identifying the body descending stage or body ascending stage of the sit-up according to the changes of the identifiers in the result list; the action posture for completing one sit-up is divided into two stages, a body descending stage and a body ascending stage.
And an action posture recognition module 23 for recognizing the sit-up action posture according to the alternation of the identified body descending and ascending stages.
On this basis, the action posture recognition device further includes a correction module 24 for traversing the result list with a sliding window and replacing all identifiers in the window with the majority identifier, the sliding window covering at least five frames.
On this basis, the action posture recognition device further includes a counting module 25.
The counting module 25 judges whether the ascending action of the sit-up is completed by detecting whether the key angle is smaller than the first threshold, and generates an ascending completion flag when it is; judges whether the descending action is completed by detecting whether the key angle is larger than the second threshold, and generates a descending completion flag when it is; and counts according to the ascending and descending completion flags.
On this basis, the system further includes a timestamp association module: when the tester's sit-up posture returns to the starting stage, a timestamp of the test video (the moment on the current video's time axis) is recorded and associated with the current count; the counts form a drop-down list, and clicking a count in the list jumps the test video to the associated timestamp for playback.
During system deployment, to make the detection result more accurate and eliminate the influence of environmental factors, the invention further provides AR (augmented reality) auxiliary lines, as shown in fig. 8. The AR auxiliary lines include:
a position auxiliary line 31 for guiding the tester to the recommended position for testing or training. The position auxiliary line is a line shown on the display device marking where an item such as a sports carpet, yoga mat, training mat, sports mat or anti-slip sports mat needs to be placed; aligning the side edge of the mat with the position auxiliary line improves recognition accuracy.
And a test area auxiliary line 32, a rectangular frame that defines the test area. Skeleton key points are inferred and judged only within the test area, which reduces the amount of calculation; at the same time, the skeleton key points are connected to form the key angle, which can also be displayed, so this auxiliary information can be compared against to help correct the action posture.
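The effect of the test-area auxiliary line on the computation can be sketched as follows; the estimator is passed in by the caller because the patent does not name a specific pose-estimation library, so this is an assumption.

```python
def keypoints_in_test_area(frame, box, estimate_pose):
    """Run pose inference only inside the rectangular test area `box` = (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    roi = frame[y1:y2, x1:x2]        # crop the image array to the test-area rectangle
    keypoints = estimate_pose(roi)   # caller-supplied pose estimator, e.g. any skeleton model
    # Map the ROI-relative keypoints back into full-frame coordinates.
    return [(x + x1, y + y1) for (x, y) in keypoints]
```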
Combining the descriptions of the above specific embodiments, the system and method provided by the invention for intelligently identifying the sit-up posture completion status have the following advantages over the prior art:
First, the body ascending and descending stages are identified by comparing the key angles of successive frames in the video, and thereby the completion status of the sit-up action posture is recognized; no recognition model needs to be trained, the video images can be used directly for recognition, the algorithm is simplified, and deployment is rapid and convenient.
Second, the comparison results of the key angles are identified with the numerals 1 and 0; compared with the prior-art practice of comparing angle values directly, this simplifies the subsequent data processing and improves its efficiency.
Third, based on the 1 and 0 identifiers, unexpected oscillation data is eliminated by the sliding-window algorithm, improving recognition accuracy.
Fourth, the AR auxiliary lines improve recognition accuracy on the one hand and, on the other, prevent the amount of calculation from growing when other people accidentally enter the video, improving efficiency.
Fifth, the whole system is highly convenient: it can operate without Internet access, and deployment is rapid and convenient.
Finally, it should also be noted that the terms "comprises", "comprising" and any other variation thereof used herein are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article or apparatus that comprises it.
The present invention is not limited to the above preferred embodiments; any structural change made under its teaching that is identical or similar to the technical solution of the present invention falls within its scope of protection.

Claims (9)

1. A method for intelligently identifying the sit-up action posture completion status, characterized by comprising the following steps:
dividing the standard action posture for completing one sit-up into a body ascending stage and a body descending stage;
setting the angle formed by the shoulder joint, waist joint and knee joint as the key angle;
comparing the key angles in each pair of adjacent frames of a test video, recording an identifier for each comparison result, and concatenating the identifiers into a result list;
traversing the result list with a sliding window and replacing all identifiers in the window with the majority identifier, the sliding window covering at least five frames;
identifying the body ascending stage and body descending stage of the sit-up according to the changes of the identifiers in the result list;
and recognizing the sit-up action posture according to the alternation of the identified body ascending and descending stages.
2. The method of claim 1, wherein the identifier is the numeral 0 or 1.
3. The method of claim 1, wherein
whether the body ascending action is completed is judged by detecting whether the key angle is smaller than a first threshold, and a first completion flag is generated when it is;
whether the body descending action is completed is judged by detecting whether the key angle is larger than a second threshold, and a second completion flag is generated when it is;
correct counting or error counting is performed according to the first completion flag and the second completion flag;
the first threshold is the angle formed by the shoulder, waist and knee joints when the body is at the highest point of the sit-up; and the second threshold is the angle formed by the shoulder, waist and knee joints in the starting posture of the sit-up.
4. The method of claim 1, wherein after one first completion flag is followed consecutively by one second completion flag the correct count is incremented by 1, and otherwise the error count is incremented by 1.
5. The method of claim 1, wherein, in the body ascending stage, whether the ascending action is correct is judged by detecting whether the knees are bent, whether the elbows touch the knees, and whether the hands leave the shoulders.
6. The method of claim 1, wherein when the tester's sit-up posture returns to the starting stage, a timestamp of the test video is recorded and associated with the current count; the counts form a drop-down list, and clicking a count in the list jumps the test video to the associated timestamp for playback.
7. A system for intelligently identifying the sit-up action posture completion status, comprising an image acquisition device for acquiring a test video and an action posture recognition device, characterized in that the action posture recognition device comprises:
an identification module for comparing the key angles in each pair of adjacent frames of the test video, recording an identifier for each comparison result, and concatenating the identifiers into a result list, wherein the key angle is the angle formed by the shoulder, waist and knee joints;
a stage recognition module for identifying the body descending stage or body ascending stage of the sit-up according to the changes of the identifiers in the result list, the action posture for completing one sit-up being divided in advance into a body descending stage and a body ascending stage;
a correction module for traversing the result list with a sliding window and replacing all identifiers in the window with the majority identifier, the sliding window covering at least five frames;
and an action posture recognition module for recognizing the sit-up action posture according to the alternation of the identified body descending and ascending stages.
8. The system of claim 7, further comprising a counting module, which
judges whether the ascending action of the sit-up is completed by detecting whether the key angle is smaller than a first threshold, and generates a first completion flag when it is;
judges whether the descending action of the sit-up is completed by detecting whether the key angle is larger than a second threshold, and generates a second completion flag when it is;
and performs correct counting or error counting according to the first completion flag and the second completion flag.
9. The system of claim 7, further comprising a timestamp association module that, when the tester's sit-up posture returns to the starting stage, records a timestamp of the test video and associates it with the current count; the counts form a drop-down list, and clicking a count in the list jumps the test video to the associated timestamp for playback.
CN202110792620.3A 2021-07-14 2021-07-14 System and method for intelligently identifying sit-up action posture completion condition Active CN113255622B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110792620.3A CN113255622B (en) 2021-07-14 2021-07-14 System and method for intelligently identifying sit-up action posture completion condition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110792620.3A CN113255622B (en) 2021-07-14 2021-07-14 System and method for intelligently identifying sit-up action posture completion condition

Publications (2)

Publication Number Publication Date
CN113255622A CN113255622A (en) 2021-08-13
CN113255622B (en) 2021-09-21

Family

ID=77191209

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110792620.3A Active CN113255622B (en) 2021-07-14 2021-07-14 System and method for intelligently identifying sit-up action posture completion condition

Country Status (1)

Country Link
CN (1) CN113255622B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113569828B (en) * 2021-09-27 2022-03-08 南昌嘉研科技有限公司 Human body posture recognition method, system, storage medium and equipment
JP7169718B1 (en) 2021-11-12 2022-11-11 株式会社エクサウィザーズ Information processing method, device and program
CN118503858A (en) * 2024-07-11 2024-08-16 南京陆加壹智能科技有限公司 Intelligent sit-up test method and system

Citations (5)

Publication number Priority date Publication date Assignee Title
CN101964047A (en) * 2009-07-22 2011-02-02 深圳泰山在线科技有限公司 Multiple trace point-based human body action recognition method
CN108564596A (en) * 2018-03-01 2018-09-21 南京邮电大学 A kind of the intelligence comparison analysis system and method for golf video
CN111104816A (en) * 2018-10-25 2020-05-05 杭州海康威视数字技术股份有限公司 Target object posture recognition method and device and camera
CN111368810A (en) * 2020-05-26 2020-07-03 西南交通大学 Sit-up detection system and method based on human body and skeleton key point identification
US20200372288A1 (en) * 2014-08-11 2020-11-26 Arizona Board Of Regents On Behalf Of Arizona State University Systems and methods for non-contact tracking and analysis of physical activity using imaging

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
CN105913045B (en) * 2016-05-09 2019-04-16 深圳泰山体育科技股份有限公司 The method of counting and system of sit-ups test
CN112870641B (en) * 2021-01-20 2021-11-19 岭南师范学院 Sit-up exercise information management system based on Internet of things and detection method thereof

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
CN101964047A (en) * 2009-07-22 2011-02-02 深圳泰山在线科技有限公司 Multiple trace point-based human body action recognition method
US20200372288A1 (en) * 2014-08-11 2020-11-26 Arizona Board Of Regents On Behalf Of Arizona State University Systems and methods for non-contact tracking and analysis of physical activity using imaging
CN108564596A (en) * 2018-03-01 2018-09-21 南京邮电大学 A kind of the intelligence comparison analysis system and method for golf video
CN111104816A (en) * 2018-10-25 2020-05-05 杭州海康威视数字技术股份有限公司 Target object posture recognition method and device and camera
CN111368810A (en) * 2020-05-26 2020-07-03 西南交通大学 Sit-up detection system and method based on human body and skeleton key point identification

Non-Patent Citations (1)

Title
Python玩人工智能:你的仰卧起坐达标了吗? [Playing AI with Python: do your sit-ups meet the standard?]; 编程玩家俱乐部 [Programming Players Club]; CSDN; 2021-05-05; pp. 1-8 *

Also Published As

Publication number Publication date
CN113255622A (en) 2021-08-13

Similar Documents

Publication Publication Date Title
CN113255622B (en) System and method for intelligently identifying sit-up action posture completion condition
CN111368810B (en) Sit-up detection system and method based on human body and skeleton key point identification
CN113762133A (en) Self-weight fitness auxiliary coaching system, method and terminal based on human body posture recognition
CN112749684A (en) Cardiopulmonary resuscitation training and evaluating method, device, equipment and storage medium
CN113262459B (en) Method, apparatus and medium for determining motion standard of sport body-building mirror
JP2020174910A (en) Exercise support system
CN114596451B (en) Body fitness testing method and device based on AI vision and storage medium
CN112966370B (en) Design method of human body lower limb muscle training system based on Kinect
CN113255623B (en) System and method for intelligently identifying push-up action posture completion condition
CN114093032A (en) Human body action evaluation method based on action state information
CN113255624B (en) System and method for intelligently identifying completion condition of pull-up action gesture
CN112827127A (en) Sit-up training system for physical education
CN112818800A (en) Physical exercise evaluation method and system based on human skeleton point depth image
CN115068919B (en) Examination method of horizontal bar project and implementation device thereof
CN116271757A (en) Auxiliary system and method for basketball practice based on AI technology
CN115761873A (en) Shoulder rehabilitation movement duration evaluation method based on gesture and posture comprehensive visual recognition
CN113842622B (en) Motion teaching method, device, system, electronic equipment and storage medium
WO2021261529A1 (en) Physical exercise assistance system
CN115116125A (en) Push-up examination method and implementation device thereof
KR102456055B1 (en) Apparatus and method for quantitatively analyzing and evaluating posture to train exercise by repetitive motion
CN114038054A (en) Pull-up detection device and method
CN113975776A (en) Sit-up evaluation system and method thereof
CN114092862A (en) Action evaluation method based on optimal frame selection
TWI821772B (en) Muscle state detection method and muscle state detection device using the same
CN111012357A (en) Seat body forward-bending detection device and method based on image recognition

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20220906

Address after: Room 2310, 23rd Floor, No. 24, Jianguomenwai Street, Chaoyang District, Beijing 100010

Patentee after: One Body Technology Co.,Ltd.

Address before: Room zt1009, science and technology building, No. 45, Zhaitang street, Mentougou District, Beijing 102300 (cluster registration)

Patentee before: Beijing Yiti Technology Co.,Ltd.

PE01 Entry into force of the registration of the contract for pledge of patent right
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A System and Method for Intelligently Recognizing the Completion Status of the Sit-up Action Posture

Effective date of registration: 20230627

Granted publication date: 20210921

Pledgee: Zhongguancun Beijing technology financing Company limited by guarantee

Pledgor: One Body Technology Co.,Ltd.

Registration number: Y2023990000325