Disclosure of Invention
In view of the above-mentioned defects, the technical problem to be solved by the present invention is to provide a system and a method for intelligently identifying the completion status of the sit-up action posture, so as to solve the problems in the prior art that the identification result is not accurate enough, that a large amount of sample data is required in advance to train models, and that the workload is large.
Therefore, the invention provides a method for intelligently identifying the completion condition of the sit-up action posture, which comprises the following steps:
dividing the standard action posture for completing one sit-up into a body ascending stage and a body descending stage;
setting an included angle formed by a shoulder joint, a waist joint and a knee joint as a key angle;
comparing the key angles in each pair of adjacent frames of a test video, marking the comparison result of the key angles in the two adjacent frames with an identifier, and connecting the identifiers in series to form a result list;
traversing the result list with a sliding window, and replacing all identifiers in the sliding window with the identifier that occurs most frequently, wherein the sliding window comprises at least five frames;
identifying the body ascending stage and the body descending stage of the sit-up according to the changes of the identifiers in the result list;
and recognizing the movement posture of the sit-up according to the alternation of the identified body ascending stage and body descending stage.
In the above method, preferably, the identifier is represented by a numeral 0 or 1.
In the above method, preferably, whether the body ascending motion is completed is determined by detecting whether the key angle is smaller than a first threshold, and a first completion flag is generated when the ascending motion is completed;
whether the body descending motion is completed is determined by detecting whether the key angle is larger than a second threshold, and a second completion flag is generated when the descending motion is completed;
counting according to the first completion mark and the second completion mark;
the first threshold is the included angle formed by the shoulder joint, the waist joint and the knee joint when the body is at the highest-point posture of the sit-up; the second threshold is the included angle formed by the shoulder joint, the waist joint and the knee joint at the starting posture of the sit-up.
In the above method, preferably, after the first completion flag and the second completion flag occur in succession, the correct count is incremented by 1; otherwise, the error count is incremented by 1.
In the above method, it is preferable that the ascending motion is determined to be correct by detecting whether the knees are bent, whether the elbows touch the knees, and whether the hands leave the shoulders.
In the above method, preferably, when the sit-up posture of the tester returns to the initial stage, a timestamp of the test video is recorded and associated with each count; the counts form a drop-down list, and clicking a count in the drop-down list jumps the test video to the associated timestamp for playback.
The invention also provides a system for intelligently identifying the sit-up action posture completion condition, which comprises an image acquisition device for acquiring the test video and an action posture identification device, wherein the action posture identification device comprises:
the identification module is used for comparing the key angles in adjacent frames of the test video, marking the comparison result of the key angles in the two frames with an identifier, and connecting the identifiers in series to form a result list; wherein the key angle is the included angle formed by the shoulder joint, the waist joint and the knee joint;
the lifting stage identification module is used for identifying and obtaining a body descending stage or a body ascending stage of the sit-up according to the change of the identification in the result list; the movement posture for completing the sit-up is divided into a body descending stage and a body ascending stage in advance;
the correction module is used for traversing the result list with a sliding window and replacing all identifiers in the sliding window with the identifier that occurs most frequently, wherein the sliding window comprises at least five frames;
and the motion posture identification module is used for identifying the motion posture of the sit-up according to the alternation of the descending stage or the ascending stage of the body obtained by identification.
In the above system, preferably, the system further comprises a counting module, which:
judges whether the ascending motion of the sit-up is completed by detecting whether the key angle is smaller than a first threshold, and generates a first completion flag when the ascending motion is completed;
judges whether the descending motion of the sit-up is completed by detecting whether the key angle is larger than a second threshold, and generates a second completion flag when the descending motion is completed;
and counts according to the first completion flag and the second completion flag.
In the above system, preferably, the system further includes a timestamp association module: when the sit-up posture of the tester returns to the initial stage, a timestamp of the test video is recorded and associated with each count; the counts form a drop-down list, and clicking a count in the drop-down list jumps the test video to the associated timestamp for playback.
According to the technical scheme, the system and the method for intelligently identifying the sit-up action posture completion condition solve the problem that the identification result in the prior art is not accurate enough. Compared with the prior art, the invention has the following beneficial effects:
the key angles of the front frame image and the rear frame image in the video are compared to identify the descending stage and the ascending stage of the body, so that the finishing condition of the sit-up action posture is identified, a recognition model does not need to be trained, the video images can be used for direct recognition, the algorithm is simplified, and the deployment is rapid and convenient.
In addition, the result list is traversed by using the sliding window, and all the identifiers in the sliding window are replaced by the identifiers with the largest number, so that the influence of data oscillation is reduced, and the identification result is more accurate.
Detailed Description
The technical solutions of the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings, and it is to be understood that the embodiments described below are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without any inventive step, are within the scope of the present invention.
The realization principle of the invention is as follows:
setting a key angle formed by the relevant body parts corresponding to the sit-up action posture;
comparing key angles in two adjacent frames of images in front and back of a test video, and identifying comparison results of the key angles;
and connecting the identifiers in series to form a result list, and obtaining the completion condition of the action posture according to the change of the identifiers in the result list.
According to the scheme provided by the invention, the key angles of the front frame image and the rear frame image in the video are compared to identify the descending stage and the ascending stage of the body, so that the completion condition of the action posture is identified, a recognition model is not required to be trained, the video images can be used for direct recognition, the algorithm is simplified, and the deployment is rapid and convenient.
In order to make the technical solution and implementation of the present invention more clearly explained and illustrated, several preferred embodiments for implementing the technical solution of the present invention are described below.
It should be noted that the terms of orientation such as "inside, outside", "front, back" and "left and right" are used herein as reference objects, and it is obvious that the use of the corresponding terms of orientation does not limit the scope of protection of the present invention.
A complete standard sit-up gesture includes: an initial posture, a body-up phase, a peak, a body-down phase and a return to initial posture.
Starting posture: the upper body leans backwards until the scapulae touch the floor mat, the knees are bent to about 90 degrees, the feet are placed flat on the floor mat, and the hands are either crossed in front of the chest holding the two shoulders or placed behind the head.
A body ascending stage: tighten the abdominal muscles, slowly lift the head and then the upper body; the soles of both feet must remain flat on the ground throughout, and the eyes watch the knees while the abdominal muscles contract, until the upper body forms a 90-degree angle with the thighs or the elbows touch or pass the knees.
A body descending stage: after reaching the highest point, keeping for a certain time, then slowly lying the upper body back until the body lies on the floor mat, and returning to the initial posture again.
The key points of the sit-up action include: the upper body leans backwards until the scapulae touch the floor mat; the upper body bends forwards with the lower jaw tucked in; the upper body bends forwards until both elbows simultaneously touch the knees or thighs; and the arms remain crossed in front of the chest with the hands holding the shoulders throughout.
Common errors: when lying back, the backs of the shoulders do not touch the mat; when rising, the elbows do not touch the knees; the hands do not hold the head; the hands leave the shoulders; the knee joints are not bent to 90 degrees; the tester sits up by pushing off the mat with the elbows or by thrusting the hips; and so on. When such incorrect postures occur, no count is made.
If the tester rests on the mat, the test is ended.
Embodiment 1
Referring to fig. 1, fig. 1 is a flowchart of a method for intelligently recognizing a sit-up gesture completion status provided by the present invention, the method includes the following steps:
step 110, dividing the standard movement posture for completing one sit-up into two stages, namely a body ascending stage and a body descending stage.
As shown in fig. 2, from top to bottom these are: an initial posture, a body ascending stage, a peak, a body descending stage, and a return to the initial posture.
Therefore, the sit-up action passes through a body ascending stage and a body descending stage from the starting posture to the returning starting posture, and the sit-up action posture is recognized based on the recognition of the body ascending stage and the body descending stage.
And step 120, setting an included angle formed by the shoulder joint, the waist joint and the knee joint as a key angle.
As shown in fig. 3, the key angle formed by the shoulder, waist and knee joints is angle A.
The identification of the key angle is realized based on existing human skeleton recognition technology (human pose estimation algorithms); the scheme of the invention can adopt such a technology to implement the action posture recognition algorithm.
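As a concrete illustration of how such a key angle can be obtained, the sketch below computes the included angle at the waist from 2-D joint coordinates such as those returned by a pose estimation library. This is a minimal sketch under assumed inputs; the function name and coordinate convention are illustrative, not part of the invention.

```python
import math

def key_angle(shoulder, waist, knee):
    """Included angle (degrees) at the waist joint, formed by the
    waist->shoulder and waist->knee vectors.

    Each argument is an (x, y) coordinate, e.g. a pixel position
    reported by a human pose estimation algorithm."""
    v1 = (shoulder[0] - waist[0], shoulder[1] - waist[1])
    v2 = (knee[0] - waist[0], knee[1] - waist[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    # Clamp to [-1, 1] to guard against floating-point drift.
    cos_a = max(-1.0, min(1.0, dot / norm))
    return math.degrees(math.acos(cos_a))
```

For example, a shoulder directly above the waist and a knee directly to its side give an angle of 90 degrees.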
Step 130, capture a test video of an ongoing sit-up, or import and play an existing test video of a sit-up. Starting from the initial posture of the sit-up, the key angles in each pair of adjacent frames of the test video are compared in real time, the comparison results are marked with identifiers, and the identifiers are connected in series to form a result list.
For ease of calculation, the comparison result of the key angles may be identified by the numeral 0 or 1.
If the key angle in the later frame of two adjacent frames is smaller than the key angle in the earlier frame, the comparison result is marked as 1, indicating the body ascending stage; conversely, if the key angle in the later frame is larger than the key angle in the earlier frame, the comparison result is marked as 0, indicating the body descending stage.
As shown in FIG. 4, if the key angle A_{t+1} in the (t+1)-th frame is less than the key angle A_t in the t-th frame, i.e. A_{t+1} < A_t, the comparison result is marked as 1, indicating the body ascending stage; if A_{t+1} > A_t, the comparison result is marked as 0, indicating the body descending stage.
Therefore, assuming that the current tester's sit-up posture is standard, the detection environment is good, and the data is completely accurate, one sit-up goes through a body ascending stage and a body descending stage, and after identification forms a result list such as [ 11111111110000000000 ].
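The per-frame marking described in this step can be sketched as follows. It is a simplified illustration in which an unchanged angle is treated as descending, a tie-breaking detail the text does not specify.

```python
def build_result_list(angles):
    """Compare the key angle in each pair of adjacent frames:
    mark 1 if the angle decreased (body ascending),
    mark 0 otherwise (body descending)."""
    return [1 if curr < prev else 0
            for prev, curr in zip(angles, angles[1:])]
```

Feeding in a sequence of per-frame key angles such as [120, 110, 100, 105, 120] yields the result list [1, 1, 0, 0].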
In step 140, the body up phase and the body down phase of the sit-up are identified according to the change identified in the result list.
For example, in the result list [ 11111111110000000000 ], the first 10 bits are all 1, the last 10 bits are all 0, and the identifier changes from 1 to 0 between the 10th and 11th bits; therefore, the motion up to the 10th bit is the body ascending stage and the motion from the 11th bit onward is the body descending stage. According to the bit position in the result list, each identifier can be mapped back to a particular frame of the test video.
And 150, recognizing the movement posture of the sit-up according to the alternation of the body ascending stage and the body descending stage of the sit-up obtained by recognition.
Since the test video is a continuous process, the result list will also be continuous, for example [ 1111111111000000000011111111110000000000 … … ], so that the motion posture of the sit-up can be identified and counted accordingly in the whole test video according to the alternation of the identification of the body ascending phase and the body descending phase.
For example, the above-mentioned consecutive result list would count as 2, indicating that two sit-ups have been performed.
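The counting by alternation can be sketched as follows: each transition from a run of 1s (ascending) to a run of 0s (descending) marks the peak of one repetition. This simplification assumes a result list already cleaned by the sliding-window correction and ignores the threshold checks discussed later.

```python
def count_situps(result_list):
    """Count repetitions as ascent->descent transitions in the
    cleaned result list."""
    return sum(1 for prev, curr in zip(result_list, result_list[1:])
               if prev == 1 and curr == 0)
```

Applied to the continuous list above, two ascent-to-descent transitions yield a count of 2.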
In the process of identifying the test video, factors such as video jitter and ambient light variation may cause the data to oscillate unexpectedly; for example, the generated result list may be [ 10101111110010010000 ]. The present invention therefore further provides a method for eliminating such unexpected oscillation data, as follows:
a sliding window spanning a small number of frames is used to traverse the result list, and all identifiers in the window are uniformly replaced with the identifier that occurs most frequently. The size of the sliding window may be set based on the frame rate of the camera and other conditions, and is usually at least five frames; the step size of the sliding window is half of the window size, rounded up to an integer. For example, if the window size is five frames, the step size is 3 frames.
As shown in fig. 5, in the first sliding window over the result list there are three 1s and two 0s; therefore, the two 0s are replaced with 1s, so that the result list is converted to [ 11111111110000000000 ], and the identification of the body ascending and body descending stages is then performed on the converted result list.
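One possible reading of this correction step is sketched below. The exact interaction of overlapping windows is not fully specified in the text, so the in-place majority replacement and the rounded-up step size here are assumptions.

```python
def majority_filter(result_list, window=5):
    """Traverse the result list with a sliding window, replacing all
    identifiers in each window with the majority identifier to
    suppress isolated oscillations."""
    marks = list(result_list)
    step = (window + 1) // 2          # half the window, rounded up: 5 -> 3
    for start in range(0, max(len(marks) - window + 1, 1), step):
        chunk = marks[start:start + window]
        majority = 1 if sum(chunk) * 2 > len(chunk) else 0
        marks[start:start + window] = [majority] * len(chunk)
    return marks
```

For instance, a window containing [1, 1, 0, 1, 1] is replaced entirely with 1s, eliminating the single spurious 0.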
In the scheme of the invention, an angle threshold is set for the highest point of the action and another for the lowest point, recorded as the first threshold and the second threshold respectively; whether the body ascending and body descending stages are correctly completed is judged by comparing the key angle with these thresholds.
For example, in the body ascending stage, if the key angle is less than or equal to the first threshold, the ascending motion is completed; otherwise it is not. If the ascending motion is not completed and the body descending stage is then entered, the sit-up posture is wrong: no correct count is made and the error count is incremented by 1.
In the body descending stage, if the key angle is greater than or equal to the second threshold, the descending motion is completed; otherwise it is not. If the descending motion is not completed and the body ascending stage is then entered, the sit-up posture is wrong: no correct count is made and the error count is incremented by 1.
One consecutive body ascending stage followed by one body descending stage is set as one counting period; after the ascending motion and the descending motion are both completed within one period, the correct count is incremented by 1.
The first threshold and the second threshold are usually set to 50 degrees and 120 degrees, but the first threshold and the second threshold may be manually modified to adapt to different individuals in the solution of the present invention.
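The threshold checks for one repetition can be sketched as follows, using the typical defaults of 50 and 120 degrees stated above. The per-repetition minimum and maximum key angles are assumed to be tracked elsewhere; the function name is illustrative.

```python
FIRST_THRESHOLD = 50    # degrees; key angle at the highest-point posture (typical default)
SECOND_THRESHOLD = 120  # degrees; key angle at the starting posture (typical default)

def repetition_correct(min_angle, max_angle,
                       first=FIRST_THRESHOLD, second=SECOND_THRESHOLD):
    """A repetition is correct when the key angle reached at most the
    first threshold at the peak (ascent completed) and at least the
    second threshold when lying back (descent completed)."""
    return min_angle <= first and max_angle >= second
```

Passing the thresholds as parameters also accommodates the manual or automatic adjustment described next.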
According to the scheme provided by the invention, in order to improve the accuracy of sit-up action posture recognition to the maximum extent, two important parameters, namely a first threshold and a second threshold, can be automatically adjusted. The specific method comprises the following steps:
the method comprises the steps of simulating by utilizing the length and proportion models of the height, the big arm, the small arm, the trunk, the thigh and the shank of a human body in advance, and calculating a first threshold value and a second threshold value which correspond to the most reasonable values; the method can be obtained through simulation in a drawing mode and can also be obtained through big data modeling.
Calculating the height of the tester according to the bone recognition data of the tester obtained by the human posture estimation algorithm and the distance between the image acquisition device and the tester on site;
and automatically matching the most reasonable first threshold value and the second threshold value according to the lengths and proportions of the upper arm, the lower arm, the trunk, the thigh and the lower leg of the tester obtained by the human posture estimation algorithm.
Therefore, the judgment results of the completion states of the body ascending and body descending actions can be more accurate, and the accuracy of action posture recognition in the scheme of the invention is far higher than that in the prior art.
The scheme provided by the invention can also detect common errors, judged by combining the connections of the skeleton key points with the motion stage of the body. For example, in the body ascending stage, whether the action posture is standard is judged by detecting the bending angle of the knees, whether the elbows touch the knees, whether the hands leave the shoulders, and so on; for a non-standard action posture, the error count is incremented by 1.
These detections are realized by identifying the corresponding included angles and checking whether the corresponding connecting lines of the skeleton points coincide; the algorithms are simpler than the detection described above and are not repeated here.
According to the method, when the sit-up posture of the tester returns to the initial stage, a timestamp of the test video (the moment on the current video's time axis) is recorded and associated with each count; the counts form a drop-down list, and clicking a count in the drop-down list jumps the test video to the associated timestamp for playback.
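The timestamp association might be organized as in the sketch below; the class and method names are hypothetical, and the drop-down UI and the video-player seek itself are outside its scope.

```python
class RepetitionLog:
    """Associates each count with the video timestamp at which the
    tester returned to the initial posture, so that selecting a count
    can seek playback to that moment."""

    def __init__(self):
        self.entries = []  # list of (count, timestamp_seconds)

    def record(self, timestamp_seconds):
        """Record the timestamp for the next completed repetition."""
        self.entries.append((len(self.entries) + 1, timestamp_seconds))

    def timestamp_for(self, count):
        """Timestamp associated with a given count, or None if absent."""
        for n, ts in self.entries:
            if n == count:
                return ts
        return None
```

A player front end would call `record()` each time a repetition completes and `timestamp_for()` when a count is clicked in the drop-down list.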
Embodiment 2
The embodiment 2 of the present invention provides a system for intelligently identifying the state of completion of the sit-up action posture, as shown in fig. 6, the system includes an image acquisition device 10 for acquiring a test video, an action posture identification device 20, a display device 30 and a prompt device 40.
The motion gesture recognition device 20 is provided with the above-mentioned algorithm for intelligently recognizing the completion status of the sit-up motion gesture. Specifically, the motion gesture recognition apparatus 20 includes:
the identification module 21 is configured to compare key angles in two adjacent frames of images in the test video, and identify a comparison result of the key angles in the two frames of images. After identification, the identifications are connected in series to form a result list; wherein, the key angle is the included angle formed by the shoulder, the elbow and the wrist.
A lifting stage identification module 22, configured to identify a body descending stage or a body ascending stage of the sit-up according to a change of the identifier in the result list; the movement posture for completing one sit-up is divided into two stages, namely a body descending stage and a body ascending stage.
And the motion gesture recognition module 23 is configured to recognize the motion gesture of the sit-up according to the alternation of the body descending stage or the body ascending stage obtained through recognition.
On the basis, the action gesture recognition device further comprises a correction module 24, which is used for traversing the result list by using a sliding window and replacing all identifiers in the sliding window with the identifiers with the largest number, wherein the sliding window at least comprises five frames.
On this basis, the motion gesture recognition apparatus further includes a counting module 25.
The counting module 25 determines whether the ascending motion of the sit-up is completed by detecting whether the key angle is smaller than a first threshold, and generates an ascending completion flag when the ascending motion is completed; judging whether the descending action of the sit-up is finished or not by detecting whether the key angle is larger than a second threshold value or not, and generating a descending completion mark when the descending action is finished; counting is performed according to the rising completion flag and the falling completion flag.
On this basis, the system further comprises a timestamp association module: when the sit-up posture of the tester returns to the initial stage, a timestamp of the test video (the moment on the current video's time axis) is recorded and associated with each count; the counts form a drop-down list, and clicking a count in the drop-down list jumps the test video to the associated timestamp for playback.
In the system deployment process, in order to make the detection result more accurate and eliminate the influence of environmental factors, the invention is further provided with an AR (augmented reality) auxiliary line, as shown in fig. 7, the AR auxiliary line includes:
the position auxiliary line 31, used to guide the tester to the recommended position for testing or training; the position auxiliary line is a line shown on the display device marking where items such as a sports carpet, yoga mat, training mat, sports mat or anti-slip sports mat should be placed, and aligning the side edge of the mat with it improves identification accuracy.
The test area auxiliary line 32 is a rectangular frame used to set the test area; skeleton key points are inferred and judged only within the test area, which reduces the amount of calculation. Meanwhile, the skeleton key points are connected to form the key angles and can also be displayed, so that this auxiliary information can be compared against to help correct the motion posture.
By combining the description of the above specific embodiments, the system and method for intelligently identifying the completion status of the sit-up action posture provided by the invention have the following advantages compared with the prior art:
firstly, the body ascending stage and the body descending stage are identified by comparing the key angles of the front frame image and the back frame image in the video, the finishing condition of the sit-up action posture is identified, a recognition model does not need to be trained, the video images can be used for direct recognition, the algorithm is simplified, and the deployment is rapid and convenient.
And secondly, the comparison result of the key angle is identified by adopting the numbers 1 and 0, and compared with the prior art of directly comparing the included angle value, the subsequent data processing algorithm is simplified, and the data processing efficiency is improved.
Thirdly, based on the 1 and 0 identifiers, unexpected oscillation data is eliminated through a sliding window algorithm, and the accuracy of recognition is improved.
Fourthly, by arranging the AR auxiliary line, on one hand, the identification accuracy is improved, on the other hand, the calculated amount is prevented from being increased due to the fact that other people accidentally enter the video, and the efficiency is improved.
Fifthly, the whole system is high in convenience, can operate without accessing the Internet, and is rapid and convenient to deploy.
Finally, it should also be noted that the terms "comprises," "comprising," or any other variation thereof, as used herein, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The present invention is not limited to the above-mentioned preferred embodiments, and any structural changes made under the teaching of the present invention shall fall within the scope of the present invention, which is similar or similar to the technical solutions of the present invention.