CN112966597A - Human motion action counting method based on skeleton key points - Google Patents
Human motion action counting method based on skeleton key points
- Publication number
- CN112966597A (application CN202110239387.6A)
- Authority
- CN
- China
- Prior art keywords
- model
- motion
- counting
- human
- points
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/23—Recognition of whole body movements, e.g. for sport training
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06M—COUNTING MECHANISMS; COUNTING OF OBJECTS NOT OTHERWISE PROVIDED FOR
- G06M1/00—Design features of general application
- G06M1/27—Design features of general application for representing the result of count in the form of electric signals, e.g. by sensing markings on the counter drum
- G06M1/272—Design features of general application for representing the result of count in the form of electric signals, e.g. by sensing markings on the counter drum using photoelectric means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
- G06V20/42—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
Abstract
A human motion action counting method based on skeleton key points obtains human key points through a deep learning model algorithm and completes the counting of motion actions by sampling the angle changes of the key points affected by the action. This approach can complete the counting of movement actions objectively and efficiently and effectively assists users in doing autonomous exercise through online sports software. The HRNet model is mainly used. Because the method is based on AI processing of images, only a motion video needs to be collected; no external sensors or other equipment are needed for data collection, so the method is simple and convenient. Meanwhile, the invention provides a general action counting evaluation model based on angle change: for actions such as the opening and closing jump (jumping jack), push-up and squat, only the corresponding parameters need to be modified, and there is no need to build a separate model for each movement, which effectively reduces repeated work and improves efficiency.
Description
Technical Field
The invention relates to the field of information technology, in particular to a human motion action counting method based on skeleton key points.
Background
With the improvement of people's living standards, the demand for sports and fitness keeps growing. Existing offline gyms and stadiums for exercising and learning various sports can no longer meet this demand; in particular, during an epidemic people cannot go to gyms and stadiums. As a result, the idea of online fitness and sports learning has taken root, the demand for online autonomous learning and fitness is increasingly strong, and online sports are becoming more and more important.
The traditional motion counting approach is usually manual or relies on dedicated equipment to record the number of motions. These approaches are limited to specific scenes or tools and tend to suffer from high labor cost, low efficiency and large errors.
Disclosure of Invention
In order to overcome the above shortcomings, the invention provides a counting method that acquires human key points through a deep learning model algorithm and completes the counting of movement actions by sampling the angle changes of the key points affected by the action.
The technical scheme adopted by the invention for overcoming the technical problems is as follows:
a human motion action counting method based on skeleton key points comprises the following steps:
a) using the model, constructing a counting model for repetitive movements based on angle change according to the coordinates of the human skeleton points, the counting model comprising 17 human skeleton coordinate points and 4 extreme point coordinates;
b) selecting any three coordinate points from the 17 human skeleton coordinate points of the counting model, defining them as Pn, Pm and Pq, the included angle of the three coordinate points being denoted [Pn-Pm-Pq];
c) accessing the motion video and extracting video frames from it to obtain the frame pictures of the motion video;
d) for each obtained frame picture of the motion video, using the model in step a) to obtain the human skeleton point coordinates consisting of the 17 human skeleton coordinate points and 4 extreme point coordinates, and finding in them the three points corresponding to Pn, Pm and Pq of step b), defined as Pn′, Pm′ and Pq′;
e) setting the counting evaluation angle of the motion start state as α1 and the counting evaluation angle of the motion end state as α2; judging the motion start state when the condition {[Pn′-Pm′-Pq′] [<|>] [α1]} is satisfied and judging the motion end state when the condition {[Pn′-Pm′-Pq′] [<|>] [α2]} is satisfied, where [<|>] means greater than or less than; whenever a start state is followed by an end state in the motion video, adding 1 to the human motion count;
f) outputting the acquired motion count for display.
Further, the model in step a) is an HRNet model, an OpenPose model, an AlphaPose model or a DeepPose model, and the counting model for repetitive movements based on angle change is constructed according to the coordinates of the human skeleton points.
Further, the HRNet model is used in step a).
The invention has the beneficial effects that: human key points are obtained through a deep learning model algorithm, and the counting of movement actions is completed by sampling the angle changes of the key points affected by the action. This approach can complete the counting of movement actions objectively and efficiently and effectively assists users in doing autonomous exercise through online sports software. The HRNet model is mainly used. Because the method is based on AI processing of images, only a motion video needs to be collected; no external sensors or other equipment are needed for data collection, so the method is simple and convenient. Meanwhile, the invention provides a general action counting evaluation model based on angle change: for actions such as the opening and closing jump (jumping jack), push-up and squat, only the corresponding parameters need to be modified, and there is no need to build a separate model for each movement, which effectively reduces repeated work and improves efficiency.
Drawings
Fig. 1 is a schematic diagram of the human skeleton key point coordinates according to the present invention.
Detailed Description
The invention is further described below with reference to fig. 1.
A human motion action counting method based on skeleton key points comprises the following steps:
a) A counting model for repetitive movements based on angle change is constructed by using the model according to the coordinates of the human skeleton points. As shown in Fig. 1, the counting model has 17 human skeleton coordinate points and 4 extreme point coordinates. b) Any three coordinate points are selected from the 17 human skeleton coordinate points of the counting model and defined as Pn, Pm and Pq; the included angle of the three coordinate points is denoted [Pn-Pm-Pq].
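For illustration only (this sketch is not part of the patent text), the included angle [Pn-Pm-Pq] can be computed from 2D keypoint coordinates as the angle at the vertex Pm between the segments Pm→Pn and Pm→Pq:

```python
import numpy as np

def included_angle(p_n, p_m, p_q):
    """Angle Pn-Pm-Pq in degrees, with Pm as the vertex; inputs are (x, y) pairs."""
    v1 = np.asarray(p_n, dtype=float) - np.asarray(p_m, dtype=float)
    v2 = np.asarray(p_q, dtype=float) - np.asarray(p_m, dtype=float)
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8)
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))
```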
c) The motion video is accessed and video frames are extracted from it to obtain the frame pictures of the motion video.
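A minimal sketch of step c) using OpenCV is shown below; the frame sampling stride is an assumed parameter and is not specified in the patent.

```python
import cv2

def extract_frames(video_path, stride=1):
    """Read a motion video and return its frame pictures (every `stride`-th frame)."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    idx = 0
    while ok:
        if idx % stride == 0:
            frames.append(frame)
        ok, frame = cap.read()
        idx += 1
    cap.release()
    return frames
```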
d) For each obtained frame picture of the motion video, the model in step a) is used to obtain the human skeleton point coordinates consisting of the 17 human skeleton coordinate points and 4 extreme point coordinates, and the three points corresponding to Pn, Pm and Pq of step b) are found in them and defined as Pn′, Pm′ and Pq′.
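The sketch below illustrates step d) under the assumption of a hypothetical pose-estimation wrapper `pose_model.predict(frame)` returning a (21, 2) array of coordinates (17 skeleton points plus 4 extreme points); the concrete HRNet inference interface is not specified in the patent. It reuses the `included_angle` sketch above.

```python
def frame_angle(pose_model, frame, n_idx, m_idx, q_idx):
    """Locate Pn', Pm', Pq' in one frame and return their included angle in degrees."""
    keypoints = pose_model.predict(frame)   # hypothetical call: (21, 2) coordinate array
    p_n, p_m, p_q = keypoints[n_idx], keypoints[m_idx], keypoints[q_idx]
    return included_angle(p_n, p_m, p_q)
```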
e) The counting evaluation angle of the motion start state is set as α1 and the counting evaluation angle of the motion end state is set as α2. When the condition {[Pn′-Pm′-Pq′] [<|>] [α1]} is satisfied, the motion start state is judged; when the condition {[Pn′-Pm′-Pq′] [<|>] [α2]} is satisfied, the motion end state is judged. Whenever a start state is followed by an end state in the motion video, the human motion count is increased by 1. Here [<|>] means "greater than" or "less than".
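Step e) can be read as a small two-state machine over the per-frame angle series. The following sketch is one possible reading (not from the patent text), assuming the comparison operators and the thresholds α1, α2 are configured per action; the numeric values in the usage comment are placeholders.

```python
import operator

def count_repetitions(angles, start_op, alpha1, end_op, alpha2):
    """Count repetitions: a start state (start_op(angle, alpha1)) followed by an
    end state (end_op(angle, alpha2)) increases the count by 1."""
    count = 0
    started = False
    for a in angles:
        if not started and start_op(a, alpha1):
            started = True                 # motion start state judged
        elif started and end_op(a, alpha2):
            count += 1                     # start followed by end: count +1
            started = False
    return count

# Placeholder usage: start when the angle drops below 95 deg, end when it exceeds 175 deg.
# total = count_repetitions(angle_series, operator.lt, 95, operator.gt, 175)
```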
The opening and closing jump (jumping jack) is taken as an example:
[4-5-6 > 175 & 7-0-5 > 175 & 0-7-8 > 175], [5-4-A < 95 & 0-7-8 > 175]
wherein 4-5-6 > 175 means that the angle 4-5-6 (the left knee angle) is greater than 175°.
4-5-6 > 175 & 7-0-5 > 175 & 0-7-8 > 175 means that the left knee angle (4-5-6) is greater than 175°, the torso-to-leg angle (7-0-5) is greater than 175°, and the torso is straight (0-7-8 greater than 175°); this defines the "end" state of the opening and closing jump. 5-4-A < 95 & 0-7-8 > 175 means that the human body is in a standing state, which defines the "start" state of the opening and closing jump.
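The quoted conditions can be written as start/end predicates over the labelled angle triples. The sketch below assumes a helper `angle(label)` (hypothetical) that computes the included angle, in degrees, of the named keypoint triple in the current frame; the triple labels follow the patent's example, and their mapping to concrete joints depends on the keypoint layout of Fig. 1.

```python
def is_start_state(angle):
    """Standing state of the opening and closing jump: 5-4-A < 95 and 0-7-8 > 175."""
    return angle("5-4-A") < 95 and angle("0-7-8") > 175

def is_end_state(angle):
    """End state of the opening and closing jump: 4-5-6 > 175, 7-0-5 > 175, 0-7-8 > 175."""
    return angle("4-5-6") > 175 and angle("7-0-5") > 175 and angle("0-7-8") > 175
```

For multi-angle actions like this one, the single-angle counter sketched above generalizes by replacing the two threshold comparisons with these predicates.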
f) The acquired motion count is output for display.
Human key points are obtained through a deep learning model algorithm, and the counting of movement actions is completed by sampling the angle changes of the key points affected by the action. This approach can complete the counting of movement actions objectively and efficiently and effectively assists users in doing autonomous exercise through online sports software. The HRNet model is mainly used. Because the method is based on AI processing of images, only a motion video needs to be collected; no external sensors or other equipment are needed for data collection, so the method is simple and convenient. Meanwhile, the invention provides a general action counting evaluation model based on angle change: for actions such as the opening and closing jump (jumping jack), push-up and squat, only the corresponding parameters need to be modified, and there is no need to build a separate model for each movement, which effectively reduces repeated work and improves efficiency.
Example 1:
Further, the model in step a) is an HRNet model, an OpenPose model, an AlphaPose model or a DeepPose model, and the counting model for repetitive movements based on angle change is constructed according to the coordinates of the human skeleton points.
Example 2:
Further, the HRNet model is used in step a).
Finally, it should be noted that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that changes may be made in the embodiments and/or equivalents thereof without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (3)
1. A human motion action counting method based on skeleton key points is characterized by comprising the following steps:
a) constructing, by using the model and according to the coordinates of the human skeleton points, a counting model for repetitive movements based on angle change, the counting model comprising 17 human skeleton coordinate points and 4 extreme point coordinates;
b) selecting any three coordinate points from the 17 human skeleton coordinate points of the counting model, defining them as Pn, Pm and Pq, the included angle of the three coordinate points being [Pn-Pm-Pq];
c) accessing the motion video and extracting video frames from it to obtain the frame pictures of the motion video;
d) for each obtained frame picture of the motion video, obtaining, by using the model in step a), the human skeleton point coordinates consisting of the 17 human skeleton coordinate points and 4 extreme point coordinates, and finding therein the three points corresponding to Pn, Pm and Pq of step b), defined as Pn′, Pm′ and Pq′;
e) setting the counting evaluation angle of the motion start state as α1 and the counting evaluation angle of the motion end state as α2; judging the motion start state when the condition {[Pn′-Pm′-Pq′] [<|>] [α1]} is satisfied and judging the motion end state when the condition {[Pn′-Pm′-Pq′] [<|>] [α2]} is satisfied, wherein [<|>] means greater than or less than; and adding 1 to the human motion count whenever a start state is followed by an end state in the motion video;
f) outputting the acquired motion count for display.
2. The method for counting human motion actions based on skeleton key points as claimed in claim 1, wherein: in step a), the model is an HRNet model, an OpenPose model, an AlphaPose model or a DeepPose model, and the counting model for repetitive movements based on angle change is constructed according to the coordinates of the human skeleton points.
3. The method for counting human motion actions based on skeleton key points as claimed in claim 1, wherein: the HRNet model is used in step a).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110239387.6A CN112966597A (en) | 2021-03-04 | 2021-03-04 | Human motion action counting method based on skeleton key points |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110239387.6A CN112966597A (en) | 2021-03-04 | 2021-03-04 | Human motion action counting method based on skeleton key points |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112966597A (en) | 2021-06-15
Family
ID=76276403
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110239387.6A Pending CN112966597A (en) | 2021-03-04 | 2021-03-04 | Human motion action counting method based on skeleton key points |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112966597A (en) |
- 2021-03-04: CN application CN202110239387.6A filed (published as CN112966597A); status: Pending
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020177498A1 (en) * | 2019-03-04 | 2020-09-10 | 南京邮电大学 | Non-intrusive human body thermal comfort detection method and system based on posture estimation |
CN109919132A (en) * | 2019-03-22 | 2019-06-21 | 广东省智能制造研究所 | A kind of pedestrian's tumble recognition methods based on skeleton detection |
CN110313923A (en) * | 2019-07-05 | 2019-10-11 | 昆山杜克大学 | Autism early screening system based on joint ability of attention test and audio-video behavioural analysis |
CN111401260A (en) * | 2020-03-18 | 2020-07-10 | 南通大学 | Sit-up test counting method and system based on Quick-OpenPose model |
CN111275032A (en) * | 2020-05-07 | 2020-06-12 | 西南交通大学 | Deep squatting detection method, device, equipment and medium based on human body key points |
CN111652078A (en) * | 2020-05-11 | 2020-09-11 | 浙江大学 | Yoga action guidance system and method based on computer vision |
CN112090053A (en) * | 2020-09-14 | 2020-12-18 | 成都拟合未来科技有限公司 | 3D interactive fitness training method, device, equipment and medium |
CN112163516A (en) * | 2020-09-27 | 2021-01-01 | 深圳市悦动天下科技有限公司 | Rope skipping counting method and device and computer storage medium |
CN112396001A (en) * | 2020-11-20 | 2021-02-23 | 安徽一视科技有限公司 | Rope skipping number statistical method based on human body posture estimation and TPA (tissue placement model) attention mechanism |
CN112381035A (en) * | 2020-11-25 | 2021-02-19 | 山东云缦智能科技有限公司 | Motion similarity evaluation method based on motion trail of skeleton key points |
Non-Patent Citations (2)
Title |
---|
巩维: "Design and Implementation of a Student Learning Behavior Recognition System Based on Skeleton Key Point Detection", China Masters' Theses Full-text Database (Information Science and Technology) *
赵素芳: "Research and Development of an Automatic Pull-up Test System Combining Depth Images", China Masters' Theses Full-text Database (Information Science and Technology) *
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113379024A (en) * | 2021-06-25 | 2021-09-10 | 杨洋 | Deep squatting movement counting method and system, computer equipment and storage medium |
CN113379024B (en) * | 2021-06-25 | 2023-06-30 | 杨洋 | Squatting movement counting method, squatting movement counting system, computer equipment and storage medium |
CN114764946A (en) * | 2021-09-18 | 2022-07-19 | 北京甲板智慧科技有限公司 | Action counting method and system based on time sequence standardization and intelligent terminal |
CN114764946B (en) * | 2021-09-18 | 2023-08-11 | 北京甲板智慧科技有限公司 | Action counting method and system based on time sequence standardization and intelligent terminal |
CN114011026A (en) * | 2021-10-29 | 2022-02-08 | 北京林业大学 | Non-contact physical ability test system and method |
CN115100745A (en) * | 2022-07-05 | 2022-09-23 | 北京甲板智慧科技有限公司 | Swin transform model-based motion real-time counting method and system |
CN115205737A (en) * | 2022-07-05 | 2022-10-18 | 北京甲板智慧科技有限公司 | Real-time motion counting method and system based on Transformer model |
CN115223240A (en) * | 2022-07-05 | 2022-10-21 | 北京甲板智慧科技有限公司 | Motion real-time counting method and system based on dynamic time warping algorithm |
CN115620392A (en) * | 2022-09-26 | 2023-01-17 | 珠海视熙科技有限公司 | Action counting method, device, medium and fitness equipment |
CN115661919A (en) * | 2022-09-26 | 2023-01-31 | 珠海视熙科技有限公司 | Repeated action cycle statistical method and device, fitness equipment and storage medium |
CN115661919B (en) * | 2022-09-26 | 2023-08-29 | 珠海视熙科技有限公司 | Repeated action period statistics method and device, body-building equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112966597A (en) | Human motion action counting method based on skeleton key points | |
CN108764120B (en) | Human body standard action evaluation method | |
CN106650687B (en) | Posture correction method based on depth information and skeleton information | |
CN108463271B (en) | System and method for motor skill analysis and skill enhancement and prompting | |
Trejo et al. | Recognition of yoga poses through an interactive system with kinect device | |
CN110428486B (en) | Virtual interaction fitness method, electronic equipment and storage medium | |
WO2018070414A1 (en) | Motion recognition device, motion recognition program, and motion recognition method | |
CN103678859B (en) | Motion comparison method and motion comparison system | |
CN109308438B (en) | Method for establishing action recognition library, electronic equipment and storage medium | |
CN105512621A (en) | Kinect-based badminton motion guidance system | |
CN109308437B (en) | Motion recognition error correction method, electronic device, and storage medium | |
CN110298218B (en) | Interactive fitness device and interactive fitness system | |
CN107930048B (en) | Space somatosensory recognition motion analysis system and motion analysis method | |
JPWO2014042121A1 (en) | Operation evaluation apparatus and program thereof | |
Wang et al. | Synthesis and evaluation of linear motion transitions | |
CN113705540A (en) | Method and system for recognizing and counting non-instrument training actions | |
CN111840920A (en) | Upper limb intelligent rehabilitation system based on virtual reality | |
CN112101315A (en) | Deep learning-based exercise judgment guidance method and system | |
He et al. | A new Kinect-based posture recognition method in physical sports training based on urban data | |
CN113409651B (en) | Live broadcast body building method, system, electronic equipment and storage medium | |
CN110227249A (en) | A kind of upper limb training system | |
CN110070036B (en) | Method and device for assisting exercise motion training and electronic equipment | |
CN111353345B (en) | Method, apparatus, system, electronic device, and storage medium for providing training feedback | |
Rozaliev et al. | Methods and applications for controlling the correctness of physical exercises performance | |
Li et al. | Study on action recognition based on kinect and its application in rehabilitation training |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20210615 |