CN114582020A - Pull-up counting method based on image vision technology - Google Patents
- Publication number
- CN114582020A (application CN202210223136.3A)
- Authority
- CN
- China
- Prior art keywords
- pull
- counting
- detection
- action
- standard
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63B—APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
- A63B71/00—Games or sports accessories not covered in groups A63B1/00 - A63B69/00
- A63B71/06—Indicating or scoring devices for games or players, or for other sports activities
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Abstract
The invention discloses a pull-up counting method based on image vision technology. A horizontal bar, the human face and the human hands are detected and located by a convolutional neural network, and the grip direction on the bar is judged. Human body joints are then detected and located by a convolutional neural network, and whether each pull-up action is standard is judged from the hand posture, the joint angle and the action duration, so that the number of standard actions can be determined effectively and the accuracy and real-time performance of pull-up counting are improved.
Description
Technical Field
The invention relates to the field of physical ability testing, and in particular to a method for automatically counting pull-ups based on image vision.
Background
A pull-up counting method based on deep learning and image vision technology judges whether each action is standard by modeling and analyzing the test subject, and thereby counts the number of standard actions.
Current pull-up counting methods mainly have the following defects:
(1) the cost is high;
(2) whether the action is standard cannot be judged;
(3) manual operation is needed for counting.
These disadvantages affect the accuracy and real-time performance of pull-up counting.
Disclosure of Invention
The invention aims to provide a pull-up counting method based on image vision technology. Human body joints are detected and located by a convolutional neural network, and whether each pull-up action is standard is judged from the hand posture, the joint angle and the action duration, so that the number of standard actions can be determined effectively.
The technical scheme of the invention is as follows:
a pull-up counting method based on image vision technology comprises the following steps:
S101, pressure sensors are installed on the horizontal bar at the positions of the two hands, and detection starts when the pressure signals are sensed;
S102, a single-frame image is input;
S103, the horizontal bar, the human face and the human hands are detected and located by the trained convolutional neural network;
S104, the grip direction is judged from the detection and location of the horizontal bar, the human face and the human hands: if the user grips the bar in the reverse direction, the detection ends; if the grip is correct, go to step S105;
S105, the human body joint points are detected and located by the trained convolutional neural network, the located joint points comprising: wrist a, elbow b, shoulder c;
S106, the vertical distance y from the horizontal bar to the centroid of the face is calculated, and the result is put into a set Y;
S107, the bending angle ∠abc of the elbow is calculated, and the result is put into a set A;
S108, if the angle ∠abc is larger than an angle threshold α and its duration t is larger than a time threshold t_thres1 or smaller than a time threshold t_thres2, the detection ends;
S109, after the detection ends, no further images are input; the set A is divided at its maxima into n subsets A_n;
S110, the two endpoint values A_n1 and A_n2 of each subset A_n are taken;
S111, the set Y is divided into n subsets Y_n corresponding to the n subsets A_n, and the maximum value Y_n_max of each subset Y_n is taken;
S112, whether each pull-up action is standard is judged: given a distance threshold y_thres, if the three conditions A_n1 ≥ α, A_n2 ≥ α and Y_n_max > y_thres are met simultaneously, the pull-up is considered standard and the count increases by 1; otherwise the pull-up is considered non-standard and is not counted;
S113, the counting result is output.
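As an illustration of step S107, the elbow bending angle ∠abc can be computed from the three located joint points. The following sketch assumes 2-D pixel coordinates for the wrist a, elbow b and shoulder c; the function name is an illustrative assumption, not part of the patent:

```python
import math

def elbow_angle(a, b, c):
    """Angle ∠abc at the elbow b, formed by wrist a and shoulder c.

    a, b, c are (x, y) pixel coordinates of the wrist, elbow and
    shoulder keypoints (labels follow the patent's a, b, c).
    Returns the angle in degrees.
    """
    # Vectors from the elbow to the wrist and to the shoulder.
    ba = (a[0] - b[0], a[1] - b[1])
    bc = (c[0] - b[0], c[1] - b[1])
    dot = ba[0] * bc[0] + ba[1] * bc[1]
    norm = math.hypot(*ba) * math.hypot(*bc)
    # Clamp to [-1, 1] to guard against floating-point drift.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

# A fully extended arm (wrist, elbow, shoulder collinear) gives ~180°.
print(elbow_angle((0, 2), (0, 1), (0, 0)))  # → 180.0
```

An angle near 180° then indicates the arm is fully extended at the bottom of the repetition, which is what the threshold α in steps S108 and S112 tests for.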
Preferably, the angle threshold α is 165° to 175°.
The invention has the advantages that:
the pull-up counting method based on the image vision technology detects and positions human joints based on the convolutional neural network, judges whether pull-up actions are standard or not by judging the posture, joint angle and action duration of a hand, can effectively judge the number of actions meeting the standard, and improves the accuracy and real-time performance of counting.
Drawings
The invention is further described with reference to the following figures and examples:
FIG. 1 is a flow chart of a pull-up counting method based on image vision technology according to the present invention;
FIG. 2 is a schematic view of joint positioning.
Detailed Description
As shown in FIG. 1, the pull-up counting method based on image vision technology comprises the following steps:
S101, pressure sensors are installed on the horizontal bar at the positions of the two hands, and detection starts when the pressure signals are sensed;
S102, a single-frame image is input;
S103, the horizontal bar, the human face and the human hands are detected and located by the trained convolutional neural network;
S104, the grip direction is judged from the detection and location of the horizontal bar, the human face and the human hands: if the user grips the bar in the reverse direction, the detection ends; if the grip is correct, go to step S105;
S105, the human body joint points are detected and located by the trained convolutional neural network, the located joint points comprising: wrist a, elbow b, shoulder c, as shown in FIG. 2;
S106, the vertical distance y from the horizontal bar to the centroid of the face is calculated, and the result is put into a set Y;
S107, the bending angle ∠abc of the elbow is calculated, and the result is put into a set A;
S108, if the angle ∠abc is larger than 170° and its duration t is larger than a time threshold t_thres1 or smaller than a time threshold t_thres2, the detection ends;
S109, after the detection ends, no further images are input; the set A is divided at its maxima into n subsets A_n;
S110, the two endpoint values A_n1 and A_n2 of each subset A_n are taken;
S111, the set Y is divided into n subsets Y_n corresponding to the n subsets A_n, and the maximum value Y_n_max of each subset Y_n is taken;
S112, whether each pull-up action is standard is judged: given a distance threshold y_thres, if the three conditions A_n1 ≥ α, A_n2 ≥ 170° and Y_n_max > y_thres are met simultaneously, the pull-up is considered standard and the count increases by 1; otherwise the pull-up is considered non-standard and is not counted;
S113, the counting result is output.
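The segmentation and counting logic of steps S109–S112 can be sketched as follows. This is one possible interpretation, assuming per-frame series for the elbow angle (set A) and the bar-to-face distance (set Y); the function name, the local-maximum splitting rule and the default threshold values are illustrative assumptions, not values fixed by the patent:

```python
def count_standard_pullups(angles, heights, alpha=170.0, y_thres=50.0):
    """Sketch of steps S109-S112 (assumed interpretation).

    angles:  per-frame elbow angles ∠abc (the set A).
    heights: per-frame vertical distance related to the bar and the
             face centroid (the set Y), same length as angles.
    The angle series is split at its local maxima into subsets A_n; a
    repetition counts when both endpoint angles reach alpha (arm
    extended) and the peak value Y_n_max exceeds y_thres.
    """
    # S109: indices of local maxima in the angle series, used as cut points.
    cuts = [i for i in range(1, len(angles) - 1)
            if angles[i - 1] < angles[i] >= angles[i + 1]]
    count = 0
    bounds = [0] + cuts + [len(angles) - 1]
    for lo, hi in zip(bounds, bounds[1:]):
        a_n1, a_n2 = angles[lo], angles[hi]    # S110: endpoint values
        y_max = max(heights[lo:hi + 1])        # S111: subset maximum
        # S112: all three conditions must hold simultaneously.
        if a_n1 >= alpha and a_n2 >= alpha and y_max > y_thres:
            count += 1
    return count
```

With this splitting rule, each interior maximum of the angle series marks the arm-extended boundary between two consecutive repetitions, so n segments yield at most n counted pull-ups.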
The above embodiments merely illustrate the technical ideas and features of the present invention; their purpose is to enable those skilled in the art to understand and implement the invention, not to limit its scope of protection. All modifications made according to the spirit of the main technical scheme of the invention fall within its scope of protection.
Claims (2)
1. A pull-up counting method based on image vision technology, characterized by comprising the following steps:
S101, pressure sensors are installed on the horizontal bar at the positions of the two hands, and detection starts when the pressure signals are sensed;
S102, a single-frame image is input;
S103, the horizontal bar, the human face and the human hands are detected and located by the trained convolutional neural network;
S104, the grip direction is judged from the detection and location of the horizontal bar, the human face and the human hands: if the user grips the bar in the reverse direction, the detection ends; if the grip is correct, go to step S105;
S105, the human body joint points are detected and located by the trained convolutional neural network, the located joint points comprising: wrist a, elbow b, shoulder c;
S106, the vertical distance y from the horizontal bar to the centroid of the face is calculated, and the result is put into a set Y;
S107, the bending angle ∠abc of the elbow is calculated, and the result is put into a set A;
S108, if the angle ∠abc is larger than an angle threshold α and its duration t is larger than a time threshold t_thres1 or smaller than a time threshold t_thres2, the detection ends;
S109, after the detection ends, no further images are input; the set A is divided at its maxima into n subsets A_n;
S110, the two endpoint values A_n1 and A_n2 of each subset A_n are taken;
S111, the set Y is divided into n subsets Y_n corresponding to the n subsets A_n, and the maximum value Y_n_max of each subset Y_n is taken;
S112, whether each pull-up action is standard is judged: given a distance threshold y_thres, if the three conditions A_n1 ≥ α, A_n2 ≥ α and Y_n_max > y_thres are met simultaneously, the pull-up is considered standard and the count increases by 1; otherwise the pull-up is considered non-standard and is not counted;
S113, the counting result is output.
2. The pull-up counting method based on image vision technology as claimed in claim 1, wherein the angle threshold α is 165° to 175°.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210223136.3A CN114582020A (en) | 2022-03-09 | 2022-03-09 | Pull-up counting method based on image vision technology |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114582020A true CN114582020A (en) | 2022-06-03 |
Family
ID=81773000
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210223136.3A Withdrawn CN114582020A (en) | 2022-03-09 | 2022-03-09 | Pull-up counting method based on image vision technology |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114582020A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115138059A (en) * | 2022-09-06 | 2022-10-04 | 南京市觉醒智能装备有限公司 | Pull-up standard counting method, pull-up standard counting system and storage medium of pull-up standard counting system |
CN115138059B (en) * | 2022-09-06 | 2022-12-02 | 南京市觉醒智能装备有限公司 | Pull-up standard counting method, pull-up standard counting system and storage medium of pull-up standard counting system |
WO2024051597A1 (en) * | 2022-09-06 | 2024-03-14 | 南京市觉醒智能装备有限公司 | Standard pull-up counting method, and system and storage medium therefor |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WW01 | Invention patent application withdrawn after publication | Application publication date: 20220603 |