CN112464715A - Sit-up counting method based on human body bone point detection - Google Patents

Sit-up counting method based on human body bone point detection

Info

Publication number
CN112464715A
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011140756.8A
Other languages
Chinese (zh)
Inventor
袁夏
叶佳林
赵春霞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN202011140756.8A priority Critical patent/CN112464715A/en
Publication of CN112464715A publication Critical patent/CN112464715A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63B APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B71/00 Games or sports accessories not covered in groups A63B1/00 - A63B69/00
    • A63B71/06 Indicating or scoring devices for games or players, or for other sports activities
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63B APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B2220/00 Measuring of physical parameters relating to sporting activity
    • A63B2220/17 Counting, e.g. counting periodical movements, revolutions or cycles, or including further data processing to determine distances or speed


Abstract

The invention discloses a sit-up counting method based on human body bone point detection, which comprises the following steps: s1, acquiring a real-time video stream of the tested person doing sit-ups; s2, extracting frames from the real-time video stream, saving them as images, and preprocessing the saved images; s3, sending the preprocessed images into a human skeleton point detection network for human skeleton point detection; s4, judging whether the tested person is in the preparation state through the human skeleton points; s5, when in the ready state, judging whether a complete sit-up is finished through the human body bone points, and adding 1 to the counter when a complete sit-up is finished. The invention can realize self-service sit-up testing and counting, requires no manual intervention during the test, and improves test efficiency.

Description

Sit-up counting method based on human body bone point detection
Technical Field
The invention relates to the field of computer vision and the technical field of sports equipment, in particular to a sit-up counting method based on human body bone point detection.
Background
The sit-up is an important exercise in domestic student physical training and military physical training. At present, most sit-up tests are conducted with the tester lying on a cushion while a second person counts. Exercising and testing in this manner requires extra human resources, and if the actions are not standardized, the test results may not meet the accuracy requirements of the final test.
Another type of counting enforces motion standards by having the tester wear an electronic device, which makes the tester feel tethered and results in a poor experience.
Therefore, the existing test methods offer neither good test experience nor good counting precision: test efficiency is low and the test experience is poor.
Disclosure of Invention
The invention aims to provide a sit-up counting method based on human body bone point detection.
The technical solution for realizing the purpose of the invention is as follows: a sit-up counting method based on human body bone point detection comprises the following steps:
s1, acquiring a real-time video stream of the tested person in sit-up;
s2, extracting frames from the real-time video stream and saving them as images, and performing a preprocessing operation on the saved images;
s3, sending the preprocessed image into a human skeleton point detection network for human skeleton point detection to obtain a human skeleton point bitmap;
s4, judging whether the tested person is in a preparation state or not through the human skeleton points;
s5, when in the ready state, judging whether a complete sit-up is finished through the human body bone points, and adding 1 to the counter when a complete sit-up is finished.
Compared with the prior art, the invention has the following remarkable advantages: 1) the invention realizes sit-up counting through deep learning, so that the counting is more accurate and of higher precision; 2) testers on site do not need to wear any electronic equipment, so the test experience is better.
Drawings
Fig. 1 is a flowchart of a sit-up counting method based on human skeletal point detection according to the present invention.
Fig. 2 is a body posture point diagram.
Detailed Description
The invention is further described with reference to the drawings and examples.
With reference to fig. 1, the invention relates to a sit-up counting method based on bone point detection, which comprises the following steps:
step one, the human body skeleton point detection network structure comprises a backbone network, a feature fusion network and a detection network. Its backbone network inherits the structure of the darknet53 network. The feature fusion network performs feature fusion on the third downsampling feature map A, the fourth downsampling feature map B and the fifth downsampling feature map C. The sizes of the third downsampled feature map a, the fourth downsampled feature map B and the fifth downsampled feature map C are 52 × 52 × 128,26 × 26 × 256 and 13 × 13 × 512 respectively. And performing 1 × 1 convolution on the fifth downsampling feature map C to change the number of channels to 256, performing upsampling and fourth downsampling feature map fusion on the fifth downsampling feature map C to form a new fourth downsampling feature map D, performing 1 × 1 convolution on the new fourth downsampling feature map D to change the number of channels to 128, and performing upsampling and third downsampling feature map A fusion on the new fourth downsampling feature map D to form a new third downsampling feature map E.
As regards the activation function, the invention uses Leaky-ReLU, since the Leaky-ReLU activation function has the following inherent advantages: with functions such as Sigmoid, evaluating the activation function is computationally expensive, and computing the derivatives needed for back-propagation of the error gradient is even more so; adopting the Leaky-ReLU activation function saves computation overall. For a deep network, back-propagating through the Sigmoid function easily produces gradient explosion and gradient vanishing, and Leaky-ReLU effectively alleviates this problem.
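A minimal sketch of the Leaky-ReLU activation follows; the negative slope 0.1 is an assumed value commonly used with darknet-style networks, not one stated in the text.

```python
import numpy as np

def leaky_relu(x, alpha=0.1):
    # Identity for positive inputs, a small linear slope for negative ones,
    # so the derivative is cheap to compute (1 or alpha) and never exactly zero.
    return np.where(x > 0, x, alpha * x)

print(leaky_relu(np.array([-2.0, -0.5, 0.0, 0.5, 2.0])))
```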
Regarding the selection of the feature fusion layer, feature fusion yields the final third downsampling feature map E, fourth downsampling feature map D and fifth downsampling feature map C, whose sizes are 52 × 52 × 128, 26 × 26 × 256 and 13 × 13 × 512 respectively. Five 3 × 3 convolution operations are then performed on each feature map to form new feature maps F, G and H respectively, and F, G and H are each passed through the output layer.
Regarding the selection of the output layer, the output layer of the present invention is a matrix of W/s × H/s × C, where W represents the width of the input image, H represents the height of the input image, C represents the number of keypoint classes in the final output, and s represents the downsampling factor.
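For concreteness, with a hypothetical 416 × 416 input (a common input size for darknet-style networks; the text does not fix W and H), C = 18 keypoint classes and the three downsampling factors, the output sizes work out as:

```python
# Output-layer size W/s x H/s x C for an assumed 416 x 416 input,
# C = 18 keypoint classes, and downsampling factors 8, 16 and 32
# (corresponding to feature maps E, D and C respectively).
W = H = 416
C = 18
for s in (8, 16, 32):
    print(f"s={s}: {W // s} x {H // s} x {C}")
# s=8: 52 x 52 x 18
# s=16: 26 x 26 x 18
# s=32: 13 x 13 x 18
```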
Step two, training the model:
Firstly, a normalization operation is performed on the selected sample human body images, each person is cropped out, and the human body bone points are labelled.
Secondly, exploiting the translation invariance of the data, the images are scaled to 1.1 or 1.2 times the original size, rotated by an angle between -15 and 15 degrees, and flipped and mirrored, so as to obtain more training images at different scales.
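When these augmentations are applied, the labelled bone points must be transformed together with the image so the labels stay consistent. A minimal sketch follows; the function name and the choice of scaling and rotating about the image centre are illustrative assumptions, not from the text.

```python
import math

def transform_keypoints(points, w, h, scale=1.1, angle_deg=15.0, mirror=True):
    # Apply mirror, scale and rotation to (x, y) keypoints so the labels
    # match the augmented image. Scale and rotation are about the centre.
    cx, cy = w / 2.0, h / 2.0
    a = math.radians(angle_deg)
    out = []
    for x, y in points:
        if mirror:
            x = w - 1 - x                      # horizontal flip
        x, y = cx + (x - cx) * scale, cy + (y - cy) * scale
        dx, dy = x - cx, y - cy
        x = cx + dx * math.cos(a) - dy * math.sin(a)
        y = cy + dx * math.sin(a) + dy * math.cos(a)
        out.append((x, y))
    return out

pts = transform_keypoints([(100.0, 50.0)], 416, 416,
                          scale=1.0, angle_deg=0.0, mirror=True)
print(pts)  # [(315.0, 50.0)]
```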
Thirdly, the whole data set is divided into K parts; each time one subset is selected as the test set, 80 percent of the remaining K-1 subsets is selected as the training set and 20 percent as the validation set, and K rounds of cross-validation are performed to train the network model.
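The split just described can be sketched as follows; the interleaved fold assignment and the position of the 80/20 cut are illustrative choices, since the text only fixes the proportions.

```python
def k_fold_splits(indices, k):
    # Divide sample indices into k folds; in each round one fold is the
    # test set and the remaining samples are split 80/20 into training
    # and validation sets.
    folds = [indices[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        rest = [x for j, f in enumerate(folds) if j != i for x in f]
        cut = int(0.8 * len(rest))
        yield rest[:cut], rest[cut:], test

splits = list(k_fold_splits(list(range(10)), k=5))
train, val, test = splits[0]
print(len(train), len(val), len(test))  # 6 2 2
```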
Fourthly, when the neural network model is trained, the convolution part uses the pre-training weights of darknet53 and the feature fusion layer is randomly initialized, which reduces training time and obtains a better detection effect in less time. In addition, some hyper-parameters need to be set, including the number of iterations (epoch), the batch-size (the number of images fed into the neural network per training step), and the training termination condition. In the present invention, the epoch value is set to 50 and the batch-size is set to 64. After the initial network weights are set and the network parameters are randomly initialized, iterative training continues until the loss falls below a set threshold or the number of iterations exceeds a set threshold, at which point training ends.
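The stopping rule can be sketched as a loop skeleton. `train_one_epoch` and the concrete threshold value are placeholders: the text specifies only epoch = 50, batch-size = 64, and termination when the loss falls below a set threshold or the iteration count exceeds one.

```python
# Training-loop skeleton for the hyper-parameters given in the text.
EPOCHS = 50
BATCH_SIZE = 64
LOSS_THRESHOLD = 0.01  # assumed value; the text only says "a set threshold"

def train_one_epoch(epoch, batch_size):
    # Dummy loss that decays with the epoch number, for illustration only;
    # a real implementation would run the forward/backward pass here.
    return 1.0 / (epoch + 1)

losses = []
for epoch in range(EPOCHS):
    loss = train_one_epoch(epoch, BATCH_SIZE)
    losses.append(loss)
    if loss < LOSS_THRESHOLD:  # early stop on the loss criterion
        break
print(len(losses), losses[-1])
```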
Fifthly, a trained neural network model is obtained.
Step three, the camera shoots a video of the human body posture test area, and image frames of the video shot by the camera are read in real time. Preferably, the camera is installed separately above the test area, and the camera image covers the test area to be read.
A detection image is acquired, and a normalization operation is performed on the resolution of the detection image.
The normalized picture is sent into the trained human body posture model for human skeleton point detection; the coordinates of 18 skeleton points of the human body are located by the model, each skeleton point is scored, and the detected skeleton points are displayed in the image to be detected. With reference to fig. 2, when the region formed by human skeleton points 2, 3 and 4 is an acute triangle and the region formed by human skeleton points 5, 6 and 7 is an acute triangle, a ready signal for the tester is issued; the maximum distance between human skeleton point 0 and human skeleton point 10 at that moment is recorded as threshold A, and half of the maximum distance between human skeleton point 0 and human skeleton point 13 at that moment is recorded as threshold B. When the maximum distance between human skeleton point 0 and human skeleton points 10 and 13 is smaller than threshold B, a bending signal is issued; when the maximum distance between human skeleton point 0 and human skeleton points 10 and 13 is greater than or equal to threshold A, a straightened state is recorded; and when a consecutive bending-then-straightening state signal appears, the sit-up counter is increased by 1.
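The counting rule above can be written as a small state machine. This is a sketch under stated assumptions: keypoints arrive as a mapping from the 18-point indices (0 = nose, 10 = right ankle, 13 = left ankle) to (x, y) coordinates, and, following the text literally, threshold A is the calibrated 0-to-10 distance while threshold B is half the calibrated 0-to-13 distance.

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

class SitUpCounter:
    def __init__(self, ready_points):
        # Calibrate thresholds in the ready state: A from bone point 0 to
        # point 10, B as half the distance from point 0 to point 13.
        self.A = dist(ready_points[0], ready_points[10])
        self.B = dist(ready_points[0], ready_points[13]) / 2.0
        self.bent = False
        self.count = 0

    def update(self, points):
        d = max(dist(points[0], points[10]), dist(points[0], points[13]))
        if d < self.B:
            self.bent = True       # bending signal
        elif d >= self.A and self.bent:
            self.bent = False      # straightened after a bend:
            self.count += 1        # one complete sit-up
        return self.count

# Illustrative frames: calibrate in a straight ready pose, then one bend
# (nose near the ankles) followed by one straighten.
ready = {0: (0.0, 0.0), 10: (100.0, 0.0), 13: (100.0, 10.0)}
counter = SitUpCounter(ready)
counter.update({0: (80.0, 0.0), 10: (100.0, 0.0), 13: (100.0, 10.0)})  # bend
counter.update({0: (0.0, 0.0), 10: (100.0, 0.0), 13: (100.0, 10.0)})   # straighten
print(counter.count)  # 1
```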

Claims (10)

1. A sit-up counting method based on human body bone point detection is characterized by comprising the following steps:
s1, acquiring a real-time video stream of the tested person in sit-up;
s2, extracting frames from the real-time video stream and saving them as images, and performing a preprocessing operation on the saved images;
s3, sending the preprocessed image into a human skeleton point detection network for human skeleton point detection to obtain a human skeleton point bitmap;
s4, judging whether the tested person is in a preparation state or not through the human skeleton points;
s5, when in the ready state, judging whether a complete sit-up is finished through the human body bone points, and adding 1 to the counter when a complete sit-up is finished.
2. The sit-up counting method based on human body bone point detection as claimed in claim 1, wherein: in step s1, the real-time video stream of the tested person is shot, and the real-time video stream images are analyzed to obtain the sit-up information of the tested person.
3. The sit-up counting method based on human body bone point detection as claimed in claim 1, wherein: in step s2, the analyzed image is corrected to a rectangular shape, and the image resolution is normalized.
4. The sit-up counting method based on human body bone point detection as claimed in claim 1, wherein: in step s3, the human body skeleton point detection network structure comprises a backbone network, a feature fusion network and a detection network; the backbone network inherits the darknet53 network structure; the feature fusion network performs feature fusion on the third downsampling feature map A, the fourth downsampling feature map B and the fifth downsampling feature map C, whose sizes are 52 × 52 × 128, 26 × 26 × 256 and 13 × 13 × 512 respectively; a 1 × 1 convolution is applied to the fifth downsampling feature map C to change the number of channels to 256, and C is upsampled and fused with the fourth downsampling feature map B to form a new fourth downsampling feature map D; a 1 × 1 convolution is applied to D to change the number of channels to 128, and D is upsampled and fused with the third downsampling feature map A to form a new third downsampling feature map E.
5. The sit-up counting method based on human skeletal point detection as claimed in claim 4, wherein: the feature map sizes of the final third downsampling feature map E, fourth downsampling feature map D and fifth downsampling feature map C obtained through feature fusion are 52 × 52 × 128, 26 × 26 × 256 and 13 × 13 × 512 respectively; five 3 × 3 convolution operations are performed on each feature map to form new feature maps F, G and H respectively, and F, G and H are each passed through the output layer.
6. The sit-up counting method based on human skeletal point detection as claimed in claim 5, wherein: the output layer is a matrix of W/s × H/s × C, wherein W represents the width of the input image, H represents the height of the input image, C represents the number of keypoint classes in the final output, and s represents the downsampling factor.
7. The sit-up counting method based on human skeletal key point detection as claimed in claim 1, wherein the human skeleton point detection network model is trained as follows: a sit-up video is selected and cut into images, the sit-up test area is calibrated in the images, affine transformation is performed on the calibrated area, and human body key points are marked on the transformed area to obtain training samples; the training samples are sent into the human skeleton key point detection model for training, thereby obtaining a trained human posture key point model.
8. The sit-up counting method based on human skeletal point detection as claimed in claim 1, wherein: the skeleton point diagram of the human body records 18 point positions in total: 0 represents the nose, 1 the neck, 2 the right shoulder, 3 the right elbow, 4 the right wrist, 5 the left shoulder, 6 the left elbow, 7 the left wrist, 8 the right hip, 9 the right knee, 10 the right ankle, 11 the left hip, 12 the left knee, 13 the left ankle, 14 the right eye, 15 the left eye, 16 the right ear, and 17 the left ear.
9. The sit-up counting method based on human bone point detection according to claim 1 or 8, wherein: in step s4, when the region formed by human bone points 2, 3 and 4 is an acute triangle and the region formed by human bone points 5, 6 and 7 is an acute triangle, the tester is judged to be ready; the maximum distance between human bone point 0 and human bone point 10 at this moment is recorded as threshold A, and half of the maximum distance between human bone point 0 and human bone point 13 at this moment is recorded as threshold B.
10. The sit-up counting method based on human skeletal point detection according to claim 1 or 9, wherein: in step s5, when it is detected that the maximum distance between human skeleton point 0 and human skeleton points 10 and 13 is smaller than threshold B, a bending signal is issued; when the maximum distance between human skeleton point 0 and human skeleton points 10 and 13 is greater than or equal to threshold A, a straightened state is recorded; and when consecutive bending and straightening state signals appear, the sit-up counter is increased by 1.
CN202011140756.8A 2020-10-22 2020-10-22 Sit-up counting method based on human body bone point detection Pending CN112464715A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011140756.8A CN112464715A (en) 2020-10-22 2020-10-22 Sit-up counting method based on human body bone point detection


Publications (1)

Publication Number Publication Date
CN112464715A true CN112464715A (en) 2021-03-09

Family

ID=74834147

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011140756.8A Pending CN112464715A (en) 2020-10-22 2020-10-22 Sit-up counting method based on human body bone point detection

Country Status (1)

Country Link
CN (1) CN112464715A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113033515A (en) * 2021-05-24 2021-06-25 北京每日优鲜电子商务有限公司 Wearing detection method and device, electronic equipment and computer-readable storage medium
CN113487635A (en) * 2021-07-01 2021-10-08 盛视科技股份有限公司 Sit-up counting method based on image difference
CN113487635B (en) * 2021-07-01 2024-05-28 盛视科技股份有限公司 Sit-up counting method based on image difference
CN113893515A (en) * 2021-10-13 2022-01-07 恒鸿达科技有限公司 Sit-up test counting method, sit-up test counting device and sit-up test counting medium based on vision technology
CN113893515B (en) * 2021-10-13 2022-12-27 恒鸿达科技有限公司 Sit-up test counting method, sit-up test counting device and sit-up test counting medium based on vision technology
CN114259721A (en) * 2022-01-13 2022-04-01 王东华 Training evaluation system and method based on Beidou positioning
CN115171208A (en) * 2022-05-31 2022-10-11 中科海微(北京)科技有限公司 Sit-up posture evaluation method and device, electronic equipment and storage medium
CN117095152A (en) * 2023-10-17 2023-11-21 南京佳普科技有限公司 Bone recognition camera for physical training evaluation and training evaluation method
CN117095152B (en) * 2023-10-17 2024-01-26 南京佳普科技有限公司 Bone recognition camera for physical training evaluation and training evaluation method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017080202A (en) * 2015-10-29 2017-05-18 キヤノンマーケティングジャパン株式会社 Information processing device, information processing method and program
CN111368810A (en) * 2020-05-26 2020-07-03 西南交通大学 Sit-up detection system and method based on human body and skeleton key point identification
CN111401260A (en) * 2020-03-18 2020-07-10 南通大学 Sit-up test counting method and system based on Quick-OpenPose model



Similar Documents

Publication Publication Date Title
CN112464715A (en) Sit-up counting method based on human body bone point detection
CN110555434B (en) Method for detecting visual saliency of three-dimensional image through local contrast and global guidance
CN106920224B (en) A method of assessment stitching image clarity
CN110852383B (en) Target detection method and device based on attention mechanism deep learning network
CN107463920A (en) A kind of face identification method for eliminating partial occlusion thing and influenceing
CN110879982B (en) Crowd counting system and method
CN104573731A (en) Rapid target detection method based on convolutional neural network
CN110619638A (en) Multi-mode fusion significance detection method based on convolution block attention module
Jiang et al. A deep evaluator for image retargeting quality by geometrical and contextual interaction
CN113034495B (en) Spine image segmentation method, medium and electronic device
CN107292299B (en) Side face recognition methods based on kernel specification correlation analysis
CN113762133A (en) Self-weight fitness auxiliary coaching system, method and terminal based on human body posture recognition
CN105184266B (en) A kind of finger venous image recognition methods
CN109461177B (en) Monocular image depth prediction method based on neural network
CN113610046B (en) Behavior recognition method based on depth video linkage characteristics
CN113947589A (en) Missile-borne image deblurring method based on countermeasure generation network
CN114241422A (en) Student classroom behavior detection method based on ESRGAN and improved YOLOv5s
CN110543916A (en) Method and system for classifying missing multi-view data
CN110705566A (en) Multi-mode fusion significance detection method based on spatial pyramid pool
CN107895145A (en) Method based on convolutional neural networks combination super-Gaussian denoising estimation finger stress
CN115138059A (en) Pull-up standard counting method, pull-up standard counting system and storage medium of pull-up standard counting system
CN115205547A (en) Target image detection method and device, electronic equipment and storage medium
CN113569805A (en) Action recognition method and device, electronic equipment and storage medium
CN115731597A (en) Automatic segmentation and restoration management platform and method for mask image of face mask
Chandaliya et al. Conditional perceptual adversarial variational autoencoder for age progression and regression on child face

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210309