CN113095248B - Technical action correcting method for badminton - Google Patents


Info

Publication number
CN113095248B
Authority
CN
China
Prior art keywords
racket
action
technical
technical action
image obtained
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110419564.9A
Other languages
Chinese (zh)
Other versions
CN113095248A (en
Inventor
赵晶晶
王胜孟
叶端南
徐长坤
韩宗志
张乐
朱畅
张一帆
李沐瀚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Petroleum East China
Original Assignee
China University of Petroleum East China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Petroleum East China filed Critical China University of Petroleum East China
Priority to CN202110419564.9A priority Critical patent/CN113095248B/en
Publication of CN113095248A publication Critical patent/CN113095248A/en
Application granted granted Critical
Publication of CN113095248B publication Critical patent/CN113095248B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/23Recognition of whole body movements, e.g. for sport training
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63BAPPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B69/00Training appliances or apparatus for special sports
    • A63B69/0017Training appliances or apparatus for special sports for badminton
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Abstract

A technical action correction method for badminton, belonging to the technical field of action recognition. The invention solves the problem of the low accuracy of technical action evaluation by existing methods. The method extracts the position coordinates of the racket in an image with the Faster R-CNN algorithm, extracts the position coordinates of the key points of the human body with the OpenPose algorithm, evaluates whether the technical action in the image is standard according to the extracted racket and key point coordinates, and finally corrects any technical action that is not standard. Because the evaluation is based on both the extracted racket position coordinates and the key point coordinates, the evaluation result is more credible and the accuracy of technical action evaluation is improved. The invention can be applied to the evaluation of technical actions in badminton.

Description

Technical action correcting method for badminton
Technical Field
The invention belongs to the technical field of motion recognition, and particularly relates to a technical motion correction method for badminton.
Background
The wave of the artificial intelligence era has swept into people's lives, and machine learning algorithms are gradually being integrated into many industries, injecting new vitality into each field. With the development of human pose estimation research, increasingly mature motion capture technologies are widely applied in fields such as sports, game development, and film and television production. Badminton is offered as a course in college physical education and involves many different types of technical actions; evaluating whether the technical actions made by students are standard, and then correcting those that are not, is of great significance for improving the students' technical level.
However, the accuracy of technical action evaluation with existing methods is still low, so a new technical action evaluation method is needed to improve the accuracy of technical action evaluation in badminton.
Disclosure of Invention
The invention aims to solve the problem that the accuracy of technical action evaluation by existing methods is low, and provides a technical action correction method for badminton.
The technical scheme adopted by the invention to solve the above technical problem is as follows: a technical action correction method for badminton, comprising the following steps:
step one, collecting videos of the technical actions of students in badminton, and preprocessing the collected videos to obtain preprocessed videos;
step two, performing framing processing on the preprocessed videos to obtain framed images;
step three, detecting the racket position in the image obtained in step two to obtain the racket position coordinates;
step four, extracting the key point coordinates in the image obtained in step two by adopting the OpenPose algorithm;
step five, judging whether the technical action in the image obtained in step two is standard according to the racket position coordinates and the key point coordinates; if not, correcting the technical action; if so, performing no processing.
The invention has the following beneficial effects: the invention provides a technical action correction method for badminton, which extracts the racket position coordinates in an image through the Faster R-CNN algorithm, extracts the key point position coordinates of the human body through the OpenPose algorithm, evaluates whether the technical action in the image is standard according to the extracted racket and key point position coordinates, and finally corrects nonstandard technical actions.
Because the method evaluates the technical action according to both the extracted racket position coordinates and the key point position coordinates, the evaluation result is more credible and the accuracy of technical action evaluation is improved.
Drawings
FIG. 1 is a flow chart of the technical action correction method for badminton of the present invention;
FIG. 2 is a schematic diagram of an image after framing processing;
FIG. 3 is a schematic illustration of detected keypoint locations and racket position;
in the figure, 0 is the position of the nose, 1 is the position of the clavicle, 2 is the position of the left shoulder, 3 is the position of the left elbow, 4 is the position of the left wrist, 5 is the position of the right shoulder, 6 is the position of the right elbow, 7 is the position of the right wrist, 8 is the position of the left crotch, 9 is the position of the left knee, 10 is the position of the left ankle, 11 is the position of the right crotch, 12 is the position of the right knee, 13 is the position of the right ankle, 14 is the position of the left eye, 15 is the position of the right eye, 16 is the position of the left ear, 17 is the position of the right ear, and 18 is the position of the racket.
Detailed Description
First embodiment: this embodiment is described with reference to fig. 1. The technical action correction method for badminton of this embodiment comprises the following steps:
step one, collecting videos of the technical actions of students in badminton, and preprocessing the collected videos to obtain preprocessed videos;
step two, performing framing processing on the preprocessed videos to obtain framed images, as shown in fig. 2;
step three, detecting the racket position in the image obtained in step two to obtain the racket position coordinates;
step four, extracting the key point coordinates in the image obtained in step two by adopting the OpenPose algorithm;
step five, judging whether the technical action in the image obtained in step two is standard according to the racket position coordinates and the key point coordinates; if not, correcting the technical action; if so, performing no processing.
Second embodiment: this embodiment differs from the first embodiment in that, in step one, the preprocessing of the collected videos specifically comprises: converting the collected analog video into digital video and storing it.
Third embodiment: this embodiment differs from the first embodiment in that, in step three, the racket position in the image obtained in step two is detected by adopting the Faster R-CNN algorithm.
Fourth embodiment: this embodiment differs from the third embodiment in that the specific process of step three is as follows:
step three-1, constructing a Faster R-CNN network model, and training the constructed model to obtain a trained Faster R-CNN network model;
when training the Faster R-CNN network model, a video set is first collected and subjected to framing processing to obtain images; the racket position in the images is then annotated, and the annotated images are used to train the Faster R-CNN network model;
step three-2, inputting the image obtained in step two into the VGG16 module of the trained Faster R-CNN network model to extract the features of the input image;
step three-3, inputting the features extracted in step three-2 into the RPN (Region Proposal Network) module of the trained Faster R-CNN network model to generate proposal windows (proposals);
step three-4, inputting the features extracted in step three-2 and the proposal windows generated in step three-3 into the RoI Pooling layer of the trained Faster R-CNN network model to obtain the comprehensive features of the image obtained in step two;
step three-5, obtaining the racket position coordinates according to the comprehensive features obtained in step three-4.
To detect the racket, the invention uses the Faster R-CNN algorithm to first obtain a rectangular bounding box, and then takes the midpoint of the rectangle as the racket position.
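This midpoint computation can be sketched in a few lines (the (x1, y1, x2, y2) box format and the function name are illustrative assumptions, not taken from the patent):

```python
def racket_position(box):
    """Midpoint of a detected bounding box (x1, y1, x2, y2) in pixel coordinates."""
    x1, y1, x2, y2 = box  # assumed corner format: top-left and bottom-right corners
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
```

The midpoint then serves as key point 18 alongside the 18 body key points.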
Faster R-CNN generally comprises 4 parts:
(1) extracting picture features (feature maps) using a mature backbone model such as VGG16;
(2) feeding the picture features into an RPN (Region Proposal Network) to obtain proposals;
(3) feeding the picture features and the proposals into the RoI Pooling layer to obtain integrated proposal features;
(4) predicting the bounding box of the object and the class of the object (including a second regression) according to the proposal features.
the overall flow of the Faster-rcnn algorithm is as follows:
(1) Inputting a test image;
(2) Inputting the whole picture into CNN for feature extraction;
(3) Generating a pile of Anchor boxes by using RPN, cutting and filtering the Anchor boxes, and judging whether anchors belong to foreground (forkround) or background (background) by softmax (namely, whether an object or is not an object, so that the Anchor boxes are classified into two categories; meanwhile, the other branch bounding box regression correcting anchor box forms a more accurate proposal (note: the more accurate here is relative to the next box regression of the rear full connection layer);
(4) Mapping the suggestion window to the last layer convolution feature map of the CNN;
(5) Enabling each Rol to generate a feature map with a fixed size through a Rol pooling layer;
(6) And jointly training the classification probability and the Bounding box regression (Bounding box regression) by utilizing Softmax Loss (detection classification probability) and Smooth L1 Loss (detection Bounding box regression).
Fifth embodiment: this embodiment is described with reference to fig. 3. It differs from the first embodiment in that the key point coordinates include the position coordinates of the nose, clavicle, left shoulder, left elbow, left wrist, right shoulder, right elbow, right wrist, left crotch, left knee, left ankle, right crotch, right knee, right ankle, left eye, right eye, left ear, and right ear of the human body.
The overall flow of the OpenPose algorithm adopted by the invention is as follows:
First, features are extracted from the input picture through the first 10 layers of a VGG19 network. The resulting feature map is fed into two convolutional branches, which respectively predict the confidence of each key point and the part affinity vectors, yielding a heat map of key point confidences and a heat map of the part affinity fields (PAFs). The key points are then clustered by bipartite matching from graph theory; because the vector nature of the PAFs guarantees the accuracy of the bipartite matching, the joints belonging to the same person can finally be connected and assembled into a complete skeleton.
(1) Key point detection
The process is: probMap -> Gaussian filtering -> thresholding, obtaining mapMask.
1. cv2.GaussianBlur: Gaussian filtering is a linear smoothing filter with a good effect in removing Gaussian noise.
2. mapMask is a binary map of the probabilities, with values greater than the threshold set to 1 and all others set to 0.
(2) Key point coordinate values
To find the exact location of each key point, the maximum of each blob must be found. This is achieved by the following steps:
1. first find all contours of each key point region;
2. generate a mask for the region;
3. extract the probMap of the region by multiplying the mask with the probMap;
4. find the local maximum of the region; each contour (i.e., key point region) is processed in this way.
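A minimal NumPy sketch of this threshold-and-maximum procedure, under simplifying assumptions: a plain global arg-max over the thresholded map stands in for the OpenCV contour/mask steps, and the threshold value is illustrative:

```python
import numpy as np

def keypoint_from_probmap(prob_map, threshold=0.1):
    """Return the (x, y) pixel of the strongest response above threshold, or None."""
    mask = prob_map > threshold              # binary map: 1 where prob exceeds threshold
    if not mask.any():
        return None                          # no confident detection for this key point
    masked = np.where(mask, prob_map, 0.0)   # keep probabilities only inside the mask
    y, x = np.unravel_index(np.argmax(masked), masked.shape)
    return int(x), int(y)                    # image coordinates: x = column, y = row
```

In the real pipeline each contour would be masked and maximized separately, so that several people in frame each yield their own key point.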
(3) Detection of key point pairs
1. divide the line connecting the two points of a candidate key point pair to obtain n sample points on the line;
2. judge whether the PAF at these points has the same direction as the line connecting the key points;
3. if the directions agree to a sufficient degree, the key point pair is valid.
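The line-sampling check above can be sketched as follows (the 0.5 alignment threshold, the required agreement fraction, and the function signature are illustrative assumptions, not values from the patent):

```python
import numpy as np

def pair_is_valid(paf_x, paf_y, p_a, p_b, n_samples=10, min_agree=0.7):
    """Validate a candidate limb: fraction of sampled PAF vectors aligned with A->B."""
    p_a, p_b = np.asarray(p_a, float), np.asarray(p_b, float)
    direction = p_b - p_a
    norm = np.linalg.norm(direction)
    if norm == 0:
        return False                          # degenerate pair: both points coincide
    direction /= norm                         # unit vector along the candidate limb
    agree = 0
    for t in np.linspace(0.0, 1.0, n_samples):
        x, y = p_a + t * (p_b - p_a)          # sample point on the connecting line
        paf_vec = np.array([paf_x[int(y), int(x)], paf_y[int(y), int(x)]])
        if np.dot(paf_vec, direction) > 0.5:  # PAF roughly aligned with the limb
            agree += 1
    return agree / n_samples >= min_agree     # valid if enough samples agree
```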
(4) Implementation analysis of the key point combination
For each detected valid connection pair, the connection is assigned to a human body:
1. an empty list is first created to hold all the key points of each human body;
2. if partA is not in any of the body lists, the key point pair belongs to a newly appearing body, so a new list is created.
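The grouping logic described above can be sketched as a greedy assignment (assuming key points are represented by unique ids; the function name is illustrative):

```python
def group_into_bodies(valid_pairs):
    """Greedily group valid (partA, partB) key point pairs into per-person lists."""
    bodies = []                       # one list of key point ids per detected person
    for part_a, part_b in valid_pairs:
        for body in bodies:
            if part_a in body:        # partA already belongs to this person
                body.append(part_b)
                break
        else:                         # partA not in any body: a newly appearing person
            bodies.append([part_a, part_b])
    return bodies
```

For example, the pairs (wrist, elbow) and (elbow, shoulder) of one person chain into a single list, while pairs of a second person start a new list.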
Sixth embodiment: this embodiment differs from the fifth embodiment in that the specific process of step five is as follows:
when the technical action in the image obtained in step two is the preparation posture of a backhand serve, if the ordinate of the right wrist position is larger than the ordinate of the right crotch position and the ordinate of the right crotch position is larger than the ordinate of the racket position, the preparation posture of the backhand serve is standard; otherwise, the technical action is corrected according to the requirements on the ordinates of the right wrist, right crotch, and racket positions;
when the technical action in the image obtained in step two is the swing and hit of a backhand stroke, if the ordinate of the racket position is less than or equal to the ordinate of the left crotch position, or the ordinate of the racket position is less than or equal to the ordinate of the right crotch position, the swing and hit action is standard; otherwise, the technical action is corrected according to the requirements on the ordinates of the racket position and the left crotch position, or of the racket position and the right crotch position.
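These two rules translate directly into code (function names are illustrative; the comparisons follow the patent's convention that the "larger than" relations between ordinates hold for the standard posture, which is an assumption about the coordinate system, since pixel coordinates often grow downward):

```python
def backhand_serve_prep_ok(right_wrist_y, right_crotch_y, racket_y):
    """Preparation posture rule: wrist ordinate > crotch ordinate > racket ordinate."""
    return right_wrist_y > right_crotch_y > racket_y

def backhand_swing_ok(racket_y, left_crotch_y, right_crotch_y):
    """Swing-and-hit rule: racket ordinate no greater than either crotch ordinate."""
    return racket_y <= left_crotch_y or racket_y <= right_crotch_y
```

A frame failing either check would trigger the correction feedback for that technical action.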
The above calculation examples of the present invention merely explain the calculation model and calculation flow of the invention in detail and are not intended to limit its embodiments. It will be apparent to those skilled in the art that other variations and modifications can be made on the basis of the above description; it is neither possible nor necessary to exhaust all embodiments here, and all obvious variations and modifications derived therefrom remain within the protection scope of the invention.

Claims (4)

1. A technical action correction method for badminton, characterized by comprising the following steps:
step one, collecting videos of the technical actions of students in badminton, and preprocessing the collected videos to obtain preprocessed videos;
step two, performing framing processing on the preprocessed videos to obtain framed images;
step three, detecting the racket position in the image obtained in step two to obtain the racket position coordinates, wherein the racket position in the image obtained in step two is detected by adopting the Faster R-CNN algorithm;
the specific process of step three is as follows:
step three-1, constructing a Faster R-CNN network model, and training the constructed model to obtain a trained Faster R-CNN network model;
step three-2, inputting the image obtained in step two into the VGG16 module of the trained Faster R-CNN network model to extract the features of the input image;
step three-3, inputting the features extracted in step three-2 into the RPN module of the trained Faster R-CNN network model to generate proposal windows;
step three-4, inputting the features extracted in step three-2 and the proposal windows generated in step three-3 into the RoI Pooling layer of the trained Faster R-CNN network model to obtain the comprehensive features of the image obtained in step two;
step three-5, obtaining the racket position coordinates according to the comprehensive features obtained in step three-4;
step four, extracting the key point coordinates in the image obtained in step two by adopting the OpenPose algorithm;
step five, judging whether the technical action in the image obtained in step two is standard according to the racket position coordinates and the key point coordinates; if not, correcting the technical action; if so, performing no processing.
2. The technical action correction method for badminton according to claim 1, characterized in that, in step one, the preprocessing of the collected videos specifically comprises: converting the collected analog video into digital video and storing it.
3. The method of claim 1, wherein the key point coordinates comprise the position coordinates of the nose, clavicle, left shoulder, left elbow, left wrist, right shoulder, right elbow, right wrist, left crotch, left knee, left ankle, right crotch, right knee, right ankle, left eye, right eye, left ear, and right ear of the human body.
4. The technical action correction method for badminton according to claim 3, characterized in that the specific process of step five is as follows:
when the technical action in the image obtained in step two is the preparation posture of a backhand serve, if the ordinate of the right wrist position is larger than the ordinate of the right crotch position and the ordinate of the right crotch position is larger than the ordinate of the racket position, the preparation posture of the backhand serve is standard; otherwise, the technical action is corrected according to the requirements on the ordinates of the right wrist, right crotch, and racket positions;
when the technical action in the image obtained in step two is the swing and hit of a backhand stroke, if the ordinate of the racket position is less than or equal to the ordinate of the left crotch position, or the ordinate of the racket position is less than or equal to the ordinate of the right crotch position, the swing and hit action is standard; otherwise, the technical action is corrected according to the requirements on the ordinates of the racket position and the left crotch position, or of the racket position and the right crotch position.
CN202110419564.9A 2021-04-19 2021-04-19 Technical action correcting method for badminton Active CN113095248B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110419564.9A CN113095248B (en) 2021-04-19 2021-04-19 Technical action correcting method for badminton


Publications (2)

Publication Number Publication Date
CN113095248A CN113095248A (en) 2021-07-09
CN113095248B true CN113095248B (en) 2022-10-25

Family

ID=76678723

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110419564.9A Active CN113095248B (en) 2021-04-19 2021-04-19 Technical action correcting method for badminton

Country Status (1)

Country Link
CN (1) CN113095248B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109753891A (en) * 2018-12-19 2019-05-14 山东师范大学 Football player's orientation calibration method and system based on human body critical point detection
CN109829442A (en) * 2019-02-22 2019-05-31 焦点科技股份有限公司 A kind of method and system of the human action scoring based on camera
CN110781777A (en) * 2019-10-10 2020-02-11 深圳市牧爵电子科技有限公司 Method, system and storage medium for judging human body action in sports training
CN110929595A (en) * 2019-11-07 2020-03-27 河海大学 System and method for training or entertainment with or without ball based on artificial intelligence
CN111369629A (en) * 2019-12-27 2020-07-03 浙江万里学院 Ball return trajectory prediction method based on binocular visual perception of swinging, shooting and hitting actions
CN111773651A (en) * 2020-07-06 2020-10-16 湖南理工学院 Badminton training monitoring and evaluating system and method based on big data
WO2020215565A1 (en) * 2019-04-26 2020-10-29 平安科技(深圳)有限公司 Hand image segmentation method and apparatus, and computer device
CN112057833A (en) * 2020-09-09 2020-12-11 刘圆芳 Badminton forehand high-distance ball flapping motion identification method
CN112446313A (en) * 2020-11-20 2021-03-05 山东大学 Volleyball action recognition method based on improved dynamic time warping algorithm

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109508656A (en) * 2018-10-29 2019-03-22 重庆中科云丛科技有限公司 A kind of dancing grading automatic distinguishing method, system and computer readable storage medium
CN109948459B (en) * 2019-02-25 2023-08-25 广东工业大学 Football action evaluation method and system based on deep learning
CN110348524B (en) * 2019-07-15 2022-03-04 深圳市商汤科技有限公司 Human body key point detection method and device, electronic equipment and storage medium
CN110941990B (en) * 2019-10-22 2023-06-16 泰康保险集团股份有限公司 Method and device for evaluating human body actions based on skeleton key points
CN110956141B (en) * 2019-12-02 2023-02-28 郑州大学 Human body continuous action rapid analysis method based on local recognition


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Faster-RCNN power transmission tower detection algorithm; Yan Xing et al.; Computer Simulation; 2020-02-15 (No. 02); full text *
A sports self-training system based on Kinect; Li Xin et al.; Computer Technology and Development; April 2019; pp. 122-127 *

Also Published As

Publication number Publication date
CN113095248A (en) 2021-07-09

Similar Documents

Publication Publication Date Title
CN110472554B (en) Table tennis action recognition method and system based on attitude segmentation and key point features
CN107610087B (en) Tongue coating automatic segmentation method based on deep learning
CN109949341B (en) Pedestrian target tracking method based on human skeleton structural features
CN109522850B (en) Action similarity evaluation method based on small sample learning
CN110688965B (en) IPT simulation training gesture recognition method based on binocular vision
CN109684924A (en) Human face in-vivo detection method and equipment
WO2019237567A1 (en) Convolutional neural network based tumble detection method
CN109684925A (en) A kind of human face in-vivo detection method and equipment based on depth image
CN113095263B (en) Training method and device for pedestrian re-recognition model under shielding and pedestrian re-recognition method and device under shielding
CN111160291B (en) Human eye detection method based on depth information and CNN
WO2021051526A1 (en) Multi-view 3d human pose estimation method and related apparatus
CN110458235B (en) Motion posture similarity comparison method in video
CN108898623A (en) Method for tracking target and equipment
CN111723687A (en) Human body action recognition method and device based on neural network
CN111709365A (en) Automatic human motion posture detection method based on convolutional neural network
CN112200074A (en) Attitude comparison method and terminal
CN107463873A (en) A kind of real-time gesture analysis and evaluation methods and system based on RGBD depth transducers
CN112446313A (en) Volleyball action recognition method based on improved dynamic time warping algorithm
CN110197501B (en) Image processing method and apparatus
CN111178201A (en) Human body sectional type tracking method based on OpenPose posture detection
CN104331700B (en) Group Activity recognition method based on track energy dissipation figure
CN113095248B (en) Technical action correcting method for badminton
CN110163112B (en) Examinee posture segmentation and smoothing method
CN115240269A (en) Gait recognition method and device based on body type transformation and storage medium
CN112329712A (en) 2D multi-person posture estimation method combining face detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant