CN109948459B - Football action evaluation method and system based on deep learning - Google Patents


Info

Publication number
CN109948459B
CN109948459B (application CN201910143782.7A)
Authority
CN
China
Prior art keywords
football
standard
action
video
actions
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910143782.7A
Other languages
Chinese (zh)
Other versions
CN109948459A (en)
Inventor
邹凯
尹明
黄伟填
曾弈秋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Technology filed Critical Guangdong University of Technology
Priority to CN201910143782.7A priority Critical patent/CN109948459B/en
Publication of CN109948459A publication Critical patent/CN109948459A/en
Application granted granted Critical
Publication of CN109948459B publication Critical patent/CN109948459B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a football action evaluation method based on deep learning, which comprises the following steps: S1: formulate standard templates of the standard actions of each football action category; S2: collect sports video of football training and matches with a camera; S3: process the video data in the sports video to obtain the football action category; S4: match the football action category obtained in step S3 against the standard action of that category in the standard template, and output the difference from the standard action. The human body posture estimation model provided by the invention reduces the redundancy of deep-learning-based human body posture estimation models, speeds up the extraction of human skeleton key points from video frames, and shortens the running time; a skeleton point graph structure is adopted to build skeleton data carrying spatio-temporal information, which characterizes the local limb movements of football actions well and improves the recognition rate.

Description

Football action evaluation method and system based on deep learning
Technical Field
The invention relates to the technical field of computer vision, in particular to a football action evaluation method and system based on deep learning.
Background
Football is known as the world's number-one sport; it involves many participants, a large field, complex technique and long match times, making it a highly adversarial team sport. With the development of broadcasting technology and big data, more and more football professionals derive features such as running distance, technical characteristics and movement habits by analyzing match video, extract statistical patterns from these features, and use them to plan targeted personnel deployment and tactics. With the development of wearable devices, data such as a player's running speed and running heat map can be obtained during a match; however, counting and analyzing player actions still conventionally relies on watching video manually. How to quickly analyze and understand player actions by computer is therefore a necessary and meaningful topic, and an important application of computer-based human posture estimation and human action recognition.
Currently, conventional football action evaluation is performed by professionals who watch and assess videos manually. With the development of deep learning, action recognition based on deep learning has become a very popular research direction in the field of computer vision.
A human action recognition method is shown in fig. 1. The input video is sampled to obtain a sequence of sampled image frames; a trained human skeleton key point (human posture estimation) detection model detects the human skeleton key points in each sampled frame, producing a key point position heat map (representing the probability that each preset skeleton key point lies at each position) for every frame; the sequence of heat maps is then fed into a trained action classification model, which classifies the key points to obtain the human action corresponding to the video.
The existing scheme has several drawbacks. First, the human action recognition system is usually installed on a PC, while football matches and training mostly take place outdoors, so football action evaluation must also be carried out outdoors; the equipment of the existing scheme limits outdoor use. Second, a trained human skeleton key point detection model is applied to every frame of the image; existing detection models are redundant, each detection takes a long time because the models are too complex and require substantial computing resources, and since the scene constraints of football action evaluation require deployment on embedded devices, whose hardware resources are far fewer than a PC's, real-time performance is hard to achieve in practice. Third, the video to be evaluated is split into individual frames and the human skeleton points of each frame are detected separately; in reality an action is a continuous process, with temporal continuity between frames and spatial continuity between the skeleton key points of different frames. By splitting the video into frames, the existing scheme can only evaluate the pose at a single instant, whereas football action evaluation must evaluate a whole action process.
Disclosure of Invention
The invention provides a football action evaluation method and system based on deep learning.
The invention aims to provide a football action evaluation method based on deep learning, which solves the problem of long recognition times caused by the complexity and large computing-resource requirements of existing models, and also addresses the existing scheme's neglect of spatio-temporal continuity.
The invention further aims to provide a football action evaluation system based on deep learning, which reduces equipment cost and operation difficulty and improves portability.
In order to solve the technical problems, the technical scheme of the invention is as follows:
a football action evaluation method based on deep learning comprises the following steps:
S1: formulate standard templates of the standard actions of each football action category;
S2: collect sports video of football training and matches with a camera;
S3: process the video data in the sports video to obtain the football action category;
S4: match the football action category obtained in step S3 against the standard action of that category in the standard template, and output the difference from the standard action.
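The four-step pipeline can be sketched in greatly simplified form as follows. All function names and the toy data are illustrative assumptions, not taken from the patent; the real S2 and S3 stages involve a camera and the recognition network described later.

```python
# Hypothetical sketch of the S1-S4 pipeline; names and data are illustrative.
def build_standard_templates(labeled_sequences):
    """S1: average the skeleton point sequences of each action category."""
    templates = {}
    for category, sequences in labeled_sequences.items():
        n = len(sequences)
        templates[category] = [sum(vals) / n for vals in zip(*sequences)]
    return templates

def evaluate_action(sequence, category, templates):
    """S4: per-point difference between a recognized action and its template."""
    return [abs(a - b) for a, b in zip(sequence, templates[category])]

# toy data: two clips of a "chest trap", each flattened to 3 coordinates
templates = build_standard_templates({"chest_trap": [[1.0, 2.0, 3.0],
                                                     [3.0, 2.0, 1.0]]})
diff = evaluate_action([2.0, 2.5, 2.0], "chest_trap", templates)
```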
Preferably, the step S1 of formulating standard templates of various football category standard actions includes the following steps:
s1.1: collecting public videos of standard actions of various football action categories and performing manual classification;
s1.2: the human body skeleton key points are calculated for the standard actions of each category by utilizing the human body posture estimation model, the human body skeleton key points of the standard actions of each category are averaged, a skeleton point sequence is obtained for the standard actions of each category, and all skeleton point sequences form a standard template.
By observing the actions commonly used in football matches, 12 common standard football actions, such as the chest trap, inside-of-foot trap and header trap, are selected as the observed standard actions. Public videos of these actions are collected and classified manually; the human posture estimation model with trained parameters is then used to compute the human skeleton key points, and the skeleton key points of each action category are averaged. Each category thus yields one skeleton point sequence, and these sequences together form the football standard template.
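The averaging in S1.2 can be sketched as below, under the assumption that, after length normalization, every clip of one category yields a keypoint array of shape (N_frames, K_joints, 2); the shapes and values here are illustrative, not from the patent.

```python
import numpy as np

def make_template(clip_keypoints):
    """Average the keypoint sequences of one action category element-wise."""
    stacked = np.stack(clip_keypoints)   # (n_clips, N, K, 2)
    return stacked.mean(axis=0)          # (N, K, 2): one skeleton point sequence

# two toy clips: 4 frames, 12 joints, 2D coordinates
clips = [np.zeros((4, 12, 2)), np.ones((4, 12, 2))]
template = make_template(clips)          # every coordinate averages to 0.5
```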
Preferably, the human body posture estimation model is specifically as follows:
the video frames of the length-normalized motion video are input into a VGG16 or ResNet convolutional neural network to obtain a feature map, a high-dimensional feature matrix carrying the high-dimensional features of the video frame. The feature map is convolved by a large separable convolution (LSC) and then passed through a Region Proposal Network (RPN) to obtain proposal regions; position-sensitive ROI pooling (PSROI Pooling) is applied to the proposal regions, which are then input into an R-CNN network for classification and position regression. The R-CNN network has only one fully connected layer. The skeleton point sequence is thus obtained.
In the existing method, each ROI independently passes through two fully connected layers to extract features; the channel count of each ROI matches the feature map, and the fully connected layers perform inner products over all channels, so the computation is large, the weights cannot be shared, and a great deal of computing resources and time are consumed. In the present scheme the large separable convolution (LSC) is placed after the feature map, so the resulting ROIs have fewer channels than in the existing scheme; position-sensitive ROI pooling (PSROI Pooling) replaces RoIAlign, manually introducing position information when pooling the proposal regions, which effectively improves a deeper network's sensitivity to object position. Through these two improvements, the final R-CNN sub-network has only one fully connected layer; the number of ROI channels is reduced, the fully connected inner-product operations are reduced, and the model runs faster.
Preferably, the processing of the video data in the sports video in step S3 to obtain the football action category includes the following steps:
s3.1: normalizing the length of the motion video;
s3.2: obtaining a skeleton point sequence in each frame of the motion video by using a human body posture estimation model;
S3.3: connect the skeleton points within each frame of the motion video according to the physiological structure of the human body, and connect the skeleton points at the same position across different frames, constructing a skeleton point space-time diagram G = (V, E), where V is the set of nodes, i.e. the skeleton points, representing their spatial structure, and E is the set of edges in the graph, representing the connections between different frames, i.e. the trajectory of each node over time;
s3.4: and carrying out feature extraction on the skeleton point space-time diagram by adopting a graph convolution neural network to obtain the football action category.
Because the football training video captured by the camera is affected by factors such as the performer's technical habits and execution speed, the length of each captured video is highly variable; each input sample must therefore be preprocessed by length normalization before the human posture estimation model extracts the human skeleton key points. The human body consists of limbs, trunk and head connected by joints; in football technique, many actions are performed by the limbs. If the relative positions of the body parts can be determined, a good foundation is laid for recognizing and guiding football training actions. Common football actions are mainly local limb movements, human skeleton key points interpret the spatial structure of the body well, and an action is a trajectory of the limbs over time, so an action can be represented by a skeleton point sequence. The skeleton point space-time diagram is non-Euclidean data, on which the convolution operation of an ordinary convolutional neural network cannot operate, so a graph neural network is adopted for feature extraction.
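The construction of the skeleton point space-time diagram in S3.3 can be sketched as an adjacency matrix over (frame, joint) nodes. The bone list below is a made-up toy topology, not the patent's joint scheme; the point is the two edge types: spatial edges inside a frame and temporal edges between the same joint in adjacent frames.

```python
import numpy as np

BONES = [(0, 1), (1, 2), (2, 3)]   # hypothetical intra-frame connections
K, T = 4, 3                        # joints per frame, number of frames

def build_st_adjacency(bones, k, t):
    """Adjacency of the space-time graph: node index = frame * k + joint."""
    n = k * t
    A = np.zeros((n, n), dtype=int)
    for f in range(t):
        for i, j in bones:                 # spatial edges within one frame
            a, b = f * k + i, f * k + j
            A[a, b] = A[b, a] = 1
    for f in range(t - 1):
        for j in range(k):                 # temporal edge: same joint, next frame
            a, b = f * k + j, (f + 1) * k + j
            A[a, b] = A[b, a] = 1
    return A

A = build_st_adjacency(BONES, K, T)
```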
Preferably, the step S3.1 of normalizing the length of the motion video includes the following steps:
S3.1.1: let the expected video length be N and the actual length of the motion video be n. If n > N, sample the motion video at equal intervals so that its length becomes N; if n < N, interpolate the motion video at equal intervals so that its length becomes N.
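The length normalization step can be sketched as below, treating each frame as a numeric value for simplicity; in practice each element would be a frame or keypoint array, and the equal-interval interpolation shown here is a plain linear blend, an assumption rather than the patent's exact scheme.

```python
import numpy as np

def normalize_length(frames, N):
    """Normalize a clip of n frames to length N by sampling or interpolation."""
    n = len(frames)
    if n == N:
        return frames
    idx = np.linspace(0, n - 1, N)     # N positions spread evenly over the clip
    if n > N:
        return [frames[int(round(i))] for i in idx]      # equal-interval sampling
    lo = np.floor(idx).astype(int)
    hi = np.ceil(idx).astype(int)
    w = idx - lo
    return [(1 - wi) * frames[l] + wi * frames[h]         # linear interpolation
            for l, h, wi in zip(lo, hi, w)]
```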
Preferably, in S3.4 the feature extraction on the skeleton point space-time diagram is performed with a graph convolutional neural network, and each graph convolution operation is:

f_out = Λ^(-1/2) (A + I) Λ^(-1/2) f_in W

where A is the adjacency matrix of the skeleton point space-time diagram; I is its self-connection (identity) matrix; Λ is the diagonal degree matrix used for symmetric normalization, with Λ^(jj) = Σ_k (A^(jk) + I^(jk)), the index j running over the nodes of the input; f_in is the input of the graph convolution, i.e. the graph-structured data of the previous layer; f_out is the data produced by the current graph convolution; and W is the weight matrix of the graph convolution.
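The graph convolution step described above can be sketched in a few lines of numpy; the tiny two-node graph and identity weight matrix are illustrative only.

```python
import numpy as np

def graph_conv(A, f_in, W):
    """One step of f_out = L^(-1/2) (A + I) L^(-1/2) f_in W (symmetric norm)."""
    A_hat = A + np.eye(A.shape[0])        # add self-connections
    d = A_hat.sum(axis=1)                 # node degrees (diagonal of Lambda)
    D_inv_sqrt = np.diag(d ** -0.5)       # Lambda^(-1/2)
    return D_inv_sqrt @ A_hat @ D_inv_sqrt @ f_in @ W

A = np.array([[0.0, 1.0], [1.0, 0.0]])    # two connected nodes
f_in = np.ones((2, 2))                    # constant features
out = graph_conv(A, f_in, np.eye(2))      # normalized aggregation preserves ones
```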
Preferably, the difference output in S4 is specifically the difference between the skeleton point coordinates and the standard skeleton point coordinates. This difference is measured, and different evaluations are given according to its magnitude. For non-standard actions, the trainee is guided toward the correct action according to the relative positions of the template's human skeleton key points.
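The grading of the coordinate difference can be sketched as below. The thresholds and grade names are purely illustrative assumptions; the patent only states that different evaluations are given according to the magnitude of the difference.

```python
import numpy as np

def evaluate(sequence, template, good=0.05, ok=0.15):
    """Per-joint coordinate difference against the matched template, plus a grade."""
    diff = np.abs(np.asarray(sequence) - np.asarray(template))
    mean_err = float(diff.mean())
    if mean_err <= good:
        grade = "standard"
    elif mean_err <= ok:
        grade = "acceptable"
    else:
        grade = "needs correction"
    return diff, mean_err, grade

diff, err, grade = evaluate([[0.0, 0.0]], [[0.1, 0.1]])
```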
A football action evaluation system based on deep learning comprises a depth camera and a Raspberry Pi processor, wherein the depth camera is connected to the Raspberry Pi processor through a data line, and the Raspberry Pi processor comprises:
the football standard action making module is used for making standard templates of category standard actions of various football actions;
the video acquisition module is used for acquiring sports videos of football training and competition by using the camera;
the motion recognition and classification module is used for processing video data in the sports video to obtain football motion categories;
and the matching evaluation module is used for matching the obtained football action category with the standard action of the football action category in the standard template and outputting the difference between the football action category and the standard action.
Preferably, the Raspberry Pi processor is also connected to a display and a sound output device.
Preferably, the Raspberry Pi processor is connected to a wireless communication module and to Ethernet.
Compared with the prior art, the technical scheme of the invention has the beneficial effects that:
1. the invention provides a human posture estimation model which, by adding an LSC convolution after the feature map and manually introducing position information, reduces the redundancy of deep-learning-based human posture estimation models, speeds up the extraction of human skeleton key points from video frames under the limited computing resources of embedded devices, and shortens the running time;
2. skeleton data carrying spatio-temporal information is built with a skeleton point graph structure, which characterizes the local limb movements of football actions well and improves the recognition rate;
3. the system is integrated in an embedded Raspberry Pi, which improves portability, solves the problem of evaluating football actions outdoors, and reduces equipment cost and operation difficulty.
Drawings
Fig. 1 is a schematic diagram of a conventional human motion recognition method.
Fig. 2 is a flowchart of a football motion evaluation method based on deep learning.
Fig. 3 is a video length normalization flowchart.
Fig. 4 is a schematic diagram of a three-dimensional representation of human body joints.
Fig. 5 is a schematic diagram of a conventional human body posture estimation model structure.
Fig. 6 is a schematic diagram of a human body posture estimation model structure according to the present invention.
Fig. 7 is a schematic diagram of a skeleton point space-time diagram.
Fig. 8 is a schematic diagram of a football motion evaluation system based on deep learning.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the present patent;
for the purpose of better illustrating the embodiments, certain elements of the drawings may be omitted, enlarged or reduced and do not represent the actual product dimensions;
it will be appreciated by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The technical scheme of the invention is further described below with reference to the accompanying drawings and examples.
Example 1
The embodiment provides a football action evaluation method based on deep learning, as shown in fig. 2, comprising the following steps:
s1: making standard templates of category standard actions of various football actions;
in step S1, a standard template of various football action category standard actions is formulated, comprising the following steps:
s1.1: collecting public videos of standard actions of various football action categories and performing manual classification;
s1.2: the human body skeleton key points are calculated for the standard actions of each category by utilizing the human body posture estimation model, the human body skeleton key points of the standard actions of each category are averaged, a skeleton point sequence is obtained for the standard actions of each category, and all skeleton point sequences form a standard template.
The human body posture estimation model used here is described in detail below.
s2: collecting a sports video of football training and competition by using a camera;
s3: processing video data in the sports video to obtain football action categories;
in step S3, video data in the sports video is processed to obtain a football action category, which includes the following steps:
s3.1: normalizing the length of the motion video;
normalizing the length of the motion video in step S3.1, as shown in fig. 3, includes the following steps:
S3.1.1: let the expected video length be N and the actual length of the motion video be n. If n > N, sample the motion video at equal intervals so that its length becomes N; if n < N, interpolate the motion video at equal intervals so that its length becomes N.
S3.2: obtaining a skeleton point sequence in each frame of the motion video by using a human body posture estimation model, as shown in fig. 4;
S3.3: connect the skeleton points within each frame of the motion video according to the physiological structure of the human body, and connect the skeleton points at the same position across different frames, constructing a skeleton point space-time diagram G = (V, E), where V is the set of nodes, i.e. the skeleton points, representing their spatial structure, and E is the set of edges in the graph, representing the connections between different frames, i.e. the trajectory of each node over time, as shown in fig. 7;
s3.4: and carrying out feature extraction on the skeleton point space-time diagram by adopting a graph convolution neural network to obtain the football action category.
In step S3.4, a graph convolutional neural network is adopted to extract features from the skeleton point space-time diagram, and each graph convolution operation is:

f_out = Λ^(-1/2) (A + I) Λ^(-1/2) f_in W

where A is the adjacency matrix of the skeleton point space-time diagram; I is its self-connection (identity) matrix; Λ is the diagonal degree matrix used for symmetric normalization, with Λ^(jj) = Σ_k (A^(jk) + I^(jk)), the index j running over the nodes of the input; f_in is the input of the graph convolution, i.e. the graph-structured data of the previous layer; f_out is the data produced by the current graph convolution; and W is the weight matrix of the graph convolution.
S4: match the football action category obtained in step S3 against the standard action of that category in the standard template, and output the difference from the standard action.
In step S4, the difference output from the standard action is specifically the difference between the skeleton point coordinates and the standard skeleton point coordinates.
The human body posture estimation model is specifically as follows:
the schematic diagram of the Mask R-CNN structure of the prior method is shown in fig. 5, a normalized video frame is input, and the backbone network is a VGG16 convolutional neural network; the feature map is a high-dimensional feature matrix extracted after convolution by a convolution neural network, and has the high-dimensional feature of a video frame; RPN (Region Proposal Network) is Proposals Regions network for extracting video frames; roialign is to further refine the Propos regions, and since the Propos regions obtained by RPN is only a rough region, the coordinates of the region position are further regressed by Roialign to obtain the ROI (Area-of-interest); the R-CNN classifies and positional regression for each ROI and takes pixel-level segmentation for ROI deconvolution using FCN (fully connection network). In the existing method, each ROI is independently subjected to full connection twice to extract the characteristics, the number of channels of each ROI is consistent with the characteristic diagram, the full connection is used for carrying out inner product operation on all channels, the calculated amount is large, the weights cannot be shared, and a large amount of calculation resources and time can be occupied.
To address this problem, this embodiment proposes a human posture estimation model, shown in fig. 6. To avoid the two unshared fully connected passes per ROI, an LSC (large separable convolution) is placed after the feature map: when the backbone produces a feature map with 2048 channels, the channel count after the LSC becomes 490, so the resulting ROIs have fewer channels than in fig. 5, and the LSC weights are shared by the subsequent R-CNN network. PSROI Pooling replaces RoIAlign, manually introducing position information when pooling the proposal regions, which effectively improves a deeper network's sensitivity to object position. Through these two improvements, the final R-CNN sub-network has only one fully connected layer; the number of ROI channels is reduced, the fully connected inner-product operations are reduced, and the model runs faster.
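A back-of-envelope check of the channel reduction described above: with a 2048-channel backbone feature map reduced to 490 channels after the LSC, the per-ROI fully connected inner products shrink proportionally. The pooled ROI size of 7×7 is an assumption (common in R-FCN-style detectors), not stated in the patent.

```python
def fc_input_size(channels, pooled=7):
    """Inputs to the fully connected layer for one pooled ROI (assumed 7x7)."""
    return channels * pooled * pooled

before = fc_input_size(2048)   # inputs per ROI without the LSC
after = fc_input_size(490)     # inputs per ROI after the LSC
reduction = before / after     # roughly 4x fewer inner-product terms
```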
Example 2
Based on the deep-learning football action evaluation method described in embodiment 1, this embodiment provides a football action evaluation system based on deep learning, as shown in fig. 8, comprising a depth camera and a Raspberry Pi processor, wherein the depth camera is connected to the Raspberry Pi processor through a data line, and the Raspberry Pi processor comprises:
the football standard action making module is used for making standard templates of category standard actions of various football actions;
the video acquisition module is used for acquiring sports videos of football training and competition by using the camera;
the motion recognition and classification module is used for processing video data in the sports video to obtain football motion categories;
and the matching evaluation module is used for matching the obtained football action category with the standard action of the football action category in the standard template and outputting the difference between the football action category and the standard action.
The Raspberry Pi processor is also connected to a display and a sound output device.
The Raspberry Pi processor is connected to a wireless communication module and to Ethernet.
The same or similar reference numerals correspond to the same or similar components;
the terms describing the positional relationship in the drawings are merely illustrative, and are not to be construed as limiting the present patent;
it is to be understood that the above examples of the present invention are provided by way of illustration only and not by way of limitation of the embodiments of the present invention. Other variations or modifications of the above teachings will be apparent to those of ordinary skill in the art. It is not necessary here nor is it exhaustive of all embodiments. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the invention are desired to be protected by the following claims.

Claims (8)

1. The football action evaluation method based on deep learning is characterized by comprising the following steps of:
s1: making standard templates of category standard actions of various football actions;
s2: collecting a sports video of football training and competition by using a camera;
s3: processing video data in the sports video to obtain football action categories;
s4: matching the standard action of the football action category in the standard template by using the football action category obtained in the step S3, and outputting the difference between the standard action and the football action category;
in step S1, a standard template of various football action category standard actions is formulated, comprising the following steps:
s1.1: collecting public videos of standard actions of various football action categories and performing manual classification;
s1.2: obtaining human skeleton key points for each category of standard actions by utilizing a human body posture estimation model, averaging the human skeleton key points of each category of standard actions, obtaining a skeleton point sequence for each category of standard actions, and forming a standard template by all skeleton point sequences;
the human body posture estimation model is specifically as follows:
the video frames of the length-normalized motion video are input into a VGG16 or ResNet convolutional neural network to obtain a feature map, a high-dimensional feature matrix carrying the high-dimensional features of the video frames; the feature map is convolved by an LSC and then passed through an RPN network to obtain proposal regions; PSROI Pooling is applied to the proposal regions, which are then input into an R-CNN network for classification and position regression; the R-CNN network has one fully connected layer, and the skeleton point sequence is finally obtained.
2. The method for evaluating football actions based on deep learning according to claim 1, wherein the step S3 of processing video data in a sports video to obtain a football action category comprises the steps of:
s3.1: normalizing the length of the motion video;
s3.2: obtaining a skeleton point sequence in each frame of the motion video by using a human body posture estimation model;
S3.3: connect the skeleton points within each frame of the motion video according to the physiological structure of the human body, and connect the skeleton points at the same position across different frames, constructing a skeleton point space-time diagram G = (V, E), where V is the set of nodes, i.e. the skeleton points, representing their spatial structure, and E is the set of edges in the graph, representing the connections between different frames, i.e. the trajectory of each node over time;
s3.4: and carrying out feature extraction on the skeleton point space-time diagram by adopting a graph convolution neural network to obtain the football action category.
3. The method for evaluating football actions based on deep learning according to claim 2, wherein normalizing the length of the sports video in step S3.1 comprises the steps of:
S3.1.1: let the expected video length be N and the actual length of the motion video be n. If n > N, sample the motion video at equal intervals so that its length becomes N; if n < N, interpolate the motion video at equal intervals so that its length becomes N.
4. The method for evaluating football actions based on deep learning according to claim 3, wherein in step S3.4 a graph convolutional neural network performs feature extraction on the skeleton point space-time diagram, and each graph convolution operation is:

f_out = Λ^(-1/2) (A + I) Λ^(-1/2) f_in W

where A is the adjacency matrix of the skeleton point space-time diagram; I is its self-connection (identity) matrix; Λ is the diagonal degree matrix used for symmetric normalization, with Λ^(jj) = Σ_k (A^(jk) + I^(jk)), the index j running over the nodes of the input; f_in is the input of the graph convolution, i.e. the graph-structured data of the previous layer; f_out is the data produced by the current graph convolution; and W is the weight matrix of the graph convolution.
5. The method according to claim 4, wherein the difference from the standard action output in step S4 is specifically the difference between the detected skeleton-point coordinates and the standard skeleton-point coordinates.
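The coordinate difference of claim 5 could be reported per joint, e.g. as a Euclidean distance between detected and template skeleton points. This particular metric is an illustration; the patent does not fix the distance measure:

```python
import numpy as np

def joint_deviation(pose: np.ndarray, standard: np.ndarray) -> np.ndarray:
    """Per-joint Euclidean distance between the detected skeleton-point
    coordinates and the standard template's coordinates."""
    return np.linalg.norm(pose - standard, axis=-1)

pose = np.array([[0.0, 0.0], [1.0, 1.0]])      # two joints, (x, y)
standard = np.array([[0.0, 0.0], [1.0, 0.0]])  # template coordinates
print(joint_deviation(pose, standard))         # [0. 1.]
```

A trainee would then see which joints deviate most from the standard template for the recognized action category.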
6. A deep learning based football action evaluation system according to claim 5, comprising a depth camera and a Raspberry Pi processor, wherein the depth camera is connected to the Raspberry Pi processor by a data line, the Raspberry Pi processor comprising:
the football standard action making module is used for making standard templates of category standard actions of various football actions;
the video acquisition module is used for acquiring sports videos of football training and competition by using the camera;
the motion recognition and classification module is used for processing video data in the sports video to obtain football motion categories;
and the matching evaluation module is used for matching the obtained football action category with the standard action of that category in the standard template and outputting the difference from the standard action.
7. The deep learning based football action evaluation system of claim 6, wherein the Raspberry Pi processor is further connected to a display and a sound output device.
8. The deep learning based football action evaluation system of claim 6 or claim 7, wherein the Raspberry Pi processor is connected to a wireless communication module and to an ethernet network.
CN201910143782.7A 2019-02-25 2019-02-25 Football action evaluation method and system based on deep learning Active CN109948459B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910143782.7A CN109948459B (en) 2019-02-25 2019-02-25 Football action evaluation method and system based on deep learning


Publications (2)

Publication Number Publication Date
CN109948459A CN109948459A (en) 2019-06-28
CN109948459B (en) 2023-08-25

Family

ID=67006989

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910143782.7A Active CN109948459B (en) 2019-02-25 2019-02-25 Football action evaluation method and system based on deep learning

Country Status (1)

Country Link
CN (1) CN109948459B (en)

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110633736A (en) * 2019-08-27 2019-12-31 电子科技大学 Human body falling detection method based on multi-source heterogeneous data fusion
CN110705418B (en) * 2019-09-25 2021-11-30 西南大学 Taekwondo kicking motion video capture and scoring system based on deep LabCut
CN110941990B (en) * 2019-10-22 2023-06-16 泰康保险集团股份有限公司 Method and device for evaluating human body actions based on skeleton key points
CN110929242B (en) * 2019-11-20 2020-07-10 上海交通大学 Method and system for carrying out attitude-independent continuous user authentication based on wireless signals
CN111223549B (en) * 2019-12-30 2023-05-12 华东师范大学 Mobile terminal system and method for disease prevention based on posture correction
CN111259796A (en) * 2020-01-16 2020-06-09 东华大学 Lane line detection method based on image geometric features
CN111507192A (en) * 2020-03-19 2020-08-07 北京捷通华声科技股份有限公司 Appearance instrument monitoring method and device
CN111460960A (en) * 2020-03-27 2020-07-28 重庆电政信息科技有限公司 Motion classification and counting method
CN111680560A (en) * 2020-05-07 2020-09-18 南通大学 Pedestrian re-identification method based on space-time characteristics
CN111666844A (en) * 2020-05-26 2020-09-15 电子科技大学 Badminton player motion posture assessment method
CN111881859A (en) * 2020-07-31 2020-11-03 北京融链科技有限公司 Template generation method and device
CN112069979B (en) * 2020-09-03 2024-02-02 浙江大学 Real-time action recognition man-machine interaction system
CN111986260A (en) * 2020-09-04 2020-11-24 北京小狗智能机器人技术有限公司 Image processing method and device and terminal equipment
CN112370045B (en) * 2020-10-15 2022-04-05 北京大学 Functional action detection method and system based on artificial intelligence
CN112818800A (en) * 2021-01-26 2021-05-18 中国人民解放军火箭军工程大学 Physical exercise evaluation method and system based on human skeleton point depth image
CN112749684A (en) * 2021-01-27 2021-05-04 萱闱(北京)生物科技有限公司 Cardiopulmonary resuscitation training and evaluating method, device, equipment and storage medium
CN113052061A (en) * 2021-03-22 2021-06-29 中国石油大学(华东) Speed skating athlete motion identification method based on human body posture estimation
CN113095248B (en) * 2021-04-19 2022-10-25 中国石油大学(华东) Technical action correcting method for badminton
CN113642525A (en) * 2021-09-02 2021-11-12 浙江大学 Infant neural development assessment method and system based on skeletal points
CN114821812B (en) * 2022-06-24 2022-09-13 西南石油大学 Deep learning-based skeleton point action recognition method for pattern skating players
CN115661929B (en) * 2022-10-28 2023-11-17 北京此刻启动科技有限公司 Time sequence feature coding method and device, electronic equipment and storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
CN103390174A * 2012-05-07 2013-11-13 Shenzhen Taishan Online Technology Co., Ltd. Physical education assisting system and method based on human body posture recognition
CN107220608A * 2017-05-22 2017-09-29 South China University of Technology Guidance system and method based on basketball action model reconstruction and defence
CN108664896A * 2018-04-16 2018-10-16 Peng You Fencing action acquisition method based on OpenPose, and computer storage medium
CN108960078A * 2018-06-12 2018-12-07 Wenzhou University Method for identity recognition from actions based on monocular vision


Non-Patent Citations (1)

Title
Wan Xiaoyi, "Research on 3D Human Action Recognition Based on Spatio-Temporal Structural Relations", China Master's Theses Full-text Database, Information Science and Technology, No. 1, 2019-01-15, Chapter 4, Sections 4.1-4.2 *


Similar Documents

Publication Publication Date Title
CN109948459B (en) Football action evaluation method and system based on deep learning
US11544928B2 (en) Athlete style recognition system and method
Jiang et al. Seeing invisible poses: Estimating 3d body pose from egocentric video
Li et al. Intelligent sports training system based on artificial intelligence and big data
CN110705390A (en) Body posture recognition method and device based on LSTM and storage medium
Joshi et al. Robust sports image classification using InceptionV3 and neural networks
US9020250B2 (en) Methods and systems for building a universal dress style learner
CN107944431A (en) A kind of intelligent identification Method based on motion change
WO2013058427A1 (en) Apparatus and method for tracking the position of each part of the body for golf swing analysis
CN110490109B (en) Monocular vision-based online human body rehabilitation action recognition method
CN105912991B (en) Activity recognition based on 3D point cloud and crucial bone node
CN110738154A (en) pedestrian falling detection method based on human body posture estimation
Meng et al. A video information driven football recommendation system
Loureiro et al. Using a skeleton gait energy image for pathological gait classification
Hu et al. Basketball activity classification based on upper body kinematics and dynamic time warping
Wu et al. Pose-Guided Inflated 3D ConvNet for action recognition in videos
Krzeszowski et al. Estimation of hurdle clearance parameters using a monocular human motion tracking method
Zhi-chao et al. Key pose recognition toward sports scene using deeply-learned model
Li et al. Automated fine motor evaluation for developmental coordination disorder
CN115346272A (en) Real-time tumble detection method based on depth image sequence
CN113975775B (en) Wearable inertial body feeling ping-pong exercise training system and working method thereof
Tay et al. Markerless gait estimation and tracking for postural assessment
Liu et al. Research on action recognition of player in broadcast sports video
CN116597507A (en) Human body action normalization evaluation method and system
Li et al. Application of gait recognition technology in badminton action analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant