CN113673327B - Penalty hit prediction method based on human body posture estimation - Google Patents


Info

Publication number
CN113673327B
Authority
CN
China
Prior art keywords
penalty
human body
body part
frame
gesture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110793365.4A
Other languages
Chinese (zh)
Other versions
CN113673327A (en
Inventor
陈煜�
陈志�
李玲娟
岳文静
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications filed Critical Nanjing University of Posts and Telecommunications
Priority to CN202110793365.4A priority Critical patent/CN113673327B/en
Publication of CN113673327A publication Critical patent/CN113673327A/en
Application granted granted Critical
Publication of CN113673327B publication Critical patent/CN113673327B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/248Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30221Sports video; Sports image

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a penalty (free-throw) hit prediction method based on human body pose estimation. First, a video of a player's shot is input and preprocessed frame by frame, and the human poses in the key segments are recognized. Human body contour features in the video are extracted with a network of suitable depth. The player is tracked across frames by an inter-frame pose-distance calculation, the image is segmented with a superpixel method, and the detected human skeleton joint points are screened against a set threshold. The extracted pose coordinate positions are converted into angle features by a feature conversion method. Finally, a support vector machine is used as the algorithm for predicting the basketball penalty result, i.e., whether the player's shot is made.

Description

Penalty hit prediction method based on human body posture estimation
Technical Field
The invention relates to the technical field of pose recognition and prediction, and in particular to a penalty hit prediction method based on human body pose estimation.
Background
In basketball games, the penalty (free-throw) percentage is an extremely important statistic: penalty shots can change the course of a game and even decide its outcome. The share of penalty points in total scoring has grown steadily, so teams pay close attention to each player's penalty percentage.
Typically, a team's shooting coach evaluates a player's shooting form by observation. Alternatively, the form is analyzed by synchronizing video with a force platform and an eight-channel wireless surface electromyograph, or three-dimensional depth motion data are captured with Kinect technology, converted into quaternion format, and shaped into a dynamic three-dimensional model, after which the motion is judged against a standard by Euclidean-distance comparison.
In recent years, methods based on computer image recognition have predicted results more stably and accurately than methods based on external equipment — for example, human pose recognition with support vector machines, image-processing and contour-comparison algorithms, hand-feature recognition with feature-fusion support vector machines, and human pose recognition by feeding images into deep neural networks. Mainstream human pose recognition methods fall into two classes. The first is the top-down, computer-vision-based limb recognition method, which first recognizes the limbs of the human body and then confirms the joint points. The second is the bottom-up, computer-vision-based limb recognition method, which first recognizes the joint points of the human body and then connects them to recognize the limbs.
At present, methods for predicting and evaluating basketball penalty-shot posture are not mature and call for further improvement and innovation.
Disclosure of Invention
The invention aims to: addressing the problems in the prior art, the invention provides a penalty hit prediction method based on human body pose estimation. It solves player pose recognition and feature extraction in an input video and can effectively improve the accuracy of penalty result prediction.
The technical scheme is as follows: in order to achieve the above purpose, the invention adopts the following technical scheme:
a penalty hit prediction method based on human body posture estimation comprises the following steps:
S1, inputting a player penalty video and preprocessing it frame by frame; the preprocessing comprises image graying (grayscale conversion) and size normalization;
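The preprocessing of step S1 can be sketched in Python with NumPy. The patent names no library, and the luminance weights and the nearest-neighbour resampling below are assumptions; a real pipeline would typically use an image library with proper interpolation:

```python
import numpy as np

def preprocess_frame(frame: np.ndarray, w: int = 368, h: int = 368) -> np.ndarray:
    """Step-S1 sketch: grayscale conversion followed by size normalization.

    `frame` is an H x W x 3 RGB array; the target size w x h defaults to
    368 x 368 (a common pose-network input size, an assumption -- the
    patent only speaks of w x h).
    """
    # Luminance-weighted grayscale conversion (ITU-R BT.601 weights).
    gray = frame[..., 0] * 0.299 + frame[..., 1] * 0.587 + frame[..., 2] * 0.114
    # Nearest-neighbour size normalization to h rows and w columns.
    rows = (np.arange(h) * gray.shape[0] / h).astype(int)
    cols = (np.arange(w) * gray.shape[1] / w).astype(int)
    return gray[rows][:, cols]
```

Every frame of the input video would pass through this function before entering the recognition network of step S2.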
S2, recognizing the player's human body pose in the input video frame by frame;
S2.1, inputting the preprocessed video frame image of size w×h at the input end of the convolutional neural network, and extracting a set of feature maps F from the input image;
Step S2.2, taking the feature maps F as the input of two branch networks of the convolutional neural network: one branch predicts the body part confidence maps and the other predicts vector fields representing the position and orientation of the limbs. The body part confidence set is S = (S_1, S_2, S_3, ..., S_I), where S_i ∈ R^(w×h), i ∈ {1, ..., I}; the position-and-orientation vector field set is L = (L_1, L_2, L_3, ..., L_C), where L_c ∈ R^(w×h×2), c ∈ {1, ..., C}. At the first stage, S^1 = ρ^1(F) and L^1 = φ^1(F); afterwards, the prediction of the previous stage together with the original image features F is iterated as the input of the network:

S^t = ρ^t(F, S^(t−1), L^(t−1)), L^t = φ^t(F, S^(t−1), L^(t−1)), t ≥ 2
where t denotes the iteration index; a loss function is set at the end of each iteration. At iteration t, the loss functions are:

f_S^t = Σ_i Σ_P W(P) · ||S_i^t(P) − S_i^*(P)||_2^2
f_L^t = Σ_c Σ_P W(P) · ||L_c^t(P) − L_c^*(P)||_2^2

where S_i^* and L_c^* denote the ground truth at image point P, and W(P) is 0 or 1: it is 0 when the key point is missing, so the loss of a missing point is not computed;
after the iterations finish, the body part joint-point map S and the limb position-and-orientation map L are obtained;
S2.3, calculating the pose distance between adjacent frames for association and tracking; k_1 and k_2 denote the poses of adjacent frames, where the i-th body parts of pose k_1 and pose k_2 are surrounded by bounding boxes B_i^(k_1) and B_i^(k_2) respectively; x_i feature points are extracted from B_i^(k_1) and y_i feature points from B_i^(k_2), and the distance between pose k_1 and pose k_2 is computed from the matching of these feature points over all body parts;
the player in the video is associated and tracked through this inter-frame pose distance calculation;
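A hedged sketch of the tracking idea in step S2.3: the patent matches feature points inside per-joint bounding boxes, which is simplified below to a mean per-joint Euclidean distance between poses (the function names and the (J, 2) pose representation are illustrative, not the patent's):

```python
import numpy as np

def pose_distance(pose1, pose2):
    """Simplified inter-frame pose distance.

    Each pose is a (J, 2) array of joint coordinates; the distance is the
    mean per-joint Euclidean distance. The original method compares
    feature points extracted from per-joint bounding boxes instead.
    """
    p1, p2 = np.asarray(pose1, float), np.asarray(pose2, float)
    return float(np.linalg.norm(p1 - p2, axis=1).mean())

def track_player(prev_poses, current_pose):
    """Associate `current_pose` with the nearest pose of the previous frame."""
    dists = [pose_distance(p, current_pose) for p in prev_poses]
    return int(np.argmin(dists))
```

Associating each detection with its nearest predecessor keeps the target player's identity stable across frames even when other people appear in the shot.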
step S3, screening the body part joint points identified in the step S2.2;
setting threshold th b When the body part confidence is below the threshold th b Boundary box B for the posture of the body part node i Sequentially amplifying by one time, two times and three times, and respectively taking tr as each multiple 1 ,tr 2 ,tr 3 Sampling the multiple scale, carrying out gesture recognition again, and if the body part joint point subjected to gesture recognition again is still lower than a threshold value, calculating the similarity between the gesture of the previous frame and the current gesture as follows:
wherein m is i Representing the number of characteristic points of the boundary box of the ith joint point in the gesture in the g frame, n i Representing the number of feature points matched with the boundary box of the ith joint point in the h frame, when the similarity is higher than a threshold th b Using body part nodes of the preamble frame as candidate nodes if the similarity is below a threshold th b Body part node information for the frame is cleared.
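The screening logic of step S3 can be summarized as a small decision procedure. The callables `redetect(scale)` and `similarity_prev()` stand in for the multi-scale re-recognition and the feature-point similarity of the patent (their names and signatures are assumptions):

```python
def screen_joint(confidence, redetect, similarity_prev, th_b=0.5,
                 scales=(0.4, 0.6, 0.8)):
    """Step-S3 sketch: screen one body-part joint point.

    A joint whose confidence clears th_b is kept outright. Otherwise the
    enlarged bounding boxes (1x, 2x, 3x) are re-recognized, sampled at the
    ratios tr1..tr3 = `scales`; if that still fails, the previous frame's
    joint is reused only when the inter-frame similarity clears th_b,
    and the joint information is discarded otherwise.
    """
    if confidence >= th_b:
        return "keep"
    for scale in scales:                 # multi-scale re-recognition
        if redetect(scale) >= th_b:
            return "keep"
    if similarity_prev() >= th_b:        # fall back to the preceding frame
        return "use_previous"
    return "discard"
```

The defaults th_b = 0.5 and scales (0.4, 0.6, 0.8) follow the values given for this embodiment.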
Step S4, converting the human pose coordinate positions screened in step S3 into angle features by a feature conversion method. The coordinates of the three joint points connecting two limbs are Rt(x_0, y_0), Rb(x_1, y_1), Rm(x_2, y_2), which are converted into vectors:

l_1 = (x_0 − x_1, y_0 − y_1), l_2 = (x_2 − x_1, y_2 − y_1)

The angle between the vectors is:

θ = arccos( (l_1 · l_2) / (||l_1|| · ||l_2||) )

When detecting the player's pose, angle feature sets are extracted over multiple time periods, and the variation of the angle feature within a given time period is obtained from them.
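The feature conversion of step S4 — three joint coordinates to one included angle — can be written directly:

```python
import numpy as np

def joint_angle(rt, rb, rm):
    """Step S4: convert the joints Rt, Rb, Rm of two connected limbs into
    the vectors l1 = Rt - Rb and l2 = Rm - Rb, and return the angle
    between them in degrees as the pose feature."""
    l1 = np.asarray(rt, float) - np.asarray(rb, float)
    l2 = np.asarray(rm, float) - np.asarray(rb, float)
    cos = np.dot(l1, l2) / (np.linalg.norm(l1) * np.linalg.norm(l2))
    # Clip guards against floating-point drift just outside [-1, 1].
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
```

Applying this to the joint triples of interest (for example shoulder-elbow-wrist) over multiple time periods yields the 3-dimensional angle feature sets fed to step S5.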
Step S5, inputting the angle feature sets obtained in step S4 into a support vector machine to obtain the penalty prediction model, with the penalty results labeled: a made penalty shot is labeled 1 and a miss is labeled 0.
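The training of step S5 can be sketched with a minimal linear SVM trained by hinge-loss sub-gradient descent. This stands in for the support vector machine of the patent (a production system would use a library SVM; the learning rate, regularization, and epoch count below are assumptions):

```python
import numpy as np

def train_linear_svm(X, y, lr=0.01, reg=0.01, epochs=500):
    """Step-S5 sketch: train a linear SVM on angle-feature vectors.

    X is an (n, 3) array of 3-dimensional angle features; y holds labels
    in {0, 1} (1 = made penalty shot, 0 = miss), mapped internally to
    {-1, +1} for the hinge loss.
    """
    y_pm = np.where(np.asarray(y) == 1, 1.0, -1.0)
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y_pm):
            if yi * (xi @ w + b) < 1:            # hinge-loss violation
                w += lr * (yi * xi - reg * w)
                b += lr * yi
            else:                                 # regularization only
                w -= lr * reg * w
    return w, b

def predict_hit(w, b, x):
    """Predict 1 (hit) or 0 (miss) for one angle-feature vector."""
    return int((np.asarray(x) @ w + b) >= 0)
```

At inference time, the angle features of a new penalty video are passed through `predict_hit` to forecast whether the shot will be made.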
Further, the threshold th_b in step S3 is set to 0.5.
Further, tr_1, tr_2, tr_3 in step S3 take the values 0.4, 0.6, 0.8 respectively.
Further, the initial value of α in step S3 is 0.7.
Further, the angle feature in step S4 is 3-dimensional.
Beneficial effects:
The invention provides a basketball penalty hit prediction method based on video human body pose estimation. To better recognize the player's pose during the penalty shot and predict its result, the method improves the human pose recognition algorithm and further processes the output of the conventional algorithm, raising recognition accuracy and efficiency. The recognized output is then converted into limb included-angle features, and multi-scale feature fusion further improves the accuracy and robustness of the experiments.
Drawings
FIG. 1 is a flow chart of a penalty hit prediction method based on human body posture estimation provided by the invention.
Detailed Description
The invention will be further described below with reference to the accompanying drawing and a specific embodiment.
Fig. 1 shows the flowchart of the penalty hit prediction method based on human body pose estimation. The specific steps are as follows:
The user inputs a basketball penalty video of the athlete and preprocesses it frame by frame: each input video frame image is converted to grayscale in advance and its size is normalized; in this embodiment the input video image size is w×h. The preprocessed video frame image is fed to the input end of the convolutional neural network, and a set of feature maps F is extracted from the input image.
The feature maps F are taken as the input of two branch networks of the convolutional neural network to obtain the body part confidence maps and the vector fields representing the position and orientation of the limbs: the body part confidence set S = (S_1, S_2, S_3, ..., S_I), where S_i ∈ R^(w×h), i ∈ {1, ..., I}, and the vector field set L = (L_1, L_2, L_3, ..., L_C), where L_c ∈ R^(w×h×2), c ∈ {1, ..., C}. At the first stage, S^1 = ρ^1(F) and L^1 = φ^1(F); in the iteration process, the output of the previous stage together with the feature maps F of the original image is used as the input of the next stage:

S^t = ρ^t(F, S^(t−1), L^(t−1)), L^t = φ^t(F, S^(t−1), L^(t−1)), t ≥ 2

where t denotes the iteration index. A loss function is set at the end of each iteration; at iteration t, the loss functions are:

f_S^t = Σ_i Σ_P W(P) · ||S_i^t(P) − S_i^*(P)||_2^2
f_L^t = Σ_c Σ_P W(P) · ||L_c^t(P) − L_c^*(P)||_2^2

where W(P) is 0 or 1; when a key point is missing, W(P) is 0 and the loss of the missing point is not computed. After the iterations finish, the results S (the body part joint-point map) and L (the limb connection map) are obtained.
Human body tracking of the player in the video is then performed by the inter-frame pose distance calculation. k_1 and k_2 denote the poses of adjacent frames, where the i-th joint points of pose k_1 and pose k_2 are surrounded by bounding boxes B_i^(k_1) and B_i^(k_2) respectively; x_i feature points are extracted from B_i^(k_1) and y_i feature points from B_i^(k_2), and the distance between pose k_1 and pose k_2 is computed from the matching of these feature points over all joint points.
The player in the video is associated and tracked through this inter-frame pose distance calculation.
A threshold th_b is set: when the confidence of a body part is below the threshold th_b, the bounding box B_i of that body part joint point is enlarged to one, two and three times its size, each scale is sampled at the ratios tr_1, tr_2, tr_3 respectively, and pose recognition is run again on the multi-scale samples. If the re-recognized body part joint point is still below the threshold, the similarity between the pose of the previous frame and the current pose is calculated,
where m_i denotes the number of feature points in the bounding box of the i-th joint point of the pose in frame g, and n_i denotes the number of feature points matched in the bounding box of the i-th joint point in frame h; when the similarity is above the threshold th_b, the body part joint point of the preceding frame is used as a candidate joint point; if the similarity is below the threshold th_b, the body part joint-point information of this frame is cleared. In this embodiment, th_b is set to 0.5; tr_1, tr_2, tr_3 take the values 0.4, 0.6, 0.8 respectively; and the initial value of α is 0.7.
Further, the screened human pose coordinate positions are converted into angle features by the feature conversion method; in the invention, the angle feature is set to 3 dimensions. The coordinates of the three joint points connecting two limbs are Rt(x_0, y_0), Rb(x_1, y_1), Rm(x_2, y_2), which are converted into vectors:

l_1 = (x_0 − x_1, y_0 − y_1), l_2 = (x_2 − x_1, y_2 − y_1)

The angle between the vectors is:

θ = arccos( (l_1 · l_2) / (||l_1|| · ||l_2||) )

When detecting the player's pose, angle feature sets are extracted over multiple time periods, and the variation of the angle feature within a given time period is obtained from them.
finally, the obtained angle characteristic is gatheredAnd inputting the penalty result into a support vector machine, acquiring a penalty prediction model, and labeling the penalty result, wherein the penalty score is 1, and the penalty score is 0.
The foregoing is only a preferred embodiment of the invention. It should be noted that various modifications and adaptations may be made by those skilled in the art without departing from the principles of the invention, and such modifications and adaptations are intended to fall within the scope of the invention.

Claims (5)

1. A penalty hit prediction method based on human body posture estimation, characterized by comprising the following steps:
S1, inputting a player penalty video and preprocessing it frame by frame; the preprocessing comprises image graying (grayscale conversion) and size normalization;
S2, recognizing the player's human body pose in the input video frame by frame;
S2.1, inputting the preprocessed video frame image of size w×h at the input end of the convolutional neural network, and extracting a set of feature maps F from the input image;
step S2.2, taking the feature maps F as the input of two branch networks of the convolutional neural network: one branch predicts the body part confidence maps and the other predicts vector fields representing the position and orientation of the limbs; the body part confidence set is S = (S_1, S_2, S_3, ..., S_I), where S_i ∈ R^(w×h), i ∈ {1, ..., I}, and the position-and-orientation vector field set is L = (L_1, L_2, L_3, ..., L_C), where L_c ∈ R^(w×h×2), c ∈ {1, ..., C}; at the first stage, S^1 = ρ^1(F) and L^1 = φ^1(F), after which the prediction of the previous stage and the original image features F are iterated as the input of the network:

S^t = ρ^t(F, S^(t−1), L^(t−1)), L^t = φ^t(F, S^(t−1), L^(t−1)), t ≥ 2

where t denotes the iteration index; a loss function is set at the end of each iteration; at iteration t, the loss functions are:

f_S^t = Σ_i Σ_P W(P) · ||S_i^t(P) − S_i^*(P)||_2^2
f_L^t = Σ_c Σ_P W(P) · ||L_c^t(P) − L_c^*(P)||_2^2

where W(P) is 0 or 1; when a key point is missing, W(P) is 0 and the loss of the missing point is not computed;
after the iterations finish, the body part joint-point map S and the limb position-and-orientation map L are obtained;
S2.3, calculating the pose distance between adjacent frames for association and tracking: k_1 and k_2 denote the poses of adjacent frames, where the i-th body parts of pose k_1 and pose k_2 are surrounded by bounding boxes B_i^(k_1) and B_i^(k_2) respectively; x_i feature points are extracted from B_i^(k_1) and y_i feature points from B_i^(k_2), and the distance between pose k_1 and pose k_2 is computed from the matching of these feature points over all body parts;
the player in the video is associated and tracked through this inter-frame pose distance calculation;
step S3, screening the body part joint points identified in the step S2.2;
setting a threshold th_b: when the confidence of a body part is below the threshold th_b, the bounding box B_i of that body part joint point is enlarged to one, two and three times its size in turn, each scale is sampled at the ratios tr_1, tr_2, tr_3 respectively, and pose recognition is run again on the multi-scale samples; if the re-recognized body part joint point is still below the threshold, the similarity between the pose of the previous frame and the current pose is calculated,
where m_i denotes the number of feature points in the bounding box of the i-th joint point of the pose in frame g, and n_i denotes the number of feature points matched in the bounding box of the i-th joint point in frame h; when the similarity is above the threshold th_b, the body part joint point of the preceding frame is used as a candidate joint point; if the similarity is below the threshold th_b, the body part joint-point information of this frame is cleared;
step S4, converting the human pose coordinate positions screened in step S3 into angle features by a feature conversion method: the coordinates of the three joint points connecting two limbs are Rt(x_0, y_0), Rb(x_1, y_1), Rm(x_2, y_2), which are converted into vectors:

l_1 = (x_0 − x_1, y_0 − y_1), l_2 = (x_2 − x_1, y_2 − y_1)

the angle between the vectors is:

θ = arccos( (l_1 · l_2) / (||l_1|| · ||l_2||) )

when detecting the player's pose, angle feature sets are extracted over multiple time periods, and the variation of the angle feature within a given time period is obtained from them;
step S5, inputting the angle feature sets obtained in step S4 into a support vector machine to obtain the penalty prediction model, with the penalty results labeled: a made penalty shot is labeled 1 and a miss is labeled 0.
2. The penalty hit prediction method based on human body posture estimation according to claim 1, wherein the threshold th_b in step S3 is set to 0.5.
3. The penalty hit prediction method based on human body posture estimation according to claim 1, wherein tr_1, tr_2, tr_3 in step S3 take the values 0.4, 0.6, 0.8 respectively.
4. The method for predicting the penalty hit based on the human body posture estimation according to claim 1, wherein the initial value of α in step S3 is 0.7.
5. The method for predicting the penalty hit based on the human body posture estimation according to claim 1, wherein the angular feature dimension in the step S4 is 3-dimensional.
CN202110793365.4A 2021-07-14 2021-07-14 Penalty hit prediction method based on human body posture estimation Active CN113673327B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110793365.4A CN113673327B (en) 2021-07-14 2021-07-14 Penalty hit prediction method based on human body posture estimation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110793365.4A CN113673327B (en) 2021-07-14 2021-07-14 Penalty hit prediction method based on human body posture estimation

Publications (2)

Publication Number Publication Date
CN113673327A CN113673327A (en) 2021-11-19
CN113673327B true CN113673327B (en) 2023-08-18

Family

ID=78539264

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110793365.4A Active CN113673327B (en) 2021-07-14 2021-07-14 Penalty hit prediction method based on human body posture estimation

Country Status (1)

Country Link
CN (1) CN113673327B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114973305B (en) * 2021-12-30 2023-03-28 昆明理工大学 Accurate human body analysis method for crowded people

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111177396A (en) * 2019-11-13 2020-05-19 浙江广播电视集团 Automatic analysis and visualization method for basketball event shooting event by combining knowledge map
CN111310659A (en) * 2020-02-14 2020-06-19 福州大学 Human body action recognition method based on enhanced graph convolution neural network
CN112446319A (en) * 2020-11-23 2021-03-05 新华智云科技有限公司 Intelligent analysis system, analysis method and equipment for basketball game
CN112699771A (en) * 2020-12-26 2021-04-23 南京理工大学 Abnormal behavior detection algorithm based on human body posture prediction

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110097639B (en) * 2019-03-18 2023-04-18 北京工业大学 Three-dimensional human body posture estimation method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111177396A (en) * 2019-11-13 2020-05-19 浙江广播电视集团 Automatic analysis and visualization method for basketball event shooting event by combining knowledge map
CN111310659A (en) * 2020-02-14 2020-06-19 福州大学 Human body action recognition method based on enhanced graph convolution neural network
CN112446319A (en) * 2020-11-23 2021-03-05 新华智云科技有限公司 Intelligent analysis system, analysis method and equipment for basketball game
CN112699771A (en) * 2020-12-26 2021-04-23 南京理工大学 Abnormal behavior detection algorithm based on human body posture prediction

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
基于多点标记的罚球出手帧获取算法研究 [Research on a free-throw release-frame acquisition algorithm based on multi-point marking]; 孙锐兴; 包晓敏; 吕文涛; 唐晓彤; 汪亚明; 工业控制计算机 [Industrial Control Computer] (No. 04); full text *

Also Published As

Publication number Publication date
CN113673327A (en) 2021-11-19

Similar Documents

Publication Publication Date Title
CN110472554B (en) Table tennis action recognition method and system based on attitude segmentation and key point features
CN104463250B (en) A kind of Sign Language Recognition interpretation method based on Davinci technology
JP6025845B2 (en) Object posture search apparatus and method
Uddin et al. Human activity recognition using body joint‐angle features and hidden Markov model
JP5845365B2 (en) Improvements in or related to 3D proximity interaction
CN102682302B (en) Human body posture identification method based on multi-characteristic fusion of key frame
Jalal et al. Improved behavior monitoring and classification using cues parameters extraction from camera array images
CN102567703B (en) Hand motion identification information processing method based on classification characteristic
CN108256421A (en) A kind of dynamic gesture sequence real-time identification method, system and device
CN107424161B (en) Coarse-to-fine indoor scene image layout estimation method
CN108491754B (en) Dynamic representation and matching human behavior identification method based on bone features
CN110738154A (en) pedestrian falling detection method based on human body posture estimation
CN110232308A (en) Robot gesture track recognizing method is followed based on what hand speed and track were distributed
Uddin et al. Human Activity Recognition via 3-D joint angle features and Hidden Markov models
CN110895683B (en) Kinect-based single-viewpoint gesture and posture recognition method
Chang et al. The model-based human body motion analysis system
CN106815855A (en) Based on the human body motion tracking method that production and discriminate combine
CN107832736A (en) The recognition methods of real-time body's action and the identification device of real-time body's action
CN106682585A (en) Dynamic gesture identifying method based on kinect 2
CN113516005A (en) Dance action evaluation system based on deep learning and attitude estimation
CN113673327B (en) Penalty hit prediction method based on human body posture estimation
CN111105443A (en) Video group figure motion trajectory tracking method based on feature association
CN111833439A (en) Artificial intelligence-based ammunition throwing analysis and mobile simulation training method
Pang et al. Analysis of computer vision applied in martial arts
Keceli et al. Recognition of basic human actions using depth information

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant