CN117333949A - Method for identifying limb actions based on video dynamic analysis - Google Patents
- Publication number
- CN117333949A CN117333949A CN202311545619.6A CN202311545619A CN117333949A CN 117333949 A CN117333949 A CN 117333949A CN 202311545619 A CN202311545619 A CN 202311545619A CN 117333949 A CN117333949 A CN 117333949A
- Authority
- CN
- China
- Prior art keywords
- video
- actions
- training
- extracting
- human body
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/23—Recognition of whole body movements, e.g. for sport training
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
Abstract
The invention provides a method for identifying limb actions based on video dynamic analysis, which relates to the technical field of artificial intelligence and comprises the following steps: collecting video streams by using camera equipment, setting behavior labels to classify each type of behavior video, framing the video into different action combinations, and extracting basic identification features of the complete actions; preprocessing the data, extracting key point coordinate data of pedestrians from the acquired pictures, extracting the gesture information of persons under different classifications as action features, and then integrating the features. By means of video recognition training and an artificial intelligence image algorithm model that grasps and analyzes the action structure of the human body in real time, the invention realizes intelligent recognition, scoring, prompting and intelligent assessment of training actions. Specifically, training pictures of a participant are acquired in real time through terminal camera equipment, various training actions are identified through analysis of the images, their conformance to the standard is evaluated, and the subjectivity and time cost of manual judgment are reduced.
Description
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a method for identifying limb actions based on video dynamic analysis.
Background
Under the strong support of national strategic planning and matching policy measures at all levels, the artificial intelligence industry has developed rapidly. Application fields based on deep learning, such as image recognition, speech processing and natural language processing, are accelerating toward industrialization, and related equipment and technical services continue to emerge and mature, becoming a new engine driving the transformation of many industries. Applying video training and video analysis technology to the training process can replace the traditional visual analysis method and improve the training level;
in the prior art, the evaluation and assessment of human body actions such as push-ups, sit-ups, horizontal bar pull-ups, horizontal bar abdomen rolling, parallel bar arm bending and stretching, parallel bar swing arm bending and stretching, and 30x2 snake running require manual judgment and scoring. This approach carries a certain subjectivity, is not objective enough, and consumes a great deal of time, manpower and material resources. The invention therefore provides a method for identifying limb actions based on video dynamic analysis to solve the problems in the prior art.
Disclosure of Invention
Aiming at the problems, the invention provides a method for identifying limb actions based on video dynamic analysis, which collects training pictures of a participant in real time through a terminal camera device, identifies various training actions through analysis of images, intelligently evaluates the standardization of the actions, realizes intelligent supervision of the training actions, and reduces subjectivity and time cost of manual judgment.
In order to achieve the purpose of the invention, the invention is realized by the following technical scheme: a method for identifying limb movements based on video dynamic analysis, comprising the steps of:
s1: collecting video streams by using camera equipment, setting behavior labels to classify each type of behavior video, framing the video into different action combinations, and extracting basic identification features of the complete actions;
s2: preprocessing the data, extracting key point coordinate data of pedestrians from the acquired pictures, extracting the gesture information of persons under different classifications as action features, and then integrating the features;
s3: human body detection is carried out through a convolutional neural network CNN two-stage model;
s4: after detecting a human body, detecting key joint points, extracting the gesture information of people under different classifications in the data set as action features, integrating them, and constructing a training mode;
s5: tracking the positions of the key joint points among different video frames so as to grasp the continuity and change of the limb actions;
s6: and analyzing the movement and the relative position of the key joint points, predicting the positions and the association relation of the key points, and recovering the posture of the human body, thereby summarizing the training actions.
The further improvement is that: the step S1 comprises the following steps:
s11: collecting video streams by using camera equipment;
s12: classifying each type of behavior in the collected video by setting behavior labels of various actions, and framing the video into a plurality of pictures;
s13: different actions are formed by combining different pictures, gesture features in the actions are extracted by using OpenPose to serve as the basic recognition features of the complete actions, and the information in the actions is integrated into a txt file.
The further improvement is that: the step S2 comprises the following steps:
s21: preprocessing the collected video image, including denoising, image enhancement and image stabilization;
s22: putting the acquired pictures into an OpenPose skeleton extraction network, extracting the key point coordinate data of pedestrians, extracting the gesture information of persons under different classifications as action features, and storing the gesture information as corresponding TXT documents;
s23: and integrating the extracted characteristic information and the corresponding picture in one TXT file in a corresponding way, removing useless redundant data sets, and taking the integrated TXT information as an input label csv file and an output label csv file respectively.
The further improvement is that: the step S3 comprises the following steps:
s31: detecting and locating the presence of a human body in the video image;
s32: human body detection is carried out through a convolutional neural network CNN two-stage model, wherein the two-stage model comprises an R-CNN model and a Mask R-CNN model;
s33: firstly, carrying out region extraction, performing classification and regression prediction on each region, and then obtaining the final detection result through non-maximum suppression.
The further improvement is that: in the step S4, the action node detection includes the following steps:
s411: after detecting the human body, detecting a key joint point;
s412: using OpenPose to traverse the gesture information of persons under different classifications in the data set, extracting the gesture information as action features and storing them as corresponding TXT documents;
s413: the extracted feature information and the corresponding pictures are correspondingly integrated into one TXT file, useless redundant data sets are removed, and the integrated TXT information is used as the input label csv file and the output label csv file respectively.
The further improvement is that: in the step S4, constructing the training pattern includes the following steps:
s421: taking the human skeleton information read from the input csv as input and the csv file read from the output label as output, dividing the data into a training set and a verification set at a ratio of 0.3, and converting them into numpy matrices to participate in the operation;
s422: testing model effects according to decision trees, random forests, neural networks and support vector machines;
s423: evaluation was performed based on confusion matrix, precision, recall, and F1 score.
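For illustration, the evaluation in s423 can be sketched as a self-contained computation of the confusion-matrix counts, precision, recall and F1 score; the binary labels "ok"/"bad" are assumptions standing in for the actual action classes:

```python
# Illustrative sketch of the s423 evaluation: confusion-matrix counts,
# precision, recall and F1 for a binary classifier. The label names
# "ok"/"bad" are assumptions for the example, not from the patent.

def confusion(y_true, y_pred, positive):
    """Return (tp, fp, fn, tn) for the given positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    tn = len(y_true) - tp - fp - fn
    return tp, fp, fn, tn

def prf1(y_true, y_pred, positive):
    """Precision, recall and F1 derived from the confusion counts."""
    tp, fp, fn, _ = confusion(y_true, y_pred, positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

y_true = ["ok", "ok", "bad", "ok", "bad", "bad"]
y_pred = ["ok", "bad", "bad", "ok", "ok", "bad"]
p, r, f = prf1(y_true, y_pred, "ok")
print(round(p, 3), round(r, 3), round(f, 3))  # -> 0.667 0.667 0.667
```

In practice a metrics library would be used; the point here is only how the three scores follow from the confusion-matrix counts.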
The further improvement is that: the step S5 comprises the following steps:
s51: tracking the positions of key nodes between different video frames to know the continuity and change of limb actions;
s52: in the training process, images with key point marks are used for supervised learning, and the connection relation, namely the association, between different key points is predicted;
s53: by predicting the association relationship, the structure of the human body posture is understood, so that tracking is performed.
The further improvement is that: in the step S6, summarizing the training actions includes: and identifying the training actions, scoring the action standards, prompting and checking.
The beneficial effects of the invention are as follows:
1. Through video recognition training, the invention grasps and analyzes the action structure of the human body in real time by means of an artificial intelligence image algorithm model, realizing intelligent recognition of training actions, scoring of action standardization, prompting, intelligent assessment and other functions. Specifically, training pictures of the trainee are acquired in real time through terminal camera equipment, various training actions are identified through analysis of the images, the standardization of the actions is intelligently evaluated, intelligent supervision of the training actions is realized, and the subjectivity and time cost of manual judgment are reduced.
2. The invention classifies each class of behavior video by setting behavior labels, thereby collecting real training scene data, optimizing the training algorithm model and improving the accuracy of algorithm identification.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
The present invention will be further described in detail with reference to the following examples, which are only for the purpose of illustrating the invention and are not to be construed as limiting the scope of the invention.
Example 1
According to fig. 1, the present embodiment provides a method for identifying limb actions based on video dynamic analysis, which includes the following steps:
s1: collecting video streams by using camera equipment, setting behavior labels to classify each type of behavior video, framing the video into different action combinations, and extracting basic identification features of the complete actions;
s2: preprocessing the data, extracting key point coordinate data of pedestrians from the acquired pictures, extracting the gesture information of persons under different classifications as action features, and then integrating the features;
s3: human body detection is carried out through a convolutional neural network CNN two-stage model;
s4: after detecting a human body, detecting key joint points, extracting the gesture information of people under different classifications in the data set as action features, integrating them, and constructing a training mode;
s5: tracking the positions of the key joint points among different video frames so as to grasp the continuity and change of the limb actions;
s6: and analyzing the movement and the relative position of the key joint points, predicting the positions and the association relation of the key points, and recovering the posture of the human body, thereby summarizing the training actions.
The method supports structural analysis of the video pictures. Through the analysis it can obtain information such as who the trainee is, what the training content is, which actions do not conform to the standard, and the number of repetitions, and superimpose this information dynamically on the pictures. It can also calculate the participation rate from the number of recognized trainees, give prompts, and record archives to support situation display with data.
Example two
According to fig. 1, the present embodiment provides a method for identifying limb actions based on video dynamic analysis, which includes the following steps:
and (3) video acquisition: first, a video stream needs to be acquired using a camera device (e.g., a video camera or a video camera). By setting running, jumping, sitting down, squatting, kicking, boxing, waving and the like as behavior labels, classifying each type of behavior video, framing the video into a plurality of pictures, combining different pictures to form different actions, extracting the gesture characteristics from the gesture characteristics by using openpost as basic identification characteristics of complete actions, and integrating the information into txt files. Wherein the extracted parts include a nose, a neck, left and right shoulders, left and right wrists, left and right knees, and the like, respectively.
Data processing: the captured video images typically require pre-processing, including denoising, image enhancement and image stabilization, to improve the accuracy of subsequent analysis. The first step of data processing is to put the acquired pictures into the OpenPose skeleton extraction network, extract the key point coordinate data of pedestrians, and extract the gesture information of persons under different classifications as action features, stored as corresponding TXT documents. Feature integration is then carried out: the extracted feature information and the corresponding pictures are correspondingly integrated into one TXT file, and useless redundant data are removed. Finally, the integrated TXT information is used as the input label csv file and the output label csv file respectively.
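As a toy illustration of the denoising step, a 3x3 mean filter over a grayscale frame can be written as follows; real pipelines would rely on an image-processing library, and this sketch only shows the principle:

```python
# Minimal denoising sketch: a 3x3 mean filter over a grayscale frame,
# standing in for the preprocessing step described above.

def mean_filter(img):
    """img: 2-D list of pixel values; returns a smoothed copy."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Average over the 3x3 neighborhood, clipped at the borders.
            vals = [img[ny][nx]
                    for ny in range(max(0, y - 1), min(h, y + 2))
                    for nx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) / len(vals)
    return out

noisy = [[0, 0, 0],
         [0, 9, 0],
         [0, 0, 0]]
smooth = mean_filter(noisy)
print(round(smooth[1][1], 2))  # the isolated spike is averaged down -> 1.0
```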
Human body detection: in the video images, the presence of a human body must be detected and located. Human body detection is performed through a convolutional neural network (CNN) two-stage model, which usually adopts R-CNN, Mask R-CNN or similar models. The two-stage model first carries out region extraction, performs classification and regression prediction on each region, and then obtains the final detection result through non-maximum suppression.
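The non-maximum suppression step mentioned above can be sketched as follows, assuming axis-aligned boxes (x1, y1, x2, y2) with one score per detection and an assumed IoU threshold of 0.5:

```python
# Minimal non-maximum suppression sketch for the detection step above.
# Box format (x1, y1, x2, y2) and the 0.5 threshold are assumptions.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    """Keep the highest-scoring box, drop overlapping ones, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep

# Two heavily overlapping person detections plus one separate person:
boxes = [(0, 0, 10, 20), (1, 1, 11, 21), (50, 0, 60, 20)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # -> [0, 2]
```

The duplicate of the first person (index 1) is suppressed because its overlap with the higher-scoring box exceeds the threshold.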
Action joint point detection: once a human body is detected, the next step is to detect the key joints, such as the nose, neck, left and right shoulders, left and right wrists, and left and right knees. OpenPose is used to traverse the data set and extract the gesture information of persons under different classifications as action features, stored as corresponding TXT documents. Feature integration is then carried out: the extracted feature information and the corresponding pictures are correspondingly integrated into one TXT file, and useless redundant data are removed. Finally, the integrated TXT information is used as the input label csv file and the output label csv file respectively.
Human skeleton information read from the input csv is taken as input and the csv file read from the output label as output; the data are divided into a training set and a verification set at a ratio of 0.3 and converted into numpy matrices to participate in the operation. Model classifiers are selected from decision trees, random forests, neural networks and support vector machines to test the model effect. The model is evaluated mainly according to the confusion matrix, precision, recall and F1 score.
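The 0.3 split into numpy matrices described above might be sketched like this; the feature width of 16 (eight keypoints times two coordinates), the random data and the fixed seed are illustrative assumptions:

```python
# Sketch of the 0.3 training/verification split into numpy matrices.
# The feature width and the synthetic data are assumptions for the demo.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((10, 16))         # skeleton features read from the input csv
y = rng.integers(0, 2, size=10)  # action labels read from the output csv

def split(X, y, val_ratio=0.3, seed=0):
    """Shuffle the sample indices, then hold out val_ratio for verification."""
    idx = np.random.default_rng(seed).permutation(len(X))
    n_val = int(len(X) * val_ratio)
    val, train = idx[:n_val], idx[n_val:]
    return X[train], y[train], X[val], y[val]

X_tr, y_tr, X_val, y_val = split(X, y)
print(X_tr.shape, X_val.shape)  # -> (7, 16) (3, 16)
```

The resulting matrices could then be fed to whichever classifier (decision tree, random forest, neural network or SVM) is under test.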
Video motion tracking: the positions of the key joint points are tracked between different video frames so as to grasp the continuity and change of the limb actions, and a large number of images with key point annotations are used for supervised learning in the training process. The connection relationships, i.e., the associations, between different key points are predicted. By predicting these associations, the algorithm can better understand the structure of human body postures for tracking.
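A greatly simplified stand-in for the tracking described above is nearest-neighbor matching of keypoints between consecutive frames; the learned association prediction in the text would replace the plain geometric distance used here:

```python
# Simplified keypoint tracking sketch: greedy nearest-neighbor matching
# between consecutive frames, as a stand-in for learned associations.

def track(prev_points, curr_points):
    """Match each previous keypoint to its nearest free current keypoint.
    Returns a list of (prev_index, curr_index) pairs."""
    matches = []
    taken = set()
    for i, (px, py) in enumerate(prev_points):
        best_j, best_d = None, float("inf")
        for j, (cx, cy) in enumerate(curr_points):
            if j in taken:
                continue
            d = (px - cx) ** 2 + (py - cy) ** 2  # squared distance suffices
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            taken.add(best_j)
            matches.append((i, best_j))
    return matches

# Two joints move slightly between frames; identity is preserved even
# though the detector reported them in a different order:
frame_a = [(10.0, 10.0), (40.0, 40.0)]
frame_b = [(41.0, 39.0), (11.0, 11.0)]
print(track(frame_a, frame_b))  # -> [(0, 1), (1, 0)]
```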
Action recognition: by analyzing the movement and relative positions of the key joint points, the positions and association relations of the key points are predicted and the posture of the human body is restored, realizing the functions of intelligent recognition of training actions, scoring of action standardization, prompting, intelligent assessment and the like.
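As one hypothetical example of judging an action from the recovered posture, the elbow angle (shoulder-elbow-wrist) can decide the "down" phase of a push-up; the 90-degree threshold is an assumption for illustration, not a value from the patent:

```python
# Hypothetical posture rule: the elbow angle decides the "down" phase
# of a push-up. The 90-degree threshold is an assumed standard.
import math

def angle(a, b, c):
    """Angle at point b (degrees) formed by segments b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))

shoulder, elbow, wrist = (0.0, 0.0), (1.0, 1.0), (2.0, 0.0)
theta = angle(shoulder, elbow, wrist)
print(round(theta))  # -> 90
is_down = theta <= 90.0 + 1e-9  # tolerance for float rounding
```

The same angle rule, applied per frame, would let the system count repetitions and flag incomplete ones.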
According to the method for identifying limb actions based on video dynamic analysis, through video recognition training, the action structure of the human body is grasped and analyzed in real time by means of an artificial intelligence image algorithm model, realizing the functions of intelligent recognition of training actions, scoring of action standardization, prompting, intelligent assessment and the like. Specifically, training pictures of a participant are acquired in real time through terminal camera equipment; training actions such as push-ups, sit-ups, horizontal bar pull-ups, horizontal bar abdomen rolling, parallel bar arm bending and stretching, parallel bar swing arm bending and stretching, and 30x2 snake running are identified through analysis of the images; the standardization of the actions is intelligently evaluated; intelligent supervision of the training actions is realized; and the subjectivity and time cost of manual judgment are reduced. In addition, the invention sets behavior labels to classify each class of behavior video, thereby collecting real training scene data, optimizing the training algorithm model and improving the accuracy of algorithm identification.
The foregoing has shown and described the basic principles, principal features and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, and that the above embodiments and descriptions are merely illustrative of the principles of the present invention, and various changes and modifications may be made without departing from the spirit and scope of the invention, which is defined in the appended claims. The scope of the invention is defined by the appended claims and equivalents thereof.
Claims (8)
1. A method for identifying limb movements based on video dynamic analysis, comprising the steps of:
s1: collecting video streams by using camera equipment, setting behavior labels to divide each type of behavior video, framing into different action combinations, and extracting basic identification features serving as complete actions;
s2: preprocessing data, extracting key point coordinate data of pedestrians in the acquired pictures, extracting gesture information of characters under different classifications as action characteristics, and then integrating the characteristics;
s3: human body detection is carried out through a convolutional neural network CNN two-stage model;
s4: after detecting a human body, detecting key joint points, extracting the gesture information of people under different classifications in the data set as action features, integrating them, and constructing a training mode;
s5: tracking the positions of the key joint points among different video frames so as to grasp the continuity and change of the limb actions;
s6: and analyzing the movement and the relative position of the key joint points, predicting the positions and the association relation of the key points, and recovering the posture of the human body, thereby summarizing the training actions.
2. A method of identifying limb movements based on video dynamics analysis according to claim 1, wherein: the step S1 comprises the following steps:
s11: collecting video streams by using camera equipment;
s12: classifying each type of behavior in the collected video by setting behavior labels of various actions, and framing the video into a plurality of pictures;
s13: different actions are formed by combining different pictures, gesture features in the actions are extracted by using OpenPose to serve as the basic recognition features of the complete actions, and the information in the actions is integrated into a txt file.
3. A method of identifying limb movements based on video dynamics analysis according to claim 2, wherein: the step S2 comprises the following steps:
s21: preprocessing the collected video image, including denoising, image enhancement and image stabilization;
s22: putting the acquired pictures into an OpenPose skeleton extraction network, extracting the key point coordinate data of pedestrians, extracting the gesture information of persons under different classifications as action features, and storing the gesture information as corresponding TXT documents;
s23: and integrating the extracted characteristic information and the corresponding picture in one TXT file in a corresponding way, removing useless redundant data sets, and taking the integrated TXT information as an input label csv file and an output label csv file respectively.
4. A method of identifying limb movements based on video dynamics analysis according to claim 3, wherein: the step S3 comprises the following steps:
s31: detecting and locating the presence of a human body in the video image;
s32: human body detection is carried out through a convolutional neural network CNN two-stage model, wherein the two-stage model comprises an R-CNN model and a Mask R-CNN model;
s33: firstly, carrying out region extraction, performing classification and regression prediction on each region, and then obtaining the final detection result through non-maximum suppression.
5. The method for identifying limb movements based on video dynamic analysis of claim 4, wherein: in the step S4, the action node detection includes the following steps:
s411: after detecting the human body, detecting a key joint point;
s412: using OpenPose to traverse the gesture information of persons under different classifications in the data set, extracting the gesture information as action features and storing them as corresponding TXT documents;
s413: and integrating the extracted characteristic information and the corresponding picture in one TXT file in a corresponding way, removing useless redundant data sets, and taking the integrated TXT information as an input label csv file and an output label csv file respectively.
6. The method for identifying limb movements based on video dynamic analysis according to claim 5, wherein: in the step S4, constructing the training pattern includes the following steps:
s421: taking human skeleton information read from an input end csv as input, outputting a csv file read by a label as output, dividing a training set and a verification set according to a proportion of 0.3, and converting the training set and the verification set into a numpy matrix to participate in operation;
s422: testing model effects according to decision trees, random forests, neural networks and support vector machines;
s423: evaluation was performed based on confusion matrix, precision, recall, and F1 score.
7. The method for identifying limb movements based on video dynamic analysis of claim 6, wherein: the step S5 comprises the following steps:
s51: tracking the positions of key nodes between different video frames to know the continuity and change of limb actions;
s52: in the training process, images with key point marks are used for supervised learning, and the connection relation, namely the association, between different key points is predicted;
s53: by predicting the association relationship, the structure of the human body posture is understood, so that tracking is performed.
8. The method for identifying limb movements based on video dynamic analysis of claim 7, wherein: in the step S6, summarizing the training actions includes: and identifying the training actions, scoring the action standards, prompting and checking.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202311545619.6A | 2023-11-20 | 2023-11-20 | Method for identifying limb actions based on video dynamic analysis
Publications (1)
Publication Number | Publication Date |
---|---|
CN117333949A | 2024-01-02
Family
ID=89295737
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311545619.6A Pending CN117333949A (en) | 2023-11-20 | 2023-11-20 | Method for identifying limb actions based on video dynamic analysis |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117333949A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118470472A (en) * | 2024-07-09 | 2024-08-09 | 杭州申邦科技有限公司 | Scoring method based on image recognition return data |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |