CN111914643A - Human body action recognition method based on skeleton key point detection - Google Patents
Human body action recognition method based on skeleton key point detection
- Publication number
- CN111914643A CN111914643A CN202010614639.4A CN202010614639A CN111914643A CN 111914643 A CN111914643 A CN 111914643A CN 202010614639 A CN202010614639 A CN 202010614639A CN 111914643 A CN111914643 A CN 111914643A
- Authority
- CN
- China
- Prior art keywords
- action
- video segment
- key
- key point
- human body
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
- G06F18/24147—Distances to closest patterns, e.g. nearest neighbour classification
Abstract
The invention discloses a human body action recognition method based on skeleton key point detection, comprising the following steps: acquiring a plurality of video segments of each standard action of a computer vision system and performing data preprocessing on each video segment; extracting key point information from each frame of image of the preprocessed video segment with a key point detection network to form a plurality of key feature vectors of the video segment; establishing the pre-action recognition model corresponding to each video segment according to its plurality of key feature vectors; extracting real-time key point information from the real-time human motion video with the key point detection network to generate a real-time key feature vector; and recognizing the human body action with a human motion recognition algorithm. The method has high recognition precision: the human body action is recognized with a weighted K-nearest neighbor algorithm, and assigning a different weight to each distance point yields more balanced key feature vectors with stronger anti-noise capability, making the final recognition result more accurate.
Description
Technical Field
The invention belongs to the technical field of motion recognition, and particularly relates to a human motion recognition method based on skeleton key point detection.
Background
Human body action recognition has long been a popular research direction in computer vision, artificial intelligence, pattern recognition and related fields, and is widely applied in human-computer interaction, virtual reality, video retrieval, security monitoring and the like. Existing human action recognition methods fall mainly into two groups: methods based on wearable inertial sensors and methods based on computer vision. In the wearable approach, sensors fixed to specific parts of the human body collect the body's action information and transmit it to a computer through a wireless transmission module, after which the data are preprocessed and features are extracted, selected and classified. This approach collects data accurately, but it burdens the wearer, is complex to operate and is difficult to popularize in practice. The current mainstream research direction is computer-vision-based human action recognition, in which a computer processes and analyzes the original images or image sequences captured by a camera to learn and understand the actions and behavior of the people in them.
With the development of depth cameras and human skeleton extraction algorithms, people can conveniently acquire human skeleton joint point information. Since the human body can be regarded as a system constructed by the interconnection of rigid skeletal joint points, motion recognition based on the skeletal joint points of the human body has a significant advantage over image-based motion recognition.
Disclosure of Invention
The invention aims to provide a human body action recognition method based on skeleton key point detection, so as to solve the problems of cumbersome operation and poor portability in existing human body action recognition methods.
The invention adopts the following technical scheme: a human body action recognition method based on skeleton key point detection, implemented according to the following steps:
step 1, acquiring a plurality of video segments of each standard action of a computer vision system, and performing data preprocessing on each video segment;
step 2, extracting key point information from each frame of image of the preprocessed video segment with a key point detection network to form a plurality of key feature vectors of the video segment;
step 3, establishing the pre-action recognition model corresponding to each video segment according to the plurality of key feature vectors of each video segment;
step 4, extracting real-time key point information from the real-time human motion video with the key point detection network to generate a real-time key feature vector;
step 5, recognizing the human body action with a human motion recognition algorithm.
The invention is also characterized in that:
the specific process of data preprocessing on the video frequency band comprises the following steps: when the frame number of each action video segment is larger or smaller than a certain fixed value, the frame numbers of the video segments are unified by adopting a mode of uniformly extracting frames or inserting frames through motion compensation.
The specific process of step 2 is as follows: key point information is extracted from each frame of image of the preprocessed video by a multi-feature fusion algorithm based on a CenterNet network and a stacked Hourglass network; the key point information of each frame of image is turned into a group of key feature vectors, which together form the plurality of key feature vectors of the video segment.
The specific process of generating a group of key feature vectors by using the key point information of each frame of image comprises the following steps:
each frame of image has n key points, whose coordinates form a matrix A with n rows and 2 columns; the eigenvalues λ of the matrix are then solved from the characteristic equation (λE - A)x = 0, and each eigenvalue is substituted back into the equation to solve for the corresponding eigenvector of the matrix, giving a group of key eigenvectors.
The specific steps of the step 3 are as follows:
step 3.1, calculating the mean characteristic vector of each video segment as the identification model of the action by adopting a weighted average method for a plurality of key characteristic vectors of each video segment;
step 3.2, labeling the action corresponding to the recognition model of the action, storing it in a database, and obtaining the pre-action recognition model corresponding to each video segment.
The specific process of step 5 is as follows: based on the real-time key feature vector and the pre-action recognition model corresponding to each video segment, a weighted K-nearest neighbor algorithm calculates the similarity between the real-time key feature vector and each pre-action recognition model; the standard action with the highest similarity is the action in the real-time human motion video.
The specific process of the weighted K-nearest neighbor algorithm is as follows:
calculating the distance between the real-time key feature vector and each mean feature vector, sorting the distances and selecting the first K values, assigning each of the first K values a weight through a Gaussian function of its distance, then calculating the weight sum of each class, and selecting the standard action corresponding to the mean feature vector of the class with the largest weight sum as the final action classification.
The invention has the beneficial effects that:
according to the human body action recognition method based on the skeleton key point detection, the human body key points captured by the camera are detected by using the convolutional neural network model in deep learning to obtain the positions in the image, and compared with wearable equipment and a depth sensor, the human body action recognition method based on the skeleton key point detection is simpler to operate and higher in portability. The precision of the method can be improved by training the model, and the precision required by motion recognition can be achieved.
The invention uses a weighted K-nearest neighbor algorithm to identify the human body action, and obtains more balanced key characteristic vectors with stronger anti-noise capability by endowing different weights to each distance point, so that the final identification result is more accurate.
Drawings
FIG. 1 is a flowchart of a human body motion recognition method according to the present invention;
FIG. 2 is a schematic diagram of the grouping of skeletal joint points in a sample of the inventive action;
FIG. 3 is a flow chart of establishing a pre-action recognition model according to the key point information.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
Key point detection is a new technology. The deep-learning-based human skeleton key point detection method effectively fuses information of different scales and different tasks, so that the way information is fused develops from planar to three-dimensional, which is of practical significance for the development of human skeleton key point detection models.
The invention relates to a human body action recognition method based on skeleton key point detection, which is implemented according to the following steps as shown in figure 1:
the specific process of collecting a plurality of video segments for each standard action is as follows: three different angles of 2 persons doing the same standard action are collected as follows: front, video segment with front 60 degree to left and front 60 degree to right.
The specific process of data preprocessing on the video segment is as follows: when the frame number of an action video segment is greater or less than a fixed value (for example, 10), the frame numbers are unified by uniformly extracting frames or by motion-compensated frame insertion.
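The frame-count unification described above can be sketched as follows. This is a minimal illustration assuming the fixed value is 10; it covers only the uniform frame extraction side, with nearest-frame repetition standing in for the patent's motion-compensated frame insertion, which is considerably more involved:

```python
import numpy as np

def unify_frame_count(frames, target=10):
    """Unify a video segment to `target` frames.

    When the segment has more frames than `target`, evenly spaced frames
    are kept (uniform frame extraction); when it has fewer, the nearest
    frames are repeated as a simple stand-in for motion-compensated
    frame insertion.
    """
    n = len(frames)
    idx = np.linspace(0, n - 1, target).round().astype(int)
    return [frames[i] for i in idx]

segment = list(range(25))           # 25 dummy "frames" labelled by index
print(unify_frame_count(segment))   # [0, 3, 5, 8, 11, 13, 16, 19, 21, 24]
```

Note that both the first and last frame are always kept, so the action's start and end poses survive the downsampling.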
The specific process of generating a group of key feature vectors by using the key point information of each frame of image comprises the following steps:
as shown in fig. 2, 17 human skeletal key point coordinates are collected for each frame in the video segment, following the COCO dataset used; the coordinates of the 17 key points form a matrix A with 17 rows and 2 columns, the eigenvalues λ of the matrix are solved from the characteristic equation (λE - A)x = 0, and each eigenvalue is substituted back into the equation to solve for the corresponding eigenvector of the matrix, giving a group of key eigenvectors.
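A sketch of this step follows. Note that the patent's 17x2 coordinate matrix is not square, so the characteristic equation (λE - A)x = 0 cannot be applied to it directly; as a labeled assumption, this sketch computes the eigenvectors of the 2x2 Gram matrix AᵀA instead, which is one plausible way to make the computation well-defined:

```python
import numpy as np

def keyframe_feature(keypoints):
    """Turn one frame's keypoints into an eigenvector-based feature.

    `keypoints` is an (n, 2) array of (x, y) coordinates (n = 17 for the
    COCO skeleton). Since an n x 2 matrix has no eigenvectors of its own,
    this sketch (an assumption, not the patent's exact computation) uses
    the 2x2 Gram matrix A^T A and concatenates its eigenvectors into one
    key feature vector.
    """
    A = np.asarray(keypoints, dtype=float)
    gram = A.T @ A                             # 2x2 symmetric matrix
    eigvals, eigvecs = np.linalg.eigh(gram)    # eigh: for symmetric input
    return eigvecs.T.reshape(-1)               # stack eigenvectors, shape (4,)

coords = np.random.rand(17, 2)   # 17 COCO keypoints, dummy coordinates
print(keyframe_feature(coords).shape)   # (4,)
```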
step 3.1, calculating the mean characteristic vector of each video segment as the identification model of the action by adopting a weighted average method for a plurality of key characteristic vectors of each video segment;
step 3.2, labeling the action corresponding to the recognition model of the action, storing it in a database, and obtaining the pre-action recognition model corresponding to each video segment.
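Step 3.1 can be sketched as below. The patent does not spell out the weighting scheme of its weighted average, so uniform weights are used as the default and the per-frame `weights` parameter is an assumption:

```python
import numpy as np

def mean_feature_vector(key_feature_vectors, weights=None):
    """Fuse a segment's per-frame key feature vectors into one model vector.

    A hedged sketch of the "weighted average" of step 3.1: with no weights
    given, this reduces to the plain mean of the per-frame vectors.
    """
    X = np.asarray(key_feature_vectors, dtype=float)   # (frames, dim)
    if weights is None:
        weights = np.ones(len(X))
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                    # normalize to sum 1
    return w @ X                                       # weighted mean, (dim,)

frames = [[1.0, 2.0], [3.0, 4.0]]
print(mean_feature_vector(frames))   # [2. 3.]
```

The resulting mean feature vector is what gets labeled and stored in the database as the pre-action recognition model in step 3.2.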
Step 5, recognizing the human body action with a human motion recognition algorithm. The specific process is as follows: based on the real-time key feature vector and the pre-action recognition model corresponding to each video segment, a weighted K-nearest neighbor algorithm calculates the similarity between the real-time key feature vector and each pre-action recognition model; the standard action with the highest similarity is the action in the real-time human motion video.
The specific process of the weighted K-nearest neighbor algorithm is as follows:
calculating the distance between the real-time key feature vector and each mean feature vector, sorting the distances and selecting the first K values, assigning each of the first K values a weight through a Gaussian function of its distance, then calculating the weight sum of each class, and selecting the standard action corresponding to the mean feature vector of the class with the largest weight sum as the final action classification.
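The weighted K-nearest neighbor classification can be sketched as follows. The patent's exact Gaussian function is not reproduced on this page, so the common form w = exp(-d²/(2σ²)) is used here as an assumption:

```python
import numpy as np

def classify_action(realtime_vec, model_vecs, labels, k=3, sigma=1.0):
    """Weighted KNN over stored mean feature vectors.

    Distances from the real-time key feature vector to every stored mean
    feature vector are sorted, the k closest get Gaussian weights
    w = exp(-d^2 / (2*sigma^2)) (an assumed form), and the class with the
    largest weight sum is returned as the final action classification.
    """
    X = np.asarray(model_vecs, dtype=float)
    d = np.linalg.norm(X - np.asarray(realtime_vec, dtype=float), axis=1)
    nearest = np.argsort(d)[:k]                      # indices of k closest
    w = np.exp(-d[nearest] ** 2 / (2 * sigma ** 2))  # Gaussian weighting
    scores = {}
    for i, wi in zip(nearest, w):
        scores[labels[i]] = scores.get(labels[i], 0.0) + wi
    return max(scores, key=scores.get)

models = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]]   # dummy mean feature vectors
names = ["wave", "wave", "squat"]               # their action labels
print(classify_action([0.05, 0.0], models, names, k=3))   # prints "wave"
```

As the text below notes, this weighting keeps weights near 1 for very small distances while staying positive for large ones, so a single distant noisy neighbor cannot flip the result.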
According to the invention, a plurality of video segments are acquired for each standard action, so that the diversity and comprehensiveness of sample data are ensured, the information of each standard action can be better described, and more accurate calculation sample data is provided for a human motion recognition algorithm. The data preprocessing is carried out on each standard action video segment, so that the frame number of each video segment can be standardized, the fairness of sample data is ensured, and the calculation of the mean characteristic vector is facilitated.
Compared with a depth sensor or wearable equipment, the method needs only a single ordinary RGB camera and a computer equipped with the key point detection model to collect key point information; it is inexpensive and highly portable, requires neither complicated camera-angle configuration nor wearing multiple data collection devices, and so eases operation for the user. The deep-learning-based multi-feature fusion algorithm effectively fuses information of different scales and different tasks, so that the way information is fused develops from planar to three-dimensional; the precision of the detection model can be improved with extensive training until it reaches the precision required for human body action recognition.
Compared with the ordinary K-nearest neighbor algorithm, the method takes into account that feature data are not of equal status: the closer the distance, the higher the degree of similarity, so feature data at different distances are given different weights, which effectively improves the similarity measurement. Weighting with a Gaussian function also means that when the distance is small the weight tends to 1, and when the distance is large the weight still remains positive, reducing the influence of noisy data on the final classification result.
According to the method, the frame numbers of preprocessed video segments are unified by uniform frame extraction or motion-compensated frame insertion; key point information of each frame in a video segment is then extracted to form key feature vectors, and a weighted average over the key feature vectors of the segment constructs a mean feature vector that serves as the recognition model of the corresponding action. The human key points captured by the camera are detected with a convolutional neural network model from deep learning to obtain their positions in the image; compared with wearable equipment and depth sensors, the method is simpler to operate and more portable. Its precision can be improved by training the model until it reaches the precision required for action recognition. The human body action is recognized with a weighted K-nearest neighbor algorithm; by assigning a different weight to each distance point, more balanced key feature vectors with stronger anti-noise capability are obtained, making the final recognition result more accurate.
Claims (7)
1. A human body motion identification method based on skeleton key point detection is characterized by comprising the following steps:
step 1, acquiring a plurality of video segments of each standard action of a computer vision system, and performing data preprocessing on each video segment;
step 2, extracting key point information of each frame of image of the video segment after the data preprocessing by adopting a key point detection network to form a plurality of key characteristic vectors of the video segment;
step 3, establishing a corresponding pre-action recognition model of each video segment according to the plurality of key feature vectors of each video segment;
step 4, extracting real-time key point information from the real-time human motion video by using a key point detection network, and generating a real-time key feature vector;
step 5, recognizing the human body action with a human motion recognition algorithm.
2. The method according to claim 1, wherein the specific process of preprocessing the video segment data is as follows: when the frame number of each action video segment is larger or smaller than a certain fixed value, the frame numbers of the video segments are unified by adopting a mode of uniformly extracting frames or inserting frames through motion compensation.
3. The human body motion recognition method based on bone key point detection as claimed in claim 1, wherein the specific process of the step 2 is as follows: extracting key point information from each frame of image of the preprocessed video by a multi-feature fusion algorithm based on a CenterNet network and a stacked Hourglass network, generating a group of key feature vectors from the key point information of each frame of image, and forming a plurality of key feature vectors of the video segment.
4. The human body motion recognition method based on skeleton key point detection as claimed in claim 3, wherein the specific process of generating a group of key feature vectors from the key point information of each frame of image is as follows:
each frame of image has n key points, whose coordinates form a matrix A with n rows and 2 columns; the eigenvalues λ of the matrix are then solved from the characteristic equation (λE - A)x = 0, and each eigenvalue is substituted back into the equation to solve for the corresponding eigenvector of the matrix, giving a group of key eigenvectors.
5. The human body motion recognition method based on bone key point detection as claimed in claim 1, wherein the specific steps of the step 3 are as follows:
step 3.1, calculating the mean characteristic vector of each video segment as the identification model of the action by adopting a weighted average method for a plurality of key characteristic vectors of each video segment;
step 3.2, labeling the action corresponding to the recognition model of the action, storing it in a database, and obtaining the pre-action recognition model corresponding to each video segment.
6. The human body motion recognition method based on the bone key point detection as claimed in claim 1, wherein the specific process of step 5 is as follows: and respectively calculating the similarity of the real-time key feature vector and the pre-action recognition model corresponding to each video segment by using a weighted K-nearest neighbor algorithm based on the real-time key feature vector and the pre-action recognition model corresponding to each video segment, wherein the standard action corresponding to the highest similarity is the action corresponding to the real-time human motion video.
7. The human motion recognition method based on bone key point detection as claimed in claim 6, wherein the weighted K-nearest neighbor algorithm comprises the specific processes of:
calculating the distance between the real-time key feature vector and each mean feature vector, sorting the distances and selecting the first K values, assigning each of the first K values a weight through a Gaussian function of its distance, then calculating the weight sum of each class, and selecting the standard action corresponding to the mean feature vector of the class with the largest weight sum as the final action classification.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010614639.4A CN111914643A (en) | 2020-06-30 | 2020-06-30 | Human body action recognition method based on skeleton key point detection |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111914643A true CN111914643A (en) | 2020-11-10 |
Family
ID=73226940
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010614639.4A Pending CN111914643A (en) | 2020-06-30 | 2020-06-30 | Human body action recognition method based on skeleton key point detection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111914643A (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108985259A (en) * | 2018-08-03 | 2018-12-11 | 百度在线网络技术(北京)有限公司 | Human motion recognition method and device |
EP3605394A1 (en) * | 2018-08-03 | 2020-02-05 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method and apparatus for recognizing body movement |
WO2020107833A1 (en) * | 2018-11-26 | 2020-06-04 | 平安科技(深圳)有限公司 | Skeleton-based behavior detection method, terminal device, and computer storage medium |
CN110096950A (en) * | 2019-03-20 | 2019-08-06 | 西北大学 | A kind of multiple features fusion Activity recognition method based on key frame |
Non-Patent Citations (2)
Title |
---|
- DANG Hongshe; HOU Jinliang; QIANG Hua; ZHANG Mengteng: "Research on human action recognition algorithms based on Kinect", Electronic Devices (电子器件), no. 05 *
- ZHANG Congcong; HE Ning: "Human action recognition method based on key-frame two-stream convolutional network", Journal of Nanjing University of Information Science & Technology (Natural Science Edition), no. 06 *
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114783045A (en) * | 2021-01-06 | 2022-07-22 | 北京航空航天大学 | Virtual reality-based motion training detection method, device, equipment and medium |
CN113094547A (en) * | 2021-04-06 | 2021-07-09 | 大连理工大学 | Method for searching specific action video clip in Japanese online video corpus |
CN113094547B (en) * | 2021-04-06 | 2022-01-18 | 大连理工大学 | Method for searching specific action video clip in Japanese online video corpus |
CN113378758A (en) * | 2021-06-24 | 2021-09-10 | 首都师范大学 | Action recognition method and device, electronic equipment and storage medium |
CN113409374A (en) * | 2021-07-12 | 2021-09-17 | 东南大学 | Character video alignment method based on motion registration |
CN113409374B (en) * | 2021-07-12 | 2024-05-10 | 东南大学 | Character video alignment method based on action registration |
CN113792595A (en) * | 2021-08-10 | 2021-12-14 | 北京爱笔科技有限公司 | Target behavior detection method and device, computer equipment and storage medium |
CN114334084A (en) * | 2022-03-01 | 2022-04-12 | 深圳市海清视讯科技有限公司 | Body-building data processing method, device, equipment and storage medium |
CN115412188A (en) * | 2022-08-26 | 2022-11-29 | 福州大学 | Power distribution station room operation behavior identification method based on wireless sensing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
AD01 | Patent right deemed abandoned |
Effective date of abandoning: 20240920 |