CN117173792A - Multi-person gait recognition system based on three-dimensional human skeleton - Google Patents
Multi-person gait recognition system based on three-dimensional human skeleton
- Publication number
- CN117173792A (publication of application CN202311379123.6A)
- Authority
- CN
- China
- Prior art keywords
- gait
- skeleton
- dimensional
- human skeleton
- sequence
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Analysis (AREA)
Abstract
The application discloses a multi-person gait recognition system based on a three-dimensional human skeleton, comprising the following steps: gait video sequence acquisition, in which a camera captures RGB video of walking pedestrians to obtain an original RGB gait video sequence; pedestrian detection and tracking, using the object detection model YOLOv8 as the detector; pedestrian skeleton extraction, in which a 2D pose estimation model performs 2D pose estimation on each frame of every tracked pedestrian to obtain 2D human skeleton keypoints; conversion of the two-dimensional skeleton into a three-dimensional skeleton; gait feature extraction, in which the 3D skeleton keypoint sequence is used to train a graph convolutional network and the trained model extracts gait features from the human skeleton keypoints; and gait feature matching, in which the extracted pedestrian gait features are compared by similarity measurement against the feature vectors of registered pedestrians in a gallery to complete matching. The system improves robustness to interference such as changes of clothing and carried objects, and addresses gait recognition under viewpoint changes.
Description
Technical Field
The application relates to the technical field of gait recognition, and in particular to a multi-person gait recognition system based on a three-dimensional human skeleton.
Background
Gait recognition technology verifies an individual's identity from the physiological and behavioural characteristics in video of a walking pedestrian. Compared with other biometric technologies such as face, fingerprint and iris recognition, gait can be captured at a distance without contact or cooperation; as a motion characteristic it is difficult to disguise or forge, so it is robust to covariates commonly associated with a subject, such as clothing, carried objects and stance. These advantages make gait recognition suitable for public-safety applications such as criminal investigation and suspect tracking. With the explosive development of deep learning, gait recognition has made significant progress over the past decade; however, many experimental results show that most existing techniques perform poorly in the field, a performance gap mainly caused by complex occlusion, background changes, illumination changes and the like.
Prior art one: existing gait recognition mainly uses silhouette-based feature extraction. Silhouette-based methods aim to extract gait features from human appearance: the gait silhouette is usually obtained by separating the human mask from the background of the original RGB video with background subtraction or a segmentation algorithm, after which deep learning methods extract gait features from the silhouette sequence. This approach depends on silhouette extraction, and in real scenes factors such as static occlusion by road-side objects (vehicles, trees), clothing occlusion and carried-object occlusion leave the extracted silhouettes incomplete over large areas, so the performance of silhouette-based gait feature extraction degrades sharply and recognition accuracy drops.
Prior art two: human-skeleton-based gait recognition extracts gait features by modelling the human body. A human pose estimation algorithm first extracts body keypoints from the original RGB video to obtain a human skeleton graph, and deep learning methods then extract gait features from that graph. Existing skeleton-based methods extract gait features from two-dimensional poses, but 2D poses are not accurate enough: lying in a two-dimensional space, they lack depth and scale information, which makes them sensitive to viewpoint and poorly resistant to viewpoint-change interference.
Disclosure of Invention
(one) solving the technical problems
Aiming at the defects of the prior art, the application provides a multi-person gait recognition system based on a three-dimensional human skeleton, which improves robustness to interference such as changes of clothing and carried objects, and addresses gait recognition under viewpoint changes.
(II) technical scheme
In order to achieve the above purpose, the present application provides the following technical solution: a multi-person gait recognition system based on a three-dimensional human skeleton, comprising the following steps:
S1, gait video sequence acquisition: capture RGB video of walking pedestrians with a camera to obtain an original RGB gait video sequence;
S2, pedestrian detection and tracking: use the object detection model YOLOv8 as the detector;
S3, pedestrian skeleton extraction: perform 2D pose estimation with a 2D pose estimation model on each frame of every tracked pedestrian to obtain 2D human skeleton keypoints;
S4, convert the two-dimensional skeleton into a three-dimensional skeleton;
S5, gait feature extraction: train a graph convolutional network on the 3D skeleton keypoint sequence, and extract gait features from the human skeleton keypoints with the trained model;
S6, gait feature matching: measure the similarity between the extracted pedestrian gait features and the feature vectors of registered pedestrians in the gallery to complete matching.
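The six steps above can be sketched end to end as follows. This is a minimal illustration, not the patented implementation: the stage functions (`detect_and_track`, `estimate_2d_pose`, `lift_to_3d`, `extract_gait_feature`) are hypothetical stubs standing in for YOLOv8 + ByteTrack, HRNet, the lifting TCN and the ST-GCN respectively, and only the data flow between them is taken from the text.

```python
import numpy as np

# Hypothetical stage stubs -- in a real system these would wrap YOLOv8,
# ByteTrack, HRNet, a lifting TCN, and an ST-GCN respectively.
def detect_and_track(frames):
    # one track: list of per-frame bounding boxes (x, y, w, h)
    return {0: [(10, 20, 50, 120) for _ in frames]}

def estimate_2d_pose(frames, boxes):
    # 17 COCO-style keypoints per frame, (x, y)
    return np.zeros((len(frames), 17, 2))

def lift_to_3d(seq_2d):
    # append a depth coordinate per keypoint
    return np.concatenate([seq_2d, np.zeros(seq_2d.shape[:2] + (1,))], axis=-1)

def extract_gait_feature(seq_3d):
    # fixed-length embedding; here just a toy temporal average
    return seq_3d.mean(axis=0).ravel()

def recognize(frames, gallery):
    """gallery: {name: feature vector} of registered pedestrians."""
    results = {}
    for tid, boxes in detect_and_track(frames).items():
        seq_2d = estimate_2d_pose(frames, boxes)        # S3
        seq_3d = lift_to_3d(seq_2d)                     # S4
        feat = extract_gait_feature(seq_3d)             # S5
        # S6: nearest registered identity by Euclidean distance
        name = min(gallery, key=lambda k: np.linalg.norm(gallery[k] - feat))
        results[tid] = name
    return results
```

Each stub would be replaced by the corresponding trained model; the orchestration itself is the part the steps S1–S6 describe.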
Preferably, in S2 the real-time multi-object tracking algorithm ByteTrack tracks the detected pedestrians. The detection boxes are first split into a high-score group and a low-score group according to detection score, and two rounds of matching are performed:
in the first round, the high-score boxes are matched to existing trajectories by motion similarity, with a Kalman filter predicting each trajectory's position in the next frame;
in the second round, the low-score boxes are matched to the trajectories left unmatched, which avoids losing targets under occlusion;
finally, trajectories that remain unmatched after both rounds are kept alive for a period of time in case their target reappears, and a new trajectory is created for each unmatched high-score detection box.
Preferably, the pose estimation network used in S3 is HRNet: it starts from a high-resolution sub-network as the first stage, gradually adds sub-networks from high to low resolution, connects the multi-resolution sub-networks in parallel, and performs repeated multi-scale fusion, so HRNet maintains a high-resolution representation throughout. The skeleton heatmaps obtained this way have higher spatial accuracy; finally, the heatmaps are converted into skeleton keypoints according to the keypoint scores in each heatmap.
Preferably, in S4 the pose of the current frame is better predicted by introducing context from adjacent frames: under occlusion, reasonable estimates can be made from the poses of several preceding and following frames, and since the bone lengths of the same person are constant within a video, a bone-length-consistency constraint is introduced so that a more stable three-dimensional skeleton is output. The operation is as follows:
the obtained 2D skeleton keypoints are concatenated in temporal order, and the two-dimensional pose sequence is fed as input to a trained temporal convolutional network (TCN), which captures long-term information and improves accuracy; dilated convolutions enlarge the TCN's receptive field, the relative three-dimensional coordinates of each keypoint of the human skeleton are output, and the 3D skeleton keypoint sequence is finally obtained.
Preferably, the gait feature extraction in S5 proceeds as follows:
the 3D skeleton keypoint sequence is used to train a graph convolutional network, and the trained model extracts gait features from the human skeleton keypoints;
the 3D human skeleton keypoint sequence is fed into the trained spatio-temporal graph convolutional network ST-GCN: spatial convolution extracts spatial features from the 3D keypoint sequence, and temporal convolution then performs aggregation pooling of the extracted spatial features along the time dimension, so the network captures the spatio-temporal features of the gait sequence;
finally, a fully connected layer maps the feature map into the feature space to obtain the pedestrian's gait feature vector.
Preferably, gait feature matching proceeds as follows: measure the similarity between the extracted pedestrian gait features and the feature vectors of registered pedestrians in the gallery to complete matching. The Euclidean distance between feature vectors serves as the decision criterion: once a feature is deemed valid by exceeding the score threshold, its distance to each registered person's feature vector is computed; the nearest registered person is taken as the recognition result, and otherwise the pedestrian is regarded as unregistered.
(III) beneficial effects
Compared with the prior art, the multi-person gait recognition system based on a three-dimensional human skeleton provided by the application has the following beneficial effects: by extracting gait features in three-dimensional space, the system improves the robustness of gait recognition to interference to a certain extent and addresses gait recognition under viewpoint changes.
Drawings
FIG. 1 is a flow chart of the system of the present application.
Detailed Description
For a better understanding of the objects, structure and functions of the present application, the multi-person gait recognition system based on a three-dimensional human skeleton is described in further detail below with reference to specific embodiments and the accompanying drawings.
Referring to fig. 1, the present application provides a multi-person gait recognition system based on a three-dimensional human skeleton, comprising:
acquisition of gait video sequences
A camera captures RGB video of walking pedestrians to obtain an original RGB gait video sequence.
Pedestrian detection and tracking
The currently most practical single-stage object detection model YOLOv8 is adopted as the detector. YOLOv8 innovates and improves on the earlier YOLO series, greatly raising performance and flexibility, and is a state-of-the-art detector. Frames are extracted from the acquired pedestrian gait video and passed to YOLOv8 for object detection, obtaining a detection box for each target in the picture; targets other than pedestrians are then filtered out by class label.
the detected pedestrians are tracked by adopting a real-time multi-target tracking algorithm ByteTrack, and the detection frame is divided into a high sub-frame and a low sub-frame according to the score of the detected target, and the detected pedestrians are matched twice in total: matching the high frame with the existing motion trail according to the similarity of the motion trail of the object for the first time, and predicting the motion trail of the next frame by using Kalman filtering; matching the low frame with the motion trail of the target which is not matched with the low frame for the second time, so as to avoid the loss of the target caused by shielding; and finally, reserving the motion trail which is not matched with the detection frame for a period of time for the two times, waiting for the appearance of the matched target, and creating a motion trail for the high-resolution detection frame which is not matched with the detection frame.
For pedestrian detection, the application is not limited to the object detection model YOLOv8; any other detection model capable of producing pedestrian detection boxes may be substituted.
For pedestrian tracking, the application is not limited to ByteTrack; other object tracking algorithms may be substituted.
Pedestrian skeleton extraction
2D pose estimation is performed with a 2D pose estimation model on each frame of every tracked pedestrian to obtain 2D human skeleton keypoints. The pose estimation network is HRNet: it starts from a high-resolution sub-network as the first stage, gradually adds sub-networks from high to low resolution, connects the multi-resolution sub-networks in parallel, and performs repeated multi-scale fusion, so a high-resolution representation is maintained throughout and the resulting skeleton heatmaps have higher spatial accuracy. Finally, the heatmaps are converted into skeleton keypoints according to the keypoint scores in each heatmap.
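Converting heatmaps into keypoints by taking each map's highest-scoring location can be sketched as below; the score threshold and the NaN convention for low-confidence joints are assumptions of this illustration, not details from the patent.

```python
import numpy as np

def decode_heatmaps(heatmaps, score_thr=0.2):
    """heatmaps: (K, H, W) array, one map per keypoint.
    Returns (K, 3): x, y, confidence; low-confidence joints get NaN coords."""
    K, H, W = heatmaps.shape
    kpts = np.zeros((K, 3))
    for k in range(K):
        idx = np.argmax(heatmaps[k])        # flat index of the peak
        y, x = divmod(idx, W)
        score = heatmaps[k, y, x]
        if score >= score_thr:
            kpts[k] = (x, y, score)
        else:
            # peak too weak: mark the joint as missing for this frame
            kpts[k] = (np.nan, np.nan, score)
    return kpts
```

Real decoders usually add sub-pixel refinement around the peak and rescale coordinates from heatmap space back to the image; both are omitted here.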
To improve the robustness of gait recognition to static occlusion, dynamic occlusion and other interference: compared with silhouette-based methods, a three-dimensional human skeleton obtained through proper skeleton extraction is little affected by clothing and carried objects, and such interference can be suppressed during training. The three-dimensional extraction predicts from two-dimensional pose sequences over adjacent frames, completing the current frame's skeleton keypoints from context, which effectively mitigates occlusion; the approach is therefore highly robust to various disturbances. For strong resistance to viewpoint switching, three-dimensional skeleton gait recognition provides more accurate pose information, including the position, rotation and scale change of the body in three-dimensional space. By contrast, two-dimensional skeleton gait recognition only provides the body's pose on a two-dimensional plane and cannot capture depth and scale; three-dimensional skeleton gait recognition can therefore represent and analyse human gait characteristics more accurately.
Two-dimensional framework to three-dimensional framework
Because acquiring three-dimensional pose data is far more difficult and costly than extracting two-dimensional poses, the idea of converting a two-dimensional skeleton into a three-dimensional one is adopted, and a video-based method is built on it: context from adjacent frames lets the pose of the current frame be predicted better, and under occlusion reasonable estimates can be made from the poses of several preceding and following frames. Since the bone lengths of the same person are constant within a video, a bone-length-consistency constraint is introduced, so a more stable three-dimensional skeleton is output;
the obtained 2D skeleton key points are spliced together according to a time sequence, a two-dimensional gesture sequence is taken as input, the 2D skeleton key point sequence is further processed through a trained time sequence convolution network (TCN), long-term information is captured, the precision is improved, meanwhile, the receptive field of the TCN is expanded by utilizing expansion convolution, the relative three-dimensional coordinates of each key point of the human skeleton are output, and finally the 3D skeleton key point sequence is obtained.
Gait feature extraction
The 3D skeleton keypoint sequence is used to train a graph convolutional network, and the trained model extracts gait features from the human skeleton keypoints. The sequence is fed into the trained spatio-temporal graph convolutional network ST-GCN: spatial convolution extracts spatial features from the 3D keypoint sequence, temporal convolution then performs aggregation pooling of the extracted spatial features along the time dimension so the network captures the spatio-temporal features of the gait sequence, and finally a fully connected layer maps the feature map into the feature space to obtain the pedestrian's gait feature vector.
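The spatial-then-temporal pattern of an ST-GCN layer can be sketched as follows. This toy numpy version uses a single normalized adjacency matrix, ReLU, and a moving average in place of strided temporal convolution, so it only illustrates the shape of the computation, not the trained model.

```python
import numpy as np

def st_gcn_layer(x, adj, w_spatial, t_kernel=3):
    """x: (T, J, C) joint features; adj: (J, J) row-normalized skeleton
    adjacency; w_spatial: (C, C_out). Spatial graph conv + temporal pooling."""
    # spatial graph convolution: each joint aggregates its neighbours
    h = np.einsum("jk,tkc->tjc", adj, x) @ w_spatial
    h = np.maximum(h, 0)                            # ReLU
    # temporal aggregation: causal moving average over t_kernel frames
    T = h.shape[0]
    return np.stack([h[max(0, t - t_kernel + 1): t + 1].mean(axis=0)
                     for t in range(T)])

def gait_embedding(seq_3d, adj, w_spatial, w_fc):
    """seq_3d: (T, J, 3) -> fixed-length gait feature vector."""
    h = st_gcn_layer(seq_3d, adj, w_spatial)
    pooled = h.mean(axis=(0, 1))                    # global spatio-temporal pooling
    return pooled @ w_fc                            # fully connected projection
```

A real ST-GCN stacks many such layers with learned weights and a partitioned adjacency; the global pooling followed by a fully connected layer mirrors the feature-vector step described above.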
Gait feature matching
The similarity between the extracted pedestrian gait features and the feature vectors of registered pedestrians in the gallery is measured to complete matching. The Euclidean distance between feature vectors serves as the decision criterion: once a feature is deemed valid by exceeding the score threshold, its distance to each registered person's feature vector is computed; the nearest registered person is taken as the recognition result, and otherwise the pedestrian is regarded as unregistered.
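The nearest-neighbour decision over the gallery can be sketched as follows. Here a distance threshold stands in for the validity check (the text gates on a score threshold instead), so `dist_thr` is an assumption of this illustration.

```python
import numpy as np

def match_gait(feature, gallery, dist_thr=1.0):
    """feature: (D,) probe gait vector; gallery: {name: (D,) vector}.
    Returns the nearest registered identity, or None when no gallery
    vector lies within dist_thr (treated as an unregistered pedestrian)."""
    best_name, best_dist = None, float("inf")
    for name, vec in gallery.items():
        d = np.linalg.norm(feature - vec)   # Euclidean distance
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= dist_thr else None
```

With an embedding trained so that same-identity sequences cluster tightly, this open-set rule both identifies registered pedestrians and rejects strangers.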
For the gait recognition algorithm, the network used by the application is not limited to a GCN (graph convolutional network) as the gait feature extraction model; other gait recognition models may be substituted.
The application changes the input of the gait recognition algorithm from the silhouettes, two-dimensional poses, energy images and the like used previously to a three-dimensional pose. After a pedestrian walking video sequence is acquired, it is fed into a two-dimensional pose estimator, built from an hourglass structure, to obtain two-dimensional pose keypoints; the two-dimensional poses are then aggregated by temporal convolution, composed of several dilated convolutional layers and a pooling layer, and the three-dimensional keypoints are predicted from the keypoint information of adjacent frames.
It will be understood that the application has been described in terms of several embodiments, and that various changes and equivalents may be made to these features and embodiments by those skilled in the art without departing from the spirit and scope of the application. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the application without departing from the essential scope thereof. Therefore, it is intended that the application not be limited to the particular embodiment disclosed, but that the application will include all embodiments falling within the scope of the appended claims.
Claims (6)
1. A multi-person gait recognition system based on a three-dimensional human skeleton, comprising the following steps:
S1, gait video sequence acquisition: capture RGB video of walking pedestrians with a camera to obtain an original RGB gait video sequence;
S2, pedestrian detection and tracking: use the object detection model YOLOv8 as the detector;
S3, pedestrian skeleton extraction: perform 2D pose estimation with a 2D pose estimation model on each frame of every tracked pedestrian to obtain 2D human skeleton keypoints;
S4, convert the two-dimensional skeleton into a three-dimensional skeleton;
S5, gait feature extraction: train a graph convolutional network on the 3D skeleton keypoint sequence, and extract gait features from the human skeleton keypoints with the trained model;
S6, gait feature matching: measure the similarity between the extracted pedestrian gait features and the feature vectors of registered pedestrians in the gallery to complete matching.
2. The multi-person gait recognition system based on a three-dimensional human skeleton according to claim 1, wherein in S2 the real-time multi-object tracking algorithm ByteTrack tracks the detected pedestrians: the detection boxes are first split into a high-score group and a low-score group according to detection score, and two rounds of matching are performed:
in the first round, the high-score boxes are matched to existing trajectories by motion similarity, with a Kalman filter predicting each trajectory's position in the next frame;
in the second round, the low-score boxes are matched to the trajectories left unmatched, avoiding target loss under occlusion;
finally, trajectories that remain unmatched after both rounds are kept alive for a period of time in case their target reappears, and a new trajectory is created for each unmatched high-score detection box.
3. The multi-person gait recognition system based on a three-dimensional human skeleton according to claim 1, wherein the pose estimation network used in S3 is HRNet: it starts from a high-resolution sub-network as the first stage, gradually adds sub-networks from high to low resolution, connects the multi-resolution sub-networks in parallel, and performs repeated multi-scale fusion, so HRNet maintains a high-resolution representation throughout; the skeleton heatmaps obtained this way have higher spatial accuracy, and the heatmaps are finally converted into skeleton keypoints according to the keypoint scores in each heatmap.
4. The multi-person gait recognition system based on a three-dimensional human skeleton according to claim 1, wherein in S4 the pose of the current frame is better predicted by introducing context from adjacent frames: under occlusion, reasonable estimates can be made from the poses of several preceding and following frames, and since the bone lengths of the same person are constant within a video, a bone-length-consistency constraint is introduced so that a more stable three-dimensional skeleton is output; the operation is as follows:
the obtained 2D skeleton keypoints are concatenated in temporal order, and the two-dimensional pose sequence is fed as input to a trained temporal convolutional network (TCN), which captures long-term information and improves accuracy; dilated convolutions enlarge the TCN's receptive field, the relative three-dimensional coordinates of each keypoint of the human skeleton are output, and the 3D skeleton keypoint sequence is finally obtained.
5. The multi-person gait recognition system based on a three-dimensional human skeleton according to claim 1, wherein the gait feature extraction in S5 proceeds as follows:
the 3D skeleton keypoint sequence is used to train a graph convolutional network, and the trained model extracts gait features from the human skeleton keypoints;
the 3D human skeleton keypoint sequence is fed into the trained spatio-temporal graph convolutional network ST-GCN: spatial convolution extracts spatial features from the 3D keypoint sequence, and temporal convolution then performs aggregation pooling of the extracted spatial features along the time dimension, so the network captures the spatio-temporal features of the gait sequence;
finally, a fully connected layer maps the feature map into the feature space to obtain the pedestrian's gait feature vector.
6. The multi-person gait recognition system based on a three-dimensional human skeleton according to claim 1, wherein the gait feature matching proceeds as follows: measure the similarity between the extracted pedestrian gait features and the feature vectors of registered pedestrians in the gallery to complete matching; the Euclidean distance between feature vectors serves as the decision criterion: once a feature is deemed valid by exceeding the score threshold, its distance to each registered person's feature vector is computed, the nearest registered person is taken as the recognition result, and otherwise the pedestrian is regarded as unregistered.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311379123.6A CN117173792A (en) | 2023-10-24 | 2023-10-24 | Multi-person gait recognition system based on three-dimensional human skeleton |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311379123.6A CN117173792A (en) | 2023-10-24 | 2023-10-24 | Multi-person gait recognition system based on three-dimensional human skeleton |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117173792A true CN117173792A (en) | 2023-12-05 |
Family
ID=88937716
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311379123.6A Pending CN117173792A (en) | 2023-10-24 | 2023-10-24 | Multi-person gait recognition system based on three-dimensional human skeleton |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117173792A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117789255A (en) * | 2024-02-27 | 2024-03-29 | 沈阳二一三电子科技有限公司 | Pedestrian abnormal behavior video identification method based on attitude estimation |
CN117789255B (en) * | 2024-02-27 | 2024-06-11 | 沈阳二一三电子科技有限公司 | Pedestrian abnormal behavior video identification method based on attitude estimation |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Zeng et al. | Silhouette-based gait recognition via deterministic learning | |
CN109035293B (en) | Method suitable for segmenting remarkable human body example in video image | |
CN114187665B (en) | Multi-person gait recognition method based on human skeleton heat map | |
CN107833239B (en) | Optimization matching target tracking method based on weighting model constraint | |
Zhou et al. | Learning to estimate 3d human pose from point cloud | |
Mahfouf et al. | Investigating the use of motion-based features from optical flow for gait recognition | |
Park et al. | 2D human pose estimation based on object detection using RGB-D information. | |
Kusakunniran et al. | Automatic gait recognition using weighted binary pattern on video | |
CN113378649A (en) | Identity, position and action recognition method, system, electronic equipment and storage medium | |
Nallasivam et al. | Moving human target detection and tracking in video frames | |
Azmat et al. | An elliptical modeling supported system for human action deep recognition over aerial surveillance | |
CN117173792A (en) | Multi-person gait recognition system based on three-dimensional human skeleton | |
Azmat et al. | Aerial Insights: Deep Learning-based Human Action Recognition in Drone Imagery | |
CN112989889A (en) | Gait recognition method based on posture guidance | |
Deng et al. | Individual identification using a gait dynamics graph | |
Guo et al. | Multi-modal human authentication using silhouettes, gait and rgb | |
Liao et al. | An edge-based approach to improve optical flow algorithm | |
Zafar et al. | Human silhouette extraction on FPGAs for infrared night vision military surveillance | |
Kandil et al. | A comparative study between SIFT-particle and SURF-particle video tracking algorithms | |
Lobachev et al. | Machine learning models and methods for human gait recognition | |
Lingaswamy et al. | An efficient moving object detection and tracking system based on fractional derivative | |
CN112613430B (en) | Gait recognition method based on deep migration learning | |
Juang et al. | Moving object recognition by a shape-based neural fuzzy network | |
Asif et al. | Human identification on the basis of gaits using time efficient feature extraction and temporal median background subtraction | |
Zhang et al. | V-HPM Based Gait Recognition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||