CN111105443A - Video group figure motion trajectory tracking method based on feature association - Google Patents
Video group figure motion trajectory tracking method based on feature association
- Publication number
- CN111105443A CN201911362575.7A
- Authority
- CN
- China
- Prior art keywords
- frame
- video
- video frame
- mark
- person
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/53—Recognition of crowd images, e.g. recognition of crowd congestion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30241—Trajectory
Abstract
The invention discloses a feature-association-based method for tracking the motion trajectories of groups of persons in video. First, the persons appearing in the video are detected, and their position information and feature masks are obtained. Then newly appearing persons are detected, the currently tracked person is selected, and the association similarity between the currently tracked person and the persons in the adjacent video frame is calculated frame by frame. Finally, the inter-frame dynamics of the currently tracked person are determined from the association similarity, the person's motion cue is updated, and the video sequence is traversed to complete the motion trajectory tracking of the video group persons. The method exploits the motion characteristics of group persons and jointly considers the influence of position relations and motion form during inter-frame person association matching; it can effectively improve the accuracy of group person trajectory tracking and has good practicality and robustness.
Description
Technical Field
The invention relates to a feature-association-based video group person motion trajectory tracking method, and belongs to the intersecting technical fields of computer vision, pattern recognition, and the like. Faced with massive video data, researchers in computer vision have begun to explore how to extract the motion trajectories of group persons in video automatically and efficiently. Video group person motion trajectory tracking has great application prospects and practical value, with wide applications in fields such as video surveillance, sports analysis, and human-computer interaction.
Background
Tracking the motion trajectories of group persons in video is an important research topic in computer vision, with significant theoretical and application value.

Group person motion refers to interactive motion with collective characteristics among multiple individual persons, and such group interactive motion is generally diverse. Because video contains rich image sequence information such as the positional relations and motion forms of persons and objects, it aids the understanding of group motion trajectories, and research on video-based group person trajectory tracking has gradually become a hotspot.

At present, research on video group person trajectory tracking can be divided, by the starting point and emphasis of the core algorithm, into model-based and feature-based approaches. Model-based person tracking algorithms simulate changes in group person trajectories with a prior model of group motion, train the model parameters on training data, and use the tuned model to judge the trajectory dynamics of persons in the scene. Feature-based person tracking algorithms extract regions of interest from the frames of a video sequence, describe them with features, and obtain the group trajectory tracking result by training on those feature descriptions. Each approach has advantages and disadvantages: the performance of model-based tracking depends on the design of the prior model, while feature-based tracking must optimise and adjust subsequent processing according to the scale of the extracted features.

To date, methods and systems for tracking the motion trajectories of group persons in video still require a great deal of research work.
Disclosure of Invention
Purpose of the invention: the invention aims to solve the technical problem of insufficient inter-frame person matching precision. Traditional person matching uses position information alone as the matching principle, but group person scenes contain a large amount of collective behavioural interaction, and tracking person trajectories with position information alone easily causes tracking errors.
Technical scheme: in order to achieve the above purpose, the invention adopts the following technical scheme.

A feature-association-based video group person motion trajectory tracking method comprises the following steps:
step 1) input one video and set the standard video frame size to H × W; if a video frame's size is inconsistent with the standard size, scale it to the standard size using a bilinear interpolation algorithm, wherein the video is input by a user, H denotes the height of a video frame, W denotes the width of a video frame, and the bilinear interpolation algorithm is a common image processing algorithm;
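The scaling in step 1) can be sketched as follows. This is a pure-Python, single-channel illustration of bilinear interpolation; the function name and the list-of-lists frame format are assumptions, and a real pipeline would typically call an optimised routine such as OpenCV's `cv2.resize(frame, (W, H))` instead.

```python
def bilinear_resize(frame, out_h, out_w):
    """Scale a 2-D list-of-lists image to out_h x out_w by bilinear interpolation."""
    in_h, in_w = len(frame), len(frame[0])
    out = [[0.0] * out_w for _ in range(out_h)]
    for y in range(out_h):
        for x in range(out_w):
            # Map the output pixel back into input coordinates.
            src_y = y * (in_h - 1) / max(out_h - 1, 1)
            src_x = x * (in_w - 1) / max(out_w - 1, 1)
            y0, x0 = int(src_y), int(src_x)
            y1, x1 = min(y0 + 1, in_h - 1), min(x0 + 1, in_w - 1)
            dy, dx = src_y - y0, src_x - x0
            # Weighted average of the four neighbouring input pixels.
            out[y][x] = (frame[y0][x0] * (1 - dy) * (1 - dx)
                         + frame[y0][x1] * (1 - dy) * dx
                         + frame[y1][x0] * dy * (1 - dx)
                         + frame[y1][x1] * dy * dx)
    return out
```

For the data set used later in the text, out_h = 720 and out_w = 1280.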
step 2) perform person detection on each video frame using a Mask-RCNN network to obtain the person set P^t = {P_1^t, ..., P_{n_t}^t} detected in the t-th video frame, the i-th person P_i^t of the t-th video frame having position information loc_i^t and feature mask mask_i^t; set the matching state parameter match of P_i^t to 0 and its mark state parameter mark to 0, wherein the Mask-RCNN network is an effective person detection algorithm, i is the person number within the t-th video frame, person numbers being assigned in increasing order of position information, n_t is the number of persons detected in the t-th video frame, and the matching state parameter match ∈ {0, 1};
step 3) let N be the number of group persons that have appeared in the video and initialise N = n_1, assigning the mark state parameter mark of every person in the first video frame to that person's number, where n_1 is the number of persons detected in the first video frame; then match persons between adjacent frames, starting from the first video frame, until the whole video has been traversed and the motion trajectory tracking of the group persons is complete, the person matching process from the t-th video frame to the adjacent (t+1)-th video frame being detailed in steps 4) to 5);
step 4) check the mark state parameter mark of every person in the t-th video frame in turn; if a person's mark is 0, that person is judged to be newly appeared, so modify N = N + 1 and set that person's mark state parameter mark to N;
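Steps 3) and 4) amount to assigning a fresh global id to every detection whose mark is still 0. A minimal sketch, assuming persons are plain dicts already ordered by position (the record format is an assumption for illustration):

```python
def register_new_persons(frame_persons, n_total):
    """Give every unmarked person (mark == 0) the next global id and
    return the updated total person count N."""
    for person in frame_persons:      # persons are ordered by position
        if person["mark"] == 0:       # never matched before -> newly appeared
            n_total += 1
            person["mark"] = n_total
    return n_total
```

On the first frame this reproduces the initialisation N = n_1 of step 3), since every detection starts with mark = 0.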
step 5) let P_non-match be the set of persons in the t-th video frame whose matching state parameter match is 0, and traverse P_non-match to complete person matching, with the following specific steps:
step 51) select the person with the smallest person number in P_non-match as the currently tracked person, denoted P_now, whose position information and feature mask are loc_now and mask_now respectively;
step 52) let P_non-mark be the set of persons in the (t+1)-th video frame whose mark is 0, and calculate the association similarity between the currently tracked person P_now and every person in P_non-mark, with the following specific steps:
f) let P_j^(t+1) be a person in P_non-mark, and calculate the position difference degree d_now-j between the currently tracked person P_now and P_j^(t+1), where loc_j^(t+1) is the position information of P_j^(t+1);

g) calculate the position weight parameter W_now-j between the currently tracked person P_now and P_j^(t+1), where d_sum is the sum of the position difference degrees between the currently tracked person P_now and all persons in P_non-mark;

h) calculate the association similarity link_now-j between the currently tracked person P_now and P_j^(t+1), where mask_j^(t+1) is the feature mask of P_j^(t+1);
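The text does not give the formulas for the position difference degree, the position weight parameter, or the association similarity, so the sketch below fills them in with plausible choices that are assumptions: Euclidean centre distance for the difference degree, inverse-distance weights normalised to sum to 1 for the position weights (consistent with the later remark that the weights of all persons sum to 1), and position weight times mask IoU for the association similarity.

```python
import math

def mask_iou(mask_a, mask_b):
    # Intersection-over-union of two flat binary feature masks.
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    union = sum(a | b for a, b in zip(mask_a, mask_b))
    return inter / union if union else 0.0

def position_difference(loc_a, loc_b):
    # Euclidean distance between box centres as the position difference degree.
    return math.dist(loc_a, loc_b)

def association_similarities(now, candidates):
    """Association similarity of the tracked person `now` with each
    unmarked candidate in frame t+1 (dict records are assumptions)."""
    dists = [position_difference(now["loc"], c["loc"]) for c in candidates]
    # Inverse-distance position weights, normalised to sum to 1:
    # nearer candidates get larger weights.
    inv = [1.0 / (d + 1e-6) for d in dists]
    total = sum(inv)
    weights = [v / total for v in inv]
    # Similarity combines the position weight with the mask overlap.
    return [w * mask_iou(now["mask"], c["mask"])
            for w, c in zip(weights, candidates)]
```

A near candidate with a similar mask therefore dominates both a far candidate with the same mask and a near candidate with a dissimilar one.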
step 53) determine the matching relation between the currently tracked person P_now and P_non-mark, with the following steps:
i) if Link_now contains an association similarity link_now-track between the currently tracked person P_now and a person P_track^(t+1) such that link_now-track ≥ link_min and link_now-track is the maximum value in Link_now, judge that P_now and P_track^(t+1) match successfully: update the matching state parameter match of P_track^(t+1) to 1, update the mark state parameter mark of P_track^(t+1) to the mark state parameter of the currently tracked person P_now, and update the motion trajectory of P_now from the t-th video frame to the (t+1)-th video frame as (loc_now, loc_track^(t+1)), wherein Link_now is the set of association similarities between P_now and P_non-mark and link_min is the minimum association similarity threshold;
j) if all values in Link_now are less than link_min, judge that tracking of the currently tracked person P_now fails in the (t+1)-th video frame: update the matching state parameter match of P_now to 1 and record the motion trajectory of P_now from the t-th video frame to the (t+1)-th video frame as (loc_now, ∅);
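The decision in step 53) can be sketched as a greedy argmax with a threshold; the dict-based person records and field names are assumptions:

```python
LINK_MIN = 0.5  # minimum association similarity threshold (0.5 in the text)

def match_person(now, candidates, links, link_min=LINK_MIN):
    """Accept the candidate with the highest association similarity if it
    reaches link_min; otherwise mark the tracked person as lost this frame."""
    if candidates:
        best = max(range(len(links)), key=links.__getitem__)
        if links[best] >= link_min:
            tracked = candidates[best]
            tracked["match"] = 1
            tracked["mark"] = now["mark"]     # propagate the person id
            now["match"] = 1
            now["trajectory"] = (now["loc"], tracked["loc"])
            return tracked
    # No candidate reaches link_min: tracking fails for frame t+1.
    now["match"] = 1
    now["trajectory"] = (now["loc"], None)
    return None
```

A candidate is accepted only when its similarity is both the maximum in Link_now and at least link_min; otherwise the trajectory records a miss for frame t+1.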
and step 6) regard persons in the video sequence having the same mark state parameter mark value as the same person, and update and collate the motion trajectories of the group persons, completing the motion trajectory tracking of the video group persons.
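Step 6)'s final grouping can be sketched with flat (frame, mark, location) tuples as an illustrative stand-in for the tracker's internal records:

```python
from collections import defaultdict

def collect_trajectories(detections):
    """Group per-frame locations by mark id, in frame order.
    detections: iterable of (frame_index, mark_id, location) tuples."""
    tracks = defaultdict(list)
    for frame_idx, mark, loc in sorted(detections):  # chronological order
        tracks[mark].append(loc)
    return dict(tracks)
```

Detections sharing a mark id across frames are the same individual, so each resulting list is one person's trajectory.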
Wherein:

In step 1), H is taken as 720 and W as 1280 according to the data set.

In step 5), the position weight parameters W_now-j of all the persons sum to 1, and link_min is taken as 0.5 empirically.
Beneficial effects: compared with the prior art, the feature-association-based video group person motion trajectory tracking method of the invention has the following technical effects:

The method detects the group persons appearing in the video and obtains their position information and feature masks; it then detects newly appearing persons, selects the currently tracked person, and calculates frame by frame the association similarity between the currently tracked person and the persons in the adjacent video frame; finally, it determines the inter-frame dynamics of the currently tracked person from the association similarity, updates that person's motion cue, and traverses the video sequence to complete the motion trajectory tracking of the video group persons. With this method the motion trajectories of group persons in a video can be extracted effectively, with good accuracy and effectiveness. Specifically:
(1) The Mask-RCNN network used by the invention detects persons with fused features, providing rich position and semantic information for person tracking.

(2) Considering the diversity and randomness of group person motion, the method constructs weight parameters for person positions, reducing the influence of wrong person matches on the tracking result and improving the robustness of group person tracking.

(3) The invention converts the complex feature map into a feature mask that is convenient to compute and lets it participate in person matching, greatly reducing computation cost while retaining semantic description capability.
Drawings
FIG. 1 is a flowchart of the feature-association-based video group person motion trajectory tracking method.
Fig. 2 is a schematic diagram of person matching.
Detailed Description
The present invention will be further described with reference to the accompanying drawings.
FIG. 1 shows the flow of the feature-association-based video group person motion trajectory tracking method. First, a video is input. To simplify subsequent tracking operations, the sizes of all video frames in the sequence are normalised: a standard frame size is set, and frames that do not meet it are scaled with a bilinear interpolation algorithm.

Considering that group persons occlude each other severely and that illumination changes are complex, a Mask-RCNN network with fused high- and low-level features is used for person detection, reducing the influence of environmental factors on subsequent tracking and yielding accurate person positions and feature descriptions. To reduce feature computation cost, each feature description is converted into a feature mask that is convenient to compute and participates in person matching. To track the motion trajectories of the group persons effectively, each person is assigned a distinct mark state parameter, the initial values being assigned in increasing order of position information.
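The feature-map-to-mask conversion mentioned above is not specified in detail in the text; one plausible sketch, assuming a mean-activation threshold (the rule and function name are assumptions), is:

```python
def feature_map_to_mask(feature_map):
    """Reduce a 2-D feature map (list of lists of floats) to a binary mask
    by thresholding at the mean activation."""
    flat = [v for row in feature_map for v in row]
    thresh = sum(flat) / len(flat)   # mean activation as the cut-off
    return [[1 if v > thresh else 0 for v in row] for row in feature_map]
```

The binary mask is far cheaper to compare (e.g. by overlap counting) than the dense feature map while still localising the person.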
Before person matching, the current video frame is checked for newly appearing persons. If no new person exists, all persons in the current frame completed the previous round of matching, i.e. no person's mark state parameter is 0. Any person whose mark state parameter is 0 is newly appeared: the group person count is updated and that person's mark state parameter is modified accordingly.

Experiments show that the commonly used position matcher performs poorly in group person environments: group motion is more disordered than individual motion, and a single positional matching feature easily causes person matching errors. The commonly used feature matcher completes person matching well, but because group motion is collective, persons with similar actions are easily confused. Considering both factors, a mask position matcher is designed that weighs the influence of position information and feature masks on person matching, improving the accuracy of person tracking.

Specifically, a currently tracked person is selected; to speed up tracking, persons already matched in the current video frame are excluded from the selection, and matching proceeds from the smallest person number to the largest. To simplify computation, persons already marked in the next video frame are not matched again: the position difference degree between the currently tracked person and each unmarked person in the next frame is calculated, the corresponding position weight parameter is derived from it, and the corresponding association similarity is calculated in combination with the mask difference; the higher the association similarity, the more likely the match. Meanwhile, to avoid repeated and false matches, the mark state parameter and matching state parameter of each person whose matching is complete are updated. This process is repeated until all group persons in the video are matched. Finally, the motion cues of the group persons are collated, and persons with the same mark state parameter value are clustered into the motion trajectory of the same person.
FIG. 2 is a schematic diagram of the person matching process. In the figure, persons in the t-th video frame that have already been matched with persons in the (t+1)-th video frame are joined by green solid lines, and equal mark state parameter values indicate that the two detections are the same person. The person with the smallest person number among those with match = 0 in the t-th video frame is selected as the currently tracked person, and its association similarity with every person whose mark is 0 in the (t+1)-th video frame is calculated for matching (blue dashed lines). The person whose association similarity is the maximum and exceeds the minimum association similarity threshold is selected as the match, and its mark state parameter is updated (to 2 in the figure), indicating that the two detections are the same person; matching then continues from this point.
The above description is only of the preferred embodiments of the present invention, and it should be noted that: it will be apparent to those skilled in the art that various modifications and adaptations can be made without departing from the principles of the invention and these are intended to be within the scope of the invention.
Claims (3)
1. A feature-association-based video group person motion trajectory tracking method, characterized by comprising the following steps:

step 1) input one video and set the standard video frame size to H × W; if a video frame's size is inconsistent with the standard size, scale it to the standard size using a bilinear interpolation algorithm, wherein the video is input by a user, H denotes the height of a video frame, W denotes the width of a video frame, and the bilinear interpolation algorithm is a common image processing algorithm;

step 2) perform person detection on each video frame using a Mask-RCNN network to obtain the person set P^t = {P_1^t, ..., P_{n_t}^t} detected in the t-th video frame, the i-th person P_i^t of the t-th video frame having position information loc_i^t and feature mask mask_i^t; set the matching state parameter match of P_i^t to 0 and its mark state parameter mark to 0, wherein the Mask-RCNN network is an effective person detection algorithm, i is the person number within the t-th video frame, person numbers being assigned in increasing order of position information, n_t is the number of persons detected in the t-th video frame, and the matching state parameter match ∈ {0, 1};

step 3) let N be the number of group persons that have appeared in the video and initialise N = n_1, assigning the mark state parameter mark of every person in the first video frame to that person's number, where n_1 is the number of persons detected in the first video frame; match persons between adjacent frames starting from the first video frame until the whole video has been traversed and the motion trajectory tracking of the group persons is complete, the person matching process from the t-th video frame to the adjacent (t+1)-th video frame being detailed in steps 4) to 5);

step 4) check the mark state parameter mark of every person in the t-th video frame in turn; if a person's mark is 0, that person is judged to be newly appeared, so modify N = N + 1 and set that person's mark state parameter mark to N;

step 5) let P_non-match be the set of persons in the t-th video frame whose matching state parameter match is 0, and traverse P_non-match to complete person matching, with the following specific steps:

step 51) select the person with the smallest person number in P_non-match as the currently tracked person, denoted P_now, whose position information and feature mask are loc_now and mask_now respectively;

step 52) let P_non-mark be the set of persons in the (t+1)-th video frame whose mark is 0, and calculate the association similarity between the currently tracked person P_now and every person in P_non-mark, with the following specific steps:

a) let P_j^(t+1) be a person in P_non-mark, and calculate the position difference degree d_now-j between the currently tracked person P_now and P_j^(t+1), where loc_j^(t+1) is the position information of P_j^(t+1);

b) calculate the position weight parameter W_now-j between the currently tracked person P_now and P_j^(t+1), where d_sum is the sum of the position difference degrees between the currently tracked person P_now and all persons in P_non-mark;

c) calculate the association similarity link_now-j between the currently tracked person P_now and P_j^(t+1), where mask_j^(t+1) is the feature mask of P_j^(t+1);

step 53) determine the matching relation between the currently tracked person P_now and P_non-mark, with the following steps:

d) if Link_now contains an association similarity link_now-track between the currently tracked person P_now and a person P_track^(t+1) such that link_now-track ≥ link_min and link_now-track is the maximum value in Link_now, judge that P_now and P_track^(t+1) match successfully: update the matching state parameter match of P_track^(t+1) to 1, update the mark state parameter mark of P_track^(t+1) to the mark state parameter of the currently tracked person P_now, and update the motion trajectory of P_now from the t-th video frame to the (t+1)-th video frame as (loc_now, loc_track^(t+1)), wherein Link_now is the set of association similarities between P_now and P_non-mark and link_min is the minimum association similarity threshold;

e) if all values in Link_now are less than link_min, judge that tracking of the currently tracked person P_now fails in the (t+1)-th video frame: update the matching state parameter match of P_now to 1 and record the motion trajectory of P_now from the t-th video frame to the (t+1)-th video frame as (loc_now, ∅);

and step 6) regard persons in the video sequence having the same mark state parameter mark value as the same person, and update and collate the motion trajectories of the group persons, completing the motion trajectory tracking of the video group persons.
2. The feature-association-based video group person motion trajectory tracking method according to claim 1, characterized in that in step 1), H is taken as 720 and W as 1280 according to the data set.
3. The feature-association-based video group person motion trajectory tracking method according to claim 1, characterized in that in step 5), the position weight parameters W_now-j of all the persons sum to 1, and link_min is taken as 0.5 empirically.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911362575.7A CN111105443A (en) | 2019-12-26 | 2019-12-26 | Video group figure motion trajectory tracking method based on feature association |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911362575.7A CN111105443A (en) | 2019-12-26 | 2019-12-26 | Video group figure motion trajectory tracking method based on feature association |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111105443A true CN111105443A (en) | 2020-05-05 |
Family
ID=70424271
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911362575.7A Pending CN111105443A (en) | 2019-12-26 | 2019-12-26 | Video group figure motion trajectory tracking method based on feature association |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111105443A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111753756A (en) * | 2020-06-28 | 2020-10-09 | 浙江大华技术股份有限公司 | Object identification-based deployment alarm method and device and storage medium |
CN113255549A (en) * | 2021-06-03 | 2021-08-13 | 中山大学 | Intelligent recognition method and system for pennisseum hunting behavior state |
CN113326850A (en) * | 2021-08-03 | 2021-08-31 | 中国科学院烟台海岸带研究所 | Example segmentation-based video analysis method for group behavior of Charybdis japonica |
CN113361360A (en) * | 2021-05-31 | 2021-09-07 | 山东大学 | Multi-person tracking method and system based on deep learning |
CN113808158A (en) * | 2020-06-15 | 2021-12-17 | 中移(苏州)软件技术有限公司 | Method, device and equipment for analyzing group object motion in video and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103440667A (en) * | 2013-07-19 | 2013-12-11 | 杭州师范大学 | Automatic device for stably tracing moving targets under shielding states |
CN107527350A (en) * | 2017-07-11 | 2017-12-29 | 浙江工业大学 | A kind of solid waste object segmentation methods towards visual signature degraded image |
CN110135314A (en) * | 2019-05-07 | 2019-08-16 | 电子科技大学 | A kind of multi-object tracking method based on depth Trajectory prediction |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103440667A (en) * | 2013-07-19 | 2013-12-11 | 杭州师范大学 | Automatic device for stably tracing moving targets under shielding states |
CN107527350A (en) * | 2017-07-11 | 2017-12-29 | 浙江工业大学 | A kind of solid waste object segmentation methods towards visual signature degraded image |
CN110135314A (en) * | 2019-05-07 | 2019-08-16 | 电子科技大学 | A kind of multi-object tracking method based on depth Trajectory prediction |
Non-Patent Citations (1)
Title |
---|
ZHANG Jing et al., "Feature-association-based semantic extraction of group person behaviour in video" (in Chinese), HTTP://KNS.CNKI.NET/KCMS/DETAIL/61.1450.TP.20191218.1110.010.HTML * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113808158A (en) * | 2020-06-15 | 2021-12-17 | 中移(苏州)软件技术有限公司 | Method, device and equipment for analyzing group object motion in video and storage medium |
CN111753756A (en) * | 2020-06-28 | 2020-10-09 | 浙江大华技术股份有限公司 | Object identification-based deployment alarm method and device and storage medium |
CN113361360A (en) * | 2021-05-31 | 2021-09-07 | 山东大学 | Multi-person tracking method and system based on deep learning |
CN113255549A (en) * | 2021-06-03 | 2021-08-13 | 中山大学 | Intelligent recognition method and system for pennisseum hunting behavior state |
CN113255549B (en) * | 2021-06-03 | 2023-12-05 | 中山大学 | Intelligent recognition method and system for behavior state of wolf-swarm hunting |
CN113326850A (en) * | 2021-08-03 | 2021-08-31 | 中国科学院烟台海岸带研究所 | Example segmentation-based video analysis method for group behavior of Charybdis japonica |
CN113326850B (en) * | 2021-08-03 | 2021-10-26 | 中国科学院烟台海岸带研究所 | Example segmentation-based video analysis method for group behavior of Charybdis japonica |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110472554B (en) | Table tennis action recognition method and system based on attitude segmentation and key point features | |
CN111105443A (en) | Video group figure motion trajectory tracking method based on feature association | |
CN109919977B (en) | Video motion person tracking and identity recognition method based on time characteristics | |
Parkhi et al. | Deep face recognition | |
Doliotis et al. | Comparing gesture recognition accuracy using color and depth information | |
CN107563286B (en) | Dynamic gesture recognition method based on Kinect depth information | |
CN103593464B (en) | Video fingerprint detecting and video sequence matching method and system based on visual features | |
CN110458059B (en) | Gesture recognition method and device based on computer vision | |
CN104601964B (en) | Pedestrian target tracking and system in non-overlapping across the video camera room of the ken | |
CN103593680B (en) | A kind of dynamic gesture identification method based on the study of HMM independent increment | |
Nazir et al. | A bag of expression framework for improved human action recognition | |
US7983448B1 (en) | Self correcting tracking of moving objects in video | |
CN110674785A (en) | Multi-person posture analysis method based on human body key point tracking | |
CN112418095A (en) | Facial expression recognition method and system combined with attention mechanism | |
CN107633226A (en) | A kind of human action Tracking Recognition method and system | |
Tan et al. | Dynamic hand gesture recognition using motion trajectories and key frames | |
Chen et al. | Using FTOC to track shuttlecock for the badminton robot | |
CN109325440A (en) | Human motion recognition method and system | |
CN111931654A (en) | Intelligent monitoring method, system and device for personnel tracking | |
CN113158914B (en) | Intelligent evaluation method for dance action posture, rhythm and expression | |
Yi et al. | Human action recognition based on action relevance weighted encoding | |
CN106127112A (en) | Data Dimensionality Reduction based on DLLE model and feature understanding method | |
CN113312973A (en) | Method and system for extracting features of gesture recognition key points | |
CN111444817B (en) | Character image recognition method and device, electronic equipment and storage medium | |
CN113591692A (en) | Multi-view identity recognition method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20200505 |
|