CN111105443A - Video group figure motion trajectory tracking method based on feature association - Google Patents

Video group figure motion trajectory tracking method based on feature association

Info

Publication number
CN111105443A
CN111105443A
Authority
CN
China
Prior art keywords
frame
video
video frame
mark
person
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911362575.7A
Other languages
Chinese (zh)
Inventor
陈志
掌静
岳文静
周传
陈璐
刘玲
任杰
周松颖
江婧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications filed Critical Nanjing University of Posts and Telecommunications
Priority to CN201911362575.7A priority Critical patent/CN111105443A/en
Publication of CN111105443A publication Critical patent/CN111105443A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53Recognition of crowd images, e.g. recognition of crowd congestion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30241Trajectory

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a feature-association-based method for tracking the motion trajectories of group persons in video. First, the group persons appearing in the video are detected and their position information and feature masks are obtained; newly appearing persons are then detected, the currently tracked person is selected, and the association similarity between the currently tracked person and the persons in the adjacent video frame is computed frame by frame; finally, the inter-frame movement of the currently tracked person is determined from the association similarity, the person's motion cue is updated, and the video sequence is traversed to complete trajectory tracking of the video group persons. The method exploits the motion characteristics of group persons and jointly considers the influence of positional relations and motion form during inter-frame person association matching, which effectively improves the accuracy of group-person trajectory tracking and gives the method good practicality and robustness.

Description

Video group figure motion trajectory tracking method based on feature association
Technical Field
The invention relates to a feature-association-based method for tracking the motion trajectories of group persons in video, and belongs to the interdisciplinary field of computer vision and pattern recognition. Faced with massive volumes of video data, researchers in computer vision have begun to explore how to extract the motion trajectories of group persons in video automatically and efficiently. Tracking the motion trajectories of video group persons has great application prospects and practical value, with wide use in video surveillance, sports analysis, human-computer interaction, and other fields.
Background
Tracking the motion trajectories of group persons in video is an important research topic in computer vision, with significant theoretical and practical value.
Group person motion refers to interactive motion with collective characteristics among multiple individuals, and such group interaction is generally diverse. Because video contains rich image-sequence information such as the positional relations and motion forms of persons and objects, it aids the understanding of group motion trajectories, and research on video-based group-person trajectory tracking has gradually become a hotspot.
Current research on tracking the motion trajectories of video group persons can be divided, by the starting point and emphasis of the core algorithm, into model-based and feature-based approaches. Model-based tracking algorithms simulate changes in group trajectories with a prior model of group motion, train the model parameters on training data, and use the tuned model to judge the trajectory dynamics of persons in the scene. Feature-based tracking algorithms obtain regions of interest from the video sequence, describe those regions with features, and obtain the group trajectory tracking result by training on the feature descriptions. Each approach has strengths and weaknesses: the performance of model-based algorithms depends on the design of the prior model, while feature-based algorithms must tune subsequent processing to the scale of the extracted features.
To date, methods and systems for tracking the motion trajectories of group persons in video still require substantial research.
Disclosure of Invention
The purpose of the invention: the invention aims to solve the problem of insufficient inter-frame person matching precision. Traditional person matching uses position information alone as the matching criterion, but group scenes contain a large amount of collective behavioral interaction, and tracking person trajectories by position alone easily causes tracking errors.
The technical scheme: to achieve the above purpose, the invention adopts the following technical scheme.
A feature-association-based method for tracking the motion trajectories of video group persons comprises the following steps:
step 1) input a video and set the standard video frame size to H × W; if a video frame's size differs from the standard size, scale it to the standard size with a bilinear interpolation algorithm; the video is supplied by the user, H is the height of a video frame, W is its width, and bilinear interpolation is a common image-processing algorithm;
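The frame normalization of step 1 can be sketched in pure Python; this is a minimal grayscale bilinear-interpolation implementation for illustration only (a real system would use an image library), and the function name is an assumption:

```python
def bilinear_resize(img, out_h, out_w):
    """Scale a 2-D grayscale image (list of lists) to out_h x out_w
    with bilinear interpolation, aligning the corner pixels."""
    in_h, in_w = len(img), len(img[0])
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        # map output row i back to a (possibly fractional) source row
        y = i * (in_h - 1) / (out_h - 1) if out_h > 1 else 0.0
        y0 = min(int(y), in_h - 2) if in_h > 1 else 0
        dy = y - y0
        for j in range(out_w):
            x = j * (in_w - 1) / (out_w - 1) if out_w > 1 else 0.0
            x0 = min(int(x), in_w - 2) if in_w > 1 else 0
            dx = x - x0
            # weighted average of the four surrounding source pixels
            top = img[y0][x0] * (1 - dx) + img[y0][min(x0 + 1, in_w - 1)] * dx
            bot = img[min(y0 + 1, in_h - 1)][x0] * (1 - dx) + \
                  img[min(y0 + 1, in_h - 1)][min(x0 + 1, in_w - 1)] * dx
            out[i][j] = top * (1 - dy) + bot * dy
    return out
```

For the 720 × 1280 standard frame of this method, the same routine would be called per channel as `bilinear_resize(channel, 720, 1280)`.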
step 2) perform person detection on every video frame with a Mask-RCNN network to obtain the set of persons detected in the t-th video frame, P^t = {p_1^t, …, p_{n_t}^t}; the i-th person p_i^t of the t-th video frame has position information loc_i^t and feature mask mask_i^t; for each person p_i^t of the t-th video frame, set the matching state parameter match = 0 and the mark state parameter mark = 0; the Mask-RCNN network is an effective person detection algorithm, i is the person number within the t-th video frame (numbers increase in order of position information), n_t is the number of persons detected in the t-th video frame, and match ∈ {0, 1};
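The per-detection bookkeeping of step 2 can be sketched as follows; the `Person` record and field names are illustrative assumptions (the detector itself is not shown), matching the position, feature mask, and the two state parameters described above:

```python
from dataclasses import dataclass

@dataclass
class Person:
    """Per-detection record as in step 2: position, feature mask,
    and the two bookkeeping flags used by the matcher."""
    number: int   # i, ordered by position within the frame
    loc: tuple    # position information of the detection
    mask: list    # binarized feature mask from the detector
    match: int = 0  # matching state parameter, 1 once matched
    mark: int = 0   # mark state parameter (identity), 0 = unassigned

def init_frame(detections):
    """Number detections in increasing order of position (step 2);
    `detections` is a list of (loc, mask) pairs."""
    ordered = sorted(detections, key=lambda d: d[0])
    return [Person(number=i + 1, loc=loc, mask=mask)
            for i, (loc, mask) in enumerate(ordered)]
```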
step 3) let N be the number of group persons that have appeared in the video and initialize N = n_1, where n_1 is the number of persons detected in the first video frame; assign the mark state parameter mark of every person in the first video frame its corresponding person number; person matching then proceeds frame by frame from the first video frame until the whole video has been traversed and trajectory tracking of the group persons is complete; the person matching process from video frame t to adjacent video frame t+1 is detailed in steps 4) to 5);
step 4) examine the mark state parameter mark of every person in the t-th video frame in turn; if a person's mark is 0, that person is newly appeared: set N = N + 1 and set that person's mark to N;
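The newcomer handling of step 4 can be sketched with plain dicts (field names are illustrative, consistent with the mark state parameter described above):

```python
def assign_new_marks(frame_persons, n_total):
    """Step 4 sketch: every person whose mark is still 0 after the
    previous matching round is a newcomer; it receives the next
    global identity number N."""
    for p in frame_persons:
        if p["mark"] == 0:
            n_total += 1
            p["mark"] = n_total
    return n_total
```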
Step 5) setting the character set with the matching state parameter match being 0 in the t-th frame of video frame as Pnon-matchGo through Pnon-matchCompleting character matching, comprising the following specific steps:
step 51) selecting Pnon-matchThe person with the smallest number of the middle persons is taken as the current tracking person and is recorded as
Figure BDA0002337585280000026
Respectively, the location information and the feature mask are
Figure BDA0002337585280000027
And
Figure BDA0002337585280000028
step 52) setting the character set of the t +1 th frame video frame mark 0 as Pnon-markCalculating the current tracked person
Figure BDA0002337585280000031
And Pnon-markThe method comprises the following specific steps of:
f) is provided with
Figure BDA0002337585280000032
Is Pnon-markThe person in (1), calculating the current tracking person
Figure BDA0002337585280000033
And
Figure BDA0002337585280000034
degree of positional difference of
Figure BDA0002337585280000035
The above-mentioned
Figure BDA0002337585280000036
Is that
Figure BDA0002337585280000037
The location information of (a);
g) computing a current tracked person
Figure BDA0002337585280000038
And
Figure BDA0002337585280000039
position weight parameter of
Figure BDA00023375852800000310
The above-mentioned
Figure BDA00023375852800000331
Is currently tracking a person
Figure BDA00023375852800000311
And Pnon-markThe sum of the position difference degrees of all people in the house;
h) computing a current tracked person
Figure BDA00023375852800000312
And
Figure BDA00023375852800000313
correlation similarity of
Figure BDA00023375852800000314
The above-mentioned
Figure BDA00023375852800000315
Is that
Figure BDA00023375852800000316
The feature mask of (1);
step 53) judging the currently tracked person
Figure BDA00023375852800000317
And Pnon-markThe method comprises the following steps of:
i) if LinknowIn which there is a currently tracked person
Figure BDA00023375852800000318
And
Figure BDA00023375852800000319
associated similarity link ofnow-track≥linkminAnd linknow-trackIn LinknowIf the maximum value is in the middle, the current tracked person is determined
Figure BDA00023375852800000320
And
Figure BDA00023375852800000321
successful match, update
Figure BDA00023375852800000322
The match status parameter match is 1, update
Figure BDA00023375852800000323
Marking state parameter mark of current tracking person
Figure BDA00023375852800000324
The marking state parameter is taken and updated
Figure BDA00023375852800000325
The motion track from the t frame video frame to the t +1 frame video frame is
Figure BDA00023375852800000326
The LinknowIs that
Figure BDA00023375852800000327
And Pnon-markIs associated with a similarity set, linkminIs the minimum association similarity threshold;
j) if LinknowAll values in (1) are less than linkminThen, the current tracking person P is determinednowUpdating when the tracking of the video frame fails in the t +1 th frame
Figure BDA00023375852800000328
The match status parameter match is 1, update
Figure BDA00023375852800000329
The motion track from the t frame video frame to the t +1 frame video frame is
Figure BDA00023375852800000330
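The greedy inter-frame matching of steps 51–53 can be sketched as below. The patent does not give closed-form formulas for the position difference or the association similarity, so `sim` uses a simple stand-in (mask overlap scaled by inverse Manhattan distance) purely for illustration; the dict fields mirror the state parameters defined in step 2:

```python
def match_frame(prev, curr, link_min=0.5):
    """Match each unmatched person of frame t (prev) against the
    unmarked persons of frame t+1 (curr), in increasing person-number
    order, by maximum association similarity above link_min."""
    def sim(a, b):
        # stand-in association similarity: mask agreement damped by distance
        d = abs(a["loc"][0] - b["loc"][0]) + abs(a["loc"][1] - b["loc"][1])
        overlap = sum(x == y for x, y in zip(a["mask"], b["mask"])) / len(a["mask"])
        return overlap / (1.0 + d)
    for p in sorted(prev, key=lambda q: q["number"]):
        if p["match"]:
            continue
        candidates = [q for q in curr if q["mark"] == 0]
        p["match"] = 1              # steps 53 i)/j): match resolved either way
        if not candidates:
            continue                # tracking fails in frame t+1
        best = max(candidates, key=lambda q: sim(p, q))
        if sim(p, best) >= link_min:
            best["mark"] = p["mark"]  # identity propagates to frame t+1
```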
step 6) persons in the video sequence sharing the same mark state parameter value are taken to be the same person; update and collate the motion trajectories of the group persons to complete trajectory tracking of the group persons in the video.
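The trajectory collation of step 6 amounts to grouping positions by mark value across the frame sequence; a minimal sketch (dict fields illustrative):

```python
def collect_trajectories(frames):
    """Step 6 sketch: persons sharing a mark value across the video
    sequence are the same individual; gather their positions in
    frame order to form per-person trajectories."""
    tracks = {}
    for persons in frames:          # frames in temporal order
        for p in persons:
            tracks.setdefault(p["mark"], []).append(p["loc"])
    return tracks
```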
Wherein,
in step 1), H is set to 720 and W to 1280 according to the data set.
In step 5), the position weight parameter w_now-j of every person is initialized to 1, and link_min is set to 0.5 empirically.
Beneficial effects: compared with the prior art, the feature-association-based method for tracking the motion trajectories of video group persons has the following technical effects:
The method detects the group persons appearing in the video and obtains their position information and feature masks; it detects newly appeared persons, selects the currently tracked person, and computes the association similarity between the currently tracked person and the persons in the adjacent video frame, frame by frame; finally, it determines the inter-frame movement of the currently tracked person from the association similarity, updates the person's motion cue, and traverses the video sequence to complete trajectory tracking of the video group persons. With this method, the motion trajectories of group persons in video can be extracted effectively, with good accuracy and effectiveness. Specifically:
(1) the Mask-RCNN network used by the invention detects persons with fused features, providing rich position and semantic information for person tracking.
(2) the method accounts for the diversity and randomness of group person motion by constructing weight parameters for person positions, reducing the influence of erroneous person matches on the tracking result and improving the robustness of group person tracking.
(3) the invention converts the complex feature map into a feature mask that is convenient to compute for person matching, greatly reducing computation cost while retaining semantic descriptive power.
Drawings
FIG. 1 is a flowchart of the feature-association-based video group person motion trajectory tracking method.
Fig. 2 is a schematic diagram of person matching.
Detailed Description
The present invention will be further described with reference to the accompanying drawings.
FIG. 1 shows the flow of the feature-association-based video group person motion trajectory tracking method. First, a video is input; to ease subsequent tracking operations, the sizes of all video frames in the sequence are normalized: a standard frame size is set, and frames that do not meet it are rescaled with a bilinear interpolation algorithm.
Considering that group persons occlude one another severely and illumination changes are complex, a Mask-RCNN network with high-low feature fusion is used for person detection, reducing the influence of environmental factors on subsequent tracking and yielding accurate person positions and feature descriptions; to cut the cost of feature computation, each feature description is converted into a feature mask that is convenient to compute for person matching. To track group trajectories effectively, each person is assigned a distinct mark state parameter, with initial values ordered increasingly by position information.
Before person matching, the current video frame is checked for newly appeared persons. If none exist, every person in the current frame completed the previous round of matching, i.e., no person's mark state parameter is 0. If a person with mark 0 exists, it is a newcomer: the group person count is updated and the newcomer's mark state parameter is modified.
Experiments show that a plain position matcher performs poorly in group scenes, because group motion is more disordered than individual motion and a single positional matching feature easily causes mismatches; a plain feature matcher completes matching well but, since group motion is collective, easily confuses persons with similar actions. Considering these factors, a mask-position matcher is designed that weighs the influence of both position information and feature masks on person matching, improving tracking accuracy.
Specifically, the currently tracked person is selected; to speed up tracking, persons in the current frame that are already matched are excluded from selection, and matching proceeds in increasing order of person number. To simplify computation, persons already marked in the next frame are not considered: the position difference between the currently tracked person and each unmarked person in the next frame is computed, the corresponding position weight parameter is derived from it, and the association similarity is computed jointly with the mask difference; the higher the association similarity, the more likely the match. Meanwhile, to avoid repeated or false matches, the mark and match state parameters of matched persons are updated. This process repeats until all group persons in the video are matched. Finally the motion cues of the group persons are collated, and persons with the same mark state parameter value are clustered into the trajectory of the same person.
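The position weight parameter derived from the position differences can be sketched as below. The exact formula is not spelled out in the text, so a normalized inverse-difference weighting (closer candidates weigh more) is an assumption used purely for illustration:

```python
def position_weights(dists):
    """Derive per-candidate position weights from the position
    differences between the tracked person and all unmarked
    candidates; weights are normalized to sum to 1."""
    total = sum(dists)
    if total == 0:
        return [1.0 / len(dists)] * len(dists)
    # closer candidates (smaller position difference) get larger weight
    inv = [(total - d) / total for d in dists]
    s = sum(inv)
    return [w / s for w in inv]
```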
FIG. 2 is a schematic diagram of the person matching process. Person p_1^t of the t-th video frame and person p_1^{t+1} of the (t+1)-th video frame have already completed matching, marked by a green solid line; their mark state parameters agree, indicating that the two are the same person. Among the persons of the t-th video frame whose mark is 0, the person with the smallest number, p_2^t, is selected as the currently tracked person; its association similarity with every person of the (t+1)-th video frame whose mark is 0 is computed for matching, marked by blue dotted lines. The person p_2^{t+1} whose association similarity is the maximum and exceeds the minimum association-similarity threshold is selected, and its mark is updated to 2, denoting that p_2^t and p_2^{t+1} are the same person; this round of person matching is thereby completed.
The above describes only preferred embodiments of the invention. It should be noted that those skilled in the art can make various improvements and refinements without departing from the principle of the invention, and such improvements and refinements are also regarded as falling within the protection scope of the invention.

Claims (3)

1. A feature-association-based method for tracking the motion trajectories of video group persons, characterized by comprising the following steps:
step 1) input a video and set the standard video frame size to H × W; if a video frame's size differs from the standard size, scale it to the standard size with a bilinear interpolation algorithm; the video is supplied by the user, H is the height of a video frame, W is its width, and bilinear interpolation is a common image-processing algorithm;
step 2) perform person detection on every video frame with a Mask-RCNN network to obtain the set of persons detected in the t-th video frame, P^t = {p_1^t, …, p_{n_t}^t}; the i-th person p_i^t of the t-th video frame has position information loc_i^t and feature mask mask_i^t; for each person p_i^t of the t-th video frame, set the matching state parameter match = 0 and the mark state parameter mark = 0; the Mask-RCNN network is an effective person detection algorithm, i is the person number within the t-th video frame (numbers increase in order of position information), n_t is the number of persons detected in the t-th video frame, and match ∈ {0, 1};
step 3) let N be the number of group persons that have appeared in the video and initialize N = n_1, where n_1 is the number of persons detected in the first video frame; assign the mark state parameter mark of every person in the first video frame its corresponding person number; person matching then proceeds frame by frame from the first video frame until the whole video has been traversed and trajectory tracking of the group persons is complete; the person matching process from video frame t to adjacent video frame t+1 is detailed in steps 4) to 5);
step 4) examine the mark state parameter mark of every person in the t-th video frame in turn; if a person's mark is 0, that person is newly appeared: set N = N + 1 and set that person's mark to N;
step 5) let P_non-match be the set of persons in the t-th video frame whose matching state parameter match is 0; traverse P_non-match to complete person matching, with the following specific steps:
step 51) select the person with the smallest person number in P_non-match as the currently tracked person, denoted p_now^t, with position information loc_now^t and feature mask mask_now^t;
step 52) let P_non-mark be the set of persons in video frame t+1 whose mark is 0, and compute the association similarity between the currently tracked person p_now^t and every person in P_non-mark, with the following specific steps:
a) let p_j^{t+1} be a person in P_non-mark; compute the position difference d_now-j between the currently tracked person p_now^t and p_j^{t+1}, where loc_j^{t+1} is the position information of p_j^{t+1};
b) compute the position weight parameter w_now-j of p_now^t and p_j^{t+1}, where D_now is the sum of the position differences between p_now^t and all persons in P_non-mark;
c) compute the association similarity link_now-j of p_now^t and p_j^{t+1}, where mask_j^{t+1} is the feature mask of p_j^{t+1};
step 53) decide the matching of the currently tracked person p_now^t against P_non-mark, with the following specific steps:
d) if Link_now contains an association similarity link_now-track between the currently tracked person p_now^t and a person p_track^{t+1} such that link_now-track ≥ link_min and link_now-track is the maximum value in Link_now, then p_now^t and p_track^{t+1} match successfully: set the match parameter of p_now^t to 1, set the mark parameter of p_track^{t+1} to the mark value of the currently tracked person p_now^t, and update the motion trajectory of p_now^t from video frame t to video frame t+1 with the position of p_track^{t+1}; here Link_now is the set of association similarities between p_now^t and P_non-mark, and link_min is the minimum association-similarity threshold;
e) if every value in Link_now is less than link_min, the currently tracked person p_now^t fails to be tracked in video frame t+1: set the match parameter of p_now^t to 1 and update its motion trajectory from video frame t to video frame t+1 with its current position.
step 6) persons in the video sequence sharing the same mark state parameter value are taken to be the same person; update and collate the motion trajectories of the group persons to complete trajectory tracking of the group persons in the video.
2. The feature-association-based video group person motion trajectory tracking method according to claim 1, characterized in that in step 1), H is set to 720 and W to 1280 according to the data set.
3. The feature-association-based video group person motion trajectory tracking method according to claim 1, characterized in that in step 5), the position weight parameter w_now-j of every person is initialized to 1 and link_min is set to 0.5 empirically.
CN201911362575.7A 2019-12-26 2019-12-26 Video group figure motion trajectory tracking method based on feature association Pending CN111105443A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911362575.7A CN111105443A (en) 2019-12-26 2019-12-26 Video group figure motion trajectory tracking method based on feature association


Publications (1)

Publication Number Publication Date
CN111105443A true CN111105443A (en) 2020-05-05

Family

ID=70424271

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911362575.7A Pending CN111105443A (en) 2019-12-26 2019-12-26 Video group figure motion trajectory tracking method based on feature association

Country Status (1)

Country Link
CN (1) CN111105443A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103440667A (en) * 2013-07-19 2013-12-11 杭州师范大学 Automatic device for stably tracing moving targets under shielding states
CN107527350A (en) * 2017-07-11 2017-12-29 浙江工业大学 A kind of solid waste object segmentation methods towards visual signature degraded image
CN110135314A (en) * 2019-05-07 2019-08-16 电子科技大学 A kind of multi-object tracking method based on depth Trajectory prediction


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
掌静 等: "基于特征关联的视频中群体人物行为语义抽取" (Zhang Jing et al., "Semantic extraction of group person behavior in video based on feature association"), 《HTTP://KNS.CNKI.NET/KCMS/DETAIL/61.1450.TP.20191218.1110.010.HTML》 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113808158A (en) * 2020-06-15 2021-12-17 中移(苏州)软件技术有限公司 Method, device and equipment for analyzing group object motion in video and storage medium
CN111753756A (en) * 2020-06-28 2020-10-09 浙江大华技术股份有限公司 Object identification-based deployment alarm method and device and storage medium
CN113361360A (en) * 2021-05-31 2021-09-07 山东大学 Multi-person tracking method and system based on deep learning
CN113255549A (en) * 2021-06-03 2021-08-13 中山大学 Intelligent recognition method and system for pennisseum hunting behavior state
CN113255549B (en) * 2021-06-03 2023-12-05 中山大学 Intelligent recognition method and system for behavior state of wolf-swarm hunting
CN113326850A (en) * 2021-08-03 2021-08-31 中国科学院烟台海岸带研究所 Example segmentation-based video analysis method for group behavior of Charybdis japonica
CN113326850B (en) * 2021-08-03 2021-10-26 中国科学院烟台海岸带研究所 Example segmentation-based video analysis method for group behavior of Charybdis japonica

Similar Documents

Publication Publication Date Title
CN110472554B (en) Table tennis action recognition method and system based on attitude segmentation and key point features
CN111105443A (en) Video group figure motion trajectory tracking method based on feature association
CN109919977B (en) Video motion person tracking and identity recognition method based on time characteristics
Parkhi et al. Deep face recognition
Doliotis et al. Comparing gesture recognition accuracy using color and depth information
CN107563286B (en) Dynamic gesture recognition method based on Kinect depth information
CN103593464B (en) Video fingerprint detecting and video sequence matching method and system based on visual features
CN110458059B (en) Gesture recognition method and device based on computer vision
CN104601964B (en) Pedestrian target tracking and system in non-overlapping across the video camera room of the ken
CN103593680B (en) A kind of dynamic gesture identification method based on the study of HMM independent increment
Nazir et al. A bag of expression framework for improved human action recognition
US7983448B1 (en) Self correcting tracking of moving objects in video
CN110674785A (en) Multi-person posture analysis method based on human body key point tracking
CN112418095A (en) Facial expression recognition method and system combined with attention mechanism
CN107633226A (en) A kind of human action Tracking Recognition method and system
Tan et al. Dynamic hand gesture recognition using motion trajectories and key frames
Chen et al. Using FTOC to track shuttlecock for the badminton robot
CN109325440A (en) Human motion recognition method and system
CN111931654A (en) Intelligent monitoring method, system and device for personnel tracking
CN113158914B (en) Intelligent evaluation method for dance action posture, rhythm and expression
Yi et al. Human action recognition based on action relevance weighted encoding
CN106127112A (en) Data Dimensionality Reduction based on DLLE model and feature understanding method
CN113312973A (en) Method and system for extracting features of gesture recognition key points
CN111444817B (en) Character image recognition method and device, electronic equipment and storage medium
CN113591692A (en) Multi-view identity recognition method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200505