CN104954743A - Multi-camera semantic association target tracking method - Google Patents


Info

Publication number
CN104954743A
Authority
CN
China
Prior art keywords
camera
target
frame
target chain
data table
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510324658.2A
Other languages
Chinese (zh)
Other versions
CN104954743B (en)
Inventor
朱虹
沈冬辰
何毅枫
高炳辉
程玉爽
郭松
张静波
路凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian University of Technology
Original Assignee
Xian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Technology
Priority to CN201510324658.2A
Publication of CN104954743A
Application granted
Publication of CN104954743B
Legal status: Expired - Fee Related
Anticipated expiration


Abstract

The invention discloses a multi-camera semantic association target tracking method comprising the following steps: 1, defining associations among the positions of the multiple cameras in a surveillance network; 2, detecting moving targets and generating target chains; 3, generating semantic features of the target chains; 4, performing multi-camera association matching for a target of interest specified by the user and selecting candidate targets; and 5, determining the same target across the associated cameras. The method is simple in its steps, convenient to implement, low in computational cost, and highly accurate, and the results it provides correctly describe the behavior of targets in the surveillance network.

Description

A multi-camera semantic association target tracking method
Technical field
The invention belongs to the technical field of intelligent video surveillance processing, and relates to a multi-camera semantic association target tracking method.
Background technology
At present, networked surveillance systems are very common. However, if the association features between the cameras in the network are not exploited, then once a target of interest has been spotted in one camera's field of view at some point in time, retrieving and tracking it in another time period, or in another camera's field of view, requires re-watching the recorded videos. The resulting search workload is enormous, the hit rate is low, and the effectiveness is poor, which severely degrades the efficiency of locating the associated footage.
Summary of the invention
The object of the present invention is to provide a multi-camera semantic association target tracking method, solving the problem of prior-art tracking methods that, because the association features between the cameras of the surveillance network are not fully exploited, re-identifying a target across different time periods and different cameras requires searching every collected video, resulting in a low hit rate, poor effectiveness, and an enormous workload.
The technical solution adopted by the present invention is a multi-camera semantic association target tracking method, specifically implemented according to the following steps:
Step 1, define associations among the positions of the multiple cameras in the surveillance network;
Step 2, detect moving targets and generate target chains;
Step 3, generate the semantic features of the target chains;
Step 4, perform multi-camera association matching for the target of interest specified by the user, and provide candidate targets;
Step 5, determine the same target across the associated cameras.
The beneficial effects of the invention are that the method is simple in its steps, convenient to implement, low in computational cost, and highly accurate, and the results it provides correctly describe the behavior of targets in the surveillance network.
Brief description of the drawings
Fig. 1 is a schematic diagram of the camera position associations in the surveillance network of the present invention.
Detailed description of the embodiments
The present invention is described in detail below with reference to the drawings and specific embodiments.
The present invention is a multi-camera semantic association target tracking method whose steps, in outline, comprise: Step 1, mark the proximity-associated cameras in the surveillance network; Step 2, detect moving targets and generate target chains; Step 3, generate the semantic features of the target chains; Step 4, according to the target of interest specified by the user, extract the target chains of the associated cameras, perform similarity matching, and provide the several most similar targets as candidates; Step 5, from the candidates, determine the target chains that belong to the same target as the target of interest of step 4, output the behavior features among their semantic features, and complete the multi-camera association tracking of the target.
The multi-camera semantic association target tracking method of the present invention is specifically implemented according to the following steps:
Step 1, define associations among the positions of the multiple cameras in the surveillance network
Number the positions of the multiple cameras in the monitored area; then build a camera association data table
$$R_k = \{\mathrm{camera}_1^k, \mathrm{camera}_2^k, \ldots, \mathrm{camera}_{m_k}^k\}, \quad k = 1, 2, \ldots, M;$$
where $R_k$ is the association table of the $k$-th camera, $M$ is the number of cameras in the area covered by the surveillance network, $\mathrm{camera}_q^k$ is the label of the $q$-th camera associated with the $k$-th camera, $q = 1, 2, \ldots, m_k$, and $m_k$ is the number of cameras associated with the $k$-th camera. Two cameras are associated, i.e., have a neighbor relation, when a road connects them.
Referring to Fig. 1, the embodiment contains 10 cameras, labeled ① ② ③ ... ⑩ for the first, second, third, ..., tenth camera respectively. The camera positions are numbered substantially from top to bottom and from left to right, with a few positions deliberately shuffled to simulate the numbering of cameras newly added to the network.
The associated data table R of the 1st camera 1=2,4}, namely the 1st camera associates with the 2nd, the 4th camera;
In like manner, the associated data table R of the 2nd camera 2={ 1,3}, shows that the 2nd camera associates with the 1st, the 3rd camera;
The associated data table R of the 3rd camera 3={ 2,6,8}, shows that the 3rd camera associates with the 2nd, the 6th, the 8th camera;
The associated data table R of the 4th camera 4={ 1,5}, shows that the 4th camera associates with the 1st, the 5th camera;
The associated data table R of the 5th camera 5={ 4,9}, shows that the 5th camera associates with the 4th, the 9th camera;
The associated data table R of the 6th camera 6={ 3,10}, shows that the 6th camera associates with the 3rd, the 10th camera;
The associated data table R of the 7th camera 7={ 8,10}, shows that the 7th camera associates with the 8th, the 10th camera;
The associated data table R of the 8th camera 8={ 3,7}, shows that the 8th camera associates with the 3rd, the 7th camera;
The associated data table R of the 9th camera 9={ 5}, shows that the 9th camera associates with the 5th camera;
The associated data table R of the 10th camera 10={ 6,7}, shows that the 10th camera associates with the 6th, the 7th camera;
This camera association data table is stored in a database in advance.
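For illustration, the association table above can be kept in memory as a simple adjacency map. Below is a minimal sketch in Python, assuming a plain dictionary stands in for the pre-stored database table; the names are illustrative, not prescribed by the patent.

```python
# Camera association data table R_k for the 10-camera example of Fig. 1,
# modeled as an adjacency map (a stand-in for the database table).
CAMERA_TABLE = {
    1: [2, 4],   2: [1, 3],   3: [2, 6, 8],  4: [1, 5],  5: [4, 9],
    6: [3, 10],  7: [8, 10],  8: [3, 7],     9: [5],    10: [6, 7],
}

def associated_cameras(k):
    """Return R_k, the labels of the cameras associated with camera k."""
    return CAMERA_TABLE.get(k, [])

# e.g. associated_cameras(3) -> [2, 6, 8]
```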
Step 2, detect moving targets and generate target chains
The surveillance video recorded by each camera in the network is segmented by recording time. Let the number of frames in a basic segment be $\Delta t$; in view of computational complexity, two hours of recorded video is preferably taken as one basic segment. From each basic segment, the moving-target chains are extracted as follows:
Let a basic video segment of the $k$-th camera in the network start at time $t_k$; after $\Delta t$ frames, the segment ends at time $t_k + \Delta t$ (for convenience of description, instants on the timeline are measured in frame counts).
2.1) Using background subtraction (described in any standard textbook or paper on the subject), detect the moving targets in every frame of the segment,
where $t$ is the frame index within the basic segment and $n_t$ is the number of targets detected in frame $t$.
Thus $n_t$ target connected components are detected in frame $t$; each target is represented by the top-left coordinate and the bottom-right coordinate of the minimum bounding rectangle of its connected component. The set formed by the moving-target connected components detected in frame $t$ is:
$$\{O_1^k(t), O_2^k(t), \ldots, O_{n_t}^k(t)\}, \quad t = t_k, t_k+1, \ldots, t_k+\Delta t;$$
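Step 2.1) only prescribes background subtraction generically; the sketch below, given as an assumption, uses OpenCV's MOG2 subtractor and connected-component statistics to produce the per-frame bounding rectangles. The subtractor choice and the minimum-area threshold are illustrative.

```python
import cv2

# Per-frame moving-target detection via background subtraction.
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

def detect_targets(frame, min_area=200):
    """Return the frame's target set as a list of minimum bounding
    rectangles [((xL, yL), (xR, yR)), ...]."""
    mask = subtractor.apply(frame)
    # MOG2 marks shadows as 127; keep only confident foreground (255).
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    n_labels, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    boxes = []
    for j in range(1, n_labels):          # label 0 is the background
        x, y, w, h, area = stats[j]
        if area >= min_area:              # discard small noise blobs
            boxes.append(((x, y), (x + w, y + h)))
    return boxes
```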
2.2) Initialization
Set $t = t_k$ and take the first frame of the basic video segment as the frame under processing. Given the target connected-component set of the first frame, the number $N_k$ of target chains of the segment is simply the number of targets detected in the first frame, i.e., $N_k = n_{t_k}$, and the length of each target chain is $l_i^k = 1$, $i = 1, 2, \ldots, N_k$.
2.3) Generate the target chains, regarding the connected components of adjacent frames with the maximum overlapping area as the same target. Let the target connected-component set of frame $t+1$ obtained by step 2.1) be $\{O_1^k(t+1), \ldots, O_{n_{t+1}}^k(t+1)\}$, and suppose $m$ of its connected components are identified with targets of frame $t$, while $n_{t+1} - m$ are not identified with any connected component of the previous frame. Then the number of target chains is updated to $N_k = N_k + (n_{t+1} - m)$, and the chain lengths $l_i^k$, $i = 1, 2, \ldots, N_k$, are updated accordingly: each matched chain's length is incremented by 1, and each newly created chain starts with length 1.
Loop back to step 2.3) with $t$ advanced by one frame, until the target chains in all basic video segments are completely generated, i.e., the $N_k$ target chains of each segment are obtained.
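Steps 2.2)–2.3) amount to a greedy frame-to-frame association by maximum overlap area. The following sketch shows the per-frame update of chain counts and lengths, under the assumption that a chain is represented as a small dictionary (an illustrative structure, not prescribed by the patent).

```python
def overlap(a, b):
    """Overlap area of two boxes given as ((xL, yL), (xR, yR))."""
    (axL, ayL), (axR, ayR) = a
    (bxL, byL), (bxR, byR) = b
    w = min(axR, bxR) - max(axL, bxL)
    h = min(ayR, byR) - max(ayL, byL)
    return w * h if w > 0 and h > 0 else 0

def update_chains(chains, boxes, t):
    """Step 2.3): extend each chain with the frame-t box of maximum
    overlap; unmatched boxes start new chains of length 1."""
    matched = set()
    for chain in chains:
        best_j, best_ov = None, 0
        for j, box in enumerate(boxes):
            if j in matched:
                continue
            ov = overlap(chain["last_box"], box)
            if ov > best_ov:
                best_j, best_ov = j, ov
        if best_j is not None:            # same target as previous frame
            chain["last_box"] = boxes[best_j]
            chain["length"] += 1          # matched chain grows by 1
            matched.add(best_j)
    for j, box in enumerate(boxes):       # the n_{t+1} - m new targets
        if j not in matched:
            chains.append({"start_frame": t, "last_box": box, "length": 1})
    return chains

# Usage over one basic segment: initialize from the first frame (step 2.2)),
# then fold in each subsequent frame.
# chains = [{"start_frame": t_k, "last_box": b, "length": 1} for b in first_boxes]
# for t, boxes in enumerate(later_frames_boxes, start=t_k + 1):
#     chains = update_chains(chains, boxes, t)
```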
Step 3, generate the semantic features of the target chains
Let the $N_k$ target chains obtained by step 2 be $Ch_i^k$, $i = 1, 2, \ldots, N_k$; their semantic features are defined as:
$$Ch_i^k = \{\mathrm{frame}_i^k, (x_{L_i}^k, y_{L_i}^k), (x_{R_i}^k, y_{R_i}^k), t_i^k, l_i^k, \mathrm{color}_i^k, \mathrm{action}_i^k\}, \quad i = 1, 2, \ldots, N_k,$$
where:
$(x_{L_i}^k, y_{L_i}^k)$ is the top-left coordinate of the minimum bounding rectangle of the target connected component in the start frame of the $i$-th target chain;
$(x_{R_i}^k, y_{R_i}^k)$ is the bottom-right coordinate of the minimum bounding rectangle of the target connected component in the start frame of the $i$-th target chain;
$t_i^k$ is the frame number of the start frame of the $i$-th target chain;
$l_i^k$ is the chain length of the $i$-th target chain;
$\mathrm{frame}_i^k$ is the key frame of the $i$-th target chain, taken as the target shown in the target connected component at a designated position along the chain;
$\mathrm{color}_i^k$ is the color histogram of the target connected component in the key frame of the $i$-th target chain (methods for computing color histograms are described in standard textbooks and papers);
$\mathrm{action}_i^k$ is the behavior parameter of the $i$-th target chain, described by a group of binary Boolean values. The seven preferred parameters respectively represent the character behaviors that security surveillance is required to describe, and are defined as follows:
3.1) Judging the squatting behavior
Based on the prior height-to-width ratio of the human figure: if the height-to-width ratio of the bounding rectangle is smaller than the ratio of a standing pose, the target is judged to be in a squatting posture; otherwise it is judged not to be squatting. The expression is as follows:
3.2) Judging the standing behavior
Determined from the change in position of the bounding rectangle from the start frame to the end frame of the target chain: if the change in position is negligible, the target is judged to be in a standing posture; otherwise it is judged not to be standing. The expression is as follows:
3.3) Judging the walking behavior
Determined from the change in position of the bounding rectangle from the start frame to the end frame of the target chain: if the position changes in a certain direction at a moderate speed, the target is judged to be walking; otherwise it is judged not to be in a walking state. The expression is as follows:
3.4) Judging the running behavior
Determined from the change in position of the bounding rectangle from the start frame to the end frame of the target chain: if the position changes in a certain direction at a faster speed, the target is judged to be running; otherwise it is judged not to be in a running state. The expression is as follows:
3.5) Judging the loitering behavior
Determined from the change in position of the bounding rectangle from the start frame to the end frame of the target chain: if the position changes back and forth, the target is judged to be loitering; otherwise it is judged not to be in a loitering state. The expression is as follows:
3.6) Judging the glancing-around behavior
Determined from the change, between the start frame and the end frame of the target chain, of the skin-color region of the target's head facing the lens (i.e., the degree to which the face is turned toward the camera): if the head skin-color region repeatedly grows and shrinks, the target is judged to be glancing around; otherwise it is judged not to be in a glancing-around state. The expression is as follows:
3.7) Judging the approaching behavior
Determined from the change in position of the bounding rectangle from the start frame to the end frame of the target chain relative to the bounding rectangles of other targets: if the position of its bounding rectangle gets closer and closer to that of another target, until the two bounding rectangles stick together, the target is judged to exhibit the behavior of approaching another target; otherwise it is judged not to be approaching another target. The expression is as follows:
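The Boolean expressions for steps 3.1)–3.7) appear in the original only as images that are not reproduced here, so the sketch below is an assumed reading that derives a few of the flags (squatting, standing, walking, running, loitering) from a chain's per-frame bounding boxes. All threshold values are illustrative; the glancing-around and approaching flags would additionally need the head skin-color region and the other targets' boxes.

```python
import numpy as np

def behavior_flags(boxes, stand_ratio=1.5, walk_speed=2.0, run_speed=8.0):
    """Derive Boolean behavior parameters from a chain's bounding boxes,
    each box given as ((xL, yL), (xR, yR))."""
    centers = np.array([((xL + xR) / 2.0, (yL + yR) / 2.0)
                        for (xL, yL), (xR, yR) in boxes])
    heights = np.array([yR - yL for (xL, yL), (xR, yR) in boxes])
    widths = np.array([xR - xL for (xL, yL), (xR, yR) in boxes])

    ratio = heights.mean() / widths.mean()        # height-to-width ratio
    steps = np.diff(centers, axis=0)              # per-frame displacement
    speed = np.linalg.norm(steps, axis=1).mean() if len(steps) else 0.0
    # Sign changes of horizontal motion approximate "walking back and forth".
    reversals = int(np.sum(np.diff(np.sign(steps[:, 0])) != 0)) if len(steps) > 1 else 0

    squat = ratio < stand_ratio                   # 3.1) low, wide box
    stand = (not squat) and speed < walk_speed    # 3.2) negligible motion
    walk = walk_speed <= speed < run_speed        # 3.3) steady motion
    run = speed >= run_speed                      # 3.4) faster motion
    loiter = reversals >= 4                       # 3.5) back-and-forth path
    return {"squat": squat, "stand": stand, "walk": walk,
            "run": run, "loiter": loiter}
```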
Step 4, perform multi-camera association matching for the target of interest specified by the user, and provide candidate targets
The user clicks in the video of the incident to determine the target of interest and, through human-computer interaction, specifies the time period of interest.
Then, according to the information specified by the user and the camera association data table provided by step 1, the basic video segments of the associated cameras within the time period of interest are retrieved. The key frames of all target chains in the qualifying basic video segments are matched by color histogram against the key frame of the target of interest, using the criterion of nearest Euclidean distance, and the key frames of the top $N_{top}$ candidate targets are output on the interface.
To keep the miss rate low while remaining convenient for the user to inspect, $N_{top} = 10$ is preferred.
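As a sketch of the matching in step 4, the snippet below computes normalized BGR color histograms for the key-frame patches and ranks candidates by Euclidean distance, returning the indices of the top $N_{top}$. The histogram bin counts are an illustrative assumption.

```python
import cv2
import numpy as np

def color_histogram(patch, bins=(8, 8, 8)):
    """Normalized color histogram of a BGR key-frame patch."""
    hist = cv2.calcHist([patch], [0, 1, 2], None, list(bins),
                        [0, 256, 0, 256, 0, 256]).flatten()
    return hist / (hist.sum() + 1e-9)

def top_candidates(query_patch, candidate_patches, n_top=10):
    """Indices of the n_top candidates nearest to the query in
    Euclidean distance between color histograms."""
    q = color_histogram(query_patch)
    dists = [np.linalg.norm(q - color_histogram(p)) for p in candidate_patches]
    return list(np.argsort(dists)[:n_top])
```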
Step 5, determine the same target across the associated cameras
Based on the matching results provided by step 4, the user confirms among the top $N_{top}$ candidate targets, yielding the appearances of the suspect target in the associated cameras, i.e., all the target chains matched in behavior traits, which completes the multi-camera association tracking of the target of interest.

Claims (7)

1. A multi-camera semantic association target tracking method, characterized in that it is specifically implemented according to the following steps:
Step 1, define associations among the positions of the multiple cameras in the surveillance network;
Step 2, detect moving targets and generate target chains;
Step 3, generate the semantic features of the target chains;
Step 4, perform multi-camera association matching for the target of interest specified by the user, and provide candidate targets;
Step 5, determine the same target across the associated cameras.
2. The multi-camera semantic association target tracking method according to claim 1, characterized in that step 1 is specifically implemented as follows:
number the positions of the multiple cameras in the monitored area; then build a camera association data table
$$R_k = \{\mathrm{camera}_1^k, \mathrm{camera}_2^k, \ldots, \mathrm{camera}_{m_k}^k\}, \quad k = 1, 2, \ldots, M;$$
where $R_k$ is the association table of the $k$-th camera, $M$ is the number of cameras in the area covered by the surveillance network, $\mathrm{camera}_q^k$ is the label of the $q$-th camera associated with the $k$-th camera, $q = 1, 2, \ldots, m_k$, and $m_k$ is the number of cameras associated with the $k$-th camera; this camera association data table is stored in a database in advance.
3. The multi-camera semantic association target tracking method according to claim 2, characterized in that the camera association data table is produced as follows:
the camera positions are numbered substantially from top to bottom and from left to right; then:
the association data table of the 1st camera is $R_1 = \{2, 4\}$, i.e., the 1st camera is associated with the 2nd and 4th cameras;
likewise, the association data table of the 2nd camera is $R_2 = \{1, 3\}$, showing that the 2nd camera is associated with the 1st and 3rd cameras;
the association data table of the 3rd camera is $R_3 = \{2, 6, 8\}$, showing that the 3rd camera is associated with the 2nd, 6th and 8th cameras;
the association data table of the 4th camera is $R_4 = \{1, 5\}$, showing that the 4th camera is associated with the 1st and 5th cameras;
the association data table of the 5th camera is $R_5 = \{4, 9\}$, showing that the 5th camera is associated with the 4th and 9th cameras;
the association data table of the 6th camera is $R_6 = \{3, 10\}$, showing that the 6th camera is associated with the 3rd and 10th cameras;
the association data table of the 7th camera is $R_7 = \{8, 10\}$, showing that the 7th camera is associated with the 8th and 10th cameras;
the association data table of the 8th camera is $R_8 = \{3, 7\}$, showing that the 8th camera is associated with the 3rd and 7th cameras;
the association data table of the 9th camera is $R_9 = \{5\}$, showing that the 9th camera is associated with the 5th camera;
the association data table of the 10th camera is $R_{10} = \{6, 7\}$, showing that the 10th camera is associated with the 6th and 7th cameras.
4. The multi-camera semantic association target tracking method according to claim 2, characterized in that step 2 is specifically implemented as follows:
the surveillance video recorded by each camera in the network is segmented by recording time; let the number of frames in a basic segment be $\Delta t$; from each basic segment, the moving-target chains are extracted as follows:
let a basic video segment of the $k$-th camera in the network start at time $t_k$; after $\Delta t$ frames, the segment ends at time $t_k + \Delta t$;
2.1) using background subtraction, detect the moving targets in every frame of the segment,
where $t$ is the frame index within the basic segment and $n_t$ is the number of targets detected in frame $t$;
thus $n_t$ target connected components are detected in frame $t$; each target is represented by the top-left coordinate and the bottom-right coordinate of the minimum bounding rectangle of its connected component, and the set formed by the moving-target connected components detected in frame $t$ is:
$$\{O_1^k(t), O_2^k(t), \ldots, O_{n_t}^k(t)\}, \quad t = t_k, t_k+1, \ldots, t_k+\Delta t;$$
2.2) initialization:
set $t = t_k$ and take the first frame of the basic video segment as the frame under processing; given the target connected-component set of the first frame, the number $N_k$ of target chains of the segment is simply the number of targets detected in the first frame, i.e., $N_k = n_{t_k}$, and the length of each target chain is $l_i^k = 1$, $i = 1, 2, \ldots, N_k$;
2.3) generate the target chains, regarding the connected components of adjacent frames with the maximum overlapping area as the same target: let the target connected-component set of frame $t+1$ obtained by step 2.1) be $\{O_1^k(t+1), \ldots, O_{n_{t+1}}^k(t+1)\}$, and suppose $m$ of its connected components are identified with targets of frame $t$, while $n_{t+1} - m$ are not identified with any connected component of the previous frame; then the number of target chains is updated to $N_k = N_k + (n_{t+1} - m)$, and the chain lengths $l_i^k$, $i = 1, 2, \ldots, N_k$, are updated accordingly: each matched chain's length is incremented by 1, and each newly created chain starts with length 1;
loop back to step 2.3) with $t$ advanced by one frame, until the target chains in all basic video segments are completely generated, i.e., the $N_k$ target chains of each segment are obtained.
5. The multi-camera semantic association target tracking method according to claim 4, characterized in that step 3 is specifically implemented as follows:
let the $N_k$ target chains obtained by step 2 be $Ch_i^k$, $i = 1, 2, \ldots, N_k$; their semantic features are defined as:
$$Ch_i^k = \{\mathrm{frame}_i^k, (x_{L_i}^k, y_{L_i}^k), (x_{R_i}^k, y_{R_i}^k), t_i^k, l_i^k, \mathrm{color}_i^k, \mathrm{action}_i^k\}, \quad i = 1, 2, \ldots, N_k,$$
where:
$(x_{L_i}^k, y_{L_i}^k)$ is the top-left coordinate of the minimum bounding rectangle of the target connected component in the start frame of the $i$-th target chain;
$(x_{R_i}^k, y_{R_i}^k)$ is the bottom-right coordinate of the minimum bounding rectangle of the target connected component in the start frame of the $i$-th target chain;
$t_i^k$ is the frame number of the start frame of the $i$-th target chain;
$l_i^k$ is the chain length of the $i$-th target chain;
$\mathrm{frame}_i^k$ is the key frame of the $i$-th target chain, taken as the target shown in the target connected component at a designated position along the chain;
$\mathrm{color}_i^k$ is the color histogram of the target connected component in the key frame of the $i$-th target chain;
$\mathrm{action}_i^k$ is the behavior parameter of the $i$-th target chain, described by a group of binary Boolean values; the seven preferred parameters respectively represent the character behaviors that security surveillance is required to describe, and are defined as follows:
3.1) judging the squatting behavior:
based on the prior height-to-width ratio of the human figure, if the height-to-width ratio of the bounding rectangle is smaller than the ratio of a standing pose, the target is judged to be in a squatting posture; otherwise it is judged not to be squatting; the expression is as follows:
3.2) judging the standing behavior:
determined from the change in position of the bounding rectangle from the start frame to the end frame of the target chain; if the change in position is negligible, the target is judged to be in a standing posture; otherwise it is judged not to be standing; the expression is as follows:
3.3) judging the walking behavior:
determined from the change in position of the bounding rectangle from the start frame to the end frame of the target chain; if the position changes in a certain direction at a moderate speed, the target is judged to be walking; otherwise it is judged not to be in a walking state; the expression is as follows:
3.4) judging the running behavior:
determined from the change in position of the bounding rectangle from the start frame to the end frame of the target chain; if the position changes in a certain direction at a faster speed, the target is judged to be running; otherwise it is judged not to be in a running state; the expression is as follows:
3.5) judging the loitering behavior:
determined from the change in position of the bounding rectangle from the start frame to the end frame of the target chain; if the position changes back and forth, the target is judged to be loitering; otherwise it is judged not to be in a loitering state; the expression is as follows:
3.6) judging the glancing-around behavior:
determined from the change, between the start frame and the end frame of the target chain, of the skin-color region of the target's head facing the lens (i.e., the degree to which the face is turned toward the camera); if the head skin-color region repeatedly grows and shrinks, the target is judged to be glancing around; otherwise it is judged not to be in a glancing-around state; the expression is as follows:
3.7) judging the approaching behavior:
determined from the change in position of the bounding rectangle from the start frame to the end frame of the target chain relative to the bounding rectangles of other targets; if the position of its bounding rectangle gets closer and closer to that of another target, until the two bounding rectangles stick together, the target is judged to exhibit the behavior of approaching another target; otherwise it is judged not to be approaching another target; the expression is as follows:
6. The multi-camera semantic association target tracking method according to claim 5, characterized in that step 4 is specifically implemented as follows:
the user clicks in the video of the incident to determine the target of interest and, through human-computer interaction, specifies the time period of interest;
then, according to the information specified by the user and the camera association data table provided by step 1, the basic video segments of the associated cameras within the time period of interest are retrieved; the key frames of all target chains in the qualifying basic video segments are matched by color histogram against the key frame of the target of interest, using the criterion of nearest Euclidean distance, and the key frames of the top $N_{top}$ candidate targets are output on the interface.
7. The multi-camera semantic association target tracking method according to claim 6, characterized in that in step 5, based on the matching results provided by step 4, the user confirms among the top $N_{top}$ candidate targets, yielding the appearances of the suspect target in the associated cameras, i.e., all the target chains matched in behavior traits, which completes the multi-camera association tracking of the target of interest.
CN201510324658.2A 2015-06-12 2015-06-12 Multi-camera semantic association target tracking method Expired - Fee Related CN104954743B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510324658.2A CN104954743B (en) 2015-06-12 2015-06-12 Multi-camera semantic association target tracking method


Publications (2)

Publication Number Publication Date
CN104954743A (en) 2015-09-30
CN104954743B CN104954743B (en) 2017-11-28

Family

ID=54169042

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510324658.2A Expired - Fee Related CN104954743B (en) 2015-06-12 2015-06-12 A kind of polyphaser semantic association method for tracking target

Country Status (1)

Country Link
CN (1) CN104954743B (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100238351A1 (en) * 2009-03-13 2010-09-23 Eyal Shamur Scene recognition methods for virtual insertions
CN101616309A (en) * 2009-07-16 2009-12-30 上海交通大学 Non-overlapping visual field multiple-camera human body target tracking method
US20130088592A1 (en) * 2011-09-30 2013-04-11 OOO "ITV Group" Method for searching for objects in video data received from a fixed camera
CN102436662A (en) * 2011-11-29 2012-05-02 南京信息工程大学 Human body target tracking method in nonoverlapping vision field multi-camera network
CN104123732A (en) * 2014-07-14 2014-10-29 中国科学院信息工程研究所 Online target tracking method and system based on multiple cameras

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
李志华 et al.: "Continuous target tracking based on multiple cameras", Journal of Electronic Measurement and Instrumentation *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018035667A1 (en) * 2016-08-22 2018-03-01 深圳前海达闼云端智能科技有限公司 Display method and apparatus, electronic device, computer program product, and non-transient computer readable storage medium
CN108986143A (en) * 2018-08-17 2018-12-11 浙江捷尚视觉科技股份有限公司 Target detection tracking method in a kind of video
CN108986143B (en) * 2018-08-17 2022-05-03 浙江捷尚视觉科技股份有限公司 Target detection tracking method in video
CN112268554A (en) * 2020-09-16 2021-01-26 四川天翼网络服务有限公司 Regional range loitering detection method and system based on path trajectory analysis
CN112764635A (en) * 2021-01-27 2021-05-07 浙江大华技术股份有限公司 Display method and device, computer equipment and storage medium
CN112764635B (en) * 2021-01-27 2022-07-08 浙江大华技术股份有限公司 Display method and device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN104954743B (en) 2017-11-28

Similar Documents

Publication Publication Date Title
Leal-Taixé et al. Learning an image-based motion context for multiple people tracking
CN110706247B Target tracking method, device and system
CN107169411B Real-time dynamic gesture recognition method based on key frames and boundary-constrained DTW
CN104809387B Contactless unlocking method and device based on video image gesture recognition
CN104954743A Multi-camera semantic association target tracking method
CN106934817B Multi-attribute-based multi-target tracking method and device
CN102855461B Method and apparatus for detecting fingers in an image
CN107424161B Coarse-to-fine indoor scene image layout estimation method
CN104932804B Intelligent virtual assembly action recognition method
CN101770568A Automatic target recognition and tracking method based on affine-invariant points and optical flow computation
CN101098465A Moving object detection and tracking method in video surveillance
CN103034860A Scale-invariant feature transform (SIFT) based illegal building detection method
CN102629385A Object matching and tracking system based on multi-camera information fusion and method thereof
CN101339664A Object tracking method and system
KR101762010B1 Method of modeling a video-based interactive activity using the skeleton posture dataset
CN101477618B Method for automatic periodic extraction of pedestrian gait from video
CN106295532A Human motion recognition method in video images
CN110569855A Long-term target tracking algorithm based on the fusion of correlation filtering and feature-point matching
CN107274679A Vehicle identification method, device, equipment and computer-readable storage medium
KR101866381B1 Apparatus and method for pedestrian detection using a deformable part model
CN111105443A Video group figure motion trajectory tracking method based on feature association
Chen et al. A precise information extraction algorithm for lane lines
CN106056627A Robust object tracking method based on local discriminative sparse representation
CN112989889A Gait recognition method based on posture guidance
CN107391365A Hybrid feature selection method for software fault prediction

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
Granted publication date: 20171128
Termination date: 20200612