CN101354787B - Intelligent method for extracting target movement track characteristics in vision monitoring searches - Google Patents


Info

Publication number
CN101354787B
Authority
CN
China
Prior art keywords
target
vector
target trajectory
flow vector
sequence
Prior art date
Legal status
Expired - Fee Related
Application number
CN2008101202062A
Other languages
Chinese (zh)
Other versions
CN101354787A (en)
Inventor
陈耀武
曲琳
孟旭炯
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University (ZJU)
Priority to CN2008101202062A
Publication of CN101354787A
Application granted
Publication of CN101354787B

Abstract

The invention discloses a method for extracting target motion trajectory features in intelligent visual surveillance retrieval. The method comprises the following steps: a target motion trajectory is acquired and described by a two-dimensional spatial coordinate sequence; from this coordinate sequence, the horizontal and vertical components of the target's motion direction at each sample are calculated; the target's two-dimensional coordinates at each sample are combined with the direction components at that sample to form the target's flow vector at each sample; the trajectory is then described by the resulting flow vector sequence; a pre-built reference vector set is read; and the distances from the trajectory's flow vector sequence to each reference vector are computed and taken as the feature vector of the trajectory. By describing trajectories with flow vector sequences and extracting feature vectors from them, the method ensures that the trajectory description contains position and direction information simultaneously, thereby avoiding measurement error.

Description

Method for extracting target motion trajectory features in intelligent visual surveillance retrieval
Technical field
The present invention relates to the field of intelligent visual surveillance, and in particular to a method for extracting target motion trajectory features in intelligent visual surveillance retrieval.
Background technology
With improvements in computing power, the development of audio/video codec, Internet, network multimedia, and high-capacity storage technologies, and growing demand from industries such as security, finance, and education, video surveillance technology has developed rapidly. An intelligent visual surveillance system can observe and analyze the content of a monitored scene: with little or no human intervention, it automatically analyzes video sequences recorded by multiple cameras, performing visual monitoring tasks in place of a human operator.
Intelligent visual surveillance covers target detection, target classification, target tracking, target behavior analysis and understanding, and surveillance retrieval. Target detection separates moving targets in the scene from the background, filtering out static information and extracting the moving regions. Target classification assigns each detected moving region to a particular object category, distinguishing the moving target that each region represents. Target tracking selects features that uniquely characterize a target and searches subsequent frames for the best-matching target position, establishing target correspondence across consecutive video frames. Behavior analysis and understanding is the advanced stage of intelligent visual surveillance, covering behavior anomaly detection, behavior understanding, and natural-language description of behavior. Surveillance retrieval indexes and retrieves the visual and motion features of moving targets, using both content-based and semantics-based retrieval methods.
How to describe motion information, and thereby enable semantic retrieval of moving-target behavior, is one of the key problems of intelligent visual surveillance systems. A semantic retrieval method for moving-target behavior must solve the following problems. First, accurately extract the motion features of the monitored target and convert them into a form the retrieval algorithm can readily process. Second, reduce computational complexity as far as possible while preserving retrieval accuracy. Finally, discover the high-level behavioral semantics contained in the extracted low-level features, bridging the "semantic gap" between high-level semantics and low-level features, to realize semantic retrieval for the visual surveillance system.
Relevance feedback is an effective scheme for bridging the "semantic gap" between high-level semantics and low-level features in retrieval systems, and is now widely used in content-based text and image retrieval, where it has been shown to significantly improve retrieval performance. Relevance feedback brings the user into the retrieval loop, turning a one-shot query into an interactive process. In each round, initial retrieval results are obtained in some way; the user, according to their own information need, merely judges whether (or to what degree) the current results are relevant; and the system learns from this feedback and returns new results. Repeating this process yields progressively better query results.
Summary of the invention
The invention provides a method for extracting target motion trajectory features in intelligent visual surveillance retrieval that reduces the computational complexity of retrieval and discovers high-level behavioral semantics.
A method for extracting target motion trajectory features in intelligent visual surveillance retrieval comprises the following steps:
(1) Acquire the target motion trajectory and, after filtering noise, describe the acquired trajectory by a two-dimensional spatial coordinate sequence.
The trajectory is obtained by a target tracking algorithm, which samples the image sequence containing the moving target at equal time intervals (Δt). Each sample yields the target's centroid coordinates; connecting the centroids at successive instants in order produces a polyline, which is the target motion trajectory.
Noise is filtered by moving-average filtering.
(2) From the two-dimensional coordinate sequence describing the trajectory, calculate the horizontal and vertical components of the target's motion direction at each sample.
(3) Combine the target's two-dimensional coordinates at each sample with the direction components at that sample to form the target's flow vector at each sample.
The position attributes of each flow vector may be normalized, scaling the two components of the two-dimensional coordinates into a common normalized interval, which makes subsequent retrieval more accurate.
(4) Combine the flow vectors of all samples into a flow vector sequence describing the trajectory.
Because the flow vector sequence contains both position and direction information, measurement error is avoided.
(5) Read the pre-built reference vector set.
The main purpose of the reference vector set is to make the trajectory feature vectors equal in length, which simplifies computation.
The reference vector set can be built in many ways; the following form is preferred:
a. Periodically read all flow vector sequences describing trajectories stored in the trajectory database;
b. Apply fuzzy C-means clustering to all flow vectors in these sequences to obtain several clusters, taking the center of each cluster as a flow vector pattern;
c. Take all the resulting flow vector patterns as the reference vector set.
(6) Calculate the distance from the trajectory's flow vector sequence to each reference vector.
The distance from a flow vector sequence to a given reference vector is the minimum, over all flow vectors in the sequence, of the distance from the flow vector to that reference vector.
(7) Take the set of distances from the trajectory's flow vector sequence to each reference vector as the feature vector of the trajectory.
The inventive method describes trajectories by flow vector sequences and extracts feature vectors from them, so the trajectory description contains position and direction information simultaneously, avoiding measurement error.
Using the reference vector set to extract trajectory feature vectors speeds up the computation of inter-trajectory distances and makes the feature vectors of all trajectories the same length.
Description of drawings
Fig. 1 is a schematic diagram of the intelligent visual surveillance retrieval module structure of the present invention;
Fig. 2 is a flow chart of the target trajectory feature extraction method of the present invention;
Fig. 3 is a flow chart of the interactive retrieval process of the intelligent visual surveillance of the present invention.
Embodiment
As shown in Fig. 1, an intelligent visual surveillance retrieval module structure comprises:
A target segmentation and tracking module 110: samples the image sequence containing the moving target and generates target images and target motion trajectories.
A trajectory preprocessing module 120: uses moving-average filtering to remove noise from the target trajectory, smoothing it, and generates the two-dimensional spatial coordinate sequence describing the trajectory.
A trajectory feature extraction module 130: from the coordinate sequence describing the trajectory, generates the flow vector sequence describing the trajectory and, together with the pre-built reference vector set, generates the trajectory's feature vector. It also periodically reads, via the access module 150, all flow vector sequences describing trajectories from the surveillance database 160 and rebuilds the reference vector set.
An image feature extraction module 140: extracts the target color feature vector from the target image.
An access module 150: on one hand, merges the trajectory feature vector and the color feature vector into the target's feature vector, then stores the target image, the trajectory, and the flow vector sequence describing the trajectory in the surveillance database 160; on the other hand, reads all flow vector sequences describing trajectories from the surveillance database 160 and provides them to the trajectory feature extraction module 130 for building the reference vector set.
A surveillance database 160: stores target feature vectors, target images, trajectories, and the flow vector sequences describing target motion, for reading by the interactive retrieval module 170 and the access module 150.
An interactive retrieval module 170: reads the content stored in the surveillance database 160 and guides the user through the interactive retrieval process.
The user initiates interactive retrieval and labels the targets returned during retrieval.
The method by which the above module structure extracts target trajectory features comprises the following steps:
(1) Acquire the target motion trajectory. The trajectory is obtained by a target tracking algorithm, which samples the image sequence containing the moving target at equal time intervals (Δt). Each sample yields the target's centroid coordinates; connecting the centroids at successive instants in order produces a polyline, which is the target motion trajectory.
The trajectory is then noise-filtered by moving-average filtering, and the resulting trajectory is described by a two-dimensional spatial coordinate sequence, where the coordinate sequence t_j of the j-th target motion trajectory is:
t_j = {(x_1, y_1), (x_2, y_2), ..., (x_l, y_l)}
where j = 1, 2, ...; (x_i, y_i) is the two-dimensional coordinate of the target centroid at the i-th sample of the j-th trajectory, i = 1, 2, ..., l; and l is the number of consecutive samples during the target's motion.
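The sampling and smoothing described above can be sketched as follows. This is a minimal illustration, not the patented implementation; the window size of 3 is an assumption, since the patent does not specify one:

```python
def moving_average_smooth(trajectory, window=3):
    """Smooth a trajectory of (x, y) centroid samples with a simple
    moving-average filter. window=3 is an assumed (unspecified) size;
    the window is truncated at the trajectory's endpoints."""
    half = window // 2
    smoothed = []
    for i in range(len(trajectory)):
        lo, hi = max(0, i - half), min(len(trajectory), i + half + 1)
        xs = [p[0] for p in trajectory[lo:hi]]
        ys = [p[1] for p in trajectory[lo:hi]]
        smoothed.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return smoothed

# A noisy polyline of centroid coordinates sampled at equal intervals Δt
t_j = [(0.0, 0.0), (1.0, 0.2), (2.0, -0.1), (3.0, 0.1), (4.0, 0.0)]
smooth = moving_average_smooth(t_j)
```

The smoothed sequence has the same length l as the input, so it can directly replace the raw coordinate sequence in the later steps.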
(2) For each target motion trajectory, calculate the horizontal and vertical components of the target's motion direction at each sample, where the horizontal component dx_i and vertical component dy_i at the i-th sample of the j-th trajectory are:
dx_i = (x_{i+1} - x_i) / [(x_{i+1} - x_i)^2 + (y_{i+1} - y_i)^2]^{1/2}
dy_i = (y_{i+1} - y_i) / [(x_{i+1} - x_i)^2 + (y_{i+1} - y_i)^2]^{1/2}
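The direction components above are simply the unit vector pointing from each sample to the next, and can be sketched as (assuming consecutive samples are distinct, so the denominator is nonzero):

```python
import math

def direction_components(traj):
    """Unit-vector direction components (dx_i, dy_i) between consecutive
    samples, matching the dx_i/dy_i formulas above. Assumes consecutive
    samples differ, so the normalizing distance is nonzero."""
    comps = []
    for (x0, y0), (x1, y1) in zip(traj, traj[1:]):
        norm = math.hypot(x1 - x0, y1 - y0)  # [(Δx)^2 + (Δy)^2]^(1/2)
        comps.append(((x1 - x0) / norm, (y1 - y0) / norm))
    return comps

traj = [(0.0, 0.0), (3.0, 4.0), (3.0, 6.0)]
dirs = direction_components(traj)  # [(0.6, 0.8), (0.0, 1.0)]
```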
(3) Replace the spatial coordinates describing the target at each sample with a flow vector. The position attributes of each flow vector are normalized, scaling the two coordinate components x_i and y_i proportionally into a common normalized interval. The flow vector f_i of the target at the i-th sample of the j-th trajectory is:
f_i = {x_i, y_i, dx_i, dy_i}
(4) Replace the coordinate sequence describing each target motion trajectory with a flow vector sequence, where the flow vector sequence F_j of the j-th trajectory is:
F_j = {f_1, f_2, ..., f_l}
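Forming the flow vector sequence from positions and direction components can be sketched as follows. Note that only samples with a successor have a direction component, so this sketch produces l-1 flow vectors; how the final sample is handled is not spelled out in the patent, and dropping it here is an assumption:

```python
# positions (x_i, y_i), already normalized, and precomputed direction
# components (dx_i, dy_i) from step (2)
traj = [(0.1, 0.2), (0.4, 0.6), (0.4, 0.9)]
dirs = [(0.6, 0.8), (0.0, 1.0)]

# f_i = (x_i, y_i, dx_i, dy_i); zip stops at the shorter list, so the
# last position (which has no direction) is dropped -- an assumption
F_j = [(x, y, dx, dy) for (x, y), (dx, dy) in zip(traj, dirs)]
```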
(5) Read the pre-built reference vector set P:
P = {p_1, p_2, ..., p_m}
where m is the number of reference vectors.
The reference vector set is built as follows:
a. Periodically read all flow vector sequences describing trajectories stored in the trajectory database;
b. Apply fuzzy C-means clustering to all flow vectors in these sequences to obtain m clusters, taking the center of each cluster as a flow vector pattern;
c. Take the m resulting flow vector patterns as the reference vector set.
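Step b above can be sketched with a minimal fuzzy C-means implementation. The patent only names the algorithm; the fuzzifier value, iteration count, and random initialization below are all assumptions:

```python
import math, random

def fuzzy_c_means(points, m_clusters, fuzzifier=2.0, iters=30, seed=0):
    """Minimal fuzzy C-means sketch: cluster 4-D flow vectors and return
    the m cluster centers as the reference vector set. fuzzifier=2.0,
    iters=30 and the random init are assumptions."""
    rng = random.Random(seed)
    dim = len(points[0])
    # random initial membership matrix; each row normalized to sum to 1
    u = [[rng.random() for _ in range(m_clusters)] for _ in points]
    u = [[v / sum(row) for v in row] for row in u]
    for _ in range(iters):
        # update centers: mean weighted by membership^fuzzifier
        centers = []
        for k in range(m_clusters):
            w = [u[i][k] ** fuzzifier for i in range(len(points))]
            total = sum(w)
            centers.append([sum(w[i] * points[i][d] for i in range(len(points))) / total
                            for d in range(dim)])
        # update memberships from distances to the new centers
        for i, p in enumerate(points):
            dists = [max(math.dist(p, c), 1e-12) for c in centers]
            for k in range(m_clusters):
                u[i][k] = 1.0 / sum((dists[k] / dists[j]) ** (2.0 / (fuzzifier - 1.0))
                                    for j in range(m_clusters))
    return centers

# flow vectors pooled from all stored trajectories (toy data, two clear groups)
flow_vectors = [(0.0, 0.0, 1.0, 0.0), (0.1, 0.0, 1.0, 0.0),
                (5.0, 5.0, 0.0, 1.0), (5.1, 5.0, 0.0, 1.0)]
reference_set = fuzzy_c_means(flow_vectors, m_clusters=2)
```

Each returned center is one flow vector pattern p_k, so the list as a whole plays the role of the reference vector set P.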
(6) Calculate the distance from the flow vector sequence of each target motion trajectory to each reference vector, where the distance d(p_k, F_j) from the j-th trajectory to the k-th reference vector is:
d(p_k, F_j) = min{dist(p_k, f) : f ∈ F_j},  k = 1, 2, ..., m
that is, the minimum, over the target flow vectors at all samples of the j-th trajectory, of the distance to the k-th reference vector.
(7) Take the set of distances from each trajectory to every reference vector as the feature vector of that trajectory, where the feature vector V_j of the j-th trajectory is:
V_j = {d(p_1, F_j), d(p_2, F_j), ..., d(p_m, F_j)}.
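Steps (6) and (7) can be sketched together; using the Euclidean distance for dist(p_k, f) is an assumption consistent with the formulas above:

```python
import math

def trajectory_feature_vector(F_j, reference_set):
    """d(p_k, F_j) = min over f in F_j of dist(p_k, f); the feature
    vector V_j collects these distances for every reference vector p_k,
    so it always has length m regardless of trajectory length."""
    return [min(math.dist(p_k, f) for f in F_j) for p_k in reference_set]

F_j = [(0.0, 0.0, 1.0, 0.0), (0.5, 0.5, 0.0, 1.0)]  # flow vector sequence
P = [(0.0, 0.0, 1.0, 0.0), (1.0, 1.0, 0.0, 1.0)]    # reference vectors
V_j = trajectory_feature_vector(F_j, P)
```

Because every trajectory is reduced to m distances, feature vectors of trajectories with different sample counts l can be compared directly, which is the stated purpose of the reference vector set.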
After the trajectory feature vectors have been extracted, interactive retrieval is finally performed as follows:
(1) Randomly select k targets from the surveillance database 160 as targets to be labeled, and return them to the user;
(2) The user judges whether each returned target's feature vector is relevant to the query semantics, and labels it accordingly;
(3) Train an SVM classifier with the user-labeled target feature vectors;
(4) Use the trained SVM classifier to compute the distance from every target feature vector in the surveillance database 160 to the classification hyperplane:
f(x) = Σ_{i=1}^{l} α_i y_i K(x_i, x) + b
where K(x_i, x) is the kernel function, y_i is the class label of x_i, and α_i and b are parameters obtained by training.
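Evaluating the decision function f(x) above can be sketched as follows. The RBF kernel, its gamma value, and the toy "trained" parameters are all assumptions; the patent fixes neither the kernel nor the training procedure:

```python
import math

def decision_value(x, support_vectors, labels, alphas, b, gamma=0.5):
    """f(x) = sum_i alpha_i * y_i * K(x_i, x) + b for a trained SVM.
    K is an assumed RBF kernel exp(-gamma * ||x_i - x||^2)."""
    def K(xi, xj):
        return math.exp(-gamma * sum((a - c) ** 2 for a, c in zip(xi, xj)))
    return sum(a * y * K(sv, x)
               for sv, y, a in zip(support_vectors, labels, alphas)) + b

# toy "trained" parameters: two support vectors with opposite labels
svs = [(0.0, 0.0), (2.0, 2.0)]
ys = [1, -1]
alphas = [1.0, 1.0]
b = 0.0
score = decision_value((0.1, 0.0), svs, ys, alphas, b)
```

In the retrieval loop, targets with large f(x) are returned as results (confidently relevant), while targets with small |f(x)| lie near the hyperplane and are the most informative ones to ask the user to label next.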
a. Select the N targets with the largest f(x) as retrieval results and return them to the user;
b. Select the k targets with the smallest |f(x)| and return them to the user as this round's targets to be labeled;
c. Repeat steps (2) to (4) until the user stops the query.
The targets used in the interactive visual surveillance retrieval process comprise two parts, a target trajectory and a target image, and a target's feature vector is composed of the trajectory feature vector and the target color feature vector. The color feature vector is described by an HSV color histogram of 256 bins, in which the Hue component is quantized into 8 bins, the Saturation component into 8 bins, and the Value component into 4 bins.
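The 8x8x4 HSV quantization can be sketched as follows. The patent gives only the per-channel bin counts; the bin ordering (H major, then S, then V) is an assumption:

```python
import colorsys

def hsv_histogram(pixels):
    """256-bin HSV histogram: H quantized to 8 bins, S to 8, V to 4.
    Bin index = h_bin * 32 + s_bin * 4 + v_bin -- the ordering is an
    assumption; the patent only fixes the per-channel quantization."""
    hist = [0] * 256
    for r, g, b in pixels:  # RGB values in [0, 1]
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        h_bin = min(int(h * 8), 7)
        s_bin = min(int(s * 8), 7)
        v_bin = min(int(v * 4), 3)
        hist[h_bin * 32 + s_bin * 4 + v_bin] += 1
    return hist

# tiny "target image": two pure-red pixels and one pure-green pixel
pixels = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (1.0, 0.0, 0.0)]
hist = hsv_histogram(pixels)
```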

Claims (4)

1. A method for extracting target motion trajectory features in intelligent visual surveillance retrieval, comprising the following steps:
(1) acquiring the target motion trajectory and, after filtering noise, describing the acquired trajectory by a two-dimensional spatial coordinate sequence;
(2) according to the two-dimensional coordinate sequence describing the trajectory, calculating the horizontal and vertical components of the target's motion direction at each sample;
(3) combining the target's two-dimensional coordinates at each sample with the direction components at that sample to form the target's flow vector at each sample;
(4) combining the flow vectors of all samples into a flow vector sequence describing the trajectory;
(5) reading a pre-built reference vector set;
wherein building the reference vector set in step (5) comprises the following steps:
a. periodically reading all flow vector sequences describing trajectories stored in the trajectory database;
b. applying fuzzy C-means clustering to all flow vectors in these sequences to obtain several clusters, taking the center of each cluster as a flow vector pattern;
c. taking all the resulting flow vector patterns as the reference vector set;
(6) calculating the distance from the trajectory's flow vector sequence to each reference vector;
wherein the distance from the flow vector sequence describing the trajectory to a reference vector is the minimum, over all flow vectors in the sequence, of the distance from the flow vector to that reference vector;
(7) taking the set of distances from the trajectory's flow vector sequence to each reference vector as the feature vector of the trajectory.
2. the method for claim 1, it is characterized in that: the target trajectory in the described step (1) obtains by target tracking algorism.
3. the method for claim 1 is characterized in that: the method employing moving average filtering of the filtering noise in the described step (1).
4. the method for claim 1 is characterized in that: comprise that the position attribution to the flow vector of target in the each sampling in the step (3) carries out normalized transformation, two component equal proportions of two-dimensional space coordinate are zoomed to
Figure F2008101202062C00011
In.
CN2008101202062A 2008-08-18 2008-08-18 Intelligent method for extracting target movement track characteristics in vision monitoring searches Expired - Fee Related CN101354787B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2008101202062A CN101354787B (en) 2008-08-18 2008-08-18 Intelligent method for extracting target movement track characteristics in vision monitoring searches


Publications (2)

Publication Number Publication Date
CN101354787A CN101354787A (en) 2009-01-28
CN101354787B 2010-06-02

Family

ID=40307587

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2008101202062A Expired - Fee Related CN101354787B (en) 2008-08-18 2008-08-18 Intelligent method for extracting target movement track characteristics in vision monitoring searches

Country Status (1)

Country Link
CN (1) CN101354787B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101916374B (en) * 2010-08-20 2012-10-03 浙江大学 Characteristic selection method based on tracking time prediction
CN103838750A (en) * 2012-11-23 2014-06-04 苏州千视通信科技有限公司 Video investigation technique for obtaining vehicle features based on video retrieval abstract
CN103853725B (en) * 2012-11-29 2017-09-08 深圳先进技术研究院 A kind of traffic track data noise-reduction method and system
CN103020275A (en) * 2012-12-28 2013-04-03 苏州千视通信科技有限公司 Video analysis method based on video abstraction and video retrieval
CN104034316B (en) * 2013-03-06 2018-02-06 深圳先进技术研究院 A kind of space-location method based on video analysis
CN109190656B (en) * 2018-07-16 2020-07-21 浙江大学 Indoor semantic track marking and complementing method under low-sampling positioning environment
CN110070560B (en) * 2019-03-20 2021-12-17 西安理工大学 Object motion direction identification method based on target detection
CN113537323B (en) * 2021-07-02 2023-11-07 香港理工大学深圳研究院 Indoor track error assessment method based on LSTM neural network

Citations (2)

Publication number Priority date Publication date Assignee Title
JP2006129272A (en) * 2004-10-29 2006-05-18 Olympus Corp Camera, tracking apparatus, tracking method, and tracking program
CN101123721A (en) * 2007-09-30 2008-02-13 湖北东润科技有限公司 An intelligent video monitoring system and its monitoring method


Non-Patent Citations (2)

Title
Qiao Chuanbiao, Wang Suyu, Zhuo Li, Shen Lansun. Target detection and tracking techniques in intelligent visual surveillance. Measurement & Control Technology, 2008, 27(5): 22-24.

Also Published As

Publication number Publication date
CN101354787A (en) 2009-01-28

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20100602

Termination date: 20200818