CN104200495B - Multi-target tracking method for video surveillance - Google Patents

Multi-target tracking method for video surveillance

Info

Publication number
CN104200495B
CN104200495B (application CN201410497957.1A)
Authority
CN
China
Prior art keywords
asift
target
characteristic
vector
target area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410497957.1A
Other languages
Chinese (zh)
Other versions
CN104200495A (en)
Inventor
杨丰瑞
窦绍宾
吴翠先
刘欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CHONGQING XINKE DESIGN Co Ltd
Original Assignee
CHONGQING XINKE DESIGN Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CHONGQING XINKE DESIGN Co Ltd filed Critical CHONGQING XINKE DESIGN Co Ltd
Priority to CN201410497957.1A priority Critical patent/CN104200495B/en
Publication of CN104200495A publication Critical patent/CN104200495A/en
Application granted granted Critical
Publication of CN104200495B publication Critical patent/CN104200495B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a video target tracking method that fuses ASIFT features with particle filtering, belonging to the technical fields of video information processing and pattern recognition. The method comprises the steps of: obtaining the moving target in the video sequence using the adjacent-frame difference method; establishing a tracking target model from the obtained complete target region and building the ASIFT feature vector of the target model; predicting the candidate-region target with particle filtering and building the ASIFT feature vector of the candidate target model; matching the tracking target feature vector against the candidate-region target feature vector; rejecting false matches with the RANSAC algorithm; and updating the target model to realize target tracking. The invention can track targets quickly and accurately under brightness changes and occlusion, and has good real-time performance and robustness.

Description

Multi-target tracking method for video surveillance
Technical field
The invention belongs to the technical fields of video information processing and pattern recognition, and specifically relates to a multi-target tracking method for video surveillance.
Background technology
Target tracking has always been a foundation of machine vision, artificial intelligence, and pattern recognition. It is widely applied in fields such as navigation and positioning, military guidance, and security surveillance.
Target tracking means finding a moving target of interest over a sequence of images using known target position information and target continuity. Many target tracking methods exist for video surveillance, such as tracking based on particle filtering, tracking based on Mean Shift, and target tracking based on Kalman filtering. However, these traditional methods easily lose the target or suffer tracking-window drift when the target is occluded, which causes tracking failure.
Multi-target tracking of video images based on feature points offers higher robustness. For example, tracking based on SIFT (Scale Invariant Feature Transform) features can still identify a target stably when the target undergoes rotation, scale change, or brightness change. However, SIFT is not strongly affine-invariant and its matching precision is limited, so targets with large deformation are easily lost. Moreover, the method also has shortcomings in real-time performance.
The content of the invention
In view of the above deficiencies of the prior art, the object of the invention is to provide a multi-target tracking method for video surveillance: the target model is described with affine scale-invariant feature transform (Affine-SIFT, ASIFT) features, the moving target is then searched with the particle filter method, feature matching of the target region is finally performed with an improved ASIFT matching algorithm, and the target model is updated to realize target tracking. This improves tracking accuracy, robustness, and real-time performance under illumination changes and target occlusion.
The multi-target tracking method for video surveillance of the invention detects the moving target with the adjacent-frame difference method, establishes a tracking target model for the detected moving target, and builds the ASIFT feature vector of the target model; it predicts the candidate-region target with a particle filter and builds the ASIFT feature vector of the candidate target model; it matches the tracking target feature vector against the candidate-region target feature vector, rejects false matches with the RANSAC algorithm, and updates the target model to realize target tracking. The method comprises the following steps:
Step A: Read the initial frame of the video image and detect the moving target in the video sequence using the adjacent-frame difference method.
Read the initial frame and compute the difference of the corresponding pixel values of two adjacent frames in the video:
D_k(x, y) = |f_k(x, y) − f_{k+1}(x, y)|
D_k = 0 if D_k < T_0;  D_k = 1 if D_k ≥ T_0
where f_k(x, y) is the current frame image and f_{k+1}(x, y) is the next frame adjacent to the current frame; D_k is the absolute difference of the two frame images, and D_k = 1 marks the motion target region; here T_0 = 0.7.
Step B: Build the affine scale-invariant feature transform (Affine-SIFT, ASIFT) feature vector A of the tracking target model. The concrete steps are:
Step B1: Sample the affine transformation parameters of the motion target region. The latitude angle θ is sampled so that the associated tilt t = |1/cos θ| follows a geometric sequence 1, a, a², ..., aⁿ with a > 1 (here a = √2 and n = 5, the standard ASIFT choice); the longitude angle φ is sampled arithmetically: 0, b/t, ..., kb/t, where b = 72° and k is the last integer satisfying kb/t < 180°;
Step B2: Apply the affine transformation to the motion target region using the obtained parameter sequence; in the standard ASIFT formulation this simulation is
I′ = T_t R(φ) I
where I is the motion target region, R(φ) is the rotation by the longitude angle φ, T_t = diag(t, 1) is the tilt operator with t = |1/cos θ|, and I′ is the motion target region after the affine transformation;
Step B3: Perform ASIFT feature point detection on the motion target region after the affine simulation;
Step B4: Describe the feature points of the motion target region as vectors, building 128-dimensional ASIFT feature vectors;
Step B5: Reduce the ASIFT feature vectors spatially with principal component analysis (PCA) to obtain feature vector A;
Step C: Read the next frame image;
Step D: Predict the candidate-region target in the image read in step C using the particle filter method, and build the ASIFT feature vector B of the candidate target model. The concrete steps are:
D1: For the motion target region, randomly select M particle samples from the probability sample set of the previous frame at time t;
D2: Redistribute the probabilities of the newly collected M particles;
D3: Compute a histogram weight for each of the M particles from its RGB histogram, then take the weighted average of the M particle positions by these weights to obtain the candidate region of the tracked target;
D4: Build the ASIFT feature vectors of the candidate region and reduce them spatially with principal component analysis (PCA) to obtain feature vector B;
Step E: Match the motion target region feature vector A with the ASIFT feature vector B of the candidate region;
Step F: Reject false matches using the random sample consensus (RANSAC) method;
Step G: Update the target model and return to step C, realizing target tracking.
Further, in steps B5 and D4, the spatial dimensionality reduction of the ASIFT feature vectors by principal component analysis (PCA) comprises the concrete steps:
B51: Each obtained ASIFT feature point is described as a 128-dimensional vector; taking the feature points as samples, write the sample matrix as [x1, x2, ..., xn]^T, where n is the number of feature points and x_i is the 128-dimensional feature vector of the i-th feature point;
B52: Compute the averaged feature vector of the n samples, m = (1/n) Σ_{i=1}^{n} x_i;
B53: Compute the difference between the feature vector of every sample point and the average feature vector, obtaining the difference vectors d_i = x_i − m;
B54: Build the covariance matrix C = (1/n) Q Qᵀ, where Q = [d1, d2, ..., dn];
B55: Compute the 128 eigenvalues λ_i and the 128 eigenvectors e_i of the covariance matrix;
B56: Sort the 128 eigenvalues in descending order, λ1 ≥ λ2 ≥ ... ≥ λ128, together with the corresponding eigenvectors (e1, e2, ..., e128);
B57: Choose the m eigenvectors corresponding to the largest eigenvalues as the principal component directions (here m = 36);
B58: Build a 128 × m matrix R whose columns are the m chosen eigenvectors;
B59: Project the original 128-dimensional ASIFT feature descriptors by y_i = x_i · R to obtain the 36-dimensional ASIFT feature descriptors y1, y2, ..., yn, where x_i is the vector representation of an ASIFT feature point of the original target region and y_i is its representation after dimensionality reduction.
Further, in step E, the matching operation between the motion target region feature vector and the ASIFT feature vectors of the candidate region uses the approximate nearest-neighbor search method based on KD-Tree.
Beneficial effects of the present invention:
(1) Compared with the SIFT and SURF feature matching methods, the ASIFT feature matching method detects more feature points under target occlusion and environmental influences, tracks more stably, and does not lose the target easily.
(2) The ASIFT feature vectors are reduced with PCA, so the 128-dimensional vectors are represented by 36-dimensional vectors, which reduces the amount of calculation and better meets the real-time requirement of target tracking.
(3) Using the KD-Tree-based approximate nearest-neighbor search instead of a global nearest-neighbor search to match the tracking target feature vector against the candidate-region target feature vector improves the search efficiency of matching feature points and reduces the computation time.
(4) The improved ASIFT feature matching method is fused with the particle filter: the particle filter predicts the region where the target model will appear in the next frame, which avoids applying ASIFT matching to the whole frame image and improves accuracy.
Compared with existing schemes, the method of the invention can track targets quickly and accurately under brightness changes and occlusion, and has good real-time performance and robustness.
Description of the drawings
Fig. 1 is the flow chart of the multi-target tracking method for video surveillance of the invention.
Specific embodiment
With reference to Fig. 1, the multi-target tracking method for video surveillance detects the moving target with the adjacent-frame difference method, establishes a tracking target model for the detected moving target, and builds the ASIFT feature vector of the target model; it predicts the candidate-region target with a particle filter and builds the ASIFT feature vector of the candidate target model; it matches the tracking target feature vector against the candidate-region target feature vector, rejects false matches with the RANSAC algorithm, and updates the target model to realize target tracking. The method comprises the following steps:
Step A: Read the initial frame of the video image and detect the moving target in the video sequence using the adjacent-frame difference method. The video images read are collected by a surveillance camera.
Read the initial frame and compute the difference of the corresponding pixel values of two adjacent frames in the video:
D_k(x, y) = |f_k(x, y) − f_{k+1}(x, y)|
D_k = 0 if D_k < T_0;  D_k = 1 if D_k ≥ T_0
where f_k(x, y) is the current frame image, x and y are the horizontal and vertical coordinates of a pixel, and f_{k+1}(x, y) is the next frame adjacent to the current frame; D_k is the absolute difference of the two frame images and represents the motion region, with D_k = 1 marking the motion target region; T_0 is the binarization threshold, set to T_0 = 0.7 in the invention, although other values may be taken according to different requirements.
After the above calculation, pixel values in the image are only 0 and 1; the pixel regions with value 1 are the target region. In this way, the motion target region in the video sequence can be segmented out.
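For illustration, a minimal Python/OpenCV sketch of step A; it assumes BGR input frames and normalizes intensities to [0, 1] so that the threshold T_0 = 0.7 applies directly (the text does not state on what scale T_0 is meant):

```python
import cv2
import numpy as np

def frame_difference_mask(frame_k, frame_k1, t0=0.7):
    """Step A: D_k = |f_k - f_{k+1}|, binarized with threshold T0."""
    g1 = cv2.cvtColor(frame_k, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    g2 = cv2.cvtColor(frame_k1, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    return (np.abs(g1 - g2) >= t0).astype(np.uint8)  # 1 = motion target region

# The motion target region can then be cut out of the mask, e.g. via
# contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
```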
Step B: Build the affine scale-invariant feature transform (Affine-SIFT, ASIFT) feature vector A of the tracking target model. The concrete steps are:
Step B1: Sample the affine transformation parameters of the motion target region. The latitude angle θ is sampled so that the associated tilt t = |1/cos θ| follows a geometric sequence 1, a, a², ..., aⁿ with a > 1 (here a = √2 and n = 5, the standard ASIFT choice); the longitude angle φ is sampled arithmetically: 0, b/t, ..., kb/t, where b = 72° and k is the last integer satisfying kb/t < 180°.
The parameters θ and φ represent the latitude and longitude angles of the camera optical axis, respectively. The target region generally shows a certain degree of affine deformation, caused mainly by changes of the camera optical-axis direction, and these changes are determined by θ and φ. Before the affine simulation of the target region is carried out, θ and φ must therefore be resampled.
The sampled values of the parameter θ for the motion target region are shown in Table 1.
Table 1
The sampling interval of the parameter φ for the motion target region is set to b/t = 72°/t, and the sampling range of φ is [0°, 180°). When t = 1, the sampled values of φ are 0°, 72°, and 144°.
Step B2: Apply the affine transformation to the motion target region using the obtained parameter sequence; in the standard ASIFT formulation this simulation is
I′ = T_t R(φ) I
where I is the motion target region, R(φ) is the rotation by the longitude angle φ, T_t = diag(t, 1) is the tilt operator with t = |1/cos θ|, and I′ is the motion target region after the affine transformation (see the sketch after step B5);
Step B3: Perform ASIFT feature point detection on the motion target region after the affine simulation;
Step B4: Describe the feature points of the motion target region as vectors, building 128-dimensional ASIFT feature vectors;
Step B5: Reduce the ASIFT feature vectors spatially with principal component analysis (PCA) to obtain feature vector A.
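As an illustration of steps B1–B4, the following sketch simulates the affine views by a rotation followed by a directional tilt, in the manner of OpenCV's ASIFT sample, and runs SIFT on each view. The a = √2 tilt steps and the 72°/t longitude spacing follow step B1; the anti-aliasing blur factor 0.8 is the sample's convention, and an OpenCV build with SIFT (cv2.SIFT_create, opencv-python ≥ 4.4) is assumed:

```python
import cv2
import numpy as np

def affine_skew(tilt, phi, img):
    """Simulate one camera view: rotate by longitude phi, then tilt by t."""
    h, w = img.shape[:2]
    if phi != 0.0:
        phi_r = np.deg2rad(phi)
        s, c = np.sin(phi_r), np.cos(phi_r)
        R = np.float32([[c, -s], [s, c]])
        corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
        x, y, w, h = cv2.boundingRect(np.int32(corners @ R.T).reshape(-1, 1, 2))
        A = np.hstack([R, np.float32([[-x], [-y]])])
        img = cv2.warpAffine(img, A, (w, h), flags=cv2.INTER_LINEAR,
                             borderMode=cv2.BORDER_REPLICATE)
    if tilt != 1.0:
        # anti-alias along x before subsampling by the tilt factor
        img = cv2.GaussianBlur(img, (0, 0),
                               sigmaX=0.8 * np.sqrt(tilt * tilt - 1), sigmaY=0.01)
        img = cv2.resize(img, (0, 0), fx=1.0 / tilt, fy=1.0,
                         interpolation=cv2.INTER_NEAREST)
    return img

def asift_descriptors(region):
    """Steps B1-B4: 128-D SIFT descriptors over the simulated affine views."""
    sift = cv2.SIFT_create()
    descs = []
    for i in range(6):                                  # t = sqrt(2)^i, i = 0..5
        t = 2.0 ** (0.5 * i)
        for phi in ([0.0] if t == 1.0 else np.arange(0.0, 180.0, 72.0 / t)):
            _, desc = sift.detectAndCompute(affine_skew(t, phi, region), None)
            if desc is not None:
                descs.append(desc)
    return np.vstack(descs)
```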
Step C: Read the next frame image.
Step D: Predict the candidate-region target in the image read in step C using the particle filter method, and build the ASIFT feature vector B of the candidate target model. The concrete steps are:
D1: For the motion target region, randomly select M particle samples from the probability sample set of the previous frame at time t;
D2: Redistribute the probabilities of the newly collected M particles.
Let the movement velocity of the tracked target at time t − 1 be (Δx_{t−1}, Δy_{t−1}), where Δx_{t−1} and Δy_{t−1} are the position offsets of the motion target region at time t − 1 and vecuniteperpixel denotes the motion unit of each pixel. The new position of each particle at time t is then obtained by
x_t = x_{t−1} + Δx_{t−1} + ξ_x · w,  y_t = y_{t−1} + Δy_{t−1} + ξ_y · h
where ξ_x and ξ_y are Gaussian random numbers, h is the particle height, and w is the particle width.
D3: Compute a histogram weight for each of the M particles from its RGB histogram, then take the weighted average of the M particle positions by these weights to obtain the candidate region of the tracked target.
The computing formula is
(x̂, ŷ) = f Σ_{i=1}^{M} W_i (x_i, y_i)
where f is the normalization coefficient, f = 1 / Σ_{i=1}^{M} W_i, and W_i is the weight of each particle.
After the estimated tracking position is computed, search positions are formed in a 3 × 3-pixel rectangular range around the initial position of time t − 1, giving 10 searching positions; among them a new position is searched such that the squared gray-level difference (SSD) against the target region of the previous frame at time t − 1 is minimal, and this new position is taken as the new position of the moving target:
S(x, y) = ∬_W (J(X) − I(X))² dX    (11)
where S represents the luminance difference between this position and the template; (x, y) denotes the new position centered at (x_m, y_m); and J and I are the luminance functions of the images at times t and t − 1, respectively.
The variable M = 150 in steps D1-D3.
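A simplified sketch of steps D1-D3 under stated assumptions: the propagation follows the velocity-plus-Gaussian-diffusion formula given above with M = 150 particles, while the weighting kernel exp(−d²/σ²) over the Bhattacharyya distance between RGB histograms is an assumed choice, since the text only says the weights come from RGB histograms:

```python
import cv2
import numpy as np

rng = np.random.default_rng(0)
M = 150  # number of particles, as in steps D1-D3

def rgb_histogram(patch):
    """8x8x8 RGB histogram of a patch, L1-normalized."""
    hist = cv2.calcHist([patch], [0, 1, 2], None, [8, 8, 8],
                        [0, 256, 0, 256, 0, 256])
    return cv2.normalize(hist, hist, 1.0, 0.0, cv2.NORM_L1)

def particle_estimate(particles, drift, size, frame, model_hist, sigma2=0.1):
    """D1-D3: propagate (M, 2) particle centers, weight them by histogram
    similarity to the target model, and return the weighted-mean center."""
    w, h = size
    # propagation: x_t = x_{t-1} + dx + xi*w, y_t = y_{t-1} + dy + xi*h
    particles = particles + drift + rng.standard_normal(particles.shape) * [w, h]
    weights = np.zeros(len(particles))
    for i, (x, y) in enumerate(particles):
        x0, y0 = max(int(x - w / 2), 0), max(int(y - h / 2), 0)
        patch = frame[y0:y0 + h, x0:x0 + w]
        if patch.size:
            d = cv2.compareHist(model_hist, rgb_histogram(patch),
                                cv2.HISTCMP_BHATTACHARYYA)
            weights[i] = np.exp(-d * d / sigma2)
    f = 1.0 / weights.sum()                      # normalization coefficient f
    return f * (weights[:, None] * particles).sum(axis=0)
```

The SSD refinement of equation (11) can then be run over the 3 × 3 neighbourhood of this estimate to snap the center to the position with the smallest gray-level difference against the previous frame.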
D4: Build the ASIFT feature vectors of the candidate region and reduce them spatially with principal component analysis (PCA) to obtain feature vector B.
By the method of step B, the ASIFT feature vectors of the candidate target model are built in the same way and reduced spatially with principal component analysis (PCA); the ASIFT feature points of the final candidate target region are likewise represented by 36-dimensional vectors.
Step E: Match the motion target region feature vector A with the ASIFT feature vector B of the candidate region;
Step F: Reject false matches using the random sample consensus (RANSAC) method;
Step G: Update the target model and return to step C, realizing target tracking.
Further, in steps B5 and D4, the spatial dimensionality reduction of the ASIFT feature vectors by principal component analysis (PCA) comprises the concrete steps:
B51: Each obtained ASIFT feature point is described as a 128-dimensional vector; taking the feature points as samples, write the sample matrix as [x1, x2, ..., xn]^T, where n is the number of feature points and x_i is the 128-dimensional feature vector of the i-th feature point;
B52: Compute the averaged feature vector of the n samples, m = (1/n) Σ_{i=1}^{n} x_i;
B53: Compute the difference between the feature vector of every sample point and the average feature vector, obtaining the difference vectors d_i = x_i − m;
B54: Build the covariance matrix C = (1/n) Q Qᵀ, where Q = [d1, d2, ..., dn];
B55: Compute the 128 eigenvalues λ_i and the 128 eigenvectors e_i of the covariance matrix;
B56: Sort the 128 eigenvalues in descending order, λ1 ≥ λ2 ≥ ... ≥ λ128, together with the corresponding eigenvectors (e1, e2, ..., e128);
B57: Choose the m eigenvectors corresponding to the largest eigenvalues as the principal component directions (here m = 36);
B58: Build a 128 × m matrix R whose columns are the m chosen eigenvectors;
B59: Project the original 128-dimensional ASIFT feature descriptors by y_i = x_i · R to obtain the 36-dimensional ASIFT feature descriptors y1, y2, ..., yn, where x_i is the vector representation of an ASIFT feature point of the original target region and y_i is its representation after dimensionality reduction.
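Steps B51-B59 transcribe directly into NumPy; the sketch below assumes the descriptors are stacked one per row and keeps m = 36 components:

```python
import numpy as np

def pca_reduce(X, m=36):
    """B51-B59: project n 128-D ASIFT descriptors onto the top m directions."""
    mean = X.mean(axis=0)               # B52: averaged feature vector
    D = X - mean                        # B53: difference vectors d_i
    C = (D.T @ D) / len(X)              # B54: covariance C = Q Q^T / n
    vals, vecs = np.linalg.eigh(C)      # B55: eigenvalues and eigenvectors
    order = np.argsort(vals)[::-1]      # B56: sort in descending order
    R = vecs[:, order[:m]]              # B57/B58: 128 x m projection matrix R
    return X @ R                        # B59: y_i = x_i * R  -> (n, 36)
```

As in B59, the projection is applied to the raw descriptors x_i; centering them with the mean first is the more common PCA convention and changes the output only by a constant row offset.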
Further, in step E, the matching operation between the motion target region feature vector and the ASIFT feature vectors of the candidate region uses the approximate nearest-neighbor search method based on KD-Tree.
The calculation procedure is:
(1) Build the KD-Tree from the ASIFT feature points. The concrete steps are as follows:
a. Determine the value of the split domain:
compute the variance of the feature-point data in the x and y dimensions, and take the dimension with the largest variance as the value of the split domain;
b. Determine the Node-data domain:
according to the value of the obtained split domain, sort the feature-point data in that dimension and take the median data point as the Node-data; this determines the splitting hyperplane of the node;
c. Determine the left and right subspaces:
the splitting hyperplane divides the whole space into two parts; the points to the left of the hyperplane form the left subspace, and the points to the right of it form the right subspace.
d. The next level of child nodes is then obtained from the left and right subspaces, and the space and data set are recursively subdivided further until each space contains only one data point.
(2) Search the binary tree to retrieve the approximate nearest neighbors of the query point in the KD-Tree;
(3) By comparison with the other adjacent feature points, find the two feature points with the smallest Euclidean distance to the query point;
(4) Divide the nearest Euclidean distance d1 by the second-nearest Euclidean distance d2; if the ratio d1/d2 is smaller than the proportion threshold γ, accept this pair of match points and the feature point matching succeeds; otherwise the matching is unsuccessful. Here d1 is the nearest Euclidean distance of the two feature points to be matched and d2 is the second-nearest Euclidean distance; the threshold is set to γ = 0.8 in the invention.
Judge whether the tracking target feature vector and the candidate-region target feature vector match successfully; if so, execute step F; otherwise, return to step (3).
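A sketch of steps E-F, assuming OpenCV's FLANN matcher (whose index uses randomized KD-trees for the approximate nearest-neighbour search), the γ = 0.8 ratio test of step (4), and a homography as the RANSAC model, which the text does not specify:

```python
import cv2
import numpy as np

FLANN_INDEX_KDTREE = 1

def match_and_filter(desc_a, kp_a, desc_b, kp_b, gamma=0.8):
    """Step E: KD-tree approximate NN matching with the d1/d2 < gamma test;
    step F: RANSAC rejection of the remaining false matches."""
    flann = cv2.FlannBasedMatcher(dict(algorithm=FLANN_INDEX_KDTREE, trees=4),
                                  dict(checks=50))
    pairs = flann.knnMatch(np.float32(desc_a), np.float32(desc_b), k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < gamma * p[1].distance]
    if len(good) < 4:
        return good                      # RANSAC homography needs >= 4 points
    src = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    _, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if mask is None:
        return good
    return [m for m, keep in zip(good, mask.ravel()) if keep]
```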
The embodiments of the invention should be understood as merely illustrating the invention rather than limiting its scope. After reading the contents recorded herein, a person skilled in the art can make various changes or modifications to the invention, and such equivalent changes and modifications likewise fall within the scope of the claims of the invention.

Claims (3)

1. A multi-target tracking method for video surveillance, which detects the moving target with the adjacent-frame difference method, establishes a tracking target model for the detected moving target, and builds the ASIFT feature vector of the target model; predicts the candidate-region target with a particle filter and builds the ASIFT feature vector of the candidate target model; matches the tracking target feature vector against the candidate-region target feature vector, rejects false matches with the RANSAC algorithm, and updates the target model to realize target tracking, comprising the following steps:
Step A: Read the initial frame of the video image and detect the moving target in the video sequence using the adjacent-frame difference method;
Read the initial frame and compute the difference of the corresponding pixel values of two adjacent frames in the video:
D_k(x, y) = |f_k(x, y) − f_{k+1}(x, y)|
D_k = 0 if D_k < T_0;  D_k = 1 if D_k ≥ T_0
where f_k(x, y) is the current frame image and f_{k+1}(x, y) is the next frame adjacent to the current frame; D_k is the absolute difference of the two frame images, and D_k = 1 marks the motion target region; T_0 = 0.7;
Step B: Build the affine scale-invariant feature transform (Affine-SIFT, ASIFT) feature vector A of the tracking target model; the concrete steps are:
Step B1: Sample the affine transformation parameters of the motion target region. The latitude angle θ is sampled so that the associated tilt t = |1/cos θ| follows a geometric sequence 1, a, a², ..., aⁿ with a > 1 (here a = √2, n = 5); the longitude angle φ is sampled arithmetically: 0, b/t, ..., kb/t, where b = 72° and k is the last integer satisfying kb/t < 180°;
Step B2: Apply the affine transformation to the motion target region using the obtained parameter sequence, I′ = T_t R(φ) I, where I is the motion target region, R(φ) is the rotation by the longitude angle φ, T_t = diag(t, 1) is the tilt operator, and I′ is the motion target region after the affine transformation;
Step B3: Perform ASIFT feature point detection on the motion target region after the affine simulation;
Step B4: Describe the feature points of the motion target region as vectors, building 128-dimensional ASIFT feature vectors;
Step B5: Reduce the ASIFT feature vectors spatially with principal component analysis (PCA) to obtain feature vector A;
Step C: Read the next frame image;
Step D: Predict the candidate-region target in the image read in step C using the particle filter method, and build the ASIFT feature vector B of the candidate target model; the concrete steps are:
D1: For the motion target region, randomly select M particle samples from the probability sample set of the previous frame at time t;
D2: Redistribute the probabilities of the newly collected M particles;
D3: Compute a histogram weight for each of the M particles from its RGB histogram, then take the weighted average of the M particle positions by these weights to obtain the candidate region of the tracked target;
D4: Build the ASIFT feature vectors of the candidate region and reduce them spatially with principal component analysis (PCA) to obtain feature vector B;
Step E: Match the motion target region feature vector A with the ASIFT feature vector B of the candidate region;
Step F: Reject false matches using the random sample consensus (RANSAC) method;
Step G: Update the target model and return to step C, realizing target tracking.
2. The multi-target tracking method for video surveillance according to claim 1, characterized in that:
in steps B5 and D4, the spatial dimensionality reduction of the ASIFT feature vectors by principal component analysis (PCA) comprises the concrete steps:
B51: Each obtained ASIFT feature point is described as a 128-dimensional vector; taking the feature points as samples, write the sample matrix as [x1, x2, ..., xn]^T, where n is the number of feature points and x_i is the 128-dimensional feature vector of the i-th feature point;
B52: Compute the averaged feature vector of the n samples, m = (1/n) Σ_{i=1}^{n} x_i;
B53: Compute the difference between the feature vector of every sample point and the average feature vector, obtaining the difference vectors d_i = x_i − m;
B54: Build the covariance matrix C = (1/n) Q Qᵀ, where Q = [d1, d2, ..., dn];
B55: Compute the 128 eigenvalues λ_i and the 128 eigenvectors e_i of the covariance matrix;
B56: Sort the 128 eigenvalues in descending order, λ1 ≥ λ2 ≥ ... ≥ λ128, together with the corresponding eigenvectors (e1, e2, ..., e128);
B57: Choose the m eigenvectors corresponding to the largest eigenvalues as the principal component directions (here m = 36);
B58: Build a 128 × m matrix R whose columns are the m chosen eigenvectors;
B59: Project the original 128-dimensional ASIFT feature descriptors by y_i = x_i · R to obtain the 36-dimensional ASIFT feature descriptors y1, y2, ..., yn, where x_i is the vector representation of an ASIFT feature point of the original target region and y_i is its representation after dimensionality reduction.
3. The multi-target tracking method for video surveillance according to claim 1, characterized in that: in step E, the matching operation between the motion target region feature vector and the ASIFT feature vectors of the candidate region uses the approximate nearest-neighbor search method based on KD-Tree.
CN201410497957.1A 2014-09-25 2014-09-25 Multi-target tracking method for video surveillance Active CN104200495B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410497957.1A CN104200495B (en) 2014-09-25 2014-09-25 Multi-target tracking method for video surveillance

Publications (2)

Publication Number Publication Date
CN104200495A CN104200495A (en) 2014-12-10
CN104200495B 2017-03-29

Family

ID=52085781

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410497957.1A Active CN104200495B (en) 2014-09-25 2014-09-25 Multi-target tracking method for video surveillance

Country Status (1)

Country Link
CN (1) CN104200495B (en)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104392469B * 2014-12-15 2017-05-31 辽宁工程技术大学 A target tracking method based on soft feature theory
CN104751412B * 2015-04-23 2018-01-30 重庆信科设计有限公司 An image stitching method based on affine invariants
CN105631895B * 2015-12-18 2018-05-29 重庆大学 A spatio-temporal context video target tracking method combined with particle filtering
CN105787963B * 2016-02-26 2019-04-16 浪潮软件股份有限公司 A video target tracking method and device
CN106327528A * 2016-08-23 2017-01-11 常州轻工职业技术学院 Moving object tracking method and operation method of unmanned aerial vehicle
CN106296743A * 2016-08-23 2017-01-04 常州轻工职业技术学院 An adaptive moving target tracking method and unmanned aerial vehicle tracking system
CN106227216B * 2016-08-31 2019-11-12 朱明 Home service robot for the elderly living at home
CN108073864B (en) * 2016-11-15 2021-03-09 北京市商汤科技开发有限公司 Target object detection method, device and system and neural network structure
CN107917646B (en) * 2017-01-10 2020-11-24 北京航空航天大学 Infrared air-to-air missile anti-interference guidance method based on target terminal reachable area prediction
CN107369164B (en) * 2017-06-20 2020-05-22 成都中昊英孚科技有限公司 Infrared weak and small target tracking method
CN107545583B (en) * 2017-08-21 2020-06-26 中国科学院计算技术研究所 Target tracking acceleration method and system based on Gaussian mixture model
CN108416258B (en) * 2018-01-23 2020-05-08 华侨大学 Multi-human body tracking method based on human body part model
CN110110111B (en) * 2018-02-02 2021-12-31 兴业数字金融服务(上海)股份有限公司 Method and device for monitoring screen
CN108596949B (en) * 2018-03-23 2020-06-12 云南大学 Video target tracking state analysis method and device and implementation device
CN110769214A (en) * 2018-08-20 2020-02-07 成都极米科技股份有限公司 Automatic tracking projection method and device based on frame difference
CN110111364B (en) * 2019-04-30 2022-12-27 腾讯科技(深圳)有限公司 Motion detection method and device, electronic equipment and storage medium
CN110264501A * 2019-05-05 2019-09-20 中国地质大学(武汉) An adaptive particle filter video target tracking method and system based on CNN
CN110516528A * 2019-07-08 2019-11-29 杭州电子科技大学 A moving target detection and tracking method under a moving background
CN110490902B (en) * 2019-08-02 2022-06-14 西安天和防务技术股份有限公司 Target tracking method and device applied to smart city and computer equipment
CN112559959B (en) * 2020-12-07 2023-11-07 中国西安卫星测控中心 Space-based imaging non-cooperative target rotation state resolving method based on feature vector

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101026759A (en) * 2007-04-09 2007-08-29 华为技术有限公司 Visual tracking method and system based on particle filtering
CN102184551A (en) * 2011-05-10 2011-09-14 东北大学 Automatic target tracking method and system by combining multi-characteristic matching and particle filtering
CN102231191A (en) * 2011-07-17 2011-11-02 西安电子科技大学 Multimodal image feature extraction and matching method based on ASIFT (affine scale invariant feature transform)
CN103440645A (en) * 2013-08-16 2013-12-11 东南大学 Target tracking algorithm based on self-adaptive particle filter and sparse representation

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10178396B2 (en) * 2009-09-04 2019-01-08 Stmicroelectronics International N.V. Object tracking


Also Published As

Publication number Publication date
CN104200495A (en) 2014-12-10


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant