CN106952293A - Target tracking method based on nonparametric online clustering - Google Patents

Target tracking method based on nonparametric online clustering

Info

Publication number
CN106952293A
CN106952293A CN201611219002.5A CN201611219002A
Authority
CN
China
Prior art keywords
target
cluster centre
feature
template
nonparametric
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201611219002.5A
Other languages
Chinese (zh)
Other versions
CN106952293B (en)
Inventor
姬晓晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Yingpu Technology Co ltd
Original Assignee
Beijing Yingpu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Yingpu Technology Co Ltd
Priority to CN201611219002.5A
Publication of CN106952293A
Application granted
Publication of CN106952293B
Legal status: Active (current)
Anticipated expiration


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/23: Clustering techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10016: Video; Image sequence
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20081: Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

A target tracking method based on nonparametric online clustering. The technical problem it solves is: given a video, perform online tracking of a specified target of interest so as to predict the exact position and bounding rectangle of the target in subsequent image frames. A small deep network, learned from a large deep network, extracts distilled features for the target of interest; these features have strong appearance representation ability, discriminative power, and robustness. The further problem solved by the invention is: the historical appearance of the target of interest is clustered online in a nonparametric fashion, a target appearance template that retains historical information is produced from the weighted cluster centres, and a target spatial distribution is generated from this template, thereby improving tracking accuracy, reducing the risk of target drift, and in turn lowering the error rate.

Description

Target tracking method based on nonparametric online clustering
Technical field
The present invention relates generally to the fields of computer vision, intelligent video surveillance, and pattern recognition, and in particular to a target tracking method based on nonparametric online clustering.
Background technology
Target tracking is one of the most fundamental and most challenging key problems in intelligent video surveillance. As a relatively low-level problem in computer vision, it has a wide range of applications. High-level intelligent analysis applications in video surveillance, such as motion analysis and anomaly detection, are built on target tracking. Extracting the trajectories of moving targets in a scene provides an effective foundation for higher-level intelligent analysis of the monitored scene.
Most traditional tracking methods first initialize a target of interest in a given video frame and then train a target appearance model; during tracking, the appearance model performs discriminative judgments or generative predictions to infer the position of the target of interest in the next frame. Such methods continuously update the appearance model through online learning during tracking, ignoring that the target of interest is easily affected by occlusion and by disappearance and reappearance, which readily causes model drift. Moreover, because the learning process relies heavily on the appearance model, once the target has drifted it is difficult to reinitialize the target model, so tracking fails and poor results are obtained. The traditional method that fuses Meanshift with a Kalman filter first models the target motion: exploiting the effective prediction capability of Kalman filtering, before the Meanshift iteration starts in each frame, the target state is predicted from past motion information to obtain one possible target position, and the Meanshift iteration then proceeds from that predicted position. Meanshift itself first computes a kernel-weighted color histogram of the target, then defines a spatially smooth similarity function on that basis, and localizes the target quickly by gradient optimization. Unlike Kalman filtering's motion modeling, our proposed method computes similarity over multiple spatial grids of the target context based on the target's appearance information, obtaining multiple initial candidate positions rather than the single predicted position a Kalman filter provides, which increases the tracker's robustness and adaptability to fast motion and occlusion. Unlike Meanshift trackers that express the target with a kernel color histogram, we use features extracted by a feature distiller to better express the target's appearance; and unlike Meanshift's gradient-based localization, our tracker learns a filter tracker in the frequency domain and uses the fast Fourier transform (FFT) to detect and finally localize the target quickly.
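For concreteness, the kernel-weighted color histogram that Meanshift-style trackers rely on, and that this method also uses as its color feature, can be sketched as follows. The Epanechnikov weighting and the 8-bins-per-channel quantization are illustrative choices, not values fixed by the patent:

```python
import numpy as np

def kernel_color_histogram(patch, bins=8):
    """Epanechnikov-weighted joint color histogram of an HxWx3 uint8 patch.

    Pixels near the patch centre receive higher weight, as in Meanshift
    tracking; the result is L1-normalized. Illustrative sketch, not the
    patent's exact formulation.
    """
    h, w, _ = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # squared distance to the patch centre, ~1 at the borders
    r2 = (((ys - (h - 1) / 2) / (h / 2)) ** 2 +
          ((xs - (w - 1) / 2) / (w / 2)) ** 2)
    weight = np.maximum(1.0 - r2, 0.0)            # Epanechnikov profile
    # quantize each channel into `bins` levels -> joint bin index per pixel
    q = (patch.astype(np.int64) * bins) // 256
    idx = (q[..., 0] * bins + q[..., 1]) * bins + q[..., 2]
    hist = np.bincount(idx.ravel(), weights=weight.ravel(),
                       minlength=bins ** 3)
    return hist / (hist.sum() + 1e-12)

patch = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
hvec = kernel_color_histogram(patch)              # 512-bin normalized feature
```

The kernel weighting makes the histogram robust to background pixels leaking in at the box borders, which is why it is the standard choice in Meanshift-family trackers.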
Summary of the invention
The invention provides a target tracking method. The technical problem it solves is: given a video, perform online tracking of a specified target of interest, so as to predict the exact position and bounding rectangle of the target in subsequent image frames.
A further problem solved by the invention is: a small deep network, learned from a large deep network, extracts distilled features for the target of interest. These features have strong appearance representation ability, discriminative power, and robustness.
A further problem solved by the invention is: the historical appearance of the target of interest is clustered online in a nonparametric fashion, a target appearance template that retains historical information is produced from the weighted cluster centres, and a target spatial distribution is generated from this template, thereby improving tracking accuracy, reducing the risk of target drift, and in turn lowering the error rate.
A further problem solved by the invention is: reducing hardware requirements and algorithmic complexity to achieve fast target tracking.
To solve the above problems, the invention discloses a target tracking method based on nonparametric online clustering; the method is executed for a target initialized in a given video and produces a target tracking result.
The target tracking method comprises the following steps:
Step 1: given an input video, initialize a box around a target of interest in the first frame; extract a color histogram feature from the target box, and initialize the cluster centres using the K-means method;
Step 2: extract a spatially preserving distilled image feature map from the context region image of the box, and assume that this underlying image feature map has a prior label distribution of the same scale;
Step 3: based on the distilled image feature map and the prior label distribution, learn a filter tracker by ridge regression in the frequency domain, rather than in the traditional spatial domain;
Step 4: for the next frame, divide multiple grid cells around the target centre position predicted by the previous frame's tracker; extract a color histogram feature for each cell and compute its similarity to the template generated from the cluster centres, obtaining a coarse target spatial probability distribution; based on this coarse distribution, take the top k cells with the largest similarity;
Step 5: for each of the k cells, extract a feature map of an image the same size as the first frame's context region; convolve the filter tracker with this image feature map via FFT to obtain a confidence value for each spatial location. Take the location with the maximum confidence as the target centre position, which also yields the target state. Based on this target state, extract a new color histogram feature;
Step 6: compare the new color histogram with the previously learned cluster centres. If the similarity exceeds a certain value, assign the histogram to its most similar cluster centre and update that centre; if the similarity is below that value, create a new cluster centre. After the nonparametric online incremental clustering yields the cluster centres, merge them into one target template. Extract a context-region feature map based on the centre position, learn a new tracking filter, and update the previous frame's tracking filter with it;
Step 7: repeat steps 4-6 in a loop to perform online target tracking.
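The clustering update of step 6 and the weighted template merge can be sketched as follows. The cosine similarity, the 0.8 threshold, and the minimum-weight cutoff are illustrative assumptions: the patent only specifies "a certain value":

```python
import numpy as np

def cosine_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

class OnlineClusterer:
    """Nonparametric online incremental clustering of appearance histograms
    (sketch; the similarity measure and thresholds are assumptions)."""

    def __init__(self, threshold=0.8):
        self.threshold = threshold
        self.centres = []   # cluster centres (np.ndarray each)
        self.counts = []    # number of samples absorbed by each centre

    def update(self, hist):
        if self.centres:
            sims = [cosine_sim(hist, c) for c in self.centres]
            j = int(np.argmax(sims))
            if sims[j] > self.threshold:
                # fold the sample into its most similar centre (running mean)
                n = self.counts[j]
                self.centres[j] = (n * self.centres[j] + hist) / (n + 1)
                self.counts[j] += 1
                return
        # otherwise start a new cluster
        self.centres.append(hist.astype(float).copy())
        self.counts.append(1)

    def template(self, min_weight=0.05):
        # weight each centre by its sample share, drop small-weight centres,
        # renormalize, and merge the rest into one target template
        w = np.array(self.counts, dtype=float)
        w /= w.sum()
        keep = w >= min_weight
        w = w[keep] / w[keep].sum()
        kept = [c for c, k in zip(self.centres, keep) if k]
        return sum(wi * ci for wi, ci in zip(w, kept))

rng = np.random.default_rng(0)
clusterer = OnlineClusterer()
for _ in range(20):
    clusterer.update(rng.random(64))   # stand-in appearance histograms
tmpl = clusterer.template()
```

Weighting by cluster population means appearances seen often dominate the template, while rare (possibly drifted) appearances are suppressed, which is the drift-resistance argument made above.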
In addition, the method mainly comprises the following components: a candidate proposal generation unit, for producing a small number of relatively reliable candidate proposals based on prior knowledge or historical spatio-temporal information; a feature extraction unit, for extracting the feature representation of the image region corresponding to each candidate proposal; an observation model judging unit, for classifying or regressing the proposal features and estimating the probability that a proposal belongs to the target; an ensemble learning fusion unit, for combining multiple observation model judging units under a certain strategy and fusing multiple tracking results, so as to obtain the target position and bounding rectangle in the current frame; and an online model updating unit, for updating the observation model judging unit online to ensure model robustness.
The proposal generation process of the invention addresses the model drift that may occur during tracking: it produces a small number of high-confidence candidate proposals, giving the subsequent observation model judging unit the possibility of re-detection or correction. This reduces the risk of template drift caused by long-term occlusion or by disappearance and reappearance. The proposal generation process mainly comprises a target template generation process and a target spatial distribution generation process.
The target template generation process mainly uses the nonparametric online clustering method to generate a target template, as follows: new color histograms are incrementally clustered online in a nonparametric fashion to obtain new cluster centres, and these centres are merged into one target template according to certain weights. The weight of each cluster centre is determined by the number of samples in that cluster. In addition, to obtain a more stable target template, cluster centres with small weights are excluded before the template is generated; the remaining centres with larger weights are normalized so that their weights sum to one, yielding the target template. The cluster centres are computed as follows:
Here w_j is the weight of the j-th cluster centre. Specifically, after a sample x_i is obtained online, the similarity between the sample and each cluster centre c_j is computed. The similarity function s(x_i, c_j) is:
The target template is computed as:
where w_i is the weight of each cluster centre.
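The three formulas referenced in this passage appear only as images in the original publication and did not survive text extraction. A plausible reconstruction, inferred from the surrounding description (a population-weighted running-mean centre, a similarity function, and a weight-normalized template) and explicitly not copied from the patent, might read:

```latex
% Reconstruction of the missing equations; forms are inferred from the
% surrounding text, not taken from the original patent images.
c_j = \frac{1}{n_j} \sum_{x_i \in \mathcal{C}_j} x_i ,
\qquad
w_j = \frac{n_j}{\sum_k n_k}

s(x_i, c_j) = \frac{\langle x_i, c_j \rangle}{\lVert x_i \rVert \,\lVert c_j \rVert}

T = \sum_j w_j \, c_j , \qquad \sum_j w_j = 1
```

Here \(n_j\) is the number of samples assigned to cluster \(j\), matching the statement that each centre's weight is determined by its sample count, and \(T\) is the merged target template.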
The spatial distribution generation process operates on the next frame: around the target centre position predicted by the previous frame's tracker, multiple grid cells are divided; a color histogram feature is extracted for each cell, and its similarity to the template generated from the cluster centres is computed, yielding the target spatial distribution. Based on this spatial distribution, the top k cells with the largest similarity are obtained.
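A minimal sketch of this grid-based spatial distribution, assuming histogram intersection as the similarity measure (the patent does not name the measure):

```python
import numpy as np

def top_k_grid_cells(cell_hists, template, k=3):
    """Rank grid cells around the predicted centre by the similarity of
    their color histograms to the target template; return the k best.

    `cell_hists` maps grid position -> L1-normalized histogram.
    Histogram intersection is used as the similarity (an assumption).
    """
    def intersection(h, t):
        return float(np.minimum(h, t).sum())

    ranked = sorted(cell_hists.items(),
                    key=lambda item: intersection(item[1], template),
                    reverse=True)
    return [pos for pos, _ in ranked[:k]]

rng = np.random.default_rng(1)
template = rng.random(32); template /= template.sum()
cells = {}
for gy in range(4):                      # a 4x4 grid of candidate cells
    for gx in range(4):
        h = rng.random(32); h /= h.sum()
        cells[(gy, gx)] = h
cells[(2, 2)] = template.copy()          # plant a perfect match at (2, 2)
best = top_k_grid_cells(cells, template, k=3)
```

Keeping the k best cells, rather than a single predicted point, is what gives the tracker several re-detection chances after fast motion or occlusion.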
The feature extraction process of the invention distils a large deep learning network, trained on other databases, into a small deep network. In the training stage, the supervision signal is the feature extracted by each layer of the large network, which the small network imitates and approximates as closely as possible; in the test stage, the features extracted by the small network are the distilled features. The distilled deep features extracted by the small network have strong representation and discriminative ability.
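A toy illustration of this layer-wise distillation objective: a small "student" layer is trained by gradient descent to regress the features of a frozen "teacher" layer under an MSE loss. The linear layers and hyperparameters are stand-ins for the convolutional networks the patent actually uses:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: one frozen "teacher" layer and a small "student" layer.
# The patent distils a large conv net layer by layer; here the student
# imitates a single teacher layer's features via an MSE loss.
W_teacher = rng.standard_normal((64, 16))        # frozen teacher weights
W_student = rng.standard_normal((64, 16)) * 0.01 # small random init

def mse(a, b):
    return float(((a - b) ** 2).mean())

lr = 0.01
losses = []
for step in range(200):
    x = rng.standard_normal((32, 64))            # a batch of inputs
    f_teacher = x @ W_teacher                    # supervision: teacher features
    f_student = x @ W_student                    # student's imitation
    losses.append(mse(f_student, f_teacher))
    # gradient of the MSE w.r.t. the student weights
    grad = 2 * x.T @ (f_student - f_teacher) / x.shape[0]
    W_student -= lr * grad
```

After training, only the small student network is run at test time, which is where the speed and low hardware requirements claimed above come from.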
The observation model training process of the invention learns, online, a model adapted to the target under different video environments, so that unreasonable tracking results can be filtered out effectively during tracking, avoiding the localization errors that arise in existing methods when the tracking result does not match the surrounding environment information.
The ensemble learning fusion of the invention clusters the historical target appearance information, obtaining relatively stable appearance-representation cluster centres and enhancing the stability of the appearance model.
The update process of the invention trains the model during tracking only on samples selected by specific validity rules, avoiding computation over all predicted samples and reducing the risk that model updates cause drift or inaccurate tracking. The method and its algorithms are reasonable and efficient, with reduced complexity, and can achieve real-time target tracking.
The method can be widely applied in fields such as visual security, machine vision tracking, and human-machine interfaces, and is a general-purpose core method for intelligent visual surveillance.
Brief description of the drawings
Fig. 1A shows the structural block diagram of the target tracking system of the invention;
Fig. 1B shows the structural block diagram of the candidate proposal generation unit in the target tracking system of the invention;
Fig. 1C shows the structural block diagram of the feature extraction unit in the target tracking system of the invention;
Fig. 1D shows the structural block diagram of the observation model judging unit in the target tracking system of the invention;
Fig. 2 shows the flow chart of the target tracking method of the invention.
Detailed description of the embodiments
To make the objects, technical solutions, and advantages of the present invention clearer, the invention is described in more detail below with reference to specific embodiments and the accompanying drawings.
The invention initializes the target in the first frame of a video sequence either manually or automatically (for example, by general object detection), and then outputs the target trajectory in subsequent frames, i.e., the target's position and its outline bounding rectangle in each frame.
Further, the invention extracts distilled features with a learned small deep network, giving stronger representation ability.
Further, the invention learns nonparametric cluster centres and a target cluster template online from the historical spatio-temporal target visual features in the video sequence, improving robustness to problems such as occlusion, raising accuracy, and reducing the error rate.
Further, the invention applies a spatial distribution constraint on the target in the current frame, reducing erroneous predictions and further improving tracking accuracy.
Further, the invention has low hardware requirements and low algorithmic complexity.
Referring to the structural diagram of the target tracking system of the invention shown in Fig. 1A, the system comprises a video frame image acquisition device, a target tracking device, a storage device, and a display device.
The image acquisition device is used to obtain input images; for example, it may be a surveillance camera that sends an image sequence, together with the target box initialized in the first frame, to the target tracking device.
The target tracking device processes the image sequence. The tracking device may be provided in an ordinary PC, a tablet, a graphics processor, or an embedded processing box. The target tracking device comprises a candidate proposal generation unit, a low-level feature extraction unit, an observation model judging unit, an ensemble learning fusion unit, and an online model updating unit.
The candidate proposal generation unit receives the image sequence and the first frame's initial target box from the image acquisition device; when processing the second frame, it extracts a small number of candidate proposals around the initial target position of the first frame according to a certain strategy.
As shown in Fig. 1B, the candidate proposal generation unit further comprises: an online detection filter unit, a target distribution probability unit, and a candidate proposal output unit.
The online detection filter unit mainly analyses historical target tracking data and learns a target appearance filter online. The target distribution probability unit mainly computes, via the online detection filter unit, the spatial probability distribution of the target over the whole image.
The candidate proposal output unit roughly estimates, from the target probability distribution, the several positions where the target is most probably present, and takes these positions as candidate proposals.
As shown in Fig. 1C, the feature extraction unit obtains a small feature network model by distilling an existing large deep network model, so that deep features with higher robustness and representation ability can be extracted more quickly.
As shown in Fig. 1D, the observation model judging unit mainly comprises model parameters, a convolution operation, and a threshold arbiter. The model parameters are mainly learned by ridge regression. The main process is: the features of the target region and its context region are Fourier-transformed and densely sampled via a circulant structure (see Joao F. Henriques, et al., "High-Speed Tracking with Kernelized Correlation Filters", PAMI 2015), so that the model parameters can be learned rapidly. For fast detection, the model parameters and the features extracted from the search region are Fourier-transformed, and the transformed parameters and features are convolved in the frequency domain, quickly yielding a classification score for every position, i.e., the target confidence map.
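This frequency-domain ridge regression can be sketched in its simplest single-channel linear form; the cited Henriques et al. formulation additionally supports kernels and multi-channel features, so treat this as a simplified illustration:

```python
import numpy as np

def gaussian_label(h, w, sigma=2.0):
    """Prior label map: a Gaussian peak at the patch centre."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-(((ys - h // 2) ** 2 + (xs - w // 2) ** 2)
                    / (2 * sigma ** 2)))

def train_filter(x, y, lam=1e-2):
    """Ridge regression over all cyclic shifts of x, solved per frequency
    (single-channel linear case of the correlation filter)."""
    X, Y = np.fft.fft2(x), np.fft.fft2(y)
    return (np.conj(X) * Y) / (np.conj(X) * X + lam)

def detect(filt_hat, z):
    """Confidence map over a search patch z; its argmax gives the shift."""
    return np.real(np.fft.ifft2(filt_hat * np.fft.fft2(z)))

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 64))      # training-patch feature (toy)
y = gaussian_label(64, 64)             # desired response, peaked at centre
f_hat = train_filter(x, y)
# cyclically shift the patch and check the response peak moves with it
z = np.roll(x, (5, 7), axis=(0, 1))
resp = detect(f_hat, z)
peak = np.unravel_index(np.argmax(resp), resp.shape)
```

Because the circulant structure diagonalizes in the Fourier basis, both training and detection cost only a few FFTs per frame, which is the source of the speed claim.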
The threshold arbiter analyses the confidence map: the position with the maximum confidence, provided it exceeds a certain threshold, is judged to be the target; if the maximum of the confidence map is below the threshold, the target in the current frame is considered occluded, and the observation model is not updated.
The ensemble learning fusion unit makes the final decision over the multiple candidate target boxes, filtering tracking results with high overlap, and obtains the final tracked target position and bounding rectangle information (see Guibo Zhu, Jinqiao Wang and Hanqing Lu, "Clustering Ensemble Tracking", ACCV, 2014).
The online model updating unit mainly updates the learned model function online according to a certain ratio, using the threshold arbiter to decide whether to update the model parameters, so as to prevent model drift. The storage device is used to store the final target tracking results output by the decision fusion unit. The display device, which may be a display screen, shows the final tracking result on the input image for the user to watch.
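The "update according to a certain ratio", gated by the threshold arbiter, can be sketched as a linear interpolation towards the newly learned parameters; the ratio and threshold values below are illustrative assumptions, not values given by the patent:

```python
import numpy as np

def update_model(old_params, new_params, ratio=0.02,
                 confidence=1.0, threshold=0.3):
    """Online model update: linearly interpolate towards the newly learned
    parameters, but freeze the model when the confidence peak is below the
    occlusion threshold. `ratio` and `threshold` are illustrative values.
    """
    if confidence < threshold:
        return old_params                       # judged occluded: no update
    return (1.0 - ratio) * old_params + ratio * new_params

old = np.zeros(4)
new = np.ones(4)
kept = update_model(old, new, confidence=0.1)   # below threshold: frozen
moved = update_model(old, new, confidence=0.9)  # normal interpolated update
```

A small ratio makes the model an exponential moving average over past frames, so a single bad frame cannot overwrite the appearance model, which is exactly the drift protection the unit is for.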
Fig. 2 shows the flow chart of a target tracking method of the invention, which comprises the following steps:
Step 200: start the target tracking system;
Step 201: in the start step, obtain a series of video frames and initialize the target box in the first frame;
Step 202: generate a small number of candidate target proposals;
Step 203: extract features for each candidate target proposal;
Step 204: run the observation model judging unit;
Step 205: judge whether to fuse the observation models; if fusion applies, go to step 206; otherwise go to step 207;
Step 206: perform ensemble learning over the multiple observation models to obtain a better result;
Step 207: compute the classification response map, or confidence map, of the target using the result of step 205 or 206;
Step 208: judge whether the maximum of the confidence map exceeds a certain threshold; if so, enter the online model updating unit; otherwise do not update the model;
Step 209: when the online model updating unit is needed, update the observation model judging unit online based on the target box output of step 210;
Step 210: from the target confidence map of step 207, output the target position and target box; once the target box is obtained, jump back to step 202 and continue tracking prediction on subsequent frames;
Step 211: output the historical target boxes, obtaining the trajectory of the tracked target;
Step 212: end the target tracking system.
The above embodiments merely illustrate the principles and effects of the invention and some of the embodiments used. Those of ordinary skill in the art may make certain variations and improvements without departing from the concept of the invention, and these fall within the scope of protection of the invention.

Claims (6)

1. A target tracking method based on nonparametric online clustering, comprising the following steps:
Step 1: given an input video, initialize a target box of interest in the first frame; extract a color histogram feature from the target box, and initialize the cluster centres using the K-means method;
Step 2: compute a spatially preserving distilled image feature map from the context region image of the target box, and assume that this underlying image feature map has a prior label distribution of the same scale;
Step 3: based on the distilled image feature map and the prior label distribution, learn a filter tracker by ridge regression in the frequency domain, rather than in the traditional spatial domain;
Step 4: for the next frame, divide multiple grid cells around the target centre position predicted by the previous frame's tracker; extract a color histogram feature for each cell and compute its similarity to the template generated from the cluster centres, obtaining a coarse target spatial probability distribution; based on this coarse distribution, take the top k cells with the largest similarity;
Step 5: for each of the k cells, extract a feature map of an image the same size as the first frame's context region; convolve the filter tracker with the image feature map via FFT to obtain a confidence value for each spatial location; take the location with the maximum confidence as the target centre position, which also yields the target state; based on the target state, extract a new color histogram feature;
Step 6: compare the new color histogram with the previously learned cluster centres; if the similarity exceeds a certain value, assign the histogram to its most similar cluster centre and update that centre; if the similarity is below that value, create a new cluster centre; after the nonparametric online incremental clustering yields the cluster centres, merge them into one target template; extract a context-region feature map based on the centre position, learn a new tracking filter, and update the previous frame's tracking filter with it;
Step 7: repeat steps 4-6 in a loop to perform online target tracking.
2. The method according to claim 1, characterised in that the distilled-feature extraction of step 2 specifically distils a large deep learning network, trained on other databases, into a small deep network: in the training stage, the supervision signal is the feature extracted by each layer of the large network, which the small network imitates and approximates as closely as possible; in the test stage, the features extracted by the small network are the distilled features.
3. The method according to claim 1, characterised in that the spatial distribution of the target is obtained in step 4 by means of grids, and that the nonparametric online clustering algorithm proposed in step 7 merges the cluster centres into one target template according to certain weights.
4. The method according to claim 1, characterised in that in step 4 the target spatial distribution is obtained as follows: around the target centre position predicted by the previous frame's tracker, multiple grid cells are divided; a color histogram feature is extracted for each cell, and its similarity to the target template generated from the cluster centres is computed, yielding the target spatial distribution.
5. The method according to claim 1, characterised in that in step 7 the nonparametric online clustering method generates a target template as follows: new color histograms are incrementally clustered online in a nonparametric fashion to obtain new cluster centres, which are merged into one target template according to certain weights; the weight of each cluster centre is determined by the number of samples in that cluster; in addition, to obtain a more stable target template, cluster centres with small weights are excluded before the template is generated, and the remaining centres with larger weights are normalized so that their weights sum to one, yielding the target template; the cluster centres are computed as follows:
Here w_j is the weight of the j-th cluster centre; specifically, after a sample x_i is obtained online, the similarity between the sample and each cluster centre c_j is computed, the similarity function s(x_i, c_j) being:
The target template is computed as:
where w_i is the weight of each cluster centre.
6. The method according to claim 1, characterised in that in step 3, based on the distilled feature map and the prior label distribution, a filter tracker is learned by ridge regression using the circulant structure and the fast Fourier transform, so that the filter tracker is learned quickly and used for fast detection.
CN201611219002.5A 2016-12-26 2016-12-26 Target tracking method based on nonparametric online clustering Active CN106952293B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611219002.5A CN106952293B (en) 2016-12-26 2016-12-26 Target tracking method based on nonparametric online clustering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611219002.5A CN106952293B (en) 2016-12-26 2016-12-26 Target tracking method based on nonparametric online clustering

Publications (2)

Publication Number Publication Date
CN106952293A true CN106952293A (en) 2017-07-14
CN106952293B CN106952293B (en) 2020-02-28

Family

ID=59465781

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611219002.5A Active CN106952293B (en) 2016-12-26 2016-12-26 Target tracking method based on nonparametric online clustering

Country Status (1)

Country Link
CN (1) CN106952293B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108093153A (en) * 2017-12-15 2018-05-29 Shenzhen Intellifusion Technologies Co Ltd Target tracking method, apparatus, electronic device and storage medium
CN108986146A (en) * 2017-12-11 2018-12-11 Ropt (Xiamen) Technology Group Co Ltd A correlation filter tracking method based on background information and adaptive regression labels
CN109215057A (en) * 2018-07-31 2019-01-15 Institute of Information Engineering, Chinese Academy of Sciences A high-performance visual tracking method and device
CN109785368A (en) * 2017-11-13 2019-05-21 Tencent Technology (Shenzhen) Co Ltd A target tracking method and device
CN109919921A (en) * 2019-02-25 2019-06-21 Tianjin University An influence-degree modeling method based on generative adversarial networks
CN110111370A (en) * 2019-05-15 2019-08-09 Chongqing University A visual object tracking method based on TLD and deep multi-scale spatio-temporal features
CN110633597A (en) * 2018-06-21 2019-12-31 北京京东尚科信息技术有限公司 Driving region detection method and device
CN114973167A (en) * 2022-07-28 2022-08-30 Sonli Holdings Group Co Ltd Multi-target tracking method based on offline clustering and unsupervised contrastive learning

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101251893A (en) * 2008-03-28 2008-08-27 Xidian University Adaptive multi-scale texture image segmentation method based on wavelets and mean shift
CN101777187A (en) * 2010-01-15 2010-07-14 西安电子科技大学 Automatic cell tracking method for video microscopy images based on the Meanshift algorithm
US20110113095A1 (en) * 2009-11-10 2011-05-12 Hamid Hatami-Hanza System and Method For Value Significance Evaluation of Ontological Subjects of Networks and The Applications Thereof
WO2011067790A3 (en) * 2009-12-02 2011-10-06 Tata Consultancy Services Limited Cost-effective system and method for detecting, classifying and tracking the pedestrian using near infrared camera
CN103761532A (en) * 2014-01-20 2014-04-30 清华大学 Label space dimensionality reduction method and system based on feature-related implicit coding
CN104021577A (en) * 2014-06-19 2014-09-03 上海交通大学 Video tracking method based on local background learning
US20150074036A1 (en) * 2013-09-12 2015-03-12 Agt International Gmbh Knowledge management system
WO2016040304A1 (en) * 2014-09-10 2016-03-17 Bae Systems Information And Electronic Systems Integration Inc. A method for detection and characterization of technical emergence and associated methods
CN105976400A (en) * 2016-05-10 2016-09-28 北京旷视科技有限公司 Object tracking method and device based on neural network model
CN106097391A (en) * 2016-06-13 2016-11-09 浙江工商大学 Multi-object tracking method assisted by deep neural network identification

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
CHUNG-CHENG CHIU ET AL.: "A Robust Object Segmentation System Using a Probability-Based Background Extraction Algorithm", 《IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY》 *
GEOFFREY HINTON ET AL.: "Distilling the Knowledge in a Neural Network", 《ARXIV:1503.02531V1》 *
JOÃO F. HENRIQUES ET AL.: "High-Speed Tracking with Kernelized Correlation Filters", 《IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE》 *
YANG HENG ET AL.: "Object detection algorithm based on random background modeling", 《JOURNAL OF APPLIED OPTICS》 *
WANG WEIHUA: "Moving object detection and behavior recognition methods in intelligent visual surveillance", 《CHINA DOCTORAL DISSERTATIONS FULL-TEXT DATABASE, INFORMATION SCIENCE AND TECHNOLOGY》 *
GE SHIMING ET AL.: "Human eye localization method based on correlation filter banks", 《NETWORK NEW MEDIA TECHNOLOGY》 *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109785368B (en) * 2017-11-13 2022-07-22 腾讯科技(深圳)有限公司 Target tracking method and device
CN109785368A (en) * 2017-11-13 2019-05-21 腾讯科技(深圳)有限公司 Target tracking method and device
CN108986146A (en) * 2017-12-11 2018-12-11 罗普特(厦门)科技集团有限公司 Correlation filtering tracking method based on background information and adaptive regression labels
CN108093153A (en) * 2017-12-15 2018-05-29 深圳云天励飞技术有限公司 Target tracking method and device, electronic device, and storage medium
CN110633597A (en) * 2018-06-21 2019-12-31 北京京东尚科信息技术有限公司 Drivable region detection method and device
CN110633597B (en) * 2018-06-21 2022-09-30 北京京东尚科信息技术有限公司 Drivable region detection method and device
CN109215057B (en) * 2018-07-31 2021-08-20 中国科学院信息工程研究所 High-performance visual tracking method and device
CN109215057A (en) * 2018-07-31 2019-01-15 中国科学院信息工程研究所 High-performance visual tracking method and device
CN109919921A (en) * 2019-02-25 2019-06-21 天津大学 Influence degree modeling method based on generative adversarial networks
CN109919921B (en) * 2019-02-25 2023-10-20 天津大学 Environmental impact degree modeling method based on generative adversarial networks
CN110111370A (en) * 2019-05-15 2019-08-09 重庆大学 Visual object tracking method based on TLD and deep multi-scale spatio-temporal features
CN110111370B (en) * 2019-05-15 2023-05-30 重庆大学 Visual object tracking method based on TLD and deep multi-scale spatio-temporal features
CN114973167A (en) * 2022-07-28 2022-08-30 松立控股集团股份有限公司 Multi-target tracking method based on offline clustering and unsupervised contrastive learning
CN114973167B (en) * 2022-07-28 2022-11-04 松立控股集团股份有限公司 Multi-target tracking method based on offline clustering and unsupervised contrastive learning

Also Published As

Publication number Publication date
CN106952293B (en) 2020-02-28

Similar Documents

Publication Publication Date Title
CN106952293A (en) Target tracking method based on nonparametric online clustering
CN109800689B (en) Target tracking method based on space-time feature fusion learning
CN106897670B (en) Recognition method for violent express-parcel sorting based on computer vision
Huang et al. Multiple target tracking by learning-based hierarchical association of detection responses
CN105405154B (en) Target object tracking based on color-structure feature
Yang et al. Context-aware visual tracking
CN100573548C (en) Method and apparatus for tracking bimanual movements
KR102462934B1 (en) Video analysis system for digital twin technology
CN111932583A (en) Space-time information integrated intelligent tracking method based on complex background
CN108447080A (en) Target tracking method, system and storage medium based on hierarchical data association and convolutional neural networks
CN110569843B (en) Intelligent detection and identification method for mine targets
CN108664844A (en) Image object semantic recognition and tracking based on deep convolutional neural networks
CN106355604A (en) Target image tracking method and system
CN106570490B (en) Real-time pedestrian tracking method based on fast clustering
CN112836640A (en) Single-camera multi-target pedestrian tracking method
CN108734109B (en) Visual target tracking method and system for image sequences
Atanasov et al. Hypothesis testing framework for active object detection
CN111368634B (en) Human head detection method, system and storage medium based on neural network
CN108320306A (en) Video target tracking method fusing TLD and KCF
CN113688797A (en) Abnormal behavior identification method and system based on skeleton extraction
CN109697727A (en) Target tracking method, system and storage medium based on correlation filtering and metric learning
CN106447698A (en) Multi-pedestrian tracking method and system based on distance sensors
López-Rubio et al. Anomalous object detection by active search with PTZ cameras
CN106127798B (en) Dense spatio-temporal context target tracking based on adaptive model
CN109615641A (en) Multi-target pedestrian tracking system and method based on the KCF algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20200927

Address after: Room 108, No. 318, Shuixiu Road, Jinze Town (Xichen), Qingpu District, Shanghai 201700

Patentee after: Shanghai Yingpu Technology Co.,Ltd.

Address before: Room 609, 6th Floor, Media Elite Headquarters, No. 166, North Garden Village North, Gaobeidian Township, Chaoyang District, Beijing 100025

Patentee before: BEIJING MOVIEBOOK SCIENCE AND TECHNOLOGY Co.,Ltd.

PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A Target Tracking Method Based on Nonparametric Online Clustering

Effective date of registration: 20230425

Granted publication date: 20200228

Pledgee: Bank of Communications Co.,Ltd. Beijing Tongzhou Branch

Pledgor: Shanghai Yingpu Technology Co.,Ltd.

Registration number: Y2023990000234

PP01 Preservation of patent right

Effective date of registration: 20231128

Granted publication date: 20200228