CN103440668A - Method and device for tracing online video target - Google Patents

Method and device for tracing online video target

Info

Publication number
CN103440668A
Authority
CN
China
Prior art keywords
target
image
unit
next frame
foreground
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310390529.4A
Other languages
Chinese (zh)
Other versions
CN103440668B (en)
Inventor
葛仕明
文辉
陈水仙
秦伟俊
孙利民
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Information Engineering of CAS
Original Assignee
Institute of Information Engineering of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Information Engineering of CAS filed Critical Institute of Information Engineering of CAS
Priority to CN201310390529.4A priority Critical patent/CN103440668B/en
Publication of CN103440668A publication Critical patent/CN103440668A/en
Application granted granted Critical
Publication of CN103440668B publication Critical patent/CN103440668B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to a method and device for tracking a target in online video. The method comprises the following steps: image features of the start frame of the online video are obtained and an initial background model is established; the next frame image is obtained; the image features of the initial background model are compared with those of the next frame to obtain a comparison result, and the initial background model is updated according to the comparison result; the foreground image is obtained and the foreground target is extracted; target features are obtained by an online learning method and the foreground target is located to obtain its position information; the position of the foreground target is marked and the marked frame is output; and all output frames are combined to obtain the motion trajectory of the foreground target. The method processes a live surveillance video and tracks the target immediately, rather than only after all original video images have been obtained, which guarantees the real-time validity of the data and avoids the loss of accuracy that existing tracking schemes suffer after multiple targets cross and occlude one another.

Description

Method and device for tracking an online video target
Technical field
The present invention relates to the field of video stream analysis and processing, and in particular to an online video target tracking method and device.
Background art
In recent years, with the rapid development of digital media technology and intelligent video surveillance, public safety has drawn wide attention from society and the public, and multimedia and security video data have grown explosively. The traditional, time-consuming approach of manually browsing raw footage falls far short of people's needs for analyzing and processing video information. There is therefore an urgent need for an online video target tracking method and system that processes quickly, tracks targets accurately, and is robust.
Target tracking means finding a moving target of interest in an image sequence in real time, including motion parameters such as its position, velocity, and acceleration. It is a hot topic of computer vision research and has developed rapidly alongside computer technology. In the last century, image processing concentrated mainly on single images; even when tracking a moving target in a dynamic image sequence, each frame was treated as a dense still image. Only in the 1980s, when B. K. P. Horn et al. proposed the optical flow method (see Determining Optical Flow, B. K. P. Horn and B. G. Schunck, Artificial Intelligence, 1981, Elsevier), did target tracking research truly enter the field of dynamic image sequences. However, the optical flow method is so demanding of computer processing speed that it is difficult to meet real-time requirements in practical applications, and the noise present in video sequences greatly disturbs optical-flow tracking, so at the present stage the method is difficult to apply in practice.
Tracking algorithms emerge endlessly and can meet the requirements of particular applications, but they lack generality. In 1975, Fukunaga et al. first proposed the mean shift concept in a paper on estimating the gradient of a probability density function (see The Estimation of the Gradient of a Density Function, with Applications in Pattern Recognition), and in 1995 Yizong Cheng extended its scope of application in "Mean Shift, Mode Seeking, and Clustering". Although mean shift tracking is fast and fairly resistant to interference, tracking targets in different environments and with different motion characteristics exposes factors that affect its stability; for example, long-term tracking against a complex background, or of a target that moves, deforms, scales, or becomes occluded, is strongly affected. These problems can be addressed by choosing target features sensibly, by effective kernel functions (see Huang Jibin, The Concept, Properties and Applications of Kernel Functions, Journal of Hubei Normal University, 2007), and by adaptive bandwidth updating, template updating, and occlusion detection, yet achieving all four of these is not easy under many different application environments. Many scholars have studied these issues and solved them to varying degrees, but either the algorithmic complexity prevents real-time operation or too many preconditions are imposed, so the actual tracking results remain unsatisfactory. Directly matching against every target in the scene to find the best match position requires processing a large amount of redundant information; the computation is heavy and unnecessary. A common alternative is to predict the region where the moving object may appear in the next frame and search for the optimum point there. The Kalman filter (see A New Approach to Linear Filtering and Prediction Problems, R. E. Kalman, Journal of Basic Engineering, 1960) performs linear minimum-variance estimation of the state sequence of a dynamic system: it describes the system by a state equation and an observation equation and optimally estimates the next state from the preceding states. Its predictions are unbiased, stable, and optimal, its computational load is small enough for real-time use, and it predicts target position and velocity accurately, but it is suitable only for linear systems with Gaussian distributions.
Summary of the invention
The technical problem to be solved by the present invention is to provide an online video target tracking method and device that acquire online video in real time and track the target in the online video online.
The technical solution of the present invention to the above technical problem is as follows: an online video target tracking method comprises the following steps, illustrated by the code sketch after the list:
Step 1: acquire the start-frame image of the online video, extract image features, and establish an initial background model from the image features;
Step 2: acquire the next frame image and proceed to steps 3 and 4 simultaneously;
Step 3: compare the image features of the initial background model with the image features of the next frame image to obtain a comparison result, and update the initial background model according to the comparison result;
Step 4: obtain the foreground image of the next frame image and extract the foreground target from the foreground image;
Step 5: use an online learning method to obtain the target features of the foreground target, and locate the foreground target in the next frame image according to the target features to obtain its position information;
Step 6: mark the position of the foreground target in the next frame image according to the position information, and output the marked frame;
Step 7: repeat steps 2 to 6 until the online video input is complete, and combine all output frames to obtain the motion trajectory of the foreground target.
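By way of illustration only, the following is a minimal runnable sketch of the per-frame loop in steps 1 to 7, with deliberately simple stand-ins: the grayscale image as the "image feature", a running average as the background model, and the largest foreground blob's bounding box as the located target. The patent's actual units use texture features and online boosting plus manifold learning, which are far richer than this sketch.

```python
# Minimal sketch of the per-frame tracking loop (steps 1-7).
import cv2
import numpy as np

def track_online(video_source=0):
    cap = cv2.VideoCapture(video_source)
    ok, frame = cap.read()                       # step 1: start-frame image
    if not ok:
        return []
    background = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    trajectory = []
    while True:
        ok, frame = cap.read()                   # step 2: next frame image
        if not ok:                               # online video input complete
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        # step 3: compare with the background model, then update the model
        fg_mask = (cv2.absdiff(gray, background) > 25).astype(np.uint8) * 255
        cv2.accumulateWeighted(gray, background, 0.05)
        # step 4: extract the foreground target (largest connected blob)
        contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            target = max(contours, key=cv2.contourArea)
            x, y, w, h = cv2.boundingRect(target)        # step 5: position info
            trajectory.append((x + w // 2, y + h // 2))
            cv2.rectangle(frame, (x, y), (x + w, y + h),
                          (0, 255, 0), 2)                # step 6: mark and output
        cv2.imshow("tracking", frame)
        if cv2.waitKey(1) == 27:                         # Esc stops early
            break
    return trajectory                            # step 7: combined motion trajectory
```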
The beneficial effects of the invention are as follows: the present invention processes a live surveillance video and tracks the target immediately, without waiting to acquire all original video images before tracking begins. This guarantees the real-time validity of the data and avoids the loss of accuracy that existing tracking schemes suffer after multiple targets cross and occlude one another. The adopted algorithm has high soundness and operational efficiency, reduced complexity, and improved accuracy.
On the basis of the above technical solution, the present invention can be further improved as follows.
Further, the online learning method specifically uses a Boosting learning algorithm and a manifold learning algorithm to obtain the target features of the foreground target separately, yielding a first feature and a second feature respectively, and combines the first feature and the second feature with weighting coefficients to obtain the final target features, as in the fusion sketch below.
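As an illustration of the weighting-coefficient combination, the sketch below fuses candidate-confidence scores from the two learners linearly; the coefficient alpha is an assumed tuning parameter, not a value fixed by the patent.

```python
# Sketch of the weighted combination of the first (boosting) and
# second (manifold) features over a set of candidate locations.
import numpy as np

def fuse_features(boosting_score, manifold_score, alpha=0.6):
    """Linear fusion with weighting coefficient alpha (assumed value)."""
    boosting_score = np.asarray(boosting_score, dtype=np.float64)
    manifold_score = np.asarray(manifold_score, dtype=np.float64)
    return alpha * boosting_score + (1.0 - alpha) * manifold_score

# Example: fused confidence over three candidate positions.
print(fuse_features([0.9, 0.4, 0.1], [0.7, 0.8, 0.2]))  # -> [0.82 0.56 0.14]
```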
Further, the image features comprise texture features.
Further, step 3 further comprises the following sub-steps (a per-pixel sketch follows the list):
Step 3.1: match the texture features of the initial background model against the texture features of the next frame image;
Step 3.2: if the image features of the initial background model match those of the next frame image, label the pixels of the matching portion as background and proceed to step 3.3; otherwise, label the pixels of the non-matching portion as foreground and proceed to step 3.3;
Step 3.3: update the initial background model according to the labeled foreground and background, and return to step 3.1.
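A per-pixel sketch of sub-steps 3.1 to 3.3, assuming one texture feature value per pixel; the match threshold and learning rate below are assumptions for illustration, not values from the patent.

```python
# Sketch of the match / label / update cycle of step 3.
import numpy as np

def match_and_update(model, frame_feat, thresh=0.2, rate=0.05):
    # step 3.1: match texture features of the model and the next frame
    match = np.abs(frame_feat - model) <= thresh
    # step 3.2: matching pixels -> background (0), others -> foreground (1)
    labels = np.where(match, 0, 1)
    # step 3.3: blend only background pixels into the model, so the
    # foreground target is not absorbed into the background
    model = np.where(match, (1 - rate) * model + rate * frame_feat, model)
    return labels, model
```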
Further, an online video target tracking device comprises a background modeling unit, a target extraction unit, a target feature online learning unit, a target localization unit, and a sequence marking unit;
the background modeling unit acquires the start-frame image of the online video, extracts image features, establishes an initial background model from the image features, acquires the next frame image, compares the image features of the initial background model with those of the next frame image to obtain a comparison result, updates the initial background model according to the comparison result, and sends the information of the next frame image to the target extraction unit;
the target extraction unit obtains the foreground image of the next frame image, extracts the foreground target from the foreground image, and sends the foreground target information to the target feature online learning unit;
the target feature online learning unit receives the foreground target information, obtains the target features of the foreground target by the online learning method, and sends the target feature information to the target localization unit;
the target localization unit receives the target feature information, locates the foreground target in the next frame image according to the target features, obtains the position information of the foreground target, and sends it to the sequence marking unit;
the sequence marking unit marks the position of the foreground target in the next frame image according to the position information and outputs the marked frame; the target extraction unit, the target feature online learning unit, and the target localization unit are run repeatedly until the online video input is complete, and all output frames are combined to obtain the motion trajectory of the foreground target.
Further, the target feature online learning unit comprises a Boosting feature learning unit, a manifold feature learning unit, and a weighted combination unit;
the Boosting feature learning unit obtains the target features of the foreground target with the Boosting learning algorithm, yielding the first feature, and sends the first feature to the weighted combination unit;
the manifold feature learning unit obtains the target features of the foreground target with the manifold learning algorithm, yielding the second feature, and sends the second feature to the weighted combination unit;
the weighted combination unit receives the first feature and the second feature and combines them with weighting coefficients to obtain the final target features.
Further, the image features comprise texture features.
Further, the background modeling unit further comprises an acquisition unit, a matching unit, a marking unit, and an updating unit;
the acquisition unit acquires the start-frame image of the online video, extracts image features, establishes the initial background model from the image features, acquires the next frame image, and sends the information of the initial background model and of the next frame image to the matching unit;
the matching unit receives the information of the initial background model and of the next frame image, matches the texture features of the initial background model against those of the next frame image, and sends the matching result to the marking unit;
the marking unit receives the matching result; if the image features of the initial background model match those of the next frame image, it labels the pixels of the matching portion as background and invokes the updating unit; otherwise it labels the pixels of the non-matching portion as foreground and invokes the updating unit;
the updating unit updates the initial background model according to the labeled foreground and background and returns control to the matching unit.
Further, the online video target tracking device also comprises a storage device, a display device, and an image acquisition device;
the storage device stores the foreground target trajectory generated by the sequence marking unit;
the display device displays the foreground target trajectory generated by the sequence marking unit;
the image acquisition device acquires the online video in real time and sends it to the background modeling unit.
Brief description of the drawings
Fig. 1 is a flowchart of the steps of the method of the present invention;
Fig. 2 is a structural diagram of the device of the present invention;
Fig. 3 is a schematic diagram of the input and output effect of the present invention.
In the drawings, the parts represented by the reference numerals are as follows:
1, background modeling unit; 1-1, acquisition unit; 1-2, matching unit; 1-3, marking unit; 1-4, updating unit; 2, target extraction unit; 3, target feature online learning unit; 3-1, Boosting feature learning unit; 3-2, manifold feature learning unit; 3-3, weighted combination unit; 4, target localization unit; 5, sequence marking unit; 6, storage device; 7, display device; 8, image acquisition device.
Detailed description of embodiments
The principles and features of the present invention are described below with reference to the accompanying drawings; the examples are given only to explain the present invention and are not intended to limit its scope.
As shown in the drawings, Fig. 1 is a flowchart of the steps of the method of the present invention, Fig. 2 is a structural diagram of the device of the present invention, and Fig. 3 is a schematic diagram of the input and output effect of the present invention.
Embodiment 1
An online video target tracking method comprises the following steps:
Step 1: acquire the start-frame image of the online video, extract image features, and establish an initial background model from the image features;
Step 2: acquire the next frame image and proceed to steps 3 and 4 simultaneously;
Step 3: compare the image features of the initial background model with the image features of the next frame image to obtain a comparison result, and update the initial background model according to the comparison result;
Step 4: obtain the foreground image of the next frame image and extract the foreground target from the foreground image;
Step 5: use the online learning method to obtain the target features of the foreground target, and locate the foreground target in the next frame image according to the target features to obtain its position information;
Step 6: mark the position of the foreground target in the next frame image according to the position information, and output the marked frame;
Step 7: repeat steps 2 to 6 until the online video input is complete, and combine all output frames to obtain the motion trajectory of the foreground target.
The online learning method specifically uses a Boosting learning algorithm and a manifold learning algorithm to obtain the target features of the foreground target separately, yielding a first feature and a second feature respectively, and combines the first feature and the second feature with weighting coefficients to obtain the final target features.
The image features comprise texture features.
Step 3 further comprises:
Step 3.1: match the texture features of the initial background model against the texture features of the next frame image;
Step 3.2: if the image features of the initial background model match those of the next frame image, label the pixels of the matching portion as background and proceed to step 3.3; otherwise, label the pixels of the non-matching portion as foreground and proceed to step 3.3;
Step 3.3: update the initial background model according to the labeled foreground and background, and return to step 3.1.
An online video target tracking device comprises a background modeling unit 1, a target extraction unit 2, a target feature online learning unit 3, a target localization unit 4, and a sequence marking unit 5.
The background modeling unit 1 acquires the start-frame image of the online video, extracts image features, establishes an initial background model from the image features, acquires the next frame image, compares the image features of the initial background model with those of the next frame image to obtain a comparison result, updates the initial background model according to the comparison result, and sends the information of the next frame image to the target extraction unit 2.
The target extraction unit 2 obtains the foreground image of the next frame image, extracts the foreground target from the foreground image, and sends the foreground target information to the target feature online learning unit 3.
The target feature online learning unit 3 receives the foreground target information, obtains the target features of the foreground target by the online learning method, and sends the target feature information to the target localization unit 4.
The target localization unit 4 receives the target feature information, locates the foreground target in the next frame image according to the target features, obtains the position information of the foreground target, and sends it to the sequence marking unit 5.
The sequence marking unit 5 marks the position of the foreground target in the next frame image according to the position information and outputs the marked frame; the target extraction unit 2, the target feature online learning unit 3, and the target localization unit 4 are run repeatedly until the online video input is complete, and all output frames are combined to obtain the motion trajectory of the foreground target.
The target feature online learning unit 3 comprises a Boosting feature learning unit 3-1, a manifold feature learning unit 3-2, and a weighted combination unit 3-3.
The Boosting feature learning unit 3-1 obtains the target features of the foreground target with the Boosting learning algorithm, yielding the first feature, and sends the first feature to the weighted combination unit 3-3.
The manifold feature learning unit 3-2 obtains the target features of the foreground target with the manifold learning algorithm, yielding the second feature, and sends the second feature to the weighted combination unit 3-3.
The weighted combination unit 3-3 receives the first feature and the second feature and combines them with weighting coefficients to obtain the final target features.
The image features comprise texture features.
The background modeling unit 1 further comprises an acquisition unit 1-1, a matching unit 1-2, a marking unit 1-3, and an updating unit 1-4.
The acquisition unit 1-1 acquires the start-frame image of the online video, extracts image features, establishes the initial background model from the image features, acquires the next frame image, and sends the information of the initial background model and of the next frame image to the matching unit 1-2.
The matching unit 1-2 receives the information of the initial background model and of the next frame image, matches the texture features of the initial background model against those of the next frame image, and sends the matching result to the marking unit 1-3.
The marking unit 1-3 receives the matching result; if the image features of the initial background model match those of the next frame image, it labels the pixels of the matching portion as background and invokes the updating unit 1-4; otherwise it labels the pixels of the non-matching portion as foreground and invokes the updating unit 1-4.
The updating unit 1-4 updates the initial background model according to the labeled foreground and background and returns control to the matching unit 1-2.
The online video target tracking device also comprises a storage device 6, a display device 7, and an image acquisition device 8.
The storage device 6 stores the foreground target trajectory generated by the sequence marking unit 5.
The display device 7 displays the foreground target trajectory generated by the sequence marking unit 5.
The image acquisition device 8 acquires the online video in real time and sends it to the background modeling unit 1; it can be, for example, a surveillance camera.
The present invention marks and tracks moving targets in video images and can track stably, over long periods, targets that move, deform, scale, or become occluded; it has low hardware requirements and low algorithmic complexity.
The online video target tracking device of the present invention processes each currently acquired frame online. That is, image acquisition and video target tracking proceed synchronously, rather than retaining the whole video and only then starting to track. The device can be installed on a board, a graphics processing unit (GPU), or an embedded processing box.
Video target tracking in the present invention covers both single-target and multi-target tracking. The background modeling unit 1 accepts images from the image acquisition device 8 and segments each received frame into a foreground image and a background image.
The background modeling unit 1 may use texture-based background modeling (see Marko Heikkilä and Matti Pietikäinen, "A Texture-Based Method for Modeling the Background and Detecting Moving Objects", IEEE Trans. Pattern Anal. Machine Intell., 2006) to model the background of the input video, obtain the background image of each frame, and send it to the target extraction unit 2.
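The texture descriptor underlying that cited method is the local binary pattern (LBP); a minimal sketch of the 8-neighbour LBP code follows. The full method additionally maintains several adaptive LBP-histogram models per image block, which is omitted here.

```python
# Sketch of the 8-neighbour local binary pattern (LBP) texture code.
import numpy as np

def lbp8(gray):
    """3x3 LBP codes for an H x W grayscale array (values in [0, 255])."""
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]                       # centre pixels
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(shifts):
        # neighbour plane shifted by (dy, dx) relative to the centre
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.int32) << bit)
    return code
```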
The target extraction unit 2 subtracts the corresponding background image from each frame, then applies the prior-art graph cut algorithm (see J. Sun, W. Zhang, X. Tang, and H. Shum, "Background Cut", ECCV, 2006) to obtain an accurate foreground image, and uses the foreground image to mark the possible positions of the target.
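A hedged sketch of this extraction step is given below; cv2.grabCut is used as a readily available graph-cut stand-in for the cited Background Cut algorithm, not as that exact method, and the difference threshold is an assumption.

```python
# Sketch: rough background subtraction, graph-cut-style refinement,
# then bounding boxes of the foreground blobs as candidate positions.
import cv2
import numpy as np

def extract_foreground(frame_bgr, background_bgr, diff_thresh=30):
    diff = cv2.absdiff(frame_bgr, background_bgr).max(axis=2)
    mask = np.where(diff > diff_thresh,
                    cv2.GC_PR_FGD, cv2.GC_PR_BGD).astype(np.uint8)
    if not (mask == cv2.GC_PR_FGD).any():
        return []                            # nothing moved in this frame
    bgd = np.zeros((1, 65), np.float64)      # grabCut's internal models
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(frame_bgr, mask, None, bgd, fgd, 3, cv2.GC_INIT_WITH_MASK)
    fg = np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD)).astype(np.uint8)
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours]
```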
The target feature online learning unit 3 learns the target's features so that the target can be located accurately. The online boosting feature learning algorithm (see Y. Freund, A Short Introduction to Boosting, Journal of Japanese Society for Artificial Intelligence, 14(5): 771-780, September 1999) performs well on feature learning problems; besides regression and classification, boosting also acts as a feature selector. Online boosting feature learning mainly attends to the factors that distinguish the target from the background and from other foreground objects, not to the characteristics of the target itself. Considering the target from a single aspect makes tracking vulnerable to noise, so correct learning of the target features can be achieved by cooperative learning from two angles. We learn and represent the target features by cooperative learning of the target's linear manifold (see Zhenyue Zhang, Adaptive Manifold Learning, Pattern Analysis and Machine Intelligence, 2012) together with boosting features. The target manifold is approximated by a linear combination of its local subspaces; this learning method attends more to the characteristics of the target itself, and online updating of the target feature manifold gives good feature learning performance.
After the target feature online learning unit 3 has learned the target features, the target localization unit 4 uses those features to locate the target accurately.
The sequence marking unit 5 marks the located target and at the same time marks its motion trajectory, as sketched below.
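A minimal sketch of this marking step, drawing the located bounding box and the accumulated trajectory on the frame before it is output (cf. Fig. 3); the box and trajectory formats are assumptions for illustration.

```python
# Sketch: mark the target position and its motion trajectory on a frame.
import cv2
import numpy as np

def mark_frame(frame, box, trajectory):
    x, y, w, h = box
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)  # position
    if len(trajectory) >= 2:                                      # trajectory
        pts = np.array(trajectory, dtype=np.int32).reshape(-1, 1, 2)
        cv2.polylines(frame, [pts], isClosed=False,
                      color=(0, 0, 255), thickness=2)
    return frame
```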
The storage device 6 stores the video generated by the sequence marking unit 5.
The display device 7, which can be a display screen, plays back the processed video for the user to watch.
The online video target tracking device can also comprise a user interface for exporting video. A moving object in the present invention means the recorded color information of a real moving target that appears in successive frames, such as a person, a pet, or a moving vehicle; when a moving target passes through the area covered by the image acquisition device 8, it is usually captured in several consecutive frames.
That is, for each frame, its foreground image and the current background model are processed simultaneously.
Online learning of the target features is another important step: the target's features are obtained by online learning so that the target can be located accurately. In this embodiment, we learn and represent the target features by cooperative learning of the target's linear manifold and boosting features.
A manifold is a well-defined mathematical notion; informally, it is a nonlinear space, the simplest example being a sphere. Manifold learning algorithms assume that the relations between data points are nonlinear, as if the data were distributed on a manifold, and attempt to reduce the dimensionality of the data while preserving those nonlinear relations.
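To make the local-subspace idea concrete, the sketch below fits a linear subspace to recent target samples (one local patch of the appearance manifold) and scores a candidate by its reconstruction error; the subspace dimension k and the scoring formula are assumptions for illustration.

```python
# Sketch: approximate a local patch of the appearance manifold by a
# linear subspace and score candidates by reconstruction error.
import numpy as np

def fit_local_subspace(samples, k=5):
    """samples: n x d matrix of vectorized target patches."""
    mean = samples.mean(axis=0)
    # principal directions of this local patch of the manifold
    _, _, vt = np.linalg.svd(samples - mean, full_matrices=False)
    return mean, vt[:k]                     # k-dimensional local subspace

def manifold_score(candidate, mean, basis):
    residual = candidate - mean
    proj = basis.T @ (basis @ residual)     # projection onto the subspace
    err = np.linalg.norm(residual - proj)
    return 1.0 / (1.0 + err)                # higher = closer to the manifold
```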
The Boosting method improves the accuracy of a weak classification algorithm by constructing a series of prediction functions and then combining them in some way. It is a framework algorithm: by operating on the training sample set it obtains different sample subsets, trains a base classifier on each subset with the weak algorithm, and after a given number of training rounds n it has produced n base classifiers, which the framework fuses by weighted voting into a final classifier. Each individual base classifier need not have a high recognition rate, but their combination can, which is how the weak algorithm's recognition rate is improved. In short, the core idea of Boosting is to produce the desired strong learner by combining a series of weak learners, as in the sketch below.
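A minimal sketch of that combination rule: n weak classifiers are fused by weighted voting into a strong classifier. This is the generic AdaBoost-style rule, not the patent's specific online variant; the weak rules and weights below are purely illustrative.

```python
# Sketch: weighted voting of weak classifiers into a strong classifier.

def strong_classify(x, weak_learners, alphas):
    """weak_learners: functions mapping x -> +1/-1; alphas: their weights."""
    votes = sum(a * h(x) for a, h in zip(alphas, weak_learners))
    return 1 if votes >= 0 else -1

# Example: three weak threshold rules on a 1-D feature.
weak = [lambda x: 1 if x > 0.3 else -1,
        lambda x: 1 if x > 0.5 else -1,
        lambda x: 1 if x > 0.7 else -1]
alphas = [0.4, 0.8, 0.3]                    # higher weight = more reliable
print(strong_classify(0.6, weak, alphas))   # -> 1 (weighted vote is positive)
```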
Fig. 3 shows the input and output effect of the present invention. As shown in the figure, from the moment t-Δt at which the target (a pedestrian) enters the monitored area, the system tracks it up to the current time t and displays its current position and motion trajectory.
The online video target tracking mode of the present invention processes the moving-object sequence extracted in real time, guaranteeing that target tracking of the raw video images starts immediately, so the tracking meets the real-time requirement.
The present invention can track stably, over long periods, targets that move, deform, scale, or become occluded, so the tracking meets the high-accuracy requirement.
The algorithm of the present invention has high soundness and operational efficiency and reduced complexity.
The above is only a preferred embodiment of the present invention and is not intended to limit it; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.

Claims (9)

1. An online video target tracking method, characterized by comprising the following steps:
Step 1: acquire the start-frame image of the online video, extract image features, and establish an initial background model from the image features;
Step 2: acquire the next frame image and proceed to steps 3 and 4 simultaneously;
Step 3: compare the image features of the initial background model with the image features of the next frame image to obtain a comparison result, and update the initial background model according to the comparison result;
Step 4: obtain the foreground image of the next frame image and extract the foreground target from the foreground image;
Step 5: use an online learning method to obtain the target features of the foreground target, and locate the foreground target in the next frame image according to the target features to obtain its position information;
Step 6: mark the position of the foreground target in the next frame image according to the position information, and output the marked frame;
Step 7: repeat steps 2 to 6 until the online video input is complete, and combine all output frames to obtain the motion trajectory of the foreground target.
2. The online video target tracking method according to claim 1, characterized in that the online learning method specifically uses a Boosting learning algorithm and a manifold learning algorithm to obtain the target features of the foreground target separately, yielding a first feature and a second feature respectively, and combines the first feature and the second feature with weighting coefficients to obtain the final target features.
3. The online video target tracking method according to claim 1, characterized in that the image features comprise texture features.
4. The online video target tracking method according to claim 3, characterized in that step 3 further comprises:
Step 3.1: match the texture features of the initial background model against the texture features of the next frame image;
Step 3.2: if the image features of the initial background model match those of the next frame image, label the pixels of the matching portion as background and proceed to step 3.3; otherwise, label the pixels of the non-matching portion as foreground and proceed to step 3.3;
Step 3.3: update the initial background model according to the labeled foreground and background, and return to step 3.1.
5. An online video target tracking device, characterized by comprising a background modeling unit (1), a target extraction unit (2), a target feature online learning unit (3), a target localization unit (4), and a sequence marking unit (5);
the background modeling unit (1) acquires the start-frame image of the online video, extracts image features, establishes an initial background model from the image features, acquires the next frame image, compares the image features of the initial background model with those of the next frame image to obtain a comparison result, updates the initial background model according to the comparison result, and sends the information of the next frame image to the target extraction unit (2);
the target extraction unit (2) obtains the foreground image of the next frame image, extracts the foreground target from the foreground image, and sends the foreground target information to the target feature online learning unit (3);
the target feature online learning unit (3) receives the foreground target information, obtains the target features of the foreground target by the online learning method, and sends the target feature information to the target localization unit (4);
the target localization unit (4) receives the target feature information, locates the foreground target in the next frame image according to the target features, obtains the position information of the foreground target, and sends it to the sequence marking unit (5);
the sequence marking unit (5) marks the position of the foreground target in the next frame image according to the position information and outputs the marked frame; the target extraction unit (2), the target feature online learning unit (3), and the target localization unit (4) are run repeatedly until the online video input is complete, and all output frames are combined to obtain the motion trajectory of the foreground target.
6. The online video target tracking device according to claim 5, characterized in that the target feature online learning unit (3) comprises a Boosting feature learning unit (3-1), a manifold feature learning unit (3-2), and a weighted combination unit (3-3);
the Boosting feature learning unit (3-1) obtains the target features of the foreground target with the Boosting learning algorithm, yielding the first feature, and sends the first feature to the weighted combination unit (3-3);
the manifold feature learning unit (3-2) obtains the target features of the foreground target with the manifold learning algorithm, yielding the second feature, and sends the second feature to the weighted combination unit (3-3);
the weighted combination unit (3-3) receives the first feature and the second feature and combines them with weighting coefficients to obtain the final target features.
7. The online video target tracking device according to claim 5, characterized in that the image features comprise texture features.
8. The online video target tracking device according to claim 5, characterized in that the background modeling unit (1) further comprises an acquisition unit (1-1), a matching unit (1-2), a marking unit (1-3), and an updating unit (1-4);
the acquisition unit (1-1) acquires the start-frame image of the online video, extracts image features, establishes the initial background model from the image features, acquires the next frame image, and sends the information of the initial background model and of the next frame image to the matching unit (1-2);
the matching unit (1-2) receives the information of the initial background model and of the next frame image, matches the texture features of the initial background model against those of the next frame image, and sends the matching result to the marking unit (1-3);
the marking unit (1-3) receives the matching result; if the image features of the initial background model match those of the next frame image, it labels the pixels of the matching portion as background and invokes the updating unit (1-4); otherwise it labels the pixels of the non-matching portion as foreground and invokes the updating unit (1-4);
the updating unit (1-4) updates the initial background model according to the labeled foreground and background and returns control to the matching unit (1-2).
9. The online video target tracking device according to claim 5, characterized in that it also comprises a storage device (6), a display device (7), and an image acquisition device (8);
the storage device (6) stores the foreground target trajectory generated by the sequence marking unit (5);
the display device (7) displays the foreground target trajectory generated by the sequence marking unit (5);
the image acquisition device (8) acquires the online video in real time and sends it to the background modeling unit (1).
CN201310390529.4A 2013-08-30 2013-08-30 Method and device for tracing online video target Active CN103440668B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310390529.4A CN103440668B (en) 2013-08-30 2013-08-30 Method and device for tracing online video target

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310390529.4A CN103440668B (en) 2013-08-30 2013-08-30 Method and device for tracing online video target

Publications (2)

Publication Number Publication Date
CN103440668A true CN103440668A (en) 2013-12-11
CN103440668B CN103440668B (en) 2017-01-25

Family

ID=49694361

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310390529.4A Active CN103440668B (en) 2013-08-30 2013-08-30 Method and device for tracing online video target

Country Status (1)

Country Link
CN (1) CN103440668B (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104123732A (en) * 2014-07-14 2014-10-29 中国科学院信息工程研究所 Online target tracking method and system based on multiple cameras
CN104217221A (en) * 2014-08-27 2014-12-17 重庆大学 Method for detecting calligraphy and paintings based on textural features
CN105282496A (en) * 2014-12-02 2016-01-27 四川浩特通信有限公司 Method for tracking target video object
CN106022279A (en) * 2016-05-26 2016-10-12 天津艾思科尔科技有限公司 Method and system for detecting people wearing a hijab in video images
CN106815844A (en) * 2016-12-06 2017-06-09 中国科学院西安光学精密机械研究所 A kind of stingy drawing method based on manifold learning
CN106934757A (en) * 2017-01-26 2017-07-07 北京中科神探科技有限公司 Monitor video foreground extraction accelerated method based on CUDA
CN107368188A (en) * 2017-07-13 2017-11-21 河北中科恒运软件科技股份有限公司 The prospect abstracting method and system based on spatial multiplex positioning in mediation reality
WO2018036454A1 (en) * 2016-08-26 2018-03-01 Huawei Technologies Co., Ltd. Method and apparatus for annotating a video stream comprising a sequence of frames
CN109215057A (en) * 2018-07-31 2019-01-15 中国科学院信息工程研究所 A kind of high-performance visual tracking method and device
CN109785356A (en) * 2018-12-18 2019-05-21 北京中科晶上超媒体信息技术有限公司 A kind of background modeling method of video image
CN112449160A (en) * 2020-11-13 2021-03-05 珠海大横琴科技发展有限公司 Video monitoring method and device and readable storage medium
CN112950676A (en) * 2021-03-25 2021-06-11 长春理工大学 Intelligent robot loop detection method
CN113283279A (en) * 2021-01-25 2021-08-20 广东技术师范大学 Deep learning-based multi-target tracking method and device in video
CN113468916A (en) * 2020-03-31 2021-10-01 顺丰科技有限公司 Model training method, throwing track detection method, device and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006056743A1 (en) * 2004-11-25 2006-06-01 British Telecommunications Public Limited Company Method and system for initialising a background model
CN101216943B (en) * 2008-01-16 2010-07-14 湖北莲花山计算机视觉和信息科学研究院 A method for video moving object subdivision
CN102054170A (en) * 2011-01-19 2011-05-11 中国科学院自动化研究所 Visual tracking method based on minimized upper bound error
CN103116987A (en) * 2013-01-22 2013-05-22 华中科技大学 Traffic flow statistic and violation detection method based on surveillance video processing

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006056743A1 (en) * 2004-11-25 2006-06-01 British Telecommunications Public Limited Company Method and system for initialising a background model
CN101216943B (en) * 2008-01-16 2010-07-14 湖北莲花山计算机视觉和信息科学研究院 A method for video moving object subdivision
CN102054170A (en) * 2011-01-19 2011-05-11 中国科学院自动化研究所 Visual tracking method based on minimized upper bound error
CN103116987A (en) * 2013-01-22 2013-05-22 华中科技大学 Traffic flow statistic and violation detection method based on surveillance video processing

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LI KUN: "Research and Implementation of Video Tracking Algorithms Based on Online Learning", China Master's Theses Full-text Database *
LU XINGJIA et al.: "Research on Pedestrian Tracking Algorithms Based on HOG and Haar Features", Computer Science *

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104123732A (en) * 2014-07-14 2014-10-29 中国科学院信息工程研究所 Online target tracking method and system based on multiple cameras
CN104217221A (en) * 2014-08-27 2014-12-17 重庆大学 Method for detecting calligraphy and paintings based on textural features
CN105282496B (en) * 2014-12-02 2018-03-23 四川浩特通信有限公司 A kind of method for tracking target video object
CN105282496A (en) * 2014-12-02 2016-01-27 四川浩特通信有限公司 Method for tracking target video object
CN106022279A (en) * 2016-05-26 2016-10-12 天津艾思科尔科技有限公司 Method and system for detecting people wearing a hijab in video images
US10140508B2 (en) 2016-08-26 2018-11-27 Huawei Technologies Co. Ltd. Method and apparatus for annotating a video stream comprising a sequence of frames
WO2018036454A1 (en) * 2016-08-26 2018-03-01 Huawei Technologies Co., Ltd. Method and apparatus for annotating a video stream comprising a sequence of frames
CN109644255A (en) * 2016-08-26 2019-04-16 华为技术有限公司 Mark includes the method and apparatus of the video flowing of a framing
CN109644255B (en) * 2016-08-26 2020-10-16 华为技术有限公司 Method and apparatus for annotating a video stream comprising a set of frames
CN106815844A (en) * 2016-12-06 2017-06-09 中国科学院西安光学精密机械研究所 A kind of stingy drawing method based on manifold learning
CN106934757A (en) * 2017-01-26 2017-07-07 北京中科神探科技有限公司 Monitor video foreground extraction accelerated method based on CUDA
CN106934757B (en) * 2017-01-26 2020-05-19 北京中科神探科技有限公司 Monitoring video foreground extraction acceleration method based on CUDA
CN107368188A (en) * 2017-07-13 2017-11-21 河北中科恒运软件科技股份有限公司 The prospect abstracting method and system based on spatial multiplex positioning in mediation reality
CN109215057A (en) * 2018-07-31 2019-01-15 中国科学院信息工程研究所 A kind of high-performance visual tracking method and device
CN109215057B (en) * 2018-07-31 2021-08-20 中国科学院信息工程研究所 High-performance visual tracking method and device
CN109785356A (en) * 2018-12-18 2019-05-21 北京中科晶上超媒体信息技术有限公司 A kind of background modeling method of video image
CN109785356B (en) * 2018-12-18 2021-02-05 北京中科晶上超媒体信息技术有限公司 Background modeling method for video image
CN113468916A (en) * 2020-03-31 2021-10-01 顺丰科技有限公司 Model training method, throwing track detection method, device and storage medium
CN112449160A (en) * 2020-11-13 2021-03-05 珠海大横琴科技发展有限公司 Video monitoring method and device and readable storage medium
CN113283279A (en) * 2021-01-25 2021-08-20 广东技术师范大学 Deep learning-based multi-target tracking method and device in video
CN113283279B (en) * 2021-01-25 2024-01-19 广东技术师范大学 Multi-target tracking method and device in video based on deep learning
CN112950676A (en) * 2021-03-25 2021-06-11 长春理工大学 Intelligent robot loop detection method

Also Published As

Publication number Publication date
CN103440668B (en) 2017-01-25

Similar Documents

Publication Publication Date Title
CN103440668A (en) Method and device for tracing online video target
Bi et al. Dynamic mode decomposition based video shot detection
Han et al. Dynamic scene semantics SLAM based on semantic segmentation
Min et al. A new approach to track multiple vehicles with the combination of robust detection and two classifiers
CN101299241B (en) Method for detecting multi-mode video semantic conception based on tensor representation
Ren et al. A novel squeeze YOLO-based real-time people counting approach
CN109447082B (en) Scene moving object segmentation method, system, storage medium and equipment
CN112861673A (en) False alarm removal early warning method and system for multi-target detection of surveillance video
Zhang et al. Detecting and removing visual distractors for video aesthetic enhancement
Charouh et al. Improved background subtraction-based moving vehicle detection by optimizing morphological operations using machine learning
CN112990122A (en) Complex behavior identification method based on video basic unit analysis
Zhang et al. A survey on instance segmentation: Recent advances and challenges
Zhou Feature extraction of human motion video based on virtual reality technology
Fan et al. Multi-task and multi-modal learning for rgb dynamic gesture recognition
Liu et al. HDA-Net: Hybrid convolutional neural networks for small objects recognization at airports
Qin et al. Application of video scene semantic recognition technology in smart video
Zhang [Retracted] Sports Action Recognition Based on Particle Swarm Optimization Neural Networks
Zhong et al. Key frame extraction algorithm of motion video based on priori
Callemein et al. Automated analysis of eye-tracker-based human-human interaction studies
Gong et al. Research on an improved KCF target tracking algorithm based on CNN feature extraction
Deng et al. Abnormal behavior recognition based on feature fusion C3D network
Sha et al. An improved two-stream CNN method for abnormal behavior detection
Wei et al. Graph-theoretic spatiotemporal context modeling for video saliency detection
Li et al. Video analysis and trajectory based video annotation system
Yang et al. Video quality evaluation toward complicated sport activities for clustering analysis

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant