CN104820996B - A kind of method for tracking target of the adaptive piecemeal based on video - Google Patents
A kind of method for tracking target of the adaptive piecemeal based on video
- Publication number
- CN104820996B CN104820996B CN201510236291.9A CN201510236291A CN104820996B CN 104820996 B CN104820996 B CN 104820996B CN 201510236291 A CN201510236291 A CN 201510236291A CN 104820996 B CN104820996 B CN 104820996B
- Authority
- CN
- China
- Prior art keywords
- target
- tracking
- image
- image block
- sub
- Prior art date
- Legal status
- Active
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/223—Analysis of motion using block-matching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30241—Trajectory
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a video-based adaptive-block target tracking method. Its beneficial effects are: (1) A more robust dynamic model of the tracked target is constructed: an adaptive partitioning method is designed, and a robust dynamic model of the tracked target is obtained in three steps of partitioning, occlusion judgment and adaptive fusion. (2) The weights of the image blocks to be fused are updated under complex environments: during tracking, the influence of the background on the target template is taken into account, and the weights of the image blocks to be fused are updated with an improved weight computation method to ensure tracking accuracy. (3) The preliminary tracking result of the moving target is corrected: the preliminary result is corrected by SURF feature matching, improving the tracking precision of the algorithm. The method handles target occlusion well, locates the target accurately under environmental influence, and can also adaptively change the tracking window size.
Description
Technical Field
The invention relates to a video-based adaptive-block target tracking method.
Background
Video target tracking is an important research direction in the field of computer vision. It integrates knowledge from multiple fields such as image processing, pattern recognition and automatic control, and is widely applied in video surveillance, vehicle navigation, robot navigation and the like.
The Mean Shift algorithm is a typical region-based target tracking algorithm. It models the target with a weighted histogram, applies a first-order Taylor approximation to the similarity function between the target and candidate targets, and finally performs gradient optimization to obtain a position vector in Mean Shift form, so that each iteration moves the target toward the position of highest similarity. Because the target model retains little spatial information, accumulated errors can make tracking inaccurate; in particular, when the target is occluded or the illumination changes, the target easily drifts or is even lost.
Partitioning the target into blocks and then tracking with the Mean Shift algorithm has great advantages, because the unoccluded parts of the target still carry a large amount of target information, and the target can be tracked by fusing that information. Considering the influence of the surrounding environment and of the target itself, the color features of the unoccluded image blocks alone are still not sufficient, and the fusion of target color features with other features has been studied extensively.
The SURF (Speeded Up Robust Features) descriptor is scale-invariant and robust to rotation, scale change and brightness change.
Disclosure of Invention
To address these problems, the invention provides a video-based adaptive-block target tracking method that not only handles target occlusion well, but also locates the target accurately under environmental influence, and can adaptively change the size of the tracking window.
In order to achieve the technical purpose and achieve the technical effect, the invention is realized by the following technical scheme:
a target tracking method based on video self-adaptive block division is characterized by comprising the following steps:
s01: acquiring a video stream and converting the video stream into an image frame sequence;
s02: reading an image and selecting a target template;
s03: initializing a target template:
03a) Partition the target template into k sub-image blocks in two modes: horizontally into k sub-image blocks, and vertically into k sub-image blocks;
03b) Establish a histogram model vector for each sub-image block, compute the SURF (Speeded Up Robust Features) feature points of the selected target, extract a feature histogram of the background surrounding the target, and compute the center position and size of each sub-image block;
03c) Initialize each sub-image block weight λ^(k) to 1/k;
s04: reading the next frame image to track the selected moving target, wherein the initial target position is the target position of the previous frame image, and the specific tracking is as follows:
04a) Compute a similarity measure function between the candidate target and the target template, and obtain the center position y of the target in the current frame;
04b) Update each sub-image block weight λ^(k) and judge whether the target is occluded, wherein,
in the formula: α is a value used to indicate how similar the image block is to the background; the other two quantities are, respectively, the similarity of the kth sub-image block to the target template and the similarity of the kth sub-image block to the background;
04c) During tracking, use the weight of each sub-image block to determine which image blocks are occluded, and adaptively select, of the two partition modes, the one with the fewer occluded image blocks;
04d) Fuse the valid image blocks with a fusion formula and track them with a block-wise mean shift algorithm to obtain a preliminary result;
04e) Correct the position and scale of the target using the extracted SURF feature points to obtain the final tracking result;
s05: judging whether to continue loading the image frame, if so, entering step S04 to start the tracking of the next image frame; otherwise, go to step S06;
s06: and synthesizing the obtained image frame sequence with the tracking result into a video stream for output.
The invention has the beneficial effects that:
(1) A method for constructing a more robust dynamic model of the tracked target: an adaptive partitioning method is designed, and a robust dynamic model of the tracked target is obtained in three steps of partitioning, occlusion judgment and adaptive fusion.
(2) Updating the weight of the image block to be fused under a complex environment: in the tracking process, the influence of the background on the target template is considered, and the weight of the image block to be fused is updated by using an improved weight calculation method so as to ensure the tracking accuracy.
(3) Correcting the preliminary tracking result of the moving target: and correcting the preliminary tracking result through SURF feature matching, so that the tracking precision of the algorithm is improved.
The method can well process the shielding condition of the target, can accurately position the target under the influence of the environment, and can adaptively change the size of the tracking window.
Drawings
FIG. 1 is a flow chart of a video-based adaptive chunking target tracking method of the present invention;
FIG. 2 is a moving object adaptive blocking method of the present invention.
Detailed Description
The technical solutions of the invention are described in further detail below with reference to the drawings and specific examples, so that those skilled in the art can better understand and implement the invention; the examples are not intended to limit it.
A video-based adaptive block target tracking method, as shown in fig. 1, includes the following steps:
s01: a video stream is acquired and converted into a sequence of image frames.
S02: Read an image and select the target template; for example, read the first image and manually select the target template of interest with a bounding box.
S03: initializing a target template: the method comprises the following steps of target template blocking and parameter extraction:
03a) The target template is partitioned to obtain k sub image blocks, wherein the target template is partitioned according to two modes, namely, the target template is horizontally partitioned into the k sub image blocks, and the target template is vertically partitioned into the k sub image blocks.
Preferably, k = 3. As shown in fig. 2, the target is partitioned into 3 blocks in two ways, horizontally and vertically. As a moving target becomes occluded and then reappears, the size of the occluded area changes gradually over time, and the peripheral image blocks are occluded first. The method uses the proposed image-block weight computation to determine which blocks are occluded, adaptively judges and selects the better partition method, which makes the choice of partition mode more flexible, and tracks the unoccluded image blocks using the fusion formula.
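The two partition modes of fig. 2 can be illustrated with a short sketch (NumPy slicing; the equal-size split is an assumption, since the patent does not fix the exact block boundaries):

```python
import numpy as np

def partition_template(template, k=3):
    """Split a target template into k horizontal strips and k vertical
    strips, the two partition modes described above (equal-size blocks
    are an illustrative assumption)."""
    h, w = template.shape[:2]
    horizontal = [template[i * h // k:(i + 1) * h // k, :] for i in range(k)]
    vertical = [template[:, j * w // k:(j + 1) * w // k] for j in range(k)]
    return horizontal, vertical

template = np.arange(36).reshape(6, 6)
hor, ver = partition_template(template, k=3)
print(hor[0].shape, ver[0].shape)  # (2, 6) (6, 2)
```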
03b) Establish a histogram model vector for each sub-image block, compute the SURF feature points of the selected target, extract a feature histogram of the background around the target, and compute the center position and size of each sub-image block.
03c) Initialize each sub-image block weight λ^(k) to 1/k; when k = 3, each sub-image block weight λ^(k) is initialized to 1/3.
S04: reading the next frame image to track the selected moving target, wherein the initial target position is the target position of the previous frame image, and the specific tracking is as follows:
04a) Compute a similarity measure function between the candidate target and the target template, and obtain the center position y of the target in the current frame;
Preferably, the Bhattacharyya coefficient is used to compute the similarity measure function between the candidate target and the target template:

ρ(y) ≡ ρ[p(y), q] = Σ_{u=1}^{m} √( p_u(y) · q_u )

In the formula: q = {q_u} is the color histogram of the target template and p(y) = {p_u(y)} is the color histogram of the candidate target template; ρ(·) measures the degree of similarity between the target template and the candidate region; k represents the number of blocks; u indexes the histogram bins and m is the number of bins in each feature histogram. The candidate target model is

p_u(y) = C_h · Σ_{i=1}^{n_h} k( ‖(y − x_i)/h‖² ) · δ[ b(x_i) − u ]

where C_h is the normalization coefficient ensuring Σ_{u=1}^{m} p_u(y) = 1, δ is the Kronecker delta function, h is the window bandwidth, x_i (i = 1, 2, …, n_h) are the normalized pixel positions of the candidate target model, b(x_i) maps the feature value of pixel x_i to the corresponding bin, and k(·) is the kernel function.
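As a minimal numeric illustration of the Bhattacharyya coefficient and the derived distance (histograms are assumed to be already normalized to sum to 1):

```python
import numpy as np

def bhattacharyya(p, q):
    # rho(p, q) = sum_u sqrt(p_u * q_u) for two normalized histograms
    return float(np.sum(np.sqrt(p * q)))

def bhattacharyya_distance(p, q):
    # d(p, q) = sqrt(1 - rho(p, q)); a small distance means similar histograms
    return float(np.sqrt(max(1.0 - bhattacharyya(p, q), 0.0)))

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.3, 0.5])
print(round(bhattacharyya(p, q), 4))  # 0.9325
```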
04b) Update each sub-image block weight λ^(k) and judge whether the target is occluded, wherein,
in the formula: α is a value used to indicate how similar the image block is to the background; the other two quantities are, respectively, the similarity of the kth sub-image block to the target template and the similarity of the kth sub-image block to the background.
The higher the similarity between an image block and the target template, and the lower its similarity to the background, the larger the weight value, wherein the two similarity terms are computed from the Bhattacharyya distance between the corresponding feature histograms, and σ is chosen experimentally.
04c) In the tracking process, the weight of each sub-image block is used for calculating and determining which image block is shielded, and the blocking method with the least number of shielded image blocks in the two blocking modes is selected in a self-adaptive manner.
In general, when λ^(k) < 0.25, the image block has changed significantly and can be regarded as occluded. The weights of the six sub-image blocks are computed in turn, the blocks with the smallest weights are judged to be invalid blocks, and the better partition method is selected adaptively, making the choice of partition mode more flexible.
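A sketch of the occlusion test under the stated 0.25 threshold. The patent's weight formula is not reproduced in this text, so `block_weights` below is a hypothetical stand-in that only mirrors the stated qualitative behaviour (weight rises with similarity to the target template and falls with similarity to the background); `alpha` is likewise an assumed parameter:

```python
import numpy as np

def block_weights(rho_f, rho_b, alpha=0.5):
    """Hypothetical weight update: rho_f is each block's similarity to the
    target template, rho_b its similarity to the background. Not the
    patent's exact formula."""
    raw = np.maximum(np.asarray(rho_f) - alpha * np.asarray(rho_b), 1e-6)
    return raw / raw.sum()

def occluded_blocks(weights, threshold=0.25):
    # blocks whose weight drops below 0.25 are treated as occluded
    return [k for k, w in enumerate(weights) if w < threshold]

w = block_weights([0.9, 0.8, 0.2], [0.1, 0.2, 0.9])
print(occluded_blocks(w))  # [2]
```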
04d) Fuse the valid image blocks (i.e., the unoccluded blocks) with a fusion formula and track them with a block-wise mean shift algorithm to obtain a preliminary result.
The fusion formula is:
in the formula: λ^(k) is the weight of each image block, and ρ^(k) is the similarity measure function between each block of the target template and the candidate target, expressed by the Bhattacharyya coefficient; y is the center position of the candidate target. The estimate ŷ of the candidate target center is obtained from the per-block mean shift estimates

ŷ^(k) = [ Σ_{i=1}^{N^(k)} x_i · w_i · g( ‖(y_0 − x_i)/h^(k)‖² ) ] / [ Σ_{i=1}^{N^(k)} w_i · g( ‖(y_0 − x_i)/h^(k)‖² ) ]

where Δy^(k) is the displacement of the center of the kth image block from the center of the whole candidate target, N^(k) and h^(k) are the number of pixels and the bandwidth of the kth block, y_0 is the initial position of the target in the current frame, g(·) is the negative derivative of the kernel k(·), x_i are the pixel coordinates, and w_i are the standard mean shift sample weights derived from the histograms.
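A simplified stand-in for the fusion step: each unoccluded block k contributes a whole-target center estimate, corrected by its fixed offset Δy^(k) from the target center, and the estimates are averaged with the weights λ^(k). This is an illustrative assumption, not the patent's exact fusion formula:

```python
import numpy as np

def fuse_block_estimates(block_centers, block_offsets, weights):
    """Illustrative fusion: each block's tracked center minus its fixed
    offset gives a whole-target center estimate; average with weights."""
    c = np.asarray(block_centers, dtype=float)
    d = np.asarray(block_offsets, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # renormalize over the valid blocks
    return (w[:, None] * (c - d)).sum(axis=0)

y = fuse_block_estimates([[12.0, 10.0], [10.0, 14.0]],
                         [[2.0, 0.0], [0.0, 4.0]],
                         [0.5, 0.5])
print(y)  # [10. 10.]
```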
04e) And correcting the position and the scale of the target according to the extracted SURF characteristic points to obtain a final tracking result.
The SURF descriptor is scale-invariant and robust to rotation, scale change and brightness change. The method applies this idea to target tracking, describing the target more effectively, locating it accurately, and adaptively changing the size of the tracking window. The feature points matched between two adjacent frames satisfy:

(x_i', y_i')^T = (k_x · x_i + Δx, k_y · y_i + Δy)^T

In the formula: (x_i', y_i')^T and (x_i, y_i)^T are the positions of the two matched sets of feature points; (Δx, Δy)^T is the relative translation of the image; k_x and k_y are the scale factors in the horizontal and vertical directions. The expression is fitted with the random sample consensus (RANSAC) algorithm to eliminate the influence of feature-point mismatches.
When SURF feature matching is fused with block tracking, the feature points of the unoccluded image blocks are matched by nearest-neighbour matching, with Euclidean distance as the similarity measure. From the feature points matched across adjacent frames, an affine transformation of the target between the two frames is computed, and this matrix is used to correct the position and size of the target (e.g., a pedestrian).
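The matching-and-correction step can be sketched as follows, using brute-force nearest-neighbour matching on descriptor Euclidean distance and a least-squares fit of the translation (Δx, Δy) and scales (k_x, k_y). SURF extraction and the RANSAC outlier-rejection step are omitted for brevity; all names are illustrative:

```python
import numpy as np

def match_and_fit(pts_prev, desc_prev, pts_cur, desc_cur):
    """Nearest-neighbour descriptor matching, then least-squares fit of
    x' = kx*x + dx and y' = ky*y + dy over the matched point pairs."""
    # pairwise Euclidean distances between descriptor sets
    dists = np.linalg.norm(desc_prev[:, None, :] - desc_cur[None, :, :], axis=2)
    nn = dists.argmin(axis=1)             # index of each point's nearest match
    src, dst = pts_prev, pts_cur[nn]
    ones = np.ones(len(src))
    # fit each axis independently: [x 1] @ [kx dx]^T = x'
    kx, dx = np.linalg.lstsq(np.c_[src[:, 0], ones], dst[:, 0], rcond=None)[0]
    ky, dy = np.linalg.lstsq(np.c_[src[:, 1], ones], dst[:, 1], rcond=None)[0]
    return kx, ky, dx, dy
```

A production version would replace the brute-force matcher with a k-d tree and reject mismatches with RANSAC before fitting.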
S05: judging whether to continue loading the image frame, if so, entering step S04 to start the tracking of the next image frame; otherwise, go to step S06;
s06: and synthesizing the obtained image frame sequence with the tracking result into a video stream for output.
The invention has the following advantages. First, an adaptive partitioning method is designed, and a robust dynamic model of the tracked target is obtained in three steps of partitioning, occlusion judgment and adaptive fusion. Second, during tracking, the influence of the background on the target template is taken into account, and the weights of the image blocks to be fused are updated with an improved weight computation method to ensure tracking accuracy. Finally, the preliminary tracking result is corrected by SURF feature matching, improving the tracking precision of the algorithm. The invention can be widely applied in the field of video surveillance, works well for tracking both pedestrians and vehicles, and improves monitoring accuracy.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention, and all equivalent structures or equivalent processes performed by the present invention or directly or indirectly applied to other related technical fields are also included in the scope of the present invention.
Claims (2)
1. A target tracking method based on video self-adaptive block division is characterized by comprising the following steps:
s01: acquiring a video stream and converting the video stream into an image frame sequence;
s02: reading an image and selecting a target template;
s03: initializing a target template:
03a) Partition the target template into k sub-image blocks in two modes: horizontally into k sub-image blocks, and vertically into k sub-image blocks;
03b) Establish a histogram model vector for each sub-image block, compute the SURF (Speeded Up Robust Features) feature points of the selected target, extract a feature histogram of the background surrounding the target, and compute the center position and size of each sub-image block;
03c) Initialize each sub-image block weight λ^(k) to 1/k;
s04: reading the next frame image to track the selected moving target, wherein the initial target position is the target position of the previous frame image, and the specific tracking is as follows:
04a) Compute a similarity measure function between the candidate target and the target template, and obtain the center position y of the target in the current frame;
04b) Update each sub-image block weight λ^(k) and judge whether the target is occluded, wherein,
in the formula: α is a value used to indicate how similar the image block is to the background; the other two quantities are, respectively, the similarity of the kth sub-image block to the target template and the similarity of the kth sub-image block to the background;
04c) During tracking, use the weight of each sub-image block to determine which image blocks are occluded, and adaptively select, of the two partition modes, the one with the fewer occluded image blocks;
04d) Fuse the valid image blocks with a fusion formula and track them with a block-wise mean shift algorithm to obtain a preliminary result;
04e) Correct the position and scale of the target using the extracted SURF feature points to obtain the final tracking result;
s05: judging whether to continue loading the image frame, if so, entering step S04 to start the tracking of the next image frame; otherwise, go to step S06;
s06: synthesizing the obtained image frame sequence with the tracking result into a video stream for output;
calculating the similarity measure function between the candidate target and the target template by the Bhattacharyya coefficient:
ρ(y) = Σ_{u=1}^{m} √( p_u(y) · q_u )
in the formula: q_u is the color histogram of the target template and p_u(y) is the color histogram of the candidate target template; k represents the number of blocks; u and m are set values of the Bhattacharyya coefficient calculation formula, with m the number of bins in each feature histogram; p(y) denotes the candidate target model and q the color histogram model of the target;
when λ^(k) < 0.25, the image block is judged to have changed significantly and is regarded as occluded.
2. The method of claim 1, wherein k =3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510236291.9A CN104820996B (en) | 2015-05-11 | 2015-05-11 | A kind of method for tracking target of the adaptive piecemeal based on video |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510236291.9A CN104820996B (en) | 2015-05-11 | 2015-05-11 | A kind of method for tracking target of the adaptive piecemeal based on video |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104820996A CN104820996A (en) | 2015-08-05 |
CN104820996B true CN104820996B (en) | 2018-04-03 |
Family
ID=53731281
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510236291.9A Active CN104820996B (en) | 2015-05-11 | 2015-05-11 | A kind of method for tracking target of the adaptive piecemeal based on video |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104820996B (en) |
Families Citing this family (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106560840B (en) * | 2015-09-30 | 2019-08-13 | 腾讯科技(深圳)有限公司 | A kind of image information identifying processing method and device |
CN105678330B (en) * | 2016-01-05 | 2019-02-05 | 北京环境特性研究所 | A kind of histogram matching based on Gauss weighting |
CN106097383A (en) * | 2016-05-30 | 2016-11-09 | 海信集团有限公司 | A kind of method for tracking target for occlusion issue and equipment |
CN106408591B (en) * | 2016-09-09 | 2019-04-05 | 南京航空航天大学 | A kind of anti-method for tracking target blocked |
CN106815860B (en) * | 2017-01-17 | 2019-11-29 | 湖南优象科技有限公司 | A kind of method for tracking target based on orderly comparison feature |
CN107330384A (en) * | 2017-06-19 | 2017-11-07 | 北京协同创新研究院 | The method and device of motion target tracking in a kind of video |
CN109919970A (en) * | 2017-12-12 | 2019-06-21 | 武汉盛捷达电力科技有限责任公司 | Based on a kind of improved Vision Tracking of MeanShift principle |
CN108447080B (en) * | 2018-03-02 | 2023-05-23 | 哈尔滨工业大学深圳研究生院 | Target tracking method, system and storage medium based on hierarchical data association and convolutional neural network |
CN109146918B (en) * | 2018-06-11 | 2022-04-22 | 西安电子科技大学 | Self-adaptive related target positioning method based on block |
CN109118514B (en) * | 2018-06-11 | 2022-07-15 | 西安电子科技大学 | Target tracking method |
CN109118521A (en) * | 2018-08-06 | 2019-01-01 | 中国电子科技集团公司第二十八研究所 | A kind of motion target tracking method based on characteristic similarity study |
CN108960213A (en) * | 2018-08-16 | 2018-12-07 | Oppo广东移动通信有限公司 | Method for tracking target, device, storage medium and terminal |
CN109389031B (en) * | 2018-08-27 | 2021-12-03 | 浙江大丰实业股份有限公司 | Automatic positioning mechanism for performance personnel |
CN110070511B (en) * | 2019-04-30 | 2022-01-28 | 北京市商汤科技开发有限公司 | Image processing method and device, electronic device and storage medium |
CN110428445B (en) * | 2019-06-26 | 2023-06-27 | 西安电子科技大学 | Block tracking method and device, equipment and storage medium thereof |
CN114359335A (en) * | 2020-09-30 | 2022-04-15 | 华为技术有限公司 | Target tracking method and electronic equipment |
CN112381053B (en) * | 2020-12-01 | 2021-11-19 | 连云港豪瑞生物技术有限公司 | Environment-friendly monitoring system with image tracking function |
CN116091536A (en) * | 2021-10-29 | 2023-05-09 | 中移(成都)信息通信科技有限公司 | Tracking target shielding judging method, device, equipment and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101867699A (en) * | 2010-05-25 | 2010-10-20 | 中国科学技术大学 | Real-time tracking method of nonspecific target based on partitioning |
CN103903280A (en) * | 2014-03-28 | 2014-07-02 | 哈尔滨工程大学 | Subblock weight Mean-Shift tracking method with improved level set target extraction |
CN104392465A (en) * | 2014-11-13 | 2015-03-04 | 南京航空航天大学 | Multi-core target tracking method based on D-S evidence theory information integration |
-
2015
- 2015-05-11 CN CN201510236291.9A patent/CN104820996B/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101867699A (en) * | 2010-05-25 | 2010-10-20 | 中国科学技术大学 | Real-time tracking method of nonspecific target based on partitioning |
CN103903280A (en) * | 2014-03-28 | 2014-07-02 | 哈尔滨工程大学 | Subblock weight Mean-Shift tracking method with improved level set target extraction |
CN104392465A (en) * | 2014-11-13 | 2015-03-04 | 南京航空航天大学 | Multi-core target tracking method based on D-S evidence theory information integration |
Non-Patent Citations (2)
Title |
---|
Target tracking based on improved Mean Shift and SURF; Bao Xu et al.; Computer Engineering and Applications; 2013-07-09; Vol. 49, No. 21; Sections 2-3 * |
Multi-kernel fusion for block-based target tracking; Image and Signal Processing; 2014-10-31; Sections 3-4 * |
Also Published As
Publication number | Publication date |
---|---|
CN104820996A (en) | 2015-08-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104820996B (en) | A kind of method for tracking target of the adaptive piecemeal based on video | |
CN107833270B (en) | Real-time object three-dimensional reconstruction method based on depth camera | |
CN107230218B (en) | Method and apparatus for generating confidence measures for estimates derived from images captured by vehicle-mounted cameras | |
US10762643B2 (en) | Method for evaluating image data of a vehicle camera | |
EP2538242B1 (en) | Depth measurement quality enhancement. | |
US9811742B2 (en) | Vehicle-surroundings recognition device | |
US8385630B2 (en) | System and method of processing stereo images | |
JP4974975B2 (en) | Method and system for locating an object in an image | |
CN107463890B (en) | A kind of Foregut fermenters and tracking based on monocular forward sight camera | |
US20110205338A1 (en) | Apparatus for estimating position of mobile robot and method thereof | |
CN110472553B (en) | Target tracking method, computing device and medium for fusion of image and laser point cloud | |
JP2017526082A (en) | Non-transitory computer-readable medium encoded with computer program code for causing a motion estimation method, a moving body, and a processor to execute the motion estimation method | |
CN109961417B (en) | Image processing method, image processing apparatus, and mobile apparatus control method | |
Hanek et al. | The contracting curve density algorithm: Fitting parametric curve models to images using local self-adapting separation criteria | |
CN111144213A (en) | Object detection method and related equipment | |
CN110363165B (en) | Multi-target tracking method and device based on TSK fuzzy system and storage medium | |
CN110349188B (en) | Multi-target tracking method, device and storage medium based on TSK fuzzy model | |
CN105447881A (en) | Doppler-based segmentation and optical flow in radar images | |
CN115063447A (en) | Target animal motion tracking method based on video sequence and related equipment | |
WO2015180758A1 (en) | Determining scale of three dimensional information | |
CN105574892A (en) | Doppler-based segmentation and optical flow in radar images | |
CN116977671A (en) | Target tracking method, device, equipment and storage medium based on image space positioning | |
JP2005062910A (en) | Image recognition device | |
Brockers | Cooperative stereo matching with color-based adaptive local support | |
CN113723432B (en) | Intelligent identification and positioning tracking method and system based on deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
EXSB | Decision made by sipo to initiate substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |