CN111369597B - Particle filter target tracking method based on multi-feature fusion - Google Patents

Particle filter target tracking method based on multi-feature fusion

Info

Publication number
CN111369597B
CN111369597B (granted publication of application CN202010155371.2A)
Authority
CN
China
Prior art keywords
particle
target
feature
histogram
weight
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010155371.2A
Other languages
Chinese (zh)
Other versions
CN111369597A (en)
Inventor
黄成
刘子淇
姚文杰
魏家豪
刘振光
罗涛
张永
王力立
徐志良
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology
Priority to CN202010155371.2A
Publication of CN111369597A
Application granted
Publication of CN111369597B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G06T 7/277: Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/70: Denoising; Smoothing
    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/248: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments, involving reference images or patches
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; Image sequence
    • G06T 2207/10024: Color image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a particle filter target tracking method based on multi-feature fusion. The method comprises the following steps: collecting video images and filtering them; marking the tracking target with a rectangular frame in the initial frame and calculating the edge histogram, texture histogram and depth histogram of the target template; updating the particle states with a second-order autoregressive model and obtaining the feature histograms of each particle; calculating the similarity between the candidate and target templates, obtaining the discrimination of each single feature from the particle position mean, standard deviation and overall position mean under that feature, and adaptively adjusting the fusion weights; determining the particle weights at the current moment by combining the multi-feature fusion observation model with the particle weights at the previous moment; and sorting the particle weights, counting the number of low-weight particles, comparing it with a threshold, correcting the window size and determining the state of the tracking target. By combining edge, texture and depth features, the invention achieves more accurate and continuous tracking of the target.

Description

Particle filter target tracking method based on multi-feature fusion
Technical Field
The invention belongs to the technical field of moving target tracking, and particularly relates to a particle filter target tracking method based on multi-feature fusion.
Background
The moving target tracking technology is an important research content in the field of computer vision, and relates to multiple disciplines such as image processing, pattern recognition, artificial intelligence, artificial neural networks and the like. The target tracking technology has high practicability and wide application range, greatly improves the level of automatic control in the fields of artificial intelligence, unmanned driving, medical treatment and the like, and plays an increasingly important role in the fields of military and civil use. Currently, the development trend of target tracking technology in the field of computer vision is mainly represented by the fusion of scene information and target state, the fusion of multi-dimensional and multi-level information, the fusion of deep learning and online learning, and the fusion of multiple sensors.
In target tracking, given the initial state of the tracking target, the moving target is extracted from the subsequent image sequence, and its behavior is understood and described from the extracted motion information so that the target is finally identified and tracked. Target tracking technology can continuously locate a moving object in a video sequence, obtain its motion trajectory and analyze the characteristics of the target motion; the performance of a tracking algorithm is mainly measured in terms of robustness, accuracy and real-time performance. Current algorithms are limited to specific environments or target conditions and lack generality, so their overall performance still needs to be improved, and tracking and identification face many challenges in complex scenes involving target occlusion, illumination change, target appearance change and the like.
Target tracking algorithms based on filtering theory estimate the target state in the video image to realize tracking. First a target motion model is built, then the real-time motion of the target is predicted by the model, and finally tracking is realized by estimating and correcting the hypothesis of the target observation. Common methods include the Kalman filter, the extended Kalman filter and the particle filter. The Kalman filter can only be used when the target motion is linear and is strongly affected by uncertainty in the background environment, whereas the particle filter has unique advantages in parameter estimation and state filtering for nonlinear, non-Gaussian systems and is widely applied in the target tracking field.
A typical target model system includes an appearance model, a motion model and a search strategy for finding the location of the target in the current frame. The appearance model consists of a target representation model and a statistical model, and in visual target tracking an efficient and stable appearance model is essential for robust tracking. The traditional particle filter tracking algorithm uses a single RGB color histogram as its probability model, but in practice target scenes rarely satisfy the ideal condition that the target and background colors differ clearly: the target and background colors may be similar, the illumination may change markedly, the target may be occluded, or the camera may shake. Moreover, a single RGB color feature does not express the geometric structure of the target, so when the target is partially occluded or the camera moves during imaging, the target is easily lost. Describing the target with multiple features in such complex scenes can therefore improve the accuracy and stability of tracking. In recent years many scholars have proposed new target feature selection methods and feature fusion rules for particle filter tracking in different scenes. Considering the differences between feature descriptions, color and edge features have been fused within a particle filter tracking framework with good results, but because the anti-occlusion capability of color and edge features is weak, the target is easily lost when it is occluded to a large extent. Alternatively, sparse structures have been used to express texture features, which are then embedded in a particle filter tracking framework; such algorithms effectively avoid the poor robustness caused by illumination change, but they require a sufficiently large number of initial particles, and too many particles easily reduce the timeliness of the algorithm.
Disclosure of Invention
The invention aims to provide a particle filter target tracking method based on multi-feature fusion which, by fusing edge, texture and depth features, achieves more accurate and more continuous tracking of a target, adjusts the tracking window in real time and locates the target accurately.
The technical solution for realizing the purpose of the invention is as follows: a particle filter target tracking method based on multi-feature fusion comprises the following steps:
step 1, image acquisition: acquiring an image by using a camera, and performing filtering operation on the image to remove noise to obtain a video image sequence required by tracking;
step 2, initialization: manually selecting a rectangular frame to determine a tracking target in an initial frame of a video image, establishing a corresponding target template, setting the number of sampling particles, initializing the particle state, and calculating an edge feature histogram, a texture feature histogram and a depth feature histogram of the target template;
step 3, updating the particle state: predicting according to a state transition equation, updating the particle state to obtain a new particle set, establishing a candidate template, and calculating an edge feature histogram, a texture feature histogram and a depth feature histogram of each particle;
step 4, feature fusion strategy: respectively calculating the Bhattacharyya distance between the feature histogram of each particle in step 3 and the histogram of the target template to obtain the observation likelihood function of each single feature, sorting the similarity between the candidate templates and the target template from high to low to obtain the position mean, standard deviation and overall position mean for the edge, texture and depth features, calculating the discrimination of each feature, dynamically and adaptively updating the fusion weight of each feature according to a distribution rule, and finally obtaining the multi-feature fusion observation model according to an additive fusion strategy;
step 5, target state estimation: according to the multi-feature fusion observation model in the step 4, combining the particle weight of the previous moment to calculate the particle weight of the current moment, normalizing, and determining the target state and the position information by using the obtained particle weight of the current moment and a weighting criterion;
step 6, adjusting the tracking window: based on the target rectangular frame calibrated in step 2, calculating the size of the rectangular frame in the image at the current moment according to the similarity between the candidate template and the target template, counting the number of particles whose weight is smaller than the weight threshold T_p, comparing this number with a preset threshold, and, when the number of low-weight particles is larger than the preset threshold, correcting the window size according to an adjustment formula to obtain the tracking rectangular window at the current moment;
step 7, resampling: calculating the number of effective particles, comparing the number of effective particles with the size of an effective sampling scale, discarding particles with small weight, reserving particles with large weight, and generating a new particle set;
and 8, repeating the step 3 to the step 7, and continuously tracking the next frame of image.
Further, the initialization of step 2: in an initial frame of a video image, manually selecting a rectangular frame to determine a tracking target, establishing a corresponding target template, setting the number of sampling particles, initializing particle states, and calculating an edge feature histogram, a texture feature histogram and a depth feature histogram of the target template, wherein the method specifically comprises the following steps:
step 2.1, manually selecting a rectangular frame T = [x, y, width, height] to determine the tracking target, wherein x and y are respectively the horizontal and vertical coordinates of the center of the rectangular frame, width is the width of the rectangular frame and height is its height; setting the number of sampling particles N and initializing the particle states, the fusion weights and the particle weights (the corresponding expressions are given as formula images in the original);
Step 2.2, calculating the gradient of the gray image target region using the Sobel operator to obtain the edge feature histogram q_e(u) of the target template; extracting texture features using the LBP operator to obtain the texture feature histogram q_t(u) of the target template; counting the distance from the region corresponding to each pixel of the depth image to the camera to obtain the depth feature histogram q_d(u) of the target template.
Further, the particle state update of step 3: predicting according to a state transition equation, updating the particle state to obtain a new particle set, establishing a candidate template, and calculating an edge feature histogram, a texture feature histogram and a depth feature histogram of each particle, wherein the details are as follows:
step 3.1, using the second-order autoregressive model X_k = A·X_{k-1} + B·X_{k-2} + C·N(0, Σ) to predict the particles and establish candidate templates, where X_k is the predicted current particle state, X_{k-1} and X_{k-2} are the particle states at times k-1 and k-2, A and B are coefficient matrices, and N(0, Σ) is Gaussian noise with mean 0 and standard deviation 1;
step 3.2, calculating the gradient of the gray image target region using the Sobel operator to obtain the edge feature histogram p_e(u) of each particle; extracting texture features using the LBP operator to obtain the particle texture feature histogram p_t(u); counting the distance from the region corresponding to each pixel of the depth image to the camera to obtain the depth feature histogram p_d(u) of each particle.
Further, the feature fusion strategy of step 4: respectively calculating the Bhattacharyya distance between the feature histogram of each particle in step 3 and the target template histogram to obtain the observation likelihood function of each single feature, sorting the similarity between the candidate templates and the target template from high to low to obtain the position mean, standard deviation and overall position mean for the edge, texture and depth features, calculating the discrimination of each feature, dynamically and adaptively updating the fusion weight of each feature according to a distribution rule, and finally obtaining the multi-feature fusion observation model according to an additive fusion strategy, wherein the specific steps are as follows:
step 4.1, using the Bhattacharyya distance, calculating the similarity ρ_c and the histogram distance d_c between the target template feature histogram q_c(u) and the particle feature histogram p_c(u), where c ∈ {e, t, d}; the observation likelihood functions of the edge feature, texture feature and depth feature of the i-th particle are then obtained (their expressions are given as formula images in the original);
step 4.2, sorting the similarities of the N particles under the edge, texture and depth features from high to low, and respectively calculating the position means μ_e, μ_t, μ_d and the position standard deviations σ_e, σ_t, σ_d of the particle sets under the edge, texture and depth features, as well as the overall position mean μ_s of the particle set; the discrimination coefficient λ_c' of each feature is then set from these quantities (its expression is given as a formula image in the original), and the normalized edge, texture and depth discriminations are λ_e, λ_t and λ_d;
step 4.3, respectively setting the fusion weight of each feature at time k as:
α_k = τ·(λ_e)_k + (1-τ)·α_{k-1}
β_k = τ·(λ_t)_k + (1-τ)·β_{k-1}
γ_k = τ·(λ_d)_k + (1-τ)·γ_{k-1}
where (λ_e)_k, (λ_t)_k and (λ_d)_k are the edge, texture and depth feature discriminations at time k, α_{k-1}, β_{k-1} and γ_{k-1} are the edge, texture and depth fusion weights at time k-1, and τ is the weight adjustment coefficient;
step 4.4, in view of the incompleteness and uncertainty of any single feature in expressing the target, obtaining the multi-feature fusion observation model p(z_k|x_k^i) according to the additive fusion strategy; the fusion formula is:
p(z_k|x_k^i) = α_k·p(z_k^e|x_k^i) + β_k·p(z_k^t|x_k^i) + γ_k·p(z_k^d|x_k^i)
further, the tracking window adjustment of step 6: based on the target rectangular frame calibrated in step 2, calculating the size of the rectangular frame in the image at the current moment according to the similarity between the candidate template and the target template, counting the number of particles whose weight is smaller than the weight threshold T_p, comparing this number with a preset threshold, and, when the number of low-weight particles is larger than the set threshold, correcting the window size according to an adjustment formula to obtain the tracking rectangular window at the current moment, wherein the specific steps are as follows:
step 6.1, sorting the particles by weight and counting the number N_d of particles whose weight is less than the weight threshold T_p; setting the window particle threshold to N_w and comparing N_d with N_w: if N_d < N_w, the target window size is kept unchanged, i.e. width_k = width_{k-1}, height_k = height_{k-1}; if N_d ≥ N_w, the window size is adjusted;
step 6.2, the window size adjustment formula is:
width_k = η × width_{k-1}
height_k = η × height_{k-1}
where η = d / d_pre, d is the mean Euclidean distance from the particles to the center of the moving target at the current moment, and d_pre is the mean Euclidean distance from the particles to the target center at the previous moment.
Compared with the prior art, the invention has the remarkable advantages that:
(1) the target is subjected to multi-feature description, and meanwhile, the depth feature representing the distance characteristic is introduced, so that the accuracy and the integrity of the tracking target extraction are ensured, and the problems of position change and target scale change of the target are solved;
(2) when multi-feature fusion is carried out, the position mean value, the standard deviation and the overall position mean value of single features and the discrimination of each feature are calculated, the fusion weight is dynamically updated, and the self-adaptive capacity of the feature template is improved;
(3) when the number of particles with small weight exceeds a threshold value, the size of a tracking window is adjusted by setting the length and width variable of the rectangular frame, so that background interference is effectively avoided.
Drawings
FIG. 1 is a schematic flow chart of a particle filter target tracking method based on multi-feature fusion according to the present invention.
FIG. 2 is a flow chart of the feature fusion algorithm of the present invention.
Fig. 3 is a flow chart of the window adaptive algorithm of the present invention.
Fig. 4 is an example of an initial frame image of a video and a corresponding target grayscale depth map, where (a) is an original map and (b) is a depth map.
Fig. 5 is a simulation effect diagram in the embodiment, in which (a) to (d) are tracking effect diagrams of 19 th, 80 th, 132 th and 181 th frames of a video, respectively.
Detailed Description
A particle filter target tracking method based on multi-feature fusion comprises the following steps:
step 1, image acquisition: acquiring an image by using a camera, and performing filtering operation on the image to remove noise to obtain a video image sequence required by tracking;
step 2, initialization: manually selecting a rectangular frame to determine a tracking target in an initial frame of a video image, establishing a corresponding target template, setting the number of sampling particles, initializing the particle state, and calculating an edge feature histogram, a texture feature histogram and a depth feature histogram of the target template;
step 3, updating the particle state: predicting according to a state transition equation, updating the particle state to obtain a new particle set, establishing a candidate template, and calculating an edge feature histogram, a texture feature histogram and a depth feature histogram of each particle;
step 4, feature fusion strategy: respectively calculating the Bhattacharyya distance between the feature histogram of each particle in step 3 and the histogram of the target template to obtain the observation likelihood function of each single feature, sorting the similarity between the candidate templates and the target template from high to low to obtain the position mean, standard deviation and overall position mean for the edge, texture and depth features, calculating the discrimination of each feature, dynamically and adaptively updating the fusion weight of each feature according to a distribution rule, and finally obtaining the multi-feature fusion observation model according to an additive fusion strategy;
step 5, target state estimation: according to the multi-feature fusion observation model in the step 4, combining the particle weight of the previous moment to calculate the particle weight of the current moment, normalizing, and determining the target state and the position information by using the obtained particle weight of the current moment and a weighting criterion;
step 6, adjusting the tracking window: based on the target rectangular frame calibrated in step 2, calculating the size of the rectangular frame in the image at the current moment according to the similarity between the candidate template and the target template, counting the number of particles whose weight is smaller than the weight threshold T_p, comparing this number with a preset threshold, and, when the number of low-weight particles is larger than the preset threshold, correcting the window size according to an adjustment formula to obtain the tracking rectangular window at the current moment;
and 7, resampling: calculating the number of effective particles, comparing the number of effective particles with the size of an effective sampling scale, discarding particles with small weight, reserving particles with large weight, and generating a new particle set;
and 8, repeating the step 3 to the step 7, and continuously tracking the next frame of image.
Further, the initialization of step 2: in an initial frame of a video image, manually selecting a rectangular frame to determine a tracking target, establishing a corresponding target template, setting the number of sampling particles, initializing the particle state, and calculating an edge feature histogram, a texture feature histogram and a depth feature histogram of the target template, wherein the specific steps are as follows:
step 2.1, manually selecting a rectangular frame T = [x, y, width, height] to determine the tracking target, wherein x and y are respectively the horizontal and vertical coordinates of the center of the rectangular frame, width is the width of the rectangular frame and height is its height; setting the number of sampling particles N and initializing the particle states, the fusion weights and the particle weights (the corresponding expressions are given as formula images in the original);
Step 2.2, calculating the gradient of the gray image target region using the Sobel operator to obtain the edge feature histogram q_e(u) of the target template; extracting texture features using the LBP operator to obtain the texture feature histogram q_t(u) of the target template; counting the distance from the region corresponding to each pixel of the depth image to the camera to obtain the depth feature histogram q_d(u) of the target template.
Further, the particle state update of step 3: predicting according to a state transition equation, updating the particle state to obtain a new particle set, establishing a candidate template, and calculating an edge feature histogram, a texture feature histogram and a depth feature histogram of each particle, wherein the details are as follows:
step 3.1, using the second-order autoregressive model X_k = A·X_{k-1} + B·X_{k-2} + C·N(0, Σ) to predict the particles and establish candidate templates, where X_k is the predicted current particle state, X_{k-1} and X_{k-2} are the particle states at times k-1 and k-2, A and B are coefficient matrices, and N(0, Σ) is Gaussian noise with mean 0 and standard deviation 1;
step 3.2, calculating the gradient of the gray image target region using the Sobel operator to obtain the edge feature histogram p_e(u) of each particle; extracting texture features using the LBP operator to obtain the particle texture feature histogram p_t(u); counting the distance from the region corresponding to each pixel of the depth image to the camera to obtain the depth feature histogram p_d(u) of each particle.
Further, the feature fusion strategy of step 4: respectively calculating the Bhattacharyya distance between the feature histogram of each particle in step 3 and the target template histogram to obtain the observation likelihood function of each single feature, sorting the similarity between the candidate templates and the target template from high to low to obtain the position mean, standard deviation and overall position mean for the edge, texture and depth features, calculating the discrimination of each feature, dynamically and adaptively updating the fusion weight of each feature according to a distribution rule, and finally obtaining the multi-feature fusion observation model according to an additive fusion strategy, wherein the specific steps are as follows:
step 4.1, using the Bhattacharyya distance, calculating the similarity ρ_c and the histogram distance d_c between the target template feature histogram q_c(u) and the particle feature histogram p_c(u), where c ∈ {e, t, d}; the observation likelihood functions of the edge feature, texture feature and depth feature of the i-th particle are then obtained (their expressions are given as formula images in the original);
step 4.2, sorting the similarities of the N particles under the edge, texture and depth features from high to low, and respectively calculating the position means μ_e, μ_t, μ_d and the position standard deviations σ_e, σ_t, σ_d of the particle sets under the edge, texture and depth features, as well as the overall position mean μ_s of the particle set; the discrimination coefficient λ_c' of each feature is then set from these quantities (its expression is given as a formula image in the original), and the normalized edge, texture and depth discriminations are λ_e, λ_t and λ_d;
step 4.3, respectively setting the fusion weight of each feature at time k as:
α_k = τ·(λ_e)_k + (1-τ)·α_{k-1}
β_k = τ·(λ_t)_k + (1-τ)·β_{k-1}
γ_k = τ·(λ_d)_k + (1-τ)·γ_{k-1}
where (λ_e)_k, (λ_t)_k and (λ_d)_k are the edge, texture and depth feature discriminations at time k, α_{k-1}, β_{k-1} and γ_{k-1} are the edge, texture and depth fusion weights at time k-1, and τ is the weight adjustment coefficient;
and 4.4, aiming at the incompleteness and uncertainty of the single feature on the target expression, obtaining a multi-feature fusion observation model according to an additive fusion strategy
Figure GDA0003688308320000084
The fusion formula is as follows:
Figure GDA0003688308320000085
further, the tracking window adjustment in step 6: based on the calibrated target rectangular frame in the step 2, calculating the size of the rectangular frame of the image at the current moment according to the similarity degree of the candidate template and the target template, and counting the particle weight smaller than the weight threshold value T p Comparing the number of the particles with a preset threshold, when the number of the particles with small weight is larger than the preset threshold, correcting the window size according to an adjusting formula, and obtaining a tracking rectangular window at the current moment, wherein the specific steps are as follows:
step 6.1, sorting the particles by weight and counting the number N_d of particles whose weight is less than the weight threshold T_p; setting the window particle threshold to N_w and comparing N_d with N_w: if N_d < N_w, the target window size is kept unchanged, i.e. width_k = width_{k-1}, height_k = height_{k-1}; if N_d ≥ N_w, the window size is adjusted;
step 6.2, the window size adjustment formula is:
width_k = η × width_{k-1}
height_k = η × height_{k-1}
where η = d / d_pre, d is the mean Euclidean distance from the particles to the center of the moving target at the current moment, and d_pre is the mean Euclidean distance from the particles to the target center at the previous moment.
The invention is described in further detail below with reference to the figures and the specific embodiments.
Examples
With reference to fig. 1, the invention relates to a particle filter target tracking method based on multi-feature fusion, which comprises the following steps:
step 1, performing mean-filter denoising on the acquired images to obtain the video sequence required for tracking;
step 2, initialization: in an initial frame of a video image, manually selecting a rectangular frame to determine a tracking target, establishing a corresponding target template, setting the number of sampling particles, initializing the particle state, and calculating an edge feature histogram, a texture feature histogram and a depth feature histogram of the target template, wherein the specific steps are as follows:
Step 2.1, clicking the left mouse button and calibrating the rectangular frame T = [x, y, width, height] in real time; when the mouse release is detected, the tracking target frame is determined. Here x and y are the coordinates of the center of the rectangular frame, width is the width of the rectangular frame and height is its height. Set the number of sampling particles N and initialize the particle states, the fusion weights and the particle weights (the corresponding expressions are given as formula images in the original).
Step 2.2, establishing histograms to represent the feature probability distribution models: calculating the gradient of the gray image target region with the Sobel operator to obtain the edge feature histogram q_e(u); extracting texture features with the LBP operator, the texture feature histogram of the target being q_t(u); and counting the distance from the region corresponding to each pixel of the depth image to the camera to obtain the depth feature histogram q_d(u).
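As a concrete illustration of step 2.2, the sketch below builds the three feature histograms with OpenCV and NumPy. The bin counts, the basic 8-neighbour LBP variant and the depth range are illustrative assumptions; the patent does not specify them.

```python
import cv2
import numpy as np

def edge_histogram(gray_roi, bins=16):
    """Edge feature histogram: Sobel gradient magnitude over the target region."""
    gx = cv2.Sobel(gray_roi, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray_roi, cv2.CV_32F, 0, 1, ksize=3)
    mag = cv2.magnitude(gx, gy)
    hist, _ = np.histogram(mag, bins=bins, range=(0.0, float(mag.max()) + 1e-6))
    return hist / (hist.sum() + 1e-12)

def lbp_histogram(gray_roi, bins=256):
    """Texture feature histogram from a basic 8-neighbour LBP code."""
    g = gray_roi.astype(np.int32)
    center = g[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    lbp = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(shifts):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        lbp |= (nb >= center).astype(np.int32) << bit
    hist, _ = np.histogram(lbp, bins=bins, range=(0, bins))
    return hist / (hist.sum() + 1e-12)

def depth_histogram(depth_roi, bins=16, max_depth=10.0):
    """Depth feature histogram: distance of each pixel's region to the camera (range assumed)."""
    hist, _ = np.histogram(depth_roi, bins=bins, range=(0.0, max_depth))
    return hist / (hist.sum() + 1e-12)
```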
Step 3, updating the particle state: predicting according to a state transition equation, updating particle states to obtain a new particle set, and calculating an edge feature histogram, a texture feature histogram and a depth feature histogram of each particle, wherein the method specifically comprises the following steps:
step 3.1, use second order autoregressive model X k =AX k-1 +BX k-2 + CN (0, Σ) predicts the particle and establishes a candidate template, where X k For the predicted current particle state, X k-1 And X k-2 Respectively representing the particle states at the k-1 and k-2 times, a and B are coefficient matrices, N (0, Σ) is gaussian noise having a mean value of 0 and a standard deviation of 1.
Step 3.2, calculating, by the same method as step 2.2, the edge feature histogram p_e(u), the texture feature histogram p_t(u) and the depth feature histogram p_d(u) of each particle.
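A minimal sketch of the step 3.1 state transition, assuming the state is just the particle position (x, y). The coefficient matrices A = 2I and B = -I give the common constant-velocity reading of a second-order autoregressive model; they and the noise level are illustrative assumptions, not the patent's actual values.

```python
import numpy as np

def propagate(X_km1, X_km2, A, B, C, sigma=1.0):
    """Second-order AR transition: X_k = A*X_{k-1} + B*X_{k-2} + C*N(0, sigma)."""
    noise = np.random.normal(0.0, sigma, size=X_km1.shape)
    return X_km1 @ A.T + X_km2 @ B.T + noise @ C.T

N, dim = 200, 2                                   # 200 particles, (x, y) state
A, B, C = 2.0 * np.eye(dim), -1.0 * np.eye(dim), np.eye(dim)
X_km2 = np.random.uniform(0, 100, size=(N, dim))  # states at time k-2
X_km1 = X_km2 + np.random.normal(0, 1, (N, dim))  # states at time k-1
X_k = propagate(X_km1, X_km2, A, B, C)            # predicted states at time k
```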
Step 4, feature fusion strategy: respectively calculating the Bhattacharyya distance between the feature histogram of each particle in step 3 and the target template histogram to obtain the observation likelihood function of each single feature, sorting the similarities between the candidate template histograms and the target template histogram to obtain the position mean, position standard deviation and overall position mean for each feature, calculating the discrimination of each feature, dynamically and adaptively updating the fusion weight of each feature according to the distribution rule, and finally obtaining the multi-feature fusion observation model according to the additive fusion strategy. With reference to FIG. 2, the specific steps are as follows:
Step 4.1, using the Bhattacharyya distance, calculating the similarity ρ_c and the histogram distance d_c between the target template feature histogram q_c(u) and the particle feature histogram p_c(u), where c ∈ {e, t, d} (the expressions for ρ_c and d_c are given as formula images in the original).
The single-feature observation likelihood function of the i-th particle is then obtained from d_c (its expression is given as a formula image in the original).
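The similarity, distance and likelihood of step 4.1 appear only as formula images in the source. The sketch below uses the forms commonly paired with the Bhattacharyya distance in histogram-based particle filters (ρ as the Bhattacharyya coefficient, d = sqrt(1-ρ), and a Gaussian likelihood in d); the variance parameter sigma is an assumption.

```python
import numpy as np

def bhattacharyya(q, p):
    """Similarity rho_c (Bhattacharyya coefficient) and histogram distance d_c
    between a normalized target-template histogram q and a particle histogram p."""
    rho = float(np.sum(np.sqrt(q * p)))
    d = np.sqrt(max(1.0 - rho, 0.0))
    return rho, d

def single_feature_likelihood(d, sigma=0.2):
    """Observation likelihood of one feature for one particle, assumed Gaussian in d_c."""
    return float(np.exp(-(d ** 2) / (2.0 * sigma ** 2)))

# Usage for one particle, repeated for c in {edge, texture, depth}:
# rho_e, d_e = bhattacharyya(q_e, p_e);  lik_e = single_feature_likelihood(d_e)
```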
Step 4.2, sorting the similarities of the N particles under the edge, texture and depth features from high to low, and respectively calculating the position means μ_e, μ_t, μ_d and the position standard deviations σ_e, σ_t, σ_d of the particle sets under each feature, as well as the overall position mean μ_s of the particle set.
The standard deviation under a single feature and the deviation of its mean from the overall mean both indicate how far that feature deviates from the whole. The larger the deviation, the smaller the weight that should be given to that feature during fusion; conversely, the smaller the deviation, the closer the feature's similarity measure is to the whole and the larger the weight it should receive. The discrimination coefficient λ_c' of each feature is set from μ_c, σ_c and μ_s (its expression is given as a formula image in the original), and normalization yields the edge, texture and depth discriminations λ_e, λ_t and λ_d.
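The discrimination coefficient of step 4.2 is given only as a formula image. The sketch below follows the stated rule — a larger spread and a larger deviation of a feature's position mean from the overall mean yield a smaller discrimination — and normalizes by the sum; the exact functional form is an assumption.

```python
import numpy as np

def discriminations(mu, sigma, mu_s, eps=1e-6):
    """Normalized discriminations lambda_e, lambda_t, lambda_d.

    mu    : {'e': mu_e, 't': mu_t, 'd': mu_d} per-feature position means (2-D vectors)
    sigma : {'e': ..., 't': ..., 'd': ...} per-feature position standard deviations
    mu_s  : overall position mean of the particle set
    """
    raw = {c: 1.0 / (np.linalg.norm(np.asarray(mu[c]) - np.asarray(mu_s)) + sigma[c] + eps)
           for c in mu}                            # larger deviation -> smaller raw discrimination
    total = sum(raw.values())
    return {c: v / total for c, v in raw.items()}  # the three values sum to 1
```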
Step 4.3, to avoid being too sensitive to scene changes, the fusion weight of each feature at time k is:
α_k = τ·(λ_e)_k + (1-τ)·α_{k-1}
β_k = τ·(λ_t)_k + (1-τ)·β_{k-1}
γ_k = τ·(λ_d)_k + (1-τ)·γ_{k-1}
where (λ_e)_k, (λ_t)_k and (λ_d)_k are the feature discriminations at time k, α_{k-1}, β_{k-1} and γ_{k-1} are the fusion weights at time k-1, and τ is the weight adjustment coefficient (τ = 0.5 in this embodiment).
Step 4.4, in view of the incompleteness and uncertainty of any single feature in expressing the target, the multi-feature fusion observation model p(z_k|x_k^i) is obtained according to the additive fusion strategy; the fusion formula is:
p(z_k|x_k^i) = α_k·p(z_k^e|x_k^i) + β_k·p(z_k^t|x_k^i) + γ_k·p(z_k^d|x_k^i)
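A sketch of the weight update of step 4.3 and the additive fusion of step 4.4, using τ = 0.5 as in this embodiment. The additive form of the fused likelihood is the reading implied by the text; the image formula may differ in notation, and the uniform initialization of the weights is an assumption.

```python
def update_fusion_weights(lam_k, weights_km1, tau=0.5):
    """alpha_k = tau*(lambda_e)_k + (1-tau)*alpha_{k-1}, and likewise for beta_k, gamma_k."""
    return {c: tau * lam_k[c] + (1.0 - tau) * weights_km1[c] for c in lam_k}

def fused_likelihood(weights_k, lik):
    """Additive multi-feature observation model for one particle:
    p(z_k|x_k^i) = alpha_k*p_e + beta_k*p_t + gamma_k*p_d."""
    return sum(weights_k[c] * lik[c] for c in weights_k)

# weights_km1 = {'e': 1/3, 't': 1/3, 'd': 1/3}   # assumed uniform initialization
```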
step 5, target state estimation: calculating the weight of the particle at the current moment according to the observation probability density function in the step 4 and the weight of the particle at the last moment
Figure GDA0003688308320000114
And normalizing to obtain the weight of the ith particle
Figure GDA0003688308320000115
Using the obtained weight of the particle, determining the target state and position information by using a weighting criterion
Figure GDA0003688308320000116
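Step 5 is the standard sequential-importance-sampling update; the sketch below combines the previous weights with the fused likelihoods, normalizes, and forms the weighted state estimate.

```python
import numpy as np

def estimate_target_state(weights_km1, fused_lik, particles):
    """weights_km1 : (N,) particle weights at time k-1
    fused_lik   : (N,) fused likelihoods p(z_k|x_k^i)
    particles   : (N, dim) particle states at time k"""
    w = weights_km1 * fused_lik                   # combine with the previous weights
    w = w / (w.sum() + 1e-12)                     # normalize
    state = (w[:, None] * particles).sum(axis=0)  # weighted estimate of the target state
    return state, w
```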
Step 6, adjusting the tracking window: based on the rectangular frame of step 2, calculating the size of the rectangular frame in the image at the current moment according to the similarity between the particles and the target template; counting the number of low-weight particles and comparing it with the preset threshold, and correcting the window size according to the adjustment formula when too many particles have small weights. With reference to FIG. 3, the details are as follows:
Step 6.1, sorting the particles by weight and counting the number N_d of particles whose weight is less than the weight threshold T_p (T_p = 0.018 in this embodiment). Setting the window particle threshold to N_w and comparing N_d with N_w: if N_d < N_w, the target window size is kept unchanged, i.e. width_k = width_{k-1}, height_k = height_{k-1}; when N_d ≥ N_w, the window size needs to be adjusted.
Step 6.2, when adjustment is required, the window size adjustment formula is:
width_k = η × width_{k-1}
height_k = η × height_{k-1}
Let the mean Euclidean distance from the particles to the center of the moving target at the current moment be d, and the mean Euclidean distance from the particles to the target center at the previous moment be d_pre; the ratio of the two is the adjustment factor, i.e. η = d / d_pre.
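A sketch of the window adaptation of steps 6.1-6.2, with T_p = 0.018 as in this embodiment. The window particle threshold N_w is not given numerically in the text, so half the particle count is used here purely as a placeholder.

```python
import numpy as np

def adapt_window(particles, weights, center, d_pre, width, height, T_p=0.018, N_w=None):
    """Rescale the tracking window only when too many particles have low weight."""
    N = len(weights)
    if N_w is None:
        N_w = N // 2                                        # assumed value, not from the patent
    N_d = int(np.sum(weights < T_p))                        # low-weight particle count
    d = float(np.mean(np.linalg.norm(particles - center, axis=1)))
    if N_d >= N_w:                                          # otherwise keep width/height unchanged
        eta = d / (d_pre + 1e-12)
        width, height = eta * width, eta * height
    return width, height, d                                 # d becomes d_pre for the next frame
```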
Step 7, resampling: calculating the number of effective particles and comparing it with the set threshold; keeping the total number of particles unchanged, discarding low-weight particles and retaining high-weight particles, and re-determining the state of the tracking target.
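For step 7, the effective particle number is usually computed as N_eff = 1/Σ(w_i²); the sketch below uses that criterion with systematic resampling, which keeps the total particle count unchanged while dropping low-weight particles. The specific resampling scheme is an assumption, as the patent only describes the outcome.

```python
import numpy as np

def resample(particles, weights, ratio=0.5):
    """Systematic resampling, triggered when N_eff falls below ratio*N."""
    N = len(weights)
    n_eff = 1.0 / np.sum(weights ** 2)                 # effective number of particles
    if n_eff >= ratio * N:                             # enough diversity, keep the current set
        return particles, weights
    positions = (np.arange(N) + np.random.uniform()) / N
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), N - 1)
    return particles[idx].copy(), np.full(N, 1.0 / N)  # high-weight particles are duplicated
```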
And 8, repeating the step 3 to the step 7, and continuously tracking the next frame of image.
Connect the camera to a PC (personal computer), calibrate the camera coordinate system and adjust the camera's detection distance. After installation, start shooting and transmit the video images to the computer processing system; the processing platform is Visual Studio 2015 + OpenCV 3.1.0, and the size of a single video image is 752 × 480.
Fig. 4 shows an initial frame of the video and the corresponding target grayscale depth map in the embodiment, where (a) in fig. 4 is the original image and (b) in fig. 4 is the depth image. Fig. 5 shows the simulation results, where (a) to (d) in fig. 5 are the tracking results for frames 19, 80, 132 and 181 of the video, respectively. As can be seen, the invention describes the target with multiple features and introduces the depth feature, which represents distance, helping to handle target position changes and target scale changes. During multi-feature fusion, the position mean, standard deviation and overall position mean of each single feature and the discrimination of each feature are calculated and the fusion weights are updated dynamically, which improves the adaptive capability of the feature template compared with fixed fusion weights that cannot distinguish the evaluation capability of each feature. In addition, when the number of low-weight particles exceeds the threshold, the tracking window size is adjusted by scaling the length and width of the rectangular frame, which effectively avoids background interference.

Claims (5)

1. A particle filter target tracking method based on multi-feature fusion is characterized by comprising the following steps:
step 1, image acquisition: acquiring an image by using a camera, and performing filtering operation on the image to remove noise to obtain a video image sequence required by tracking;
step 2, initialization: manually selecting a rectangular frame to determine a tracking target in an initial frame of a video image, establishing a corresponding target template, setting the number of sampling particles, initializing the particle state, and calculating an edge feature histogram, a texture feature histogram and a depth feature histogram of the target template;
step 3, updating the particle state: predicting according to a state transition equation, updating the particle state to obtain a new particle set, establishing a candidate template, and calculating an edge feature histogram, a texture feature histogram and a depth feature histogram of each particle;
step 4, feature fusion strategy: respectively calculating the Bhattacharyya distance between the feature histogram of each particle in step 3 and the histogram of the target template to obtain the observation likelihood function of each single feature, sorting the similarity between the candidate templates and the target template from high to low to obtain the position mean, standard deviation and overall position mean for the edge, texture and depth features, calculating the discrimination of each feature, dynamically and adaptively updating the fusion weight of each feature according to a distribution rule, and finally obtaining the multi-feature fusion observation model according to an additive fusion strategy;
step 5, target state estimation: according to the multi-feature fusion observation model in the step 4, combining the particle weight of the previous moment to calculate the particle weight of the current moment, normalizing, and determining the target state and the position information by using the obtained particle weight of the current moment and a weighting criterion;
step 6, adjusting the tracking window: based on the target rectangular frame calibrated in step 2, calculating the size of the rectangular frame in the image at the current moment according to the similarity between the candidate template and the target template, counting the number of particles whose weight is smaller than the weight threshold T_p, comparing this number with a preset threshold, and, when the number of low-weight particles is larger than the preset threshold, correcting the window size according to an adjustment formula to obtain the tracking rectangular window at the current moment;
and 7, resampling: calculating the number of effective particles, comparing the number of effective particles with the size of an effective sampling scale, discarding particles with small weight, reserving particles with large weight, and generating a new particle set;
and 8, repeating the step 3 to the step 7, and continuously tracking the next frame of image.
2. The multi-feature fusion based particle filter target tracking method according to claim 1, wherein the initialization of step 2 is: in an initial frame of a video image, manually selecting a rectangular frame to determine a tracking target, establishing a corresponding target template, setting the number of sampling particles, initializing the particle state, and calculating an edge feature histogram, a texture feature histogram and a depth feature histogram of the target template, wherein the specific steps are as follows:
step 2.1, manually selecting a rectangular frame T = [x, y, width, height] to determine the tracking target, wherein x and y are respectively the horizontal and vertical coordinates of the center of the rectangular frame, width is the width of the rectangular frame and height is its height; setting the number of sampling particles N and initializing the particle states, the fusion weights and the particle weights (the corresponding expressions are given as formula images in the original);
Step 2.2, calculating the gradient of the gray image target region using the Sobel operator to obtain the edge feature histogram q_e(u) of the target template; extracting texture features using the LBP operator to obtain the texture feature histogram q_t(u) of the target template; counting the distance from the region corresponding to each pixel of the depth image to the camera to obtain the depth feature histogram q_d(u) of the target template.
3. The multi-feature fusion based particle filter target tracking method according to claim 1, wherein the particle state update of step 3: predicting according to a state transition equation, updating the particle state to obtain a new particle set, establishing a candidate template, and calculating an edge feature histogram, a texture feature histogram and a depth feature histogram of each particle, wherein the details are as follows:
step 3.1, using the second-order autoregressive model X_k = A·X_{k-1} + B·X_{k-2} + C·N(0, Σ) to predict the particles and establish candidate templates, where X_k is the predicted current particle state, X_{k-1} and X_{k-2} are the particle states at times k-1 and k-2, A and B are coefficient matrices, and N(0, Σ) is Gaussian noise with mean 0 and standard deviation 1;
step 3.2, calculating the gradient of the gray image target region using the Sobel operator to obtain the edge feature histogram p_e(u) of each particle; extracting texture features using the LBP operator to obtain the particle texture feature histogram p_t(u); counting the distance from the region corresponding to each pixel of the depth image to the camera to obtain the depth feature histogram p_d(u) of each particle.
4. The particle filter target tracking method based on multi-feature fusion as claimed in claim 1, wherein the feature fusion strategy of step 4 is: respectively calculating the Bhattacharyya distance between the feature histogram of each particle in step 3 and the target template histogram to obtain the observation likelihood function of each single feature, sorting the similarity between the candidate templates and the target template from high to low to obtain the position mean, standard deviation and overall position mean for the edge, texture and depth features, calculating the discrimination of each feature, dynamically and adaptively updating the fusion weight of each feature according to a distribution rule, and finally obtaining the multi-feature fusion observation model according to an additive fusion strategy, wherein the specific steps are as follows:
step 4.1, using the Bhattacharyya distance, calculating the similarity ρ_c and the histogram distance d_c between the target template feature histogram q_c(u) and the particle feature histogram p_c(u), where c ∈ {e, t, d}; the observation likelihood functions of the edge feature, texture feature and depth feature of the i-th particle are then obtained (their expressions are given as formula images in the original);
step 4.2, sorting the similarities of the N particles under the edge, texture and depth features from high to low, and respectively calculating the position means μ_e, μ_t, μ_d and the position standard deviations σ_e, σ_t, σ_d of the particle sets under the edge, texture and depth features, as well as the overall position mean μ_s of the particle set; the discrimination coefficient λ_c' of each feature is then set from these quantities (its expression is given as a formula image in the original), and the normalized edge, texture and depth discriminations are λ_e, λ_t and λ_d;
step 4.3, respectively setting the fusion weight of each feature at time k as:
α_k = τ·(λ_e)_k + (1-τ)·α_{k-1}
β_k = τ·(λ_t)_k + (1-τ)·β_{k-1}
γ_k = τ·(λ_d)_k + (1-τ)·γ_{k-1}
where (λ_e)_k, (λ_t)_k and (λ_d)_k are the edge, texture and depth feature discriminations at time k, α_{k-1}, β_{k-1} and γ_{k-1} are the edge, texture and depth fusion weights at time k-1, and τ is the weight adjustment coefficient;
step 4.4, in view of the incompleteness and uncertainty of any single feature in expressing the target, obtaining the multi-feature fusion observation model p(z_k|x_k^i) according to the additive fusion strategy; the fusion formula is:
p(z_k|x_k^i) = α_k·p(z_k^e|x_k^i) + β_k·p(z_k^t|x_k^i) + γ_k·p(z_k^d|x_k^i)
5. the multi-feature fusion based particle filter target tracking method according to claim 1, wherein the tracking window adjustment of step 6: based on the calibrated target rectangular frame in the step 2, calculating the size of the rectangular frame of the image at the current moment according to the similarity degree of the candidate template and the target template, and counting the particle weight smaller than the weight threshold value T p Comparing the number of the particles with a preset threshold, when the number of the particles with small weight is larger than the preset threshold, correcting the window size according to an adjusting formula, and obtaining a tracking rectangular window at the current moment, wherein the specific steps are as follows:
step 6.1, sorting the particles by weight and counting the number N_d of particles whose weight is less than the weight threshold T_p; setting the window particle threshold to N_w and comparing N_d with N_w: if N_d < N_w, the target window size is kept unchanged, i.e. width_k = width_{k-1}, height_k = height_{k-1}; if N_d ≥ N_w, the window size is adjusted;
step 6.2, the window size adjustment formula is:
width_k = η × width_{k-1}
height_k = η × height_{k-1}
where η = d / d_pre, d is the mean Euclidean distance from the particles to the center of the moving target at the current moment, and d_pre is the mean Euclidean distance from the particles to the target center at the previous moment.
CN202010155371.2A 2020-03-09 2020-03-09 Particle filter target tracking method based on multi-feature fusion Active CN111369597B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010155371.2A CN111369597B (en) 2020-03-09 2020-03-09 Particle filter target tracking method based on multi-feature fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010155371.2A CN111369597B (en) 2020-03-09 2020-03-09 Particle filter target tracking method based on multi-feature fusion

Publications (2)

Publication Number Publication Date
CN111369597A CN111369597A (en) 2020-07-03
CN111369597B true CN111369597B (en) 2022-08-12

Family

ID=71210367

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010155371.2A Active CN111369597B (en) 2020-03-09 2020-03-09 Particle filter target tracking method based on multi-feature fusion

Country Status (1)

Country Link
CN (1) CN111369597B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112184762A (en) * 2020-09-05 2021-01-05 天津城建大学 Gray wolf optimization particle filter target tracking algorithm based on feature fusion
CN112070840B (en) * 2020-09-11 2023-10-10 上海幻维数码创意科技股份有限公司 Human body space positioning and tracking method fused by multiple depth cameras
CN111931754B (en) * 2020-10-14 2021-01-15 深圳市瑞图生物技术有限公司 Method and system for identifying target object in sample and readable storage medium
CN112348853B (en) * 2020-11-04 2022-09-23 哈尔滨工业大学(威海) Particle filter tracking method based on infrared saliency feature fusion
CN112486197B (en) * 2020-12-05 2022-10-21 青岛民航凯亚系统集成有限公司 Fusion positioning tracking control method based on self-adaptive power selection of multi-source image
CN112288777A (en) * 2020-12-16 2021-01-29 西安长地空天科技有限公司 Method for tracking laser breakpoint by using particle filtering algorithm
CN112765492B (en) * 2020-12-31 2021-08-10 浙江省方大标准信息有限公司 Sequencing method for inspection and detection mechanism
CN113436313B (en) * 2021-05-24 2022-11-29 南开大学 Three-dimensional reconstruction error active correction method based on unmanned aerial vehicle
CN115903904A (en) * 2022-12-02 2023-04-04 亿航智能设备(广州)有限公司 Method, device and equipment for automatically tracking target by unmanned aerial vehicle cradle head

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104036526A (en) * 2014-06-26 2014-09-10 广东工业大学 Gray target tracking method based on self-adaptive window
KR101953626B1 (en) * 2017-06-29 2019-03-06 서강대학교산학협력단 Method of tracking an object based on multiple histograms and system using the method

Also Published As

Publication number Publication date
CN111369597A (en) 2020-07-03

Similar Documents

Publication Publication Date Title
CN111369597B (en) Particle filter target tracking method based on multi-feature fusion
CN110837768B (en) Online detection and identification method for rare animal protection
CN107909081B (en) Method for quickly acquiring and quickly calibrating image data set in deep learning
CN108960047B (en) Face duplication removing method in video monitoring based on depth secondary tree
CN110728697A (en) Infrared dim target detection tracking method based on convolutional neural network
CN109902565B (en) Multi-feature fusion human behavior recognition method
CN106157330B (en) Visual tracking method based on target joint appearance model
CN109685045A (en) A kind of Moving Targets Based on Video Streams tracking and system
US20220128358A1 (en) Smart Sensor Based System and Method for Automatic Measurement of Water Level and Water Flow Velocity and Prediction
CN102346854A (en) Method and device for carrying out detection on foreground objects
CN111739064B (en) Method for tracking target in video, storage device and control device
CN108681711A (en) A kind of natural landmark extracting method towards mobile robot
CN108765463B (en) Moving target detection method combining region extraction and improved textural features
CN110555868A (en) method for detecting small moving target under complex ground background
CN112184762A (en) Gray wolf optimization particle filter target tracking algorithm based on feature fusion
CN113822352A (en) Infrared dim target detection method based on multi-feature fusion
CN104036526A (en) Gray target tracking method based on self-adaptive window
CN111199245A (en) Rape pest identification method
CN116051970A (en) Identification method for overlapping fish targets based on improved yolov5 model
Widyantara et al. Gamma correction-based image enhancement and canny edge detection for shoreline extraction from coastal imagery
Zhang et al. A coarse-to-fine leaf detection approach based on leaf skeleton identification and joint segmentation
CN113379789A (en) Moving target tracking method in complex environment
CN110349184B (en) Multi-pedestrian tracking method based on iterative filtering and observation discrimination
Cretu et al. Deformable object segmentation and contour tracking in image sequences using unsupervised networks
CN116777956A (en) Moving target screening method based on multi-scale track management

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant