CN111369597A - Particle filter target tracking method based on multi-feature fusion - Google Patents

Particle filter target tracking method based on multi-feature fusion

Info

Publication number
CN111369597A
Authority
CN
China
Prior art keywords
particle
target
feature
histogram
particles
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010155371.2A
Other languages
Chinese (zh)
Other versions
CN111369597B (en)
Inventor
黄成
刘子淇
姚文杰
魏家豪
刘振光
罗涛
张永
王力立
徐志良
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology
Priority to CN202010155371.2A
Publication of CN111369597A
Application granted
Publication of CN111369597B
Active legal status
Anticipated expiration legal status

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G06T 7/277: Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/70: Denoising; Smoothing
    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/248: Analysis of motion using feature-based methods, involving reference images or patches
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; Image sequence
    • G06T 2207/10024: Color image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a particle filter target tracking method based on multi-feature fusion. The method comprises the following steps: collecting a video image and carrying out filtering processing; using a rectangular frame to mark a tracking target in an initial frame, and calculating an edge histogram, a texture histogram and a depth histogram of a target template; updating the particle state by adopting a second-order autoregressive model, and obtaining a feature histogram of each particle; calculating the similarity of the two templates, obtaining the discrimination of the single feature according to the position mean, the standard deviation and the overall position mean of the particles under the single feature, and adaptively adjusting the fusion weight; determining the particle weight of the current moment by combining the observation model of multi-feature fusion and the particle weight of the previous moment; and sorting the weight of the particles, counting the number of the particles with small weight, comparing the number with a threshold value, correcting the size of a window, and determining the state of the tracking target. The invention combines the edge, texture and depth characteristics to realize more accurate and continuous tracking of the target.

Description

Particle filter target tracking method based on multi-feature fusion
Technical Field
The invention belongs to the technical field of moving target tracking, and particularly relates to a particle filter target tracking method based on multi-feature fusion.
Background
Moving target tracking is an important research topic in the field of computer vision and involves multiple disciplines such as image processing, pattern recognition, artificial intelligence and artificial neural networks. Target tracking technology is highly practical and widely applicable: it has greatly raised the level of automation in fields such as artificial intelligence, autonomous driving and medical treatment, and it plays an increasingly important role in both military and civilian applications. At present, the development trend of target tracking in computer vision is mainly reflected in the fusion of scene information with the target state, the fusion of multi-dimensional and multi-level information, the combination of deep learning with online learning, and the fusion of multiple sensors.
In target tracking, given the initial state of the tracking target, the moving target is extracted from the subsequent image sequence, and its behavior is understood and described from the extracted motion information, so that the target is finally identified and tracked. Target tracking technology can continuously locate a moving object in a video sequence, obtain its motion trajectory and analyze the characteristics of its motion; the performance of a tracking algorithm is mainly measured in terms of robustness, accuracy and real-time performance. Existing algorithms are mostly limited to specific environments or target conditions, lack generality, and their overall performance still needs to be improved; tracking and identification face many challenges in complex scenes involving target occlusion, illumination change, changes in target appearance and the like.
Target tracking algorithms based on filtering theory estimate the target state in the video image to realize tracking: a target motion model is first built, the model is then used to predict the real-time motion of the target, and tracking is finally achieved by estimating and correcting hypotheses about the target observation. Common methods include the Kalman filter, the extended Kalman filter and the particle filter. The Kalman filter can only be used when the target motion is linear and is strongly affected by uncertainty in the background environment, whereas the particle filter has unique advantages for parameter estimation and state filtering in nonlinear, non-Gaussian systems and is widely applied in the field of target tracking.
A typical tracking system includes an appearance model, a motion model and a search strategy for finding the location of the object in the current frame. The appearance model consists of a target representation model and a statistical model; in visual target tracking, an efficient and stable appearance model is essential for robust tracking. The traditional particle filter tracking algorithm adopts a single RGB color histogram as the probability model, but in practical applications the scene rarely satisfies the ideal condition that the target color differs clearly from the background: the target color may be close to the background, illumination may change markedly, the target may be occluded, or the camera may shake. Moreover, a single RGB color feature does not express the geometric structure of the target, so the target is easily lost when it is partially occluded or the camera itself is moving. Describing the target with multiple features therefore improves the accuracy and stability of tracking in complex scenes. In recent years, many researchers have proposed new target feature selection methods and feature fusion rules for particle filter tracking in different scenes. Some works fuse color and edge features within the particle filter tracking framework, taking the differences between feature descriptions into account, and obtain good tracking results; however, since color and edge features are not robust to occlusion, the target is easily lost when it is occluded to a large extent. Other works express texture features with a sparse structure and embed them into the particle filter tracking framework; this effectively avoids the poor robustness caused by illumination change, but it requires a sufficiently large number of initial particles, and too many particles easily degrade the real-time performance of the algorithm.
Disclosure of Invention
The invention aims to provide a particle filter target tracking method based on multi-feature fusion, which can realize more accurate and more continuous tracking of a target by fusing edge, texture and depth features, and can adjust a tracking window in real time and accurately position the target position.
The technical solution for realizing the purpose of the invention is as follows: a particle filter target tracking method based on multi-feature fusion comprises the following steps:
step 1, image acquisition: acquiring an image by using a camera, and performing filtering operation on the image to remove noise to obtain a video image sequence required by tracking;
step 2, initialization: manually selecting a rectangular frame to determine a tracking target in an initial frame of a video image, establishing a corresponding target template, setting the number of sampling particles, initializing the particle state, and calculating an edge feature histogram, a texture feature histogram and a depth feature histogram of the target template;
step 3, updating the particle state: predicting according to a state transition equation, updating the particle state to obtain a new particle set, establishing a candidate template, and calculating an edge feature histogram, a texture feature histogram and a depth feature histogram of each particle;
step 4, a feature fusion strategy: respectively calculating the Bhattacharyya distance between the feature histograms of each particle in step 3 and the histograms of the target template to obtain the observation likelihood function of each single feature, ranking the similarity between the candidate templates and the target template from high to low to obtain the position mean, position standard deviation and overall position mean under the edge, texture and depth features, calculating the discrimination of each feature, dynamically and adaptively updating the fusion weight of each feature according to the allocation rule, and finally obtaining the multi-feature fusion observation model according to the additive fusion strategy;
step 5, target state estimation: according to the multi-feature fusion observation model in the step 4, combining the particle weight of the previous moment to calculate the particle weight of the current moment, normalizing, and determining the target state and the position information by using the obtained particle weight of the current moment and a weighting criterion;
step 6, adjusting a tracking window: based on the calibrated target rectangular frame in step 2, calculating the size of the rectangular frame of the image at the current moment according to the degree of similarity between the candidate template and the target template, counting the number of particles whose weight is smaller than the weight threshold T_p, comparing this number with a preset threshold, and when the number of small-weight particles is larger than the preset threshold, correcting the window size according to an adjustment formula to obtain the tracking rectangular window at the current moment;
step 7, resampling: calculating the number of effective particles, comparing it with the effective sampling scale, discarding particles with small weights, retaining particles with large weights, and generating a new particle set;
step 8, repeating step 3 to step 7, and continuously tracking the next frame of image.
Further, the initialization of step 2: in an initial frame of a video image, manually selecting a rectangular frame to determine a tracking target, establishing a corresponding target template, setting the number of sampling particles, initializing the particle state, and calculating an edge feature histogram, a texture feature histogram and a depth feature histogram of the target template, wherein the specific steps are as follows:
Step 2.1, manually selecting a rectangular frame T = [x, y, width, height] to determine the tracking target, wherein x and y are respectively the horizontal and vertical coordinates of the center of the rectangular frame, width is the width of the rectangular frame, and height is the height of the rectangular frame; setting the number of sampling particles N and the initial particle states {X_0^(i)}, i = 1, …, N; initializing the fusion weights α_0, β_0 and γ_0 of the edge, texture and depth features; and initializing each particle weight to w_0^(i) = 1/N;
Step 2.2, calculating the gradient of the target region of the grayscale image by using the Sobel operator to obtain the edge feature histogram q_e(u) of the target template; extracting texture features by using the LBP operator to obtain the texture feature histogram q_t(u) of the target template; and counting the distance from the region corresponding to each pixel of the depth image to the camera to obtain the depth feature histogram q_d(u) of the target template.
Further, the particle state update of step 3: predicting according to a state transition equation, updating the particle state to obtain a new particle set, establishing a candidate template, and calculating an edge feature histogram, a texture feature histogram and a depth feature histogram of each particle, wherein the details are as follows:
Step 3.1, using the second-order autoregressive model X_k = A·X_{k-1} + B·X_{k-2} + C·N(0, Σ) to predict the particles and establish candidate templates, where X_k is the predicted current particle state, X_{k-1} and X_{k-2} respectively represent the particle states at times k-1 and k-2, A and B are coefficient matrices, C is a noise coefficient matrix, and N(0, Σ) is zero-mean Gaussian noise with covariance Σ;
Step 3.2, calculating the gradient of the target region of the grayscale image by using the Sobel operator to obtain the edge feature histogram p_e(u) of each particle; extracting texture features by using the LBP operator to obtain the particle texture feature histogram p_t(u); and counting the distance from the region corresponding to each pixel of the depth image to the camera to obtain the depth feature histogram p_d(u) of each particle.
Further, the feature fusion strategy of step 4: respectively calculating the Bhattacharyya distance between the feature histograms of each particle in step 3 and the target template histograms to obtain the observation likelihood function of each single feature, ranking the similarity between the candidate templates and the target template from high to low to obtain the position mean, position standard deviation and overall position mean under the edge, texture and depth features, calculating the discrimination of each feature, dynamically and adaptively updating the fusion weight of each feature according to the allocation rule, and finally obtaining the multi-feature fusion observation model according to the additive fusion strategy, wherein the specific steps are as follows:
Step 4.1, calculating, by means of the Bhattacharyya distance, the similarity ρ_c between the target template feature histogram q_c(u) and the particle feature histogram p_c(u) and the histogram distance d_c, where c ∈ {e, t, d}, and obtaining the observation likelihood functions p(z_k^e|x_k^(i)), p(z_k^t|x_k^(i)) and p(z_k^d|x_k^(i)) of the edge, texture and depth features of the i-th particle;
Step 4.2, ranking the similarities of the N particles under the edge, texture and depth features from high to low, and respectively calculating the position means μ_e, μ_t, μ_d of the particle sets under the edge, texture and depth features, the position standard deviations σ_e, σ_t, σ_d, and the overall position mean μ_s of the particle set; setting the discrimination coefficient λ_c' of each feature so that it decreases as the position standard deviation σ_c and the deviation |μ_c - μ_s| of that feature's position mean from the overall mean increase; the normalized edge, texture and depth discriminations are λ_e, λ_t and λ_d respectively;
Step 4.3, respectively setting the fusion weight of each feature at time k as:
α_k = τ·(λ_e)_k + (1 - τ)·α_{k-1}
β_k = τ·(λ_t)_k + (1 - τ)·β_{k-1}
γ_k = τ·(λ_d)_k + (1 - τ)·γ_{k-1}
where (λ_e)_k, (λ_t)_k and (λ_d)_k respectively represent the edge, texture and depth feature discriminations at time k, α_{k-1}, β_{k-1} and γ_{k-1} are respectively the edge, texture and depth fusion weights at time k-1, and τ is the weight adjustment coefficient;
Step 4.4, in view of the incompleteness and uncertainty of any single feature in expressing the target, obtaining the multi-feature fusion observation model p(z_k|x_k^(i)) according to the additive fusion strategy, the fusion formula being:
p(z_k|x_k^(i)) = α_k·p(z_k^e|x_k^(i)) + β_k·p(z_k^t|x_k^(i)) + γ_k·p(z_k^d|x_k^(i))
Further, the tracking window adjustment in step 6: based on the calibrated target rectangular frame in step 2, calculating the size of the rectangular frame of the image at the current moment according to the degree of similarity between the candidate template and the target template, counting the number of particles whose weight is smaller than the weight threshold T_p, comparing this number with a preset threshold, and when the number of small-weight particles is larger than the preset threshold, correcting the window size according to an adjustment formula to obtain the tracking rectangular window at the current moment, wherein the specific steps are as follows:
Step 6.1, sorting the particles by weight and counting the number N_d of particles whose weight is less than the weight threshold T_p; setting the window particle threshold to N_w and comparing N_d with N_w: if N_d < N_w, the target window size is kept unchanged, i.e. width_k = width_{k-1} and height_k = height_{k-1}; if N_d ≥ N_w, the window size is adjusted;
Step 6.2, the window size adjustment formula is:
width_k = η × width_{k-1}
height_k = η × height_{k-1}
where η = d / d_pre, d is the mean Euclidean distance from the current particles to the center of the moving target, and d_pre is the mean Euclidean distance from the particles to the target center at the previous moment.
Compared with the prior art, the invention has the remarkable advantages that:
(1) the target is subjected to multi-feature description, and meanwhile, the depth feature representing the distance characteristic is introduced, so that the accuracy and the integrity of the tracking target extraction are ensured, and the problems of position change and target scale change of the target are solved;
(2) when multi-feature fusion is carried out, the position mean value, the standard deviation and the total position mean value of the single feature and the discrimination of each feature are calculated, the fusion weight is dynamically updated, and the self-adaptive capacity of the feature template is improved;
(3) when the number of particles with small weight exceeds a threshold value, the size of a tracking window is adjusted by setting the length and width variable of the rectangular frame, so that background interference is effectively avoided.
Drawings
FIG. 1 is a schematic flow chart of a particle filter target tracking method based on multi-feature fusion according to the present invention.
FIG. 2 is a flow chart of the feature fusion algorithm of the present invention.
Fig. 3 is a flow chart of the window adaptive algorithm of the present invention.
Fig. 4 is an example of an initial frame image of a video and a corresponding target grayscale depth map, where (a) is an original map and (b) is a depth map.
Fig. 5 is a simulation effect diagram in the embodiment, in which (a) to (d) are tracking effect diagrams of 19 th, 80 th, 132 th and 181 th frames of a video, respectively.
Detailed Description
A particle filter target tracking method based on multi-feature fusion comprises the following steps:
step 1, image acquisition: acquiring an image by using a camera, and performing filtering operation on the image to remove noise to obtain a video image sequence required by tracking;
step 2, initialization: manually selecting a rectangular frame to determine a tracking target in an initial frame of a video image, establishing a corresponding target template, setting the number of sampling particles, initializing the particle state, and calculating an edge feature histogram, a texture feature histogram and a depth feature histogram of the target template;
step 3, updating the particle state: predicting according to a state transition equation, updating the particle state to obtain a new particle set, establishing a candidate template, and calculating an edge feature histogram, a texture feature histogram and a depth feature histogram of each particle;
step 4, a feature fusion strategy: respectively calculating the Bhattacharyya distance between the feature histograms of each particle in step 3 and the histograms of the target template to obtain the observation likelihood function of each single feature, ranking the similarity between the candidate templates and the target template from high to low to obtain the position mean, position standard deviation and overall position mean under the edge, texture and depth features, calculating the discrimination of each feature, dynamically and adaptively updating the fusion weight of each feature according to the allocation rule, and finally obtaining the multi-feature fusion observation model according to the additive fusion strategy;
step 5, target state estimation: according to the multi-feature fusion observation model in the step 4, combining the particle weight of the previous moment to calculate the particle weight of the current moment, normalizing, and determining the target state and the position information by using the obtained particle weight of the current moment and a weighting criterion;
step 6, adjusting a tracking window: based on the calibrated target rectangular frame in step 2, calculating the size of the rectangular frame of the image at the current moment according to the degree of similarity between the candidate template and the target template, counting the number of particles whose weight is smaller than the weight threshold T_p, comparing this number with a preset threshold, and when the number of small-weight particles is larger than the preset threshold, correcting the window size according to an adjustment formula to obtain the tracking rectangular window at the current moment;
step 7, resampling: calculating the number of effective particles, comparing it with the effective sampling scale, discarding particles with small weights, retaining particles with large weights, and generating a new particle set;
step 8, repeating step 3 to step 7, and continuously tracking the next frame of image.
Further, the initialization of step 2: in an initial frame of a video image, manually selecting a rectangular frame to determine a tracking target, establishing a corresponding target template, setting the number of sampling particles, initializing the particle state, and calculating an edge feature histogram, a texture feature histogram and a depth feature histogram of the target template, wherein the specific steps are as follows:
Step 2.1, manually selecting a rectangular frame T = [x, y, width, height] to determine the tracking target, wherein x and y are respectively the horizontal and vertical coordinates of the center of the rectangular frame, width is the width of the rectangular frame, and height is the height of the rectangular frame; setting the number of sampling particles N and the initial particle states {X_0^(i)}, i = 1, …, N; initializing the fusion weights α_0, β_0 and γ_0 of the edge, texture and depth features; and initializing each particle weight to w_0^(i) = 1/N;
Step 2.2, calculating the gradient of the target region of the grayscale image by using the Sobel operator to obtain the edge feature histogram q_e(u) of the target template; extracting texture features by using the LBP operator to obtain the texture feature histogram q_t(u) of the target template; and counting the distance from the region corresponding to each pixel of the depth image to the camera to obtain the depth feature histogram q_d(u) of the target template.
Further, the particle state update of step 3: predicting according to a state transition equation, updating the particle state to obtain a new particle set, establishing a candidate template, and calculating an edge feature histogram, a texture feature histogram and a depth feature histogram of each particle, wherein the details are as follows:
Step 3.1, using the second-order autoregressive model X_k = A·X_{k-1} + B·X_{k-2} + C·N(0, Σ) to predict the particles and establish candidate templates, where X_k is the predicted current particle state, X_{k-1} and X_{k-2} respectively represent the particle states at times k-1 and k-2, A and B are coefficient matrices, C is a noise coefficient matrix, and N(0, Σ) is zero-mean Gaussian noise with covariance Σ;
Step 3.2, calculating the gradient of the target region of the grayscale image by using the Sobel operator to obtain the edge feature histogram p_e(u) of each particle; extracting texture features by using the LBP operator to obtain the particle texture feature histogram p_t(u); and counting the distance from the region corresponding to each pixel of the depth image to the camera to obtain the depth feature histogram p_d(u) of each particle.
Further, the feature fusion strategy of step 4: respectively calculating the Bhattacharyya distance between the feature histograms of each particle in step 3 and the target template histograms to obtain the observation likelihood function of each single feature, ranking the similarity between the candidate templates and the target template from high to low to obtain the position mean, position standard deviation and overall position mean under the edge, texture and depth features, calculating the discrimination of each feature, dynamically and adaptively updating the fusion weight of each feature according to the allocation rule, and finally obtaining the multi-feature fusion observation model according to the additive fusion strategy, wherein the specific steps are as follows:
Step 4.1, calculating, by means of the Bhattacharyya distance, the similarity ρ_c between the target template feature histogram q_c(u) and the particle feature histogram p_c(u) and the histogram distance d_c, where c ∈ {e, t, d}, and obtaining the observation likelihood functions p(z_k^e|x_k^(i)), p(z_k^t|x_k^(i)) and p(z_k^d|x_k^(i)) of the edge, texture and depth features of the i-th particle;
Step 4.2, ranking the similarities of the N particles under the edge, texture and depth features from high to low, and respectively calculating the position means μ_e, μ_t, μ_d of the particle sets under the edge, texture and depth features, the position standard deviations σ_e, σ_t, σ_d, and the overall position mean μ_s of the particle set; setting the discrimination coefficient λ_c' of each feature so that it decreases as the position standard deviation σ_c and the deviation |μ_c - μ_s| of that feature's position mean from the overall mean increase; the normalized edge, texture and depth discriminations are λ_e, λ_t and λ_d respectively;
Step 4.3, respectively setting the fusion weight of each feature at time k as:
α_k = τ·(λ_e)_k + (1 - τ)·α_{k-1}
β_k = τ·(λ_t)_k + (1 - τ)·β_{k-1}
γ_k = τ·(λ_d)_k + (1 - τ)·γ_{k-1}
where (λ_e)_k, (λ_t)_k and (λ_d)_k respectively represent the edge, texture and depth feature discriminations at time k, α_{k-1}, β_{k-1} and γ_{k-1} are respectively the edge, texture and depth fusion weights at time k-1, and τ is the weight adjustment coefficient;
Step 4.4, in view of the incompleteness and uncertainty of any single feature in expressing the target, obtaining the multi-feature fusion observation model p(z_k|x_k^(i)) according to the additive fusion strategy, the fusion formula being:
p(z_k|x_k^(i)) = α_k·p(z_k^e|x_k^(i)) + β_k·p(z_k^t|x_k^(i)) + γ_k·p(z_k^d|x_k^(i))
Further, the tracking window adjustment in step 6: based on the calibrated target rectangular frame in step 2, calculating the size of the rectangular frame of the image at the current moment according to the degree of similarity between the candidate template and the target template, counting the number of particles whose weight is smaller than the weight threshold T_p, comparing this number with a preset threshold, and when the number of small-weight particles is larger than the preset threshold, correcting the window size according to an adjustment formula to obtain the tracking rectangular window at the current moment, wherein the specific steps are as follows:
Step 6.1, sorting the particles by weight and counting the number N_d of particles whose weight is less than the weight threshold T_p; setting the window particle threshold to N_w and comparing N_d with N_w: if N_d < N_w, the target window size is kept unchanged, i.e. width_k = width_{k-1} and height_k = height_{k-1}; if N_d ≥ N_w, the window size is adjusted;
Step 6.2, the window size adjustment formula is:
width_k = η × width_{k-1}
height_k = η × height_{k-1}
where η = d / d_pre, d is the mean Euclidean distance from the current particles to the center of the moving target, and d_pre is the mean Euclidean distance from the particles to the target center at the previous moment.
The invention is described in further detail below with reference to the figures and the specific embodiments.
Examples
With reference to fig. 1, the invention relates to a particle filter target tracking method based on multi-feature fusion, which comprises the following steps:
step 1, performing mean filtering operation denoising on an acquired image to obtain a video sequence required by tracking;
step 2, initialization: in an initial frame of a video image, manually selecting a rectangular frame to determine a tracking target, establishing a corresponding target template, setting the number of sampling particles, initializing the particle state, and calculating an edge feature histogram, a texture feature histogram and a depth feature histogram of the target template, wherein the specific steps are as follows:
Step 2.1, clicking the left mouse button and calibrating a rectangular frame T = [x, y, width, height] in real time; when the release of the mouse is detected, the tracking target frame is determined. Here x and y are the coordinates of the center of the rectangular frame, width is the width of the rectangular frame, and height is the height of the rectangular frame. The number of sampling particles N and the initial particle states {X_0^(i)}, i = 1, …, N, are set; the fusion weights α_0, β_0 and γ_0 of the edge, texture and depth features are initialized; and each particle weight is initialized to w_0^(i) = 1/N.
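For illustration, the initialization of step 2.1 might look as follows in Python/NumPy; the particle count, the position spread pos_std and the equal initial fusion weights are assumptions of the sketch rather than values fixed by the patent.

```python
import numpy as np

def init_particles(box, n=200, pos_std=10.0, rng=np.random.default_rng()):
    """Spread N particles around the selected box T = [x, y, width, height] and
    assign each the uniform initial weight 1/N (pos_std in pixels is an assumption)."""
    x, y, w, h = box
    particles = np.tile([float(x), float(y), float(w), float(h)], (n, 1))
    particles[:, :2] += rng.normal(0.0, pos_std, size=(n, 2))    # perturb positions only
    weights = np.full(n, 1.0 / n)
    fusion_weights = {'e': 1.0 / 3, 't': 1.0 / 3, 'd': 1.0 / 3}  # assumed equal initial weights
    return particles, weights, fusion_weights
```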
Step 2.2, establishing histograms to represent the feature probability distribution models. The gradient of the target region of the grayscale image is calculated with the Sobel operator to obtain the edge feature histogram q_e(u); texture features are extracted with the LBP operator, giving the target texture feature histogram q_t(u); and the distance from the region corresponding to each pixel of the depth image to the camera is counted to obtain the depth feature histogram q_d(u).
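A minimal sketch of the three feature histograms of step 2.2 using OpenCV and NumPy; the bin counts, the 3×3 Sobel kernel, the basic 8-neighbour LBP variant and the assumed depth range are illustrative choices, not values taken from the patent.

```python
import cv2
import numpy as np

def edge_histogram(gray_roi, bins=16):
    """Edge feature histogram: Sobel gradient magnitudes of the grayscale ROI."""
    gx = cv2.Sobel(gray_roi, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray_roi, cv2.CV_32F, 0, 1, ksize=3)
    mag = cv2.magnitude(gx, gy)
    hist, _ = np.histogram(mag, bins=bins, range=(0.0, 1500.0))  # 3x3 Sobel on 8-bit stays below ~1443
    return hist / (hist.sum() + 1e-12)

def lbp_histogram(gray_roi, bins=256):
    """Texture feature histogram: basic 8-neighbour LBP codes of the ROI."""
    g = gray_roi.astype(np.int16)
    c = g[1:-1, 1:-1]
    code = np.zeros(c.shape, dtype=np.int32)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for k, (dy, dx) in enumerate(offsets):
        nb = g[1 + dy:1 + dy + c.shape[0], 1 + dx:1 + dx + c.shape[1]]
        code += (nb >= c).astype(np.int32) << k                  # set bit k when neighbour >= centre
    hist, _ = np.histogram(code, bins=bins, range=(0, 256))
    return hist / (hist.sum() + 1e-12)

def depth_histogram(depth_roi, bins=16, max_depth=10.0):
    """Depth feature histogram: per-pixel distance to the camera (assumed range in metres)."""
    hist, _ = np.histogram(depth_roi, bins=bins, range=(0.0, max_depth))
    return hist / (hist.sum() + 1e-12)
```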
Step 3, updating the particle state: predicting according to a state transition equation, updating the particle state to obtain a new particle set, and calculating an edge feature histogram, a texture feature histogram and a depth feature histogram of each particle, wherein the specific steps are as follows:
Step 3.1, using the second-order autoregressive model X_k = A·X_{k-1} + B·X_{k-2} + C·N(0, Σ) to predict the particles and establish candidate templates, where X_k is the predicted current particle state, X_{k-1} and X_{k-2} respectively represent the particle states at times k-1 and k-2, A and B are coefficient matrices, C is a noise coefficient matrix, and N(0, Σ) is zero-mean Gaussian noise with covariance Σ.
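The second-order autoregressive prediction of step 3.1 might be sketched as follows; the coefficient matrices A = 2I and B = -I (a constant-velocity assumption) and the noise scales in C are illustrative.

```python
import numpy as np

def predict_particles(X_prev, X_prev2, A, B, C, rng=np.random.default_rng()):
    """Second-order autoregressive prediction X_k = A*X_{k-1} + B*X_{k-2} + C*N(0, I).

    X_prev, X_prev2 : (N, dim) particle states at times k-1 and k-2
    A, B, C         : (dim, dim) coefficient matrices
    """
    noise = rng.standard_normal(X_prev.shape)        # zero-mean, unit-variance Gaussian noise
    return X_prev @ A.T + X_prev2 @ B.T + noise @ C.T

# Illustrative use with a 4-dimensional state [x, y, width, height]:
N, dim = 200, 4
A, B = 2.0 * np.eye(dim), -1.0 * np.eye(dim)         # constant-velocity-like AR(2) coefficients
C = np.diag([5.0, 5.0, 1.0, 1.0])                    # assumed noise scales in pixels
X1 = np.tile([100.0, 80.0, 40.0, 60.0], (N, 1))      # states at time k-1
X0 = np.tile([98.0, 79.0, 40.0, 60.0], (N, 1))       # states at time k-2
Xk = predict_particles(X1, X0, A, B, C)
```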
Step 3.2, calculating each particle's edge feature histogram p_e(u), texture feature histogram p_t(u) and depth feature histogram p_d(u) by the same method as in step 2.2.
Step 4, feature fusion strategy: respectively calculating the Bhattacharyya distance between the feature histograms of each particle in step 3 and the target template, ranking the similarity between the candidate template histograms and the target template histograms to obtain the position mean, position standard deviation and overall position mean of each feature, calculating the discrimination of each feature, dynamically and adaptively updating the fusion weight of each feature according to the allocation rule to obtain the observation likelihood function of each single feature, and finally obtaining the multi-feature fusion observation model according to the additive fusion strategy; with reference to Fig. 2, the specific steps are as follows:
Step 4.1, calculating, by means of the Bhattacharyya distance, the similarity ρ_c between the target template feature histogram q_c(u) and the particle feature histogram p_c(u) and the histogram distance d_c, where c ∈ {e, t, d}:
ρ_c = Σ_u √( p_c(u) · q_c(u) )
d_c = √( 1 - ρ_c )
The observation likelihood function of feature c for the i-th particle is then:
p(z_k^c | x_k^(i)) ∝ exp( -d_c² / (2σ²) )
where σ is the observation noise parameter.
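A sketch of the similarity, distance and likelihood computation of step 4.1 for one feature histogram; the Gaussian bandwidth sigma is an assumed value.

```python
import numpy as np

def bhattacharyya_likelihood(p_hist, q_hist, sigma=0.2):
    """Similarity rho, histogram distance d and Gaussian observation likelihood for one
    normalized particle histogram p_hist against the template histogram q_hist."""
    rho = float(np.sum(np.sqrt(p_hist * q_hist)))   # Bhattacharyya coefficient
    d = np.sqrt(max(1.0 - rho, 0.0))                # Bhattacharyya distance
    likelihood = np.exp(-d * d / (2.0 * sigma ** 2))
    return rho, d, likelihood
```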
Step 4.2, the similarities of the N particles under the edge, texture and depth features are ranked from high to low, and the position means μ_e, μ_t, μ_d and position standard deviations σ_e, σ_t, σ_d of the particle sets under each feature are calculated, together with the overall position mean μ_s of the particle set.
The standard deviation under a single feature, and the deviation of that feature's position mean from the overall mean, both indicate the degree to which the single feature deviates from the whole: the larger the deviation, the smaller the weight that should be given to the feature during fusion; conversely, the smaller the deviation, the closer the feature's similarity measure is to the whole and the larger the weight it should receive. The discrimination coefficient λ_c' of each feature is therefore set to decrease as σ_c and |μ_c - μ_s| increase, and the normalized edge, texture and depth discriminations λ_e, λ_t and λ_d are obtained as:
λ_c = λ_c' / (λ_e' + λ_t' + λ_d'), c ∈ {e, t, d}
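The exact expression for λ_c' appears only as an image in the source, so the sketch below assumes one plausible form, λ_c' = 1 / (σ_c + |μ_c - μ_s|), which decreases with both quantities as the text requires; the use of likelihood-weighted position statistics is likewise an assumption.

```python
import numpy as np

def feature_discrimination(positions, likelihoods):
    """positions: (N, 2) particle centers; likelihoods: dict {'e', 't', 'd'} -> (N,) arrays.

    Returns normalized discriminations {c: lambda_c}, assuming
    lambda'_c = 1 / (sigma_c + |mu_c - mu_s|): a feature whose likelihood-weighted
    particle cloud is tight and close to the overall mean gets a high discrimination."""
    eps = 1e-6
    mu, sigma = {}, {}
    for c, lik in likelihoods.items():
        w = lik / (lik.sum() + eps)
        mu[c] = (w[:, None] * positions).sum(axis=0)                            # position mean
        sigma[c] = np.sqrt((w[:, None] * (positions - mu[c]) ** 2).sum(axis=0)).mean()
    mu_s = np.mean(list(mu.values()), axis=0)                                   # overall position mean
    raw = {c: 1.0 / (sigma[c] + np.linalg.norm(mu[c] - mu_s) + eps) for c in mu}
    total = sum(raw.values())
    return {c: v / total for c, v in raw.items()}
```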
Step 4.3, to avoid the tracker being overly sensitive to scene changes, the fusion weight of each feature at time k is:
α_k = τ·(λ_e)_k + (1 - τ)·α_{k-1}
β_k = τ·(λ_t)_k + (1 - τ)·β_{k-1}
γ_k = τ·(λ_d)_k + (1 - τ)·γ_{k-1}
where (λ_e)_k, (λ_t)_k and (λ_d)_k denote the feature discriminations at time k, α_{k-1}, β_{k-1} and γ_{k-1} are the fusion weights at time k-1, and τ is the weight adjustment coefficient (τ = 0.5 in this embodiment).
Step 4.4, in view of the incompleteness and uncertainty of any single feature in expressing the target, the multi-feature fusion observation model p(z_k|x_k^(i)) is obtained according to the additive fusion strategy; the fusion formula is:
p(z_k|x_k^(i)) = α_k·p(z_k^e|x_k^(i)) + β_k·p(z_k^t|x_k^(i)) + γ_k·p(z_k^d|x_k^(i))
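A sketch of the fusion-weight smoothing of step 4.3 and the additive fusion of step 4.4; τ = 0.5 follows this embodiment, while the dictionary layout of the per-feature likelihoods is an assumption of the sketch.

```python
import numpy as np

def update_fusion_weights(lam, prev, tau=0.5):
    """Exponential smoothing: w_k = tau * lambda_k + (1 - tau) * w_{k-1} per feature."""
    return {c: tau * lam[c] + (1.0 - tau) * prev[c] for c in lam}

def fused_likelihoods(weights, likelihoods):
    """Additive multi-feature observation model, evaluated for all particles at once."""
    return sum(weights[c] * np.asarray(likelihoods[c]) for c in ('e', 't', 'd'))
```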
Step 5, target state estimation: according to the observation probability density function of step 4 and the particle weights at the previous moment, the weight of the i-th particle at the current moment is calculated as w_k^(i) = w_{k-1}^(i)·p(z_k|x_k^(i)) and then normalized so that the weights sum to one. Using the obtained particle weights, the target state and position information are determined by the weighting criterion X_k = Σ_i w_k^(i)·X_k^(i).
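The weight update, normalization and weighted state estimate of step 5 might be sketched as follows.

```python
import numpy as np

def estimate_state(particles, prev_weights, fused_lik):
    """Particle weight update, normalization and weighted state estimate (step 5).

    particles    : (N, dim) predicted particle states
    prev_weights : (N,) particle weights at time k-1
    fused_lik    : (N,) fused observation likelihoods from step 4
    """
    w = prev_weights * fused_lik
    w = w / (w.sum() + 1e-12)                      # normalize so the weights sum to one
    state = (w[:, None] * particles).sum(axis=0)   # weighted mean state
    return state, w
```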
Step 6, adjusting the tracking window: based on the rectangular frame of step 2, the size of the rectangular frame of the image at the current moment is calculated according to the degree of similarity between the particles and the target template. The number of small-weight particles is counted and compared with a preset threshold; when too many particles have small weights, the window size is corrected according to the adjustment formula. With reference to Fig. 3, the specific details are as follows:
Step 6.1, the particles are sorted by weight and the number N_d of particles whose weight is less than the weight threshold T_p is counted (T_p = 0.018 in this embodiment). The window particle threshold is set to N_w, and N_d is compared with N_w: if N_d < N_w, the target window size is kept unchanged, i.e. width_k = width_{k-1} and height_k = height_{k-1}; when N_d ≥ N_w, the window size needs to be adjusted;
Step 6.2, when the window needs to be adjusted, the window size adjustment formula is:
width_k = η × width_{k-1}
height_k = η × height_{k-1}
Let d be the mean Euclidean distance from the current particles to the center of the moving target and d_pre be the mean Euclidean distance from the particles to the target center at the previous moment; the adjustment ratio is then η = d / d_pre.
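A sketch of the window adaptation of steps 6.1 and 6.2; T_p = 0.018 follows this embodiment, while the window particle threshold N_w is an assumed value.

```python
import numpy as np

def adjust_window(weights, particles_xy, center, prev_particles_xy, prev_center,
                  width_prev, height_prev, T_p=0.018, N_w=50):
    """Adaptive window update: scale the box by eta = d / d_pre when too many particles
    have small weights (T_p from the embodiment, N_w an assumed threshold)."""
    N_d = int(np.sum(weights < T_p))
    if N_d < N_w:
        return width_prev, height_prev
    d = np.linalg.norm(particles_xy - center, axis=1).mean()
    d_pre = np.linalg.norm(prev_particles_xy - prev_center, axis=1).mean()
    eta = d / (d_pre + 1e-12)
    return eta * width_prev, eta * height_prev
```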
Step 7, resampling: the number of effective particles is calculated and compared with a set threshold; particles with small weights are discarded and particles with large weights are retained while keeping the total number of particles unchanged, and the state of the tracking target is re-determined.
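The patent does not spell out the resampling scheme, so the sketch below uses systematic resampling triggered when the effective particle number N_eff = 1 / Σ_i (w_k^(i))² falls below an assumed fraction of N.

```python
import numpy as np

def resample_if_needed(particles, weights, threshold_ratio=0.5, rng=np.random.default_rng()):
    """Systematic resampling triggered when N_eff = 1 / sum(w^2) drops below a threshold."""
    N = len(weights)
    n_eff = 1.0 / np.sum(weights ** 2)
    if n_eff >= threshold_ratio * N:
        return particles, weights
    positions = (rng.random() + np.arange(N)) / N          # stratified positions in [0, 1)
    idx = np.searchsorted(np.cumsum(weights), positions)
    idx = np.clip(idx, 0, N - 1)
    return particles[idx], np.full(N, 1.0 / N)             # equal weights after resampling
```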
Step 8, repeating step 3 to step 7, and continuously tracking the next frame of image.
Shooting starts after installation is complete, and the video images are transmitted to a computer processing system; the processing platform is Visual Studio 2015 + OpenCV 3.1.0, and the size of a single video frame is 752 × 480.
Fig. 4 shows an initial frame of the video and the corresponding target grayscale depth map in the embodiment, where Fig. 4(a) is the original image and Fig. 4(b) is the depth image. Fig. 5 shows the simulation results of the embodiment, where Figs. 5(a) to (d) are the tracking results for frames 19, 80, 132 and 181 of the video, respectively. As can be seen, the invention describes the target with multiple features and introduces the depth feature, which characterizes distance, helping to handle changes in target position and target scale. During multi-feature fusion, the position mean, standard deviation and overall position mean of each single feature are calculated together with the discrimination of each feature, and the fusion weights are updated dynamically; compared with fixed fusion weights, which cannot distinguish the descriptive power of each feature, this improves the adaptive ability of the feature template. In addition, when the number of small-weight particles exceeds the threshold, the size of the tracking window is adjusted through the length and width variables of the rectangular frame, effectively avoiding background interference.

Claims (5)

1. A particle filter target tracking method based on multi-feature fusion is characterized by comprising the following steps:
step 1, image acquisition: acquiring an image by using a camera, and performing filtering operation on the image to remove noise to obtain a video image sequence required by tracking;
step 2, initialization: manually selecting a rectangular frame to determine a tracking target in an initial frame of a video image, establishing a corresponding target template, setting the number of sampling particles, initializing the particle state, and calculating an edge feature histogram, a texture feature histogram and a depth feature histogram of the target template;
step 3, updating the particle state: predicting according to a state transition equation, updating the particle state to obtain a new particle set, establishing a candidate template, and calculating an edge feature histogram, a texture feature histogram and a depth feature histogram of each particle;
step 4, a feature fusion strategy: respectively calculating the Bhattacharyya distance between the feature histograms of each particle in step 3 and the histograms of the target template to obtain the observation likelihood function of each single feature, ranking the similarity between the candidate templates and the target template from high to low to obtain the position mean, position standard deviation and overall position mean under the edge, texture and depth features, calculating the discrimination of each feature, dynamically and adaptively updating the fusion weight of each feature according to the allocation rule, and finally obtaining the multi-feature fusion observation model according to the additive fusion strategy;
step 5, target state estimation: according to the multi-feature fusion observation model in the step 4, combining the particle weight of the previous moment to calculate the particle weight of the current moment, normalizing, and determining the target state and the position information by using the obtained particle weight of the current moment and a weighting criterion;
step 6, adjusting a tracking window: based on the calibrated target rectangular frame in step 2, calculating the size of the rectangular frame of the image at the current moment according to the degree of similarity between the candidate template and the target template, counting the number of particles whose weight is smaller than the weight threshold T_p, comparing this number with a preset threshold, and when the number of small-weight particles is larger than the preset threshold, correcting the window size according to an adjustment formula to obtain the tracking rectangular window at the current moment;
step 7, resampling: calculating the number of effective particles, comparing it with the effective sampling scale, discarding particles with small weights, retaining particles with large weights, and generating a new particle set;
step 8, repeating step 3 to step 7, and continuously tracking the next frame of image.
2. The multi-feature fusion based particle filter target tracking method according to claim 1, wherein the initialization of step 2 is: in an initial frame of a video image, manually selecting a rectangular frame to determine a tracking target, establishing a corresponding target template, setting the number of sampling particles, initializing the particle state, and calculating an edge feature histogram, a texture feature histogram and a depth feature histogram of the target template, wherein the specific steps are as follows:
Step 2.1, manually selecting a rectangular frame T = [x, y, width, height] to determine the tracking target, wherein x and y are respectively the horizontal and vertical coordinates of the center of the rectangular frame, width is the width of the rectangular frame, and height is the height of the rectangular frame; setting the number of sampling particles N and the initial particle states {X_0^(i)}, i = 1, …, N; initializing the fusion weights α_0, β_0 and γ_0 of the edge, texture and depth features; and initializing each particle weight to w_0^(i) = 1/N;
Step 2.2, calculating the gradient of the target region of the grayscale image by using the Sobel operator to obtain the edge feature histogram q_e(u) of the target template; extracting texture features by using the LBP operator to obtain the texture feature histogram q_t(u) of the target template; and counting the distance from the region corresponding to each pixel of the depth image to the camera to obtain the depth feature histogram q_d(u) of the target template.
3. The multi-feature fusion based particle filter target tracking method according to claim 1, wherein the particle state update of step 3: predicting according to a state transition equation, updating the particle state to obtain a new particle set, establishing a candidate template, and calculating an edge feature histogram, a texture feature histogram and a depth feature histogram of each particle, wherein the details are as follows:
Step 3.1, using the second-order autoregressive model X_k = A·X_{k-1} + B·X_{k-2} + C·N(0, Σ) to predict the particles and establish candidate templates, where X_k is the predicted current particle state, X_{k-1} and X_{k-2} respectively represent the particle states at times k-1 and k-2, A and B are coefficient matrices, C is a noise coefficient matrix, and N(0, Σ) is zero-mean Gaussian noise with covariance Σ;
Step 3.2, calculating the gradient of the target region of the grayscale image by using the Sobel operator to obtain the edge feature histogram p_e(u) of each particle; extracting texture features by using the LBP operator to obtain the particle texture feature histogram p_t(u); and counting the distance from the region corresponding to each pixel of the depth image to the camera to obtain the depth feature histogram p_d(u) of each particle.
4. The multi-feature fusion based particle filter target tracking method according to claim 1, wherein the feature fusion strategy of step 4 is as follows: respectively calculating the Bhattacharyya distance between the feature histograms of each particle in step 3 and the target template histograms to obtain the observation likelihood function of each single feature, ranking the similarity between the candidate templates and the target template from high to low to obtain the position mean, position standard deviation and overall position mean under the edge, texture and depth features, calculating the discrimination of each feature, dynamically and adaptively updating the fusion weight of each feature according to the allocation rule, and finally obtaining the multi-feature fusion observation model according to the additive fusion strategy, wherein the specific steps are as follows:
Step 4.1, calculating, by means of the Bhattacharyya distance, the similarity ρ_c between the target template feature histogram q_c(u) and the particle feature histogram p_c(u) and the histogram distance d_c, where c ∈ {e, t, d}, and obtaining the observation likelihood functions p(z_k^e|x_k^(i)), p(z_k^t|x_k^(i)) and p(z_k^d|x_k^(i)) of the edge, texture and depth features of the i-th particle;
Step 4.2, ranking the similarities of the N particles under the edge, texture and depth features from high to low, and respectively calculating the position means μ_e, μ_t, μ_d of the particle sets under the edge, texture and depth features, the position standard deviations σ_e, σ_t, σ_d, and the overall position mean μ_s of the particle set; setting the discrimination coefficient λ_c' of each feature so that it decreases as the position standard deviation σ_c and the deviation |μ_c - μ_s| of that feature's position mean from the overall mean increase; the normalized edge, texture and depth discriminations are λ_e, λ_t and λ_d respectively;
Step 4.3, respectively setting the fusion weight of each feature at time k as:
α_k = τ·(λ_e)_k + (1 - τ)·α_{k-1}
β_k = τ·(λ_t)_k + (1 - τ)·β_{k-1}
γ_k = τ·(λ_d)_k + (1 - τ)·γ_{k-1}
where (λ_e)_k, (λ_t)_k and (λ_d)_k respectively represent the edge, texture and depth feature discriminations at time k, α_{k-1}, β_{k-1} and γ_{k-1} are respectively the edge, texture and depth fusion weights at time k-1, and τ is the weight adjustment coefficient;
Step 4.4, in view of the incompleteness and uncertainty of any single feature in expressing the target, obtaining the multi-feature fusion observation model p(z_k|x_k^(i)) according to the additive fusion strategy, the fusion formula being:
p(z_k|x_k^(i)) = α_k·p(z_k^e|x_k^(i)) + β_k·p(z_k^t|x_k^(i)) + γ_k·p(z_k^d|x_k^(i))
5. The multi-feature fusion based particle filter target tracking method according to claim 1, wherein the tracking window adjustment of step 6 is as follows: based on the calibrated target rectangular frame in step 2, calculating the size of the rectangular frame of the image at the current moment according to the degree of similarity between the candidate template and the target template, counting the number of particles whose weight is smaller than the weight threshold T_p, comparing this number with a preset threshold, and when the number of small-weight particles is larger than the preset threshold, correcting the window size according to an adjustment formula to obtain the tracking rectangular window at the current moment, wherein the specific steps are as follows:
Step 6.1, sorting the particles by weight and counting the number N_d of particles whose weight is less than the weight threshold T_p; setting the window particle threshold to N_w and comparing N_d with N_w: if N_d < N_w, the target window size is kept unchanged, i.e. width_k = width_{k-1} and height_k = height_{k-1}; if N_d ≥ N_w, the window size is adjusted;
Step 6.2, the window size adjustment formula is:
width_k = η × width_{k-1}
height_k = η × height_{k-1}
where η = d / d_pre, d is the mean Euclidean distance from the current particles to the center of the moving target, and d_pre is the mean Euclidean distance from the particles to the target center at the previous moment.
CN202010155371.2A 2020-03-09 2020-03-09 Particle filter target tracking method based on multi-feature fusion Active CN111369597B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010155371.2A CN111369597B (en) 2020-03-09 2020-03-09 Particle filter target tracking method based on multi-feature fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010155371.2A CN111369597B (en) 2020-03-09 2020-03-09 Particle filter target tracking method based on multi-feature fusion

Publications (2)

Publication Number Publication Date
CN111369597A true CN111369597A (en) 2020-07-03
CN111369597B CN111369597B (en) 2022-08-12

Family

ID=71210367

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010155371.2A Active CN111369597B (en) 2020-03-09 2020-03-09 Particle filter target tracking method based on multi-feature fusion

Country Status (1)

Country Link
CN (1) CN111369597B (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104036526A (en) * 2014-06-26 2014-09-10 广东工业大学 Gray target tracking method based on self-adaptive window
US20190005655A1 (en) * 2017-06-29 2019-01-03 Sogang University Research Foundation Method and system of tracking an object based on multiple histograms

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112184762A (en) * 2020-09-05 2021-01-05 天津城建大学 Gray wolf optimization particle filter target tracking algorithm based on feature fusion
CN112070840B (en) * 2020-09-11 2023-10-10 上海幻维数码创意科技股份有限公司 Human body space positioning and tracking method fused by multiple depth cameras
CN112070840A (en) * 2020-09-11 2020-12-11 上海幻维数码创意科技有限公司 Human body space positioning and tracking method with integration of multiple depth cameras
CN111931754A (en) * 2020-10-14 2020-11-13 深圳市瑞图生物技术有限公司 Method and system for identifying target object in sample and readable storage medium
CN112348853A (en) * 2020-11-04 2021-02-09 哈尔滨工业大学(威海) Particle filter tracking method based on infrared saliency feature fusion
CN112486197A (en) * 2020-12-05 2021-03-12 哈尔滨工程大学 Fusion positioning tracking control method based on self-adaptive power selection of multi-source image
CN112288777A (en) * 2020-12-16 2021-01-29 西安长地空天科技有限公司 Method for tracking laser breakpoint by using particle filtering algorithm
CN112288777B (en) * 2020-12-16 2024-09-13 西安长地空天科技有限公司 Method for tracking laser breakpoint by using particle filter algorithm
CN112765492A (en) * 2020-12-31 2021-05-07 浙江省方大标准信息有限公司 Sequencing method for inspection and detection mechanism
CN112765492B (en) * 2020-12-31 2021-08-10 浙江省方大标准信息有限公司 Sequencing method for inspection and detection mechanism
CN113436313B (en) * 2021-05-24 2022-11-29 南开大学 Three-dimensional reconstruction error active correction method based on unmanned aerial vehicle
CN113436313A (en) * 2021-05-24 2021-09-24 南开大学 Three-dimensional reconstruction error active correction method based on unmanned aerial vehicle
WO2024114376A1 (en) * 2022-12-02 2024-06-06 亿航智能设备(广州)有限公司 Method and apparatus for automatically tracking target by unmanned aerial vehicle gimbal, device, and storage medium
CN118608570A (en) * 2024-08-07 2024-09-06 深圳市浩瀚卓越科技有限公司 Visual tracking correction method, device and equipment based on holder and storage medium
CN118608570B (en) * 2024-08-07 2024-10-22 深圳市浩瀚卓越科技有限公司 Visual tracking correction method, device and equipment based on holder and storage medium

Also Published As

Publication number Publication date
CN111369597B (en) 2022-08-12

Similar Documents

Publication Publication Date Title
CN111369597B (en) Particle filter target tracking method based on multi-feature fusion
CN104200485B (en) Video-monitoring-oriented human body tracking method
CN110837768B (en) Online detection and identification method for rare animal protection
CN105243667B (en) The recognition methods again of target based on Local Feature Fusion
CN107909081B (en) Method for quickly acquiring and quickly calibrating image data set in deep learning
CN108960047B (en) Face duplication removing method in video monitoring based on depth secondary tree
CN105740915B (en) A kind of collaboration dividing method merging perception information
CN109902565B (en) Multi-feature fusion human behavior recognition method
CN107705321A (en) Moving object detection and tracking method based on embedded system
CN106157330B (en) Visual tracking method based on target joint appearance model
US20220128358A1 (en) Smart Sensor Based System and Method for Automatic Measurement of Water Level and Water Flow Velocity and Prediction
CN108876820A (en) A kind of obstruction conditions based on average drifting move down object tracking method
CN113516713B (en) Unmanned aerial vehicle self-adaptive target tracking method based on pseudo twin network
CN113379789B (en) Moving target tracking method in complex environment
CN112184762A (en) Gray wolf optimization particle filter target tracking algorithm based on feature fusion
CN108765463B (en) Moving target detection method combining region extraction and improved textural features
CN111199245A (en) Rape pest identification method
CN105184771A (en) Adaptive moving target detection system and detection method
CN104036526A (en) Gray target tracking method based on self-adaptive window
CN111739064A (en) Method for tracking target in video, storage device and control device
CN113822352A (en) Infrared dim target detection method based on multi-feature fusion
CN110349184B (en) Multi-pedestrian tracking method based on iterative filtering and observation discrimination
CN116051970A (en) Identification method for overlapping fish targets based on improved yolov5 model
Widyantara et al. Gamma correction-based image enhancement and canny edge detection for shoreline extraction from coastal imagery
CN102592125A (en) Moving object detection method based on standard deviation characteristic

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant