CN103136526B - Online target tracking method based on multi-source image feature fusion
Publication number: CN103136526B (application CN201310064931.3A)
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The invention discloses an online target tracking method based on multi-source image feature fusion, which solves the technical problem of poor robustness in target tracking methods based on online selection of optimal features. In the technical scheme, the visible image and the infrared image are first fused by linear combination, so that the contrast between target and background in the current image is maximized and the target's characteristic information in the image is highlighted. Next, the target's feature information is obtained by extracting corner points from the target, and tracking is realized with an optical flow algorithm. To further improve the robustness of tracking, a detection-classification algorithm is added to classify target and background samples; on this basis, online learning is used to jointly process the optical flow tracking results and the detection classification results, yielding an optimal target tracking result with an accuracy above 85%.
Description
Technical field
The present invention relates to an online target tracking method, and in particular to an online target tracking method based on multi-source image feature fusion.
Background art
Images acquired by visible-light sensors and infrared sensors exhibit different physical characteristics. Fusing visible and infrared images automatically and effectively to perform robust online target tracking is therefore of great significance. Existing online target tracking methods mainly comprise template-matching tracking based on online learning and optimal-feature tracking based on online learning.
The document "Online selection of discriminative tracking features. PAMI, 27(10):1631-1643, Oct. 2005" discloses a target tracking method based on online selection of optimal features. The method uses histogram statistics to obtain the linear-combination image with the optimal target feature, and then tracks the target with the mean-shift method. In the stage of choosing the optimal linear-combination image, the R, G, B channel images of the visible image are linearly fused under a set of candidate weights; on the large number of newly generated combination images, the histograms of the chosen target and background are statistically analysed; the combination giving the maximum target-background contrast is selected; and the fusion weights of that image are applied in the same manner to the next frame. However, the method only addresses the linear fusion of the three visible channels; once an infrared image is added, its weight settings are no longer directly applicable. In the mean-shift tracking stage, the lack of the necessary template updating causes tracking to fail when the target's own pose changes; because the kernel window width remains fixed during tracking, tracking fails when the target's scale changes; and the tracking effect is poor when the target moves quickly. In summary, the robustness of mean-shift tracking is not good.
Summary of the invention
To overcome the poor robustness of the existing target tracking methods based on online selection of optimal features, the invention provides an online target tracking method based on multi-source image feature fusion. The method first fuses the visible image and the infrared image by linear combination, so that the contrast between target and background in the current image is maximized and the target's characteristic information is highlighted. Next, the target's feature information is obtained by extracting corner points from the target, and tracking is realized with an optical flow algorithm. To further improve robustness, a detection-classification algorithm is added to classify target and background samples; on this basis, online learning is used to jointly process the optical flow tracking results and the detection classification results, yielding an optimal tracking result.
The technical solution adopted by the invention to solve the technical problem is an online target tracking method based on multi-source image feature fusion, characterized by comprising the following steps:
Step 1: Linearly combine the information of the R, G, B channels of the visible image with the thermal-infrared grayscale information of the infrared image to produce fused images. The linear combination is expressed as

F_1 ≡ { w_1·R + w_2·G + w_3·B + w_4·I | w_* ∈ {-2, -1, 0, 1, 2} }   (1)

In the formula, R, G, B are the images of the three channels of the visible image, I is the thermal-infrared grayscale image of the infrared image, and w_* are the corresponding linear combination weights, each taking values from -2 to 2. Equivalent combinations satisfying (w_1, w_2, w_3, w_4) = k·(w_1', w_2', w_3', w_4') are rejected.
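As a minimal illustration of formula (1), the sketch below applies one weight vector to aligned R, G, B and thermal channels and rescales the result into an 8-bit image for histogramming. The function name and the rescaling convention are assumptions of this sketch, not taken from the patent.

```python
import numpy as np

def fuse(R, G, B, I, w):
    """Apply one linear combination of formula (1) to produce a fused image."""
    F = w[0] * R + w[1] * G + w[2] * B + w[3] * I
    F = F - F.min()                        # shift so the minimum is 0
    # rescale to 0..255 for display and histogram statistics
    return (255.0 * F / max(F.max(), 1e-9)).astype(np.uint8)

# Usage: fuse with the weight vector (1, 0, 0, 0), i.e. the R channel alone.
R = np.arange(256, dtype=float).reshape(16, 16)
Z = np.zeros_like(R)
out = fuse(R, Z, Z, Z, (1, 0, 0, 0))
```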
Step 2: On each linearly combined image, compute histogram statistics of the target region and the background region separately. Let the pixel-feature histogram of the target be H_obj(i) and that of the background samples be H_bg(i). Normalizing each yields the probability densities of target and background:

p(i) = H_obj(i) / n_obj   (2)
q(i) = H_bg(i) / n_bg   (3)

In the formulas, n_obj and n_bg are the numbers of target and background samples, and p(i), q(i) are the discrete probability densities of the target and background samples. From p(i) and q(i), form the log-likelihood ratio

L(i) = log( (p(i) + δ) / (q(i) + δ) )   (4)

where δ = 0.001 prevents the argument of the logarithm from becoming 0. The degree of difference between target features and background features is judged by the variance of L(i). Using the variance formula var(x) = E[x²] − (E[x])², one obtains

var(L; a) = Σ_i a(i)·L(i)² − [ Σ_i a(i)·L(i) ]²   (5)

where a(i) is a probability density. This gives the variance ratio of the likelihood function

VR(L; p, q) = var(L; (p + q)/2) / [ var(L; p) + var(L; q) ]   (6)

Compute these scores for all candidate combinations, sort the candidates by score, and take the combination with the maximum contrast between target features and background features as the optimal feature-fusion linear combination.
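Formulas (2)-(6) can be sketched as follows, following the variance-ratio criterion of Collins et al. The bin count, the histogram range convention, and the small denominator guard are assumptions of this sketch.

```python
import numpy as np

def variance_ratio(obj_pixels, bg_pixels, bins=32, delta=1e-3):
    """Score one fused image by the variance ratio of the log-likelihood L(i)."""
    lo = min(obj_pixels.min(), bg_pixels.min())
    hi = max(obj_pixels.max(), bg_pixels.max())
    edges = np.linspace(lo, hi + 1e-9, bins + 1)
    p, _ = np.histogram(obj_pixels, bins=edges)     # H_obj(i)
    q, _ = np.histogram(bg_pixels, bins=edges)      # H_bg(i)
    p = p / p.sum()                                 # eq. (2)
    q = q / q.sum()                                 # eq. (3)
    L = np.log((p + delta) / (q + delta))           # eq. (4)

    def var(a):                                     # eq. (5): var(L; a)
        return np.sum(a * L ** 2) - np.sum(a * L) ** 2

    # eq. (6); tiny guard keeps a separable pair from dividing by zero
    return var((p + q) / 2) / (var(p) + var(q) + 1e-12)

# Usage: well-separated samples score higher than indistinguishable ones.
rng = np.random.default_rng(0)
obj = rng.normal(40.0, 5.0, 500)     # target pixels cluster low
bg = rng.normal(200.0, 5.0, 500)     # background pixels cluster high
vr_separable = variance_ratio(obj, bg)
vr_identical = variance_ratio(bg, bg)
```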
Step 3: First, extract features from the object in the obtained fused image by a method based on corner detection and non-maximum suppression; then remove outliers with the RANSAC method; finally, compute the feature optical flow of the remaining stable feature points by the optical flow method and estimate the position at which the target appears in the next frame. Optical flow describes the motion of an observed object, surface, or edge caused by motion relative to the observer. It is detected from the images as an instantaneous velocity or a discrete image displacement. At each moment there exists a two- or multi-dimensional vector set; (x, y, t) denotes the instantaneous velocity at the specified coordinate at time t. Let I(x, y, t) be the intensity at point (x, y) at time t; within a very short time Δt, x and y increase by Δx and Δy respectively:

I(x + Δx, y + Δy, t + Δt) = I(x, y, t) + (∂I/∂x)·Δx + (∂I/∂y)·Δy + (∂I/∂t)·Δt + higher-order terms   (7)

At the same time, since the displacement between two adjacent frames is sufficiently small,

I(x, y, t) = I(x + Δx, y + Δy, t + Δt)   (8)

from which

(∂I/∂x)·Δx + (∂I/∂y)·Δy + (∂I/∂t)·Δt = 0   (9)

Dividing by Δt,

(∂I/∂x)·(Δx/Δt) + (∂I/∂y)·(Δy/Δt) + ∂I/∂t = 0   (10)

which finally leads to the conclusion

(∂I/∂x)·V_x + (∂I/∂y)·V_y + ∂I/∂t = 0   (11)

In the formula, V_x and V_y are the velocities in x and y, called the optical flow of I(x, y, t), and ∂I/∂x, ∂I/∂y, ∂I/∂t are the partial derivatives of the image at (x, y, t) in the corresponding directions. Writing I_x, I_y, I_t for these derivatives, the relation becomes

I_x·V_x + I_y·V_y = −I_t   (12)

In the formula, I_x and I_y are the gradients of a feature point in the x and y directions, and I_t is the gray-level difference between the two frames at the corresponding pixel position. The optical flow of the feature points selected in the target is thus obtained, and the position at which the target appears in the next frame is estimated from its position in the previous frame.
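Equation (12) provides one constraint with two unknowns per pixel. A Lucas-Kanade-style solution, as commonly paired with corner features, assumes constant flow in a small window around each corner and solves the stacked constraints in least squares. The patent does not specify the flow solver; the window size, gradient scheme, and single-scale formulation below are assumptions of this sketch.

```python
import numpy as np

def lk_flow_at(prev, curr, x, y, win=7):
    """Solve I_x*V_x + I_y*V_y = -I_t  (eq. 12) in least squares
    over a (2*win+1)^2 window centred on the integer point (x, y)."""
    Ix = np.gradient(prev, axis=1)      # spatial gradient in x
    Iy = np.gradient(prev, axis=0)      # spatial gradient in y
    It = curr - prev                    # temporal gray-level difference
    sl = (slice(y - win, y + win + 1), slice(x - win, x + win + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v                            # (V_x, V_y)

# Synthetic check: a smooth blob shifted one pixel to the right
# should yield a flow of roughly (+1, 0) at its centre.
yy, xx = np.mgrid[0:64, 0:64].astype(float)
prev = np.exp(-((xx - 32.0) ** 2 + (yy - 32.0) ** 2) / 50.0)
curr = np.exp(-((xx - 33.0) ** 2 + (yy - 32.0) ** 2) / 50.0)
vx, vy = lk_flow_at(prev, curr, 32, 32)
```

In practice a pyramidal variant (e.g. OpenCV's calcOpticalFlowPyrLK) is used so larger displacements stay within the linearization range of equation (7).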
Step 4: First, generate all positions where the target may appear and input them to the first-layer probability classifier for voting; the results are passed to a nearest-neighbor classifier. Next, the nearest-neighbor classifier makes a decision on them and outputs the positions with high confidence together with their intrinsic similarity. Then the output of the nearest-neighbor classifier is used as the input of a distance-constraint classifier, which filters the candidate positions by their distance to the confirmed tracking result. The final output is the final result of the detection part. If the number of final detection results is too large or too small, this is fed back to the nearest-neighbor classifier and distance-constraint classifier stages, whose thresholds are adjusted before deciding again.
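The three-layer cascade with threshold feedback described in step 4 might be organised as follows. All interfaces, threshold values, and adjustment factors are hypothetical, since the patent specifies only the structure.

```python
def detect(windows, prob_clf, nn_clf, dist_clf,
           nn_thresh=0.6, dist_thresh=30.0,
           min_out=1, max_out=10, max_retries=3):
    """Filter candidate windows through probability -> nearest-neighbour
    -> distance-constraint classifiers, relaxing or tightening thresholds
    when the final count falls outside [min_out, max_out]."""
    voted = [w for w in windows if prob_clf(w)]            # layer 1: voting
    final = []
    for _ in range(max_retries):
        scored = [(w, nn_clf(w)) for w in voted]           # layer 2: NN decision
        confident = [(w, s) for w, s in scored if s >= nn_thresh]
        final = [w for w, s in confident                   # layer 3: distance
                 if dist_clf(w) <= dist_thresh]            # to confirmed track
        if min_out <= len(final) <= max_out:
            return final
        if len(final) < min_out:       # too few results: relax thresholds
            nn_thresh *= 0.9
            dist_thresh *= 1.1
        else:                          # too many results: tighten thresholds
            nn_thresh *= 1.1
            dist_thresh *= 0.9
    return final

# Usage with toy classifiers over scalar "positions".
windows = [1.0, 2.0, 3.0, 8.0]
result = detect(windows,
                prob_clf=lambda w: True,
                nn_clf=lambda w: 1.0 if w < 5 else 0.3,
                dist_clf=lambda w: abs(w - 2.0))
```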
Step 5: According to the tracking result and the detection result in each frame, first judge whether both exist; then adjust the final result with a tracking-detection cooperation mechanism; finally, through a feedback mechanism, use the final result to adjust the tracking trajectory and to update each classifier in the detection process.
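A sketch of the step-5 cooperation logic, with hypothetical decision rules in the spirit of tracking-learning-detection; the patent states only that tracking and detection results are jointly judged and fed back.

```python
def integrate(track_result, det_results, conf):
    """Combine one frame's tracker output with the detector's candidates.
    Returns (final_result, reinitialise_tracker)."""
    if track_result is None and not det_results:
        return None, False                    # neither exists: target lost
    if track_result is None:
        best = max(det_results, key=conf)     # detector re-acquires the target
        return best, True
    if not det_results:
        return track_result, False            # tracker alone this frame
    best = max(det_results, key=conf)
    if conf(best) > conf(track_result):
        return best, True                     # confident detection corrects drift
    return track_result, False                # tracker confirmed by detector
```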
The beneficial effects of the invention are as follows. The method fuses the visible image and the infrared image by linear combination, so that the contrast between target and background in the current image is maximized and the target's characteristic information in the image is highlighted. Next, the target's feature information is obtained by extracting corner points from the target, and tracking is realized with an optical flow algorithm. To further improve robustness, a detection-classification algorithm is added to classify target and background samples; on this basis, online learning is used to jointly process the optical flow tracking results and the detection classification results, yielding an optimal tracking result with an accuracy above 85%.
The present invention is described in detail below with reference to an embodiment.
Embodiment
The concrete steps of the online target tracking method based on multi-source image feature fusion according to the present invention are as follows:
1. Multi-source image feature fusion.
(a) Linearly combine the information of the R, G, B channels of the visible image with the thermal-infrared grayscale information of the infrared image in multiple ways to produce a large number of fused images. The linear combination is expressed as

F_1 ≡ { w_1·R + w_2·G + w_3·B + w_4·I | w_* ∈ {-2, -1, 0, 1, 2} }   (1)

where R, G, B are the images of the three channels of the visible image, I is the thermal-infrared grayscale image of the infrared image, and w_* are the corresponding linear combination weights, each taking values from -2 to 2. In total, 625 linearly combined images are generated; however, combinations satisfying (w_1, w_2, w_3, w_4) = k·(w_1', w_2', w_3', w_4') are equivalent, so after weeding out equivalent combinations, 215 distinct linear combination images remain to be computed.
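The enumeration and pruning of step (a) can be sketched as follows. One common canonicalisation divides each weight vector by its greatest common divisor and fixes the sign of the first non-zero entry; under that convention 273 canonical combinations remain (including the useless all-zero one), so the 215 stated above evidently reflects a different or stricter pruning rule, which the text does not spell out. The helper name is an assumption of this sketch.

```python
from itertools import product
from math import gcd

def canonical(w):
    """Canonical representative of w under the scalar equivalence w ~ k*w'."""
    g = gcd(gcd(abs(w[0]), abs(w[1])), gcd(abs(w[2]), abs(w[3])))
    if g == 0:
        return w                          # the all-zero combination
    w = tuple(c // g for c in w)          # divide out the common factor
    for c in w:                           # fix the sign of the first non-zero entry
        if c > 0:
            return w
        if c < 0:
            return tuple(-d for d in w)
    return w

weights = list(product([-2, -1, 0, 1, 2], repeat=4))   # 5**4 = 625 combinations
unique = sorted(set(canonical(w) for w in weights))    # scalar multiples merged
```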
(b) On each linearly combined image, compute histogram statistics of the target region and the background region separately. Let the pixel-feature histogram of the target be H_obj(i) and that of the background samples be H_bg(i). Normalizing each yields the probability densities of target and background:

p(i) = H_obj(i) / n_obj   (2)
q(i) = H_bg(i) / n_bg   (3)

where n_obj and n_bg are the numbers of target and background samples. From p(i) and q(i), form the log-likelihood ratio

L(i) = log( (p(i) + δ) / (q(i) + δ) )   (4)

where δ = 0.001 prevents the argument of the logarithm from becoming 0. The degree of difference between target features and background features is judged by the variance of L(i). Using the variance formula var(x) = E[x²] − (E[x])², one obtains

var(L; a) = Σ_i a(i)·L(i)² − [ Σ_i a(i)·L(i) ]²   (5)

where a(i) is a probability density. This gives the variance ratio of the likelihood function

VR(L; p, q) = var(L; (p + q)/2) / [ var(L; p) + var(L; q) ]   (6)

After calculation, the scores of all candidates are obtained and sorted, and the combination with the maximum contrast between target features and background features is taken as the optimal feature-fusion linear combination.
2. Online-learning target tracking.
(a) First, extract features from the object in the obtained fused image by a method based on corner detection and non-maximum suppression; then remove outliers with the RANSAC method; finally, compute the feature optical flow of the remaining stable feature points by the optical flow method in order to estimate the position at which the target appears in the next frame. Optical flow describes the motion of an observed object, surface, or edge caused by motion relative to the observer. From a series of images, it is detected as an instantaneous velocity or a discrete image displacement. At each moment there exists a two- or multi-dimensional vector set; (x, y, t) denotes the instantaneous velocity at the specified coordinate at time t. Let I(x, y, t) be the intensity at point (x, y) at time t; within a very short time Δt, x and y increase by Δx and Δy respectively:

I(x + Δx, y + Δy, t + Δt) = I(x, y, t) + (∂I/∂x)·Δx + (∂I/∂y)·Δy + (∂I/∂t)·Δt + higher-order terms   (7)

At the same time, since the displacement between two adjacent frames is sufficiently small,

I(x, y, t) = I(x + Δx, y + Δy, t + Δt)   (8)

from which

(∂I/∂x)·Δx + (∂I/∂y)·Δy + (∂I/∂t)·Δt = 0   (9)

Dividing by Δt,

(∂I/∂x)·(Δx/Δt) + (∂I/∂y)·(Δy/Δt) + ∂I/∂t = 0   (10)

which finally leads to the conclusion

(∂I/∂x)·V_x + (∂I/∂y)·V_y + ∂I/∂t = 0   (11)

V_x and V_y are the velocities in x and y, also called the optical flow of I(x, y, t), and ∂I/∂x, ∂I/∂y, ∂I/∂t are the partial derivatives of the image at (x, y, t) in the corresponding directions. Writing I_x, I_y, I_t for these derivatives, the relation becomes

I_x·V_x + I_y·V_y = −I_t   (12)

Here I_x and I_y are the gradients of a feature point in the x and y directions, and I_t is the gray-level difference between the two frames at the corresponding pixel position. The optical flow of the feature points selected in the target is thus obtained, and the position at which the target appears in the next frame is estimated from its position in the previous frame.
(b) Tracking with optical flow alone frequently fails under illumination changes and occlusion. Therefore, alongside optical flow tracking, a detection-classification algorithm is added, and a learning process over the optical flow tracking results and the classification results chooses the optimal tracking result. The multi-classifier combined detection mechanism consists of three parts, each producing positive and negative samples from its decision; the negative samples are added to the system's negative-sample set for later online learning, while the positive samples are passed to the next-layer classifier for further judgment. First, generate all positions where the target may appear and input them to the first-layer probability classifier for voting; the results are passed to a nearest-neighbor classifier. Next, the nearest-neighbor classifier makes a decision on them and outputs the positions with high confidence together with their intrinsic similarity. Then the output of the nearest-neighbor classifier is used as the input of a distance-constraint classifier, which filters the candidate positions by their distance to the confirmed tracking result. The final output is the final result of the detection part. If the number of final detection results is too large or too small, this is fed back to the nearest-neighbor classifier and distance-constraint classifier stages, whose thresholds are adjusted before deciding again.
(c) According to the tracking result and the detection result in each frame, first judge whether both exist; then adjust the final result with a tracking-detection cooperation mechanism; finally, through a feedback mechanism, use the final result to adjust the tracking trajectory and to update each classifier in the detection process.
Claims (1)
1. An online target tracking method based on multi-source image feature fusion, characterized by comprising the following steps:
Step 1: linearly combining the information of the R, G, B channels of a visible image with the thermal-infrared grayscale information of an infrared image to produce fused images, the linear combination being expressed as

F_1 ≡ { w_1·R + w_2·G + w_3·B + w_4·I | w_* ∈ {-2, -1, 0, 1, 2} }   (1)

in which R, G, B are the images of the three channels of the visible image, I is the thermal-infrared grayscale image of the infrared image, and w_* are the corresponding linear combination weights, each taking values from -2 to 2; and rejecting equivalent combinations satisfying (w_1, w_2, w_3, w_4) = k·(w_1', w_2', w_3', w_4');
Step 2: on each linearly combined image, computing histogram statistics of the target region and the background region separately; letting the pixel-feature histogram of the target be H_obj(i) and that of the background samples be H_bg(i), normalizing each to obtain the probability densities of target and background:

p(i) = H_obj(i) / n_obj   (2)
q(i) = H_bg(i) / n_bg   (3)

in which n_obj and n_bg are the numbers of target and background samples, and p(i), q(i) are the discrete probability densities of the target and background samples; forming from p(i) and q(i) the log-likelihood ratio

L(i) = log( (p(i) + δ) / (q(i) + δ) )   (4)

in which δ = 0.001 prevents the argument of the logarithm from becoming 0; judging the degree of difference between target features and background features by the variance of L(i), using the variance formula var(x) = E[x²] − (E[x])² to obtain

var(L; a) = Σ_i a(i)·L(i)² − [ Σ_i a(i)·L(i) ]²   (5)

in which a(i) is a probability density, thereby obtaining the variance ratio of the likelihood function

VR(L; p, q) = var(L; (p + q)/2) / [ var(L; p) + var(L; q) ]   (6)

and computing the scores of all candidates, sorting them, and taking the combination with the maximum contrast between target features and background features as the optimal feature-fusion linear combination;
Step 3: first extracting features from the object in the obtained fused image by a method based on corner detection and non-maximum suppression; then removing outliers with the RANSAC method; finally computing the feature optical flow of the remaining stable feature points by the optical flow method and estimating the position at which the target appears in the next frame; optical flow describing the motion of an observed object, surface, or edge caused by motion relative to the observer, and being detected from the images as an instantaneous velocity or a discrete image displacement; at each moment there existing a two- or multi-dimensional vector set, with (x, y, t) denoting the instantaneous velocity at the specified coordinate at time t; letting I(x, y, t) be the intensity at point (x, y) at time t, within a very short time Δt, x and y increasing by Δx and Δy respectively:

I(x + Δx, y + Δy, t + Δt) = I(x, y, t) + (∂I/∂x)·Δx + (∂I/∂y)·Δy + (∂I/∂t)·Δt + higher-order terms   (7)

the displacement between two adjacent frames being sufficiently small, so that

I(x, y, t) = I(x + Δx, y + Δy, t + Δt)   (8)

from which

(∂I/∂x)·Δx + (∂I/∂y)·Δy + (∂I/∂t)·Δt = 0   (9)

and, dividing by Δt,

(∂I/∂x)·(Δx/Δt) + (∂I/∂y)·(Δy/Δt) + ∂I/∂t = 0   (10)

finally leading to the conclusion

(∂I/∂x)·V_x + (∂I/∂y)·V_y + ∂I/∂t = 0   (11)

in which V_x and V_y are the velocities in x and y, called the optical flow of I(x, y, t), and ∂I/∂x, ∂I/∂y, ∂I/∂t are the partial derivatives of the image at (x, y, t) in the corresponding directions; writing I_x, I_y, I_t for these derivatives, the relation becoming

I_x·V_x + I_y·V_y = −I_t   (12)

in which I_x and I_y are the gradients of a feature point in the x and y directions, and I_t is the gray-level difference between the two frames at the corresponding pixel position; thereby obtaining the optical flow of the feature points selected in the target and estimating, from the target position in the previous frame, the position at which the target appears in the next frame;
Step 4: first generating all positions where the target may appear and inputting them to a first-layer probability classifier for voting, the results being passed to a nearest-neighbor classifier; next, the nearest-neighbor classifier making a decision on them and outputting the positions with high confidence together with their intrinsic similarity; then using the output of the nearest-neighbor classifier as the input of a distance-constraint classifier, which filters the candidate positions by their distance to the confirmed tracking result; the final output being the final result of the detection part; and, if the number of final detection results is too large or too small, feeding this back to the nearest-neighbor classifier and distance-constraint classifier stages, whose thresholds are adjusted before deciding again;
Step 5: according to the tracking result and the detection result in each frame, first judging whether both exist; then adjusting the final result with a tracking-detection cooperation mechanism; finally, through a feedback mechanism, using the final result to adjust the tracking trajectory and to update each classifier in the detection process.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201310064931.3A CN103136526B (en) | 2013-03-01 | 2013-03-01 | Online target tracking method based on multi-source image feature fusion
Publications (2)
Publication Number | Publication Date
---|---
CN103136526A (en) | 2013-06-05
CN103136526B (en) | 2015-12-23
Family ID: 48496334
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101339664B (en) * | 2008-08-27 | 2012-04-18 | 北京中星微电子有限公司 | Object tracking method and system |
CN102436590A (en) * | 2011-11-04 | 2012-05-02 | 康佳集团股份有限公司 | Real-time tracking method based on on-line learning and tracking system thereof |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7526101B2 (en) * | 2005-01-24 | 2009-04-28 | Mitsubishi Electric Research Laboratories, Inc. | Tracking objects in videos with adaptive classifiers |
Non-Patent Citations (1)
- Robert T. Collins, et al., "Online Selection of Discriminative Tracking Features," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 10, pp. 1631-1643, October 2005.
Legal Events
- C06 / PB01: Publication
- C10 / SE01: Entry into substantive examination (entry into force of request for substantive examination)
- C14 / GR01: Grant of patent or utility model (patent grant)