CN103136526A - Online target tracking method based on multi-source image feature fusion - Google Patents

Online target tracking method based on multi-source image feature fusion

Info

Publication number
CN103136526A
Authority
CN
China
Prior art keywords
target
image
tracking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2013100649313A
Other languages
Chinese (zh)
Other versions
CN103136526B (en)
Inventor
张艳宁
杨涛
陈挺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN201310064931.3A priority Critical patent/CN103136526B/en
Publication of CN103136526A publication Critical patent/CN103136526A/en
Application granted granted Critical
Publication of CN103136526B publication Critical patent/CN103136526B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The invention discloses an online target tracking method based on multi-source image feature fusion, addressing the technical problem that target tracking methods based on online selection of optimal features have poor robustness. The technical scheme first fuses a visible image and an infrared image by linear combination, so that the contrast between the target and the background in the current image reaches a maximum and the feature information of the target in the image is highlighted. Second, the feature information of the target is obtained by extracting corner points of the target, and the target is tracked with an optical-flow algorithm. To further improve the robustness of tracking, a detection classification algorithm is added to classify target and background samples; on this basis, online learning is used to jointly process the optical-flow tracking result and the detection classification result to obtain the optimal target tracking result, with tracking accuracy above 85%.

Description

Online target tracking method based on multi-source image feature fusion
Technical field
The present invention relates to an online target tracking method, and in particular to an online target tracking method based on multi-source image feature fusion.
Background technology
Images acquired by visible-light sensors and infrared sensors exhibit different physical characteristics, so automatically and effectively fusing visible images with infrared images for robust online target tracking is of great significance. Existing online target tracking methods mainly include template-matching tracking based on online learning and optimal-feature tracking based on online learning.
Document " Online selection of discriminative tracking features.PAMI, 27 (10): 1631-1643, Oct.2005. " discloses a kind of optimal characteristics method for tracking target based on choosing online.The method adopts the mode of statistics with histogram to obtain optimum target signature linear combination image, adopts afterwards the method for mean-shift that target is followed the tracks of.Choosing the optimum linearity combination image stage, utilize no parameter to arrange the R in visible images, G, three channel image of B are carried out linearity and are merged, then on newly-generated a large amount of linear combination images, the target chosen and the histogram of background are carried out statistical study, obtain the linear fused images of maximum-contrast result, utilize the linear fusion parameters of this linearity fused images to carry out the same manner fusion treatment to the next frame image.But the method is mainly the linearity fusion for three passages of visible images, and after increasing infrared image, the parameter setting that this linearity merges can not directly be suitable for.At the mean-shift tracking phase, owing to lacking necessary template renewal, when the attitude variation occurs target itself, follow the tracks of unsuccessfully; Because the window width size remains unchanged, when target scale changes to some extent, follow the tracks of unsuccessfully in tracing process; When target velocity was very fast, tracking effect was bad.In sum, the robustness of mean-shift tracking is not fine.
Summary of the invention
To overcome the poor robustness of the existing optimal-feature target tracking method based on online selection, the invention provides an online target tracking method based on multi-source image feature fusion. The method fuses the visible image and the infrared image by linear combination, so that the contrast between the target and the background in the current image reaches a maximum and the feature information of the target in the image is highlighted. Second, the feature information of the target is obtained by extracting corner points of the target, and tracking of the target is achieved with an optical-flow algorithm. To further improve the robustness of tracking, a detection classification algorithm is added to classify target and background samples; on this basis, online learning is used to jointly process the optical-flow tracking result and the detection classification result to obtain the optimal target tracking result.
The technical solution adopted by the present invention to solve the technical problem is an online target tracking method based on multi-source image feature fusion, characterized by comprising the following steps:
Step 1: linearly combine the information of the R, G, and B channels of the visible image together with the thermal-infrared gray-level information of the infrared image to produce fused images. The expression of the linear combination is as follows,
F1 ≡ {w1·R + w2·G + w3·B + w4·I | w* ∈ {−2, −1, 0, 1, 2}}    (1)
In the formula, R, G, and B correspond to the image information of the three channels of the visible image, I corresponds to the thermal-infrared gray-level information of the infrared image, and w* are the corresponding linear-combination parameters, taking values from −2 to 2. Equivalent combinations (w1, w2, w3, w4) = k(w1', w2', w3', w4') are rejected.
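For illustration, a minimal Python/NumPy sketch of this fusion step follows. It assumes 8-bit R, G, B, and infrared arrays of equal size; the function names and the rescaling of the fused result to [0, 255] are assumptions of the sketch, not part of the claimed method.

```python
import itertools
import numpy as np

def candidate_weights():
    """Enumerate weight vectors (w1, w2, w3, w4) with wi in {-2,-1,0,1,2},
    skipping the all-zero vector and rejecting any vector that is a scalar
    multiple of one already kept, per the equivalence (w) = k(w')."""
    def parallel(w, u):
        # w and u are parallel iff all 2x2 cross terms vanish.
        return all(w[i] * u[j] == w[j] * u[i]
                   for i in range(4) for j in range(i + 1, 4))
    kept = []
    for w in itertools.product([-2, -1, 0, 1, 2], repeat=4):
        if any(w) and not any(parallel(w, u) for u in kept):
            kept.append(w)
    return kept

def fuse(r, g, b, ir, w):
    """Linear combination w1*R + w2*G + w3*B + w4*I of equation (1),
    rescaled to an 8-bit image (the rescaling is an assumption)."""
    f = (w[0] * r.astype(np.float64) + w[1] * g.astype(np.float64)
         + w[2] * b.astype(np.float64) + w[3] * ir.astype(np.float64))
    f -= f.min()
    return (255.0 * f / max(f.max(), 1e-9)).astype(np.uint8)
```

Note that the exact number of surviving combinations depends on how the scalar k in the equivalence test is restricted; the embodiment below reports 215 computable combination images.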
Step 2: on each linearly combined image, compute the histogram statistics of the target region and the background region separately. Let H_obj(i) be the pixel-feature histogram of the target and H_bg(i) the pixel-feature histogram of the background sample; compute and normalize the probability densities of target and background to obtain
p(i) = H_obj(i) / n_obj    (2)
q(i) = H_bg(i) / n_bg    (3)
In the formulas, n_obj and n_bg are the numbers of target samples and background samples, and p(i) and q(i) are the discrete probability densities of the target sample and the background sample. From p(i) and q(i), obtain the likelihood function
L(i) = log( max{p(i), δ} / max{q(i), δ} )    (4)
In the formula, δ = 0.001 guards against the argument of the log becoming 0. The difference between target-sample features and background-sample features is judged by the variance of L(i); using the variance formula var(x) = E[x²] − (E[x])², we obtain
var(L; a) = Σ_i a(i)·L²(i) − [Σ_i a(i)·L(i)]²    (5)
In the formula, a(i) is a probability density function. The variance ratio of the likelihood function is thus
VR(L; p, q) ≡ var(L; (p+q)/2) / [var(L; p) + var(L; q)]    (6)
Compute these statistics for all samples and sort them; the combination giving the maximum contrast between target features and background features is taken as the optimal feature-fusion linear combination.
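A compact sketch of the variance-ratio computation of equations (2)-(6) follows, assuming the target and background pixels of one linearly combined image are given as flat arrays; the bin count and value range are assumptions.

```python
import numpy as np

DELTA = 0.001  # delta of equation (4): keeps the log argument away from 0

def variance_ratio(target_pixels, background_pixels, bins=32):
    """Variance ratio VR(L; p, q) of the log-likelihood L(i), per
    equations (2)-(6); a larger value means the fused feature separates
    target from background better."""
    h_obj, _ = np.histogram(target_pixels, bins=bins, range=(0, 256))
    h_bg, _ = np.histogram(background_pixels, bins=bins, range=(0, 256))
    p = h_obj / max(h_obj.sum(), 1)                            # equation (2)
    q = h_bg / max(h_bg.sum(), 1)                              # equation (3)
    L = np.log(np.maximum(p, DELTA) / np.maximum(q, DELTA))    # equation (4)

    def var(a):
        # equation (5): var(L; a) = E_a[L^2] - (E_a[L])^2
        return np.dot(a, L * L) - np.dot(a, L) ** 2

    # equation (6); the tiny term avoids 0/0 when p and q coincide
    return var((p + q) / 2) / (var(p) + var(q) + 1e-12)
```

The optimal fusion is then the candidate weight vector whose fused image maximizes this ratio.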
Step 3: first, perform feature extraction on the object in the fused image by corner detection with non-maximum suppression; then remove outliers with the RANSAC method; finally, compute the feature optical flow of the remaining stable feature points with the optical-flow method and estimate the position at which the target appears in the next frame. Optical flow describes the motion of an observed object, surface, or edge caused by motion relative to the observer; the instantaneous velocity or discrete image displacement of the motion is detected from the images. At each moment there is a two- or multi-dimensional vector set, e.g. (x, y, t), representing the instantaneous velocity of the specified coordinates at time t. Let I(x, y, t) be the intensity of point (x, y) at time t; within a very short time Δt, x and y increase by Δx and Δy respectively:
I(x+Δx, y+Δy, t+Δt) = I(x, y, t) + (∂I/∂x)Δx + (∂I/∂y)Δy + (∂I/∂t)Δt    (7)
Meanwhile, since the displacement between two adjacent frames is sufficiently small,
I(x, y, t) = I(x+Δx, y+Δy, t+Δt)    (8)
it follows that
(∂I/∂x)Δx + (∂I/∂y)Δy + (∂I/∂t)Δt = 0    (9)
(∂I/∂x)(Δx/Δt) + (∂I/∂y)(Δy/Δt) + (∂I/∂t)(Δt/Δt) = 0    (10)
which finally yields:
(∂I/∂x)·Vx + (∂I/∂y)·Vy + ∂I/∂t = 0    (11)
In the formula, Vx and Vy are the velocity components in the x and y directions, called the optical flow of I(x, y, t), and ∂I/∂x, ∂I/∂y, ∂I/∂t are the partial derivatives of the image at (x, y, t) in the corresponding directions at time t. Writing Ix, Iy, It for these derivatives, the relation is as follows:
Ix·Vx + Iy·Vy = −It    (12)
In the formula, Ix and Iy are the gradient values of the feature point in the x and y directions respectively, and It is the gray-level difference of the two successive frames at the corresponding pixel position. The optical flow of the feature points selected on the target is thus obtained, and the position where the target appears in the next frame is estimated on the basis of the target position in the previous frame.
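The following sketch shows one possible implementation of Step 3 with OpenCV on two consecutive fused gray images. The Shi-Tomasi corner detector (whose minDistance parameter performs the non-maximum suppression), the pyramidal Lucas-Kanade flow, and the RANSAC similarity fit are stand-ins chosen for the sketch, and all parameter values are assumptions.

```python
import cv2
import numpy as np

def track_step(prev_gray, next_gray, box):
    """One tracking step: corners in the current target box, Lucas-Kanade
    optical flow to the next frame, RANSAC outlier removal, and a shifted
    box estimate.  Returns the estimated box in the next frame, or None."""
    x, y, w, h = box
    mask = np.zeros_like(prev_gray)
    mask[y:y + h, x:x + w] = 255
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                  qualityLevel=0.01, minDistance=5, mask=mask)
    if pts is None:
        return None
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None)
    ok = status.ravel() == 1
    old_pts, new_pts = pts[ok], nxt[ok]
    if len(old_pts) < 3:
        return None
    # RANSAC: fit a similarity transform and keep only the inlier matches.
    M, inliers = cv2.estimateAffinePartial2D(old_pts, new_pts,
                                             method=cv2.RANSAC,
                                             ransacReprojThreshold=3.0)
    if M is None or inliers is None:
        return None
    keep = inliers.ravel() == 1
    dx, dy = np.median((new_pts[keep] - old_pts[keep]).reshape(-1, 2), axis=0)
    return int(round(x + dx)), int(round(y + dy)), w, h
```

In use, prev_gray and next_gray would be two consecutive images produced by the optimal linear combination selected in Step 2.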
Step 4: first, generate all candidate positions where the target may appear, input them into the first-layer probability classifier for voting, and pass the result to the nearest-neighbor classifier. Second, the nearest-neighbor classifier makes a decision on them and outputs the highly credible positions together with their intrinsic similarities. Third, the output of the nearest-neighbor classifier is used as the input of the distance-constraint classifier, which screens the candidate positions by their distance to the confirmed tracking result. The final output is the end result of the detection part. If the number of final detection results is too large or too small, feedback is sent to the nearest-neighbor and distance-constraint classifier stages to adjust the thresholds and decide again.
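Step 4 can be sketched as the cascade below. The three classifier callables, the thresholds, and the feedback rule that loosens or tightens them are hypothetical placeholders; the text fixes only the structure (probability voting, nearest-neighbor decision, distance constraint, threshold feedback), not these concrete values.

```python
def center_distance(a, b):
    """Euclidean distance between two (x, y) candidate centers."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def detect(candidates, prob_vote, nn_similarity, track_pos,
           vote_thresh=0.5, sim_thresh=0.6, max_dist=40.0,
           min_out=1, max_out=10, rounds=5):
    """Three-layer detection cascade of Step 4 (a sketch with assumed
    thresholds).  Layer 1: probability-classifier voting.  Layer 2:
    nearest-neighbor similarity decision.  Layer 3: distance constraint
    against the confirmed tracking position, with threshold feedback when
    the final result set is too large or too small."""
    final = []
    for _ in range(rounds):                      # bounded re-decision loop
        layer1 = [c for c in candidates if prob_vote(c) > vote_thresh]
        layer2 = [(c, nn_similarity(c)) for c in layer1]
        layer2 = [(c, s) for c, s in layer2 if s > sim_thresh]
        final = [(c, s) for c, s in layer2
                 if center_distance(c, track_pos) < max_dist]
        if min_out <= len(final) <= max_out:
            break
        if len(final) < min_out:                 # too few: loosen thresholds
            sim_thresh *= 0.9
            max_dist *= 1.2
        else:                                    # too many: tighten thresholds
            sim_thresh *= 1.1
            max_dist *= 0.8
    return final
```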
Step 5: according to the tracking result and detection result in each frame, first judge whether the tracking result and the detection result both exist, then adjust the final result with the tracking-detection synergy mechanism; finally, through the feedback mechanism, use the final result to adjust the tracking trajectory and update each classifier in the detection process.
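Finally, a sketch of the Step 5 synergy and feedback, under the assumption that the tracker and detector each report a box with a confidence; the agreement test via box overlap and the re-initialization flag are one plausible reading of the text, not a rule the patent prescribes.

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    return inter / float(aw * ah + bw * bh - inter)

def fuse_results(track, detection, iou_thresh=0.5):
    """Combine the optical-flow result and the detection result of one frame.
    Each argument is a (box, confidence) pair or None.  Returns the final box
    (or None if the target is lost) and a flag telling the feedback stage to
    re-seed the tracker and update the classifiers from the final result."""
    if track is None and detection is None:
        return None, False                 # neither branch fired: target lost
    if detection is None:
        return track[0], False             # tracking only
    if track is None:
        return detection[0], True          # detection only: re-init tracker
    if iou(track[0], detection[0]) >= iou_thresh:
        return track[0], False             # branches agree: keep trajectory
    # Branches disagree: trust the more confident branch; if that is the
    # detector, feed the result back to adjust the tracking trajectory.
    if track[1] >= detection[1]:
        return track[0], False
    return detection[0], True
```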
The beneficial effects of the invention are as follows. The method fuses the visible image and the infrared image by linear combination, so that the contrast between the target and the background in the current image reaches a maximum and the feature information of the target in the image is highlighted. Second, the feature information of the target is obtained by extracting corner points of the target, and tracking of the target is achieved with an optical-flow algorithm. To further improve the robustness of tracking, a detection classification algorithm is added to classify target and background samples; on this basis, online learning is used to jointly process the optical-flow tracking result and the detection classification result, yielding the optimal target tracking result with a tracking accuracy above 85%.
The present invention is described in detail below in conjunction with an embodiment.
Embodiment
The concrete steps of the online target tracking method based on multi-source image feature fusion according to the present invention are as follows:
1. Multi-source image feature fusion.
(a) Linearly combine the information of the R, G, and B channels of the visible image together with the thermal-infrared gray-level information of the infrared image in multiple ways, producing a large number of fused images. The expression of the linear combination is as follows,
F1 ≡ {w1·R + w2·G + w3·B + w4·I | w* ∈ {−2, −1, 0, 1, 2}}    (1)
Here, R, G, and B correspond to the image information of the three channels of the visible image, I corresponds to the thermal-infrared gray-level information of the infrared image, and w* are the corresponding linear-combination parameters, taking values from −2 to 2. In total, 625 linearly combined images are produced; however, combinations of the form (w1, w2, w3, w4) = k(w1', w2', w3', w4') are equivalent, and after the equivalent combinations are rejected, 215 computable linear-combination images remain.
(b) On each linearly combined image, compute the histogram statistics of the target region and the background region separately. Let H_obj(i) be the pixel-feature histogram of the target and H_bg(i) the pixel-feature histogram of the background sample; compute and normalize the probability densities of target and background to obtain
p(i) = H_obj(i) / n_obj    (2)
q(i) = H_bg(i) / n_bg    (3)
Here, n_obj and n_bg are the numbers of target samples and background samples. From p(i) and q(i), obtain the likelihood function
L(i) = log( max{p(i), δ} / max{q(i), δ} )    (4)
Here, δ = 0.001 guards against the argument of the log becoming 0. The difference between target-sample features and background-sample features is judged by the variance of L(i); using the variance formula var(x) = E[x²] − (E[x])², we obtain
var(L; a) = Σ_i a(i)·L²(i) − [Σ_i a(i)·L(i)]²    (5)
Here, a(i) is a probability density function. The variance ratio of the likelihood function is thus
VR(L; p, q) ≡ var(L; (p+q)/2) / [var(L; p) + var(L; q)]    (6)
After these computations, the data of all samples are obtained and sorted; the combination giving the maximum contrast between target features and background features is taken as the optimal feature-fusion linear combination.
2. Online-learning target tracking.
(a) First, perform feature extraction on the object in the fused image by corner detection with non-maximum suppression; then remove outliers with the RANSAC method; finally, compute the feature optical flow of the remaining stable feature points with the optical-flow method, in order to estimate the position of the target in the next frame. Optical flow describes the motion of an observed object, surface, or edge caused by motion relative to the observer; the instantaneous velocity or discrete image displacement of the motion is detected from a series of images. At each moment there is a two- or multi-dimensional vector set, e.g. (x, y, t), representing the instantaneous velocity of the specified coordinates at time t. Let I(x, y, t) be the intensity of point (x, y) at time t; within a very short time Δt, x and y increase by Δx and Δy respectively:
I(x+Δx, y+Δy, t+Δt) = I(x, y, t) + (∂I/∂x)Δx + (∂I/∂y)Δy + (∂I/∂t)Δt    (7)
Meanwhile, since the displacement between two adjacent frames is sufficiently small:
I(x, y, t) = I(x+Δx, y+Δy, t+Δt)    (8)
Therefore
(∂I/∂x)Δx + (∂I/∂y)Δy + (∂I/∂t)Δt = 0    (9)
(∂I/∂x)(Δx/Δt) + (∂I/∂y)(Δy/Δt) + (∂I/∂t)(Δt/Δt) = 0    (10)
which finally yields:
(∂I/∂x)·Vx + (∂I/∂y)·Vy + ∂I/∂t = 0    (11)
Vx and Vy are the velocity components in the x and y directions, also called the optical flow of I(x, y, t), and ∂I/∂x, ∂I/∂y, ∂I/∂t are the partial derivatives of the image at (x, y, t) in the corresponding directions at time t. Writing Ix, Iy, It for these derivatives, the relation is expressed as follows:
Ix·Vx + Iy·Vy = −It    (12)
Here Ix and Iy are the gradient values of the feature point in the x and y directions respectively, and It is the gray-level difference of the two successive frames at the corresponding pixel position. The optical flow of the feature points selected on the target is thus obtained, and the position where the target appears in the next frame is then estimated on the basis of the target position in the previous frame.
(b) Tracking with optical flow alone often fails under illumination changes and occlusion. Therefore, while tracking with optical flow, a detection classification algorithm is also added, and the optimal target tracking result is chosen through a learning process over the optical-flow tracking result and the classification results. The multi-classifier combined detection mechanism consists of three parts, each of which produces positive and negative samples of its judgment; the negative samples are added to the system's negative-sample set for later online learning, and the positive samples enter the next-layer classifier for further judgment. First, all candidate positions where the target may appear are generated and input into the first-layer probability classifier for voting, and the result is passed to the nearest-neighbor classifier. Second, the nearest-neighbor classifier makes a decision on them and outputs the highly credible positions together with their intrinsic similarities. Third, the output of the nearest-neighbor classifier is used as the input of the distance-constraint classifier, which screens the candidate positions by their distance to the confirmed tracking result. The final output is the end result of the detection part. If the number of final detection results is too large or too small, feedback is sent to the nearest-neighbor and distance-constraint classifier stages to adjust the thresholds and decide again.
(c) According to the tracking result and detection result in each frame, first judge whether the tracking result and the detection result both exist, then adjust the final result with the tracking-detection synergy mechanism; finally, through the feedback mechanism, use the final result to adjust the tracking trajectory and update each classifier in the detection process.

Claims (1)

1. An online target tracking method based on multi-source image feature fusion, characterized by comprising the following steps:
Step 1: linearly combine the information of the R, G, and B channels of the visible image together with the thermal-infrared gray-level information of the infrared image to produce fused images; the expression of the linear combination is as follows,
F1 ≡ {w1·R + w2·G + w3·B + w4·I | w* ∈ {−2, −1, 0, 1, 2}}    (1)
in the formula, R, G, and B correspond to the image information of the three channels of the visible image, I corresponds to the thermal-infrared gray-level information of the infrared image, and w* are the corresponding linear-combination parameters, taking values from −2 to 2; equivalent combinations (w1, w2, w3, w4) = k(w1', w2', w3', w4') are rejected;
Step 2: on each linearly combined image, compute the histogram statistics of the target region and the background region separately; let H_obj(i) be the pixel-feature histogram of the target and H_bg(i) the pixel-feature histogram of the background sample; compute and normalize the probability densities of target and background to obtain
p(i) = H_obj(i) / n_obj    (2)
q(i) = H_bg(i) / n_bg    (3)
in the formulas, n_obj and n_bg are the numbers of target samples and background samples, and p(i) and q(i) are the discrete probability densities of the target sample and the background sample; from p(i) and q(i), obtain the likelihood function
L(i) = log( max{p(i), δ} / max{q(i), δ} )    (4)
in the formula, δ = 0.001 guards against the argument of the log becoming 0; judge the difference between target-sample features and background-sample features by the variance of L(i); using the variance formula var(x) = E[x²] − (E[x])², obtain
var(L; a) = Σ_i a(i)·L²(i) − [Σ_i a(i)·L(i)]²    (5)
in the formula, a(i) is a probability density function; the variance ratio of the likelihood function is thus
VR(L; p, q) ≡ var(L; (p+q)/2) / [var(L; p) + var(L; q)]    (6)
compute these statistics for all samples and sort them; the combination giving the maximum contrast between target features and background features is taken as the optimal feature-fusion linear combination;
Step 3: first, perform feature extraction on the object in the fused image by corner detection with non-maximum suppression; then remove outliers with the RANSAC method; finally, compute the feature optical flow of the remaining stable feature points with the optical-flow method and estimate the position at which the target appears in the next frame; optical flow describes the motion of an observed object, surface, or edge caused by motion relative to the observer; the instantaneous velocity or discrete image displacement of the motion is detected from the images; at each moment there is a two- or multi-dimensional vector set, e.g. (x, y, t), representing the instantaneous velocity of the specified coordinates at time t; let I(x, y, t) be the intensity of point (x, y) at time t; within a very short time Δt, x and y increase by Δx and Δy respectively:
I(x+Δx, y+Δy, t+Δt) = I(x, y, t) + (∂I/∂x)Δx + (∂I/∂y)Δy + (∂I/∂t)Δt    (7)
Meanwhile, since the displacement between two adjacent frames is sufficiently small,
I(x, y, t) = I(x+Δx, y+Δy, t+Δt)    (8)
it follows that
(∂I/∂x)Δx + (∂I/∂y)Δy + (∂I/∂t)Δt = 0    (9)
(∂I/∂x)(Δx/Δt) + (∂I/∂y)(Δy/Δt) + (∂I/∂t)(Δt/Δt) = 0    (10)
which finally yields:
(∂I/∂x)·Vx + (∂I/∂y)·Vy + ∂I/∂t = 0    (11)
in the formula, Vx and Vy are the velocity components in the x and y directions, called the optical flow of I(x, y, t), and ∂I/∂x, ∂I/∂y, ∂I/∂t are the partial derivatives of the image at (x, y, t) in the corresponding directions at time t; writing Ix, Iy, It for these derivatives, the relation is as follows:
Ix·Vx + Iy·Vy = −It    (12)
in the formula, Ix and Iy are the gradient values of the feature point in the x and y directions respectively, and It is the gray-level difference of the two successive frames at the corresponding pixel position; the optical flow of the feature points selected on the target is thus obtained, and the position where the target appears in the next frame is estimated on the basis of the target position in the previous frame;
Step 4: first, generate all candidate positions where the target may appear, input them into the first-layer probability classifier for voting, and pass the result to the nearest-neighbor classifier; second, the nearest-neighbor classifier makes a decision on them and outputs the highly credible positions together with their intrinsic similarities; third, use the output of the nearest-neighbor classifier as the input of the distance-constraint classifier, which screens the candidate positions by their distance to the confirmed tracking result; the final output is the end result of the detection part; if the number of final detection results is too large or too small, feed back to the nearest-neighbor and distance-constraint classifier stages to adjust the thresholds and decide again;
Step 5: according to the tracking result and detection result in each frame, first judge whether the tracking result and the detection result both exist, then adjust the final result with the tracking-detection synergy mechanism; finally, through the feedback mechanism, use the final result to adjust the tracking trajectory and update each classifier in the detection process.
CN201310064931.3A 2013-03-01 2013-03-01 Online target tracking method based on multi-source image feature fusion Active CN103136526B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310064931.3A CN103136526B (en) 2013-03-01 2013-03-01 Online target tracking method based on multi-source image feature fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310064931.3A CN103136526B (en) 2013-03-01 2013-03-01 Online target tracking method based on multi-source image feature fusion

Publications (2)

Publication Number Publication Date
CN103136526A true CN103136526A (en) 2013-06-05
CN103136526B CN103136526B (en) 2015-12-23

Family

ID=48496334

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310064931.3A Active CN103136526B (en) 2013-03-01 2013-03-01 Online target tracking method based on multi-source image feature fusion

Country Status (1)

Country Link
CN (1) CN103136526B (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060165258A1 (en) * 2005-01-24 2006-07-27 Shmuel Avidan Tracking objects in videos with adaptive classifiers
CN101339664B (en) * 2008-08-27 2012-04-18 北京中星微电子有限公司 Object tracking method and system
CN102436590A (en) * 2011-11-04 2012-05-02 康佳集团股份有限公司 Real-time tracking method based on on-line learning and tracking system thereof

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ROBERT T. COLLINS et al.: "Online Selection of Discriminative Tracking Features", Pattern Analysis and Machine Intelligence, IEEE Transactions on *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103324932A (en) * 2013-06-07 2013-09-25 东软集团股份有限公司 Video-based vehicle detecting and tracking method and system
CN104599286A (en) * 2013-10-31 2015-05-06 展讯通信(天津)有限公司 Optical flow based feature tracking method and device
CN104599286B (en) * 2013-10-31 2018-11-16 展讯通信(天津)有限公司 A kind of characteristic tracking method and device based on light stream
CN105574517A (en) * 2016-01-22 2016-05-11 孟玲 Electric vehicle charging pile with stable tracking function
CN107730535B (en) * 2017-09-14 2020-03-24 北京空间机电研究所 Visible light infrared cascade video tracking method
CN107730535A (en) * 2017-09-14 2018-02-23 北京空间机电研究所 A kind of cascaded infrared video tracing method of visible ray
CN108665487B (en) * 2017-10-17 2022-12-13 国网河南省电力公司郑州供电公司 Transformer substation operation object and target positioning method based on infrared and visible light fusion
CN108665487A (en) * 2017-10-17 2018-10-16 国网河南省电力公司郑州供电公司 Substation's manipulating object and object localization method based on the fusion of infrared and visible light
CN108122247B (en) * 2017-12-25 2018-11-13 北京航空航天大学 A kind of video object detection method based on saliency and feature prior model
CN108122247A (en) * 2017-12-25 2018-06-05 北京航空航天大学 A kind of video object detection method based on saliency and feature prior model
CN109271939A (en) * 2018-09-21 2019-01-25 长江师范学院 Thermal infrared human body target recognition methods based on dull wave oriented energy histogram
CN109271939B (en) * 2018-09-21 2021-07-02 长江师范学院 Thermal infrared human body target identification method based on monotone wave direction energy histogram
CN109377469A (en) * 2018-11-07 2019-02-22 永州市诺方舟电子科技有限公司 A kind of processing method, system and the storage medium of thermal imaging fusion visible images
CN109377469B (en) * 2018-11-07 2020-07-28 永州市诺方舟电子科技有限公司 Processing method, system and storage medium for fusing thermal imaging with visible light image

Also Published As

Publication number Publication date
CN103136526B (en) 2015-12-23

Similar Documents

Publication Publication Date Title
CN103136526B (en) Online target tracking method based on multi-source image feature fusion
Luvizon et al. A video-based system for vehicle speed measurement in urban roadways
US7848548B1 (en) Method and system for robust demographic classification using pose independent model from sequence of face images
CN108346159A (en) A kind of visual target tracking method based on tracking-study-detection
Gerónimo et al. 2D–3D-based on-board pedestrian detection system
CN102609720B (en) Pedestrian detection method based on position correction model
CN102629385B (en) Object matching and tracking system based on multiple camera information fusion and method thereof
CN102682287B (en) Pedestrian detection method based on saliency information
CN103886325B (en) Cyclic matrix video tracking method with partition
CN106373143A (en) Adaptive method and system
CN104239865A (en) Pedestrian detecting and tracking method based on multi-stage detection
CN103116896A (en) Visual saliency model based automatic detecting and tracking method
CN105205486A (en) Vehicle logo recognition method and device
CN102117487A (en) Scale-direction self-adaptive Mean-shift tracking method aiming at video moving object
Redondo-Cabrera et al. All together now: Simultaneous object detection and continuous pose estimation using a hough forest with probabilistic locally enhanced voting
WO2015131468A1 (en) Method and system for estimating fingerprint pose
CN107480585A (en) Object detection method based on DPM algorithms
Kim et al. Autonomous vehicle detection system using visible and infrared camera
CN102930294A (en) Chaotic characteristic parameter-based motion mode video segmentation and traffic condition identification method
CN105809673A (en) SURF (Speeded-Up Robust Features) algorithm and maximal similarity region merging based video foreground segmentation method
CN106599918B (en) vehicle tracking method and system
CN104680536A (en) Method for detecting SAR image change by utilizing improved non-local average algorithm
Chen et al. A precise information extraction algorithm for lane lines
CN107247967B (en) Vehicle window annual inspection mark detection method based on R-CNN
CN113221739B (en) Monocular vision-based vehicle distance measuring method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant