CN108805902A - Scale-adaptive spatio-temporal context target tracking method - Google Patents

Scale-adaptive spatio-temporal context target tracking method

Info

Publication number
CN108805902A
CN108805902A
Authority
CN
China
Prior art keywords
target
scale
space
histogram
frames
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810472856.7A
Other languages
Chinese (zh)
Inventor
Wen Wu
Wu Lizhi
Liao Xinping
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CHONGQING XINKE DESIGN Co Ltd
Chongqing University of Post and Telecommunications
Original Assignee
CHONGQING XINKE DESIGN Co Ltd
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CHONGQING XINKE DESIGN Co Ltd, Chongqing University of Post and Telecommunications filed Critical CHONGQING XINKE DESIGN Co Ltd
Priority to CN201810472856.7A priority Critical patent/CN108805902A/en
Publication of CN108805902A publication Critical patent/CN108805902A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/251Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20048Transform domain processing
    • G06T2207/20056Discrete and fast Fourier transform, [DFT, FFT]

Abstract

The invention discloses a scale-adaptive spatio-temporal context target tracking method, belonging to the technical field of image processing. The method comprises: S1, extracting the color-histogram and gradient-histogram features of the target region and building a spatial context model; S2, using the spatial context model to update the spatio-temporal context model of the next frame, thereby updating the target confidence map, and taking the maximum likelihood probability position of the confidence map as the target location; S3, adaptively tracking the target location in subsequent frames by updating the target scale. The advantage of the invention is that a target model is built by extracting color-histogram and gradient-histogram features from the video image sequence; the target confidence map is then updated by on-line learning of the spatio-temporal context model and its maximum-probability position is obtained; finally, subsequent frames are tracked using an improved scale scheme for the spatio-temporal context tracking algorithm, ensuring high tracking accuracy and real-time performance when the target scale changes continuously.

Description

Scale-adaptive spatio-temporal context target tracking method
Technical field
The invention belongs to the technical field of image processing, and in particular relates to a scale-adaptive spatio-temporal context target tracking method.
Background technology
Target tracking is a vital task in the field of computer vision, with many practical applications such as motion recognition, automatic surveillance, video indexing, human-computer interaction and vehicle navigation. Although target tracking has been studied for more than a decade and remarkable progress has been achieved in recent years, it remains a very challenging problem. Many factors affect the performance of a target tracking algorithm, such as target scale changes, complex backgrounds, partial or full occlusion, illumination variation and real-time processing requirements. Designing an efficient and robust target tracking algorithm is therefore an urgent problem to be solved.
The spatio-temporal context (STC) tracking algorithm updates the target position online by obtaining the spatio-temporal context model and the maximum of the confidence map, and reduces the amount of computation using the fast Fourier transform (FFT), improving the efficiency of the algorithm and enabling real-time tracking. Since the STC tracking algorithm was proposed, it has attracted a large amount of research; improvements to it include: (1) replacing the target and surrounding-region color information in the STC algorithm with low-level features of the local context around the target, to improve tracking performance; (2) combining spatio-temporal context with Kalman filtering to predict the target position in the next frame, which effectively handles target occlusion; (3) using a block-based occlusion-discrimination idea, combining sub-block matching with particle filtering to estimate the target position, achieving anti-occlusion tracking under different degrees of occlusion. These methods mainly adjust the target model, predict the target trajectory and achieve robust tracking under occlusion on the basis of the STC tracking algorithm, but they ignore the low tracking accuracy of the target under scale variation.
To handle the target scale variation problem in the STC tracking algorithm, some researchers have considered elliptical log-polar transform methods; however, little research combines the extraction of color-histogram and gradient-histogram features to describe the target appearance with an improved target scale scheme for the STC algorithm, in order to solve the tracking failures that occur when the target scale changes during tracking.
Invention content
In view of this, the object of the present invention is to address the low tracking accuracy caused by continuous target scale changes in the STC tracking algorithm, and a scale-adaptive spatio-temporal context target tracking method is proposed. The technical solution of the invention is as follows:
A scale-adaptive spatio-temporal context target tracking method, comprising the following steps:
S1, extracting the color-histogram and gradient-histogram features of the target region and building a spatial context model;
S2, using the spatial context model to update the spatio-temporal context model of the next frame, thereby updating the target confidence map, and taking the maximum likelihood probability position of the confidence map as the target location;
S3, adaptively tracking the target location in subsequent frames by updating the target scale.
Further, extracting the color-histogram and gradient-histogram features of the target region specifically comprises:
The color histogram uses the HSV color model, where H denotes hue, S saturation and V brightness. Since V is very sensitive to illumination intensity, the color histogram is built only over the H and S components; to describe the spatial position information of the target region, a gradient-histogram model is built over the V component.
Further, the color histogram is expressed as:

$$q_u = C\sum_{i=1}^{n} k\left(\left\|\frac{X - X_i}{h}\right\|^2\right)\delta\big(b(X_i) - u\big)$$

where $\|\cdot\|$ denotes the norm; $X$ is the center pixel of the target region; $X_i$ is the $i$-th pixel, $i = 1, 2, \ldots, n$; $k(\cdot)$ is a Gaussian kernel function; $\delta(\cdot)$ is the Dirac delta function; $b(X_i)$ is the color-bin value of $X_i$ in the color histogram; $u$ is the color-bin index, with interval $[1, n]$; $h$ is the kernel bandwidth; $n$ is the number of pixels in the target region; and $C$ is a normalization constant.
Further, the gradient histogram is expressed as:

$$m(x, y) = \sqrt{d_x^2 + d_y^2}, \qquad \theta(x, y) = \arctan\left(\frac{d_y}{d_x}\right)$$

$$G_p = \sum_{(x,y)} m(x, y)\,\delta\big(\mathrm{bin}(\theta(x, y)) - p\big)$$

where $m(x, y)$ is the gradient magnitude of pixel $(x, y)$ and $\theta(x, y)$ is its direction, with range $[-\pi, \pi]$; by accumulating the gradient magnitude of each pixel over eight orientation bins, the interval of the bin index $p$ is $[0, 7]$.
Further, step S2 specifically comprises: given the spatial context model $h_t(z)$ obtained in frame $t$, the spatio-temporal context model $H_{t+1}(z)$ of frame $t+1$ is updated as:

$$H_{t+1}(z) = (1 - \rho)H_t(z) + \rho h_t(z)$$

where $\rho$ is a learning-rate parameter.
The target confidence map $l_{t+1}(z)$ in frame $t+1$ is:

$$l_{t+1}(z) = F^{-1}\Big(F\big(H_{t+1}(z)\big) \odot F\big(I_{t+1}(z)\,w_{\sigma_t}(z - x_t^*)\big)\Big)$$

where $F^{-1}(\cdot)$ is the inverse Fourier transform; $F(\cdot)$ is the Fourier transform; $\odot$ denotes element-wise multiplication; $I_{t+1}(z)$ is the gray value at point $z$ in frame $t+1$; and $w_{\sigma_t}(\cdot)$ is a weighting function whose value is larger for points closer to $x_t^*$.
The target center $x_{t+1}^*$ is the position of the maximum probability value of the target confidence map:

$$x_{t+1}^* = \arg\max_{z \in \Omega_c(x_t^*)} l_{t+1}(z)$$

where $\Omega_c(x_t^*)$ denotes the neighborhood around $x_t^*$.
Further, the target scale update scheme comprises:
$s'_t$ denotes the target scale estimated from two consecutive frames and is initialized to 1:

$$s'_t = \sqrt{\frac{l_t(x_t^*)}{l_{t-1}(x_{t-1}^*)}}$$

Because the scale change between two consecutive frames is continuous and small, an updating factor $v(s_t)$ with step parameter $d$ is introduced to prevent abrupt changes of the scale factor.
The final updated target scale is then expressed as:

$$\bar{s}_t = \frac{1}{n}\sum_{i=1}^{n} s'_{t-i}, \qquad s_{t+1} = (1 - \lambda)s_t + \lambda\bar{s}_t, \qquad \sigma_{t+1} = s_t\sigma_t$$

where $l_t(x_t^*)$ is the maximum probability value of the confidence map of frame $t$; $l_t(\cdot)$ is the target confidence map; $s_{t+1}$ is the target scale estimated in frame $t+1$; $\bar{s}_t$ is the average scale over $n$ consecutive frames; $\lambda$ is a fixed filter parameter; $d$ is the step parameter; and $\sigma_{t+1}$ is the scale parameter in frame $t+1$.
The advantage of the invention is that a target model is built by extracting color-histogram and gradient-histogram features from the video image sequence; the target confidence map is then updated by on-line learning of the spatio-temporal context model and its maximum-probability position is obtained; finally, subsequent frames are tracked using the improved scale scheme of the STC tracking algorithm, ensuring high tracking accuracy and real-time performance when the target scale changes continuously and solving the problem of low target tracking accuracy in video tracking.
Description of the drawings
To make the purpose, technical solution and beneficial effects of the present invention clearer, the present invention provides the following drawings for explanation:
Fig. 1 is a flow diagram of the present invention.
Specific implementation mode
A scale-adaptive spatio-temporal context target tracking method according to the present invention is described in further detail below with reference to the accompanying drawings:
The invention discloses a scale-adaptive spatio-temporal context target tracking method. As shown in Fig. 1, in video target tracking, color-histogram and gradient-histogram features are first extracted from the target template, which specifically comprises:
The color histogram uses the HSV color model, where H denotes hue, S saturation and V brightness, and the three components are independent of each other. Since V is very sensitive to illumination intensity, only H and S are quantized to build the color histogram.
The color histogram over the target region is:

$$q_u = C\sum_{i=1}^{n} k\left(\left\|\frac{X - X_i}{h}\right\|^2\right)\delta\big(b(X_i) - u\big)$$

where $\|\cdot\|$ denotes the norm; $X$ is the center pixel of the target region; $X_i$ is the $i$-th pixel, $i = 1, 2, \ldots, n$; $k(\cdot)$ is a Gaussian kernel function; $\delta(\cdot)$ is the Dirac delta function; $b(X_i)$ is the color-bin value of $X_i$ in the color histogram; $u$ is the color-bin index, with interval $[1, n]$; $h$ is the kernel bandwidth; $n$ is the number of pixels in the target region; and $C$ is a normalization constant.
To describe the spatial position information of the target region, a simplified gradient-histogram model is built for the target region using the V component of the HSV color model. The gradient-histogram model is computed as:

$$m(x, y) = \sqrt{d_x^2 + d_y^2}, \qquad \theta(x, y) = \arctan\left(\frac{d_y}{d_x}\right)$$

where $d_x$ is the difference between horizontally adjacent points of pixel $(x, y)$ in the target region and $d_y$ is the difference between vertically adjacent points; $m(x, y)$ is the gradient magnitude of the pixel and $\theta(x, y)$ is its direction, with range $[-\pi, \pi]$. By accumulating the gradient magnitude of each pixel, the gradient histogram of the target region is obtained as:

$$G_p = \sum_{(x,y)} m(x, y)\,\delta\big(\mathrm{bin}(\theta(x, y)) - p\big)$$

where the interval of the bin index $p$ is $[0, 7]$.
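As an illustrative sketch of the feature extraction above (not part of the original filing), the following NumPy code builds an H-S color histogram and a magnitude-weighted eight-bin orientation histogram of the V channel; the 8x8 bin count and the use of `np.gradient` central differences are assumptions for illustration:

```python
import numpy as np

def color_histogram(hsv, h_bins=8, s_bins=8):
    """H-S color histogram; the V channel is skipped because it is
    sensitive to illumination intensity. Bin counts are hypothetical."""
    h = hsv[..., 0].ravel()  # hue, assumed normalized to [0, 1)
    s = hsv[..., 1].ravel()  # saturation, assumed normalized to [0, 1)
    hist, _, _ = np.histogram2d(h, s, bins=[h_bins, s_bins],
                                range=[[0, 1], [0, 1]])
    return hist / hist.sum()  # normalize to a probability distribution

def gradient_histogram(v, n_bins=8):
    """Magnitude-weighted orientation histogram of the V channel,
    quantizing theta in [-pi, pi] into bins p = 0..7."""
    dy, dx = np.gradient(v)               # finite differences d_y, d_x
    mag = np.hypot(dx, dy)                # m(x, y)
    theta = np.arctan2(dy, dx)            # theta(x, y) in [-pi, pi]
    bins = np.minimum(((theta + np.pi) / (2 * np.pi) * n_bins).astype(int),
                      n_bins - 1)         # bin index p in [0, n_bins - 1]
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    total = hist.sum()
    return hist / total if total > 0 else hist
```

Both histograms are normalized so that they can be compared across frames regardless of target size.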
Then, the position of maximum probability of the target confidence map is obtained by on-line learning of the spatio-temporal context model, which specifically comprises:
After the spatial context model is obtained, the tracking task of the spatio-temporal context algorithm becomes a target detection problem. Given the spatial context model $h_t(z)$ obtained in frame $t$, the spatio-temporal context model $H_{t+1}(z)$ of frame $t+1$ is updated as:

$$H_{t+1}(z) = (1 - \rho)H_t(z) + \rho h_t(z)$$

where $\rho$ is a learning-rate parameter.
The target confidence map $l_{t+1}(z)$ in frame $t+1$ is:

$$l_{t+1}(z) = F^{-1}\Big(F\big(H_{t+1}(z)\big) \odot F\big(I_{t+1}(z)\,w_{\sigma_t}(z - x_t^*)\big)\Big)$$

where $F^{-1}(\cdot)$ is the inverse Fourier transform; $F(\cdot)$ is the Fourier transform; $\odot$ denotes element-wise multiplication; $I_{t+1}(z)$ is the gray value at point $z$ in frame $t+1$; and $w_{\sigma_t}(\cdot)$ is a weighting function whose value is larger for points closer to $x_t^*$.
The target center $x_{t+1}^*$ is the position of the maximum probability value of the target confidence map:

$$x_{t+1}^* = \arg\max_{z \in \Omega_c(x_t^*)} l_{t+1}(z)$$

where $\Omega_c(x_t^*)$ denotes the neighborhood around $x_t^*$.
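The model update and confidence-map steps above can be sketched with NumPy FFTs as follows; a plain Gaussian weighting window and circular (FFT) convolution over the whole frame are assumed for illustration, and `update_model`, `confidence_map` and `locate` are hypothetical helper names:

```python
import numpy as np

def update_model(H_prev, h_t, rho=0.075):
    # H_{t+1}(z) = (1 - rho) * H_t(z) + rho * h_t(z)
    return (1 - rho) * H_prev + rho * h_t

def confidence_map(H, frame, center, sigma):
    """l(z) = F^{-1}( F(H) .* F( I(z) * w_sigma(z - x*) ) ),
    evaluated as a circular convolution via 2-D FFTs."""
    ys, xs = np.indices(frame.shape)
    # Gaussian weighting w_sigma: larger for points closer to the center
    w = np.exp(-((xs - center[1]) ** 2 + (ys - center[0]) ** 2)
               / (2.0 * sigma ** 2))
    ctx = frame * w  # context prior: gray value times spatial weight
    return np.real(np.fft.ifft2(np.fft.fft2(H) * np.fft.fft2(ctx)))

def locate(conf):
    # x* = argmax_z l(z), returned as (row, col)
    return np.unravel_index(np.argmax(conf), conf.shape)
```

In a real tracker the argmax would be restricted to the neighborhood around the previous center, as in the formula above.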
Finally, subsequent frames are adaptively tracked through the updated scale scheme to achieve the best tracking effect, which specifically comprises:
The target scale update scheme is as follows:
$s'_t$ denotes the target scale estimated from two consecutive frames and is initialized to 1:

$$s'_t = \sqrt{\frac{l_t(x_t^*)}{l_{t-1}(x_{t-1}^*)}}$$

Because the scale change between two consecutive frames is continuous and small, an updating factor $v(s_t)$ with step parameter $d$ is introduced to prevent abrupt changes of the scale factor.
The final updated target scale is then expressed as:

$$\bar{s}_t = \frac{1}{n}\sum_{i=1}^{n} s'_{t-i}, \qquad s_{t+1} = (1 - \lambda)s_t + \lambda\bar{s}_t, \qquad \sigma_{t+1} = s_t\sigma_t$$

where $l_t(x_t^*)$ is the maximum probability value of the confidence map of frame $t$; $l_t(\cdot)$ is the target confidence map; $s_{t+1}$ is the target scale estimated in frame $t+1$; $\bar{s}_t$ is the average scale over $n$ consecutive frames; $\lambda$ is a fixed filter parameter; $d$ is the step parameter; and $\sigma_{t+1}$ is the scale parameter in frame $t+1$.
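A minimal sketch of one step of the scale scheme follows. The per-frame scale ratio is taken from the peaks of consecutive confidence maps, averaged over the last n frames, and low-pass filtered with lambda; the clipping of the ratio to a step limit `max_step` is an assumed form of the updating factor v(s_t) (the step parameter d), since the exact formula is not reproduced in this text:

```python
import numpy as np

def update_scale(conf_peaks, s_t, sigma_t, lam=0.25, n=5, max_step=0.1):
    """One scale-update step. conf_peaks is a list of confidence-map
    peak values l_t(x*_t) for consecutive frames."""
    # s'_t = sqrt(l_t(x*_t) / l_{t-1}(x*_{t-1})) for each frame pair
    raw = [np.sqrt(conf_peaks[i] / conf_peaks[i - 1])
           for i in range(1, len(conf_peaks))]
    # assumed updating factor: clip each ratio to [1 - d, 1 + d]
    raw = [min(max(r, 1 - max_step), 1 + max_step) for r in raw]
    s_bar = float(np.mean(raw[-n:]))          # average over last n frames
    s_next = (1 - lam) * s_t + lam * s_bar    # s_{t+1}
    sigma_next = s_t * sigma_t                # sigma_{t+1}
    return s_next, sigma_next
```

With constant confidence peaks the estimated scale stays at 1, as expected for a target whose size does not change.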
By adaptively updating the target box with the above method, the target can be accurately tracked not only when its scale gradually shrinks in the image sequence but also when it gradually grows. When the tracking scale changes continuously, accurately selecting the scale of the target box allows subsequent frames to be tracked better, achieving the best tracking effect.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments can be completed by a program instructing the relevant hardware; the program can be stored in a computer-readable storage medium, and the storage medium may include ROM, RAM, magnetic disk, optical disc, etc.
The embodiments provided above describe the purpose, technical solution and advantages of the present invention in further detail. It should be understood that they are only preferred embodiments of the present invention and are not intended to limit the invention; any modification, equivalent substitution or improvement made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (6)

1. A scale-adaptive spatio-temporal context target tracking method, characterized by comprising the following steps:
S1, extracting the color-histogram and gradient-histogram features of the target region and building a spatial context model;
S2, using the spatial context model to update the spatio-temporal context model of the next frame, thereby updating the target confidence map, and taking the maximum likelihood probability position of the confidence map as the target location;
S3, adaptively tracking the target location in subsequent frames by updating the target scale.
2. The scale-adaptive spatio-temporal context target tracking method according to claim 1, characterized in that extracting the color-histogram and gradient-histogram features of the target region specifically comprises:
The color histogram uses the HSV color model, where H denotes hue, S saturation and V brightness; since V is very sensitive to illumination intensity, the color histogram is built only over the H and S components; to describe the spatial position information of the target region, a gradient-histogram model is built over the V component.
3. The scale-adaptive spatio-temporal context target tracking method according to claim 2, characterized in that the color histogram is expressed as:

$$q_u = C\sum_{i=1}^{n} k\left(\left\|\frac{X - X_i}{h}\right\|^2\right)\delta\big(b(X_i) - u\big)$$

where $\|\cdot\|$ denotes the norm; $X$ is the center pixel of the target region; $X_i$ is the $i$-th pixel, $i = 1, 2, \ldots, n$; $k(\cdot)$ is a Gaussian kernel function; $\delta(\cdot)$ is the Dirac delta function; $b(X_i)$ is the color-bin value of $X_i$ in the color histogram; $u$ is the color-bin index, with interval $[1, n]$; $h$ is the kernel bandwidth; $n$ is the number of pixels in the target region; and $C$ is a normalization constant.
4. The scale-adaptive spatio-temporal context target tracking method according to claim 3, characterized in that the gradient histogram is expressed as:

$$m(x, y) = \sqrt{d_x^2 + d_y^2}, \qquad \theta(x, y) = \arctan\left(\frac{d_y}{d_x}\right)$$

$$G_p = \sum_{(x,y)} m(x, y)\,\delta\big(\mathrm{bin}(\theta(x, y)) - p\big)$$

where $m(x, y)$ is the gradient magnitude of pixel $(x, y)$ and $\theta(x, y)$ is its direction, with range $[-\pi, \pi]$; by accumulating the gradient magnitude of each pixel over eight orientation bins, the interval of the bin index $p$ is $[0, 7]$.
5. The scale-adaptive spatio-temporal context target tracking method according to claim 1, characterized in that step S2 specifically comprises: given the spatial context model $h_t(z)$ obtained in frame $t$, the spatio-temporal context model $H_{t+1}(z)$ of frame $t+1$ is updated as:

$$H_{t+1}(z) = (1 - \rho)H_t(z) + \rho h_t(z)$$

where $\rho$ is a learning-rate parameter.
The target confidence map $l_{t+1}(z)$ in frame $t+1$ is:

$$l_{t+1}(z) = F^{-1}\Big(F\big(H_{t+1}(z)\big) \odot F\big(I_{t+1}(z)\,w_{\sigma_t}(z - x_t^*)\big)\Big)$$

where $F^{-1}(\cdot)$ is the inverse Fourier transform; $F(\cdot)$ is the Fourier transform; $\odot$ denotes element-wise multiplication; $I_{t+1}(z)$ is the gray value at point $z$ in frame $t+1$; and $w_{\sigma_t}(\cdot)$ is a weighting function whose value is larger for points closer to $x_t^*$.
The target center $x_{t+1}^*$ is the position of the maximum probability value of the target confidence map:

$$x_{t+1}^* = \arg\max_{z \in \Omega_c(x_t^*)} l_{t+1}(z)$$

where $\Omega_c(x_t^*)$ denotes the neighborhood around $x_t^*$.
6. The scale-adaptive spatio-temporal context target tracking method according to claim 1, characterized in that the target scale update scheme comprises:
$s'_t$ denotes the target scale estimated from two consecutive frames and is initialized to 1:

$$s'_t = \sqrt{\frac{l_t(x_t^*)}{l_{t-1}(x_{t-1}^*)}}$$

Because the scale change between two consecutive frames is continuous and small, an updating factor $v(s_t)$ with step parameter $d$ is introduced to prevent abrupt changes of the scale factor.
The final updated target scale is then expressed as:

$$\bar{s}_t = \frac{1}{n}\sum_{i=1}^{n} s'_{t-i}, \qquad s_{t+1} = (1 - \lambda)s_t + \lambda\bar{s}_t, \qquad \sigma_{t+1} = s_t\sigma_t$$

where $l_t(x_t^*)$ is the maximum probability value of the confidence map of frame $t$; $l_t(\cdot)$ is the target confidence map; $s_{t+1}$ is the target scale estimated in frame $t+1$; $\bar{s}_t$ is the average scale over $n$ consecutive frames; $\lambda$ is a fixed filter parameter; $d$ is the step parameter; and $\sigma_{t+1}$ is the scale parameter in frame $t+1$.
CN201810472856.7A 2018-05-17 2018-05-17 Scale-adaptive spatio-temporal context target tracking method Pending CN108805902A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810472856.7A CN108805902A (en) 2018-05-17 2018-05-17 Scale-adaptive spatio-temporal context target tracking method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810472856.7A CN108805902A (en) 2018-05-17 2018-05-17 Scale-adaptive spatio-temporal context target tracking method

Publications (1)

Publication Number Publication Date
CN108805902A true CN108805902A (en) 2018-11-13

Family

ID=64092614

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810472856.7A Pending CN108805902A (en) 2018-05-17 2018-05-17 Scale-adaptive spatio-temporal context target tracking method

Country Status (1)

Country Link
CN (1) CN108805902A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109672874A (en) * 2018-10-24 2019-04-23 福州大学 A kind of consistent three-dimensional video-frequency color calibration method of space-time
CN109740448A (en) * 2018-12-17 2019-05-10 西北工业大学 Video object robust tracking method of taking photo by plane based on correlation filtering and image segmentation
CN110660079A (en) * 2019-09-11 2020-01-07 昆明理工大学 Single target tracking method based on space-time context
CN110738685A (en) * 2019-09-09 2020-01-31 桂林理工大学 space-time context tracking method with color histogram response fusion
CN110910416A (en) * 2019-11-20 2020-03-24 河北科技大学 Moving obstacle tracking method and device and terminal equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107093189A (en) * 2017-04-18 2017-08-25 山东大学 Method for tracking target and system based on adaptive color feature and space-time context
CN107122780A (en) * 2017-02-28 2017-09-01 青岛科技大学 The Activity recognition method of mutual information and spatial and temporal distributions entropy based on space-time characteristic point

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107122780A (en) * 2017-02-28 2017-09-01 青岛科技大学 The Activity recognition method of mutual information and spatial and temporal distributions entropy based on space-time characteristic point
CN107093189A (en) * 2017-04-18 2017-08-25 山东大学 Method for tracking target and system based on adaptive color feature and space-time context

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wu Yaning: "Research on target tracking algorithms based on spatio-temporal context", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109672874A (en) * 2018-10-24 2019-04-23 福州大学 A kind of consistent three-dimensional video-frequency color calibration method of space-time
CN109740448A (en) * 2018-12-17 2019-05-10 西北工业大学 Video object robust tracking method of taking photo by plane based on correlation filtering and image segmentation
CN109740448B (en) * 2018-12-17 2022-05-10 西北工业大学 Aerial video target robust tracking method based on relevant filtering and image segmentation
CN110738685A (en) * 2019-09-09 2020-01-31 桂林理工大学 space-time context tracking method with color histogram response fusion
CN110660079A (en) * 2019-09-11 2020-01-07 昆明理工大学 Single target tracking method based on space-time context
CN110910416A (en) * 2019-11-20 2020-03-24 河北科技大学 Moving obstacle tracking method and device and terminal equipment

Similar Documents

Publication Publication Date Title
CN108805902A (en) Scale-adaptive spatio-temporal context target tracking method
CN106909888B (en) Face key point tracking system and method applied to mobile equipment terminal
CN109741318B (en) Real-time detection method of single-stage multi-scale specific target based on effective receptive field
CN107169994B (en) Correlation filtering tracking method based on multi-feature fusion
CN105931269A (en) Tracking method for target in video and tracking device thereof
CN111160120A (en) Fast R-CNN article detection method based on transfer learning
CN107424171A (en) A kind of anti-shelter target tracking based on piecemeal
Ardianto et al. Real-time traffic sign recognition using color segmentation and SVM
CN105374049B (en) Multi-corner point tracking method and device based on sparse optical flow method
CN108875655A (en) A kind of real-time target video tracing method and system based on multiple features
CN111915583B (en) Vehicle and pedestrian detection method based on vehicle-mounted thermal infrared imager in complex scene
CN112052778B (en) Traffic sign identification method and related device
CN105868734A (en) Power transmission line large-scale construction vehicle recognition method based on BOW image representation model
CN111046746A (en) License plate detection method and device
CN113888461A (en) Method, system and equipment for detecting defects of hardware parts based on deep learning
CN103077537B (en) Novel L1 regularization-based real-time moving target tracking method
CN105740751A (en) Object detection and identification method and system
CN101324958A (en) Method and apparatus for tracking object
CN106778540A (en) Parking detection is accurately based on the parking event detecting method of background double layer
CN111524113A (en) Lifting chain abnormity identification method, system, equipment and medium
CN113963333B (en) Traffic sign board detection method based on improved YOLOF model
CN113436228A (en) Anti-blocking and target recapturing method of correlation filtering target tracking algorithm
Liu et al. SETR-YOLOv5n: A Lightweight Low-Light Lane Curvature Detection Method Based on Fractional-Order Fusion Model
CN111968154A (en) HOG-LBP and KCF fused pedestrian tracking method
CN112037255A (en) Target tracking method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20181113