CN107038431A - Aerial video target tracking method based on local sparse representation and spatio-temporal context information - Google Patents
Aerial video target tracking method based on local sparse representation and spatio-temporal context information
- Publication number
- CN107038431A CN107038431A CN201710321830.8A CN201710321830A CN107038431A CN 107038431 A CN107038431 A CN 107038431A CN 201710321830 A CN201710321830 A CN 201710321830A CN 107038431 A CN107038431 A CN 107038431A
- Authority
- CN
- China
- Prior art keywords
- target
- template
- dictionary
- sparse representation
- coefficient
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06F18/2136—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on sparsity criteria, e.g. with an overcomplete basis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/28—Determining representative reference patterns, e.g. by averaging or distorting; Generating dictionaries
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/32—Normalisation of the pattern dimensions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/751—Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/513—Sparse representations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Databases & Information Systems (AREA)
- Computing Systems (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Astronomy & Astrophysics (AREA)
- Remote Sensing (AREA)
- Image Analysis (AREA)
Abstract
The present invention relates to an aerial video target tracking method based on local sparse representation and spatio-temporal context information, which obtains robust tracking results when the target appearance changes, is occluded, or moves quickly. Under the framework of sparse representation and particle filtering, the invention constructs a dictionary from local patches to capture the local and structural information of the target. The spatial context information of the target is used to obtain a confidence map of the target's appearance in its neighborhood, i.e., the likelihood that the target appears at each position. Particle filtering is carried out in the regions where the target is most likely to appear, and the tracking result is obtained by sparse representation. The invention locates the target quickly and accurately, better handles target loss and drift, and achieves fast, robust tracking of aerial video targets against complex backgrounds.
Description
Technical field
The invention belongs to the field of aerial video target tracking, and in particular relates to an aerial video target tracking method based on local sparse representation and spatio-temporal context information.
Background art
With the development of computer vision technology, aerial video target tracking based on unmanned aerial vehicles (UAVs) has become an important research field. In UAV video scenes, moving-target tracking suffers not only from the environmental interference that affects general video target tracking, such as occlusion, shadow, scale change, and illumination variation, but also from the abrupt motion of the high-speed UAV itself, which on the one hand changes the image coordinate system between adjacent frames and on the other hand introduces image blur, jitter, and similar disturbances. In addition, because UAVs generally shoot from altitudes of several thousand to over ten thousand meters, the captured targets are small in scale, have low contrast against the background, and show unclear texture. Finally, since the UAV and the moving target move simultaneously, the target in the image sequence also undergoes scale changes, posing an even greater challenge to target tracking.
In recent years, target tracking methods based on sparse representation theory have received great attention. These methods regard the tracking problem as finding, among the candidate targets, the one with the minimal reconstruction error after sparse representation over the target templates. However, most methods of this kind consider only the holistic appearance of the target and do not use the sparse representation coefficients to distinguish the target from the background, so tracking easily fails when objects similar to the target appear or the target is occluded. In addition, sparse-representation-based tracking methods use only temporal context information, i.e., the position and appearance of the current target are used to predict the position of the target in the next frame, while the spatial context information of the target is not fully exploited, so the target may be lost or drift may occur. In the neighborhood around the target, there are always local regions that are strongly related to the target. When spatial context information is exploited for tracking, taking the target and the background region in its surrounding neighborhood as the spatial context can effectively improve the tracking result.
Summary of the invention
The technical problem to be solved
To avoid the shortcomings of the prior art, the present invention proposes an aerial video target tracking method based on local sparse representation and spatio-temporal context information, which uses a simple and efficient appearance model based on local sparse representation features to describe the local structural characteristics of the target. Spatial context information is also introduced, so that target loss and drift are better handled and fast, robust tracking of aerial video targets against complex backgrounds is achieved.
Technical scheme
An aerial video target tracking method based on local sparse representation and spatio-temporal context information, characterized by the following steps:
Step 1: Read the first frame of image data and the parameters [xm, ym, w, h] of the target block in the first frame, where xm, ym are the horizontal and vertical coordinates of the target center and w, h are the width and height of the target;
Step 2: Construct the dictionary D for sparse representation: randomly generate n target templates around the target in the first frame according to a Gaussian distribution, and normalize each target template to an M × N standard image block z ∈ R^{M×N}; scan each template with a sliding window to extract m local patches and arrange them in order, so that the dictionary D is composed of the n target templates: D = [D_1, D_2, …, D_n, E], where E is the set of trivial templates, D_i = [d_{i,1}, d_{i,2}, …, d_{i,m}] is the i-th template in the dictionary, and d_{i,j} is the j-th patch of the i-th template;
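For illustration, a minimal NumPy sketch of this dictionary construction follows; the 32 × 32 template size matches step 5, while the patch size, stride, and all function and variable names are assumptions rather than part of the patent (a 16 × 16 window with stride 8 gives m = 9 patches per template):

```python
import numpy as np

def extract_patches(template, patch=16, stride=8):
    """Scan a normalized template with a sliding window and return its
    m local patches as unit-norm columns, in raster order."""
    H, W = template.shape
    cols = []
    for r in range(0, H - patch + 1, stride):
        for c in range(0, W - patch + 1, stride):
            p = template[r:r + patch, c:c + patch].ravel().astype(np.float64)
            cols.append(p / (np.linalg.norm(p) + 1e-12))
    return np.stack(cols, axis=1)                     # shape (patch*patch, m)

def build_dictionary(gray, box, n=10, jitter=2.0, size=32):
    """Sample n Gaussian-jittered target templates around the first-frame
    box, normalize each to size x size, and concatenate their patch atoms.
    Trivial templates E (an identity block) absorb occlusion and noise."""
    xm, ym, w, h = box
    rng = np.random.default_rng(0)
    blocks = []
    for _ in range(n):
        dx, dy = rng.normal(0.0, jitter, size=2)      # Gaussian perturbation
        x0, y0 = int(xm - w / 2 + dx), int(ym - h / 2 + dy)
        crop = gray[y0:y0 + h, x0:x0 + w]
        # nearest-neighbour resize to the standard M x N template size
        ri = np.linspace(0, crop.shape[0] - 1, size).astype(int)
        ci = np.linspace(0, crop.shape[1] - 1, size).astype(int)
        blocks.append(extract_patches(crop[np.ix_(ri, ci)]))
    D = np.concatenate(blocks, axis=1)                # [D_1, ..., D_n]
    E = np.eye(D.shape[0])                            # trivial templates
    return np.concatenate([D, E], axis=1)
```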
Step 3: Build the spatial context model of the target:
The spatial context model is defined as

$$h^{sc}(\mathbf{x}) = \mathcal{F}^{-1}\left(\frac{\mathcal{F}\left(b\,e^{-\left|\frac{\mathbf{x}-\mathbf{x}'}{\alpha}\right|^{\beta}}\right)}{\mathcal{F}\left(I(\mathbf{x})\,\omega_{\sigma}(\mathbf{x}-\mathbf{x}')\right)}\right)$$

where I(·) is the gray value of a pixel, ω_σ(·) is a Gaussian weighting function, I(x)ω_σ(x − x′) represents the spatial context prior of the target, x′ = (xm, ym) is the coordinate of the current target location, b is a normalization constant whose value lies in the range 0 to 1, α and β are empirical constants controlling the model with values 2.25 and 1 respectively, the term b e^{−|(x−x′)/α|^β} measures the likelihood that the target appears at each position, and $\mathcal{F}$ and $\mathcal{F}^{-1}$ denote the Fourier transform and the inverse Fourier transform respectively;
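Continuing the sketch above (all names and defaults remain illustrative assumptions), the model can be learned by a division in the Fourier domain, following the step 3 formula:

```python
def learn_context_model(gray, center, alpha=2.25, beta=1.0, sigma=None):
    """Learn h_sc for one frame as
    h_sc = F^{-1}( F(b*exp(-|(x-x')/alpha|^beta)) / F(I(x)*w_sigma(x-x')) ).
    b only rescales the map, so it is dropped here; sigma defaults to a
    quarter of the mean image side (an assumed context scale)."""
    H, W = gray.shape
    ys, xs = np.mgrid[0:H, 0:W]
    dist = np.sqrt((xs - center[0]) ** 2 + (ys - center[1]) ** 2)
    if sigma is None:
        sigma = 0.25 * (H + W) / 2.0
    prior = gray * np.exp(-dist ** 2 / (2 * sigma ** 2))  # I(x) w_sigma(x - x')
    conf = np.exp(-np.abs(dist / alpha) ** beta)          # target likelihood map
    # small constant guards against division by zero in the frequency domain
    hsc = np.fft.ifft2(np.fft.fft2(conf) / (np.fft.fft2(prior) + 1e-8))
    return np.real(hsc)
```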
Starting from step 4, the following steps are executed in a loop for each frame that is read:
Step 4: For the first frame, the confidence map of the target is $m(\mathbf{x}) = b\,e^{-\left|\frac{\mathbf{x}-\mathbf{x}'}{\alpha}\right|^{\beta}}$; otherwise, according to the spatial context model, the confidence map of the target is expressed as

$$m(\mathbf{x}) = h^{sc}(\mathbf{x}) \otimes \left(I(\mathbf{x})\,\omega_{\sigma}(\mathbf{x}-\mathbf{x}')\right)$$

where ⊗ denotes convolution. Taking the position where the confidence map m(x) attains its maximum as the center, sample according to a Gaussian distribution, i.e., randomly generate N particles according to the distribution p(x_k | x_{k−1}) = N(x_{k−1}; x, Σ), with N = 600, and record their coordinates p = (x_{pi}, y_{pi}), i = 1, 2, …, N. Each particle represents a candidate target region described by six affine parameters which are, in order: horizontal scale, horizontal shear, vertical shear, vertical scale, horizontal translation, and vertical translation;
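A sketch of the confidence-map evaluation and particle sampling of step 4; the perturbation standard deviations stand in for the unspecified covariance Σ, and all names continue the assumptions above:

```python
def confidence_map(gray, center, hsc, sigma):
    """m(x) = h_sc(x) convolved with I(x) w_sigma(x - x'), evaluated as an
    element-wise product in the Fourier domain."""
    H, W = gray.shape
    ys, xs = np.mgrid[0:H, 0:W]
    dist2 = (xs - center[0]) ** 2 + (ys - center[1]) ** 2
    prior = gray * np.exp(-dist2 / (2 * sigma ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(hsc) * np.fft.fft2(prior)))

def sample_particles(mconf, state, N=600, trans_std=4.0, affine_std=0.01):
    """Draw N six-parameter affine particles, centered on the peak of the
    confidence map: [h-scale, h-shear, v-shear, v-scale, h-shift, v-shift]."""
    rng = np.random.default_rng()
    py, px = np.unravel_index(np.argmax(mconf), mconf.shape)
    particles = np.tile(np.asarray(state, float), (N, 1))
    particles[:, :4] += rng.normal(0.0, affine_std, (N, 4))  # scale/shear jitter
    particles[:, 4] = px + rng.normal(0.0, trans_std, N)     # horizontal shift
    particles[:, 5] = py + rng.normal(0.0, trans_std, N)     # vertical shift
    return particles
```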
Step 5: Sparse representation solving: the candidate target Y represented by each particle is first normalized to the standard size 32 × 32, in the same way the dictionary was constructed, and its local patches are then extracted in the same manner as in step 2. The candidate target Y can thus be expressed by the dictionary D and the corresponding sparse representation coefficients α as

Y ≈ [D_1, D_2, …, D_n, E][α_1, α_2, …, α_n, e] = Dα

and the sparse representation coefficients α are obtained by solving

$$\alpha = \arg\min_{\alpha}\ \|Y - D\alpha\|_2^2 + \lambda\,\|\alpha\|_1$$

with regularization coefficient λ = 0.01.
In the coefficient vector α = [α_1, α_2, …, α_n]^T, α_i is the representation coefficient corresponding to the i-th template together with the trivial templates; since each target template has been divided into local regions, α_{i,j} denotes the representation coefficient corresponding to the j-th patch of the i-th template, and e_i is the representation coefficient corresponding to the trivial templates, so that α_i = [α_{i,1}, α_{i,2}, …, α_{i,m}, e_i]^T;
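The ℓ1-regularized problem above can be solved with any Lasso solver; below is a self-contained ISTA (iterative soft-thresholding) sketch. The solver choice and iteration count are assumptions, since the patent does not specify them:

```python
def sparse_code(D, y, lam=0.01, iters=200):
    """Solve  min_a ||y - D a||_2^2 + lam * ||a||_1  by ISTA.
    Step size 1/L with L = ||D||_2^2, from the Lipschitz constant of the
    (halved) gradient D^T (D a - y)."""
    L = np.linalg.norm(D, 2) ** 2
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        a = a - D.T @ (D @ a - y) / L                              # gradient step
        a = np.sign(a) * np.maximum(np.abs(a) - lam / (2 * L), 0)  # shrinkage
    return a
```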
Step 6: The representation coefficients at corresponding patch positions are summed over the templates, yielding for each candidate the m × m coefficient matrix

$$c_i = \sum_{k=1}^{n} \alpha_k$$

where α_k collects the coefficients of the candidate's m patches over the m patch atoms of the k-th template; the diagonal elements of c_i are then taken out in order to form the column vector f_i;
Step 7: Compute the confidence of each particle; the particle with the highest confidence gives the tracking result. The confidence of a particle is measured by its sparse representation coefficients and is defined as the sum of the elements of the pooled coefficient vector f_i;
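Steps 6 and 7 might be realized as follows, under the reconstruction given above; coding each patch separately (reusing sparse_code from the previous sketch) and the exact pooling layout are assumptions:

```python
def pool_and_score(D, patches, n, m, lam=0.01):
    """Steps 6-7: code each of the m candidate patches over D, sum the
    per-template m x m coefficient blocks into c, take its diagonal as f,
    and score the particle by the sum of f's entries. Coefficients of the
    trivial templates (the tail rows of A) are ignored in the pooling."""
    A = np.stack([sparse_code(D, patches[:, j], lam) for j in range(m)], axis=1)
    c = sum(A[k * m:(k + 1) * m, :] for k in range(n))  # m x m pooled matrix
    f = np.diag(c)                                      # aligned coefficients
    return f, float(f.sum())                            # (f_i, confidence)
```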
Step 8: Dictionary update: when the confidence of the tracked target falls below the threshold μ = 10, one template in the dictionary is replaced; the target template calibrated manually in the first frame is kept as a fixed template and is never replaced. A sequence S = {0, 2^1, 2^2, …, 2^{n−1}} is generated, where n is the number of target templates in the dictionary; normalizing this sequence yields a cumulative probability sequence

$$L_p = \{L_1, L_2, \ldots, L_n\}, \qquad L_i = \frac{\sum_{k=1}^{i} s_k}{\sum_{k=1}^{n} s_k}$$

where s_k are the elements of S. The accumulated sequence represents the probability of each template being replaced. A uniform random number r on the interval [0, 1] is then generated, and the index of the interval of L_p in which r lies is the index of the template to be replaced;
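A sketch of the replacement draw of step 8; the function name and the guard for the r = 0 corner case are illustrative:

```python
def pick_template_to_replace(n, rng=None):
    """Step 8: S = {0, 2^1, ..., 2^(n-1)}; its normalized cumulative sums
    form L_p, a uniform r in [0, 1] is drawn, and the interval of L_p that
    contains r indexes the template to replace. Template 0, the manually
    calibrated first-frame template, has weight 0 and is never replaced."""
    rng = rng or np.random.default_rng()
    S = np.array([0.0] + [2.0 ** k for k in range(1, n)])
    Lp = np.cumsum(S) / S.sum()                 # cumulative probability sequence
    r = rng.uniform(0.0, 1.0)
    return max(1, int(np.searchsorted(Lp, r)))  # guard the r == 0 corner case
```

With this weighting, higher-index (more recently added) templates are exponentially more likely to be chosen for replacement.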
Step 9: Check whether all frames of the image sequence have been processed; if not, go to step 4 and continue; if so, terminate.
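Putting the pieces together, a high-level sketch of the per-frame loop of steps 4 to 9; crop_patches, which would warp a particle's region to 32 × 32 and extract its patches, is a hypothetical helper, and all thresholds and defaults are assumptions carried over from the sketches above:

```python
def track(frames, init_box, n=10, m=9, mu=10.0):
    """End-to-end loop over steps 1-9 using the sketches above; frames are
    grayscale float arrays and init_box = [xm, ym, w, h] from the first frame."""
    gray0 = frames[0]
    D = build_dictionary(gray0, init_box, n=n)
    center = (init_box[0], init_box[1])
    sigma = 0.25 * (init_box[2] + init_box[3])
    hsc = learn_context_model(gray0, center)
    state = np.array([1.0, 0.0, 0.0, 1.0, center[0], center[1]])
    for gray in frames[1:]:
        mconf = confidence_map(gray, center, hsc, sigma)
        particles = sample_particles(mconf, state)
        best_q, best_p, best_patches = -np.inf, state, None
        for p in particles:
            patches = crop_patches(gray, p, m)       # hypothetical helper:
            _, q = pool_and_score(D, patches, n, m)  # warp region, get patches
            if q > best_q:
                best_q, best_p, best_patches = q, p, patches
        state, center = best_p, (best_p[4], best_p[5])
        hsc = learn_context_model(gray, center)      # refresh the context model
        if best_q < mu:                              # low confidence: update D
            idx = pick_template_to_replace(n)
            D[:, idx * m:(idx + 1) * m] = best_patches
    return state
```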
Beneficial effect
The aerial video target tracking method based on local sparse representation and spatio-temporal context information proposed by the present invention obtains robust tracking results when the target appearance changes, is occluded, or moves quickly. Under the framework of sparse representation and particle filtering, the invention constructs a dictionary from local patches to capture the local and structural information of the target. The spatial context information of the target is used to obtain a confidence map of the target's appearance in its neighborhood, i.e., the likelihood that the target appears at each position, and particle filtering is carried out in the regions where this likelihood is largest, the tracking result being obtained by sparse representation.
Because the dictionary is built from local patches, the local and structural information of the target is captured: in particular, when the target is partially occluded, the parts that are not occluded can still be represented by the corresponding local patches, making the tracking more robust. By exploiting the spatial context information of the target, the target is located quickly and accurately, target loss and drift are better handled, and fast, robust tracking of aerial video targets against complex backgrounds is achieved.
Brief description of the drawings
Fig. 1: Flow chart
Embodiment
The invention is further described below in conjunction with an embodiment and the accompanying drawing:
On the basis of the target determined in the first frame of the aerial video, the present invention first represents the object using the method based on local sparse representation, constructing the dictionary for sparse representation. Tracking is then assisted by spatial context information: from the spatial context of the target, a confidence map is built to represent the likelihood that the target appears at each position in the context. Particle filtering is carried out in the regions where the target is most likely to appear, and the tracking result is obtained by sparse representation. The specific steps are as follows; the flow is shown in the accompanying drawing.
1) Read the first frame of image data and the parameters [xm, ym, w, h] of the target block in the first frame, where xm, ym are the horizontal and vertical coordinates of the target center and w, h are the width and height of the target.
2) Construct the dictionary D for sparse representation. Randomly generate n target templates around the target in the first frame according to a Gaussian distribution, and normalize each target template to an M × N standard image block z ∈ R^{M×N}. Scan each template with a sliding window to extract m local patches and arrange them in order, so that the dictionary D is composed of the n target templates: D = [D_1, D_2, …, D_n, E], where E is the set of trivial templates, D_i = [d_{i,1}, d_{i,2}, …, d_{i,m}] is the i-th template in the dictionary, and d_{i,j} is the j-th patch of the i-th template.
3) Build the spatial context model of the target. Concretely, the spatial context model is defined as

$$h^{sc}(\mathbf{x}) = \mathcal{F}^{-1}\left(\frac{\mathcal{F}\left(b\,e^{-\left|\frac{\mathbf{x}-\mathbf{x}'}{\alpha}\right|^{\beta}}\right)}{\mathcal{F}\left(I(\mathbf{x})\,\omega_{\sigma}(\mathbf{x}-\mathbf{x}')\right)}\right)$$

where I(·) is the gray value of a pixel, ω_σ(·) is a Gaussian weighting function, I(x)ω_σ(x − x′) represents the spatial context prior of the target, x′ = (xm, ym) is the coordinate of the current target location, b is a normalization constant whose value lies in the range 0 to 1, α and β are empirical constants controlling the model with values 2.25 and 1 respectively, the term b e^{−|(x−x′)/α|^β} measures the likelihood that the target appears at each position, and $\mathcal{F}$ and $\mathcal{F}^{-1}$ denote the Fourier transform and the inverse Fourier transform respectively.
4) For the first frame, the confidence map of the target is $m(\mathbf{x}) = b\,e^{-\left|\frac{\mathbf{x}-\mathbf{x}'}{\alpha}\right|^{\beta}}$; otherwise, according to the spatial context model, the confidence map of the target is expressed as

$$m(\mathbf{x}) = h^{sc}(\mathbf{x}) \otimes \left(I(\mathbf{x})\,\omega_{\sigma}(\mathbf{x}-\mathbf{x}')\right)$$

where ⊗ denotes convolution. Taking the position where the confidence map m(x) attains its maximum as the center, sample according to a Gaussian distribution, i.e., randomly generate N particles (N = 600) according to the distribution p(x_k | x_{k−1}) = N(x_{k−1}; x, Σ), and record their coordinates p = (x_{pi}, y_{pi}), i = 1, 2, …, N. Each particle represents a candidate target region described by six affine parameters which are, in order: horizontal scale, horizontal shear, vertical shear, vertical scale, horizontal translation, and vertical translation.
5) Sparse representation solving. The candidate target Y represented by each particle is first normalized to the standard size 32 × 32, in the same way the dictionary was constructed, and its local patches are then extracted in the same manner as in step 2. The candidate target Y can then be expressed by the dictionary D and the corresponding sparse representation coefficients α as

Y ≈ [D_1, D_2, …, D_n, E][α_1, α_2, …, α_n, e] = Dα

and the sparse representation coefficients α are obtained by solving

$$\alpha = \arg\min_{\alpha}\ \|Y - D\alpha\|_2^2 + \lambda\,\|\alpha\|_1$$

with regularization coefficient λ = 0.01.
6) In the sparse representation coefficient vector α = [α_1, α_2, …, α_n]^T, α_i is the representation coefficient corresponding to the i-th template together with the trivial templates; since each target template has been divided into local regions, α_{i,j} denotes the representation coefficient corresponding to the j-th patch of the i-th template, and e_i is the representation coefficient corresponding to the trivial templates, so that α_i = [α_{i,1}, α_{i,2}, …, α_{i,m}, e_i]^T.
7) The candidate target Y has been divided into m overlapping patches, and a given patch of a candidate target is clearly most likely to be represented by the dictionary patches at the corresponding position. The representation coefficients at corresponding positions are therefore summed, yielding the m × m coefficient matrix $c_i = \sum_{k=1}^{n} \alpha_k$, where α_k collects the coefficients of the candidate's m patches over the m patch atoms of the k-th template; the diagonal elements of c_i are then taken out in order to form the column vector f_i.
8) Compute the confidence of each particle; the particle with the highest confidence gives the tracking result. The confidence of a particle is measured by its sparse representation coefficients and is defined as the sum of the elements of the pooled coefficient vector f_i.
9) Dictionary update. When the confidence of the tracked target falls below a threshold μ (μ = 10), one template in the dictionary is replaced. The target template calibrated manually in the first frame is kept by the invention as a fixed template and is never replaced. A sequence S = {0, 2^1, 2^2, …, 2^{n−1}} is generated, where n is the number of target templates in the dictionary; normalizing this sequence yields a cumulative probability sequence

$$L_p = \{L_1, L_2, \ldots, L_n\}, \qquad L_i = \frac{\sum_{k=1}^{i} s_k}{\sum_{k=1}^{n} s_k}$$

where s_k are the elements of S. The accumulated sequence represents the probability of each template being replaced. A uniform random number r on the interval [0, 1] is then generated, and the index of the interval of L_p in which r lies is the index of the template to be replaced.
10) Check whether all frames of the image sequence have been processed; if not, go to step 4 and continue; if so, terminate.
Claims (1)
1. An aerial video target tracking method based on local sparse representation and spatio-temporal context information, characterized by the following steps:
Step 1: Read the first frame of image data and the parameters [xm, ym, w, h] of the target block in the first frame, where xm, ym are the horizontal and vertical coordinates of the target center and w, h are the width and height of the target;
Step 2: Construct the dictionary D for sparse representation: randomly generate n target templates around the target in the first frame according to a Gaussian distribution, and normalize each target template to an M × N standard image block z ∈ R^{M×N}; scan each template with a sliding window to extract m local patches and arrange them in order, so that the dictionary D is composed of the n target templates: D = [D_1, D_2, …, D_n, E], where E is the set of trivial templates, D_i = [d_{i,1}, d_{i,2}, …, d_{i,m}] is the i-th template in the dictionary, and d_{i,j} is the j-th patch of the i-th template;
Step 3: Build the spatial context model of the target:
The spatial context model is defined as

$$h^{sc}(\mathbf{x}) = \mathcal{F}^{-1}\left(\frac{\mathcal{F}\left(b\,e^{-\left|\frac{\mathbf{x}-\mathbf{x}'}{\alpha}\right|^{\beta}}\right)}{\mathcal{F}\left(I(\mathbf{x})\,\omega_{\sigma}(\mathbf{x}-\mathbf{x}')\right)}\right)$$

where I(·) is the gray value of a pixel, ω_σ(·) is a Gaussian weighting function, I(x)ω_σ(x − x′) represents the spatial context prior of the target, x′ = (xm, ym) is the coordinate of the current target location, b is a normalization constant whose value lies in the range 0 to 1, α and β are empirical constants controlling the model with values 2.25 and 1 respectively, the term b e^{−|(x−x′)/α|^β} measures the likelihood that the target appears at each position, and $\mathcal{F}$ and $\mathcal{F}^{-1}$ denote the Fourier transform and the inverse Fourier transform respectively;
Starting from step 4, the following steps are executed in a loop for each frame that is read:
Step 4: For the first frame, the confidence map of the target is $m(\mathbf{x}) = b\,e^{-\left|\frac{\mathbf{x}-\mathbf{x}'}{\alpha}\right|^{\beta}}$; otherwise, according to the spatial context model, the confidence map of the target is expressed as

$$m(\mathbf{x}) = h^{sc}(\mathbf{x}) \otimes \left(I(\mathbf{x})\,\omega_{\sigma}(\mathbf{x}-\mathbf{x}')\right)$$

where ⊗ denotes convolution. Taking the position where the confidence map m(x) attains its maximum as the center, sample according to a Gaussian distribution, i.e., randomly generate N particles according to the distribution p(x_k | x_{k−1}) = N(x_{k−1}; x, Σ), with N = 600, and record their coordinates p = (x_{pi}, y_{pi}), i = 1, 2, …, N. Each particle represents a candidate target region described by six affine parameters which are, in order: horizontal scale, horizontal shear, vertical shear, vertical scale, horizontal translation, and vertical translation;
Step 5: Sparse representation solving: the candidate target Y represented by each particle is first normalized to the standard size 32 × 32, in the same way the dictionary was constructed, and its local patches are then extracted in the same manner as in step 2. The candidate target Y can thus be expressed by the dictionary D and the corresponding sparse representation coefficients α as

Y ≈ [D_1, D_2, …, D_n, E][α_1, α_2, …, α_n, e] = Dα

and the sparse representation coefficients α are obtained by solving

$$\alpha = \arg\min_{\alpha}\ \|Y - D\alpha\|_2^2 + \lambda\,\|\alpha\|_1$$

with regularization coefficient λ = 0.01.
In the coefficient vector α = [α_1, α_2, …, α_n]^T, α_i is the representation coefficient corresponding to the i-th template together with the trivial templates; since each target template has been divided into local regions, α_{i,j} denotes the representation coefficient corresponding to the j-th patch of the i-th template, and e_i is the representation coefficient corresponding to the trivial templates, so that α_i = [α_{i,1}, α_{i,2}, …, α_{i,m}, e_i]^T;
Step 6: The representation coefficients at corresponding patch positions are summed over the templates, yielding for each candidate the m × m coefficient matrix $c_i = \sum_{k=1}^{n} \alpha_k$, where α_k collects the coefficients of the candidate's m patches over the m patch atoms of the k-th template; the diagonal elements of c_i are then taken out in order to form the column vector f_i;
Step 7: Compute the confidence of each particle; the particle with the highest confidence gives the tracking result. The confidence of a particle is measured by its sparse representation coefficients and is defined as the sum of the elements of the pooled coefficient vector f_i;
Step 8: Dictionary update: when the confidence of the tracked target falls below the threshold μ = 10, one template in the dictionary is replaced; the target template calibrated manually in the first frame is kept as a fixed template and is never replaced. A sequence S = {0, 2^1, 2^2, …, 2^{n−1}} is generated, where n is the number of target templates in the dictionary; normalizing this sequence yields a cumulative probability sequence

$$L_p = \{L_1, L_2, \ldots, L_n\}, \qquad L_i = \frac{\sum_{k=1}^{i} s_k}{\sum_{k=1}^{n} s_k}$$

where s_k are the elements of S. The accumulated sequence represents the probability of each template being replaced. A uniform random number r on the interval [0, 1] is then generated, and the index of the interval of L_p in which r lies is the index of the template to be replaced;
Step 9: Check whether all frames of the image sequence have been processed; if not, go to step 4 and continue; if so, terminate.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710321830.8A CN107038431A (en) | 2017-05-09 | 2017-05-09 | Aerial video target tracking method based on local sparse representation and spatio-temporal context information |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710321830.8A CN107038431A (en) | 2017-05-09 | 2017-05-09 | Aerial video target tracking method based on local sparse representation and spatio-temporal context information |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107038431A true CN107038431A (en) | 2017-08-11 |
Family
ID=59537493
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710321830.8A Pending CN107038431A (en) | 2017-05-09 | 2017-05-09 | Video target tracking method of taking photo by plane based on local sparse and spatio-temporal context information |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107038431A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108320300A (en) * | 2018-01-02 | 2018-07-24 | 重庆信科设计有限公司 | A spatio-temporal context visual tracking method fusing particle filtering |
CN110544266A (en) * | 2019-09-11 | 2019-12-06 | 陕西师范大学 | Traffic target tracking method based on structural sparse representation |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105631895A (en) * | 2015-12-18 | 2016-06-01 | 重庆大学 | Temporal-spatial context video target tracking method combining particle filtering |
CN106127776A (en) * | 2016-06-28 | 2016-11-16 | 北京工业大学 | Robot target recognition and motion decision method based on multi-feature spatio-temporal context |
- 2017-05-09: Application filed in China as CN201710321830.8A; published as CN107038431A (en); status: Pending
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105631895A (en) * | 2015-12-18 | 2016-06-01 | 重庆大学 | Temporal-spatial context video target tracking method combining particle filtering |
CN106127776A (en) * | 2016-06-28 | 2016-11-16 | 北京工业大学 | Robot target recognition and motion decision method based on multi-feature spatio-temporal context |
Non-Patent Citations (3)
Title |
---|
XIAOFEN XING et al.: "Blurred Target Tracking Based on Sparse Representation of Online Updated Templates", IEEE * |
LIU Wanjun: "Anti-occlusion visual tracking with spatio-temporal context", Journal of Image and Graphics * |
LI Pengcheng: "Research on target tracking algorithms based on sparse representation", China Master's Theses Full-text Database, Information Science and Technology * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108320300A (en) * | 2018-01-02 | 2018-07-24 | 重庆信科设计有限公司 | A spatio-temporal context visual tracking method fusing particle filtering |
CN110544266A (en) * | 2019-09-11 | 2019-12-06 | 陕西师范大学 | Traffic target tracking method based on structural sparse representation |
CN110544266B (en) * | 2019-09-11 | 2022-03-18 | 陕西师范大学 | Traffic target tracking method based on structural sparse representation |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106846359B (en) | Moving target rapid detection method based on video sequence | |
US20220222776A1 (en) | Multi-Stage Multi-Reference Bootstrapping for Video Super-Resolution | |
CN109974743B (en) | Visual odometer based on GMS feature matching and sliding window pose graph optimization | |
CN106851046A (en) | Video dynamic super-resolution processing method and system | |
CN110634147B (en) | Image matting method based on bilateral guide up-sampling | |
CN110148223B (en) | Method and system for concentrating and expressing surveillance video target in three-dimensional geographic scene model | |
CN103440667B (en) | The automaton that under a kind of occlusion state, moving target is stably followed the trail of | |
CN108280804B (en) | Multi-frame image super-resolution reconstruction method | |
CN110930411B (en) | Human body segmentation method and system based on depth camera | |
CN110555377B (en) | Pedestrian detection and tracking method based on fish eye camera overlooking shooting | |
CN104036468B (en) | Single-frame image super-resolution reconstruction method based on the insertion of pre-amplification non-negative neighborhood | |
CN108510520B (en) | A kind of image processing method, device and AR equipment | |
Li et al. | A maximum a posteriori estimation framework for robust high dynamic range video synthesis | |
CN107038431A (en) | Aerial video target tracking method based on local sparse representation and spatio-temporal context information | |
CN114494085B (en) | Video stream restoration method, system, electronic device and storage medium | |
CN104376544B (en) | Non-local super-resolution reconstruction method based on multi-region dimension zooming compensation | |
Li et al. | Gaussianbody: Clothed human reconstruction via 3d gaussian splatting | |
CN106023097A (en) | Iterative-method-based flow field image preprocessing algorithm | |
CN109658441A (en) | Foreground detection method and device based on depth information | |
EA200501474A1 (en) | METHOD OF CODING THE COORDINATES OF MOVING ON THE SCREEN COMPUTING VIDEO DEVICE | |
Amiri et al. | A fast video super resolution for facial image | |
CN110570450A (en) | Target tracking method based on cascade context-aware framework | |
Wan et al. | Progressive convolutional transformer for image restoration | |
CN110853040B (en) | Image collaborative segmentation method based on super-resolution reconstruction | |
Liang et al. | Spatiotemporal super-resolution reconstruction based on robust optical flow and Zernike moment for video sequences |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20170811 |