CN110348492A - Correlation-filter target tracking method based on contextual information and multi-feature fusion - Google Patents

Correlation-filter target tracking method based on contextual information and multi-feature fusion Download PDF

Info

Publication number
CN110348492A
Authority
CN
China
Prior art keywords
target
correlation filtering
information
response
contextual information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910548179.7A
Other languages
Chinese (zh)
Inventor
Liu Hui
Qin Li
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kunming University of Science and Technology
Original Assignee
Kunming University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kunming University of Science and Technology
Priority to CN201910548179.7A
Publication of CN110348492A
Legal status: Pending

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20048 Transform domain processing
    • G06T2207/20056 Discrete and fast Fourier transform, [DFT, FFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning

Abstract

The present invention relates to a correlation-filter target tracking method based on contextual information and multi-feature fusion, belonging to the technical field of video object tracking. Histogram of oriented gradients (HOG) features and target color histogram features are first extracted from the target and its context, and the correlation-filter responses of these two traditional features are fused with a fixed-weight strategy. Convolutional features of the target and context are then extracted with a convolutional network from deep learning, and an adaptive-weight fusion strategy combines the fused traditional-feature response with the convolutional-feature response. The target position is estimated from the fused response map, and the target scale variation problem is solved with a scale estimation method. The invention effectively suppresses tracking drift caused by occlusion, scale variation, illumination change, background clutter and similar factors, achieving accurate and robust target tracking.

Description

Correlation-filter target tracking method based on contextual information and multi-feature fusion
Technical field
The present invention relates to a correlation-filter target tracking method based on contextual information and multi-feature fusion, and belongs to the technical field of video object tracking.
Background technique
In the field of computer vision, target tracking has broad application prospects, mainly including human-computer interaction, military guidance, athlete motion analysis and intelligent visual navigation. Although great breakthroughs have been made on the target tracking problem in recent years, completing accurate target tracking remains a major challenge, because the target undergoes appearance changes and background clutter during tracking.
According to how the target appearance model is built, tracking models can be divided into two classes: generative models and discriminative models. Tracking algorithms based on generative models describe the appearance features of the target with a generative model, search the sampled candidates for the one that minimizes the reconstruction error, compare the similarity of each candidate with the model, and take the most similar candidate as the tracking result. Discriminative appearance models instead train classifiers to distinguish the tracked object from the background region; they make efficient use of the target's contextual information and are widely used because of their good performance. A representative example is the Staple-CA algorithm based on correlation filtering, which effectively integrates the HOG features and color histogram features of the target and its context, and splits the tracking problem into two independent ridge regressions. The algorithm can handle ordinary appearance variation, but when the appearance or illumination changes abruptly, or the target is in a complex environment, tracking easily fails. Because the algorithm relies on traditional features, it cannot effectively extract the semantic information of the target; when the tracked target is occluded, the target is easily lost and model drift and failure occur.
Summary of the invention
The technical problem to be solved by the present invention is to provide a correlation-filter target tracking method based on contextual information and multi-feature fusion, so as to suppress tracking drift caused by occlusion, scale variation, illumination change, background clutter and similar factors during visual target tracking, and to achieve accurate and robust target tracking.
The technical scheme of the invention is a correlation-filter target tracking method based on contextual information and multi-feature fusion: histogram of oriented gradients (HOG) features and target color histogram features are first extracted from the target and its context, and the correlation-filter responses of the two traditional features are fused with a fixed-weight strategy; convolutional features of the target and context are then extracted with a convolutional network from deep learning, and an adaptive-weight fusion strategy combines the fused traditional-feature response with the convolutional-feature response; the target position is estimated from the fused response map, and the target scale variation problem is solved with a scale estimation method. Specifically:
Step1, obtain the initial position information and scale information of the target;
Step2, extract the traditional features of the target and its contextual information from the initial information obtained in Step1: HOG features and color histogram features;
Step3, establish a correlation-filter model for each of the HOG features and color histogram features extracted in Step2, and fuse the two traditional features with fixed coefficients to obtain the correlation-filter response;
Step4, obtain the convolutional features of the initial target and its contextual information with a convolutional network from deep learning;
Step5, establish a correlation-filter model with the convolutional features extracted in Step4, fuse it adaptively with the traditional-feature response obtained in Step3 to obtain the final filter response, and predict the target position;
Step6, using the target position information obtained in Step5, predict the scale change of the target with an added scale filter;
Step7, after the position information and scale information of the target are obtained, update the correlation-filter models and continue tracking until the last frame.
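The seven steps above reduce, per frame, to a two-stage fusion of response maps. The sketch below uses toy numpy arrays in place of the real correlation-filter outputs; the function names, the fixed coefficient gamma and the weight k_conv are illustrative, not values fixed by the patent.

```python
import numpy as np

def fuse_fixed(resp_hog, resp_color, gamma=0.3):
    """Step3: fixed-coefficient fusion of the two traditional-feature responses."""
    return (1.0 - gamma) * resp_hog + gamma * resp_color

def fuse_adaptive(resp_trad, resp_conv, k_conv):
    """Step5: adaptive fusion of the traditional and convolutional responses."""
    return k_conv * resp_conv + (1.0 - k_conv) * resp_trad

def locate(resp):
    """The predicted target position is the peak of the fused response map."""
    i, j = np.unravel_index(np.argmax(resp), resp.shape)
    return int(i), int(j)

# toy 5x5 response maps standing in for the real correlation-filter outputs
resp_hog = np.zeros((5, 5)); resp_hog[2, 3] = 1.0
resp_color = np.zeros((5, 5)); resp_color[2, 3] = 0.8
resp_conv = np.zeros((5, 5)); resp_conv[2, 3] = 0.9

resp_trad = fuse_fixed(resp_hog, resp_color)
final = fuse_adaptive(resp_trad, resp_conv, k_conv=0.5)
print(locate(final))  # (2, 3)
```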
The specific steps for establishing the correlation-filter models in Step3 and Step5 from the traditional and convolutional features of the target and its contextual information are as follows:
Let x denote the features of the currently extracted image patch; its cyclic shifts form the circulant matrix X. Around the target sample x, n context patches of the same size are taken above, below and to the sides, giving samples x_i with corresponding cyclic-shift matrices X_i. The n context samples are used as negative samples when training the classifier, so that the filter has a high response at the target sample and a response close to zero at the context samples. With the contextual information added, the ridge-regression objective is:

$$\min_{w}\ \|Xw-y\|_2^2+\lambda\|w\|_2^2+\lambda_1\sum_{i=1}^{n}\|X_i w\|_2^2 \qquad (1)$$

In formula (1), $\lambda$ and $\lambda_1$ are regularization parameters, $w$ is the filter, and $y$ denotes the desired correlation-filter output. Stacking the circulant matrices of the context samples and the target sample gives:

$$\min_{w}\ \|Bw-\bar{y}\|_2^2+\lambda\|w\|_2^2,\qquad B=\begin{bmatrix}X\\ \sqrt{\lambda_1}X_1\\ \vdots\\ \sqrt{\lambda_1}X_n\end{bmatrix},\quad \bar{y}=\begin{bmatrix}y\\ 0\\ \vdots\\ 0\end{bmatrix} \qquad (2)$$

In formula (2), $B$ is a block-circulant matrix. Every circulant matrix is diagonalized by the discrete Fourier transform (DFT) matrix, so the objective can be written elementwise in the Fourier domain:

$$\min_{\hat{w}}\ \|\hat{x}\odot\hat{w}-\hat{y}\|_2^2+\lambda\|\hat{w}\|_2^2+\lambda_1\sum_{i=1}^{n}\|\hat{x}_i\odot\hat{w}\|_2^2 \qquad (3)$$

The filter $w$ is then solved quickly in the Fourier domain as:

$$\hat{w}=\frac{\hat{x}^{*}\odot\hat{y}}{\hat{x}^{*}\odot\hat{x}+\lambda+\lambda_1\sum_{i=1}^{n}\hat{x}_i^{*}\odot\hat{x}_i} \qquad (4)$$
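The closed-form solution of formula (4) can be sketched directly in numpy. The sketch below works on 1-D toy signals; the variable names, regularization values and the circular-correlation response convention are illustrative assumptions, not part of the patent.

```python
import numpy as np

def train_ca_filter(x, contexts, y, lam=1e-3, lam1=1e-2):
    """Formula (4): w_hat = conj(x_hat) * y_hat /
    (conj(x_hat)*x_hat + lam + lam1 * sum_i conj(xi_hat)*xi_hat).
    The context patches enter only the denominator, pushing their response
    toward zero."""
    x_hat, y_hat = np.fft.fft(x), np.fft.fft(y)
    denom = np.conj(x_hat) * x_hat + lam
    for xi in contexts:
        xi_hat = np.fft.fft(xi)
        denom = denom + lam1 * np.conj(xi_hat) * xi_hat
    return np.conj(x_hat) * y_hat / denom

def respond(w_hat, z):
    """Response of the learned filter on a patch z (products elementwise)."""
    return np.real(np.fft.ifft(np.fft.fft(z) * w_hat))

rng = np.random.default_rng(0)
x = rng.standard_normal(64)                             # target patch (toy 1-D signal)
contexts = [rng.standard_normal(64) for _ in range(4)]  # surrounding context patches
y = np.exp(-0.5 * ((np.arange(64) - 10) / 1.5) ** 2)    # desired Gaussian output, peak at 10
w_hat = train_ca_filter(x, contexts, y)
peak = int(np.argmax(respond(w_hat, x)))  # should land at (or next to) the desired peak
```

On the training sample, the response approximates the desired output y, so its peak sits at the shift where y was centered.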
The beneficial effects of the present invention are: the background information around the target is exploited effectively, and using it to assist localization improves tracking accuracy; traditional features and convolutional features are fused efficiently, which reduces model drift caused by partial occlusion during tracking; and the target position and scale changes are tracked accurately while a high execution speed is maintained.
Detailed description of the invention
Fig. 1 is a flow diagram of the invention;
Fig. 2 is a detailed flow diagram of the invention.
Specific embodiment
The invention is further described below with reference to the accompanying drawings and specific embodiments.
Embodiment 1: as shown in Figs. 1-2, a correlation-filter target tracking method based on contextual information and multi-feature fusion first extracts histogram of oriented gradients (HOG) features and target color histogram features from the target and its context, and fuses the correlation-filter responses of the two traditional features with a fixed-weight strategy. Convolutional features of the target and context are then extracted with a convolutional network from deep learning, and an adaptive-weight fusion strategy combines the fused traditional-feature response with the convolutional-feature response; the target position is estimated from the fused response map, and the target scale variation problem is solved with a scale estimation method.
The specific steps are as follows:
Step1, obtain the initial position information and scale information of the target.
In Step1, since the invention is validated on the public benchmark OTB-2013, the position and scale of the target in the first frame are annotated in the test set; the initial target information is obtained by reading the annotation file of the test set.
Step2, extract the HOG features and color histogram features of the target from the initial information obtained in Step1.
Step3, establish a correlation-filter model for each of the HOG features and color histogram features extracted in Step2, and obtain the traditional-feature correlation response by fixed-coefficient fusion. The correlation-filter model is established as follows:
Let x denote the features of the currently extracted image patch; its cyclic shifts form the circulant matrix X. Around the target sample x, n context patches of the same size are taken above, below and to the sides, giving samples x_i with corresponding cyclic-shift matrices X_i. The n context samples are used as negative samples when training the classifier, so that the filter has a high response at the target sample and a response close to zero at the context samples. With the contextual information added, the ridge-regression objective is:

$$\min_{w}\ \|Xw-y\|_2^2+\lambda\|w\|_2^2+\lambda_1\sum_{i=1}^{n}\|X_i w\|_2^2 \qquad (1)$$

In formula (1), $\lambda$ and $\lambda_1$ are regularization parameters, $w$ is the filter, and $y$ denotes the desired correlation-filter output. Stacking the circulant matrices of the context samples and the target sample gives:

$$\min_{w}\ \|Bw-\bar{y}\|_2^2+\lambda\|w\|_2^2,\qquad B=\begin{bmatrix}X\\ \sqrt{\lambda_1}X_1\\ \vdots\\ \sqrt{\lambda_1}X_n\end{bmatrix},\quad \bar{y}=\begin{bmatrix}y\\ 0\\ \vdots\\ 0\end{bmatrix} \qquad (2)$$

In formula (2), $B$ is a block-circulant matrix. Every circulant matrix is diagonalized by the discrete Fourier transform (DFT) matrix, so the objective can be written elementwise in the Fourier domain:

$$\min_{\hat{w}}\ \|\hat{x}\odot\hat{w}-\hat{y}\|_2^2+\lambda\|\hat{w}\|_2^2+\lambda_1\sum_{i=1}^{n}\|\hat{x}_i\odot\hat{w}\|_2^2 \qquad (3)$$

The filter $w$ is then solved quickly in the Fourier domain as:

$$\hat{w}=\frac{\hat{x}^{*}\odot\hat{y}}{\hat{x}^{*}\odot\hat{x}+\lambda+\lambda_1\sum_{i=1}^{n}\hat{x}_i^{*}\odot\hat{x}_i} \qquad (4)$$
Step4, obtain the convolutional features of the initial target and its contextual information with a convolutional network from deep learning. The network used is VGG-19. Low convolutional layers carry more positional information, while deep layers carry more semantic information, so the features of the three layers conv3-4, conv4-4 and conv5-4 of VGG-19 are extracted and linearly weighted to obtain the final convolutional response.
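The linear weighting of the three layer-wise responses can be sketched as follows. The layer weights and the per-layer normalization are assumptions for illustration; the patent only states that the three responses are linearly weighted.

```python
import numpy as np

def fuse_conv_responses(responses, weights=(0.25, 0.5, 1.0)):
    """Linear weighting of per-layer correlation responses (e.g. from conv3-4,
    conv4-4 and conv5-4, already resized to a common grid). Deeper, more
    semantic layers get larger weights here; the values are illustrative."""
    fused = np.zeros_like(responses[0], dtype=float)
    for resp, w in zip(responses, weights):
        fused += w * resp / (np.abs(resp).max() + 1e-12)  # normalize each layer first
    return fused

# three toy layer responses that agree on a peak at (1, 2)
layers = []
for amp in (0.5, 0.8, 1.0):
    m = np.zeros((4, 4))
    m[1, 2] = amp
    layers.append(m)
fused = fuse_conv_responses(layers)
peak = tuple(int(v) for v in np.unravel_index(np.argmax(fused), fused.shape))
print(peak)  # (1, 2)
```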
Step5, fuse the correlation-filter response obtained in Step3 with the convolution-feature response obtained in Step4 by the adaptive fusion method to obtain the final response f_last = k_conv f_conv + k_trad f_trad, and predict the target position. The peak-to-sidelobe ratio (PSR) of each frame's response map measures the contribution of each model to the tracking model and dynamically assigns the fusion weights of the response maps.
The respective adaptive weights are computed as:

$$k_{conv}=\frac{C_{conv}}{C_{conv}+C_{trad}},\qquad k_{trad}=1-k_{conv}$$

where $C_{conv}$ and $C_{trad}$ denote the PSR-based confidences of the convolution-feature response and the traditional-feature response, respectively, computed as:

$$C=\mathrm{PSR}(f_t)-\mathrm{PSR}(f_{t-1}),\qquad \mathrm{PSR}(f)=\frac{\max(f)-\mu}{\delta}$$

where $t$ is the index of the current frame, and $\mu$ and $\delta$ are the mean and standard deviation of the sidelobe region of the response map.
The weight update strategy is:

$$k_t=(1-\eta_k)k_{t-1}+\eta_k k_t'$$

where $k_t'$ is the weight computed in the current frame and $\eta_k$ is the weight update coefficient.
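A minimal sketch of the PSR-based adaptive weighting, under two stated assumptions: the PSR here treats everything except the single peak cell as sidelobe (a real tracker usually also excludes a small window around the peak), and the two confidences are normalized into weights by their ratio, since the patent text does not reproduce the exact weight formula.

```python
import numpy as np

def psr(resp):
    """Simplified peak-to-sidelobe ratio: (peak - sidelobe mean) / sidelobe std."""
    flat = resp.ravel()
    k = int(np.argmax(flat))
    side = np.delete(flat, k)  # everything but the peak cell as 'sidelobe'
    return (flat[k] - side.mean()) / (side.std() + 1e-12)

def adaptive_weights(c_conv, c_trad):
    """Hypothetical weight rule: normalize the two PSR-based confidences so
    that k_trad = 1 - k_conv, as stated in the text."""
    c1, c2 = abs(c_conv), abs(c_trad)
    k_conv = c1 / (c1 + c2 + 1e-12)
    return k_conv, 1.0 - k_conv

def update_weight(k_prev, k_cur, eta_k=0.02):
    """Running update k_t = (1 - eta_k) * k_{t-1} + eta_k * k_t'."""
    return (1.0 - eta_k) * k_prev + eta_k * k_cur

rng = np.random.default_rng(1)
sharp = rng.normal(0.0, 0.01, (8, 8)); sharp[4, 4] = 1.0   # confident, peaky response
noisy = rng.normal(0.0, 0.10, (8, 8)); noisy[4, 4] = 0.6   # flatter, less confident response
k_conv, k_trad = adaptive_weights(psr(sharp), psr(noisy))
```

The sharper response yields a higher PSR and therefore a larger share of the fusion weight.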
Step6, using the target position information obtained in Step5, predict the scale change of the target with an added scale filter.
A scale "pyramid" is sampled at the previously predicted tracking position. The sizes of the target samples used for scale estimation are chosen as:

$$a^{s}P\times a^{s}R,\qquad s\in\left\{\left\lfloor-\tfrac{S-1}{2}\right\rfloor,\ldots,\left\lfloor\tfrac{S-1}{2}\right\rfloor\right\}$$

where $P$ and $R$ are the width and height of the target in the previous frame, $a$ is the scale factor, and $S$ is the total number of scales. The target samples at the different scales are then uniformly resized to $P\times R$ and correlated with the single-scale correlation filter to obtain the scale response map; the position of the maximum response corresponds to the best target scale.
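The scale "pyramid" sizes can be enumerated directly. The scale factor a = 1.02 and S = 33 below are typical DSST-style values, assumed here for illustration; the patent does not fix them.

```python
def scale_pyramid(P, R, a=1.02, S=33):
    """Candidate sample sizes a**s * P by a**s * R for
    s in {-(S-1)//2, ..., (S-1)//2}."""
    half = (S - 1) // 2
    return [((a ** s) * P, (a ** s) * R) for s in range(-half, S - half)]

sizes = scale_pyramid(100, 50)
print(len(sizes), sizes[len(sizes) // 2])  # 33 (100.0, 50.0)
```

The middle entry (s = 0) reproduces the previous-frame size; entries on either side shrink or grow it geometrically.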
Step7, after the position information and scale information of the target are obtained, update the correlation-filter models and continue tracking until the last frame.
Embodiments of the present invention are explained in detail above with reference to the accompanying drawings, but the present invention is not limited to the above embodiments; various changes may also be made within the knowledge of a person skilled in the art without departing from the concept of the invention.

Claims (2)

1. A correlation-filter target tracking method based on contextual information and multi-feature fusion, characterized by comprising:
Step1, obtaining the initial position information and scale information of the target;
Step2, extracting the traditional features of the target and its contextual information from the initial information obtained in Step1: HOG features and color histogram features;
Step3, establishing a correlation-filter model for each of the HOG features and color histogram features extracted in Step2, and fusing the two traditional features with fixed coefficients to obtain the correlation-filter response;
Step4, obtaining the convolutional features of the initial target and its contextual information with a convolutional network from deep learning;
Step5, establishing a correlation-filter model with the convolutional features extracted in Step4, fusing it adaptively with the traditional-feature response obtained in Step3 to obtain the final filter response, and predicting the target position;
Step6, using the target position information obtained in Step5, predicting the scale change of the target with an added scale filter;
Step7, after the position information and scale information of the target are obtained, updating the correlation-filter models and continuing tracking until the last frame.
2. The correlation-filter target tracking method based on contextual information and multi-feature fusion according to claim 1, characterized in that the specific steps for establishing the correlation-filter models in Step3 and Step5 from the traditional and convolutional features of the target and its contextual information are as follows:
Let x denote the features of the currently extracted image patch; its cyclic shifts form the circulant matrix X. Around the target sample x, n context patches of the same size are taken above, below and to the sides, giving samples x_i with corresponding cyclic-shift matrices X_i. The n context samples are used as negative samples when training the classifier, so that the filter has a high response at the target sample and a response close to zero at the context samples. With the contextual information added, the ridge-regression objective is:

$$\min_{w}\ \|Xw-y\|_2^2+\lambda\|w\|_2^2+\lambda_1\sum_{i=1}^{n}\|X_i w\|_2^2 \qquad (1)$$

In formula (1), $\lambda$ and $\lambda_1$ are regularization parameters, $w$ is the filter, and $y$ denotes the desired correlation-filter output. Stacking the circulant matrices of the context samples and the target sample gives:

$$\min_{w}\ \|Bw-\bar{y}\|_2^2+\lambda\|w\|_2^2,\qquad B=\begin{bmatrix}X\\ \sqrt{\lambda_1}X_1\\ \vdots\\ \sqrt{\lambda_1}X_n\end{bmatrix},\quad \bar{y}=\begin{bmatrix}y\\ 0\\ \vdots\\ 0\end{bmatrix} \qquad (2)$$

In formula (2), $B$ is a block-circulant matrix. Every circulant matrix is diagonalized by the discrete Fourier transform (DFT) matrix, so the objective can be written elementwise in the Fourier domain:

$$\min_{\hat{w}}\ \|\hat{x}\odot\hat{w}-\hat{y}\|_2^2+\lambda\|\hat{w}\|_2^2+\lambda_1\sum_{i=1}^{n}\|\hat{x}_i\odot\hat{w}\|_2^2 \qquad (3)$$

The filter $w$ is then solved quickly in the Fourier domain as:

$$\hat{w}=\frac{\hat{x}^{*}\odot\hat{y}}{\hat{x}^{*}\odot\hat{x}+\lambda+\lambda_1\sum_{i=1}^{n}\hat{x}_i^{*}\odot\hat{x}_i} \qquad (4)$$
CN201910548179.7A 2019-06-24 2019-06-24 Correlation-filter target tracking method based on contextual information and multi-feature fusion Pending CN110348492A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910548179.7A CN110348492A (en) 2019-06-24 2019-06-24 Correlation-filter target tracking method based on contextual information and multi-feature fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910548179.7A CN110348492A (en) 2019-06-24 2019-06-24 Correlation-filter target tracking method based on contextual information and multi-feature fusion

Publications (1)

Publication Number Publication Date
CN110348492A (en) 2019-10-18

Family

ID=68182853

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910548179.7A Pending CN110348492A (en) 2019-06-24 2019-06-24 Correlation-filter target tracking method based on contextual information and multi-feature fusion

Country Status (1)

Country Link
CN (1) CN110348492A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111812636A (en) * 2020-06-01 2020-10-23 杭州电子科技大学 Particle filter tracking-before-detection method based on weight fusion selection
CN111862160A (en) * 2020-07-23 2020-10-30 中国兵器装备集团自动化研究所 Target tracking method, medium and system based on ARM platform
CN112651999A (en) * 2021-01-19 2021-04-13 滨州学院 Unmanned aerial vehicle ground target real-time tracking method based on space-time context perception
CN113538509A (en) * 2021-06-02 2021-10-22 天津大学 Visual tracking method and device based on adaptive correlation filtering feature fusion learning

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016026370A1 (en) * 2014-08-22 2016-02-25 Zhejiang Shenghui Lighting Co., Ltd. High-speed automatic multi-object tracking method and system with kernelized correlation filters
CN106127776A (en) * 2016-06-28 2016-11-16 北京工业大学 Robot target recognition and motion decision method based on multi-feature spatio-temporal context
WO2017088050A1 (en) * 2015-11-26 2017-06-01 Sportlogiq Inc. Systems and methods for object tracking and localization in videos with adaptive image representation
CN107403175A (en) * 2017-09-21 2017-11-28 昆明理工大学 Visual tracking method and visual tracking system under a moving background
CN107451601A (en) * 2017-07-04 2017-12-08 昆明理工大学 Moving workpiece recognition method based on spatio-temporal-context fully convolutional networks
CN107680119A (en) * 2017-09-05 2018-02-09 燕山大学 Tracking algorithm based on spatio-temporal context fusing multiple features and a scale filter
KR101837407B1 (en) * 2017-11-03 2018-03-12 국방과학연구소 Apparatus and method for image-based target tracking
US20180268559A1 (en) * 2017-03-16 2018-09-20 Electronics And Telecommunications Research Institute Method for tracking object in video in real time in consideration of both color and shape and apparatus therefor
CN108647694A (en) * 2018-04-24 2018-10-12 武汉大学 Correlation-filter target tracking method based on context awareness and automatic response
WO2018208245A1 (en) * 2017-05-12 2018-11-15 Aselsan Elektronik Sanayi Ve Ticaret Anonim Sirketi A method for correlation filter based visual tracking
CN109285179A (en) * 2018-07-26 2019-01-29 昆明理工大学 Moving target tracking method based on multi-feature fusion
CN109461172A (en) * 2018-10-25 2019-03-12 南京理工大学 Adaptive correlation-filter video tracking method combining hand-crafted and deep features
CN109741366A (en) * 2018-11-27 2019-05-10 昆明理工大学 Correlation-filter target tracking method fusing multi-layer convolutional features


Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
BIN ZHOU: "Adaptive Context-Aware and Structural Correlation Filter for Visual Tracking", doi:10.3390/app9071338 *
FENG TANG: "Spatial-aware correlation filters with adaptive weight maps for visual tracking", Neurocomputing *
JIMMY T. MBELWA: "Visual tracking using objectness-bounding box regression and correlation filters", Journal of Electronic Imaging *
JOAO F. HENRIQUES: "Exploiting the Circulant Structure of Tracking-by-detection with Kernels", ECCV '12 *
MATTHIAS MUELLER: "Context-Aware Correlation Filter Tracking", 2017 IEEE Conference on CVPR *
LI Guoyou et al.: "A kernelized correlation filter target tracking algorithm based on multi-feature fusion", Computers and Applied Chemistry *
YANG Yang: "Surveillance target tracking based on multi-feature fusion under a spatio-temporal context model", Computer Measurement & Control *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111812636A (en) * 2020-06-01 2020-10-23 杭州电子科技大学 Particle filter tracking-before-detection method based on weight fusion selection
CN111812636B (en) * 2020-06-01 2023-06-13 杭州电子科技大学 Particle filtering pre-detection tracking method based on weight fusion selection
CN111862160A (en) * 2020-07-23 2020-10-30 中国兵器装备集团自动化研究所 Target tracking method, medium and system based on ARM platform
CN111862160B (en) * 2020-07-23 2023-10-13 中国兵器装备集团自动化研究所有限公司 Target tracking method, medium and system based on ARM platform
CN112651999A (en) * 2021-01-19 2021-04-13 滨州学院 Unmanned aerial vehicle ground target real-time tracking method based on space-time context perception
CN113538509A (en) * 2021-06-02 2021-10-22 天津大学 Visual tracking method and device based on adaptive correlation filtering feature fusion learning

Similar Documents

Publication Publication Date Title
CN110070074B Method for constructing pedestrian detection model
CN109145939B Semantic segmentation method with a small-target-sensitive dual-channel convolutional neural network
CN110348492A Correlation-filter target tracking method based on contextual information and multi-feature fusion
CN109816689A Moving target tracking method with adaptive fusion of multi-layer convolutional features
CN103268495B Human behavior modeling and recognition method based on prior-knowledge clustering in computer systems
Li et al. Adaptive deep convolutional neural networks for scene-specific object detection
CN110555387B Behavior recognition method based on the space-time volume of local joint trajectories in skeleton sequences
CN109919981A Multi-object tracking method with multi-feature fusion assisted by Kalman filtering
CN107330357A Visual SLAM loop-closure detection method based on deep neural networks
CN109285179A Moving target tracking method based on multi-feature fusion
CN109902806A Method for determining object bounding boxes in noisy images based on convolutional neural networks
CN108665481A Adaptive anti-occlusion infrared target tracking method with multi-layer depth feature fusion
Cao et al. Rapid detection of blind roads and crosswalks by using a lightweight semantic segmentation network
CN106056628A Target tracking method and system based on deep convolutional neural network feature fusion
CN105678231A Pedestrian image detection method based on sparse coding and neural networks
CN105550678A Human motion feature extraction method based on globally salient edge regions
CN111862145B Target tracking method based on multi-scale pedestrian detection
CN109101938A Multi-label age estimation method based on convolutional neural networks
CN105976397B Target tracking method
CN103440471B Human action recognition method based on low-rank representation
CN109241995A Image recognition method based on a modified ArcFace loss function
CN108154113A Fall event detection method based on fully-convolutional-network heat maps
CN109635695A Pedestrian re-identification method based on triplet convolutional neural networks
CN105184229A Real-time pedestrian detection method in dynamic scenes based on online learning
CN108682022A Visual tracking method and system based on adversarial transfer networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
AD01 Patent right deemed abandoned

Effective date of abandoning: 20230317