CN107945210A - Target tracking algorithm based on deep learning and environment self-adaptation - Google Patents

Target tracking algorithm based on deep learning and environment self-adaptation

Info

Publication number
CN107945210A
CN107945210A (application CN201711237457.4A)
Authority
CN
China
Prior art keywords
target
sample
positive sample
pretreatment
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711237457.4A
Other languages
Chinese (zh)
Other versions
CN107945210B (en)
Inventor
周圆
李孜孜
曹颖
杜晓婷
杨鸿宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN201711237457.4A priority Critical patent/CN107945210B/en
Publication of CN107945210A publication Critical patent/CN107945210A/en
Application granted granted Critical
Publication of CN107945210B publication Critical patent/CN107945210B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/248Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20021Dividing image into blocks, subimages or windows
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a target tracking algorithm based on deep learning and environment self-adaptation. The tracking algorithm consists of two parts. The first part is preprocessing: information is extracted from each frame of the tracking video, and the positive and negative samples taken are further screened by saliency detection before being used by the convolutional neural network. The second part is a convolutional neural network following the VGG model: three convolutional layers first extract the target features, fully connected layers then classify target and background, and the position of the target to be tracked is finally obtained, after which tracking of the next frame begins. Compared with the prior art, (1) the invention makes accurate use of the preprocessed image information while reducing computational complexity, so that tracking is more accurate, and the invention is therefore original; (2) the tracker adapts to a variety of complex scenes and has a wide range of applications.

Description

Target tracking algorithm based on deep learning and environment self-adaptation
Technical field
The present invention relates to the field of target tracking in computer vision, and more particularly to a target tracking algorithm that is based on a deep learning method and adapts to its environment.
Background technology
Humans contact and communicate with the outside world through their senses, but human capability and field of view are very limited, which makes human vision constrained, and even inefficient, in many application fields. With the rapid development of digital computer technology today, computer vision is attracting more and more attention: people hope to replace the human "eye" with computers and let computers process visual information intelligently, making up for many shortcomings of human vision. Computer vision is a strongly interdisciplinary subject that merges artificial neural networks, psychology, physics, computer graphics, mathematics and various other fields.
At present, target tracking is one of the most active problems in computer vision, and people are paying increasing attention to this field. Its applications are very broad: motion analysis, activity recognition, surveillance and human-computer interaction all rely on knowledge of target tracking, so it has important research value in science and engineering and great application prospects, attracting the interest of a large number of researchers at home and abroad.
Deep learning has been applied successfully to image processing and provides a new way of thinking for target tracking. In the target tracking field, the deep architecture of deep learning automatically learns more abstract and essential features from the acquired samples, which can then be applied to new sequences. Tracking techniques combined with deep learning methods are gradually surpassing traditional tracking methods in performance and are becoming a new trend in this field.
So far, no target tracking algorithm based on deep learning and environment self-adaptation has been found in the papers and documents published at home and abroad.
Summary of the invention
In view of the above prior art, the present invention proposes a target tracking algorithm based on deep learning and environment self-adaptation. It uses a convolutional neural network and adaptively adjusts the network parameters, and combines the preprocessing advantage of saliency detection, so that the tracker achieves very high accuracy in a variety of tracking scenes.
The target tracking algorithm based on deep learning and environment self-adaptation of the present invention comprises the following steps:
Step (1): use pictures of 107 × 107 pixels as input;
Step (2): preprocessing, comprising positive sample preprocessing and negative sample preprocessing. The positive sample preprocessing step includes: first, according to the ground-truth value, take a rectangle around the target that is larger than the ground-truth box of the target, compute the saliency map of the positive sample and calculate the proportion of the whole sample box that it occupies; if the proportion is greater than a set threshold, the sample is taken as a pure positive sample, and if it is smaller than the set threshold, it is discarded. Then, a saliency detection algorithm is used to detect the shape of the target, the resulting saliency map is binarized and mapped back into the image of the original frame, and the binarized whole frame is sampled again according to the sampling procedure above. The negative sample preprocessing step includes: screening the negative samples with a hard example mining algorithm, performing a forward pass of the sampled samples through the convolutional neural network, sorting the samples by loss in descending order, selecting the top-ranked samples as "hard examples", and training the network with this part of the samples;
Step (3): use a bounding box regression model when training on the first frame. Specifically, for the given first frame of the test video sequence, a linear regression model is trained with the three-layer convolutional network to predict the position of the target and extract the target features; in each subsequent frame of the video sequence, the regression model is used to adjust the position of the bounding box of the target.
Compared with the prior art, the present invention has the following effects:
(1) It makes accurate use of the preprocessed image information while reducing computational complexity, so that tracking is more accurate; the invention is therefore original;
(2) The tracker adapts to a variety of complex scenes and has a wide range of applications.
Brief description of the drawings
Fig. 1 is the overall framework of the target tracking algorithm based on deep learning and environment self-adaptation of the present invention; Fig. 1 (a) is the basic model of the tracking algorithm; Fig. 1 (b) is the saliency detection model; Fig. 1 (c) is the deep learning tracking model;
Fig. 2 shows the tracking test results on the Diving sequence;
Fig. 3 shows the tracking test results on the ball sequence.
Detailed description of the embodiments
The target tracking algorithm based on deep learning and environment self-adaptation of the present invention consists of two parts. The first part is preprocessing: information is extracted from each frame of the tracking video, and the positive and negative samples taken are then further screened by saliency detection for the convolutional neural network. The second part is a convolutional neural network following the VGG model: three convolutional layers first extract the target features, fully connected layers then classify target and background, and the position of the target to be tracked is finally obtained, after which tracking of the next frame begins.
The specific workflow is described in detail as follows:
Step 1: use pictures of 107 × 107 pixels as input, so that the feature map output by the convolutional layers matches the input size and the input to the fully connected layers is guaranteed to be a one-dimensional vector; a minimal network sketch is given below.
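For concreteness, the following is a minimal PyTorch sketch of such a network, not the patented implementation itself: the kernel sizes, strides and channel counts are assumptions in the style of VGG-M/MDNet, since the text only fixes the 107 × 107 input, the three convolutional layers and the fully connected target/background classifier.

```python
import torch
import torch.nn as nn

class TrackerCNN(nn.Module):
    """Three convolutional layers for feature extraction followed by fully
    connected layers that classify target vs. background (layer sizes are
    assumptions; only the 107x107 input comes from the text)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 96, kernel_size=7, stride=2), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),          # 107 -> 51 -> 25
            nn.Conv2d(96, 256, kernel_size=5, stride=2), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),          # 25 -> 11 -> 5
            nn.Conv2d(256, 512, kernel_size=3, stride=1), nn.ReLU(inplace=True),  # 5 -> 3
        )
        self.classifier = nn.Sequential(
            nn.Linear(512 * 3 * 3, 512), nn.ReLU(inplace=True),   # fc4
            nn.Linear(512, 512), nn.ReLU(inplace=True),           # fc5
            nn.Linear(512, 2),                                     # fc6: target / background
        )

    def forward(self, x):                     # x: (N, 3, 107, 107)
        f = self.features(x).flatten(1)       # (N, 512*3*3) one-dimensional feature vector
        return self.classifier(f)             # (N, 2) scores

scores = TrackerCNN()(torch.randn(4, 3, 107, 107))   # sanity check: output shape (4, 2)
```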
Step 2: preprocessing, comprising positive sample preprocessing and negative sample preprocessing.
(1) Positive sample preprocessing. The positive samples taken by the usual methods are sometimes in fact negative samples containing mostly background; such "positive samples" introduce errors into the training of the convolutional neural network. The present invention therefore screens the positive samples taken, so that the positive samples are purer. The concrete implementation is as follows:
First, a rectangle is taken around the target in the positive sample according to the ground-truth value; the rectangle must be larger than the ground-truth box of the target. The proportion of the whole sample box occupied by the saliency map is then calculated: if the proportion is greater than a set threshold, the sample is fed into the network as a pure positive sample; if it is smaller than the set threshold, it is discarded. This ensures that nearly all of the positive samples obtained are pure.
Then, "saliency" detection is performed, i.e., the salient object in a region is detected. Concretely, a saliency detection algorithm is used to detect the rough shape of the target, the resulting saliency map is binarized and mapped back into the image of the original frame, and the binarized whole frame is sampled again according to the sampling procedure above; the "saliency" method is then used to test the target in later frames.
The positive sample screening in this step is a general positive sample screening method applicable to most tracking algorithms; applying this idea to the pre-trained network affects the parameters of the whole network. A code sketch of the screening is given below.
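A minimal NumPy sketch of this screening step, assuming a saliency map with values in [0, 1] is already available from some saliency detector; the detector itself and the 0.4 ratio threshold are assumptions, since the text only speaks of "a set threshold".

```python
import numpy as np

def filter_positive_samples(saliency_map, boxes, ratio_threshold=0.4):
    """Keep only 'pure' positive sample boxes.

    saliency_map    : 2-D float array in [0, 1] from any saliency detector.
    boxes           : iterable of (x, y, w, h) candidate positive boxes (pixels).
    ratio_threshold : assumed value for 'some threshold' in the text.
    """
    binary = (saliency_map > 0.5).astype(np.uint8)       # binarize the saliency map
    kept = []
    for box in boxes:
        x, y, w, h = (int(round(v)) for v in box)
        patch = binary[y:y + h, x:x + w]
        if patch.size == 0:
            continue
        ratio = patch.mean()                             # fraction of salient pixels in the box
        if ratio >= ratio_threshold:                      # pure positive sample
            kept.append((x, y, w, h))
    return kept
```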
(2) Negative sample preprocessing
In tracking detection, most negative samples are redundant; only a few representative negative samples are useful for training the tracker, so the usual SGD method easily causes the tracker to drift. The most common way to solve this problem is hard example mining. Applying this idea to screen the negative samples, the sampled samples are passed once forward through the convolutional neural network and sorted by loss in descending order, and the top-ranked ones are selected: because these samples are close enough to the positive samples while not actually being positive, they are called "hard examples". Training the network with this part of the samples lets the network better learn the difference between positive and negative samples. A sketch of this mining step follows.
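The hard-example mining step could look roughly like the sketch below, assuming a PyTorch model that outputs (background, target) scores as in the earlier network sketch; the number of mined negatives and the batch size are illustrative assumptions.

```python
import torch

def mine_hard_negatives(model, neg_patches, num_hard=96, batch_size=256):
    """Run all sampled negatives through the network once (forward pass only)
    and keep the ones the classifier finds hardest, i.e. those with the highest
    target score and hence the largest loss for the 'background' label."""
    model.eval()
    scores = []
    with torch.no_grad():
        for i in range(0, len(neg_patches), batch_size):
            batch = neg_patches[i:i + batch_size]          # (B, 3, 107, 107)
            out = model(batch)                             # (B, 2) scores
            scores.append(out[:, 1])                       # positive-class score
    scores = torch.cat(scores)
    hard_idx = scores.topk(min(num_hard, len(scores))).indices
    return neg_patches[hard_idx]                           # the "hard examples"
```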
Step 3: use a bounding box regression model when training on the first frame. Specifically, for the given first frame of the test video sequence, a linear regression model is trained with the three-layer convolutional network to predict the position of the target and extract the target features; in each subsequent frame of the video sequence, the regression model is used to adjust the position of the bounding box of the target. The fully connected layers classify the target and background in the image, the image block with the largest target probability is regarded as the target to be tracked, which gives the position of the target to be tracked, and tracking of the next frame then begins.
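As an illustration only, the first-frame bounding-box regressor could be trained as below, using scikit-learn's Ridge in place of the linear regression model and R-CNN-style box-delta targets; both choices, and the ridge penalty value, are assumptions, since the text only states that a linear regression model is trained on features from the three-layer convolutional network.

```python
import numpy as np
from sklearn.linear_model import Ridge

def box_deltas(sample_boxes, gt_box):
    """R-CNN style regression targets (dx, dy, dw, dh) from (x, y, w, h) sample
    boxes to the ground-truth box; this parametrization is an assumption."""
    sx, sy, sw, sh = np.asarray(sample_boxes, dtype=float).T
    gx, gy, gw, gh = gt_box
    return np.stack([(gx - sx) / sw, (gy - sy) / sh,
                     np.log(gw / sw), np.log(gh / sh)], axis=1)

def train_bbox_regressor(feats, boxes, gt_box, alpha=1000.0):
    """feats: (N, D) convolutional features of first-frame samples; boxes: their
    (x, y, w, h) coordinates; gt_box: the first-frame ground-truth box."""
    reg = Ridge(alpha=alpha)                  # ridge penalty value is an assumption
    reg.fit(feats, box_deltas(boxes, gt_box))
    return reg

def apply_bbox_regressor(reg, feat, box):
    """Adjust one predicted box using the trained regression model."""
    dx, dy, dw, dh = reg.predict(np.asarray(feat, dtype=float).reshape(1, -1))[0]
    x, y, w, h = box
    return (x + dx * w, y + dy * h, w * np.exp(dw), h * np.exp(dh))
```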
In addition, a long/short-term update strategy can be used on the positive samples: the network is updated again with the positive samples collected over a period of time. Once the target is found to be lost during tracking, the short-term update strategy is used, in which the network is updated with the positive samples collected in this short period. The negative samples used in both update strategies are those collected in the short-term update model. Two frame index sets Ts and Tl are defined, with the short-term window set to Ts = 20 frames and the long-term window set to Tl = 100 frames. The purpose of this strategy is to keep the samples as "fresh" as possible, which is more favourable for the tracking result.
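The bookkeeping of the two frame index sets can be sketched as follows; Ts and Tl are kept as plain Python lists, and only the 20- and 100-frame window lengths come from the text.

```python
TAU_S, TAU_L = 20, 100          # short- and long-term window lengths from the text

def update_index_sets(t, Ts, Tl):
    """Append the current frame index and discard the oldest entries once the
    20-frame (short) / 100-frame (long) windows are exceeded, keeping the
    stored samples 'fresh'."""
    Ts.append(t)
    Tl.append(t)
    if len(Ts) > TAU_S:
        Ts.remove(min(Ts))      # reject the minimum (oldest) short-term frame index
    if len(Tl) > TAU_L:
        Tl.remove(min(Tl))      # reject the minimum (oldest) long-term frame index
    return Ts, Tl
```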
After the neural network has been trained offline, tracking is performed online for the video sequence to be tested, so the whole tracking algorithm needs an online tracking part. The online tracking algorithm is implemented as follows (a condensed code sketch is given after the listing):
Input: filters {w1, ..., w5} of the pre-trained convolutional neural network CNN;
the initialized target state x1.
Output: the estimated target states x_t*.
(1) Randomly initialize the weights w6 of the sixth, fully connected layer, so that w6 receives a random initial value;
(2) Train a bounding box regression model;
(3) Extract positive samples S1+ and negative samples S1-;
(4) Screen the positive samples using the saliency network;
(5) Update the weights {w4, w5, w6} of the fully connected layers using the extracted positive samples S1+ and negative samples S1-, where w4, w5 and w6 denote the weights of fully connected layers 4, 5 and 6 respectively;
(6) Initialize the long- and short-term index sets: Ts ← {1} and Tl ← {1};
(7) Repeat the following operations:
Extract the candidate samples x_t^i of the target;
Find the optimal target state through the formula x_t* = argmax_i f+(x_t^i), where x_t^i are the candidate samples; the formula states that the candidate whose positive score from the convolutional neural network is highest is the optimal target state;
If f+(x_t*) > 0.5, extract the training samples S_t+ and S_t-, and set
Ts ← Ts ∪ {t}, Tl ← Tl ∪ {t},
where t denotes frame t, and Ts and Tl denote the short- and long-term index sets respectively; appending t updates the values of the two frame index sets;
If the length of the short frame index set exceeds the set value of 20, i.e. |Ts| > τs, reject the minimum element from the short index set: Ts ← Ts \ {argmin_{v∈Ts} v}, where v denotes a value in the frame index set;
If the length of the long frame index set exceeds the set value of 100, i.e. |Tl| > τl, reject the minimum element from the long index set: Tl ← Tl \ {argmin_{v∈Tl} v};
Adjust the predicted position of the target using the bounding box regression model;
If f+(x_t*) < 0.5, update the weights {w4, w5, w6} using the positive samples and the negative samples in the short-term model;
Otherwise, update the weights {w4, w5, w6} using the positive samples in the long-term model and the negative samples in the short-term model.
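As an illustration only, the listing above can be condensed into the following Python sketch, assuming a PyTorch-style model that returns per-candidate (background, target) scores. The helpers update_index_sets and apply_bbox_regressor are the ones sketched earlier; sample_fn, extract_fn, conv_feat_fn and finetune_fc are hypothetical placeholders for candidate sampling, saliency-screened sample extraction, convolutional-feature extraction and fully-connected-layer fine-tuning, and the every-10-frames long-term update interval is an assumption (the listing only says "otherwise").

```python
def track(model, frames, init_box, pos_store, neg_store,
          sample_fn, extract_fn, conv_feat_fn, finetune_fc, regressor):
    """Condensed sketch of the on-line tracking loop (1)-(7) above; pos_store and
    neg_store must already hold the screened first-frame samples under key 1."""
    Ts, Tl = [1], [1]                                        # frame index sets, initialised with frame 1
    box = init_box
    for t, frame in enumerate(frames[1:], start=2):
        cands, patches = sample_fn(frame, box)               # candidate states x_t^i and their crops
        scores = model(patches)[:, 1]                        # f+(x_t^i): target score per candidate
        best = int(scores.argmax())
        box, best_score = cands[best], float(scores[best])   # x_t* = argmax_i f+(x_t^i)
        if best_score > 0.5:                                 # reliable: harvest new training samples
            pos_store[t], neg_store[t] = extract_fn(frame, box)
            Ts, Tl = update_index_sets(t, Ts, Tl)            # keep at most 20 / 100 frame indices
            box = apply_bbox_regressor(regressor, conv_feat_fn(patches[best]), box)
        if best_score < 0.5:                                 # target lost: short-term update
            finetune_fc(model, [pos_store[v] for v in Ts], [neg_store[v] for v in Ts])
        elif t % 10 == 0:                                    # periodic long-term update (assumed interval)
            finetune_fc(model, [pos_store[v] for v in Tl], [neg_store[v] for v in Ts])
        yield box
```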
Embodiments of the present invention are described in further detail below in conjunction with the attached drawings.
The target tracking algorithm based on deep learning and environment self-adaptation proposed by the patent is verified below. The training error of the algorithm is compared, by simulation experiments, with the training error of the algorithm before the improvement, and the validity of the algorithm is verified by a large number of experimental results. The experimental results are presented in the form of the tracked target boxes.
Candidate target generation: to generate candidate targets in each frame, N = 256 samples x_t^i ~ N(x_{t-1}, Σ) are drawn, where x_{t-1} denotes the previous target state and the covariance matrix Σ is a diagonal matrix with parameter 0.09r², r being the mean of the length and width of the target box in the previous frame. The size of each candidate target box is 1.5 times that of the original-state target box.
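A NumPy sketch of this candidate generation, assuming boxes are given as (centre-x, centre-y, width, height) and interpreting "1.5 times" as scaling the candidate box relative to the previous target box; both interpretations are assumptions.

```python
import numpy as np

def sample_candidates(prev_box, n=256, var_scale=0.09, rng=np.random.default_rng()):
    """Draw N=256 candidate boxes around the previous target state: translation is
    sampled from a Gaussian whose covariance is the diagonal matrix
    diag(0.09*r^2, 0.09*r^2), with r the mean of the previous width and height."""
    cx, cy, w, h = prev_box
    r = (w + h) / 2.0
    std = np.sqrt(var_scale) * r
    centers = rng.normal(loc=[cx, cy], scale=std, size=(n, 2))
    sizes = np.tile([1.5 * w, 1.5 * h], (n, 1))      # candidate box is 1.5x the previous box
    return np.hstack([centers, sizes])               # (n, 4) candidate boxes
```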
Training data: during offline multi-domain training, 50 positive samples and 200 negative samples are used from each frame; positive samples and negative samples have an overlap with the ground-truth box of ≥ 0.7 and ≤ 0.5 respectively, and positive and negative samples are distinguished according to exactly this criterion. Likewise, for online learning, positive and negative samples are collected following the sampling overlap criterion above, with separate sets of positive and negative samples taken during the first-frame sampling; for the bounding box regression, 1000 training samples are used.
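The overlap criterion can be sketched as follows; iou() here is an ordinary intersection-over-union for (x, y, w, h) boxes, and only the 0.7 / 0.5 thresholds come from the text.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    return inter / (aw * ah + bw * bh - inter)

def label_samples(boxes, gt_box, pos_thr=0.7, neg_thr=0.5):
    """Split sampled boxes into positives (IoU >= 0.7 with the ground truth) and
    negatives (IoU <= 0.5), following the training-data criterion above."""
    pos, neg = [], []
    for b in boxes:
        o = iou(b, gt_box)
        if o >= pos_thr:
            pos.append(b)
        elif o <= neg_thr:
            neg.append(b)
    return pos, neg
```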
Network learning: to train the multi-domain network with K branches, the learning rate of the convolutional layers is set to 0.0001 and the learning rate of the fully connected layers is set to 0.001. When the fully connected layers are first trained, 30 iterations are run; the learning rates of fully connected layers 4 and 5 are set to 0.0001, and the learning rate of the sixth fully connected layer is set to 0.001. A sketch of this learning-rate setup is given below.
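A sketch of this learning-rate setup with PyTorch parameter groups, reusing the TrackerCNN layout assumed earlier; the mapping of classifier indices to fc4/fc5/fc6 and the SGD momentum value are assumptions.

```python
import torch.optim as optim

def make_optimizer(model, online=False):
    """Per-layer learning rates from the text: conv 1e-4 / fc 1e-3 for off-line
    multi-domain training, and fc4/fc5 1e-4, fc6 1e-3 for the initial on-line
    training of the fully connected layers (run for 30 iterations)."""
    if online:
        groups = [{'params': model.classifier[0].parameters(), 'lr': 1e-4},   # fc4
                  {'params': model.classifier[2].parameters(), 'lr': 1e-4},   # fc5
                  {'params': model.classifier[4].parameters(), 'lr': 1e-3}]   # fc6
    else:
        groups = [{'params': model.features.parameters(),   'lr': 1e-4},      # conv layers
                  {'params': model.classifier.parameters(), 'lr': 1e-3}]      # fc layers
    return optim.SGD(groups, lr=1e-3, momentum=0.9)   # momentum value is an assumption
```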
Table 1 gives the experimental results for the improved algorithm with the "saliency" preprocessing network added, and Table 2 the results for the unimproved algorithm without the preprocessing network:
Table 1. Training results of the improved algorithm
Table 2. Training results of the unimproved algorithm

Claims (1)

1. A target tracking algorithm based on deep learning and environment self-adaptation, characterized in that the method comprises the following steps:
Step (1): use pictures of 107 × 107 pixels as input;
Step (2): preprocessing, comprising positive sample preprocessing and negative sample preprocessing. The positive sample preprocessing step includes: first, according to the ground-truth value, take a rectangle around the target that is larger than the ground-truth box of the target, compute the saliency map of the positive sample and calculate the proportion of the whole sample box that it occupies; if the proportion is greater than a set threshold, the sample is taken as a pure positive sample, and if it is smaller than the set threshold, it is discarded. Then, a saliency detection algorithm is used to detect the shape of the target, the resulting saliency map is binarized and mapped back into the image of the original frame, and the binarized whole frame is sampled again according to the sampling procedure above. The negative sample preprocessing step includes: screening the negative samples with a hard example mining algorithm, performing a forward pass of the sampled samples through the convolutional neural network, sorting the samples by loss in descending order, selecting the top-ranked samples as "hard examples", and training the network with this part of the samples;
Step (3): use a bounding box regression model when training on the first frame. Specifically, for the given first frame of the test video sequence, a linear regression model is trained with the three-layer convolutional network to predict the position of the target and extract the target features; in each subsequent frame of the video sequence, the regression model is used to adjust the position of the bounding box of the target.
CN201711237457.4A 2017-11-30 2017-11-30 Target tracking method based on deep learning and environment self-adaption Active CN107945210B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711237457.4A CN107945210B (en) 2017-11-30 2017-11-30 Target tracking method based on deep learning and environment self-adaption

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711237457.4A CN107945210B (en) 2017-11-30 2017-11-30 Target tracking method based on deep learning and environment self-adaption

Publications (2)

Publication Number Publication Date
CN107945210A true CN107945210A (en) 2018-04-20
CN107945210B CN107945210B (en) 2021-01-05

Family

ID=61946958

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711237457.4A Active CN107945210B (en) 2017-11-30 2017-11-30 Target tracking method based on deep learning and environment self-adaption

Country Status (1)

Country Link
CN (1) CN107945210B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104915972A (en) * 2014-03-13 2015-09-16 欧姆龙株式会社 Image processing apparatus, image processing method and program
CN103955718A (en) * 2014-05-15 2014-07-30 厦门美图之家科技有限公司 Image subject recognition method
EP3229206A1 (en) * 2016-04-04 2017-10-11 Xerox Corporation Deep data association for online multi-class multi-object tracking
CN106709936A (en) * 2016-12-14 2017-05-24 北京工业大学 Single target tracking method based on convolution neural network
CN107369166A (en) * 2017-07-13 2017-11-21 深圳大学 A kind of method for tracking target and system based on multiresolution neutral net

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109345559A (en) * 2018-08-30 2019-02-15 西安电子科技大学 Expand the motion target tracking method with depth sorting network based on sample
CN109345559B (en) * 2018-08-30 2021-08-06 西安电子科技大学 Moving target tracking method based on sample expansion and depth classification network
CN109344793B (en) * 2018-10-19 2021-03-16 北京百度网讯科技有限公司 Method, apparatus, device and computer readable storage medium for recognizing handwriting in the air
CN109344793A (en) * 2018-10-19 2019-02-15 北京百度网讯科技有限公司 Aerial hand-written method, apparatus, equipment and computer readable storage medium for identification
US11423700B2 (en) 2018-10-19 2022-08-23 Beijing Baidu Netcom Science And Technology Co., Ltd. Method, apparatus, device and computer readable storage medium for recognizing aerial handwriting
CN111192288A (en) * 2018-11-14 2020-05-22 天津大学青岛海洋技术研究院 Target tracking algorithm based on deformation sample generation network
CN111192288B (en) * 2018-11-14 2023-08-04 天津大学青岛海洋技术研究院 Target tracking algorithm based on deformation sample generation network
CN109682392B (en) * 2018-12-28 2020-09-01 山东大学 Visual navigation method and system based on deep reinforcement learning
CN109682392A (en) * 2018-12-28 2019-04-26 山东大学 Vision navigation method and system based on deeply study
CN113496188A (en) * 2020-04-08 2021-10-12 四零四科技股份有限公司 Apparatus and method for processing video content analysis
CN113496188B (en) * 2020-04-08 2024-04-02 四零四科技股份有限公司 Apparatus and method for processing video content analysis
CN113538507A (en) * 2020-04-15 2021-10-22 南京大学 Single-target tracking method based on full convolution network online training
CN113538507B (en) * 2020-04-15 2023-11-17 南京大学 Single-target tracking method based on full convolution network online training
CN112465862A (en) * 2020-11-24 2021-03-09 西北工业大学 Visual target tracking method based on cross-domain deep convolutional neural network
CN112465862B (en) * 2020-11-24 2024-05-24 西北工业大学 Visual target tracking method based on cross-domain depth convolution neural network

Also Published As

Publication number Publication date
CN107945210B (en) 2021-01-05

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant