CN106651915A - Target tracking method of multi-scale expression based on convolutional neural network - Google Patents

Target tracking method of multi-scale expression based on convolutional neural network

Info

Publication number
CN106651915A
Authority
CN
China
Prior art keywords
network
model
convolutional neural
target
scale
Prior art date
Legal status
Granted
Application number
CN201611201895.0A
Other languages
Chinese (zh)
Other versions
CN106651915B (en)
Inventor
唐爽硕
王凡
胡小鹏
Current Assignee
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date
Filing date
Publication date
Application filed by Dalian University of Technology filed Critical Dalian University of Technology
Priority to CN201611201895.0A priority Critical patent/CN106651915B/en
Publication of CN106651915A publication Critical patent/CN106651915A/en
Application granted granted Critical
Publication of CN106651915B publication Critical patent/CN106651915B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/2148Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the process organisation or structure, e.g. boosting cascade
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20016Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of image processing and provides a target tracking method of multi-scale expression based on a convolutional neural network. The method comprises the following steps: pre-training a multi-scale convolutional neural network structure; constructing a multiple-instance classifier from the multi-scale feature representation; improved multiple-instance online tracking; and multi-step differential model updating. By exploiting the ability of the convolutional neural network to automatically learn deep features, the algorithm obtains a deep image representation that carries semantic information; at the same time, it builds the multi-scale representation of the image with the Laplacian pyramid and trains the multi-scale convolutional neural network structure. Combined with an improved multiple-instance learning algorithm, an online tracker is constructed to achieve stable tracking of the target.

Description

Target tracking method of multi-scale expression based on convolutional neural network
Technical field
The present invention relates to a target tracking method based on the multi-scale representation of convolutional neural networks and belongs to the technical field of image processing.
Background technology
In recent years, with the proposal of a large number of target tracking algorithms, target tracking technology has developed rapidly. However, practical tracking tasks involve many difficulties, such as occlusion, viewpoint change, target deformation, illumination change and unpredictable complex backgrounds, which cause many existing algorithms to fail. Tracking algorithms based on discriminative models generally build an appearance model from the difference between target and background and train a binary classifier to separate the target from the background. Most existing tracking algorithms rely on hand-crafted features to construct the target appearance model, which cannot effectively express the essential information of the target; especially under complex conditions the expressive power of the appearance model is limited, causing the target model to fail. During tracking, the errors introduced by incorrect tracking accumulate and cause drift. Tracking algorithms based on multiple-instance learning can alleviate drift to some extent, but because the model function saturates easily, the discriminative ability of the model declines and the tracking performance is limited.
Summary of the invention
To address the problems of the prior art, the present invention decomposes the image into multiple scales with a Laplacian pyramid and provides a target tracking algorithm based on the multi-scale representation of convolutional neural networks. By exploiting the ability of convolutional neural networks to automatically learn deep features, the algorithm obtains a deep image representation that carries semantic information; at the same time, it builds a multi-scale representation of the image with the Laplacian pyramid and trains a multi-scale convolutional neural network structure. Combined with an improved multiple-instance learning algorithm, an online tracker is constructed to achieve stable tracking of the target.
The technical scheme of the present invention is as follows:
A target tracking method based on the multi-scale representation of convolutional neural networks comprises the following steps:
Step 1: pre-training of the multi-scale convolutional neural network structure;
Step 2: construction of a multiple-instance classifier from the multi-scale feature representation;
Step 3: improved multiple-instance online tracking;
Step 4: multi-step differential model update.
Beneficial effects of the present invention: natural images contain multi-scale structural information; the coarse scales of an image generally reflect its overall structure, while the fine scales contain more image detail. The image is decomposed into multiple scales with a Laplacian pyramid, and a target tracking algorithm based on the multi-scale representation of convolutional neural networks is proposed. The method extracts multi-scale convolutional features and forms an appearance model with stronger expressive power. Combined with an improved multiple-instance learning algorithm, it solves the decline in discriminative ability caused by the easy saturation of the model. Compared with existing target tracking algorithms, the method achieves more stable tracking with higher accuracy.
Description of the drawings
Fig. 1 is a schematic diagram of the convolutional neural network structure;
Fig. 2 is a schematic diagram of multi-scale convolutional neural network training;
Fig. 3 shows the percentage of frames within different center-error distances;
Fig. 4 shows the percentage of successfully tracked frames.
Specific embodiment
The present invention is further described below.
A target tracking method based on the multi-scale representation of convolutional neural networks comprises the following steps:
Step 1: pre-training the multi-scale convolutional neural network model
The image is decomposed with a Laplacian pyramid to build the image pyramid space, and the images at the three scales of the Laplacian pyramid are extracted as inputs to the network models. Multi-scale convolutional neural network models are built with the Lasagne deep learning framework, forming a pool of network models; each network model contains three convolutional layers, two fully connected layers and one softmax layer. The network model is shown in Fig. 1. The network parameters are initialized with the shallow structure of VGG-net.
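As an illustration of the three-scale input just described, the following sketch builds a three-level Laplacian pyramid with OpenCV. The function name and the use of cv2 are illustrative assumptions, not part of the patent.

```python
import cv2

def laplacian_pyramid_3(img):
    """Return the three Laplacian-pyramid levels used as network inputs (fine to coarse)."""
    levels = []
    current = img
    for _ in range(3):
        down = cv2.pyrDown(current)                                         # next Gaussian level
        up = cv2.pyrUp(down, dstsize=(current.shape[1], current.shape[0]))  # back to current size
        levels.append(cv2.subtract(current, up))                            # band-pass (Laplacian) level
        current = down
    return levels
```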
During pre-training, the network parameters are continuously optimized on part of the standard tracking sequences. The images of each scale correspond respectively to a coarse-scale network, a medium-scale network and a fine-scale network; the networks of the different scales share parameters, and the scales are trained from coarse to fine. To obtain information about objects of different categories, a separate network is built for each category of video set so as to capture the common characteristics of objects of different categories; the networks share parameters in all layers except the last one and are trained iteratively, as shown in Fig. 2. During training, cross entropy is used as the loss function L, defined as:
L = -∑_i t_i log(p_i)  (1)
where t_i is the true label of the i-th image block (target or background) and p_i is the predicted probability of the i-th image block. During training the network parameters are continuously optimized with stochastic gradient descent (SGD) until all samples are sufficiently trained; finally, the network parameters of the three scales are retained, yielding the pre-trained multi-scale convolutional neural network model.
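A minimal NumPy sketch of the cross-entropy loss of formula (1) and a plain SGD parameter update; the function names and the learning rate are illustrative assumptions.

```python
import numpy as np

def cross_entropy(t, p, eps=1e-12):
    """L = -sum_i t_i * log(p_i); p is clipped for numerical stability."""
    p = np.clip(p, eps, 1.0)
    return -np.sum(t * np.log(p))

def sgd_step(params, grads, lr=1e-3):
    """One stochastic-gradient-descent update over a list of parameter arrays."""
    return [w - lr * g for w, g in zip(params, grads)]
```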
Step 2: constructing a multiple-instance classifier from the multi-scale feature representation
The last layer of the pre-trained multi-scale convolutional model is removed and a randomly initialized softmax layer is added; the network parameters are fine-tuned with the target given in the first frame of the sequence. The feature maps of the third convolutional layer are then extracted from the networks of the three scales as convolutional features; at the same time, the features of the second convolutional layer of the fine-scale network are extracted, and together they compose the multi-scale representation of the appearance model. To reduce the feature dimensionality, the second-layer convolutional feature maps are downsampled with max pooling. All convolutional features are concatenated to form the multi-scale appearance model of the target.
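The sketch below assembles the multi-scale appearance descriptor as described: conv-3 feature maps from the three scale networks plus max-pooled conv-2 features of the fine-scale network, flattened and concatenated. The extraction of the feature maps themselves is left abstract; the helper names are assumptions.

```python
import numpy as np

def max_pool2(x, k=2):
    """k x k max pooling on a (C, H, W) feature map to reduce its dimensionality."""
    c, h, w = x.shape
    x = x[:, :h - h % k, :w - w % k]
    return x.reshape(c, h // k, k, w // k, k).max(axis=(2, 4))

def appearance_feature(conv3_coarse, conv3_medium, conv3_fine, conv2_fine):
    """Concatenate all convolutional features into one multi-scale descriptor."""
    parts = [conv3_coarse, conv3_medium, conv3_fine, max_pool2(conv2_fine)]
    return np.concatenate([p.ravel() for p in parts])
```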
To realize online updating of the target, the target model must be updated in real time. The obtained convolutional features are used as the feature pool, and a binary classifier is learned with a multiple-instance learning algorithm. The classifier is a strong classifier composed of multiple weak classifiers and is implemented in a boosting manner: with the log-likelihood as the objective function to be maximized, K weak classifiers are selected in turn and combined by weighted summation, thus constructing the multiple-instance classifier.
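A compact sketch of this boosting-style construction: K weak classifiers are chosen greedily so that each addition maximizes the bag log-likelihood, and their responses are summed (unit weights for simplicity) into the strong classifier H. The bag/weak-classifier representation is a simplifying assumption, not the patent's exact formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def select_weak_classifiers(weak_scores, bag_ids, bag_labels, K):
    """weak_scores: (M, N) responses of M candidate weak classifiers on N instances;
    bag_ids: (N,) NumPy array of bag indices; bag_labels: {bag index: 0 or 1}."""
    H = np.zeros(weak_scores.shape[1])          # strong-classifier response so far
    chosen = []
    for _ in range(K):
        best, best_ll = None, -np.inf
        for m in range(weak_scores.shape[0]):
            if m in chosen:
                continue
            p_inst = sigmoid(H + weak_scores[m])
            ll = 0.0
            for b, y in bag_labels.items():     # bag probability via noisy-OR over its instances
                p_bag = 1.0 - np.prod(1.0 - p_inst[bag_ids == b])
                ll += np.log((p_bag if y == 1 else 1.0 - p_bag) + 1e-12)
            if ll > best_ll:
                best, best_ll = m, ll
        chosen.append(best)
        H = H + weak_scores[best]
    return chosen, H
```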
Step 3: improved multiple-instance online tracking
In the multiple-instance learning algorithm, the likelihood probability of each instance is expressed as:
P(y|x) = σ(H(x))  (2)
where x is the feature-space representation of the image, y is a binary variable indicating whether the image contains the target, H(x) is the strong classifier composed of multiple weak classifiers, and σ(x) is the sigmoid function, i.e. σ(x) = 1/(1 + e^(-x)).
From the properties of the sigmoid function, the function saturates easily as x grows or shrinks, so over-fitting is likely when weak classifiers are selected to compose the strong classifier. To solve this problem, a penalty factor is introduced into the sigmoid function to slow down its saturation; the improved sigmoid function is:
where k is the number of weak classifiers composing the strong classifier. As the number of weak classifiers increases, the penalty factor quickly suppresses the magnitude of the argument to a reasonable range, slowing the saturation of the function while guaranteeing its convergence.
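The exact form of the improved sigmoid appears only as a figure in the original and is not reproduced in this text. The sketch below shows one plausible reading of the idea, scaling the argument by the weak-classifier count k so that the response saturates more slowly; it is an assumption, not the patented formula.

```python
import numpy as np

def penalized_sigmoid(x, k):
    """Hypothetical penalized sigmoid: dividing the argument by k delays saturation
    as more weak classifiers are added to the strong classifier."""
    return 1.0 / (1.0 + np.exp(-np.asarray(x, dtype=float) / max(k, 1)))
```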
Step 4: multi-step differential model update
During tracking, the multi-scale convolutional neural network model is updated with a multi-step differential update scheme.
The coarse-scale network model is updated with a fast update schedule so that the model adapts to appearance changes in time; the fine-scale network model is updated with a slow update schedule, which avoids the error noise and wrong updates that model changes may introduce; the update frequency of the medium-scale network model lies in between. In this way the model can adapt to appearance changes of the target in time while resisting the influence of tracking errors on the model update.
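A sketch of the multi-step differential update schedule: each scale network is fine-tuned at its own interval (coarse fast, fine slow, medium in between). The interval values and the fine_tune hook are illustrative assumptions.

```python
UPDATE_INTERVAL = {"coarse": 1, "medium": 5, "fine": 15}   # frames between updates (assumed values)

def maybe_update(models, frame_idx, samples, fine_tune):
    """Fine-tune each scale network only when its own update interval has elapsed."""
    for scale, model in models.items():
        if frame_idx % UPDATE_INTERVAL[scale] == 0:
            fine_tune(model, samples)
```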
When a new frame is input, n candidate target boxes {x_1, …, x_n} are sampled around the target location of the previous frame; according to p(y|x) = σ(H(x)), the candidate with the maximum likelihood response is selected as the tracking result of this frame, as shown in formula (5).
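A sketch of this per-frame tracking step: every candidate box sampled around the previous location is scored with the strong classifier and the maximum-likelihood candidate is kept, which is what formula (5) expresses. extract_feature and strong_classifier stand in for the components described above.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def track_frame(candidates, extract_feature, strong_classifier):
    """candidates: n image patches x_1 ... x_n sampled around the previous target location."""
    scores = [sigmoid(strong_classifier(extract_feature(c))) for c in candidates]
    return candidates[int(np.argmax(scores))]   # maximum-likelihood candidate
```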
The proposed target tracking method based on the multi-scale representation of convolutional neural networks is analysed and verified in two respects: the precision of the tracking algorithm and its success rate. Part of the image sequences of the standard tracking benchmark (OTB) are used for testing, and the classical MIL, TLD, Struck, SCM, KCF and TGPR methods are chosen for comparison.
Regarding precision, the accuracy of the algorithms is evaluated with the center error between the tracked target and the ground-truth position: the Euclidean distance between the two is computed, different distances are used as thresholds, the percentage of frames satisfying each threshold is counted, and the percentage at the threshold of 20 pixels is taken as the final score. The results are shown in Fig. 3: the proposed method obtains a higher score, which indicates that the target tracking method based on the multi-scale representation of convolutional neural networks tracks with higher precision.
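The precision score described above can be computed as the fraction of frames whose center error falls below a threshold (20 pixels for the final score); a small sketch, with illustrative names:

```python
import numpy as np

def precision_at(pred_centers, gt_centers, threshold=20.0):
    """pred_centers, gt_centers: (N, 2) arrays of per-frame (x, y) positions."""
    err = np.linalg.norm(np.asarray(pred_centers) - np.asarray(gt_centers), axis=1)
    return float(np.mean(err <= threshold))
```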
Regarding the success rate, the overlap rate between the tracked target and the ground truth is computed according to formula (6):
S = area(r_t ∩ r_o) / area(r_t ∪ r_o)  (6)
where r_t is the region of the tracked target, r_o is the region of the ground-truth target, ∩ denotes the intersection operation and ∪ denotes the union operation. With the overlap rate as the threshold, the success percentage under different thresholds is counted, and the area under the curve (AUC) is taken as the final score. The results are shown in Fig. 4: the proposed method obtains a higher AUC, which indicates that the target tracking method based on the multi-scale representation of convolutional neural networks achieves a higher success rate.
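A sketch of the success metric: per-frame overlap rate of formula (6), the success percentage over a sweep of overlap thresholds, and its mean as an approximation of the AUC score. The 21-point threshold sweep is an assumed discretisation.

```python
import numpy as np

def overlap_rate(box_a, box_b):
    """Boxes as (x, y, w, h); returns intersection area over union area."""
    ax2, ay2 = box_a[0] + box_a[2], box_a[1] + box_a[3]
    bx2, by2 = box_b[0] + box_b[2], box_b[1] + box_b[3]
    iw = max(0.0, min(ax2, bx2) - max(box_a[0], box_b[0]))
    ih = max(0.0, min(ay2, by2) - max(box_a[1], box_b[1]))
    inter = iw * ih
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0

def success_auc(pred_boxes, gt_boxes, thresholds=np.linspace(0, 1, 21)):
    """Mean of the success curve over the threshold sweep approximates the AUC."""
    overlaps = np.array([overlap_rate(p, g) for p, g in zip(pred_boxes, gt_boxes)])
    return float(np.mean([np.mean(overlaps >= t) for t in thresholds]))
```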

Claims (1)

1. A target tracking method based on the multi-scale expression of convolutional neural networks, characterized by comprising the following steps:
Step 1: pre-training the multi-scale convolutional neural network model
The image is decomposed with a Laplacian pyramid to build the image pyramid space, and the images at the three scales of the Laplacian pyramid are extracted as inputs to the network models; multi-scale convolutional neural network models are built with the Lasagne deep learning framework, forming a pool of network models; each network model contains three convolutional layers, two fully connected layers and one softmax layer; the network parameters are initialized with the shallow structure of VGG-net;
During pre-training, the network parameters are continuously optimized on tracking sequences; the images of each scale correspond respectively to a coarse-scale network, a medium-scale network and a fine-scale network; the networks of the different scales share parameters, and the scales are trained from coarse to fine;
A separate network is built for each category of video set to obtain information about objects of different categories; the networks share parameters in all layers except the last one and are trained iteratively to capture the common characteristics of objects of different categories; during training, cross entropy is used as the loss function L, defined as:
L = -∑_i t_i log(p_i)  (1)
where t_i is the true label of the i-th image block, i.e. target or background, and p_i is the predicted probability of the i-th image block;
during training the network parameters are continuously optimized with stochastic gradient descent (SGD) until all samples are sufficiently trained; finally, the network parameters of the three scales are retained, yielding the pre-trained multi-scale convolutional neural network model;
Step 2: constructing a multiple-instance classifier from the multi-scale feature representation
The last layer of the pre-trained multi-scale convolutional model is removed and a randomly initialized softmax layer is added; the network parameters are fine-tuned with the target given in the first frame of the sequence; the feature maps of the third convolutional layer are then extracted from the networks of the three scales as convolutional features; at the same time, the features of the second convolutional layer of the fine-scale network are extracted, and together they compose the multi-scale representation of the appearance model; the second-layer convolutional feature maps are downsampled with max pooling to reduce the feature dimensionality; all convolutional features are concatenated to form the multi-scale appearance model of the target;
The obtained convolutional features are used as the feature pool, and a binary classifier is learned with a multiple-instance learning algorithm; in a boosting manner, with the log-likelihood as the objective function to be maximized, K weak classifiers are selected in turn and combined by weighted summation to build the multiple-instance classifier;
Step 3: improved multiple-instance online tracking
In the multiple-instance learning algorithm, the likelihood probability of each instance is expressed as:
P(y|x) = σ(H(x))  (2)
where x is the feature-space representation of the image, y is a binary variable indicating whether the image contains the target, H(x) is the strong classifier composed of multiple weak classifiers, and σ(x) is the sigmoid function, i.e. σ(x) = 1/(1 + e^(-x));
A penalty factor is introduced into the sigmoid function to slow down its saturation; the improved sigmoid function is:
where k is the number of weak classifiers composing the strong classifier;
Step 4: during tracking, the multi-scale convolutional neural network model is updated with a multi-step differential update scheme
The coarse-scale network model is updated with a fast update schedule so that the model adapts to appearance changes in time; the fine-scale network model is updated with a slow update schedule, avoiding the error noise and wrong updates that model changes may introduce; the update frequency of the medium-scale network model lies in between; in this way the model can adapt to appearance changes of the target in time while resisting the influence of tracking errors on the model update;
When a new frame is input, n candidate target boxes {x_1, …, x_n} are sampled around the target location of the previous frame; according to p(y|x) = σ(H(x)), the candidate with the maximum likelihood response is selected as the tracking result of this frame, as shown in formula (5).
CN201611201895.0A 2016-12-23 2016-12-23 Target tracking method of multi-scale expression based on convolutional neural network Active CN106651915B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611201895.0A CN106651915B (en) 2016-12-23 2016-12-23 Target tracking method of multi-scale expression based on convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611201895.0A CN106651915B (en) 2016-12-23 2016-12-23 Target tracking method of multi-scale expression based on convolutional neural network

Publications (2)

Publication Number Publication Date
CN106651915A true CN106651915A (en) 2017-05-10
CN106651915B CN106651915B (en) 2019-08-09

Family

ID=58828084

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611201895.0A Active CN106651915B (en) 2016-12-23 2016-12-23 Target tracking method of multi-scale expression based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN106651915B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110182469A1 (en) * 2010-01-28 2011-07-28 Nec Laboratories America, Inc. 3d convolutional neural networks for automatic human action recognition
CN103325125A (en) * 2013-07-03 2013-09-25 北京工业大学 Moving target tracking method based on improved multi-example learning algorithm
CN105741316A (en) * 2016-01-20 2016-07-06 西北工业大学 Robust target tracking method based on deep learning and multi-scale correlation filtering
CN105956532A (en) * 2016-04-25 2016-09-21 大连理工大学 Traffic scene classification method based on multi-scale convolution neural network

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107622507A (en) * 2017-08-09 2018-01-23 中北大学 A kind of air target tracking method based on deep learning
CN107622507B (en) * 2017-08-09 2020-04-07 中北大学 Air target tracking method based on deep learning
CN108682022B (en) * 2018-04-25 2020-11-24 清华大学 Visual tracking method and system based on anti-migration network
CN108682022A (en) * 2018-04-25 2018-10-19 清华大学 Based on the visual tracking method and system to anti-migration network
CN108876754A (en) * 2018-05-31 2018-11-23 深圳市唯特视科技有限公司 A kind of remote sensing images missing data method for reconstructing based on depth convolutional neural networks
CN108985365A (en) * 2018-07-05 2018-12-11 重庆大学 Multi-source heterogeneous data fusion method based on depth subspace switching integrated study
CN108985365B (en) * 2018-07-05 2021-10-01 重庆大学 Multi-source heterogeneous data fusion method based on deep subspace switching ensemble learning
CN109284680A (en) * 2018-08-20 2019-01-29 北京粉笔蓝天科技有限公司 A kind of progressive picture recognition methods, device, system and storage medium
CN109284680B (en) * 2018-08-20 2022-02-08 北京粉笔蓝天科技有限公司 Progressive image recognition method, device, system and storage medium
CN111260536A (en) * 2018-12-03 2020-06-09 中国科学院沈阳自动化研究所 Digital image multi-scale convolution processor with variable parameters and implementation method thereof
CN111260536B (en) * 2018-12-03 2022-03-08 中国科学院沈阳自动化研究所 Digital image multi-scale convolution processor with variable parameters and implementation method thereof
CN113228063A (en) * 2019-01-04 2021-08-06 美国索尼公司 Multiple prediction network
JP2022514935A (en) * 2019-01-04 2022-02-16 ソニー コーポレイション オブ アメリカ Multiple prediction network
JP7379494B2 (en) 2019-01-04 2023-11-14 ソニー コーポレイション オブ アメリカ multiple prediction network
WO2021139069A1 (en) * 2020-01-09 2021-07-15 南京信息工程大学 General target detection method for adaptive attention guidance mechanism
CN111681263A (en) * 2020-05-25 2020-09-18 厦门大学 Multi-scale antagonistic target tracking algorithm based on three-value quantization
CN111681263B (en) * 2020-05-25 2022-05-03 厦门大学 Multi-scale antagonistic target tracking algorithm based on three-value quantization
CN113610759A (en) * 2021-07-05 2021-11-05 金华电力设计院有限公司 A on-spot safe management and control system for roofbolter construction

Also Published As

Publication number Publication date
CN106651915B (en) 2019-08-09

Similar Documents

Publication Publication Date Title
CN106651915B (en) Target tracking method of multi-scale expression based on convolutional neural network
CN109145939B (en) Semantic segmentation method for small-target sensitive dual-channel convolutional neural network
US20200285896A1 (en) Method for person re-identification based on deep model with multi-loss fusion training strategy
CN111709409B (en) Face living body detection method, device, equipment and medium
CN108229444B (en) Pedestrian re-identification method based on integral and local depth feature fusion
CN106778854B (en) Behavior identification method based on trajectory and convolutional neural network feature extraction
JP6159489B2 (en) Face authentication method and system
Altenberger et al. A non-technical survey on deep convolutional neural network architectures
CN104463250B (en) A kind of Sign Language Recognition interpretation method based on Davinci technology
CN108229381A (en) Face image synthesis method, apparatus, storage medium and computer equipment
CN107748858A (en) Multi-pose eye locating method based on cascaded convolutional neural network
CN107506740A (en) Human behavior recognition method based on 3D convolutional neural network and transfer learning model
CN107529650A (en) Network model construction and closed loop detection method, corresponding device and computer equipment
CN107292246A (en) Infrared human body target identification method based on HOG PCA and transfer learning
CN106372581A (en) Method for constructing and training human face identification feature extraction network
CN104636732B (en) A kind of pedestrian recognition method based on the deep belief network of sequence
CN110378208B (en) Behavior identification method based on deep residual error network
CN111126488A (en) Image identification method based on double attention
CN106326857A (en) Gender identification method and gender identification device based on face image
CN104517097A (en) Kinect-based moving human body posture recognition method
CN109086660A (en) Training method, equipment and the storage medium of multi-task learning depth network
CN114758288A (en) Power distribution network engineering safety control detection method and device
CN109033953A (en) Training method, equipment and the storage medium of multi-task learning depth network
CN106296734B (en) Target tracking method based on extreme learning machine and boosting multiple kernel learning
CN111612799A (en) Face data pair-oriented incomplete reticulate pattern face repairing method and system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant