WO2021046681A1 - Complex scenario-oriented multi-source target tracking method - Google Patents

Complex scenario-oriented multi-source target tracking method

Info

Publication number
WO2021046681A1
Authority
WO
WIPO (PCT)
Prior art keywords
feature matrix
feature
source target
matrix
deep learning
Prior art date
Application number
PCT/CN2019/104924
Other languages
French (fr)
Chinese (zh)
Inventor
王玲
王锋
关庆阳
张仁辉
Original Assignee
深圳市迪米欧科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市迪米欧科技有限公司 filed Critical 深圳市迪米欧科技有限公司
Priority to PCT/CN2019/104924 priority Critical patent/WO2021046681A1/en
Publication of WO2021046681A1 publication Critical patent/WO2021046681A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition

Definitions

  • the invention relates to the technical field of data processing, in particular to a deep learning-oriented multi-source target tracking method for complex scenes.
  • the multi-source target tracking method for complex scenes can provide accurate target data characteristics, and realize the tracking of multi-source targets by constructing a deep learning processing architecture based on feature matrix learning.
  • a template is formed by training feature matrices of a large number of typical simulation scenarios, and an autonomous, controllable and addable template model is obtained.
  • This model also has the feature matrix enhancement function of the scene, which can realize the enhancement of special features such as shadow and texture of the scene.
  • feature matrix learning can approximate high-dimensional spatial features with low-dimensional space vectors based on the basis vectors of the complete feature matrix.
  • one of the core problems of feature matrix learning is sparse representation.
  • the design of the feature matrix usually needs to update the fixed feature matrix basis vector. Whether a complete feature matrix can be designed determines whether the real signal can be represented more closely. Therefore, there is currently a need for a tracking processing method that can reduce the dimension of the feature matrix and more closely represent the real target.
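The sparse-representation idea discussed above can be illustrated with a generic dictionary-coding sketch. This is standard sparse coding for illustration only, not the patent's algorithm; the dictionary sizes and atom count are arbitrary assumptions:

```python
import numpy as np

# A high-dimensional signal is approximated by a few atoms (basis vectors)
# of an over-complete feature matrix ("dictionary").
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))       # over-complete: 64-dim signals, 256 atoms
D /= np.linalg.norm(D, axis=0)           # unit-norm atoms

x_true = np.zeros(256)
x_true[[3, 77, 190]] = [1.5, -2.0, 0.8]  # only 3 active atoms
y = D @ x_true                           # observed high-dimensional signal

# Matching pursuit: greedily pick the atom most correlated with the residual.
x = np.zeros(256)
r = y.copy()
for _ in range(3):
    k = int(np.argmax(np.abs(D.T @ r)))
    c = D[:, k] @ r
    x[k] += c
    r -= c * D[:, k]

print(np.linalg.norm(r) < np.linalg.norm(y))  # the sparse code shrinks the residual
```

A low-dimensional coefficient vector (3 non-zeros out of 256) thus approximates the 64-dimensional signal, which is the dimensionality-reduction effect the text describes.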
  • the purpose of the embodiments of the present invention is to provide a deep learning-based multi-source target tracking method for complex scenes, aiming to solve the technical problem that the prior art is difficult to reduce the dimensionality of the feature matrix and cannot more closely represent the real target.
  • a multi-source target tracking method for complex scenes includes the following steps:
  • S1 Form an initial deep learning network for the multi-source target, set the weights of the initial deep learning network, and apply steepest-gradient dimensionality reduction to those weights to obtain a preliminary feature matrix library of the multi-source target;
  • S2 Apply simplified sparse representation and sparsification to the preliminary features of the multi-source target to obtain a sparsified feature matrix;
  • S3 Perform multi-source target classification on the sparsified feature matrix, and establish feature representations of the different feature targets;
  • S4 Minimize the sparsified feature matrix;
  • S5 Establish the generative model and the reconstruction model of feature matrix learning;
  • S6 Form effective multi-source tracking of the target through the generative model and the reconstruction model.
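The step sequence above can be sketched as a minimal pipeline. Only S1–S3 are shown, and the function names, shapes, and threshold below are hypothetical stand-ins for illustration, not the patent's implementation:

```python
import numpy as np

def s1_initial_feature_matrix(data, rng):
    """S1: set initial network weights and take a stand-in steepest-descent step."""
    W = rng.standard_normal((data.shape[1], 16))
    W -= 0.1 * W   # placeholder for steepest-gradient dimensionality reduction
    return data @ W  # preliminary feature matrix library

def s2_sparsify(F, thresh=1.0):
    """S2: simplified sparse representation via hard thresholding."""
    return np.where(np.abs(F) > thresh, F, 0.0)

def s3_classify(F):
    """S3: label each target (row) by its dominant feature column."""
    return np.argmax(np.abs(F), axis=1)

rng = np.random.default_rng(1)
data = rng.standard_normal((10, 32))   # 10 multi-source targets, 32 raw features
F = s2_sparsify(s1_initial_feature_matrix(data, rng))
labels = s3_classify(F)
print(F.shape, labels.shape)           # (10, 16) (10,)
```

The hard threshold in `s2_sparsify` anticipates the hard-decision sparsification discussed later in the document.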
  • the step S1 specifically includes: establishing a deep learning network and forming the initial feature matrix from the features of the multi-source target through the deep learning network, the initial feature matrix being adaptively changeable and improvable.
  • the expression of the initial feature matrix is given by formula (1) (shown as an image in the original), where:
  • R represents the multi-source target feature matrix
  • k represents the number of targets
  • R_k represents the feature of each target
  • s represents the weight matrix of the deep learning network
  • Y_k represents the tracked multi-source target.
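The symbols just defined can be exercised in a toy example. Formula (1) itself appears only as an image in the original, so the relation Y_k = R_k s used below is read off the symbol definitions; all sizes are arbitrary assumptions:

```python
import numpy as np

# Toy illustration of the symbols behind formula (1): each tracked target
# Y_k is produced by its per-target feature block R_k acting through the
# shared network weight matrix s.
rng = np.random.default_rng(2)
K = 4                                           # k: number of targets
s = rng.standard_normal((6, 3))                 # s: weight matrix of the network
R = [rng.standard_normal((8, 6)) for _ in range(K)]  # R_k: feature of each target
Y = [Rk @ s for Rk in R]                        # Y_k: tracked multi-source targets

# The model residual ||Y_k - R_k s|| is zero here by construction.
print(all(np.allclose(Y[k], R[k] @ s) for k in range(K)))
```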
  • the step S2 includes: performing an effective feature matrix update iteration through feature decomposition, and completing the generation of sparse features through the preprocessing of the feature matrix.
  • the step S2 includes: the optimization problem of formula (1) is transformed into the optimal-value problem of formula (2) (shown as an image in the original), where:
  • R represents the initial feature matrix of the multi-source target
  • s represents the weight matrix of the deep learning network
  • the step S3 further includes a feature matrix update step, wherein the feature matrix R of formula (1) is minimized (the minimization formula is shown as an image in the original).
  • the update range of the feature matrix R is selected to correspond to the zero positions of column k of the matrix s.
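The restricted update just described can be sketched as follows: only the part of R aligned with the zero entries of column k of s is touched, so the complete weight matrix never has to be processed. All sizes, the choice of k, and the shrinkage step are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
R = rng.standard_normal((6, 5))        # feature matrix
s = rng.standard_normal((5, 4))        # network weight matrix
k = 2
s[[1, 3], k] = 0.0                     # zero positions in column k of s

zero_rows = np.flatnonzero(s[:, k] == 0.0)   # selected update range
R_before = R.copy()
R[:, zero_rows] *= 0.9                 # stand-in for the minimisation step

# Everything outside the selected range is untouched, which is what keeps
# the computation of the update limited.
untouched = np.setdiff1d(np.arange(R.shape[1]), zero_rows)
print(np.allclose(R[:, untouched], R_before[:, untouched]))
```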
  • the step S4 includes: the image features R_opt = {r_i | i = 1, 2, …, n} obtained through the step S1 are used as the input of the deep learning network, and d′_i is used as the corresponding reconstructed feature; the reconstruction error and average error of the multi-source target are calculated, and after feature reconstruction the result is expressed as:
  • T′ = {v_i | e_i < η, v_i ∈ V}      (5)
  • the step S5 includes: calculating the reconstruction error by formula (7) (shown as an image in the original).
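The per-target errors and the selection rule T′ = {v_i | e_i < η, v_i ∈ V} above can be sketched directly. Formula (7) is an image in the original, so the mean-squared form below is an assumption based on the MMSE criterion the text names; the shapes and the choice of η are also assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 20, 8
V = rng.standard_normal((n, m))             # input features v_i (rows)
D = V + 0.1 * rng.standard_normal((n, m))   # reconstructed features d'_i

e = np.mean((V - D) ** 2, axis=1)  # e_i: per-target MMSE-style reconstruction error
avg_error = e.mean()               # average error over all targets
eta = 2 * avg_error                # assumed threshold η
T_prime = V[e < eta]               # T' = {v_i | e_i < η, v_i ∈ V}

print(T_prime.shape[0] <= n)       # only low-error features are kept
```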
  • the multi-source target feature data is one or more of image data, radar data, communication data and position data.
  • the generative model of the feature matrix learning is a top-down generative model.
  • the reconstruction model of the feature matrix learning is a bottom-up reconstruction model.
  • a deep-learning-based multi-source target tracking method for complex scenes. The method first builds, through feature matrix learning, a complete multi-source feature matrix library of the video image features of a large number of different tracking targets. Specifically, the feature matrix is learned by compression: non-zero columns of the original video are selected to correspond to sparse feature matrix atoms, forming a complete deep-learning-based feature matrix library suited to a hierarchical deep learning architecture. According to the MMSE criterion, a top-down generative model and a bottom-up reconstruction model are established, and a multi-layer feedforward network discriminatively trained on this criterion is adopted for feature reconstruction.
  • the method of the present invention can be applied to target features of different complex environments, such as wilderness, urban, and open-space scenes, to establish a sparse representation of feature matrix learning and at the same time form a complete feature matrix.
  • the present invention also constrains the weights to non-zero coefficients to obtain sparse representations closer to the real scenes.
  • the present invention adopts a typical multi-layer deep learning network.
  • its basic structure is a memory, storage, and knowledge network: multiple networks are fully linked, serving as feature transfer channels between layers, and each layer is also used to train the feature structure of the next layer.
  • the deep learning network of the present invention has the ability to complete complex data modeling, including a top-down generation model and a bottom-up discrimination model.
  • the deep learning network also has the data training performance of weakly supervised learning. Therefore, the multi-source target feature matrix library established by the present invention through the deep learning network has good modeling characteristics and data statistical structure, and can generate a multi-source target tracker model for unlabeled training data.
  • Fig. 1 is a flowchart of a deep learning-oriented multi-source target tracking method for complex scenes according to an embodiment of the present invention.
  • the embodiment of the present invention provides a complex scene-oriented multi-source target tracking method based on deep learning, which can be used in technical fields such as video recognition and tracking.
  • the method of the present invention first establishes a complete feature matrix library of multi-source video images of massive different tracking targets through feature matrix learning.
  • the feature matrix learning of the present invention, as a signal transformation method, can approximate high-dimensional space features through low-dimensional space vectors based on the matrix basis vectors of the over-complete feature matrix.
  • the embodiment of the present invention establishes a sparse representation of the feature matrix learning for various feature models of target features in different complex environments, such as a wilderness scene, an urban scene, and an open space scene, and at the same time forms a complete feature matrix.
  • the constraint weights established by the method of the present invention are non-zero coefficients, giving a sparse representation closer to the real scenes; the multi-source target feature matrix is then optimally approximated for the different feature scenes.
  • Fig. 1 is a flowchart of a deep learning-oriented multi-source target tracking method for complex scenes according to an embodiment of the present invention.
  • the tracking method includes the following steps:
  • S1 Provide multi-source target data, such as video data, to form an initial deep learning network; set the weights of the initial deep learning network, apply steepest-gradient dimensionality reduction to those weights, and form the initial feature matrix of the multi-source target
  • S2 Apply simplified sparse representation to the features of the multi-source target, and sparsify them through the deep learning network to obtain the sparsified feature matrix
  • the target system can include: target image features, target radar features, target communication features, and target location features;
  • S6 Effective multi-source target tracking is formed through the generative template model and the reconstruction template model.
  • the aforementioned multi-source target feature data may be one or more of image data, radar data, communication data, and position data.
  • ground-to-air multi-source target tracking for complex environments as an example: a ground-to-air communication scenario is established, and the ground station achieves target tracking of 10 UAVs through different data link characteristics. Among them, the radar reflection cross-sections of different UAVs are different. Through different UAV radar reflection cross-sections and acquired image characteristics, combined with the multi-source characteristics of the communication carrier frequency, a feature matrix network is formed.
  • step S2 includes: obtaining the UAV radar reflection cross-sections of the multi-source targets and the acquired image features, combining them with the communication carrier-frequency characteristics, and sparsifying through the deep learning network to obtain a sparsified target feature matrix; target mapping is then established through the feature matrix.
  • step S5 includes: establishing a top-down generative model through the deep learning network.
  • the generative model combines the UAV radar reflection cross-sections and the acquired image features with the communication carrier frequency into a joint feature model; this model belongs to the generative template model of feature matrix learning.
  • a bottom-up reconstruction model is likewise established through the deep learning network; it belongs to the reconstruction template model of feature matrix learning.
  • step S1 specifically includes: establishing a deep learning network, and forming the target features of the multi-source target into an initial feature matrix through the deep learning network.
  • the initial feature matrix can be based on the feature requirements of the multi-source target, that is, the UAV radar reflection cross-sections, the acquired image features, and the communication carrier frequency.
  • the expression of the initial feature matrix is given by formula (1) (shown as an image in the original), where:
  • k represents the number of multi-source targets
  • R_k represents each target feature
  • s represents the weight matrix of the deep learning network
  • Y_k represents the tracked multi-source target.
  • the initial feature matrix can be adaptively changed and improved according to the feature requirements of the multi-source target, such as texture and contour features, and subsequently serves as the initialization for feature matrix iteration.
  • the steps of step S2 include: substituting the feature decomposition of the current multi-source target into formula (1), that is, forming a two-stage iterative update from the initial feature matrix.
  • This step adds computation by decomposing each node, and in the decomposition process the solution of the problem is transformed into finding a sparse code that minimizes R_k s (the iterative update of the feature matrix).
  • the present invention simplifies the sparse representation step, which is determined by updating the columns of the matrix R. If R is below the determined threshold at this point, the k-th row of the feature matrix D can be treated as a zero vector.
  • the target x will be updated under the joint support of the feature matrix D and the matrix coefficient y.
  • the core idea of step S2 is to perform effective feature matrix update iterations through feature decomposition, and to complete the generation of sparsified features through preprocessing of the feature matrix.
  • One of the improvement points of the feature matrix learning of the present invention is sparse representation.
  • the basic structure of the multi-layer deep learning network used in the embodiment of the present invention is a memory, storage, and knowledge network. Specifically, multiple networks are fully linked, acting as feature transfer channels between layers, and each layer is also used to train the feature structure of the next layer. Deep learning networks have been applied in different fields and can include many different forms of data generation models.
  • the deep learning network of the present invention can complete complex data modeling, including a top-down generative model and a bottom-up discriminative model, and supports weakly supervised data training through the multi-source target matrix library.
  • the multi-source target feature matrix library established by the invention through the deep learning network has good modeling characteristics and data statistical structure, and can generate a multi-source target tracker model for unlabeled training data.
  • step S2 also includes: under the joint decision of the initial feature matrix R of the multi-source target and the weight matrix s of the deep learning network, the optimization problem of formula (1) can be treated as equivalent to the optimal-value problem of formula (2).
  • the core point of solving this problem is to determine a hard threshold decision on the columns of s, in order to retain an amplitude decision threshold in each column. For example, in similarity analysis, a simple soft-threshold function is used to solve formula (1); however, if the sparse constraint is convexly relaxed, it is difficult to solve with the simpler soft-threshold calculation. Therefore, the method of the present invention adopts a certain hard decision threshold.
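The soft-versus-hard threshold distinction above can be made concrete with the standard operators from sparse coding. These are the textbook definitions, not the patent's exact decision rule:

```python
import numpy as np

def soft_threshold(x, t):
    # shrink toward zero: the operator associated with convex (L1-relaxed)
    # sparsity constraints; surviving amplitudes are reduced by t
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def hard_threshold(x, t):
    # keep-or-kill: retains the full amplitude of every surviving entry,
    # matching the amplitude-preserving hard decision described above
    return np.where(np.abs(x) > t, x, 0.0)

x = np.array([-2.0, -0.5, 0.3, 1.2, 3.0])
print(hard_threshold(x, 1.0))
print(soft_threshold(x, 1.0))
```

Note that the hard threshold leaves the entries -2.0, 1.2, and 3.0 unchanged, while the soft threshold shrinks each of them by 1.0.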
  • the steps of step S4 include a feature matrix update step, which minimizes the feature matrix R of formula (1).
  • the update range of the feature matrix R is determined by column selection of the weight matrix s.
  • the update range of the matrix R is selected to correspond to the zero positions of column k of the weight matrix s. This step uses only limited prior information of s instead of the complete matrix, which reduces the computation of the feature matrix update and effectively supports learning the feature matrix update step with limited computation.
  • the image features R_opt = {r_i | i = 1, 2, …, n} are used as the input of the deep learning network, and d′_i is taken as the corresponding reconstructed feature; the reconstruction error of the multi-source target is calculated according to the MMSE (minimum mean squared error) criterion, and the average error is calculated by a formula shown as an image in the original,
  • where e_i represents the error value of each target.
  • the feature reconstruction can be expressed as:
  • T′ = {v_i | e_i < η, v_i ∈ V}      (5)
  • step S5 includes: a feature learning process terminated by an obtained parameter: the iteration stops when the difference between the average reconstruction error of the current iteration and that of the previous iteration falls below this parameter.
  • to make the reconstruction of the feature values more reliable, let M be the reconstruction weight matrix; then, for the multi-source feature I on the test data set, with X the extracted multi-features, the reconstructed features can be written as a function of M and X (the formula is shown as an image in the original).
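The termination rule and final reconstruction described above can be sketched as follows. M, X, the gradient-style update, and the tolerance are all illustrative assumptions; only the stopping criterion (iterate until the average reconstruction error stops changing) is taken from the text:

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.standard_normal((6, 10))                   # extracted multi-features
M = np.eye(6) + 0.5 * rng.standard_normal((6, 6))  # reconstruction weight matrix

tol = 1e-3
errs = []
for _ in range(100):
    X_hat = M @ X                       # reconstructed features
    err = np.mean((X_hat - X) ** 2)     # average reconstruction error
    errs.append(err)
    if len(errs) > 1 and abs(errs[-2] - errs[-1]) < tol:
        break                           # error change below tolerance: stop
    # assumed update: push M toward reproducing X (X ≈ M X)
    M -= 0.05 * (X_hat - X) @ X.T / X.shape[1]

print(errs[-1] < errs[0])               # average error decreased before stopping
```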
  • the method proposed in the present invention learns the UAV radar reflection cross-sections, the acquired image features, and the communication carrier-frequency characteristics by compressing the feature matrix, obtains the weakly supervised initialization weights of the deep learning model, adjusts the network weights through inter-layer reconstruction decisions and feature selection, and finally obtains the simplified system weights to achieve tracking of multi-source feature targets.
  • the deep learning network proposed by the present invention is a hierarchical architecture.
  • the learning feature matrix is compressed by selecting non-zero columns, and then a preliminary feature matrix is formed.
  • by establishing a hierarchical deep learning architecture, it is possible to effectively track multi-source targets in complex scenarios.
  • verification on scene data shows that the target tracking efficiency of the multi-source target tracking method of the present invention surpasses single-pass algorithm tracking in a single dimension.
  • with the radar reflection cross-section set to 1, a multi-source target tracking efficiency of 99% can be achieved.

Abstract

A complex scenario-oriented multi-source target tracking method, which relates to the field of artificial intelligence technology, and comprises the following steps: forming an initial deep learning network, setting a weight of the initial deep learning network, and performing steepest gradient dimensionality reduction on the weight of the initial deep learning network, so as to obtain an initial feature matrix of a multi-source target; performing simplified sparse representation and sparse processing on the initial feature matrix of the multi-source target, so as to obtain a sparsified target feature matrix; and classifying the sparsified target feature matrix, and establishing and forming feature matrix representations of different feature targets. In said method, a multi-source target feature matrix library established by means of a deep learning network has a good modeling characteristic and a data statistical structure, and a multi-source target tracker model can be generated for label-free training data.

Description

Multi-source target tracking method for complex scenes

Technical Field

The invention relates to the technical field of data processing, and in particular to a deep-learning-based multi-source target tracking method for complex scenes.

Background

The multi-source target tracking method for complex scenes can provide accurate target data features and track multi-source targets by constructing a deep learning processing architecture based on feature matrix learning. Specifically, templates are formed by training the feature matrices of a large number of typical simulation scenarios, yielding an autonomous, controllable, and extensible template model. This model also provides feature matrix enhancement for the scene, enhancing special features such as shadow and texture.

As a signal transformation method, feature matrix learning can approximate high-dimensional spatial features with low-dimensional space vectors based on the basis vectors of a complete feature matrix. However, updating feature matrix atoms through feature matrix learning itself can hardly reduce the dimensionality of the feature matrix, and feature selection on image targets incurs a large amount of computation. At the same time, one of the core problems of feature matrix learning is sparse representation. To improve the sparse representation, the design of the feature matrix usually requires updating fixed feature matrix basis vectors, and whether a complete feature matrix can be designed determines whether the real signal can be represented more closely. Therefore, a tracking method is needed that can reduce the dimensionality of the feature matrix while representing the real target more closely.

Summary of the Invention

The purpose of the embodiments of the present invention is to provide a deep-learning-based multi-source target tracking method for complex scenes, aiming to solve the technical problem that the prior art can hardly reduce the dimensionality of the feature matrix and cannot represent the real target more closely.

To achieve the foregoing objective, the embodiments of the present invention provide the following technical solution:

A multi-source target tracking method for complex scenes includes the following steps:
S1: form an initial deep learning network for the multi-source target, set the weights of the initial deep learning network, and apply steepest-gradient dimensionality reduction to those weights to obtain a preliminary feature matrix library of the multi-source target;

S2: apply simplified sparse representation and sparsification to the preliminary features of the multi-source target to obtain a sparsified feature matrix;

S3: perform multi-source target classification on the sparsified feature matrix, and establish feature representations of the different feature targets;

S4: minimize the sparsified feature matrix;

S5: establish the generative model and the reconstruction model of feature matrix learning;

S6: form effective multi-source tracking of the target through the generative model and the reconstruction model.
Preferably, step S1 specifically includes: establishing a deep learning network and forming the initial feature matrix from the features of the multi-source target through the deep learning network, the initial feature matrix being adaptively changeable and improvable. The expression of the initial feature matrix is:

[formula (1), shown as an image in the original]

where R denotes the multi-source target feature matrix, k the number of targets, R_k the feature of each target, s the weight matrix of the deep learning network, and Y_k the tracked multi-source target.
Preferably, step S2 includes: performing effective feature matrix update iterations through feature decomposition, and completing the generation of sparsified features through preprocessing of the feature matrix.

Further preferably, step S2 includes: transforming the optimization problem of formula (1) into the optimal-value problem of formula (2):

[formula (2), shown as an image in the original]

where R denotes the initial feature matrix of the multi-source target and s the weight matrix of the deep learning network.
Preferably, step S3 further includes a feature matrix update step, in which the feature matrix R of formula (1) is minimized to

[minimization formula, shown as an image in the original]

where the update range of the feature matrix R is selected to correspond to the zero positions of column k of the matrix s.
Preferably, step S4 includes: the image features R_opt = {r_i | i = 1, 2, …, n} obtained in step S1 are used as the input of the deep learning network, d′_i is taken as the corresponding reconstructed feature, and the reconstruction error and average error of the multi-source target are calculated; after feature reconstruction, the result is expressed as:

T′ = {v_i | e_i < η, v_i ∈ V}      (5).
Preferably, step S5 includes: calculating the reconstruction error by formula (7):

[formula (7), shown as an image in the original]

Preferably, the multi-source target feature data is one or more of image data, radar data, communication data, and position data.

Preferably, in step S5, the generative model of feature matrix learning is a top-down generative model.

Preferably, in step S5, the reconstruction model of feature matrix learning is a bottom-up reconstruction model.
Compared with the prior art, the beneficial effects of the present invention are:

(1) A deep-learning-based multi-source target tracking method for complex scenes. The method first builds, through feature matrix learning, a complete multi-source feature matrix library of the video image features of a large number of different tracking targets. Specifically, the feature matrix is learned by compression: non-zero columns of the original video are selected to correspond to sparse feature matrix atoms, forming a complete deep-learning-based feature matrix library suited to a hierarchical deep learning architecture. According to the MMSE criterion, a top-down generative model and a bottom-up reconstruction model are established, and a multi-layer feedforward network discriminatively trained on this criterion is adopted for feature reconstruction.

(2) The method of the present invention can be applied to target features of different complex environments, such as wilderness, urban, and open-space scenes, to establish a sparse representation of feature matrix learning and at the same time form a complete feature matrix. The present invention also constrains the weights to non-zero coefficients to obtain sparse representations closer to the real scenes.

(3) The present invention adopts a typical multi-layer deep learning network whose basic structure is a memory, storage, and knowledge network: multiple networks are fully linked, serving as feature transfer channels between layers, and each layer is also used to train the feature structure of the next layer. The deep learning network of the present invention can model complex data, including a top-down generative model and a bottom-up discriminative model, and supports weakly supervised data training. Therefore, the multi-source target feature matrix library established through the deep learning network has good modeling characteristics and data statistical structure, and a multi-source target tracker model can be generated for unlabeled training data.
Brief Description of the Drawings

To explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative work.

Fig. 1 is a flowchart of a deep-learning-based multi-source target tracking method for complex scenes according to an embodiment of the present invention.

Detailed Description

To make the technical problems, technical solutions, and beneficial effects of the present invention clearer, the present invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present invention, not to limit it.

The embodiments of the present invention provide a deep-learning-based multi-source target tracking method for complex scenes, usable in technical fields such as video recognition and tracking.

The method first establishes, through feature matrix learning, a complete feature matrix library of the multi-source video images of massive numbers of different tracking targets. As a signal transformation method, the feature matrix learning of the present invention can approximate high-dimensional spatial features with low-dimensional space vectors based on the matrix basis vectors of an over-complete feature matrix. The embodiments establish a sparse representation of feature matrix learning for target features in different complex environments, such as wilderness, urban, and open-space scenes, and at the same time form a complete feature matrix. The constraint weights established by the method are non-zero coefficients, giving a sparse representation closer to the real scenes; the multi-source target feature matrix is then optimally approximated for the different feature scenes.
Referring to Fig. 1, which is a flowchart of the deep-learning-based multi-source target tracking method for complex scenes provided by an embodiment of the present invention, the tracking method includes the following steps:
S1: Provide multi-source target data, such as video data, to form an initial deep learning network; set the weights of the initial deep learning network, reduce those weights by steepest-gradient dimensionality reduction, and form the initial feature matrix of the multi-source targets.
S2: Produce a simplified sparse representation of the multi-source target features by sparsifying them through the deep learning network, obtaining a sparsified feature matrix.
S3: Classify the multi-source targets by the sparse feature matrix and establish feature matrix representations of different feature targets, where the target system may include target image features, target radar features, target communication features, and target position features.
S4: Minimize the sparsified feature matrix and update the initial feature matrix to obtain the minimized sparsified feature matrix, i.e., the output of the memory-system knowledge network in the figure.
S5: Through the deep learning network, establish a top-down generative model and a bottom-up reconstruction model, where the generative model is the generative template model of feature matrix learning and the reconstruction model is the reconstruction template model of feature matrix learning.
S6: Achieve effective multi-source target tracking through the generative template model and the reconstruction template model.
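Read together, steps S1 through S6 form one loop: initialize a feature matrix by gradient descent, sparsify it, classify, minimize, and then track with the generative and reconstruction models. The sketch below illustrates only the first two stages (S1 initialization and S2 sparsification); every function name, dimension, and hyperparameter is our illustrative assumption, not the patent's.

```python
import numpy as np

def steepest_descent_init(Y, n_atoms, lr=0.1, iters=50, seed=0):
    """S1: initialize a feature matrix R by steepest-gradient descent on
    the reconstruction error ||Y - R s||^2 (s held fixed at random)."""
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((Y.shape[0], n_atoms)) * 0.1
    s = rng.standard_normal((n_atoms, Y.shape[1])) * 0.1
    for _ in range(iters):
        grad = (R @ s - Y) @ s.T       # gradient of the squared error w.r.t. R
        R -= lr * grad
    return R, s

def hard_threshold(s, tau):
    """S2: sparsify the code by zeroing coefficients below tau."""
    out = s.copy()
    out[np.abs(out) < tau] = 0.0
    return out

rng = np.random.default_rng(1)
Y = rng.standard_normal((16, 40))      # 40 multi-source observations in 16-D
R, s = steepest_descent_init(Y, n_atoms=24)
s_sparse = hard_threshold(s, tau=0.05) # the sparsified feature code
err = np.linalg.norm(Y - R @ s_sparse) # S4 would iterate further to shrink this
```

Steps S3 to S6 would then operate on `s_sparse` and `R`; they are omitted here because the patent specifies them only at the template-model level.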
Specifically, the aforementioned multi-source target feature data may be one or more of image data, radar data, communication data, and position data.
Take ground-to-air multi-source target tracking in a complex environment as an example: a ground-to-air communication scenario is established in which a ground station tracks ten UAVs through different data-link characteristics. The radar cross-sections of the UAVs differ from one another; a feature matrix network is formed from the different UAV radar cross-sections and the acquired image features, combined with the multi-source characteristics of the communication carrier frequency.
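One way to picture the per-target "multi-source" feature vector in this scenario is a plain concatenation of the radar cross-section, an image descriptor, and carrier-frequency features. The dimensions, the per-source normalization, and all names below are illustrative assumptions, not values from the patent.

```python
import numpy as np

def multi_source_feature(rcs, image_feat, carrier_feat):
    """Stack heterogeneous per-target features into one column vector,
    normalizing each source so that no modality dominates by raw scale."""
    parts = []
    for v in (np.atleast_1d(rcs), image_feat, carrier_feat):
        v = np.asarray(v, dtype=float).ravel()
        n = np.linalg.norm(v)
        parts.append(v / n if n > 0 else v)
    return np.concatenate(parts)

rng = np.random.default_rng(2)
n_uavs = 10                                # the example tracks 10 UAVs
features = np.column_stack([
    multi_source_feature(
        rcs=rng.uniform(0.5, 2.0),             # radar cross-section (scalar)
        image_feat=rng.standard_normal(64),    # image descriptor
        carrier_feat=rng.standard_normal(8))   # carrier-frequency feature
    for _ in range(n_uavs)
])
print(features.shape)   # (73, 10): 1 + 64 + 8 features per UAV
```

The resulting 73-by-10 matrix is the kind of raw input the feature matrix network would then sparsify.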
In this specific embodiment, step S2 includes: obtaining the UAV radar cross-sections of the multi-source targets and the acquired image features, combining them with the communication carrier-frequency features, and sparsifying them through the deep learning network to obtain a sparsified target feature matrix; a target mapping is then established through the feature matrix.
Specifically, step S5 includes: establishing, through the deep learning network, a top-down generative model comprising the joint feature model of the UAV radar cross-sections, the acquired image features, and the communication carrier frequency; this generative model is the generative template model of feature matrix learning. A bottom-up reconstruction model is likewise built through the deep learning network; this model is the reconstruction template model of feature matrix learning.
Further, step S1 specifically includes: establishing a deep learning network and forming the target features of the multi-source targets into an initial feature matrix through the network. The initial feature matrix can be adaptively changed and improved according to the feature requirements of the multi-source targets, namely the UAV radar cross-sections, the acquired image features, and the communication carrier-frequency features, as well as according to its own subsequent iterations. The expression of the initial feature matrix is:
[Formula (1): equation image PCTCN2019104924-appb-000005]
where k denotes the number of multi-source targets, R_k denotes the feature of each target, s denotes the weight matrix of the deep learning network, and Y_k denotes the tracked multi-source targets.
In other embodiments, the initial feature matrix can be adaptively changed and improved according to the feature requirements of the multi-source targets, such as texture and contour features, and subsequently according to the iterative initialization of the feature matrix.
Further, step S2 specifically includes: substituting the feature decomposition of the current multi-source targets into formula (1), that is, forming the two stages of an iterative update through the initial feature matrix. Decomposing every node raises the computational load, and during the decomposition the problem reduces to finding the sparse code that minimizes R_k s (the iterative update of the feature matrix). To address this, and to lower the risk of selecting redundant feature matrices, the present invention simplifies the sparse representation step, determining it by updating the columns of the matrix R: if an entry of R is below the determined threshold, the corresponding row k of the feature matrix D can be treated as a zero vector. The target x is then updated under the joint support of the feature matrix D and the matrix coefficients y.
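A hedged reading of this pruning rule: during the update, any dictionary row whose associated update magnitude in R falls below the threshold is zeroed out, which removes redundant atoms from the representation. The sketch below implements that reading; the function name, shapes, and threshold value are our assumptions.

```python
import numpy as np

def prune_dictionary_rows(D, R, threshold):
    """Zero out row k of the feature matrix D whenever the magnitude of
    the corresponding entry of R falls below the threshold, so redundant
    atoms drop out of the sparse representation."""
    D = D.copy()
    weak = np.abs(R) < threshold        # rows whose update signal is too weak
    D[weak, :] = 0.0
    return D

rng = np.random.default_rng(3)
D = rng.standard_normal((6, 12))
R = np.array([0.9, 0.01, 0.4, 0.002, 0.7, 0.05])  # per-row update magnitudes
D_pruned = prune_dictionary_rows(D, R, threshold=0.1)
print(int((np.abs(D_pruned).sum(axis=1) == 0).sum()))  # 3 rows zeroed
```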
The core idea of step S2 is to perform an effective feature matrix update iteration through feature decomposition and to complete the generation of sparsified features through the preprocessing of the feature matrix.
One of the improvements of the feature matrix learning of the present invention is the sparse representation. To improve it, the fixed basis vectors of the feature matrix usually need to be updated when the matrix is designed; whether a complete feature matrix can be designed therefore determines how closely the real signal can be represented. The basic structure of the multi-layer deep learning network used in this embodiment consists of memory, storage, and knowledge networks; these are fully connected across multiple networks, serving as feature-transfer channels between layers, and each layer is also used to train the feature structure of the next. Deep learning networks have been applied in many fields and can simultaneously include many different forms of data-generation models. The deep learning network of the present invention can perform complex data modeling, including a top-down generative model and a bottom-up discriminative model. This shows that the network, built on the multi-source target matrix library, achieves weakly supervised data-training performance. The multi-source target feature matrix library established through the deep learning network has good modeling properties and data-statistical structure, and a multi-source target tracker model can be generated from unlabeled training data.
Further, step S2 also includes: under the joint determination of the initial feature matrix R of the multi-source targets and the weight matrix s of the deep learning network, the optimization problem of formula (1) is equivalent to the optimal-value problem of formula (2), shown below:
[Formula (2): equation image PCTCN2019104924-appb-000006]
This also includes the feature matrix of the most recent joint sparse representation. The key to solving this problem is to determine a hard-threshold decision on the columns of s, retaining an amplitude decision threshold in each column. By comparison, similarity analysis uses a simple soft-threshold function to solve formula (1); if the sparsity constraint is convexly relaxed, the problem becomes difficult to solve with the computationally simpler soft-threshold method. The method of the present invention therefore uses a fixed hard decision threshold.
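The contrast drawn here between hard and soft thresholding can be made concrete. Both operators below are standard signal-processing definitions; their placement inside the patented method is our illustration only.

```python
import numpy as np

def hard_threshold(x, tau):
    """Keep coefficients whose magnitude clears tau; zero the rest."""
    return np.where(np.abs(x) >= tau, x, 0.0)

def soft_threshold(x, tau):
    """Shrink every coefficient toward zero by tau (the operator that
    solves the convexly relaxed, L1-penalized problem)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

x = np.array([-2.0, -0.3, 0.1, 0.5, 1.5])
print(hard_threshold(x, 0.4))   # keeps -2.0, 0.5, 1.5 unchanged; zeros the rest
print(soft_threshold(x, 0.4))   # shrinks the survivors to -1.6, 0.1, 1.1
```

Hard thresholding preserves the amplitude of retained coefficients, which matches the "amplitude decision threshold" wording above, while soft thresholding biases them toward zero.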
Further, step S4 specifically includes a feature matrix update step that minimizes the feature matrix R of formula (1):
[Formula (3): equation image PCTCN2019104924-appb-000007]
Here, the update range of the feature matrix R is determined by column selection on the weight matrix s: the selected range of R corresponds to the zero positions of column k of the weight matrix s. Because this step uses only limited prior information about s rather than the complete matrix, it lowers the computational cost of the feature matrix update and effectively supports learning the update step under a limited computation budget.
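A sketch of this restricted update: only the atoms actually used by the sparse code (rows of s with non-zero entries) are refit against the data, so the update touches a fraction of the matrix. The least-squares refit and all names are our assumptions, not the patent's exact procedure.

```python
import numpy as np

def restricted_dictionary_update(R, s, Y):
    """Update only the columns of R whose corresponding row of the sparse
    code s is non-zero; unused atoms are left untouched, keeping the
    per-iteration cost proportional to the active support."""
    R = R.copy()
    active = np.flatnonzero(np.abs(s).sum(axis=1) > 0)   # atoms in use
    if active.size:
        # least-squares refit of only the active atoms against the data
        R_active, *_ = np.linalg.lstsq(s[active].T, Y.T, rcond=None)
        R[:, active] = R_active.T
    return R, active

rng = np.random.default_rng(4)
Y = rng.standard_normal((8, 30))
R = rng.standard_normal((8, 12))
s = np.zeros((12, 30))
s[[1, 4, 9], :] = rng.standard_normal((3, 30))   # only 3 of 12 atoms active
R_new, active = restricted_dictionary_update(R, s, Y)
print(active)   # only atoms 1, 4, 9 were refit
```

Because the refit minimizes the reconstruction error over exactly the columns that matter, the error after the update can never exceed the error before it.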
Further, step S4 also includes: taking the image features R_opt = {r_i | i = 1, 2, ..., n} obtained in step S1 as the input of the deep learning network, letting d'_i be the corresponding reconstructed distance feature, computing the reconstruction error of the multi-source targets according to the MMSE (Minimum Mean Squared Error) criterion, and obtaining the average error by the following formula:
[Formula (4): equation image PCTCN2019104924-appb-000008]
where e_i denotes the error value of each target.
In the feature learning iteration, the result after feature reconstruction can be expressed as:
T' = {v_i | e_i < η, v_i ∈ V}  (5).
where v_i denotes the reconstructed feature of a multi-source target, V denotes the acquired target features, and η denotes the set feature threshold. Further, the step of establishing the top-down generative model in step S5 includes a feature learning process terminated by a single parameter: the iteration stops when the difference between the average reconstruction error of the current iteration and that of the previous iteration falls below that value. In the iterative feature learning process, since the reconstruction weight matrix of the feature values is more reliable, let M be the reconstruction weight matrix, and let the multi-source features of I on the test data set be

[equation image PCTCN2019104924-appb-000009]

where X is the set of extracted features; the reconstructed features

[equation image PCTCN2019104924-appb-000010]

can be written as:
[Formula (6): equation image PCTCN2019104924-appb-000011]
From the feature matrix, the reconstruction error is computed:
[Formula (7): equation image PCTCN2019104924-appb-000012]
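Formulas (4) through (7) describe a plain error-gated selection: reconstruct each feature, score it against the original with an MMSE-style squared error, and keep only the targets whose error beats the threshold η. The sketch below follows that reading; the linear reconstruction M @ X stands in for the patent's reconstruction weight matrix, and all dimensions are illustrative.

```python
import numpy as np

def error_gated_selection(X, M, eta):
    """Reconstruct each feature column as M @ x, score it by the mean
    squared error (MMSE-style), and keep targets whose error is below eta."""
    X_rec = M @ X                                # reconstructed features, cf. formula (6)
    errors = np.mean((X - X_rec) ** 2, axis=0)   # per-target error e_i, cf. formula (7)
    avg_error = errors.mean()                    # average error, cf. formula (4)
    kept = np.flatnonzero(errors < eta)          # T' = {v_i | e_i < eta}, cf. formula (5)
    return errors, avg_error, kept

rng = np.random.default_rng(5)
X = rng.standard_normal((16, 10))                # 10 targets, 16-D features each
# a near-identity reconstruction matrix: small error for every target
M = np.eye(16) + 0.05 * rng.standard_normal((16, 16))
errors, avg_error, kept = error_gated_selection(X, M, eta=0.5)
```

Targets in `kept` survive to the next feature-learning iteration; the average error doubles as the stopping criterion described above.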
The proposed method learns the UAV radar cross-sections, the acquired image features, and the communication carrier-frequency features through the compressed feature matrix, obtains the weakly supervised initialization weights of the deep learning model, adjusts the network weights through inter-layer reconstruction, decision, and feature selection, and finally obtains the pruned weights of the system to achieve tracking of multi-source feature targets.
The deep learning network proposed by the present invention is a layered architecture: it first compresses the learned feature matrix by selecting its non-zero columns, forming a preliminary feature matrix. By building this layered deep learning architecture, multi-source targets in complex scenes can be tracked effectively. Verification on scene data shows that the target tracking efficiency of the multi-source tracking method of the present invention exceeds that of a single-pass, single-dimension tracking algorithm. In one concrete run, for a low-altitude tracking scene with multiple UAV sorties, using high-order QAM modulation with the radar cross-section set to 1, a multi-source target tracking efficiency of 99% is achieved.
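The compression step mentioned here, selecting non-zero columns to shrink the learned feature matrix, can be sketched directly. The tolerance and the function name are our illustrative assumptions.

```python
import numpy as np

def compress_by_nonzero_columns(R, tol=1e-12):
    """Keep only the columns of the learned feature matrix that carry any
    energy; all-zero columns (unused atoms) are dropped."""
    keep = np.abs(R).sum(axis=0) > tol
    return R[:, keep], keep

R = np.array([[1.0, 0.0, 2.0, 0.0],
              [0.5, 0.0, 0.0, 0.0]])
R_small, keep = compress_by_nonzero_columns(R)
print(R_small.shape)   # (2, 2): columns 1 and 3 are all zero and are dropped
```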
The above are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (10)

  1. A multi-source target tracking method for complex scenes, characterized in that it comprises the following steps:
    S1: for multi-source target feature data, forming an initial deep learning network, setting the weights of the initial deep learning network, reducing those weights by steepest-gradient dimensionality reduction, and forming an initial feature matrix of the multi-source targets;
    S2: performing simplified sparse representation and sparsification on the initial feature matrix of the multi-source targets to obtain a sparsified target feature matrix;
    S3: classifying the sparsified target feature matrix and establishing feature matrix representations of different feature targets;
    S4: minimizing the sparsified target feature matrix;
    S5: establishing a generative model and a reconstruction model of feature matrix learning;
    S6: achieving effective multi-source target tracking through the generative model and the reconstruction model.
  2. The multi-source target tracking method according to claim 1, characterized in that step S1 comprises:
    establishing a deep learning network and forming the features of the multi-source targets into the initial feature matrix through the deep learning network, the initial feature matrix having an adaptive change and improvement capability, the expression of the initial feature matrix being:
    [Formula (1): equation image PCTCN2019104924-appb-100001]
  3. The multi-source target tracking method according to claim 1, characterized in that step S2 comprises:
    performing an effective feature matrix update iteration through feature decomposition, and completing the generation of sparsified features through preprocessing of the feature matrix.
  4. The multi-source target tracking method according to claim 2, characterized in that step S2 comprises: transforming the optimization problem of formula (1) into the optimal-value problem of formula (2), obtained as follows:
    [Formula (2): equation image PCTCN2019104924-appb-100002]
    wherein R denotes the initial feature matrix of the multi-source targets, and s denotes the weight matrix of the deep learning network.
  5. The multi-source target tracking method according to claim 2, characterized in that step S3 further comprises a feature matrix update step, wherein the feature matrix R of formula (1) is minimized as:
    [Formula (3): equation image PCTCN2019104924-appb-100003]
    wherein the selected range of the feature matrix R corresponds to the zero positions of column k of the matrix s.
  6. The multi-source target tracking method according to claim 1, characterized in that step S4 comprises: taking the image features R_opt = {r_i | i = 1, 2, ..., n} obtained in step S1 as the input of the deep learning network, letting d'_i be the corresponding reconstructed features, and computing the reconstruction error and the average error of the multi-source targets,
    the features after reconstruction being expressed as:
    T' = {v_i | e_i < η, v_i ∈ V}  (5),
    wherein v_i denotes the reconstructed features of the multi-source targets, V denotes the acquired target features, and η denotes the set feature threshold.
  7. The multi-source target tracking method according to claim 1, characterized in that step S5 comprises: calculating the reconstruction error by formula (7):
    [Formula (7): equation image PCTCN2019104924-appb-100004]
  8. The multi-source target tracking method according to claim 1, characterized in that the multi-source target feature data is one or more of image data, radar data, communication data, and position data.
  9. The multi-source target tracking method according to claim 1, characterized in that, in step S5, the generative model is a top-down generative model.
  10. The multi-source target tracking method according to claim 1, characterized in that, in step S5, the reconstruction model is a bottom-up reconstruction model.
PCT/CN2019/104924 2019-09-09 2019-09-09 Complex scenario-oriented multi-source target tracking method WO2021046681A1 (en)


Publications (1)

Publication Number Publication Date
WO2021046681A1 true WO2021046681A1 (en) 2021-03-18




Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19945127

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19945127

Country of ref document: EP

Kind code of ref document: A1
