CN109993224B - GEO satellite shape and attitude identification method based on deep learning and multi-core learning - Google Patents


Info

Publication number
CN109993224B
CN109993224B · Application CN201910239623.7A
Authority
CN
China
Prior art keywords
sequence data
ocs
kernel
ocs sequence
feature vector
Prior art date
Legal status
Active
Application number
CN201910239623.7A
Other languages
Chinese (zh)
Other versions
CN109993224A (en)
Inventor
霍俞蓉
李智
方宇强
徐灿
张峰
卢旺
Current Assignee
Peoples Liberation Army Strategic Support Force Aerospace Engineering University
Original Assignee
Peoples Liberation Army Strategic Support Force Aerospace Engineering University
Priority date
Filing date
Publication date
Application filed by Peoples Liberation Army Strategic Support Force Aerospace Engineering University filed Critical Peoples Liberation Army Strategic Support Force Aerospace Engineering University
Priority to CN201910239623.7A priority Critical patent/CN109993224B/en
Publication of CN109993224A publication Critical patent/CN109993224A/en
Application granted granted Critical
Publication of CN109993224B publication Critical patent/CN109993224B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 — Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C21/24 — Navigation; Navigational instruments specially adapted for cosmonautical navigation
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 — Pattern recognition
    • G06F18/20 — Analysing
    • G06F18/24 — Classification techniques
    • G06F18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 — Classification techniques based on the proximity to a decision surface, e.g. support vector machines


Abstract

The invention provides a GEO satellite shape and attitude recognition method based on deep learning and multiple kernel learning, comprising the following steps: acquire one year of OCS sequence data for a GEO satellite; preprocess the OCS sequence data; build a C-RNN model that automatically extracts features from the OCS sequence data, based on a deep learning network comprising a recurrent neural network and a convolutional neural network; train the C-RNN model to obtain multiple feature vectors for the OCS sequence data; and, based on multiple kernel learning, map the different features with several kernel functions and identify the shape and attitude of the satellite with a support vector machine. Without requiring any prior information, the method combines deep learning based on recurrent and convolutional neural networks with multiple kernel learning based on a support vector machine, and automatically identifies the shape and attitude of a GEO satellite from OCS sequence data in a purely data-driven manner.

Description

Shape and Attitude Recognition Method for GEO Satellites Based on Deep Learning and Multiple Kernel Learning

Technical Field

The invention belongs to the technical field of GEO satellite shape and attitude recognition, and in particular relates to a GEO satellite shape and attitude recognition method based on deep learning and multiple kernel learning.

Background Art

At present, the number of space objects keeps increasing, and Space Situational Awareness (SSA) has become an important international research topic. Geosynchronous orbit (GEO) hosts important space assets, and verifying the status of all satellites and objects in GEO is essential for properly assessing the GEO environment. At the same time, detailed characteristics of space objects (such as shape, attitude, and motion state) can be used to accurately predict their trajectories and behavior, providing an important information capability for space situational awareness. Optical observation is an important tool for obtaining information about space objects, and the large number of existing optical telescopes remains China's main means of observing GEO targets. More specifically, analysts can use the photometric sequence data obtained by an optical observation system to extract characteristics of a space object such as shape, size, attitude, reflectivity, and material, and ultimately judge the behavior and intent of the space target. At present, the optical scattering cross section (OCS) is widely used to represent the optical scattering characteristics of space targets, and the shape and attitude of GEO targets can be effectively identified from their visible-light scattering characteristics. Because the OCS is affected only by the target's geometry, surface material, attitude, and the relative sun-target-station geometry, and is independent of the observation distance and the observation system, and because OCS sequence data and photometric sequence data can be converted into each other, OCS sequence data is well suited to feature recognition of space targets.

Currently, analysts can manually identify the shape and attitude of a satellite from photometric sequence data. The most common identification approaches are physical models (such as inversion methods based on the dihedral, i.e. two-facet, model) and filters (such as Kalman filtering). The point-pairing method based on the dihedral model requires the satellite body to have Lambertian reflection characteristics and a complex three-dimensional shape, and requires the solar panels to have both specular and Lambertian reflection characteristics and a near-planar structure; under these conditions the method can quickly invert the reflectivity and area of a GEO target. The Kalman filtering approach treats the target characteristic parameters to be identified as unknown state parameters of the system and estimates them optimally; using the unscented Kalman filter, the shape of a space target and the direction of its inertial axis can be identified. This approach offers good identification performance and high speed.

Although traditional physical models and filtering methods are fast, they require a great deal of prior information and are strongly affected by model quality. Moreover, the volume of data collected from the many observation devices is so large that manual identification by human analysts is no longer feasible.

The method based on the dihedral model first places many requirements on the structure of the target model. Second, it requires the reflectivity and area of the target body to contribute equally at each observation point, and the body attitude at the two observation points must be identical. In addition, the method requires the time interval between point pairs, or the spacing between station positions, to be as large as possible in order to improve the model's identification performance. The method therefore currently lacks generality and practicality.

The Kalman filtering method needs to fuse phase-angle data with photometric sequence data and therefore requires some prior knowledge; in addition, uncertainty in the target's own model parameters introduces systematic errors into the Kalman filter estimates.

Summary of the Invention

The purpose of the present invention is to provide a GEO satellite shape and attitude recognition method based on deep learning and multiple kernel learning which, without requiring prior information, combines deep learning based on the recurrent neural network (RNN) and the convolutional neural network (CNN) with multiple kernel learning based on the support vector machine (SVM), and automatically identifies the shape and attitude of GEO satellites from OCS sequence data in a purely data-driven manner.

To achieve the above object, the present invention is realized through the following technical solutions:

The present invention provides a GEO satellite shape and attitude recognition method based on deep learning and multiple kernel learning, comprising:

Step 1: obtain the optical scattering cross section (OCS) sequence data of a GEO satellite;

Step 2: preprocess the OCS sequence data;

Step 3: based on a deep learning network comprising a recurrent neural network and a convolutional neural network, build a C-RNN model for automatic feature extraction from the OCS sequence data;

Step 4: train the C-RNN model and obtain multiple feature vectors of the OCS sequence data;

Step 5: based on multiple kernel learning, map the different features with several kernel functions, and identify the shape and attitude of the satellite with a support vector machine.

The method for obtaining the OCS sequence data of the GEO satellite in Step 1 includes:

obtaining the OCS sequence data of the space target by one or more of: numerical calculation, photometric sequence data from actual observations, or laboratory simulation measurements.

Further, obtaining the OCS sequence data of a space target by numerical calculation includes: using a BRDF model together with an OCS calculation method to compute the OCS sequence data of the space target.

Photometric sequence data obtained from actual observations can be converted into OCS sequence data. The conversion between photometric sequence data and OCS sequence data is:

[Equation shown as image in the original (BDA0002009256710000031): conversion formula between apparent magnitude m and OCS]

where m is the photometric sequence data (stellar magnitude) and r is the distance between the observation device and the space target.

In Step 2, preprocessing the OCS sequence data includes:

dividing the OCS sequence data obtained under different observation geometries into different subsets according to the observation geometry corresponding to each observation interval, and setting the label of each subset to a sub-class of the corresponding category; the observation geometry is the positional relationship between the sun, the space target, and the station.

In Step 3, building the C-RNN model for automatic feature extraction from OCS sequence data includes:

The C-RNN model consists of an encoder, a decoder, and a classifier.

The encoder takes the OCS sequence data as input and produces a fixed-length feature vector as output.

The decoder reconstructs the input OCS sequence data from the feature vector produced by the encoder.

The classifier consists of three fully connected layers with ReLU activation and an output layer with sigmoid activation; it takes the feature vector produced by the encoder as input, maps the feature vector to a category with the sigmoid function, and outputs the shape and attitude corresponding to the input OCS sequence data of the GEO satellite.
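The classifier head described above can be sketched as a small numpy forward pass: three dense ReLU layers followed by a sigmoid output. The hidden widths (128, 64, 32) and the number of output classes (10) are illustrative assumptions; the patent fixes only the 64-dimensional encoder feature vector and the layer activations.

```python
import numpy as np

rng = np.random.default_rng(0)

relu = lambda z: np.maximum(z, 0.0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Hypothetical layer widths: 64-dim feature vector -> 128 -> 64 -> 32 -> 10.
sizes = [64, 128, 64, 32, 10]
params = [(rng.normal(0.0, 0.1, (m, n)), np.zeros(n))
          for m, n in zip(sizes, sizes[1:])]

def classify(feature_vec):
    h = feature_vec
    for W, b in params[:-1]:        # three fully connected ReLU layers
        h = relu(h @ W + b)
    W, b = params[-1]
    return sigmoid(h @ W + b)       # sigmoid output layer: scores in (0, 1)

scores = classify(rng.normal(size=64))
```

Because the output layer is a sigmoid, every class score lies strictly between 0 and 1, matching the binarized label vectors used by the binary cross-entropy loss.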

Further, the loss function of the C-RNN model is:

L = MSE + loss

where MSE is the loss function used for OCS sequence reconstruction and loss is the loss function used for the shape and attitude classification; during training of the C-RNN model, back-propagation and gradient descent are used to minimize the total loss.

Further, the encoder includes two 1-D convolutional layers with rectified linear unit (ReLU) activation. The ReLU function can be expressed as:

f(x) = max(0, x)

where x is the input value. The 1-D convolution convolves the OCS sequence data with a one-dimensional kernel to extract a feature vector from the OCS sequence data. With stride k = 1, the 1-D convolution is defined as:

y_j = f( Σ_{i=1}^{n_k} W_i · x_{j+i-1} ),  j = 1, …, m

where x = (x_1, …, x_n) is the input OCS sequence, n is the length of the sequence, W and k denote the 1-D convolution kernel and the stride respectively, y = (y_1, …, y_m) is the output vector after convolution, m = n − n_k + 1, and n_k is the size of the convolution kernel.

Each convolutional layer is followed by a dropout layer. The second convolutional layer is followed by a flatten layer, which converts the multi-dimensional features into 1-D features. The final feature vector of the specified length is produced by passing the output of the flatten layer through two fully connected layers with ReLU activation.

Further, in the decoder, two gated recurrent unit (GRU) networks are applied to reconstruct the OCS input signal.

The decoder takes as input the feature vector and the difference Δt_N between sampling times, where N is the number of sampling points. The feature vector is replicated l times, where l is the configured output sequence length of the decoder, and the sampling-time differences are likewise replicated l times. The feature vector characterizes the OCS sequence data, and the sampling times determine the position of each point in the reconstructed sequence.
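The decoder-input construction just described, replicating the encoder feature vector together with the sampling-time difference l times, can be sketched as follows. The dimensions (64-dim feature vector, one time-difference channel, l = 200) follow the figures given later in the embodiment; the helper name is ours.

```python
import numpy as np

def build_decoder_input(feature_vec, dt, l):
    """Replicate the encoder feature vector together with the sampling-time
    difference dt l times, giving the decoder an input sequence of shape
    (l, len(feature_vec) + 1) -- here (200, 65)."""
    step = np.concatenate([feature_vec, [dt]])   # length 64 + 1
    return np.tile(step, (l, 1))

feat = np.linspace(0.0, 1.0, 64)   # stand-in for a C-RNN encoder output
dec_in = build_decoder_input(feat, 0.5, 200)
```

Each of the l time steps of the GRU decoder then sees the same 65-dimensional vector, from which it reconstructs one point of the OCS sequence.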

In Step 4, training the C-RNN model and obtaining multiple feature vectors of the OCS sequence data includes:

OCS sequences of length 200 are fed into the C-RNN model. The C-RNN model is trained twice: the first time, convolution kernels of size 5 are applied to the two CNN layers; the second time, kernels of size 3 are applied. The number of iterations and the batch size are set to 2000 and 1000 respectively, and the feature vectors output by the C-RNN model are saved after each training run.

The two CNN layers in the model have 60 filters and 100 filters respectively; the output of the encoder is a feature vector with an embedding size of 64.

The decoder uses two bidirectional GRU layers with 100 units each. The input to the decoder is a vector of length 65, consisting of the encoder output (length 64) and the sampling-time difference between OCS sequence data points (length 1).

The classifier processes the feature vectors produced by the encoder and outputs a classification result for each OCS sequence.

The Adam optimizer is used for network optimization with a learning rate of 1×10⁻³; the dropout rate of each dropout layer is set to 0.25.

In Step 5, identifying the shape and attitude of the satellite includes:

Based on MKL, the feature vectors produced by C-RNN models with different convolution kernels are used as input data for a support vector machine (SVM). The basic kernel functions are combined by linear combination, the resulting multi-kernel combination is used as the kernel function of the SVM, and the SVM performs the classification. The linear multi-kernel combination can be described as follows:

K(x, z) = Σ_{i=1}^{n} β_i K̃_i(x, z)

where x, z ∈ X, X is the feature space, K̃_i is the i-th normalized basic kernel function, K(x, z) is the final kernel function formed by the linear combination of n basic kernel functions, and β_i is the i-th coefficient. The basic kernel functions are polynomial kernels, which can be expressed as:

K(x, z) = (x · z + R)^d

where x, z ∈ X, X is the feature space, and R and d are a constant and the order of the polynomial, respectively. Here MKL serves as the multi-feature fusion method and the SVM as the classification model. Multiple kernel learning (MKL) is a commonly used SVM-based multi-feature fusion method. An ordinary SVM uses a single kernel, and it is difficult to select the most suitable kernel function and parameters to obtain the best classification performance. MKL instead applies different kernels to different features, assigns each kernel its own weight, trains the kernel weights, and selects the best combination of kernel functions to complete the classification task.

The beneficial effects of the present invention are:

The invention uses a neural network to automatically extract features from the OCS sequence data of GEO satellites. The proposed C-RNN model consists of an encoder built from a CNN, a decoder built from an RNN, and a classifier built from a fully connected neural network. The main function of the classifier in the C-RNN architecture is to maximize the distance between features rather than to perform the final classification. The shape and attitude of GEO satellites are recognized through multiple kernel learning and an SVM, where the multiple kernel learning combines several polynomial kernels linearly. The invention can automatically extract features from the acquired OCS sequence data, saving substantial labor costs; the extracted features carry richer information about the OCS sequence data, which improves classification performance; and multiple kernel learning fuses the different features, making the classification results more accurate.

Brief Description of the Drawings

Figure 1 is a schematic diagram of the structure of the C-RNN model.

Figure 2 is a schematic diagram of the principle of MKL.

Figures 3a to 3e are schematic diagrams of the five satellite models.

Figures 4a to 4e show the OCS sequence reconstruction results for the five satellites in attitude 2.

Figure 5 shows the classification results for OCS sequence data using the C-RNN structure with a convolution kernel of size 5.

Figure 6 shows the classification results for OCS sequence data using the C-RNN structure with a convolution kernel of size 3.

Figure 7 shows the classification results of the SVM-based MKL on the test OCS sequence data.

Figure 8 shows the training and validation losses of ENDECLA-CR, ENDE-CR, and ENDE-RR.

Detailed Description

To help those skilled in the art better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.

Embodiment 1

An embodiment of the present invention provides a GEO satellite shape and attitude recognition method based on deep learning and multiple kernel learning, comprising:

Step 1: obtain the optical scattering cross section (OCS) sequence data of the GEO satellite.

The OCS sequence data of the GEO satellite can be obtained by one or more of: numerical calculation, photometric sequence data from actual observations, or laboratory simulation measurements. Obtaining the OCS sequence data of a space target by numerical calculation includes using a BRDF model together with an OCS calculation method. Photometric sequence data obtained from actual observations can be converted into OCS sequence data; the conversion between the two is:

[Equation shown as image in the original (BDA0002009256710000071): conversion formula between apparent magnitude m and OCS]

where m is the photometric sequence data (stellar magnitude) and r is the distance between the observation device and the space target.

Step 2: preprocess the OCS sequence data. This includes:

dividing the OCS sequence data obtained under different observation geometries into different subsets according to the observation geometry corresponding to each observation interval, and setting the label of each subset to a sub-class of the corresponding category, where the observation geometry is the positional relationship between the sun, the space target, and the station. For example, if the OCS sequence data of target T in category C corresponds to n principal observation geometries (n ≤ 5), the OCS sequence data of target T in category C is divided into n sub-classes whose labels are set to C1, C2, …, Cn. One year of OCS sequence data is thus divided into N_C × n classes, where N_C is the number of categories. This processing improves the training in Step 4. In addition, each OCS sequence segment is processed into sequence data of length 200 for recognition and classification.
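The sub-class labeling above can be sketched as a small grouping routine. The function name and the toy data (four length-200 sequences of one target under two geometries) are illustrative, not from the patent.

```python
def split_by_geometry(sequences, category, geometry_ids):
    """Group the OCS sequences of one target into sub-classes C1..Cn
    according to the observation-geometry interval each came from."""
    subsets = {}
    for seq, g in zip(sequences, geometry_ids):
        subsets.setdefault(f"{category}{g}", []).append(seq)
    return subsets

# Illustrative: four length-200 sequences of target T in category "C",
# observed under two principal geometries (n = 2).
seqs = [[0.1] * 200, [0.2] * 200, [0.3] * 200, [0.4] * 200]
subsets = split_by_geometry(seqs, "C", [1, 1, 2, 2])
```

With N_C categories each split this way, the training set carries N_C × n labels, which is what the C-RNN classifier is trained against.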

Step 3: based on a deep learning network comprising a recurrent neural network and a convolutional neural network, build a C-RNN model for automatic feature extraction from the OCS sequence data.

To identify the shape and attitude of a GEO satellite from OCS data, the entire OCS sequence must be compressed into a feature vector. The C-RNN feature extraction model is shown in Figure 1, in which M and D denote the dimensions of the CNN convolutional layers; the size of each convolutional layer is determined by the given kernel size, the number of filters, and the length of the input data, and the dropout layers are omitted from the figure.

The C-RNN model consists of an encoder, a decoder, and a classifier. The encoder is composed mainly of a CNN; it takes the OCS sequence as input and produces a fixed-length feature vector as output. The decoder reconstructs the input OCS sequence data from the feature vector produced by the encoder. The classifier consists of three fully connected layers with ReLU activation and an output layer with sigmoid activation; it takes the feature vector produced by the encoder as input, maps the feature vector to a category with the sigmoid function, and outputs the shape and attitude corresponding to the input OCS sequence data of the GEO satellite.

Since the proposed model has two outputs, two loss functions need to be defined. The loss function of the C-RNN model is:

L = MSE + loss

where MSE is the loss function used for OCS sequence reconstruction and loss is the loss function used for the shape and attitude classification; during training of the C-RNN model, back-propagation and gradient descent are used to minimize the total loss. The loss function used for OCS sequence reconstruction is the mean squared error (MSE), given by:

MSE = (1/N) Σ_{i=1}^{N} w_i Σ_{j=1}^{n} (x_{i,j} − x̂_{i,j})²

where x_i is the i-th OCS sequence, x̂_i is the i-th reconstructed sequence, w_i is a weight coefficient, n is the length of the output sequence, and N is the total number of input OCS sequences.

使用二元交叉熵作为形状和姿态分类过程的损失函数。需要注意的是,当使用二元交叉熵作为损失函数时,序列数据的标签需要进行二值化,分类器的输出也为由0和1组成的向量。二元交叉熵由下式给出:Use binary cross-entropy as the loss function for the shape and pose classification process. It should be noted that when binary cross entropy is used as the loss function, the labels of sequence data need to be binarized, and the output of the classifier is also a vector composed of 0s and 1s. The binary cross-entropy is given by:

\mathrm{loss} = -\frac{1}{n_c}\sum_{i=1}^{n_c}\bigl[y_i\log\hat{y}_i+(1-y_i)\log(1-\hat{y}_i)\bigr]

where y_i and ŷ_i are, respectively, the i-th value of the binarized label and the i-th value of the class vector output by the classifier, and n_c is the length of the classifier's output vector.

The encoder is composed mainly of a CNN that takes the OCS sequence as input and generates a fixed-length feature vector as output. It includes two 1-D convolutional layers with rectified linear unit (ReLU) activation functions; the ReLU function can be expressed as:

\mathrm{ReLU}(x) = \max(0, x)

where x is the input value. The 1-D convolution convolves the OCS sequence data with a one-dimensional convolution kernel, thereby extracting the feature vector of the OCS sequence data; the 1-D convolution is defined as:

C_j = \sum_{i=1}^{n_k} W_i\,x_{(j-1)k+i},\qquad j = 1,\dots,m

where x = (x_1, …, x_n) is the input OCS sequence data; n is the length of the sequence data; W and k denote the 1-D convolution kernel and the sliding step, respectively; C = (C_1, …, C_m) is the output vector after convolution, with m = n − n_k + 1, where n_k is the size of the convolution kernel. To prevent the neural network from overfitting, each convolutional layer is followed by a dropout layer; the second convolutional layer is followed by a flatten layer, which converts multi-dimensional features into 1-D features. The final feature vector of the specified length is produced by passing the output of the flatten layer through two fully connected layers using the ReLU activation function.

In the decoder, two gated recurrent unit (GRU) networks are applied to complete the reconstruction of the OCS input signal. The GRU is a variant of the RNN that effectively mitigates the long-term dependency problem of RNNs and outperforms the standard RNN on tasks involving time-series data.

The decoder takes the feature vector and the differences Δt_N between sampling times as input, where N is the number of sampling points. The feature vector is replicated l times, where l is the configured output sequence length of the decoder; the differences between sampling time points are likewise replicated l times. The feature vector characterizes the OCS sequence data, and the sampling times determine the position of each point in the reconstructed sequence. By appending the time differences between sampling points as input, the C-RNN model can handle non-uniformly sampled time-series data.
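
A shape-level sketch of this input construction (the Δt value is an assumed stand-in, and the feature values are illustrative):

```python
import numpy as np

feature = np.linspace(0.0, 1.0, 64)        # stand-in for the encoder's length-64 feature vector
dt = np.array([30.0])                      # assumed sampling-time difference (length 1)
l = 200                                    # configured decoder output sequence length

step = np.concatenate([feature, dt])       # length-65 per-step decoder input
decoder_in = np.tile(step, (l, 1))         # replicate l times, one row per output step
print(decoder_in.shape)  # (200, 65)
```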

The classifier in the C-RNN model can handle multi-label classification problems and can classify the shape and attitude of GEO targets directly. In the feature-extraction C-RNN model, however, the classifier's main role is to maximize the distance between feature vectors corresponding to different targets and different attitudes, so that the feature information extracted by the encoder is richer and more accurate. The classifier consists of three fully connected layers using the ReLU activation function and one output layer using the sigmoid activation function. The feature vector produced by the encoder is the classifier's input, and the output layer uses the sigmoid function to map the features to classes. Through the classifier, the shape and attitude corresponding to the input OCS sequence data of the GEO target can be obtained directly.

Step 4: train the C-RNN model and obtain multiple feature vectors of the OCS sequence data, including:

The input layer of the C-RNN model processes OCS sequence data of length 200. The preprocessed length-200 OCS sequence data are fed into the C-RNN model. The model is trained twice: the first time, a convolution kernel of size 5 is applied to the two CNN layers; the second time, a kernel of size 3. The number of iterations and the batch size are set to 2000 and 1000, respectively. After each training run, the feature vectors output by the C-RNN model are saved.

The two CNN layers in the model have 60 and 100 filters, respectively; the output of the encoder is a feature vector with an embedding size of 64.

The decoder uses two bidirectional GRU layers with a unit size of 100. The input of the decoder is a feature vector of length 65, comprising the encoder output (length 64) and the sampling-time difference between OCS sequence data points (length 1).

Because the classifier handles a multi-label classification problem, the label of each OCS sequence must be binarized; the classifier processes the feature vector produced by the encoder and gives the classification result for each OCS sequence.

The Adam optimizer is used for network optimization with a learning rate of 1×10^-3; the dropout rate of each dropout layer is set to 0.25.
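
For reference, a single Adam update with this learning rate can be written out in numpy (a generic sketch of the optimizer's update rule, not the training code itself):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam parameter update with the learning rate used here (1e-3)."""
    m = b1 * m + (1 - b1) * grad            # biased first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2       # biased second-moment estimate
    m_hat = m / (1 - b1 ** t)               # bias-corrected moments
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

theta, m, v = adam_step(np.array([1.0]), np.array([2.0]), np.zeros(1), np.zeros(1), t=1)
print(theta)  # the first step moves the parameter by roughly lr
```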

Step 5: based on multi-kernel learning, map the different features with multiple kernel functions, and identify the shape and attitude of the satellite with a support vector machine, including:

Based on MKL, the feature vectors produced by C-RNN models using different convolution kernels are used as input data of a support vector machine (SVM), and the basic kernel functions are combined by a multi-kernel linear combination method. The multi-kernel linear combination can be described as follows:

K(x,z) = \sum_{i=1}^{n}\beta_i\,\tilde{K}_i(x,z)

where x, z ∈ X, X is the feature space, K̃_i(x,z) is the i-th normalized basic kernel function, K(x,z) denotes the final kernel function formed by the linear combination of the n basic kernel functions, and β_i denotes the i-th coefficient. The basic kernel function is the polynomial kernel function, which can be expressed as:

K(x,z) = (x\cdot z + R)^d

where x, z ∈ X, X is the feature space, and R and d are a constant and the order of the polynomial, respectively. Here MKL serves as the multi-feature fusion method and the SVM as the classification model. The final kernel function is used in the SVM classifier to determine the shape and attitude of the GEO target. A schematic of the principle of multi-kernel learning is shown in Figure 2.

The beneficial effects of the present invention are:

The present invention uses a neural network to perform automatic feature extraction on the OCS sequence data of GEO satellites. The proposed C-RNN model comprises an encoder built from a CNN, a decoder built from an RNN, and a classifier built from a fully connected neural network. The main function of the classifier in the C-RNN architecture is to maximize the distance between features rather than to classify. The shape and attitude of GEO satellites are identified through multi-kernel learning and an SVM; the multi-kernel learning over the features is performed by linearly combining multiple polynomial kernels. The invention automatically extracts features from the acquired OCS sequence data, saving substantial labor cost; the extracted features contain richer information about the OCS sequence data, which improves the classification performance; and multi-kernel learning fuses different features, making the classification results more accurate.

To verify the effect of the present invention, one year of OCS sequence data was computed for five different satellites in three different attitudes (three-axis stabilized mode) as observed from the Lijiang Observatory, China (25.6°N, 101.1°E, 2.465 km). The model structures of the five space targets are shown in Figures 3a-3e.

The acquired OCS sequence data fall into 15 classes: (1) target 1, attitude 1; (2) target 1, attitude 2; (3) target 1, attitude 3; (4) target 2, attitude 1; (5) target 2, attitude 2; (6) target 2, attitude 3; (7) target 3, attitude 1; (8) target 3, attitude 2; (9) target 3, attitude 3; (10) target 4, attitude 1; (11) target 4, attitude 2; (12) target 4, attitude 3; (13) target 5, attitude 1; (14) target 5, attitude 2; (15) target 5, attitude 3. The three attitudes are: 1) the x-axis points along the satellite velocity and the z-axis toward the ground, with the x, y, z axes mutually orthogonal and satisfying the right-hand rule; 2) the y-axis points along the satellite velocity and the z-axis along the solar panel, with the x, y, z axes mutually orthogonal and satisfying the right-hand rule; 3) the z-axis points along the satellite velocity and the x-axis toward the Earth's center, with the x, y, z axes mutually orthogonal and satisfying the right-hand rule.

Following Step 1, 6,915 OCS sequences (15 classes in total) are obtained for the five GEO satellites in the three attitudes, 1,383 per satellite, each photometric sequence corresponding to a different observation interval. OCS sequences shorter than 200 points are removed, leaving 5,505 usable sequences: 1,101 per satellite and 367 per attitude. According to the observation geometry of each observation interval, the OCS sequences of each satellite are divided into 5 subclasses (i.e., 5 main observation geometries): subclass 1 has 129 OCS sequences; subclass 2 has 420; subclass 3 has 141; subclass 4 has 399; and subclass 5 has 12. To make the numerically computed OCS data more realistic, random errors drawn from a Gaussian distribution are added to the original OCS data. About 70% of the OCS sequence dataset is taken as the training set, and the remaining data are used as the validation set and test set, respectively. After data preprocessing, the training set contains about 12,500 OCS sequences.
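
The noise injection and the roughly 70% split described above can be sketched as follows (the noise standard deviation and the random seed are assumptions, and the data are random stand-ins for the 5,505 sequences):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.random((5505, 200))                    # stand-in for 5,505 length-200 OCS sequences

noisy = data + rng.normal(0.0, 0.01, data.shape)  # add Gaussian random error (sigma assumed)

idx = rng.permutation(len(noisy))                 # shuffle before splitting
n_train = int(0.7 * len(noisy))                   # ~70% of the data for training
train = noisy[idx[:n_train]]
val, test = np.array_split(noisy[idx[n_train:]], 2)
print(len(train), len(val), len(test))
```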

The training data are fed into the constructed model, and after 2000 training iterations the trained model is obtained. About 2,160 OCS sequences serve as test data, divided into 15 classes of 144 sequences each (each class representing one satellite in one attitude). The trained model is used to reconstruct and classify the test data. In the first training run, the decoder's reconstructions of the original test OCS sequences for the five satellites in attitude 2 are shown in Figures 4a-4e, where "Sat1" denotes "satellite 1" and "Attitude TWO" denotes the second of the three satellite attitudes.

After the two training runs, the classification results of the C-RNN architecture's classifier are shown in Figures 5 and 6, where "T0A1" denotes satellite 1, attitude 1, and the values on the diagonal are the numbers of correct classifications. When the encoder uses a convolution kernel of size 5, the classifier reaches a classification accuracy of 91.9%; with a kernel of size 3, the accuracy is 83.3%. Because kernels of different sizes lead the encoder to produce different feature vectors, the classification results differ. With the model structure unchanged, the features obtained from the encoder with the size-5 kernel better represent the OCS sequence data of the five satellites in the three attitudes.

For classification with multi-kernel learning, R in the basic polynomial kernel function is set to 0. From the two obtained feature sets, several polynomial kernels are constructed in turn, the order of each polynomial kernel being d = 1, 2, 3, …, 10. A support vector machine (SVM) serves as the classifier, with the final combined kernel K as its kernel function, and is trained on the training set. After classifier training, the optimal penalty value C of the SVM is 1000, and the linear combination coefficients of the kernel functions are all 0. The SVM-based MKL classification results are shown in Figure 7; the classification accuracy obtained by multi-kernel learning reaches 99.58%.
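
One way to build the ten order-d base kernels with R = 0 and normalize them, per the normalized basic kernels described earlier (the feature vectors here are illustrative):

```python
import numpy as np

def normalized_poly_gram(X, d):
    """Gram matrix of the order-d polynomial kernel with R = 0, normalized so
    that each basic kernel satisfies K~(x, x) = 1."""
    K = (X @ X.T) ** d
    s = np.sqrt(np.diag(K))
    return K / np.outer(s, s)

X = np.array([[1.0, 2.0], [2.0, 1.0], [0.5, 0.5]])          # toy feature vectors
grams = [normalized_poly_gram(X, d) for d in range(1, 11)]  # d = 1, 2, ..., 10
print(len(grams))  # 10
```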

To evaluate the feature-selection performance of the proposed C-RNN architecture (referred to as ENDECLA-CR), several network models are first constructed for comparison. The first is the C-RNN model with its classifier removed (denoted ENDE-CR); the second replaces the two convolutional layers of the encoder in ENDE-CR with two GRU layers of size 100 (denoted ENDE-RR). Both models use the same model and training parameters as the proposed C-RNN model, and the training data used for the C-RNN model serve as their input training data. After training, the training and validation losses of ENDECLA-CR, ENDE-CR and ENDE-RR are shown in Figure 8.

Table 1 lists the classification accuracies of the present invention and the six feature-extraction models used for comparison, showing the superior recognition performance of the invention. The six comparison models are principal component analysis (PCA), linear discriminant analysis (LDA), dictionary learning (DL), ENDE-CR, ENDE-RR, and a simple deep neural network in which the encoder and decoder of ENDE-RR are replaced by a simple multi-layer fully connected structure (denoted SDNN). The SDNN model contains one input layer (200 units), four fully connected layers (500 units per hidden layer) and one output layer (200 units). The number of training iterations of the SDNN is set to 2000, the loss function is MSE, and the optimizer is Adam.

The training and test sets used for the comparison models are the same as those used for the C-RNN model. The features extracted by the seven trained models are each fed into a support vector machine with a linear kernel for classification. This classifier is used because it neither maps the input features into a high-dimensional space nor performs other transformations, which keeps the comparison valid.

The recognition performance of each model is evaluated by the mean average precision (MAP) of its classifications. MAP is the average prediction precision over all classes and can be expressed as:

\mathrm{MAP} = \frac{1}{N}\sum_{c=1}^{N}\mathrm{precision}_c,\qquad \mathrm{precision}_c = \frac{x_{Tc}}{M_c}

where N is the total number of classes, precision_c is the classification precision of class c, x_Tc is the number of correctly predicted OCS sequences of class c, and M_c is the total number of OCS sequences identified as class c. The k in the table denotes the size of the convolution kernel used by the C-RNN structure.
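
A direct numpy transcription of this metric (the counts are illustrative):

```python
import numpy as np

def mean_average_precision(x_t, m_c):
    """MAP = (1/N) * sum_c x_Tc / M_c over the N classes."""
    precision = np.asarray(x_t, dtype=float) / np.asarray(m_c, dtype=float)
    return float(precision.mean())

# N = 3 toy classes: correct predictions per class, and totals assigned per class
print(mean_average_precision([9, 8, 10], [10, 10, 10]))
```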

Table 1

Figure BDA0002009256710000132

Figure BDA0002009256710000141

From the classification performance of the seven models in Table 1, the recognition accuracies of the PCA and LDA models reach 61% and 76%, respectively, and DL reaches 73%. The SDNN outperforms the traditional feature-extraction methods PCA, LDA and DL, with a recognition accuracy of 84.5%; ENDE-CR and ENDE-RR show strong feature-extraction performance, reaching 95.8% and 83%, respectively. The results show that the proposed C-RNN structure achieves the best recognition accuracy: with kernel sizes of 3 and 5, it exceeds 98%. The classifier in the C-RNN increases the distance between the features of different classes, so that the feature vectors generated by the encoder better represent the OCS sequence data, giving the proposed C-RNN architecture good classification performance.

The serial numbers of the above embodiments of the present invention are for description only and do not indicate the relative merits of the embodiments.

It should be noted that, for brevity, each of the foregoing method embodiments is described as a series of action combinations; however, those skilled in the art should appreciate that the present invention is not limited by the described order of actions, since according to the present invention certain steps may be performed in other orders or simultaneously. Those skilled in the art should also appreciate that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present invention.

The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or substitution that a person skilled in the art could readily conceive within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (9)

1. A GEO satellite shape and attitude identification method based on deep learning and multi-kernel learning, characterized by comprising:
Step 1: obtaining optical scattering cross-section (OCS) sequence data of a GEO satellite;
Step 2: preprocessing the OCS sequence data;
Step 3: constructing, based on a deep learning network, a C-RNN model for automatic feature extraction from the OCS sequence data, the deep learning network comprising a recurrent neural network and a convolutional neural network;
Step 4: training the C-RNN model to obtain multiple feature vectors of the OCS sequence data;
Step 5: based on multi-kernel learning, mapping the different feature vectors with multiple kernel functions, and identifying the shape and attitude of the satellite with a support vector machine;
wherein in Step 3, the method of constructing the C-RNN model for automatic feature extraction from the OCS sequence data comprises: the C-RNN model consists of an encoder, a decoder and a classifier; the encoder takes the OCS sequence data as input and generates a fixed-length feature vector as output; the decoder reconstructs the input OCS sequence data from the feature vector generated by the encoder; the classifier consists of three fully connected layers using the ReLU activation function and one output layer using the sigmoid activation function; the feature vector produced by the encoder is used as input, and the sigmoid function maps the feature vector to a class as output, maximizing the distance between features.

2. The method of claim 1, wherein the method of obtaining the OCS sequence data of the GEO satellite in Step 1 comprises:
obtaining the OCS sequence data of the space target by one or more of numerical computation, photometric sequence data from actual observation, and laboratory simulation measurements.

3. The method of claim 2, wherein obtaining the OCS sequence data of the space target by numerical computation comprises: obtaining the OCS sequence data of the space target using a BRDF model and an OCS computation method;
the photometric sequence data obtained from actual observation can be converted into OCS sequence data, the conversion relationship between photometric sequence data and OCS sequence data being:

Figure FDA0002835279420000011

where m is the photometric sequence data, and r is the distance between the observation equipment and the space target.
4. The method of claim 1, wherein in Step 2 the method of preprocessing the OCS sequence data comprises:
dividing the OCS sequence data obtained under different observation geometries into different subsets according to the observation geometry corresponding to the OCS sequence data acquired in each observation interval, and setting the label of each subset as a subclass of the corresponding class; the observation geometry is the positional relationship among the sun, the space target and the station.

5. The method of claim 1, wherein the loss function of the C-RNN model is:

L = MSE + loss

where MSE is the loss function for OCS sequence data reconstruction, and loss is the loss function for the shape and attitude classification process; back-propagation and gradient descent are used during C-RNN model training to minimize the total loss.

6. The method of claim 1, wherein the encoder includes two 1-D convolutional layers with rectified linear unit (ReLU) activation functions, the ReLU function being expressible as:
\mathrm{ReLU}(x) = \max(0, x)
where x is the input value; the 1-D convolution convolves the OCS sequence data with a one-dimensional convolution kernel, thereby extracting the feature vector of the OCS sequence data; the 1-D convolution is defined as:
C_j = \sum_{i=1}^{n_k} W_i\,x_{(j-1)k+i},\qquad j = 1,\dots,P
where x = (x_1, …, x_n) is the input OCS sequence data; n is the length of the sequence data; W and k denote the 1-D convolution kernel and the sliding step, respectively; C = (C_1, …, C_P) is the output vector after convolution, with P = n − n_k + 1, where n_k is the size of the convolution kernel;
each convolutional layer is followed by a dropout layer; the second convolutional layer is followed by a flatten layer, which converts multi-dimensional features into 1-D features; the final feature vector of the specified length is produced by passing the output of the flatten layer through two fully connected layers using the ReLU activation function.
7. The method of claim 1, wherein in the decoder two gated recurrent unit (GRU) networks are applied to complete the reconstruction of the OCS input signal;
the decoder takes the feature vector and the differences Δt_N between sampling times as input, where N is the number of sampling points; the feature vector is replicated l times, where l is the configured output sequence length of the decoder; the differences between sampling time points are likewise replicated l times; the feature vector characterizes the OCS sequence data, and the sampling times determine the position of each point in the reconstructed sequence.

8. The method of any one of claims 1 to 7, wherein in Step 4 the method of training the C-RNN model to obtain multiple feature vectors of the OCS sequence data comprises:
inputting OCS sequence data of length 200 into the C-RNN model; the C-RNN model is trained twice: the first time, a convolution kernel of size 5 is applied to the two CNN layers; the second time, a kernel of size 3; the number of iterations and the batch size are set to 2000 and 1000, respectively; after each training run, the feature vectors output by the C-RNN model are saved;
the two CNN layers in the model have 60 and 100 filters, respectively; the output of the encoder is a feature vector with an embedding size of 64;
the decoder uses two bidirectional GRU layers with a unit size of 100; the input of the decoder is a feature vector of length 65, comprising the encoder output of length 64 and the sampling-time difference of length 1 between OCS sequence data points;
the classifier processes the feature vector produced by the encoder and gives the classification result for each OCS sequence;
the Adam optimizer is used for network optimization with a learning rate of 1×10^-3; the dropout rate of each dropout layer is set to 0.25.

9. The method of claim 1, wherein in Step 5 the method of identifying the shape and attitude of the satellite comprises:
based on multi-kernel learning (MKL), using the feature vectors produced by C-RNN models with different convolution kernels as input data of a support vector machine (SVM), and combining the basic kernel functions by a multi-kernel linear combination method, the multi-kernel linear combination being describable as:
K(x,z) = \sum_{i=1}^{n}\beta_i\,\tilde{K}_i(x,z)
其中,x,z∈X,
Figure FDA0002835279420000032
为特征空间,
Figure FDA0002835279420000033
为第i个归一化的基本核函数,K(x,z)表示由n个基本核函数线性组合而成的最终核函数,βi表示第i个系数;基本核函数为多项式核函数,多项式内核可表示为:
Among them, x, z∈X,
Figure FDA0002835279420000032
is the feature space,
Figure FDA0002835279420000033
is the ith normalized basic kernel function, K(x, z) represents the final kernel function formed by linear combination of n basic kernel functions, β i represents the ith coefficient; the basic kernel function is a polynomial kernel function, The polynomial kernel can be expressed as:
K(x, z) = (x · z + R)^d

where x, z ∈ X, X is the feature space; R is a constant and d is the order of the polynomial; here, MKL serves as the multi-feature fusion method and the SVM as the classification model.
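The multi-kernel combination in claim 9 can be sketched in Python with NumPy and scikit-learn. This is an illustrative reconstruction, not the patent's implementation: the coefficients β_i are fixed to equal weights rather than learned, the feature arrays stand in for the 64-dimensional C-RNN encoder outputs, and the kernel normalization K_i(x,z)/√(K_i(x,x)K_i(z,z)) is one common choice for "normalized basic kernel".

```python
import numpy as np
from sklearn.svm import SVC

def poly_kernel(Xa, Xb, R=1.0, d=3):
    """Polynomial kernel K(x, z) = (x . z + R)^d, as in claim 9."""
    return (Xa @ Xb.T + R) ** d

def combined_kernel(feats_a, feats_b, betas, R=1.0, d=3):
    """Multi-kernel linear combination K = sum_i beta_i * K_i,
    where each K_i is a normalized polynomial kernel on one feature set."""
    K = np.zeros((feats_a[0].shape[0], feats_b[0].shape[0]))
    for Xa, Xb, beta in zip(feats_a, feats_b, betas):
        Ki = poly_kernel(Xa, Xb, R, d)
        # normalize: K_i(x, z) / sqrt(K_i(x, x) * K_i(z, z))
        da = np.sqrt(np.diag(poly_kernel(Xa, Xa, R, d)))
        db = np.sqrt(np.diag(poly_kernel(Xb, Xb, R, d)))
        K += beta * (Ki / np.outer(da, db))
    return K

# toy data standing in for the two 64-dim feature sets produced by the
# C-RNN trained with kernel sizes 5 and 3 (hypothetical values)
rng = np.random.default_rng(0)
feats_k5 = rng.normal(size=(40, 64))
feats_k3 = rng.normal(size=(40, 64))
labels = rng.integers(0, 3, size=40)   # hypothetical shape/attitude classes

betas = [0.5, 0.5]                     # illustrative fixed coefficients
K_train = combined_kernel([feats_k5, feats_k3], [feats_k5, feats_k3], betas)
svm = SVC(kernel="precomputed").fit(K_train, labels)
pred = svm.predict(K_train)
print(pred.shape)  # (40,)
```

At prediction time for unseen data, the same `combined_kernel` is evaluated between the test features and the training features, since a precomputed-kernel SVM expects the Gram matrix K(test, train).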
CN201910239623.7A 2019-03-27 2019-03-27 GEO satellite shape and attitude identification method based on deep learning and multi-core learning Active CN109993224B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910239623.7A CN109993224B (en) 2019-03-27 2019-03-27 GEO satellite shape and attitude identification method based on deep learning and multi-core learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910239623.7A CN109993224B (en) 2019-03-27 2019-03-27 GEO satellite shape and attitude identification method based on deep learning and multi-core learning

Publications (2)

Publication Number Publication Date
CN109993224A CN109993224A (en) 2019-07-09
CN109993224B true CN109993224B (en) 2021-02-02

Family

ID=67131770

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910239623.7A Active CN109993224B (en) 2019-03-27 2019-03-27 GEO satellite shape and attitude identification method based on deep learning and multi-core learning

Country Status (1)

Country Link
CN (1) CN109993224B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110490095A (en) * 2019-07-31 2019-11-22 中国人民解放军战略支援部队信息工程大学 A kind of multi-modal Fusion Features Modulation Identification method and system neural network based
CN110929242B (en) * 2019-11-20 2020-07-10 上海交通大学 Method and system for carrying out attitude-independent continuous user authentication based on wireless signals
CN111277564B (en) * 2020-01-08 2022-06-28 山东浪潮科学研究院有限公司 Enterprise network anomaly detection method and system based on dynamic storage network
CN111351488B (en) * 2020-03-03 2022-04-19 南京航空航天大学 Re-entry guidance method for aircraft intelligent trajectory reconstruction
CN111369142B (en) * 2020-03-04 2023-04-18 中国电子科技集团公司第五十四研究所 Autonomous remote sensing satellite task generation method
CN111400754B (en) * 2020-03-11 2021-10-01 支付宝(杭州)信息技术有限公司 Construction method and device of user classification system for protecting user privacy
CN111898652A (en) * 2020-07-10 2020-11-06 西北工业大学 A classification and recognition method of spatial target pose based on convolutional neural network
CN113326924B (en) * 2021-06-07 2022-06-14 太原理工大学 Photometric localization method of key targets in sparse images based on deep neural network
CN118409342B (en) * 2024-07-02 2024-09-27 上海卫星互联网研究院有限公司 Data compression method and satellite

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102800104A (en) * 2012-06-18 2012-11-28 西安空间无线电技术研究所 Two-dimensional scattering center automatic correlation method based on ISAR (inverse synthetic aperture radar) image sequence
CN103186776A (en) * 2013-04-03 2013-07-03 西安电子科技大学 Human detection method based on multiple features and depth information
CN107576949A (en) * 2017-08-23 2018-01-12 电子科技大学 SVDD radar target-range image recognition methods based on density weight and mixed kernel function

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101450716B (en) * 2008-12-26 2010-12-29 中国科学院国家天文台 Fault photo-detection method for earth synchronous transfer orbit satellite in orbit
CN104101297B (en) * 2014-07-22 2017-02-08 中国科学院国家天文台 Space object dimension acquisition method based on photoelectric observation
CN104570742B (en) * 2015-01-29 2017-02-22 哈尔滨工业大学 Feedforward PID (proportion, integration and differentiation) control based rapid high-precision relative pointing control method of noncoplanar rendezvous orbit
CN106022391A (en) * 2016-05-31 2016-10-12 哈尔滨工业大学深圳研究生院 Hyperspectral image characteristic parallel extraction and classification method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102800104A (en) * 2012-06-18 2012-11-28 西安空间无线电技术研究所 Two-dimensional scattering center automatic correlation method based on ISAR (inverse synthetic aperture radar) image sequence
CN103186776A (en) * 2013-04-03 2013-07-03 西安电子科技大学 Human detection method based on multiple features and depth information
CN107576949A (en) * 2017-08-23 2018-01-12 电子科技大学 SVDD radar target-range image recognition methods based on density weight and mixed kernel function

Also Published As

Publication number Publication date
CN109993224A (en) 2019-07-09

Similar Documents

Publication Publication Date Title
CN109993224B (en) GEO satellite shape and attitude identification method based on deep learning and multi-core learning
CN109583322B (en) Face recognition deep network training method and system
Xu et al. RPNet: A representation learning-based star identification algorithm
CN113392931A (en) Hyperspectral open set classification method based on self-supervision learning and multitask learning
CN105894050A (en) Multi-task learning based method for recognizing race and gender through human face image
CN109726671B (en) Action recognition methods and systems for learning from global to categorical feature representations
CN108960201A (en) A kind of expression recognition method extracted based on face key point and sparse expression is classified
CN111860446A (en) A system and method for detecting unknown patterns of satellite telemetry time series data
CN107977683A (en) Joint SAR target identification methods based on convolution feature extraction and machine learning
CN112859898A (en) Aircraft trajectory prediction method based on two-channel bidirectional neural network
CN114937182B (en) Image emotion distribution prediction method based on emotion wheel and convolutional neural network
CN114048468A (en) Intrusion detection method, intrusion detection model training method, device and medium
CN110533063A (en) A kind of cloud amount calculation method and device based on satellite image and GMDH neural network
CN111216126A (en) Multi-modal perception-based foot type robot motion behavior recognition method and system
CN112784487B (en) Flight action recognition method and device
CN117011274A (en) Automatic glass bottle detection system and method thereof
CN109239670B (en) Radar HRRP (high resolution ratio) identification method based on structure embedding and deep neural network
Liu et al. Auto-sharing parameters for transfer learning based on multi-objective optimization
Chugunkov et al. Creation of datasets from open sources
CN117076999A (en) Complex flight action small sample identification method and device based on double one-dimensional convolution attention mechanism
CN107392129A (en) Face retrieval method and system based on Softmax
Pearson et al. Auto-detection of strong gravitational lenses using convolutional neural networks
Ozaki et al. DNN-based self-attitude estimation by learning landscape information
Aldahoul et al. Space object recognition with stacking of CoAtNets using fusion of RGB and depth images
CN117171681B (en) Intelligent fault diagnosis method and device for UAV rudder surface under unbalanced small samples

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant