CN109657679B - Method for identifying the function type of an application satellite - Google Patents

Method for identifying the function type of an application satellite

Info

Publication number
CN109657679B
CN109657679B (application CN201811556442.9A)
Authority
CN
China
Prior art keywords
satellite
feature map
map set
convolution
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811556442.9A
Other languages
Chinese (zh)
Other versions
CN109657679A (en)
Inventor
庞羽佳
李志�
蒙波
黄龙飞
张志民
王尹
韩旭
黄剑斌
Current Assignee
China Academy of Space Technology CAST
Original Assignee
China Academy of Space Technology CAST
Priority date
Filing date
Publication date
Application filed by China Academy of Space Technology CAST
Priority to CN201811556442.9A
Publication of CN109657679A
Application granted
Publication of CN109657679B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/255 Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract


The invention provides a method for identifying the function type of an application satellite, comprising: acquiring a space image of a target satellite and adjusting its resolution to obtain a target image; and processing the target image with a ResNet neural network model to determine the function type of the target satellite. By adjusting the resolution of the acquired space image, the method obtains an image that the ResNet neural network model can recognize, and the model then identifies the satellite's function type autonomously on orbit, without relying on manual interpretation and judgment on the ground. This improves recognition efficiency and meets the need for real-time servicing and operation of non-cooperative space targets for which prior information is scarce.


Description

Method for Identifying the Function Type of an Application Satellite

Technical Field

The invention relates to a method for identifying the function type of an application satellite, and belongs to the technical field of satellite identification.

Background

Application satellites are artificial satellites that directly serve the national economy, military activities, and culture and education; among all kinds of artificial satellites, they are launched in the greatest numbers and come in the greatest variety. According to their basic operating characteristics, application satellites fall roughly into three categories: Earth observation, radio relay, and navigation and positioning reference. They play an important role in military and civilian fields such as communications, navigation, and remote sensing.

On-orbit servicing of application satellites can extend satellite lifetime and improve mission capability, and is currently a research hotspot in China and abroad. During on-orbit servicing, operations such as assisted orbit change, refueling, attitude control, satellite takeover, and fault repair can be performed on the serviced satellite as needed. On-orbit maintenance of a failed or faulty satellite first requires a safe approach to the serviced satellite. For a non-cooperative target, exterior surface features, key payloads, and motion state are difficult to obtain in advance, so during the approach the target's function type, motion state, and operating locations must be recognized accurately in order to determine the approach, docking, or control strategy and to avoid collision.

In existing practice, the type of a non-cooperative space target is recognized by having the servicing spacecraft acquire a space image of the target spacecraft and transmit it to a ground command and control center, where operators determine the target type from the image using methods such as edge detection and feature fitting; the confirmed type is then sent back to the servicing spacecraft. This approach has the following problem: the satellite-ground loop introduces a large delay, so it cannot meet the need for real-time servicing and operation of non-cooperative space targets for which prior information is scarce.

Summary of the Invention

In view of the problems in the prior art, the present invention provides a method for identifying the function type of an application satellite. The method autonomously processes and classifies visible-light images of space targets generated on orbit; classifying the function type of a single target takes less than 100 ms, and the accuracy can reach 90%.

The technical solution of the present invention is as follows:

A method for identifying the function type of an application satellite, comprising:

acquiring a space image of a target satellite, and adjusting the resolution of the acquired image to obtain a target image;

processing the target image with a ResNet neural network model to determine the function type of the target satellite.
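The two steps above can be sketched in Python; the nearest-neighbour resampling, the function names, and the callable-model interface are illustrative assumptions, not details taken from the patent (which only requires a target image of at least 256×256):

```python
import numpy as np

def adjust_resolution(image: np.ndarray, size: int = 256) -> np.ndarray:
    """Nearest-neighbour resize of a 2-D grayscale image to size x size.

    Stands in for the patent's resolution-adjustment step.
    """
    h, w = image.shape
    rows = (np.arange(size) * h // size).clip(0, h - 1)
    cols = (np.arange(size) * w // size).clip(0, w - 1)
    return image[rows[:, None], cols[None, :]]

def identify_function_type(image: np.ndarray, model) -> int:
    """Step 1: adjust resolution; step 2: run a (trained) ResNet-style model.

    `model` is assumed to map a 256x256 array to per-class scores.
    """
    target = adjust_resolution(image)
    scores = model(target)
    return int(np.argmax(scores))  # index of the predicted function type
```

A dummy model returning fixed scores is enough to exercise the pipeline end to end.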

In an optional embodiment, the ResNet neural network model comprises an initial convolution layer, three residual learning modules, and a fully connected layer, each residual learning module comprising two residual learning units. The initial convolution layer outputs a feature map set to the first residual learning module; the first residual learning module outputs a new feature map set to the second; the second outputs a new feature map set to the third; and the third residual learning module outputs a new feature map set to the fully connected layer, wherein:

the initial convolution layer is configured to:

perform one two-dimensional convolution on the target image to obtain a feature map set;

the first residual learning unit of each residual learning module is configured to:

perform one convolution operation on the feature map set input to the unit to obtain the unit's residual feature map set; perform, in sequence, a normalization operation, an activation operation, and one convolution operation on the feature map set input to the unit to obtain a once-convolved feature map set; perform, in sequence, a normalization operation, an activation operation, and one convolution operation on the once-convolved feature map set to obtain a twice-convolved feature map set; determine the unit's output feature map set from the unit's residual feature map set and its twice-convolved feature map set; and output the unit's output feature map set to the second residual learning unit of the residual learning module;

the second residual learning unit of each residual learning module is configured to:

perform, in sequence, a normalization operation, an activation operation, and one convolution operation on the feature map set input to the unit to obtain a once-convolved feature map set; perform, in sequence, a normalization operation, an activation operation, and one convolution operation on the once-convolved feature map set to obtain a twice-convolved feature map set; and determine the unit's output feature map set from the feature map set input to the unit and its twice-convolved feature map set;

the fully connected layer is configured to:

perform average pooling on the feature map set output by the third residual learning module to extract a feature vector for satellite type identification, perform a fully connected operation, determine from the feature vector the feature accumulation vector corresponding to each satellite type, and compute classification probabilities with a classifier, thereby determining the function type of the target satellite.
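The two kinds of residual learning unit described above (a convolution shortcut plus two normalization-activation-convolution branches in the first unit; an identity shortcut in the second) can be sketched in NumPy. The naive convolution, per-channel standardisation, and ReLU activation are illustrative assumptions standing in for the patent's "convolution", "normalization", and "activation" operations:

```python
import numpy as np

def conv2d(x, kernels):
    """'Same'-padded 2-D convolution. x: (C_in, H, W); kernels: (C_out, C_in, k, k)."""
    c_out, c_in, k, _ = kernels.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    _, h, w = x.shape
    y = np.zeros((c_out, h, w))
    for o in range(c_out):
        for i in range(h):
            for j in range(w):
                y[o, i, j] = np.sum(xp[:, i:i + k, j:j + k] * kernels[o])
    return y

def bn(x, eps=1e-5):
    """Per-channel standardisation (the 'normalization operation')."""
    mean = x.mean(axis=(1, 2), keepdims=True)
    std = x.std(axis=(1, 2), keepdims=True)
    return (x - mean) / (std + eps)

def relu(x):
    """The 'activation operation'."""
    return np.maximum(x, 0.0)

def first_residual_unit(x, w_short, w1, w2):
    """Convolution shortcut plus two BN-ReLU-conv branches, summed element-wise
    (the patent's 'adding corresponding feature values')."""
    shortcut = conv2d(x, w_short)   # residual feature map set
    y = conv2d(relu(bn(x)), w1)     # once-convolved feature map set
    y = conv2d(relu(bn(y)), w2)     # twice-convolved feature map set
    return shortcut + y

def second_residual_unit(x, w1, w2):
    """Identity shortcut plus two BN-ReLU-conv branches."""
    y = conv2d(relu(bn(x)), w1)
    y = conv2d(relu(bn(y)), w2)
    return x + y
```

This pre-activation ordering (normalize, activate, then convolve) matches the sequence stated in the embodiment.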

In an optional embodiment, the resolution of the target image is not lower than 256×256.

In an optional embodiment, the method further comprises:

establishing a satellite space image sample library containing a plurality of satellite types and an image sample set corresponding to each satellite type;

training and testing an initial ResNet neural network model on the satellite space image sample library to obtain the ResNet neural network model.
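Training and testing require splitting the sample library. A minimal stratified split is sketched below; the dictionary layout, the 80/20 split, and the function name are illustrative assumptions, as the patent does not specify how the library is partitioned:

```python
import random

def split_sample_library(library, test_fraction=0.2, seed=42):
    """Split a {satellite_type: [image, ...]} sample library into training and
    test sets, stratified per satellite type so every type appears in both."""
    rng = random.Random(seed)
    train, test = [], []
    for sat_type, images in library.items():
        images = list(images)
        rng.shuffle(images)
        n_test = max(1, int(len(images) * test_fraction))
        test += [(img, sat_type) for img in images[:n_test]]
        train += [(img, sat_type) for img in images[n_test:]]
    return train, test
```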

In an optional embodiment, establishing the satellite space image sample library comprises:

building three-dimensional models of different types of satellites; simulating the space environment and imaging the three-dimensional models to obtain a certain number of simulated space image samples; establishing the correspondence between satellite types and the simulated space image samples; and generating the satellite space image sample library.

In an optional embodiment, building the three-dimensional models of different types of satellites comprises:

building structural three-dimensional models of different types of satellites according to the structural characteristics of each satellite type;

rendering the structural three-dimensional models according to the surface texture information of each satellite type to obtain the three-dimensional models of the different satellite types.

In an optional embodiment, simulating the space environment comprises:

modeling the light source as parallel light, setting the atmospheric molecular density to 0 to 0.01 times the ground-level atmospheric molecular density and the illumination intensity index to 2 to 3 times normal ground-level illumination intensity, and generating the incident direction of the light source randomly within the 4π solid angle around the three-dimensional model.
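Generating a random incidence direction over the full 4π solid angle means sampling a unit vector uniformly on the sphere. One standard way to do this (an illustrative choice; the patent does not prescribe a sampling method) is to draw a uniform azimuth and a uniform cosine of the polar angle:

```python
import math
import random

def random_light_direction(rng=random):
    """Unit vector distributed uniformly over the full 4*pi solid angle,
    standing in for the randomly generated light-source incidence direction."""
    phi = rng.uniform(0.0, 2.0 * math.pi)   # azimuth, uniform
    cos_theta = rng.uniform(-1.0, 1.0)      # uniform in cos(theta) => uniform on sphere
    sin_theta = math.sqrt(1.0 - cos_theta * cos_theta)
    return (sin_theta * math.cos(phi), sin_theta * math.sin(phi), cos_theta)
```

Sampling the polar angle itself uniformly would over-represent the poles; sampling its cosine uniformly is what yields a uniform distribution on the sphere.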

In an optional embodiment, imaging the three-dimensional models comprises:

imaging each three-dimensional model from random directions outside a 60° cone about the normal of the model's sky-facing surface, the angle between the light source beam direction and the camera imaging axis being kept below 45° during imaging.

In an optional embodiment, the angle between the light source beam direction and the camera imaging axis is determined according to the following formula:

cos α = (x1·x2 + y1·y2 + z1·z2) / (R1·R2)

where α < 45° is the angle between the light source beam direction and the camera imaging axis;

R1 is the distance from the light source to the origin of the satellite's three-dimensional model, R2 is the distance from the camera to that origin, (x1, y1, z1) are the coordinates of the light source, and (x2, y2, z2) are the coordinates of the camera.
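Given the definitions of R1, R2, and the light-source and camera coordinates above, the angle is that between the two position vectors measured from the model's origin, i.e. cos α = (x1·x2 + y1·y2 + z1·z2)/(R1·R2). A small sketch of that computation (the function name and degree output are assumptions):

```python
import math

def source_camera_angle(light, camera):
    """Angle (degrees) between the light-source and camera position vectors,
    both measured from the origin of the satellite's 3-D model."""
    x1, y1, z1 = light
    x2, y2, z2 = camera
    r1 = math.sqrt(x1 * x1 + y1 * y1 + z1 * z1)   # R1
    r2 = math.sqrt(x2 * x2 + y2 * y2 + z2 * z2)   # R2
    cos_a = (x1 * x2 + y1 * y2 + z1 * z2) / (r1 * r2)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
```

A configuration with the light at (1, 0, 0) and the camera at (1, 1, 0) sits exactly on the 45° limit, so it would be accepted only marginally by the α < 45° constraint.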

In an optional embodiment, after obtaining the certain number of simulated space image samples, the method further comprises:

performing data augmentation on each simulated space image sample to obtain an expanded simulated space image sample set;

correspondingly, establishing the correspondence between satellite types and the simulated space images comprises:

establishing the correspondence between the satellite types and the simulated space samples in the expanded simulated space image sample set.
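The patent states only that augmentation expands the sample count while the type labels carry over. A sketch using flips and 90° rotations (an illustrative augmentation set, not one specified by the patent):

```python
import numpy as np

def augment(sample: np.ndarray):
    """Expand one simulated image into several variants via flips and
    90-degree rotations."""
    variants = [sample, np.fliplr(sample), np.flipud(sample)]
    variants += [np.rot90(sample, k) for k in (1, 2, 3)]
    return variants

def expand_library(samples_by_type):
    """Apply `augment` to every sample while keeping the satellite-type label
    attached, mirroring the 'expanded simulated space image sample set'."""
    return {t: [v for s in imgs for v in augment(s)]
            for t, imgs in samples_by_type.items()}
```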

In an optional embodiment, training and testing the initial ResNet neural network model on the satellite space image sample library comprises:

converting each sample in the satellite space image sample library from a three-channel color image to a one-channel grayscale image, and then training and testing the initial ResNet neural network model.
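The three-channel-to-one-channel conversion can be sketched as a weighted channel sum. The BT.601 luma weights are an assumption; the patent does not specify which conversion is used:

```python
import numpy as np

def to_grayscale(rgb: np.ndarray) -> np.ndarray:
    """Collapse a three-channel (H, W, 3) colour image to one channel using
    the common ITU-R BT.601 luma weights (an assumed choice)."""
    weights = np.array([0.299, 0.587, 0.114])
    return rgb @ weights  # (H, W)
```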

Compared with the prior art, the present invention has the following beneficial effects:

(1) By adjusting the resolution of the target satellite space image, the method obtains an image that the ResNet neural network model can recognize, and the model identifies the function type of the application satellite autonomously on orbit, without relying on manual interpretation and judgment on the ground. This improves recognition efficiency and meets the need for real-time servicing and operation of non-cooperative space targets for which prior information is scarce.

(2) When autonomously identifying the function type of an application satellite on orbit, the invention does not require the traditional chain of image processing algorithms such as image segmentation, recognition, and classification. It is simple to operate, fast, and accurate, and can increase the speed of on-orbit autonomous identification of application satellite function types.

(3) The invention enables a spacecraft to autonomously process, classify, and identify visible-light images of space targets generated on orbit. It can handle both three-channel color images and one-channel grayscale images, and adapts well to image data acquired under different lighting conditions and shooting angles.

(4) The invention simulates the visible-light imaging environment under space vacuum conditions and the reflection characteristics of satellite surface materials, generating image samples with realistic visible-light reflection characteristics under different illumination and viewing angles. This greatly enriches the deep learning sample library, provides ample training and testing material for the neural network used for space target classification, and improves the reliability of the resulting model.

(5) The invention enables a spacecraft to autonomously process, classify, and identify visible-light images of space targets generated on orbit. Classifying a target's function type takes less than 100 ms and the accuracy can reach 90%, which greatly improves the intelligent cognition of space targets and enhances the spacecraft's on-orbit autonomy.

Brief Description of the Drawings

Fig. 1 is a flowchart of a method for identifying the function type of an application satellite according to an embodiment of the present invention;

Fig. 2 is a schematic diagram of imaging the three-dimensional model in a simulated space environment according to an embodiment of the present invention.

Detailed Description

Specific embodiments of the present invention are described in further detail below with reference to the accompanying drawings.

An embodiment of the present invention provides a method for identifying the function type of an application satellite, comprising the following steps:

Step 101: acquire a space image of a target satellite, and adjust the resolution of the acquired image to obtain a target image;

Step 102: process the target image with a ResNet neural network model to determine the function type of the target satellite.

By adjusting the resolution of the target satellite space image, the method provided by the present invention obtains an image that the ResNet neural network model can recognize; the model then identifies the function type of the application satellite autonomously on orbit, without relying on manual interpretation and judgment on the ground. This improves recognition efficiency and meets the need for real-time servicing and operation of non-cooperative space targets for which prior information is scarce.

Table 1. Structural parameters of the ResNet neural network model

Layer             Composition                                   Kernels   Kernel size   Stride
Conv0             one 2-D convolution                           16        3×3           1
ConV1_x           2 residual learning units                     16        3×3           1
ConV2_x           2 residual learning units                     32        3×3           2
ConV3_x           2 residual learning units                     64        3×3           2
Full connection   average pooling + fully connected + softmax   -         -             -

Here Conv0 is the initial convolution layer; ConV1_x, ConV2_x, and ConV3_x are the first, second, and third residual learning modules; and Full connection is the fully connected layer.

As shown in Table 1, in an optional embodiment, the ResNet neural network model comprises an initial convolution layer, three residual learning modules, and a fully connected layer, each residual learning module comprising two residual learning units. The initial convolution layer outputs a feature map set to the first residual learning module; the first residual learning module outputs a new feature map set to the second; the second outputs a new feature map set to the third; and the third residual learning module outputs a new feature map set to the fully connected layer, wherein:

the initial convolution layer is configured to:

perform one two-dimensional convolution on the target image to obtain a first feature map set, preferably with 16 convolution kernels of size 3×3 and a stride of 1;

the first residual learning unit of the first residual learning module is configured to:

perform a convolution operation on the first feature map set to obtain a second feature map set, preferably with 16 kernels of size 1×1 and a stride of 1; perform, in sequence, a normalization operation, an activation operation, and one convolution operation on the first feature map set to obtain a third feature map set, preferably with 16 kernels of size 3×3 and a stride of 1; perform, in sequence, a normalization operation, an activation operation, and one convolution operation on the third feature map set to obtain a fourth feature map set, preferably with 16 kernels of size 3×3 and a stride of 1; and determine a fifth feature map set from the second and fourth feature map sets, specifically by adding the corresponding feature values of each image in the second and fourth feature map sets to obtain the images of the fifth feature map set;

the second residual learning unit of the first residual learning module is configured to:

perform, in sequence, a normalization operation, an activation operation, and one convolution operation on the fifth feature map set to obtain a sixth feature map set, preferably with 16 kernels of size 3×3 and a stride of 1; perform, in sequence, a normalization operation, an activation operation, and one convolution operation on the sixth feature map set to obtain a seventh feature map set, preferably with 16 kernels of size 3×3 and a stride of 1; and determine an eighth feature map set from the fifth and seventh feature map sets;

the first residual learning unit of the second residual learning module is configured to:

perform a convolution operation on the eighth feature map set to obtain a ninth feature map set;

perform, in sequence, a normalization operation, an activation operation, and one convolution operation on the eighth feature map set to obtain a tenth feature map set; perform, in sequence, a normalization operation, an activation operation, and one convolution operation on the tenth feature map set to obtain an eleventh feature map set; and determine a twelfth feature map set from the eleventh and ninth feature map sets;

the second residual learning unit of the second residual learning module is configured to:

perform, in sequence, a normalization operation, an activation operation, and one convolution operation on the twelfth feature map set to obtain a thirteenth feature map set; perform the same sequence of operations on the thirteenth feature map set to obtain a fourteenth feature map set; and determine a fifteenth feature map set from the fourteenth and twelfth feature map sets;

the first residual learning unit of the third residual learning module is configured to:

perform a convolution operation on the fifteenth feature map set to obtain a sixteenth feature map set;

perform, in sequence, a normalization operation, an activation operation, and one convolution operation on the fifteenth feature map set to obtain a seventeenth feature map set; perform the same sequence of operations on the seventeenth feature map set to obtain an eighteenth feature map set; and determine a nineteenth feature map set from the eighteenth and sixteenth feature map sets;

the second residual learning unit of the third residual learning module is configured to:

perform, in sequence, a normalization operation, an activation operation, and one convolution operation on the nineteenth feature map set to obtain a twentieth feature map set; perform the same sequence of operations on the twentieth feature map set to obtain a twenty-first feature map set; and determine a twenty-second feature map set from the twenty-first and nineteenth feature map sets.

In this embodiment of the present invention, the convolution operations in the second residual learning module preferably use 32 kernels with a stride of 2, and those in the third residual learning module preferably use 64 kernels with a stride of 2.

the fully connected layer is configured to:

perform average pooling on the twenty-second feature map set to extract a feature vector for satellite type identification, perform a fully connected operation, determine from the feature vector the feature accumulation vector corresponding to each satellite type, and compute classification probabilities with a classifier, thereby determining the function type of the target satellite.

Specifically, the fully connected layer determines the feature accumulation vector corresponding to each satellite type according to the following formula, and the category is determined with a softmax multi-class classifier:

a_t = Σ_{i=1}^{N} W_{t,i} · x_i + b_t,   t = 1, 2, …, T

where a1, a2, …, aT are the feature accumulation vectors output by the fully connected layer for the satellite types, W is the feature weight matrix, x is the feature vector input to the fully connected layer, b is the bias parameter of the fully connected layer, T is the number of target categories, and N is the number of feature vectors input to the fully connected layer.
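With W, x, b, T, and N defined as above, the fully connected operation followed by the softmax classifier can be sketched as below; treating each a_t as a scalar logit is an assumption consistent with the matrix-vector form of the description:

```python
import numpy as np

def fully_connected_softmax(x, W, b):
    """Compute the per-type accumulation values a_t = sum_i W[t, i] * x[i] + b[t]
    (W is T x N, x has N entries, b has T entries), then turn them into class
    probabilities with a numerically stable softmax. Returns (a, probabilities)."""
    a = W @ x + b              # one accumulation value per satellite type
    e = np.exp(a - a.max())    # subtract max for numerical stability
    return a, e / e.sum()
```

The predicted function type is simply the index of the largest probability.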

全连接层完成了分布式特征表示到样本标记空间的转换,是进行目标分类的关键一步;全连接层将在之前卷积和池化层得到的局部特征向量通过权值矩阵进行综合,可较大保留模型的表示能力,有利于后续模型微调和迁移学习的进行。The fully connected layer completes the transformation from distributed feature representation to the sample label space, which is a key step for target classification; the fully connected layer integrates the local feature vectors obtained in the previous convolution and pooling layers through the weight matrix, which can be compared The large retention of the representation ability of the model is beneficial to the subsequent model fine-tuning and transfer learning.

本发明的ResNet深度神经网络的模型基于深度学习理论,可模拟人类大脑的神经连接结构,在处理图像信号时,通过多个变换阶段分层对数据特征进行描述,进而给出数据的解释。其图像数据处理流程符合灵长类动物视觉系统认知规律,即首先检测边缘、初始形状,然后再逐步形成更复杂的视觉形状。该应用卫星功能类型认知神经网络模型可通过组合低层特征形成更加抽象的高层表示、属性类别或特征,最终给出图像数据的分层特征表示。The model of the ResNet deep neural network of the present invention is based on the deep learning theory, which can simulate the neural connection structure of the human brain. Its image data processing flow conforms to the cognitive law of primate visual system, that is, it first detects edges and initial shapes, and then gradually forms more complex visual shapes. The applied satellite function type cognitive neural network model can form a more abstract high-level representation, attribute category or feature by combining low-level features, and finally give a hierarchical feature representation of image data.

By performing layer-by-layer feature transformations on the original signal, the applied satellite function type cognitive neural network model transforms the feature representation of a sample in its original space into a new feature space and automatically learns a hierarchical feature representation, which is more conducive to classification or feature visualization. The model is hierarchical, with many parameters and sufficiently large capacity, so it can represent the data features well. For difficult image recognition problems, good results can be achieved given a large amount of training data.

The front end of the convolutional neural network used in this model applies several convolution kernels to extract image information, fully accounting for the translation, rotation and scaling invariance of image targets in space. The convolution kernels have identical structure and share weights, so the network maintains a large front-end scale while having fewer adjustable parameters, greatly reducing the computational load and the burden of parameter optimization. The model keeps the computation relatively low while the scale of the input image signal remains unchanged. Compared with traditional, manually designed image preprocessing filters and convolutions, its front-end processing is performance-optimized and its automatic feature extraction is specific to the image content, so it outperforms manually designed preprocessing.

Compared with traditional image classification algorithms, this model uses relatively little preprocessing and does not rely on prior knowledge. It largely avoids the hand-crafted feature design problem of traditional image classification algorithms; by performing automatic feature extraction with learned filters, it can quickly and accurately classify and identify unknown targets.

The ResNet neural network structure used in this model allows the original input information to be retained in the feature extraction results, effectively preserving the integrity of the information and eliminating the phenomenon of training-set error increasing as the network deepens. It can greatly accelerate the training of very deep neural networks, substantially improves model accuracy, and has good portability.

In an optional embodiment, the resolution of the target image is not lower than 256×256.

In an optional embodiment, the application satellite function type identification method further includes:

establishing a satellite space image sample library, where the sample library contains a plurality of satellite types and an image sample set corresponding to each satellite type;

training and testing an initial ResNet neural network model based on the satellite space image sample library to obtain the ResNet neural network model.

In an optional embodiment, establishing the satellite space image sample library includes:

establishing three-dimensional models of different types of satellites, simulating the space environment, imaging the established three-dimensional models to obtain a certain number of simulated space image samples, establishing a correspondence between satellite types and the simulated space image samples, and generating the satellite space image sample library.

In an optional embodiment, establishing three-dimensional models of different types of satellites includes:

establishing structural three-dimensional models of different types of satellites according to the structural characteristics of each satellite type;

rendering the structural three-dimensional models according to the surface texture information of each type of satellite to obtain three-dimensional models of the different satellite types.

In an optional embodiment, simulating the space environment includes:

the simulated light source is parallel light, the atmospheric molecular density is 0–0.01 times the ground-level atmospheric molecular density, the illumination intensity index is 2–3 times the ordinary ground-level illumination intensity, and the incident direction of the light source is generated randomly within the 4π space around the three-dimensional model.

In an optional embodiment, imaging the established three-dimensional model includes:

imaging the established three-dimensional model from random viewpoints, excluding the 60° cone about the normal of the model's sky-facing surface; during imaging, the angle between the light-source beam direction and the camera imaging axis is less than 45°.

In an optional embodiment, the angle between the light-source beam direction and the camera imaging axis is determined according to the following formula:

α = arccos[ (x1·x2 + y1·y2 + z1·z2) / (R1·R2) ]

where α < 45° is the angle between the light-source beam direction and the camera imaging axis;

R1 is the distance from the light source to the origin of the satellite's three-dimensional model, R2 is the distance from the camera to the origin of the satellite's three-dimensional model, x1, y1 and z1 are the x-, y- and z-axis coordinates of the light source, and x2, y2 and z2 are the x-, y- and z-axis coordinates of the camera.
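This angle formula can be checked with a short function; a minimal sketch assuming the light-source and camera positions are given as Cartesian coordinates relative to the satellite model origin (function and argument names are illustrative):

```python
import math

def source_camera_angle(p_light, p_camera):
    """Angle (degrees) between the light-source direction and the camera
    imaging axis, both seen from the satellite 3D-model origin:
    cos(alpha) = (x1*x2 + y1*y2 + z1*z2) / (R1 * R2)."""
    x1, y1, z1 = p_light
    x2, y2, z2 = p_camera
    r1 = math.sqrt(x1*x1 + y1*y1 + z1*z1)  # R1: light source to origin
    r2 = math.sqrt(x2*x2 + y2*y2 + z2*z2)  # R2: camera to origin
    cos_a = (x1*x2 + y1*y2 + z1*z2) / (r1 * r2)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))

# A position pair is a valid imaging configuration when alpha < 45 degrees.
alpha = source_camera_angle((1.0, 0.0, 0.0), (1.0, 1.0, 0.0))
```

Here `alpha` evaluates to 45°, exactly the boundary of the validity condition.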

In an optional embodiment, after obtaining a certain number of simulated space image samples, the method further includes:

performing data augmentation on each simulated space image sample to obtain a simulated space image sample set with an expanded number of samples;

correspondingly, establishing the correspondence between the satellite types and the simulated space images includes:

establishing the correspondence between the satellite types and the simulated space samples in the expanded simulated space image sample set.

In an optional embodiment, training and testing the initial ResNet neural network model based on the satellite space image sample library includes:

converting each sample in the satellite space image sample library from a three-channel color image into a single-channel grayscale image, and then training and testing the initial ResNet neural network model.

The following is a specific embodiment of the present invention:

(1) Establishing the satellite space image sample library:

Collect satellite pictures from various public sources, and select pictures of the three satellite classes that show the complete satellite outline and at least partially exhibit the exterior features characteristic of each class of application satellite (for example, the camera payload of an Earth-observation satellite, or the communication antenna of a radio-relay satellite);

First, geometric proportion data such as the satellite outline and height are estimated from the satellite exterior information shown in each satellite picture, and, according to the estimated proportions, a three-dimensional white model of the satellite (without surface texture data) is constructed with the origin of the three-dimensional coordinate system at the centroid of the satellite body;

According to the satellite-surface features shown in each satellite picture, equipment and components such as solar panels, camera lenses, sensors, thrusters, the satellite-rocket docking ring, TT&C antennas and data transmission antennas are added to the three-dimensional white model in proportion, yielding the structural three-dimensional model of the satellite (without surface texture data);

The surface texture information of the three satellite classes is determined from the actual reflective properties of the satellite-surface materials. In this embodiment, the visible-light reflective properties of the materials on the main surface areas match those of the real materials; the main areas include the front of the solar panel (covered with solar cells), the back of the solar panel, aluminized thermal-control multilayer insulation, gold-plated thermal-control multilayer insulation, second-surface mirrors and white paint, and the remaining parts of the satellite surface. The structural three-dimensional model of the satellite is then rendered according to this surface texture information to obtain three-dimensional models of the three satellite classes;

Referring to Figure 2, a sphere of radius R is constructed centered on the coordinate origin of the satellite's three-dimensional model, and the sphere is divided by N meridians (lines of longitude) and N parallels (lines of latitude) into (N+1)×(N+1) points. Each point has X coordinate x(i,j), Y coordinate y(i,j) and Z coordinate z(i,j), where i and j are the parallel and meridian indices, respectively, each ranging from 1 to N. Let the distance between the parallel light source simulating sunlight and the origin of the satellite model be R1; the parallel light source then lies on the sphere of radius R1, with randomly chosen parallel index i = a and meridian index j = b, giving light-source coordinates (x1(a,b), y1(a,b), z1(a,b));

Similarly, let the distance between the camera imaging the satellite model and the model origin be R2; the imaging camera then lies on the sphere of radius R2, with randomly chosen parallel index i = c and meridian index j = d, giving camera coordinates (x2(c,d), y2(c,d), z2(c,d)).

Since most payloads of application satellites point mainly toward the Earth, the viewing direction for each simulated satellite optical image is generated randomly while excluding the 60° cone about the normal of the satellite's sky-facing surface. This ensures that the simulated imaging captures the key payloads and characteristic features of the satellite surface as far as possible, avoids monotonous imaging of the satellite's top surface, and increases the diversity and richness of the satellite samples. Accordingly, parallels within the 60° cone are avoided when choosing the parallel index i, i.e. i > (60°/180°) × N.
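The grid construction and the latitude-exclusion rule above can be sketched as follows. This is a hypothetical illustration: the patent does not prescribe a particular latitude/longitude-to-Cartesian convention, so a standard spherical parameterization with the zenith along +Z is assumed, and the function names are ours:

```python
import math
import random

def grid_point(radius, i, j, n):
    """Cartesian coordinates of grid point (i, j) on a sphere of the given
    radius, with n parallels (index i) and n meridians (index j), i, j in
    1..n; the +Z axis is taken as the sky-facing (zenith) direction."""
    theta = math.pi * i / n        # polar angle measured from +Z
    phi = 2.0 * math.pi * j / n    # azimuth
    return (radius * math.sin(theta) * math.cos(phi),
            radius * math.sin(theta) * math.sin(phi),
            radius * math.cos(theta))

def random_viewpoint(radius, n, rng=random):
    """Pick a random (i, j) while excluding the 60-degree cone about the
    zenith: the parallel index must satisfy i > (60/180) * n."""
    i_min = int((60.0 / 180.0) * n) + 1
    i = rng.randint(i_min, n)
    j = rng.randint(1, n)
    return grid_point(radius, i, j, n)

x, y, z = random_viewpoint(7.0, 36)   # e.g. a camera position on the R2 sphere
```

Every point returned lies on the sphere and at least 60° away from the zenith axis.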

Furthermore, because the space vacuum environment has no air-scattering effects, the imaging contrast between the illuminated and shadowed parts of the satellite surface is very large. When the angle between the visible-light camera's imaging direction and the parallel light source is too large, most of the visible surface is in shadow; the imaging quality is then poor, the satellite-surface characteristics and exterior features are not properly represented, and the image cannot serve as a valid sample for neural network training. The angle between the light-source beam direction and the camera imaging axis should therefore be less than 45°. The angle α between the light source and the camera is calculated as follows:

α = arccos[ (x1·x2 + y1·y2 + z1·z2) / (R1·R2) ]

with α < 45°.

When the light-source and camera positions meet the above requirements, their position coordinates are entered into the three-dimensional modeling software (3D MAX), and the three-dimensional model is rendered into an image. Multiple sets of camera and light-source positions can be configured as needed to enrich sample diversity, increase the number of samples, and strengthen the neural network's learning ability.
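Putting the position sampling and the 45° constraint together, the light-source/camera selection can be sketched as a simple rejection loop. This is illustrative only (the actual 3D MAX rendering step is not reproduced, and the spherical parameterization is an assumption):

```python
import math
import random

def sample_light_and_camera(r1, r2, n, max_tries=10000, rng=random):
    """Draw random light-source and camera grid positions on spheres of
    radius r1 and r2 until the angle between them, seen from the model
    origin, is below 45 degrees."""
    def point(radius):
        theta = math.pi * rng.randint(1, n) / n      # polar angle
        phi = 2.0 * math.pi * rng.randint(1, n) / n  # azimuth
        return (radius * math.sin(theta) * math.cos(phi),
                radius * math.sin(theta) * math.sin(phi),
                radius * math.cos(theta))

    for _ in range(max_tries):
        light, cam = point(r1), point(r2)
        dot = sum(a * b for a, b in zip(light, cam))
        alpha = math.degrees(math.acos(max(-1.0, min(1.0, dot / (r1 * r2)))))
        if alpha < 45.0:          # accept a valid imaging configuration
            return light, cam, alpha
    raise RuntimeError("no valid light/camera configuration found")

light, cam, alpha = sample_light_and_camera(10.0, 7.0, 36)
```

Each accepted pair of coordinates could then be handed to the renderer.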

In the modeling software, the imaging size is set to 1920×1080 and the rendering is performed. The total number of samples is no fewer than 5000, evenly covering the three classes of application satellites.

The number of samples of each satellite class is counted and expanded proportionally, mainly through image-processing operations such as rotation, translation and flipping. The rotation uses a random angle between 0 and 360 degrees, and the translation distance does not exceed 1/32 of the image length or width, to avoid shifting the satellite body out of the image;
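A NumPy-only sketch of the flip and translation parts of this augmentation is below (arbitrary-angle rotation needs an image library and is omitted); the 1/32 bound on the shift is taken from the text, everything else is an illustrative assumption:

```python
import numpy as np

def translate(img, dy, dx):
    """Shift an (H, W) image by (dy, dx) pixels with zero padding;
    content moved out of frame is dropped, not wrapped around."""
    h, w = img.shape
    padded = np.pad(img, ((abs(dy), abs(dy)), (abs(dx), abs(dx))))
    y0, x0 = abs(dy) - dy, abs(dx) - dx
    return padded[y0:y0 + h, x0:x0 + w]

def augment(img, rng):
    """Random horizontal flip plus a random shift of at most 1/32 of the
    image height/width, as in the sample-expansion step."""
    if rng.random() < 0.5:
        img = np.fliplr(img)
    h, w = img.shape
    dy = int(rng.integers(-(h // 32), h // 32 + 1))  # |dy| <= H/32
    dx = int(rng.integers(-(w // 32), w // 32 + 1))  # |dx| <= W/32
    return translate(img, dy, dx)

rng = np.random.default_rng(0)
sample = rng.random((64, 64))
out = augment(sample, rng)
```

Running `augment` repeatedly on one source image yields the expanded sample set.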

A space image sample library for the three satellite classes is generated from the expanded samples, with roughly equal numbers of samples per class. One third of the pictures of each class are selected at random as test samples, and the remaining pictures serve as training samples.
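The random one-third test split per class can be sketched as (function name and seed are illustrative):

```python
import random

def split_samples(samples, seed=0):
    """Shuffle one satellite class's samples and split them into a test
    subset (1/3) and a training subset (the remaining 2/3)."""
    rng = random.Random(seed)
    shuffled = samples[:]          # leave the caller's list untouched
    rng.shuffle(shuffled)
    n_test = len(shuffled) // 3
    return shuffled[:n_test], shuffled[n_test:]

test_set, train_set = split_samples(list(range(900)))
```

Applied once per satellite class, this produces the disjoint test and training sample sets.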

(2) Deep convolutional neural network learning and training are carried out on the TensorFlow platform. The training and test sample sets are converted into TFRecord-format files accepted by TensorFlow (file suffix ".tfrecords"). During file generation, all sample images are resized to 360×640, and all three-channel color image data are converted into single-channel grayscale image data;
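The three-channel-to-grayscale conversion can be sketched with the usual luma weights; the patent does not specify the coefficients, so the common BT.601 weights are assumed here:

```python
import numpy as np

def to_grayscale(rgb):
    """Collapse an (H, W, 3) color image to a single-channel (H, W)
    image using BT.601 luma weights (an assumed choice)."""
    weights = np.array([0.299, 0.587, 0.114])
    return rgb @ weights

img = np.ones((360, 640, 3))   # stand-in for one resized sample image
gray = to_grayscale(img)       # single-channel result, shape (360, 640)
```

Each converted image would then be serialized into the ".tfrecords" file.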

The batch size of the deep convolutional network is set to Batch_size = 10; one pass over all training samples constitutes one epoch, and training runs for 250 epochs in total. After each epoch, an evaluation is performed using the test samples. The initial learning rate learning_rate is 0.1, adjusted to 0.01 after 100 epochs, to 0.001 after 150 epochs, and to 0.0001 after 200 epochs. The number of recognition categories is 3;
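The staged learning-rate schedule above reads directly as a piecewise function:

```python
def learning_rate(epoch):
    """Schedule from the embodiment: 0.1 initially, 0.01 from epoch 100,
    0.001 from epoch 150, 0.0001 from epoch 200 (250 epochs total)."""
    if epoch < 100:
        return 0.1
    if epoch < 150:
        return 0.01
    if epoch < 200:
        return 0.001
    return 0.0001
```

Such a function would be queried at the start of each epoch to set the optimizer's rate.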

A deep convolutional neural network model, shown in Table 1, is built on the typical ResNet residual network model of deep learning. The network has 14 layers in total, consisting of 1 convolutional layer, 3 residual learning modules (blocks) and 1 fully connected layer; each residual learning module contains two residual learning units (bottlenecks), each unit adopts a 2-layer structure, and each layer performs one convolution operation. The number of filters is 16 in the first block, 32 in the second block, and 64 in the third block. The neural network model is saved during training for subsequent testing and evaluation. The network uses the ReLU activation function, produces its output through the fully connected layer after average pooling, makes predictions with a softmax multi-class classifier, and is optimized with the Momentum algorithm;
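As a quick consistency check, the 14-layer count stated above follows from the structure (1 initial convolution + 3 blocks × 2 units × 2 convolutions + 1 fully connected layer); a trivial sketch:

```python
def weighted_layer_count(n_blocks=3, units_per_block=2, convs_per_unit=2):
    """Layer inventory of the Table 1 network: one initial convolution,
    the convolutions inside the residual blocks, and one FC layer."""
    return 1 + n_blocks * units_per_block * convs_per_unit + 1

filters_per_block = [16, 32, 64]   # blocks 1-3, as specified above
layers = weighted_layer_count()    # 1 + 3*2*2 + 1 = 14
```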

The space image sample library generated in step (1) and the deep convolutional neural network model shown in Table 1 are used to train and test the autonomous identification of application satellite function types. The batch size batch_size during training is 10: 10 pictures are randomly selected from the training sample library each time and fed into the neural network, until all samples have been processed, which constitutes one epoch. The autonomous identification training for the three satellite function types runs for 250 epochs in total. A test is performed every 10 epochs, in which all test samples are fed into the neural network for identification, and the identification accuracy is obtained by comparing the classification results against the labels. The initial learning rate of neural network training is set to 0.1.

After neural network training is complete, the trained network model is used for autonomous identification tests of application satellite function types, and the test accuracy is tallied. These tests fall into two categories: tests on unlabeled samples, and tests on labeled samples (the label being the application satellite type).

In an unlabeled-sample test, i.e. when no prior information on the sample type is available, the trained neural network model identifies each sample image and outputs the recognition probabilities for the three application satellite classes; whether the identification is correct is judged manually, and the identification accuracy over all pictures is tallied manually.

In a labeled-sample test, i.e. when the satellite class of each test sample is known in advance, the trained neural network model identifies each sample image, the satellite class with the highest probability is automatically compared against the sample label, and the identification is judged correct or incorrect. Testing a large number of labeled samples yields statistical results for the identification accuracy.
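The labeled-sample evaluation (take the most probable class per sample and compare it with the label) can be sketched as; the toy probabilities below are illustrative:

```python
import numpy as np

def accuracy(probabilities, labels):
    """probabilities: (n_samples, n_classes) network outputs;
    labels: (n_samples,) ground-truth class indices.
    Picks the most probable satellite class per sample and returns the
    fraction of samples whose prediction matches the label."""
    predictions = probabilities.argmax(axis=1)
    return float((predictions == np.asarray(labels)).mean())

probs = np.array([[0.8, 0.1, 0.1],
                  [0.2, 0.7, 0.1],
                  [0.3, 0.3, 0.4]])
acc = accuracy(probs, [0, 1, 0])   # third sample is misclassified -> 2/3
```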

When the identification accuracy reaches 90%, the trained model is taken as the final model. In on-orbit testing, a space image of the target satellite is acquired, its resolution is adjusted to obtain the target image, and the final model processes the target image to determine the function type of the target satellite.

The above is only the preferred specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto; any change or substitution readily conceivable by a person skilled in the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention.

The parts of the present invention not described in detail belong to the common knowledge of those skilled in the art.

Claims (5)

1. An application satellite function type identification method is characterized by comprising the following steps:
acquiring a target satellite space image, and adjusting the resolution of the acquired target satellite space image to obtain a target image;
performing data processing on the target image based on a ResNet neural network model, and determining a function type corresponding to the target satellite;
the deep convolutional neural network model adopts a ResNet residual error network structure and comprises an initial convolutional layer, three residual error learning modules and a full-connection layer, wherein each residual error learning module comprises two residual error learning units, and each residual error learning unit comprises two convolutional operations;
the initial convolutional layer for:
performing one-time two-dimensional convolution on the target image to obtain a first feature map set, wherein the number of convolution kernels is 16, the size of the convolution kernels is 3 multiplied by 3, and the step length of convolution operation is 1;
a first residual learning unit of the first residual learning module, configured to:
performing convolution operation on the first feature map set to obtain a second feature map set, wherein the number of convolution kernels is 16, the size of the convolution kernels is 1 multiplied by 1, and the step length of the convolution operation is 1; sequentially carrying out standardization operation, activation operation and one convolution operation on the first feature map set to obtain a third feature map set, wherein the number of convolution kernels is 16, the size of the convolution kernels is 3 multiplied by 3, and the step length of the convolution operation is 1; sequentially carrying out standardization operation, activation operation and one convolution operation on the third feature map set to obtain a fourth feature map set, wherein the number of convolution kernels is 16, the size of the convolution kernels is 3 multiplied by 3, and the step length of the convolution operation is 1; adding the corresponding characteristic values of the images in the second characteristic image set and the fourth characteristic image set to obtain an image in a fifth characteristic image set;
a second residual learning unit of the first residual learning module, configured to:
sequentially carrying out standardization operation, activation operation and one convolution operation on the fifth feature map set to obtain a sixth feature map set, wherein the number of convolution kernels is 16, the size of the convolution kernels is 3 multiplied by 3, and the step length of the convolution operation is 1; sequentially carrying out standardization operation, activation operation and one convolution operation on the sixth feature map set to obtain a seventh feature map set, wherein the number of convolution kernels is 16, the size of the convolution kernels is 3 multiplied by 3, and the step length of the convolution operation is 1; determining an eighth feature map set according to the fifth feature map set and the seventh feature map set;
a first residual learning unit of the second residual learning module, configured to:
performing convolution operation on the eighth feature map set to obtain a ninth feature map set;
carrying out standardization operation, activation operation and one convolution operation on the eighth feature map set in sequence to obtain a tenth feature map set, carrying out standardization operation, activation operation and one convolution operation on the tenth feature map set in sequence to obtain an eleventh feature map set, and determining a twelfth feature map set according to the eleventh feature map set and the ninth feature map set;
a second residual learning unit of the second residual learning module, configured to:
sequentially carrying out standardization operation, activation operation and one convolution operation on the twelfth feature atlas to obtain a thirteenth feature atlas, and sequentially carrying out standardization operation, activation operation and one convolution operation on the thirteenth feature atlas to obtain a fourteenth feature atlas; determining a fifteenth feature atlas according to the fourteenth feature atlas and the twelfth feature atlas;
a first residual learning unit of the third residual learning module, configured to:
performing convolution operation on the fifteenth feature map set to obtain a sixteenth feature map set;
sequentially carrying out standardization operation, activation operation and one convolution operation on the fifteenth feature atlas to obtain a seventeenth feature atlas, and sequentially carrying out standardization operation, activation operation and one convolution operation on the seventeenth feature atlas to obtain an eighteenth feature atlas; determining a nineteenth feature map set according to the eighteenth feature map set and the fifteenth feature map set;
a second residual learning unit of the third residual learning module, configured to:
sequentially carrying out standardization operation, activation operation and one convolution operation on the nineteenth feature map set to obtain a twentieth feature map set, and sequentially carrying out standardization operation, activation operation and one convolution operation on the twentieth feature map set to obtain a twenty-first feature map set; determining a twenty-second feature map set according to the twenty-first feature map set and the nineteenth feature map set;
the number of convolution kernels for convolution operation in the second residual learning module is 32, and the convolution step length is 2; the convolution kernel number of convolution operation in the third residual learning module is 64, and the convolution step length is 2;
the full connection layer is used for:
performing average pooling on the twenty-second feature map set, extracting feature vectors for satellite type identification, performing full-connection operation, determining feature accumulation vectors corresponding to each satellite type according to the feature vectors, and performing classification probability statistics through a classifier so as to determine the function type corresponding to the target satellite;
the number of the images processed in batches by the deep convolutional network is 10, the total training period is 250, the initial learning rate is 0.1, the initial learning rate is adjusted to 0.01 after 100 periods, the initial learning rate is adjusted to 0.001 after 150 periods, and the initial learning rate is adjusted to 0.0001 after 200 periods;
establishing a satellite space image sample library, wherein the sample library comprises a plurality of satellite types and image sample sets corresponding to the satellite types;
training and testing an initial ResNet neural network model based on the satellite space image sample library to obtain a ResNet neural network model;
the establishing of the satellite space image sample library comprises the following steps:
establishing three-dimensional models of different types of satellites, simulating a space environment, imaging the established three-dimensional models to obtain a certain number of simulated space image samples, establishing a corresponding relation between the types of the satellites and the simulated space image samples, and generating a satellite space image sample library;
the establishment of the three-dimensional models of different types of satellites comprises the following steps:
carrying out satellite contour and height measurement and calculation according to satellite external information displayed by each satellite picture, and constructing a satellite three-dimensional white mold by taking the origin of a three-dimensional coordinate system as the centroid of a satellite body according to the measurement and calculation proportion;
adding a solar panel, a camera lens, a sensor, a thruster, a satellite-rocket docking ring, a measurement and control antenna and a data transmission antenna to the satellite three-dimensional white model in proportion according to the satellite-surface features displayed in each satellite picture, to obtain structural three-dimensional models of different types of application satellites;
determining surface texture information of a satellite according to the reflection characteristics of the actual materials of the satellite surface, wherein the visible light reflection characteristics of the materials of the main parts of the satellite surface meet the requirements of the reflection characteristics of real materials, and the parts comprise the front surface of a solar panel, the back surface of the solar panel, an aluminum-plated thermal control multilayer, a gold-plated thermal control multilayer, a secondary surface mirror, white paint and other parts of the satellite surface;
rendering the structural three-dimensional model according to the surface texture information of each type of satellite to obtain three-dimensional models of different types of satellites;
the simulated spatial environment comprises:
the simulated light source is parallel light, the atmospheric molecular density is 0–0.01 times the ground-level atmospheric molecular density, the illumination intensity index is 2–3 times the ordinary ground-level illumination intensity, and the incident direction of the light source is randomly generated in the 4π space around the three-dimensional model;
in setting the light-source and camera positions, a sphere of radius R is constructed centered on the coordinate origin of the satellite's three-dimensional model, and the sphere is divided by N meridians and N parallels into (N+1)×(N+1) points, so that each point has X coordinate x(i,j), Y coordinate y(i,j) and Z coordinate z(i,j), where i and j are the parallel and meridian numbers, respectively, each ranging from 1 to N; if the distance between the parallel light source simulating sunlight and the origin of the three-dimensional satellite model is R1, a parallel number i = a and a meridian number j = b are selected at random on the sphere of radius R1, and the light-source coordinates are (x1(a,b), y1(a,b), z1(a,b)); similarly, if the distance between the camera imaging the three-dimensional satellite model and the model origin is R2, the imaging camera randomly selects parallel number i = c and meridian number j = d on the sphere of radius R2, and the camera coordinates are (x2(c,d), y2(c,d), z2(c,d)); the viewing direction of each satellite optical image is generated at random while excluding the 60° cone about the normal of the satellite's sky-facing surface; the parallel number satisfies i > (60°/180°) × N.
2. The method for identifying the application satellite function type according to claim 1, wherein the resolution of the target image is 360 × 640.
3. The method for identifying the application satellite function type according to claim 1, wherein the included angle between the beam direction of the light source and the imaging axis of the camera is determined according to the following formula:
α = arccos( (x1·x2 + y1·y2 + z1·z2) / (R1·R2) )
wherein α < 45° is the included angle between the beam direction of the light source and the imaging axis of the camera;
R1 is the distance from the light source to the origin of the three-dimensional satellite model, R2 is the distance from the camera to that origin; x1, y1 and z1 are the x, y and z coordinates of the light source, and x2, y2 and z2 are the x, y and z coordinates of the camera.
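The formula in claim 3 is the dot-product angle between the two position vectors (both beam direction and imaging axis point at the model origin, so the angle between the position vectors equals the angle between the axes). A minimal sketch, with a hypothetical helper name:

```python
import math

def source_camera_angle(p_light, p_cam):
    """Angle (degrees) between the light-source beam direction and the
    camera imaging axis, both aimed at the model origin. Assumes the
    claimed formula is cos(alpha) = (x1*x2 + y1*y2 + z1*z2) / (R1*R2)."""
    x1, y1, z1 = p_light
    x2, y2, z2 = p_cam
    r1 = math.sqrt(x1 * x1 + y1 * y1 + z1 * z1)
    r2 = math.sqrt(x2 * x2 + y2 * y2 + z2 * z2)
    cos_a = (x1 * x2 + y1 * y2 + z1 * z2) / (r1 * r2)
    # Clamp against floating-point drift before acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
```

Positions whose angle comes out at 45° or more would be re-drawn, per the α < 45° constraint in the claim.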
4. The method for identifying the application satellite function type according to claim 1, wherein, after obtaining a certain number of simulated space image samples, the method further comprises:
performing data augmentation on each simulated space image sample to obtain a simulated space image sample set with an expanded number of samples;
correspondingly, the establishing of the correspondence between the satellite type and the simulated space image comprises:
establishing a correspondence between the satellite type and the simulated space image samples in the expanded simulated space image sample set.
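The augmentation step in claim 4 can be sketched as below. The patent only states that data augmentation expands the sample count; the specific transforms (flips and a brightness shift) and the nested-list image representation are assumptions for illustration.

```python
import random

def augment(image):
    """Expand one simulated image (a 2-D list of gray levels, 0..255)
    into several variants: the original, a horizontal flip, a vertical
    flip, and a random brightness shift clamped to the valid range."""
    h_flip = [row[::-1] for row in image]
    v_flip = image[::-1]
    shift = random.randint(-20, 20)
    bright = [[min(255, max(0, px + shift)) for px in row] for row in image]
    return [image, h_flip, v_flip, bright]
```

Each augmented variant keeps the label of its source sample, so the satellite-type correspondence of the expanded set follows directly from the original set.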
5. The method for identifying the application satellite function type according to claim 1, wherein the training and testing of the initial ResNet neural network model based on the satellite space image sample library comprises:
converting each sample in the satellite space image sample library from a three-channel color image into a single-channel grayscale image, and then training and testing the initial ResNet neural network model.
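The color-to-grayscale conversion in claim 5 might look like the sketch below. The patent does not specify the conversion weights; the ITU-R BT.601 luma coefficients used here are a common choice and an assumption on our part.

```python
def to_grayscale(rgb_image):
    """Convert a three-channel color image (nested lists of (r, g, b)
    tuples) to a single-channel grayscale image using ITU-R BT.601
    luma weights: 0.299 R + 0.587 G + 0.114 B."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb_image]
```

Collapsing to one channel before training reduces the ResNet input size and removes color cues, which is plausible here since satellite structure, not color, drives the classification.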
CN201811556442.9A 2018-12-19 2018-12-19 A kind of identification method of application satellite function type Active CN109657679B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811556442.9A CN109657679B (en) 2018-12-19 2018-12-19 A kind of identification method of application satellite function type


Publications (2)

Publication Number Publication Date
CN109657679A CN109657679A (en) 2019-04-19
CN109657679B true CN109657679B (en) 2020-11-20

Family

ID=66114842

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811556442.9A Active CN109657679B (en) 2018-12-19 2018-12-19 A kind of identification method of application satellite function type

Country Status (1)

Country Link
CN (1) CN109657679B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111127360B (en) * 2019-12-20 2023-08-29 东南大学 An Autoencoder-Based Transfer Learning Method for Grayscale Images
CN112093082B (en) * 2020-09-25 2022-03-18 中国空间技术研究院 On-orbit capture and guidance method and device for high-orbit satellite capture mechanism

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101788817A (en) * 2010-01-29 2010-07-28 航天东方红卫星有限公司 Fault recognition and processing method based on satellite-borne bus
US20150094056A1 (en) * 2013-10-01 2015-04-02 Electronics And Telecommunications Research Institute Satellite communication system and method for adaptive channel assignment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101814107A (en) * 2010-05-06 2010-08-25 哈尔滨工业大学 Satellite dynamics simulation system and method based on satellite dynamics model library



Similar Documents

Publication Publication Date Title
CN108596101B (en) A multi-target detection method for remote sensing images based on convolutional neural network
CN112200764B (en) A method for detecting and locating hot spots in photovoltaic power plants based on thermal infrared images
CN114529817B (en) Photovoltaic fault diagnosis and positioning method for unmanned aerial vehicles based on attention neural network
CN114254696B (en) Visible light, infrared and radar fusion target detection method based on deep learning
CN110189304B (en) On-line fast detection method of optical remote sensing image target based on artificial intelligence
CN109740665A (en) Shielded image ship object detection method and system based on expertise constraint
CN111461006B (en) Optical remote sensing image tower position detection method based on deep migration learning
CN106295503A (en) The high-resolution remote sensing image Ship Target extracting method of region convolutional neural networks
CN109509156A (en) A kind of image defogging processing method based on generation confrontation model
CN111536970B (en) Infrared inertial integrated navigation method for low-visibility large-scale scene
CN112966659A (en) Video image small target detection method based on deep learning
CN110532865A (en) Spacecraft structure recognition methods based on visible light and laser fusion
CN109657679B (en) A kind of identification method of application satellite function type
CN110866472A (en) A UAV ground moving target recognition and image enhancement system and method
Yin et al. An enhanced lightweight convolutional neural network for ship detection in maritime surveillance system
Yang et al. Remote sensing image aircraft target detection based on GIoU-YOLO v3
Cao et al. Detection method based on image enhancement and an improved faster R-CNN for failed satellite components
CN114491694B (en) Space target data set construction method based on illusion engine
Oestreich et al. On-orbit relative pose initialization via convolutional neural networks
CN115661251A (en) Imaging simulation-based space target identification sample generation system and method
CN115292287A (en) A method for automatic labeling and database construction of satellite feature component images
CN119006591A (en) Multi-scale space target relative pose estimation method and system based on deep learning under complex environment
Du et al. Structural components recognition method for malfunctioned satellite based on domain randomization
Jaisawal et al. Airfisheye dataset: A multi-model fisheye dataset for uav applications
CN113326924B (en) Photometric localization method of key targets in sparse images based on deep neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant