CN111680552B - Feature part intelligent recognition method - Google Patents

Feature part intelligent recognition method

Info

Publication number
CN111680552B
Authority
CN
China
Prior art keywords
spacecraft
image
dimensional geometric
images
gray level
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010350572.8A
Other languages
Chinese (zh)
Other versions
CN111680552A (en)
Inventor
汤亮
袁利
关新
王有懿
姚宁
宗红
冯骁
张科备
郝仁剑
郭子熙
刘昊
龚立纲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Control Engineering
Original Assignee
Beijing Institute of Control Engineering
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Control Engineering filed Critical Beijing Institute of Control Engineering
Priority to CN202010350572.8A priority Critical patent/CN111680552B/en
Publication of CN111680552A publication Critical patent/CN111680552A/en
Application granted granted Critical
Publication of CN111680552B publication Critical patent/CN111680552B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4023Scaling of whole images or parts thereof, e.g. expanding or contracting based on decimating pixels or lines of pixels; based on inserting pixels or lines of pixels
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Geometry (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an intelligent identification method for characteristic parts, applicable to the field of identifying local typical parts of failed satellites in space. Traditional identification of typical target parts based on analytical algorithms suffers from problems such as large edge-point identification errors. The present invention therefore designs an intelligent identification method for local typical feature parts based on a convolutional neural network. First, for the task of identifying local typical parts of failed satellites, a database of local typical satellite parts containing rich information is created, the components of the typical parts are annotated, and a training data set and a test data set are constructed. A deep convolutional network is then built and its parameters are trained on the training data set; once training is complete, the network can intelligently identify the typical parts in an input image.

Description

An intelligent identification method for characteristic parts

Technical Field

The invention belongs to the field of space target feature recognition, and specifically relates to an intelligent recognition method for characteristic parts.

Background Art

The protection of and on-orbit servicing for important space targets, mainly satellites, has become an important development direction for aerospace technology worldwide, and the identification of satellite characteristic components (such as antennas, solar panels, and engines) is a key link in it. With the growing maturity of spacecraft proximity operations and optical imaging technology, high-resolution optical imaging of space targets from space-based platforms has become possible, which in turn places higher demands on satellite target recognition, especially the accurate identification of satellite characteristic components.

Space target recognition mainly uses space target characteristic data to effectively judge and identify attributes such as identity, attitude, and status. At present, target characteristic data come mainly from ground-based detection, including optical and radar equipment. However, the detection data of ground-based equipment depend on many factors such as observation angle, target characteristics, solar angle, and the atmosphere, which makes the detection results highly uncertain. Although space-based target detection and recognition has also been studied at home and abroad, most work focuses on point-target detection at long range and on recognizing on-orbit motion states; the recognition methods used rely on known feature points available online and impose strong constraints on those points, so once the known feature points change, the existing methods can hardly identify the characteristic parts accurately. In recent years, with the development of deep learning, the performance of image target recognition has improved greatly, bringing new methods and means to the field of space target recognition.

Summary of the Invention

The technical problem solved by this invention is to overcome the over-reliance of existing recognition technology on known feature points. An intelligent identification method for characteristic parts is proposed, which can intelligently identify typical target parts and tactical features in complex space environments, providing technical support for in-depth recognition and understanding of on-orbit scenes.

The technical solution adopted by the present invention is an intelligent identification method for characteristic parts, with the following steps:

(1) Based on the attitude rotation method, construct a spacecraft image database in space; the database stores three-dimensional geometric images of the spacecraft under different combinations of three-axis attitude angles;

(2) For each three-dimensional geometric image in the spacecraft image database, compute a weighted average of the pixel values of the red (R), green (G), and blue (B) color channels to obtain the grayscale image of that three-dimensional geometric image; on the basis of the grayscale image, add measurement noise to obtain a noisy grayscale image;

(3) Use the cubic interpolation method to shrink the grayscale image of each three-dimensional geometric image in the spacecraft image database, obtaining the corresponding interpolated image;

(4) Rotate the grayscale image of each three-dimensional geometric image in the spacecraft image database from step (2) to obtain the rotated grayscale image;

(5) For the grayscale images and noisy grayscale images of the three-dimensional geometric images from step (2), the interpolated images from step (3), and the rotated grayscale images from step (4), annotate the contours of the spacecraft's characteristic parts; then assign a name to each annotated contour to obtain the labels of the characteristic parts, completing the annotation of the target spacecraft's characteristic parts;

(6) Construct a convolutional neural network and train it with the majority of the contour-annotated images in the spacecraft image database to obtain the trained convolutional neural network; input the remaining contour-annotated images into the trained network to identify the spacecraft's characteristic parts, and output the recognition results, realizing intelligent identification of the spacecraft's characteristic parts.

Preferably, step (1), constructing the spacecraft image database based on the attitude rotation method and storing three-dimensional geometric images of the spacecraft under different combinations of three-axis attitude angles, is specifically:

(1) Based on the attitude rotation method, construct the spacecraft image database in space: set the three-axis attitude angles of the target spacecraft as the roll angle φ, pitch angle θ, and yaw angle ψ. Each attitude angle ranges over [0°, 360°] and is sampled every N°, forming multiple combinations of the spacecraft's three-axis attitude angles; these combinations are imported into simulation modeling software to acquire three-dimensional geometric images of the spacecraft under the different attitude combinations, yielding the spacecraft image database.

Preferably, step (2), computing for each three-dimensional geometric image in the spacecraft image database a weighted average of the pixel values of the red (R), green (G), and blue (B) color channels to obtain the grayscale image, is specifically:

Gray(i,j) = 0.299×R(i,j) + 0.587×G(i,j) + 0.114×B(i,j)

where i and j are the row and column coordinates of the grayscale image, with i ≥ 1 and j ≥ 1; R(i,j), G(i,j), and B(i,j) are the pixel values of the red (R), green (G), and blue (B) channels at row i, column j; and Gray(i,j) is the resulting grayscale value at row i, column j (written Gray to distinguish it from the green channel G).
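
As a minimal sketch of this grayscale conversion (assuming 8-bit RGB images held as NumPy arrays; rgb_to_gray is a hypothetical helper, and the green-channel weight follows the standard BT.601 value 0.587):

    import numpy as np

    def rgb_to_gray(rgb: np.ndarray) -> np.ndarray:
        """Weighted average of the R, G, B channels (BT.601 weights)."""
        r = rgb[..., 0].astype(np.float64)
        g = rgb[..., 1].astype(np.float64)
        b = rgb[..., 2].astype(np.float64)
        gray = 0.299 * r + 0.587 * g + 0.114 * b
        return np.clip(gray, 0, 255).astype(np.uint8)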

Preferably, step (3), using the cubic interpolation method to shrink the grayscale image of each three-dimensional geometric image in the spacecraft image database and obtain the corresponding interpolated image, is specifically:

f(i+u, j+v) = ABC

where f(i+u, j+v) is the pixel value at row i, column j of the interpolated image; u and v are the interpolation intervals; and A, B, and C are coefficient matrices of the following form:

A = [S(1+u) S(u) S(1-u) S(2-u)]

C = [S(1+v) S(v) S(1-v) S(2-v)]^T

where S is the interpolation kernel function, specifically:
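
One classical choice consistent with the A and C vectors above, assumed here since the specification does not restate it, is the cubic convolution kernel, with B taken as the 4×4 matrix of grayscale values at the 16 pixels around the interpolation point, B = [Gray(i+m, j+n)], m, n ∈ {-1, 0, 1, 2}:

S(x) = 1 - 2|x|² + |x|³, for 0 ≤ |x| < 1
S(x) = 4 - 8|x| + 5|x|² - |x|³, for 1 ≤ |x| < 2
S(x) = 0, for |x| ≥ 2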

Through the above preferred interpolation scheme and parameter requirements, the effect of image target recognition is greatly improved.
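
A minimal sketch of this shrinking step, assuming OpenCV as a stand-in for the hand-rolled kernel above (shrink_cubic is a hypothetical helper):

    import cv2

    def shrink_cubic(gray, scale=0.5):
        """Shrink a grayscale image with bicubic (cubic convolution) interpolation."""
        return cv2.resize(gray, None, fx=scale, fy=scale,
                          interpolation=cv2.INTER_CUBIC)

OpenCV's INTER_CUBIC implements the same cubic convolution family; for strong downscaling, INTER_AREA is often preferred in practice.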

Preferably, step (4), rotating the grayscale image of each three-dimensional geometric image in the spacecraft image database from step (2) to obtain the rotated grayscale image, is specifically:

Let the coordinates of a pixel in the original image (i.e., the grayscale image of the three-dimensional geometric image) be (i, j), and its coordinates in the rotated grayscale image be (i1, j1); then the matrix expression of the rotation transformation is:
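
The transformation is assumed here to be the standard center-offset rotation, written element-wise with the quantities a, b, c, d, and θ defined below:

i1 = (i - a)·cosθ - (j - b)·sinθ + c
j1 = (i - a)·sinθ + (j - b)·cosθ + d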

The inverse transformation is:
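
Under the same assumption:

i = (i1 - c)·cosθ + (j1 - d)·sinθ + a
j = -(i1 - c)·sinθ + (j1 - d)·cosθ + b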

where θ is the angle of rotation about the center point of the original image (i.e., the grayscale image of the three-dimensional geometric image), measured in the plane of the original image, when going from the original image to the rotated grayscale image;

a and b are the coordinates of the rotation center before the image is rotated, and c and d are the coordinates of the center point after the image is rotated, given by:
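
a = w0/2, b = h0/2, c = w1/2, d = h1/2

Here the rotation center is assumed to be the image center, consistent with the widths and lengths defined next.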

where w0 is the width of the original image (before rotation) and h0 is its length; w1 is the width of the rotated grayscale image and h1 is its length. To faithfully simulate the size (length and width) constraints of optical camera imaging, the size of the rotated spacecraft's image in the optical camera changes; therefore, the length h1 and width w1 of the rotated image differ from the length h0 and width w0 of the original image.

Through the above preferred rotation scheme and parameter requirements, the effect of image target recognition is greatly improved.
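
A minimal sketch of this rotation step, assuming OpenCV (rotate_gray is a hypothetical helper; the output canvas is recomputed so that the rotated spacecraft stays in frame, mirroring the change from w0, h0 to w1, h1 described above):

    import cv2

    def rotate_gray(gray, theta_deg):
        """Rotate a grayscale image about its center, expanding the canvas."""
        h0, w0 = gray.shape[:2]
        a, b = w0 / 2.0, h0 / 2.0                  # center before rotation
        M = cv2.getRotationMatrix2D((a, b), theta_deg, 1.0)
        cos_t, sin_t = abs(M[0, 0]), abs(M[0, 1])
        w1 = int(h0 * sin_t + w0 * cos_t)          # bounding size after rotation
        h1 = int(h0 * cos_t + w0 * sin_t)
        M[0, 2] += w1 / 2.0 - a                    # translate to the new center (c, d)
        M[1, 2] += h1 / 2.0 - b
        return cv2.warpAffine(gray, M, (w1, h1))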

Preferably, step (1), constructing the spacecraft image database based on the attitude rotation method and storing three-dimensional geometric images of the spacecraft under different combinations of three-axis attitude angles, is specifically:

The target spacecraft comprise M kinds of spacecraft in space. The three-axis attitude angle combinations of the multiple groups of target spacecraft are imported into STK software, in which three-dimensional geometric images of the M kinds of spacecraft under the different attitude combinations are acquired and stored in the spacecraft image database.

Preferably, to simulate the noise produced when the spacecraft is photographed with real imaging equipment, noise is added after the grayscale image is obtained in step (2), and the result is taken as the grayscale image finally obtained in step (2). On the basis of the grayscale image, measurement noise is added to obtain the noisy grayscale image; specifically, Gaussian noise is added for blurring, which further improves the recognition accuracy.

Adding Gaussian noise for blurring is specifically:

On the basis of the grayscale image of the three-dimensional geometric image, Gaussian noise is added for blurring to obtain the noisy grayscale image:

G_noise(i,j) = Gray(i,j) + G_GB(i,j)

where i and j are the row and column coordinates of the grayscale image, G_noise(i,j) is the pixel value at row i, column j of the noisy grayscale image, σ is the standard deviation of the Gaussian noise's normal distribution, and r is the blur radius of the noisy grayscale image.
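
A minimal sketch of this noising step, assuming G_GB combines zero-mean additive Gaussian noise of standard deviation σ with a Gaussian blur of radius r (the exact composition is an assumption; add_gaussian_noise is a hypothetical helper):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def add_gaussian_noise(gray, sigma=8.0, r=1.5, rng=None):
        """Additive zero-mean Gaussian noise followed by a Gaussian blur."""
        rng = rng or np.random.default_rng()
        noisy = gray.astype(np.float64) + rng.normal(0.0, sigma, gray.shape)
        noisy = gaussian_filter(noisy, sigma=r)    # blur controlled by the radius r
        return np.clip(noisy, 0, 255).astype(np.uint8)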

After the noise is added, a noisy grayscale image of the spacecraft is obtained, which more realistically simulates imaging of the spacecraft in the space environment, provides accurate images for neural network training and characteristic part identification, and improves the robustness and accuracy of characteristic part identification.

Preferably, training the constructed convolutional neural network is specifically: the convolutional neural network receives image data as input and outputs labels of the image's characteristic parts as predicted labels; the predicted labels and the characteristic part labels from step (5) are fed together into a chosen label loss function, which computes the training error; the neural network parameters are adjusted dynamically according to the training error so that the error converges, and once the error converges to the set requirement, the trained convolutional neural network is obtained.
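
A minimal sketch of such a loss-driven training loop, assuming PyTorch with a user-supplied model, loss function, and data loader (the specification fixes neither the framework nor the optimizer):

    import torch

    def train(model, loss_fn, loader, epochs=10, lr=1e-3, tol=0.16):
        """Adjust the network parameters until the label loss converges to tol."""
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):
            for images, labels in loader:
                opt.zero_grad()
                loss = loss_fn(model(images), labels)  # predicted vs. annotated labels
                loss.backward()                        # training error drives the update
                opt.step()
            if loss.item() <= tol:                     # error converged to the set requirement
                break
        return model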

Preferably, in the different three-axis attitude angle combinations of step (1), the range of each attitude angle is [0°, 360°], sampled every N°, where N is a divisor of 360; in total, (360/N)³ combinations of the target spacecraft's three-axis attitude angles are formed.
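
A minimal sketch of enumerating these (360/N)³ combinations (attitude_combinations is a hypothetical helper; N must divide 360):

    from itertools import product

    def attitude_combinations(n_deg):
        """All (roll, pitch, yaw) triples sampled every n_deg over [0, 360)."""
        angles = range(0, 360, n_deg)
        return list(product(angles, angles, angles))  # (360/n_deg)**3 triples

    combos = attitude_combinations(90)                # 4**3 = 64 combinations for N = 90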

Preferably, the three-dimensional geometric images of the spacecraft under different three-axis attitude angle combinations stored in the spacecraft image database of step (1) are specifically:

The three-axis attitude angle combinations of the multiple groups of target spacecraft are imported into simulation modeling software, preferably the Satellite Tool Kit (STK) software, which can build three-dimensional models of the spacecraft and thereby acquire three-dimensional geometric images of the spacecraft under the different attitude combinations.

Compared with the prior art, the beneficial effects of the present invention are:

(1) The present invention proposes an intelligent recognition method for characteristic parts that fuses feature extraction and feature recognition, avoiding the manual design of features; it is more general and greatly improves the efficiency and precision of recognizing the characteristic components of space targets.

(2) The present invention is based on satellite image data from multiple sources, including two-dimensional images obtained by projecting satellite three-dimensional models; considering the actual operating environment of satellites in orbit, the satellite images are rendered in simulation, which not only yields more realistic satellite image data but also increases the amount of data, benefiting the training of the convolutional network.

(3) The present invention proposes an intelligent identification method for characteristic parts that does not depend on known on-orbit feature points; it retains strong identification ability for spacecraft without feature points, offers greater generality and robustness, and greatly improves the efficiency and precision of identifying spacecraft characteristic parts such as solar panels.

Brief Description of the Drawings

Figure 1 is a schematic diagram of the method of the present invention;

Figure 2 is a three-dimensional spacecraft model simulated by the method of the present invention;

Figure 3 is a plot of the loss function during training of the method of the present invention;

Figure 4 shows the recognition results obtained by the method of the present invention.

Detailed Description of the Embodiments

The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.

The invention is an intelligent identification method for characteristic parts, applicable to the field of identifying local typical parts of failed satellites in space. Traditional identification of typical target parts based on analytical algorithms suffers from problems such as large edge-point identification errors, so the present invention designs an intelligent identification method for local typical feature parts based on a convolutional neural network. First, for the task of identifying local typical parts of failed satellites, a database of local typical satellite parts containing rich information is created, the components of the typical parts are annotated, and training and test data sets are constructed. A deep convolutional network is then built and trained on the training data set; after training, the network can intelligently identify the typical parts in an input image.

Tasks such as capturing failed spacecraft in space and rendezvous and docking between spacecraft require accurate acquisition of information such as the spacecraft's relative attitude, and identifying the spacecraft's characteristic parts is an important technical means for estimating such information. Traditional characteristic part identification methods based on analytical algorithms suffer from large edge-point identification errors and over-reliance on manually marked known feature points. To address this, the present invention designs an intelligent identification method for local typical feature parts based on a convolutional neural network, which removes the over-reliance on manually marked known feature points and achieves identification of local typical characteristic parts of spacecraft (such as solar wings) in complex space environments with low light, jitter, and image blur.

As shown in Figure 1, the intelligent identification method for characteristic parts of the present invention comprises the following steps:

(1) Construct the spacecraft image database in space based on the attitude rotation method: set the three-axis attitude angles of the target spacecraft as the roll angle φ, pitch angle θ, and yaw angle ψ. Preferably, each attitude angle ranges over [0°, 360°] and is sampled every N° (N being a divisor of 360, preferably 90°), forming (360/N)³ combinations of the target spacecraft's three-axis attitude angles; these combinations are imported into simulation modeling software to acquire three-dimensional geometric images of the spacecraft under the different attitude combinations, yielding the spacecraft image database. A further preferred scheme is:

(1-1) Set the spacecraft's three-axis attitude angles, roll angle φ, pitch angle θ, and yaw angle ψ, within [0°, 360°], sampling every N° and traversing all combinations of roll, pitch, and yaw, forming (360/N)³ combinations in total.

(1-2) In the simulation modeling software Satellite Tool Kit (STK), select the i-th kind of spacecraft, set its three-axis attitude to each of the traversed attitude combinations obtained in (1-1), and capture a geometric picture of the spacecraft under each attitude combination.

(1-3) In STK, select the (i+1)-th kind of spacecraft, set its three-axis attitude to each of the traversed attitude combinations obtained in (1-1), and capture a geometric picture of the spacecraft under each attitude combination. If STK contains M kinds of spacecraft in total, M×(360/N)³ three-dimensional geometric images can be obtained, yielding the spacecraft image database. This database covers multiple attitudes and multiple characteristic parts, providing a rich database for accurately identifying spacecraft characteristic parts and basic data for improving the recognition effect.

(2) For each three-dimensional geometric image in the spacecraft image database, compute a weighted average of the pixel values of the red (R), green (G), and blue (B) color channels to obtain the grayscale image of that three-dimensional geometric image (marked as subset 1) and the noisy grayscale image (marked as subset 2). A preferred scheme is:

(2-1) As shown in Figure 2, for each three-dimensional geometric image in the spacecraft image database, a weighted average of the pixel values of the red (R), green (G), and blue (B) color channels yields the grayscale image of that three-dimensional geometric image; the preferred formula is:

Gray(i,j) = 0.299×R(i,j) + 0.587×G(i,j) + 0.114×B(i,j)

where i and j are the row and column coordinates of the image, with i ≥ 1 and j ≥ 1; R(i,j), G(i,j), and B(i,j) are the pixel values of the red (R), green (G), and blue (B) channels at row i, column j; and Gray(i,j) is the grayscale value at row i, column j. With this combination of coefficients in the formula, the resulting grayscale image further improves the recognition effect.

Through the grayscale processing of step (2-1), M×(360/N)³ grayscale images of the spacecraft's three-dimensional geometric images can be obtained, i.e., all the images in subset 1.

(2-2) Considering influencing factors such as noise and jitter blur when imaging a spacecraft in the space environment, Gaussian noise is added for blurring on the basis of the grayscale images of step (2-1), yielding the noisy grayscale images, i.e., subset 2. A preferred expression is:

G_noise(i,j) = Gray(i,j) + G_GB(i,j)

where i and j are the row and column coordinates of the grayscale image, G_noise(i,j) is the pixel value at row i, column j of the noisy grayscale image, σ is the standard deviation of the Gaussian noise's normal distribution, and r is the blur radius of the noisy grayscale image.

After the noise is added, a noisy grayscale image of the spacecraft is obtained, which more realistically simulates imaging of the spacecraft in the space environment, provides accurate images for neural network training and characteristic part identification, and further improves the robustness and accuracy of characteristic part identification.

(3) Use the cubic interpolation method to shrink the grayscale image of each three-dimensional geometric image in the spacecraft image database, obtaining the corresponding interpolated image. A preferred scheme is:

f(i+u, j+v) = ABC

where f(i+u, j+v) is the pixel value at row i, column j of the interpolated image; u and v are the interpolation intervals; and A, B, and C are coefficient matrices, preferably of the following form:

A = [S(1+u) S(u) S(1-u) S(2-u)]

C = [S(1+v) S(v) S(1-v) S(2-v)]^T

where S(·) is the interpolation kernel function given above.

The interpolated images obtained through step (3) can realistically simulate pictures of a spacecraft imaged at different distances in the space environment, providing a more accurate training data set for the neural network.

(4) Rotate the grayscale image of each three-dimensional geometric image in subset 1 of the spacecraft image database from step (2) to obtain the rotated grayscale image. A preferred scheme is:

Let the coordinates of a pixel in the original image (i.e., the grayscale image of the three-dimensional geometric image) be (i, j), and its coordinates in the rotated grayscale image be (i1, j1); the preferred matrix expressions of the rotation transformation and of its inverse are as given in step (4) above.

There, θ is the angle of rotation about the center point of the original image (i.e., the grayscale image of the three-dimensional geometric image), measured in the plane of the original image, when going from the original image to the rotated grayscale image;

a and b are the coordinates of the rotation center before the image is rotated, i.e., the geometric center of the spacecraft's position in the image, and c and d are the coordinates of the center point after the image is rotated; between the center point before rotation and the center point after rotation there is both rotation and translation. A further preferred scheme is:

where w0 = 600 is the width of the original image and h0 = 1000 is its length; w1 = 600 is the width of the rotated grayscale image and h1 = 360 is its length. To faithfully simulate the size (length and width) constraints of optical camera imaging, the size of the rotated spacecraft's image in the optical camera changes; therefore, the length h1 and width w1 of the rotated image differ from the length h0 and width w0 of the original image.

Through the above preferred formulas and parameter requirements of this step, the accuracy of characteristic part identification is further improved.

(5) For subset 1 and subset 2 of the grayscale images of the three-dimensional geometric images in the spacecraft image database from step (2), the interpolated images from step (3), and the rotated grayscale images from step (4), annotate the contours of the spacecraft's characteristic parts; then assign a name to each annotated contour to obtain the labels of the characteristic parts, completing the annotation of the target spacecraft's characteristic parts. Taking the spacecraft's solar panel as an example, a preferred scheme is as follows:

(5-1) Discard images in which the position of the solar panel characteristic component cannot be determined;

(5-2) Delineate and mark the outline of the spacecraft's solar panel with a rectangular box, and record the region enclosed by the rectangular box as label s1;

(5-3) For pictures in which the spacecraft's solar panel is partially occluded, delineate and mark the panel with a polyline along its outline, and record the delineated region as label s1.

The images specifically processed in this step can further improve the accuracy of characteristic part identification.

(6) Construct a convolutional neural network and train it with the majority of the contour-annotated images in the spacecraft image database to obtain the trained convolutional neural network; input the remaining contour-annotated images into the trained network to identify the spacecraft's characteristic parts, and output the recognition results, realizing intelligent identification of the spacecraft's characteristic parts. A preferred scheme is:

(6-1) Construct the loss function L2 for edge recognition of the characteristic parts, preferably expressed as:
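
A weighted squared-error form is assumed here, consistent with the symbol definitions that follow:

L2 = M2·(y - ŷ)²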

where y is the region area annotated for each characteristic part of the images in the spacecraft image database, ŷ is the region area of the spacecraft's characteristic part identified by the neural network, and M2 is a weight coefficient (M2 is chosen as 1).

(6-2) Construct the preferred convolutional network with K channels in total, where the weight coefficient of each channel is Wj, 1 ≤ j ≤ K (K is preferably 10000). Each channel's weight coefficient is initialized to Wj(0) (Wj(0) is preferably 0.001).

(6-3) Train the neural network with the majority of the contour-annotated images in the spacecraft image database. Feed every annotated picture in the database to the constructed convolutional network; in each annotated image, the area y of the region containing the spacecraft's characteristic part is available from the label. Compute the area ŷ of the region containing the spacecraft's characteristic part from the network's channel weight coefficients Wj(0).

(6-4) Compute the loss function L2 for edge recognition of the characteristic parts from (6-1).

(6-5) Judge whether L2 ≤ L2min holds (L2min is preferably 0.16); if it holds, go to (6-7); otherwise go to (6-6);

(6-6) Update the channel weight coefficients of the convolutional network: Wj(l+1) = Wj(l)·dt, where dt is a design coefficient and l is the training step number, satisfying 1 ≤ l ≤ lmax. Return to step (6-3) for iterative computation. As shown in Figure 3, after many training iterations the edge loss function can be made to converge to within 0.16.

(6-7) Freeze the channel weight coefficients Wj(l) of the neural network parameters to obtain the trained neural network.

(6-8) Input the remaining contour-annotated images from the spacecraft image database into the trained neural network, compute the region containing the spacecraft's characteristic parts, and output the recognition results. As shown in Figure 4, the preferred convolutional neural network can accurately identify the location of the spacecraft's solar panel, realizing intelligent identification of the spacecraft's characteristic parts.

The present invention fuses feature extraction and feature recognition, avoiding the manual design of features; it is more general and greatly improves the efficiency and precision of recognizing the characteristic components of space targets. The invention is based on satellite image data from multiple sources, including two-dimensional images obtained by projecting satellite three-dimensional models; considering the actual operating environment of satellites in orbit, the satellite images are rendered in simulation, which not only yields more realistic satellite image data but also increases the amount of data, benefiting the training of the convolutional network.

Moreover, the present invention does not depend on known on-orbit feature points; it retains strong identification ability for spacecraft without feature points, offers greater generality and robustness, and greatly improves the efficiency and precision of identifying spacecraft characteristic parts such as solar panels.

Contents not described in detail in the specification of the present invention belong to technologies well known to those skilled in the art.

Claims (10)

1. An intelligent characteristic part identification method, characterized by comprising the following steps:
(1) Based on an attitude rotation method, constructing a spacecraft image database in space, and storing three-dimensional geometric images of the spacecraft under the condition of different three-axis attitude angle combinations in the spacecraft image database;
(2) The pixel values of the three color channels of red R, green G and blue B of each three-dimensional geometric image in the spacecraft image database are weighted and averaged, so that a gray image of the three-dimensional geometric image can be obtained; adding measurement noise on the basis of the gray image to obtain a noisy gray image;
(3) Performing reduction processing on the gray level image of each three-dimensional geometric image in the spacecraft image database by adopting a cubic interpolation method to obtain a corresponding interpolated image;
(4) Carrying out rotation change on the gray level image of each three-dimensional geometric image in the spacecraft image database in the step (2) to obtain a rotated gray level image;
(5) Respectively labeling the contours of the spacecraft characteristic parts on the gray level images and noisy gray level images of the three-dimensional geometric images from the step (2), the interpolated gray level images from the step (3), and the rotated gray level images from the step (4); respectively setting names for the marked spacecraft characteristic part contours to obtain labels of the characteristic parts, and finishing the marking of the target spacecraft characteristic parts;
(6) Constructing a convolutional neural network, and training it using the majority of the contour-marked images in the spacecraft image database to obtain a trained convolutional neural network; inputting the remaining contour-marked images in the spacecraft image database into the trained convolutional neural network to identify the characteristic parts of the spacecraft, and outputting the identified results, realizing intelligent identification of the characteristic parts of the spacecraft.
2. The intelligent feature recognition method according to claim 1, wherein: (1) Based on an attitude rotation method, constructing a spacecraft image database in space, and storing three-dimensional geometric images of the spacecraft under the condition of different three-axis attitude angle combinations in the spacecraft image database, wherein the three-dimensional geometric images specifically comprise:
(1) Based on the attitude rotation method, a spacecraft image database in space is constructed, namely, the three-axis attitude angles of the target spacecraft are set as the roll angle φ, pitch angle θ, and yaw angle ψ; the transformation range of each three-axis attitude angle is [0°, 360°], and values are taken every N degrees to form three-axis attitude angle combinations of a plurality of groups of spacecraft; the combinations are imported into simulation modeling software, and three-dimensional geometric images of the spacecraft under different three-axis attitude angle combinations are acquired to obtain the spacecraft image database.
3. The intelligent feature recognition method according to claim 1, wherein: (2) The pixel values of three color channels of red R, green G and blue B of each three-dimensional geometric image in the spacecraft image database are weighted and averaged, so that a gray image of the three-dimensional geometric image can be obtained, specifically:
Gray(i,j) = 0.299×R(i,j) + 0.587×G(i,j) + 0.114×B(i,j)
wherein: i and j are the row and column coordinates of the gray level image, i is greater than or equal to 1, and j is greater than or equal to 1; R(i,j), G(i,j) and B(i,j) are the pixel values of the red R, green G and blue B channels at row i, column j respectively; Gray(i,j) is the gray value at row i, column j of the gray level image.
4. The intelligent feature recognition method according to claim 1, wherein: (3) Performing reduction processing on the gray level image of each three-dimensional geometric image in the spacecraft image database by adopting a cubic interpolation method to obtain a corresponding interpolated image; the method comprises the following steps:
f(i+u,j+v)=ABC
wherein: f(i+u, j+v) is the pixel value at row i, column j of the interpolated image, u and v are the interpolation intervals, and A, B and C are coefficient matrices of the following form:
A=[S(1+u) S(u) S(1-u) S(2-u)]
C=[S(1+v) S(v) S(1-v) S(2-v)]^T
wherein: Gray(i,j) is the gray value at row i, column j of the gray level image, and S is the interpolation kernel function.
5. The intelligent feature recognition method according to claim 1, wherein: (1) Based on an attitude rotation method, constructing a spacecraft image database in space, and storing three-dimensional geometric images of the spacecraft under the condition of different three-axis attitude angle combinations in the spacecraft image database, wherein the three-dimensional geometric images specifically comprise:
the target spacecraft is M kinds of spacecraft in space, three-axis attitude angle combinations of a plurality of groups of target spacecraft are imported into STK software, three-dimensional geometric images of the M kinds of spacecraft under different three-axis attitude angle combinations are acquired in the STK software, and the three-dimensional geometric images are stored in a spacecraft image database in space.
6. The method for intelligently identifying the characteristic parts according to claim 1, wherein noise is added after the gray level image of the three-dimensional geometric image is obtained in the step (2) to simulate the noise generated when the spacecraft is photographed by real shooting equipment, and the resulting image is used as the gray level image finally obtained in the step (2); on the basis of the gray level image of the three-dimensional geometric image, measurement noise is added to obtain the noisy gray level image, specifically: Gaussian noise is added for blurring.
7. The intelligent feature recognition method according to claim 1, wherein: training the constructed convolutional neural network specifically comprises the following steps: the convolutional neural network receives image data as input and outputs labels of the characteristic parts of the image as predicted labels; the predicted labels and the characteristic part labels of the step (5) are input together into a set label loss function; the training error is calculated through the loss function; the neural network parameters are dynamically adjusted according to the training error to realize error convergence; and after the error converges to the set requirement, the trained convolutional neural network is obtained.
8. The intelligent feature recognition method according to claim 1, wherein: in the step (1), in the combination of different three-axis attitude angles, the transformation range of the three-axis attitude angles is [0, 360 degrees ], the value is taken every N degrees, and N is a common divisor of 360.
9. The intelligent feature recognition method according to claim 8, wherein: the different three-axis attitude angle combinations of the step (1) form (360/N)³ combinations of the three-axis attitude angles of the target spacecraft.
10. The intelligent feature recognition method according to claim 1, wherein: (1) The three-dimensional geometric images of the spacecraft stored in the spacecraft image database under the combination condition of different three-axis attitude angles are specifically as follows:
the three-axis attitude angle combinations of the plurality of groups of target spacecraft are imported into simulation modeling software, wherein the simulation modeling software is Satellite Tool Kit (STK) software, which can construct a three-dimensional model of the spacecraft, so that three-dimensional geometric images of the spacecraft under the condition of different three-axis attitude angle combinations are obtained.
CN202010350572.8A 2020-04-28 2020-04-28 Feature part intelligent recognition method Active CN111680552B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010350572.8A CN111680552B (en) 2020-04-28 2020-04-28 Feature part intelligent recognition method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010350572.8A CN111680552B (en) 2020-04-28 2020-04-28 Feature part intelligent recognition method

Publications (2)

Publication Number Publication Date
CN111680552A CN111680552A (en) 2020-09-18
CN111680552B true CN111680552B (en) 2023-10-03

Family

ID=72452614

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010350572.8A Active CN111680552B (en) 2020-04-28 2020-04-28 Feature part intelligent recognition method

Country Status (1)

Country Link
CN (1) CN111680552B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114491694B (en) * 2022-01-17 2024-06-25 北京航空航天大学 A method for constructing a space target dataset based on Unreal Engine
US20250124179A1 (en) * 2023-10-11 2025-04-17 Proteus Space, Inc. Methods and systems for rapid design and delivery of spacecrafts

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103093456A (en) * 2012-12-25 2013-05-08 北京农业信息技术研究中心 Corn ear character index computing method based on images
CN104482934A (en) * 2014-12-30 2015-04-01 华中科技大学 Multi-transducer fusion-based super-near distance autonomous navigation device and method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6959109B2 (en) * 2002-06-20 2005-10-25 Identix Incorporated System and method for pose-angle estimation
US8180112B2 (en) * 2008-01-21 2012-05-15 Eastman Kodak Company Enabling persistent recognition of individuals in images

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103093456A (en) * 2012-12-25 2013-05-08 北京农业信息技术研究中心 Corn ear character index computing method based on images
CN104482934A (en) * 2014-12-30 2015-04-01 华中科技大学 Multi-transducer fusion-based super-near distance autonomous navigation device and method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Monocular pose measurement method for high-speed targets based on color images; 刘巍; 陈玲; 马鑫; 李肖; 贾振元; Chinese Journal of Scientific Instrument (No. 03); 675-682 *
On-orbit identification of modal parameters of solar wings based on visual measurement; 吴小猷; 李文博; 张国琪; 关新; 郭胜; 刘易; Aerospace Control and Application (No. 03); 9-14 *

Also Published As

Publication number Publication date
CN111680552A (en) 2020-09-18

Similar Documents

Publication Publication Date Title
CN114419147A (en) Rescue robot intelligent remote human-computer interaction control method and system
CN119180908A (en) Gaussian splatter-based laser enhanced visual three-dimensional reconstruction method and system
CN110580717A (en) A method for generating autonomous inspection routes of unmanned aerial vehicles for power towers
CN106570905B (en) A kind of noncooperative target point cloud initial attitude verification method
CN109035327B (en) Panoramic camera pose estimation method based on deep learning
CN107679537A (en) A kind of texture-free spatial target posture algorithm for estimating based on profile point ORB characteristic matchings
CN105138756B (en) Satellite agility imaging simulation and positioning accuracy appraisal procedure
CN119006591B (en) Multi-scale space target relative pose estimation method and system based on deep learning under complex environment
CN106251282A (en) A kind of generation method and device of mechanical arm sampling environment analogous diagram
CN118229799B (en) Feature-level camera-lidar online calibration method based on Transformer
CN117972885A (en) Simulation enhancement-based space intelligent perception data generation method
CN111680552B (en) Feature part intelligent recognition method
CN114491694A (en) Spatial target data set construction method based on illusion engine
CN113706619A (en) Non-cooperative target attitude estimation method based on space mapping learning
CN116661334A (en) Verification method of semi-physical simulation platform for missile tracking target based on CCD camera
CN120105761B (en) High-fidelity space far-field time-sensitive target optical data set generation method
CN119693459B (en) High-precision space spacecraft pose estimation method based on deep learning
CN118967954B (en) Urban space three-dimensional reconstruction method and system based on big data
CN117094158A (en) Golf ball track prediction method and device
CN115423717A (en) Laser Point Cloud Enhanced Reconstruction Method Based on Image Guidance
CN113592929A (en) Real-time splicing method and system for aerial images of unmanned aerial vehicle
CN115830111B (en) A method for UAV image positioning and attitude determination based on adaptive fusion of point and line features
CN120495416B (en) Pose estimation method and device based on space-to-ground cross-view image matching
CN119758408B (en) Water surface multi-target positioning method and system
CN110503622A (en) Image overall positioning and optimizing joining method based on location data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant