CN117351333A - Quick star image extraction method of star sensor - Google Patents
- Publication number
- CN117351333A (application number CN202311316756.2A)
- Authority: CN (China)
- Prior art keywords: star, feature map, image, star image, feature
- Prior art date: 2023-10-12
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Classifications
- G06V20/00—Scenes; Scene-specific elements
- G01C21/02—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by astronomical means
- G01C25/00—Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass
- G06V10/16—Image acquisition using multiple overlapping images; Image stitching
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/40—Extraction of image or video features
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level, of extracted features
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using neural networks
Abstract
The invention discloses a fast star image extraction method for a star sensor. The invention provides a star image extraction network suited to star sensors, SSN, which can extract the regions containing star image targets from a star map filled with stray light, and a neural-network-based error compensation model, SEC, which reduces the error in the star image centroid coordinates obtained when the Gaussian surface fitting method processes those regions. The invention can extract star image centroid coordinates accurately and quickly, directly from star maps subject to strong stray-light interference.
Description
Technical Field
The invention relates to the technical field of star sensor attitude measurement, and in particular to a fast star image extraction method for a star sensor.
Background
A star sensor provides a spacecraft with attitude information at arc-second accuracy by detecting stars at different positions on the celestial sphere, and is regarded as one of the most accurate attitude sensors. The main stages of attitude measurement with a star sensor are starry-sky imaging, star image extraction, star map identification, and attitude estimation. Starry-sky imaging uses a camera to photograph the sky and obtain a star map; star image extraction processes the star map to determine the positions of star images in the star map coordinate system. The accuracy of star image extraction directly determines the success of star map identification and the attitude measurement performance. At the same time, star image extraction is the most time-consuming part of the attitude measurement process, so research on fast star image extraction methods is of great significance for developing high-performance star sensors.
Early star sensor research considered only high signal-to-noise imaging environments, where a simple global-threshold denoising step was usually sufficient before star image extraction. As flight missions have diversified, star sensor operating environments have become increasingly complex, stray light has become more significant, and accuracy requirements have risen, so traditional star image extraction algorithms struggle to achieve satisfactory results. Researchers have studied stray-light-resistant star image extraction in depth. Ding et al. of the National University of Defense Technology achieved real-time star image centroid extraction using the parallel computing capability of an FPGA, with an extraction speed of only 5.2 ms per frame and centroid accuracy better than 0.01 pixel; their method uses a Top-Hat morphological filter and achieves a degree of noise resistance. Xing Fei et al. of Tsinghua University addressed attitude measurement for high-speed spinning satellites by introducing an extended Kalman filter that deeply fuses star sensor and MEMS gyroscope data, reducing the impact of detector noise on extraction accuracy; the star point extraction error of this method is less than 0.2 pixel. He Yiyang et al. proposed a gray-gradient-based star image extraction algorithm that uses adaptive threshold segmentation to reduce star map noise and improve robustness against Gaussian white noise. These algorithms achieve good star map processing speed and extraction accuracy, but there is still room for improvement on high-resolution star maps with strong stray light. Fast and accurate stray-light-resistant star image extraction is therefore a necessary requirement for developing high-performance star sensors.
Summary of the Invention
In view of the problems in the prior art, the invention provides a fast star image extraction method suitable for star sensors, which extracts the regions containing star image targets from a star map disturbed by stray light, and uses a neural-network-based error compensation model to reduce the centroid coordinate error produced when the Gaussian surface fitting method processes those regions.
The object of the invention is achieved through the following technical solution.
A fast star image extraction method for a star sensor comprises the following steps:
1) Divide the 64k×64k stray-light-contaminated star map into four 32k×32k sub-star maps and feed the four sub-star maps into the feature extraction backbone of the star image extraction network.
2) Take each 32k×32k, single-channel sub-star map as input; the first two-dimensional convolution layer, with stride 2 and ReLU activation, outputs a feature map F1 of size 16k×16k with 8 channels.
3) Feed the feature map F1 from step 2) into the first residual module Res n, producing a feature map F2 of size 16k×16k with 8 channels.
4) Feed the feature map F2 from step 3) into the second residual module Res n, producing a feature map F3 of size 8k×8k with 8 channels.
5) Feed the feature map F3 from step 4) into the third residual module Res n, producing a feature map F4 of size 4k×4k with 16 channels.
6) Feed the feature map F4 from step 5) into the fourth residual module Res n, producing a feature map F5 of size 2k×2k with 16 channels.
7) Feed the feature map F5 from step 6) into the fifth residual module Res n, producing a feature map F6 of size k×k with 16 channels.
8) The feature maps then enter the feature fusion module. F6 from step 7) passes through 5 CR modules in sequence to give feature map F7; F7 is upsampled and channel-concatenated with F5 from step 6), splicing the two same-size feature maps together along the channel dimension to give F8. F8 passes through 5 CR modules in sequence to give feature map F9; F9 is upsampled and channel-concatenated with F4 from step 5) to give F10, which passes through 5 further CR modules to give the final prediction feature map P.
9) Feed the feature map P into the detection module for star image detection.
10) Output the predicted region coordinate data, i.e., the extents of the regions containing star images in each sub-star map.
11) Apply a coordinate transformation to the predicted region coordinates. The four sub-star maps processed in parallel by the network belong to the same star map: the coordinates obtained from the first sub-map are unchanged; the abscissa of the second sub-map is multiplied by the sub-map scaling ratio (original size / sub-map size) with the ordinate unchanged; the ordinate of the third sub-map is multiplied by the scaling ratio with the abscissa unchanged; and both coordinates of the fourth sub-map are multiplied by the scaling ratio. This yields the original-map region extents corresponding to the star image regions in each sub-map.
12) Output the star image region images from the original star map.
13) Fit a Gaussian surface to each star image region image to solve for the star image centroid. The center is located from the energy distribution of the star image. Let the initialized surface center be (x0, y0, I0); the Gaussian surface model is then:

f(xi, yi) = I0 · exp(-((xi - x0)^2 + (yi - y0)^2) / (2σ^2))

where (x0, y0) is the center of the star image; I0 is the central star image energy, related to the stellar magnitude and the optical system; σ is the Gaussian dispersion radius, representing the size of the star image spot; and f(xi, yi) is the energy of each pixel. Taking the logarithm of both sides of the Gaussian surface model linearizes it:

ln f(xi, yi) = ln I0 - ((xi - x0)^2 + (yi - y0)^2) / (2σ^2)

Based on the idea of least squares, the centroid coordinates of the star image can then be solved. Because the centroid position computed this way still deviates appreciably from the actual star image position, it is fed into the error compensation neural network SEC for correction.
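For illustration, the following is a minimal NumPy sketch of this linearized least-squares fit; it assumes the region image is a two-dimensional array of non-negative pixel energies, and all names are illustrative rather than part of the claimed method.

```python
import numpy as np

def gaussian_centroid(patch):
    """Fit a Gaussian surface to a star image region by linear least
    squares on the log-linearized model
        ln f = p0 + p1*x + p2*y + p3*(x^2 + y^2),
    then recover the centroid as x0 = -p1/(2*p3), y0 = -p2/(2*p3)."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    mask = patch > 0                       # logarithm needs positive energy
    x, y = xs[mask].astype(float), ys[mask].astype(float)
    b = np.log(patch[mask].astype(float))  # left-hand side: ln f(xi, yi)
    A = np.column_stack([np.ones_like(x), x, y, x**2 + y**2])
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return -p[1] / (2.0 * p[3]), -p[2] / (2.0 * p[3])
```

Adding the region's upper-left corner position in the original star map to the returned sub-pixel center gives the centroid in original-map coordinates, which is then passed to SEC.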
The star image extraction network SSN comprises a feature extraction backbone network, a feature fusion network, and a detection network.
The feature extraction backbone network comprises 5 residual modules Res n. Each Res n module consists of n residual structures (Res units) and m two-dimensional convolution layers, where n depends on the depth within the network and takes the value 1 or 2, and m is fixed at 1. Every convolution kernel in the Res units and the two-dimensional convolution layers is 3×3, and the activation function is the ReLU function. The backbone is used to accurately extract the features of star images from the star map.
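A compact PyTorch sketch of this backbone follows. The per-module n values (1, 1, 2, 2, 1) are taken from the embodiment below, the strides are inferred from the stated feature-map sizes (only the first Res module preserves the spatial size), and the Res unit here is a simplified stand-in for the attention-equipped structure detailed later.

```python
import torch
import torch.nn as nn

class ResUnit(nn.Module):
    """Placeholder residual unit; the full structure (1x1 reduction, 3x3
    conv, channel attention, 1x1 expansion) is sketched further below."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))
    def forward(self, x):
        return torch.relu(self.body(x) + x)

def res_n(n, c_in, c_out, stride):
    """Res n module: n residual units followed by one 3x3 convolution
    (m = 1) that applies the stated stride and channel change."""
    layers = [ResUnit(c_in) for _ in range(n)]
    layers += [nn.Conv2d(c_in, c_out, 3, stride=stride, padding=1), nn.ReLU()]
    return nn.Sequential(*layers)

# Backbone per steps 2)-7): strides chosen to reproduce the stated sizes.
backbone = nn.Sequential(
    nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),  # F1: 16k x 16k, 8 ch
    res_n(1, 8, 8, stride=1),    # F2: 16k x 16k, 8 ch
    res_n(1, 8, 8, stride=2),    # F3:  8k x  8k, 8 ch
    res_n(2, 8, 16, stride=2),   # F4:  4k x  4k, 16 ch
    res_n(2, 16, 16, stride=2),  # F5:  2k x  2k, 16 ch
    res_n(1, 16, 16, stride=2),  # F6:   k x   k, 16 ch
)
```

In the full network, the intermediate maps F4, F5, and F6 would be tapped as inputs to the feature fusion module described next.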
The feature fusion network comprises CR modules, upsampling operations, and channel concatenation operations. A CR module is a standard convolution module consisting of a two-dimensional convolution layer, batch normalization, and a ReLU activation function. The upsampling operation enlarges a low-resolution image to high resolution by bilinear interpolation, restoring two feature maps of different resolutions to the same size so that same-size feature maps can be channel-concatenated. The feature fusion network increases the amount of contextual information carried by the feature maps.
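A minimal sketch of these fusion components, assuming a 3×3 kernel for the CR module's convolution (the text does not fix the kernel size):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CR(nn.Module):
    """CR module: a standard convolution block of Conv2d, batch
    normalization, and ReLU."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1),
            nn.BatchNorm2d(c_out),
            nn.ReLU())
    def forward(self, x):
        return self.block(x)

def fuse(deep, shallow):
    """Bilinearly upsample the deeper (lower-resolution) map to the
    shallow map's size, then splice the two along the channel axis."""
    up = F.interpolate(deep, size=shallow.shape[-2:],
                       mode="bilinear", align_corners=False)
    return torch.cat([up, shallow], dim=1)
```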
The detection network comprises a CR module and one convolution layer, and is used to predict star image regions from the feature map.
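A corresponding sketch, reusing the CR module defined above; the patent specifies only "a CR module and one convolution layer", so the channel counts (16 in, matching P) and the five output channels (four region coordinates plus a confidence score) are assumptions:

```python
import torch.nn as nn  # CR is the module defined in the sketch above

# Detection head: one CR module followed by a single 1x1 convolution.
detect = nn.Sequential(CR(16, 16), nn.Conv2d(16, 5, kernel_size=1))
```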
The transformation of a feature map within a single residual structure (Res unit) proceeds as follows:
The input feature map x passes through a 1×1 dimension-reducing convolution to give the output feature map x'.
A 3×3 convolution applied to x' gives the output feature map x''.
A ReLU activation function applies a nonlinear mapping to each element of x'', giving the feature map z.
The feature map z is fed into the attention mechanism: a global average pooling operation compresses the feature map on each channel to obtain per-channel global context information; the compressed feature map passes through a fully connected layer to produce a weight vector for each channel; multiplying the feature map by the corresponding channel weights applies a weighted rescaling that strengthens important features and weakens unimportant ones, giving the feature map z'.
Adding the feature maps z and z' gives the residual r, which is then passed to the next layer for convolution.
The residual r passes through a ReLU activation that nonlinearly maps each element, and a 1×1 dimension-raising convolution, finally giving the output feature map y.
This process can be written as:
y = W2 · BN(ReLU(BN(ReLU(W1 · x)))) + Ws · x
where W1 and W2 are the convolution filters used by the respective convolution layers, Ws is a dimension transformation that makes the channel count of the input feature map match that of the output feature map, x is the input feature map, and y is the output feature map.
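A PyTorch sketch of this residual structure follows. Where the text is silent (the width of the reduced middle channels, the use of a sigmoid to turn the fully connected layer's output into channel weights, and the exact placement of batch normalization), the choices below are assumptions.

```python
import torch
import torch.nn as nn

class ResUnitAttn(nn.Module):
    """Res unit with channel attention, following the steps above."""
    def __init__(self, c_in, c_out, c_mid=None):
        super().__init__()
        c_mid = c_mid or max(c_in // 2, 1)                # assumed reduction
        self.reduce = nn.Conv2d(c_in, c_mid, 1)           # x -> x'
        self.conv3 = nn.Sequential(
            nn.Conv2d(c_mid, c_mid, 3, padding=1),
            nn.BatchNorm2d(c_mid))                         # x' -> x''
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # global avg pool
            nn.Conv2d(c_mid, c_mid, 1),                    # fully connected
            nn.Sigmoid())                                  # channel weights
        self.expand = nn.Conv2d(c_mid, c_out, 1)           # dimension-raising
        self.shortcut = nn.Conv2d(c_in, c_out, 1)          # Ws: match channels

    def forward(self, x):
        z = torch.relu(self.conv3(self.reduce(x)))   # x', x'', ReLU -> z
        z2 = z * self.attn(z)                        # weighted rescaling -> z'
        r = z + z2                                   # residual r = z + z'
        y = self.expand(torch.relu(r))               # ReLU + 1x1 raise -> y
        return y + self.shortcut(x)                  # add Ws * x shortcut
```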
The neural-network-based error compensation model SEC, used to compensate the centroid coordinate error obtained with the Gaussian surface fitting method, comprises five hidden layers F1-F5, one input layer, and one output layer. Hidden layers F1 and F5 each consist of 5 neurons; hidden layers F2, F3, and F4 each consist of 7 neurons; the activation function of every neuron is the ReLU function.
The error compensation model uses a mean squared error loss function and the Adam optimizer. Let the model output be y and the true value be t; the mean squared error loss is then:

MSE = (1/n) · Σ (yi - ti)^2, summed over i = 1, ..., n

where n is the number of predicted and actual values; for a given sample, yi is the value predicted by the model and ti the actual value.
In each training iteration, data carrying errors are input to the model and the model's output is obtained; the mean squared error between the output and the true result is computed and used as the loss value. The gradient cache is then cleared, backpropagation is performed, and the network parameters are updated. This training process is repeated until the network's performance meets the requirements.
Feeding star image centroid coordinates that carry errors into the trained error compensation model SEC yields the error-corrected star image centroid coordinates.
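The following PyTorch sketch instantiates this architecture. The two-dimensional input and output (a biased centroid coordinate pair in, a corrected pair out) and the learning rate are assumptions, since the text specifies only the hidden-layer widths, the loss, and the optimizer.

```python
import torch
import torch.nn as nn

# SEC: five hidden layers of 5-7-7-7-5 neurons, ReLU throughout.
sec = nn.Sequential(
    nn.Linear(2, 5), nn.ReLU(),
    nn.Linear(5, 7), nn.ReLU(),
    nn.Linear(7, 7), nn.ReLU(),
    nn.Linear(7, 7), nn.ReLU(),
    nn.Linear(7, 5), nn.ReLU(),
    nn.Linear(5, 2))

loss_fn = nn.MSELoss()                                  # mean squared error
optimizer = torch.optim.Adam(sec.parameters(), lr=1e-3)  # lr is assumed

def train_step(biased_coords, true_coords):
    """One training iteration as described above: forward pass, MSE loss,
    clear gradients, backpropagate, update parameters."""
    pred = sec(biased_coords)
    loss = loss_fn(pred, true_coords)
    optimizer.zero_grad()        # clear the gradient cache
    loss.backward()              # backpropagation
    optimizer.step()             # update network parameters
    return loss.item()
```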
Compared with the prior art, the advantages of the invention are as follows. The invention provides a star image extraction network, Star Sensor Net (SSN), suited to star sensors, which can extract the regions containing star image targets from a star map filled with stray light, and a neural-network-based error compensation network, Star Error Compensation (SEC), which reduces the error of the Gaussian surface fitting method in computing star image centroid coordinates. The invention can extract star image centroid coordinates accurately and quickly, directly from star maps subject to strong stray-light interference, and thus solves the technical problem that the prior art cannot accurately and rapidly extract star image centroids from stray-light-polluted star maps.
Description of the Drawings
Figure 1 is a flow chart of the fast star image extraction.
Figure 2 is a schematic diagram of the residual structure.
Figure 3 is a schematic diagram of the SSN star image extraction network.
Figure 4 is the stray-light-contaminated star map of the embodiment.
Figure 5 is sub-star map 1 after segmentation in the embodiment.
Figure 6 is sub-star map 2 after segmentation in the embodiment.
Figure 7 is sub-star map 3 after segmentation in the embodiment.
Figure 8 is sub-star map 4 after segmentation in the embodiment.
Figure 9 is the output of the SSN star image detection network for sub-star map 1 in the embodiment.
Figure 10 is the output of the SSN star image detection network for sub-star map 2 in the embodiment.
Figure 11 is the output of the SSN star image detection network for sub-star map 3 in the embodiment.
Figure 12 is the output of the SSN star image detection network for sub-star map 4 in the embodiment.
Figure 13 is the region image of star image number 1 in the embodiment.
Figure 14 is a schematic diagram of the Gaussian surface fitting centroid for the region of star image number 1 in the embodiment.
Figure 15 is a schematic diagram of the star image coordinate error compensation neural network SEC.
Figure 16 shows the ordinate errors of all star image centroids in the embodiment.
Figure 17 shows the abscissa errors of all star image centroids in the embodiment.
Detailed Description
The invention is described in detail below with reference to the drawings and a specific embodiment.
Embodiment
Figure 1 shows the implementation flow of the neural-network-based fast star image extraction method provided by this embodiment, detailed as follows.
The stray-light star map to be processed in this embodiment is shown in Figure 4; Figures 5, 6, 7, and 8 show the sub-star maps.
1. Divide the 1024×1024 stray-light-contaminated star map into four 512×512 sub-star maps and feed them into the SSN feature extraction backbone.
2. Take each 512×512, single-channel sub-star map as input; the first two-dimensional convolution layer, with stride 2 and ReLU activation, outputs the 256×256, 8-channel feature map F1.
3. Feed F1 from step 2 into the first residual module Res (one Res unit plus one two-dimensional convolution), producing the 256×256, 8-channel feature map F2.
4. Feed F2 from step 3 into the second residual module Res (one Res unit plus one two-dimensional convolution), producing the 128×128, 8-channel feature map F3.
5. Feed F3 from step 4 into the third residual module Res2 (two Res units plus one two-dimensional convolution), producing the 64×64, 16-channel feature map F4.
6. Feed F4 from step 5 into the fourth residual module Res2 (two Res units plus one two-dimensional convolution), producing the 32×32, 16-channel feature map F5.
7. Feed F5 from step 6 into the fifth residual module Res (one Res unit plus one two-dimensional convolution), producing the 16×16, 16-channel feature map F6.
8. The feature maps then enter the feature fusion module. F6 from step 7 passes through 5 CR modules to give feature map F7; F7 is upsampled and channel-concatenated with F5 from step 6, splicing the two same-size feature maps together along the channel dimension to give F8. F8 passes through 5 CR modules to give F9; F9 is upsampled and channel-concatenated with F4 from step 5 to give F10, which passes through 5 further CR modules to give the final prediction feature map P.
9. Feed the feature map P into the detection module for star image detection.
10. Output the predicted region coordinate data, i.e., the region extents of the star images in each sub-star map, as shown in Figures 9, 10, 11, and 12. The predicted region coordinates are listed in Table 1, where (x1, y1) is the upper-left corner of a predicted region and (x2, y2) the lower-right corner.
Table 1. Predicted region coordinate data
11. Apply the coordinate transformation to the predicted region coordinates. The four sub-star maps processed in parallel by the network belong to the same star map: the coordinates obtained from the first sub-map are unchanged; the abscissa of the second sub-map is multiplied by 2 (original size / sub-map size) with the ordinate unchanged; the ordinate of the third sub-map is multiplied by 2 with the abscissa unchanged; and both coordinates of the fourth sub-map are multiplied by 2. This yields the original-map region coordinates corresponding to each sub-map region (a code sketch of this mapping follows Table 2). The transformed predicted region coordinates are listed in Table 2.
Table 2. Transformed predicted region coordinate data
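The following sketch transcribes the transformation rule exactly as stated in step 11, with a scaling ratio of 2 for the 1024-to-512 split; the function name and signature are illustrative:

```python
def to_original(sub_idx, x, y, scale=2.0):
    """Map a predicted coordinate from one of the four sub-star maps back
    to the original star map, per step 11: sub-map 1 keeps both
    coordinates, sub-map 2 scales the abscissa, sub-map 3 scales the
    ordinate, sub-map 4 scales both (scale = original size / sub size)."""
    if sub_idx == 2:
        return x * scale, y
    if sub_idx == 3:
        return x, y * scale
    if sub_idx == 4:
        return x * scale, y * scale
    return x, y  # sub-map 1: unchanged
```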
To further illustrate the calculation, the predicted region of star image number 1 is used as an example; the centroid extraction steps for the other star images are identical.
12. Output the region image of star image 1 from the original star map, shown in Figure 13. The region image is 25×25, and its position in the original map is (802, 50, 827, 75), where (802, 50) is the upper-left corner and (827, 75) the lower-right corner.
13. Fit a Gaussian surface to the given region image to solve for the star image centroid. The center is located from the energy distribution of the star image. Let the initialized surface center be (x0, y0, I0); the Gaussian surface model is then:

f(xi, yi) = I0 · exp(-((xi - x0)^2 + (yi - y0)^2) / (2σ^2))

where (x0, y0) is the center of the star image; I0 is the central star image energy, related to the stellar magnitude and the optical system; σ is the Gaussian dispersion radius, representing the size of the star image spot; and f(xi, yi) is the energy of each pixel. Taking the logarithm of both sides linearizes the model:

ln f(xi, yi) = ln I0 - ((xi - x0)^2 + (yi - y0)^2) / (2σ^2)

Based on least squares, the center of star image number 1 within its region image is solved as (13.326, 13.217); adding the upper-left corner position (802, 50) of the region image in the original map gives centroid coordinates of (815.326, 63.217), which deviate appreciably from the actual centroid coordinates (816.938, 64.376).
14. Affected by stray light, the centroid position computed by Gaussian surface fitting deviates appreciably from the actual star image position, so it is fed into the error compensation neural network for correction; the corrected coordinates are (816.658, 64.686), a small error relative to the exact position (816.938, 64.376).
Table 3. Star image extraction simulation results (with stray-light interference added)
As shown in Table 3, the invention extracts centroid coordinates for all star images in the star map. After the error-bearing coordinates obtained by the Gaussian surface fitting method are fed into the error compensation neural network, all resulting star image centroid coordinates show only small errors relative to the accurate coordinates: the abscissa errors are shown in Figure 17 and the ordinate (column) errors in Figure 16, both around 0.03 pixel, and the distance between each computed centroid and the standard centroid position is about 0.04 pixel. The star image extraction algorithm provided by the invention therefore solves well the problem of fast star image extraction under strong stray-light interference.
Claims (8)
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202311316756.2A | 2023-10-12 | 2023-10-12 | Quick star image extraction method of star sensor |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN117351333A (en) | 2024-01-05 |
Family
ID=89362542
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202311316756.2A (Pending) | Quick star image extraction method of star sensor | 2023-10-12 | 2023-10-12 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN117351333A (en) |
Cited By (4)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN117853582A | 2024-01-15 | 2024-04-09 | 苏州科技大学 | Star sensor rapid star image extraction method based on improved Faster R-CNN |
| CN117853582B | 2024-01-15 | 2024-09-20 | 苏州科技大学 | Star sensor rapid star image extraction method based on improved Faster R-CNN |
| CN117727063A | 2024-02-07 | 2024-03-19 | 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) | Star map identification method based on map attention network |
| CN117727063B | 2024-02-07 | 2024-04-16 | 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) | Star map identification method based on map attention network |
Similar Documents

| Publication | Title |
|---|---|
| CN117351333A | Quick star image extraction method of star sensor |
| CN111507271A | A method for intelligent detection and identification of airborne optoelectronic video targets |
| CN111461083A | A fast vehicle detection method based on deep learning |
| CN111368769B | Ship multi-target detection method based on improved anchor point frame generation model |
| CN110633661A | A remote sensing image object detection method fused with semantic segmentation |
| CN113052109A | 3D target detection system and 3D target detection method thereof |
| CN101826157B | Ground static target real-time identifying and tracking method |
| CN113850129A | Target detection method for rotary equal-variation space local attention remote sensing image |
| CN105957058A | Preprocessing method of star map |
| CN111079604A | Method for quickly detecting tiny target facing large-scale remote sensing image |
| CN114663654B | An improved YOLOv4 network model and small target detection method |
| CN106056625A | Airborne infrared moving target detection method based on geographical homologous point registration |
| CN112907557A | Road detection method, road detection device, computing equipment and storage medium |
| CN115205467A | Space non-cooperative target part identification method based on light weight and attention mechanism |
| CN118351410A | Multi-mode three-dimensional detection method based on sparse agent attention |
| CN113436237A | High-efficient measurement system of complicated curved surface based on gaussian process migration learning |
| CN116755090A | SAR ship detection method based on novel pyramid structure and mixed pooling channel attention mechanism |
| CN116740135A | Infrared weak and small target tracking methods, devices, electronic equipment and storage media |
| CN112581626B | Complex curved surface measurement system based on non-parametric and multi-attention force mechanism |
| CN112525145A | Aircraft landing relative attitude dynamic vision measurement method and system |
| CN117788296A | Super-resolution reconstruction method of infrared remote sensing images based on heterogeneous combined deep network |
| CN117011648A | Haptic image dataset expansion method and device based on single real sample |
| CN117197441A | Small target detection method based on full-link multi-scale fusion network |
| CN117853582B | Star sensor rapid star image extraction method based on improved Faster R-CNN |
| CN115830632A | Infrared image pedestrian reflection detection method based on deep learning and image mask |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |