CN117351333A - Quick star image extraction method of star sensor
- Publication number: CN117351333A
- Application number: CN202311316756.2A
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Classifications
- G06V20/00 — Scenes; Scene-specific elements
- G01C21/02 — Navigation; navigational instruments not provided for in groups G01C1/00–G01C19/00, by astronomical means
- G01C25/00 — Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass
- G06V10/16 — Image acquisition using multiple overlapping images; image stitching
- G06V10/26 — Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region
- G06V10/40 — Extraction of image or video features
- G06V10/806 — Fusion, i.e. combining data from various sources at the sensor, preprocessing, feature extraction or classification level, of extracted features
- G06V10/82 — Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using neural networks
Abstract
The invention discloses a quick star image extraction method for a star sensor. A star image extraction network for the star sensor, SSN, extracts the regions where star image targets are located from a star map flooded with stray light, and a neural-network-based error compensation model, SEC, reduces the error of the star image centroid coordinates obtained by processing those regions with a Gaussian surface fitting method. The method can extract star image centroid coordinates accurately and rapidly, directly from star maps disturbed by strong stray light.
Description
Technical Field
The invention relates to the technical field of star sensor attitude measurement, and in particular to a quick star image extraction method for a star sensor.
Background
The star sensor provides a spacecraft with attitude information at arc-second precision by detecting stars at different positions on the celestial sphere, and is regarded as one of the most accurate attitude sensors. The main stages of attitude measurement with a star sensor are star field imaging, star image extraction, star map recognition and attitude estimation. Star field imaging photographs the star field with a camera to obtain a star map; star image extraction determines the positions of the star images in the star map coordinate system by processing that map. The accuracy of star image extraction directly determines whether star map recognition succeeds and how well the attitude is measured. Star image extraction is also the most time-consuming part of the star sensor attitude measurement process, so research on fast star image extraction methods is of great significance for the development of high-performance star sensors.
Early star sensor research considered only imaging environments with a high signal-to-noise ratio, where a global threshold method and simple denoising sufficed for star image extraction. As aircraft missions have diversified, the working environment of the star sensor has become increasingly complex, the influence of stray light increasingly significant, and the precision requirements increasingly strict, and the traditional star image extraction algorithms struggle to deliver satisfactory results. Researchers have therefore studied star image extraction under stray light interference intensively. One method adopts a Top-Hat morphological filter and achieves a certain anti-noise effect. For the attitude measurement problem of high-speed spinning satellites, Xing Fei of Tsinghua University reduced the influence of detector noise on star image extraction accuracy by introducing an extended Kalman filter and deeply fusing the data of a star sensor and a MEMS gyroscope; the star point extraction error of that method is below 0.2 pixel. He Yiyang et al. reduced star image noise with an adaptive threshold segmentation algorithm and improved the star sensor's robustness to Gaussian white noise. These algorithms perform well in both speed and star image extraction accuracy, but there is still room for improvement on high-resolution star maps with strong stray light. Research on fast and accurate stray-light-resistant star image extraction is therefore a necessary requirement for developing high-performance star sensors.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a fast star image extraction method for a star sensor that extracts the regions where star image targets are located from a star map disturbed by stray light, and uses a neural-network-based error compensation model to reduce the error of the star image centroid coordinates obtained by processing those regions with a Gaussian surface fitting method.
The aim of the invention is achieved by the following technical scheme.
A quick star image extraction method of a star sensor comprises the following steps:
1) Dividing a 64k×64k star map polluted by stray light into four 32k×32k sub-star maps and sending them into the feature extraction backbone network of the star image extraction network;
2) Taking each 32k×32k, single-channel sub-star map as input and applying a first-layer two-dimensional convolution with stride 2 and ReLU activation, outputting a 16k×16k feature map F1 with 8 channels;
3) Sending the feature map F1 from step 2) into the first residual module Res n to obtain a 16k×16k feature map F2 with 8 channels;
4) Sending the feature map F2 from step 3) into the second residual module Res n to obtain an 8k×8k feature map F3 with 8 channels;
5) Sending the feature map F3 from step 4) into the third residual module Res n to obtain a 4k×4k feature map F4 with 16 channels;
6) Sending the feature map F4 from step 5) into the fourth residual module Res n to obtain a 2k×2k feature map F5 with 16 channels;
7) Sending the feature map F5 from step 6) into the fifth residual module Res n to obtain a k×k feature map F6 with 16 channels;
8) The feature maps enter the feature fusion module for feature fusion: F6 from step 7) passes through 5 CR modules in sequence to give a feature map F7; F7 is up-sampled and channel-concatenated with F5, splicing the channels of the two equally sized feature maps together to give F8; F8 passes through 5 CR modules to give F9; F9 is up-sampled and channel-concatenated with F4 to give F10; and F10 passes through 5 CR modules to give the final prediction feature map P;
9) The feature map P is sent to a detection module for star image detection;
10) Outputting the predicted region coordinate data, i.e. the region ranges of the star images in the sub-star maps;
11) Performing coordinate transformation on the predicted region coordinate data. The four sub-star maps processed in parallel by the network belong to the same star map: the abscissa and ordinate obtained from the first sub-map are unchanged; the abscissa of the second sub-map is multiplied by the sub-star-map scaling factor (scaling factor = original map size / sub-map size) with the ordinate unchanged; the ordinate of the third sub-map is multiplied by the scaling factor with the abscissa unchanged; and the abscissa of the fourth sub-star map is multiplied by the scaling factor, giving the original-map region ranges corresponding to the region ranges of the star images in the sub-maps;
12) Outputting the star image region images in the original map;
13) Performing Gaussian surface fitting on the given star image region image and solving the star image centroid. Centre positioning is based on the energy distribution characteristics of the star image; the centre of the initialized surface is set to (x_0, y_0, I_0), and the Gaussian surface model is as follows:

f(x_i, y_i) = I_0 exp( -((x_i - x_0)^2 + (y_i - y_0)^2) / (2σ^2) )

where (x_0, y_0) is the centre of the star image; I_0 is the central star image energy, related to the star magnitude and the optical system; σ is the Gaussian dispersion radius, characterizing the size of the star image spot; and f(x_i, y_i) is the energy of pixel (x_i, y_i). Taking the logarithm of both sides of the Gaussian surface model to linearize it gives:

ln f(x_i, y_i) = ln I_0 - ((x_i - x_0)^2 + (y_i - y_0)^2) / (2σ^2)
based on the idea of least square, the centroid coordinates of the star image can be solved. The calculated star image centroid position and the actual star image position have larger errors, and the errors are sent into an error compensation neural network SEC to carry out error compensation on the centroid coordinates.
The star image extraction network SSN consists of a feature extraction backbone network, a feature fusion network and a detection network.
The feature extraction backbone network comprises 5 residual modules Res n. A Res n module consists of n residual structures (Res units) and m two-dimensional convolution layers, where n depends on the depth of the network and takes the value 1 or 2, and m is fixed at 1. Every convolution kernel in the residual structures (Res units) and the two-dimensional convolution layers is 3×3, and the activation function is the ReLU function. The feature extraction backbone network accurately extracts the features of the star images in the star map.
The feature fusion network comprises CR modules, an up-sampling operation and a channel concatenation operation. A CR module is a standard convolution block consisting of a two-dimensional convolution layer, batch normalization and a ReLU activation function. The up-sampling operation enlarges a low-resolution feature map to high resolution by bilinear interpolation, restoring two feature maps of different resolutions to the same size so that they can be concatenated channel-wise. The feature fusion network enriches the context information contained in the feature maps.
The detection network comprises a CR module and a convolution layer and predicts the star image regions of the feature map.
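Read literally, the CR module and one fusion step could look like the following PyTorch sketch. This is a minimal illustration only; kernel size, padding and function names are assumptions where the text does not fix them.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CR(nn.Module):
    """Standard convolution block: two-dimensional convolution,
    batch normalization, ReLU activation."""
    def __init__(self, c_in, c_out, k=3):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, padding=k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)

    def forward(self, x):
        return F.relu(self.bn(self.conv(x)))

def fuse(deep, shallow):
    """Bilinear up-sampling of the deeper (lower-resolution) feature map
    to the shallow map's size, followed by channel concatenation."""
    up = F.interpolate(deep, size=shallow.shape[-2:], mode="bilinear",
                       align_corners=False)
    return torch.cat([up, shallow], dim=1)

# e.g. the first fusion step of the SSN would read: F8 = fuse(F7, F5)
```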
The transformation of the feature map inside a single residual structure (Res unit) proceeds as follows:
the input feature map x undergoes a 1×1 dimension-reduction convolution to give the feature map x';
x' undergoes a 3×3 convolution to give the feature map x'';
each element of x'' is mapped nonlinearly through a ReLU activation function to give the feature map z;
z is sent into an attention mechanism: the feature map of each channel is compressed by a global average pooling operation to obtain the global context information of each channel; the compressed features pass through a fully connected layer to generate a per-channel weight vector; the feature map is multiplied by the weight vector of its corresponding channel, weighting and scaling the features so that important features are strengthened and unimportant features weakened, giving the feature map z';
z and z' are added to give the residual r, which is input to the next convolution layer;
r passes through a ReLU activation function for nonlinear mapping and a 1×1 dimension-raising convolution, finally giving the output feature map y.
the above process is described as:
y = W_2 · BN(ReLU(BN(ReLU(W_1 · x)))) + W_s · x

where W_1 and W_2 are the convolution filters used by each convolution layer, W_s is a dimension transformation that changes the channel number of the input feature map to match that of the output feature map, x is the input feature map, and y is the output feature map.
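One possible reading of this residual structure, following the stepwise description above, is sketched below in PyTorch. The squeeze-and-excitation reduction ratio and the internal channel widths are assumptions; the text fixes only the 1×1/3×3 kernels, the ReLU activations and the attention mechanism.

```python
import torch.nn as nn
import torch.nn.functional as F

class ResUnit(nn.Module):
    """Sketch of one residual structure: 1x1 dimension-reduction convolution,
    3x3 convolution, ReLU, channel attention (global average pooling plus
    fully connected weights), residual addition z + z', then ReLU and a
    1x1 dimension-raising convolution."""
    def __init__(self, c, r=2):
        super().__init__()
        mid = max(c // 2, 1)
        hid = max(mid // r, 1)
        self.reduce = nn.Conv2d(c, mid, 1)        # 1x1 dimension reduction
        self.conv3 = nn.Conv2d(mid, mid, 3, padding=1)
        self.fc = nn.Sequential(                  # per-channel weight vector
            nn.Linear(mid, hid), nn.ReLU(),
            nn.Linear(hid, mid), nn.Sigmoid())
        self.expand = nn.Conv2d(mid, c, 1)        # 1x1 dimension raising

    def forward(self, x):
        z = F.relu(self.conv3(self.reduce(x)))    # x -> x' -> x'' -> z
        w = self.fc(z.mean(dim=(2, 3)))           # global average pooling
        z_att = z * w[:, :, None, None]           # weighted feature map z'
        r = z + z_att                             # residual r = z + z'
        return self.expand(F.relu(r))             # ReLU + 1x1 raise -> y
```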
The neural-network-based error compensation model SEC, which compensates the centroid coordinate errors obtained by the Gaussian surface fitting method, comprises five hidden layers F1–F5, an input layer and an output layer; the hidden layers F1 and F5 each consist of 5 neurons, the hidden layers F2, F3 and F4 each consist of 7 neurons, and the activation function of every neuron is the ReLU function;
the error compensation model uses a mean square error loss function and an Adam optimizer, and the output of the model is set as y, the true value is set as t, and then the mean square error of the loss function is expressed as follows:
where n represents the number of predicted and actual values, y for a given sample i Representing the predicted value of the model, t i Representing the actual value;
in each training iteration, inputting data with errors into a model, and obtaining an output result of the model; and calculating the mean square error between the output result and the real result, and taking the mean square error as a loss value. Then, the gradient cache is emptied, back propagation is executed, and network parameters are updated; repeating the training process for a plurality of times until the performance of the network meets the requirement;
and sending the star image centroid coordinates with errors into a trained error compensation model SEC, so that corrected star image centroid coordinates can be obtained.
Compared with the prior art, the invention has the following advantages. The invention provides a star image extraction network applicable to a star sensor, Star Sensor Net (SSN for short), which extracts the regions where star image targets are located from a star map flooded with stray light; and a neural-network-based error compensation network, Star Error Compensation (SEC for short), which reduces the error of the star image centroid coordinates calculated by the Gaussian surface fitting method. The method can extract star image centroid coordinates accurately and rapidly, directly from star maps disturbed by strong stray light, and thus solves the technical problem that the prior art cannot accurately and rapidly extract star image centroids from a star map polluted by stray light.
Drawings
Fig. 1 is a flow chart for quick extraction of star images.
Fig. 2 is a schematic diagram of a residual structure.
Fig. 3 is a schematic diagram of an SSN star image extraction network.
Fig. 4 is the stray-light-contaminated star map of the embodiment.
Fig. 5 is sub-star map 1 after segmentation in the embodiment.
Fig. 6 is sub-star map 2 after segmentation in the embodiment.
Fig. 7 is sub-star map 3 after segmentation in the embodiment.
Fig. 8 is sub-star map 4 after segmentation in the embodiment.
Fig. 9 is the output of the SSN star image detection network for sub-star map 1 in the embodiment.
Fig. 10 is the output of the SSN star image detection network for sub-star map 2 in the embodiment.
Fig. 11 is the output of the SSN star image detection network for sub-star map 3 in the embodiment.
Fig. 12 is the output of the SSN star image detection network for sub-star map 4 in the embodiment.
Fig. 13 is the region image of star image number 1 in the embodiment.
Fig. 14 is a schematic diagram of the Gaussian surface fitting centroid in the region of star image number 1 in the embodiment.
Fig. 15 is a schematic diagram of the star image coordinate error compensation neural network SEC.
Fig. 16 shows the ordinate errors of all star image centroids in the embodiment.
Fig. 17 shows the abscissa errors of all star image centroids in the embodiment.
Detailed Description
The invention will now be described in detail with reference to the drawings and the accompanying specific examples.
Examples
Fig. 1 shows the implementation flow of the neural-network-based fast star image extraction method according to an embodiment of the invention, which is described in detail below.
The full star map processed in this example is shown in Fig. 4; Figs. 5, 6, 7 and 8 show the sub-star maps.
1. The 1024×1024 star map polluted by stray light is divided into four 512×512 sub-star maps, which are sent into the feature extraction backbone network of the SSN.
2. Each 512×512, single-channel sub-star map is taken as input; the first layer applies a two-dimensional convolution with stride 2 and ReLU activation and outputs a 256×256 feature map F1 with 8 channels.
3. The feature map F1 from step 2 is sent to the first residual module Res1 (one residual structure Res unit + one two-dimensional convolution) to obtain a 256×256 feature map F2 with 8 channels.
4. The feature map F2 from step 3 is sent to the second residual module Res1 (one residual structure Res unit + one two-dimensional convolution) to obtain a 128×128 feature map F3 with 8 channels.
5. The feature map F3 from step 4 is sent to the third residual module Res2 (two residual structures Res unit + one two-dimensional convolution) to obtain a 64×64 feature map F4 with 16 channels.
6. The feature map F4 from step 5 is sent to the fourth residual module Res2 (two residual structures Res unit + one two-dimensional convolution) to obtain a 32×32 feature map F5 with 16 channels.
7. The feature map F5 from step 6 is sent to the fifth residual module Res1 (one residual structure Res unit + one two-dimensional convolution) to obtain a 16×16 feature map F6 with 16 channels.
8. The feature maps enter the feature fusion module for feature fusion: F6 from step 7 passes through 5 CR modules to give the feature map F7; F7 is up-sampled and channel-concatenated with F5, splicing the channels of the two equally sized feature maps together to give F8; F8 passes through 5 CR modules to give F9; F9 is up-sampled and channel-concatenated with F4 to give F10; and F10 passes through 5 CR modules to give the final prediction feature map P.
9. The feature map P is sent to a detection module for star image detection.
10. The predicted region coordinate data, i.e. the region ranges of the star images in the sub-star maps, are output as shown in Figs. 9, 10, 11 and 12. The predicted region coordinates are listed in Table 1, where (x_1, y_1) is the upper-left corner coordinate and (x_2, y_2) the lower-right corner coordinate of the predicted region.
Table 1 Predicted region coordinate data
11. Coordinate transformation is performed on the predicted region coordinate data. The four sub-star maps processed in parallel by the network all belong to the same star map: the coordinates obtained from the first sub-map are unchanged; the abscissa of the second sub-map is multiplied by 2 (original map size / sub-map size) with the ordinate unchanged; the ordinate of the third sub-map is multiplied by 2 with the abscissa unchanged; and the abscissa of the fourth sub-star map is multiplied by 2, giving the original-map region coordinates corresponding to the sub-map region coordinates. The predicted region coordinates after coordinate transformation are shown in Table 2.
Table 2 Transformed predicted region coordinate data
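A literal sketch of this transformation is given below, implementing the rule exactly as stated; whether the fourth sub-map's ordinate is also scaled is not spelled out above, and scaling it is treated here as an assumption.

```python
def to_original(box, sub_map, scale=2):
    """Map a predicted region (x1, y1, x2, y2) from a sub-star map back to
    the original map, following the stated rule: sub-map 1 unchanged,
    sub-map 2 scales the abscissa, sub-map 3 the ordinate; sub-map 4 is
    assumed to scale both coordinates."""
    x1, y1, x2, y2 = box
    if sub_map in (2, 4):
        x1, x2 = x1 * scale, x2 * scale
    if sub_map in (3, 4):
        y1, y2 = y1 * scale, y2 * scale
    return x1, y1, x2, y2
```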
To further illustrate the calculation process, the predicted region with star image number 1 is taken as an example below; the centroid extraction steps for the other star images are the same.
12. As shown in Fig. 13, the region image of star image 1 is 25×25 pixels in size, and its position in the original map is (802, 50, 827, 75), where (802, 50) is the upper-left corner coordinate and (827, 75) is the lower-right corner coordinate.
13. Gaussian surface fitting is performed on the given region image to solve the star image centroid. Centre positioning is based on the energy distribution characteristics of the star image; the centre of the initialized surface is set to (x_0, y_0, I_0), and the Gaussian surface model is as follows:

f(x_i, y_i) = I_0 exp( -((x_i - x_0)^2 + (y_i - y_0)^2) / (2σ^2) )

where (x_0, y_0) is the centre of the star image; I_0 is the central star image energy, related to the star magnitude and the optical system; σ is the Gaussian dispersion radius, characterizing the size of the star image spot; and f(x_i, y_i) is the energy of each pixel. Taking the logarithm of both sides of the Gaussian surface model to linearize it gives:

ln f(x_i, y_i) = ln I_0 - ((x_i - x_0)^2 + (y_i - y_0)^2) / (2σ^2)
based on the least square idea, the center coordinates of the star image sequence number 1 in the regional image can be solved to be (13.326, 13.217), and the upper left corner position coordinates (802, 50) in the original image where the regional image is located are added, so that the obtained star image centroid coordinates (815.326, 63.217) have larger errors with the actual star image centroid coordinates (816.938, 64.376).
14. Under the influence of stray light, the star image centroid position computed by Gaussian surface fitting has a considerable error relative to the actual star image position. It is sent into the error compensation neural network for centroid coordinate compensation; the corrected coordinates are (816.658, 64.686), a small error relative to the accurate position coordinates (816.938, 64.376).
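As a quick numeric check, using only the figures quoted in steps 13 and 14:

```python
# Figures quoted above for star image number 1:
fit = (13.326 + 802, 13.217 + 50)    # fitted centroid mapped to the original map
actual = (816.938, 64.376)           # actual centroid
corrected = (816.658, 64.686)        # SEC-corrected centroid

def dist(p, q):
    return ((p[0] - q[0])**2 + (p[1] - q[1])**2) ** 0.5

print(dist(fit, actual))        # ~1.99 pixels before compensation
print(dist(corrected, actual))  # ~0.42 pixels after compensation
```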
Table 3 Simulation results of star image extraction (with stray light interference added)
As shown in Table 3, the invention extracts the centroid coordinates of all star images in the star map. After the erroneous coordinates obtained by the Gaussian surface fitting method are fed into the error compensation neural network, the resulting centroids of all star images deviate only slightly from the accurate star image coordinates: the abscissa errors (Fig. 17) and the ordinate errors (Fig. 16) are about 0.03 pixel, and the distance error between the computed centroid positions and the standard centroid positions is 0.04 pixel. The proposed star image extraction algorithm therefore solves well the problem of fast star image extraction under strong stray light interference.
Claims (8)
1. A quick star image extraction method of a star sensor is characterized by comprising the following steps:
1) Dividing a 64k×64k star map polluted by stray light into four 32k×32k sub-star maps and sending them into the feature extraction backbone network of a star image extraction network, where k is an integer;
2) Taking each 32k×32k, single-channel sub-star map as input and applying a first-layer two-dimensional convolution with stride 2 and ReLU activation, outputting a 16k×16k feature map F1 with 8 channels;
3) Sending the feature map F1 from step 2) into a first residual module Res n to obtain a 16k×16k feature map F2 with 8 channels;
4) Sending the feature map F2 from step 3) into a second residual module Res n to obtain an 8k×8k feature map F3 with 8 channels;
5) Sending the feature map F3 from step 4) into a third residual module Res n to obtain a 4k×4k feature map F4 with 16 channels;
6) Sending the feature map F4 from step 5) into a fourth residual module Res n to obtain a 2k×2k feature map F5 with 16 channels;
7) Sending the feature map F5 from step 6) into a fifth residual module Res n to obtain a k×k feature map F6 with 16 channels;
8) The feature maps enter a feature fusion module for feature fusion: F6 from step 7) passes through 5 CR modules to give a feature map F7; F7 is up-sampled and channel-concatenated with F5, splicing the channels of the two equally sized feature maps together to give F8; F8 passes through 5 CR modules to give F9; F9 is up-sampled and channel-concatenated with F4 to give F10; and F10 passes through 5 CR modules to give the final prediction feature map P;
9) The feature map P is sent to a detection module for star image detection;
10) Outputting the predicted region coordinate data, i.e. the region ranges of the star images in the sub-star maps;
11) Performing coordinate transformation on the predicted region coordinate data: the four sub-star maps processed in parallel by the network belong to the same star map; the abscissa and ordinate obtained from the first sub-map are unchanged; the abscissa of the second sub-map is multiplied by the sub-star-map scaling factor with the ordinate unchanged; the ordinate of the third sub-map is multiplied by the scaling factor with the abscissa unchanged; and the abscissa of the fourth sub-star map is multiplied by the scaling factor, giving the original-map region ranges corresponding to the region ranges of the star images in the sub-maps;
12) Outputting the star image region images in the original map;
13) Performing Gaussian surface fitting on the given star image region image, solving the star image centroid, and performing centre positioning based on the energy distribution characteristics of the star image, with the centre of the initialized surface set to (x_0, y_0, I_0) and the Gaussian surface model as follows:

f(x_i, y_i) = I_0 exp( -((x_i - x_0)^2 + (y_i - y_0)^2) / (2σ^2) )

where (x_0, y_0) is the centre of the star image; I_0 is the central star image energy, related to the star magnitude and the optical system; σ is the Gaussian dispersion radius, characterizing the size of the star image spot; and f(x_i, y_i) is the energy of each pixel; taking the logarithm of both sides of the Gaussian surface model to linearize it gives:

ln f(x_i, y_i) = ln I_0 - ((x_i - x_0)^2 + (y_i - y_0)^2) / (2σ^2)

based on the least-squares idea, the centroid coordinates of the star image can be solved; the computed star image centroid position has a considerable error relative to the actual star image position, and is sent into an error compensation neural network to compensate the centroid coordinates, obtaining the corrected centroid coordinates.
2. The quick star image extraction method of a star sensor according to claim 1, wherein the star image extraction network SSN comprises a feature extraction backbone network, a feature fusion network and a detection network.
3. The quick star image extraction method of a star sensor according to claim 2, wherein the feature extraction backbone network comprises 5 residual modules Res n; a Res n module consists of n residual structures and m two-dimensional convolution layers, where n depends on the depth of the network and takes the value 1 or 2, and m is fixed at 1; every convolution kernel in the residual structures and the two-dimensional convolution layers is 3×3 and the activation function is the ReLU function; the feature extraction backbone network accurately extracts the features of the star images in the star map.
4. The quick star image extraction method of a star sensor according to claim 2, wherein the feature fusion network comprises a CR module, an up-sampling operation and a channel concatenation operation; the CR module is a standard convolution block comprising a two-dimensional convolution layer, batch normalization and a ReLU activation function; the up-sampling operation enlarges a low-resolution feature map to high resolution by bilinear interpolation, restoring two feature maps of different resolutions to the same size for channel concatenation; the feature fusion network enriches the context information of the feature maps.
5. The quick star image extraction method of a star sensor according to claim 2, wherein the detection network comprises a CR module and a convolution layer and predicts the star image regions of the feature map.
6. The quick star image extraction method of a star sensor according to claim 3, wherein the transformation of the feature map inside a single residual structure proceeds as follows:
the input feature map x undergoes a 1×1 dimension-reduction convolution to give the feature map x';
x' undergoes a 3×3 convolution to give the feature map x'';
each element of x'' is mapped nonlinearly through a ReLU activation function to give the feature map z;
z is sent into an attention mechanism: the feature map of each channel is compressed by a global average pooling operation to obtain the global context information of each channel; the compressed features pass through a fully connected layer to generate a per-channel weight vector; the feature map is multiplied by the weight vector of its corresponding channel, weighting and scaling the features so that important features are strengthened and unimportant features weakened, giving the feature map z';
z and z' are added to give the residual r, which is input to the next convolution layer;
r passes through a ReLU activation function for nonlinear mapping and a 1×1 dimension-raising convolution, finally giving the output feature map y;
the above process is described as:
y = W_2 · BN(ReLU(BN(ReLU(W_1 · x)))) + W_s · x

where W_1 and W_2 are the convolution filters used by each convolution layer, W_s is a dimension transformation that changes the channel number of the input feature map to match that of the output feature map, x is the input feature map, and y is the output feature map.
7. The quick star image extraction method of a star sensor according to claim 1, wherein a neural-network-based error compensation model SEC compensates the error of the centroid coordinates obtained by the Gaussian surface fitting method.
8. The quick star image extraction method of a star sensor according to claim 7, wherein the neural-network-based error compensation model SEC comprises five hidden layers F1–F5, an input layer and an output layer; the hidden layers F1 and F5 each consist of 5 neurons, the hidden layers F2, F3 and F4 each consist of 7 neurons, and the activation function of every neuron is the ReLU function;
the error compensation model uses a mean-square-error loss function and the Adam optimizer; with the model output denoted y and the true value denoted t, the mean square error loss is expressed as follows:

MSE = (1/n) Σ_{i=1}^{n} (y_i - t_i)^2

where n represents the number of predicted and true values for the given samples, y_i represents the model's predicted value, and t_i represents the true value;
in each training iteration, the data with errors are input into the model and the model's output is obtained; the mean square error between the output and the true result is computed and taken as the loss value; the gradient cache is then cleared, back propagation is performed and the network parameters are updated; this training process is repeated until the performance of the network meets the requirement;
feeding the star image centroid coordinates with errors into the trained error compensation model SEC yields the corrected star image centroid coordinates.
Priority Applications (1)
- CN202311316756.2A (priority and filing date 2023-10-12): Quick star image extraction method of star sensor, published as CN117351333A
Publications (1)
- CN117351333A, published 2024-01-05
Family Applications (1)
- Family ID: 89362542
- CN202311316756.2A (filed 2023-10-12, CN): CN117351333A, pending
Cited By (2)
- CN117853582A (priority 2024-01-15, published 2024-04-09; granted as CN117853582B on 2024-09-20): Star sensor rapid star image extraction method based on improved Faster R-CNN
- CN117727063A (priority 2024-02-07, published 2024-03-19; granted as CN117727063B on 2024-04-16): Star map identification method based on graph attention network
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination