CN117853582A - Star sensor rapid star image extraction method based on improved Faster R-CNN - Google Patents

Star sensor rapid star image extraction method based on improved Faster R-CNN

Info

Publication number
CN117853582A
CN117853582A
Authority
CN
China
Prior art keywords
star
image
star image
feature matrix
multiplied
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410049984.6A
Other languages
Chinese (zh)
Inventor
王晨
孙文卿
季卫林
吴峰
张惠蓉
唐运海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou University of Science and Technology
Original Assignee
Suzhou University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou University of Science and Technology filed Critical Suzhou University of Science and Technology
Priority to CN202410049984.6A priority Critical patent/CN117853582A/en
Publication of CN117853582A publication Critical patent/CN117853582A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a rapid star image extraction method for star sensors based on an improved Faster R-CNN. The improved Faster R-CNN extracts the regions containing star image targets from a star map contaminated by noise, and a gray centroid method based on pixel screening then improves the accuracy of the star image centroid coordinates. The method can rapidly and accurately extract star image centroid coordinates from star maps disturbed by strong noise.

Description

Star sensor rapid star image extraction method based on improved Faster R-CNN
Technical Field
The invention relates to the technical field of star sensor attitude measurement, and in particular to a rapid star image extraction method for star sensors based on improved Faster R-CNN.
Background
The star sensor provides arc-second-level attitude information for a spacecraft by detecting stars at different positions on the celestial sphere. It is regarded as one of the most accurate attitude sensors and is currently the most widely used attitude measurement device in the aerospace field. The main steps of star sensor attitude measurement are star field imaging, star image extraction, star map recognition, and attitude estimation. Star image extraction processes the star map obtained by the camera photographing the star field and determines the positions of the star images in the star map coordinate system. Its accuracy directly determines the success of star map recognition and the quality of the attitude measurement. At the same time, star image extraction is the most time-consuming part of the star sensor attitude measurement process.
Early star sensor research considered only imaging environments with a high signal-to-noise ratio, where a global threshold method with simple denoising was generally sufficient for star image extraction. As aircraft missions have diversified, the working environment of the star sensor has become increasingly complex, noise effects have become increasingly significant, and traditional star image extraction algorithms struggle to achieve satisfactory results. The noise affecting a star sensor comes mainly from two sources. The first is the working environment, such as airglow, moonlight, and sunlight: the photons entering the sensor pixels follow a Poisson distribution, so this contribution can be treated as Poisson noise. The second is noise generated by the device itself, mainly photon shot noise, dark current noise, and quantization noise; photon shot noise has the largest influence and cannot be eliminated, and this contribution can be treated as Gaussian noise. Researchers have studied star image extraction under noise interference intensively, but traditional algorithms spend considerable time on denoising, and denoising destroys the energy distribution of the star image, degrading centroid extraction accuracy. Moreover, under strong noise interference the star image may not be extracted at all. There is still room for improvement in processing high-resolution, strongly noisy star maps.
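For intuition, the two noise contributions described above can be simulated as in the following sketch. This synthetic snippet is ours, not part of the invention; the sky level and read-noise sigma are illustrative values only.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_sensor_noise(clean, sky_level=5.0, read_sigma=2.0):
    """Simulate the two noise sources: Poisson noise from environmental light
    plus the signal itself, and Gaussian noise for device effects such as
    read-out. Levels are illustrative assumptions, not from the patent."""
    poisson = rng.poisson(clean + sky_level).astype(np.float64)
    gaussian = rng.normal(0.0, read_sigma, size=clean.shape)
    return poisson + gaussian

noisy = add_sensor_noise(np.zeros((64, 64)))
print(noisy.mean(), noisy.std())  # roughly sky_level and sqrt(sky_level + read_sigma**2)
```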
The way a deep convolutional neural network detects an image target in effect simulates how the human visual system recognizes targets. In a noise-disturbed star sensor star map, star image targets are more prominent than the noise, and human eyes can easily identify and locate them. The star image extraction algorithm proposed by the invention is therefore divided into two steps. First, an improved Faster R-CNN network capable of processing a noisy star map is designed: it requires no image denoising, avoiding the loss of star image centroid energy that denoising causes, and directly performs coarse extraction of the star image regions, greatly reducing the star image extraction time and the miss rate. Second, a star image centroid extraction algorithm based on pixel screening is designed to refine the extraction within the star image regions obtained by the network in the first step. Experiments show that the proposed algorithm has strong noise resistance and high real-time performance, providing important technical support for high-precision star sensor attitude measurement in high-noise environments.
Disclosure of Invention
Aiming at the problems in the prior art, a rapid star image extraction method for star sensors based on improved Faster R-CNN is provided: it can extract the region where a star image target is located from a star map disturbed by strong noise, and it improves the accuracy of the star image centroid coordinates with a gray centroid method based on pixel screening.
The aim of the invention is achieved by the following technical scheme.
A star sensor rapid star image extraction method based on improved Faster R-CNN comprises the following steps:
(1) Construct an improved Faster R-CNN network whose structure comprises a star image feature extraction backbone network, a feature fusion device, a region generation network, and a classification and regression network;
(2) Divide the noise-contaminated star map of size 64k × 64k into four sub-star maps of size 32k × 32k and send the four sub-star maps into the star image feature extraction backbone network;
(3) Take a sub-star map of size 32k × 32k with 1 channel as input and pass it through a two-dimensional convolution layer with a stride of 2, a 3 × 3 convolution kernel and a ReLU activation function, outputting a feature matrix F1 of size 16k × 16k with 8 channels;
(4) Send the feature matrix F1 from step (3) into a first Star image residual structure (Star block) to obtain a feature matrix F2 of size 16k × 16k with 8 channels;
(5) Send the feature matrix F2 from step (4) into a second Star block to obtain a feature matrix F3 of size 8k × 8k with 8 channels;
(6) Send the feature matrix F3 from step (5) into a third Star block to obtain a feature matrix F4 of size 4k × 4k with 8 channels;
(7) Send the feature matrix F4 from step (6) into a fourth Star block to obtain a feature matrix F5 of size 2k × 2k with 8 channels;
(8) Send the feature matrix F5 from step (7) into a fifth Star block to obtain a feature matrix F6 of size k × k with 16 channels;
(9) Feed the feature matrix F6 from step (8) into a two-dimensional convolution layer with a stride of 1, a 1 × 1 convolution kernel and a ReLU activation function to obtain a feature matrix F7 of size k × k with 64 channels;
(10) Fuse the obtained feature matrices: up-sample F7 from step (9) and F5 from step (7) to obtain F7' and F5' of the same size as F2 from step (4), superpose F7' and F5' pixel-wise with F2 to obtain F7'' and F5'', and superpose F7'' and F5'' pixel-wise to obtain the final predicted feature matrix P;
(11) Send the feature matrix P into the region generation network to obtain a series of star image target candidate boxes;
(12) Output the predicted coordinate data, i.e., the region range of the star image in its sub-star map, through the classification and regression network;
(13) Transform the predicted coordinate data into region range coordinates of the star image in the original map, wherein the four sub-star maps processed in parallel all belong to the same star map: the abscissa and ordinate of a star image region in the first sub-map are unchanged; the abscissa in the second sub-map is increased by the sub-map size, with the ordinate unchanged; the ordinate in the third sub-map is increased by the sub-map size, with the abscissa unchanged; and both the abscissa and the ordinate in the fourth sub-map are increased by the sub-map size, yielding the region range coordinates of the star image in the original map (see the Python sketch after these steps);
(14) Output the star image region images in the original map;
(15) Obtain the star image centroid coordinates using the gray centroid method based on pixel screening.
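For illustration, steps (2) and (13) can be sketched in Python as follows. This is a minimal sketch under our own assumptions — NumPy arrays and a top-left, top-right, bottom-left, bottom-right sub-map order — and not code from the patent.

```python
import numpy as np

def split_star_map(star_map: np.ndarray):
    """Split a 2k x 2k star map into four k x k sub-maps (step (2)).

    Sub-map order assumed here: top-left, top-right, bottom-left, bottom-right."""
    h, w = star_map.shape
    k = h // 2
    return [
        star_map[:k, :k],  # sub-map 1: top-left
        star_map[:k, k:],  # sub-map 2: top-right
        star_map[k:, :k],  # sub-map 3: bottom-left
        star_map[k:, k:],  # sub-map 4: bottom-right
    ], k

def to_original_coords(box, sub_index: int, k: int):
    """Map a predicted box (x1, y1, x2, y2) from sub-map coordinates back to the
    original map (step (13)): sub-map 1 unchanged, sub-map 2 shifts the abscissa
    by k, sub-map 3 shifts the ordinate by k, sub-map 4 shifts both."""
    dx = k if sub_index in (2, 4) else 0
    dy = k if sub_index in (3, 4) else 0
    x1, y1, x2, y2 = box
    return (x1 + dx, y1 + dy, x2 + dx, y2 + dy)

# Example: a box predicted in sub-map 4 of a 1024 x 1024 star map
subs, k = split_star_map(np.zeros((1024, 1024), dtype=np.float32))
print(to_original_coords((300, 120, 315, 135), sub_index=4, k=k))  # (812, 632, 827, 647)
```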
The star image feature extraction backbone network has 7 convolution layers, of which 5 are Star image residual structures (Star blocks) and 2 are two-dimensional convolutions. The Star block consists of a convolution dimension-increasing layer (Conv Up Dimension), a channel-by-channel convolution layer (Depthwise Conv), an attention mechanism layer (Attention), and a convolution dimension-reducing layer (Conv Down Dimension): the feature matrix first passes through n convolution kernels of size 1 × 1 for dimension increase, is then convolved channel-by-channel with n kernels of size 3 × 3, after which a channel attention mechanism and a spatial attention mechanism assign higher weights to star image regions in the feature matrix and lower weights to the background, and finally m kernels of size 1 × 1 reduce the dimension; the structure uses a ReLU activation function. The values of n and m are hyperparameters and can be adjusted according to the actual situation. The star image feature extraction backbone network accurately extracts the features of the star images in the star map.
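The Star block described above might be sketched in PyTorch as follows. This is a hedged illustration: the patent does not fix the internals of the attention layer or the placement of the residual connection, so the squeeze-and-excite-style channel attention, the 7 × 7 spatial attention, and the shape-conditional skip connection are our own assumptions; n = 24 and m = 8 follow the embodiment's first block.

```python
import torch
import torch.nn as nn

class StarBlock(nn.Module):
    """Sketch of a Star block: 1x1 dimension-increasing conv -> depthwise 3x3
    conv -> channel + spatial attention -> 1x1 dimension-reducing conv, with
    ReLU activations and a residual connection when shapes allow. Blocks 2-5
    of the embodiment halve the resolution, modeled here with stride=2."""

    def __init__(self, c_in: int, n: int = 24, m: int = 8, stride: int = 1):
        super().__init__()
        self.up = nn.Sequential(nn.Conv2d(c_in, n, 1), nn.ReLU())
        self.dw = nn.Sequential(
            nn.Conv2d(n, n, 3, stride=stride, padding=1, groups=n), nn.ReLU())
        # Channel attention: squeeze-and-excite style (our assumption).
        self.ca = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(n, n, 1), nn.Sigmoid())
        # Spatial attention: 7x7 conv over the channel-mean map (our assumption).
        self.sa = nn.Sequential(nn.Conv2d(1, 1, 7, padding=3), nn.Sigmoid())
        self.down = nn.Sequential(nn.Conv2d(n, m, 1), nn.ReLU())
        self.use_skip = (c_in == m and stride == 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.dw(self.up(x))
        y = y * self.ca(y)                            # re-weight channels
        y = y * self.sa(y.mean(dim=1, keepdim=True))  # re-weight star regions
        y = self.down(y)
        return y + x if self.use_skip else y

# F1 (8 channels, 256x256) -> F2 (8 channels, 256x256) as in the embodiment
f1 = torch.randn(1, 8, 256, 256)
f2 = StarBlock(c_in=8, n=24, m=8)(f1)
print(f2.shape)  # torch.Size([1, 8, 256, 256])
```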
The feature fusion device comprises an up-sampling operation and a pixel superposition operation: the up-sampling operation enlarges the low-resolution feature matrix to the higher resolution by the nearest-neighbor method, so that two feature matrices of different resolutions are restored to the same size, and feature matrices of the same size then undergo the corresponding element-wise (pixel) superposition. The feature fusion device increases the amount of context information contained in the feature matrix.
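A minimal sketch of this fusion, assuming PyTorch and a 1 × 1 lateral convolution to reconcile F7's 64 channels with the 8 channels of F5 and F2 — a detail the patent leaves unspecified:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Fuser(nn.Module):
    """Sketch of the feature fusion device: nearest-neighbor up-sampling to
    F2's resolution followed by element-wise (pixel) superposition. The 1x1
    lateral convolution aligning channel counts is our own assumption; the
    patent specifies only up-sampling and addition."""

    def __init__(self, c7: int = 64, c: int = 8):
        super().__init__()
        self.lateral7 = nn.Conv2d(c7, c, 1)

    def forward(self, f7, f5, f2):
        size = f2.shape[-2:]                                               # F2 resolution
        f7p = F.interpolate(self.lateral7(f7), size=size, mode="nearest")  # F7'
        f5p = F.interpolate(f5, size=size, mode="nearest")                 # F5'
        f7pp, f5pp = f7p + f2, f5p + f2                                    # F7'' and F5''
        return f7pp + f5pp                                                 # predicted feature matrix P

p = Fuser()(torch.randn(1, 64, 16, 16),   # F7: k x k, 64 channels (k = 16)
            torch.randn(1, 8, 32, 32),    # F5: 2k x 2k, 8 channels
            torch.randn(1, 8, 256, 256))  # F2: 16k x 16k, 8 channels
print(p.shape)  # torch.Size([1, 8, 256, 256])
```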
In the rapid star image extraction method based on improved Faster R-CNN, the region generation network (RPN) uses anchor boxes of three sizes, [8,8], [10,10], and [12,12], all with an aspect ratio of 1:1, and performs convolution over the feature matrix to obtain, for each anchor box, the probability that it contains foreground or background and the corresponding position offset. These results are used to generate a series of candidate boxes for the subsequent classification and regression network. The RPN screens the target regions and outputs candidate target regions (ROIs) with their predicted locations.
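As an illustration of the anchor layout only (the full RPN with its convolutional heads is not reproduced), the following sketch enumerates the three 1:1 anchor boxes at each feature-map cell; the stride of 32 (512-pixel sub-map to a 16 × 16 feature map) is our inference from the embodiment, not a value stated in the patent.

```python
import numpy as np

def make_anchors(feat_h: int, feat_w: int, stride: int = 32,
                 sizes=((8, 8), (10, 10), (12, 12))):
    """Generate 1:1 anchor boxes (x1, y1, x2, y2) centered on each cell of a
    feat_h x feat_w feature map, mapped back to image coordinates."""
    anchors = []
    for row in range(feat_h):
        for col in range(feat_w):
            cx, cy = (col + 0.5) * stride, (row + 0.5) * stride
            for w, h in sizes:
                anchors.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return np.array(anchors, dtype=np.float32)

print(make_anchors(16, 16).shape)  # (768, 4): 16*16 cells x 3 anchor sizes
```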
In the star sensor rapid star image extraction method based on improved Faster R-CNN, the classification and regression network comprises ROI Pooling, a classification branch (CLS), and a bounding-box regression branch (BOX). ROI Pooling pools the features of each predicted region produced by the RPN to a fixed dimension, meeting the feature-dimension requirements of the subsequent network. CLS and BOX take the RPN screening results, classify the candidate regions (ROIs) in the feature matrix with fully connected layers (FCL, Fully Connected Layer), compute the star image target position more accurately, and output the classification probability and predicted position of each star image target.
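A minimal sketch of this stage using torchvision's roi_pool; the 7 × 7 pooled size, the hidden width, and the two-class output are our assumptions, since the patent specifies only pooling to a fixed dimension followed by fully connected classification and box regression.

```python
import torch
import torch.nn as nn
from torchvision.ops import roi_pool

class ClsBoxHead(nn.Module):
    """Pools each candidate region (ROI) to a fixed size, then classifies it
    (star image vs. background) and refines its box with fully connected layers."""

    def __init__(self, channels: int = 8, pool: int = 7, hidden: int = 256):
        super().__init__()
        self.pool = pool
        self.fc = nn.Sequential(
            nn.Flatten(), nn.Linear(channels * pool * pool, hidden), nn.ReLU())
        self.cls = nn.Linear(hidden, 2)   # star image / background scores
        self.box = nn.Linear(hidden, 4)   # refined (x1, y1, x2, y2) offsets

    def forward(self, feat, rois):
        # rois: (N, 5) tensor of (batch_index, x1, y1, x2, y2) in feature coords
        x = roi_pool(feat, rois, output_size=(self.pool, self.pool))
        x = self.fc(x)
        return self.cls(x), self.box(x)

feat = torch.randn(1, 8, 256, 256)
rois = torch.tensor([[0, 10.0, 20.0, 25.0, 35.0]])
scores, boxes = ClsBoxHead()(feat, rois)
print(scores.shape, boxes.shape)  # torch.Size([1, 2]) torch.Size([1, 4])
```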
The gray centroid method based on pixel screening comprises the following steps:
(1) Set the center of the star image region, (x_i, y_i), as the initial coordinates;
(2) Set the gray value G(x_i, y_i) at the initial coordinates as the comparison threshold;
(3) Screen the pixels: compare the gray value of each pixel with G(x_i, y_i) one by one, and retain the pixels whose gray value is greater than G(x_i, y_i);
(4) Obtain the screened pixel point set and use these pixels in the gray centroid algorithm, whose calculation formula is:
x_0 = Σ_j G(x_j, y_j)·x_j / Σ_j G(x_j, y_j),  y_0 = Σ_j G(x_j, y_j)·y_j / Σ_j G(x_j, y_j)
where x_0 and y_0 are respectively the x and y coordinates of the target centroid in the image, (x_j, y_j) are the coordinates of each pixel retained by the screening, and G(x_j, y_j) is the gray value of that pixel.
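The pixel-screening gray centroid can be sketched in NumPy as follows; function and variable names are ours, and the synthetic region at the end is only for demonstration.

```python
import numpy as np

def screened_gray_centroid(region):
    """Gray centroid over the pixels retained by screening: keep only pixels
    whose gray value exceeds the gray value at the region center, then apply
    the gray centroid formula to the retained set."""
    h, w = region.shape
    threshold = region[h // 2, w // 2]       # gray value at the region center
    ys, xs = np.nonzero(region > threshold)  # retained (brighter) pixels
    g = region[ys, xs].astype(np.float64)
    x0 = float((g * xs).sum() / g.sum())     # gray-weighted mean of x coordinates
    y0 = float((g * ys).sum() / g.sum())     # gray-weighted mean of y coordinates
    return x0, y0

# A synthetic 15 x 15 region with a Gaussian-like star spot plus background
yy, xx = np.mgrid[0:15, 0:15]
region = 200.0 * np.exp(-((xx - 8.6) ** 2 + (yy - 5.9) ** 2) / 4.0) + 20.0
print(screened_gray_centroid(region))  # close to (8.6, 5.9)
```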
Compared with the prior art, the invention has the following advantages: the improved Faster R-CNN star image extraction network can extract the region where a star image target is located from a star map covered with noise pollution, and the proposed gray centroid method based on pixel screening improves the accuracy of the star image centroid coordinates. The method can rapidly and accurately extract star image centroid coordinates from star maps disturbed by strong noise, solving the technical problem that the prior art cannot accurately and rapidly extract star image centroids from star maps disturbed by strong noise.
Drawings
FIG. 1 is a flow chart of rapid star image extraction based on improved Faster R-CNN.
FIG. 2 is a schematic diagram of the Star image residual structure (Star block).
FIG. 3 is a schematic diagram of the improved Faster R-CNN star image extraction network.
FIG. 4 is the noise-contaminated star map of the embodiment.
FIG. 5 is sub-star map 1 after segmentation in the embodiment.
FIG. 6 is sub-star map 2 after segmentation in the embodiment.
FIG. 7 is sub-star map 3 after segmentation in the embodiment.
FIG. 8 is sub-star map 4 after segmentation in the embodiment.
FIG. 9 is the output for sub-star map 1 from the star image extraction network based on improved Faster R-CNN in the embodiment.
FIG. 10 is the output for sub-star map 2 from the star image extraction network based on improved Faster R-CNN in the embodiment.
FIG. 11 is the output for sub-star map 3 from the star image extraction network based on improved Faster R-CNN in the embodiment.
FIG. 12 is the output for sub-star map 4 from the star image extraction network based on improved Faster R-CNN in the embodiment.
FIG. 13 is a schematic view of the region where a star image is located in the embodiment.
FIG. 14 is a schematic diagram of the star image pixels retained after pixel screening in the embodiment.
FIG. 15 is the abscissa error map of star map centroid extraction in the embodiment.
FIG. 16 is the ordinate error map of star map centroid extraction in the embodiment.
Description of the embodiments
The invention will now be described in detail with reference to the drawings and the accompanying specific examples.
Examples
FIG. 1 shows the implementation flow of the rapid star image centroid extraction method based on improved Faster R-CNN provided by an embodiment of the present invention, described in detail as follows:
The noise-contaminated star map to be processed in the embodiment is shown in FIG. 4; FIGS. 5, 6, 7 and 8 show the segmented sub-star maps.
(1) The improved Faster R-CNN network is constructed, and the structure of the improved Faster R-CNN network comprises a star image feature extraction backbone network, a feature fusion device, a region generation network and a classification and regression network.
(2) Divide the 1024 × 1024 noise-contaminated star map into four 512 × 512 sub-star maps and send them into the star image feature extraction backbone network.
(3) Take a 512 × 512 gray noise sub-star map with 1 channel as input and pass it through the first two-dimensional convolution layer, which has a stride of 2 and a ReLU activation function, outputting a feature matrix F1 of size 256 × 256 with 8 channels.
(4) Send F1 from step 3 into the first star image residual structure: raise the dimension with 24 convolution kernels of size 1 × 1, convolve channel-by-channel with 24 kernels of size 3 × 3, use the attention mechanism to assign higher weights to star image regions in the feature matrix and lower weights to the background, and finally reduce the number of channels with 8 kernels of size 1 × 1; the activation function is ReLU. This yields a feature matrix F2 of size 256 × 256 with 8 channels.
(5) Send F2 from step 4 into the second star image residual structure, with the same internal steps (24 kernels of 1 × 1, 24 kernels of 3 × 3, attention, 8 kernels of 1 × 1, ReLU), obtaining a feature matrix F3 of size 128 × 128 with 8 channels.
(6) Send F3 from step 5 into the third star image residual structure, again with the same internal steps, obtaining a feature matrix F4 of size 64 × 64 with 8 channels.
(7) Send F4 from step 6 into the fourth star image residual structure, again with the same internal steps, obtaining a feature matrix F5 of size 32 × 32 with 8 channels.
(8) Send F5 from step 7 into the fifth star image residual structure: raise the dimension with 48 convolution kernels of size 1 × 1, convolve channel-by-channel with 48 kernels of size 3 × 3, apply the attention mechanism as before, and reduce the number of channels with 16 kernels of size 1 × 1; the activation function is ReLU. This yields a feature matrix F6 of size 16 × 16 with 16 channels.
(9) Finally, pass F6 from step 8 through a two-dimensional convolution layer of 64 convolution kernels to obtain an output feature matrix F7 of size 16 × 16 with 64 channels.
(10) Fuse the feature matrices obtained from the star image feature extraction backbone network: up-sample F7 and F5 to obtain F7' and F5' of the same size as F2, superpose F7' and F5' pixel-wise with F2 to obtain F7'' and F5'', and superpose F7'' and F5'' pixel-wise to obtain the final predicted feature matrix P.
(11) Send the feature matrix P into the region generation network of the improved Faster R-CNN to obtain a series of star image target candidate boxes.
To further illustrate the calculation process, the star image centroid extraction of the sub-star map in FIG. 8 is taken as an example; the centroid extraction steps for the other star images are the same. The process comprises the following steps:
(1) Through the classification and regression network, output the predicted coordinate data, i.e., the region range of the star image in the sub-star map of FIG. 8, where (x1, y1) are the coordinates of the upper-left corner of the predicted region and (x2, y2) are the coordinates of the lower-right corner.
(2) Transform the predicted coordinates. The four sub-star maps processed in parallel all belong to the same star map: the coordinates obtained from the first sub-star map are unchanged; the abscissa from the second sub-star map is increased by 512 (the sub-star map size is 512 × 512); the ordinate from the third sub-star map is increased by 512; and both the abscissa and the ordinate from the fourth sub-star map are increased by 512. This yields the region range in the original map, into which the sub-star map coordinates are projected.
(3) Output the image of the star image region in the noise-contaminated original map; as shown in FIG. 13, its pixel size is 15 × 15. (The processing of one selected star image region in the sub-star map of FIG. 8 is described in detail; the processing steps for the other star image regions are the same.)
(4) Obtain the star image centroid coordinates using the gray centroid method based on pixel screening. First, set the center of the star image region in FIG. 13, (x_i, y_i), as the initial coordinates, and set the gray value G(x_i, y_i) at the initial coordinates as the threshold. Compare the gray value of each pixel in the region with G(x_i, y_i) one by one, and retain the pixels whose gray value is greater than G(x_i, y_i) for the gray centroid algorithm, as shown in FIG. 14. For each retained pixel, add G(x_j, y_j)·x_j and G(x_j, y_j)·y_j to the corresponding sum variables; dividing each sum by the total gray value Σ_j G(x_j, y_j) of the retained pixels gives the centroid coordinates x_0 and y_0 of the star image within the region. Adding the coordinate-transformed position of the region yields the centroid coordinates (817.671, 637.044) of this star image in the original map. The centroid coordinates of all star images are calculated by the same steps.
The invention can extract the coordinates of all star images with small error relative to the accurate star image coordinates: as shown in FIG. 15 and FIG. 16, the row- and column-coordinate errors are about 0.03 pixel, and the distance between the extracted centroid position and the standard centroid position is about 0.04 pixel.

Claims (6)

1. A star sensor rapid star image extraction method based on improved Faster R-CNN is characterized by comprising the following steps:
s11, constructing an improved Faster R-CNN network, wherein the structure of the improved Faster R-CNN network comprises a star image feature extraction backbone network, a feature fusion device, a region generation network and a classification and regression network;
s12, dividing the star map with the size of 64k multiplied by 64k and polluted by noise into four sub-star maps with the size of 32k multiplied by 32k, and sending the four sub-star maps into a star image feature extraction backbone network;
s13, taking each sub-star map with the size of 32k multiplied by 32k and the channel number of 1 as input, passing through a two-dimensional convolution layer with the step distance of 2, the convolution kernel size of 3 multiplied by 3 and the activation function of ReLU, and outputting a feature matrix F1 with the size of 16k multiplied by 16k and the channel number of 8;
s14, sending the feature matrix F1 in the step S13 into a first Star image residual structure (Star block) to obtain a feature matrix F2 with the output size of 16k multiplied by 16k and the channel number of 8;
s15, sending the feature matrix F2 in the step S14 into a second Star image residual structure (Star block) to obtain a feature matrix F3 with the output size of 8k multiplied by 8k and the channel number of 8;
s16, sending the feature matrix F3 in the step S15 into a third Star image residual structure (Star block) to obtain a feature matrix F4 with the output size of 4k multiplied by 4k and the channel number of 8;
s17, sending the feature matrix F4 in the step S16 into a fourth Star image residual structure (Star block) to obtain a feature matrix F5 with the output size of 2k multiplied by 2k and the channel number of 8;
s18, sending the feature matrix F5 in the step S17 into a fifth Star image residual structure (Star block) to obtain a feature matrix F6 with the output of k multiplied by k and the channel number of 16;
s19, feeding the feature matrix F6 in the step S18 into a two-dimensional convolution layer with the step length of 1, the convolution kernel size of 1 multiplied by 1 and the activation function of ReLU to obtain a feature matrix F7 with the output size of k multiplied by k and the channel number of 64;
s110, performing feature fusion on the obtained feature matrix, performing up-sampling operation on F7 obtained in the step S19 and F5 obtained in the step S7 to obtain F7 'and F5' with the same size as F2 in the step S4, respectively performing pixel superposition on the F7 'and F5' and F2 to obtain F7 'and F5', and performing pixel superposition on the F7 'and the F5' again to obtain a final prediction feature matrix P;
s111, sending the feature matrix P into a region generation network to obtain a series of star image target candidate frames;
s112, outputting predicted coordinate data, namely the regional range of the star image in the sub-star map through a classification and regression network;
s113, carrying out coordinate transformation on the predicted coordinate data to obtain regional range coordinates of the star image in the original image, wherein four parallel sub-star images belong to the same star image, and the abscissa and the ordinate of the star image region in the first sub-image are unchanged; the abscissa of the second sub-graph is added with the size of the sub-star graph, and the ordinate is unchanged; the ordinate of the third subgraph is added with the size of the subgraph, and the abscissa is unchanged; the dimension of the subsatellite map is added to the abscissa of the fourth subsatellite map, so that the regional range coordinate of the star image in the original image can be obtained;
s114, outputting a star image area image in the original image;
s115, obtaining star image centroid coordinates by using a gray centroid method based on pixel screening.
2. The star sensor rapid star image extraction method based on improved Faster R-CNN according to claim 1, wherein the star image feature extraction backbone network has 7 convolution layers, of which 5 are star image residual structures and 2 are two-dimensional convolutions; the star image residual structure consists of a convolution dimension-increasing layer (Conv Up Dimension), a channel-by-channel convolution layer (Depthwise Conv), an attention mechanism layer (Attention), and a convolution dimension-reducing layer (Conv Down Dimension): the feature matrix first passes through n convolution kernels of size 1 × 1 for dimension increase, is then convolved channel-by-channel with n kernels of size 3 × 3, after which a channel attention mechanism and a spatial attention mechanism assign higher weights to star image regions in the feature matrix and lower weights to the background, and finally m kernels of size 1 × 1 reduce the dimension; the structure uses a ReLU activation function; the values of n and m are hyperparameters and can be adjusted according to the actual situation; the star image feature extraction backbone network is used to accurately extract the features of the star images in the star map.
3. The star sensor rapid star image extraction method based on improved Faster R-CNN according to claim 1, wherein the feature fusion device comprises an up-sampling operation and a pixel superposition operation: the up-sampling operation enlarges the low-resolution feature matrix to the higher resolution by the nearest-neighbor method, so that feature matrices of different resolutions are restored to the same size, and feature matrices of the same size undergo the corresponding pixel superposition operation; the feature fusion device is used to increase the amount of context information contained in the feature matrix.
4. The star sensor rapid star image extraction method based on improved Faster R-CNN according to claim 1, wherein the region generation network uses anchor boxes of three sizes, [8,8], [10,10], and [12,12], with an aspect ratio of 1:1, and performs convolution over the feature matrix to obtain the probability that each anchor box contains foreground or background and the corresponding position offset; these results are used to generate a series of candidate boxes for the subsequent classification and regression network; the region generation network is used to screen the target regions and output the candidate target regions and corresponding predicted positions.
5. The star sensor rapid star image extraction method based on improved Faster R-CNN according to claim 1, wherein the classification and regression network comprises region-of-interest pooling (ROI Pooling), classification (CLS), and bounding-box regression (BOX); ROI Pooling pools the features of each predicted region computed by the region generation network to a fixed dimension to meet the feature-dimension requirements of the subsequent network; CLS and BOX take the screening results of the region generation network, classify the candidate regions in the feature matrix with fully connected layers, compute the star image target position more accurately, and output the classification probability and predicted position of each star image target.
6. The star sensor rapid star image extraction method based on improved Faster R-CNN according to claim 1, wherein the gray centroid method based on pixel screening comprises the following steps:
S61, setting the center of the star image region, (x_i, y_i), as the initial coordinates;
S62, setting the gray value G(x_i, y_i) at the initial coordinates as the comparison threshold;
S63, screening the pixels: comparing the gray value of each pixel with G(x_i, y_i) one by one, and retaining the pixels whose gray value is greater than G(x_i, y_i);
S64, obtaining the screened pixel point set and using these pixels in the gray centroid algorithm, whose calculation formula is:
x_0 = Σ_j G(x_j, y_j)·x_j / Σ_j G(x_j, y_j),  y_0 = Σ_j G(x_j, y_j)·y_j / Σ_j G(x_j, y_j)
where x_0 and y_0 are respectively the x and y coordinates of the target centroid in the image, (x_j, y_j) are the coordinates of each retained pixel, and G(x_j, y_j) is the gray value of that pixel.
CN202410049984.6A 2024-01-15 2024-01-15 Star sensor rapid star image extraction method based on improved Faster R-CNN Pending CN117853582A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410049984.6A CN117853582A (en) 2024-01-15 2024-01-15 Star sensor rapid star image extraction method based on improved Faster R-CNN

Publications (1)

Publication Number Publication Date
CN117853582A true CN117853582A (en) 2024-04-09

Family

ID=90539740

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410049984.6A Pending CN117853582A (en) 2024-01-15 2024-01-15 Star sensor rapid star image extraction method based on improved Faster R-CNN

Country Status (1)

Country Link
CN (1) CN117853582A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020181685A1 (en) * 2019-03-12 2020-09-17 南京邮电大学 Vehicle-mounted video target detection method based on deep learning
CN112348053A (en) * 2020-10-12 2021-02-09 北京控制工程研究所 Parallel star image coordinate extraction method adopting row-column clustering
CN115655263A (en) * 2022-10-24 2023-01-31 中国人民解放军火箭军工程大学 Star extraction method based on attitude information
CN115717887A (en) * 2022-11-17 2023-02-28 上海航天控制技术研究所 Star point fast extraction method based on gray distribution histogram
CN116681757A (en) * 2023-06-05 2023-09-01 中国科学院光电技术研究所 Star point extraction algorithm suitable for stray light interference
CN117351333A (en) * 2023-10-12 2024-01-05 常州工学院 Quick star image extraction method of star sensor

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
FENG WU et al.: "Faster-RCNN Based Star Image Extraction Algorithm Suitable for Star Sensors Disturbed by Gaussian Noise", 2023 ITC-Egypt, 10 August 2023 (2023-08-10)
TANG YULIN et al.: "Wreckage Target Recognition in Side-scan Sonar Image Based on an Improved Faster R-CNN Model", 2020 ICBASE, 23 April 2021 (2021-04-23)
JIANG MENGYUAN et al.: "CCD star map processing method for Universal Time measurement with the zenith tube", Journal of Time and Frequency, vol. 43, no. 02, 30 April 2020 (2020-04-30), pages 143-152 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination