CN117612018A - Intelligent discrimination method for optical remote sensing load astigmatism - Google Patents
- Publication number
- CN117612018A (application number CN202410092728.5A)
- Authority
- CN
- China
- Prior art keywords
- module
- remote sensing
- optical remote
- characteristic diagram
- circles
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
- G06V10/449—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
- G06V10/451—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
- G06V10/454—Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/48—Extraction of image or video features by mapping characteristic values of the pattern into a parameter space, e.g. Hough transformation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
Abstract
The invention relates to the technical field of remote sensing images, and in particular to an intelligent discrimination method for optical remote sensing load astigmatism, comprising the following steps. S1: dividing an on-orbit imaging picture of an optical remote sensing load to obtain n segmented images. S2: sequentially inputting the n segmented images into an improved yolov5 network for automatic screening to obtain m target area pictures. S3: sequentially extracting the distortion circles of the m target area pictures using the Canny operator, and sequentially extracting standard circles on the m distortion circles based on the Hough-transform circle detection method. S4: for each of the m distortion circles, sequentially calculating the angle between the horizontal axis and the line through the furthest boundary pixel point of the distortion circle and the center of its corresponding standard circle, completing the intelligent discrimination of the optical remote sensing load astigmatism. The invention realizes automatic identification of typical aberration regions and astigmatism estimation, and thereby automatic discrimination of optical remote sensing load astigmatism.
Description
Technical Field
The invention relates to the technical field of remote sensing images, and in particular to an intelligent discrimination method for optical remote sensing load astigmatism.
Background
In an optical remote sensing load, astigmatism refers to an image distortion phenomenon caused by non-ideal factors in the optical system. An astigmatism discrimination method aims to detect and analyze astigmatism in an optical remote sensing load so that it can be corrected or compensated. Optical remote sensing loads face aberration problems arising from various causes during acquisition of earth surface information. A space remote sensor is affected by ground gravity during development, is subjected to severe vibration and impact during transport and launch, and is subjected to the combined influence of factors such as changes in the space gravity field and in temperature after entering orbit. As aperture and focal length continue to increase, the imaging performance of a space remote sensor is strongly affected by the wave aberration introduced by deformation of the optical system under gravity, vibration, impact and temperature. The causes of aberration in an optical remote sensing load are therefore complex and must be considered comprehensively, and corresponding correction means and algorithms must be adopted to reduce or eliminate the influence of aberration so as to improve the quality and accuracy of optical remote sensing data.
Current aberration adjustment techniques include active optics, which corrects the surface shape of the primary mirror and adjusts the pose of the secondary mirror, and adaptive optics, which adds deformable mirrors to the system to adjust the wavefront. However, these conventional aberration adjustment methods often require additional equipment or facilities such as external field calibration, and carrying out the aberration adjustment and interpretation work also requires experienced operators and is time-consuming.
Disclosure of Invention
The invention provides an intelligent discrimination method for optical remote sensing load astigmatism, which aims to overcome the drawbacks of the traditional aberration adjustment methods, namely that implementing aberration adjustment and interpretation requires experienced operators and consumes considerable time. The method realizes automatic identification of typical aberration regions and astigmatism estimation, that is, automatic discrimination of optical remote sensing load astigmatism.
The invention provides an intelligent discrimination method for optical remote sensing load astigmatism, which specifically comprises the following steps:
S1: dividing an on-orbit imaging picture of an optical remote sensing load to obtain n segmented images;
S2: sequentially inputting the n segmented images into an improved yolov5 network for automatic screening to obtain m target area pictures;
S3: sequentially extracting the distortion circles of the m target area pictures using the Canny operator, and sequentially extracting standard circles on the m distortion circles based on the Hough-transform circle detection method;
S4: for each of the m distortion circles, sequentially calculating the angle between the horizontal axis and the line through the furthest boundary pixel point of the distortion circle and the center of its corresponding standard circle, completing the intelligent discrimination of the optical remote sensing load astigmatism.
Preferably, the improved yolov5 network includes a Backbone network, a Neck network and a Head network. The Neck network includes a C3 module, CBS modules, up-sampling modules, C3NA modules and 3×3 convolution layers. The feature map B1 output by the Backbone network is processed by a CBS module to obtain feature map A1; A1 is up-sampled by an up-sampling module and concatenated (concat) with feature map B2 output by the Backbone network to obtain A2; A2 is processed sequentially by a C3 module and a CBS module to obtain A3; A3 is up-sampled by an up-sampling module and concatenated with feature map B3 output by the Backbone network to obtain A4; A4 is processed by a C3NA module to obtain A5; A5 is convolved by a 3×3 convolution layer and concatenated with A3 to obtain A6; A6 is processed by a C3NA module to obtain A7; A7 is convolved by a 3×3 convolution layer and concatenated with A1 to obtain A8; A8 is processed by a C3NA module to obtain A9. Feature maps A5, A7 and A9 are all input to the Head network.
Preferably, the C3NA module includes a Bottleneck sub-module, an NLA sub-module and 1×1 convolution layers. The feature map C1 input to the C3NA module is convolved by a 1×1 convolution layer to obtain feature map C2; C2 is input separately to the Bottleneck sub-module and the NLA sub-module for processing, and the results are concatenated (concat) to obtain feature map C3; C3 is convolved by a 1×1 convolution layer to obtain the output feature map C4 of the C3NA module.
Preferably, the NLA sub-module is a non-local attention mechanism whose calculation formula is:

$$y_i = \frac{1}{C(x)} \sum_{\forall j} f(x_i, x_j)\, g(x_j) \qquad (1)$$

where $i$ and $j$ are position indexes on the feature map input to the NLA sub-module, $x_i$ is the input feature at position $i$, $y_i$ is the enhanced output at position $i$, $f$ is a function that computes the similarity between positions $i$ and $j$, $g$ is a function that transforms the feature at position $j$, and $C(x)$ is a normalization factor.
Preferably, the CBS module comprises a convolution layer, a normalization layer and a SiLU activation function.
Preferably, feature maps A5, A7 and A9 are each subjected to target detection processing by the Head network, and the corresponding target detection results are merged to obtain the target area pictures.
Preferably, the step S3 specifically includes the following steps:
S31: using the Canny operator, traversing the weak boundary pixel points of each target area picture in sequence, and marking as a boundary every weak boundary pixel point that has an adjacent strong boundary pixel point among its 8-neighborhood pixels, until all boundaries of all target area pictures are marked;
S32: constructing distortion circles in one-to-one correspondence with the m target area pictures from all the boundaries marked in step S31, and sequentially extracting standard circles on the m distortion circles based on the Hough-transform circle detection method:

$$(x_{ij} - a_i)^2 + (y_{ij} - b_i)^2 = r_i^2 \qquad (2)$$

where $(x_{ij}, y_{ij})$ are the coordinates of the j-th weak boundary pixel point of the i-th distortion circle, $(a_i, b_i)$ are the center coordinates of the standard circle corresponding to the i-th distortion circle, and $r_i$ is the radius of that standard circle.
Preferably, the step S4 specifically includes the following steps:
S41: according to the center coordinates of the standard circles, sequentially calculating, for each of the m distortion circles, the angle between the horizontal axis and the line through the furthest boundary pixel point of the distortion circle and the center of its corresponding standard circle:

$$\theta_j = \arctan\frac{y_j' - b_j}{x_j' - a_j} \qquad (3)$$

where $(x_j', y_j')$ are the coordinates of the furthest boundary pixel point of the j-th distortion circle, and $(a_j, b_j)$ are the center coordinates of the standard circle corresponding to the j-th distortion circle;
S42: taking the angle value that occurs most frequently among the results of step S41 as the angle to be detected for the target area picture;
S43: among the eight principal astigmatic directions of the optical remote sensing load, taking the astigmatic direction closest to the angle to be detected as the final intelligent discrimination result.
Preferably, the eight principal astigmatic directions of the optical remote sensing load are 0°, ±45°, ±90°, ±135° and 180°.
Preferably, if the radius of the standard circle extracted from the target area picture satisfies the following formula, the astigmatism of the optical remote sensing load is negligible:
(4);
where $r_j$ is the radius of the j-th standard circle.
Compared with the prior art, the invention has the following beneficial effects:
the intelligent judging method for the optical remote sensing load astigmatism, provided by the invention, applies the deep learning target detection to an intelligent judging algorithm of the optical remote sensing load astigmatism, selects a circle as a typical aberration region, and judges the distance from the circle center to a boundary pixel point based on a Hough transformation circle detection method so as to judge the astigmatism direction of the optical remote sensing load, thereby realizing automatic identification and astigmatism estimation of the typical aberration region and realizing automatic judgment of the astigmatism of the optical remote sensing load.
The intelligent judging method for the optical remote sensing load astigmatism is beneficial to reducing search space and labor and time cost, and the astigmatic information of an imaging system is inverted by adopting the distortion characteristic based on the annular target in the on-orbit load acquisition image, so that key information support is provided for judging the detuning state of the secondary mirror.
Drawings
Fig. 1 is a schematic flow chart of an intelligent discrimination method for optical remote sensing load astigmatism, which is provided by an embodiment of the invention;
FIG. 2 is a schematic diagram of the distribution of eight main astigmatic directions of an optical remote sensing load provided according to an embodiment of the present invention;
fig. 3 is a schematic diagram of parameter distribution of extracting a standard circle based on a Hough transform circle detection method according to an embodiment of the present invention;
FIG. 4 is a network block diagram of an improved yolov5 network provided in accordance with an embodiment of the present invention;
fig. 5 is a network configuration diagram of a C3NA module according to an embodiment of the present invention.
Detailed Description
Hereinafter, embodiments of the present invention will be described with reference to the accompanying drawings. In the following description, like modules are denoted by like reference numerals. In the case of the same reference numerals, their names and functions are also the same. Therefore, a detailed description thereof will not be repeated.
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not to be construed as limiting the invention.
According to the intelligent discrimination method for optical remote sensing load astigmatism provided by the invention, the on-orbit imaging picture of the optical remote sensing load is first divided and stored; typical aberration regions are then screened out of the segmented images by the improved yolov5 network; finally, whether astigmatism occurs in the target area pictures, and in which direction, is detected and judged by a computer-vision algorithm, realizing intelligent discrimination of optical remote sensing load astigmatism.
FIG. 1 is a flow chart of an intelligent discrimination method for optical remote sensing load astigmatism, provided according to an embodiment of the present invention; FIG. 2 illustrates a distribution of eight principal astigmatic directions of an optical remote sensing load provided in accordance with an embodiment of the present invention; fig. 3 shows a parameter distribution of extracting a standard circle based on a Hough transform circle detection method according to an embodiment of the present invention.
As shown in fig. 1 to 3, the intelligent discrimination method for optical remote sensing load astigmatism provided by the embodiment of the invention specifically includes the following steps:
s1: and dividing the on-orbit imaging picture of the optical remote sensing load to obtain n divided images.
S2: and sequentially inputting the n segmented images into an improved yolov5 network for automatic screening to obtain m target area pictures.
S3: and sequentially extracting the distortion circles of the m target area pictures by using a Canny operator, and sequentially extracting standard circles on the m distortion circles based on a Hough transformation circle detection method.
The step S3 specifically comprises the following steps:
S31: using the Canny operator, traversing the weak boundary pixel points of each target area picture in sequence, and marking as a boundary every weak boundary pixel point that has an adjacent strong boundary pixel point among its 8-neighborhood pixels, until all boundaries of all target area pictures are marked;
S32: constructing distortion circles in one-to-one correspondence with the m target area pictures from all the boundaries marked in step S31, and sequentially extracting standard circles on the m distortion circles based on the Hough-transform circle detection method:

$$(x_{ij} - a_i)^2 + (y_{ij} - b_i)^2 = r_i^2 \qquad (1)$$

where $(x_{ij}, y_{ij})$ are the coordinates of the j-th weak boundary pixel point of the i-th distortion circle, $(a_i, b_i)$ are the center coordinates of the standard circle corresponding to the i-th distortion circle, and $r_i$ is the radius of that standard circle.
S4: and calculating the included angles between the straight line where the furthest pixel points of the boundaries of the m distortion circles are and the circle centers of the standard circles corresponding to the m distortion circles one by one and the horizontal axis in sequence, and completing the intelligent discrimination of the optical remote sensing load astigmatism.
The step S4 specifically comprises the following steps:
S41: according to the center coordinates of the standard circles, sequentially calculating, for each of the m distortion circles, the angle between the horizontal axis and the line through the furthest boundary pixel point of the distortion circle and the center of its corresponding standard circle:

$$\theta_j = \arctan\frac{y_j' - b_j}{x_j' - a_j} \qquad (2)$$

where $(x_j', y_j')$ are the coordinates of the furthest boundary pixel point of the j-th distortion circle, and $(a_j, b_j)$ are the center coordinates of the standard circle corresponding to the j-th distortion circle;
S42: taking the angle value that occurs most frequently among the results of step S41 as the angle to be detected for the target area picture;
S43: among the eight principal astigmatic directions of the optical remote sensing load, taking the astigmatic direction closest to the angle to be detected as the final intelligent discrimination result.
The eight principal astigmatic directions of the optical remote sensing load are 0°, ±45°, ±90°, ±135° and 180°.
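Steps S41 and S43 can be sketched as: find the furthest boundary pixel from the standard-circle center, compute the angle of that line to the horizontal axis, and snap to the nearest principal astigmatic direction. The direction set below is one reading of the eight directions listed in the text (taking the repeated values as negative angles).

```python
import numpy as np

# Assumed reading of the eight principal astigmatic directions, in degrees.
DIRECTIONS = np.array([0.0, 45.0, 90.0, 135.0, 180.0, -45.0, -90.0, -135.0])

def astigmatism_direction(boundary_pts, center, directions=DIRECTIONS):
    pts = np.asarray(boundary_pts, dtype=float)
    # Step S41: locate the furthest boundary pixel from the standard-circle center.
    d = np.hypot(pts[:, 0] - center[0], pts[:, 1] - center[1])
    fx, fy = pts[d.argmax()]
    # Angle between the horizontal axis and the line (center -> furthest point).
    angle = np.degrees(np.arctan2(fy - center[1], fx - center[0]))
    # Step S43: snap to the nearest principal astigmatic direction.
    return angle, directions[np.abs(directions - angle).argmin()]

# Circle boundary stretched toward ~45 degrees: the outlier point (9, 9) is furthest.
angle, direction = astigmatism_direction(
    [(10, 0), (0, 10), (-10, 0), (0, -10), (9, 9)], (0.0, 0.0))
```

Step S42's vote over angles would be applied across many boundary points before snapping; here a single furthest point suffices to show the geometry.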
If the radius of the standard circle extracted from the target area picture meets the following formula, the astigmatism of the optical remote sensing load can be ignored:
(3);
where $r_j$ is the radius of the j-th standard circle.
Fig. 4 shows a network structure of an improved yolov5 network provided in accordance with an embodiment of the present invention.
As shown in fig. 4, the improved yolov5 network includes a Backbone network, a Neck network and a Head network. The Neck network includes a C3 module, CBS modules, up-sampling modules, C3NA modules and 3×3 convolution layers. The feature map B1 output by the Backbone network is processed by a CBS module to obtain feature map A1; A1 is up-sampled by an up-sampling module and concatenated (concat) with feature map B2 output by the Backbone network to obtain A2; A2 is processed sequentially by a C3 module and a CBS module to obtain A3; A3 is up-sampled by an up-sampling module and concatenated with feature map B3 output by the Backbone network to obtain A4; A4 is processed by a C3NA module to obtain A5; A5 is convolved by a 3×3 convolution layer and concatenated with A3 to obtain A6; A6 is processed by a C3NA module to obtain A7; A7 is convolved by a 3×3 convolution layer and concatenated with A1 to obtain A8; A8 is processed by a C3NA module to obtain A9. Feature maps A5, A7 and A9 are all input to the Head network.
The Backbone network comprises a 6×6 convolution layer, CBS modules, C3 modules and an SPPF module. The feature map B4 input to the Backbone network is processed sequentially by the 6×6 convolution layer and two cascaded pairs of CBS and C3 modules to obtain feature map B3; B3 is processed sequentially by a CBS module and a C3 module to obtain B2; B2 is processed sequentially by a CBS module, a C3 module and the SPPF module to obtain B1.
The CBS module includes a convolution layer, a normalization layer and a SiLU activation function.
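The CBS pipeline (convolution, then normalization, then SiLU) can be sketched in miniature. This single-channel numpy version is only illustrative: real CBS modules are multi-channel learned layers, and instance-wide normalization stands in here for batch normalization.

```python
import numpy as np

def silu(x):
    # SiLU activation: x * sigmoid(x).
    return x / (1.0 + np.exp(-x))

def cbs(x, kernel, gamma=1.0, beta=0.0, eps=1e-5):
    # Convolution -> normalization -> SiLU, mirroring the CBS module's structure.
    kh, kw = kernel.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):                      # "valid" cross-correlation
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    # Normalize the response map (gamma, beta are the learnable scale/shift).
    out = gamma * (out - out.mean()) / np.sqrt(out.var() + eps) + beta
    return silu(out)

y = cbs(np.arange(25, dtype=float).reshape(5, 5), np.ones((3, 3)) / 9.0)
```

A 3×3 kernel over a 5×5 input yields a 3×3 normalized, activated response map, which is the shape contract each CBS stage passes downstream.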
Feature maps A5, A7 and A9 are each subjected to target detection processing by the Head network, and the corresponding target detection results are merged to obtain the target area pictures.
In the embodiment of the invention, C3NA modules are placed at the three positions 18, 20 and 23 in the Neck network. The C3NA module captures the interdependence of global semantic information, which improves the target detection precision of the improved yolov5 network, alleviates missed and false detections, and improves the detection precision and efficiency of the improved yolov5 network.
The C3 module combines the convolution operations of several convolution kernels, which enlarges the receptive field of the model while reducing the number of parameters: the first convolution layer reduces the number of channels, the second enlarges the receptive field with a 3×3 kernel, and the third reduces the number of channels again with a 1×1 kernel. However, since the receptive field of a convolution kernel is local, many layers must be stacked before different parts of the whole image become related. The C3NA module provided in the embodiment of the present invention therefore improves the C3 module with an NLA sub-module, which computes the relationship between any two positions (in time, space or space-time) to capture long-range dependence directly, that is, it extracts image features in combination with context information and thereby improves target detection accuracy. In the C3NA module, the NLA sub-module replaces the 3×3 convolution kernel of the original C3 module: it computes the similarity between the feature at the current position and all features in the feature map, and outputs a weighted sum of all features according to these similarities. Combining the NLA sub-module with the Bottleneck sub-module reduces the amount of computation and improves the detection efficiency of the improved yolov5 network.
Fig. 5 shows a network structure of a C3NA module according to an embodiment of the present invention.
As shown in fig. 5, the C3NA module includes a Bottleneck sub-module, an NLA sub-module and 1×1 convolution layers. The feature map C1 input to the C3NA module is convolved by a 1×1 convolution layer to obtain feature map C2; C2 is input separately to the Bottleneck sub-module and the NLA sub-module for processing, and the results are concatenated (concat) to obtain feature map C3; C3 is convolved by a 1×1 convolution layer to obtain the output feature map C4 of the C3NA module.
The NLA sub-module is a non-local attention mechanism whose calculation formula is:

$$y_i = \frac{1}{C(x)} \sum_{\forall j} f(x_i, x_j)\, g(x_j) \qquad (4)$$

where $i$ and $j$ are position indexes on the feature map input to the NLA sub-module, $x_i$ is the input feature at position $i$, $y_i$ is the enhanced output at position $i$, $f$ is a function that computes the similarity between positions $i$ and $j$, $g$ is a function that transforms the feature at position $j$, and $C(x)$ is a normalization factor.
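The formula leaves f, g and C(x) abstract. A common instantiation, assumed here from the non-local attention literature rather than specified by the patent, is the embedded-Gaussian form, in which f is an exponential of dot-product similarities and the normalization makes each row of weights a softmax. A minimal numpy sketch with random placeholder projection weights:

```python
import numpy as np

def non_local_attention(x, w_theta, w_phi, w_g):
    # x: (N, C) input features at N flattened positions.
    # Embedded-Gaussian form of y_i = (1/C(x)) * sum_j f(x_i, x_j) g(x_j):
    # f(x_i, x_j) = exp(theta(x_i) . phi(x_j)), C(x) = sum_j f(x_i, x_j).
    theta, phi, g = x @ w_theta, x @ w_phi, x @ w_g
    s = theta @ phi.T
    f = np.exp(s - s.max(axis=1, keepdims=True))   # stabilized similarities
    weights = f / f.sum(axis=1, keepdims=True)     # normalization by C(x)
    return weights @ g                             # weighted aggregation of g(x_j)

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 8))                   # 16 positions, 8 channels
y = non_local_attention(x, rng.standard_normal((8, 4)),
                        rng.standard_normal((8, 4)), rng.standard_normal((8, 8)))
```

Every output position aggregates features from all positions at once, which is exactly the long-range dependence the C3NA module adds on top of local 3×3 convolutions.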
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved, and are not limited herein.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.
Claims (10)
1. An intelligent discrimination method for optical remote sensing load astigmatism, characterized by comprising the following steps:
S1: dividing the on-orbit imaging picture of the optical remote sensing load to obtain n segmented images;
S2: sequentially inputting the n segmented images into an improved yolov5 network for automatic screening to obtain m target area pictures;
S3: sequentially extracting the distortion circles of the m target area pictures using the Canny operator, and sequentially extracting standard circles on the m distortion circles based on the Hough-transform circle detection method;
S4: for each of the m distortion circles, sequentially calculating the angle between the horizontal axis and the line through the furthest boundary pixel point of the distortion circle and the center of its corresponding standard circle, completing the intelligent discrimination of the optical remote sensing load astigmatism.
2. The intelligent discrimination method for optical remote sensing load astigmatism according to claim 1, wherein the improved yolov5 network comprises a Backbone network, a Neck network, and a Head network, the Neck network comprising a C3 module, a CBS module, an up-sampling module, a C3NA module, and a 3×3 convolution layer; a feature map B1 output by the Backbone network is processed by the CBS module to obtain a feature map A1; the feature map A1 is up-sampled by the up-sampling module and concatenated (concat) with a feature map B2 output by the Backbone network to obtain a feature map A2; the feature map A2 is processed by the C3 module and the CBS module in sequence to obtain a feature map A3; the feature map A3 is up-sampled by the up-sampling module and concatenated with a feature map B3 output by the Backbone network to obtain a feature map A4; the feature map A4 is processed by the C3NA module to obtain a feature map A5; the feature map A5 is convolved by the 3×3 convolution layer and concatenated with the feature map A3 to obtain a feature map A6; the feature map A6 is processed by the C3NA module to obtain a feature map A7; the feature map A7 is convolved by the 3×3 convolution layer and concatenated with the feature map A1 to obtain a feature map A8; the feature map A8 is processed by the C3NA module to obtain a feature map A9; and the feature map A5, the feature map A7, and the feature map A9 are input to the Head network.
3. The intelligent discrimination method for optical remote sensing load astigmatism according to claim 2, wherein the C3NA module comprises a Bottleneck sub-module, an NLA sub-module, and a 1×1 convolution layer; a feature map C1 input to the C3NA module is convolved by the 1×1 convolution layer to obtain a feature map C2; the feature map C2 is input to the Bottleneck sub-module and the NLA sub-module respectively, and the processed results are concatenated to obtain a feature map C3; and the feature map C3 is convolved by the 1×1 convolution layer to obtain an output feature map C4 of the C3NA module.
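The data flow of the C3NA module above can be sketched structurally as follows; the channels-last layout, the weight shapes, and modeling the 1×1 convolutions as per-pixel matrix multiplies are illustrative assumptions, with the Bottleneck and NLA sub-modules passed in as callables since the claim does not detail them:

```python
import numpy as np

def c3na_forward(c1, w_in, w_out, bottleneck, nla):
    """Structural sketch of the C3NA module (claim 3): the input C1 is
    reduced by a 1x1 convolution (a per-pixel matrix multiply), fed in
    parallel to the Bottleneck and NLA sub-modules, their outputs are
    concatenated along the channel axis, and a second 1x1 convolution
    produces the output C4.
    c1: (H, W, C) feature map; w_in: (C, C'); w_out: (2*C', C_out)."""
    c2 = c1 @ w_in                                           # 1x1 conv -> C2
    c3 = np.concatenate([bottleneck(c2), nla(c2)], axis=-1)  # parallel branches, concat -> C3
    return c3 @ w_out                                        # 1x1 conv -> C4
```

Passing the sub-modules as callables keeps the sketch focused on the wiring the claim actually specifies: reduce, branch, concatenate, project.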
4. The intelligent discrimination method for optical remote sensing load astigmatism according to claim 3, wherein the NLA sub-module is a non-local attention mechanism, and a calculation formula of the NLA sub-module is:
y_i = (1/C(x)) Σ_∀j f(x_i, x_j) g(x_j)    (1);
wherein i and j are position coordinate indexes on the feature map input to the NLA sub-module, x_i and x_j are the input features at the corresponding positions, y_i is the enhanced output at position i, f is a function computing the similarity between positions i and j, g is a function transforming the feature at position j, and C(x) is a normalization factor.
5. The intelligent discrimination method for optical remote sensing load astigmatism according to claim 2, wherein the CBS module comprises a convolution layer, a normalization layer, and a SiLU activation function.
6. The intelligent discrimination method for optical remote sensing load astigmatism according to claim 2, wherein the feature map A5, the feature map A7, and the feature map A9 are each subjected to target detection processing by the Head network, and the corresponding target detection results are merged to obtain the target area pictures.
7. The intelligent discrimination method for optical remote sensing load astigmatism according to claim 1, wherein step S3 specifically comprises the steps of:
s31: using the Canny operator, sequentially traversing the weak boundary pixel points of each target area picture, and marking as a boundary each weak boundary pixel point that has an adjacent strong boundary pixel point among its 8-neighborhood pixels, until all boundaries of all target area pictures are marked;
s32: constructing the distortion circles in one-to-one correspondence with the m target area pictures from all boundaries marked in step S31, and sequentially extracting the standard circles on the m distortion circles based on the Hough transformation circle detection method by the following formula:
(x_j^(i) - a_i)^2 + (y_j^(i) - b_i)^2 = r_i^2    (2);
wherein (x_j^(i), y_j^(i)) are the coordinates of the j-th weak boundary pixel point of the i-th distortion circle, (a_i, b_i) are the center coordinates of the standard circle corresponding to the i-th distortion circle, and r_i is the radius of the standard circle corresponding to the i-th distortion circle.
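The Hough circle-detection vote used in step S32 can be sketched in plain NumPy: each boundary point votes for every center consistent with a circle of a given radius passing through it, and the accumulator maximum is taken as the standard circle's center. A fixed, known radius and a dense angle sweep are simplifying assumptions here; a practical implementation also scans candidate radii (as OpenCV's HoughCircles does):

```python
import numpy as np

def hough_circle_center(edge_pts, radius, shape):
    """Hough-transform vote for a circle of known radius: each edge
    point (x, y) votes for all centers (a, b) satisfying
    (x - a)^2 + (y - b)^2 = radius^2; the accumulator maximum is
    returned as the detected center (illustrative sketch)."""
    acc = np.zeros(shape, dtype=np.int32)
    angles = np.linspace(0, 2 * np.pi, 360, endpoint=False)
    for x, y in edge_pts:
        # Candidate centers lie on a circle of the same radius around the point.
        a = np.round(x - radius * np.cos(angles)).astype(int)
        b = np.round(y - radius * np.sin(angles)).astype(int)
        ok = (a >= 0) & (a < shape[0]) & (b >= 0) & (b < shape[1])
        np.add.at(acc, (a[ok], b[ok]), 1)   # unbuffered accumulation of votes
    return np.unravel_index(acc.argmax(), acc.shape)
```

Because every boundary point of a true circle votes for its actual center, the accumulator peaks there even when individual votes are scattered by rounding.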
8. The intelligent discrimination method for optical remote sensing load astigmatism according to claim 1, wherein step S4 specifically comprises the steps of:
s41: according to the center coordinates of the standard circles, sequentially calculating, for each of the m distortion circles, the included angle between the horizontal axis and the straight line passing through the farthest boundary pixel point of the distortion circle and the center of its corresponding standard circle:
θ_j = arctan((y_j - b_j) / (x_j - a_j))    (3);
wherein (x_j, y_j) are the coordinates of the farthest boundary pixel point of the j-th distortion circle, and (a_j, b_j) are the center coordinates of the standard circle corresponding to the j-th distortion circle;
s42: according to the calculation results of step S41, taking the most frequent included angle value as the included angle to be detected of the target area picture;
s43: taking, among the eight main astigmatism directions of the optical remote sensing load, the astigmatism direction closest to the included angle to be detected as the final intelligent discrimination result.
9. The intelligent discrimination method for optical remote sensing load astigmatism according to claim 8, wherein the eight main astigmatism directions of the optical remote sensing load are 0°, 45°, 90°, 135°, 180°, -45°, -90°, and -135°, respectively.
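Steps S41 through S43 can be sketched as follows; rounding each angle to a whole degree before taking the most frequent value is an assumed binning that the claims do not specify, and the signs in the direction list are taken from the reading of claim 9 in which duplicated entries are negative directions:

```python
import math
from collections import Counter

# Assumed signs for the eight principal directions (see claim 9).
PRINCIPAL_DIRECTIONS = [0, 45, 90, 135, 180, -45, -90, -135]

def discriminate_astigmatism(pairs):
    """Sketch of steps S41-S43: for each distortion circle, compute the
    angle (Eq. (3)) between the horizontal axis and the line through the
    standard-circle center (a, b) and the farthest boundary pixel (x, y);
    take the most frequent angle (whole-degree binning is an assumption),
    then snap it to the nearest principal astigmatism direction.
    pairs: iterable of ((x, y), (a, b)) tuples."""
    angles = [round(math.degrees(math.atan2(y - b, x - a)))
              for (x, y), (a, b) in pairs]
    detected = Counter(angles).most_common(1)[0][0]      # step S42
    return min(PRINCIPAL_DIRECTIONS, key=lambda d: abs(d - detected))  # step S43
```

Using `atan2` rather than a bare arctan keeps the quadrant information, which matters once negative directions are among the candidates.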
10. The intelligent discrimination method for optical remote sensing load astigmatism according to claim 8, wherein the optical remote sensing load astigmatism is negligible if the radius of the standard circle extracted from the target area picture satisfies the following formula:
(4);
wherein r_j is the radius of the j-th standard circle.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410092728.5A CN117612018B (en) | 2024-01-23 | 2024-01-23 | Intelligent discrimination method for optical remote sensing load astigmatism |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117612018A true CN117612018A (en) | 2024-02-27 |
CN117612018B CN117612018B (en) | 2024-04-05 |
Family
ID=89944668
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410092728.5A Active CN117612018B (en) | 2024-01-23 | 2024-01-23 | Intelligent discrimination method for optical remote sensing load astigmatism |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117612018B (en) |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104655496A (en) * | 2015-02-12 | 2015-05-27 | 中国科学院长春光学精密机械与物理研究所 | Method for testing influence of self weight to surface shape of off-axis reflection mirror |
CN106062917A (en) * | 2014-07-22 | 2016-10-26 | 智能病毒成像公司 | Method for automatic correction of astigmatism |
CN208766318U (en) * | 2018-08-27 | 2019-04-19 | 西安精英光电技术有限公司 | A kind of diffraction limit coupled lens of astigmatic compensation |
CN109875863A (en) * | 2019-03-14 | 2019-06-14 | 河海大学常州校区 | Wear-type VR eyesight lifting system based on binocular vision Yu mental image training |
CN111380494A (en) * | 2018-12-28 | 2020-07-07 | 卡尔蔡司工业测量技术有限公司 | Standard device for calibrating coordinate measuring machine |
CN112924477A (en) * | 2021-01-23 | 2021-06-08 | 北京大学 | Method for quantitatively eliminating astigmatism by electron microscope |
US20210310937A1 (en) * | 2020-03-31 | 2021-10-07 | Universitat Stuttgart | Method and Shear-Invariant Michelson-Type Interferometer for Single Shot Imaging FT-Spectroscopy |
CN113820823A (en) * | 2021-10-26 | 2021-12-21 | 长光卫星技术有限公司 | Optical reflector connecting structure and optical load batch integration and detection system and method applying same |
CN114719976A (en) * | 2022-03-28 | 2022-07-08 | 苏州大学 | Push-broom type imaging spectrometer and imaging method thereof |
CN115113490A (en) * | 2021-03-22 | 2022-09-27 | 纽富来科技股份有限公司 | Multi-charged particle beam writing apparatus and adjusting method thereof |
US20230260279A1 (en) * | 2020-10-07 | 2023-08-17 | Wuhan University | Hyperspectral remote sensing image classification method based on self-attention context network |
Non-Patent Citations (2)
Title |
---|
BRADLEY T. DE GREGORIO et al., "Fast, computer-assisted detection of dust and debris impact craters on Stardust interstellar foils", The Meteoritical Society, 14 June 2021 (2021-06-14), pages 944-959 *
XU Da et al., "Design of a wide dynamic range radiometric calibration light source based on an Offner-type convex grating", Chinese Optics, 31 October 2020 (2020-10-31), pages 1085-1093 *
Also Published As
Publication number | Publication date |
---|---|
CN117612018B (en) | 2024-04-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111862126B (en) | Non-cooperative target relative pose estimation method combining deep learning and geometric algorithm | |
CN108960211B (en) | Multi-target human body posture detection method and system | |
CN110942458B (en) | Temperature anomaly defect detection and positioning method and system | |
CN112288008B (en) | Mosaic multispectral image disguised target detection method based on deep learning | |
CN111340701B (en) | Circuit board image splicing method for screening matching points based on clustering method | |
CN109099929B (en) | Intelligent vehicle positioning device and method based on scene fingerprints | |
JP2020014067A (en) | Stereo imaging apparatus | |
CN109034184B (en) | Grading ring detection and identification method based on deep learning | |
CN109712071B (en) | Unmanned aerial vehicle image splicing and positioning method based on track constraint | |
CN109376641B (en) | Moving vehicle detection method based on unmanned aerial vehicle aerial video | |
CN113313047B (en) | Lane line detection method and system based on lane structure prior | |
CN114170552A (en) | Natural gas leakage real-time early warning method and system based on infrared thermal imaging | |
CN114255197A (en) | Infrared and visible light image self-adaptive fusion alignment method and system | |
CN113052170A (en) | Small target license plate recognition method under unconstrained scene | |
CN111738071B (en) | Inverse perspective transformation method based on motion change of monocular camera | |
CN115047610A (en) | Chromosome karyotype analysis device and method for automatically fitting microscopic focusing plane | |
CN117612018B (en) | Intelligent discrimination method for optical remote sensing load astigmatism | |
CN116777953A (en) | Remote sensing image target tracking method based on multi-scale feature aggregation enhancement | |
CN116309270A (en) | Binocular image-based transmission line typical defect identification method | |
CN113537397B (en) | Target detection and image definition joint learning method based on multi-scale feature fusion | |
CN112734745B (en) | Unmanned aerial vehicle thermal infrared image heating pipeline leakage detection method fusing GIS data | |
US20220230412A1 (en) | High-resolution image matching method and system | |
Stankevich et al. | Satellite imagery spectral bands subpixel equalization based on ground classes’ topology | |
CN111833384A (en) | Method and device for quickly registering visible light and infrared images | |
CN111950433A (en) | Automatic construction method for optical satellite image feature matching deep learning training sample set |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||