CN110334645B - Moon impact pit identification method based on deep learning - Google Patents
- Publication number
- CN110334645B (application CN201910589841.3A)
- Authority
- CN
- China
- Prior art keywords
- impact
- impact pit
- pit
- image
- moon
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
Abstract
A moon impact pit identification method based on deep learning addresses the difficulty of identifying celestial-body impact pits. The method comprises the following steps: (1) generating training samples from existing impact pit position information and the lunar DEM image; (2) constructing a suitable convolutional neural network, the Simple-ResUnet model, for the generated images; (3) training with the generated moon images and the impact pit label images; (4) performing impact pit edge segmentation with the trained model; (5) finally, extracting the impact pit edges and recording newly found impact pits. Compared with other classification methods, the method offers high classification speed, a high impact pit identification probability, and a high discovery rate of new impact pits, and is suitable for identifying impact pits on various celestial bodies.
Description
Technical Field
The invention relates to a moon impact pit identification method based on deep learning, and belongs to the technical field of image identification.
Background
The moon is the celestial body closest to the earth and one of the most massive satellites in the solar system. A large number of impact pits exist on the lunar surface; these are pits formed by meteorites striking the celestial body and, together with the lunar maria and highlands, constitute typical lunar features. The Chang'e project has completed its lunar-surface survey tasks, returned a large number of lunar surface pictures, and produced a three-dimensional relief map of the whole moon, which will aid further study of the moon. Through research on impact pits, the relative geological age and surface characteristics of a celestial body's surface can be obtained, and by drawing a surface map of the celestial body, data support can be provided for spacecraft navigation, positioning, obstacle avoidance, and other needs of future space exploration. Therefore, the extraction and identification of impact pits is of great significance in the field of space exploration.
At present, the dominant impact pit identification method is manual identification, whose advantage is that large impact pits of various kinds can be identified accurately. However, manual identification is time-consuming and labor-intensive, and limits on visibility lead to incomplete results, so the identification of small impact pits is far from ideal. How to identify lunar impact pits quickly and accurately remains a difficult and central problem in lunar exploration at home and abroad.
To improve the efficiency of celestial-body impact pit identification, many researchers at home and abroad have attempted to identify impact pits by computer, and a variety of automatic identification algorithms have been designed, most of them based on machine learning. These methods fall into two categories: unsupervised learning and supervised learning. Unsupervised methods mainly exploit the geometric property that most impact pits are circular or elliptical, matching impact pits in an image with methods based on terrain analysis and mathematical morphology, or on the local gray scale of image regions. In supervised methods, researchers generally divide the labeled impact pits into training, verification, and test samples, then apply neural networks, support vector machines, ensemble learning, transfer learning, or continuously scalable template matching algorithms to improve recognition accuracy over repeated learning, and finally report the recognition performance on the test samples. Unsupervised methods can recognize impact pits well in specific images, but their performance is strongly affected by illumination and shooting angle, which makes them hard to put into formal application. In supervised learning, an expected output (an impact pit edge label image) is produced from an input object (a lunar surface image) and compared with the true output (the real impact pit label image), and the classifier parameters are adjusted continually until the identification error is minimized. At present, manually labeled impact pit images are generally used as the true output in supervised learning.
Manual labeling of smaller impact pits is incomplete, which affects practical application to some extent.
In recent years, deep learning has enjoyed significant success in computer vision. Unlike traditional machine learning, which may require some features to be defined manually, deep learning automatically discovers the important features needed to solve a problem, and as the amount of data grows it outperforms traditional machine learning. Algorithms based on convolutional neural networks perform excellently on target detection, segmentation, and classification. A convolutional neural network is a multilayer neural network that reduces data dimensionality and progressively extracts features through operations such as convolution and pooling; the classification task is then completed using the trained network weights. In face recognition, the accuracy of deep learning now exceeds that of the human eye, and deep learning has also succeeded in tasks such as automatic driving and satellite image recognition. Beyond computer vision, it has had great success in speech recognition and natural language processing.
Disclosure of Invention
The invention aims to provide a moon impact pit identification method based on deep learning that exploits the advantages of deep learning in image identification, addressing the difficulty and low efficiency of moon impact pit identification.
The technical scheme of the invention is a moon impact pit identification method based on deep learning that uses a convolutional neural network to identify impact pits in a lunar digital elevation model (DEM) image. The method generates training samples and test samples from existing impact pit position information and moon images; constructs a convolutional neural network and sets its parameters; trains with the generated moon images as network input and the impact pit label images as network output; performs impact pit edge identification with the trained neural network model; and finally extracts the impact pit edges and records newly found impact pits.
The moon impact pit identification method based on deep learning comprises the following implementation steps:
(1) generating training samples and test samples: randomly cropping the moon digital elevation map and drawing impact pit pictures according to the existing impact pit labeling information; storing the output pictures and input pictures in a file;
(2) constructing a neural network and setting the network training parameters; training with the generated pictures and labeled pictures as the input and output of the convolutional neural network to obtain a trained network model;
(3) identifying moon impact pits with the trained network model and acquiring an impact pit edge identification image;
(4) after the impact pit edge identification image is obtained, matching the positions of likely impact pits in the identification image with a template matching algorithm;
(5) drawing an impact pit identification image according to the matched impact pit position information;
(6) comparing with the existing impact pit labeling information, calculating the identification accuracy and precision, and saving the newly found impact pit information;
(7) if the accuracy and precision of the model meet the requirements, using the model to identify impact pits; otherwise, adjusting the network training parameters and training again.
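Step (1) above, random cropping plus rim drawing, can be sketched as follows. This is a minimal numpy-only illustration (the embodiment itself uses PIL and hdf5 storage; the helper names `make_edge_mask` and `random_crop_sample` are hypothetical), assuming impact pits are given as (x, y, r) in pixel coordinates:

```python
import numpy as np

def make_edge_mask(shape, craters, thickness=1.0):
    """Rasterize impact pit rims as a binary edge (label) mask.
    craters: iterable of (x, y, r) in pixel coordinates of the crop."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape, dtype=np.uint8)
    for cx, cy, r in craters:
        d = np.sqrt((xx - cx) ** 2 + (yy - cy) ** 2)
        # mark pixels whose distance to the center is close to the radius
        mask[np.abs(d - r) <= thickness] = 1
    return mask

def random_crop_sample(dem, craters, size=256, rng=None):
    """Randomly crop the DEM and draw rims of pits falling inside the crop."""
    rng = rng or np.random.default_rng()
    h, w = dem.shape
    y0 = rng.integers(0, h - size + 1)
    x0 = rng.integers(0, w - size + 1)
    crop = dem[y0:y0 + size, x0:x0 + size]
    # shift catalog coordinates into the crop's local frame
    local = [(x - x0, y - y0, r) for x, y, r in craters
             if x0 <= x < x0 + size and y0 <= y < y0 + size]
    return crop, make_edge_mask((size, size), local)
```

A real pipeline would loop this to produce the 30000 training pairs and write them to an hdf5 file.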
The process of step (6), namely comparing with the existing impact pit labeling information, calculating the identification accuracy and precision, and saving the newly found impact pit information, is as follows:
(1) using the trained network to identify impact pit edges, and then using a template matching method to obtain the positions (x_i, y_i, r_i) resembling impact pits in the impact pit edge image;
(2) comparing each detected impact pit with the existing impact pit information (x_j, y_j, r_j); a pair is considered a match if it simultaneously satisfies
((x_i - x_j)^2 + (y_i - y_j)^2)^(1/2) / min(r_i, r_j) < D_{x,y} and abs(r_i - r_j) / min(r_i, r_j) < D_r,
wherein x_i, y_i are respectively the abscissa and ordinate of the center of the impact pit detected by the network on the image; x_j, y_j are respectively the abscissa and ordinate of the center of the existing impact pit on the image; r_i, r_j are respectively the radius pixel values of the detected and the existing impact pit; D_{x,y} is the position error threshold; D_r is the radius error threshold; abs is the absolute value;
a matched impact pit is counted as a correctly identified impact pit, the total number being recorded as T_p; detected impact pits that fail to match are recorded as newly found impact pits, the total number being F_p; impact pits remaining in the label set are recorded as undetected impact pits, the total number being F_n;
(3) calculating the accuracy and precision of identification:
precision: P = T_p / (T_p + F_p);
recall: R = T_p / (T_p + F_n);
model score: F_2 = 5 × P × R / (4 × P + R);
impact pit discovery rates: DR1 = F_p / (T_p + F_p), DR2 = F_p / (T_p + F_n + F_p);
(4) converting the newly found impact pits from pixel positions into lunar longitude and latitude and storing them in a record file for subsequent manual verification; if the performance evaluation of the network model meets the set requirements, the model can be used in the moon impact pit identification task; otherwise, the network training parameters are adjusted and training is repeated.
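The matching criterion and evaluation metrics of steps (2) and (3) can be sketched in Python as follows. The greedy one-to-one matching strategy and the threshold defaults are assumptions for illustration, not values fixed by the patent:

```python
import numpy as np

def match_craters(detected, labeled, d_xy=1.8, d_r=1.0):
    """Greedy one-to-one matching of detected pits against the label set.
    A pair matches when sqrt(dx^2 + dy^2)/min(r_i, r_j) < d_xy and
    |r_i - r_j|/min(r_i, r_j) < d_r (thresholds are illustrative)."""
    unmatched = list(range(len(labeled)))
    tp = fp = 0
    new_craters = []
    for xi, yi, ri in detected:
        hit = None
        for j in unmatched:
            xj, yj, rj = labeled[j]
            m = min(ri, rj)
            if (np.hypot(xi - xj, yi - yj) / m < d_xy
                    and abs(ri - rj) / m < d_r):
                hit = j
                break
        if hit is None:
            fp += 1                       # no catalog match: newly found pit
            new_craters.append((xi, yi, ri))
        else:
            tp += 1                       # correctly identified pit
            unmatched.remove(hit)
    fn = len(unmatched)                   # catalog pits never detected
    return tp, fp, fn, new_craters

def metrics(tp, fp, fn):
    p = tp / (tp + fp)                    # precision
    r = tp / (tp + fn)                    # recall
    f2 = 5 * p * r / (4 * p + r)          # F-beta score with beta = 2
    dr1 = fp / (tp + fp)                  # discovery rate vs. detections
    dr2 = fp / (tp + fn + fp)             # discovery rate vs. all pits
    return p, r, f2, dr1, dr2
```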
The convolutional neural network is realized by the following steps:
(1) feature extraction process: an image is input, its channel count is changed to the initial number of channels by a convolution, and one residual convolution follows with the channel count unchanged; a downsampling step then halves the image size; this step is performed three times, with the latter two convolutions doubling the channel count relative to their input;
(2) bridging process: the feature map produced by the final downsampling of the feature extraction process undergoes one convolution that leaves its channel count unchanged, followed by one residual convolution;
(3) image restoration process: the feature map output by the bridging layer is first deconvolved, doubling its size; it is then fused with the same-size feature map produced by residual convolution in the feature extraction process, doubling the channel count; a convolution then halves the channel count, followed by a residual convolution; this step is performed three times, yielding a feature map whose size matches the input image and whose channel count matches the initial number; a final convolution with a Sigmoid activation then reduces the channel count to 1, and the resulting feature map, equal in size to the input image, is the network's output.
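The size and channel bookkeeping of the three processes can be checked with a short script. One self-consistent reading of the description is assumed here: each deconvolution emits the same channel count as the matching encoder map, so the fusion doubles the channels and the following convolution halves them again; the function name and structure are illustrative, not the patent's code:

```python
def simple_resunet_shapes(size=256, filters=112, depth=3):
    """Track (spatial size, channels) through encoder, bridge, decoder."""
    skips = []
    s, ch = size, filters
    for d in range(depth):                # feature extraction
        skips.append((s, ch))             # kept for fusion with the decoder
        s //= 2                           # downsampling halves the size
        if d < depth - 1:
            ch *= 2                       # latter stages double the channels
    bridge = (s, ch)                      # bridge keeps the channel count
    path = []
    for s_skip, ch_skip in reversed(skips):   # image restoration
        fused = 2 * ch_skip               # splicing doubles the channels
        path.append((s_skip, fused // 2)) # convolution halves them again
    out = (path[-1][0], 1)                # final conv + sigmoid -> 1 channel
    return skips, bridge, path, out
```

With the stated initial filter count of 112, this traces 112, 224, 448 channels down to a 32 × 32 bridge and back up to a 256 × 256 single-channel output.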
The corresponding feature maps of the feature extraction process and the image restoration process are fused once by splicing (concatenating) the two feature maps; one random deactivation (dropout) operation is performed after splicing.
The downsampling process uses a maximum pooling operation with pooling size 2 × 2, reducing the feature map to half its input size; the upsampling process uses a deconvolution operation with a 3 × 3 convolution kernel and stride 2, enlarging the feature map to twice its input size.
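A minimal numpy sketch of the two size rules: 2 × 2 max pooling halves each spatial dimension, and a transposed convolution with kernel 3 and stride 2 doubles it. The padding and output-padding values in the helper are assumptions chosen so the formula realizes the exact doubling:

```python
import numpy as np

def max_pool_2x2(x):
    """2x2 max pooling: each output pixel is the max of a 2x2 block."""
    h, w = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def deconv_output_size(n, kernel=3, stride=2, pad=1, output_pad=1):
    """Transposed-convolution output size:
    (n - 1) * stride - 2 * pad + kernel + output_pad."""
    return (n - 1) * stride - 2 * pad + kernel + output_pad
```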
The residual unit of the residual convolution comprises two convolution layers; after the first convolution operation, a batch normalization operation and a rectified linear unit operation are performed; after the second convolution operation, the output feature map is superimposed on the input feature map, i.e., corresponding elements of the feature maps are added. In the feature extraction process, a convolution operation before each residual unit doubles the number of filters, and a downsampling follows each residual unit; in the image restoration process, a convolution operation before each residual unit halves the number of filters, and an upsampling follows each residual unit. Thus before each residual unit, one convolution operation either halves or doubles the number of filters.
In the convolution operations, except for the final output convolution layer, whose kernel is 1 × 1 and which is followed by the sigmoid operation, the convolution kernels of the other convolution layers in the network are all 3 × 3 with stride 1, and the padding strategy is zero padding so that the input and output image sizes remain consistent.
The neural network parameters to be set comprise the number of filters and the random deactivation (dropout) value; the number of initial filters of the network is set to 112 and the random deactivation probability to 0.15; the optimizer of the network is the Adam optimizer; the loss calculation function used by the network is the two-class cross entropy.
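The two-class cross entropy used as the loss can be written out directly. This numpy version only illustrates the formula applied per pixel of the edge/non-edge output; it is not the Keras implementation the embodiment relies on:

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-7):
    """Mean two-class cross entropy over all pixels.
    y_true is the 0/1 edge label map, y_pred the sigmoid output."""
    p = np.clip(y_pred, eps, 1 - eps)   # avoid log(0)
    return float(-np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p)))
```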
The method has the advantage that using a convolutional neural network to identify impact pit edges in lunar surface images reduces the complexity of impact pit identification and improves its speed and accuracy. Compared with other classification methods, the method offers high classification speed, a high impact pit identification probability, and a high discovery rate of new impact pits, and is suitable for identifying impact pits on various celestial bodies.
Drawings
FIG. 1 is an overall flow chart of the present invention for deep learning based moon impact pit identification;
FIG. 2 is a diagram of an example convolutional neural network model used in the present invention.
Detailed Description
The moon impact pit identification method based on deep learning comprises the following steps:
(1) generating training samples and test samples, and randomly cropping the DEM image;
(2) drawing impact pit edge images according to the existing impact pit labeling information; storing the output images and input images in a file to speed up reading;
(3) constructing the network and setting the network training parameters; training with the generated images and labeled images as the input and output of the convolutional neural network to obtain a trained network model;
(4) identifying moon impact pits with the trained network model and acquiring an impact pit edge identification image;
(5) after the impact pit edge identification image is obtained, matching the positions of likely impact pits in the identification image with a template matching algorithm;
(6) drawing an impact pit identification image according to the matched impact pit position information;
(7) comparing with the existing impact pit labeling information, evaluating network performance by calculating the identification accuracy and precision, and saving the newly found impact pit information;
(8) if the accuracy and precision of the model meet the requirements, using the model to identify impact pits; otherwise, adjusting the network training parameters and training again.
As shown in fig. 1, the deep learning-based moon impact pit identification method, implemented in Python, comprises the following specific steps:
(1) preparing experimental picture data; in this process, the path of the full-moon digital elevation image, the path of the labeling information, the generated picture size, the numbers of training, test, and verification pictures to generate, and the longitude and latitude range are first set, and the moon pictures and the drawn impact pit pictures are converted to numpy arrays and stored as an hdf5 file; storing pictures in hdf5 enables fast reading. The full-moon digital elevation image used here is a lunar digital elevation model image produced from pictures taken by the camera of NASA's Lunar Reconnaissance Orbiter; the picture size is 92160 × 30720, the bit depth is 8 bits, and the resolution is 512 pixels per inch. The labeling information used is the impact pit position information detected by James W. Head in 2010 and by R.Z. Povilaitis. The generated image size is 256 × 256, with 30000 training pictures, 3000 verification pictures, and 3000 test pictures; the PIL library in Python is used for image processing.
(2) constructing the convolutional neural network shown in fig. 2, and setting the picture size, the number of training pictures, the number of test pictures, the number of verification pictures, the training batch size, the number of training epochs, and the model storage location. The neural network is built with the Keras framework; the learning rate of the network is 0.0001, the training batch size is three pictures per batch, and the number of training epochs is set to five; other parameter settings are the same as in step (1).
(3) Starting training, and storing loss values in the training process; and storing the model after training.
(4) performing impact pit edge prediction on a test picture with the stored model to obtain a prediction image matrix, then matching the positions and radii of likely impact pits with the template matching algorithm in the image processing package scikit-image; the impact pit matching threshold is set to 0.5 and the impact pit radius to 5-40 pixels.
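The template matching step can be illustrated with a simplified stand-in for scikit-image's `match_template`: a binary ring template of the candidate radius is slid over the predicted edge map, and positions with a high normalized overlap are kept. The normalization and the helper names are assumptions for illustration:

```python
import numpy as np

def ring_template(r, thickness=1.0):
    """Binary ring of radius r, the template for a pit rim of that radius."""
    size = 2 * int(r) + 3
    c = size // 2
    yy, xx = np.mgrid[0:size, 0:size]
    d = np.hypot(xx - c, yy - c)
    return (np.abs(d - r) <= thickness).astype(float)

def match_ring(pred, r, threshold=0.5):
    """Slide the ring template over the predicted edge map and return
    (x, y, score) of positions whose normalized overlap exceeds threshold."""
    t = ring_template(r)
    th, tw = t.shape
    norm = t.sum()
    hits = []
    for y in range(pred.shape[0] - th + 1):
        for x in range(pred.shape[1] - tw + 1):
            score = (pred[y:y + th, x:x + tw] * t).sum() / norm
            if score > threshold:
                hits.append((x + tw // 2, y + th // 2, score))
    return hits
```

A full detector would repeat this over radii 5-40 pixels and suppress overlapping hits.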
(5) comparing the acquired impact pit position information with the labeled impact pit information, calculating the model's precision, recall, model score, and new impact pit discovery rate, then converting the newly discovered impact pit positions into longitude and latitude and storing them in a csv file.
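Converting a pixel position to lunar longitude and latitude is a linear map for an equirectangular DEM. The coordinate ranges below are parameters, since the patent sets them at data-generation time; the defaults shown are purely illustrative assumptions:

```python
def pixel_to_lonlat(x, y, width, height,
                    lon_range=(-180.0, 180.0), lat_range=(-60.0, 60.0)):
    """Convert pixel coordinates on an equirectangular DEM to lunar
    longitude/latitude.  Row 0 is assumed to be the northern edge;
    the default ranges are illustrative, not taken from the patent."""
    lon0, lon1 = lon_range
    lat0, lat1 = lat_range
    lon = lon0 + (x / (width - 1)) * (lon1 - lon0)    # columns -> longitude
    lat = lat1 - (y / (height - 1)) * (lat1 - lat0)   # rows -> latitude
    return lon, lat
```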
Claims (8)
1. A moon impact pit identification method based on deep learning is characterized in that a training sample and a test sample are generated according to existing impact pit position information and a moon image; constructing a convolutional neural network, and setting neural network parameters; training by using the generated moon image as network input and the impact pit labeling image as network output; performing impact pit edge recognition by using a trained neural network model; finally, extracting the edge of the impact pit and recording the newly found impact pit;
the method comprises the following steps:
(1) generating training samples and test samples: randomly cropping the moon digital elevation map and drawing impact pit pictures according to the existing impact pit labeling information; storing the output pictures and input pictures in a file;
(2) constructing a neural network and setting the network training parameters; training with the generated pictures and labeled pictures as the input and output of the convolutional neural network to obtain a trained network model;
(3) identifying moon impact pits with the trained network model and acquiring an impact pit edge identification image;
(4) after the impact pit edge identification image is obtained, matching the positions of likely impact pits in the identification image with a template matching algorithm;
(5) drawing an impact pit identification image according to the matched impact pit position information;
(6) comparing with the existing impact pit labeling information, calculating the identification accuracy and precision, and saving the newly found impact pit information;
(7) if the accuracy and precision of the model meet the requirements, using the model to identify impact pits; otherwise, adjusting the network training parameters and training again.
2. The moon impact pit identification method based on deep learning according to claim 1, characterized in that the convolutional neural network is implemented by the following steps:
(1) feature extraction process: an image is input, its channel count is changed to the initial number of channels by a convolution, and one residual convolution follows with the channel count unchanged; a downsampling step then halves the image size; this step is performed three times, with the latter two convolutions doubling the channel count relative to their input;
(2) bridging process: the feature map produced by the final downsampling of the feature extraction process undergoes one convolution operation that leaves its channel count unchanged, followed by one residual convolution;
(3) image restoration process: the feature map output by the bridging layer is first deconvolved, doubling its size; it is then fused with the same-size feature map produced by residual convolution in the feature extraction process, doubling the channel count; a convolution then halves the channel count, followed by a residual convolution; this step is performed three times, yielding a feature map whose size matches the input image and whose channel count matches the initial number; a Sigmoid operation then changes the channel count to 1, and the resulting feature map, equal in size to the input image, is the network's output.
3. The moon impact pit identification method based on deep learning according to claim 2, wherein the downsampling process uses a maximum pooling operation with pooling size 2 × 2, reducing the feature map to half its input size; the upsampling process uses a deconvolution operation with a 3 × 3 convolution kernel and stride 2, enlarging the feature map to twice its input size.
4. The moon impact pit identification method based on deep learning according to claim 2, wherein the residual unit of the residual convolution comprises two convolution layers; after the first convolution operation, a batch normalization operation and a rectified linear unit operation are performed; after the second convolution operation, the output feature map is superimposed on the input feature map, i.e., corresponding elements of the feature maps are added; in the feature extraction process, a convolution operation before each residual unit doubles the number of filters, and a downsampling follows each residual unit; in the image restoration process, a convolution operation before each residual unit halves the number of filters, and an upsampling follows each residual unit; thus before each residual unit, one convolution operation either halves or doubles the number of filters.
5. The moon impact pit identification method based on deep learning according to claim 2, wherein the corresponding feature maps of the feature extraction process and the image restoration process are fused once by splicing the two feature maps; one random deactivation operation is performed after splicing.
6. The moon impact pit identification method based on deep learning according to claim 2, wherein in the convolution operations, except for the final output convolution layer, whose kernel is 1 × 1 and which is followed by the sigmoid operation, the convolution kernels of the other convolution layers in the network are all 3 × 3 with stride 1, and the padding strategy is zero padding so that the input and output image sizes remain consistent.
7. The moon impact pit identification method based on deep learning according to claim 1, wherein the neural network parameters to be set comprise the number of filters and the random deactivation value; the number of initial filters of the network is set to 112 and the random deactivation probability to 0.15; the optimizer of the network is the Adam optimizer; the loss calculation function used by the network is the two-class cross entropy.
8. The moon impact pit identification method based on deep learning as claimed in claim 1, wherein the step (6) is compared with the existing impact pit labeling information, the identification accuracy and precision are calculated, and the newly found impact pit information is saved, and the process is as follows:
(1) The trained network is used to identify impact pit edges, and a template matching method is then applied to the impact pit edge image to obtain the positions (x_i, y_i, r_i) of candidate impact pits;
(2) Each detected impact pit is compared with the existing impact pit information (x_j, y_j, r_j); if the two conditions
((x_i - x_j)^2 + (y_i - y_j)^2)^(1/2) / min(r_i, r_j) < D_x,y and abs(r_i - r_j) / min(r_i, r_j) < D_r
are satisfied simultaneously, wherein x_i, y_i are respectively the abscissa and ordinate of the center of the impact pit detected by the network on the image; x_j, y_j are respectively the abscissa and ordinate of the center of the existing impact pit on the image; r_i, r_j respectively represent the detected impact pit radius and the existing impact pit radius in pixels; D_x,y is the position error threshold; D_r is the radius error threshold; and abs denotes the absolute value;
then this impact pit is counted as a correctly identified impact pit, and the total number of such pits is recorded as T_p; impact pits that fail to match are recorded as newly found impact pits, and their total number is recorded as F_p; impact pits remaining unmatched in the labeled set are recorded as undetected impact pits, and their total number is recorded as F_n;
(3) The identification precision and recall are calculated as follows:
Precision: P = T_p / (T_p + F_p);
Recall: R = T_p / (T_p + F_n);
Model score: F_2 = 5 × P × R / (4 × P + R);
Impact pit discovery rates: DR1 = F_p / (T_p + F_p), DR2 = F_p / (T_p + F_n + F_p);
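The evaluation quantities of step (3) can be computed directly from the counts T_p, F_p, F_n; the counts in the example call are hypothetical.

```python
def scores(t_p, f_p, f_n):
    """Precision, recall, F2 score, and the two discovery rates of claim 8."""
    p = t_p / (t_p + f_p)               # precision
    r = t_p / (t_p + f_n)               # recall
    f2 = 5 * p * r / (4 * p + r)        # recall-weighted F-score
    dr1 = f_p / (t_p + f_p)             # unmatched share of all detections
    dr2 = f_p / (t_p + f_n + f_p)       # unmatched share of all pits
    return p, r, f2, dr1, dr2

p, r, f2, dr1, dr2 = scores(80, 20, 10)  # hypothetical counts
print(round(p, 3), round(r, 3), round(f2, 3))
```

The F2 score weights recall four times as heavily as precision, which fits a cataloguing task where missing a real crater is costlier than an extra candidate that manual verification can discard.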
(4) The newly found impact pits are converted from pixel positions into lunar longitude and latitude and saved in a record file for subsequent manual verification; if the performance evaluation of the network model meets the set requirements, the network model can be used in the moon impact pit identification task; otherwise, the network training parameters are adjusted and the network is retrained.
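A sketch of the pixel-to-selenographic conversion mentioned in step (4), assuming the input image is an equirectangular (simple cylindrical) map spanning the given longitude/latitude bounds with pixel row 0 at the top; the patent does not specify the projection, so this mapping is only an assumption.

```python
def pixel_to_selenographic(px, py, img_w, img_h,
                           lon_min=-180.0, lon_max=180.0,
                           lat_min=-90.0, lat_max=90.0):
    """Map a pixel position to lunar longitude/latitude, assuming an
    equirectangular image covering the given bounds (row 0 = lat_max)."""
    lon = lon_min + (px / img_w) * (lon_max - lon_min)
    lat = lat_max - (py / img_h) * (lat_max - lat_min)
    return lon, lat

print(pixel_to_selenographic(180, 90, 360, 180))  # image centre -> (0.0, 0.0)
```

A crater radius in pixels would be converted the same way, via the degrees-per-pixel scale, although the true ground distance per degree of longitude shrinks with cos(latitude) on a sphere.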
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910589841.3A CN110334645B (en) | 2019-07-02 | 2019-07-02 | Moon impact pit identification method based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110334645A CN110334645A (en) | 2019-10-15 |
CN110334645B true CN110334645B (en) | 2022-09-30 |
Family
ID=68143041
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910589841.3A Expired - Fee Related CN110334645B (en) | 2019-07-02 | 2019-07-02 | Moon impact pit identification method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110334645B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111612789A (en) * | 2020-06-30 | 2020-09-01 | 征图新视(江苏)科技股份有限公司 | Defect detection method based on improved U-net network |
CN113139592A (en) * | 2021-04-14 | 2021-07-20 | 中国地质大学(武汉) | Method, device and storage medium for identifying lunar meteorite crater based on depth residual error U-Net |
CN115393730B (en) * | 2022-07-15 | 2023-05-30 | 南京林业大学 | Mars meteorite crater precise identification method, electronic equipment and storage medium |
CN115272769A (en) * | 2022-08-10 | 2022-11-01 | 中国科学院地理科学与资源研究所 | Automatic moon impact pit extraction method and device based on machine learning |
CN117809190B (en) * | 2024-02-23 | 2024-05-24 | 吉林大学 | Impact pit sputter identification method based on deep learning |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102567701A (en) * | 2010-12-08 | 2012-07-11 | 中国科学院地理科学与资源研究所 | Method for automatically extracting circular impact craters from Chang'E DEM (Digital Elevation Model) data by Hough transformation
CN107119657A (en) * | 2017-05-15 | 2017-09-01 | 苏州科技大学 | A foundation pit monitoring method based on vision measurement
CN108068479A (en) * | 2017-12-31 | 2018-05-25 | 西安立东行智能技术有限公司 | An anti-counterfeiting seal system with artificial-intelligence matching identification against 3D-printing forgery, and a seal making method
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103927543B (en) * | 2014-04-24 | 2015-12-09 | 山东大学(威海) | An automatic DEM-based lunar impact crater identification and boundary extraction method
CN104036234B (en) * | 2014-05-23 | 2017-11-10 | 中国科学院国家天文台 | An image recognition method for annular pits
CN104268593B (en) * | 2014-09-22 | 2017-10-17 | 华东交通大学 | A face recognition method using multiple sparse representations under small sample size conditions
US10262205B2 (en) * | 2015-07-28 | 2019-04-16 | Chiman KWAN | Method and system for collaborative multi-satellite remote sensing
CN108734219B (en) * | 2018-05-23 | 2022-02-01 | 北京航空航天大学 | End-to-end collision pit detection and identification method based on full convolution neural network structure
CN109255294A (en) * | 2018-08-02 | 2019-01-22 | 中国地质大学(北京) | A remote sensing image cloud recognition method based on deep learning
CN109166141A (en) * | 2018-08-10 | 2019-01-08 | Oppo广东移动通信有限公司 | Danger reminding method and device, storage medium, and mobile terminal
- 2019-07-02: CN application CN201910589841.3A filed, granted as patent CN110334645B (status: not active, Expired - Fee Related)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102567701A (en) * | 2010-12-08 | 2012-07-11 | 中国科学院地理科学与资源研究所 | Method for automatically extracting circular impact craters from Chang'E DEM (Digital Elevation Model) data by Hough transformation
CN107119657A (en) * | 2017-05-15 | 2017-09-01 | 苏州科技大学 | A foundation pit monitoring method based on vision measurement
CN108068479A (en) * | 2017-12-31 | 2018-05-25 | 西安立东行智能技术有限公司 | An anti-counterfeiting seal system with artificial-intelligence matching identification against 3D-printing forgery, and a seal making method
Also Published As
Publication number | Publication date |
---|---|
CN110334645A (en) | 2019-10-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110334645B (en) | Moon impact pit identification method based on deep learning | |
CN109816012B (en) | Multi-scale target detection method fusing context information | |
CN108681752B (en) | Image scene labeling method based on deep learning | |
CN110245711B (en) | SAR target identification method based on angle rotation generation network | |
CN110738697A (en) | Monocular depth estimation method based on deep learning | |
CN111985376A (en) | Remote sensing image ship contour extraction method based on deep learning | |
CN111652273B (en) | Deep learning-based RGB-D image classification method | |
CN111461213A (en) | Training method of target detection model and target rapid detection method | |
CN105654122B (en) | Based on the matched spatial pyramid object identification method of kernel function | |
CN113592715B (en) | Super-resolution image reconstruction method for small sample image set | |
CN114332070B (en) | Meteorite detection method based on intelligent learning network model compression | |
CN116468995A (en) | Sonar image classification method combining SLIC super-pixel and graph annotation meaning network | |
CN110633706B (en) | Semantic segmentation method based on pyramid network | |
CN117671509B (en) | Remote sensing target detection method and device, electronic equipment and storage medium | |
CN114663880A (en) | Three-dimensional target detection method based on multi-level cross-modal self-attention mechanism | |
CN110647977A (en) | Method for optimizing Tiny-YOLO network for detecting ship target on satellite | |
Ning et al. | Rethinking the backbone architecture for tiny object detection | |
CN117710841A (en) | Small target detection method and device for aerial image of unmanned aerial vehicle | |
CN115861595B (en) | Multi-scale domain self-adaptive heterogeneous image matching method based on deep learning | |
CN117011515A (en) | Interactive image segmentation model based on attention mechanism and segmentation method thereof | |
CN116486133A (en) | SAR target classification method combining local classification and feature generation and correction | |
Xu et al. | Compressed YOLOv5 for oriented object detection with integrated network slimming and knowledge distillation | |
CN115100543A (en) | Self-supervision self-distillation element learning method for small sample remote sensing image scene classification | |
CN114973306A (en) | Fine-scale embedded lightweight infrared real-time detection method and system | |
CN113673629A (en) | Open set domain adaptive remote sensing image small sample classification method based on multi-graph convolution network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20220930 |