CN112232288A - A satellite image target recognition method and system based on deep learning - Google Patents

A satellite image target recognition method and system based on deep learning

Info

Publication number: CN112232288A
Application number: CN202011231793.XA
Authority: CN (China)
Prior art keywords: data, model, module, training, marking
Legal status: Granted; currently Active
Other languages: Chinese (zh)
Other versions: CN112232288B
Inventors: 潘晓光, 李宇, 令狐彬, 刘剑超, 陈亮
Current Assignee: Shanxi Sanyouhe Smart Information Technology Co Ltd
Original Assignee: Shanxi Sanyouhe Smart Information Technology Co Ltd
Application filed 2020-11-06 by Shanxi Sanyouhe Smart Information Technology Co Ltd
Priority to CN202011231793.XA
Publication of CN112232288A: 2021-01-15; grant and publication of CN112232288B: 2024-11-08

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems


Abstract

The invention belongs to the technical field of image recognition processing and specifically relates to a satellite image target recognition method and system based on deep learning, comprising the following steps: S1, data annotation; S2, data segmentation and scaling; S3, data division and augmentation; S4, model training. By training a separate recognition network on the individual units to be identified within a large satellite image, the invention solves the problem that large satellite images cannot be recognized directly. When the images of the objects to be identified are segmented, part of their surrounding context information is retained, providing more features for network recognition; easily confused data are also added, helping the network distinguish similar objects and improving recognition accuracy; and the deep network model ResNet-152 is used, ensuring the recognition effect. The invention is used for the recognition of satellite images.

Description

Satellite image target recognition method and system based on deep learning
Technical Field
The invention belongs to the technical field of image recognition processing, and particularly relates to a satellite image target recognition method and system based on deep learning.
Background
Finding specific objects, facilities and events in satellite images is an important problem, and the satellite images need to be efficiently preprocessed before being input into a deep network. To keep processing time reasonable, deep learning requires relatively small images of fixed size; satellite images do not meet this requirement, as the images, and the objects and facilities in them, can be much larger than in ordinary photographs. When the image is compressed to a smaller size, small details in it disappear, and these details may include important distinguishing features.
Problems of the prior art: traditional target detection and classification algorithms are inaccurate and unreliable.
Disclosure of Invention
Aiming at the technical problems that traditional target detection and classification algorithms are inaccurate and unreliable, the invention provides a satellite image target recognition method and system based on deep learning with high accuracy, strong reliability and a good recognition effect.
In order to solve the technical problems, the invention adopts the technical scheme that:
A satellite image target recognition method based on deep learning comprises the following steps:
S1, data annotation: mark the objects to be identified in the satellite image, the annotation information comprising the object category, the object position and easily confused objects; at the same time, convert the labeling box from a rectangle to a square and enlarge the labeling range so that the context information around the target to be identified is brought into the labeled region.
S2, data segmentation and scaling: segment the annotated objects out of the satellite image according to their position information, the segmented content comprising the annotation information, part of the data around the labeled edges and the easily confused objects; build a data set from these together with the images of the objects to be identified, and scale the data to the input size of the convolutional neural network.
S3, data division and augmentation: divide the data set into a training set, a validation set and a test set, and augment the amount of training data by rotation and color changes.
S4, model training: input the constructed training set and the corresponding labels into the network and train the model by iterative optimization. When the training loss no longer decreases, continue with the validation set data: if the validation loss no longer decreases either, save the model; if it continues to decrease, keep training iteratively on the training set until the loss value is stable, then save the model. Finally, input the test set data into the trained model for recognition and evaluate the recognition results.
The data annotation in S1 is performed as follows: label the n categories of objects to be identified in the satellite image as 1, 2, ..., n, and also label objects similar to the targets, all such easily confused objects receiving the unified category label 0. The position of each target object is marked with a rectangular box, so the annotation of each object contains 5 values (l0, l1, l2, l3, l4), where l0 is the category and (l1, l2, l3, l4) is the position information, the four values corresponding to the upper-left and lower-right corners of the box. Taking the longest side of the marking box, L = max(l3 - l1, l4 - l2), as its edge, the box is converted into a square and then enlarged by 10% in order to capture more edge and context information and provide more recognition features for the model; the position annotation is then updated, the new annotation being (l0, k1, k2, k3, k4).
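By way of illustration, this box transform can be sketched in a few lines of Python. This is a minimal sketch, not code from the patent; it assumes (l1, l2, l3, l4) are pixel coordinates (left, top, right, bottom) and that the enlarged square stays centred on the original box.

```python
def square_and_enlarge(label, scale=1.10):
    """Convert a rectangular annotation (l0, l1, l2, l3, l4) into the
    enlarged square annotation (l0, k1, k2, k3, k4) described in S1."""
    l0, l1, l2, l3, l4 = label
    side = max(l3 - l1, l4 - l2) * scale        # L = max(width, height), +10% context
    cx, cy = (l1 + l3) / 2.0, (l2 + l4) / 2.0   # keep the square centred on the target
    half = side / 2.0
    return (l0, cx - half, cy - half, cx + half, cy + half)
```

For example, square_and_enlarge((3, 100, 100, 180, 140)) turns an 80 × 40 box into an 88 × 88 square centred at (140, 120), returning (3, 96, 76, 184, 164).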
The data segmentation and scaling in S2 proceeds as follows: the data annotated in S1 are cut out and saved according to the position information (k1, k2, k3, k4), each segmented piece of data is saved with its corresponding category label l0, and the data are scaled for input into the model, all of them to a size of 224 × 224.
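A minimal sketch of this crop-and-scale step using Pillow follows; the label tuple convention is the hypothetical one from the previous sketch, and the bilinear filter is an illustrative choice the patent does not prescribe.

```python
from PIL import Image

def crop_and_scale(image_path, label, out_size=(224, 224)):
    """Cut the labeled square out of a satellite image and scale it."""
    l0, k1, k2, k3, k4 = label
    img = Image.open(image_path)
    patch = img.crop((int(k1), int(k2), int(k3), int(k4)))  # (left, top, right, bottom)
    patch = patch.resize(out_size, Image.BILINEAR)          # scale to 224 x 224
    return patch, l0                                        # patch plus its category label
```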
The data division and augmentation in S3 proceeds as follows: the data set is divided into a training set, a validation set and a test set in the ratio 7:2:1, where the training set is used to train the model, the validation set to detect whether the model loss keeps falling, and the test set to test the model's performance. The training set data are rotated by 45°, 90° and 135° and their contrast and brightness are adjusted, thereby augmenting the data; the transformed data are mixed with the original training set to construct a new data set.
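The split and the augmentations might be sketched as below; the contrast and brightness factors are assumptions for illustration, since the patent does not specify them.

```python
import random
from PIL import ImageEnhance

def split_dataset(samples, seed=0):
    """Shuffle (patch, label) pairs and split them 7:2:1."""
    rng = random.Random(seed)
    samples = samples[:]
    rng.shuffle(samples)
    n_train, n_val = int(0.7 * len(samples)), int(0.2 * len(samples))
    return (samples[:n_train],                  # training set
            samples[n_train:n_train + n_val],   # validation set
            samples[n_train + n_val:])          # test set

def augment(patch):
    """Yield rotated and contrast/brightness-shifted copies of one patch."""
    for angle in (45, 90, 135):
        yield patch.rotate(angle)
    for factor in (0.8, 1.2):                   # assumed adjustment factors
        yield ImageEnhance.Contrast(patch).enhance(factor)
        yield ImageEnhance.Brightness(patch).enhance(factor)
```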
The model training in S4 proceeds as follows: the first convolution uses a 7 × 7 kernel and all subsequent convolutions use 3 × 3 kernels; apart from the first layer, every two subsequent convolutional layers form a residual block, and the last layer is a fully connected layer that outputs the prediction, which is passed through a sigmoid function to obtain the predicted probability of each category. The network input size is 224 × 224. The processed training set and its corresponding labels are input into the network to train the model; the model is then run on the validation set data to check whether the loss function is still decreasing. If it keeps decreasing, the model is not yet optimal and training continues; if it no longer decreases, the model is saved. The test set data are then tested with the model and compared with the label results. The model outputs, for each category, the probability that the input belongs to it: a probability greater than 0.5 means the input is assigned to that category, while if every category probability is below 0.5 the input is treated as containing no object to be identified. F1-Score is used to evaluate the recognition effect.
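As a sketch of the network head and the 0.5 decision rule, the torchvision ResNet-152 can stand in for the deep residual network described above (note that its residual blocks are 1 × 1 / 3 × 3 / 1 × 1 bottlenecks rather than the two-layer 3 × 3 blocks recited here); replacing its final fully connected layer and applying a sigmoid yields per-category probabilities. The weights argument and the predict helper are assumptions, not part of the patent.

```python
import torch
import torchvision

def build_model(num_classes):
    """ResNet-152 with a fully connected head sized to the n categories."""
    model = torchvision.models.resnet152(weights=None)
    model.fc = torch.nn.Linear(model.fc.in_features, num_classes)
    return model

@torch.no_grad()
def predict(model, batch):
    """batch: float tensor of shape (B, 3, 224, 224)."""
    probs = torch.sigmoid(model(batch))   # per-category probabilities
    # A category is reported when its probability exceeds 0.5; if no
    # category does, the patch is treated as containing no target object.
    return probs > 0.5
```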
The evaluation method of the F1-Score comprises the following steps:
precision = TP / (TP + FP)
recall = TP / (TP + FN)
F1 = 2 × precision × recall / (precision + recall)
Here F1 denotes the recognition performance; TP is the number of samples of the class predicted correctly, FP is the number of samples of other classes mistakenly predicted as this class, and FN is the number of samples of this class mistakenly predicted as other classes; precision is the proportion of true positive samples among the cases the classifier judges positive, and recall is the proportion of all actual positive cases that are predicted as positive.
If the model's recognition performance F1 reaches the specified F1-Score value, model training is complete; if F1 does not reach the specified F1-Score value, the model parameters are adjusted and model training continues. The F1-Score is the harmonic mean of precision and recall, with a maximum of 1 and a minimum of 0.
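Computed from per-class counts, the evaluation reduces to a few lines; this sketch also guards against empty denominators, a detail the formulas above leave implicit.

```python
def f1_score(tp, fp, fn):
    """F1 from true-positive, false-positive and false-negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)
```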
A satellite image target recognition system based on deep learning comprises a data annotation module, a data segmentation and scaling module, a data division and augmentation module and a model training module; the data annotation module is communicatively connected to the data segmentation and scaling module, the data segmentation and scaling module to the data division and augmentation module, and the data division and augmentation module to the model training module.
The data segmentation and scaling module comprises a data segmentation module and a data scaling module; the data segmentation module is connected to the data annotation module, and the data scaling module is connected to the data division and augmentation module.
The data division and augmentation module comprises a data division module and a data augmentation module; the data division module is connected to the data scaling module of the data segmentation and scaling module, and the data augmentation module is connected to the model training module.
The model training module comprises a training module, a recognition module and an evaluation module; the training module is connected to the data augmentation module of the data division and augmentation module, the training module is connected to the recognition module, and the recognition module to the evaluation module; the model in the training module adopts the ResNet-152 network, and the recognition results are evaluated with F1-Score.
Compared with the prior art, the invention has the following beneficial effects:
according to the method, the unit to be recognized in the large satellite image is subjected to independent recognition network training, so that the problem that the large satellite image cannot be directly recognized is solved, meanwhile, certain retention is performed on the context information of the image of the object to be recognized when the image of the object to be recognized is segmented, more characteristics are provided for network recognition, meanwhile, confusable data are added, the network is helped to distinguish similar objects, the recognition accuracy is improved, a deep network model ResNet-152 is used, and the recognition effect is guaranteed.
Drawings
FIG. 1 is a flow chart of the operation of the present invention;
FIG. 2 is a schematic structural view of the present invention;
FIG. 3 is a flow chart of model training according to the present invention.
Wherein: 1 is the data annotation module, 2 the data segmentation and scaling module, 3 the data division and augmentation module, 4 the model training module, 201 the data segmentation module, 202 the data scaling module, 301 the data division module, 302 the data augmentation module, 401 the training module, 402 the recognition module, and 403 the evaluation module.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
A satellite image target recognition method based on deep learning, as shown in FIG. 1, comprises the following steps:
Step 1, data annotation: mark the objects to be identified in the satellite image, the annotation information comprising the object category, the object position and easily confused objects; at the same time, convert the labeling box from a rectangle to a square and enlarge the labeling range so that the context information around the target to be identified is brought into the labeled region.
Step 2, data segmentation and scaling: segment the annotated objects out of the satellite image according to their position information, the segmented content comprising the annotation information, part of the data around the labeled edges and the easily confused objects; build a data set from these together with the images of the objects to be identified, and scale the data to the input size of the convolutional neural network.
Step 3, data division and augmentation: divide the data set into a training set, a validation set and a test set, and augment the amount of training data by rotation and color changes.
Step 4, model training: input the constructed training set and the corresponding labels into the network and train the model by iterative optimization. When the training loss no longer decreases, continue with the validation set data: if the validation loss no longer decreases either, save the model; if it continues to decrease, keep training iteratively on the training set until the loss value is stable, then save the model. Finally, input the test set data into the trained model for recognition and evaluate the recognition results.
Further, the data annotation in step 1 is performed as follows: label the n categories of objects to be identified in the satellite image as 1, 2, ..., n, and also label objects similar to the targets, all such easily confused objects receiving the unified category label 0. The position of each target object is marked with a rectangular box, so the annotation of each object contains 5 values (l0, l1, l2, l3, l4), where l0 is the category and (l1, l2, l3, l4) is the position information, the four values corresponding to the upper-left and lower-right corners of the box. Taking the longest side of the marking box, L = max(l3 - l1, l4 - l2), as its edge, the box is converted into a square and then enlarged by 10% in order to capture more edge and context information and provide more recognition features for the model; the position annotation is then updated, the new annotation being (l0, k1, k2, k3, k4).
Further, the data segmentation and scaling in step 2 proceeds as follows: the data annotated in step 1 are cut out and saved according to the position information (k1, k2, k3, k4), each segmented piece of data is saved with its corresponding category label l0, and the data are scaled for input into the model, all of them to a size of 224 × 224.
Further, the data division and augmentation in step 3 proceeds as follows: the data set is divided into a training set, a validation set and a test set in the ratio 7:2:1, where the training set is used to train the model, the validation set to detect whether the model loss keeps falling, and the test set to test the model's performance. The training set data are rotated by 45°, 90° and 135° and their contrast and brightness are adjusted, thereby augmenting the data; the transformed data are mixed with the original training set to construct a new data set.
Further, as shown in FIG. 3, the model training in step 4 proceeds as follows: the first convolution uses a 7 × 7 kernel and all subsequent convolutions use 3 × 3 kernels; apart from the first layer, every two subsequent convolutional layers form a residual block, and the last layer is a fully connected layer that outputs the prediction, which is passed through a sigmoid function to obtain the predicted probability of each category. The network input size is 224 × 224. The processed training set and its corresponding labels are input into the network to train the model; the model is then run on the validation set data to check whether the loss function is still decreasing. If it keeps decreasing, the model is not yet optimal and training continues; if it no longer decreases, the model is saved. The test set data are then tested with the model and compared with the label results. The model outputs, for each category, the probability that the input belongs to it: a probability greater than 0.5 means the input is assigned to that category, while if every category probability is below 0.5 the input is treated as containing no object to be identified. F1-Score is used to evaluate the recognition effect.
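A simplified sketch of the FIG. 3 control flow is given below: train, watch the validation loss, and save the model once the loss stops falling. The loss function, optimizer, learning rate, patience and file name are assumptions for illustration; the patent prescribes only the stopping logic.

```python
import torch

def train(model, train_loader, val_loader, epochs=100, patience=3):
    criterion = torch.nn.BCEWithLogitsLoss()   # sigmoid multi-label loss (assumed)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    best_val, stall = float("inf"), 0
    for epoch in range(epochs):
        model.train()
        for x, y in train_loader:              # y: multi-hot float labels
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
        model.eval()
        with torch.no_grad():
            val_loss = sum(criterion(model(x), y).item()
                           for x, y in val_loader) / len(val_loader)
        if val_loss < best_val:                # loss still falling: not optimal yet
            best_val, stall = val_loss, 0
        else:                                  # loss no longer falling
            stall += 1
            if stall >= patience:              # loss value stable: save and stop
                break
    torch.save(model.state_dict(), "resnet152_satellite.pt")
```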
Further, the evaluation method of F1-Score is as follows:
precision = TP / (TP + FP)
recall = TP / (TP + FN)
F1 = 2 × precision × recall / (precision + recall)
F1 denotes the recognition performance; TP is the number of samples of the class predicted correctly, FP is the number of samples of other classes mistakenly predicted as this class, and FN is the number of samples of this class mistakenly predicted as other classes; precision is the proportion of true positive samples among the cases the classifier judges positive, and recall is the proportion of all actual positive cases that are predicted as positive.
If the model's recognition performance F1 reaches the specified F1-Score value, model training is complete; if F1 does not reach the specified F1-Score value, the model parameters are adjusted and model training continues. The F1-Score is the harmonic mean of precision and recall, with a maximum of 1 and a minimum of 0.
A satellite image target recognition system based on deep learning, as shown in FIG. 2, comprises a data annotation module 1, a data segmentation and scaling module 2, a data division and augmentation module 3 and a model training module 4; the data annotation module 1 is communicatively connected to the data segmentation and scaling module 2, the data segmentation and scaling module 2 to the data division and augmentation module 3, and the data division and augmentation module 3 to the model training module 4. The data annotation module 1 annotates the objects to be identified in the satellite image, converts the labeling box from a rectangle to a square and enlarges the labeling range so that the context information around the target is included. The data segmentation and scaling module 2 segments the annotated objects out of the satellite image according to their position information and scales the data to the input size of the convolutional neural network. The data division and augmentation module 3 divides the data set into a training set, a validation set and a test set in proportion, and augments the amount of training data by rotation and color changes. The model training module 4 inputs the constructed training set and the corresponding labels into the network, trains the model by iterative optimization, validates it with the validation set data, inputs the test set data into the trained model for recognition, and evaluates the recognition results.
Further, the data segmentation and scaling module 2 comprises a data segmentation module 201 and a data scaling module 202; the data segmentation module 201 is connected to the data annotation module 1, and the data scaling module 202 is connected to the data division and augmentation module 3.
Further, the data division and augmentation module 3 comprises a data division module 301 and a data augmentation module 302; the data division module 301 is connected to the data scaling module 202 of the data segmentation and scaling module 2, and the data augmentation module 302 is connected to the model training module 4.
Further, the model training module 4 comprises a training module 401, a recognition module 402 and an evaluation module 403; the training module 401 is connected to the data augmentation module 302 of the data division and augmentation module 3, the training module 401 is connected to the recognition module 402, and the recognition module 402 to the evaluation module 403; the model in the training module 401 adopts the ResNet-152 network, and the recognition results are evaluated with F1-Score.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media having computer-usable program code embodied in the medium, including but not limited to disk storage, CD-ROM, optical storage, and the like.
Although only the preferred embodiments of the present invention have been described in detail, the present invention is not limited to the above embodiments, and various changes can be made without departing from the spirit of the present invention within the knowledge of those skilled in the art, and all changes are encompassed in the scope of the present invention.

Claims (10)

1. A satellite image target recognition method based on deep learning, characterized by comprising the following steps:
S1, data annotation: marking the objects to be identified in the satellite image, the annotation information comprising the object category, the object position and easily confused objects; at the same time, converting the labeling box from a rectangle to a square and enlarging the labeling range so that the context information around the target to be identified is brought into the labeled region;
S2, data segmentation and scaling: segmenting the annotated objects out of the satellite image according to their position information, the segmented content comprising the annotation information, part of the data around the labeled edges and the easily confused objects; building a data set from these together with the images of the objects to be identified, and scaling the data to the input size of the convolutional neural network;
S3, data division and augmentation: dividing the data set into a training set, a validation set and a test set, and augmenting the amount of training data by rotation and color changes;
S4, model training: inputting the constructed training set and the corresponding labels into the network and training the model by iterative optimization; when the training loss no longer decreases, continuing with the validation set data: if the validation loss no longer decreases either, saving the model; if it continues to decrease, continuing to train the model iteratively on the training set until the loss value is stable, then saving the model; finally, inputting the test set data into the trained model for recognition and evaluating the recognition results.
2. The satellite image target recognition method based on deep learning as claimed in claim 1, wherein the data annotation in S1 is performed as follows: labeling the n categories of objects to be identified in the satellite image as 1, 2, ..., n, and also labeling objects similar to the targets, all such easily confused objects receiving the unified category label 0; marking the position of each target object with a rectangular box, so that the annotation of each object contains 5 values (l0, l1, l2, l3, l4), where l0 is the category and (l1, l2, l3, l4) is the position information, the four values corresponding to the upper-left and lower-right corners of the box; taking the longest side of the marking box, L = max(l3 - l1, l4 - l2), as its edge, converting the box into a square and then enlarging it by 10% in order to capture more edge and context information and provide more recognition features for the model; and then updating the position annotation, the new annotation being (l0, k1, k2, k3, k4).
3. The satellite image target recognition method based on deep learning as claimed in claim 2, wherein the data segmentation and scaling in S2 proceeds as follows: cutting out and saving the data annotated in S1 according to the position information (k1, k2, k3, k4), saving each segmented piece of data with its corresponding category label l0, and scaling the data for input into the model, all of them to a size of 224 × 224.
4. The satellite image target recognition method based on deep learning as claimed in claim 1, wherein the data division and augmentation in S3 proceeds as follows: dividing the data set into a training set, a validation set and a test set in the ratio 7:2:1, where the training set is used to train the model, the validation set to detect whether the model loss keeps falling, and the test set to test the model's performance; rotating the training set data by 45°, 90° and 135° and adjusting their contrast and brightness, thereby augmenting the data; and mixing the transformed data with the original training set to construct a new data set.
5. The satellite image target recognition method based on deep learning as claimed in claim 1, wherein the model training in S4 proceeds as follows: the first convolution uses a 7 × 7 kernel and all subsequent convolutions use 3 × 3 kernels; apart from the first layer, every two subsequent convolutional layers form a residual block, and the last layer is a fully connected layer that outputs the prediction, which is passed through a sigmoid function to obtain the predicted probability of each category; the network input size is 224 × 224; the processed training set and its corresponding labels are input into the network to train the model; the model is then run on the validation set data to check whether the loss function is still decreasing: if it keeps decreasing, the model is not yet optimal and training continues; if it no longer decreases, the model is saved; the test set data are then tested with the model and compared with the label results; the model outputs, for each category, the probability that the input belongs to it: a probability greater than 0.5 means the input is assigned to that category, while if every category probability is below 0.5 the input is treated as containing no object to be identified; and F1-Score is used to evaluate the recognition effect.
6. The satellite image target recognition method based on deep learning as claimed in claim 5, wherein the evaluation method of the F1-Score is as follows:
precision = TP / (TP + FP)
recall = TP / (TP + FN)
F1 = 2 × precision × recall / (precision + recall)
where F1 denotes the recognition performance; TP is the number of samples of the class predicted correctly, FP is the number of samples of other classes mistakenly predicted as this class, and FN is the number of samples of this class mistakenly predicted as other classes; precision is the proportion of true positive samples among the cases the classifier judges positive, and recall is the proportion of all actual positive cases that are predicted as positive;
if the model's recognition performance F1 reaches the specified F1-Score value, model training is complete; if F1 does not reach the specified F1-Score value, the model parameters are adjusted and model training continues; the F1-Score is the harmonic mean of precision and recall, with a maximum of 1 and a minimum of 0.
7. A satellite image target recognition system based on deep learning, characterized by comprising a data annotation module (1), a data segmentation and scaling module (2), a data division and augmentation module (3) and a model training module (4), wherein the data annotation module (1) is communicatively connected to the data segmentation and scaling module (2), the data segmentation and scaling module (2) to the data division and augmentation module (3), and the data division and augmentation module (3) to the model training module (4).
8. The satellite image target recognition system based on deep learning as claimed in claim 7, wherein the data segmentation and scaling module (2) comprises a data segmentation module (201) and a data scaling module (202), the data segmentation module (201) being connected to the data annotation module (1) and the data scaling module (202) being connected to the data division and augmentation module (3).
9. The satellite image target recognition system based on deep learning as claimed in claim 7, wherein the data division and augmentation module (3) comprises a data division module (301) and a data augmentation module (302), the data division module (301) being connected to the data augmentation module (302), the data division module (301) being connected to the data scaling module (202) of the data segmentation and scaling module (2), and the data augmentation module (302) being connected to the model training module (4).
10. The satellite image target recognition system based on deep learning as claimed in claim 7, wherein the model training module (4) comprises a training module (401), a recognition module (402) and an evaluation module (403), the training module (401) being connected to the data augmentation module (302) of the data division and augmentation module (3), the training module (401) being connected to the recognition module (402), and the recognition module (402) being connected to the evaluation module (403); the model in the training module (401) adopts the ResNet-152 network, and the recognition results are evaluated with F1-Score.
CN202011231793.XA (priority and filing date 2020-11-06): A satellite image target recognition method and system based on deep learning. Active. Granted as CN112232288B.

Priority Applications (1)

CN202011231793.XA, priority and filing date 2020-11-06: A satellite image target recognition method and system based on deep learning

Publications (2)

CN112232288A, published 2021-01-15
CN112232288B, granted and published 2024-11-08

Family

ID=74123251

Family Applications (1)

CN202011231793.XA (filed 2020-11-06, status Active): A satellite image target recognition method and system based on deep learning

Country Status (1)

CN: CN112232288B (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100014755A1 (en) * 2008-07-21 2010-01-21 Charles Lee Wilson System and method for grid-based image segmentation and matching
US20190311479A1 (en) * 2018-04-10 2019-10-10 Sun Yat-Sen University Cancer Center Method and device for identifying pathological picture
WO2020102988A1 (en) * 2018-11-20 2020-05-28 西安电子科技大学 Feature fusion and dense connection based infrared plane target detection method
WO2020119103A1 (en) * 2018-12-13 2020-06-18 程琳 Aero-engine hole detection image damage intelligent identification method based on deep learning
CN110705508A (en) * 2019-10-15 2020-01-17 中国人民解放军战略支援部队航天工程大学 Satellite identification method of ISAR image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
袁秋壮 (Yuan Qiuzhuang); 魏松杰 (Wei Songjie); 罗娜 (Luo Na): "Research on an on-satellite SAR target recognition system based on deep learning neural networks" (基于深度学习神经网络的SAR星上目标识别系统研究), Aerospace Shanghai (上海航天), no. 05, 25 October 2017 (2017-10-25) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113705363A (en) * 2021-08-06 2021-11-26 成都德辰博睿科技有限公司 Method and system for identifying uplink signal of specific satellite
CN113743298A (en) * 2021-09-03 2021-12-03 云南电网有限责任公司电力科学研究院 Power grid foreign matter detection method and device based on satellite image deep learning

Also Published As

CN112232288B, published 2024-11-08


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant