CN112232288A - Satellite map target identification method and system based on deep learning - Google Patents
Info
- Publication number
- CN112232288A CN202011231793.XA CN202011231793A
- Authority
- CN
- China
- Prior art keywords
- data
- model
- module
- training
- marking
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Molecular Biology (AREA)
- Multimedia (AREA)
- Software Systems (AREA)
- Mathematical Physics (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Astronomy & Astrophysics (AREA)
- Remote Sensing (AREA)
- Image Analysis (AREA)
Abstract
The invention belongs to the technical field of image recognition and processing, and specifically relates to a satellite map target recognition method and system based on deep learning, comprising the following steps: S1, data annotation; S2, data segmentation and scaling; S3, data division and augmentation; S4, model training. By training a dedicated recognition network on the individual units to be recognized within a large satellite image, the method overcomes the problem that a large satellite image cannot be recognized directly. When the image of an object to be recognized is segmented, part of the surrounding context information is retained, providing more features for the recognition network; confusable data is also added, helping the network distinguish similar objects and improving recognition accuracy; and the deep network model ResNet-152 is used, ensuring the recognition effect. The method is used for recognizing targets in satellite maps.
Description
Technical Field
The invention belongs to the technical field of image recognition and processing, and specifically relates to a satellite map target recognition method and system based on deep learning.
Background
Finding specific objects, facilities and events in satellite images is an important problem, and satellite imagery must be preprocessed efficiently before it can be fed into a deep network. To keep processing time reasonable, deep learning requires relatively small, fixed-size images. Satellite images, however, do not fit this mold, because the objects and facilities in them can be much larger than in ordinary photographs. When the image is compressed to a smaller size, small details in it disappear, and those details may include important distinguishing features.
Problems or disadvantages of the prior art: traditional target detection and classification algorithms suffer from inaccuracy and unreliability.
Disclosure of Invention
To address the inaccuracy and unreliability of traditional target detection and classification algorithms, the invention provides a satellite map target recognition method and system based on deep learning, with high accuracy, strong reliability and a good recognition effect.
In order to solve the above technical problems, the invention adopts the following technical scheme:
a satellite map target identification method based on deep learning comprises the following steps:
S1, data annotation; the objects to be recognized in the satellite image are annotated, the annotation information including the object category, the object position and confusable objects; at the same time, the annotation box is changed from a rectangle to a square and the annotation range is enlarged, so that the context information around the target to be recognized is included in the annotation;
S2, data segmentation and scaling; the annotated objects are cut out of the satellite image according to their position information, the segmented content including the annotation information, some data around the annotated edges, and the confusable objects; together with the images of the objects to be recognized, this forms a data set, whose data is scaled to the input size of the convolutional neural network;
S3, data division and augmentation; the data set is divided into a training set, a validation set and a test set, and the training set is rotated, color-transformed and augmented in quantity;
S4, model training; the constructed training set and the corresponding labels are fed into the network, and the model is trained through iterative optimization. When the training loss no longer decreases, training continues with the validation-set data: if the loss still does not decrease, the model is saved; if the loss keeps decreasing, iterative training with the training set continues until the loss value is stable, and the model is then saved. Finally, the test-set data is fed into the trained model for recognition and the recognition result is evaluated.
The data annotation in S1 is computed as follows: the n categories of objects to be recognized in the satellite image are labeled 1, 2, ..., n; objects similar to the objects to be recognized are also annotated, these confusable objects receiving the uniform category label 0. The position of each target object is recorded during annotation with a rectangular box, so the annotation of each object contains 5 values (l0, l1, l2, l3, l4), where l0 is the category information and (l1, l2, l3, l4) is the position information, the four values corresponding to the x and y coordinates of the upper-left and lower-right corners of the box. Taking the longer edge of the position box, L = max(l3 - l1, l4 - l2), as its side, the annotation box is converted into a square; the square is then enlarged by 10% to capture more edge and context information and provide more recognition features for the model, after which the position annotation is updated, the new annotation being (l0, k1, k2, k3, k4).
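The box-squaring step above can be sketched as follows. This is a minimal sketch, assuming (l1, l2, l3, l4) are pixel coordinates of the box's upper-left and lower-right corners; the function name is illustrative and not from the patent, while the max-edge rule and the 10% enlargement follow the description.

```python
def square_and_expand(box, expand=0.10):
    """Convert a rectangular annotation box (x1, y1, x2, y2) into a square
    whose side is the longer box edge, then enlarge it by `expand` (10%)
    so that context around the target is retained, as described in S1."""
    x1, y1, x2, y2 = box
    side = max(x2 - x1, y2 - y1)            # L = max(width, height)
    side *= 1.0 + expand                    # enlarge by 10% for context
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    half = side / 2.0
    # new square annotation (k1, k2, k3, k4), centered on the original box
    return (cx - half, cy - half, cx + half, cy + half)
```

For example, a 100 x 50 box becomes a 110 x 110 square around the same center.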
The data segmentation and scaling in S2 proceeds as follows: according to the position information (k1, k2, k3, k4), the annotated regions of the data completed in S1 are cut out and saved, each segmented item being stored under its corresponding category label l0; the data is then scaled for input into the model, all of it being resized to 224 x 224.
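The crop-and-resize step can be sketched as follows. This is an assumed implementation using the Pillow library; the patent does not prescribe a specific toolkit, and the function name is illustrative.

```python
from PIL import Image


def crop_and_scale(img, box, size=224):
    """Cut the annotated square region (k1, k2, k3, k4) out of the full
    satellite image and scale it to the fixed 224 x 224 network input (S2)."""
    # Pillow's crop tolerates coordinates outside the image bounds,
    # which can happen after the 10% enlargement near image edges.
    patch = img.crop(tuple(int(round(v)) for v in box))
    return patch.resize((size, size), Image.BILINEAR)
```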
The data division and augmentation in S3 proceeds as follows: the data set is divided into a training set, a validation set and a test set in the ratio 7 : 2 : 1; the training set is used to train the model, the validation set to detect whether the model loss is still falling, and the test set to test the model's performance. The training-set data is rotated by 45, 90 and 135 degrees and its contrast and brightness are adjusted, augmenting the amount of data; the transformed data is mixed with the original training set to construct a new data set.
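The 7:2:1 split and the rotation/contrast/brightness augmentation can be sketched as follows. These are hypothetical helpers: the patent fixes only the split ratio and the rotation angles, so the random seed and the enhancement factors here are illustrative assumptions.

```python
import random
from PIL import Image, ImageEnhance


def split_dataset(samples, seed=0):
    """Shuffle and divide samples 7 : 2 : 1 into train / validation / test (S3)."""
    rng = random.Random(seed)
    samples = samples[:]
    rng.shuffle(samples)
    n = len(samples)
    n_train, n_val = int(n * 0.7), int(n * 0.2)
    return (samples[:n_train],
            samples[n_train:n_train + n_val],
            samples[n_train + n_val:])


def augment(patch):
    """Return rotated (45/90/135 degree) and contrast/brightness-adjusted
    copies of one training patch; these are mixed back into the training set."""
    out = [patch.rotate(a) for a in (45, 90, 135)]
    out.append(ImageEnhance.Contrast(patch).enhance(1.3))    # factor assumed
    out.append(ImageEnhance.Brightness(patch).enhance(0.8))  # factor assumed
    return out
```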
The model training in S4 proceeds as follows: the first convolution uses a 7 x 7 kernel and all subsequent convolutions use 3 x 3 kernels; after the first layer, every two convolution layers form a residual block, and the last layer is a fully connected layer that outputs the prediction result, which is passed through a sigmoid function to obtain the predicted probability of each category. The network input size is 224 x 224. The processed training set and its corresponding labels are fed into the network to train the model; the model is then run on the validation-set data to check whether the loss function is still decreasing: if it keeps decreasing, the model is not yet optimal and training continues; if it no longer decreases, the model is saved. The test-set data is then evaluated with the model and compared against the labels: the model outputs, for each category, the probability that the data belongs to it; if a category's probability is greater than 0.5, the data is assigned to that category, and if every category's probability is less than 0.5, the data is taken to contain no object to be recognized. F1-Score is used to evaluate the recognition effect.
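The per-category sigmoid threshold rule at the end of S4 can be sketched as follows. This is a minimal sketch of the 0.5 decision rule only; constructing the full ResNet-152 backbone (available, for example, as a pretrained model in common deep-learning libraries) is omitted, and the function name is illustrative.

```python
import math


def decide(logits, threshold=0.5):
    """Apply a sigmoid to each class logit and return the indices of the
    classes whose probability exceeds the threshold; an empty list means
    the patch contains no object to be recognized (the 0.5 rule in S4)."""
    probs = [1.0 / (1.0 + math.exp(-z)) for z in logits]
    return [i for i, p in enumerate(probs) if p > threshold]
```

For example, logits of (2.0, -2.0, 0.1) give probabilities of roughly (0.88, 0.12, 0.52), so classes 0 and 2 are reported.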
The F1-Score is evaluated as follows:
F1 measures the recognition effect; TP is the number of samples correctly predicted as this class, FP the number of samples of other classes wrongly predicted as this class, and FN the number of samples of this class wrongly predicted as other classes. Precision = TP / (TP + FP) is the proportion of true positives among the samples the classifier judges positive, recall = TP / (TP + FN) is the proportion of positive samples that are predicted positive, and F1 = 2 x precision x recall / (precision + recall);
if the recognition effect F1 of the model reaches the specified F1-Score value, model training is complete; if F1 does not reach the specified F1-Score value, the model parameters are adjusted and training continues. The F1-Score is the harmonic mean of precision and recall, with a maximum of 1 and a minimum of 0.
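The F1 evaluation can be computed directly from the TP/FP/FN counts described above. A minimal sketch (function name illustrative):

```python
def f1_score(tp, fp, fn):
    """F1 as the harmonic mean of precision and recall, the criterion
    used here to evaluate the recognition results (max 1, min 0)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

With 8 true positives, 2 false positives and 2 false negatives, precision and recall are both 0.8, so F1 is 0.8.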
A satellite map target recognition system based on deep learning comprises a data annotation module, a data segmentation and scaling module, a data division and augmentation module and a model training module; the data annotation module is communicatively connected to the data segmentation and scaling module, the data segmentation and scaling module to the data division and augmentation module, and the data division and augmentation module to the model training module.
The data segmentation and scaling module comprises a data segmentation module and a data scaling module; the data segmentation module is connected to the data annotation module, and the data scaling module is connected to the data division and augmentation module.
The data division and augmentation module comprises a data division module and a data augmentation module; the data division module is connected to the data scaling module of the data segmentation and scaling module, and the data augmentation module is connected to the model training module.
The model training module comprises a training module, a recognition module and an evaluation module; the training module is connected to the data augmentation module of the data division and augmentation module, the training module is connected to the recognition module, and the recognition module is connected to the evaluation module; the model in the training module is a ResNet-152 network model, and the recognition module uses F1-Score.
Compared with the prior art, the invention has the following beneficial effects:
By training a dedicated recognition network on the individual units to be recognized within a large satellite image, the method overcomes the problem that a large satellite image cannot be recognized directly. When the image of an object to be recognized is segmented, part of the surrounding context information is retained, providing more features for the recognition network; confusable data is also added, helping the network distinguish similar objects and improving recognition accuracy; and the deep network model ResNet-152 is used, ensuring the recognition effect.
Drawings
FIG. 1 is a flow chart of the operation of the present invention;
FIG. 2 is a schematic structural view of the present invention;
FIG. 3 is a flow chart of model training according to the present invention.
In the figures: 1, data annotation module; 2, data segmentation and scaling module; 3, data division and augmentation module; 4, model training module; 201, data segmentation module; 202, data scaling module; 301, data division module; 302, data augmentation module; 401, training module; 402, recognition module; 403, evaluation module.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
A satellite map target recognition method based on deep learning, as shown in FIG. 1, comprises the following steps:
Step 1, data annotation; the objects to be recognized in the satellite image are annotated, the annotation information including the object category, the object position and confusable objects; at the same time, the annotation box is changed from a rectangle to a square and the annotation range is enlarged, so that the context information around the target to be recognized is included in the annotation;
Further, the data annotation in step 1 is computed as follows: the n categories of objects to be recognized in the satellite image are labeled 1, 2, ..., n; objects similar to the objects to be recognized are also annotated, these confusable objects receiving the uniform category label 0. The position of each target object is recorded during annotation with a rectangular box, so the annotation of each object contains 5 values (l0, l1, l2, l3, l4), where l0 is the category information and (l1, l2, l3, l4) is the position information, the four values corresponding to the x and y coordinates of the upper-left and lower-right corners of the box. Taking the longer edge of the position box, L = max(l3 - l1, l4 - l2), as its side, the annotation box is converted into a square; the square is then enlarged by 10% to capture more edge and context information and provide more recognition features for the model, after which the position annotation is updated, the new annotation being (l0, k1, k2, k3, k4).
Further, the data segmentation and scaling in step 2 proceeds as follows: according to the position information (k1, k2, k3, k4), the annotated regions of the data completed in step 1 are cut out and saved, each segmented item being stored under its corresponding category label l0; the data is then scaled for input into the model, all of it being resized to 224 x 224.
Further, the data division and augmentation in step 3 proceeds as follows: the data set is divided into a training set, a validation set and a test set in the ratio 7 : 2 : 1; the training set is used to train the model, the validation set to detect whether the model loss is still falling, and the test set to test the model's performance. The training-set data is rotated by 45, 90 and 135 degrees and its contrast and brightness are adjusted, augmenting the amount of data; the transformed data is mixed with the original training set to construct a new data set.
Further, as shown in FIG. 3, the model training in step 4 proceeds as follows: the first convolution uses a 7 x 7 kernel and all subsequent convolutions use 3 x 3 kernels; after the first layer, every two convolution layers form a residual block, and the last layer is a fully connected layer that outputs the prediction result, which is passed through a sigmoid function to obtain the predicted probability of each category. The network input size is 224 x 224. The processed training set and its corresponding labels are fed into the network to train the model; the model is then run on the validation-set data to check whether the loss function is still decreasing: if it keeps decreasing, the model is not yet optimal and training continues; if it no longer decreases, the model is saved. The test-set data is then evaluated with the model and compared against the labels: the model outputs, for each category, the probability that the data belongs to it; if a category's probability is greater than 0.5, the data is assigned to that category, and if every category's probability is less than 0.5, the data is taken to contain no object to be recognized. F1-Score is used to evaluate the recognition effect.
Further, the F1-Score is evaluated as follows:
F1 measures the recognition effect; TP is the number of samples correctly predicted as this class, FP the number of samples of other classes wrongly predicted as this class, and FN the number of samples of this class wrongly predicted as other classes. Precision = TP / (TP + FP) is the proportion of true positives among the samples the classifier judges positive, recall = TP / (TP + FN) is the proportion of positive samples that are predicted positive, and F1 = 2 x precision x recall / (precision + recall);
if the recognition effect F1 of the model reaches the specified F1-Score value, model training is complete; if F1 does not reach the specified F1-Score value, the model parameters are adjusted and training continues. The F1-Score is the harmonic mean of precision and recall, with a maximum of 1 and a minimum of 0.
A satellite map target recognition system based on deep learning, shown in FIG. 2, comprises a data annotation module 1, a data segmentation and scaling module 2, a data division and augmentation module 3 and a model training module 4; the data annotation module 1 is communicatively connected to the data segmentation and scaling module 2, the data segmentation and scaling module 2 to the data division and augmentation module 3, and the data division and augmentation module 3 to the model training module 4. The data annotation module 1 annotates the objects to be recognized in the satellite map, converts the annotation box from a rectangle to a square, enlarges the annotation range, and includes the context information around the target to be recognized; the data segmentation and scaling module 2 cuts the annotated objects out of the satellite map according to their position information and scales the data of the data set to the input size of the convolutional neural network; the data division and augmentation module 3 divides the data set proportionally into a training set, a validation set and a test set, and rotates, color-transforms and augments the training-set data; the model training module 4 feeds the constructed training set and the corresponding labels into the network, trains the model through iterative optimization, trains it further with the validation-set data, feeds the test-set data into the trained model for recognition, and evaluates the recognition result.
Further, the data segmentation and scaling module 2 comprises a data segmentation module 201 and a data scaling module 202; the data segmentation module 201 is connected to the data annotation module 1, and the data scaling module 202 is connected to the data division and augmentation module 3.
Further, the data division and augmentation module 3 comprises a data division module 301 and a data augmentation module 302; the data division module 301 is connected to the data scaling module 202 of the data segmentation and scaling module 2, and the data augmentation module 302 is connected to the model training module 4.
Further, the model training module 4 comprises a training module 401, a recognition module 402 and an evaluation module 403; the training module 401 is connected to the data augmentation module 302 of the data division and augmentation module 3, the training module 401 is connected to the recognition module 402, and the recognition module 402 is connected to the evaluation module 403; the model in the training module 401 is a ResNet-152 network model, and the recognition module 402 uses F1-Score.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media having computer-usable program code embodied in the medium, including but not limited to disk storage, CD-ROM, optical storage, and the like.
Although only the preferred embodiments of the present invention have been described in detail, the present invention is not limited to the above embodiments, and various changes can be made without departing from the spirit of the present invention within the knowledge of those skilled in the art, and all changes are encompassed in the scope of the present invention.
Claims (10)
1. A satellite map target recognition method based on deep learning, characterized by comprising the following steps:
S1, data annotation; annotating the objects to be recognized in the satellite image, the annotation information including the object category, the object position and confusable objects, while changing the annotation box from a rectangle to a square and enlarging the annotation range so that the context information around the target to be recognized is included;
S2, data segmentation and scaling; cutting the annotated objects out of the satellite image according to their position information, the segmented content including the annotation information, some data around the annotated edges, and the confusable objects, establishing a data set together with the images of the objects to be recognized, and scaling the data of the data set to the input size of the convolutional neural network;
S3, data division and augmentation; dividing the data set into a training set, a validation set and a test set, and rotating, color-transforming and augmenting the training set;
S4, model training; feeding the constructed training set and the corresponding labels into the network and training the model through iterative optimization; when the training loss no longer decreases, continuing with the validation-set data: if the loss still does not decrease, saving the model, and if the loss keeps decreasing, continuing iterative training with the training set until the loss value is stable and then saving the model; and feeding the test-set data into the trained model for recognition and evaluating the recognition result.
2. The satellite map target recognition method based on deep learning as claimed in claim 1, wherein the data annotation in S1 is computed as follows: the n categories of objects to be recognized in the satellite image are labeled 1, 2, ..., n; objects similar to the objects to be recognized are also annotated, these confusable objects receiving the uniform category label 0; the position of each target object is recorded during annotation with a rectangular box, so the annotation of each object contains 5 values (l0, l1, l2, l3, l4), where l0 is the category information and (l1, l2, l3, l4) is the position information, the four values corresponding to the x and y coordinates of the upper-left and lower-right corners of the box; taking the longer edge of the position box, L = max(l3 - l1, l4 - l2), as its side, the annotation box is converted into a square, the square is enlarged by 10% to capture more edge and context information and provide more recognition features for the model, and the position annotation is then updated, the new annotation being (l0, k1, k2, k3, k4).
3. The satellite map target recognition method based on deep learning as claimed in claim 2, wherein the data segmentation and scaling in S2 proceeds as follows: according to the position information (k1, k2, k3, k4), the annotated regions of the data completed in S1 are cut out and saved, each segmented item being stored under its corresponding category label l0, and the data is then scaled for input into the model, all of it being resized to 224 x 224.
4. The satellite map target recognition method based on deep learning as claimed in claim 1, wherein the data division and augmentation in S3 proceeds as follows: the data set is divided into a training set, a validation set and a test set in the ratio 7 : 2 : 1; the training set is used to train the model, the validation set to detect whether the model loss is still falling, and the test set to test the model's performance; the training-set data is rotated by 45, 90 and 135 degrees and its contrast and brightness are adjusted, augmenting the amount of data, and the transformed data is mixed with the original training set to construct a new data set.
5. The satellite map target recognition method based on deep learning as claimed in claim 1, wherein the model training in S4 proceeds as follows: the first convolution uses a 7 x 7 kernel and all subsequent convolutions use 3 x 3 kernels; after the first layer, every two convolution layers form a residual block, and the last layer is a fully connected layer that outputs the prediction result, which is passed through a sigmoid function to obtain the predicted probability of each category; the network input size is 224 x 224; the processed training set and its corresponding labels are fed into the network to train the model, and the model is then run on the validation-set data to check whether the loss function is still decreasing: if it keeps decreasing, the model is not yet optimal and training continues; if it no longer decreases, the model is saved; the test-set data is then evaluated with the model and compared against the labels: the model outputs, for each category, the probability that the data belongs to it; if a category's probability is greater than 0.5, the data is assigned to that category, and if every category's probability is less than 0.5, the data is taken to contain no object to be recognized; F1-Score is used to evaluate the recognition effect.
6. The satellite map target recognition method based on deep learning as claimed in claim 5, wherein the F1-Score is evaluated as follows:
F1 measures the recognition effect; TP is the number of samples correctly predicted as this class, FP the number of samples of other classes wrongly predicted as this class, and FN the number of samples of this class wrongly predicted as other classes; precision = TP / (TP + FP) is the proportion of true positives among the samples the classifier judges positive, recall = TP / (TP + FN) is the proportion of positive samples that are predicted positive, and F1 = 2 x precision x recall / (precision + recall);
if the recognition effect F1 of the model reaches the specified F1-Score value, model training is complete, and if F1 does not reach the specified F1-Score value, the model parameters are adjusted and training continues; the F1-Score is the harmonic mean of precision and recall, with a maximum of 1 and a minimum of 0.
7. A satellite map target recognition system based on deep learning, characterized by comprising a data annotation module (1), a data segmentation and scaling module (2), a data division and augmentation module (3) and a model training module (4); the data annotation module (1) is communicatively connected to the data segmentation and scaling module (2), the data segmentation and scaling module (2) to the data division and augmentation module (3), and the data division and augmentation module (3) to the model training module (4).
8. The satellite map target recognition system based on deep learning of claim 7, wherein: the data segmentation and scaling module (2) comprises a data segmentation module (201) and a data scaling module (202), the data segmentation module (201) is connected with the data labeling module (1), and the data scaling module (202) is connected with the data division and amplification module (3).
9. The satellite map target recognition system based on deep learning of claim 7, wherein: the data division and amplification module (3) comprises a data division module (301) and a data amplification module (302), the data division module (301) is connected with the data amplification module (302), the data division module (301) is connected with a data scaling module (202) of the data division and scaling module (2), and the data amplification module (302) is connected with the model training module (4).
10. The satellite map target recognition system based on deep learning of claim 7, wherein: the model training module (4) comprises a training module (401), a recognition module (402) and an evaluation module (403), wherein the training module (401) is connected with the data amplification module (302) of the data division and amplification module (3), the training module (401) is connected with the recognition module (402), and the recognition module (402) is connected with the evaluation module (403); the training module (401) adopts a ResNet-152 network model, and the evaluation module (403) adopts the F1-Score as the evaluation metric.
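The module chain of claims 7-10 can be sketched as below; the module boundaries follow the claims, but the concrete operations (tile size, the 8:2 split, flip augmentation) are illustrative assumptions not specified in this excerpt:

```python
def segment_and_scale(image, tile=4):
    """Data segmentation and scaling module (2): cut a 2-D pixel grid
    into fixed-size tiles (scaling to model input size omitted)."""
    h, w = len(image), len(image[0])
    return [
        [row[x:x + tile] for row in image[y:y + tile]]
        for y in range(0, h, tile)
        for x in range(0, w, tile)
    ]


def divide(samples, train_ratio=0.8):
    """Data division module (301): split samples into training and
    validation sets (an 8:2 ratio is assumed here)."""
    k = int(len(samples) * train_ratio)
    return samples[:k], samples[k:]


def augment(samples):
    """Data amplification module (302): enlarge the training set, here
    by adding horizontally flipped copies (assumed augmentation)."""
    return samples + [[row[::-1] for row in s] for s in samples]


# Wiring per claim 7: labeling -> segmentation/scaling ->
# division/amplification -> model training (training itself omitted).
def prepare(labeled_image):
    tiles = segment_and_scale(labeled_image)
    train, val = divide(tiles)
    return augment(train), val
```

An 8x8 input thus yields four 4x4 tiles, split 3:1 into training and validation, with the training tiles doubled by flipping before they reach the training module (401).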
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011231793.XA CN112232288A (en) | 2020-11-06 | 2020-11-06 | Satellite map target identification method and system based on deep learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011231793.XA CN112232288A (en) | 2020-11-06 | 2020-11-06 | Satellite map target identification method and system based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112232288A (en) | 2021-01-15 |
Family
ID=74123251
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011231793.XA Pending CN112232288A (en) | 2020-11-06 | 2020-11-06 | Satellite map target identification method and system based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112232288A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113705363A (en) * | 2021-08-06 | 2021-11-26 | 成都德辰博睿科技有限公司 | Method and system for identifying uplink signal of specific satellite |
CN113743298A (en) * | 2021-09-03 | 2021-12-03 | 云南电网有限责任公司电力科学研究院 | Power grid foreign matter detection method and device based on satellite image deep learning |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7447338B2 (en) | Method and system for face detection using pattern classifier | |
US6915025B2 (en) | Automatic image orientation detection based on classification of low-level image features | |
CN112801169B (en) | Camouflage target detection method, system, device and storage medium based on improved YOLO algorithm | |
EP3478728A1 (en) | Method and system for cell annotation with adaptive incremental learning | |
CN108537215A (en) | A kind of flame detecting method based on image object detection | |
CN108830332A (en) | A kind of vision vehicle checking method and system | |
CN107330027B (en) | Weak supervision depth station caption detection method | |
CN111340126A (en) | Article identification method and device, computer equipment and storage medium | |
CN111310826B (en) | Method and device for detecting labeling abnormality of sample set and electronic equipment | |
CN112418208B (en) | Tiny-YOLO v 3-based weld film character recognition method | |
CN113487610B (en) | Herpes image recognition method and device, computer equipment and storage medium | |
CN111382766A (en) | Equipment fault detection method based on fast R-CNN | |
CN111368632A (en) | Signature identification method and device | |
CN111552837A (en) | Animal video tag automatic generation method based on deep learning, terminal and medium | |
CN112232288A (en) | Satellite map target identification method and system based on deep learning | |
CN111652117B (en) | Method and medium for segmenting multiple document images | |
CN115115825B (en) | Method, device, computer equipment and storage medium for detecting object in image | |
CN110866931B (en) | Image segmentation model training method and classification-based enhanced image segmentation method | |
CN116385374A (en) | Cell counting method based on convolutional neural network | |
CN117593514B (en) | Image target detection method and system based on deep principal component analysis assistance | |
CN114882204A (en) | Automatic ship name recognition method | |
CN111914706A (en) | Method and device for detecting and controlling quality of character detection output result | |
WO2024021321A1 (en) | Model generation method and apparatus, electronic device, and storage medium | |
CN116030341A (en) | Plant leaf disease detection method based on deep learning, computer equipment and storage medium | |
CN114821062A (en) | Commodity identification method and device based on image segmentation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||