CN111428781A - Remote sensing image ground object classification method and system - Google Patents
- Publication number: CN111428781A (application CN202010201027.2A)
- Authority
- CN
- China
- Prior art keywords
- remote sensing
- sensing image
- network model
- green
- constructing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Molecular Biology (AREA)
- Mathematical Physics (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Biomedical Technology (AREA)
- Software Systems (AREA)
- Astronomy & Astrophysics (AREA)
- Remote Sensing (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention relates to a remote sensing image ground feature classification method comprising the following steps: preprocessing the remote sensing image; selecting red-green-blue (RGB) band, near-infrared/red/green (NRG) band, and full-band data sets from the preprocessed remote sensing image, cropping the images, and constructing a training set and a test set; providing an end-to-end algorithm framework and constructing a network model; and inputting the training set into the constructed network model for training to obtain a network parameter model, so that ground features of remote sensing images can be classified with the obtained model. The invention also relates to a remote sensing image ground object classification system. The method requires no remote sensing image fusion, alleviates the loss of high-resolution information in convolutional neural networks, and classifies detail information in remote sensing images more effectively, so that the ground feature classification results have more accurate detail, richer edge information, and higher overall classification accuracy.
Description
Technical Field
The invention relates to a remote sensing image ground feature classification method and system.
Background
Remote sensing images contain a large amount of detail information (edge information, gradient information, small targets, and the like), while their resolution is limited, so this detail information is difficult to extract and classify during ground feature classification: the final classification result loses much detail, ground feature edges become blurred and distorted, and the final classification accuracy suffers. Accurate classification of detail information requires remote sensing images that provide more high-resolution information, whereas extraction of common ground objects favors images of larger swath for higher extraction efficiency and places lower demands on resolution; at the same time, remote sensing images of different resolutions are difficult to fuse because they come from different sensors.
In recent years, deep learning has developed rapidly in the field of remote sensing image ground feature classification, mainly along two lines: picture-level classification and pixel-level classification.
1) Picture-level classification algorithms take a single image as the discrimination unit: each image contains only one type of ground object, and the overall features of the image are learned through a convolutional neural network. The core of such algorithms is image recognition; the whole image is cut into multiple sub-images each containing a single ground object, and the ground object in each sub-image is recognized separately. The drawback is that no pixel-level classification result can be given, and the problem of inaccurate classification of detail information remains unsolved.
2) Pixel-level classification algorithms take each pixel as the discrimination unit. A fully convolutional network removes the fully connected layers of a convolutional neural network and replaces them with 1 × 1 convolutional layers, realizing an end-to-end (pixel-to-pixel) classification method. This replacement retains the spatial information of the image content, removes the convolutional neural network's restriction on input image size, greatly reduces model parameters, and improves algorithm efficiency. Representative work: Jamie Sherrah proposed an FCN algorithm without down-sampling layers, achieving an overall accuracy of 89.1% on the ISPRS dataset. Marmanis et al. designed a pixel-level segmentation architecture combining FCNs and deconvolution networks, with CRF post-processing for refinement, achieving an overall accuracy of 88.5% on an artificial dataset based on the ISPRS Vaihingen dataset labels. Chen et al. post-processed the FCN segmentation result with an overlay strategy, obtaining higher accuracy than the conventional FCN-8s and SegNet models.
However, when pixel-level classification algorithms are applied to farmland extraction from remote sensing images, a deep convolutional network usually needs to convert high-resolution feature maps into low-resolution ones (pooling) in order to obtain features of regions at different scales, extracting semantic information at different abstraction levels as features for subsequent classification. Resampling is one of the commonly used methods, and this process further causes loss of image detail information (edge information, gradient information, high-frequency signals, and the like), so edges in the ground feature classification result become blurred, details are neither rich nor accurate enough, and the final classification accuracy is affected.
In summary, the main disadvantages of the prior art are: the resolution requirements of detail information differ from those of common ground feature information, and remote sensing images of different resolutions are difficult to fuse because they come from different sensors; traditional remote sensing image classification algorithms depend on hand-crafted feature extraction and are unsuitable for processing large-scale remote sensing images; and in end-to-end methods, the down-sampling process of the convolutional neural network causes loss of high-resolution information, which further hinders the classification of detail information.
Disclosure of Invention
In view of the above, there is a need for a remote sensing image ground feature classification method and system.
The invention provides a remote sensing image ground feature classification method comprising the following steps: a. preprocessing the remote sensing image; b. selecting red-green-blue band, near-infrared/red/green band, and full-band data sets from the preprocessed remote sensing image, cropping the images, and constructing a training set and a test set; c. providing an end-to-end algorithm framework and constructing a network model; d. inputting the training set into the constructed network model for training to obtain a network parameter model, so as to classify ground features of remote sensing images using the obtained network parameter model.
The method further comprises the following step: inputting the test set into the constructed network model and evaluating the classification result.
Step a specifically comprises: performing radiometric correction and spatial-domain enhancement filtering on the remote sensing image using ArcGIS and ENVI.
Step b specifically comprises: selecting red-green-blue band, near-infrared/red/green band, and full-band data of the remote sensing image to construct three corresponding data sets; after constructing the data sets, cropping the remote sensing image into a plurality of 256 × 256 pixel block images to obtain the three cropped data sets; and randomly dividing each data set into a training set and a test set at a ratio of 4:1.
The network model comprises:
a convolutional layer module for extracting features;
a down-sampling layer module for obtaining multi-scale features;
an up-sampling layer module for recovering feature information at each scale;
continuous parallel multi-resolution subnets for retaining features at each scale;
and a repeated multi-scale fusion module for recovering high-resolution information from low-resolution information.
The invention provides a remote sensing image ground feature classification system comprising a preprocessing unit, a training set and test set construction unit, a network model construction unit, and a network model training unit, wherein: the preprocessing unit is used for preprocessing the remote sensing image; the training set and test set construction unit is used for selecting red-green-blue band, near-infrared/red/green band, and full-band data sets from the preprocessed remote sensing image, cropping the images, and constructing a training set and a test set; the network model construction unit is used for providing an end-to-end algorithm framework and constructing a network model; and the network model training unit is used for inputting the training set into the constructed network model for training to obtain a network parameter model, so as to classify ground features of remote sensing images using the obtained network parameter model.
The system further comprises: a network model testing unit for inputting the test set into the constructed network model and evaluating the classification result.
The preprocessing unit is specifically configured to: perform radiometric correction and spatial-domain enhancement filtering on the remote sensing image using ArcGIS and ENVI.
The training set and test set construction unit is specifically configured to: select red-green-blue band, near-infrared/red/green band, and full-band data of the remote sensing image to construct three corresponding data sets; after constructing the data sets, crop the remote sensing image into a plurality of 256 × 256 pixel block images to obtain the three cropped data sets; and randomly divide each data set into a training set and a test set at a ratio of 4:1.
The network model comprises:
a convolutional layer module for extracting features;
a down-sampling layer module for obtaining multi-scale features;
an up-sampling layer module for recovering feature information at each scale;
continuous parallel multi-resolution subnets for retaining features at each scale;
and a repeated multi-scale fusion module for recovering high-resolution information from low-resolution information.
The remote sensing image ground feature classification method and system of the invention require no remote sensing image fusion; they constitute an end-to-end method based on multi-scale features that is also suitable for processing large-scale remote sensing images, alleviates the loss of high-resolution information in convolutional neural networks, and classifies detail information in remote sensing images more effectively. Compared with existing methods such as random forests and the U-Net algorithm, the invention retains more detail information from remote sensing images, so the ground feature classification results have more accurate detail, richer edge information, and higher overall classification accuracy.
Drawings
FIG. 1 is a flow chart of the method for classifying the ground features of the remote sensing image according to the present invention;
FIG. 2 is a schematic diagram of a network model structure constructed based on an end-to-end algorithm framework according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a conventional U-Net full convolution network structure;
FIG. 4 is a diagram of the hardware architecture of the remote sensing image ground object classification system of the present invention;
FIG. 5 is a diagram illustrating comparison of the classification effect of the results of three methods for each data set according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a flowchart illustrating the operation of the method for classifying a feature of a remote sensing image according to a preferred embodiment of the present invention.
Step S1: the remote sensing image is preprocessed. Specifically:
a Landsat remote sensing image is downloaded, and radiometric correction and spatial-domain enhancement filtering are performed on it using ArcGIS and ENVI.
Step S2: specific band data sets are selected from the preprocessed remote sensing image, the images are cropped, and a training set and a test set are constructed. Specifically:
Red-green-blue band, near-infrared/red/green band, and full-band data of the remote sensing image are selected to construct three corresponding data sets; after the data sets are constructed, the remote sensing image is cropped into multiple 256 × 256 pixel block images, yielding the three cropped data sets; each data set is then randomly divided into a training set and a test set at a ratio of 4:1.
Cropping is used in this embodiment because the original image is too large, and cropping makes training faster; the 256 × 256 pixel block images are obtained directly by random sampling from the remote sensing image after the data sets are constructed.
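The random patch sampling and 4:1 split described above can be sketched as follows (a minimal NumPy illustration; the function names, patch count, and seed are hypothetical, not taken from the patent):

```python
import numpy as np

def sample_patches(image, patch_size=256, n_patches=1000, seed=0):
    """Randomly sample patch_size x patch_size blocks from an (H, W, C) image."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    patches = []
    for _ in range(n_patches):
        y = int(rng.integers(0, h - patch_size + 1))
        x = int(rng.integers(0, w - patch_size + 1))
        patches.append(image[y:y + patch_size, x:x + patch_size])
    return patches

def split_train_test(patches, seed=0):
    """Randomly divide the patches into training and test sets at a 4:1 ratio."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(patches))
    cut = len(patches) * 4 // 5
    train = [patches[i] for i in order[:cut]]
    test = [patches[i] for i in order[cut:]]
    return train, test
```

The same sketch applies to each of the three band data sets; only the channel dimension of the input array changes.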
Step S3: an end-to-end algorithm framework is provided and a network model is constructed. Specifically:
The network model of this embodiment is built on the PyTorch deep learning framework; the hardware used is a Titan X GPU.
Compared with a convolutional neural network, the network model constructed by the invention removes the fully connected layers; the main network structure (see fig. 2) comprises a convolutional layer module, a down-sampling layer module, an up-sampling layer module, continuous parallel multi-resolution subnets, and a repeated multi-scale fusion module.
The specific contents are as follows:
(1) The convolutional layer module is used for extracting features.
The main function of the convolutional layer module is feature extraction. Each feature map has its own convolutional layer module, which consists of two convolutional layers with 3 × 3 kernels, followed by a ReLU activation function.
(2) The down-sampling layer module is used for obtaining multi-scale features.
In this embodiment, a convolutional layer module with a stride of 2 replaces the pooling layer of the U-Net fully convolutional network for down-sampling, reducing the loss of high-resolution information while multi-scale features are obtained. This convolutional layer module consists of two 3 × 3 convolutions.
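Replacing pooling with a stride-2 convolution can be sketched as follows (a hedged illustration; the exact channel widths and placement of activations are assumptions, as the patent does not specify them):

```python
import torch
import torch.nn as nn

class DownModule(nn.Module):
    """Stride-2 3x3 convolution in place of a pooling layer, halving H and W,
    followed by a second 3x3 convolution at the lower resolution."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.layers(x)
```

Unlike max pooling, the strided convolution has learnable weights, which is the sense in which it can reduce the loss of high-resolution information during down-sampling.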
(3) The up-sampling layer module is used for recovering feature information at each scale.
Using deconvolution, the feature maps are gradually restored from the lowest-resolution layer to the size of the input picture, yielding multi-scale features and a classification result map of the same size as the input picture.
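Deconvolution-based up-sampling can be sketched with `nn.ConvTranspose2d` (one common realization; the kernel size and stride of 2, which double the spatial size per step, are assumptions not stated in the patent):

```python
import torch
import torch.nn as nn

class UpModule(nn.Module):
    """Transposed convolution (deconvolution) that doubles H and W per step."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_channels, out_channels,
                                     kernel_size=2, stride=2)

    def forward(self, x):
        return self.up(x)
```

Stacking one such module per down-sampling step restores the lowest-resolution feature map to the input picture's size.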
(4) Continuous parallel multi-resolution subnets retain features at each scale.
Whereas a plain U-Net fully convolutional network (see fig. 3) merely merges the original-resolution feature map with the up-sampled feature map through skip connections (horizontal dotted arrows in fig. 3), this embodiment retains the corresponding resolution information in all resolution layers, so the continuous parallel multi-resolution subnets preserve detail information better (horizontal arrows in fig. 2).
(5) The repeated multi-scale fusion module recovers as much high-resolution information as possible from low-resolution information.
In the structure diagram of this embodiment, the oblique downward arrows at the periphery of the network mainly generate the different resolution layers, while the oblique upward arrows inside the network perform multi-scale fusion, recovering high-resolution information from low-resolution information as far as possible and thereby preserving detail information of the ground features in the remote sensing image.
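The multi-scale fusion step can be sketched as resizing every branch's feature map to a target resolution and summing. This simplified illustration assumes all branches share the same channel count; HRNet-style designs insert 1 × 1 convolutions to match channels, a detail omitted here:

```python
import torch
import torch.nn.functional as F

def fuse_branches(features, target_index=0):
    """Bilinearly resize each branch's feature map to the target branch's
    resolution and sum them, so lower-resolution branches contribute to
    the recovery of high-resolution information."""
    target_size = features[target_index].shape[2:]
    fused = torch.zeros_like(features[target_index])
    for f in features:
        fused = fused + F.interpolate(f, size=target_size,
                                      mode="bilinear", align_corners=False)
    return fused
```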
Step S4: the training set is input into the constructed network model for training to obtain a network parameter model, so that ground features of remote sensing images can be classified with the obtained model. Specifically:
The constructed training set is input into the constructed network model; hyperparameters such as a learning rate of 0.0001 and 200 training epochs are set, a loss function is set to optimize the network parameters, and the training process is adjusted according to the training loss curve, finally yielding the trained network parameter model.
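The training step can be sketched as a standard PyTorch loop. This is a minimal illustration; the patent does not name the optimizer or loss function, so Adam and pixel-wise cross-entropy are assumptions:

```python
import torch
import torch.nn as nn

def train_model(model, train_loader, epochs=200, lr=1e-4, device="cpu"):
    """Train the network, recording the per-epoch mean loss as the loss curve."""
    model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)  # lr = 0.0001
    criterion = nn.CrossEntropyLoss()  # assumed pixel-wise classification loss
    loss_curve = []
    for _ in range(epochs):
        epoch_loss = 0.0
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
            epoch_loss += loss.item()
        loss_curve.append(epoch_loss / max(len(train_loader), 1))
    return loss_curve
```

The returned `loss_curve` is the quantity one would plot to adjust the training process, as the embodiment describes.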
Wherein, the network parameter model comprises specific setting parameters of the network model constructed in the step S3.
Step S5: and inputting the test set into the constructed network model, and evaluating a classification result.
The method specifically comprises the following steps:
and inputting the constructed test set into the trained network parameter model to obtain a classification result of the test set image, and performing quantitative and qualitative evaluation on the network structure provided by the embodiment.
Quantitative evaluation classification accuracy was evaluated using overall accuracy, Kappa coefficient and F1-score; and evaluating the detail classification effect through the classification result graph in a qualitative evaluation mode.
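The three quantitative indices can all be computed from a confusion matrix, for example (a minimal NumPy sketch; `f1_score` here is per-class, whereas the patent does not specify whether F1 is averaged across classes):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Count matrix: rows are true classes, columns are predicted classes."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    for t, p in zip(y_true.ravel(), y_pred.ravel()):
        cm[t, p] += 1
    return cm

def overall_accuracy(cm):
    return np.trace(cm) / cm.sum()

def kappa_coefficient(cm):
    n = cm.sum()
    p_observed = np.trace(cm) / n
    p_expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)

def f1_score(cm, cls):
    tp = cm[cls, cls]
    precision = tp / max(cm[:, cls].sum(), 1)
    recall = tp / max(cm[cls, :].sum(), 1)
    return 2 * precision * recall / max(precision + recall, 1e-12)
```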
Fig. 4 is a diagram showing a hardware architecture of the remote sensing image ground object classification system 10 according to the present invention. The system comprises: a preprocessing unit 101, a training set test set construction unit 102, a network model construction unit 103, a network model training unit 104, and a network model test unit 105.
The preprocessing unit 101 is used for preprocessing the remote sensing image. Specifically, the method comprises the following steps:
the preprocessing unit 101 downloads L an and set remote sensing images, and radiation correction and spatial domain enhancement processing filtering are carried out on the remote sensing images by using arcgis and ENVI.
The training set and test set constructing unit 102 is configured to select a specific waveband data set for the preprocessed remote sensing image, perform image clipping, and construct a training set and a test set. Specifically, the method comprises the following steps:
the training set test set construction unit 102 selects red, green and blue wave bands, near infrared and red, green and green wave bands of the remote sensing image and data of full wave bands, respectively constructs three corresponding data sets, respectively cuts the remote sensing image after constructing the data sets into a plurality of block images of 256 pixels by 256 pixels, obtains the three cut data sets, and randomly divides each data set into a training set and a test set according to a ratio of 4: 1.
Cropping is used in this embodiment because the original image is too large, and cropping makes training faster; the 256 × 256 pixel block images are obtained directly by random sampling from the remote sensing image after the data sets are constructed.
The network model constructing unit 103 is configured to provide an end-to-end algorithm framework to construct a network model. Specifically, the method comprises the following steps:
the network model of this embodiment is constructed based on a PyTorch deep learning framework, and the hardware used is a TitanX GPU.
Compared with a convolutional neural network, the network model constructed by the invention removes the fully connected layers; the main network structure (see fig. 2) comprises a convolutional layer module, a down-sampling layer module, an up-sampling layer module, continuous parallel multi-resolution subnets, and a repeated multi-scale fusion module.
The specific contents are as follows:
(1) The convolutional layer module is used for extracting features.
The main function of the convolutional layer module is feature extraction. Each feature map has its own convolutional layer module, which consists of two convolutional layers with 3 × 3 kernels, followed by a ReLU activation function.
(2) The down-sampling layer module is used for obtaining multi-scale features.
In this embodiment, a convolutional layer module with a stride of 2 replaces the pooling layer of the U-Net fully convolutional network for down-sampling, reducing the loss of high-resolution information while multi-scale features are obtained. This convolutional layer module consists of two 3 × 3 convolutions.
(3) The up-sampling layer module is used for recovering feature information at each scale.
Using deconvolution, the feature maps are gradually restored from the lowest-resolution layer to the size of the input picture, yielding multi-scale features and a classification result map of the same size as the input picture.
(4) Continuous parallel multi-resolution subnets retain features at each scale.
Whereas a plain U-Net fully convolutional network (see fig. 3) merely merges the original-resolution feature map with the up-sampled feature map through skip connections (horizontal dotted arrows in fig. 3), this embodiment retains the corresponding resolution information in all resolution layers, so the continuous parallel multi-resolution subnets preserve detail information better (horizontal arrows in fig. 2).
(5) The repeated multi-scale fusion module recovers as much high-resolution information as possible from low-resolution information.
In the structure diagram of this embodiment, the oblique downward arrows at the periphery of the network mainly generate the different resolution layers, while the oblique upward arrows inside the network perform multi-scale fusion, recovering high-resolution information from low-resolution information as far as possible and thereby preserving detail information of the ground features in the remote sensing image.
The network model training unit 104 is configured to input the training set into the constructed network model for training, so as to obtain a network parameter model. Specifically, the method comprises the following steps:
The constructed training set is input into the constructed network model; hyperparameters such as a learning rate of 0.0001 and 200 training epochs are set, a loss function is set to optimize the network parameters, and the training process is adjusted according to the training loss curve, finally yielding the trained network parameter model.
The network parameter model includes specific setting parameters of the network model constructed by the network model construction unit 103.
The network model test unit 105 is configured to input the test set into the constructed network model, and evaluate the classification result. The method specifically comprises the following steps:
and inputting the constructed test set into the trained network parameter model to obtain an extraction result of the test set image, and performing quantitative and qualitative evaluation on the network structure provided by the embodiment.
Quantitative evaluation classification accuracy was evaluated using overall accuracy, Kappa coefficient and F1-score; and evaluating the detail classification effect through the classification result graph in a qualitative evaluation mode.
The first test result of the embodiment of the application:
the network model provided by the application is trained by three training sets formed by L andset images, the test set is used for testing, and meanwhile, the results of the random forest and the U-Net full convolution network are compared:
(1) the classification precision is higher
As shown in table 1, overall accuracy (Acc.), the Kappa coefficient (K), and F1-score (F1) are used as evaluation indices; on the three data sets (TMall, TMnrg), all three indices are higher than those of the random forest and U-Net algorithms.
(2) The detailed information is richer and more accurate
As shown in fig. 5, two pictures are selected from each of the three data sets (TMall, TMnrg); column (a) is the original input image, column (b) the reference label, column (c) the result of the method of the invention, and column (d) the result of the U-Net method. Compared with the U-Net fully convolutional network, the results of the invention have more accurate details and richer, more accurate edge information.
Although the present invention has been described with reference to the presently preferred embodiments, it will be understood by those skilled in the art that the foregoing description is illustrative only and is not intended to limit the scope of the invention, as claimed.
Claims (10)
1. A remote sensing image surface feature classification method is characterized by comprising the following steps:
a. preprocessing the remote sensing image;
b. selecting red-green-blue band, near-infrared/red/green band, and full-band data sets from the preprocessed remote sensing image, cropping the images, and constructing a training set and a test set;
c. providing an end-to-end algorithm framework and constructing a network model;
d. and inputting the training set into the constructed network model for training to obtain a network parameter model so as to classify the ground features of the remote sensing image by using the obtained network parameter model.
2. The method of claim 1, further comprising the step of:
and inputting the test set into the constructed network model, and evaluating a classification result.
3. The method according to claim 1, wherein said step a specifically comprises:
performing radiometric correction and spatial-domain enhancement filtering on the remote sensing image using ArcGIS and ENVI.
4. The method according to claim 3, wherein said step b comprises the steps of:
selecting red-green-blue band, near-infrared/red/green band, and full-band data of the remote sensing image to construct three corresponding data sets; after constructing the data sets, cropping the remote sensing image into a plurality of 256 × 256 pixel block images to obtain the three cropped data sets; and randomly dividing each data set into a training set and a test set at a ratio of 4:1.
5. The method of claim 4, wherein the network model comprises:
a convolutional layer module for extracting characteristics;
obtaining a down-sampling layer module with multi-scale characteristics;
an upper sampling layer module for recovering characteristic information of each scale;
reserving continuous parallel multi-resolution subnets with various scale characteristics;
and a repeated multi-scale fusion module for recovering the high-resolution information from the low-resolution information.
6. The remote sensing image ground feature classification system is characterized by comprising a preprocessing unit, a training set test set building unit, a network model building unit and a network model training unit, wherein:
the preprocessing unit is used for preprocessing the remote sensing image;
the training set and test set constructing unit is used for selecting red-green-blue band, near-infrared/red/green band, and full-band data sets from the preprocessed remote sensing image, cropping the images, and constructing a training set and a test set;
the network model construction unit is used for providing an end-to-end algorithm framework and constructing a network model;
and the network model training unit is used for inputting the training set into the constructed network model for training to obtain a network parameter model so as to classify the ground features of the remote sensing image by using the obtained network parameter model.
7. The system of claim 6, wherein the system further comprises:
and the network model testing unit is used for inputting the testing set into the constructed network model and evaluating the classification result.
8. The system of claim 7, wherein the preprocessing unit is specifically configured to:
perform radiometric correction and spatial-domain enhancement filtering on the remote sensing image by using ArcGIS and ENVI.
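ArcGIS and ENVI perform these steps interactively; the spatial-domain enhancement filtering can be illustrated with a minimal mean-filter sketch (the 3×3 window and edge padding are illustrative assumptions, not taken from the claims):

```python
import numpy as np

def mean_filter(band, size=3):
    """A simple spatial-domain smoothing filter over a single band,
    illustrating the kind of enhancement filtering applied in ENVI.
    Edge pixels are handled by replicating the border values."""
    pad = size // 2
    padded = np.pad(band, pad, mode='edge')
    out = np.empty_like(band, dtype=float)
    h, w = band.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + size, x:x + size].mean()
    return out
```

In practice the filter would be run per band after radiometric correction, before the tiling step of claim 4.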
9. The system of claim 8, wherein the training set and test set construction unit is specifically configured to:
select data of the red, green and blue bands, of the near-infrared, red and green bands, and of all bands of the remote sensing image, and construct three corresponding data sets; after constructing the data sets, cut each remote sensing image into a plurality of 256-pixel by 256-pixel block images to obtain the three cut data sets; and randomly divide each data set into a training set and a test set at a ratio of 4:1.
10. The system of claim 9, wherein the network model comprises:
a convolutional layer module for extracting features;
a down-sampling layer module for obtaining multi-scale features;
an up-sampling layer module for recovering feature information at each scale;
continuous parallel multi-resolution subnetworks for retaining features at multiple scales;
and a repeated multi-scale fusion module for recovering high-resolution information from low-resolution information.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010201027.2A CN111428781A (en) | 2020-03-20 | 2020-03-20 | Remote sensing image ground object classification method and system |
PCT/CN2020/140266 WO2021184891A1 (en) | 2020-03-20 | 2020-12-28 | Remotely-sensed image-based terrain classification method, and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010201027.2A CN111428781A (en) | 2020-03-20 | 2020-03-20 | Remote sensing image ground object classification method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111428781A true CN111428781A (en) | 2020-07-17 |
Family
ID=71548387
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010201027.2A Pending CN111428781A (en) | 2020-03-20 | 2020-03-20 | Remote sensing image ground object classification method and system |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111428781A (en) |
WO (1) | WO2021184891A1 (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112836610A (en) * | 2021-01-26 | 2021-05-25 | 平衡机器科技(深圳)有限公司 | Land use change and carbon reserve quantitative estimation method based on remote sensing data |
WO2021184891A1 (en) * | 2020-03-20 | 2021-09-23 | 中国科学院深圳先进技术研究院 | Remotely-sensed image-based terrain classification method, and system |
CN113989649A (en) * | 2021-11-25 | 2022-01-28 | 江苏科技大学 | Remote sensing land parcel identification method based on deep learning |
CN115797788A (en) * | 2023-02-17 | 2023-03-14 | 武汉大学 | Multimodal railway design element remote sensing feature extraction method based on deep learning |
CN118378780A (en) * | 2024-04-08 | 2024-07-23 | 广西壮族自治区自然资源遥感院 | Environment comprehensive evaluation method and system based on remote sensing image |
CN113989649B (en) * | 2021-11-25 | 2024-10-18 | 江苏科技大学 | Remote sensing land parcel recognition method based on deep learning |
Families Citing this family (59)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113887470B (en) * | 2021-10-15 | 2024-06-14 | 浙江大学 | High-resolution remote sensing image ground object extraction method based on multitask attention mechanism |
CN113902793B (en) * | 2021-11-05 | 2024-05-14 | 长光卫星技术股份有限公司 | Method, system and electronic equipment for predicting end-to-end building height based on single-vision remote sensing image |
CN114091531A (en) * | 2021-11-12 | 2022-02-25 | 哈尔滨工程大学 | Multi-scale-based environmental feature extraction method |
CN114067245A (en) * | 2021-11-16 | 2022-02-18 | 中国铁路兰州局集团有限公司 | Method and system for identifying hidden danger of external environment of railway |
CN114170462A (en) * | 2021-12-06 | 2022-03-11 | 哈尔滨理工大学 | Fine-grained remote sensing ship open set identification method based on convolutional neural network |
CN114332640B (en) * | 2021-12-15 | 2024-08-16 | 水利部南京水利水文自动化研究所 | Ground surface covering identification and area estimation method based on cloud platform and random forest |
CN114494851A (en) * | 2021-12-23 | 2022-05-13 | 青岛星科瑞升信息科技有限公司 | Landslide extraction method based on multi-temporal remote sensing image difference information |
CN113989652B (en) * | 2021-12-27 | 2022-04-26 | 中国测绘科学研究院 | Method and system for detecting farmland change under layered multiple judgment rules |
CN114387512B (en) * | 2021-12-28 | 2024-04-19 | 南京邮电大学 | Remote sensing image building extraction method based on multi-scale feature fusion and enhancement |
CN114612315B (en) * | 2022-01-06 | 2024-08-09 | 东南数字经济发展研究院 | High-resolution image missing region reconstruction method based on multitask learning |
CN114550002A (en) * | 2022-01-12 | 2022-05-27 | 山东锋士信息技术有限公司 | Crop remote sensing image classification method and system based on improved U-Net |
CN114549972B (en) * | 2022-01-17 | 2023-01-03 | 中国矿业大学(北京) | Strip mine stope extraction method, device, equipment and medium |
CN114549534B (en) * | 2022-01-17 | 2022-11-15 | 中国矿业大学(北京) | Mining area land utilization identification method, device, equipment and medium |
CN114529830B (en) * | 2022-01-19 | 2024-09-13 | 重庆邮电大学 | Remote sensing image space-time fusion method based on mixed convolution network |
CN114565858B (en) * | 2022-02-25 | 2024-04-05 | 辽宁师范大学 | Multispectral image change detection method based on geospatial perception low-rank reconstruction network |
CN114743110A (en) * | 2022-03-01 | 2022-07-12 | 西北大学 | Multi-scale nested remote sensing image change detection method and system and computer terminal |
CN114663301B (en) * | 2022-03-05 | 2024-03-08 | 西北工业大学 | Convolutional neural network panchromatic sharpening method based on wavelet layer |
CN114693512A (en) * | 2022-03-16 | 2022-07-01 | 北京理工大学 | Far-field remote sensing image conversion method based on near-field image |
CN114663759A (en) * | 2022-03-24 | 2022-06-24 | 东南大学 | Remote sensing image building extraction method based on improved deep LabV3+ |
CN114862731B (en) * | 2022-03-29 | 2024-04-16 | 武汉大学 | Multi-hyperspectral image fusion method guided by low-rank priori and spatial spectrum information |
CN114724030B (en) * | 2022-04-06 | 2023-06-02 | 西安电子科技大学 | Polarization SAR ground object classification method based on contrast learning |
CN114882139B (en) * | 2022-04-12 | 2024-06-07 | 北京理工大学 | End-to-end intelligent generation method and system for multi-level map |
CN114821354B (en) * | 2022-04-19 | 2024-06-07 | 福州大学 | Urban building change remote sensing detection method based on twin multitasking network |
CN114821315B (en) * | 2022-04-24 | 2024-06-07 | 福州大学 | Remote sensing image cultivated land block extraction method combining edge detection and multitask learning |
CN114998703B (en) * | 2022-05-10 | 2024-03-08 | 西北工业大学 | Remote sensing image change detection method based on high-resolution convolutional neural network |
CN114998756B (en) * | 2022-05-17 | 2024-09-24 | 大连理工大学 | Yolov-based remote sensing image detection method, yolov-based remote sensing image detection device and storage medium |
CN114792116B (en) * | 2022-05-26 | 2024-05-03 | 中国科学院东北地理与农业生态研究所 | Remote sensing classification method for crops in time sequence deep convolution network |
CN114998758B (en) * | 2022-05-26 | 2024-05-03 | 电子科技大学 | Transmission line insulator detection method based on multisource remote sensing satellite images |
CN114898097B (en) * | 2022-06-01 | 2024-05-10 | 首都师范大学 | Image recognition method and system |
CN115035334B (en) * | 2022-06-07 | 2024-09-06 | 西北大学 | Multi-classification change detection method and system for multi-scale fusion double-time-phase remote sensing image |
CN115082808B (en) * | 2022-06-17 | 2023-05-09 | 安徽大学 | Soybean planting area extraction method based on high-resolution first data and U-Net model |
CN114821376B (en) * | 2022-06-27 | 2022-09-20 | 中咨数据有限公司 | Unmanned aerial vehicle image geological disaster automatic extraction method based on deep learning |
CN115100540B (en) * | 2022-06-30 | 2024-05-07 | 电子科技大学 | Automatic road extraction method for high-resolution remote sensing image |
CN115131680B (en) * | 2022-07-05 | 2024-08-20 | 西安电子科技大学 | Remote sensing image water body extraction method based on depth separable convolution and jump connection |
CN115017418B (en) * | 2022-08-10 | 2022-11-01 | 北京数慧时空信息技术有限公司 | Remote sensing image recommendation system and method based on reinforcement learning |
CN115527123B (en) * | 2022-10-21 | 2023-05-05 | 河北省科学院地理科学研究所 | Land cover remote sensing monitoring method based on multisource feature fusion |
CN115661655B (en) * | 2022-11-03 | 2024-03-22 | 重庆市地理信息和遥感应用中心 | Southwest mountain area cultivated land extraction method with hyperspectral and hyperspectral image depth feature fusion |
CN115661681B (en) * | 2022-11-17 | 2023-05-30 | 中国科学院空天信息创新研究院 | Landslide hazard automatic identification method and system based on deep learning |
CN115761346A (en) * | 2022-11-22 | 2023-03-07 | 山东农业工程学院 | Remote sensing image classification method based on multi-model fusion |
CN115797184B (en) * | 2023-02-09 | 2023-06-30 | 天地信息网络研究院(安徽)有限公司 | Super-resolution extraction method for surface water body |
CN115841625B (en) * | 2023-02-23 | 2023-06-06 | 杭州电子科技大学 | Remote sensing building image extraction method based on improved U-Net model |
CN116563210B (en) * | 2023-03-21 | 2023-12-08 | 安徽中新云谷数字技术有限公司 | Virtual reality image quality evaluation method and system |
CN115995005B (en) * | 2023-03-22 | 2023-08-01 | 航天宏图信息技术股份有限公司 | Crop extraction method and device based on single-period high-resolution remote sensing image |
CN116030352B (en) * | 2023-03-29 | 2023-07-25 | 山东锋士信息技术有限公司 | Long-time-sequence land utilization classification method integrating multi-scale segmentation and super-pixel segmentation |
CN116385881B (en) * | 2023-04-10 | 2023-11-14 | 北京卫星信息工程研究所 | Remote sensing image ground feature change detection method and device |
CN116129278B (en) * | 2023-04-10 | 2023-06-30 | 牧马人(山东)勘察测绘集团有限公司 | Land utilization classification and identification system based on remote sensing images |
CN116702065B (en) * | 2023-05-30 | 2024-04-16 | 浙江时空智子大数据有限公司 | Method and system for monitoring ecological treatment pollution of black and odorous water based on image data |
CN116503677B (en) * | 2023-06-28 | 2023-09-05 | 武汉大学 | Wetland classification information extraction method, system, electronic equipment and storage medium |
CN116597318B (en) * | 2023-07-17 | 2023-09-26 | 山东锋士信息技术有限公司 | Irrigation area cultivated land precise extraction method, equipment and storage medium based on remote sensing image |
CN116862317B (en) * | 2023-08-08 | 2024-06-18 | 广西壮族自治区自然资源遥感院 | Satellite remote sensing monitoring system based on project full life cycle performance evaluation management |
CN116740578B (en) * | 2023-08-14 | 2023-10-27 | 北京数慧时空信息技术有限公司 | Remote sensing image recommendation method based on user selection |
CN116778104B (en) * | 2023-08-16 | 2023-11-14 | 江西省国土资源测绘工程总院有限公司 | Mapping method and system for dynamic remote sensing monitoring |
CN117689579B (en) * | 2023-12-12 | 2024-05-03 | 安徽大学 | SAR auxiliary remote sensing image thick cloud removal method with progressive double decoupling |
CN118196614A (en) * | 2023-12-14 | 2024-06-14 | 中国气象局乌鲁木齐沙漠气象研究所 | Mobile sand hill recognition method and device based on remote sensing image and neural network |
CN117456369B (en) * | 2023-12-25 | 2024-02-27 | 广东海洋大学 | Visual recognition method for intelligent mangrove growth condition |
CN117726947A (en) * | 2024-01-05 | 2024-03-19 | 中国空间技术研究院 | Road network distribution monitoring equipment based on high-resolution simulated remote sensing image |
CN118135311A (en) * | 2024-03-13 | 2024-06-04 | 南京北斗创新应用科技研究院有限公司 | Heterogeneous time sequence image wetland monitoring method and device based on improved cascade forests |
CN117975295B (en) * | 2024-04-01 | 2024-06-18 | 南京信息工程大学 | Accumulated snow depth prediction method based on multi-scale feature perception neural network |
CN118230073B (en) * | 2024-05-23 | 2024-07-23 | 青岛浩海网络科技股份有限公司 | Land optimization classification method and system based on remote sensing images under multi-scale visual angles |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109255334A (en) * | 2018-09-27 | 2019-01-22 | 中国电子科技集团公司第五十四研究所 | Remote sensing image terrain classification method based on deep learning semantic segmentation network |
CN110717420A (en) * | 2019-09-25 | 2020-01-21 | 中国科学院深圳先进技术研究院 | Cultivated land extraction method and system based on remote sensing image and electronic equipment |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108564109B (en) * | 2018-03-21 | 2021-08-10 | 天津大学 | Remote sensing image target detection method based on deep learning |
US11030486B2 (en) * | 2018-04-20 | 2021-06-08 | XNOR.ai, Inc. | Image classification through label progression |
CN108805874B (en) * | 2018-06-11 | 2022-04-22 | 中国电子科技集团公司第三研究所 | Multispectral image semantic cutting method based on convolutional neural network |
CN109711449A (en) * | 2018-12-20 | 2019-05-03 | 北京以萨技术股份有限公司 | A kind of image classification algorithms based on full convolutional network |
CN110633633B (en) * | 2019-08-08 | 2022-04-05 | 北京工业大学 | Remote sensing image road extraction method based on self-adaptive threshold |
CN111428781A (en) * | 2020-03-20 | 2020-07-17 | 中国科学院深圳先进技术研究院 | Remote sensing image ground object classification method and system |
2020
- 2020-03-20 CN CN202010201027.2A patent/CN111428781A/en active Pending
- 2020-12-28 WO PCT/CN2020/140266 patent/WO2021184891A1/en active Application Filing
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021184891A1 (en) * | 2020-03-20 | 2021-09-23 | 中国科学院深圳先进技术研究院 | Remotely-sensed image-based terrain classification method, and system |
CN112836610A (en) * | 2021-01-26 | 2021-05-25 | 平衡机器科技(深圳)有限公司 | Land use change and carbon reserve quantitative estimation method based on remote sensing data |
CN112836610B (en) * | 2021-01-26 | 2022-05-27 | 平衡机器科技(深圳)有限公司 | Land use change and carbon reserve quantitative estimation method based on remote sensing data |
CN113989649A (en) * | 2021-11-25 | 2022-01-28 | 江苏科技大学 | Remote sensing land parcel identification method based on deep learning |
CN113989649B (en) * | 2021-11-25 | 2024-10-18 | 江苏科技大学 | Remote sensing land parcel recognition method based on deep learning |
CN115797788A (en) * | 2023-02-17 | 2023-03-14 | 武汉大学 | Multimodal railway design element remote sensing feature extraction method based on deep learning |
CN115797788B (en) * | 2023-02-17 | 2023-04-14 | 武汉大学 | Multimodal railway design element remote sensing feature extraction method based on deep learning |
CN118378780A (en) * | 2024-04-08 | 2024-07-23 | 广西壮族自治区自然资源遥感院 | Environment comprehensive evaluation method and system based on remote sensing image |
Also Published As
Publication number | Publication date |
---|---|
WO2021184891A1 (en) | 2021-09-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111428781A (en) | Remote sensing image ground object classification method and system | |
CN111259905B (en) | Feature fusion remote sensing image semantic segmentation method based on downsampling | |
CN112287940B (en) | Semantic segmentation method of attention mechanism based on deep learning | |
CN111274865B (en) | Remote sensing image cloud detection method and device based on full convolution neural network | |
CN112446383B (en) | License plate recognition method and device, storage medium and terminal | |
CN109102469B (en) | Remote sensing image panchromatic sharpening method based on convolutional neural network | |
CN113887459B (en) | Open-pit mining area stope change area detection method based on improved Unet + | |
CN112154451A (en) | Method, apparatus and computer program for extracting representative features of objects in an image | |
CN113609889B (en) | High-resolution remote sensing image vegetation extraction method based on sensitive characteristic focusing perception | |
CN110544212B (en) | Convolutional neural network hyperspectral image sharpening method based on hierarchical feature fusion | |
CN110826596A (en) | Semantic segmentation method based on multi-scale deformable convolution | |
CN108647568B (en) | Grassland degradation automatic extraction method based on full convolution neural network | |
CN110706239B (en) | Scene segmentation method fusing full convolution neural network and improved ASPP module | |
CN110570440A (en) | Image automatic segmentation method and device based on deep learning edge detection | |
CN114067219A (en) | Farmland crop identification method based on semantic segmentation and superpixel segmentation fusion | |
CN111914909B (en) | Hyperspectral change detection method based on space-spectrum combined three-direction convolution network | |
CN110717420A (en) | Cultivated land extraction method and system based on remote sensing image and electronic equipment | |
CN111680690A (en) | Character recognition method and device | |
CN113887472B (en) | Remote sensing image cloud detection method based on cascade color and texture feature attention | |
CN111951164A (en) | Image super-resolution reconstruction network structure and image reconstruction effect analysis method | |
CN113066030B (en) | Multispectral image panchromatic sharpening method and system based on space-spectrum fusion network | |
CN116091940B (en) | Crop classification and identification method based on high-resolution satellite remote sensing image | |
CN108764287B (en) | Target detection method and system based on deep learning and packet convolution | |
CN115049640B (en) | Road crack detection method based on deep learning | |
CN115497010A (en) | Deep learning-based geographic information identification method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||