CN113743383A - SAR image water body extraction method and device, electronic equipment and storage medium - Google Patents

SAR image water body extraction method and device, electronic equipment and storage medium

Info

Publication number
CN113743383A
Authority
CN
China
Prior art keywords
feature map
water body
sar image
feature
semantic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111302938.5A
Other languages
Chinese (zh)
Other versions
CN113743383B (en)
Inventor
王宇翔
邹舒畅
张攀
路超然
李彦
沈均平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Aerospace Hongtu Information Technology Co Ltd
Original Assignee
Aerospace Hongtu Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Aerospace Hongtu Information Technology Co Ltd
Priority to CN202111302938.5A
Publication of CN113743383A
Application granted
Publication of CN113743383B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/30Assessment of water resources

Abstract

The application provides a method and device for extracting water bodies from SAR images, an electronic device, and a storage medium, relating to the technical field of remote sensing image processing. The method comprises the following steps: acquiring an SAR image to be detected; preprocessing the SAR image to be detected; and processing the preprocessed SAR image through a water body segmentation model to obtain a water body segmentation result. The water body segmentation model is a DeepLabv3+ semantic segmentation model augmented with a double attention mechanism, trained on a sample set containing SAR images and DEM data. The method and device can improve both the accuracy and the efficiency of SAR image water body extraction.

Description

SAR image water body extraction method and device, electronic equipment and storage medium
Technical Field
The application relates to the technical field of remote sensing image processing, and in particular to a method and device for extracting water bodies from SAR images, an electronic device, and a storage medium.
Background
Synthetic Aperture Radar (SAR) has several advantages over optical remote sensing for mapping the extent of surface water: SAR is an active sensor that does not depend on solar illumination and can observe day and night; its electromagnetic waves lie in the microwave band (wavelengths of roughly 1 mm to 1 m), penetrate cloud and fog for all-weather observation, and can even penetrate the tree canopy to detect water beneath forest; and open, smooth water bodies (no waves, or only ripples small relative to the SAR wavelength) generally exhibit low, uniform backscatter coefficients in SAR images. Much research and application has addressed water body extraction from SAR; the main families of SAR-based water extraction methods are listed below:
the water body extraction method based on threshold segmentation: exploiting the low brightness of water in SAR backscatter images, a global or local segmentation threshold is determined by empirical rules, the bimodal method, the maximum between-class variance (Otsu) algorithm, multi-threshold segmentation, entropy-based thresholding, and similar techniques; pixels below the threshold are labeled as water. These methods are simple and computationally efficient, but their accuracy is limited: they cannot distinguish ground objects whose scattering characteristics resemble those of water.
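As an illustration of the maximum between-class variance (Otsu) algorithm mentioned above, a minimal NumPy sketch follows; it is not code from the application, and the histogram bin count is an arbitrary choice:

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Return the gray level that maximizes between-class variance.

    Pixels below the returned threshold would be labeled as water
    in the low-backscatter convention described above.
    """
    hist, edges = np.histogram(img.ravel(), bins=nbins)
    hist = hist.astype(float)
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(hist)                     # class-0 (water) pixel count per threshold
    w1 = w0[-1] - w0                         # class-1 (background) pixel count
    m0 = np.cumsum(hist * centers)
    mu0 = m0 / np.maximum(w0, 1e-12)         # class-0 mean gray level
    mu1 = (m0[-1] - m0) / np.maximum(w1, 1e-12)
    between_var = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(between_var)]
```

On a clearly bimodal backscatter histogram the returned threshold falls between the water and background modes; on scenes containing water-like low-backscatter objects it fails in exactly the way the paragraph above describes.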
The water body extraction method based on texture: the surface roughness of water is markedly lower than that of other land cover, so water appears as an extremely homogeneous region with little texture variation in SAR images. Homogeneity parameters are computed from the gray-level co-occurrence matrix, and highly homogeneous regions are labeled as water. Such methods are affected by land cover with texture similar to water, such as sparse grassland and bare soil.
The water body extraction method based on polarimetric decomposition: polarimetric characteristics reflect the structure of ground objects, and different structures have different backscattering mechanisms. Using incoherent decomposition, the target coherency matrix can be decomposed into the sum of three components — surface scattering, dihedral-angle scattering, and volume scattering — and the power of water in these three components is far lower than that of other land cover. Such methods can be disturbed by targets with similar scattering mechanisms, such as roofs and airport runways.
The water body extraction method based on machine learning: the backscatter coefficient of the SAR image and derived indices are extracted as features, combined with water labels to form training samples, and fed into a classifier such as a support vector machine or random forest. The classification quality of these methods is good, but the SAR data must be preprocessed to extract effective features for training.
With the rapid development of deep learning in remote sensing, models such as FCN and U-Net have been experimentally validated for SAR image semantic segmentation. However, constrained by the SAR imagery, processing software, samples, and the deep learning algorithms themselves, the accuracy of deep-learning-based water extraction from remote sensing images has generally remained low.
Disclosure of Invention
In view of this, the present application provides a method and apparatus for extracting water bodies from SAR images, an electronic device, and a storage medium, to address the low precision and low efficiency of existing deep-learning-based water extraction from remote sensing images.
In one aspect, an embodiment of the application provides an SAR image water body extraction method, which comprises the following steps:
acquiring an SAR image to be detected;
preprocessing the SAR image to be detected;
processing the preprocessed SAR image to be detected through a water body segmentation model to obtain a water body segmentation result;
the water body segmentation model is a DeepLabv3+ semantic segmentation model added with a double attention mechanism and is obtained by training a sample data set containing an SAR image sample and DEM data.
Further, the water body segmentation model comprises: an encoder and a decoder; the encoder includes at least: the system comprises a feature extraction module, a spatial pyramid pooling module, a double-attention mechanism module and a fusion module;
processing the preprocessed SAR image to be detected through a water body segmentation model to obtain a water body segmentation result; the method comprises the following steps:
extracting a semantic feature map F1 and a low-level semantic feature map F2 from the preprocessed SAR image to be detected through the feature extraction module, inputting the semantic feature map F1 into both the spatial pyramid pooling module and the double attention mechanism module, and inputting the low-level semantic feature map F2 into the decoder;
processing the semantic feature map F1 through the spatial pyramid pooling module to obtain feature maps with different scales, and fusing the feature maps with different scales to obtain semantic features F3;
performing position attention mechanism and channel attention mechanism processing on the semantic feature map F1 through the double-attention mechanism module to obtain semantic features F4;
the semantic features F3 and the semantic features F4 are fused through the fusion module to obtain high-level semantic features F5, and the high-level semantic features F5 are output to a decoder;
and fusing the low-level semantic features F2 and the high-level semantic features F5 through the decoder to obtain a water body segmentation result.
Furthermore, the feature extraction module adopts ResNet-50 with dilated (atrous) convolution; the ResNet-50 comprises 5 convolution blocks in total. The second convolution block includes 3 sequentially connected residual modules; the third convolution block includes 4; the fourth convolution block includes 6; and the fifth convolution block includes 3, with the convolutions of the fifth block being dilated convolutions. Each residual module comprises three sequentially connected convolutions with kernels of 1×1, 3×3, and 1×1, and a first addition unit: the three convolutions process the input features and output their residual, and the addition unit adds this residual to the identity mapping of the input features and outputs the result;
extracting a semantic feature map F1 and a low-level semantic feature map F2 of the preprocessed SAR image to be detected through the feature extraction module; the method comprises the following steps:
taking the feature map obtained after processing by the first and second convolution blocks as the low-level semantic feature map F2;
and taking the feature map obtained after processing by the first, second, third, fourth, and fifth convolution blocks as the semantic feature map F1.
Further, the semantic feature map F1 is processed by the spatial pyramid pooling module to obtain feature maps of different scales, and the feature maps of different scales are fused to obtain a semantic feature F3; the method comprises the following steps:
processing the feature map F1 by using a 1 × 1 convolution kernel to obtain a feature map of a first scale;
processing the feature map F1 by using a 3 x 3 convolution kernel with expansion rates of 12, 24 and 36 respectively to obtain a second scale feature map, a third scale feature map and a fourth scale feature map;
carrying out global average pooling on the feature map F1 to obtain a feature map of a fifth scale;
connecting the feature map of the first scale, the feature map of the second scale, the feature map of the third scale, the feature map of the fourth scale and the feature map of the fifth scale;
dimension reduction processing is carried out on five connected feature maps with different scales by using a 1 × 1 convolution kernel to obtain a feature map F3.
Further, the dual attention mechanism module comprises a position attention unit, a channel attention unit and a second addition unit;
performing position attention mechanism and channel attention mechanism processing on the semantic feature map F1 through the double-attention mechanism module to obtain semantic features F4; the method comprises the following steps:
performing three separate convolution operations on the feature map F1 of size C×H×W through the position attention unit to obtain a feature map Q, a feature map K, and a feature map V of the same size as F1; reshaping and transposing Q, matrix-multiplying it with the reshaped K, and applying a Softmax function to the product to obtain a spatial attention map S of size (H×W)×(H×W); reshaping the feature map V into a feature map of size C×(H×W), multiplying it by the transpose of the attention map S, reshaping the product back to size C×H×W, multiplying by a weight parameter α, and superimposing the result on the feature map F1 to obtain a feature map E1;
reshaping the feature map F1 of size C×H×W into a feature map B of size C×(H×W) through the channel attention unit, multiplying B by its transpose, and applying a Softmax activation to obtain a channel attention map X of size C×C; multiplying the transpose of X by B, reshaping the product into a C×H×W feature map, multiplying by a weight coefficient β, and superimposing the result on the feature map F1 to obtain a feature map E2;
and adding the feature map E1 and the feature map E2 through a second adding unit to obtain a feature map F4.
Further, the method further comprises training the water body segmentation model, specifically comprising the following steps:
step S1: establishing a training sample data set, which comprises SAR images and external DEM data;
step S2: pre-training a ResNet-50 network by transfer learning, and initializing the model parameters of the water body segmentation model with the trained ResNet-50 network;
step S3: dividing the training sample data set into a plurality of batches;
step S4: inputting a batch of data into the water body segmentation model to obtain a prediction result, and computing the Focal loss between the prediction result and the sample labels:

FL(p_t) = -(1 - p_t)^γ · log(p_t)

where p_t is the predicted probability of the true label and γ is the focusing parameter;
step S5: updating parameters of the water body segmentation model by using the loss function;
and iterating the steps S3 to S5 until all the training sample data sets are trained.
Further, the establishing a training sample data set includes:
collecting a plurality of satellite-borne SAR images;
taking an existing optical image as a geographic reference, performing control point matching on each spaceborne SAR image through a multi-mode matching technique to directly obtain the geographic or projection coordinates of the matching points, and using the matching points as block adjustment control points and tie points to generate georeferenced SAR images;
determining a threshold segmentation method for water areas according to the pixel value range, and performing automatic preliminary water labeling on the preprocessed SAR images; comparing against contemporaneous optical remote sensing images and manually correcting mislabeled areas with the aid of expert interpretation knowledge, to obtain positive sample data;
simulating an SAR image by using external DEM data, and taking a mountain shadow area in the simulated SAR image as negative sample data;
taking the road data as negative sample data;
and slicing the positive sample data and the negative sample data to obtain a training sample data set.
In another aspect, an embodiment of the application provides an SAR image water body extraction device, comprising:
the image acquisition unit is used for acquiring an SAR image to be detected;
the preprocessing unit is used for preprocessing the SAR image to be detected;
the water body extraction unit is used for processing the preprocessed SAR image to be detected through a water body segmentation model to obtain a water body segmentation result; the water body segmentation model is a DeepLabv3+ semantic segmentation model added with a double attention mechanism and is obtained through training of a sample data set containing an SAR image and external DEM data.
In another aspect, an embodiment of the present application provides an electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the SAR image water body extraction method of the embodiments of the present application is implemented.
In another aspect, an embodiment of the present application provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the method for extracting a SAR image water body according to the embodiment of the present application is implemented.
In the present application, a double attention mechanism is added to the DeepLabv3+ semantic segmentation model to construct a water body segmentation model; the model is trained on a sample data set containing SAR image samples and DEM data, and the trained model is used to extract water bodies from the SAR image to be detected, improving both the accuracy and the efficiency of water body extraction.
Drawings
In order to more clearly illustrate the detailed description of the present application or the technical solutions in the prior art, the drawings needed to be used in the detailed description of the present application or the prior art description will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a flowchart of an SAR image water body extraction method provided in an embodiment of the present application;
fig. 2 is a schematic structural diagram of a water body segmentation model provided in an embodiment of the present application;
fig. 3 is a functional structure schematic diagram of an SAR image water body extraction device provided in the embodiment of the present application;
fig. 4 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
First, the design idea of the embodiment of the present application is briefly introduced.
The existing non-deep learning SAR image water body extraction algorithm is generally low in precision, and the existing deep learning SAR water body extraction method does not form a complete production application processing flow.
To solve the above problems, an embodiment of the application provides an SAR image water body extraction method that constructs a water body segmentation model by modifying the DeepLabv3+ network structure so that the model can extract SAR water body elements. The atrous spatial pyramid pooling (ASPP) of the DeepLabv3+ network can overcome SAR image speckle noise, enlarge the receptive field for water extraction, and detect water body elements at multiple scales. A Double Attention Mechanism Module (DAMM) is introduced into the DeepLabv3+ network and connected in parallel with the ASPP; the two branches process the backbone feature map in parallel, and their outputs are fused to serve as the high-level semantic features. During training of the water body segmentation model, DEM data and negative samples are introduced so that confusing imagery such as mountain shadows and roads is learned and rejected; finally, Focal loss is used to compute the loss and address class imbalance, thereby solving the extraction of SAR image water body elements.
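The Focal loss mentioned above can be written compactly; the sketch below is illustrative NumPy code, not the application's implementation, and takes p_t, the predicted probability of the true label, per pixel:

```python
import numpy as np

def focal_loss(p_t, gamma=2.0, eps=1e-7):
    """Focal loss FL(p_t) = -(1 - p_t)^gamma * log(p_t), averaged over pixels.

    gamma = 0 recovers ordinary cross-entropy; larger gamma down-weights
    well-classified pixels, countering the water/background class imbalance.
    """
    p = np.clip(np.asarray(p_t, dtype=float), eps, 1.0 - eps)
    return float(np.mean(-((1.0 - p) ** gamma) * np.log(p)))
```

Increasing γ shrinks the contribution of easy pixels (p_t near 1) far more than that of hard ones, which is how the loss keeps abundant background pixels from drowning out the scarcer water pixels.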
After introducing the application scenario and the design concept of the embodiment of the present application, the following describes a technical solution provided by the embodiment of the present application.
As shown in fig. 1, an embodiment of the present application provides an SAR image water body extraction method, including:
Step 101: constructing a water body segmentation model based on the DeepLabv3+ network structure;
the embodiment of the application designs a water body segmentation model, which is essentially a deep learning network, based on the void space Pyramid Pooling (ASPP) of Deeplab V3+ network, introduces a Double Attention Mechanism Module (DAMM), the main network is ResNet-50, the feature maps extracted by the main network are respectively sent to a DAMM layer and an ASPP layer, and two layers of processing feature maps are fused to serve as high-level semantic information; the ASPP with the cavity convolution based on the Deeplab V3+ network is used, the influence of SAR image speckle noise is overcome, the water body extraction receptive field is improved, and the multi-scale water body element segmentation is met.
The DAMM layer and the ASPP layer are connected in parallel, which alleviates problems of the DeepLabv3+ network in remote sensing image segmentation such as slow convergence, inaccurate segmentation of edge targets, inconsistent classes within large-scale targets, and holes in the segmentation.
As shown in fig. 2, the main structure of the water body segmentation model includes two parts, an encoder and a decoder. ResNet-50 serves as the encoder backbone network and produces a semantic feature map F1 and a low-level semantic feature map F2; the feature map F1 is input into the ASPP module and the DAMM respectively, and the feature maps of the two branches are fused to obtain the high-level semantic feature F5; the decoder combines the high-level semantic feature F5 and the low-level semantic feature F2, following the DeepLabv3+ decoding operation, to obtain the segmentation result.
The backbone of the encoder is ResNet-50, whose residual modules counteract the accuracy degradation that otherwise accompanies deeper networks. One residual module works as follows: the input features pass through three convolution operations (1×1, 3×3, 1×1) to obtain the residual of the features, which is added to the identity mapping of the input; the 1×1 convolution before the 3×3 convolution reduces the dimensionality and the 1×1 convolution after it restores the dimensionality, reducing the amount of parameter computation. ResNet-50 contains 5 convolution blocks in total: the 1st convolution block is a single convolution layer (with a BN layer and a ReLU activation layer), the last 4 convolution blocks consist of 3, 4, 6, and 3 sequentially connected residual modules respectively, and the convolutions of the 5th block are dilated (atrous) convolutions.
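One such residual module can be sketched in PyTorch as follows (an illustrative bottleneck, not the application's code; the channel widths are assumptions, and the `dilation` argument stands in for the atrous convolution of the 5th block):

```python
import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    """Residual module: 1x1 reduce -> 3x3 -> 1x1 expand, added to the identity mapping."""
    def __init__(self, channels, mid_channels, dilation=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, mid_channels, 1, bias=False),   # 1x1: reduce dimension
            nn.BatchNorm2d(mid_channels), nn.ReLU(inplace=True),
            nn.Conv2d(mid_channels, mid_channels, 3,
                      padding=dilation, dilation=dilation, bias=False),
            nn.BatchNorm2d(mid_channels), nn.ReLU(inplace=True),
            nn.Conv2d(mid_channels, channels, 1, bias=False),   # 1x1: restore dimension
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + x)  # residual + identity mapping
```

Because `padding=dilation` for the 3×3 kernel, the spatial size is preserved whether the block uses ordinary or dilated convolution, which is what lets the 5th block enlarge its receptive field without further downsampling.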
Extracting a feature diagram F1 obtained by 5 convolution block operations as the input of a subsequent ASPP module and a DAMM; and extracting a low-level semantic feature map F2 obtained by only the first 2 convolution block operations to provide bottom-level information for a decoder.
The feature map F1 is input into the ASPP module, where it undergoes, in parallel, a 1×1 convolution, 3×3 convolutions with dilation rates of 12, 24, and 36 respectively, and global average pooling, yielding feature maps at different scales; the 5 feature maps are concatenated and reduced in dimension by a 1×1 convolution to obtain the feature map F3. This module meets the detection requirements of multi-scale water body elements and overcomes the inherent speckle noise of SAR images.
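An illustrative PyTorch sketch of this ASPP branch follows (dilation rates 12/24/36 as stated above; the 256 output channels and the plain convolutions without batch normalization are simplifying assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASPP(nn.Module):
    def __init__(self, in_channels, out_channels=256):
        super().__init__()
        self.conv1x1 = nn.Conv2d(in_channels, out_channels, 1)
        self.atrous = nn.ModuleList([
            nn.Conv2d(in_channels, out_channels, 3, padding=r, dilation=r)
            for r in (12, 24, 36)
        ])
        self.pool_conv = nn.Conv2d(in_channels, out_channels, 1)     # after global pooling
        self.project = nn.Conv2d(5 * out_channels, out_channels, 1)  # 1x1 dim. reduction

    def forward(self, x):
        h, w = x.shape[2:]
        feats = [self.conv1x1(x)] + [conv(x) for conv in self.atrous]
        pooled = self.pool_conv(F.adaptive_avg_pool2d(x, 1))         # global average pooling
        feats.append(F.interpolate(pooled, size=(h, w), mode="bilinear",
                                   align_corners=False))
        return self.project(torch.cat(feats, dim=1))                 # feature map F3
```

Each branch preserves the spatial size of F1 (padding equals the dilation rate), so the five feature maps can be concatenated directly before the 1×1 dimension reduction.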
Introducing the feature map F1 into a DAMM, and realizing parallel processing of the DAMM and an ASPP module, wherein the DAMM comprises a position attention unit, a channel attention unit and an addition unit;
location attention unit: the input feature map F1 (of size C×H×W) is convolved to obtain Q, K, and V, each equal in size to F1. Q is reshaped and transposed, matrix-multiplied with the reshaped K, and passed through Softmax to obtain a spatial attention map S of size (H×W)×(H×W); S_ji represents the influence of the i-th position on the j-th position, i.e. the degree of association between the two positions, with a larger S_ji indicating greater similarity. The feature map V is reshaped to size C×(H×W), multiplied by the transpose of the attention map S, reshaped back to C×H×W, multiplied by the weight parameter α, and superimposed on the feature map F1 to obtain the feature map E1; the weight α is initialized to 0 and gradually learns a larger value. The position attention unit builds rich global contextual feature information, reinforcing identical features at different positions and improving semantic segmentation capability.
Channel attention module CAM: the feature map F1 is directly reshaped into B of size C×(H×W); B is multiplied by its transpose and passed through Softmax activation to obtain a channel attention map X of size C×C, where X_ji measures the influence of the i-th channel on the j-th channel. The transpose of X is multiplied by B, reshaped to size C×H×W, multiplied by a weight coefficient β, and superimposed on the feature map F1 to obtain the feature map E2; β is initialized to 0 and gradually learns a larger value. The channel attention unit reinforces different types of features by mining the interdependence among the features of different channels, improving semantic segmentation precision.
And the adding unit is used for adding the position attention unit output characteristic diagram E1 and the channel attention unit output characteristic diagram E2 to obtain a characteristic diagram F4.
The feature map F1 is processed by the ASPP and DAMM modules to obtain a feature map F3 and a feature map F4 respectively, and the feature map F3 and the feature map F4 are fused to obtain a high-level semantic feature F5.
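The two attention units and their fusion can be sketched in PyTorch as below (illustrative code, not the application's implementation; the 1×1 convolutions producing Q, K, V are an assumption, while the zero-initialized weights α and β follow the description above):

```python
import torch
import torch.nn as nn

class PositionAttention(nn.Module):
    """Spatial attention: S = softmax(Q^T K); E1 = alpha * (V S^T) + F1."""
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, 1)
        self.k = nn.Conv2d(channels, channels, 1)
        self.v = nn.Conv2d(channels, channels, 1)
        self.alpha = nn.Parameter(torch.zeros(1))  # initialized to 0, learned

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).view(b, c, h * w).transpose(1, 2)   # B x (HW) x C
        k = self.k(x).view(b, c, h * w)                   # B x C x (HW)
        s = torch.softmax(q @ k, dim=-1)                  # B x (HW) x (HW)
        v = self.v(x).view(b, c, h * w)                   # B x C x (HW)
        out = (v @ s.transpose(1, 2)).view(b, c, h, w)
        return self.alpha * out + x                        # feature map E1

class ChannelAttention(nn.Module):
    """Channel attention: X = softmax(B B^T); E2 = beta * (X^T B) + F1."""
    def __init__(self):
        super().__init__()
        self.beta = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.shape
        fb = x.view(b, c, h * w)                                # B x C x (HW)
        attn = torch.softmax(fb @ fb.transpose(1, 2), dim=-1)   # B x C x C
        out = (attn.transpose(1, 2) @ fb).view(b, c, h, w)
        return self.beta * out + x                              # feature map E2

class DAMM(nn.Module):
    """Dual attention: F4 = E1 + E2 via the addition unit."""
    def __init__(self, channels):
        super().__init__()
        self.pam = PositionAttention(channels)
        self.cam = ChannelAttention()

    def forward(self, x):
        return self.pam(x) + self.cam(x)
```

Because α and β start at zero, the module behaves as an identity pass-through at initialization and only gradually mixes in the attention-weighted features during training.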
The decoder follows the DeepLabv3+ decoder: the high-level semantic feature F5 is upsampled, concatenated with the low-level semantic feature F2, and passed through convolution and further upsampling to output the image segmentation result map.
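A sketch of this decoder stage in PyTorch (the 48-channel projection of the low-level features follows the standard DeepLabv3+ decoder; the channel counts and the two-class water/background output are assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Decoder(nn.Module):
    def __init__(self, high_channels=256, low_channels=256, n_classes=2):
        super().__init__()
        self.low_proj = nn.Conv2d(low_channels, 48, 1)  # compress low-level features F2
        self.fuse = nn.Sequential(
            nn.Conv2d(high_channels + 48, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, n_classes, 1),
        )

    def forward(self, f5, f2):
        # upsample high-level features F5 to the resolution of F2, concatenate, refine
        f5 = F.interpolate(f5, size=f2.shape[2:], mode="bilinear", align_corners=False)
        x = self.fuse(torch.cat([f5, self.low_proj(f2)], dim=1))
        # final upsampling back to the input image resolution (stride-4 features assumed)
        return F.interpolate(x, scale_factor=4, mode="bilinear", align_corners=False)
```

The cheap 1×1 projection keeps the low-level branch from dominating the concatenation, so the high-level semantics steer the segmentation while F2 restores edge detail.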
Step 102: constructing a sample data set, and dividing the sample data set into a training set, a verification set and a test set;
the method specifically comprises the following steps:
step 2 a: acquiring a high-precision satellite-borne SAR image, constructing a set of multi-scene, multi-polarization and multi-temporal data set, and performing heterogeneous adjustment processing on the data set;
the domestic high-resolution three-number SAR radar satellite data is applied and downloaded from a Chinese resource satellite application center and comprises FSI (fine stripe 1, resolution ratio 5m) imaging and FSII (fine stripe 2, resolution ratio 10m) imaging. The data mainly cover the middle and downstream river basin of the Yangtze river, comprise various scenes such as mountainous areas, towns, coastal areas and the like, have various polarization modes such as HH, VV, HV and VH, and have the time span from 2 months in 2018 to 3 months in 2019.
Multi-source adjustment is applied to the large-scale SAR imagery: using existing optical images as a geographic reference, control point matching is performed on the SAR data through a multi-mode matching technique to directly obtain high-precision geographic or projection coordinates of the matching points; the matching points serve as block adjustment control points and tie points, and georeferenced SAR images are generated automatically at scale in a full pipeline.
Step 2b: performing automatic preliminary water body labeling on the SAR images of the data set, and, on the basis of the water recognition result and contemporaneous optical images, manually correcting mislabeled regions with expert interpretation knowledge to obtain high-quality positive water body samples; introducing external DEM data to simulate the SAR image and obtain mountain shadow negative samples; and introducing road negative samples;
automatic preliminary water labeling is performed on the preprocessed SAR images using a threshold segmentation method that determines the water area from a pixel value range, quickly yielding a preliminary water labeling result. By comparison with contemporaneous optical remote sensing images, and with expert interpretation knowledge, mislabeled regions (for example mountain shadows, airport runways and other areas of low echo intensity labeled as water) are corrected manually, false detections caused by coherent speckle noise are removed, and missed detections caused by water-surface ripples, waves and tides are supplemented.
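A minimal sketch of such a threshold-based preliminary labeling step; the -15 dB threshold and the function name are illustrative assumptions, since the description only specifies that the water area is determined from a pixel value range:

```python
import numpy as np

def preliminary_water_mask(sar_db, threshold_db=-15.0):
    # Water surfaces backscatter weakly, so pixels darker than the
    # threshold are labeled water (1), everything else non-water (0).
    # The -15 dB value is a placeholder, not the patented threshold.
    return (sar_db < threshold_db).astype(np.uint8)
```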
Because SAR is a side-looking imaging system (generally right-looking), slopes facing away from the sensor receive no radar beam due to shielding and appear on the SAR image as dark pixels resembling water. According to the imaging geometry of the radar sensor, external DEM data are used to simulate the SAR image, and the mountain shadow areas in the simulated image are added to the training data as negative samples. Some road surfaces are smooth, produce specular reflection of the radar signal, and likewise appear dark in the SAR image; road data are therefore added to the training samples as negative samples.
Step 2c: the labeled image data (positive and negative samples) are sliced into 512 × 512 tiles, yielding 46500 sample pairs, and the sample data are divided into a training set, a verification set and a test set in the ratio 8:1:1.
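The slicing and 8:1:1 split of step 2c can be sketched as follows (function names and the fixed random seed are illustrative assumptions):

```python
import numpy as np

def slice_tiles(image, mask, tile=512):
    # Cut an image/label pair into non-overlapping tile x tile chips.
    h, w = image.shape[:2]
    return [(image[y:y + tile, x:x + tile], mask[y:y + tile, x:x + tile])
            for y in range(0, h - tile + 1, tile)
            for x in range(0, w - tile + 1, tile)]

def split_dataset(samples, ratios=(8, 1, 1), seed=0):
    # Shuffle, then split into training / verification / test sets.
    idx = np.random.default_rng(seed).permutation(len(samples))
    total = sum(ratios)
    n_train = len(samples) * ratios[0] // total
    n_val = len(samples) * ratios[1] // total
    train = [samples[i] for i in idx[:n_train]]
    val = [samples[i] for i in idx[n_train:n_train + n_val]]
    test = [samples[i] for i in idx[n_train + n_val:]]
    return train, val, test
```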
Step 103: training the water body segmentation model with the training set, calculating the Focal loss function, and iteratively adjusting the parameters of the SAR image water body segmentation model; validating the model parameters with the verification set; and testing the model parameters with the test set;
transfer learning is adopted: a ResNet-50 model pre-trained on the ImageNet data set is loaded, and the pre-trained model is used to initialize the parameters of the water body segmentation model. The loss is calculated with Focal loss to address the class imbalance problem, and stochastic gradient descent is used as the optimizer.
The method specifically comprises the following steps:
step 3a: dividing the training set data into mini-batches;
wherein the size of each batch is determined by the available hardware;
step 3b: inputting a batch of data into the water body segmentation model to obtain an image prediction result, and calculating the Focal loss function FL against the labeling result, defined as:

FL(p_t) = -(1 - p_t)^γ · log(p_t)

where p_t is the predicted probability of the true label. When the parameter γ is greater than 1, the loss is sharply reduced for samples the model already predicts correctly with high probability, while it is reduced much less for samples predicted with low probability. This lets the model focus on rare classes when the data classes are imbalanced.
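Under the definition above, the per-pixel Focal loss can be sketched in NumPy; γ = 2 is a commonly used value, chosen here only for illustration:

```python
import numpy as np

def focal_loss(p_t, gamma=2.0, eps=1e-7):
    # FL(p_t) = -(1 - p_t)^gamma * log(p_t), where p_t is the predicted
    # probability of the true label; well-classified samples (p_t near 1)
    # are down-weighted by the (1 - p_t)^gamma modulating factor.
    p_t = np.clip(p_t, eps, 1.0)
    return -((1.0 - p_t) ** gamma) * np.log(p_t)
```

With gamma = 0 the expression reduces to the ordinary cross-entropy -log(p_t).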
Step 3c: updating the model parameters in the network using stochastic gradient descent as the optimizer;
iterating steps 3b-3c until all the training set data have been used.
Step 3d: inputting the verification set data into the water body segmentation model to obtain prediction labels, and taking the mean accuracy and the mean intersection-over-union as evaluation indices of the network prediction result, calculated as:

Acc_water = TP / (TP + FN),  Acc_non-water = TN / (TN + FP),  mACC = (Acc_water + Acc_non-water) / 2
IoU_water = TP / (TP + FP + FN),  IoU_non-water = TN / (TN + FN + FP),  mIoU = (IoU_water + IoU_non-water) / 2

where TP denotes pixels that are truly 1 and predicted as 1; TN truly 0 and predicted as 0; FN truly 1 and predicted as 0; FP truly 0 and predicted as 1. Acc is the per-class accuracy (Acc_water for water, Acc_non-water for non-water) and mACC is the mean accuracy; IoU is the intersection-over-union (IoU_water for water, IoU_non-water for non-water) and mIoU is the mean intersection-over-union. In this embodiment the classes are only water and non-water (background).
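A sketch of these evaluation indices computed from the pixel counts defined above:

```python
def binary_seg_metrics(tp, tn, fp, fn):
    # Per-class accuracy and intersection-over-union for the two classes
    # (water / non-water), averaged into mAcc and mIoU.
    acc_water = tp / (tp + fn)
    acc_nonwater = tn / (tn + fp)
    iou_water = tp / (tp + fp + fn)
    iou_nonwater = tn / (tn + fn + fp)
    return {"mAcc": (acc_water + acc_nonwater) / 2,
            "mIoU": (iou_water + iou_nonwater) / 2}
```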
Steps 3a-3d are iterated until a set number of iterations or a target prediction accuracy is reached, and the best-performing model is selected as the final water body segmentation model.
Step 3 e: and inputting the test set data into the water body segmentation model to obtain a prediction label.
Step 104: preprocessing an SAR image to be detected;
the SAR image to be detected is preprocessed: mountain shadows are removed with the aid of DEM data, and roads are removed.
Step 105: and inputting the preprocessed SAR image into a water body segmentation model, and outputting a pixel-level water body segmentation result.
The embodiment of the application realizes high-precision, automatic water body extraction in multiple scenarios of the Yangtze River basin, such as different seasons, different wind conditions, mountain shadows and different water body extents; the mean accuracy reaches mACC = 97.82% and the mean intersection-over-union reaches mIoU = 0.96, showing strong model generalization capability at an engineering application level.
The effect of the present application is further illustrated by the following application examples:
GaoFen-3 radar satellite data of the middle and lower Yangtze River basin are obtained, including the FSI (Fine Strip I, 5 m resolution) and FSII (Fine Strip II, 10 m resolution) imaging modes, and a high-precision water body extraction range of the basin is obtained by the method of the embodiment of the application.
Sentinel-1 radar satellite data at two time phases in 2021, the later acquired on 22 May 2021, are obtained; the water body extent at each phase is extracted with the method of the embodiment of the application, and the flood inundation range is obtained by differencing the two extents.
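The two-phase differencing can be sketched as follows; the function name and the 10 m pixel assumption are illustrative:

```python
import numpy as np

def flood_extent_km2(mask_before, mask_after, pixel_area_m2=100.0):
    # Newly inundated pixels are water in the later phase but not in the
    # earlier one; pixel_area_m2 = 100 assumes a 10 m ground sample
    # distance, which is an assumption here, not a value from the patent.
    new_water = (mask_after == 1) & (mask_before == 0)
    return new_water.sum() * pixel_area_m2 / 1e6
```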
Quantitative statistical analysis of the two-phase water body extraction results shows that the water areas of the Poyang Lake Migratory Bird Reserve and the Nanji Wetland National Nature Reserve increased by 79.6% and 150% respectively, a total increase of 275.7 square kilometers.
Table 1: water area range statistics for the protected areas (the table itself is rendered as an image in the original document)
Example two:
based on the foregoing embodiments, an embodiment of the present application provides an SAR image water body extraction device, and as shown in fig. 3, an SAR image water body extraction device 300 provided in an embodiment of the present application at least includes:
an obtaining unit 301, configured to obtain an SAR image to be detected;
the preprocessing unit 302 is configured to preprocess the SAR image to be detected;
the water body extraction unit 303 is configured to process the preprocessed SAR image to be detected through a water body segmentation model to obtain a water body segmentation result;
and a training unit 304, configured to train a water body segmentation model through a sample data set including the SAR image and the external DEM data.
In one possible embodiment, the water body segmentation model includes: an encoder and a decoder; the encoder includes at least: the system comprises a feature extraction module, a spatial pyramid pooling module, a double-attention mechanism module and a fusion module; the water body extraction unit 303 is specifically configured to:
extracting a semantic feature map F1 and a low-level semantic feature map F2 of the preprocessed SAR image to be detected through the feature extraction module, inputting the semantic feature map F1 into the spatial pyramid pooling module and the double-attention mechanism module respectively, and inputting the low-level semantic feature map F2 into the decoder;
processing the semantic feature map F1 through the spatial pyramid pooling module to obtain feature maps with different scales, and fusing the feature maps with different scales to obtain semantic features F3;
performing position attention mechanism and channel attention mechanism processing on the semantic feature map F1 through the double-attention mechanism module to obtain semantic features F4;
the semantic features F3 and the semantic features F4 are fused through the fusion module to obtain high-level semantic features F5, and the high-level semantic features F5 are output to a decoder;
and fusing the low-level semantic features F2 and the high-level semantic features F5 through the decoder to obtain a water body segmentation result.
In one possible implementation, the feature extraction module adopts a ResNet-50 with dilated (atrous) convolution; the ResNet-50 contains 5 convolution blocks in total. The second convolution block includes 3 sequentially connected residual modules; the third convolution block includes 4 sequentially connected residual modules; the fourth convolution block includes 6 sequentially connected residual modules; the fifth convolution block includes 3 sequentially connected residual modules, and the convolution mode of the fifth convolution block is dilated convolution. Each residual module comprises three sequentially connected convolution kernels of sizes 1 × 1, 3 × 3 and 1 × 1 and a first addition unit; the three convolution kernels process the input feature and output its residual, and the first addition unit adds the residual to the identity mapping of the input feature and outputs the processed feature;
extracting a semantic feature map F1 and a low-level semantic feature map F2 of the preprocessed SAR image to be detected through the feature extraction module; the method comprises the following steps:
the feature map obtained after processing by the first convolution block and the second convolution block serves as the low-level semantic feature map F2;
and the feature map obtained after processing by the first, second, third, fourth and fifth convolution blocks serves as the semantic feature map F1.
In a possible implementation manner, the semantic feature F1 is processed by the spatial pyramid pooling module to obtain feature maps of different scales, and the feature maps of different scales are fused to obtain a semantic feature F3; the method comprises the following steps:
processing the feature map F1 by using a 1 × 1 convolution kernel to obtain a feature map of a first scale;
processing the feature map F1 by using 3 × 3 convolution kernels with dilation rates of 12, 24 and 36 respectively to obtain a feature map of a second scale, a feature map of a third scale and a feature map of a fourth scale;
carrying out global average pooling on the feature map F1 to obtain a feature map of a fifth scale;
connecting the feature map of the first scale, the feature map of the second scale, the feature map of the third scale, the feature map of the fourth scale and the feature map of the fifth scale;
dimension reduction processing is carried out on five connected feature maps with different scales by using a 1 × 1 convolution kernel to obtain a feature map F3.
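The role of the dilation rate can be illustrated with a single-channel dilated convolution; this is a didactic sketch with 'valid' padding, not the model code:

```python
import numpy as np

def dilated_conv2d(x, kernel, rate):
    # A 3x3 kernel at dilation rate r covers a (2r + 1) x (2r + 1)
    # footprint while keeping only 9 weights, which is how the ASPP
    # branches at rates 12/24/36 enlarge the receptive field without
    # adding parameters.
    k = kernel.shape[0]
    span = rate * (k - 1) + 1
    h, w = x.shape
    out = np.zeros((h - span + 1, w - span + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (x[i:i + span:rate, j:j + span:rate] * kernel).sum()
    return out
```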
In one possible embodiment, the dual attention mechanism module includes a location attention unit, a channel attention unit, and a second addition unit;
performing position attention mechanism and channel attention mechanism processing on the semantic feature map F1 through the double-attention mechanism module to obtain semantic features F4; the method comprises the following steps:
performing three separate convolution operations on the feature map F1 of size C × H × W through the position attention unit to obtain a feature map Q, a feature map K and a feature map V of the same size as the feature map F1; reshaping the feature map Q, transposing it, and matrix-multiplying it with the reshaped feature map K, then processing the multiplication result with a Softmax function to obtain a spatial attention map S of size (H × W) × (H × W); reshaping the feature map V into a feature map of size C × (H × W), multiplying it by the transposed attention map S, reshaping the multiplication result back to a feature map of size C × H × W, multiplying that feature map by the weight parameter α, and superimposing the result on the feature map F1 to obtain a feature map E1;
reshaping the feature map F1 of size C × H × W into a feature map B of size C × (H × W) through the channel attention unit, matrix-multiplying the feature map B by its transpose, and applying a Softmax function to obtain a channel attention map X of size C × C; multiplying the feature map B by the transposed channel attention map X, reshaping the result back to a feature map of size C × H × W, multiplying it by the weight coefficient β, and adding it to the feature map F1 to obtain a feature map E2;
and adding the feature map E1 and the feature map E2 through a second adding unit to obtain a feature map F4.
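A NumPy sketch of the two attention branches; for brevity Q, K and V are taken equal to F1 itself rather than produced by three convolutions, and α and β are fixed scalars, so this follows the shapes of the description above rather than the exact trained module:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def position_attention(f, alpha=1.0):
    # f: (C, H, W). S is the (H*W) x (H*W) spatial attention map.
    c, h, w = f.shape
    q = k = v = f.reshape(c, h * w)      # stand-ins for the three convs
    s = softmax(q.T @ k, axis=-1)
    return alpha * (v @ s.T).reshape(c, h, w) + f   # E1

def channel_attention(f, beta=1.0):
    # f: (C, H, W). X is the C x C channel attention map.
    c, h, w = f.shape
    b = f.reshape(c, h * w)
    x = softmax(b @ b.T, axis=-1)
    return beta * (x.T @ b).reshape(c, h, w) + f    # E2

def dual_attention(f):
    # F4 = E1 + E2, as produced by the second addition unit.
    return position_attention(f) + channel_attention(f)
```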
In a possible implementation, the training unit 304 is specifically configured to:
step S1: establishing a training sample data set; the training sample data set comprises an SAR image and external DEM data;
step S2: pre-training a ResNet-50 network by adopting transfer learning, and initializing model parameters of a water body segmentation model by utilizing the trained ResNet-50 network;
step S3: dividing the training sample data set into a plurality of batches;
step S4: inputting data of a batch into the water body segmentation model to obtain a prediction result, and calculating the loss function from the prediction result and the sample label result:

FL(p_t) = -(1 - p_t)^γ · log(p_t)

wherein p_t is the probability of being predicted as the true label and γ is a focusing parameter;
step S5: updating parameters of the water body segmentation model by using the loss function;
and iterating the steps S3 to S5 until all the training sample data sets are trained.
In one possible embodiment, the creating a training sample data set includes:
collecting a plurality of satellite-borne SAR images;
taking an existing optical image as the geographic reference, performing control point matching on each satellite-borne SAR image through a multi-modal matching technique to directly obtain the geographic coordinates or projection coordinates of the matching points, and using the obtained matching points as block adjustment control points and tie points to generate SAR images with geographic reference;
performing automatic preliminary water body labeling on the preprocessed SAR image by using a threshold segmentation method that determines the water body area from the pixel value range; comparing with contemporaneous optical remote sensing images and manually correcting mislabeled areas with expert interpretation knowledge to obtain positive sample data;
simulating an SAR image by using external DEM data, and taking a mountain shadow area in the simulated SAR image as negative sample data;
taking the road data as negative sample data;
and slicing the positive sample data and the negative sample data to obtain a training sample data set.
Example three:
based on the foregoing embodiments, an embodiment of the present application further provides an electronic device, and referring to fig. 4, an electronic device 400 provided in an embodiment of the present application at least includes: the SAR image water body extraction method comprises a processor 401, a memory 402 and a computer program which is stored on the memory 402 and can run on the processor 401, and when the processor 401 executes the computer program, the SAR image water body extraction method provided by the embodiment of the application is realized.
The electronic device 400 provided by the embodiment of the present application may further include a bus 403 that connects different components (including the processor 401 and the memory 402). Bus 403 represents one or more of any of several types of bus structures, including a memory bus, a peripheral bus, a local bus, and so forth.
The Memory 402 may include readable media in the form of volatile Memory, such as Random Access Memory (RAM) 4021 and/or cache Memory 4022, and may further include a Read Only Memory (ROM) 4023.
Memory 402 may also include a program tool 4024 having a set of (at least one) program modules 4025, program modules 4025 including, but not limited to: an operating subsystem, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Electronic device 400 may also communicate with one or more external devices 404 (e.g., keyboard, remote control, etc.), with one or more devices that enable a user to interact with electronic device 400 (e.g., cell phone, computer, etc.), and/or with any devices that enable electronic device 400 to communicate with one or more other electronic devices 400 (e.g., router, modem, etc.). This communication may be through an Input/Output (I/O) interface 403. Also, the electronic device 400 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public Network, such as the internet) via the Network adapter 406. As shown in FIG. 4, the network adapter 406 communicates with the other modules of the electronic device 400 over a bus 403. It should be understood that although not shown in FIG. 4, other hardware and/or software modules may be used in conjunction with electronic device 400, including but not limited to: microcode, device drivers, Redundant processors, external disk drive Arrays, disk array (RAID) subsystems, tape drives, and data backup storage subsystems, to name a few.
It should be noted that the electronic device 400 shown in fig. 4 is only an example, and should not bring any limitation to the functions and the scope of the application of the embodiments.
Example four:
the embodiment of the application also provides a computer-readable storage medium, which stores computer instructions, and the computer instructions are executed by a processor to implement the SAR image water body extraction method provided by the embodiment of the application.
It should be noted that although several units or sub-units of the apparatus are mentioned in the above detailed description, such division is merely exemplary and not mandatory. Indeed, the features and functions of two or more units described above may be embodied in one unit, according to embodiments of the application. Conversely, the features and functions of one unit described above may be further divided into embodiments by a plurality of units.
Further, while the operations of the methods of the present application are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. An SAR image water body extraction method is characterized by comprising the following steps:
acquiring an SAR image to be detected;
preprocessing the SAR image to be detected;
processing the preprocessed SAR image to be detected through a water body segmentation model to obtain a water body segmentation result;
wherein the water body segmentation model is a DeepLabv3+ semantic segmentation model to which a double attention mechanism is added, and is obtained by training with a sample data set containing SAR image samples and DEM data.
2. The SAR image water body extraction method according to claim 1, wherein the water body segmentation model comprises: an encoder and a decoder; the encoder includes at least: the system comprises a feature extraction module, a spatial pyramid pooling module, a double-attention mechanism module and a fusion module;
processing the preprocessed SAR image to be detected through a water body segmentation model to obtain a water body segmentation result; the method comprises the following steps:
extracting a semantic feature map F1 and a low-level semantic feature map F2 of the preprocessed SAR image to be detected through the feature extraction module, inputting the semantic feature map F1 into the spatial pyramid pooling module and the double-attention mechanism module respectively, and inputting the low-level semantic feature map F2 into the decoder;
processing the semantic feature map F1 through the spatial pyramid pooling module to obtain feature maps with different scales, and fusing the feature maps with different scales to obtain semantic features F3;
performing position attention mechanism and channel attention mechanism processing on the semantic feature map F1 through the double-attention mechanism module to obtain semantic features F4;
the semantic features F3 and the semantic features F4 are fused through the fusion module to obtain high-level semantic features F5, and the high-level semantic features F5 are output to a decoder;
and fusing the low-level semantic features F2 and the high-level semantic features F5 through the decoder to obtain a water body segmentation result.
3. The SAR image water body extraction method according to claim 2, characterized in that the feature extraction module adopts a ResNet-50 with dilated convolution, the ResNet-50 containing 5 convolution blocks in total; the second convolution block includes 3 sequentially connected residual modules; the third convolution block includes 4 sequentially connected residual modules; the fourth convolution block includes 6 sequentially connected residual modules; the fifth convolution block includes 3 sequentially connected residual modules, wherein the convolution mode of the fifth convolution block is dilated convolution; each residual module comprises three sequentially connected convolution kernels of sizes 1 × 1, 3 × 3 and 1 × 1 and a first addition unit, wherein the three convolution kernels process the input feature and output its residual, and the first addition unit adds the residual to the identity mapping of the input feature and outputs the processed feature;
extracting a semantic feature map F1 and a low-level semantic feature map F2 of the preprocessed SAR image to be detected through the feature extraction module; the method comprises the following steps:
the feature map obtained after processing by the first convolution block and the second convolution block serves as the low-level semantic feature map F2;
and the feature map obtained after processing by the first, second, third, fourth and fifth convolution blocks serves as the semantic feature map F1.
4. The SAR image water body extraction method according to claim 3, characterized in that semantic feature maps F1 are processed through the spatial pyramid pooling module to obtain feature maps of different scales, and the feature maps of different scales are fused to obtain semantic features F3; the method comprises the following steps:
processing the feature map F1 by using a 1 × 1 convolution kernel to obtain a feature map of a first scale;
processing the feature map F1 by using 3 × 3 convolution kernels with dilation rates of 12, 24 and 36 respectively to obtain a feature map of a second scale, a feature map of a third scale and a feature map of a fourth scale;
carrying out global average pooling on the feature map F1 to obtain a feature map of a fifth scale;
connecting the feature map of the first scale, the feature map of the second scale, the feature map of the third scale, the feature map of the fourth scale and the feature map of the fifth scale;
dimension reduction processing is carried out on five connected feature maps with different scales by using a 1 × 1 convolution kernel to obtain a feature map F3.
5. The SAR image water body extraction method according to claim 3, wherein the double attention mechanism module comprises a position attention unit, a channel attention unit and a second addition unit;
performing position attention mechanism and channel attention mechanism processing on the semantic feature map F1 through the double-attention mechanism module to obtain semantic features F4; the method comprises the following steps:
performing three separate convolution operations on the feature map F1 of size C × H × W through the position attention unit to obtain a feature map Q, a feature map K and a feature map V of the same size as the feature map F1; reshaping the feature map Q, transposing it, and matrix-multiplying it with the reshaped feature map K, then processing the multiplication result with a Softmax function to obtain a spatial attention map S of size (H × W) × (H × W); reshaping the feature map V into a feature map of size C × (H × W), multiplying it by the transposed attention map S, reshaping the multiplication result back to a feature map of size C × H × W, multiplying that feature map by the weight parameter α, and superimposing the result on the feature map F1 to obtain a feature map E1;
reshaping the feature map F1 of size C × H × W into a feature map B of size C × (H × W) through the channel attention unit, matrix-multiplying the feature map B by its transpose, and applying a Softmax function to obtain a channel attention map X of size C × C; multiplying the feature map B by the transposed channel attention map X, reshaping the result back to a feature map of size C × H × W, multiplying it by the weight coefficient β, and adding it to the feature map F1 to obtain a feature map E2;
and adding the feature map E1 and the feature map E2 through a second adding unit to obtain a feature map F4.
6. The SAR image water body extraction method according to claim 1, characterized in that the method further comprises: training a water body segmentation model, specifically comprising:
step S1: establishing a training sample data set; the training sample data set comprises an SAR image and external DEM data;
step S2: pre-training a ResNet-50 network by adopting transfer learning, and initializing model parameters of a water body segmentation model by utilizing the trained ResNet-50 network;
step S3: dividing the training sample data set into a plurality of batches;
step S4: inputting data of a batch into the water body segmentation model to obtain a prediction result, and calculating a loss function from the prediction result and a sample label result:

FL(p_t) = -(1 - p_t)^γ · log(p_t)

wherein p_t is the probability of being predicted as a true label and γ is a parameter;
step S5: updating parameters of the water body segmentation model by using the loss function;
and iterating the steps S3 to S5 until all the training sample data sets are trained.
7. The SAR image water body extraction method according to claim 6, wherein the establishing of the training sample data set comprises:
collecting a plurality of satellite-borne SAR images;
taking the existing optical image as a geographic reference, performing control point matching on each satellite-borne SAR image through a multi-mode matching technology, directly obtaining geographic coordinates or projection coordinates of matching points, and taking the obtained matching points as adjustment control points and connection points of a regional network to generate SAR images with geographic references;
determining a threshold segmentation method of a water body area according to the pixel value range, and carrying out automatic water body preliminary labeling on the preprocessed SAR image; comparing the optical remote sensing images in the same period, and manually correcting the error marked area by combining with expert interpretation knowledge to obtain positive sample data;
simulating an SAR image by using external DEM data, and taking a mountain shadow area in the simulated SAR image as negative sample data;
taking the road data as negative sample data;
and slicing the positive sample data and the negative sample data to obtain a training sample data set.
8. An SAR image water body extraction device, characterized by comprising:
the image acquisition unit is used for acquiring an SAR image to be detected;
the preprocessing unit is used for preprocessing the SAR image to be detected;
the water body extraction unit is used for processing the preprocessed SAR image to be detected through a water body segmentation model to obtain a water body segmentation result; the water body segmentation model is a DeepLabv3+ semantic segmentation model added with a double attention mechanism and is obtained through training of a sample data set containing an SAR image and external DEM data.
9. An electronic device, comprising: a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the SAR image water body extraction method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the SAR image water body extraction method according to any one of claims 1 to 7.
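The dual attention mechanism that the claims add to DeepLabv3+ can be illustrated with a schematic NumPy sketch of its two branches (position and channel attention, in the spirit of DANet); the function names, feature shapes, and residual fusion weights are illustrative assumptions, not the patented network:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def position_attention(feat):
    """Position (spatial) branch: each pixel re-aggregates features from
    all pixels, weighted by feature similarity. feat: (C, H, W)."""
    C, H, W = feat.shape
    f = feat.reshape(C, H * W)            # (C, N) with N = H*W
    attn = softmax(f.T @ f, axis=-1)      # (N, N) pixel-to-pixel weights
    return (f @ attn.T).reshape(C, H, W)

def channel_attention(feat):
    """Channel branch: each channel re-aggregates all channels,
    weighted by channel-to-channel similarity."""
    C, H, W = feat.shape
    f = feat.reshape(C, H * W)
    attn = softmax(f @ f.T, axis=-1)      # (C, C) channel weights
    return (attn @ f).reshape(C, H, W)

def dual_attention(feat, alpha=1.0, beta=1.0):
    """Residual fusion: sum the two attention branches with the input."""
    return feat + alpha * position_attention(feat) + beta * channel_attention(feat)

feats = np.random.rand(8, 16, 16).astype(np.float32)  # a toy feature map
out = dual_attention(feats)
```

In the full model, such a module would sit on a backbone feature map inside the DeepLabv3+ encoder, with learned projections rather than the raw similarity matrices used here for brevity.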
CN202111302938.5A 2021-11-05 2021-11-05 SAR image water body extraction method and device, electronic equipment and storage medium Active CN113743383B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111302938.5A CN113743383B (en) 2021-11-05 2021-11-05 SAR image water body extraction method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN113743383A true CN113743383A (en) 2021-12-03
CN113743383B CN113743383B (en) 2022-06-07

Family

ID=78727404

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111302938.5A Active CN113743383B (en) 2021-11-05 2021-11-05 SAR image water body extraction method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113743383B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113920448A (en) * 2021-12-15 2022-01-11 航天宏图信息技术股份有限公司 Flood inundation information extraction method and device, electronic equipment and storage medium
CN114814776A (en) * 2022-06-24 2022-07-29 中国空气动力研究与发展中心计算空气动力研究所 PD radar target detection method based on graph attention network and transfer learning
CN114966693A (en) * 2022-07-20 2022-08-30 南京信息工程大学 Airborne ship target ISAR refined imaging method based on deep learning
CN115272857A (en) * 2022-07-28 2022-11-01 北京卫星信息工程研究所 Multi-source remote sensing image target identification method based on attention mechanism
CN115797787A (en) * 2023-02-15 2023-03-14 耕宇牧星(北京)空间科技有限公司 SAR image bloom area extraction method
CN116012364A (en) * 2023-01-28 2023-04-25 北京建筑大学 SAR image change detection method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140241639A1 (en) * 2013-02-25 2014-08-28 Raytheon Company Reduction of cfar false alarms via classification and segmentation of sar image clutter
EP2816529A2 (en) * 2013-12-16 2014-12-24 Institute of Electronics, Chinese Academy of Sciences Automatic water area segmentation method and device for SAR image of complex terrain
CN110059758A (en) * 2019-04-24 2019-07-26 海南长光卫星信息技术有限公司 A kind of remote sensing image culture pond detection method based on semantic segmentation
CN112990086A (en) * 2021-04-08 2021-06-18 海南长光卫星信息技术有限公司 Remote sensing image building detection method and device and computer readable storage medium
CN113470033A (en) * 2021-06-04 2021-10-01 浙江科技学院 Road scene image processing method based on dynamic cross fusion of two sides


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIU Wenxiang et al.: "Semantic segmentation of remote sensing imagery using the dual-attention-mechanism DeepLabv3+ algorithm", Tropical Geography *
PANG Kechen: "Research on water body extraction algorithms for high-resolution SAR images", China Master's Theses Full-text Database (Electronic Journal), Information Science and Technology *


Also Published As

Publication number Publication date
CN113743383B (en) 2022-06-07

Similar Documents

Publication Publication Date Title
CN113743383B (en) SAR image water body extraction method and device, electronic equipment and storage medium
Muñoz et al. From local to regional compound flood mapping with deep learning and data fusion techniques
Khan et al. Satellite remote sensing and hydrologic modeling for flood inundation mapping in Lake Victoria basin: Implications for hydrologic prediction in ungauged basins
Karvonen A sea ice concentration estimation algorithm utilizing radiometer and SAR data
KR102540762B1 (en) Reservoir monitoring method using satellite informations
CN113920448B (en) Flood inundation information extraction method and device, electronic equipment and storage medium
US8401793B2 (en) High resolution wind measurements for offshore wind energy development
Wu et al. A two-step deep learning framework for mapping gapless all-weather land surface temperature using thermal infrared and passive microwave data
Bayik et al. Exploiting multi-temporal Sentinel-1 SAR data for flood extend mapping
Lê et al. Multiscale framework for rapid change analysis from SAR image time series: Case study of flood monitoring in the central coast regions of Vietnam
WO2023099665A1 (en) Method for near real-time flood detection at large scale in a geographical region covering both urban areas and rural areas and associated computer program product
Guo et al. Mozambique flood (2019) caused by Tropical Cyclone Idai monitored from Sentinel-1 and Sentinel-2 images
CN116630818A (en) Plateau lake boundary online extraction method and system based on GEE and deep learning
Chen et al. A novel lightweight bilateral segmentation network for detecting oil spills on the sea surface
Li et al. Unet combined with attention mechanism method for extracting flood submerged range
Rumapea et al. Improving Convective Cloud Classification with Deep Learning: The CC-Unet Model.
CN116363526B (en) MROCNet model construction and multisource remote sensing image change detection method and system
Zhang et al. Internal wave signature extraction from sar and optical satellite imagery based on deep learning
Wang et al. Deep learning in extracting tropical cyclone intensity and wind radius information from satellite infrared images—A review
Aparna et al. SAR-FloodNet: a patch-based convolutional neural network for flood detection on SAR images
Tsay et al. Deep learning for satellite rainfall retrieval using Himawari-8 multiple spectral channels
Surwase et al. Development of algorithms for evaluating performance of flood simulation models with satellite-derived flood
Amitrano et al. Flood Detection with SAR: A Review of Techniques and Datasets
Alatalo et al. Improved Difference Images for Change Detection Classifiers in SAR Imagery Using Deep Learning
Zhang et al. Cloud detection using gabor filters and attention-based convolutional neural network for remote sensing images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CB03 Change of inventor or designer information

Inventor after: Wang Yuxiang

Inventor after: Zou Shuchang

Inventor after: Zhang Pan

Inventor after: Lu Chaoran

Inventor after: Li Yan

Inventor after: Shen Junping

Inventor before: Wang Yuxiang

Inventor before: Zou Shuchang

Inventor before: Zhang Pan

Inventor before: Lu Chaoran

Inventor before: Li Yan

Inventor before: Shen Junping
