CN111797920A - Remote sensing extraction method and system for depth network impervious surface with gate control feature fusion - Google Patents

Remote sensing extraction method and system for depth network impervious surface with gate control feature fusion

Info

Publication number
CN111797920A
CN111797920A (application CN202010619048.6A)
Authority
CN
China
Prior art keywords
feature fusion
remote sensing
gating
image
impervious surface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010619048.6A
Other languages
Chinese (zh)
Other versions
CN111797920B (en)
Inventor
邵振峰 (Shao Zhenfeng)
程涛 (Cheng Tao)
姚远 (Yao Yuan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN202010619048.6A priority Critical patent/CN111797920B/en
Publication of CN111797920A publication Critical patent/CN111797920A/en
Application granted granted Critical
Publication of CN111797920B publication Critical patent/CN111797920B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/30Assessment of water resources

Abstract

A remote sensing extraction method and system for impervious surfaces using a deep network with gated feature fusion are disclosed. The method comprises: obtaining high-resolution remote sensing images of a target area, computing the mean and standard deviation of each band of the training images, performing normalization and blocking, and constructing training sample images; building a convolutional neural network based on gated feature fusion to extract training image features, the network being composed of a residual network, a gated feature fusion module and a decoding module, with the gated feature fusion module fusing the spatial detail information of low-level features and the semantic information of high-level features through a gating mechanism; setting different labels to represent the impervious surface, the non-impervious surface and the water body respectively, and training by minimizing a cross-entropy loss function; and, using the training result, extracting impervious surfaces with the gated-feature-fusion convolutional neural network. The method can accurately and automatically extract impervious surfaces from remote sensing images and meets the needs of applications such as urban waterlogging analysis.

Description

Remote sensing extraction method and system for depth network impervious surface with gate control feature fusion
Technical Field
The invention belongs to the field of information extraction from remote sensing image data, and relates to a technical scheme for remote sensing extraction of impervious surfaces using a deep network with gated feature fusion.
Background
An impervious surface refers to an artificial ground surface that prevents water from permeating into the soil; common examples include buildings, roads, squares and parking lots. It is a key index for evaluating the health of an urban ecosystem and the quality of the human living environment, and impervious-surface information is a key parameter in many urban environment applications. Owing to the limitations of shallow machine-learning methods, extracting impervious surfaces from large data volumes remains difficult, and the emergence of convolutional neural networks addresses exactly this problem. A convolutional neural network applies a series of nonlinear transformations to extract features from the original image, from low level to high level, from concrete to abstract, from general to specific semantics, and has been widely applied to scene classification, semantic segmentation, image retrieval and other tasks on natural images. However, because its feature maps gradually shrink, a convolutional neural network can lose ground-object information during extraction; in particular, for fine ground objects and complex ground-object scenes in remote sensing images, a conventional convolutional neural network can hardly recover the lost information and therefore cannot extract impervious surfaces accurately. A new solution to meet this need is therefore highly desirable.
Disclosure of Invention
Aiming at the defects of existing methods, the invention aims to provide a remote sensing extraction method and system for impervious surfaces of a deep network with gated feature fusion, to effectively compensate for the information loss caused by feature-map size changes in a convolutional neural network, and to improve the accuracy of impervious-surface extraction from high-resolution remote sensing images.
In order to achieve the above purpose, the technical scheme of the invention is a remote sensing extraction method for a depth network impervious surface with gate control feature fusion, comprising the following steps:
step a, acquiring a high-resolution remote sensing image of a target area, counting the mean value and standard deviation of each wave band of a training image, carrying out normalization processing on each image, partitioning the image, and constructing a training sample image;
b, constructing a convolutional neural network based on gating feature fusion, extracting the features of the training sample image, and predicting the extracted image features pixel by pixel, wherein the convolutional neural network based on gating feature fusion is composed of a residual error network, a gating feature fusion module and a decoding module, and the gating feature fusion module fuses the spatial detail information of the low-level features and the semantic information of the high-level features through a gating mechanism;
c, setting different labels to respectively represent the impervious surface, the non-impervious surface and the water body, calculating an error between a predicted value and a true value of a training sample image through a cross entropy loss function, and performing update training on corresponding network parameters of the convolutional neural network based on gating feature fusion by minimizing the error to obtain a network model with optimal performance;
and d, utilizing the training result of the step c, using a convolutional neural network based on gating characteristic fusion to extract the characteristics of the high-resolution remote sensing image to be tested, and performing pixel-by-pixel type prediction on the extracted image characteristics to realize impervious surface extraction.
In step a, the normalization of the high-resolution remote sensing images is performed by computing the mean and standard deviation of each band over all training images, then subtracting from each band of every remote sensing image the mean of the corresponding band computed from the training data, and dividing the result by the standard deviation of that band.
Moreover, the residual error network comprises four convolution layers and four residual error blocks, the outputs of the four residual error blocks are led out to construct three gating feature fusion modules, and the three gating feature fusion modules respectively fuse the spatial detail information of the low-level features with the semantic information of the adjacent high-level features; splicing the outputs of the three gating feature fusion modules and the output of the residual error network, and combining a plurality of features to form a combined feature; and after splicing, the output sequentially passes through a convolution layer and a Softmax layer in a decoding module and upsampling, and the image features extracted from the Softmax layer are mapped to the position of each pixel point through the upsampling, so that the pixel-by-pixel feature extraction and the class probability prediction of the high-resolution remote sensing image are finally realized.
Moreover, the gating feature fusion module is expressed by the following formulas:

G_Δ = Gate(X_{l-1} + X_l) - Gate(X_l)

X̂_l = (1 - G_Δ) ⊙ X_l + G_Δ ⊙ X_{l-1}

wherein the function Gate(X_i) = sigmoid(w_i · X_i) maps the input feature to values between 0 and 1 to obtain a gating value, sigmoid being the S-shaped logistic function; X_l denotes the l-th feature drawn from the middle of the network and X_{l-1} the (l-1)-th such feature; G_Δ represents the spatial difference between the two adjacent features, serving as the differential gating value between them; ⊙ denotes element-wise multiplication; and X̂_l denotes the gate-fused feature.
In step c, the parameters of the convolutional neural network are trained by minimizing the cross-entropy loss function: the network parameters are updated step by step with gradient descent until the loss no longer decreases and stabilizes, yielding the network model with optimal performance and finally enabling accurate extraction of the impervious surface.
The invention further provides a remote sensing extraction system for the depth network impervious surface with gate control feature fusion, which is used to implement the above remote sensing extraction method.
In conclusion, the invention provides a technical scheme for remote sensing extraction of impervious surfaces with a deep network based on gated feature fusion: a convolutional neural network based on gated feature fusion is constructed to automatically extract features from high-resolution remote sensing images, and the effective information of low-level and high-level features is innovatively fused through a gating mechanism, yielding impervious-surface extraction results with richer detail. The method is data-driven and extracts the features in the data automatically; the gated feature fusion refines the impervious-surface extraction results; data sources are easy to obtain, and the operation steps are clear and repeatable. The scheme therefore well matches the basic environmental-data extraction needs of practical urban applications and can provide technical support for urban waterlogging analysis, urban planning and the like.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention.
Fig. 2 is a schematic diagram of a gating feature fusion module according to an embodiment of the present invention.
Fig. 3 is a structural diagram of a convolutional neural network based on gating feature fusion according to an embodiment of the present invention.
Detailed Description
For better understanding of the technical solutions of the present invention, the following detailed description of the present invention is made with reference to the accompanying drawings and examples.
Aiming at the defects of the prior art, the invention accurately extracts impervious surfaces by introducing a gated feature fusion module into a convolutional network and training the resulting convolutional neural network model.
Referring to fig. 1, the remote sensing extraction method for the impermeable surface of the depth network with gate control feature fusion provided by the embodiment of the invention comprises the following steps:
step a, acquiring a high-resolution remote sensing image, counting the mean value and standard deviation of each wave band of the image, carrying out normalization processing on the image, and partitioning the image to construct a training sample image;
the normalization processing is carried out on the high-resolution remote sensing image to eliminate the influence of dimension and accelerate the speed of obtaining the optimal solution by gradient descent in the subsequent steps. The high-resolution remote sensing image generally refers to a remote sensing image with a spatial resolution within 10 m.
In specific implementation, several high-resolution remote sensing images of a user-defined target area are obtained (for example, Gaofen-2 imagery of the main urban area of Wuhan); some serve as training images and the others as test images. The mean and standard deviation of each band are computed over all training images, and the training and test images are then processed identically according to these statistics obtained from the training images:
and subtracting the mean value of the corresponding wave band after the statistics of the training data from each wave band of all the training images, and then dividing the mean value by the standard deviation of the corresponding wave band so as to realize the normalization processing. Finally, the image is divided into image blocks of 256x256 size. And c, correspondingly processing all the training images to obtain image blocks to form the input image in the step b.
And subtracting the mean value of the corresponding wave band after the statistics of the training data from each wave band of all the test images, and then dividing the mean value by the standard deviation of the corresponding wave band so as to realize the normalization processing. Finally, the image is divided into image blocks of 256x256 size. The image block obtained by processing the test image correspondingly can be used as the input image of the step d.
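As a concrete illustration, the per-band normalization and 256x256 blocking described above can be sketched as follows. This is a minimal NumPy sketch; the function name and the non-overlapping tiling policy (edge pixels beyond the last full block are dropped) are assumptions, since the patent does not specify how image borders are handled:

```python
import numpy as np

def normalize_and_tile(images, band_mean, band_std, tile=256):
    """Normalize each band with training-set statistics, then cut 256x256 tiles.

    images: list of (H, W, B) arrays; band_mean / band_std: (B,) arrays
    computed over the *training* images only, as the patent requires.
    """
    tiles = []
    for img in images:
        norm = (img - band_mean) / band_std          # per-band standardization
        h, w, _ = norm.shape
        for y in range(0, h - tile + 1, tile):       # non-overlapping blocks
            for x in range(0, w - tile + 1, tile):
                tiles.append(norm[y:y + tile, x:x + tile])
    return tiles
```

A 512x512 four-band image, for instance, yields four 256x256x4 normalized blocks.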
Step b, constructing a convolutional neural network based on gate control characteristic fusion, extracting the characteristics of the training sample image, and performing pixel-by-pixel type prediction on the extracted image characteristics, wherein the convolutional neural network based on gate control characteristic fusion comprises a residual error network, a gate control characteristic fusion module and a decoding module;
the first half section of the convolutional neural network comprises a plurality of convolutional layers which can be formed by convolutional, batch normalization and cross stacking of activation functions, four output features in the middle are led out, three gating feature fusion modules are constructed based on the four output features, and the three gating feature fusion modules fuse the space detail information of the low-level features and the semantic information of the high-level features; splicing the outputs of the three gating feature fusion modules with the output of the first half segment of network; and finally, the spliced output is processed by a decoding module, wherein the decoding module comprises a convolution layer, a Softmax layer and an upsampling layer, the image features proposed in the middle are mapped to the position of each pixel point, and the pixel-by-pixel feature extraction and the class probability prediction of the high-resolution remote sensing image are realized.
In specific implementation, as shown in fig. 3, the convolutional neural network uses a residual error network (sequentially including layers 1 to 8) as a backbone network, the residual error network includes four convolutional layers (layers 1 to 4) and four residual error blocks (layers 5 to 8), the four residual error blocks (layers 5 to 8) of the residual error network are led out to construct three gating feature fusion modules (layers 9 to 11), and the three gating feature fusion modules fuse the spatial detail information of the low-layer features and the semantic information of the high-layer features; concat splicing is carried out on the output of the three gating feature fusion modules and the output of the residual error network (layer 12), and a plurality of features are combined to form a combined feature; and then, the spliced output sequentially passes through the convolutional layers (layers 13-16) in the decoding module, the Softmax layer (layer 17) and the upsampling layer (layer 18), the image features extracted by the Softmax layer are mapped to the position of each pixel point through the upsampling, and finally the pixel-by-pixel feature extraction and the class probability prediction of the high-resolution remote sensing image are realized. In the Softmax layer, the probability that each pixel point belongs to each category is calculated by adopting a Softmax function, and therefore the probability that each position on the image belongs to each ground feature category is obtained.
Typically, each convolutional layer contains convolution, batch normalization, and activation functions. The details are shown in table 1:
TABLE 1 network architecture
[Table 1: layer-by-layer network parameters, provided as an image in the original publication.]
Here, # is the layer number; Layer denotes the layer type, e.g., Conv is a convolutional layer, Max-Pool a max-pooling layer, Block a residual block, Gatefusion the gated feature fusion module, Concat concatenation, Softmax the normalized exponential function, and Interpolate upsampling; Filters denotes the number of convolution kernels used, Size the size of the convolution kernels, Input the size of the network input, and Output the size of the network output.
Referring to fig. 2, the principle of the gated feature fusion module can be expressed by the following formulas:

G_Δ = Gate(X_{l-1} + X_l) - Gate(X_l)

X̂_l = (1 - G_Δ) ⊙ X_l + G_Δ ⊙ X_{l-1}

wherein the function Gate(X_i) = sigmoid(w_i · X_i) maps the input feature to values between 0 and 1 to obtain a gating value, sigmoid being the S-shaped logistic function; X_l denotes the l-th feature drawn from the middle of the network and X_{l-1} the (l-1)-th such feature; G_Δ represents the spatial difference between the two adjacent features, i.e. the differential gating value between them; ⊙ denotes element-wise multiplication; and X̂_l denotes the gate-fused feature. Given the input low-level feature X_{l-1} and high-level feature X_l, the module computes the differential gating value G_Δ between X_{l-1} + X_l and X_l; the high-level feature X_l is multiplied by (1 - G_Δ) and added to the low-level feature X_{l-1} multiplied by G_Δ, yielding the fused feature X̂_l. The gating feature module thus effectively fuses the spatial detail information of the low-level feature with the semantic information of the high-level feature.
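The fusion step can be sketched numerically as follows. This is a hedged NumPy sketch: the learned weights w_i are replaced by a single scalar, and the exact way the differential gating value weights X_l and X_{l-1} is inferred from the textual description, since the original formulas are published only as figures:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gate(x, w=1.0):
    # Gate(X) = sigmoid(w * X): squashes a feature map into (0, 1)
    return sigmoid(w * x)

def gated_fusion(x_low, x_high, w=1.0):
    """Gated fusion of a low-level feature (spatial detail) with the adjacent
    high-level feature (semantics). The differential gating value g_delta and
    the (1 - g_delta) / g_delta weighting are reconstructions, not verbatim
    formulas from the patent figures."""
    g_delta = gate(x_low + x_high, w) - gate(x_high, w)
    return (1.0 - g_delta) * x_high + g_delta * x_low
```

One useful sanity check: when the two features are identical, the convex weighting leaves the feature unchanged, matching the intuition that the gate only injects low-level detail where the two features differ.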
In this embodiment, the features of layers 5 and 6 are fed to layer 9, the features of layers 6 and 7 are fed to layer 10, and the features of layers 7 and 8 are fed to layer 11.
Step c, calculating the error between the predicted value and the true value of the training sample image through a cross entropy loss function, and performing update training on the network parameters by minimizing the error to obtain a network model with optimal performance;
parameters in the convolutional neural network are trained by minimizing a cross entropy loss function, network parameters are gradually updated by using a gradient descent method until the cross entropy loss function is not reduced and the change tends to be stable, a network model with optimal performance is obtained, and finally the accurate extraction of the impervious surface is realized.
In specific implementation, the training sample images and their corresponding ground-truth images (with labels 0, 1 and 2 denoting impervious surface, non-impervious surface and water body, respectively) are fed into the network model for training. The model outputs predicted per-pixel class probabilities, which are substituted together with the ground-truth image into the cross-entropy loss function; the resulting loss value is then backpropagated and the training parameters are iteratively updated by gradient descent until the loss satisfies a preset condition, for example falling within a preset threshold range or stabilizing. After training is finished, proceed to step d.
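The training objective can be illustrated with a self-contained NumPy sketch of the softmax cross-entropy and one gradient-descent step. For clarity the step updates the logits directly rather than network weights through backpropagation, so this is a simplification of the patent's training procedure, not a reproduction of it:

```python
import numpy as np

def softmax(logits):
    # logits: (N, 3) scores for impervious / non-impervious / water
    z = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, labels):
    # labels: (N,) integers in {0, 1, 2}, as set in step c
    n = labels.shape[0]
    return -np.log(probs[np.arange(n), labels] + 1e-12).mean()

def gradient_step(logits, labels, lr=0.1):
    """One gradient-descent update; the gradient of the mean cross-entropy
    with respect to the logits is (softmax - one_hot) / N."""
    n = labels.shape[0]
    grad = softmax(logits)
    grad[np.arange(n), labels] -= 1.0
    return logits - lr * grad / n
```

Because the loss is convex in the logits, a sufficiently small step always lowers it, which is the behavior the iterative update in step c relies on.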
And d, utilizing the training result of the step c, carrying out feature extraction on the high-resolution remote sensing image by using a convolutional neural network based on gating feature fusion, and carrying out pixel-by-pixel type prediction on the extracted image features to realize impervious surface extraction.
In specific implementation, with the network model parameters trained in step c, image features of the test remote sensing images of the target area are extracted by the constructed convolutional neural network based on gating feature fusion (the image blocks obtained from the test images in step a serve as input), and per-pixel class prediction is performed on the extracted features in the manner of step b, taking the ground-object label with the maximum probability at each pixel to finally obtain the impervious-surface extraction result. In general, once a remote sensing image to be tested has been preprocessed according to step a, the corresponding impervious-surface extraction result can be obtained with this network.
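The final per-pixel prediction reduces to taking the most probable class at each position, as the following trivial NumPy sketch shows (the function names are illustrative; the 0/1/2 label assignment follows step c):

```python
import numpy as np

def predict_labels(prob_map):
    # prob_map: (H, W, 3) Softmax probabilities per pixel
    # returns (H, W) labels: 0 impervious, 1 non-impervious, 2 water
    return prob_map.argmax(axis=-1)

def impervious_mask(prob_map):
    # boolean mask of the pixels classified as impervious surface (label 0)
    return predict_labels(prob_map) == 0
```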
In specific implementation, the method can adopt a computer software technology to realize an automatic operation process, and a corresponding system device for implementing the method process is also in the protection scope of the invention.
The specific embodiments described herein are merely illustrative of the spirit of the invention. Various modifications or additions may be made to the described embodiments or alternatives may be employed by those skilled in the art without departing from the spirit or ambit of the invention as defined in the appended claims.

Claims (6)

1. A remote sensing extraction method for a depth network impervious surface with gate control feature fusion, characterized by comprising the following steps:
step a, acquiring a high-resolution remote sensing image of a target area, counting the mean value and standard deviation of each wave band of a training image, carrying out normalization processing on each image, partitioning the image, and constructing a training sample image;
b, constructing a convolutional neural network based on gating feature fusion, extracting the features of the training sample image, and predicting the extracted image features pixel by pixel, wherein the convolutional neural network based on gating feature fusion is composed of a residual error network, a gating feature fusion module and a decoding module, and the gating feature fusion module fuses the spatial detail information of the low-level features and the semantic information of the high-level features through a gating mechanism;
c, setting different labels to respectively represent the impervious surface, the non-impervious surface and the water body, calculating an error between a predicted value and a true value of a training sample image through a cross entropy loss function, and performing update training on corresponding network parameters of the convolutional neural network based on gating feature fusion by minimizing the error to obtain a network model with optimal performance;
and d, utilizing the training result of the step c, using a convolutional neural network based on gating characteristic fusion to extract the characteristics of the high-resolution remote sensing image to be tested, and performing pixel-by-pixel type prediction on the extracted image characteristics to realize impervious surface extraction.
2. The remote sensing extraction method of the depth network impervious surface with the gate control feature fusion according to claim 1, characterized in that: in step a, the normalization of the high-resolution remote sensing images is performed by computing the mean and standard deviation of each band over all training images, subtracting from each band of every remote sensing image the mean of the corresponding band computed from the training data, and dividing the result by the standard deviation of that band.
3. The remote sensing extraction method of the depth network impervious surface with the gate control feature fusion according to claim 1, which is characterized by comprising the following steps: the residual error network comprises four convolution layers and four residual error blocks, the output of the four residual error blocks is led out, three gating feature fusion modules are constructed, and the three gating feature fusion modules respectively fuse the space detail information of the low-level features and the semantic information of the adjacent high-level features; splicing the outputs of the three gating feature fusion modules and the output of the residual error network, and combining a plurality of features to form a combined feature; and after splicing, the output sequentially passes through a convolution layer and a Softmax layer in a decoding module and upsampling, and the image features extracted from the Softmax layer are mapped to the position of each pixel point through the upsampling, so that the pixel-by-pixel feature extraction and the class probability prediction of the high-resolution remote sensing image are finally realized.
4. The remote sensing extraction method of the depth network impervious surface with the gate control feature fusion according to claim 3, characterized in that: the gating feature fusion module is expressed by the following formulas:
G_Δ = Gate(X_{l-1} + X_l) - Gate(X_l)

X̂_l = (1 - G_Δ) ⊙ X_l + G_Δ ⊙ X_{l-1}

wherein the function Gate(X_i) = sigmoid(w_i · X_i) maps the input feature to values between 0 and 1 to obtain a gating value, sigmoid being the S-shaped logistic function; X_l denotes the l-th feature drawn from the middle of the network and X_{l-1} the (l-1)-th such feature; G_Δ represents the spatial difference between two adjacent features, serving as the differential gating value between them; ⊙ denotes element-wise multiplication; and X̂_l denotes the gate-fused feature.
5. The remote sensing extraction method of the depth network impervious surface with gating feature fusion according to claim 1, 2, 3 or 4, characterized in that: in step c, the parameters of the convolutional neural network are trained by minimizing the cross-entropy loss function, and the network parameters are updated step by step with gradient descent until the loss no longer decreases and stabilizes, so as to obtain a network model with optimal performance and finally realize accurate extraction of the impervious surface.
6. A remote sensing extraction system for a depth network impervious surface with gate control feature fusion, characterized in that: the system is used to implement the remote sensing extraction method for the depth network impervious surface with gate control feature fusion according to any one of claims 1 to 5.
CN202010619048.6A 2020-06-30 2020-06-30 Remote sensing extraction method and system for depth network impervious surface with gate control feature fusion Active CN111797920B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010619048.6A CN111797920B (en) 2020-06-30 2020-06-30 Remote sensing extraction method and system for depth network impervious surface with gate control feature fusion


Publications (2)

Publication Number Publication Date
CN111797920A true CN111797920A (en) 2020-10-20
CN111797920B CN111797920B (en) 2022-08-30

Family

ID=72809769

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010619048.6A Active CN111797920B (en) 2020-06-30 2020-06-30 Remote sensing extraction method and system for depth network impervious surface with gate control feature fusion

Country Status (1)

Country Link
CN (1) CN111797920B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111968064A (en) * 2020-10-22 2020-11-20 成都睿沿科技有限公司 Image processing method and device, electronic equipment and storage medium
CN112560716A (en) * 2020-12-21 2021-03-26 浙江万里学院 High-resolution remote sensing image water body extraction method based on low-level feature fusion
CN112712033A (en) * 2020-12-30 2021-04-27 哈尔滨工业大学 Automatic division method for catchment areas of municipal drainage pipe network
CN113269787A (en) * 2021-05-20 2021-08-17 浙江科技学院 Remote sensing image semantic segmentation method based on gating fusion

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103824077A (en) * 2014-03-17 2014-05-28 武汉大学 Urban impervious layer rate information extraction method based on multi-source remote sensing data
US20160027314A1 (en) * 2014-07-22 2016-01-28 Sikorsky Aircraft Corporation Context-aware landing zone classification
CN108985238A (en) * 2018-07-23 2018-12-11 武汉大学 The high-resolution remote sensing image impervious surface extracting method and system of combined depth study and semantic probability
US20190087529A1 (en) * 2014-03-24 2019-03-21 Imagars Llc Decisions with Big Data
CN109919951A (en) * 2019-03-14 2019-06-21 武汉大学 The object-oriented city impervious surface Remotely sensed acquisition method and system of semantic association
CN109934153A (en) * 2019-03-07 2019-06-25 张新长 Building extracting method based on gate depth residual minimization network
CN110705457A (en) * 2019-09-29 2020-01-17 核工业北京地质研究院 Remote sensing image building change detection method
CN111243591A (en) * 2020-02-25 2020-06-05 上海麦图信息科技有限公司 Air control voice recognition method introducing external data correction

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103824077A (en) * 2014-03-17 2014-05-28 武汉大学 Urban impervious layer rate information extraction method based on multi-source remote sensing data
US20190087529A1 (en) * 2014-03-24 2019-03-21 Imagars Llc Decisions with Big Data
US20160027314A1 (en) * 2014-07-22 2016-01-28 Sikorsky Aircraft Corporation Context-aware landing zone classification
CN108985238A (en) * 2018-07-23 2018-12-11 武汉大学 The high-resolution remote sensing image impervious surface extracting method and system of combined depth study and semantic probability
CN109934153A (en) * 2019-03-07 2019-06-25 张新长 Building extracting method based on gate depth residual minimization network
CN109919951A (en) * 2019-03-14 2019-06-21 武汉大学 The object-oriented city impervious surface Remotely sensed acquisition method and system of semantic association
CN110705457A (en) * 2019-09-29 2020-01-17 核工业北京地质研究院 Remote sensing image building change detection method
CN111243591A (en) * 2020-02-25 2020-06-05 上海麦图信息科技有限公司 Air control voice recognition method introducing external data correction

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
KE TAN ET AL.: "Gated Residual Networks With Dilated Convolutions for Monaural Speech Enhancement", IEEE/ACM Transactions on Audio, Speech, and Language Processing *
DAI Qiang et al.: "Image super-resolution reconstruction based on lightweight automatic residual scaling network" (in Chinese), Journal of Computer Applications *
SHI Zhaoli et al.: "Implementation and application of a convolutional gated network fusing position information" (in Chinese), Information Technology and Network Security *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111968064A (en) * 2020-10-22 2020-11-20 成都睿沿科技有限公司 Image processing method and device, electronic equipment and storage medium
CN111968064B (en) * 2020-10-22 2021-01-15 成都睿沿科技有限公司 Image processing method and device, electronic equipment and storage medium
CN112560716A (en) * 2020-12-21 2021-03-26 浙江万里学院 High-resolution remote sensing image water body extraction method based on low-level feature fusion
CN112712033A (en) * 2020-12-30 2021-04-27 哈尔滨工业大学 Automatic division method for catchment areas of municipal drainage pipe network
CN113269787A (en) * 2021-05-20 2021-08-17 浙江科技学院 Remote sensing image semantic segmentation method based on gating fusion

Also Published As

Publication number Publication date
CN111797920B (en) 2022-08-30

Similar Documents

Publication Publication Date Title
CN111797920B (en) Remote sensing extraction method and system for depth network impervious surface with gate control feature fusion
CN108985238B (en) Impervious surface extraction method and system combining deep learning and semantic probability
CN108647585B (en) Traffic identifier detection method based on multi-scale circulation attention network
CN108846835B (en) Image change detection method based on depth separable convolutional network
CN110929577A (en) Improved target identification method based on YOLOv3 lightweight framework
CN111950453A (en) Optional-shape text recognition method based on selective attention mechanism
CN109492596B (en) Pedestrian detection method and system based on K-means clustering and regional recommendation network
CN111080652B (en) Optical remote sensing image segmentation method based on multi-scale lightweight cavity convolution
CN113052106B (en) Airplane take-off and landing runway identification method based on PSPNet network
CN110472634A (en) Change detecting method based on multiple dimensioned depth characteristic difference converged network
CN116524189A (en) High-resolution remote sensing image semantic segmentation method based on coding and decoding indexing edge characterization
CN114943876A (en) Cloud and cloud shadow detection method and device for multi-level semantic fusion and storage medium
CN116206112A (en) Remote sensing image semantic segmentation method based on multi-scale feature fusion and SAM
CN115512222A (en) Method for evaluating damage of ground objects in disaster scene of offline training and online learning
CN114897781A (en) Permeable concrete pore automatic identification method based on improved R-UNet deep learning
CN113420619A (en) Remote sensing image building extraction method
CN116403121A (en) Remote sensing image water area segmentation method, system and equipment for multi-path fusion of water index and polarization information
CN113077438B (en) Cell nucleus region extraction method and imaging method for multi-cell nucleus color image
CN114722914A (en) Method for detecting field environmental barrier based on binocular vision and semantic segmentation network
CN114882490B (en) Unlimited scene license plate detection and classification method based on point-guided positioning
CN115984603A (en) Fine classification method and system for urban green land based on GF-2 and open map data
CN115147727A (en) Method and system for extracting impervious surface of remote sensing image
CN112164065B (en) Real-time image semantic segmentation method based on lightweight convolutional neural network
CN111738324B (en) Multi-frequency and multi-scale fusion automatic crack detection method based on frequency division convolution
CN111553272A (en) High-resolution satellite optical remote sensing image building change detection method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant