CN113688696B - Ultrahigh-resolution remote sensing image earthquake damage building detection method - Google Patents

Ultrahigh-resolution remote sensing image earthquake damage building detection method

Info

Publication number
CN113688696B
Authority
CN
China
Prior art keywords
pixels
feature representation
boundary
pixel
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110891197.2A
Other languages
Chinese (zh)
Other versions
CN113688696A (en)
Inventor
王超
林从远
仇星
张艳
王帅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology filed Critical Nanjing University of Information Science and Technology
Priority to CN202110891197.2A priority Critical patent/CN113688696B/en
Publication of CN113688696A publication Critical patent/CN113688696A/en
Application granted granted Critical
Publication of CN113688696B publication Critical patent/CN113688696B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/12 Edge-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform

Abstract

The invention discloses a method for detecting earthquake-damaged buildings in ultra-high-resolution remote sensing images. Object context attention modules are embedded in the deep feature extraction of the DeepLabv3+ and UNet networks to enhance the feature representation, and a boundary enhancement loss function is proposed, based on the spatial position of each pixel within its object, to refine the segmentation boundary. The OB-DeepLabv3+ and OB-UNet networks are thereby constructed, and earthquake-damaged building detection is realized with the two networks. The invention can significantly improve the detection accuracy of earthquake-damaged buildings.

Description

Ultrahigh-resolution remote sensing image earthquake damage building detection method
Technical Field
The invention belongs to the technical field of remote sensing, and particularly relates to a method for detecting earthquake-damaged buildings.
Background
Timely and accurate acquisition of building damage information after an earthquake is of great significance for post-earthquake emergency response and post-disaster reconstruction. Remote sensing, which uses long-range imaging, offers high spatial resolution, is not constrained by the complex conditions at the earthquake site, and has therefore become the main technical means for detecting earthquake-damaged buildings. Nevertheless, the structure and spatial layout of post-earthquake remote sensing images are complex, and buildings with different damage forms are mixed with intact buildings, which poses great challenges to the abstract representation and feature modeling of earthquake-damaged buildings. It is therefore necessary to develop a general and accurate automatic detection method for earthquake-damaged buildings.
In general, current earthquake-damaged building detection methods based on post-earthquake remote sensing images fall into two categories: traditional classification methods and deep learning methods. The former predict possibly damaged buildings by training a suitable classifier on a user-defined or optimally selected feature set. Their main limitation is the weak generalization ability of the adopted feature set: the accuracy of the expert knowledge and the types of available images both have a significant influence on the stability of the algorithm. With the continuous development of artificial intelligence, using deep learning to automatically extract discriminative and representative abstract features from images has become a research hotspot in the automatic extraction of ground objects from remote sensing images. Classical CNNs assign a class label to a fixed-size image patch; however, pixel-level classification results clearly help to locate the position and boundaries of damaged buildings more accurately, and the advent of FCNs provides a competitive solution. On the basis of the FCN, many excellent semantic segmentation networks have appeared in recent years: for example, UNet uses feature fusion between different levels to alleviate the loss of detail caused by feature downsampling, while DeepLabv3+ uses parallel spatial pyramid pooling with different atrous rates, introducing multi-scale context information that benefits the extraction of targets at different scales.
However, building sizes, shapes, and damage forms vary widely, whereas convolution operations are typically limited to a fixed range, which hinders the complete extraction of building contours of different sizes. To address this problem, multi-scale spatial context information is usually aggregated to expand the local receptive field, but features extracted in this way lack global perception; the self-attention mechanism can compute the relationship between each pixel and the whole image, but it does not consider the differences between pixels of different classes, so its correlation mining is inaccurate. In addition, intact buildings, damaged buildings, and other ground objects are mixed together in post-earthquake scenes, and FCNs generally lose a great deal of spatial detail during downsampling, so estimating the ground-object boundaries is extremely challenging. Because pixels near the building boundary are significantly harder to predict than other pixels, it is necessary to consider the spatial position relationship between each pixel and the boundary and to use it as the basis for assigning different weights during training.
Disclosure of Invention
In order to solve the technical problems mentioned in the background art, the invention provides a method for detecting earthquake-damaged buildings in ultra-high-resolution remote sensing images.
In order to achieve the technical purpose, the technical scheme of the invention is as follows:
A method for detecting earthquake-damaged buildings in ultra-high-resolution remote sensing images: object context attention modules are embedded in the deep feature extraction of the DeepLabv3+ and UNet networks to enhance the feature representation, and a boundary enhancement loss function L_BE is proposed, based on the spatial position information of pixels within the object, to refine the segmentation boundary; the OB-DeepLabv3+ and OB-UNet networks are thereby constructed, and the two networks are used to detect earthquake-damaged buildings;
the residual network resnet152 is taken as the backbone feature extraction network of the DeepLabv3+ network, and the atrous spatial pyramid pooling module is connected in series with the object context attention module;
the object context attention module is connected in series on the fourth-layer skip connection of the UNet network;
the object context attention module includes the following 3 parts:
(1) Soft object region partitioning: dividing an image into K soft object areas on the basis of a rough segmentation result, wherein each soft object area represents a category;
(2) Object region feature representation: in each soft object region, carrying out weighted summation on all pixels according to the degree that the pixels belong to the region, and obtaining the characteristic representation of the region;
(3) Object context enhanced feature representation: obtaining an object context feature representation of each pixel using the feature representation of the object region, and obtaining an enhanced feature representation using the object context feature representation;
the boundary enhancement loss function L BE The expression of (2) is as follows:
L BE =L BCI +L CE
wherein L is CE L is a cross entropy loss function BCI As BCI loss function:
wherein N is the number of pixels in one Batch,for the nth pixel +.>True value label of one-hot code corresponding to nth pixel, gamma is difficult sample modulation coefficient, alpha k For category weight, D BCI And the boundary confidence index BCI corresponding to the final neighborhood c.
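The L_BCI formula itself appears only as an image in the original filing. A plausible LaTeX rendering consistent with the variable descriptions above (a class-weighted, boundary-weighted focal-style term; the symbols p_n^(k) for the predicted probability and ŷ_n^(k) for the one-hot label are assumed notation, and the exact grouping in the patent may differ) is:

```latex
L_{\mathrm{BCI}} = -\frac{1}{N}\sum_{n=1}^{N}\sum_{k=1}^{K}
  \alpha_{k}\, D_{\mathrm{BCI}}\,\bigl(1-p_{n}^{(k)}\bigr)^{\gamma}\,
  \hat{y}_{n}^{(k)}\,\log p_{n}^{(k)}
```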
Further, the concatenated output of the atrous spatial pyramid pooling module is taken as one input of the object context attention module, and the result of applying a 3×3 convolution to the same concatenated output is taken as the other input of the object context attention module.
Further, the feature representation f_k of the k-th region is obtained in the object context attention module using the following formula:
where I denotes the set of pixels belonging to the k-th region, x_i is the feature of pixel p_i output by the deepest layer of the network, and the weight applied to x_i is the spatial-softmax-normalized degree to which pixel p_i belongs to the k-th object region.
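The f_k formula is likewise given only as an image in the original. Based on the description above (and consistent with the object-contextual representations formulation of Yuan et al. cited in the non-patent literature), one plausible form, writing the spatial-softmax-normalized membership degree of pixel p_i in region k as an assumed symbol m̃_ki, is:

```latex
f_{k} = \sum_{i \in I} \tilde{m}_{ki}\, x_{i},
\qquad
\tilde{m}_{ki} = \operatorname{softmax}_{i}\bigl(m_{ki}\bigr)
```

where m_ki denotes the coarse-segmentation score of pixel p_i for the k-th region.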
Further, the object context feature representation y_i of each pixel is calculated in the object context attention module using the following formula:
where δ(·) and ρ(·) are transform functions, r_k denotes the pixel positions belonging to the k-th region, and φ(·) and a further transform function are likewise transform functions;
the enhanced feature representation z_i is obtained by fusing y_i and x_i:
where g is the transform function used for fusion and T denotes the transpose.
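The y_i and z_i formulas are also images in the original. A plausible rendering consistent with the descriptions above and with the cited object-contextual representations method (the relation weight w_ik and the second transform function ψ(·) are assumed notation) is:

```latex
w_{ik} = \frac{\exp\bigl(\phi(x_i)^{\mathsf{T}}\psi(f_k)\bigr)}
              {\sum_{j=1}^{K}\exp\bigl(\phi(x_i)^{\mathsf{T}}\psi(f_j)\bigr)},
\qquad
y_{i} = \rho\Bigl(\sum_{k=1}^{K} w_{ik}\,\delta(f_{k})\Bigr),
\qquad
z_{i} = g\bigl([\,x_{i}^{\mathsf{T}},\; y_{i}^{\mathsf{T}}\,]^{\mathsf{T}}\bigr)
```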
Further, D_BCI is obtained as follows:
the boundary metric index corresponding to the final neighborhood c is defined as d_c = (W-3)/2, where W is the window size of the final neighborhood c; traversing all pixels yields an initial boundary metric index set D; for each of the K categories, the maximum d_c of that category's pixels in D is counted, and the minimum of these K maxima is taken as the upper bound of all d_c in D, yielding an updated boundary metric index set D*; d_c* is the boundary metric index of the final neighborhood c in D*.
Further, the formula of α_k is as follows:
where F_k is the frequency of pixels of the k-th class, and median(F_k) denotes the median of the K class frequencies.
The beneficial effects brought by adopting the technical scheme are that:
the invention obtains enhanced feature representation by embedding an object context attention module in depth feature extraction; a new boundary enhancement loss function is designed to force the network to pay more attention to boundary pixels; finally, OB-DeepLabv3+ and OB-UNet were constructed in combination with two strategies. Experiments show that the PA of the two designed networks can reach more than 86 percent.
Drawings
FIG. 1 is a block diagram of an OB-DeepLabv3+ network designed according to the present invention;
fig. 2 is a diagram of an OB-UNet network according to the present invention;
FIG. 3 is a block diagram of the OCR module designed in the present invention;
FIG. 4 is a schematic representation of final neighborhoods of different sizes in the present invention;
FIG. 5 is a schematic illustration of the BE Loss principle of the present invention.
Detailed Description
The technical scheme of the present invention will be described in detail below with reference to the accompanying drawings.
The invention adopts two advanced semantic segmentation networks, DeepLabv3+ and UNet, as base networks. The former is built on an encoder-decoder structure and appends atrous convolutions with different dilated rates after the backbone network, which enlarges the receptive field and benefits the extraction of multi-scale features; the latter also adopts an encoder-decoder structure and is characterized by skip connections (shortcuts) that fuse and reuse high-level and low-level features. On this basis, two networks, OB-DeepLabv3+ and OB-UNet, are constructed by embedding object context attention modules (Object Contextual Representations Module, OCR module) into UNet and DeepLabv3+ respectively and by using the proposed boundary enhancement loss function BE Loss as the loss function.
(1) OB-DeepLabv3+: the invention takes resnet152 as the backbone feature extraction network of DeepLabv3+. On this basis, the atrous spatial pyramid pooling (ASPP) module is connected in series with the OCR module, as shown in FIG. 1.
The resnet152 deepens the third and fourth convolution blocks of resnet50 and therefore has a deeper network structure; in addition, its residual structure can learn new features on top of the input features, which alleviates the network degradation problem. Where the computing budget permits, resnet152 thus offers stronger feature extraction capability and is better suited to semantic segmentation of post-earthquake remote sensing images of complex scenes. Furthermore, since the coarse segmentation map is the basis for building the object context representation, the concatenated ASPP representation is used to predict the coarse segmentation result (object regions) and serves as one input to the OCR module, while the result of applying a 3×3 convolution to the concatenated ASPP representation serves as the other input. The OCR output is then the enhanced feature representation.
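To illustrate this dual-input wiring, a minimal PyTorch-style sketch follows. The module and parameter names (OCRHead, OBDeepLabHead, aspp_channels, etc.) are hypothetical, the OCR internals are only stubbed, and the channel sizes are assumptions, so this shows the data flow rather than the patented implementation.

```python
import torch.nn as nn

class OCRHead(nn.Module):
    """Object context attention stub: takes pixel features and a coarse
    segmentation map and returns object-context-enhanced features."""
    def __init__(self, channels, num_classes):
        super().__init__()
        self.channels, self.num_classes = channels, num_classes
        # region aggregation / pixel-region attention layers would go here

    def forward(self, feats, coarse_seg):
        # placeholder: a real module forms soft regions from coarse_seg,
        # aggregates region features, attends, and fuses with feats
        return feats

class OBDeepLabHead(nn.Module):
    """Head sitting on top of the concatenated ASPP representation."""
    def __init__(self, aspp_channels=1280, mid_channels=256, num_classes=3):
        super().__init__()
        # input 1 to OCR: coarse segmentation predicted from the ASPP concat
        self.coarse_head = nn.Conv2d(aspp_channels, num_classes, kernel_size=1)
        # input 2 to OCR: the same ASPP concat after a 3x3 convolution
        self.conv3x3 = nn.Sequential(
            nn.Conv2d(aspp_channels, mid_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(mid_channels),
            nn.ReLU(inplace=True))
        self.ocr = OCRHead(mid_channels, num_classes)

    def forward(self, aspp_concat):
        coarse = self.coarse_head(aspp_concat)
        feats = self.conv3x3(aspp_concat)
        enhanced = self.ocr(feats, coarse)   # enhanced feature representation
        return enhanced, coarse              # coarse map can supervise an auxiliary loss
```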
(2) OB-UNet: considering the symmetry of the UNet structure, the OCR module is connected in series on the fourth-layer skip connection of UNet, as shown in FIG. 2. The purpose of this design is, on the one hand, to obtain a good coarse segmentation result from high-level features; on the other hand, high-level features contain more semantic information but lose part of the detail features, and introducing object context attention here helps restore the details of building damage.
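A corresponding sketch for the UNet side is given below; it simply wraps the deepest (fourth-layer) skip connection, reusing the OCRHead stub from the previous sketch. The class name, channel size, and class count are assumptions.

```python
import torch.nn as nn

class OCRSkip(nn.Module):
    """Wraps the fourth (deepest) UNet skip connection with the OCR module."""
    def __init__(self, ocr_module, channels=512, num_classes=3):
        super().__init__()
        self.coarse_head = nn.Conv2d(channels, num_classes, kernel_size=1)
        self.ocr = ocr_module

    def forward(self, enc4_feats):
        coarse = self.coarse_head(enc4_feats)   # coarse segmentation from level-4 features
        return self.ocr(enc4_feats, coarse)     # enhanced features passed on to the decoder
```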
Semantic segmentation is a fine pixel-level image classification task, so the context information of each pixel is extremely important. Especially in complex post-earthquake scenes, collapsed and non-collapsed building pixels are mixed with pixels of other ground objects and are therefore more prone to misclassification. To this end, the invention incorporates the OCR module into the two advanced network structures, aiming to integrate the context information of each pixel and obtain an enhanced pixel representation. The structure of the OCR module is shown in FIG. 3. The OCR module mainly consists of the following three parts:
(1) Soft object region partitioning: the image is first coarsely segmented using the backbone feature extraction network, and the result is used as an input to the OCR module. On the basis of this coarse segmentation result, the image is divided into K soft object regions, each representing one class.
(2) Object region feature representation (Object Region Feature Representation): in the k-th object region, all pixels are weighted and summed according to the extent to which they belong to the region, resulting in the feature representation of the region:
where I denotes the set of pixels belonging to the k-th region, x_i is the feature of pixel p_i output by the deepest layer of the network, and the weight applied to x_i is the spatial-softmax-normalized degree to which pixel p_i belongs to the k-th object region.
(3) Object context enhanced feature representation: the object context feature representation of each pixel is derived using the feature representations of the object regions, and the enhanced feature representation is then derived from the object context feature representation:
the object context feature representation y_i of each pixel is calculated in the object context attention module using the following formula:
where δ(·) and ρ(·) are transform functions, r_k denotes the pixel positions belonging to the k-th region, and φ(·) and a further transform function are likewise transform functions;
the enhanced feature representation z_i is obtained by fusing y_i and x_i:
where g is the transform function used for fusion and T denotes the transpose.
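The sketch below makes the pixel-region attention and the fusion concrete; for brevity the transform functions φ, ψ, δ, ρ and g are collapsed to identity mappings and a channel concatenation, which is a simplifying assumption rather than the patented design.

```python
import torch
import torch.nn.functional as F

def object_context_enhance(feats, region_feats):
    """feats: (B, C, H, W) pixel features x_i; region_feats: (B, K, C) f_k.
    Returns (B, 2C, H, W): the object-context-enhanced representation z_i."""
    b, c, h, w = feats.shape
    x = feats.view(b, c, h * w).permute(0, 2, 1)                          # (B, HW, C)
    # pixel-region relation, softmax-normalised over the K regions
    rel = F.softmax(torch.bmm(x, region_feats.transpose(1, 2)), dim=2)    # (B, HW, K)
    # object context representation y_i: attention-weighted sum of region features
    y = torch.bmm(rel, region_feats).permute(0, 2, 1).view(b, c, h, w)
    # fuse x_i and y_i (concatenation standing in for g([x; y]))
    return torch.cat([feats, y], dim=1)
```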
In post-earthquake scenes, the mixing of collapsed, non-collapsed, and other classes of pixels is more pronounced than in ordinary urban scenes, so pixels located at ground-object boundaries are harder to predict. Conventional loss functions such as Focal Loss usually consider only the relationship between the predicted probability of a pixel and its label, not the relative position of the pixel with respect to the object boundary; yet pixels near the object boundary should receive a higher loss during training, which improves the network's ability to characterize object edges finely.
Based on this analysis, the invention designs a Boundary Confidence Index (BCI) and, on this basis, proposes the boundary enhancement loss function BE Loss. The calculation process is as follows:
step1: for one pixel c in the ground truth-value diagram, a window of 3*3 is constructed centered around c as an initial neighborhood (neighbor). If the pixel e of the label with the category different from the category c exists in the current neighborhood, the neighborhood is the final neighborhood; otherwise, the area is increased by taking the step length as 1, and the increase is stopped until the pixel e with the label of the category different from the label of the category c exists in the current neighborhood, and the current window size W is taken as the final neighborhood. Meanwhile, to reduce the variability in each direction, the four corner points of the neighborhood are removed to approximate the shape of a circle, such as the final neighborhood when W takes 5 and 9, as shown in fig. 4.
Step 2: since c and e belong to objects of different classes, the distance between them reflects how close c is to the object boundary. Based on this assumption, the boundary metric index corresponding to c is defined as d_c = (W-3)/2. Traversing all pixels yields an initial boundary metric index set D.
Step 3: because different ground objects may differ greatly in size and shape, excessively large "outliers" may appear in D and bias the statistics. To this end, for each of the K categories, the maximum d_c of that category's pixels in D is counted; the minimum of these K maxima is then taken as the upper bound of all d_c in D, yielding an updated boundary metric index set D*.
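A straightforward (and deliberately unoptimised) NumPy sketch of Steps 1-3 follows; the function name, the max_window cap, and applying the corner removal to every window size are assumptions made for illustration.

```python
import numpy as np

def boundary_metric_map(label, max_window=31):
    """label: (H, W) integer ground-truth map.
    Returns (H, W) boundary metric indices d_c = (W - 3) / 2, capped as in Step 3."""
    h, w = label.shape
    d = np.zeros((h, w), dtype=np.float32)
    for i in range(h):
        for j in range(w):
            win = 3
            while True:
                r = win // 2
                found = False
                for di in range(-r, r + 1):
                    for dj in range(-r, r + 1):
                        if win > 3 and abs(di) == r and abs(dj) == r:
                            continue  # drop corner points to approximate a circle
                        ii, jj = i + di, j + dj
                        if 0 <= ii < h and 0 <= jj < w and label[ii, jj] != label[i, j]:
                            found = True
                if found or win >= max_window:
                    break
                win += 2  # enlarge the neighbourhood by step 1 on each side
            d[i, j] = (win - 3) / 2
    # Step 3: cap all d_c by the smallest of the per-class maxima
    caps = [d[label == k].max() for k in np.unique(label)]
    return np.minimum(d, min(caps))
```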
Step 4: let d_c* be the boundary metric index of c in D*; after normalization, the BCI corresponding to c is defined as follows:
where N is the number of pixels in one batch, the prediction for the n-th pixel and the corresponding one-hot ground-truth label enter the loss, γ is the hard-sample modulation coefficient, which reduces the weight of easily classified samples so that the network pays more attention to hard samples during training, and α_k is the class weight, whose purpose is to alleviate the class imbalance of the training samples:
where F_k is the frequency of pixels of the k-th class, and median(F_k) denotes the median of the K class frequencies.
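The α_k formula is again shown only as an image in the original; given the description, a median-frequency-balancing form is the natural reading, stated here as an assumption:

```latex
\alpha_{k} = \frac{\operatorname{median}(F_{1},\ldots,F_{K})}{F_{k}}
```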
Step 5: BE Loss is defined as follows:
L_BE = L_BCI + L_CE
where L_CE is the cross-entropy loss function (Cross Entropy Loss). L_CE is added to prevent the network from becoming difficult to converge in the late stage of training because of excessively large boundary D_BCI values. FIG. 5 illustrates the principle of L_BE.
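A hedged PyTorch sketch of BE Loss is given below. Because the BCI normalisation and the L_BCI formula are reproduced only as images in the original, the BCI weighting used here (pixels with smaller clipped boundary metric d_c* receive weights closer to 1) and the focal-style form of the BCI term are assumptions; the function and argument names are likewise hypothetical.

```python
import torch
import torch.nn.functional as F

def be_loss(logits, target, d_star, alpha, gamma=2.0):
    """logits: (B, K, H, W); target: (B, H, W) int64 labels;
    d_star: (B, H, W) clipped boundary metric d_c*; alpha: (K,) class weights."""
    # assumed BCI: near-boundary pixels (small d_c*) get weights near 1
    bci = 1.0 - d_star / (d_star.max() + 1e-6)
    prob = F.softmax(logits, dim=1)
    pt = prob.gather(1, target.unsqueeze(1)).squeeze(1).clamp_min(1e-6)  # p of the true class
    a_t = alpha[target]                                                  # per-pixel class weight
    l_bci = (a_t * bci * (1.0 - pt) ** gamma * (-pt.log())).mean()       # BCI-weighted focal-style term
    l_ce = F.cross_entropy(logits, target)                               # plain cross entropy
    return l_bci + l_ce                                                  # L_BE = L_BCI + L_CE
```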
Experiments show that the pixel accuracy (PA) of the two designed networks can reach more than 86 percent. In addition, compared with their respective base networks, OB-DeepLabv3+ and OB-UNet improve PA by more than 1% and the IoU of non-collapsed and collapsed buildings by 3%-6%.
The embodiments are only for illustrating the technical idea of the present invention, and the protection scope of the present invention is not limited by the embodiments, and any modification made on the basis of the technical scheme according to the technical idea of the present invention falls within the protection scope of the present invention.

Claims (5)

1. A method for detecting earthquake-damaged buildings in ultra-high-resolution remote sensing images, characterized in that: object context attention modules are respectively embedded in the deep feature extraction of the DeepLabv3+ and UNet networks to enhance the feature representation, and a boundary enhancement loss function L_BE is proposed, based on the spatial position information of pixels within the object, to refine the segmentation boundary; the OB-DeepLabv3+ and OB-UNet networks are thereby constructed, and the two networks are respectively used to detect earthquake-damaged buildings;
embedding an object context attention module in the deep feature extraction of DeepLabv3+ to enhance the feature representation is specifically: taking the residual network resnet152 as the backbone feature extraction network of the DeepLabv3+ network, and connecting the atrous spatial pyramid pooling module in series with the object context attention module;
the concatenated output of the atrous spatial pyramid pooling module is taken as one input of the object context attention module, and the result of applying a 3×3 convolution to the same concatenated output is taken as the other input of the object context attention module;
embedding an object context attention module in the deep feature extraction of the UNet network to enhance the feature representation is specifically: connecting the object context attention module in series on the fourth-layer skip connection of the UNet network;
the object context attention module includes the following 3 parts:
(1) Soft object region partitioning: dividing the image into K soft object regions on the basis of a rough segmentation result, wherein each soft object region represents a category;
(2) Object region feature representation: in each soft object region, carrying out a weighted summation over all pixels according to the degree to which the pixels belong to the region, and obtaining the feature representation of the region;
(3) Object context enhanced feature representation: obtaining an object context feature representation of each pixel using the feature representations of the object regions, and obtaining an enhanced feature representation using the object context feature representation;
the expression of the boundary enhancement loss function L_BE is as follows:
L_BE = L_BCI + L_CE
where L_CE is the cross-entropy loss function and L_BCI is the BCI loss function:
where N is the number of pixels in one batch, the prediction for the n-th pixel and the corresponding one-hot ground-truth label enter the loss, γ is the hard-sample modulation coefficient, α_k is the class weight, and D_BCI is the boundary confidence index BCI corresponding to the final neighborhood c.
2. The method for detecting earthquake-damaged buildings in ultra-high-resolution remote sensing images according to claim 1, characterized in that: the feature representation f_k of the k-th region is obtained in the object context attention module using the following formula:
where I denotes the set of pixels belonging to the k-th region, x_i is the feature of pixel p_i output by the deepest layer of the network, and the weight applied to x_i is the spatial-softmax-normalized degree to which pixel p_i belongs to the k-th object region.
3. The method for detecting earthquake-damaged buildings in ultra-high-resolution remote sensing images according to claim 2, characterized in that: the object context feature representation y_i of each pixel is calculated in the object context attention module using the following formula:
where δ(·) and ρ(·) are transform functions, r_k denotes the pixel positions belonging to the k-th region, and φ(·) and a further transform function are likewise transform functions;
the enhanced feature representation z_i is obtained by fusing y_i and x_i:
where g is the transform function used for fusion and T denotes the transpose.
4. The method for detecting earthquake-damaged buildings in ultra-high-resolution remote sensing images according to claim 1, characterized in that: D_BCI is obtained as follows:
the boundary metric index corresponding to the final neighborhood c is defined as d_c = (W-3)/2, where W is the window size of the final neighborhood c; traversing all pixels yields an initial boundary metric index set D; for each of the K categories, the maximum d_c of that category's pixels in D is counted, and the minimum of these K maxima is taken as the upper bound of all d_c in D, yielding an updated boundary metric index set D*; d_c* is the boundary metric index of the final neighborhood c in D*.
5. The method for detecting earthquake-damaged buildings in ultra-high-resolution remote sensing images according to claim 1, characterized in that: the formula of α_k is as follows:
where F_k is the frequency of pixels of the k-th class, and median(F_k) denotes the median of the K class frequencies.
CN202110891197.2A 2021-08-04 2021-08-04 Ultrahigh-resolution remote sensing image earthquake damage building detection method Active CN113688696B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110891197.2A CN113688696B (en) 2021-08-04 2021-08-04 Ultrahigh-resolution remote sensing image earthquake damage building detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110891197.2A CN113688696B (en) 2021-08-04 2021-08-04 Ultrahigh-resolution remote sensing image earthquake damage building detection method

Publications (2)

Publication Number Publication Date
CN113688696A CN113688696A (en) 2021-11-23
CN113688696B true CN113688696B (en) 2023-07-18

Family

ID=78578798

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110891197.2A Active CN113688696B (en) 2021-08-04 2021-08-04 Ultrahigh-resolution remote sensing image earthquake damage building detection method

Country Status (1)

Country Link
CN (1) CN113688696B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116088049B (en) * 2023-04-07 2023-06-20 清华大学 Least square inverse time migration seismic imaging method and device based on wavelet transformation

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111460936A (en) * 2020-03-18 2020-07-28 中国地质大学(武汉) Remote sensing image building extraction method, system and electronic equipment based on U-Net network
CN112183360A (en) * 2020-09-29 2021-01-05 上海交通大学 Lightweight semantic segmentation method for high-resolution remote sensing image
CN112950645A (en) * 2021-03-24 2021-06-11 中国人民解放军国防科技大学 Image semantic segmentation method based on multitask deep learning
CN113129309A (en) * 2021-03-04 2021-07-16 同济大学 Medical image semi-supervised segmentation system based on object context consistency constraint

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110390267B (en) * 2019-06-25 2021-06-01 东南大学 Mountain landscape building extraction method and device based on high-resolution remote sensing image

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111460936A (en) * 2020-03-18 2020-07-28 中国地质大学(武汉) Remote sensing image building extraction method, system and electronic equipment based on U-Net network
CN112183360A (en) * 2020-09-29 2021-01-05 上海交通大学 Lightweight semantic segmentation method for high-resolution remote sensing image
CN113129309A (en) * 2021-03-04 2021-07-16 同济大学 Medical image semi-supervised segmentation system based on object context consistency constraint
CN112950645A (en) * 2021-03-24 2021-06-11 中国人民解放军国防科技大学 Image semantic segmentation method based on multitask deep learning

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Earthquake-Damaged Buildings Detection in Very High-Resolution Remote Sensing Images Based on Object Context and Boundary Enhanced Loss; Chao Wang et al.; Remote Sensing; 1-25 *
Object-Contextual Representations for Semantic Segmentation; Yuhui Yuan et al.; Computer Vision - ECCV 2020; 173-190 *
Recognition Zucchinis Intercropped with Sunflowers in UAV Visible Images Using an Improved Method Based on OCRNet; Shenjin Huang et al.; Remote Sensing; 1-20 *
Building extraction from high-resolution remote sensing images using the U-net network; 张浩然; 赵江洪; 张晓光; Remote Sensing Information; vol. 35, no. 3; 143-150 *
Urban green space extraction from GF-2 imagery based on the DeepLabv3+ semantic segmentation model; 刘文雅; 岳安志; 季珏; 师卫华; 邓孺孺; 梁业恒; 熊龙海; Remote Sensing for Land & Resources; vol. 32, no. 2; 120-129 *

Also Published As

Publication number Publication date
CN113688696A (en) 2021-11-23

Similar Documents

Publication Publication Date Title
CN111723675B (en) Remote sensing image scene classification method based on multiple similarity measurement deep learning
EP1233374B1 (en) Apparatus and method for extracting objects based on feature matching between segmented regions in images
CN102236675B (en) Method for processing matched pairs of characteristic points of images, image retrieval method and image retrieval equipment
CN109657610A (en) A kind of land use change survey detection method of high-resolution multi-source Remote Sensing Images
CN104134080A (en) Method and system for automatically detecting roadbed collapse and side slope collapse of road
CN107862702A (en) A kind of conspicuousness detection method of combination boundary connected and local contrast
CN110929621B (en) Road extraction method based on topology information refinement
CN115641327B (en) Building engineering quality supervision and early warning system based on big data
CN111209894A (en) Roadside illegal building identification method for road aerial image
CN114140683A (en) Aerial image target detection method, equipment and medium
CN113688696B (en) Ultrahigh-resolution remote sensing image earthquake damage building detection method
Li et al. Classification of building damage triggered by earthquakes using decision tree
CN108648200B (en) Indirect urban high-resolution impervious surface extraction method
Senthilnath et al. Automatic road extraction using high resolution satellite image based on texture progressive analysis and normalized cut method
CN112347927B (en) High-resolution image building extraction method based on convolutional neural network probability decision fusion
Wang et al. Hybrid remote sensing image segmentation considering intrasegment homogeneity and intersegment heterogeneity
CN115880325A (en) Building outline automatic extraction method based on point cloud dimension and spatial distance clustering
Aung et al. Short-term prediction of localized heavy rain from radar imaging and machine learning
Xia et al. A shadow detection method for remote sensing images using affinity propagation algorithm
Sha et al. A boundary-enhanced supervoxel method for extraction of road edges in MLS point clouds
Ren et al. Building recognition from aerial images combining segmentation and shadow
CN112668403A (en) Fine-grained ship image target identification method for multi-feature area
Iturburu et al. Towards rapid and automated vulnerability classification of concrete buildings
Li et al. A fuzzy segmentation-based approach to extraction of coastlines from IKONOS imagery
CN113436091B (en) Object-oriented remote sensing image multi-feature classification method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant