CN112767423A - Remote sensing image building segmentation method based on improved SegNet - Google Patents

Remote sensing image building segmentation method based on improved SegNet

Info

Publication number
CN112767423A
CN112767423A
Authority
CN
China
Prior art keywords
remote sensing
network
convolution
training
sensing image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110163278.0A
Other languages
Chinese (zh)
Other versions
CN112767423B (en)
Inventor
英昌盛
姜亭亭
周伟
李紫薇
张桂杰
孙浩然
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jilin Normal University
Original Assignee
Jilin Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jilin Normal University filed Critical Jilin Normal University
Priority to CN202110163278.0A priority Critical patent/CN112767423B/en
Publication of CN112767423A publication Critical patent/CN112767423A/en
Application granted granted Critical
Publication of CN112767423B publication Critical patent/CN112767423B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning

Abstract

The invention provides a remote sensing image building segmentation method based on improved SegNet. First, data augmentation is performed on the samples in a data set to increase the robustness of the model and reduce overfitting, obtaining a training sample set and a test sample set; second, bottleneck blocks, depthwise separable convolution layers and skip connections are added on the basis of the SegNet network model to construct a remote sensing building segmentation network based on improved SegNet; next, the training images and their labels are extracted and used as input data to train the network model, and the optimal training model is saved after verification; finally, the remote sensing image to be processed is input to the network and the segmentation result is output. The method can effectively reduce the model training parameters and training time, suppress blurring of building edges in the segmentation result, make the building edges in the segmentation result more complete with fewer misclassifications, and achieve higher values of precision, recall and F1 score.

Description

Remote sensing image building segmentation method based on improved SegNet
Technical Field
The invention belongs to the technical field of remote sensing image segmentation, and particularly relates to a remote sensing image building segmentation method based on improved SegNet.
Background
Buildings are among the most important components of remote sensing images and provide important references for urban planning, land resource management, disaster emergency assessment and the like. Affected by natural imaging factors such as spatial distance, illumination and weather, and by the diversity of building structures and textures, remote sensing images have high noise and low imaging quality. Traditional remote sensing image segmentation methods suffer from low extraction accuracy, blurred edges, reliance on manual interpretation and other problems.
Deep neural networks such as FCN, U-Net and SegNet have strong autonomous feature learning capability and adaptability, and are increasingly widely applied in the field of remote sensing image processing. The SegNet semantic segmentation network uses pooling index information for the upsampling operation in the decoding network, which preserves the integrity of the segmented image information and reduces the memory footprint of the network. At present, SegNet networks and their derived algorithms applied to remote sensing image building segmentation suffer from large numbers of training parameters, loss of detail in segmentation results, blurred building edges and other problems.
Disclosure of Invention
The invention aims to effectively reduce the number of parameters in the network model and to solve problems such as loss of detail and blurred building edges in the segmentation result, and provides a remote sensing image building segmentation method based on improved SegNet.
The invention is realized by the following technical scheme, and provides a remote sensing image building segmentation method based on improved SegNet, which comprises the following steps:
step S1: training sample set preparation
Selecting an existing remote sensing image data set, wherein the data set comprises remote sensing images and their labels, and augmenting and expanding the samples in the data set through rotation, scaling, cropping and added image noise to obtain a training sample set and a test sample set;
step S2: construction of remote sensing image building segmentation network model based on improved SegNet
Constructing a remote sensing image building segmentation network model based on improved SegNet using the Keras framework in an Anaconda environment, wherein the model is based on the encoder-decoder network structure of SegNet; bottleneck blocks are inserted into the 2nd and 3rd convolution groups of the encoding network of the SegNet model, obtaining more building edge features by increasing the network depth, while the bottleneck structure compresses and then restores the channel dimension of the input data, reducing network parameters and improving training efficiency; the last two ordinary convolutions of the last convolution group in the decoding network are replaced with depthwise separable convolutions; the feature maps of the convolution layers in the encoding network are skip-connected to the feature maps of the mirror-image upsampling layers in the decoding network, so that the low-level semantic features of the remote sensing image assist the high-level semantic features in reconstructing the image, improving building segmentation precision, enriching feature description information and giving the segmented image higher edge integrity;
step S3: network model training
Sending the preprocessed training sample set into the network model for training, and saving the optimal training model after verification on the test sample set;
step S4: performing building segmentation on the remote sensing image by using the optimal training model saved in step S3, and outputting the result.
Further, in step S1, the Satellite Dataset I and the Massachusetts Buildings Dataset are selected to establish the remote sensing image building segmentation training sample set and test sample set; the Satellite Dataset I comprises 204 building remote sensing images of size 512 × 512, and the Massachusetts Buildings Dataset comprises 151 building remote sensing images of size 1500 × 1500.
Further, the bottleneck block is composed of three convolutional layers: 1 × 1 × 64, 3 × 3 × 64 and 1 × 1 × 128; in the bottleneck block structure, the 1 × 1 × 64 convolution reduces the feature dimension and compresses the input features, and the 1 × 1 × 128 convolution increases the dimension to restore the feature channels.
Further, the depthwise separable convolution is composed of a channel-by-channel (depthwise) convolution and a point-by-point (pointwise) convolution; the channel-by-channel convolution splits all the multi-channel feature maps from the previous layer into single-channel feature maps, convolves each channel separately and recombines them, and after batch normalization and ReLU activation function processing of the feature map group, the point-by-point convolution is performed with a 1 × 1 convolution kernel.
Further, in step S3, when the network model is trained, the cropped remote sensing images and their labels are extracted, and forward prediction is performed with the remote sensing image building segmentation network model based on improved SegNet to obtain a binary image of the segmentation result, where the difference between the binary image and the label is the prediction error of the network model.
Further, in step S4, the remote sensing image to be processed is input to the network model and output as a segmented binary image, and the position and edge information of the buildings in the remote sensing image can be obtained from the markings in the image.
The invention has the beneficial effects that:
1. In the network model constructed by the method, adding bottleneck blocks to the encoding network retains more detailed features such as building edges, while the dimension compression of the input data effectively reduces the model training parameters.
2. In the invention, the last convolution layers of the model's decoding network use depthwise separable convolutions instead of ordinary convolutions, reducing network parameters and training time and realizing a lightweight network.
3. The skip connection of the U-Net model is added to the network model constructed by the invention: the feature maps generated by the encoding network are concatenated with the feature maps of the corresponding parts of the decoding network after the upsampling operation, so that more feature description information is retained, segmentation precision is improved, the segmentation result is clearer, and edge integrity is higher.
Drawings
FIG. 1 is a schematic diagram of a cropped sub-image and its label according to an embodiment of the present invention;
FIG. 2 is a sample augmented (rotated) view of an embodiment of the present invention;
FIG. 3 is a diagram of a remote sensing image building segmentation network model architecture of an improved SegNet constructed in accordance with the present invention;
FIG. 4 is a diagram illustrating the segmentation effect of the training process according to the embodiment of the present invention; wherein (a) is a training image, (b) is a label image, and (c) is a network segmentation result of the invention;
FIG. 5 is a graph illustrating the segmentation effect of an embodiment of the present invention for constructing a network; wherein (a) is a test image and (b) is a network segmentation result of the invention;
FIG. 6 is a comparison of the segmentation results of the network model constructed in an embodiment of the invention with those of FCN, SegNet and U-Net.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention provides a remote sensing image building segmentation method based on improved SegNet, which comprises the following steps:
step S1: training sample set preparation
Selecting an existing remote sensing image data set, wherein the data set comprises remote sensing images and their labels, and augmenting and expanding the samples in the data set through operations such as rotation, scaling, cropping and added image noise, so as to reduce overfitting and increase model robustness, obtaining a training sample set and a test sample set;
step S2: construction of remote sensing image building segmentation network model based on improved SegNet
Based on an Anaconda environment, the Keras framework is used to construct a remote sensing image building segmentation network model based on improved SegNet; the model is based on the encoder-decoder network structure of SegNet; bottleneck blocks are inserted into the 2nd and 3rd convolution groups of the encoding network of the SegNet network model, obtaining more detailed features such as building edges by increasing the network depth, while the channel-compression operation of the bottleneck reduces network parameters and improves training efficiency; the last two ordinary convolutions in the last convolution group of the decoding network are replaced with depthwise separable convolutions to reduce the training parameters of the network; the feature maps of the convolution layers in the encoding network are skip-connected to the feature maps of the mirror-image upsampling layers in the decoding network, so that the low-level semantic features of the remote sensing image assist the high-level semantic features in reconstructing the image, improving building segmentation precision, enriching feature description information and giving the segmented image higher edge integrity;
step S3: network model training
Sending the preprocessed training sample set into the network to train the model, and saving the optimal training model after verification on the test sample set;
step S4: performing building segmentation on the remote sensing image by using the optimal training model saved in step S3, and outputting the result.
Examples
One embodiment of the present invention comprises the following steps:
step S1: training sample set preparation
Selecting an existing remote sensing image data set, wherein the data set comprises remote sensing images and their labels, and augmenting and expanding the samples through operations such as rotation, scaling, cropping and added image noise to obtain a training sample set and a test sample set.
In this embodiment, the Satellite Dataset I (global cities) and the Massachusetts Buildings Dataset are selected to establish the remote sensing image building segmentation training and test sample sets. Both are small-sample data sets, and the images in the data sets include labeling information. The Satellite Dataset I (global cities) contains 204 remote sensing images of buildings of size 512 × 512, with resolutions varying from 0.3 m to 2.5 m. The Massachusetts Buildings Dataset contains 151 images of size 1500 × 1500, with a resolution of 1 m.
The original images and label images in the data set are preprocessed, and the tif image samples are stored in png format to reduce storage requirements. The original images are cropped into 256 × 256 sub-images, which reduces the memory requirement during training, as shown in fig. 1. Operations such as rotation, scaling, brightness adjustment, added noise and cropping are performed on the cropped sub-images to expand the available data set (as shown in fig. 2), preventing overfitting and increasing the robustness of the constructed network model. The training sample set and test sample set are divided in a 7:3 ratio.
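As a rough illustration of the preprocessing in this embodiment, the cropping of a large remote sensing image into non-overlapping 256 × 256 sub-images can be sketched as below. This is a minimal sketch with numpy; the function name `tile_image` and the decision to discard edge remainders that do not fill a whole tile are assumptions, since the patent does not specify how the borders of the 1500 × 1500 images are handled.

```python
import numpy as np

def tile_image(img: np.ndarray, size: int = 256) -> list:
    """Split an H x W (x C) image into non-overlapping size x size tiles.

    Edge remainders that do not fill a whole tile are discarded here;
    the patent does not state how borders are treated, so this is an
    illustrative assumption.
    """
    h, w = img.shape[:2]
    tiles = []
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            tiles.append(img[y:y + size, x:x + size])
    return tiles

# A 512 x 512 image yields a 2 x 2 grid of tiles;
# a 1500 x 1500 image yields a 5 x 5 grid (1500 // 256 = 5 full rows/cols).
small = np.zeros((512, 512, 3), dtype=np.uint8)
large = np.zeros((1500, 1500, 3), dtype=np.uint8)
print(len(tile_image(small)), len(tile_image(large)))  # 4 25
```

In practice the cropped tiles would then be passed through the rotation, scaling, brightness and noise augmentations before the 7:3 split.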
Step S2: construction of remote sensing image building segmentation network model based on improved SegNet
The structure of the remote sensing image building segmentation network model based on improved SegNet constructed by the invention is shown in fig. 3. In this embodiment, bottleneck blocks, depthwise separable convolution layers and skip connections are added on the basis of the SegNet network framework.
The bottleneck block is composed of three convolutional layers: 1 × 1 × 64, 3 × 3 × 64 and 1 × 1 × 128. In the bottleneck structure, the 1 × 1 × 64 convolution reduces the feature dimension and compresses the input features, and the 1 × 1 × 128 convolution increases the dimension to restore the feature channels. The purpose of adding 2 bottleneck blocks in the encoding network is to increase the depth of the model and obtain more detailed features such as building edges; at the same time, the bottleneck structure effectively reduces the number of training parameters and improves the training speed of the network.
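To make the parameter saving of the bottleneck concrete, one can count the weights of the 1 × 1 × 64 → 3 × 3 × 64 → 1 × 1 × 128 stack against a single plain 3 × 3 × 128 convolution on the same input. This is a back-of-the-envelope sketch in pure Python; the assumption of a 128-channel input feature map and the omission of bias terms are illustrative choices, not stated in the patent.

```python
def conv_params(k: int, c_in: int, c_out: int) -> int:
    """Weight count of a k x k convolution with c_in -> c_out channels, ignoring biases."""
    return k * k * c_in * c_out

# Bottleneck block from the patent: 1x1x64, 3x3x64, 1x1x128,
# assumed here to sit on a 128-channel input feature map.
c_in = 128
bottleneck = (conv_params(1, c_in, 64)    # 1x1x64: compress channels
              + conv_params(3, 64, 64)    # 3x3x64: spatial features at reduced width
              + conv_params(1, 64, 128))  # 1x1x128: restore channels

plain = conv_params(3, c_in, 128)         # one ordinary 3x3x128 convolution

print(bottleneck, plain)  # 53248 147456
```

Under these assumptions the bottleneck uses roughly 2.8× fewer weights than the plain convolution while being deeper, which matches the patent's claim of reduced training parameters.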
In the final convolution group of the decoding network, the last two ordinary convolution layers are replaced with depthwise separable convolution layers. Using depthwise separable convolutions reduces the training parameters and training time of the network, achieving a lightweight network. The depthwise separable convolution consists of a channel-by-channel (depthwise) convolution and a point-by-point (pointwise) convolution. The channel-by-channel convolution splits all the multi-channel feature maps from the previous layer into single-channel feature maps, convolves each channel separately and recombines them. After the feature map group is processed by batch normalization and the ReLU activation function, point-by-point convolution is performed with a 1 × 1 convolution kernel.
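The two-stage structure described above can be sketched directly in numpy: one filter per input channel, followed by a 1 × 1 convolution that mixes channels. This is a minimal illustration only; the function name, the 'same' zero padding, and the omission of the batch normalization and ReLU between the two stages are simplifications not prescribed by the patent.

```python
import numpy as np

def depthwise_separable_conv(x, dw_kernels, pw_kernels):
    """x: (H, W, C_in); dw_kernels: (k, k, C_in); pw_kernels: (C_in, C_out).

    Channel-by-channel convolution (with 'same' zero padding), then a 1x1
    point-by-point convolution, as described in the patent. Batch
    normalization and ReLU between the two stages are omitted for brevity.
    """
    h, w, c_in = x.shape
    k = dw_kernels.shape[0]
    p = k // 2
    xp = np.pad(x, ((p, p), (p, p), (0, 0)))
    dw = np.empty_like(x, dtype=float)
    for c in range(c_in):                      # one spatial filter per input channel
        for i in range(h):
            for j in range(w):
                dw[i, j, c] = np.sum(xp[i:i + k, j:j + k, c] * dw_kernels[:, :, c])
    return dw @ pw_kernels                     # 1x1 convolution = matmul over channels

x = np.random.rand(8, 8, 4)
y = depthwise_separable_conv(x, np.random.rand(3, 3, 4), np.random.rand(4, 16))
print(y.shape)  # (8, 8, 16)
```

The depthwise stage costs k·k weights per channel and the pointwise stage C_in·C_out, far fewer in total than the k·k·C_in·C_out weights of an ordinary convolution.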
The skip connection operation of the U-Net network is adopted to connect the feature maps of the encoding process in the improved SegNet network model to the feature maps of the corresponding upsampling layers in the decoding process. The low-level and high-level semantic features of the segmented image are connected through a concatenate operation: the feature map before the pooling operation in each convolution group of the encoding part is spliced with the feature map after the upsampling operation in the corresponding decoding part. This improves building segmentation precision, enriches feature description information, gives the segmented image higher edge integrity, and effectively suppresses edge blurring in the segmented image.
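The concatenate-based skip connection amounts to stacking two same-size feature maps along the channel axis. A minimal numpy sketch follows; the 64 × 64 × 128 shapes are illustrative assumptions, not values taken from the patent, and a channels-last layout (as in Keras) is assumed.

```python
import numpy as np

# Encoder feature map before pooling and the mirror-image decoder feature
# map after upsampling share the same spatial size, so they can be
# concatenated along the channel axis (channels-last layout assumed).
encoder_feat = np.random.rand(64, 64, 128)   # from the encoding network
decoder_feat = np.random.rand(64, 64, 128)   # after the upsampling layer

skip = np.concatenate([encoder_feat, decoder_feat], axis=-1)
print(skip.shape)  # (64, 64, 256)
```

The subsequent convolution in the decoding group then sees both the low-level edge detail and the high-level semantics, which is what the patent credits for the improved edge integrity.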
Step S3: network model training
When the model is trained, the cropped 256 × 256 sub-images and their labels are extracted, and forward prediction is performed through the remote sensing image building segmentation network model based on improved SegNet to obtain a binary image of the segmentation result, as shown in fig. 4. The difference between the binary image and the label is the prediction error of the network model.
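The comparison between the predicted binary image and the label can be sketched as below. The 0.5 threshold and the pixel-wise error rate are illustrative assumptions: the patent only states that the network outputs a binary segmentation image and that its difference from the label is the prediction error.

```python
import numpy as np

def prediction_error(probs: np.ndarray, label: np.ndarray, thresh: float = 0.5):
    """Binarize a per-pixel building probability map and compare with the label.

    Returns the binary segmentation image and the fraction of mismatching
    pixels. The 0.5 threshold is an illustrative assumption.
    """
    binary = (probs >= thresh).astype(np.uint8)
    error_rate = float(np.mean(binary != label))
    return binary, error_rate

probs = np.array([[0.9, 0.2], [0.6, 0.4]])          # toy network output
label = np.array([[1, 0], [0, 0]], dtype=np.uint8)  # toy ground-truth label
binary, err = prediction_error(probs, label)
print(binary.tolist(), err)  # [[1, 0], [1, 0]] 0.25
```

During training this per-pixel difference would feed the loss; here one of four pixels disagrees, giving an error rate of 0.25.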
Step S4: outputting the result. The remote sensing image to be processed is input to the network model and output as a segmented binary image; the position and edge information of the buildings in the remote sensing image can be obtained from the markings in the image, as shown in figure 5.
In the segmentation results of the constructed network, the edge information is more complete and there are fewer misclassifications, as shown in figure 6.
The network model constructed by the invention has fewer training parameters, as shown in Table 1.
TABLE 1 Comparison of training parameters (×10⁶)
Figure BDA0002937322610000051
The network constructed by the invention achieves higher values of precision, recall and F1 score, as shown in Tables 2 and 3.
TABLE 2 Evaluation index values of each network on Satellite Dataset I
Figure BDA0002937322610000052
TABLE 3 Evaluation index values of each network on the Massachusetts Buildings Dataset
Figure BDA0002937322610000053
The remote sensing image building segmentation method based on improved SegNet has been described in detail above; a specific example is used to explain the principle and implementation of the invention, and the description of the embodiment is only intended to help in understanding the method and its core idea. Meanwhile, for a person skilled in the art, there may be variations in the specific embodiments and the application scope according to the idea of the invention. In summary, the content of this specification should not be construed as limiting the invention.

Claims (6)

1. A remote sensing image building segmentation method based on improved SegNet, characterized in that the method comprises the following steps:
step S1: training sample set preparation
Selecting an existing remote sensing image data set, wherein the data set comprises remote sensing images and their labels, and augmenting and expanding the samples in the data set through rotation, scaling, cropping and added image noise to obtain a training sample set and a test sample set;
step S2: construction of remote sensing image building segmentation network model based on improved SegNet
Constructing a remote sensing image building segmentation network model based on improved SegNet using the Keras framework in an Anaconda environment, wherein the model is based on the encoder-decoder network structure of SegNet; bottleneck blocks are inserted into the 2nd and 3rd convolution groups of the encoding network of the SegNet model, obtaining more building edge features by increasing the network depth, while the bottleneck structure compresses and then restores the channel dimension of the input data, reducing network parameters and improving training efficiency; the last two ordinary convolutions of the last convolution group in the decoding network are replaced with depthwise separable convolutions; the feature maps of the convolution layers in the encoding network are skip-connected to the feature maps of the mirror-image upsampling layers in the decoding network, so that the low-level semantic features of the remote sensing image assist the high-level semantic features in reconstructing the image, improving building segmentation precision, enriching feature description information and giving the segmented image higher edge integrity;
step S3: network model training
Sending the preprocessed training sample set into the network model for training, and saving the optimal training model after verification on the test sample set;
step S4: performing building segmentation on the remote sensing image by using the optimal training model saved in step S3, and outputting the result.
2. The method of claim 1, wherein: in step S1, the Satellite Dataset I and the Massachusetts Buildings Dataset are selected to establish the remote sensing image building segmentation training sample set and test sample set; the Satellite Dataset I comprises 204 building remote sensing images of size 512 × 512, and the Massachusetts Buildings Dataset comprises 151 building remote sensing images of size 1500 × 1500.
3. The method of claim 1, wherein: the bottleneck block is composed of three convolutional layers: 1 × 1 × 64, 3 × 3 × 64 and 1 × 1 × 128; in the bottleneck block structure, the 1 × 1 × 64 convolution reduces the feature dimension and compresses the input features, and the 1 × 1 × 128 convolution increases the dimension to restore the feature channels.
4. The method of claim 3, wherein: the depthwise separable convolution consists of a channel-by-channel (depthwise) convolution and a point-by-point (pointwise) convolution; the channel-by-channel convolution splits all the multi-channel feature maps from the previous layer into single-channel feature maps, convolves each channel separately and recombines them, and after batch normalization and ReLU activation function processing of the feature map group, the point-by-point convolution is performed with a 1 × 1 convolution kernel.
5. The method of claim 1, wherein: in step S3, when the network model is trained, the cropped remote sensing images and their labels are extracted, and forward prediction is performed with the remote sensing image building segmentation network model based on improved SegNet to obtain a binary image of the segmentation result, where the difference between the binary image and the label is the prediction error of the network model.
6. The method of claim 5, wherein: in step S4, the remote sensing image to be processed is input to the network model and output as a segmented binary image, and the position and edge information of the buildings in the remote sensing image can be obtained from the markings in the image.
CN202110163278.0A 2021-02-05 2021-02-05 Remote sensing image building segmentation method based on improved SegNet Active CN112767423B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110163278.0A CN112767423B (en) 2021-02-05 2021-02-05 Remote sensing image building segmentation method based on improved SegNet

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110163278.0A CN112767423B (en) 2021-02-05 2021-02-05 Remote sensing image building segmentation method based on improved SegNet

Publications (2)

Publication Number Publication Date
CN112767423A true CN112767423A (en) 2021-05-07
CN112767423B CN112767423B (en) 2023-08-22

Family

ID=75705230

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110163278.0A Active CN112767423B (en) 2021-02-05 2021-02-05 Remote sensing image building segmentation method based on improved SegNet

Country Status (1)

Country Link
CN (1) CN112767423B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113496482A (en) * 2021-05-21 2021-10-12 郑州大学 Poison driving test paper image segmentation model, positioning segmentation method and portable device
CN115052148A (en) * 2022-07-21 2022-09-13 南昌工程学院 Image compression algorithm based on model segmentation compression self-encoder
CN115272377A (en) * 2022-09-27 2022-11-01 松立控股集团股份有限公司 Vehicle segmentation method fusing image edge information

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108537132A (en) * 2018-03-16 2018-09-14 中国人民解放军陆军工程大学 A kind of lane segmentation method of the depth autocoder based on supervised learning
CN111160276A (en) * 2019-12-31 2020-05-15 重庆大学 U-shaped cavity full-volume integral cutting network identification model based on remote sensing image

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108537132A (en) * 2018-03-16 2018-09-14 中国人民解放军陆军工程大学 A kind of lane segmentation method of the depth autocoder based on supervised learning
CN111160276A (en) * 2019-12-31 2020-05-15 重庆大学 U-shaped cavity full-volume integral cutting network identification model based on remote sensing image

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
徐胜军; 欧阳朴衍; 郭学源; KHAN TAHA MUTHAR: "Building segmentation of remote sensing images based on a multi-scale feature fusion model", Computer Measurement & Control, no. 07 *
李紫薇 et al.: "Research on the application of encoder-decoder neural networks to semantic segmentation of remote sensing images", Intelligent Computer and Applications *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113496482A (en) * 2021-05-21 2021-10-12 郑州大学 Poison driving test paper image segmentation model, positioning segmentation method and portable device
CN113496482B (en) * 2021-05-21 2022-10-04 郑州大学 Toxic driving test paper image segmentation model, positioning segmentation method and portable device
CN115052148A (en) * 2022-07-21 2022-09-13 南昌工程学院 Image compression algorithm based on model segmentation compression self-encoder
CN115272377A (en) * 2022-09-27 2022-11-01 松立控股集团股份有限公司 Vehicle segmentation method fusing image edge information
CN115272377B (en) * 2022-09-27 2022-12-27 松立控股集团股份有限公司 Vehicle segmentation method fusing image edge information

Also Published As

Publication number Publication date
CN112767423B (en) 2023-08-22

Similar Documents

Publication Publication Date Title
CN112767423B (en) Remote sensing image building segmentation method based on improved SegNet
CN111047551A (en) Remote sensing image change detection method and system based on U-net improved algorithm
CN109087258B (en) Deep learning-based image rain removing method and device
CN113159051B (en) Remote sensing image lightweight semantic segmentation method based on edge decoupling
CN113850825A (en) Remote sensing image road segmentation method based on context information and multi-scale feature fusion
CN116051549B (en) Method, system, medium and equipment for dividing defects of solar cell
CN113888550A (en) Remote sensing image road segmentation method combining super-resolution and attention mechanism
CN113888547A (en) Non-supervision domain self-adaptive remote sensing road semantic segmentation method based on GAN network
CN115439751A (en) Multi-attention-fused high-resolution remote sensing image road extraction method
CN112927253A (en) Rock core FIB-SEM image segmentation method based on convolutional neural network
CN111914654A (en) Text layout analysis method, device, equipment and medium
CN112861795A (en) Method and device for detecting salient target of remote sensing image based on multi-scale feature fusion
CN114693929A (en) Semantic segmentation method for RGB-D bimodal feature fusion
CN113971735A (en) Depth image clustering method, system, device, medium and terminal
CN116229106A (en) Video significance prediction method based on double-U structure
CN116309485A (en) Pavement crack detection method for improving UNet network structure
CN113052759B (en) Scene complex text image editing method based on MASK and automatic encoder
CN113628180A (en) Semantic segmentation network-based remote sensing building detection method and system
CN113297986A (en) Handwritten character recognition method, device, medium and electronic equipment
CN112418229A (en) Unmanned ship marine scene image real-time segmentation method based on deep learning
CN116563691A (en) Road disease detection method based on TransUnet model
CN116778318A (en) Convolutional neural network remote sensing image road extraction model and method
CN117152435A (en) Remote sensing semantic segmentation method based on U-Net3+
CN115631405A (en) SegFormer-based SAR image ocean inner wave stripe segmentation method
CN115953386A (en) MSTA-YOLOv 5-based lightweight gear surface defect detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant