CN114863099A - Ground-based cloud image segmentation network based on a multi-branch asymmetric convolution module and multi-scale feature fusion - Google Patents

Ground-based cloud image segmentation network based on a multi-branch asymmetric convolution module and multi-scale feature fusion

Info

Publication number
CN114863099A
Authority
CN
China
Prior art keywords
cloud
convolution
module
fusion
convolution module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210401156.5A
Other languages
Chinese (zh)
Inventor
邱波
张立文
李晓彤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN202210401156.5A
Publication of CN114863099A
Legal status: Pending (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The invention discloses a ground-based cloud image segmentation network based on a multi-branch asymmetric convolution module and multi-scale feature fusion. In the decoder, the deep features extracted by the encoder are fused with shallow features, the feature map is then restored by convolution to the same size as the input cloud image, and a probability mask is finally obtained by normalizing each pixel with a softmax function. The optimal segmentation threshold is determined from the receiver operating characteristic (ROC) curve, and each pixel is then judged to be cloud or non-cloud. An attention mechanism is used in both the encoder and the decoder so that the network focuses on features useful for cloud segmentation. Compared with existing cloud image methods based on convolutional neural networks, this method is more accurate and achieves good segmentation even when the daytime and nighttime cloud image data are imbalanced.

Description

Ground-based cloud image segmentation network based on a multi-branch asymmetric convolution module and multi-scale feature fusion
Technical Field
The invention belongs to the technical field of cloud image analysis and particularly relates to a ground-based cloud image segmentation network based on a multi-branch asymmetric convolution module and multi-scale feature fusion.
Background
Clouds are visible aggregates of small water droplets condensed from atmospheric water vapor, or of small ice crystals formed by deposition, suspended in the air. Typically, clouds cover around 50% of the Earth's surface. They not only reflect the current state of atmospheric motion, stability, and water vapor, but also indicate the weather trend over the coming period, and their formation and dissipation redistribute water and energy. Accurately separating cloud regions from non-cloud regions is the basis of cloud cover calculation, so cloud image segmentation is an important step in cloud image analysis. Current segmentation algorithms fall into two classes: traditional algorithms and algorithms based on convolutional neural networks. Traditional algorithms mainly either threshold the ratio or difference of the red and blue bands, adjusting the threshold to separate cloud from non-cloud regions, or apply clustering and superpixel algorithms to image features. These algorithms are sensitive to parameter selection, lack robustness, and usually require extensive feature engineering to perform well. The other important class is based on convolutional neural networks, mostly on the UNet architecture, whose decoder restores image information through upsampling alone; this loses spatial information and limits segmentation accuracy.
Disclosure of Invention
The invention aims to overcome the above deficiencies and provides a ground-based cloud image segmentation network based on a multi-branch asymmetric convolution module and multi-scale feature fusion, addressing the low accuracy, poor robustness, and manual feature engineering of traditional cloud image segmentation algorithms. It also achieves higher accuracy than other cloud image segmentation algorithms based on convolutional neural networks. To achieve this, the technical scheme of the invention is as follows:
A ground-based cloud image segmentation network based on a multi-branch asymmetric convolution module and multi-scale feature fusion comprises an encoder and a decoder;
the encoder is used for feature extraction, the extracted deep and shallow features serving as the input of the decoder, and comprises a depth residual structure, a multi-branch asymmetric convolution module, and a multi-scale feature fusion module;
the decoder restores the features obtained by fusing the deep and shallow features extracted by the encoder to the same size as the input cloud image, normalizes each pixel with a softmax function to obtain a probability mask, determines the optimal threshold separating cloud and non-cloud regions from the receiver operating characteristic (ROC) curve, and finally judges whether each pixel belongs to cloud.
The residual structure comprises a convolution module and a fusion attention convolution module. The convolution module applies convolution, normalization, and an activation function to the input feature map four times; the second convolution uses dilated convolutions with different dilation rates, and the module outputs the element-wise sum of the feature maps from the third and fourth convolutions;
the activation function part adopts h _ swish, so that the precision loss in quantization can be effectively avoided, and the method can be represented by the following formula:
Figure BDA0003598849700000021
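For illustration, a minimal PyTorch sketch of this activation and of the convolution module described above follows; it is a sketch only, and the 3×3 kernel size, the single channel count, and the dilation rate of the second convolution are assumptions not specified in the text:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def h_swish(x):
    # h_swish(x) = x * ReLU6(x + 3) / 6 (PyTorch ships this as nn.Hardswish)
    return x * F.relu6(x + 3) / 6

class ConvModule(nn.Module):
    """Convolution module of the residual structure: four conv + norm +
    h_swish stages, a dilated second convolution, and an element-wise sum
    of the third and fourth feature maps as the output."""
    def __init__(self, channels, dilation=2):  # dilation rate is an assumption
        super().__init__()
        def stage(dil):
            return nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=dil, dilation=dil),
                nn.BatchNorm2d(channels),
                nn.Hardswish(),
            )
        self.s1, self.s2 = stage(1), stage(dilation)  # s2: dilated convolution
        self.s3, self.s4 = stage(1), stage(1)

    def forward(self, x):
        x = self.s2(self.s1(x))
        f3 = self.s3(x)
        f4 = self.s4(f3)
        return f3 + f4  # corresponding (element-wise) addition
```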
the fusion attention convolution module performs convolution, normalization and activation functions on the input feature map for three times, wherein the convolution for the second time adopts the void convolution with different void rates, fusion attention is adopted after the convolution for the third time, and finally the result of fusion attention and the result of corresponding addition of the input feature map are output;
the fusion attention mechanism can enhance the attention to useful information of cloud picture segmentation and input feature pictures
Figure BDA0003598849700000022
One-dimensional channel attention feature map F derived by fusing attention mechanism C ∈R C×1×1 And two-dimensional spatial attention feature map F S ∈R 1×H×W The final fused attention module output is:
Figure BDA0003598849700000023
Figure BDA0003598849700000024
wherein C, H and W respectively represent the number of channels, length and width of the input feature map,
Figure BDA0003598849700000025
representing pixel level multiplication.
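A minimal PyTorch sketch of such a fusion attention block follows; the squeeze ratio, the average-pooling choices, and the 7×7 spatial kernel are assumptions, since the text only specifies the shapes of F_C and F_S and the pixel-level multiplications:

```python
import torch
import torch.nn as nn

class FusionAttention(nn.Module):
    """Derives a channel attention map F_C (C x 1 x 1) and a spatial
    attention map F_S (1 x H x W) and applies them to the input feature
    map by broadcast (pixel-level) multiplication."""
    def __init__(self, channels, ratio=8):
        super().__init__()
        self.channel = nn.Sequential(            # F_C in R^(C x 1 x 1)
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // ratio, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // ratio, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial = nn.Sequential(            # F_S in R^(1 x H x W)
            nn.Conv2d(channels, 1, 7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, f):
        f = f * self.channel(f)                  # F' = F (x) F_C
        return f * self.spatial(f)               # F'' = F' (x) F_S
```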
The multi-branch asymmetric convolution module comprises several branches; some branches use asymmetric convolution kernels and a channel attention module, and all branches are finally concatenated along the channel dimension.
The multi-scale feature fusion module cross-fuses the features of different depths extracted by the residual module and the multi-branch asymmetric convolution module, adjusting feature map sizes with upsampling and a depthwise separable convolution module so that features can be fused by channel concatenation;
the upsampling uses a stride of 2 and doubles the height and width of the feature map by repeating its rows and columns, and the depthwise separable convolution module first applies 3×3 depthwise separable convolutions with different strides to the input feature map, then adjusts the feature map size with a 1×1 convolution.
The pixels are normalized with a softmax function to obtain a probability mask, the optimal threshold separating cloud and non-cloud regions is determined from the ROC curve, and each pixel is finally judged to be cloud or non-cloud;
the softmax function can be represented by the following equation:
Figure BDA0003598849700000026
wherein x is i And x j Characteristic values of class i and class j, respectively.
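As an illustration, with a two-class (non-cloud / cloud) output the probability mask can be obtained in PyTorch as follows; the tensor shapes are assumptions:

```python
import torch

# assumed network output: (batch, 2, H, W) logits for {non-cloud, cloud}
logits = torch.randn(1, 2, 64, 64)      # dummy values for illustration
probs = torch.softmax(logits, dim=1)    # per-pixel softmax normalization
cloud_prob_mask = probs[:, 1]           # probability that each pixel is cloud
```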
The invention also discloses a segmentation method using the ground-based cloud image segmentation network based on the multi-branch asymmetric convolution module and multi-scale feature fusion, comprising the following steps:
S1: train the cloud image segmentation network: use labeled samples (X_i, Y_i) to obtain trained model weight parameters, where X_i is an M×N image, Y_i is the corresponding label, i denotes the i-th sample, i = 1, 2, 3, ..., x, and x is the total number of training samples;
S2: load the trained model parameters. The threshold separating cloud and non-cloud pixels lies in the range 0 to 1; increase it gradually in steps of 0.01, plot the ROC curve with the false positive rate on the horizontal axis and the true positive rate on the vertical axis, and determine the segmentation threshold from the result; the threshold is finally set to 0.5;
S3: judge cloud and non-cloud regions of the output probability mask with the threshold determined in S2: a pixel whose probability exceeds 0.5 is judged a cloud region, otherwise a non-cloud region.
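A NumPy sketch of steps S2 and S3 follows. The text states only that the threshold is swept in steps of 0.01, the ROC curve is drawn, and 0.5 is finally adopted; the closest-to-corner selection rule used here is an assumption:

```python
import numpy as np

def roc_threshold(cloud_prob, labels, step=0.01):
    """Sweep thresholds over [0, 1], collect ROC points (FPR, TPR),
    and return the threshold closest to the ideal corner (0, 1)."""
    p = cloud_prob.ravel()
    y = labels.ravel().astype(bool)          # 1 = cloud, 0 = non-cloud
    thresholds = np.arange(0.0, 1.0 + step, step)
    fpr, tpr = [], []
    for t in thresholds:
        pred = p > t
        tp = np.sum(pred & y)
        fp = np.sum(pred & ~y)
        fn = np.sum(~pred & y)
        tn = np.sum(~pred & ~y)
        tpr.append(tp / max(tp + fn, 1))     # true positive rate (y-axis)
        fpr.append(fp / max(fp + tn, 1))     # false positive rate (x-axis)
    best = int(np.argmin(np.hypot(np.asarray(fpr), 1.0 - np.asarray(tpr))))
    return thresholds[best]

# S3: binarize the probability mask with the chosen threshold (0.5 here)
# cloud_region = cloud_prob_mask > 0.5
```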
Beneficial effects: compared with the prior art, the invention is the first to extract cloud features in the encoder with a multi-branch asymmetric convolution module and a multi-scale feature fusion module. Compared with traditional cloud image segmentation algorithms, it is more accurate and more robust and needs no manual feature engineering; compared with cloud image segmentation based on convolutional neural networks, it is more accurate, and its multi-scale feature fusion remedies the loss of spatial information in the UNet network.
Drawings
FIG. 1 is a basic flow diagram of the present invention;
FIG. 2 is a diagram of a multi-branch asymmetric convolution module of the present invention;
FIG. 3 is a schematic structural diagram of the cloud image segmentation network using the multi-branch asymmetric convolution module and multi-scale feature fusion according to the present invention.
Detailed Description
The invention will be further elucidated with reference to the accompanying drawings.
As shown in FIG. 1, segmentation with the ground-based cloud image segmentation network based on the multi-branch asymmetric convolution module and multi-scale feature fusion includes the following steps:
S1: train the network: the residual structure, multi-branch asymmetric convolution module, and multi-scale feature fusion module in the encoder extract features, the extracted deep and shallow features serve as the input of the decoder, and the decoder restores the features obtained by fusing them to the same size as the input cloud image; the network is then trained with labeled samples (X_i, Y_i) to obtain the model weight parameters, where X_i is an M×N image, Y_i is the corresponding label, i = 1, 2, 3, ..., x, and x is the total number of training samples, here 5040 daytime and 621 nighttime cloud images;
S2: cloud image segmentation: crop the cloud image input to the network to M×N and load the trained model parameters. The pixels are first normalized with a softmax function to obtain a probability mask, the optimal threshold separating cloud and non-cloud regions is determined from the ROC curve, and each pixel is finally judged to be cloud or non-cloud;
The softmax function can be written as:

softmax(x_i) = e^(x_i) / Σ_j e^(x_j)

where x_i and x_j are the feature values of class i and class j, respectively.
The encoder is used for feature extraction, the extracted deep and shallow features serving as the input of the decoder, and comprises a residual structure, a multi-branch asymmetric convolution module, and a multi-scale feature fusion module;
the residual structure comprises a convolution module and a fusion attention convolution module. The convolution module applies convolution, normalization, and an activation function to the input feature map four times; the second convolution uses dilated convolutions with different dilation rates, and the module outputs the element-wise sum of the feature maps from the third and fourth convolutions;
the activation function is h_swish, which effectively avoids precision loss under quantization and can be written as:

h_swish(x) = x · ReLU6(x + 3) / 6
the fusion attention convolution module applies convolution, normalization, and an activation function to the input feature map three times; the second convolution uses dilated convolutions with different dilation rates, fusion attention is applied after the third convolution, and the module outputs the element-wise sum of the fusion attention result and the input feature map;
the fusion attention mechanism strengthens attention to information useful for cloud image segmentation. From an input feature map F ∈ R^(C×H×W), it derives a one-dimensional channel attention map F_C ∈ R^(C×1×1) and a two-dimensional spatial attention map F_S ∈ R^(1×H×W); the final output of the fusion attention module is:

F′ = F ⊗ F_C
F″ = F′ ⊗ F_S

where C, H, and W are the number of channels, height, and width of the input feature map, and ⊗ denotes pixel-level multiplication.
As shown in FIG. 2, the multi-branch asymmetric convolution module has five branches, each beginning with a 1×1 convolution that adjusts the channel count. The first branch then applies 1×7 and 7×1 asymmetric convolutions followed by a channel attention module; the second branch applies 1×5 and 5×1 asymmetric convolutions followed by a channel attention module; the third branch applies a further 1×1 convolution followed by a channel attention module; the fourth branch applies a channel attention module directly; finally, the five branches are concatenated along the channel dimension, as sketched below.
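The following PyTorch sketch illustrates this five-branch module; the branch width, the squeeze-and-excitation form of the channel attention, and the treatment of the fifth branch (left as the plain 1×1 output, since the text does not detail it) are assumptions:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    # squeeze-and-excitation style channel attention (an assumed form)
    def __init__(self, channels, ratio=8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // ratio, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // ratio, channels, 1), nn.Sigmoid(),
        )
    def forward(self, x):
        return x * self.gate(x)

def asym_pair(channels, k):
    # 1 x k followed by k x 1 asymmetric convolutions
    return nn.Sequential(
        nn.Conv2d(channels, channels, (1, k), padding=(0, k // 2)),
        nn.Conv2d(channels, channels, (k, 1), padding=(k // 2, 0)),
    )

class MultiBranchAsymConv(nn.Module):
    """Five branches, each opened by a 1x1 convolution that adjusts the
    channel count; the outputs are concatenated along the channel axis."""
    def __init__(self, in_channels, branch_channels):
        super().__init__()
        self.reduce = nn.ModuleList(
            [nn.Conv2d(in_channels, branch_channels, 1) for _ in range(5)])
        self.b1 = nn.Sequential(asym_pair(branch_channels, 7),
                                ChannelAttention(branch_channels))
        self.b2 = nn.Sequential(asym_pair(branch_channels, 5),
                                ChannelAttention(branch_channels))
        self.b3 = nn.Sequential(nn.Conv2d(branch_channels, branch_channels, 1),
                                ChannelAttention(branch_channels))
        self.b4 = ChannelAttention(branch_channels)
        self.b5 = nn.Identity()   # fifth branch: assumed plain 1x1 output

    def forward(self, x):
        xs = [r(x) for r in self.reduce]
        out = [self.b1(xs[0]), self.b2(xs[1]), self.b3(xs[2]),
               self.b4(xs[3]), self.b5(xs[4])]
        return torch.cat(out, dim=1)   # channel-wise splicing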
As shown in FIG. 3, the multi-scale feature fusion module cross-fuses the features of different depths extracted by the residual module and the multi-branch asymmetric convolution module, adjusting feature map sizes with upsampling and a depthwise separable convolution module so that features can be fused by channel concatenation;
the upsampling uses a stride of 2 and doubles the height and width of the feature map by repeating its rows and columns, and the depthwise separable convolution module first applies 3×3 depthwise separable convolutions with different strides to the input feature map, then adjusts the feature map size with a 1×1 convolution, as sketched below.
The decoder first fuses the deep and shallow features extracted by the encoder, then restores the feature map to the size of the input cloud image by convolution, assigns each pixel cloud and non-cloud probabilities with a softmax function, and finally judges whether each pixel belongs to cloud;
the softmax function can be expressed by the following equation
Figure BDA0003598849700000051
Wherein x is i And x j Characteristic values of class i and class j, respectively.
In this embodiment, feature extraction through the depth residual structure and the multi-branch asymmetric convolution module gives the network good generalization ability and extracts cloud features thoroughly, so the network segments clouds more accurately. The multi-scale feature fusion remedies the loss of spatial information in the UNet network and preserves the diversity of the cloud features used. The method needs no tedious manual feature engineering and is robust. Experimental results show that the proposed method performs better and benefits subsequent cloud image analysis.
Table 1 compares the experimental results of the invention with those of other algorithms based on convolutional neural networks, covering daytime, nighttime, and all-day cloud images:
TABLE 1 Comparison of the invention with other algorithms based on convolutional neural networks
[Table 1 is reproduced only as an image in the original publication.]
In conclusion, compared with cloud image segmentation algorithms based on convolutional neural networks, the invention achieves higher segmentation accuracy, obtains better segmentation results even when daytime and nighttime cloud image data are imbalanced, and benefits subsequent cloud image analysis.

Claims (9)

1. A ground-based cloud image segmentation network based on a multi-branch asymmetric convolution module and multi-scale feature fusion, characterized by comprising an encoder and a decoder;
the encoder is used for feature extraction, the extracted deep and shallow features serving as the input of the decoder, and comprises a depth residual structure, a multi-branch asymmetric convolution module, and a multi-scale feature fusion module;
the decoder is used for restoring the features obtained by fusing the deep and shallow features extracted by the encoder to the same size as the input cloud image, normalizing each pixel with a softmax function to obtain a probability mask, determining the optimal threshold separating cloud and non-cloud regions from the receiver operating characteristic (ROC) curve, and finally judging whether each pixel belongs to cloud.
2. The ground-based cloud image segmentation network based on the multi-branch asymmetric convolution module and multi-scale feature fusion, characterized in that: the residual structure comprises a convolution module and a fusion attention convolution module;
the convolution module applies convolution, normalization, and an activation function to the input feature map four times; the second convolution uses dilated convolutions with different dilation rates, and the module outputs the element-wise sum of the feature maps from the third and fourth convolutions;
the fusion attention convolution module applies convolution, normalization, and an activation function to the input feature map three times; the second convolution uses dilated convolutions with different dilation rates, fusion attention is applied after the third convolution, and the module outputs the element-wise sum of the fusion attention result and the input feature map.
3. The ground-based cloud image segmentation network based on the multi-branch asymmetric convolution module and multi-scale feature fusion, characterized in that: the activation function is h_swish, which effectively avoids precision loss under quantization and can be written as:

h_swish(x) = x · ReLU6(x + 3) / 6
4. The ground-based cloud image segmentation network of claim 2 based on the multi-branch asymmetric convolution module and multi-scale feature fusion, characterized in that: the fusion attention mechanism strengthens attention to information useful for cloud image segmentation; from an input feature map F ∈ R^(C×H×W), it derives a one-dimensional channel attention map F_C ∈ R^(C×1×1) and a two-dimensional spatial attention map F_S ∈ R^(1×H×W), and the final output of the fusion attention module is:

F′ = F ⊗ F_C
F″ = F′ ⊗ F_S

where C, H, and W are the number of channels, height, and width of the input feature map, and ⊗ denotes pixel-level multiplication.
5. The ground-based cloud image segmentation network based on the multi-branch asymmetric convolution module and multi-scale feature fusion, characterized in that: the multi-branch asymmetric convolution module comprises several branches; some branches use asymmetric convolution kernels and a channel attention module, and all branches are finally concatenated along the channel dimension.
6. The ground-based cloud image segmentation network based on the multi-branch asymmetric convolution module and multi-scale feature fusion, characterized in that: the channel attention mechanism first compresses the input feature map along the channel dimension and then restores it, enabling the network to strengthen the weights of useful channels.
7. The ground-based cloud image segmentation network based on the multi-branch asymmetric convolution module and multi-scale feature fusion, characterized in that: the multi-scale feature fusion module repeatedly cross-fuses the features of different depths extracted by the depth residual module and the multi-branch asymmetric convolution module, adjusting feature map sizes with upsampling and depthwise separable convolution for feature fusion;
the upsampling uses a stride of 2 and doubles the height and width of the feature map by repeating its rows and columns, and the depthwise separable convolution module first applies 3×3 depthwise separable convolutions with different strides to the input feature map, then adjusts the feature map size with a 1×1 convolution.
8. The ground-based cloud image segmentation network based on the multi-branch asymmetric convolution module and multi-scale feature fusion, characterized in that: the pixels are normalized with a softmax function to obtain a probability mask, the optimal threshold separating cloud and non-cloud regions is determined from the ROC curve, and each pixel is finally judged to be cloud or non-cloud;
the softmax function can be written as:

softmax(x_i) = e^(x_i) / Σ_j e^(x_j)

where x_i and x_j are the feature values of class i and class j, respectively.
9. The ground-based cloud image segmentation network based on the multi-branch asymmetric convolution module and multi-scale feature fusion, characterized in that segmentation comprises the following steps:
S1: train the cloud image segmentation network: use labeled samples (X_i, Y_i) to obtain trained model weight parameters, where X_i is an M×N image, Y_i is the corresponding label, i denotes the i-th sample, i = 1, 2, 3, ..., x, and x is the total number of training samples;
S2: load the trained model parameters. The threshold separating cloud and non-cloud pixels lies in the range 0 to 1; increase it gradually from 0 in steps of 0.01, plot the ROC curve with the false positive rate on the horizontal axis and the true positive rate on the vertical axis, and determine the segmentation threshold from the result; the threshold is finally set to 0.5;
S3: judge cloud and non-cloud regions of the output probability mask with the threshold determined in S2: a pixel whose probability exceeds 0.5 is judged a cloud region, otherwise a non-cloud region.
CN202210401156.5A 2022-05-18 2022-05-18 Ground-based cloud image segmentation network based on a multi-branch asymmetric convolution module and multi-scale feature fusion Pending CN114863099A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210401156.5A CN (en) Ground-based cloud image segmentation network based on a multi-branch asymmetric convolution module and multi-scale feature fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210401156.5A CN (en) Ground-based cloud image segmentation network based on a multi-branch asymmetric convolution module and multi-scale feature fusion

Publications (1)

Publication Number Publication Date
CN114863099A (en) 2022-08-05

Family

ID=82631094

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210401156.5A Pending CN114863099A (en) Ground-based cloud image segmentation network based on a multi-branch asymmetric convolution module and multi-scale feature fusion

Country Status (1)

Country Link
CN (1) CN114863099A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115131684A (en) * 2022-08-25 2022-09-30 成都国星宇航科技股份有限公司 Landslide identification method and device based on satellite data UNet network model
CN117456191A (en) * 2023-12-15 2024-01-26 武汉纺织大学 Semantic segmentation method based on three-branch network structure under complex environment
CN117456191B (en) * 2023-12-15 2024-03-08 武汉纺织大学 Semantic segmentation method based on three-branch network structure under complex environment

Legal Events

Date Code Title Description
PB01 Publication