CN116310581A - Semi-supervised change detection flood identification method - Google Patents


Info

Publication number
CN116310581A
CN116310581A (application CN202310320119.6A)
Authority
CN
China
Prior art keywords: flood, supervised, semi, random, network
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310320119.6A
Other languages
Chinese (zh)
Inventor
王国杰
林俊杰
魏锡坤
祝善友
徐永明
胡一凡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Application filed by Nanjing University of Information Science and Technology
Priority to CN202310320119.6A
Publication of CN116310581A
Legal status: Pending

Classifications

    • G06V 10/764: Image or video recognition using machine-learning classification, e.g. of video objects
    • G06N 3/02: Neural networks (G06N 3/04: architecture, e.g. interconnection topology)
    • G06N 3/088: Learning methods: non-supervised learning, e.g. competitive learning
    • G06V 10/774: Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V 10/82: Image or video recognition using neural networks
    • G06V 20/13: Scenes; terrestrial scenes; satellite images
    • Y02A 10/40: Technologies for adaptation to climate change at coastal zones and river basins: controlling or monitoring, e.g. of flood or hurricane; forecasting, e.g. risk assessment or mapping


Abstract

The invention discloses a semi-supervised change detection flood identification method comprising the following steps. S1: preprocess the acquired Sentinel-1 image pairs and produce flood-change water-body labels. S2: construct a semi-supervised change detection network. S3: compute the total loss as the sum of the unsupervised and supervised losses, use a deep learning framework to automatically compute the gradients of all parameters in the supervised learning network, optimize the parameters with an optimizer until the set condition is met, and save the network parameters of the supervised learning network with the highest validation accuracy. S4: input the pre- and post-flood Sentinel-1 images of the disaster area into the trained semi-supervised change detection network to obtain the flood-change water-body prediction. Because flood water-body features are learned from a large number of unlabeled image pairs, the supervised learning network can identify floods well with only a small amount of labeled data, achieving higher identification accuracy and other evaluation metrics while saving the large amount of time and resources consumed by manual labeling.

Description

Semi-supervised change detection flood identification method
Technical Field
The invention relates to the field of remote-sensing big data, and in particular to a semi-supervised change detection flood identification method.
Background
In recent years, flood disasters have caused enormous annual losses of life and property, and post-disaster repair and damage assessment place high demands on remote-sensing flood monitoring methods. Flood identification based on deep-learning change detection now outperforms traditional remote-sensing image change detection in disaster identification and has become the mainstream approach. However, the datasets required to train deep-learning flood monitoring models are costly and difficult to acquire, so training a neural network with good flood detection performance on a small-scale dataset is an urgent problem. Recently, semi-supervised learning has achieved impressive results in semantic segmentation on very small datasets: with only about 10% of the original data volume, it can match a conventional convolutional neural network trained with full supervision on the complete dataset. Through its distinctive training scheme, semi-supervised learning complements supervised training on a small number of samples with unsupervised training that learns image features from unlabeled images.
Disclosure of Invention
The invention aims to provide a semi-supervised change detection flood identification method that learns flood water-body features from a large number of unlabeled Sentinel-1 images, greatly reducing the dependence of flood identification on manually produced labels.
The technical scheme is as follows: the flood identification method of the invention comprises the following steps:
S1, acquire pre- and post-flood Sentinel-1 image pairs, preprocess them, and produce flood-change water-body labels; divide the labeled dataset proportionally into a training set, a validation set, and a test set;
S2, construct a semi-supervised change detection network comprising an unsupervised learning network and a supervised learning network;
S3, train the unsupervised learning network with an unsupervised loss function and the supervised learning network with a supervised loss function; compute the total loss as the sum of the unsupervised and supervised losses, use a deep learning framework to automatically compute the gradient of each parameter in the supervised learning network, and optimize the parameters with an optimizer until the set condition is met; save the network parameters of the supervised learning network with the highest validation accuracy, completing the training of the semi-supervised change detection network;
S4, input the pre- and post-flood Sentinel-1 images of the disaster area into the trained semi-supervised change detection network, performing forward propagation only through the supervised learning network, to obtain the flood-change water-body prediction.
In step S1, the acquired pre- and post-flood Sentinel-1 image pairs undergo radiometric calibration, geometric correction, and logarithmic-conversion preprocessing.
Further, in step S1, the flood-change water-body labels are produced as follows: by visual interpretation, exploiting the distinctive tone, shape, and texture of water bodies in synthetic aperture radar images, the bi-temporal remote-sensing images are annotated with labeling software to obtain a binary change map in which flooded areas are marked white and unchanged areas black.
Further, in step S2, the unsupervised learning network is implemented as follows:
S211, let the pre-flood Sentinel-1 image be $I_a$ and the post-flood Sentinel-1 image be $I_b$. In the unlabeled dataset, the pre-flood Sentinel-1 image of the i-th flood is denoted $x_a^i$ and the post-flood image $x_b^i$. Encoding $x_a^i$ and $x_b^i$ with the encoder yields the features $f_a^i$ and $f_b^i$.
S212, the difference module computes the absolute difference $d^i = |f_a^i - f_b^i|$ of the features $f_a^i$ and $f_b^i$, capturing the change in image characteristics before and after the flood; the absolute difference $d^i$ is then fed into a spatial pyramid pooling layer to obtain the multi-scale difference feature $d_s^i$.
S213, the difference feature $d_s^i$ undergoes perturbation processing, which comprises random noise processing, random mask processing, and random dropout processing.
The random noise processing: first generate a noise tensor $P$ with the same dimensions as $d_s^i$, its entries drawn from the uniform distribution on $(-0.3, 0.3)$; multiply $P$ and $d_s^i$ element-wise and add the product to $d_s^i$, giving the random-noise difference feature $d_n^i = d_s^i + P \odot d_s^i$.
The random mask processing: first draw a difference-feature threshold $T$ from the uniform distribution on $(0.6, 0.9)$ and generate a mask tensor $M = \mathbb{1}(\hat{d}_s^i < T)$, where $\hat{d}_s^i$ is the result of normalizing $d_s^i$ along the channel dimension; multiply $M$ and $d_s^i$ element-wise to obtain the random-mask difference feature $d_m^i = M \odot d_s^i$.
The random dropout processing: randomly set the element values of whole channels of $d_s^i$ to 0, giving the random-dropout difference feature $d_d^i$.
S214, the difference feature $d_s^i$, random-noise difference feature $d_n^i$, random-mask difference feature $d_m^i$, and random-dropout difference feature $d_d^i$ are each upsampled and dimension-reduced by the decoder, yielding the prediction $\hat{y}^i$, random-noise prediction $\hat{y}_n^i$, random-mask prediction $\hat{y}_m^i$, and random-dropout prediction $\hat{y}_d^i$.
Further, in step S2, the supervised learning network is implemented as follows: in the labeled dataset, the pre-flood Sentinel-1 image of the i-th flood is denoted $u_a^i$, the post-flood image $u_b^i$, and the flood-change water-body label of the i-th Sentinel-1 image pair $Y^i$. Writing the encoder-difference module-decoder pipeline as $F(\cdot)$ and the prediction as $\hat{y}_{sup}^i$, then:

$\hat{y}_{sup}^i = F(u_a^i, u_b^i) = \mathrm{Decoder}(\mathrm{SPP}(|\mathrm{Encoder}(u_a^i) - \mathrm{Encoder}(u_b^i)|_1))$

where $|\cdot|_1$ denotes the absolute difference.
In step S3, after the deep learning framework has automatically computed the gradient of each parameter in the supervised learning network and the parameter update is complete, the supervised learning network is switched to validation mode, the Sentinel-1 image pairs and labels of the validation set are read into it, the validation accuracy is computed, and then the next round of training begins.
Further, in step S3, the unsupervised loss function $L_{unsup}$ is expressed as:

$L_{unsup} = \sum_i \left[ \mathrm{MSE}(\hat{y}^i, \hat{y}_n^i) + \mathrm{MSE}(\hat{y}^i, \hat{y}_m^i) + \mathrm{MSE}(\hat{y}^i, \hat{y}_d^i) \right]$

where MSE() denotes the mean-square-error function.
The supervised loss function $L_{sup}$ is expressed as:

$L_{sup} = \sum_i \mathrm{CE}(\hat{y}_{sup}^i, Y^i)$

where CE() denotes the cross-entropy function.
Compared with the prior art, the invention has the following remarkable effects:
the invention constructs a semi-supervised change detection network comprising an unsupervised learning network and a supervised learning network, trains the unsupervised learning network through an unsupervised loss function, trains the supervised learning network through the supervised loss function, automatically calculates the gradient of each parameter in the supervised learning network by using a deep learning framework after obtaining total loss, and finally completes the training of the semi-supervised change detection network; through the characteristic of learning from a large number of non-label sentinel first-number images centering through the non-supervision learning network, the supervised learning network can realize a better flood identification effect by only needing little label data, obtain evaluation indexes such as higher identification precision and the like, and save a large number of time and resources consumed by manual labeling.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 (a) is a schematic diagram of an unsupervised training dataset;
FIG. 2 (b) is a diagram of a supervised training dataset;
FIG. 3 (a) is a schematic diagram of an unsupervised learning network architecture;
FIG. 3 (b) is a schematic diagram of a supervised learning network architecture;
FIG. 4 is a schematic diagram of flood-change water-body identification results.
Detailed Description
The invention is described in further detail below with reference to the drawings and the detailed description.
FIG. 1 shows the flow chart of the present invention; the specific implementation steps are as follows:
step one, flood water body data set manufacturing based on sentinel first synthetic aperture radar image
Step 11, acquiring a first synthetic aperture radar image of a front guard post and a rear guard post of the flood occurrence
And acquiring a first-sentinel synthetic aperture radar image of the flood inundation area in two times before and after the flood inundation. In order to acquire the required data set, the embodiment acquires the first-number image pairs of the guard before and after the flood season of the Poyang lake in 2017-2019, and totally 6 (3 pairs) of first-number images of the guard, and performs radiometric calibration, geometric correction and logarithmic conversion pretreatment on the first-number image pairs of the guard.
Step 12, producing a small number of flood-change water-body labels
Because the semi-supervised change detection network can exploit a large number of unlabeled image pairs to train the network's feature extraction capability, only a small number of flood-change water-body labels need to be produced in this embodiment. By visual interpretation, exploiting the distinctive tone, shape, and texture of water bodies in synthetic aperture radar images, the bi-temporal remote-sensing images are annotated with labeling software to obtain a binary change map in which flooded areas are marked white and unchanged areas black, as shown in FIG. 2(a). The Sentinel-1 images and flood-change water-body labels are cut into 256 × 256 tiles with a sliding window, yielding 3578 labeled change-water-body samples (3578 change-water-body labels with their 3578 bi-temporal Sentinel-1 image pairs, used in the supervised stage of semi-supervised training) and 6296 unlabeled bi-temporal Sentinel-1 image pairs (used in the unsupervised stage of semi-supervised training); the dataset is shown in FIG. 2(b).
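The sliding-window tiling described above can be sketched in a few lines. The use of `numpy` and the non-overlapping 256-pixel stride are assumptions for illustration; the patent does not state the stride.

```python
import numpy as np

def sliding_window_tiles(img, size=256, stride=256):
    """Cut an image (H x W [x C]) into size x size tiles with a sliding window.

    Tiles that would extend past the image border are skipped.
    """
    H, W = img.shape[:2]
    return [img[r:r + size, c:c + size]
            for r in range(0, H - size + 1, stride)
            for c in range(0, W - size + 1, stride)]
```

Applied to both images of a bi-temporal pair and to the label map with identical parameters, this produces co-registered sample triplets.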
Step 13, partitioning the data set
The dataset of 3578 change-water-body labels is divided into a training set, a validation set, and a test set in the ratio 8:1:1.
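A minimal sketch of the 8:1:1 split, assuming a shuffled proportional partition (the patent does not specify how the split is randomized, so the shuffle and seed here are illustrative assumptions):

```python
import random

def split_dataset(samples, ratios=(8, 1, 1), seed=0):
    """Shuffle the labeled samples and split them into train/val/test by ratio."""
    rng = random.Random(seed)
    samples = list(samples)
    rng.shuffle(samples)
    total = sum(ratios)
    n_train = len(samples) * ratios[0] // total
    n_val = len(samples) * ratios[1] // total
    train = samples[:n_train]
    val = samples[n_train:n_train + n_val]
    test = samples[n_train + n_val:]  # remainder absorbs rounding
    return train, val, test
```

With 3578 samples this yields 2862 / 357 / 359 samples for training, validation, and testing.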
Step two, constructing a semi-supervised change detection network
Step 21, constructing an unsupervised network
A1. Encoder
The encoder is a ResNet-50. In particular, all encoders in the semi-supervised change detection network share weights, i.e., the encoders are identical. Let the pre-flood Sentinel-1 image be $I_a$ and the post-flood image $I_b$; both $I_a$ and $I_b$ are 256 × 256. Specifically, as shown in FIG. 3(a), in the unlabeled dataset (6296 bi-temporal Sentinel-1 image pairs in this embodiment), the pre-flood Sentinel-1 image of the i-th flood is denoted $x_a^i$ and the post-flood image $x_b^i$. Encoding them with the encoder yields features of shape 64 × 64 × 2048, expressed by the formulas:

$f_a^i = \mathrm{Encoder}(x_a^i)$
$f_b^i = \mathrm{Encoder}(x_b^i)$

where Encoder() denotes the encoding operation.
A2. Difference module
The difference module first computes the absolute difference $d^i = |f_a^i - f_b^i|$ of $f_a^i$ and $f_b^i$ to obtain the change in image characteristics before and after the flood, and then feeds the absolute difference $d^i$ into the spatial pyramid pooling layer SPP to obtain the multi-scale difference feature $d_s^i$, expressed by the formula:

$d_s^i = \mathrm{SPP}(d^i) = \mathrm{SPP}(|f_a^i - f_b^i|)$

where SPP() denotes the spatial pyramid pooling operation.
A3. Perturbation processing
The perturbation processing consists of random noise, random mask, and random dropout processing.
Random noise processing generates a noise tensor $P$ with the same dimensions as $d_s^i$, its entries drawn from the uniform distribution on $(-0.3, 0.3)$; $P$ and $d_s^i$ are multiplied element-wise and the product is added to $d_s^i$, giving the random-noise difference feature $d_n^i$. The random noise processing is formulated as:

$d_n^i = d_s^i + P \odot d_s^i$

where $\odot$ denotes the element-wise (matrix dot-product) operation.
Random mask processing first draws a difference-feature threshold $T$ from the uniform distribution on $(0.6, 0.9)$ and generates a mask tensor

$M = \mathbb{1}(\hat{d}_s^i < T)$

where $\hat{d}_s^i$ is the result of normalizing $d_s^i$ along the channel dimension. $M$ and $d_s^i$ are then multiplied element-wise to obtain the random-mask difference feature, formulated as:

$d_m^i = M \odot d_s^i$

Random dropout processing randomly assigns the element values of whole channels of $d_s^i$ to 0, giving the random-dropout difference feature $d_d^i$.
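The three perturbations can be sketched on a toy `numpy` feature tensor. The noise range (-0.3, 0.3) and threshold range (0.6, 0.9) follow the text; the max-based channel normalization in `random_mask` and the 0.5 channel-drop probability are assumptions, since the patent does not spell them out.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def random_noise(d_s):
    """d_n = d_s + P * d_s with P ~ U(-0.3, 0.3), element-wise."""
    P = rng.uniform(-0.3, 0.3, size=d_s.shape)
    return d_s + P * d_s

def random_mask(d_s):
    """Zero positions whose channel-normalized response reaches a threshold
    T ~ U(0.6, 0.9). Max-based normalization is one plausible reading of the
    patent's 'channel dimension normalization'."""
    T = rng.uniform(0.6, 0.9)
    d_hat = d_s / (np.abs(d_s).max(axis=-1, keepdims=True) + 1e-8)
    M = (d_hat < T).astype(d_s.dtype)
    return M * d_s

def random_dropout(d_s, p=0.5):
    """Zero whole channels at random (the translation's 'random temporary
    back', i.e. channel dropout)."""
    keep = (rng.uniform(size=d_s.shape[-1]) >= p).astype(d_s.dtype)
    return d_s * keep
```

Each function returns a tensor of the same shape as its input, so all three branches can share the same decoder.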
A4. Decoder
The decoder is a transposed convolution layer. In particular, all decoders in the semi-supervised change detection network share weights, i.e., the decoders are identical. The decoder upsamples the 64 × 64 × 2048 difference feature $d_s^i$, random-noise difference feature $d_n^i$, random-mask difference feature $d_m^i$, and random-dropout difference feature $d_d^i$ while reducing their dimensionality, producing 256 × 256 predictions, expressed by the formulas:

$\hat{y}^i = \mathrm{Decoder}(d_s^i)$
$\hat{y}_n^i = \mathrm{Decoder}(d_n^i)$
$\hat{y}_m^i = \mathrm{Decoder}(d_m^i)$
$\hat{y}_d^i = \mathrm{Decoder}(d_d^i)$

where Decoder() denotes the decoding operation.
A5. Loss function
The unsupervised-learning partial loss $L_{unsup}$ is defined as:

$L_{unsup} = \sum_i \left[ \mathrm{MSE}(\hat{y}^i, \hat{y}_n^i) + \mathrm{MSE}(\hat{y}^i, \hat{y}_m^i) + \mathrm{MSE}(\hat{y}^i, \hat{y}_d^i) \right]$

where MSE() denotes the mean-square-error function.
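A toy `numpy` rendering of the consistency loss, under the assumption (consistent with S214 and the MSE pairing above) that each perturbed-branch prediction is compared against the clean prediction:

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two prediction maps."""
    return float(((a - b) ** 2).mean())

def unsup_loss(y, y_noise, y_mask, y_drop):
    """Consistency loss for one sample: each perturbed-branch prediction is
    pulled toward the clean prediction y."""
    return mse(y, y_noise) + mse(y, y_mask) + mse(y, y_drop)
```

Because the target is the network's own clean prediction, no label is needed, which is what lets the 6296 unlabeled pairs contribute to training.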
Step 22, constructing a supervised learning network
B1. Encoder-difference module-decoder
In the labeled dataset (in this embodiment, 3578 bi-temporal Sentinel-1 image pairs and their corresponding 3578 flood-change water-body labels), the pre-flood Sentinel-1 image of the i-th flood is denoted $u_a^i$, the post-flood image $u_b^i$, and the flood-change water-body label of the i-th Sentinel-1 image pair $Y^i$. Writing the encoder-difference module-decoder pipeline of the supervised learning network as $F(\cdot)$, the predicted flood-change water-body map of the i-th Sentinel-1 image pair is $\hat{y}_{sup}^i$, expressed by the formula:

$\hat{y}_{sup}^i = F(u_a^i, u_b^i) = \mathrm{Decoder}(\mathrm{SPP}(|\mathrm{Encoder}(u_a^i) - \mathrm{Encoder}(u_b^i)|_1))$

where $|\cdot|_1$ denotes the absolute difference.
B2. Loss function
The supervised learning network loss $L_{sup}$ adopts the cross-entropy loss function, expressed as:

$L_{sup} = \sum_i \mathrm{CE}(\hat{y}_{sup}^i, Y^i)$

where CE() denotes the cross-entropy function.
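A minimal pixel-wise binary cross-entropy in `numpy`, assuming the prediction is a change-probability map and the label is the binary change map (white = 1, black = 0); the clipping epsilon is an implementation detail added for numerical safety:

```python
import numpy as np

def binary_cross_entropy(pred, label, eps=1e-12):
    """Pixel-wise binary cross-entropy between the predicted change
    probability map and the binary flood-change label."""
    pred = np.clip(pred, eps, 1.0 - eps)  # avoid log(0)
    return float(-(label * np.log(pred)
                   + (1 - label) * np.log(1 - pred)).mean())
```

A confident correct prediction gives a loss near zero; a maximally uncertain prediction (0.5 everywhere) gives ln 2 per pixel.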
Step three, semi-supervised change detection network training
The training set of the labeled change-water-body dataset obtained in step one and the unsupervised dataset serve as training data for the semi-supervised change detection network constructed in step two. Specifically, the Sentinel-1 image pairs $(x_a^i, x_b^i)$ of the unsupervised dataset are first read into the unsupervised learning network of the semi-supervised change detection network (part of the unsupervised dataset is shown in FIG. 2(a)), and the unsupervised loss $L_{unsup}$ is computed. Then the Sentinel-1 image pairs and labels $(u_a^i, u_b^i, Y^i)$ of the training set containing change-water-body labels are read into the supervised learning network of the semi-supervised change detection network (part of the labeled training set is shown in FIG. 2(b)), and the supervised loss $L_{sup}$ is computed. The total loss $L$ is then calculated as

$L = L_{sup} + L_{unsup}$ (13)

After obtaining the total loss $L$, a deep learning framework automatically computes the gradients of the encoder and decoder parameters in the supervised and unsupervised learning networks, and an optimizer updates the parameters; the supervised learning network is then switched to validation mode, the Sentinel-1 image pairs and labels of the validation set are read in, the validation accuracy is computed, and the next round of training begins. After 100 training epochs, the network parameters of the supervised learning network from the epoch with the highest validation accuracy are saved, completing the training of the semi-supervised change detection network. Because the encoders of the supervised and unsupervised learning networks share parameters, as do the decoders, the adjustments made to the encoder and decoder parameters while computing the unsupervised loss $L_{unsup}$ on a large number of unlabeled Sentinel-1 image pairs persist in the encoder and decoder of the supervised learning network. Compared with a conventional supervised change detection network, the semi-supervised change detection network therefore requires far fewer manually annotated flood-change water-body labels during training and achieves better flood identification even when such labels are scarce.
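The checkpoint-on-best-validation-accuracy schedule can be outlined framework-agnostically; `run_epoch`, `validate`, and `save_params` are hypothetical caller-supplied callables standing in for the deep-learning-framework specifics:

```python
def train_semi_supervised(run_epoch, validate, save_params, epochs=100):
    """Skeleton of the training schedule: after each epoch's update on
    L = L_sup + L_unsup, validate the supervised branch and keep the
    parameters of the epoch with the highest validation accuracy."""
    best_acc = float("-inf")
    for epoch in range(epochs):
        run_epoch(epoch)       # forward + backward on the total loss L
        acc = validate(epoch)  # accuracy of supervised branch on val set
        if acc > best_acc:     # checkpoint only on improvement
            best_acc = acc
            save_params(epoch)
    return best_acc
```

Since weight sharing makes the unsupervised updates visible to the supervised branch, only the supervised branch's parameters need to be checkpointed.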
Step four, semi-supervised change detection network flood identification
The pre- and post-flood Sentinel-1 images of the flood disaster area are input into the trained network, with forward propagation performed only through the supervised learning network, to obtain the flood-change water-body prediction, as shown in FIG. 4. Experiments show that, training the semi-supervised change detection network with 50 labeled 256 × 256 Sentinel-1 image pairs and 6296 unlabeled Sentinel-1 image pairs, the identification accuracy of semi-supervised change detection flood mapping on the test dataset reaches 0.9792, and the IoU of the flood water-body change region reaches 0.8403.

Claims (7)

1. A semi-supervised change detection flood identification method, characterized by comprising the following steps:
S1, acquiring pre- and post-flood Sentinel-1 image pairs, preprocessing them, and producing flood-change water-body labels; dividing the labeled dataset proportionally into a training set, a validation set, and a test set;
S2, constructing a semi-supervised change detection network comprising an unsupervised learning network and a supervised learning network;
S3, training the unsupervised learning network with an unsupervised loss function and the supervised learning network with a supervised loss function; computing the total loss of the unsupervised and supervised losses, using a deep learning framework to automatically compute the gradient of each parameter in the supervised learning network, and optimizing the parameters with an optimizer until the set condition is met; saving the network parameters of the supervised learning network with the highest validation accuracy to complete the training of the semi-supervised change detection network;
S4, inputting the pre- and post-flood Sentinel-1 images of the disaster area into the trained semi-supervised change detection network, forward propagation being performed only through the supervised learning network, to obtain the flood-change water-body prediction.
2. The semi-supervised change detection flood identification method according to claim 1, wherein in step S1 the acquired pre- and post-flood Sentinel-1 image pairs undergo radiometric calibration, geometric correction, and logarithmic-conversion preprocessing.
3. The semi-supervised change detection flood identification method according to claim 1, wherein in step S1 the flood-change water-body labels are produced as follows: by visual interpretation, exploiting the distinctive tone, shape, and texture of water bodies in synthetic aperture radar images, the bi-temporal remote-sensing images are annotated with labeling software to obtain a binary change map in which flooded areas are marked white and unchanged areas black.
4. The semi-supervised change detection flood identification method according to claim 1, wherein in step S2, the unsupervised learning network is implemented as follows:
S211, setting the Sentinel-1 image before the flood as I_a and the Sentinel-1 image after the flood as I_b; the Sentinel-1 image before the i-th flood event in the unlabeled data set is denoted I_a^{u,i}, and the Sentinel-1 image after the i-th flood event is denoted I_b^{u,i}; after encoding by the encoder, I_a^{u,i} and I_b^{u,i} yield the features f_a^{u,i} and f_b^{u,i};
S212, calculating through the difference module the absolute difference d^{u,i} = |f_a^{u,i} - f_b^{u,i}| of the features f_a^{u,i} and f_b^{u,i}, thereby capturing the difference in image characteristics before and after the flood; the absolute difference d^{u,i} is then input into a spatial pyramid pooling layer to obtain the multi-scale difference feature d_s^{u,i}, which aggregates difference features at different scales;
S213, performing perturbation processing on the difference feature d_s^{u,i}, the perturbation processing comprising: random noise processing, random mask processing and random dropout processing;
the random noise processing is as follows: first generate a noise tensor P with the same dimensions as d_s^{u,i}, its elements following a uniform distribution on (-0.3, 0.3); multiply P and d_s^{u,i} element-wise and then add d_s^{u,i} to obtain the random-noise difference feature d_n^{u,i} = P ⊙ d_s^{u,i} + d_s^{u,i};
the random mask processing is as follows: first generate a difference feature threshold T following a uniform distribution on (0.6, 0.9), and generate a mask tensor M by thresholding the result of normalizing d_s^{u,i} along the channel dimension against T; multiply M and the difference feature d_s^{u,i} element-wise to obtain the random-mask difference feature d_m^{u,i} = M ⊙ d_s^{u,i};
the random dropout processing is as follows: randomly assign the element values of channels of the difference feature d_s^{u,i} to 0, obtaining the random-dropout difference feature d_r^{u,i};
S214, upsampling the difference feature d_s^{u,i}, the random-noise difference feature d_n^{u,i}, the random-mask difference feature d_m^{u,i} and the random-dropout difference feature d_r^{u,i} through the decoder while reducing their dimensionality, respectively, to obtain the prediction value y^{u,i}, the random-noise prediction value y_n^{u,i}, the random-mask prediction value y_m^{u,i} and the random-dropout prediction value y_r^{u,i}.
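The three perturbations of step S213 can be sketched in NumPy. The claim fixes only the U(-0.3, 0.3) noise range and the U(0.6, 0.9) threshold; the normalization scheme, the mask direction, and the channel-drop probability below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb(d, rng):
    """Sketch of the three S213 perturbations on a difference feature d of shape (C, H, W)."""
    # Random noise: P ~ U(-0.3, 0.3), element-wise multiply, then add d back.
    P = rng.uniform(-0.3, 0.3, size=d.shape)
    d_noise = P * d + d

    # Random mask: threshold T ~ U(0.6, 0.9); keep elements whose normalized
    # value stays below T (min-max normalization and mask direction assumed).
    T = rng.uniform(0.6, 0.9)
    d_norm = (d - d.min()) / (d.max() - d.min() + 1e-10)
    M = (d_norm < T).astype(d.dtype)
    d_mask = M * d

    # Random dropout: zero whole channels at random (drop rate 0.5 assumed).
    keep = (rng.uniform(size=(d.shape[0], 1, 1)) > 0.5).astype(d.dtype)
    d_drop = keep * d

    return d_noise, d_mask, d_drop

d = np.ones((4, 8, 8))
d_noise, d_mask, d_drop = perturb(d, rng)
```

Each perturbed copy keeps the original shape, so all three can be decoded by the same decoder as the unperturbed feature.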
5. The semi-supervised change detection flood identification method according to claim 1, wherein in step S2, the supervised learning network is implemented as follows: the Sentinel-1 image before the i-th flood event in the labeled data set is denoted I_a^{l,i}, the Sentinel-1 image after the i-th flood event is denoted I_b^{l,i}, and the flood change water body label corresponding to the i-th Sentinel-1 image pair is denoted Y^{l,i}; the encoder-difference module-decoder pipeline is denoted F(·) and the prediction value is denoted y^{l,i}; then:
y^{l,i} = F(I_a^{l,i}, I_b^{l,i})
where the difference module inside F(·) computes the absolute difference |f_a^{l,i} - f_b^{l,i}|_1 of the encoded features, with |·|_1 denoting the element-wise absolute difference.
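The encoder-difference module-decoder pipeline F(·) of claim 5 can be sketched with toy linear stand-ins. The patent's encoder and decoder are convolutional networks; E, the weight shapes, and the linear decoder below are assumptions purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(16, 8))   # toy encoder weights (feature dim 16, input dim 8)
V = rng.normal(size=(2, 16))   # toy decoder weights (2 output classes)

def E(x):
    """Toy encoder stand-in: a shared linear map with tanh (the patent uses a CNN)."""
    return np.tanh(W @ x)

def F(x_a, x_b):
    """Encoder -> difference module (|.|_1, element-wise absolute value) -> decoder."""
    d = np.abs(E(x_a) - E(x_b))    # the difference module of the claims
    return V @ d                   # toy decoder stand-in

x_a = rng.normal(size=(8, 5))      # 5 "pixels" of the pre-flood image
x_b = rng.normal(size=(8, 5))      # 5 "pixels" of the post-flood image
pred = F(x_a, x_b)
```

Because the difference module takes an absolute value, F is symmetric in its two inputs: change is detected regardless of acquisition order.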
6. The semi-supervised change detection flood identification method according to claim 1, wherein in step S3, after the deep learning framework has automatically calculated the gradient of each parameter in the supervised learning network and the parameter update is complete, the supervised learning network is switched to validation mode, the Sentinel-1 image pairs and labels of the validation set are read into the supervised learning network, the validation accuracy is calculated, and the next round of training is then performed.
7. The semi-supervised change detection flood identification method according to claim 1, wherein in step S3, the unsupervised loss function L_unsup is expressed as follows:
L_unsup = (1/N_u) Σ_i [ MSE(y^{u,i}, y_n^{u,i}) + MSE(y^{u,i}, y_m^{u,i}) + MSE(y^{u,i}, y_r^{u,i}) ]
where MSE(·) denotes the mean square error function and N_u is the number of unlabeled image pairs;
the supervised loss function L_sup is expressed as follows:
L_sup = (1/N_l) Σ_i CE(y^{l,i}, Y^{l,i})
where CE(·) denotes the cross entropy function and N_l is the number of labeled image pairs.
CN202310320119.6A 2023-03-29 2023-03-29 Semi-supervised change detection flood identification method Pending CN116310581A (en)


Publications (1)

Publication Number: CN116310581A · Publication Date: 2023-06-23

Family

ID=86837763



Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118036829A * 2024-04-11 2024-05-14 Nanjing University of Posts and Telecommunications Intelligent flood early warning coping method and system for digital city management
CN118036829B * 2024-04-11 2024-06-11 Nanjing University of Posts and Telecommunications Intelligent flood early warning coping method and system for digital city management

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113537177A (en) * 2021-09-16 2021-10-22 南京信息工程大学 Flood disaster monitoring and disaster situation analysis method based on visual Transformer
CN114241314A (en) * 2021-12-21 2022-03-25 天地信息网络研究院(安徽)有限公司 Remote sensing image building change detection model and algorithm based on CenterNet
CN115587964A (en) * 2022-08-22 2023-01-10 电子科技大学长三角研究院(湖州) Entropy screening-based pseudo label cross consistency change detection method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YASSINE OUALI et al.: "Semi-Supervised Semantic Segmentation with Cross-Consistency Training", CVPR 2020, pages 1-11 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination