CN116258877A - Land utilization scene similarity change detection method, device, medium and equipment - Google Patents

Land utilization scene similarity change detection method, device, medium and equipment

Info

Publication number
CN116258877A
CN116258877A (application CN202310506203.7A)
Authority
CN
China
Prior art keywords
similarity
threshold
change detection
network
adaptive
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310506203.7A
Other languages
Chinese (zh)
Inventor
周维勋 (Zhou Weixun)
刘京雷 (Liu Jinglei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology filed Critical Nanjing University of Information Science and Technology
Priority to CN202310506203.7A priority Critical patent/CN116258877A/en
Publication of CN116258877A publication Critical patent/CN116258877A/en
Pending legal-status Critical Current

Classifications

    • G06V 10/761: Proximity, similarity or dissimilarity measures (image or video pattern matching; proximity measures in feature spaces)
    • G06N 3/08: Learning methods for neural networks (computing arrangements based on biological models)
    • G06V 10/806: Fusion of extracted features (combining data at the sensor, preprocessing, feature extraction or classification level)
    • G06V 10/82: Image or video recognition or understanding using neural networks
    • G06V 20/10: Terrestrial scenes (scenes; scene-specific elements)

Abstract

The invention discloses a land utilization scene similarity change detection method, device, medium and equipment. The method comprises the following steps: acquiring two remote sensing images of the same region at different time phases to form an image pair; inputting the formed image pair into a constructed threshold adaptive similarity network and outputting the similarity of the image pair; and comparing the similarity of the image pair with an optimal similarity threshold determined from the threshold adaptive similarity network to obtain a land utilization scene similarity change detection result. Compared with existing classification-based change detection methods, the method obtains more accurate change detection results and effectively improves the accuracy of change detection.

Description

Land utilization scene similarity change detection method, device, medium and equipment
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a land use scene similarity change detection method, device, medium and equipment.
Background
Change detection is a technique that analyzes changes in the state of ground objects in an area by repeatedly observing that area at different times. In recent years, land resource change survey tasks have faced problems such as excessively large survey areas and complex terrain; applying change detection to these tasks can greatly improve working efficiency and accuracy. With the continuous development of remote sensing earth observation technology, the spatial resolution of remote sensing images has been developing from medium-low resolution toward high resolution. Unlike medium-low resolution images, high resolution remote sensing images provide more detailed information about ground objects and a rich data source for remote sensing image understanding, and have therefore become a new development direction for change detection. Change detection can be divided into binary change detection and category change detection, and classification-based change detection can be further divided into post-classification change detection and joint classification change detection. The post-classification method ignores temporal information and its accuracy depends heavily on the initial classification accuracy, so a classification error on either image affects the final change detection accuracy; for joint classification change detection, the biggest bottleneck is sample selection, and it is generally applicable only to scenes with a small number of samples. Therefore, developing an effective change detection method that does not depend on classification results or sample selection is a difficult problem to be solved in the field of remote sensing image processing.
The change detection effect for remote sensing images is influenced not only by the detection method but also by the image features obtained, so how to obtain image features with good expressive power is a key point of change detection technology. Before deep learning was applied, low-level features such as color, texture and shape were typically extracted directly by hand. Today, deep learning methods that extract high-level features with deep networks are widely used, and among them methods represented by convolutional neural networks have been widely applied in the field of image processing.
At present, classification-based change detection methods are affected by classification results and sample selection, and hand-crafted features cannot meet the requirement for high accuracy. Against this background, it is necessary to study a deep-learning-based land utilization change detection method that does not depend on classification accuracy.
Disclosure of Invention
In order to solve the defects in the prior art, the invention provides a land utilization scene similarity change detection method, a device, a medium and equipment, which can effectively improve the accuracy of change detection.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows:
in a first aspect, a method for detecting land use scene similarity change is provided, including: two remote sensing images of the same region and different time phases are acquired to form an image pair; inputting the formed image pairs into a constructed threshold self-adaptive similarity network, and outputting the similarity of the image pairs; and comparing the similarity of the image pairs with an optimal similarity threshold determined according to the threshold self-adaptive similarity network to obtain a land utilization scene similarity change detection result.
Further, the threshold adaptive similarity network is constructed based on a ResNet residual network and includes: a convolutional layer, a max pooling layer, four basic_block blocks, and an average pooling layer; the threshold adaptive similarity network comprises a shallow feature map fusion process and a deep feature map fusion process.

In the shallow feature map fusion process, convolution kernels of different sizes each perform one convolution operation, and the resulting feature maps are adjusted to a uniform size. Let the three resulting feature maps be F1, F2 and F3, each of size C×H×W, where C, H and W denote the channel, height and width of the feature map, respectively. Channel fusion is performed with the following formula:

F_cat = Cat(F1, F2, F3)

where Cat(·) is the fusion mode and the fused feature map F_cat has size 3C×H×W. A 1×1 convolution operation is then performed to reduce the channel dimension back to C, giving a feature map of size C×H×W. This feature map is then dot-multiplied (multiplication of corresponding position elements) with the initial feature map F, calculated by the following formula:

F_T1 = Conv_1×1(F_cat) ⊙ F

obtaining the feature map F_T1 of one time phase; the feature map F_T2 of the other time phase is calculated in the same way. The two are then spliced in the batch dimension, calculated by the following formula:

F_s = Cat_b(F_T1, F_T2)

where Cat_b(·) represents the splicing operation, giving the fused shallow feature map F_s.

In the deep feature map fusion process, the deep feature maps F_D1 and F_D2 extracted by the dual-branch deep network are directly subjected to differential enhancement to obtain the fused deep feature map, with the following formula:

F_d = |F_D1 − F_D2|

where |·| represents the absolute value, F_D1 and F_D2 are the deep feature maps extracted from the dual-branch deep network, and F_d is the differentially enhanced deep feature map.

For the shallow feature map F_s, dimension reduction and adaptive average pooling are applied in turn to obtain the corresponding one-dimensional feature v_s; for the deep feature map F_d, the channels are compressed directly to obtain the corresponding one-dimensional feature v_d. The shallow and deep feature maps can then be fused with the following formula:

v = v_s ⊕ v_d

where ⊕ is the fusion mode, representing element-by-element addition of corresponding positions of the feature maps.

The fused one-dimensional feature v is passed through a sigmoid function to obtain the final similarity of the image pair.
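The deep-branch difference and the final similarity computation described above can be sketched as follows. This is a minimal NumPy illustration, not the patent's implementation; in particular, reducing the fused feature to a scalar by taking its mean before the sigmoid is an assumption (the claim only states that the fused one-dimensional feature is passed through a sigmoid).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def deep_difference(fd1, fd2):
    # Differential enhancement: F_d = |F_D1 - F_D2| (element-wise absolute difference)
    return np.abs(fd1 - fd2)

def fuse_and_score(v_s, v_d):
    # Fuse the one-dimensional shallow and deep features by element-wise
    # addition, then map the result into (0, 1) with a sigmoid.
    v = v_s + v_d
    return float(sigmoid(v.mean()))  # mean reduction to a scalar is an assumption
```

Identical deep feature maps yield a zero difference map, and a zero fused feature yields a similarity of exactly 0.5, which is a quick sanity check on the construction.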
Further, the training method of the threshold adaptive similarity network comprises the following steps:

constructing a change detection scene image library, and randomly dividing it in proportion into a training set, a verification set and a test set;

training the threshold adaptive similarity network with the training set, calculating the training loss from the calculated similarities and the acquired labels, performing gradient back propagation according to the training loss to update the weight parameters, and repeating this process until convergence, finally obtaining the trained threshold adaptive similarity network;

wherein the training loss L is calculated with the binary cross entropy loss function:

L = −(1/N) Σᵢ [ yᵢ log sᵢ + (1 − yᵢ) log(1 − sᵢ) ]

where N represents the number of image pairs, yᵢ represents the label of image pair i (yᵢ is 0 if the pair corresponds to a changed scene, otherwise yᵢ is 1), and sᵢ represents the similarity of image pair i.
Further, the construction method of the change detection scene image library comprises the following steps:
firstly, selecting two remote sensing images of the same region and different time phases, and eliminating geometric errors and radiation errors in the images by adopting the same preprocessing mode; then the images of the two phases are cut into scene images with equal pixel sizes, and the two scene images of different phases at the same position form an image pair.
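The library construction described above can be sketched with a minimal NumPy example that cuts two co-registered phase images into equal-size tiles and pairs the tiles at the same position; the preprocessing that removes geometric and radiation errors is assumed to have been done already, and the function name is illustrative.

```python
import numpy as np

def make_scene_pairs(img_t1, img_t2, tile):
    """Cut two co-registered phase images (H x W x C arrays) into tile x tile
    scene images and pair the tiles at the same position."""
    assert img_t1.shape == img_t2.shape
    h, w = img_t1.shape[:2]
    pairs = []
    for r in range(0, h - tile + 1, tile):
        for c in range(0, w - tile + 1, tile):
            pairs.append((img_t1[r:r + tile, c:c + tile],
                          img_t2[r:r + tile, c:c + tile]))
    return pairs
```

For example, a pair of 256×256 images cut into 64-pixel tiles yields 16 image pairs, one per grid position.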
Further, the method for determining the optimal similarity threshold comprises the following steps:

after the similarities of the verification set are calculated with the trained threshold adaptive similarity network, the optimal similarity threshold is determined by a threshold adaptive method based on a similarity curve, specifically as follows:

firstly, a threshold range and an iteration step length are determined;

then an iterative loop is run over the verification-set similarities. Taking t as the current threshold, the similarities of the verification set are divided into a Changed class (similarity smaller than t) and an Unchanged class (similarity larger than t). Let the similarity means of the Changed and Unchanged classes be μ_c and μ_u respectively, the overall similarity mean be μ, and the probabilities of a similarity being divided into the Changed and Unchanged classes be p_c and p_u. The overall similarity mean can then be expressed as:

μ = p_c μ_c + p_u μ_u,  with  p_c + p_u = 1

The variance between different classes of scenes is expressed as:

σ_b² = p_c (μ_c − μ)² + p_u (μ_u − μ)²

The variances within the same class of scenes are expressed as:

σ_u² = (1/N_u) Σ_{sᵢ∈Unchanged} (sᵢ − μ_u)²
σ_c² = (1/N_c) Σ_{sᵢ∈Changed} (sᵢ − μ_c)²

where σ_u² represents the variance within the unchanged class, σ_c² represents the variance within the changed class, and N_u and N_c represent the number of similarities in the unchanged and changed classes, respectively;

the final variance ratio can be calculated by:

R(t) = σ_b² / (σ_u² + σ_c²)

Based on the variance ratios obtained for the different similarity thresholds, a graph, i.e. the similarity curve, is drawn, and the similarity threshold corresponding to the maximum variance ratio is found from the similarity curve as the optimal similarity threshold T*.
Further, comparing the similarity of the image pair with an optimal similarity threshold value determined according to a threshold value self-adaptive similarity network to obtain a land utilization scene similarity change detection result, wherein the detection result specifically comprises:
if the similarity of the image pairs is greater than the optimal similarity threshold, the two-phase image scenes reflected by the two remote sensing images in different phases of the same area are considered to be unchanged; otherwise, the change is considered to have occurred.
Further, the method also comprises accuracy evaluation, specifically:

firstly, the similarities of the image pairs in the test set are calculated with the trained threshold adaptive similarity network, and the real labels yᵢ of the image pairs are acquired;

then the predicted labels ŷᵢ are derived from the optimal similarity threshold and the calculated similarities of the image pairs in the test set, as follows:

ŷᵢ = 1 if sᵢ > T*, otherwise ŷᵢ = 0

where sᵢ represents the similarity of image pair i, T* is the optimal similarity threshold, and ŷᵢ is the predicted label of image pair i;

finally, the overall accuracy OA and the Kappa coefficient are calculated from the real labels yᵢ and the predicted labels ŷᵢ.
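The overall accuracy and Kappa computation can be illustrated with a small plain-Python sketch, using the patent's label convention (1 for an unchanged scene, 0 for a changed one); the function name is illustrative.

```python
def overall_accuracy_and_kappa(y_true, y_pred):
    """Overall accuracy OA and Cohen's Kappa for binary change labels."""
    n = len(y_true)
    oa = sum(t == p for t, p in zip(y_true, y_pred)) / n
    # Chance agreement p_e from the marginal frequencies of each label.
    p_t1 = sum(y_true) / n
    p_p1 = sum(y_pred) / n
    pe = p_t1 * p_p1 + (1 - p_t1) * (1 - p_p1)
    kappa = 1.0 if pe == 1 else (oa - pe) / (1 - pe)
    return oa, kappa
```

Kappa discounts the agreement expected by chance, so it is a stricter summary than OA when the changed and unchanged classes are imbalanced.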
In a second aspect, there is provided a land use scene similarity change detection device, including:
the data acquisition module is used for acquiring two remote sensing images of different phases in the same area to form an image pair;
the similarity calculation module is used for inputting the formed image pairs into a constructed threshold self-adaptive similarity network and outputting the similarity of the image pairs;
and the comparison module is used for comparing the similarity of the image pairs with an optimal similarity threshold value determined according to the threshold value self-adaptive similarity network to obtain a land utilization scene similarity change detection result.
In a third aspect, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the land use scenario similarity change detection method as described in the first aspect.
In a fourth aspect, there is provided a computer device comprising:
a memory for storing instructions;
a processor for executing the instructions so that the device performs operations implementing the land use scenario similarity change detection method according to the first aspect.
Compared with the prior art, the invention has the beneficial effects that: the invention forms an image pair by collecting two remote sensing images of the same area and different time phases; inputting the formed image pairs into a constructed threshold self-adaptive similarity network, and outputting the similarity of the image pairs; comparing the similarity of the image pairs with an optimal similarity threshold value determined according to a threshold value self-adaptive similarity network to obtain a land utilization scene similarity change detection result; compared with the existing classification-based change detection method, a more accurate change detection result can be obtained, and the accuracy of change detection is effectively improved.
Drawings
FIG. 1 is a schematic diagram of a threshold adaptive similarity network constructed in an embodiment of the present invention;
fig. 2 is a schematic diagram of feature extraction of a dual-phase image according to an embodiment of the present invention, where (a) represents a processing manner of a shallow feature map and (b) represents a processing manner of a deep feature map.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only for more clearly illustrating the technical aspects of the present invention, and are not intended to limit the scope of the present invention.
Embodiment one:
a land utilization scene similarity change detection method comprises the following steps: two remote sensing images of the same region and different time phases are acquired to form an image pair; inputting the formed image pairs into a constructed threshold self-adaptive similarity network, and outputting the similarity of the image pairs; and comparing the similarity of the image pairs with an optimal similarity threshold determined according to the threshold self-adaptive similarity network to obtain a land utilization scene similarity change detection result.
In the technical scheme for land use scene change detection based on feature similarity, a scene image library for change detection is constructed first; secondly, a threshold adaptive similarity network is constructed and trained with the training set; then the optimal similarity threshold is determined on the verification set with a threshold adaptive method; finally, change detection is performed on the test set with the trained threshold adaptive similarity network to obtain the change detection result.
As shown in fig. 1, to describe the technical solution of the present invention in detail, the embodiment flow is provided as follows:
Step 1: construct the change detection scene image library.

Firstly, large remote sensing images are cropped to a fixed size to obtain a scene image library, which is divided into three sub-image libraries: a training set D_train, a verification set D_val and a test set D_test, where the training set D_train is used to train the network, the verification set D_val is used to determine the optimal similarity threshold T*, and the test set D_test is used to evaluate the change detection results.

The specific construction method is as follows: firstly, two remote sensing images of the same region at different time phases are selected, and the geometric errors and radiation errors in the images are eliminated with the same preprocessing; then the images of the two time phases are cut into scene images of equal pixel size, and the two scene images of different time phases at the same position form an image pair; finally, all image pairs are divided in a 6:2:2 ratio into the training set D_train, verification set D_val and test set D_test.
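The 6:2:2 division can be sketched as follows; this is a plain-Python illustration, and the function name and fixed random seed are assumptions.

```python
import random

def split_pairs(pairs, ratios=(0.6, 0.2, 0.2), seed=0):
    """Randomly divide image pairs into D_train, D_val and D_test by ratio."""
    idx = list(range(len(pairs)))
    random.Random(seed).shuffle(idx)  # deterministic shuffle for reproducibility
    n_train = int(ratios[0] * len(pairs))
    n_val = int(ratios[1] * len(pairs))
    train = [pairs[i] for i in idx[:n_train]]
    val = [pairs[i] for i in idx[n_train:n_train + n_val]]
    test = [pairs[i] for i in idx[n_train + n_val:]]
    return train, val, test
```
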
Step 2: construct the threshold adaptive similarity network.

The threshold adaptive similarity network is a feature extraction, fusion and similarity calculation network designed on the basis of a ResNet residual network. As shown in fig. 2, it mainly performs feature extraction on the dual-phase images, fuses low-level and high-level features, and calculates the similarity. The network consists of a convolutional layer, a max pooling layer, four basic_block blocks (hereinafter referred to as block blocks) and an average pooling layer. Different ResNet residual networks contain different numbers of block blocks; features of different levels, namely the shallow feature maps and the deep feature maps, are extracted from different block blocks and are then processed and fused accordingly. This embodiment takes a ResNet residual network as an example, but the change detection scheme designed by the invention is not limited to the ResNet residual network, and other networks may be used in a specific implementation.

Different fusion modes are adopted for the features of different levels. The processing of the shallow feature map is shown in fig. 2(a), and the processing of the deep feature map is shown in fig. 2(b) (feature fusion in this embodiment uses the block4 and block5 layers, but the method is not limited to these two layers, and other layers may be used for feature fusion in a specific implementation). In the shallow feature map fusion process, convolution kernels of different sizes with C channels each first perform one convolution operation, and the resulting feature maps are adjusted to a uniform size; convolution kernels of different sizes acquire receptive fields of different sizes, perceive ground objects of different scales, and can therefore acquire more complete information. The three resulting feature maps F1, F2 and F3, each of size C×H×W, are first channel-fused, calculated by the following formula:

F_cat = Cat(F1, F2, F3)

where Cat(·) is the fusion mode, the fused feature map F_cat has size 3C×H×W, and C, H and W represent the channel, height and width of the feature map, respectively. A 1×1 convolution operation is then performed to reduce the channel dimension back to C, giving a feature map of size C×H×W. This feature map of size C×H×W is then dot-multiplied (multiplication of corresponding position elements) with the initial feature map F, calculated by the following formula:

F_T1 = Conv_1×1(F_cat) ⊙ F

obtaining the feature map F_T1 of one time phase; the feature map F_T2 of the other time phase is calculated in the same way, and the two are then spliced in the batch dimension, calculated by the following formula:

F_s = Cat_b(F_T1, F_T2)

where Cat_b(·) represents the splicing operation, giving the fused shallow feature map F_s.

In the deep feature map fusion process, the deep feature maps F_D1 and F_D2 extracted by the dual-branch deep network are directly subjected to differential enhancement to obtain the fused deep feature map, with the following formula:

F_d = |F_D1 − F_D2|

where |·| represents the absolute value, F_D1 and F_D2 are the deep feature maps extracted from the dual-branch deep network, and F_d is the differentially enhanced deep feature map.

After the deep and shallow feature maps are obtained, they cannot be fused directly; a processing operation is required first. For the shallow feature map F_s, dimension reduction and adaptive average pooling are applied in turn to obtain the corresponding one-dimensional feature v_s; for the deep feature map F_d, the channels are compressed directly to obtain the corresponding one-dimensional feature v_d. The shallow and deep feature maps can then be fused with the following formula:

v = v_s ⊕ v_d

where ⊕ is the fusion mode, representing element-by-element addition of corresponding positions of the feature maps. The similarity calculation part passes the fused one-dimensional feature v through a sigmoid function to obtain the final similarity of the image pair. By fusing the feature maps of the shallow and deep networks, the invention better mines the global and local information of the image and thus describes the image content more accurately; at the same time, the scale differences of the ground objects in the image are taken into account, and the constructed multi-scale threshold adaptive similarity network can obtain a more accurate feature expression.
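The shallow-branch computation described above (channel splicing of the three same-size maps, a 1×1 convolution back to C channels, an element-wise product with the initial map, and batch-dimension splicing of the two phases) can be sketched in NumPy; the random weights and the small shapes here are illustrative assumptions, not trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1x1(x, w):
    """A 1x1 convolution as a channel mix: x is (C_in, H, W), w is (C_out, C_in)."""
    return np.tensordot(w, x, axes=([1], [0]))  # -> (C_out, H, W)

def shallow_fuse(f1, f2, f3, f_init, w):
    """F_cat = Cat(F1, F2, F3) along channels (3C, H, W); reduce 3C -> C with
    a 1x1 convolution; dot-multiply with the initial feature map F."""
    f_cat = np.concatenate([f1, f2, f3], axis=0)  # (3C, H, W)
    reduced = conv1x1(f_cat, w)                   # (C, H, W)
    return reduced * f_init                       # element-wise product

C, H, W = 4, 8, 8
f1, f2, f3 = (rng.normal(size=(C, H, W)) for _ in range(3))
f_init_t1 = rng.normal(size=(C, H, W))
f_init_t2 = rng.normal(size=(C, H, W))
w = rng.normal(size=(C, 3 * C))
f_t1 = shallow_fuse(f1, f2, f3, f_init_t1, w)  # feature map of phase 1
f_t2 = shallow_fuse(f1, f2, f3, f_init_t2, w)  # phase 2, same operations
f_s = np.stack([f_t1, f_t2], axis=0)           # spliced in the batch dimension
```

The shapes make the bookkeeping explicit: the splice triples the channel count, the 1×1 convolution restores it, and batch splicing keeps the two phases side by side for the later one-dimensional fusion.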
Step 3: train the threshold adaptive similarity network with the training set D_train.

The training set D_train constructed in step 1 is used to train the threshold adaptive similarity network constructed in step 2; the training loss is calculated from the calculated similarities and the acquired labels, gradient back propagation is performed according to the loss to update the weight parameters, and this process is repeated until the model converges, finally yielding the trained model.

The loss function L adopted by the method is the binary cross entropy loss function:

L = −(1/N) Σᵢ [ yᵢ log sᵢ + (1 − yᵢ) log(1 − sᵢ) ]

where N represents the number of image pairs, yᵢ represents the label of image pair i (yᵢ is 0 if the pair corresponds to a changed scene, otherwise yᵢ is 1), and sᵢ represents the similarity of image pair i, calculated by the sigmoid function.
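The binary cross entropy loss above can be computed with a short plain-Python sketch; the small epsilon guard against log(0) is an assumption added for numerical safety.

```python
import math

def bce_loss(similarities, labels):
    """L = -(1/N) * sum_i [ y_i*log(s_i) + (1-y_i)*log(1-s_i) ],
    with y_i = 1 for an unchanged scene and y_i = 0 for a changed one."""
    n = len(similarities)
    eps = 1e-12  # guard against log(0)
    total = 0.0
    for s, y in zip(similarities, labels):
        total += y * math.log(s + eps) + (1 - y) * math.log(1 - s + eps)
    return -total / n
```

For instance, similarities (0.9, 0.1) against labels (1, 0) both contribute log 0.9, so the mean loss is −log 0.9 ≈ 0.105.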
Step 4: calculate the similarities of the verification set D_val with the threshold adaptive similarity network trained in step 3, and determine the optimal similarity threshold T* by the threshold adaptive method.

The verification set D_val obtained in step 1 is passed through the threshold adaptive similarity network trained in step 3 to obtain the image pair similarities of the verification set D_val, and the optimal similarity threshold T* is determined by the threshold adaptive method. The principle of threshold determination is as follows:

The invention uses the ratio of the variance between different classes of scenes to the variance within the same class of scenes to better reflect the optimal similarity threshold T*. According to the optimal threshold, the image pair similarities are finally divided into two classes, a changed class and an unchanged class. When the optimal threshold is reasonable, the variance between different classes of scenes is large, the variance within the same class of scenes is small, and the variance ratio is large, indicating that the two parts are well separated and the requirement is met; when the optimal threshold deviates, the variance between different classes of scenes is small, the variance within the same class of scenes is large, and the variance ratio is small, indicating that the two parts are poorly separated and the requirement is not met. When part of the changed class is wrongly divided into the unchanged class, or part of the unchanged class is wrongly divided into the changed class, the variance between different classes of scenes decreases and the variance within the same class of scenes increases. Therefore, the optimal similarity threshold can be selected from a similarity curve.
The specific method comprises the following steps: firstly, a threshold range of 0-1 is determined, with an iteration step length of 0.0001; then an iterative loop is performed over the similarities of the verification set obtained in step 3. Taking $t$ as the current threshold, the verification-set similarities are divided into a changed class Changed (similarity less than $t$) and an unchanged class Unchanged (similarity greater than $t$). Assume that the mean similarities of the two classes are $\mu_c$ and $\mu_u$, the overall similarity mean is $\mu$, and the probabilities of a similarity being divided into the Changed and Unchanged classes are $p_c$ and $p_u$. The overall similarity mean can then be expressed as:

$$\mu = p_c\mu_c + p_u\mu_u$$

The variance between scenes of different classes (between-class variance) is expressed as:

$$\sigma_b^2 = p_c(\mu_c-\mu)^2 + p_u(\mu_u-\mu)^2$$

The variance among scenes of the same class (within-class variance) is expressed as:

$$\sigma_c^2 = \frac{1}{N_c}\sum_{i=1}^{N_c}(s_i-\mu_c)^2,\qquad \sigma_u^2 = \frac{1}{N_u}\sum_{i=1}^{N_u}(s_i-\mu_u)^2$$

where $\sigma_u^2$ represents the variance within the unchanged class, $\sigma_c^2$ represents the variance within the changed class, $s_i$ denotes a similarity value, and $N_u$ and $N_c$ respectively represent the numbers of similarities in the unchanged and changed classes.

The final variance ratio can be calculated by:

$$R(t) = \frac{\sigma_b^2}{\sigma_c^2 + \sigma_u^2}$$

A curve of the variance ratio against the different similarity thresholds, namely the similarity curve, is drawn from the calculated values, and the similarity threshold corresponding to the maximum variance ratio is found from the similarity curve and taken as the optimal similarity threshold $T^*$.
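The variance-ratio search described above (an Otsu-style procedure applied to the one-dimensional similarity scores) can be sketched in NumPy as follows. This is a minimal sketch reconstructed from the description: the function name is ours, and the exact form of the variance ratio (between-class variance over the sum of the two within-class variances) is an assumption consistent with the surrounding text.

```python
import numpy as np

def best_similarity_threshold(sims, step=0.0001):
    """Scan thresholds in (0, 1) and return the one maximizing the ratio of
    between-class variance to total within-class variance (Otsu-style)."""
    best_t, best_ratio = 0.0, -np.inf
    for t in np.arange(step, 1.0, step):
        changed = sims[sims < t]      # Changed class: similarity below t
        unchanged = sims[sims >= t]   # Unchanged class: similarity above t
        if changed.size == 0 or unchanged.size == 0:
            continue                  # both classes must be non-empty
        p_c = changed.size / sims.size
        p_u = unchanged.size / sims.size
        mu_c, mu_u = changed.mean(), unchanged.mean()
        mu = p_c * mu_c + p_u * mu_u                              # overall mean
        var_b = p_c * (mu_c - mu) ** 2 + p_u * (mu_u - mu) ** 2   # between-class
        var_w = changed.var() + unchanged.var()                    # within-class
        if var_w == 0:
            continue
        ratio = var_b / var_w
        if ratio > best_ratio:
            best_t, best_ratio = t, ratio
        # collecting (t, ratio) pairs here would give the similarity curve
    return best_t
```

On a bimodal set of similarity scores, the returned threshold falls in the gap between the two modes, which is exactly where the between-class variance peaks and the within-class variances bottom out.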
Step 5: change detection is performed on the test set using the threshold self-adaptive similarity network obtained by training in step 3, and the network output is compared with the optimal similarity threshold $T^*$ determined in step 4. If the similarity is greater than the threshold, the two-phase image scene is considered unchanged; otherwise, a change is considered to have occurred. The final change detection result is obtained and its accuracy is evaluated.
Step 6: change detection is performed on the test set according to the optimal similarity threshold $T^*$ determined in step 4 and the threshold self-adaptive similarity network trained in step 3, finally obtaining the change detection result. The specific operation is as follows: firstly, the similarity-calculation part of the threshold self-adaptive similarity network trained in step 3 is used to calculate the image pair similarities of the test set, and the true labels $y$ of the image pairs are obtained; then the predicted labels $\hat{y}$ are derived from the optimal similarity threshold $T^*$ determined in step 4 and the test-set similarities calculated above, with the specific method as follows:

$$\hat{y}_i = \begin{cases}1, & s_i > T^*\\ 0, & s_i \le T^*\end{cases}$$

where $s_i$ represents the similarity of image pair $i$, $T^*$ is the optimal similarity threshold, and $\hat{y}_i$ is the predicted label of image pair $i$.

Finally, the overall accuracy OA and the Kappa coefficient are calculated according to the true labels $y$ and the predicted labels $\hat{y}$, and accuracy evaluation is performed through them.
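The label prediction and the OA/Kappa evaluation of this step can be sketched as follows; the function names are ours, and `kappa` is the standard Cohen's kappa computed directly from the binary confusion-matrix marginals rather than via any particular library.

```python
import numpy as np

def predict_labels(sims, t_star):
    """Label 1 (unchanged) when similarity exceeds the optimal threshold,
    otherwise label 0 (changed), matching the convention y=0 for change."""
    return (sims > t_star).astype(int)

def overall_accuracy(y_true, y_pred):
    """OA: fraction of image pairs whose predicted label matches the truth."""
    return float(np.mean(y_true == y_pred))

def kappa(y_true, y_pred):
    """Cohen's kappa: agreement corrected for chance agreement."""
    po = np.mean(y_true == y_pred)  # observed agreement
    # expected agreement from the marginal frequencies of each label
    pe = sum(np.mean(y_true == c) * np.mean(y_pred == c) for c in (0, 1))
    return float((po - pe) / (1 - pe)) if pe != 1 else 1.0
```

For example, with truth `[0, 0, 1, 1]` and prediction `[0, 1, 1, 1]`, OA is 0.75 while kappa is 0.5, reflecting that part of the agreement would be expected by chance.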
Embodiment two:
based on the land use scene similarity change detection method described in the first embodiment, the present embodiment provides a land use scene similarity change detection device, including: the data acquisition module is used for acquiring two remote sensing images of different phases in the same area to form an image pair; the similarity calculation module is used for inputting the formed image pairs into a constructed threshold self-adaptive similarity network and outputting the similarity of the image pairs; and the comparison module is used for comparing the similarity of the image pairs with an optimal similarity threshold value determined according to the threshold value self-adaptive similarity network to obtain a land utilization scene similarity change detection result.
Embodiment three:
based on the land use scene similarity change detection method according to the first embodiment, the present embodiment provides a computer-readable storage medium having a computer program stored thereon, which when executed by a processor, implements the land use scene similarity change detection method according to the first embodiment.
Embodiment four:
based on the land use scene similarity change detection method described in the first embodiment, the present embodiment provides a computer device, including: a memory for storing instructions; and a processor, configured to execute the instructions, and cause the device to perform operations for implementing the land use scene similarity change detection method according to the first embodiment.
The foregoing is merely a preferred embodiment of the present invention, and it should be noted that modifications and variations could be made by those skilled in the art without departing from the technical principles of the present invention, and such modifications and variations should also be regarded as being within the scope of the invention.

Claims (10)

1. The land utilization scene similarity change detection method is characterized by comprising the following steps of:
two remote sensing images of the same region and different time phases are acquired to form an image pair;
inputting the formed image pairs into a constructed threshold self-adaptive similarity network, and outputting the similarity of the image pairs;
and comparing the similarity of the image pairs with an optimal similarity threshold determined according to the threshold self-adaptive similarity network to obtain a land utilization scene similarity change detection result.
2. The land use scene similarity change detection method according to claim 1, wherein the threshold adaptive similarity network is constructed based on a ResNet residual network, comprising: a convolutional layer, a max pooling layer, four basic_block blocks, and an average pooling layer;
the threshold self-adaptive similarity network comprises a shallow feature map fusion process and a deep feature map fusion process;
in the process of shallow feature map fusion, one convolution operation is performed with convolution kernels of different sizes and the obtained feature maps are adjusted to a uniform size; assuming the three obtained feature maps are $F_1$, $F_2$ and $F_3$, all of size $C\times H\times W$, channel fusion is performed using the following formula:

$$F_{cat} = Concat(F_1, F_2, F_3)$$

where $Concat$ is the fusion mode and $F_{cat}$ is the feature map after fusion, of size $3C\times H\times W$, with $C$, $H$ and $W$ respectively representing the channel, height and width of the feature map; a $1\times 1$ convolution operation is then performed to reduce the channel dimension back to $C$, giving a feature map $F'$ of size $C\times H\times W$; the obtained $F'$ is then point-multiplied with the initial feature map $F$, i.e. corresponding position elements are multiplied, calculated by the following formula:

$$F_{t_1} = F' \odot F$$

obtaining the feature map $F_{t_1}$ of one time phase; the feature map $F_{t_2}$ of the other time phase is obtained by the same calculation, and the two are then spliced in the batch dimension, calculated by:

$$F_s = Concat(F_{t_1}, F_{t_2})$$

where $Concat(\cdot)$ represents the splicing operation and $F_s$ is the fused shallow feature map;
in the process of deep feature map fusion, the deep feature maps $D_1$ and $D_2$ extracted by the double-branch deep network are directly subjected to difference enhancement to obtain the fused deep feature map, with the following formula:

$$D = \lvert D_1 - D_2 \rvert$$

where $\lvert\cdot\rvert$ represents the absolute value, $D_1$ and $D_2$ are the deep feature maps extracted by the double-branch deep network, and $D$ is the deep feature map after difference enhancement;

for the shallow feature map $F_s$, dimension reduction and adaptive average pooling operations are applied in turn to obtain the corresponding one-dimensional feature map $f_s$; for the deep feature map $D$, the channels are compressed directly to obtain the corresponding one-dimensional feature map $f_d$; the shallow and deep feature maps can then be fused using the following formula:

$$f = f_s \oplus f_d$$

where $\oplus$ is the fusion mode, representing element-by-element addition of the corresponding positions of the feature maps;

the fused one-dimensional feature map $f$ is passed through a sigmoid function to obtain the similarity of the final image pair.
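The fusion pipeline recited in this claim can be sketched in NumPy as follows. This is an illustrative sketch, not the claimed implementation: the array shapes are arbitrary, and a random channel-mixing matrix stands in for the learned $1\times 1$ convolution.

```python
import numpy as np

rng = np.random.default_rng(0)
C, H, W = 4, 8, 8

# three multi-scale shallow feature maps, resized to a common C x H x W
f1, f2, f3 = (rng.random((C, H, W)) for _ in range(3))
f_cat = np.concatenate([f1, f2, f3], axis=0)   # channel fusion: 3C x H x W

# 1x1 convolution stand-in: a learned 3C -> C channel mixing
w = rng.random((C, 3 * C))
f_red = np.einsum('oc,chw->ohw', w, f_cat)     # back to C x H x W

f_init = rng.random((C, H, W))                 # initial feature map of this phase
f_t1 = f_red * f_init                          # point multiplication, element-wise

# deep branch: difference enhancement of the two phases' deep maps
d1, d2 = rng.random((C, H, W)), rng.random((C, H, W))
d = np.abs(d1 - d2)

# pool both branches to one-dimensional vectors, fuse by element-wise addition
f_shallow_vec = f_t1.mean(axis=(1, 2))         # adaptive average pooling to C values
f_deep_vec = d.mean(axis=(1, 2))               # channel compression to C values
fused = f_shallow_vec + f_deep_vec

similarity = 1 / (1 + np.exp(-fused.mean()))   # sigmoid on a scalar summary
```

Collapsing the fused vector to a scalar before the sigmoid is our simplification; in the claimed network the mapping from the fused feature to the similarity score is learned.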
3. The land use scene similarity change detection method according to claim 2, wherein the training method of the threshold adaptive similarity network comprises:
constructing a change detection scene image library, and randomly dividing a training set, a verification set and a test set from the change detection scene image library according to a proportion;
training the threshold self-adaptive similarity network by adopting a training set, calculating training loss according to the calculated similarity and the acquired label, carrying out gradient back propagation according to the training loss, updating weight parameters, repeating the process until convergence, and finally obtaining the trained threshold self-adaptive similarity network;
wherein the training loss $L$ is obtained through the binary cross-entropy loss function:

$$L = -\frac{1}{N}\sum_{i=1}^{N}\left[y_i\log s_i + (1-y_i)\log(1-s_i)\right]$$

where $N$ represents the number of image pairs, $y_i$ represents the label corresponding to image pair $i$ ($y_i$ is 0 if the image pair is a changed scene, otherwise $y_i$ is 1), and $s_i$ represents the similarity of image pair $i$.
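The binary cross-entropy loss of this claim can be written directly in NumPy; the clipping constant `eps` is an assumption added for numerical safety and is not part of the claimed formula.

```python
import numpy as np

def bce_loss(sims, labels, eps=1e-7):
    """Binary cross-entropy over image pairs: labels are 1 for unchanged
    pairs and 0 for changed pairs; sims are network outputs in (0, 1)."""
    s = np.clip(sims, eps, 1 - eps)  # avoid log(0)
    return float(-np.mean(labels * np.log(s) + (1 - labels) * np.log(1 - s)))
```

As expected of a proper loss, it decreases as the predicted similarity of an unchanged pair approaches 1 (and as that of a changed pair approaches 0).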
4. The land use scene similarity change detection method according to claim 3, wherein the construction method of the change detection scene image library comprises:
firstly, selecting two remote sensing images of the same region and different time phases, and eliminating geometric errors and radiation errors in the images by adopting the same preprocessing mode; then the images of the two phases are cut into scene images with equal pixel sizes, and the two scene images of different phases at the same position form an image pair.
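Cutting the two co-registered phase images into equally sized scene images and pairing the tiles at the same position, as described in this claim, can be sketched as follows; the 256-pixel tile size and the function name are illustrative assumptions, as the claim does not fix a tile size.

```python
import numpy as np

def make_image_pairs(img_t1, img_t2, tile=256):
    """Cut two co-registered H x W x B images into tile x tile scene images
    and pair the tiles at the same position across the two phases."""
    assert img_t1.shape == img_t2.shape, "phases must be co-registered"
    h, w = img_t1.shape[:2]
    pairs = []
    for r in range(0, h - tile + 1, tile):
        for c in range(0, w - tile + 1, tile):
            pairs.append((img_t1[r:r + tile, c:c + tile],
                          img_t2[r:r + tile, c:c + tile]))
    return pairs
```

A 512 x 512 image pair would yield four 256 x 256 scene pairs under these assumptions; any border strip narrower than one tile is simply dropped.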
5. The land use scene similarity change detection method according to claim 3, wherein the determination method of the optimal similarity threshold value comprises:
after calculating the similarity of the verification set by using the trained threshold self-adaptive similarity network, the optimal similarity threshold is determined by a threshold self-adaptive method based on the similarity curve, specifically:
firstly, determining a threshold range and an iteration step length;
then, an iterative loop is carried out over the similarities of the verification set; taking $t$ as the current threshold, the verification-set similarities are divided into a changed class Changed (similarity less than $t$) and an unchanged class Unchanged (similarity greater than $t$); assuming that the mean similarities of the Changed and Unchanged classes are respectively $\mu_c$ and $\mu_u$, the overall similarity mean is $\mu$, and the probabilities of a similarity being divided into the Changed and Unchanged classes are $p_c$ and $p_u$, the overall similarity mean can be expressed as:

$$\mu = p_c\mu_c + p_u\mu_u$$

the variance between scenes of different classes is expressed as:

$$\sigma_b^2 = p_c(\mu_c-\mu)^2 + p_u(\mu_u-\mu)^2$$

the variance among scenes of the same class is expressed as:

$$\sigma_c^2 = \frac{1}{N_c}\sum_{i=1}^{N_c}(s_i-\mu_c)^2,\qquad \sigma_u^2 = \frac{1}{N_u}\sum_{i=1}^{N_u}(s_i-\mu_u)^2$$

where $\sigma_u^2$ represents the variance within the unchanged class, $\sigma_c^2$ represents the variance within the changed class, $s_i$ denotes a similarity value, and $N_u$ and $N_c$ respectively represent the numbers of similarities in the unchanged and changed classes;

the final variance ratio can be calculated by:

$$R(t) = \frac{\sigma_b^2}{\sigma_c^2 + \sigma_u^2}$$

curves corresponding to the different similarity thresholds, namely the similarity curve, are drawn according to the calculated variance ratios, and the similarity threshold corresponding to the maximum variance ratio is found from the similarity curve as the optimal similarity threshold $T^*$.
6. The land use scene similarity change detection method according to claim 3, wherein the similarity of the image pair is compared with an optimal similarity threshold determined according to a threshold adaptive similarity network to obtain a land use scene similarity change detection result, specifically:
if the similarity of the image pairs is greater than the optimal similarity threshold, the two-phase image scenes reflected by the two remote sensing images in different phases of the same area are considered to be unchanged; otherwise, the change is considered to have occurred.
7. The land use scene similarity change detection method according to claim 1, further comprising precision evaluation, specifically:
firstly, the similarities of the image pairs in the test set are calculated by using the trained threshold self-adaptive similarity network, and the true labels $y$ of the image pairs are obtained;

then, the predicted labels $\hat{y}$ are deduced by using the optimal similarity threshold and the calculated similarities of the image pairs in the test set, with the specific method as follows:

$$\hat{y}_i = \begin{cases}1, & s_i > T^*\\ 0, & s_i \le T^*\end{cases}$$

where $s_i$ represents the similarity of image pair $i$, $T^*$ is the optimal similarity threshold, and $\hat{y}_i$ is the predicted label of image pair $i$;

finally, the overall accuracy OA and Kappa coefficients are calculated according to the true labels $y$ and the predicted labels $\hat{y}$.
8. The land use scene similarity change detection device is characterized by comprising:
the data acquisition module is used for acquiring two remote sensing images of different phases in the same area to form an image pair;
the similarity calculation module is used for inputting the formed image pairs into a constructed threshold self-adaptive similarity network and outputting the similarity of the image pairs;
and the comparison module is used for comparing the similarity of the image pairs with an optimal similarity threshold value determined according to the threshold value self-adaptive similarity network to obtain a land utilization scene similarity change detection result.
9. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the land use scenario similarity change detection method according to any one of claims 1 to 7.
10. A computer device, comprising:
a memory for storing instructions;
a processor, configured to execute the instructions, and cause the device to perform operations for implementing the land use scenario similarity change detection method according to any one of claims 1 to 7.
CN202310506203.7A 2023-05-08 2023-05-08 Land utilization scene similarity change detection method, device, medium and equipment Pending CN116258877A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310506203.7A CN116258877A (en) 2023-05-08 2023-05-08 Land utilization scene similarity change detection method, device, medium and equipment


Publications (1)

Publication Number Publication Date
CN116258877A true CN116258877A (en) 2023-06-13

Family

ID=86688225

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310506203.7A Pending CN116258877A (en) 2023-05-08 2023-05-08 Land utilization scene similarity change detection method, device, medium and equipment

Country Status (1)

Country Link
CN (1) CN116258877A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104636118A (en) * 2013-11-10 2015-05-20 航天信息股份有限公司 QR two-dimensional code self-adaptation binarization processing method and device based on light balance
CN110853009A (en) * 2019-11-11 2020-02-28 北京端点医药研究开发有限公司 Retina pathology image analysis system based on machine learning
CN111582271A (en) * 2020-04-29 2020-08-25 安徽国钜工程机械科技有限公司 Railway tunnel internal disease detection method and device based on geological radar
CN113901900A (en) * 2021-09-29 2022-01-07 西安电子科技大学 Unsupervised change detection method and system for homologous or heterologous remote sensing image


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
IONUT COSMIN DUTA等: "Pyramidal Convolution: Rethinking Convolutional Neural Networks for Visual Recognition", 《ARXIV:2006.11538V1》, pages 1 - 16 *
何一凡等: "多尺度卷积神经网络的图像超分辨率重建算法", 《厦门理工学院学报》, vol. 27, no. 5, pages 1 - 6 *
黄宇鸿等: "高分辨率遥感影像场景变化检测的相似度方法", 《测绘通报》, no. 8, pages 48 - 53 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117576574A (en) * 2024-01-19 2024-02-20 湖北工业大学 Electric power facility ground feature change detection method and device, electronic equipment and medium
CN117576574B (en) * 2024-01-19 2024-04-05 湖北工业大学 Electric power facility ground feature change detection method and device, electronic equipment and medium

Similar Documents

Publication Publication Date Title
US11556797B2 (en) Systems and methods for polygon object annotation and a method of training an object annotation system
Kato et al. Unsupervised parallel image classification using Markovian models
CN109740588B (en) X-ray picture contraband positioning method based on weak supervision and deep response redistribution
CN108764006B (en) SAR image target detection method based on deep reinforcement learning
CN108229347B (en) Method and apparatus for deep replacement of quasi-Gibbs structure sampling for human recognition
Descombes et al. Estimating Gaussian Markov random field parameters in a nonstationary framework: application to remote sensing imaging
CN112233129B (en) Deep learning-based parallel multi-scale attention mechanism semantic segmentation method and device
CN113988147B (en) Multi-label classification method and device for remote sensing image scene based on graph network, and multi-label retrieval method and device
US11367206B2 (en) Edge-guided ranking loss for monocular depth prediction
CN116258877A (en) Land utilization scene similarity change detection method, device, medium and equipment
CN117131348B (en) Data quality analysis method and system based on differential convolution characteristics
CN116563285A (en) Focus characteristic identifying and dividing method and system based on full neural network
CN114998630B (en) Ground-to-air image registration method from coarse to fine
CN116823782A (en) Reference-free image quality evaluation method based on graph convolution and multi-scale features
CN116597275A (en) High-speed moving target recognition method based on data enhancement
Li et al. A new algorithm of vehicle license plate location based on convolutional neural network
CN114091628B (en) Three-dimensional point cloud up-sampling method and system based on double branch network
CN115527050A (en) Image feature matching method, computer device and readable storage medium
CN112507826B (en) End-to-end ecological variation monitoring method, terminal, computer equipment and medium
CN111652246B (en) Image self-adaptive sparsization representation method and device based on deep learning
CN116030347B (en) High-resolution remote sensing image building extraction method based on attention network
Jiang et al. Uncertainty analysis for seismic salt interpretation by convolutional neural networks
CN116129280B (en) Method for detecting snow in remote sensing image
Wang et al. Dyeing creation: a textile pattern discovery and fabric image generation method
CN114882292B (en) Remote sensing image ocean target identification method based on cross-sample attention mechanism graph neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination