CN116258877A - Land utilization scene similarity change detection method, device, medium and equipment - Google Patents
- Publication number: CN116258877A
- Application number: CN202310506203.7A
- Authority
- CN
- China
- Prior art keywords
- similarity
- threshold
- change detection
- network
- adaptive
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/761—Proximity, similarity or dissimilarity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
Abstract
The invention discloses a land utilization scene similarity change detection method, device, medium and equipment. The method comprises the following steps: two remote sensing images of the same region and different time phases are acquired to form an image pair; the image pair is input into a constructed threshold self-adaptive similarity network, which outputs the similarity of the image pair; and the similarity of the image pair is compared with an optimal similarity threshold determined from the threshold self-adaptive similarity network to obtain the land utilization scene similarity change detection result. Compared with existing classification-based change detection methods, the method obtains more accurate change detection results and effectively improves the accuracy of change detection.
Description
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a land use scene similarity change detection method, device, medium and equipment.
Background
Change detection is a technique that analyzes state changes of ground objects in an area by repeatedly observing that area at different times. In recent years, land-resource change survey tasks have faced problems such as excessively large survey areas and complex terrain; applying change detection to these tasks can greatly improve working efficiency and accuracy. With the continuous development of remote sensing earth observation technology, the spatial resolution of remote sensing images has been evolving from medium-low resolution toward high resolution. Unlike medium-low resolution imagery, high resolution remote sensing images provide more detailed information about ground objects and a rich data source for remote sensing image understanding, and have therefore become a new development direction for change detection. Change detection can be divided into binary change detection and category change detection, and classification-based change detection can be further divided into post-classification change detection and joint classification change detection. Post-classification change detection ignores temporal information, and its accuracy depends heavily on the initial classification accuracy, so a classification error on either image affects the final change detection accuracy; for joint classification change detection, the biggest bottleneck is sample selection, and it is generally applicable only to scenes with a small number of samples. Therefore, developing an effective change detection method that does not depend on classification results or sample selection is a pressing problem in the field of remote sensing image processing.
The change detection performance on remote sensing images depends not only on the detection method but also on the image features that are extracted, so obtaining image features with good expressive power is a key point of change detection technology. Before deep learning was applied, low-level features such as color, texture and shape were typically extracted directly by hand. Today, deep learning methods, which extract high-level features through deep networks, are widely used; among them, methods represented by convolutional neural networks have been widely applied in the field of image processing.
At present, the classification-based change detection method is influenced by classification results and sample selection, and features extracted by a manual method cannot meet the requirement of high precision, so that under the background, the research on the land utilization change detection method based on deep learning and not depending on classification precision is necessary.
Disclosure of Invention
In order to solve the defects in the prior art, the invention provides a land utilization scene similarity change detection method, a device, a medium and equipment, which can effectively improve the accuracy of change detection.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows:
in a first aspect, a method for detecting land use scene similarity change is provided, including: two remote sensing images of the same region and different time phases are acquired to form an image pair; inputting the formed image pairs into a constructed threshold self-adaptive similarity network, and outputting the similarity of the image pairs; and comparing the similarity of the image pairs with an optimal similarity threshold determined according to the threshold self-adaptive similarity network to obtain a land utilization scene similarity change detection result.
Further, the threshold self-adaptive similarity network is constructed on the basis of a ResNet residual network and includes: a convolution layer, a max pooling layer, four basic_block blocks, and an average pooling layer; the threshold self-adaptive similarity network comprises a shallow feature map fusion process and a deep feature map fusion process;
in the shallow feature map fusion process, convolution kernels of different sizes are each applied in a single convolution operation and the resulting feature maps are adjusted to a uniform size. Denoting the three resulting feature maps F1, F2 and F3, each of size C×H×W, channel fusion is performed by the following formula:

F_cat = Concat_c(F1, F2, F3)

where Concat_c denotes channel-wise concatenation as the fusion mode; the fused feature map F_cat has size 3C×H×W, with C, H and W denoting the channel, height and width of the feature map respectively. A 1×1 convolution is then performed to reduce the channel dimension back to C, giving a feature map F_r of size C×H×W. F_r is then dot-multiplied with the initial feature map F0, i.e. elements at corresponding positions are multiplied, calculated by the following formula:

F_t1 = F_r ⊙ F0

obtaining the feature map F_t1 of one time phase. The feature map F_t2 of the other time phase is calculated in the same way, and the two are then spliced in the batch dimension, calculated by:

F_s = Concat_b(F_t1, F_t2)

In the deep feature map fusion process, the deep feature maps F_d1 and F_d2 extracted by the two branches of the deep network are directly subjected to differential enhancement to obtain the fused deep feature map, by the following formula:

F_diff = |F_d1 − F_d2|

where |·| denotes the absolute value, F_d1 and F_d2 are the deep feature maps extracted from the double-branch deep network, and F_diff is the deep feature map after differential enhancement;

for the shallow feature map F_s, dimension reduction and adaptive average pooling are applied in turn to obtain the corresponding one-dimensional feature map v_s; for the deep feature map F_diff, the channels are directly compressed to obtain the corresponding one-dimensional feature map v_d. The shallow and deep feature maps can then be fused by the following formula:

v = v_s ⊕ v_d

where ⊕ denotes the fusion mode in which corresponding positions of the feature maps are added element by element;

the fused one-dimensional feature map v is passed through a sigmoid function to obtain the final similarity of the image pair.
Further, the training method of the threshold adaptive similarity network comprises the following steps:
constructing a change detection scene image library, and randomly dividing a training set, a verification set and a test set from the change detection scene image library according to a proportion;
training the threshold self-adaptive similarity network by adopting a training set, calculating training loss according to the calculated similarity and the acquired label, carrying out gradient back propagation according to the training loss, updating weight parameters, repeating the process until convergence, and finally obtaining the trained threshold self-adaptive similarity network;
wherein the training loss L is obtained by calculation of the binary cross entropy loss function:

L = −(1/N) · Σ_{i=1}^{N} [ y_i·log(s_i) + (1 − y_i)·log(1 − s_i) ]

where N represents the number of image pairs, y_i represents the label corresponding to image pair i (y_i is 0 if the pair is a changed scene, otherwise y_i is 1), and s_i represents the similarity of image pair i.
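As a sketch, this binary cross entropy loss can be written directly in NumPy (function and variable names are illustrative, and the clipping constant is an assumption added for numerical safety):

```python
import numpy as np

def bce_loss(similarities, labels, eps=1e-12):
    """Binary cross entropy over N image pairs: labels are 0 for a
    changed scene and 1 for an unchanged one; similarities lie in (0, 1)."""
    s = np.clip(np.asarray(similarities, dtype=float), eps, 1 - eps)
    y = np.asarray(labels, dtype=float)
    return -np.mean(y * np.log(s) + (1 - y) * np.log(1 - s))

# A confident, correct prediction gives a small loss;
# a confident, wrong prediction gives a large one.
good = bce_loss([0.9, 0.1], [1, 0])
bad = bce_loss([0.1, 0.9], [1, 0])
print(good, bad)
```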
Further, the construction method of the change detection scene image library comprises the following steps:
firstly, selecting two remote sensing images of the same region and different time phases, and eliminating geometric errors and radiation errors in the images by adopting the same preprocessing mode; then the images of the two phases are cut into scene images with equal pixel sizes, and the two scene images of different phases at the same position form an image pair.
Further, the method for determining the optimal similarity threshold comprises the following steps:
after the similarity of the verification set is calculated by using the trained threshold self-adaptive similarity network, the optimal similarity threshold is determined by a threshold self-adaptive method based on a similarity curve, specifically as follows:
firstly, determining a threshold range and an iteration step length;
then, an iterative loop is performed over the similarities of the verification set. Taking t as the current threshold, the similarities of the verification set are divided into a Changed class (similarity smaller than t) and an Unchanged class (similarity larger than t). Let the mean similarities of the Changed and Unchanged classes be μ_c and μ_u respectively, the overall mean similarity be μ, and the probabilities of a similarity falling into the Changed and Unchanged classes be p_c and p_u. The overall mean similarity can be expressed as:

μ = p_c·μ_c + p_u·μ_u

The variance between different classes of scenes (between-class variance) is expressed as:

σ_b² = p_c·(μ_c − μ)² + p_u·(μ_u − μ)²

The variance within the same class of scenes (within-class variance) is expressed as:

σ_c² = (1/n_c)·Σ_{s_i<t} (s_i − μ_c)²,  σ_u² = (1/n_u)·Σ_{s_i>t} (s_i − μ_u)²,  σ_w² = p_c·σ_c² + p_u·σ_u²

where σ_u² represents the variance within the Unchanged class, σ_c² the variance within the Changed class, and n_u and n_c the numbers of similarities in the Unchanged and Changed classes respectively;

The final variance ratio can be calculated by:

r = σ_b² / σ_w²

The variance ratios obtained for the different similarity thresholds are plotted as a graph, i.e. the similarity curve, and the similarity threshold corresponding to the maximum variance ratio is found from this curve as the optimal similarity threshold T.
Further, comparing the similarity of the image pair with an optimal similarity threshold value determined according to a threshold value self-adaptive similarity network to obtain a land utilization scene similarity change detection result, wherein the detection result specifically comprises:
if the similarity of the image pairs is greater than the optimal similarity threshold, the two-phase image scenes reflected by the two remote sensing images in different phases of the same area are considered to be unchanged; otherwise, the change is considered to have occurred.
Further, the method also comprises precision evaluation, specifically:
firstly, the similarity of the image pairs in the test set is calculated by using the trained threshold self-adaptive similarity network, and the real labels y_i of the image pairs are acquired;

then, the predicted label ŷ_i is derived using the optimal similarity threshold and the calculated similarity of the image pairs in the test set, specifically:

ŷ_i = 1 if s_i > T, and ŷ_i = 0 if s_i ≤ T

where s_i represents the similarity of image pair i, T is the optimal similarity threshold, and ŷ_i is the predicted label of image pair i;

finally, the overall accuracy OA and the Kappa coefficient are calculated according to the real labels y_i and the predicted labels ŷ_i.
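A minimal NumPy sketch of the two evaluation metrics, assuming the binary 0/1 labels defined above (function names and the sample labels are illustrative):

```python
import numpy as np

def overall_accuracy(y_true, y_pred):
    """Fraction of image pairs whose predicted label matches the real one."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(y_true == y_pred))

def kappa(y_true, y_pred):
    """Cohen's Kappa for the binary changed/unchanged labels:
    (p_o - p_e) / (1 - p_e), with p_e the chance agreement."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    n = len(y_true)
    p_o = np.mean(y_true == y_pred)
    p_e = sum(np.sum(y_true == c) * np.sum(y_pred == c) for c in (0, 1)) / n**2
    return float((p_o - p_e) / (1 - p_e))

y_true = [1, 1, 0, 0, 1, 0]   # 0 = changed, 1 = unchanged
y_pred = [1, 1, 0, 1, 1, 0]
print(overall_accuracy(y_true, y_pred), kappa(y_true, y_pred))
```

Kappa corrects the raw agreement for what two random labelings with the same class frequencies would agree on by chance, which is why it complements overall accuracy here.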
In a second aspect, there is provided a land use scene similarity change detection device, including:
the data acquisition module is used for acquiring two remote sensing images of different phases in the same area to form an image pair;
the similarity calculation module is used for inputting the formed image pairs into a constructed threshold self-adaptive similarity network and outputting the similarity of the image pairs;
and the comparison module is used for comparing the similarity of the image pairs with an optimal similarity threshold value determined according to the threshold value self-adaptive similarity network to obtain a land utilization scene similarity change detection result.
In a third aspect, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the land use scenario similarity change detection method as described in the first aspect.
In a fourth aspect, there is provided a computer device comprising:
a memory for storing instructions;
a processor for executing the instructions to cause the device to perform operations implementing the land use scene similarity change detection method according to the first aspect.
Compared with the prior art, the invention has the beneficial effects that: the invention forms an image pair by collecting two remote sensing images of the same area and different time phases; inputting the formed image pairs into a constructed threshold self-adaptive similarity network, and outputting the similarity of the image pairs; comparing the similarity of the image pairs with an optimal similarity threshold value determined according to a threshold value self-adaptive similarity network to obtain a land utilization scene similarity change detection result; compared with the existing classification-based change detection method, a more accurate change detection result can be obtained, and the accuracy of change detection is effectively improved.
Drawings
FIG. 1 is a schematic diagram of a threshold adaptive similarity network constructed in an embodiment of the present invention;
fig. 2 is a schematic diagram of feature extraction of a dual-phase image according to an embodiment of the present invention, where (a) represents a processing manner of a shallow feature map and (b) represents a processing manner of a deep feature map.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only for more clearly illustrating the technical aspects of the present invention, and are not intended to limit the scope of the present invention.
Embodiment one:
a land utilization scene similarity change detection method comprises the following steps: two remote sensing images of the same region and different time phases are acquired to form an image pair; inputting the formed image pairs into a constructed threshold self-adaptive similarity network, and outputting the similarity of the image pairs; and comparing the similarity of the image pairs with an optimal similarity threshold determined according to the threshold self-adaptive similarity network to obtain a land utilization scene similarity change detection result.
According to the technical scheme for land use scene change detection based on feature similarity, firstly, a scene image library for change detection is constructed, secondly, a threshold self-adaptive similarity network is constructed, training is conducted by utilizing a training set, then an optimal similarity threshold is determined by utilizing a verification set by adopting a threshold self-adaptive method, and finally, change detection is conducted on a test set based on the trained threshold self-adaptive similarity network, so that a change detection result is obtained.
As shown in fig. 1, to describe the technical solution of the present invention in detail, the embodiment flow is provided as follows:
and 1, constructing a change detection scene image library.
Firstly, the large-size remote sensing images are cut into fixed-size tiles to obtain a scene image library, which is divided into three sub-image libraries: a training set, a verification set and a test set. The training set is used for training the network, the verification set for determining the optimal similarity threshold T, and the test set for evaluating the change detection result.

The specific construction method is as follows: firstly, two remote sensing images of the same region and different time phases are selected, and geometric errors and radiation errors in the images are eliminated by the same preprocessing mode; then the images of the two time phases are cut into scene images of equal pixel size, and the two scene images of different time phases at the same position form an image pair; finally, all the image pairs are randomly divided in a 6:2:2 ratio into the training set, verification set and test set.
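The cutting and 6:2:2 division above can be sketched as follows, assuming two co-registered, preprocessed images held as NumPy arrays; the tile size, seed and all names are illustrative assumptions:

```python
import numpy as np

def build_pair_library(img_t1, img_t2, tile=64, seed=0):
    """Cut two co-registered images of different phases into equal-size
    scene tiles, pair tiles at the same position, and randomly split
    the pairs 6:2:2 into training / verification / test sets."""
    H, W = img_t1.shape[:2]
    pairs = [(img_t1[r:r + tile, c:c + tile], img_t2[r:r + tile, c:c + tile])
             for r in range(0, H - tile + 1, tile)
             for c in range(0, W - tile + 1, tile)]
    idx = np.random.default_rng(seed).permutation(len(pairs))
    n_train = int(0.6 * len(pairs))
    n_val = int(0.2 * len(pairs))
    train = [pairs[i] for i in idx[:n_train]]
    val = [pairs[i] for i in idx[n_train:n_train + n_val]]
    test = [pairs[i] for i in idx[n_train + n_val:]]
    return train, val, test

# Toy stand-ins for the two phases of one region (256x256 RGB).
t1 = np.zeros((256, 256, 3), dtype=np.uint8)
t2 = np.ones((256, 256, 3), dtype=np.uint8)
train, val, test = build_pair_library(t1, t2)
print(len(train), len(val), len(test))
```

Pairing by identical row/column offsets is what guarantees that each image pair covers the same position in the two phases.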
And 2, constructing a threshold self-adaptive similarity network.
The threshold self-adaptive similarity network is a feature extraction, fusion and similarity calculation network designed on the basis of a ResNet residual network, as shown in fig. 2. It mainly extracts features from the double-phase images, fuses low-level and high-level features, and calculates the similarity. The network consists of a convolution layer, a max pooling layer, four basic_block blocks (hereinafter referred to as block blocks) and an average pooling layer; different ResNet residual networks contain different numbers of block blocks, and features of different levels, namely shallow feature maps and deep feature maps, are extracted from different block blocks and then processed and fused accordingly. This embodiment takes a ResNet residual network as an example, but the change detection scheme designed by the invention is not limited to the ResNet residual network, and other networks may be used in specific implementations.
For features of different levels, different fusion modes are adopted. The processing of the shallow feature map is shown in fig. 2 (a) and that of the deep feature map in fig. 2 (b) (feature fusion in this embodiment uses the block4 and block5 layers, but the method is not limited to these two layers; other layers may be used in specific implementations). In the shallow feature map fusion process, convolution kernels of different sizes, each with C output channels, first perform a single convolution operation and the resulting feature maps are adjusted to a uniform size; convolution kernels of different sizes obtain receptive fields of different sizes, can perceive ground objects of different scales, and can therefore capture more complete information. The three resulting feature maps F1, F2 and F3, each of size C×H×W, are first channel-fused, calculated by the following formula:

F_cat = Concat_c(F1, F2, F3)

where Concat_c denotes channel-wise concatenation as the fusion mode; the fused feature map F_cat has size 3C×H×W, with C, H and W denoting the channel, height and width of the feature map respectively. A 1×1 convolution is then performed to reduce the channel dimension back to C, giving a feature map F_r of size C×H×W. F_r is then dot-multiplied with the initial feature map F0, i.e. elements at corresponding positions are multiplied, calculated by the following formula:

F_t1 = F_r ⊙ F0

obtaining the feature map F_t1 of one time phase; the feature map F_t2 of the other time phase is calculated in the same way, and the two are then spliced in the batch dimension, calculated by:

F_s = Concat_b(F_t1, F_t2)

In the deep feature map fusion process, the deep feature maps F_d1 and F_d2 extracted by the two branches of the deep network are directly subjected to differential enhancement to obtain the fused deep feature map, by the following formula:

F_diff = |F_d1 − F_d2|

where |·| denotes the absolute value, F_d1 and F_d2 are the deep feature maps extracted from the double-branch deep network, and F_diff is the deep feature map after differential enhancement.

After the deep and shallow feature maps are obtained, they cannot be fused directly; a processing operation is required first. For the shallow feature map F_s, dimension reduction and adaptive average pooling are applied in turn to obtain the corresponding one-dimensional feature map v_s; for the deep feature map F_diff, the channels are directly compressed to obtain the corresponding one-dimensional feature map v_d. The shallow and deep feature maps can then be fused by the following formula:

v = v_s ⊕ v_d

where ⊕ denotes the fusion mode in which corresponding positions of the feature maps are added element by element. The similarity calculation part passes the fused one-dimensional feature map v through a sigmoid function to obtain the final similarity of the image pair. By fusing the feature maps of the shallow and deep networks, the invention can better mine the global and local information of the image and thus describe the image content more accurately; at the same time, the invention takes the scale difference of ground objects in the image into account, and the constructed multi-scale threshold self-adaptive similarity network can obtain more accurate feature expression.
Step 3, the threshold self-adaptive similarity network constructed in step 2 is trained with the training set constructed in step 1: the training loss is calculated according to the calculated similarity and the acquired labels, gradient back propagation is performed according to the loss to update the weight parameters, and the process is repeated until the model converges, finally obtaining the trained model. The training loss L is calculated by the binary cross entropy loss function:

L = −(1/N) · Σ_{i=1}^{N} [ y_i·log(s_i) + (1 − y_i)·log(1 − s_i) ]

where N represents the number of image pairs, y_i represents the label corresponding to image pair i (y_i is 0 if the pair is a changed scene, otherwise y_i is 1), and s_i represents the similarity of image pair i, calculated by the sigmoid function.
Step 4, the similarity of the verification set is calculated by using the threshold self-adaptive similarity network trained in step 3, and the optimal similarity threshold T is determined by the threshold self-adaptive method.

The verification set obtained in step 1 is passed into the threshold self-adaptive similarity network trained in step 3 to obtain the image-pair similarities of the verification set, and the optimal similarity threshold T is determined by the threshold self-adaptive method. The principle of threshold determination is as follows:
the invention provides a method for better reflecting the optimal similarity threshold by adopting the ratio of the variance between different types of scenes to the variance between the scenes of the same type. The image pair similarity is finally divided into two categories, namely a changed category and an unchanged category according to the optimal threshold value. When the optimal threshold value is reasonable, the variances among different types of scenes are large, the variances among the same type of scenes are small, and the variances are large, so that the difference between the two parts in the image is large, and the requirements are met; when the optimal threshold value deviates, the variance between different types of scenes is small, the variance between the scenes of the same type is large, and the variance is small, which means that the difference between the two parts in the image is small and the requirement is not met. When part of the variation class is wrongly divided into unchanged classes or the unchanged classes are wrongly divided into changed classes, the variance between different classes of scenes is reduced, and the variance between the same class of scenes is increased. Thus, the optimal similarity threshold may be selected by a graph of similarity.
The specific method is as follows: firstly, the threshold range is set to 0–1 with an iteration step of 0.0001. Then, an iterative loop is performed over the verification-set similarities obtained in step 3. Taking t as the current threshold, the verification-set similarities are divided into a Changed class (smaller than t) and an Unchanged class (larger than t). Let the mean similarities of the two classes be μ_c and μ_u respectively, the overall mean similarity be μ, and the probabilities of a similarity falling into the Changed and Unchanged classes be p_c and p_u. The overall mean similarity can be expressed as:

μ = p_c·μ_c + p_u·μ_u

The variance between different classes of scenes is expressed as:

σ_b² = p_c·(μ_c − μ)² + p_u·(μ_u − μ)²

The variance within the same class of scenes is expressed as:

σ_c² = (1/n_c)·Σ_{s_i<t} (s_i − μ_c)²,  σ_u² = (1/n_u)·Σ_{s_i>t} (s_i − μ_u)²,  σ_w² = p_c·σ_c² + p_u·σ_u²

where σ_u² represents the variance within the Unchanged class, σ_c² the variance within the Changed class, and n_u and n_c the numbers of similarities in the Unchanged and Changed classes respectively.

The final variance ratio can be calculated by:

r = σ_b² / σ_w²

The variance ratios calculated for the different similarity thresholds are plotted as a graph, i.e. the similarity curve, and the similarity threshold corresponding to the maximum variance ratio is found from this curve as the optimal similarity threshold T.
Step 5, change detection is performed on the test set using the threshold self-adaptive similarity network trained in step 3, and the network output is compared with the optimal similarity threshold T determined in step 4: if the similarity is greater than the threshold, the two-phase image scene is considered unchanged; otherwise, it is considered to have changed. The final change detection result is obtained and its accuracy evaluated.
Step 6: change detection is performed on the test set using the optimal similarity threshold T determined in step 4 and the threshold-adaptive similarity network trained in step 3, finally obtaining the change detection result. The specific operation is as follows. First, the similarity-computation part of the threshold-adaptive similarity network trained in step 3 is used to calculate the similarity s_i of each image pair x_i in the test set. Then the predicted label ŷ_i is derived from the optimal similarity threshold T determined in step 4 and the calculated test-set similarities, as follows:

ŷ_i = Unchanged if s_i > T, and ŷ_i = Changed if s_i ≤ T

where s_i denotes the similarity of image pair x_i, T is the optimal similarity threshold, and ŷ_i is the predicted label of image pair x_i.

Finally, the overall accuracy OA and the Kappa coefficient are calculated from the true labels y_i and the predicted labels ŷ_i, and the accuracy is evaluated through them.
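A minimal sketch of the OA and Kappa computation used for the accuracy evaluation; the function name and the 0/1 encoding of unchanged/changed labels are assumptions for illustration:

```python
def overall_accuracy_and_kappa(y_true, y_pred):
    """Overall accuracy (observed agreement) and Cohen's Kappa
    (agreement corrected for chance) for binary change labels."""
    n = len(y_true)
    oa = sum(t == p for t, p in zip(y_true, y_pred)) / n
    # chance agreement from the marginal frequencies of each class
    pe = sum((y_true.count(c) / n) * (y_pred.count(c) / n)
             for c in set(y_true) | set(y_pred))
    kappa = (oa - pe) / (1 - pe) if pe != 1 else 1.0
    return oa, kappa
```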
Embodiment two:
based on the land use scene similarity change detection method described in the first embodiment, the present embodiment provides a land use scene similarity change detection device, including: the data acquisition module is used for acquiring two remote sensing images of different phases in the same area to form an image pair; the similarity calculation module is used for inputting the formed image pairs into a constructed threshold self-adaptive similarity network and outputting the similarity of the image pairs; and the comparison module is used for comparing the similarity of the image pairs with an optimal similarity threshold value determined according to the threshold value self-adaptive similarity network to obtain a land utilization scene similarity change detection result.
Embodiment III:
based on the land use scene similarity change detection method according to the first embodiment, the present embodiment provides a computer-readable storage medium having a computer program stored thereon, which when executed by a processor, implements the land use scene similarity change detection method according to the first embodiment.
Embodiment four:
based on the land use scene similarity change detection method described in the first embodiment, the present embodiment provides a computer device, including: a memory for storing instructions; and a processor, configured to execute the instructions, and cause the device to perform operations for implementing the land use scene similarity change detection method according to the first embodiment.
The foregoing is merely a preferred embodiment of the present invention, and it should be noted that modifications and variations could be made by those skilled in the art without departing from the technical principles of the present invention, and such modifications and variations should also be regarded as being within the scope of the invention.
Claims (10)
1. The land utilization scene similarity change detection method is characterized by comprising the following steps of:
two remote sensing images of the same region and different time phases are acquired to form an image pair;
inputting the formed image pairs into a constructed threshold self-adaptive similarity network, and outputting the similarity of the image pairs;
and comparing the similarity of the image pairs with an optimal similarity threshold determined according to the threshold self-adaptive similarity network to obtain a land utilization scene similarity change detection result.
2. The land use scene similarity change detection method according to claim 1, wherein the threshold-adaptive similarity network is constructed based on a ResNet residual network and comprises: a convolutional layer, a max pooling layer, four BasicBlock blocks, and an average pooling layer;
the threshold self-adaptive similarity network comprises a shallow feature map fusion process and a deep feature map fusion process;
in the shallow feature map fusion process, a single convolution operation is performed with convolution kernels of different sizes and the resulting feature maps are adjusted to a uniform size; assuming the three resulting feature maps are F_1, F_2 and F_3, each of size (C, H, W), channel fusion is performed using the following formula:

F_cat = Concat(F_1, F_2, F_3)

where Concat is the fusion mode, the fused feature map F_cat has size (3C, H, W), and C, H and W denote the channels, height and width of the feature map respectively; a 1×1 convolution operation is then performed to reduce the channel dimension, giving a feature map F of size (C, H, W); the resulting F is then dot-multiplied with the initial feature map X, i.e. corresponding position elements are multiplied, calculated by the following formula:

F_t1 = F ⊙ X

obtaining the feature map F_t1 of one time phase; the feature map F_t2 of the other time phase is obtained by the same calculation, and the two are then spliced in the batch dimension, calculated by:

F_s = Concat_batch(F_t1, F_t2)

in the deep feature map fusion process, the deep feature maps D_1 and D_2 extracted by the dual-branch deep network are directly subjected to differential enhancement to obtain the fused deep feature map, with the following formula:

D = |D_1 − D_2|

where |·| denotes the absolute value, D_1 and D_2 are the deep feature maps extracted by the dual-branch deep network, and D is the differentially enhanced deep feature map;

for the shallow feature map F_s, dimension reduction and adaptive average pooling are applied in turn to obtain the corresponding one-dimensional feature map f_s; for the deep feature map D, the channels are directly compressed to obtain the corresponding one-dimensional feature map f_d; the shallow and deep feature maps can then be fused using the following formula:

f = f_s ⊕ f_d

where ⊕ is the fusion mode, denoting element-by-element addition of corresponding positions of the feature maps;
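As a rough numpy illustration of the fusion operations in this claim, the sketch below models the 1×1 convolution as a channel-mixing matrix product and uses random stand-in feature maps; all array names, sizes, and the random weights are assumptions, not the trained network:

```python
import numpy as np

rng = np.random.default_rng(0)
C, H, W = 4, 8, 8

# three shallow feature maps from different kernel sizes, resized to C x H x W
f1, f2, f3 = (rng.standard_normal((C, H, W)) for _ in range(3))
x = rng.standard_normal((C, H, W))       # initial feature map of this time phase

f_cat = np.concatenate([f1, f2, f3], axis=0)   # channel fusion -> (3C, H, W)
w = rng.standard_normal((C, 3 * C))            # 1x1 convolution as a (C, 3C) weight matrix
f_red = np.einsum('oc,chw->ohw', w, f_cat)     # channel reduction back to (C, H, W)
f_phase = f_red * x                            # dot multiplication of corresponding elements

# deep branch: differential enhancement of the two phases' deep feature maps
d1, d2 = rng.standard_normal((2, C, H, W))
d = np.abs(d1 - d2)
```

A 1×1 convolution mixes channels independently at every spatial position, which is exactly a matrix product over the channel axis, hence the `einsum` stand-in here.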
3. The land use scene similarity change detection method according to claim 2, wherein the training method of the threshold adaptive similarity network comprises:
constructing a change detection scene image library, and randomly dividing a training set, a verification set and a test set from the change detection scene image library according to a proportion;
training the threshold self-adaptive similarity network by adopting a training set, calculating training loss according to the calculated similarity and the acquired label, carrying out gradient back propagation according to the training loss, updating weight parameters, repeating the process until convergence, and finally obtaining the trained threshold self-adaptive similarity network;
wherein the training loss L is obtained through the binary cross-entropy loss function:

L = −(1/N) Σ_{i=1}^{N} [ y_i·log(s_i) + (1 − y_i)·log(1 − s_i) ]

where N is the number of training image pairs, y_i is the label of image pair i, and s_i is its calculated similarity;
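The binary cross-entropy loss named in the claim can be sketched as follows, assuming the network's similarity output lies in (0, 1) and that label 1 marks an unchanged (similar) pair; the function name and the `eps` clipping are illustrative:

```python
import numpy as np

def bce_loss(similarities, labels, eps=1e-7):
    """Mean binary cross-entropy between predicted similarities and labels."""
    s = np.clip(np.asarray(similarities, dtype=float), eps, 1 - eps)  # avoid log(0)
    y = np.asarray(labels, dtype=float)
    return float(-np.mean(y * np.log(s) + (1 - y) * np.log(1 - s)))
```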
4. The land use scene similarity change detection method according to claim 3, wherein the construction method of the change detection scene image library comprises:
firstly, two remote sensing images of the same region at different time phases are selected, and geometric and radiation errors in the images are eliminated using the same preprocessing; the images of the two phases are then cropped into scene images of equal pixel size, and the two scene images of different phases at the same position form an image pair.
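The cropping step can be sketched as below, assuming the two preprocessed images are co-registered numpy arrays of identical shape; the function name and the non-overlapping tiling grid are illustrative choices:

```python
import numpy as np

def make_scene_pairs(img_t1, img_t2, size):
    """Cut two co-registered images (H x W x C arrays) of the same area at
    different time phases into equally sized scene patches; the two patches
    at the same grid position form one image pair."""
    assert img_t1.shape == img_t2.shape, "images must be co-registered"
    h, w = img_t1.shape[:2]
    pairs = []
    for i in range(0, h - size + 1, size):        # non-overlapping grid
        for j in range(0, w - size + 1, size):
            pairs.append((img_t1[i:i+size, j:j+size],
                          img_t2[i:i+size, j:j+size]))
    return pairs
```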
5. The land use scene similarity change detection method according to claim 3, wherein the determination method of the optimal similarity threshold value comprises:
after calculating the similarity of the verification set by using the trained threshold self-adaptive similarity network, determining an optimal similarity threshold by a threshold self-adaptive method based on a similarity curve, wherein the optimal similarity threshold is specifically as follows:
firstly, determining a threshold range and an iteration step length;
then, an iterative loop is run over the verification-set similarities with t as the current threshold, dividing the verification-set similarities into a Changed class, whose similarities are less than t, and an Unchanged class, whose similarities are greater than t; letting the mean similarities of the Changed and Unchanged classes be μ_c and μ_u respectively, the overall mean similarity be μ, and the probabilities of a similarity falling into the Changed and Unchanged classes be p_c and p_u, the overall mean similarity can be expressed as:

μ = p_c·μ_c + p_u·μ_u

the variance between scenes of different classes is expressed as:

σ_b² = p_c·(μ_c − μ)² + p_u·(μ_u − μ)² = p_c·p_u·(μ_c − μ_u)²

the variance among scenes of the same class is expressed as:

σ_w² = p_c·σ_c² + p_u·σ_u²

where σ_u² denotes the variance within the Unchanged class, σ_c² the variance within the Changed class, and N_u and N_c the numbers of similarities in the Unchanged and Changed classes respectively;

the final variance ratio can be calculated by:

R = σ_b² / σ_w²

and the similarity threshold corresponding to the maximum variance ratio is taken as the optimal similarity threshold.
6. The land use scene similarity change detection method according to claim 3, wherein the similarity of the image pair is compared with an optimal similarity threshold determined according to a threshold adaptive similarity network to obtain a land use scene similarity change detection result, specifically:
if the similarity of the image pair is greater than the optimal similarity threshold, the two-phase image scenes reflected by the two remote sensing images of the same area at different phases are considered unchanged; otherwise, a change is considered to have occurred.
7. The land use scene similarity change detection method according to claim 1, further comprising precision evaluation, specifically:
firstly, the similarities of the image pairs in the test set are calculated using the trained threshold-adaptive similarity network, and the true labels y_i of the image pairs are acquired;

then, the predicted label ŷ_i is derived using the optimal similarity threshold and the calculated similarities of the image pairs in the test set, as follows:

ŷ_i = Unchanged if s_i > T, and ŷ_i = Changed if s_i ≤ T

where s_i denotes the similarity of image pair x_i, T is the optimal similarity threshold, and ŷ_i is the predicted label of image pair x_i;

finally, the overall accuracy OA and the Kappa coefficient are calculated from the true labels y_i and the predicted labels ŷ_i for accuracy evaluation.
8. The land use scene similarity change detection device is characterized by comprising:
the data acquisition module is used for acquiring two remote sensing images of different phases in the same area to form an image pair;
the similarity calculation module is used for inputting the formed image pairs into a constructed threshold self-adaptive similarity network and outputting the similarity of the image pairs;
and the comparison module is used for comparing the similarity of the image pairs with an optimal similarity threshold value determined according to the threshold value self-adaptive similarity network to obtain a land utilization scene similarity change detection result.
9. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the land use scenario similarity change detection method according to any one of claims 1 to 7.
10. A computer device, comprising:
a memory for storing instructions;
a processor, configured to execute the instructions, and cause the device to perform operations for implementing the land use scenario similarity change detection method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310506203.7A CN116258877A (en) | 2023-05-08 | 2023-05-08 | Land utilization scene similarity change detection method, device, medium and equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116258877A (en) | 2023-06-13 |
Family
ID=86688225
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310506203.7A Pending CN116258877A (en) | 2023-05-08 | 2023-05-08 | Land utilization scene similarity change detection method, device, medium and equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116258877A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104636118A (en) * | 2013-11-10 | 2015-05-20 | 航天信息股份有限公司 | QR two-dimensional code self-adaptation binarization processing method and device based on light balance |
CN110853009A (en) * | 2019-11-11 | 2020-02-28 | 北京端点医药研究开发有限公司 | Retina pathology image analysis system based on machine learning |
CN111582271A (en) * | 2020-04-29 | 2020-08-25 | 安徽国钜工程机械科技有限公司 | Railway tunnel internal disease detection method and device based on geological radar |
CN113901900A (en) * | 2021-09-29 | 2022-01-07 | 西安电子科技大学 | Unsupervised change detection method and system for homologous or heterologous remote sensing image |
Non-Patent Citations (3)
Title |
---|
IONUT COSMIN DUTA et al.: "Pyramidal Convolution: Rethinking Convolutional Neural Networks for Visual Recognition", arXiv:2006.11538v1, pages 1-16 *
HE Yifan et al.: "Image Super-Resolution Reconstruction Algorithm Based on Multi-Scale Convolutional Neural Networks", Journal of Xiamen University of Technology, vol. 27, no. 5, pages 1-6 *
HUANG Yuhong et al.: "A Similarity Method for Scene Change Detection in High-Resolution Remote Sensing Images", Bulletin of Surveying and Mapping, no. 8, pages 48-53 *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117576574A (en) * | 2024-01-19 | 2024-02-20 | 湖北工业大学 | Electric power facility ground feature change detection method and device, electronic equipment and medium |
CN117576574B (en) * | 2024-01-19 | 2024-04-05 | 湖北工业大学 | Electric power facility ground feature change detection method and device, electronic equipment and medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11556797B2 (en) | Systems and methods for polygon object annotation and a method of training an object annotation system | |
Kato et al. | Unsupervised parallel image classification using Markovian models | |
CN109740588B (en) | X-ray picture contraband positioning method based on weak supervision and deep response redistribution | |
CN108764006B (en) | SAR image target detection method based on deep reinforcement learning | |
CN108229347B (en) | Method and apparatus for deep replacement of quasi-Gibbs structure sampling for human recognition | |
Descombes et al. | Estimating Gaussian Markov random field parameters in a nonstationary framework: application to remote sensing imaging | |
CN112233129B (en) | Deep learning-based parallel multi-scale attention mechanism semantic segmentation method and device | |
CN113988147B (en) | Multi-label classification method and device for remote sensing image scene based on graph network, and multi-label retrieval method and device | |
US11367206B2 (en) | Edge-guided ranking loss for monocular depth prediction | |
CN116258877A (en) | Land utilization scene similarity change detection method, device, medium and equipment | |
CN117131348B (en) | Data quality analysis method and system based on differential convolution characteristics | |
CN116563285A (en) | Focus characteristic identifying and dividing method and system based on full neural network | |
CN114998630B (en) | Ground-to-air image registration method from coarse to fine | |
CN116823782A (en) | Reference-free image quality evaluation method based on graph convolution and multi-scale features | |
CN116597275A (en) | High-speed moving target recognition method based on data enhancement | |
Li et al. | A new algorithm of vehicle license plate location based on convolutional neural network | |
CN114091628B (en) | Three-dimensional point cloud up-sampling method and system based on double branch network | |
CN115527050A (en) | Image feature matching method, computer device and readable storage medium | |
CN112507826B (en) | End-to-end ecological variation monitoring method, terminal, computer equipment and medium | |
CN111652246B (en) | Image self-adaptive sparsization representation method and device based on deep learning | |
CN116030347B (en) | High-resolution remote sensing image building extraction method based on attention network | |
Jiang et al. | Uncertainty analysis for seismic salt interpretation by convolutional neural networks | |
CN116129280B (en) | Method for detecting snow in remote sensing image | |
Wang et al. | Dyeing creation: a textile pattern discovery and fabric image generation method | |
CN114882292B (en) | Remote sensing image ocean target identification method based on cross-sample attention mechanism graph neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||