CN117671437A - Open stope identification and change detection method based on multitasking convolutional neural network - Google Patents
Abstract
The invention discloses an open stope identification and change detection method based on a multi-task convolutional neural network, which comprises the following steps: S1, acquiring remote sensing image data of two time phases T1 and T2 of a study area, and constructing a multi-task convolutional neural network model; S2, the change detection network branch performs differential fusion on the feature maps obtained by the first identification network branch and the feature maps obtained by the second identification network branch to obtain encoded feature maps, and then obtains feature maps D_t-5, D_t-4, D_t-3 and D_t-2 through skip connection and feature fusion; S3, the change detection network branch performs differential fusion on feature map D_t1-2 and feature map D_t2-2 to obtain feature map D_t1-t2; S4, feature map D_t-2 is multiplied by the channel attention weight and the spatial attention weight to obtain feature map D'_t-2, and the change detection result is then obtained through an up-sampling operation. The invention builds the multi-task convolutional neural network model on a twin (Siamese) VGG-16 network structure, and can be rapidly and efficiently applied to open-pit mining field identification and automatic detection of change areas.
Description
Technical Field
The invention relates to the field of open stope remote sensing image processing and change detection, and in particular to an open stope identification and change detection method based on a multi-task convolutional neural network.
Background
Although the exploitation of coal resources promotes economic development, it brings serious ecological and environmental problems, specifically expressed as follows: as the intensity of coal resource exploitation increases, serious surface damage is caused, normal land-use patterns are disturbed, and ecological elements such as surface/ground water, soil, vegetation and the atmosphere are affected, bringing ecological and environmental risks. Conventional open stope identification and change detection usually adopts manual field investigation, and the manually surveyed data suffer from low accuracy, large consumption of manpower and material resources, and untimely monitoring. Accurately identifying the spatial extent of open stopes in mining areas and detecting their changes helps the relevant national departments with ecological environment monitoring and management and with the supervision of open stope mining.
Existing methods for automatically identifying the spatial extent of open stopes and detecting their changes from remote sensing images are mainly traditional automatic remote sensing interpretation methods, such as membership functions and random forests; although their identification accuracy has continuously improved, the accuracy of the results still cannot meet application requirements. Most existing research separates the two tasks of identification and change detection, and change detection does not reach into processes such as the encoding stage of feature identification, so change detection accuracy is low; in particular, for low-detail feature identification, change detection is disturbed by pseudo-change information (the pseudo-change problem is especially serious in the open stope change detection task, because the open stopes of coal mining areas are continuously mined). Ground-object types in coal mining areas are complex and highly heterogeneous, and the identification and change detection tasks of open stopes, which both focus on details, therefore face great challenges.
Disclosure of Invention
The invention aims to solve the technical problems pointed out in the background art and provides an open stope identification and change detection method based on a multi-task convolutional neural network. A multi-task convolutional neural network model is constructed on a twin VGG-16 network structure, which can realize open stope spatial-extent identification and change detection in an end-to-end manner, can be quickly and efficiently applied to open stope identification and automatic detection of change areas, and provides a network detection model and data support for the ecological environment protection of open stopes and the monitoring of illegal mining activity in mining areas.
The aim of the invention is achieved by the following technical scheme:
an open stope identification and change detection method based on a multi-task convolutional neural network, the method comprising:
S1, determining a research area, and collecting remote sensing image data of the two time phases T1 and T2 of the research area; constructing a multi-task convolutional neural network model, wherein the multi-task convolutional neural network model comprises a first identification network branch, a second identification network branch and a change detection network branch;
the encoding process of the first identification network branch uses a VGG-16 network to extract five hierarchical feature maps of the T1 image data, denoted E_t1-1, E_t1-2, E_t1-3, E_t1-4 and E_t1-5; the decoding process of the first identification network branch obtains feature maps D_t1-5, D_t1-4, D_t1-3 and D_t1-2 from feature maps E_t1-2, E_t1-3 and E_t1-4 through skip connection and feature fusion, and then obtains the identification result of the T1 image data through an up-sampling operation;
the encoding process of the second identification network branch uses a VGG-16 network to extract five hierarchical feature maps of the T2 image data, denoted E_t2-1, E_t2-2, E_t2-3, E_t2-4 and E_t2-5; the decoding process of the second identification network branch obtains feature maps D_t2-5, D_t2-4, D_t2-3 and D_t2-2 from feature maps E_t2-2, E_t2-3 and E_t2-4 through skip connection and feature fusion, and then obtains the identification result of the T2 image data through an up-sampling operation;
S2, the encoding process of the change detection network branch performs differential fusion on feature maps E_t1-2, E_t1-3, E_t1-4, E_t1-5 and feature maps E_t2-2, E_t2-3, E_t2-4, E_t2-5 to obtain encoded feature maps E_t-2, E_t-3, E_t-4 and E_t-5; the decoding process of the change detection network branch obtains feature maps D_t-5, D_t-4, D_t-3 and D_t-2 from encoded feature maps E_t-2, E_t-3 and E_t-4 through skip connection and feature fusion;
S3, the change detection network branch performs differential fusion on feature map D_t1-2 and feature map D_t2-2 to obtain feature map D_t1-t2; the change detection network branch processes feature map D_t1-t2 with a convolutional attention module to obtain a channel attention weight and a spatial attention weight;
S4, feature map D_t-2 is multiplied by the channel attention weight and the spatial attention weight respectively to obtain feature map D'_t-2 with enhanced information in the channel and spatial directions, and the change detection result of the research area is then obtained through an up-sampling operation.
In order to better implement the present invention, in step S1 the multi-task convolutional neural network model is trained using the following method:
S11, preparing remote sensing image sample data of the two time phases T1 and T2 of the research area, comprising: collecting remote sensing image samples of the two time phases T1 and T2 of the research area, correspondingly extracting boundary vectors from each, and performing vector-to-raster conversion to obtain a sample raster image for each of the two time phases; the remote sensing image sample of time phase T1 and the sample raster image of time phase T1 correspondingly form the T1 image sample data, and the remote sensing image sample of time phase T2 and the sample raster image of time phase T2 correspondingly form the T2 image sample data;
S12, cutting the T1 image sample data and the T2 image sample data into image blocks, and dividing the image blocks into a training set, a verification set and a test set in a ratio of 6:2:2; the training set and the verification set are used for training the model, and the test set is used for testing the accuracy and generalization capability of the model.
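The 6:2:2 split of image blocks in step S12 can be sketched as follows (a minimal illustration, not part of the patent; the function name and fixed seed are assumptions):

```python
import random

def split_tiles(tile_ids, ratios=(0.6, 0.2, 0.2), seed=42):
    """Shuffle tile identifiers and split them 6:2:2 into train/val/test."""
    ids = list(tile_ids)
    random.Random(seed).shuffle(ids)
    n_train = int(len(ids) * ratios[0])
    n_val = int(len(ids) * ratios[1])
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]

train_ids, val_ids, test_ids = split_tiles(range(100))
```

In practice the same split must be applied jointly to the T1 block, the T2 block and their label rasters, so that corresponding tiles of the two time phases stay in the same subset.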
In order to better implement the present invention, the decoding process of the first identification network branch is as follows: feature map E_t1-5 is first convolved to obtain feature map D_t1-5; D_t1-5 is then up-sampled and combined with feature map E_t1-4 through skip connection and feature fusion to obtain feature map D_t1-4; D_t1-4 is up-sampled and fused with feature map E_t1-3 to obtain feature map D_t1-3; D_t1-3 is up-sampled and fused with feature map E_t1-2 to obtain feature map D_t1-2; and an up-sampling operation on feature map D_t1-2 yields the identification result of the T1 image data. The decoding process of the second identification network branch is as follows: feature map E_t2-5 is first convolved to obtain feature map D_t2-5; D_t2-5 is then up-sampled and fused with feature map E_t2-4 to obtain feature map D_t2-4; D_t2-4 is up-sampled and fused with feature map E_t2-3 to obtain feature map D_t2-3; D_t2-3 is up-sampled and fused with feature map E_t2-2 to obtain feature map D_t2-2; and an up-sampling operation on feature map D_t2-2 yields the identification result of the T2 image data. The decoding process of the change detection network branch is as follows: feature map E_t-5 is first convolved to obtain feature map D_t-5; D_t-5 is then up-sampled and fused with feature map E_t-4 to obtain feature map D_t-4; D_t-4 is up-sampled and fused with feature map E_t-3 to obtain feature map D_t-3; and D_t-3 is up-sampled and fused with feature map E_t-2 to obtain feature map D_t-2.
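One decoder step described above — up-sampling the deeper feature map and fusing it with the skip-connected encoder feature map — can be illustrated with a minimal NumPy sketch (nearest-neighbour up-sampling and channel concatenation are assumptions; the patent does not fix the up-sampling or fusion operators):

```python
import numpy as np

def upsample2x(feat):
    """Nearest-neighbour 2x spatial up-sampling of a (C, H, W) feature map."""
    return feat.repeat(2, axis=1).repeat(2, axis=2)

def decode_step(deep, skip):
    """One decoder step: up-sample the deeper decoder feature and fuse it
    with the skip-connected encoder feature by channel concatenation
    (the fusion convolution that would follow is omitted in this sketch)."""
    return np.concatenate([upsample2x(deep), skip], axis=0)
```

For example, fusing an 8-channel 4x4 decoder feature with a 4-channel 8x8 encoder feature gives a 12-channel 8x8 map, which a subsequent convolution would compress.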
Preferably, the remote sensing image data of the two time phases T1 and T2 of the research area in step S1 and the remote sensing image samples in step S11 are subjected to image preprocessing, which includes radiometric calibration, atmospheric correction, orthorectification or/and image fusion.
Preferably, the skip connection further comprises processing by an edge information enhancement module. The edge information enhancement module comprises channel-dimension pooling and Sobel convolution: the input feature map is first channel-compressed by channel-dimension pooling; the Sobel convolution comprises a horizontal-direction operator Sobel_x and a vertical-direction operator Sobel_y; the channel-compressed feature map obtains edge information in the horizontal and vertical directions through the Sobel convolutions, the two are added, and the result is then multiplied with the original input feature map to obtain a feature map with enhanced edge information.
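A minimal NumPy sketch of the edge information enhancement module as described (channel-dimension pooling is assumed here to be an average over channels; the naive convolution and the function names are illustrative, not from the patent):

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def conv2d_same(img, kernel):
    """Naive 3x3 'same' convolution (zero padding) on a 2-D array."""
    padded = np.pad(img, 1)
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    return out

def edge_enhance(feat):
    """feat: (C, H, W). Channel pooling -> Sobel_x + Sobel_y -> multiply back."""
    pooled = feat.mean(axis=0)                        # channel-dimension pooling
    edges = conv2d_same(pooled, SOBEL_X) + conv2d_same(pooled, SOBEL_Y)
    return feat * edges[None, :, :]                   # re-weight input by edge map
```

On a constant feature map the interior edge response is zero, so only regions with real intensity gradients (such as stope boundaries) are amplified.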
Preferably, the loss function of the multi-task convolutional neural network model is jointly composed of a contrastive loss function L_CT and a cross-entropy loss function L_CE, and is calculated as:

L = ω_1·L_CT + ω_2·L_CE;

where ω_1 is the weight of the contrastive loss function L_CT, ω_2 is the weight of the cross-entropy loss function L_CE, and L is the total loss value of the multi-task convolutional neural network model.
Preferably, the contrastive loss function L_CT is calculated as:

L_CT = (1/N) · Σ_{n=1..N} [ y_n·d_n² + (1 − y_n)·max(margin − d_n, 0)² ];

where d_n denotes the Euclidean distance between the two features forming feature pair n; y_n = 1 if the two features in pair n are similar, and y_n = 0 otherwise; margin is a set threshold; and N is the total number of feature pairs.
Preferably, the cross-entropy loss function L_CE is calculated as:

L_CE = −(1/p) · Σ_{i=1..p} [ ŷ_i·log(y_i) + (1 − ŷ_i)·log(1 − y_i) ];

where ŷ_i is the true category corresponding to pixel i, y_i is the pixel result predicted by the model, and p is the total number of pixels.
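Assuming the standard forms of the contrastive and binary cross-entropy losses reconstructed above, the total loss L = ω_1·L_CT + ω_2·L_CE can be sketched in NumPy (weights and function names are illustrative):

```python
import numpy as np

def contrastive_loss(d, similar, margin=1.0):
    """d: Euclidean distances of N feature pairs; similar: 1 if pair is similar."""
    d = np.asarray(d, dtype=float)
    s = np.asarray(similar, dtype=float)
    return np.mean(s * d**2 + (1 - s) * np.maximum(margin - d, 0.0)**2)

def binary_cross_entropy(y_true, y_pred, eps=1e-7):
    """Pixel-wise binary cross entropy averaged over the p pixels."""
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

def total_loss(d, similar, y_true, y_pred, w1=0.5, w2=0.5):
    """L = w1 * L_CT + w2 * L_CE; w1, w2 are hyperparameters (assumed values)."""
    return w1 * contrastive_loss(d, similar) + w2 * binary_cross_entropy(y_true, y_pred)
```

Similar pairs are pulled together (penalized by d²), while dissimilar pairs are pushed apart until their distance exceeds the margin.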
Compared with the prior art, the invention has the following advantages:
(1) The invention builds a multi-task convolutional neural network model based on a twin VGG-16 network structure, can synchronously realize the open stope space range identification and the change detection research end to end, can be quickly and efficiently applied to open stope identification and automatic change area detection, and provides network detection models and data support for the ecological environment protection of open stopes and the illegal mining activity monitoring of mining areas.
(2) The multi-task convolutional neural network model constrains the feature extraction process in the feature encoding part with the contrastive loss function, which enhances the ability to discriminate changed samples and effectively ignores unchanged samples, addresses the complex heterogeneity of mining areas, and improves change detection accuracy; an edge information enhancement module for enhancing the edge features of the open stope is constructed in the skip connection, improving the separability of open stope details; and the feature decoding layer performs absolute-difference fusion on the features decoded by the identification branches and inputs the result to the attention mechanism module to obtain the spatial and channel attention features, which helps further improve the accuracy of the change detection task.
(3) The invention uses the twin VGG-16 network structure to extract features separately from the earlier- and later-phase remote sensing images for feature decoding of the subsequent identification branches and open stope identification; at the same time, under the supervision of the change detection ground truth, the contrastive loss function supervises and constrains the feature extraction process of the two time phases, strengthening the model's learning of substantial change features in the images, suppressing features irrelevant to substantial change, improving the recognizability of detail features, reducing the interference of pseudo-change information, and thereby improving the sensitivity of the network to changed pixels.
(4) The invention carries out absolute difference operation on the multidimensional features extracted from the front and back time phase remote sensing images by the twin VGG-16 network, and is used for the feature decoding of the subsequent change detection branch; absolute difference operation is carried out by utilizing the identification branch characteristics, and the absolute difference operation is input into a convolution attention module to acquire channel attention weight and space attention weight; and finally, fusing the two attention weights with the decoding characteristics of the change detection branch to obtain a change detection result with high precision.
(5) The invention constructs an edge information enhancement module in the skip connection for enhancing the edge features of the open stope; in the feature encoding stage, the edge structure information of ground objects in the features is gradually blurred by successive convolution and pooling operations, which is unfavorable for the decoding part of the open stope identification and change detection tasks.
Drawings
FIG. 1 is a schematic diagram of a multi-tasking convolutional neural network model of the present invention;
FIG. 2 is a schematic diagram of the principle structure of a VGG-16 network according to an embodiment;
FIG. 3 is a schematic diagram of the principle structure of an edge information enhancement module according to an embodiment;
FIG. 4 is a schematic diagram of the schematic structure of a CBAM attention module according to an embodiment;
FIG. 5 is a schematic diagram of a channel attention module according to an embodiment;
fig. 6 is a schematic structural diagram of a spatial attention module according to an embodiment.
Detailed Description
The invention is further illustrated by the following examples:
examples
As shown in fig. 1, an open stope identification and change detection method based on a multi-task convolutional neural network comprises:
s1, determining a research area, and collecting remote sensing image data (high-resolution remote sensing image data) of two time phases of the research area T1 and the research area T2. And constructing a multi-task convolutional neural network model, wherein the multi-task convolutional neural network model comprises a first identification network branch, a second identification network branch and a change detection network branch. In this embodiment, the first and second identification network branches have the same structure and all adopt VGG-16 networks, as shown in fig. 1, and the first and second identification network branches form a twin VGG-16 network structure, as shown in fig. 2, preferably, the VGG-16 network of the present invention includes 13 convolutional layers and 4 pooling layers for 5-layer feature extraction.
The encoding process of the first identification network branch uses the VGG-16 network to extract five hierarchical feature maps of the T1 image data, denoted E_t1-1, E_t1-2, E_t1-3, E_t1-4 and E_t1-5; the decoding process obtains feature maps D_t1-5, D_t1-4, D_t1-3 and D_t1-2 from feature maps E_t1-2, E_t1-3 and E_t1-4 through skip connection and feature fusion, and then obtains the identification result of the T1 image data through an up-sampling operation. Preferably, the decoding process of the first identification network branch is as follows: feature map E_t1-5 is first convolved to obtain feature map D_t1-5; D_t1-5 is then up-sampled and fused with feature map E_t1-4 through skip connection and feature fusion to obtain feature map D_t1-4; D_t1-4 is up-sampled and fused with E_t1-3 to obtain D_t1-3; D_t1-3 is up-sampled and fused with E_t1-2 to obtain D_t1-2; and an up-sampling operation on D_t1-2 yields the identification result of the T1 image data.
The encoding process of the second identification network branch uses the VGG-16 network to extract five hierarchical feature maps of the T2 image data, denoted E_t2-1, E_t2-2, E_t2-3, E_t2-4 and E_t2-5; the decoding process obtains feature maps D_t2-5, D_t2-4, D_t2-3 and D_t2-2 from feature maps E_t2-2, E_t2-3 and E_t2-4 through skip connection and feature fusion, and then obtains the identification result of the T2 image data through an up-sampling operation. Preferably, the decoding process of the second identification network branch is as follows: feature map E_t2-5 is first convolved to obtain feature map D_t2-5; D_t2-5 is then up-sampled and fused with feature map E_t2-4 through skip connection and feature fusion to obtain feature map D_t2-4; D_t2-4 is up-sampled and fused with E_t2-3 to obtain D_t2-3; D_t2-3 is up-sampled and fused with E_t2-2 to obtain D_t2-2; and an up-sampling operation on D_t2-2 yields the identification result of the T2 image data.
In some embodiments, in step S1 the multi-task convolutional neural network model is trained using the following method:
s11, remote sensing image sample data (comprising T1 image sample data and T2 image sample data) of two time phases of a research area T1 and a research area T2 are manufactured, and the method comprises the following steps: collecting remote sensing image samples (high-resolution remote sensing image data) of two time phases of a research area T1 and T2, respectively and correspondingly extracting boundary vectors (the boundary of an outdoor stope can be measured and positioned by adopting field investigation, and the boundary positioning data is convenient for the vector extraction of the boundary), carrying out vector grid conversion processing, respectively converting the remote sensing image samples of the two time phases of the research area T1 and T2 into sample grid images, correspondingly forming T1 image sample data by the remote sensing image samples of the time phase of the research area T1 and the sample grid images of the time phase of the T1, and correspondingly forming T2 image sample data by the remote sensing image samples of the time phase of the research area T2 and the sample grid images of the time phase of the T2. Preferably, the remote sensing image data of two time phases of the research areas T1 and T2 in the step S1 and the remote sensing image sample in the step S11 are subjected to image preprocessing, and the image preprocessing includes radiometric calibration, atmospheric correction, orthographic correction or/and image fusion.
S12, cutting the T1 image sample data and the T2 image sample data into image blocks (the T1 and T2 image sample data are cut over the same regions to obtain image blocks of the same region; the size of an image block is C×H×W, where C is the number of channels, H the number of image rows and W the number of image columns). In this embodiment, data enhancement processing can be performed on the image blocks to improve the generalization capability of the model; the data enhancement processing comprises flipping, translation, scale change, contrast change and Gaussian noise. The image blocks are divided into a training set, a verification set and a test set in a ratio of 6:2:2; the training set and the verification set are used for training the model, and the test set is used for testing the accuracy and generalization capability of the model.
Taking the Shendong mining area as an example, Gaofen-6 (GF-6) high-resolution remote sensing images of the Shendong mining area from 2019 (time phase T1) and 2020 (time phase T2) are collected, preprocessed and made into remote sensing image sample data of time phases T1 and T2, so that the images meet the input format specified by the multi-task convolutional neural network model, as follows: field investigation is used to measure and position the boundary of the open stope, and the boundary positioning data is obtained to facilitate vector extraction of the boundary; the Gaofen-6 remote sensing images of the Shendong mining area for 2019 and 2020 are collected or downloaded, and image preprocessing is performed, including radiometric calibration, atmospheric correction, orthorectification, image fusion, research-area cropping and the like; geographic information software (such as ArcMap) is used to label the open stope spatial extent and the inter-annual change regions in the 2019 and 2020 remote sensing images of the area, and the labels are converted into sample raster images; the remote sensing image data and sample raster images of the Shendong mining area for 2019 and 2020 (i.e., the remote sensing image sample of time phase T1 and the sample raster image of time phase T1 correspondingly form the T1 image sample data, and the remote sensing image sample of time phase T2 and the sample raster image of time phase T2 correspondingly form the T2 image sample data) are cut with ArcMap into a number of 256×256-pixel image blocks over the same areas.
The cut data are divided into a training set, a verification set and a test set in a ratio of 6:2:2; the training set and the verification set are used for training the model, and the test set is used for testing the accuracy and generalization capability of the final model. Only the training set is subjected to data enhancement processing to improve the generalization capability of the model; the data enhancement processing comprises flipping, translation, scale change, contrast change and Gaussian noise.
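A minimal sketch of such data enhancement for one image block (only flipping, contrast change and Gaussian noise are shown; translation and scale change are omitted for brevity, and the parameter ranges are illustrative assumptions):

```python
import numpy as np

def augment(image, rng):
    """Randomly flip, change contrast and add Gaussian noise to a (C, H, W) tile."""
    if rng.random() < 0.5:
        image = image[:, :, ::-1]                     # horizontal flip
    if rng.random() < 0.5:
        image = image[:, ::-1, :]                     # vertical flip
    image = image * rng.uniform(0.8, 1.2)             # contrast change
    noise = rng.normal(0.0, 0.01, size=image.shape)   # Gaussian noise
    return image + noise
```

For change detection training, the same geometric transform (flips) must be applied to the T1 tile, the T2 tile and the label raster, while the radiometric perturbations may differ per time phase.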
S2, the encoding process of the change detection network branch performs differential fusion on feature maps E_t1-2, E_t1-3, E_t1-4, E_t1-5 and feature maps E_t2-2, E_t2-3, E_t2-4, E_t2-5 to obtain encoded feature maps E_t-2, E_t-3, E_t-4 and E_t-5; the differential fusion in this embodiment preferably adopts absolute-difference fusion (i.e., a feature absolute-difference fusion module): E_t-i = |E_t1-i − E_t2-i|, i = 2, 3, 4, 5. The decoding process of the change detection network branch obtains feature maps D_t-5, D_t-4, D_t-3 and D_t-2 from encoded feature maps E_t-2, E_t-3 and E_t-4 through skip connection and feature fusion. Preferably, the decoding process of the change detection network branch is as follows: feature map E_t-5 is first convolved to obtain feature map D_t-5; D_t-5 is then up-sampled and fused with feature map E_t-4 through skip connection and feature fusion to obtain feature map D_t-4; D_t-4 is up-sampled and fused with E_t-3 to obtain D_t-3; and D_t-3 is up-sampled and fused with E_t-2 to obtain D_t-2.
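The feature absolute-difference fusion E_t-i = |E_t1-i − E_t2-i| is a one-liner per level; a NumPy sketch (function name is illustrative):

```python
import numpy as np

def abs_diff_fuse(feats_t1, feats_t2):
    """Feature absolute-difference fusion: E_t-i = |E_t1-i - E_t2-i|.

    feats_t1 / feats_t2: lists of same-shaped (C, H, W) arrays for
    levels i = 2..5 from the two identification branches.
    """
    return [np.abs(a - b) for a, b in zip(feats_t1, feats_t2)]
```

Identical features at the two time phases thus fuse to zero, so the change detection decoder sees large activations only where the two phases differ.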
S3, the change detection network branch performs differential fusion on feature map D_t1-2 and feature map D_t2-2 to obtain feature map D_t1-t2; the differential fusion of this embodiment preferably adopts absolute differential fusion, i.e., D_t1-t2 = |D_t1-2 − D_t2-2|. The change detection network branch employs a convolutional attention module (Convolutional Block Attention Module, CBAM) to process feature map D_t1-t2 and obtain a channel attention weight and a spatial attention weight. As shown in fig. 4, the convolutional attention module includes a channel attention module and a spatial attention module; feature map D_t1-t2 (e.g., expressed as C×H×W, where C is the number of channels, H the number of rows, and W the number of columns of the feature map) is input into the convolutional attention module, the channel attention weight is obtained by the channel attention module, and the spatial attention weight is obtained by the spatial attention module. As shown in fig. 5, the channel attention module includes two parallel pooling branches (an adaptive global max pooling layer and an adaptive global average pooling layer), a multi-layer perceptron, and a Sigmoid activation module: feature map D_t1-t2 generates two feature maps through the two parallel pooling branches, the two feature maps are each fed into the same multi-layer perceptron, the outputs are added, and the channel attention weight is then obtained through the Sigmoid activation operation. As shown in fig. 6, the spatial attention module includes two parallel pooling branches (an adaptive global max pooling layer and an adaptive global average pooling layer), a channel splicing module, a convolution module, and a Sigmoid activation module: feature map D_t1-t2 generates two feature maps through the two parallel pooling branches, and the two feature maps undergo channel splicing, convolution and the Sigmoid activation operation to obtain the spatial attention weight.
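A simplified plain-Python sketch of how CBAM-style channel and spatial attention weights could be computed. Assumptions: the MLP weight shapes `w1`/`w2` are illustrative placeholders, ReLU is used in the hidden layer, and the 7×7 convolution of the standard CBAM spatial branch is replaced here by a direct max + mean combination.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def channel_attention(fmap, w1, w2):
    """Channel attention: per-channel global max/avg pooling, a shared
    two-layer MLP (w1: hidden x C, w2: C x hidden), addition, sigmoid."""
    def mlp(vec):
        hidden = [max(0.0, sum(v * w for v, w in zip(vec, row)))  # ReLU
                  for row in w1]
        return [sum(h * w for h, w in zip(hidden, row)) for row in w2]
    max_pool = [max(max(row) for row in ch) for ch in fmap]
    avg_pool = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
                for ch in fmap]
    summed = [a + b for a, b in zip(mlp(max_pool), mlp(avg_pool))]
    return [sigmoid(s) for s in summed]          # one weight per channel

def spatial_attention(fmap):
    """Spatial attention, simplified: per-pixel channel max + channel mean,
    then sigmoid (standard CBAM would convolve the 2-channel stack)."""
    C, H, W = len(fmap), len(fmap[0]), len(fmap[0][0])
    return [[sigmoid(max(fmap[c][i][j] for c in range(C)) +
                     sum(fmap[c][i][j] for c in range(C)) / C)
             for j in range(W)] for i in range(H)]  # one weight per pixel

# Toy 2-channel 2x2 feature map; MLP weights are illustrative placeholders.
fmap = [[[1.0, 2.0], [3.0, 4.0]], [[0.0, 0.0], [0.0, 8.0]]]
cw = channel_attention(fmap, w1=[[1.0, 1.0]], w2=[[1.0], [1.0]])
sw = spatial_attention(fmap)
```

In the network, `cw` would scale each channel of D_t-2 and `sw` would scale each spatial position, as described in step S4.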
S4, feature map D_t-2 is multiplied by the channel attention weight and the spatial attention weight respectively to obtain feature map D′_t-2 with information enhanced in the channel and spatial directions, and the change detection result of the research area, i.e., the change detection prediction result, is then obtained through the up-sampling operation.
In this embodiment, as a further preferred implementation: the skip connection further includes edge information enhancement module processing. Because the ground features of the coal mine area are complexly distributed and the surface heterogeneity is strong, the repeated convolution and pooling operations of the first identification network branch and the second identification network branch easily blur the ground-feature boundary information in the encoded features. As shown in fig. 3, the edge information enhancement module (Edge Information Enhancement Module, EIEM) includes channel-dimension pooling and Sobel convolution. The edge information enhancement module performs channel compression on an input feature map through channel-dimension pooling, e.g., a feature map (C×H×W) is channel-pooled to obtain a feature map (1×H×W). The Sobel convolution includes a horizontal operator Sobel_x and a vertical operator Sobel_y; the channel-compressed feature map is Sobel-convolved to obtain edge information in the horizontal and vertical directions, the two are added, and the result is then multiplied element-wise with the original input feature map to obtain a feature map with enhanced edge information. According to the invention, the skip connections of the first identification network branch, the second identification network branch and the change detection network branch are all processed by the edge information enhancement module, so that the feature processing of the three branches is more refined and the edge identification capability and change-edge detection capability of the subsequent feature maps are enhanced.
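An illustrative plain-Python sketch of the edge information enhancement step. Assumption: channel-dimension pooling is taken here as the channel mean, since the text does not fix the pooling type; a real implementation would use framework convolutions.

```python
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal operator
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical operator

def conv3x3(img, kernel):
    """Zero-padded 3x3 convolution on a 2-D list (H x W)."""
    H, W = len(img), len(img[0])
    out = [[0.0] * W for _ in range(H)]
    for i in range(H):
        for j in range(W):
            s = 0.0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < H and 0 <= jj < W:
                        s += img[ii][jj] * kernel[di + 1][dj + 1]
            out[i][j] = s
    return out

def edge_enhance(fmap):
    """EIEM sketch: channel pooling C x H x W -> 1 x H x W (mean assumed),
    Sobel_x + Sobel_y edge maps added, then element-wise product with the
    original input feature map."""
    C, H, W = len(fmap), len(fmap[0]), len(fmap[0][0])
    pooled = [[sum(fmap[c][i][j] for c in range(C)) / C for j in range(W)]
              for i in range(H)]
    gx = conv3x3(pooled, SOBEL_X)
    gy = conv3x3(pooled, SOBEL_Y)
    edge = [[gx[i][j] + gy[i][j] for j in range(W)] for i in range(H)]
    return [[[fmap[c][i][j] * edge[i][j] for j in range(W)]
             for i in range(H)] for c in range(C)]

fmap = [[[0.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 0.0]]]
out = edge_enhance(fmap)
```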
In some embodiments, the training set is used for model training of the multi-task convolutional neural network model, and the verification set is used to verify the model precision after each training iteration. The loss function adopted by the multi-task convolutional neural network model in iterative training is jointly composed of a contrastive loss function L_CT and a cross-entropy loss function L_CE, and is calculated as:
L = ω_1·L_CT + ω_2·L_CE, where ω_1 is the weight of the contrastive loss function L_CT, ω_2 is the weight of the cross-entropy loss function L_CE, and L is the total loss value of the multi-task convolutional neural network model. The multi-task convolutional neural network model uses the contrastive loss function L_CT for loss constraint in the encoding process. As shown in fig. 1, the first identification network branch extracts feature maps E′_t1-2, E′_t1-3, E′_t1-4, E′_t1-5 from the last four hierarchical feature maps of the T1 image data (specifically, E_t1-2, E_t1-3, E_t1-4, E_t1-5) using 1×1 convolution; the second identification network branch extracts feature maps E′_t2-2, E′_t2-3, E′_t2-4, E′_t2-5 from the last four hierarchical feature maps of the T2 image data (specifically, E_t2-2, E_t2-3, E_t2-4, E_t2-5) using 1×1 convolution. Feature maps E′_t1-2, E′_t1-3, E′_t1-4, E′_t1-5 and feature maps E′_t2-2, E′_t2-3, E′_t2-4, E′_t2-5 correspond in sequence as feature pairs (E′_t1-2 and E′_t2-2 form a feature pair, E′_t1-3 and E′_t2-3 form a feature pair, E′_t1-4 and E′_t2-4 form a feature pair, and E′_t1-5 and E′_t2-5 form a feature pair), with the real change detection results corresponding to the T1 and T2 image data as supervision (taking the remote sensing image sample data of the two time phases T1 and T2 of the research area used for model training as an example, the real change detection results correspond to the change detection labels of the remote sensing image sample data).
The contrastive loss function L_CT used to constrain the network branches of the multi-task convolutional neural network model in the encoding process is formulated as follows:
L_CT = (1/(2N)) · Σ_{n=1}^{N} [ y_n·d_n^2 + (1 − y_n)·max(margin − d_n, 0)^2 ]
where d_n is the Euclidean distance between the two features forming feature pair n (e.g., the Euclidean distance of the feature pair formed by feature maps E′_t1-2 and E′_t2-2); if the two features in feature pair n are similar, y_n = 1 (the corresponding real change detection result is that the open stope is unchanged between the two time phases), otherwise y_n = 0 (the corresponding real change detection result is that the open stope has changed between the two time phases); margin is a set threshold and N is the total number of feature pairs.
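A plain-Python sketch of the contrastive loss in the standard Siamese form (assuming the usual 1/(2N) normalization, y_n = 1 for an unchanged/similar pair, and an illustrative margin of 2.0):

```python
def contrastive_loss(dists, labels, margin=2.0):
    """Contrastive loss over N feature pairs:
    L = (1/(2N)) * sum_n [ y_n*d_n^2 + (1 - y_n)*max(margin - d_n, 0)^2 ]
    dists:  Euclidean distances d_n of the feature pairs
    labels: y_n = 1 for an unchanged (similar) pair, 0 for a changed pair."""
    N = len(dists)
    total = 0.0
    for d, y in zip(dists, labels):
        total += y * d * d + (1 - y) * max(margin - d, 0.0) ** 2
    return total / (2 * N)

# A similar pair at distance 0.5 and a changed pair at distance 3.0:
print(contrastive_loss([0.5, 3.0], [1, 0], margin=2.0))  # 0.0625
```

The loss pulls unchanged pairs together (penalizing their distance) and pushes changed pairs apart until they exceed the margin.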
The multi-task convolutional neural network model uses the cross-entropy loss function L_CE for loss constraint on the output results. As shown in fig. 1, the prediction result and the real result of the T1 image data form one group of loss-function inputs, the prediction result and the real result of the T2 image data form one group of loss-function inputs, and the change detection prediction result output by the multi-task convolutional neural network model and the change detection real result form one group of loss-function inputs; the cross-entropy loss function L_CE constrains the output results of the model. The cross-entropy loss function L_CE is formulated as follows:
L_CE = −(1/p) · Σ_{i=1}^{p} [ ŷ_i·log(y_i) + (1 − ŷ_i)·log(1 − y_i) ]
where ŷ_i is the real category corresponding to pixel i (the real result corresponding to the T1 image data, the real result corresponding to the T2 image data, or the real change detection result), y_i is the model-predicted result for pixel i (the prediction result corresponding to the T1 image data, the prediction result of the T2 image data, or the change detection prediction result), and p is the total number of pixels.
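A plain-Python sketch of the pixel-wise binary cross-entropy (the clipping constant `eps` is an implementation detail added here for numerical stability, not part of the formula above):

```python
import math

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """Pixel-wise binary cross-entropy:
    L = -(1/p) * sum_i [ yhat_i*log(y_i) + (1 - yhat_i)*log(1 - y_i) ]
    y_true: real category per pixel (0 or 1)
    y_pred: predicted probability per pixel."""
    p = len(y_true)
    total = 0.0
    for t, q in zip(y_true, y_pred):
        q = min(max(q, eps), 1.0 - eps)  # clip to avoid log(0)
        total += t * math.log(q) + (1 - t) * math.log(1 - q)
    return -total / p

loss = binary_cross_entropy([1, 0, 1, 0], [0.9, 0.1, 0.8, 0.2])
```

In the model, this loss is evaluated three times per batch (T1 identification, T2 identification, change detection) and the results are summed with the contrastive term via the weights ω_1 and ω_2.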
The multi-task convolutional neural network model of the invention sets hyper-parameters such as the number of training iterations, the learning rate, the batch size and the optimizer, and performs repeated iterative training; in each training iteration, a gradient descent algorithm is used to reduce the model loss value while the inter-layer connection weights of the network are optimized and updated.
In this embodiment, taking the Shendong mining area as an example, the network parameter settings of the multi-task convolutional neural network model are shown in table 1, and the server performance is shown in table 2.
TABLE 1
TABLE 2
The multi-task convolutional neural network model is trained with the training set, and the model precision is checked with the verification set after each training iteration; in this experiment the model precision is evaluated with three classification evaluation indexes: Precision, Recall and F1-score. After multiple training iterations, the multi-task convolutional neural network model with the highest precision and best visual effect is selected, and the remote sensing image data of the two time phases T1 and T2 to be detected in the research area are input into the trained multi-task convolutional neural network model for open stope identification and change detection.
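The three evaluation indexes can be computed from the confusion-matrix counts; a minimal plain-Python sketch for the binary (open stope / background) case:

```python
def precision_recall_f1(y_true, y_pred):
    """Precision, Recall and F1-score from paired binary label lists."""
    tp = sum(1 for t, q in zip(y_true, y_pred) if t == 1 and q == 1)
    fp = sum(1 for t, q in zip(y_true, y_pred) if t == 0 and q == 1)
    fn = sum(1 for t, q in zip(y_true, y_pred) if t == 1 and q == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy example: 5 pixels, one false positive and one false negative.
p, r, f = precision_recall_f1([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
```

In practice these indexes would be computed over all test-set pixels for each task (T1 identification, T2 identification, change detection).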
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, and alternatives falling within the spirit and principles of the invention.
Claims (8)
1. An open stope identification and change detection method based on a multitasking convolutional neural network, characterized in that the method comprises the following steps:
S1, determining a research area, and collecting remote sensing image data of two time phases T1 and T2 of the research area; constructing a multitasking convolutional neural network model, wherein the multitasking convolutional neural network model comprises a first identification network branch, a second identification network branch and a change detection network branch;
the encoding process of the first identification network branch uses a VGG-16 network to extract five hierarchical feature maps of the T1 image data, denoted E_t1-1, E_t1-2, E_t1-3, E_t1-4, E_t1-5 respectively; the decoding process of the first identification network branch obtains feature maps D_t1-5, D_t1-4, D_t1-3, D_t1-2 from feature maps E_t1-2, E_t1-3, E_t1-4 through skip connection and feature fusion, and the identification result of the T1 image data is then obtained through an up-sampling operation;
the encoding process of the second identification network branch uses a VGG-16 network to extract five hierarchical feature maps of the T2 image data, denoted E_t2-1, E_t2-2, E_t2-3, E_t2-4, E_t2-5 respectively; the decoding process of the second identification network branch obtains feature maps D_t2-5, D_t2-4, D_t2-3, D_t2-2 from feature maps E_t2-2, E_t2-3, E_t2-4 through skip connection and feature fusion, and the identification result of the T2 image data is then obtained through an up-sampling operation;
s2, the coding process of the change detection network branch is used for processing the characteristic diagram E t1-2 、E t1-3 、E t1-4 、E t1-5 And feature map E t2-2 、E t2-3 、E t2-4 、E t2-5 Performing differential fusion to obtain a coding feature map E t-2 、E t-3 、E t-4 、E t-5 The method comprises the steps of carrying out a first treatment on the surface of the The change detection network branch decoding process encodes feature map E t-2 、E t-3 、E t-4 The feature map D is obtained by jumping connection and feature fusion t-5 、D t-4 、D t-3 、D t-2 ;
S3, the change detection network branch performs differential fusion on feature map D_t1-2 and feature map D_t2-2 to obtain feature map D_t1-t2; the change detection network branch processes feature map D_t1-t2 with a convolutional attention module to obtain a channel attention weight and a spatial attention weight;
S4, feature map D_t-2 is multiplied by the channel attention weight and the spatial attention weight respectively to obtain feature map D′_t-2 with information enhanced in the channel and spatial directions, and the change detection result of the research area is then obtained through an up-sampling operation.
2. The open stope identification and change detection method based on a multitasking convolutional neural network according to claim 1, characterized in that: in step S1, the multitasking convolutional neural network model is trained using the following method:
S11, preparing remote sensing image sample data of the two time phases T1 and T2 of the research area by: collecting remote sensing image samples of the two time phases T1 and T2 of the research area, correspondingly extracting boundary vectors, and performing vector-to-raster conversion to convert the remote sensing image samples of the two time phases into sample grid images; the remote sensing image sample of time phase T1 and the sample grid image of time phase T1 correspondingly form T1 image sample data, and the remote sensing image sample of time phase T2 and the sample grid image of time phase T2 correspondingly form T2 image sample data;
S12, clipping the T1 image sample data and the T2 image sample data into image blocks and dividing them into a training set, a verification set and a test set in the ratio 6:2:2, wherein the training set and the verification set are used for training the model and the test set is used for testing the precision and generalization capability of the model.
3. The open stope identification and change detection method based on a multitasking convolutional neural network according to claim 1, characterized in that: the decoding process of the first identification network branch is as follows: first, feature map E_t1-5 is convolved to obtain feature map D_t1-5; D_t1-5 is then up-sampled and skip-connected and feature-fused with feature map E_t1-4 to obtain feature map D_t1-4; D_t1-4 is then up-sampled and skip-connected and feature-fused with feature map E_t1-3 to obtain feature map D_t1-3; D_t1-3 is then up-sampled and skip-connected and feature-fused with feature map E_t1-2 to obtain feature map D_t1-2; and an up-sampling operation is performed on feature map D_t1-2 to obtain the identification result of the T1 image data;
the decoding process of the second identification network branch is as follows: first, feature map E_t2-5 is convolved to obtain feature map D_t2-5; D_t2-5 is then up-sampled and skip-connected and feature-fused with feature map E_t2-4 to obtain feature map D_t2-4; D_t2-4 is then up-sampled and skip-connected and feature-fused with feature map E_t2-3 to obtain feature map D_t2-3; D_t2-3 is then up-sampled and skip-connected and feature-fused with feature map E_t2-2 to obtain feature map D_t2-2; and an up-sampling operation is performed on feature map D_t2-2 to obtain the identification result of the T2 image data;
the decoding process of the change detection network branch is as follows: first, feature map E_t-5 is convolved to obtain feature map D_t-5; D_t-5 is then up-sampled and skip-connected and feature-fused with feature map E_t-4 to obtain feature map D_t-4; D_t-4 is then up-sampled and skip-connected and feature-fused with feature map E_t-3 to obtain feature map D_t-3; and D_t-3 is then up-sampled and skip-connected and feature-fused with feature map E_t-2 to obtain feature map D_t-2.
4. The open stope identification and change detection method based on a multitasking convolutional neural network according to claim 2, characterized in that: the remote sensing image data of the two time phases T1 and T2 of the research area in step S1 and the remote sensing image samples in step S11 undergo image preprocessing, wherein the image preprocessing comprises radiometric calibration, atmospheric correction, orthographic correction or/and image fusion.
5. The open stope identification and change detection method based on a multitasking convolutional neural network according to claim 1, characterized in that: the skip connection further comprises edge information enhancement module processing; the edge information enhancement module comprises channel-dimension pooling and Sobel convolution; the edge information enhancement module performs channel compression on an input feature map through channel-dimension pooling; the Sobel convolution comprises a horizontal operator Sobel_x and a vertical operator Sobel_y; the channel-compressed feature map is Sobel-convolved to obtain edge information in the horizontal and vertical directions, the two are added, and the result is then multiplied with the original input feature map to obtain a feature map with enhanced edge information.
6. The open stope identification and change detection method based on a multitasking convolutional neural network according to claim 1, characterized in that: the loss function of the multitasking convolutional neural network model is jointly composed of a contrastive loss function L_CT and a cross-entropy loss function L_CE, and is calculated as: L = ω_1·L_CT + ω_2·L_CE, where ω_1 is the weight of the contrastive loss function L_CT, ω_2 is the weight of the cross-entropy loss function L_CE, and L is the total loss value of the multitasking convolutional neural network model.
7. The open stope identification and change detection method based on a multitasking convolutional neural network according to claim 6, characterized in that: the contrastive loss function L_CT is formulated as follows: L_CT = (1/(2N)) · Σ_{n=1}^{N} [ y_n·d_n^2 + (1 − y_n)·max(margin − d_n, 0)^2 ], where d_n is the Euclidean distance between the two features forming feature pair n; if the two features in feature pair n are similar, y_n = 1, otherwise y_n = 0; margin is a set threshold and N is the total number of feature pairs.
8. The open stope identification and change detection method based on a multitasking convolutional neural network according to claim 6, characterized in that: the cross-entropy loss function L_CE is formulated as follows: L_CE = −(1/p) · Σ_{i=1}^{p} [ ŷ_i·log(y_i) + (1 − ŷ_i)·log(1 − y_i) ], where ŷ_i is the real category corresponding to pixel i, y_i is the model-predicted result for pixel i, and p is the total number of pixels.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311359531.5A CN117671437A (en) | 2023-10-19 | 2023-10-19 | Open stope identification and change detection method based on multitasking convolutional neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117671437A true CN117671437A (en) | 2024-03-08 |
Family
ID=90075968
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311359531.5A Pending CN117671437A (en) | 2023-10-19 | 2023-10-19 | Open stope identification and change detection method based on multitasking convolutional neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117671437A (en) |
Citations (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006185206A (en) * | 2004-12-28 | 2006-07-13 | Toshiba Corp | Method, device, and program for detecting object |
CN110555841A (en) * | 2019-09-10 | 2019-12-10 | 西安电子科技大学 | SAR image change detection method based on self-attention image fusion and DEC |
CN111291622A (en) * | 2020-01-16 | 2020-06-16 | 武汉汉达瑞科技有限公司 | Method and device for detecting building change in remote sensing image |
CN113505636A (en) * | 2021-05-25 | 2021-10-15 | 中国科学院空天信息创新研究院 | Mining area change detection method based on attention mechanism and full convolution twin neural network |
CN113609896A (en) * | 2021-06-22 | 2021-11-05 | 武汉大学 | Object-level remote sensing change detection method and system based on dual-correlation attention |
CN113887459A (en) * | 2021-10-12 | 2022-01-04 | 中国矿业大学(北京) | Open-pit mining area stope change area detection method based on improved Unet + |
CN113920262A (en) * | 2021-10-15 | 2022-01-11 | 中国矿业大学(北京) | Mining area FVC calculation method and system for enhancing edge sampling and improving Unet model |
CN114549972A (en) * | 2022-01-17 | 2022-05-27 | 中国矿业大学(北京) | Strip mine stope extraction method, apparatus, device, medium, and program product |
WO2022160753A1 (en) * | 2021-01-27 | 2022-08-04 | 上海商汤智能科技有限公司 | Image processing method and apparatus, and electronic device and storage medium |
CN114972989A (en) * | 2022-05-18 | 2022-08-30 | 中国矿业大学(北京) | Single remote sensing image height information estimation method based on deep learning algorithm |
US20220309772A1 (en) * | 2021-03-25 | 2022-09-29 | Satellite Application Center for Ecology and Environment, MEE | Human activity recognition fusion method and system for ecological conservation redline |
CN115170824A (en) * | 2022-07-01 | 2022-10-11 | 南京理工大学 | Change detection method for enhancing Siamese network based on space self-adaption and characteristics |
CN115457390A (en) * | 2022-09-13 | 2022-12-09 | 中国人民解放军国防科技大学 | Remote sensing image change detection method and device, computer equipment and storage medium |
WO2023007198A1 (en) * | 2021-07-27 | 2023-02-02 | Számítástechnikai És Automatizálási Kutatóintézet | Training method for training a change detection system, training set generating method therefor, and change detection system |
CN115880553A (en) * | 2022-10-11 | 2023-03-31 | 浙江工业大学 | Multi-scale change target retrieval method based on space-time modeling |
CN115908369A (en) * | 2022-12-13 | 2023-04-04 | 南湖实验室 | Change detection method based on semantic alignment and feature enhancement |
CN115937697A (en) * | 2022-07-14 | 2023-04-07 | 中国人民解放军战略支援部队信息工程大学 | Remote sensing image change detection method |
CN116030357A (en) * | 2022-12-12 | 2023-04-28 | 中北大学 | High-resolution remote sensing image change detection depth network and detection method |
CN116229283A (en) * | 2023-03-10 | 2023-06-06 | 江西师范大学 | Remote sensing image change detection system and method based on depth separable convolution module |
US20230186607A1 (en) * | 2022-03-30 | 2023-06-15 | Beijing Baidu Netcom Science Technology Co., Ltd. | Multi-task identification method, training method, electronic device, and storage medium |
CN116433940A (en) * | 2023-04-21 | 2023-07-14 | 北京数慧时空信息技术有限公司 | Remote sensing image change detection method based on twin mirror network |
CN116486255A (en) * | 2023-03-16 | 2023-07-25 | 福州大学 | High-resolution remote sensing image semantic change detection method based on self-attention feature fusion |
US20230260279A1 (en) * | 2020-10-07 | 2023-08-17 | Wuhan University | Hyperspectral remote sensing image classification method based on self-attention context network |
CN116778238A (en) * | 2023-06-14 | 2023-09-19 | 陕西科技大学 | Light-weight structure-based sensing transducer network and VHR remote sensing image change detection method |
Non-Patent Citations (3)
Title |
---|
MENGMENG YIN 等: "A CNN-Transformer Network Combining CBAM for Change Detection in High-Resolution Remote Sensing Images", REMOTE SENSING, 4 May 2023 (2023-05-04), pages 1 - 26 * |
XIANG Yang; ZHAO Yindi; DONG Jihong: "Change detection of mining areas in remote sensing images based on an improved UNet Siamese network", Journal of China Coal Society, no. 12, 15 December 2019 (2019-12-15), pages 155 - 162 *
ZHANG Chengye et al.: "Identification of the spatial extent of tailings ponds based on the U-Net network and GF-6 imagery", Remote Sensing for Natural Resources, vol. 33, no. 04, 31 December 2021 (2021-12-31), pages 252 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||