CN117612020A - SGAN adversarial neural network-based method for detecting remote sensing image element changes - Google Patents

SGAN adversarial neural network-based method for detecting remote sensing image element changes

Info

Publication number
CN117612020A
CN117612020A (application CN202410096146.4A)
Authority
CN
China
Prior art keywords
change
remote sensing
classification
sgan
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410096146.4A
Other languages
Chinese (zh)
Other versions
CN117612020B (en)
Inventor
葛平
校朝勃
王宁
左巡勋
问利萍
李淑艳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Yusu Defense Group Co ltd
Original Assignee
Xi'an Yusu Defense Group Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Yusu Defense Group Co ltd
Priority to CN202410096146.4A
Publication of CN117612020A
Application granted
Publication of CN117612020B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/10 - Terrestrial scenes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/0475 - Generative networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G06N 3/094 - Adversarial learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/20 - Image preprocessing
    • G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A - TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00 - Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10 - Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an SGAN adversarial neural network-based method for detecting changes in remote sensing image elements, comprising the following steps: S1, image preprocessing: preprocess the selected remote sensing images; S2, SGAN network model construction: construct a change detection network model based on the SGAN adversarial neural network; S3, model training: train the discriminator, the classifier and the generator in turn with different training samples, and output the trained change detection network model; S4, model detection: detect the preprocessed remote sensing images with the change detection network model, and output ground feature classification and classification feature maps for at least two different periods; S5, change decision: compare the classification feature maps by pixel label, identify the changed regions, and output a change feature map; S6, vectorization: generate and output changed element vectors from the pixel values of the change feature map, together with attribute change information. The method reduces post-classification work and improves the efficiency of GIS data analysis and utilization.

Description

SGAN adversarial neural network-based method for detecting remote sensing image element changes
Technical Field
The invention relates to the fields of computer vision and remote sensing image processing, in particular to a remote sensing image element change detection method, and more particularly to an SGAN adversarial neural network-based method for detecting remote sensing image element changes.
Background
With the improvement of productivity, human activities and natural processes are constantly changing the earth's surface. To better record environmental transitions, the use of space-based observation systems to describe changes in ground feature elements has attracted increasing attention from the relevant departments. Remote sensing images offer large coverage, high resolution, multiple data sources and short revisit periods, so change monitoring has become an important field of remote sensing application.
Conventional change detection methods are classified by level of abstraction into pixel-level and feature-level approaches. Pixel-level methods such as gray-level differencing have low detection accuracy and poor resistance to interference; feature-level methods compute edge features based on morphology or use texture and moment features, but they struggle to interpret change information, require manually determined thresholds to distinguish changed regions, generally offer a low degree of automation, and have difficulty interpreting changes in ground object categories.
With the development of artificial intelligence, deep learning has been applied to change detection. Most existing detection methods rely on segmentation and extraction with fully convolutional neural networks, represented by the U-Net, VGG and DeepLab network models. They require a large number of manually labeled samples for training, and manual visual labeling is easily affected by the operator's cognitive bias, so the workload is heavy and the process is time-consuming and labor-intensive.
The remote sensing image processing method and system disclosed in Chinese patent document CN111160128A are likewise based on a network model, namely the generative adversarial network GAN, which mainly comprises a generator and a discriminator; the discriminator is a binary classifier that only distinguishes real from fake. Moreover, the GAN discriminator does not include a multi-class classifier, so it cannot classify the feature elements into multiple categories and cannot realize segmentation and extraction of feature regions.
Disclosure of Invention
The technical problem to be solved by the invention is to provide an SGAN adversarial neural network-based method for detecting remote sensing image element changes, which greatly relieves the demand for sample labels while realizing segmentation and extraction of ground object categories and distinguishing the direction of attribute change, and improves the robustness of model training.
In order to solve the above technical problem, the invention adopts the following technical scheme: the SGAN adversarial neural network-based method for detecting remote sensing image element changes specifically comprises the following steps:
S1, image preprocessing: selecting at least two overlapping optical remote sensing images of the same region from different periods, and preprocessing the selected remote sensing images;
S2, SGAN network model construction: constructing a change detection network model based on the SGAN adversarial neural network;
S3, model training: training the discriminator, the classifier and the generator in turn with different training samples, and outputting the trained change detection network model;
S4, model detection: detecting the remote sensing images preprocessed in step S1 with the trained change detection network model, and outputting ground object classification and classification feature maps for at least two different periods;
S5, change decision: comparing the classification feature maps by pixel label, determining the changed regions, and outputting a change feature map;
S6, vectorization: generating and outputting changed element vectors from the pixel values of the change feature map, together with attribute change information.
By adopting this technical scheme, based on the SGAN (semi-supervised generative adversarial network), a small number of labeled samples together with SVM classification followed by segmentation greatly relieve the demand for sample labels, while ground object classification, segmentation and extraction and the distinction of the attribute change direction are realized; the workload of manual labeling is reduced and the robustness of model training is improved.
Preferably, in step S1, two overlapping optical remote sensing images of the same region from different periods are selected, and the two remote sensing images are brought to the same geographic coordinates through registration, radiation correction, filtering, color balancing and coordinate conversion. Because optical remote sensing images from different periods differ markedly in color distribution, texture characteristics and context information, registration, correction, filtering, color balancing, coordinate conversion and other preprocessing give the images the same geographic coordinates and reduce radiometric and color differences, eliminating the influence of irrelevant factors on the detection result.
Preferably, in step S2, a change detection network model is constructed with labeled real samples, unlabeled real samples and pseudo samples; the change detection network model comprises a generator G and a discriminator module, the generator G comprising an input layer, a hidden layer and an output layer. The discriminator module comprises a discriminator and a classifier; the discriminator is based on a VGG network and consists of 13 convolution layers and 3 fully connected layers with 3×3 convolution kernels; the classifier is based on an SVM and performs feature segmentation and extraction at the same time as classification, that is, the feature points of each sample are classified cyclically, and a voting mechanism then determines which class is output and realizes the segmentation.
Preferably, the generator G has a convolution operation Conv, a mapping operation ReShape, a normalization operation BatchNorm and upsampling; a 3×3 convolution kernel is used to capture the 8-neighborhood information of each pixel, and a ReLU activation function is used, the activation function f(x) being:
f(x) = max(0, x), where x is the input vector of the unlabeled sample data.
Preferably, the input of the discriminator in step S2 contains pseudo-sample data X* generated from random noise Z, labeled samples (x, y) and unlabeled sample data x, and the discriminator outputs N+1 classification results.
Preferably, the specific steps of classifying by the classifier are as follows:
S21, feature extraction: composed of convolution layers and sampling layers, applying in sequence the convolution operation Conv, a Dropout operation, the normalization operation Batch Normalization and the activation operation Leaky ReLU; the extraction process uses 4 activation operations, 4 convolution operations and 2 Batch Normalization operations, and the Leaky ReLU activation is:
y = max(0, x) + leak * min(0, x); where leak is a small constant of about 0.01, so that y retains some negative-axis values and the negative-axis information is not completely lost;
S22, similarity calculation of the classifier: based on a multi-class support vector machine (SVM), a nonlinear optimal classifier is adopted; a Gaussian kernel function maps the low-dimensional features to a higher dimension and the similarity is calculated, the kernel function being:
K(xi, xj) = exp(-||xi - xj||^2 / (2σ^2)); where σ is the free parameter controlling the damping of the function, xi and xj are feature vectors, and ||xi - xj|| is their Euclidean distance;
S23, classification and segmentation of ground features: classification of ground object categories uses a voting mechanism, with a 1-v-1 SVM created between every pair of classes, so that with k categories there are k(k-1)/2 SVMs; the classification result is the class receiving the maximum number of votes across the SVM results; the result comprises a ground feature classification and a segmentation map, every pixel in the segmentation image carries a label, each ground feature class is assigned a pixel value, the same pixel value represents one ground feature class, and the pixel values range from 0 to 255.
Preferably, the specific steps of the step S3 are as follows:
S31, discriminator training: supervised training with real samples (x, y); tens or hundreds of labeled real samples (x, y) are extracted, D((x, y)) is computed for the given batch and the multi-class loss is back-propagated to update the discriminator parameters θD so as to minimize the loss; the gradient update is:
∇θD (1/m) Σ i=1..m [log D((xi, yi)) + log(1 - D(Xi*))];
where m is the number of real samples (x, y); the real samples (x, y) represent the feature values and labels of the input samples; D((x, y)) represents the probability that sample (x, y) is real; θD are the discriminator parameters and reflect the model's ability to distinguish real from fake; ∇θD denotes the gradient with respect to θD; log D((x, y)) is the log probability of the value D((x, y)) for a single real sample (x, y); log(1 - D(X*)) is the log probability that the discriminator assigns to a single pseudo sample X* being fake, which improves the performance of the discriminator; D(X*) is the probability that pseudo sample X* is a real sample;
S32, classifier training: unsupervised training with unlabeled real samples x; the penalty coefficient C and the kernel function parameter are tuned through training until an optimal solution is reached, the penalty coefficient being C = 1/λ;
where λ is a regularization parameter used to control the complexity of the model; the value range of the penalty coefficient C is [0.001, 100];
S33, generator training: unsupervised training with pseudo samples X* generated from added random noise Z; tens or hundreds of random noise vectors Z are drawn to generate tens or hundreds of pseudo samples, denoted G(z) = X*; D(X*) is computed for the given batch and the binary classification loss is back-propagated to update the generator parameters θG so as to maximize the objective; the gradient update is:
∇θG (1/m) Σ i=1..m log D(Xi*);
where m is the number of pseudo samples G(z); D(X*) is the probability that a pseudo sample is a real sample; θG are the generator parameters; ∇θG denotes the gradient with respect to θG; log D(X*) is the log probability of the value D(X*) the discriminator assigns to a single pseudo sample X*.
Preferably, the model training in step S3 further includes an overall classification accuracy (OA) evaluation: a threshold is set, and if the OA is smaller than the threshold the model is retrained; the formula is OA = Nc / N, where Nc is the number of correctly classified samples and N is the total number of samples.
Preferably, in step S5 a pixel difference method is adopted: the OpenCV function cv.subtract() of the open-source toolkit takes the post-change classification feature map and the pre-change segmentation feature map as inputs and produces a pixel difference map; the pixel values of the difference map distinguish the change categories, and a change feature map is output.
Preferably, in step S6, according to the pixel values of the change feature map, raster-to-vector conversion generates and outputs a shp vector containing the change attribute and change direction labels, and attribute change information is given.
Compared with the prior art, the invention has the following beneficial effects:
(1) Based on the SGAN (semi-supervised generative adversarial network) combined with an SVM, the direction of attribute change in the changed regions can be effectively identified after classification, which reduces post-classification attribute assignment work and greatly improves the efficiency of GIS data analysis and utilization;
(2) The method is suitable for detecting changes of multiple ground object types or of a single ground object type; in particular, it reduces the workload of manual labeling, allows the sample data to be expanded from the classification results, and improves the robustness of model training.
Drawings
FIG. 1 is a flow chart of the SGAN adversarial neural network-based method for detecting remote sensing image element changes according to the invention;
FIG. 2 is a schematic diagram of the network model structure used in the method;
FIG. 3 is a schematic diagram of the sample-generation model G used in the method;
FIG. 4 is a schematic diagram of the VGG network detection process used in the method;
FIG. 5 is a schematic diagram of the classifier classification flow used in the method;
FIG. 6 is a schematic diagram of a detection result of the method.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings of the embodiments.
Example: as shown in fig. 1, the SGAN adversarial neural network-based method for detecting remote sensing image element changes specifically includes the following steps:
S1, image preprocessing: selecting at least two overlapping optical remote sensing images of the same region from different periods, and preprocessing the selected remote sensing images. In this embodiment the method is implemented in Python, the underlying stack relying on the PyTorch deep learning framework and the Nvidia CUDA library, and optical remote sensing data (single-band or multi-band) are selected as training data. The training samples are manually annotated vector data of the ground object categories in the detection area, which are rasterized and binarized, where 1 represents an area of the detected element category and 0 represents an area that does not belong to the element. The images are correspondingly cut into 224x224 tiles and concatenated along the channel direction so that they match the network model input, as in the sketch given below;
In step S1, two overlapping optical remote sensing images of the same region from different periods are selected, and the two remote sensing images are brought to the same geographic coordinates through registration, radiation correction, filtering, color balancing and coordinate conversion. Because optical remote sensing images from different periods differ markedly in color distribution, texture characteristics and context information, registration, correction, filtering, color balancing, coordinate conversion and other preprocessing give the images the same geographic coordinates and reduce radiometric and color differences, eliminating the influence of irrelevant factors on the detection result;
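As a minimal illustration of the tiling and channel-wise stacking described above, the sketch below cuts a co-registered image pair into 224x224 patches with OpenCV and NumPy; the file names and library choice are assumptions for illustration, not part of the patented method.

```python
import cv2
import numpy as np

def tile_image_pair(img_t1_path, img_t2_path, tile=224):
    """Cut two co-registered images into aligned 224x224 tiles and
    concatenate each pair along the channel axis (illustrative sketch)."""
    img1 = cv2.imread(img_t1_path, cv2.IMREAD_COLOR)   # period-1 image, H x W x 3
    img2 = cv2.imread(img_t2_path, cv2.IMREAD_COLOR)   # period-2 image, same geometry
    assert img1.shape == img2.shape, "images must be registered to the same grid"

    h, w = img1.shape[:2]
    tiles = []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            p1 = img1[y:y + tile, x:x + tile]
            p2 = img2[y:y + tile, x:x + tile]
            # stack the two periods channel-wise -> 224 x 224 x 6
            tiles.append(np.concatenate([p1, p2], axis=2))
    return np.stack(tiles) if tiles else np.empty((0, tile, tile, 6), np.uint8)

# usage (hypothetical file names):
# batch = tile_image_pair("scene_2022.tif", "scene_2023.tif")
```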
S2, SGAN network model construction: constructing a change detection network model based on the SGAN adversarial neural network. In step S2 the change detection network model is constructed with labeled real samples, unlabeled real samples and pseudo samples, as shown in fig. 2; the change detection network model comprises a generator G and a discriminator module, the generator G comprising an input layer, a hidden layer and an output layer. The structure of the generator G, shown in fig. 3, mainly follows the GAN network: it takes a random vector Z as input and generates pseudo samples that are as close to the training data set as possible;
The generator G has a convolution operation Conv, a mapping operation ReShape, a normalization operation BatchNorm and upsampling; a 3×3 convolution kernel captures the 8-neighborhood information of each pixel, and a ReLU activation function is used, the activation function f(x) being:
f(x) = max(0, x), where x is the input vector of the unlabeled sample data; the activation function f(x) overcomes the vanishing-gradient problem, and applied after the Conv2d convolution operation it introduces a nonlinear relation and speeds up the convergence of the generator network;
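A minimal PyTorch sketch of such a generator is given below. The 100-dimensional noise vector, the intermediate channel widths and the Tanh output layer are illustrative assumptions, since the patent only fixes the Conv/ReShape/BatchNorm/upsampling operations, the 3×3 kernels and the ReLU activation.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Sketch of generator G: noise Z -> ReShape -> (Upsample + Conv 3x3 + BatchNorm + ReLU) -> image."""
    def __init__(self, z_dim=100, out_channels=3):
        super().__init__()
        self.fc = nn.Linear(z_dim, 256 * 7 * 7)          # input layer -> hidden features
        self.net = nn.Sequential(
            nn.Upsample(scale_factor=2), nn.Conv2d(256, 128, 3, padding=1),
            nn.BatchNorm2d(128), nn.ReLU(inplace=True),   # 14x14
            nn.Upsample(scale_factor=2), nn.Conv2d(128, 64, 3, padding=1),
            nn.BatchNorm2d(64), nn.ReLU(inplace=True),    # 28x28
            nn.Upsample(scale_factor=2), nn.Conv2d(64, 32, 3, padding=1),
            nn.BatchNorm2d(32), nn.ReLU(inplace=True),    # 56x56
            nn.Upsample(scale_factor=2), nn.Conv2d(32, 16, 3, padding=1),
            nn.BatchNorm2d(16), nn.ReLU(inplace=True),    # 112x112
            nn.Upsample(scale_factor=2), nn.Conv2d(16, out_channels, 3, padding=1),
            nn.Tanh(),                                    # 224x224 pseudo sample X* (Tanh is an assumed output choice)
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 256, 7, 7)                # ReShape operation
        return self.net(x)

# usage: fake = Generator()(torch.randn(4, 100))   # -> (4, 3, 224, 224)
```

Five upsampling stages take the reshaped 7x7 grid to a 224x224 pseudo sample, matching the tile size used for the real images.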
In step S2 the input of the discriminator contains pseudo-sample data X* generated from random noise Z, labeled samples (x, y) and unlabeled sample data x, and the discriminator outputs N+1 classification results;
The discriminator module comprises a discriminator and a classifier; the discriminator is based on a VGG network and consists of 13 convolution layers and 3 fully connected layers with 3×3 convolution kernels; the classifier is based on an SVM and performs feature segmentation and extraction at the same time as classification, that is, the feature points of each sample are classified cyclically, and a voting mechanism then determines which class is output and realizes the segmentation; the purpose of the discriminator module is to discriminate, classify and output the feature element classification feature map;
The discriminator itself is mainly used to distinguish real from fake; its input parameter is an image, and its output represents the probability that the image is real: the closer the output is to 1, the more likely the input is a real sample, and the closer it is to 0, the more likely it is a generated sample. The discriminator model is based on the VGG network, as shown in fig. 4; it consists of 13 convolution layers and 3 fully connected layers with 3×3 convolution kernels, as follows:
The input image size is 224x224x3; two convolutions with 64 convolution kernels of size 3x3 (stride 1, padding=same) followed by ReLU activation give an output of size 224x224x64;
Max pooling with a 2x2 filter and stride 2 halves the image size; the pooled size becomes 112x112x64;
After two convolutions with 128 kernels of 3x3 and ReLU activation, the size becomes 112x112x128;
Max pooling: the size becomes 56x56x128;
After three convolutions with 256 kernels of 3x3 and ReLU activation, the size becomes 56x56x256;
Max pooling: the size becomes 28x28x256;
After three convolutions with 512 kernels of 3x3 and ReLU activation, the size becomes 28x28x512;
Max pooling: the size becomes 14x14x512;
After three convolutions with 512 kernels of 3x3 and ReLU activation, the size remains 14x14x512;
Max pooling: the size becomes 7x7x512;
Flatten() then turns the data into a one-dimensional vector of 512*7*7 = 25088;
Finally two fully connected layers of 1x1x4096 and one of 1x1x1000 (three layers in total) follow, with ReLU activation.
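The layer stack above is the standard VGG-16 configuration; a condensed PyTorch sketch is shown below, with the final 1000-way layer replaced by an N+1-way output (N ground object classes plus one "fake" class), an assumption consistent with the N+1 classification results described for the discriminator.

```python
import torch.nn as nn

def vgg16_discriminator(n_classes, in_channels=3):
    """VGG-16 style discriminator: 13 conv layers + 3 fully connected layers,
    3x3 kernels throughout, output dimension n_classes + 1 (real classes + fake)."""
    cfg = [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M',
           512, 512, 512, 'M', 512, 512, 512, 'M']
    layers, c_in = [], in_channels
    for v in cfg:
        if v == 'M':
            layers.append(nn.MaxPool2d(kernel_size=2, stride=2))   # halves spatial size
        else:
            layers += [nn.Conv2d(c_in, v, kernel_size=3, padding=1), nn.ReLU(inplace=True)]
            c_in = v
    features = nn.Sequential(*layers)                              # 224x224 -> 7x7x512
    classifier = nn.Sequential(
        nn.Flatten(),                                              # 512*7*7 = 25088
        nn.Linear(512 * 7 * 7, 4096), nn.ReLU(inplace=True),
        nn.Linear(4096, 4096), nn.ReLU(inplace=True),
        nn.Linear(4096, n_classes + 1),                            # N+1 outputs
    )
    return nn.Sequential(features, classifier)
```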
The classifier, which takes the place of a softmax layer, is based on the SVM and performs feature segmentation and extraction at the same time as classification, as shown in fig. 5. Because of the multi-class setting there are several SVMs; the feature points of each sample are classified cyclically, and a voting mechanism determines which class is output and realizes the segmentation. The specific steps of classification by the classifier are as follows:
S21, feature extraction: composed of convolution layers and sampling layers, applying in sequence the convolution operation Conv, a Dropout operation, the normalization operation Batch Normalization and the activation operation Leaky ReLU; the extraction process uses 4 activation operations, 4 convolution operations and 2 Batch Normalization operations, and the Leaky ReLU activation is:
y = max(0, x) + leak * min(0, x);
where leak is a very small constant, taken as about 0.01 in this embodiment, so that y retains some negative-axis values and the negative-axis information is not completely lost;
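One possible PyTorch layout of this S21 feature-extraction head, with 4 convolutions, 2 Batch Normalization operations, a Dropout operation and 4 LeakyReLU activations (leak = 0.01), is sketched below; the channel widths, strides and dropout rate are assumptions, since only the operation counts are given.

```python
import torch.nn as nn

# Sketch of the S21 feature extractor: 4 Conv, 1 Dropout, 2 BatchNorm, 4 LeakyReLU(leak=0.01).
feature_extractor = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.LeakyReLU(0.01),
    nn.Dropout2d(p=0.3),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.BatchNorm2d(64), nn.LeakyReLU(0.01),
    nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.LeakyReLU(0.01),    # sampling layer
    nn.Conv2d(128, 256, kernel_size=3, stride=2, padding=1), nn.BatchNorm2d(256), nn.LeakyReLU(0.01),
)
```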
S22, similarity calculation of the classifier: based on a multi-class support vector machine (SVM), a nonlinear optimal classifier is adopted; a Gaussian kernel function maps the low-dimensional features to a higher dimension and the similarity is calculated, the kernel function being:
K(xi, xj) = exp(-||xi - xj||^2 / (2σ^2)); where σ is the free parameter controlling the damping of the function, xi and xj are feature vectors, and ||xi - xj|| is their Euclidean distance;
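Under the assumption that this is the standard Gaussian (RBF) kernel, a one-line NumPy version reads:

```python
import numpy as np

def gaussian_kernel(xi, xj, sigma=1.0):
    """Gaussian (RBF) kernel similarity between two feature vectors."""
    return np.exp(-np.sum((np.asarray(xi) - np.asarray(xj)) ** 2) / (2.0 * sigma ** 2))
```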
S23, classification and segmentation of ground features: classification of ground object categories uses a voting mechanism, with a 1-v-1 SVM created between every pair of classes, so that with k categories there are k(k-1)/2 SVMs; the classification result is the class receiving the maximum number of votes across the SVM results; the results comprise a ground feature classification and a segmentation map, every pixel in the segmentation image carries a label, each ground feature class is assigned a pixel value, the same pixel value represents one ground class, and the pixel values range from 0 to 255;
For example, with three ground categories A, B and C, the three pairs (A, B), (B, C) and (A, C) are taken as inputs and three classifiers are trained; each sample is then evaluated by the three classifiers and voting proceeds in the following order:
A = B = C = 0;
(A, B) classifier: if the vote is A, then A = A + 1; otherwise B = B + 1;
(B, C) classifier: if the vote is B, then B = B + 1; otherwise C = C + 1;
(A, C) classifier: if the vote is A, then A = A + 1; otherwise C = C + 1;
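scikit-learn's SVC already implements exactly this one-vs-one voting with an RBF kernel, so the per-pixel classification and segmentation step could be sketched as follows; the random feature matrix, the three hypothetical classes and the scikit-learn API are illustrative assumptions rather than the patent's own implementation.

```python
import numpy as np
from sklearn.svm import SVC

# X: per-pixel feature vectors (n_pixels, n_features); y: ground object labels for the
# labeled subset. SVC with an RBF kernel trains k(k-1)/2 pairwise classifiers and votes.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(300, 16))
y_train = rng.integers(0, 3, size=300)          # hypothetical classes A=0, B=1, C=2

clf = SVC(kernel="rbf", C=1.0, gamma="scale", decision_function_shape="ovo")
clf.fit(X_train, y_train)

X_tile = rng.normal(size=(224 * 224, 16))       # features of one 224x224 tile
labels = clf.predict(X_tile).reshape(224, 224)  # per-pixel class labels

# map each class to a fixed pixel value in [0, 255] to form the segmentation map
palette = np.linspace(0, 255, num=3, dtype=np.uint8)
segmentation = palette[labels]
```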
S3, model training: training the discriminator, the classifier and the generator in turn with different training samples, and outputting the trained change detection network model; training uses a small proportion (5%-10%) of labeled samples together with a large number of unlabeled real samples;
the specific steps of the step S3 are as follows:
S31, discriminator training: supervised training with real samples (x, y); tens or hundreds of labeled real samples (x, y) are extracted, D((x, y)) is computed for the given batch and the multi-class loss is back-propagated to update the discriminator parameters θD so as to minimize the loss; the gradient update is:
∇θD (1/m) Σ i=1..m [log D((xi, yi)) + log(1 - D(Xi*))];
where m is the number of real samples (x, y); the real samples (x, y) represent the feature values and labels of the input samples; D((x, y)) represents the probability that sample (x, y) is real (the closer to 1, the more likely the sample is real); θD are the discriminator parameters and reflect the model's ability to distinguish real from fake, where D denotes the discriminator as opposed to the generator G; ∇θD denotes the gradient with respect to θD; log D((x, y)) is the log probability of the value D((x, y)) for a single real sample (x, y), which makes the gradient computation more stable and accurate; log(1 - D(X*)) is the log probability that the discriminator assigns to a single pseudo sample X* being fake, which improves the performance of the discriminator; D(X*) is the probability that pseudo sample X* is a real sample;
S32, classifier training: unsupervised training with unlabeled real samples x; the penalty coefficient C and the kernel function parameter are tuned through training until an optimal solution is reached, the penalty coefficient being C = 1/λ;
where λ is a regularization parameter used to control the complexity of the model; the value range of the penalty coefficient C is [0.001, 100];
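One common way to tune the penalty coefficient C and the kernel parameter is a cross-validated grid search; the sketch below uses scikit-learn's GridSearchCV over the stated C range [0.001, 100] and is an illustrative assumption, since the patent does not prescribe the search procedure.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))                 # pixel features (hypothetical)
y = rng.integers(0, 3, size=200)               # provisional labels, e.g. taken from the discriminator

param_grid = {
    "C": [0.001, 0.01, 0.1, 1, 10, 100],       # penalty coefficient range [0.001, 100]
    "gamma": [1e-3, 1e-2, 1e-1, 1],            # RBF kernel parameter (gamma = 1 / (2*sigma^2))
}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=3)
search.fit(X, y)
print("best C and gamma:", search.best_params_)
```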
S33, generator training: unsupervised training with pseudo samples X* generated from added random noise Z; tens or hundreds of random noise vectors Z are drawn to generate tens or hundreds of pseudo samples, denoted G(z) = X*; D(X*) is computed for the given batch and the binary classification loss is back-propagated to update the generator parameters θG so as to maximize the objective; the gradient update is:
∇θG (1/m) Σ i=1..m log D(Xi*);
where m is the number of pseudo samples G(z); D(X*) is the probability that a pseudo sample is a real sample; θG are the parameters of the generator, whose goal is to make it harder for the discriminator to distinguish pseudo samples from real samples, where G denotes the generator as opposed to the discriminator D; ∇θG denotes the gradient with respect to θG; log D(X*) is the log probability of the value D(X*) the discriminator assigns to a single pseudo sample X*, which drives the generator toward samples the discriminator accepts;
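Putting S31 and S33 together, a bare-bones PyTorch loop implementing these alternating updates might look like the sketch below; it reuses the Generator and vgg16_discriminator sketches given earlier, uses random stand-in batches instead of a real data loader, and treats the extra output unit as the "fake" class, so it illustrates the update rules rather than the patent's exact implementation.

```python
import torch
import torch.nn as nn

# Assumes the Generator and vgg16_discriminator sketches defined earlier in this description.
G = Generator(z_dim=100)
D = vgg16_discriminator(n_classes=5)            # 5 ground object classes + 1 fake class
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
ce = nn.CrossEntropyLoss()

real_x = torch.randn(4, 3, 224, 224)            # stand-in labeled batch (x, y)
real_y = torch.randint(0, 5, (4,))
fake_class = 5                                  # index of the extra "fake" output

# S31: discriminator update (minimize multi-class loss on real and fake samples)
z = torch.randn(4, 100)
fake_x = G(z).detach()
loss_d = ce(D(real_x), real_y) + ce(D(fake_x), torch.full((4,), fake_class))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# S33: generator update (push D to assign the pseudo samples to a real class, i.e. not "fake")
fake_logits = D(G(torch.randn(4, 100)))
p_real = 1.0 - torch.softmax(fake_logits, dim=1)[:, fake_class]   # D(X*): probability of being real
loss_g = -torch.log(p_real + 1e-8).mean()                         # maximize (1/m) sum log D(X*)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```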
The model training in step S3 further includes an overall classification accuracy (OA) evaluation: a threshold is set (0.8 in this embodiment), and if the OA is smaller than the threshold the model is retrained; the formula is:
OA = Nc / N, where Nc is the number of correctly classified samples and N is the total number of samples;
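As a trivial check, the OA value and the retraining gate of this embodiment (threshold 0.8) can be computed as follows:

```python
import numpy as np

def overall_accuracy(pred, truth):
    pred, truth = np.asarray(pred), np.asarray(truth)
    return float((pred == truth).sum()) / truth.size   # OA = correct / total

retrain_needed = overall_accuracy([1, 0, 2, 2], [1, 0, 1, 2]) < 0.8  # True -> retrain
```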
S4, model detection: detecting the remote sensing images preprocessed in step S1 with the trained change detection network model, and outputting ground object classification and classification feature maps for at least two different periods;
S5, change decision: comparing the classification feature maps by pixel label, determining the changed regions and outputting a change feature map. In step S5 a pixel difference method is adopted: the OpenCV function cv.subtract() of the open-source toolkit takes the post-change classification feature map and the pre-change segmentation feature map as inputs and produces a pixel difference map; the pixel values of the difference map distinguish the change categories, and a change feature map is output;
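A minimal OpenCV version of this change decision is sketched below; the file names are placeholders, and reading the change direction as (old class value, new class value) pairs with NumPy is one possible interpretation of the difference map.

```python
import cv2
import numpy as np

seg_t1 = cv2.imread("classification_t1.png", cv2.IMREAD_GRAYSCALE)  # pre-change segmentation map
seg_t2 = cv2.imread("classification_t2.png", cv2.IMREAD_GRAYSCALE)  # post-change classification map

diff = cv2.subtract(seg_t2, seg_t1)     # pixel difference map (saturating uint8 subtraction)
changed = seg_t1 != seg_t2              # change mask; robust to the saturation of cv2.subtract

# change feature map: keep the new class value where a change occurred, 0 elsewhere
change_map = np.where(changed, seg_t2, 0).astype(np.uint8)

# attribute change direction per changed pixel: (old class value, new class value) pairs
directions = np.unique(np.stack([seg_t1[changed], seg_t2[changed]], axis=1), axis=0)
```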
S6, vectorization: generating and outputting changed element vectors from the pixel values of the change feature map, together with attribute change information. In step S6, according to the pixel values of the change feature map, raster-to-vector conversion generates and outputs a shp vector containing the change attribute and change direction labels, and attribute change information is given. Fig. 6 shows a schematic prediction result of this embodiment; specifically, it shows the detected changes of land, residential land, roads and neighborhoods obtained from the reference image and the comparison image.
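One way to carry out this raster-to-vector step is with rasterio and geopandas, polygonizing the change feature map and writing a shapefile whose attribute table carries the change value; the libraries, field name and file paths are assumptions for illustration.

```python
import numpy as np
import rasterio
from rasterio.features import shapes
from shapely.geometry import shape
import geopandas as gpd

with rasterio.open("change_feature_map.tif") as src:      # georeferenced change feature map
    change = src.read(1)
    transform, crs = src.transform, src.crs

records = []
for geom, value in shapes(change.astype(np.int32), mask=change != 0, transform=transform):
    records.append({"geometry": shape(geom), "change_val": int(value)})  # change attribute label

gdf = gpd.GeoDataFrame(records, crs=crs)
gdf.to_file("changed_elements.shp")                        # shp vector of changed elements
```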
The foregoing describes preferred embodiments of the invention and is not intended to limit it; any modification, equivalent replacement, improvement or the like made within the spirit and principles of the invention is encompassed by it.

Claims (10)

1. An SGAN adversarial neural network-based method for detecting remote sensing image element changes, characterized by comprising the following steps:
S1, image preprocessing: selecting at least two overlapping optical remote sensing images of the same region from different periods, and preprocessing the selected remote sensing images;
S2, SGAN network model construction: constructing a change detection network model based on the SGAN adversarial neural network;
S3, model training: training the discriminator, the classifier and the generator in turn with different training samples, and outputting the trained change detection network model;
S4, model detection: detecting the remote sensing images preprocessed in step S1 with the trained change detection network model, and outputting ground object classification and classification feature maps for at least two different periods;
S5, change decision: comparing the classification feature maps by pixel label, determining the changed regions, and outputting a change feature map;
S6, vectorization: generating and outputting changed element vectors from the pixel values of the change feature map, together with attribute change information.
2. The SGAN adversarial neural network-based method for detecting remote sensing image element changes according to claim 1, wherein in step S1 two overlapping optical remote sensing images of the same region from different periods are selected, and the two remote sensing images are brought to the same geographic coordinates through registration, radiation correction, filtering, color balancing and coordinate conversion.
3. The SGAN adversarial neural network-based method for detecting remote sensing image element changes according to claim 2, wherein in step S2 a change detection network model is constructed with labeled real samples, unlabeled real samples and pseudo samples; the change detection network model comprises a generator G and a discriminator module, the generator G comprising an input layer, a hidden layer and an output layer; the discriminator module comprises a discriminator and a classifier, the discriminator being based on a VGG network and consisting of 13 convolution layers and 3 fully connected layers with 3×3 convolution kernels; the classifier is based on an SVM and performs feature segmentation and extraction at the same time as classification, that is, the feature points of each sample are classified cyclically, and a voting mechanism then determines which class is output and realizes the segmentation.
4. The SGAN adversarial neural network-based method for detecting remote sensing image element changes according to claim 3, wherein the generator G has a convolution operation Conv, a mapping operation ReShape, a normalization operation BatchNorm and upsampling; a 3×3 convolution kernel is used to capture the 8-neighborhood information of each pixel, and the activation function f(x) is: f(x) = max(0, x), where x is the input vector of the unlabeled sample data.
5. The SGAN adversarial neural network-based method for detecting remote sensing image element changes according to claim 3, wherein the input of the discriminator in step S2 contains pseudo-sample data X* generated from random noise Z, labeled samples (x, y) and unlabeled sample data x, and the discriminator outputs N+1 classification results.
6. The SGAN adversarial neural network-based method for detecting remote sensing image element changes according to claim 3, wherein the specific steps of classification by the classifier are as follows:
S21, feature extraction: composed of convolution layers and sampling layers, applying in sequence the convolution operation Conv, a Dropout operation, the normalization operation Batch Normalization and the activation operation Leaky ReLU; the extraction process uses 4 activation operations, 4 convolution operations and 2 Batch Normalization operations, and the Leaky ReLU activation is:
y = max(0, x) + leak * min(0, x); where leak is a small constant;
S22, similarity calculation of the classifier: based on a multi-class support vector machine (SVM), a nonlinear optimal classifier is adopted; a Gaussian kernel function maps the low-dimensional features to a higher dimension and the similarity is calculated, the kernel function being:
K(xi, xj) = exp(-||xi - xj||^2 / (2σ^2)); where σ is the free parameter controlling the damping of the function, xi and xj are feature vectors, and ||xi - xj|| is their Euclidean distance;
S23, classification and segmentation of ground features: classification of ground object categories uses a voting mechanism, with a 1-v-1 SVM created between every pair of classes, so that with k categories there are k(k-1)/2 SVMs; the classification result is the class receiving the maximum number of votes across the SVM results; the result comprises a ground feature classification and a segmentation map, every pixel in the segmentation image carries a label, each ground feature class is assigned a pixel value, the same pixel value represents one ground feature class, and the pixel values range from 0 to 255.
7. The SGAN adversarial neural network-based method for detecting remote sensing image element changes according to claim 5, wherein the specific steps of step S3 are as follows:
S31, discriminator training: supervised training with real samples (x, y); tens or hundreds of labeled real samples (x, y) are extracted, D((x, y)) is computed for the given batch and the multi-class loss is back-propagated to update the discriminator parameters θD so as to minimize the loss; the gradient update is:
∇θD (1/m) Σ i=1..m [log D((xi, yi)) + log(1 - D(Xi*))];
where m is the number of real samples (x, y); the real samples (x, y) represent the feature values and labels of the input samples; D((x, y)) represents the probability that sample (x, y) is real; θD are the discriminator parameters and reflect the model's ability to distinguish real from fake; ∇θD denotes the gradient with respect to θD; log D((x, y)) is the log probability of the value D((x, y)) for a single real sample (x, y); log(1 - D(X*)) is the log probability that the discriminator assigns to a single pseudo sample X* being fake, which improves the performance of the discriminator; D(X*) is the probability that pseudo sample X* is a real sample;
S32, classifier training: unsupervised training with unlabeled real samples x; the penalty coefficient C and the kernel function parameter are tuned through training until an optimal solution is reached, the penalty coefficient being C = 1/λ, where λ is a regularization parameter used to control the complexity of the model and the value range of the penalty coefficient C is [0.001, 100];
S33, generator training: unsupervised training with pseudo samples X* generated from added random noise Z; tens or hundreds of random noise vectors Z are drawn to generate tens or hundreds of pseudo samples, denoted G(z) = X*; D(X*) is computed for the given batch and the binary classification loss is back-propagated to update the generator parameters θG so as to maximize the objective; the gradient update is:
∇θG (1/m) Σ i=1..m log D(Xi*);
where m is the number of pseudo samples G(z); D(X*) is the probability that a pseudo sample is a real sample; θG are the generator parameters; ∇θG denotes the gradient with respect to θG; log D(X*) is the log probability of the value D(X*) the discriminator assigns to a single pseudo sample X*.
8. The SGAN adversarial neural network-based method for detecting remote sensing image element changes according to claim 7, wherein the model training in step S3 further includes an overall classification accuracy (OA) evaluation: a threshold is set, and if the OA is smaller than the threshold the model is retrained; the formula is:
OA = Nc / N, where Nc is the number of correctly classified samples and N is the total number of samples.
9. The SGAN adversarial neural network-based method for detecting remote sensing image element changes according to claim 5, wherein in step S5 a pixel difference method is adopted: an OpenCV function interface of the open-source toolkit takes the post-change classification feature map and the pre-change segmentation feature map as inputs and produces a pixel difference map, the pixel values of the difference map distinguish the change categories, and a change feature map is output.
10. The SGAN adversarial neural network-based method for detecting remote sensing image element changes according to claim 5, wherein in step S6, according to the pixel values of the change feature map, raster-to-vector conversion generates and outputs a shp vector containing the change attribute and change direction labels, and attribute change information is given.
CN202410096146.4A 2024-01-24 2024-01-24 SGAN adversarial neural network-based method for detecting remote sensing image element changes Active CN117612020B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410096146.4A CN117612020B (en) SGAN adversarial neural network-based method for detecting remote sensing image element changes

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410096146.4A CN117612020B (en) SGAN adversarial neural network-based method for detecting remote sensing image element changes

Publications (2)

Publication Number Publication Date
CN117612020A true CN117612020A (en) 2024-02-27
CN117612020B CN117612020B (en) 2024-07-05

Family

ID=89952087

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410096146.4A Active CN117612020B (en) SGAN adversarial neural network-based method for detecting remote sensing image element changes

Country Status (1)

Country Link
CN (1) CN117612020B (en)

Patent Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019028839A (en) * 2017-08-01 2019-02-21 国立研究開発法人情報通信研究機構 Classifier, method for learning of classifier, and method for classification by classifier
CN108846832A (en) * 2018-05-30 2018-11-20 理大产学研基地(深圳)有限公司 A kind of change detecting method and system based on multi-temporal remote sensing image and GIS data
CN109948693A (en) * 2019-03-18 2019-06-28 西安电子科技大学 Expand and generate confrontation network hyperspectral image classification method based on super-pixel sample
CN110689086A (en) * 2019-10-08 2020-01-14 郑州轻工业学院 Semi-supervised high-resolution remote sensing image scene classification method based on generating countermeasure network
CN111046900A (en) * 2019-10-25 2020-04-21 重庆邮电大学 Semi-supervised generation confrontation network image classification method based on local manifold regularization
CN111161218A (en) * 2019-12-10 2020-05-15 核工业北京地质研究院 High-resolution remote sensing image change detection method based on twin convolutional neural network
CN111160128A (en) * 2019-12-11 2020-05-15 中国资源卫星应用中心 Remote sensing image processing method and system based on antagonistic neural network model
CN111242050A (en) * 2020-01-15 2020-06-05 同济大学 Automatic change detection method for remote sensing image in large-scale complex scene
CN111274905A (en) * 2020-01-16 2020-06-12 井冈山大学 AlexNet and SVM combined satellite remote sensing image land use change detection method
US20210279960A1 (en) * 2020-03-05 2021-09-09 Topcon Corporation Photogrammetry of building using machine learning based inference
CN111931553A (en) * 2020-06-03 2020-11-13 西安电子科技大学 Remote sensing data enhanced generation countermeasure network method, system, storage medium and application
CN112016436A (en) * 2020-08-28 2020-12-01 北京国遥新天地信息技术有限公司 Remote sensing image change detection method based on deep learning
CN113689517A (en) * 2021-09-08 2021-11-23 云南大学 Image texture synthesis method and system of multi-scale channel attention network
CN113936217A (en) * 2021-10-25 2022-01-14 华中师范大学 Priori semantic knowledge guided high-resolution remote sensing image weakly supervised building change detection method
CN114821299A (en) * 2022-03-28 2022-07-29 西北工业大学 Remote sensing image change detection method
CN115527056A (en) * 2022-04-13 2022-12-27 齐齐哈尔大学 Hyperspectral image classification method based on dual-hybrid convolution generation countermeasure network
CN115830466A (en) * 2022-11-19 2023-03-21 山东科技大学 Glacier change remote sensing detection method based on deep twin neural network
CN116012702A (en) * 2022-12-06 2023-04-25 南京市测绘勘察研究院股份有限公司 Remote sensing image scene level change detection method
CN116612381A (en) * 2023-03-31 2023-08-18 中国人民解放军国防科技大学 Semi-supervised remote sensing image change detection method based on pseudo-double phase generation technology
CN116704350A (en) * 2023-06-16 2023-09-05 浙江时空智子大数据有限公司 Water area change monitoring method and system based on high-resolution remote sensing image and electronic equipment

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
KHYAT PATEL et al.: "Semantic Segmentation of Urban Area using Pix2Pix Generative Adversarial Networks", 2023 3rd International Conference on Range Technology (ICORT), 19 September 2023, pages 1-6 *
桑宏强; 刘雨轩; 刘芬: "Application of an improved convolutional neural network to workpiece surface defect detection" (改进卷积神经网络在工件表面缺陷检测中的应用), Modular Machine Tool & Automatic Manufacturing Technique (组合机床与自动化加工技术), no. 08, 20 August 2020 *
王艳恒; 高连如; 陈正超; 张兵: "Change detection in high-resolution remote sensing images combining deep learning and superpixels" (结合深度学习和超像元的高分遥感影像变化检测), Journal of Image and Graphics (中国图象图形学报), no. 06, 16 June 2020 *
郝睿 et al.: "A multi-feature fusion change detection method based on a BP neural network" (基于BP神经网络的多特征融合变化检测方法), Hydrographic Surveying and Charting (海洋测绘), 31 January 2016, pages 79-82 *

Also Published As

Publication number Publication date
CN117612020B (en) 2024-07-05

Similar Documents

Publication Publication Date Title
CN112966684B (en) Cooperative learning character recognition method under attention mechanism
CN110443143B (en) Multi-branch convolutional neural network fused remote sensing image scene classification method
CN110363182B (en) Deep learning-based lane line detection method
CN107016357B (en) Video pedestrian detection method based on time domain convolutional neural network
CN104361313B (en) A kind of gesture identification method merged based on Multiple Kernel Learning heterogeneous characteristic
CN106408030B (en) SAR image classification method based on middle layer semantic attribute and convolutional neural networks
CN104834942B (en) Remote sensing image variation detection method and system based on mask classification
CN105528595A (en) Method for identifying and positioning power transmission line insulators in unmanned aerial vehicle aerial images
CN112950780B (en) Intelligent network map generation method and system based on remote sensing image
CN111598098B (en) Water gauge water line detection and effectiveness identification method based on full convolution neural network
CN111401145B (en) Visible light iris recognition method based on deep learning and DS evidence theory
CN110060273B (en) Remote sensing image landslide mapping method based on deep neural network
CN102332086A (en) Facial identification method based on dual threshold local binary pattern
CN108427919B (en) Unsupervised oil tank target detection method based on shape-guided saliency model
CN113963222A (en) High-resolution remote sensing image change detection method based on multi-strategy combination
CN117475236B (en) Data processing system and method for mineral resource exploration
CN113052215A (en) Sonar image automatic target identification method based on neural network visualization
CN114359702A (en) Method and system for identifying building violation of remote sensing image of homestead based on Transformer
CN107423771B (en) Two-time-phase remote sensing image change detection method
CN111242028A (en) Remote sensing image ground object segmentation method based on U-Net
CN107766810A (en) A kind of cloud, shadow detection method
CN114529906A (en) Method and system for detecting abnormity of digital instrument of power transmission equipment based on character recognition
CN108960005B (en) Method and system for establishing and displaying object visual label in intelligent visual Internet of things
CN117351371A (en) Remote sensing image target detection method based on deep learning
CN109829511B (en) Texture classification-based method for detecting cloud layer area in downward-looking infrared image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant