CN110263845A - SAR image change detection based on a semi-supervised adversarial deep network - Google Patents

SAR image change detection based on a semi-supervised adversarial deep network

Info

Publication number
CN110263845A
CN110263845A (application CN201910527007.1A; granted as CN110263845B)
Authority
CN
China
Prior art keywords
layer
network
sample
training
output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910527007.1A
Other languages
Chinese (zh)
Other versions
CN110263845B (en)
Inventor
王英华
杨振东
王剑
刘宏伟
秦庆喜
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University
Priority: CN201910527007.1A
Publication of CN110263845A
Application granted
Publication of CN110263845B
Legal status: Active
Anticipated expiration


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/2155: Generating training patterns characterised by the incorporation of unlabelled data, e.g. multiple instance learning [MIL], semi-supervised techniques using expectation-maximisation [EM] or naïve labelling
    • G06F18/24: Classification techniques
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a SAR image change detection method based on a semi-supervised adversarial deep network, mainly addressing the high false-alarm rate and inaccurate change regions produced by existing change detection techniques when labeled data are scarce. The scheme is: 1) using two co-registered SAR images acquired at different times, compute their log-ratio difference map; 2) extract training samples and test samples from the two SAR images and the difference map; 3) construct a change-detection dual network and two discriminator networks; 4) perform supervised training with the labeled data, then adversarial training and co-training with the unlabeled data, obtaining a trained detection network; 5) feed the test data into the trained change detection network to obtain the change detection result. By exploiting a large amount of unlabeled data to extract discriminative change-detection features, the invention improves the generalization of the supervised model when labeled training samples are scarce, and can be used for SAR image change detection.

Description

SAR image change detection based on a semi-supervised adversarial deep network
Technical field
The invention belongs to the field of radar image processing, and in particular relates to a change detection method for SAR images that can be used in disaster monitoring, land surveying and target detection.
Background technique
SAR systems are only weakly affected by weather and illumination conditions and can observe the Earth day and night in all weather. Change detection on multi-temporal SAR data is therefore an important means of analyzing changes of the Earth's surface, and is widely applied in disaster monitoring, land surveying and target detection.
Traditional SAR change detection methods fall into three classes. The first class is unsupervised, e.g. the method of Celik et al. based on principal component analysis and k-means clustering; the method of Gong et al. based on image fusion and fuzzy clustering; and the method of Yan Wang et al. based on SIFT keypoint detection and region information. The second class is supervised, e.g. the kernel-based method of Camps-Valls et al., which uses labeled data as training samples, and the matching-pursuit-based method of Yu Li et al. The third class is semi-supervised, e.g. the SAR image change detection method of Lu Jia based on neighborhood clustering kernels, and the semi-supervised SAR image change detection algorithm of Lin An et al. based on random fields and maximum entropy.
Among these three classes, unsupervised methods need no labeled data and have therefore become the mainstream in change detection; however, lacking the supervision and guidance of labels, their results usually deviate considerably from the true change regions and produce many false alarms. Supervised methods can perform well when a large amount of labeled data is available, but in real scenarios acquiring labels is very costly, so the amount of labeled data is usually small; in that case supervised methods degrade and the model generalizes poorly. Compared with both, semi-supervised change detection methods can learn jointly from a small amount of labeled data and a large amount of unlabeled data, extract discriminative features, and improve detection performance. Existing semi-supervised SAR image change detection methods are generally based on classical machine learning; their input features usually require manual design and cannot fully exploit the information in the raw data, which leads to higher false-alarm rates and lower detection accuracy and thus limits their performance.
Summary of the invention
The object of the invention is to remedy the shortcomings of the above three classes of SAR change detection methods by proposing a SAR image change detection method based on a semi-supervised adversarial deep network, which improves detection accuracy and reduces the false-alarm rate by exploiting a large amount of unlabeled data when only a small number of labeled samples are available.
The technical scheme is: first extract a small number of labeled samples and a large number of unlabeled samples with a sliding window; then train a deep neural network jointly on the labeled and unlabeled samples; after the model converges, apply the trained network to the test data to obtain the final change detection label map. The implementation comprises the following steps:
(1) Using the two co-registered SAR images acquired at different times, compute their log-ratio difference map K;
(2) Extract training samples and test samples from the two SAR images and the difference map with a sliding window, and randomly select 4% of the training samples as labeled training samples, the rest serving as unlabeled training samples;
(3) training network model is constructed:
(3a) Set up the SAR change-detection dual network Ψ1 and Ψ2:
Each network consists of six layers, of which the first four are shared: the first layer is a fully connected layer L1, the second an activation layer ReLU, the third a fully connected layer L2, the fourth an activation layer ReLU; the fifth and sixth layers are unshared, where:
the fifth layer of the first network Ψ1 is a fully connected layer L13 and its sixth layer is a Softmax classifier layer S11,
the fifth layer of the second network Ψ2 is a fully connected layer L23 and its sixth layer is a Softmax classifier layer S21.
(3b) Set up two discriminator networks:
The two discriminator networks are identical, each consisting of six layers: a fully connected first layer, a ReLU activation second layer, a fully connected third layer, an activation fourth layer, a fully connected fifth layer, and a Softmax classifier sixth layer.
(3c) Connect the dual network to the two discriminator networks: attach the first discriminator after the first detection network Ψ1 and the second discriminator after the second detection network Ψ2, forming the training network model.
(4) Feed the training sample data into the training network model built in step (3) and successively perform supervised training with the labeled data, adversarial training with the unlabeled data, and co-training, obtaining the trained change detection network Ψ;
(5) Feed the test sample data into the trained change detection network Ψ to obtain the SAR image change detection result.
Compared with the prior art, the invention has the following advantages:
1) Using the two co-registered SAR images, the invention can combine a small number of labeled samples with a large amount of unlabeled data to extract discriminative change-detection features, improving change detection performance;
2) Exploiting the strength of deep learning in classification tasks, the dual-network structure together with adversarial training and co-training lets the two networks promote each other during training, ultimately improving change detection performance.
Detailed description of the invention
Fig. 1 is the overall flowchart of the invention;
Fig. 2 illustrates the construction of input samples;
Fig. 3 shows the experimental data;
Fig. 4 shows the change detection results of the invention and of existing methods on the SAR images of Fig. 3.
Specific embodiment
Embodiments and effects of the invention are described in detail below with reference to the drawings.
Referring to Fig. 1, the invention is implemented as follows:
Step 1. Compute the log-ratio difference map from the two co-registered SAR images.
The log-ratio difference map K is computed from the two SAR images as
K = |log(I2 / I1)|,
where I1 is the original SAR image at the first acquisition time and I2 the original SAR image at the second acquisition time.
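As an illustration, the log-ratio operator of Step 1 can be sketched in a few lines of numpy. The +1 offset guarding against log(0) is an assumption of this sketch (the patent's exact formula is rendered as an image and not reproduced in this text), as is the function name:

```python
import numpy as np

def log_ratio_map(i1: np.ndarray, i2: np.ndarray) -> np.ndarray:
    """Log-ratio difference image of two co-registered SAR intensity images.

    Sketch of the standard log-ratio operator; the +1 offset (a common
    guard against log(0)) is an assumption, not the patent's exact formula.
    """
    return np.abs(np.log((i2 + 1.0) / (i1 + 1.0)))

i1 = np.array([[10.0, 20.0], [30.0, 40.0]])
i2 = np.array([[10.0, 80.0], [30.0, 40.0]])
k = log_ratio_map(i1, i2)  # zero where the images agree, large where they differ
```

Unchanged pixels yield values near zero, while strong intensity changes yield large values, which is what makes the map a useful difference feature.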
Step 2. Extract training samples and test samples from the two SAR images and the difference map with a sliding window.
Referring to Fig. 2, this step is implemented as follows:
2.1) Set the sliding window size to N × N with the window centered on pixel (i, j). For training samples the window slides over a designated region to select image blocks; for test samples it slides over the whole image;
2.2) From the two images I1 and I2, select the N × N image blocks centered on (i, j) and stitch them together along the first dimension, giving the first channel of the sample, of size 2N × N;
2.3) For the same pixel (i, j), select the N × N image block centered on (i, j) in the difference image K, compute its average value m, and expand m into a 2N × N matrix, which serves as the second channel of the sample;
2.4) To obtain training samples, slide the window over the designated region and form each sample from the first and second channels; to obtain test samples, slide the window over the whole image and form each sample in the same way;
2.5) Randomly select 4% of the training samples as labeled training samples; the rest serve as unlabeled training samples.
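The two-channel sample construction of steps 2.2) and 2.3) can be sketched as follows. An odd window size (so the patch centres exactly on (i, j)) and the function and variable names are assumptions of this sketch:

```python
import numpy as np

def make_sample(img1, img2, diff, i, j, n):
    """Build one sample around pixel (i, j), per steps 2.2-2.3.

    Channel 1: the two N x N patches from the co-registered images,
    stacked along the first axis into a 2N x N block.
    Channel 2: a 2N x N matrix filled with the mean of the N x N patch
    taken from the difference map K.
    Assumes an odd window size n so the patch centres on (i, j).
    """
    h = n // 2
    sl = (slice(i - h, i + h + 1), slice(j - h, j + h + 1))
    ch1 = np.concatenate([img1[sl], img2[sl]], axis=0)  # 2N x N first channel
    ch2 = np.full_like(ch1, diff[sl].mean())            # mean of K patch, tiled
    return np.stack([ch1, ch2])                         # shape (2, 2N, N)

rng = np.random.default_rng(0)
img1, img2, diff = (rng.random((32, 32)) for _ in range(3))
s = make_sample(img1, img2, diff, 16, 16, 5)  # shape (2, 10, 5)
```

Sliding (i, j) over a designated region (training) or the whole image (testing) then yields the sample sets of step 2.4).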
Step 3. Construct the training network model.
3.1) Set up the SAR change-detection dual network Ψ1 and Ψ2:
Both detection networks consist of six layers, of which the first four are shared and the last two are unshared, where:
3.11) The first four layers of Ψ1 and Ψ2 and their relations:
The first layer is a fully connected layer L1 with 1000 neurons; it extracts shallow features of the training and test samples and produces a 1000-dimensional output vector;
The second layer is an activation layer ReLU that maps the output of the preceding fully connected layer nonlinearly according to
ReLU(x) = max(0, x),
where x is the input and ReLU(x) the output; the output dimension of this layer equals its input dimension;
The third layer is a fully connected layer L2 with 1000 neurons; it extracts deeper features from the shallow features output by the preceding fully connected layer and produces a 1000-dimensional output vector;
The fourth layer is an activation layer ReLU whose function and principle match the ReLU layer above;
3.12) The fifth and sixth layers of Ψ1 and their relations:
The fifth layer of Ψ1 is a fully connected layer L31, which extracts features of a different type from the output of the preceding activation layer; its input dimension is 1000 and its output dimension is 2;
The sixth layer of Ψ1 is a Softmax classifier layer S11; it converts the 2-dimensional vector output by L31 into class probabilities, i.e. the probabilities that the current input sample belongs to the changed and the unchanged class, and classifies the sample according to these probabilities;
3.13) The fifth and sixth layers of Ψ2 and their relations:
The fifth layer of Ψ2 is a fully connected layer L32, which extracts features of a different type from the output of the preceding activation layer; its input dimension is 1000 and its output dimension is 2;
The sixth layer of Ψ2 is a Softmax classifier layer S21; it converts the 2-dimensional vector output by L32 into class probabilities, i.e. the probabilities that the current input sample belongs to the changed and the unchanged class, and classifies the sample according to these probabilities.
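For concreteness, the forward pass of one detection branch (shared layers L1/ReLU/L2/ReLU plus the unshared fully connected and Softmax layers) can be sketched in numpy. The 1000-unit and 2-unit layer widths come from the description above; the weight initialization, input dimension and class names are placeholder assumptions of this sketch, not trained parameters:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def softmax(v):
    e = np.exp(v - v.max(axis=-1, keepdims=True))  # shift for numerical stability
    return e / e.sum(axis=-1, keepdims=True)

class DetectionBranch:
    """Minimal numpy sketch of one detection network: two shared
    1000-neuron fully connected + ReLU stages (layers 1-4), then an
    unshared 1000 -> 2 fully connected layer and a Softmax classifier
    (layers 5-6). Weights are random placeholders."""
    def __init__(self, in_dim, rng):
        self.w1 = rng.normal(0, 0.01, (in_dim, 1000)); self.b1 = np.zeros(1000)
        self.w2 = rng.normal(0, 0.01, (1000, 1000));   self.b2 = np.zeros(1000)
        self.w3 = rng.normal(0, 0.01, (1000, 2));      self.b3 = np.zeros(2)
    def forward(self, x):
        h = relu(x @ self.w1 + self.b1)        # layers 1-2: shallow features
        h = relu(h @ self.w2 + self.b2)        # layers 3-4: deeper features
        return softmax(h @ self.w3 + self.b3)  # layers 5-6: class probabilities

rng = np.random.default_rng(0)
net = DetectionBranch(in_dim=50, rng=rng)
p = net.forward(rng.random((4, 50)))  # (4, 2): changed / unchanged probabilities
```

In the dual network, Ψ1 and Ψ2 would share the first two weight matrices and keep separate copies of the last one.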
3.2) Set up the two discriminator networks:
The two discriminator networks are identical, each consisting of six layers: a fully connected first layer, a ReLU activation second layer, a fully connected third layer, an activation fourth layer, a fully connected fifth layer, and a Softmax classifier sixth layer. The parameters and relations of each layer are as follows:
The first fully connected layer has 1000 neurons; it extracts shallow discriminative features from the network input and produces a 1000-dimensional output vector;
The second layer, ReLU, maps the output of the preceding fully connected layer nonlinearly; its output dimension equals its input dimension;
The third layer is a fully connected layer with 1000 neurons; it extracts deeper discriminative features from the shallow features output by the preceding fully connected layer and produces a 1000-dimensional output vector;
The fourth layer is an activation layer ReLU;
The fifth layer is a fully connected layer with 2 neurons; it reduces the preceding layer's output to a 2-dimensional vector used for the subsequent two-class probability computation;
The sixth layer is a Softmax classifier layer; it converts the 2-dimensional vector of the preceding layer into two class probabilities, i.e. the probabilities that the current input sample comes from the true distribution or was produced by the generator, and judges the sample accordingly.
3.3) Connect the dual network built in 3.1) with the two discriminator networks built in 3.2): attach the first discriminator after the sixth layer of the first detection network Ψ1 and the second discriminator after the fifth layer of the second detection network Ψ2, forming the training network model.
Step 4. Feed the training sample data into the training network model built in Step 3 and iterate supervised training with the labeled data, adversarial training with the unlabeled data, and co-training, obtaining the trained change detection network Ψ.
4.1) Supervised training with labeled data:
4.11) Feed the labeled data into the detection dual network Ψ1 and Ψ2 and, according to the labels of the input samples, compute the loss function Ls1 of detection network Ψ1 and the loss function Ls2 of detection network Ψ2,
where v1 and v2 are the two values of the 2-dimensional output vector of the last fully connected layer of network Ψ1, z1 and z2 are the two values of the 2-dimensional output vector of the last fully connected layer of network Ψ2, and i is the correct class label of the current input sample: i = 1 indicates the changed class and i = 2 the unchanged class;
4.12) Using the losses Ls1 and Ls2 computed in 4.11), update the parameters of the first detection network Ψ1 and of the second detection network Ψ2 by backpropagation with gradient descent.
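The loss formulas Ls1 and Ls2 appear as images in the original patent and are not reproduced in this text; the variable descriptions (a 2-dimensional pre-Softmax output and a true class label i) suggest the standard softmax cross-entropy, which is sketched here as an assumption:

```python
import math

def supervised_loss(v1, v2, i):
    """Softmax cross-entropy on a 2-dimensional output (v1, v2) with true
    class i (1 = changed, 2 = unchanged). A sketch of the standard loss
    the variable descriptions suggest, not the patent's exact formula."""
    target = v1 if i == 1 else v2
    # -log softmax(target) = -(target - log(e^v1 + e^v2))
    return -(target - math.log(math.exp(v1) + math.exp(v2)))

loss_correct = supervised_loss(5.0, -5.0, 1)  # confident and correct: near 0
loss_wrong = supervised_loss(5.0, -5.0, 2)    # confident but wrong: large
```

Ls2 would take the same form with (z1, z2) in place of (v1, v2).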
4.2) Adversarial training with unlabeled data:
4.2.1) Treat the first detection network Ψ1 as the generator network P1 and, together with the first discriminator network, form the first generative adversarial network GAN1; adversarial training on the unlabeled data proceeds as follows:
First, feed real samples and generated samples in turn into the first discriminator network and, according to the labels of the input samples, compute the first discriminator loss,
where Or1 and Of1 are the two values of the 2-dimensional output vector of the last fully connected layer of the discriminator network, corresponding to the sample being judged real and being judged generated respectively, and y1 is the output of network P1;
Then, compute the gradient of the first discriminator loss and update the parameters of the first discriminator network by backpropagation with gradient descent;
Next, feed the samples generated by the first generator network Ψ1 into the first discriminator network and compute the loss of the first generator network P1, with Or1, Of1 and y1 as above;
Finally, compute the gradient of the first generator loss and update the parameters of the first generator network P1 by backpropagation with gradient descent.
4.2.2) Treat the first five layers of the second detection network Ψ2 as the generator network P2 and, together with the second discriminator network, form the second generative adversarial network GAN2; adversarial training proceeds as follows:
First, feed real samples and generated samples in turn into the second discriminator network and, according to the labels of the input samples, compute the second discriminator loss,
where Or2 and Of2 are the two values of the 2-dimensional output vector of the last fully connected layer of the discriminator network, corresponding to the sample being judged real and being judged generated respectively, and y2 is the output of network P2;
Second, compute the gradient of the second discriminator loss and update the parameters of the second discriminator network by backpropagation;
Third, feed the samples generated by the second generator network into the second discriminator network and compute the loss of the second generator network P2, with Or2, Of2 and y2 as above;
Fourth, compute the gradient of the second generator loss and update the parameters of the second generator network P2 by backpropagation with gradient descent.
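The discriminator and generator loss formulas of step 4.2) are likewise rendered as images in the original patent. As a stand-in, the conventional non-saturating GAN loss pair is sketched below; it illustrates the alternating objective but should not be read as the inventors' exact formulas:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def discriminator_loss(d_real, d_fake):
    """Conventional GAN discriminator loss (an assumption, not the
    patent's formula): reward high scores on real samples and low
    scores on generated ones. d_real/d_fake are pre-sigmoid logits."""
    return -(math.log(sigmoid(d_real)) + math.log(1.0 - sigmoid(d_fake)))

def generator_loss(d_fake):
    """Non-saturating generator loss: push the discriminator's score
    on a generated sample toward 'real'."""
    return -math.log(sigmoid(d_fake))

# A well-separating discriminator has low loss; the generator's loss is
# then high, which is what drives the adversarial updates.
d_loss = discriminator_loss(5.0, -5.0)
g_loss = generator_loss(-5.0)
```

Training alternates: a gradient step on the discriminator loss, then a gradient step on the generator loss, exactly the update order described in 4.2.1) and 4.2.2).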
4.3) Feed the unlabeled data simultaneously into the detection networks Ψ1 and Ψ2 and carry out the following co-training:
4.31) The first detection network Ψ1 extracts class features from the unlabeled data and classifies them with its Softmax classifier, yielding for each unlabeled sample a pseudo label PL1 and a class probability vector py. The classification confidence of the i-th unlabeled sample is Conyi = max(pyi); compare Conyi with a preset confidence threshold Ty, and use the samples whose confidence exceeds the threshold, together with the labeled samples, as training samples for the second detection network Ψ2;
4.32) The second detection network Ψ2 extracts class features from the unlabeled data and classifies them with its Softmax classifier, yielding for each unlabeled sample a pseudo label PL2 and a class probability vector pz. The classification confidence of the i-th unlabeled sample is Conzi = max(pzi); compare Conzi with a preset confidence threshold Tz, and use the samples whose confidence exceeds the threshold, together with the labeled samples, as supervised training samples for the first detection network Ψ1.
By iterating step 4.1) (supervised training with labeled data), step 4.2) (adversarial training with unlabeled data) and step 4.3) (co-training with unlabeled data), the trained change detection network Ψ is obtained.
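The confidence filtering in steps 4.31) and 4.32) amounts to thresholding the maximum Softmax probability. A minimal sketch (class indices 0/1 instead of the patent's 1/2, and an arbitrary threshold value, are assumptions of this sketch):

```python
import numpy as np

def select_pseudo_labels(probs, threshold):
    """Co-training selection: each unlabeled sample gets the pseudo label
    argmax(p) with confidence max(p); only samples whose confidence
    exceeds the threshold are passed to the partner network."""
    pseudo = probs.argmax(axis=1)        # pseudo label per sample
    conf = probs.max(axis=1)             # classification confidence
    keep = conf > threshold
    return pseudo[keep], np.flatnonzero(keep)

# Three unlabeled samples: confident-changed, uncertain, confident-unchanged.
probs = np.array([[0.95, 0.05],
                  [0.55, 0.45],
                  [0.10, 0.90]])
labels, idx = select_pseudo_labels(probs, threshold=0.8)  # keeps samples 0 and 2
```

Each network thus supplies the other with only its high-confidence pseudo-labeled samples, which is what lets the two branches improve each other across iterations.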
Step 5. Feed the test sample data into the trained change detection network Ψ to obtain the SAR image change detection result.
The effect of the invention can be further illustrated by the following experiments.
1. Experimental conditions
1) Experimental data
The experiments use four groups of SAR image data, shown in Fig. 3:
Fig. 3(a1) and Fig. 3(a2) were acquired by the ERS-2 SAR sensor in August 2003 and May 2004 respectively; the image size is 256 × 256, and Fig. 3(a3) is the corresponding change detection reference map. This group is called the San Francisco data.
Fig. 3(b1) and Fig. 3(b2) are SAR images acquired by the Radarsat-1 sensor in July 1997 and August 1997 respectively; the image size is 290 × 350, and Fig. 3(b3) is the corresponding change detection reference map. This group is called the Ottawa data.
Fig. 3(c1) and Fig. 3(c2) are SAR images acquired by the Radarsat-2 sensor in June 2008 and June 2009 respectively; the image size is 289 × 257, and Fig. 3(c3) is the corresponding change detection reference map. This group is called the Yellow River Farmland I data.
Fig. 3(d1) and Fig. 3(d2) are SAR images acquired by the Radarsat-2 sensor in June 2008 and June 2009 respectively; the image size is 291 × 306, and Fig. 3(d3) is the corresponding change detection reference map. This group is called the Yellow River Farmland II data.
2) Evaluation criteria
The experimental results are evaluated by the following criteria:
false-alarm rate FA, missed-detection rate MD, overall error rate OE, percentage correct classification PCC, and Kappa coefficient KC.
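These five criteria can be computed from the binary confusion matrix; the following is a minimal sketch using the usual definitions from the SAR change-detection literature (the patent does not spell out its formulas, so treat these definitions as assumptions):

```python
def change_detection_metrics(pred, ref):
    """FA, MD, OE, PCC and Kappa from binary change maps
    (1 = changed, 0 = unchanged), flattened to 1-D sequences."""
    n = len(pred)
    tp = sum(p and r for p, r in zip(pred, ref))          # changed, detected
    fp = sum(p and not r for p, r in zip(pred, ref))      # false alarms
    fn = sum((not p) and r for p, r in zip(pred, ref))    # missed changes
    tn = n - tp - fp - fn
    fa = fp / (fp + tn)                 # false-alarm rate
    md = fn / (fn + tp)                 # missed-detection rate
    oe = (fp + fn) / n                  # overall error rate
    pcc = 1.0 - oe                      # percentage correct classification
    # chance agreement for Kappa
    pe = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / (n * n)
    kc = (pcc - pe) / (1.0 - pe)        # Kappa coefficient
    return fa, md, oe, pcc, kc

pred = [1, 1, 0, 0, 1, 0, 0, 0]
ref  = [1, 0, 0, 0, 1, 1, 0, 0]
fa, md, oe, pcc, kc = change_detection_metrics(pred, ref)
```

Lower FA, MD and OE and higher PCC and KC indicate a better change map; Kappa additionally corrects PCC for chance agreement, which matters when changed pixels are rare.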
2. Experimental contents
Experiment 1: the invention is compared on the above data with the supervised deep neural network algorithm A1 (DNN), the semi-supervised adversarial autoencoder algorithm A2 (SAAE), and the semi-supervised deep neural network with co-training algorithm A3 (SSDC). The performance comparison is shown in Table 1.
Table 1. Performance comparison of the proposed method and related models
In Table 1: the semi-supervised algorithm A3 (SSDC) is the present method without the adversarial-training component, and the supervised algorithm A1 (DNN) uses the same network as the invention but with supervised training only.
Table 1 shows that the invention achieves the best detection results and is also more stable than the other methods. The comparison with the semi-supervised algorithm A3 (SSDC) shows that adding adversarial training is reasonable and helps improve classification.
Experiment 2: the invention is compared on the above data with the unsupervised change detection algorithm PCAKM combining PCA and k-means, the unsupervised method GaborTLC combining Gabor features and two-stage clustering, the unsupervised method PCANet based on PCANet, and the unsupervised method ELM based on the extreme learning machine. The performance comparison is shown in Table 2.
Table 2. Performance comparison of the proposed method and existing unsupervised methods
Table 2 shows that the invention performs better. This is because the semi-supervised model can extract discriminative information from both labeled and unlabeled samples, and the co-training algorithm improves generalization by introducing pseudo-labeled training samples; the invention therefore obtains better detection results than the existing methods.
Experiment 3: the invention and the existing methods used in Experiments 1 and 2 are compared on the above data; the results are shown in Fig. 4, where:
Fig. 4(a1)–(a9) show results on the San Francisco data: (a1) method A2, (a2) method A1, (a3) method A3, (a4) the invention, (a5) the true change region map, (a6) PCAKM, (a7) GaborTLC, (a8) PCANet, (a9) ELM;
Fig. 4(b1)–(b9) show the same methods in the same order on the Ottawa data;
Fig. 4(c1)–(c9) show the same methods in the same order on the Yellow River Farmland I data;
Fig. 4(d1)–(d9) show the same methods in the same order on the Yellow River Farmland II data.
As can be seen from Fig. 4, the detection results of the present invention are closest to the ground-truth change maps: they reflect the shapes of the changed regions more accurately, and the detection performance is better.
The above description is only an example of the present invention and does not constitute any limitation of the invention. It is obvious that, after understanding the content and principle of the present invention, those skilled in the art may make various modifications and changes in form and detail without departing from the principle and structure of the invention; such modifications and changes based on the inventive concept nevertheless remain within the scope of the claims of the present invention.

Claims (8)

1. A SAR image change detection method based on a semi-supervised adversarial deep network, characterized by comprising:
(1) calculating the logarithm-ratio difference map K of two temporal SAR images from their image data;
(2) extracting training samples and test samples from the two temporal SAR images and the difference map by a sliding-window approach, randomly selecting 4% of the training samples as labeled training samples, and using the remainder as unlabeled training samples;
(3) constructing the training network model:
(3a) setting up the SAR change-detection dual networks Ψ1 and Ψ2:
each network consists of six layers, of which the first four layers are shared layers, namely: the first layer is a fully connected layer L1, the second layer is an activation layer ReLU, the third layer is a fully connected layer L2, and the fourth layer is an activation layer ReLU; the fifth and sixth layers are unshared layers, wherein:
the fifth layer of the first network Ψ1 is a fully connected layer L13 and its sixth layer is a Softmax classifier layer S11,
the fifth layer of the second network Ψ2 is a fully connected layer L23 and its sixth layer is a Softmax classifier layer S21;
(3b) setting up two discriminator networks, namely a first discriminator network and a second discriminator network:
the two discriminator networks are identical, each consisting of six layers: the first layer is a fully connected layer, the second layer is an activation layer ReLU, the third layer is a fully connected layer, the fourth layer is an activation layer, the fifth layer is a fully connected layer, and the sixth layer is a Softmax classifier layer;
(3c) connecting the dual networks with the two discriminator networks, i.e., connecting the first discriminator network after the first detection network Ψ1 and the second discriminator network after the second detection network Ψ2, to form the training network model;
(4) inputting the training sample data into the training network model built in (3) and iteratively performing, in turn, supervised training on the labeled data, adversarial training on the unlabeled data, and co-training, to obtain the trained change detection network Ψ;
(5) inputting the test sample data into the trained change detection network Ψ for detection, to obtain the change detection result of the SAR image.
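The training iteration of step (4), which alternates supervised training on labeled data, adversarial training on unlabeled data, and co-training, can be outlined with the following Python skeleton. The three step functions are hypothetical placeholders (they only record the call order), not part of the patented method:

```python
calls = []

# Hypothetical placeholder steps; each merely records that it was called.
def supervised_step(model, labeled):
    calls.append("sup")   # supervised training on labeled data

def adversarial_step(model, unlabeled):
    calls.append("adv")   # adversarial training on unlabeled data

def cotraining_step(model, unlabeled):
    calls.append("co")    # co-training between the two detection networks

def train(model, labeled, unlabeled, epochs=2):
    """Iterate step (4): supervised, then adversarial, then co-training."""
    for _ in range(epochs):
        supervised_step(model, labeled)
        adversarial_step(model, unlabeled)
        cotraining_step(model, unlabeled)

train(None, [], [], epochs=2)
print(calls)  # ['sup', 'adv', 'co', 'sup', 'adv', 'co']
```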
2. The method according to claim 1, wherein in step (1) the corresponding logarithm-ratio difference map is calculated from the two temporal SAR image data as follows:
K = |log(I2) - log(I1)|
where I1 is the first temporal image and I2 is the second temporal image of the original SAR data.
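The log-ratio computation of claim 2 can be sketched in NumPy as follows. This is an illustrative version, not the patented implementation; the small epsilon added before taking logarithms is my assumption to avoid log(0) on zero-intensity pixels:

```python
import numpy as np

def log_ratio_map(i1: np.ndarray, i2: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Log-ratio difference map K = |log(I2) - log(I1)| of two co-registered
    SAR intensity images (sketch; eps guards against log(0), an assumption)."""
    i1 = i1.astype(np.float64) + eps
    i2 = i2.astype(np.float64) + eps
    return np.abs(np.log(i2) - np.log(i1))

a = np.ones((4, 4))
print(bool(np.allclose(log_ratio_map(a, a), 0.0)))  # identical images give a zero map: True
```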
3. The method according to claim 1, wherein the training samples and test samples in (2) are selected from the two temporal SAR images and the difference map with a sliding window, implemented as follows:
2a) setting the sliding-window size to N × N with the window centered at (i, j); for training samples, image blocks are selected by sliding the window over a specified region, whereas for test samples, image blocks are selected by sliding the window over the entire image;
2b) selecting from the two temporal images I1 and I2 the N × N image blocks centered at (i, j), and stitching the two blocks together along the first dimension to obtain the sample first channel of size 2N × N;
2c) correspondingly to pixel (i, j) in the two temporal images I1 and I2, selecting in the difference image K the N × N image block centered at (i, j), computing its average value m, and extending m into a matrix of size 2N × N as the sample second channel;
2d) when selecting training samples, the window slides over the specified region and each training sample is composed of the sample first channel and the sample second channel; when selecting test samples, the window slides over the whole image and each test sample is composed of the sample first channel and the sample second channel.
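The two-channel sample construction of claim 3 (steps 2b and 2c) can be sketched as follows. This is an illustrative NumPy version assuming an odd window size N and a window that stays inside the image, both my assumptions:

```python
import numpy as np

def make_sample(i1, i2, k, i, j, n):
    """Build one sample centred at (i, j), per claim 3 (sketch).

    Channel 1: the N x N blocks of I1 and I2 stitched along the first
    dimension into a 2N x N block.  Channel 2: the mean m of the N x N
    block of the difference map K, extended to a 2N x N matrix."""
    h = n // 2
    sl = (slice(i - h, i + h + 1), slice(j - h, j + h + 1))
    ch1 = np.concatenate([i1[sl], i2[sl]], axis=0)   # 2N x N first channel
    ch2 = np.full((2 * n, n), k[sl].mean())          # 2N x N second channel
    return np.stack([ch1, ch2])                      # shape (2, 2N, N)

img1 = np.arange(49.0).reshape(7, 7)
img2 = img1 + 1.0
diff = np.abs(img2 - img1)
s = make_sample(img1, img2, diff, 3, 3, 5)
print(s.shape)  # (2, 10, 5)
```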
4. The method according to claim 1, wherein the layer parameters and relationships of the change-detection dual networks Ψ1 and Ψ2 in (3) are set as follows:
the first four layers of Ψ1 and Ψ2 are shared layers, in which:
the first layer is the fully connected layer L1 with 1000 neurons, used to extract shallow features of the training and test samples; this layer produces a 1000-dimensional output vector;
the second layer is an activation layer ReLU, which applies a nonlinear mapping to the output of the previous fully connected layer, with the mapping formula:
ReLU(x) = max(0, x)
where x is the input and ReLU(x) is the output; the input and output dimensions of this layer are the same;
the third layer is the fully connected layer L2 with 1000 neurons, used to extract deeper features from the shallow features output by the previous fully connected layer; this layer produces a 1000-dimensional output vector;
the fourth layer is an activation layer ReLU whose function and principle are the same as those of the ReLU layer above;
the fifth and sixth layers of Ψ1 and Ψ2 are unshared layers, in which:
the fifth layer of Ψ1 is the fully connected layer L31, used to extract class-discriminative features from the output of the previous activation layer; its input dimension is 1000 and its output dimension is 2;
the sixth layer of Ψ1 is the Softmax classifier layer S11, which converts the 2-dimensional vector output by L31 into classification probabilities, i.e., the probabilities that the current input sample belongs to the changed and unchanged classes, and classifies the sample according to the probability values;
the fifth layer of Ψ2 is the fully connected layer L32, used to extract class-discriminative features from the output of the previous activation layer; its input dimension is 1000 and its output dimension is 2;
the sixth layer of Ψ2 is the Softmax classifier layer S21, which converts the 2-dimensional vector output by L32 into classification probabilities, i.e., the probabilities that the current input sample belongs to the changed and unchanged classes, and classifies the sample according to the probability values.
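The six-layer structure of claim 4 amounts to a small multilayer perceptron. Below is a minimal NumPy forward-pass sketch under assumed random weights; the input dimension d_in is hypothetical, while the 1000-1000-2 layer widths follow the claim:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

d_in = 50                                   # hypothetical input dimension
w1 = 0.01 * rng.normal(size=(d_in, 1000))   # layer 1: shared FC, 1000 neurons
w2 = 0.01 * rng.normal(size=(1000, 1000))   # layer 3: shared FC, 1000 neurons
w3 = 0.01 * rng.normal(size=(1000, 2))      # layer 5: unshared FC, 1000 -> 2

def forward(x):
    h = relu(x @ w1)        # layers 1-2: fully connected + ReLU
    h = relu(h @ w2)        # layers 3-4: fully connected + ReLU
    return softmax(h @ w3)  # layers 5-6: FC + Softmax (changed vs. unchanged)

p = forward(rng.normal(size=d_in))
print(p.shape)  # (2,)
```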
5. The method according to claim 1, wherein the two discriminator networks in (4) have identical structures, with layer parameters and relationships set as follows:
the first layer is a fully connected layer with 1000 neurons, which extracts shallow discriminative features from the network input; this layer produces a 1000-dimensional output vector;
the second layer is an activation layer ReLU, which applies a nonlinear mapping to the output of the previous fully connected layer; the input and output dimensions of this layer are the same;
the third layer is a fully connected layer with 1000 neurons, which extracts deeper discriminative features from the shallow features output by the previous fully connected layer; this layer produces a 1000-dimensional output vector;
the fourth layer is an activation layer ReLU;
the fifth layer is a fully connected layer with 2 neurons, which reduces the previous layer's output to a 2-dimensional vector for the subsequent two-class probability computation;
the sixth layer is a Softmax classifier layer, which converts the 2-dimensional vector output by the previous layer into two-dimensional classification probabilities, i.e., the probabilities that the current input sample belongs to the true data distribution or is a sample produced by the generator, and discriminates the sample according to the probability values.
6. The method according to claim 1, wherein the supervised training on labeled data in (5) feeds the labeled data into the detection dual networks Ψ1 and Ψ2 for supervised training, using the two-class cross-entropy loss:
L = -log(e^{v_i} / (e^{v_1} + e^{v_2})) - log(e^{z_i} / (e^{z_1} + e^{z_2}))
where v1 and v2 are the two values of the 2-dimensional vector output by the last fully connected layer of network Ψ1, z1 and z2 are the two values of the 2-dimensional vector output by the last fully connected layer of network Ψ2, and i is the correct class label of the current input sample, i = 1 indicating that the input sample is of the changed class and i = 2 indicating that it is of the unchanged class.
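One summand of the claim-6 loss, the two-class cross-entropy over a single network's 2-dimensional output, can be sketched as follows (an illustrative NumPy version using a numerically stable log-softmax):

```python
import numpy as np

def two_class_ce(v, i):
    """Two-class cross-entropy on logits v = (v1, v2) with correct label
    i in {1, 2}: -log softmax(v)[i-1]."""
    v = np.asarray(v, dtype=float)
    # stable log-softmax: v - (logsumexp shifted by the max)
    log_p = v - (np.log(np.sum(np.exp(v - v.max()))) + v.max())
    return float(-log_p[i - 1])

print(round(two_class_ce([0.0, 0.0], 1), 6))  # equal logits: loss = ln 2 = 0.693147
```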
7. The method according to claim 1, wherein the adversarial training on unlabeled data in (5) is implemented as follows:
(5a) the first detection network Ψ1 serves as the generator network P1 and, together with the first discriminator network, forms the first generative adversarial network GAN1, and adversarial training is performed on the unlabeled data:
5a1) real samples and generated samples are fed in turn into the first discriminator network, and the first discriminator loss function is computed according to the labels of the fed samples, where Or1 and Of1 are the two values of the 2-dimensional vector output by the last fully connected layer of the discriminator network, corresponding respectively to a sample being judged real and being judged generated, and y1 is the output of network P1;
5a2) the gradient of the first discriminator loss function is computed, and the parameters of the first discriminator network are updated by back-propagation with gradient descent;
5a3) samples generated by the first generator network P1 are fed into the first discriminator network, and the loss function of the first generator network P1 is computed, where Or1, Of1 and y1 are as defined in 5a1);
5a4) the gradient of the first generator loss function is computed, and the parameters of the first generator network P1 are updated by back-propagation with gradient descent;
(5b) the first five layers of the second detection network Ψ2 are regarded as the generator network P2 and, together with the second discriminator network, form the second generative adversarial network GAN2 for adversarial training:
5b1) real samples and generated samples are fed in turn into the second discriminator network, and the second discriminator loss function is computed according to the labels of the fed samples, where Or2 and Of2 are the two values of the 2-dimensional vector output by the last fully connected layer of the second discriminator network, corresponding respectively to a sample being judged real and being judged generated, and y2 is the output of network P2;
5b2) the gradient of the second discriminator loss function is computed, and the parameters of the second discriminator network are updated by back-propagation of the loss;
5b3) samples generated by the second generator network P2 are fed into the second discriminator network, and the loss function of the second generator network P2 is computed, where Or2, Of2 and y2 are as defined in 5b1);
5b4) the gradient of the second generator loss function is computed, and the parameters of the second generator network P2 are updated by back-propagation with gradient descent.
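The discriminator and generator losses of steps 5a1)-5b4) follow the usual GAN cross-entropy pattern; the patent's exact formulas appear only as images and are not reproduced here, so the NumPy sketch below uses the standard form suggested by the described terms (Or and Of as the two logits output by the discriminator's last fully connected layer):

```python
import numpy as np

def softmax2(o):
    e = np.exp(o - np.max(o))
    return e / e.sum()

def d_loss(o_real, o_fake):
    """Discriminator loss sketch: classify real samples as 'real' (unit 0,
    the Or logit) and generated samples as 'generated' (unit 1, the Of logit).
    Standard cross-entropy form, assumed rather than taken from the patent."""
    return float(-np.log(softmax2(o_real)[0]) - np.log(softmax2(o_fake)[1]))

def g_loss(o_fake):
    """Generator loss sketch: make generated samples be judged 'real'."""
    return float(-np.log(softmax2(o_fake)[0]))

good = d_loss(np.array([4.0, -4.0]), np.array([-4.0, 4.0]))  # confident, correct discriminator
unsure = d_loss(np.zeros(2), np.zeros(2))                    # uninformative discriminator
print(good < unsure)  # True
```

Alternating gradient steps on d_loss and g_loss, as in steps 5a2)/5a4), drive the discriminator and generator against each other.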
8. The method according to claim 1, wherein the co-training on unlabeled data in (5) feeds the unlabeled data simultaneously into the detection dual networks Ψ1 and Ψ2 for the following co-training:
5c) the first detection network Ψ1 extracts class features from the unlabeled data and classifies them with its Softmax classifier, obtaining the pseudo label PL1 and the class probability vector py for each unlabeled sample; the classification confidence of the i-th unlabeled sample is computed as Cony_i = max(py_i), and Cony_i is compared with a preset confidence threshold Ty; samples whose confidence exceeds the threshold are used as training samples for the second detection network Ψ2;
5d) the second detection network Ψ2 extracts pattern features from the unlabeled data and classifies them with its Softmax classifier, obtaining the pseudo label PL2 and the class probability vector pz for each unlabeled sample; the classification confidence of the i-th unlabeled sample is computed as Conz_i = max(pz_i), and Conz_i is compared with a preset confidence threshold Tz; samples whose confidence exceeds the threshold are used as training samples for the first detection network Ψ1.
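The confidence-based sample exchange of claim 8 can be sketched as follows; the probability values and the threshold in the example are made up for illustration:

```python
import numpy as np

def select_confident(probs, threshold):
    """Pseudo-label unlabeled samples by the argmax of their class probabilities
    and keep only those whose confidence max(p_i) exceeds the threshold."""
    probs = np.asarray(probs, dtype=float)
    conf = probs.max(axis=1)           # Con_i = max(p_i)
    keep = conf > threshold
    pseudo = probs.argmax(axis=1) + 1  # 1 = changed class, 2 = unchanged class
    return keep, pseudo

keep, pseudo = select_confident([[0.95, 0.05], [0.55, 0.45], [0.10, 0.90]], 0.8)
print(keep.tolist(), pseudo.tolist())  # [True, False, True] [1, 1, 2]
```

The kept samples and their pseudo labels would then serve as extra training data for the other detection network, as in steps 5c) and 5d).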
CN201910527007.1A 2019-06-18 2019-06-18 SAR image change detection method based on semi-supervised countermeasure depth network Active CN110263845B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910527007.1A CN110263845B (en) 2019-06-18 2019-06-18 SAR image change detection method based on semi-supervised countermeasure depth network

Publications (2)

Publication Number Publication Date
CN110263845A true CN110263845A (en) 2019-09-20
CN110263845B CN110263845B (en) 2023-05-02

Family

ID=67919058

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910527007.1A Active CN110263845B (en) 2019-06-18 2019-06-18 SAR image change detection method based on semi-supervised countermeasure depth network

Country Status (1)

Country Link
CN (1) CN110263845B (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111444955A (en) * 2020-03-25 2020-07-24 哈尔滨工程大学 Underwater sonar image unsupervised classification method based on class consciousness field self-adaption
CN111523422A (en) * 2020-04-15 2020-08-11 北京华捷艾米科技有限公司 Key point detection model training method, key point detection method and device
CN112257855A (en) * 2020-11-26 2021-01-22 Oppo(重庆)智能科技有限公司 Neural network training method and device, electronic equipment and storage medium
CN112285664A (en) * 2020-12-18 2021-01-29 南京信息工程大学 Method for evaluating countermeasure simulation confidence of radar-aircraft system
CN112686305A (en) * 2020-12-29 2021-04-20 深圳龙岗智能视听研究院 Semi-supervised learning method and system under assistance of self-supervised learning
CN112766381A (en) * 2021-01-22 2021-05-07 西安电子科技大学 Attribute-guided SAR image generation method under limited sample
CN112784777A (en) * 2021-01-28 2021-05-11 西安电子科技大学 Unsupervised hyperspectral image change detection method based on antagonistic learning
CN113255451A (en) * 2021-04-25 2021-08-13 西北工业大学 Method and device for detecting change of remote sensing image, electronic equipment and storage medium
CN114301637A (en) * 2021-12-11 2022-04-08 河南大学 Intrusion detection method and system for medical Internet of things
CN114821337A (en) * 2022-05-20 2022-07-29 武汉大学 Semi-supervised SAR image building area extraction method based on time phase consistency pseudo-label
CN114821299A (en) * 2022-03-28 2022-07-29 西北工业大学 Remote sensing image change detection method
CN116127345A (en) * 2022-12-23 2023-05-16 北京科技大学 Converter steelmaking process mode design method based on deep clustering generation countermeasure network

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3246875A2 (en) * 2016-05-18 2017-11-22 Siemens Healthcare GmbH Method and system for image registration using an intelligent artificial agent
CN107563428A (en) * 2017-08-25 2018-01-09 西安电子科技大学 Classification of Polarimetric SAR Image method based on generation confrontation network
CN107977667A (en) * 2016-10-21 2018-05-01 西安电子科技大学 SAR target discrimination methods based on semi-supervised coorinated training
CN108492298A (en) * 2018-04-13 2018-09-04 西安电子科技大学 Based on the multispectral image change detecting method for generating confrontation network
CN108564115A (en) * 2018-03-30 2018-09-21 西安电子科技大学 Semi-supervised polarization SAR terrain classification method based on full convolution GAN
CN109242889A (en) * 2018-08-27 2019-01-18 大连理工大学 SAR image change detection based on context conspicuousness detection and SAE

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HAN W. et al., "A semi-supervised generative framework with deep learning features for high-resolution remote sensing image scene classification", ISPRS Journal of Photogrammetry and Remote Sensing *
Du Lan et al., "SAR image target detection algorithm based on convolutional neural networks", Journal of Electronics & Information Technology *
Hua Wenqiang, "Research on small-sample polarimetric SAR image classification", China Doctoral Dissertations Full-text Database *

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111444955B (en) * 2020-03-25 2022-08-02 哈尔滨工程大学 Underwater sonar image unsupervised classification method based on class consciousness field self-adaption
CN111444955A (en) * 2020-03-25 2020-07-24 哈尔滨工程大学 Underwater sonar image unsupervised classification method based on class consciousness field self-adaption
CN111523422A (en) * 2020-04-15 2020-08-11 北京华捷艾米科技有限公司 Key point detection model training method, key point detection method and device
CN111523422B (en) * 2020-04-15 2023-10-10 北京华捷艾米科技有限公司 Key point detection model training method, key point detection method and device
CN112257855A (en) * 2020-11-26 2021-01-22 Oppo(重庆)智能科技有限公司 Neural network training method and device, electronic equipment and storage medium
CN112257855B (en) * 2020-11-26 2022-08-16 Oppo(重庆)智能科技有限公司 Neural network training method and device, electronic equipment and storage medium
CN112285664A (en) * 2020-12-18 2021-01-29 南京信息工程大学 Method for evaluating countermeasure simulation confidence of radar-aircraft system
CN112285664B (en) * 2020-12-18 2021-04-06 南京信息工程大学 Method for evaluating countermeasure simulation confidence of radar-aircraft system
CN112686305A (en) * 2020-12-29 2021-04-20 深圳龙岗智能视听研究院 Semi-supervised learning method and system under assistance of self-supervised learning
CN112766381B (en) * 2021-01-22 2023-01-24 西安电子科技大学 Attribute-guided SAR image generation method under limited sample
CN112766381A (en) * 2021-01-22 2021-05-07 西安电子科技大学 Attribute-guided SAR image generation method under limited sample
CN112784777A (en) * 2021-01-28 2021-05-11 西安电子科技大学 Unsupervised hyperspectral image change detection method based on antagonistic learning
CN112784777B (en) * 2021-01-28 2023-06-02 西安电子科技大学 Unsupervised hyperspectral image change detection method based on countermeasure learning
CN113255451A (en) * 2021-04-25 2021-08-13 西北工业大学 Method and device for detecting change of remote sensing image, electronic equipment and storage medium
CN113255451B (en) * 2021-04-25 2023-04-07 西北工业大学 Method and device for detecting change of remote sensing image, electronic equipment and storage medium
CN114301637B (en) * 2021-12-11 2022-09-02 河南大学 Intrusion detection method and system for medical Internet of things
CN114301637A (en) * 2021-12-11 2022-04-08 河南大学 Intrusion detection method and system for medical Internet of things
CN114821299A (en) * 2022-03-28 2022-07-29 西北工业大学 Remote sensing image change detection method
CN114821299B (en) * 2022-03-28 2024-03-19 西北工业大学 Remote sensing image change detection method
CN114821337A (en) * 2022-05-20 2022-07-29 武汉大学 Semi-supervised SAR image building area extraction method based on time phase consistency pseudo-label
CN114821337B (en) * 2022-05-20 2024-04-16 武汉大学 Semi-supervised SAR image building area extraction method based on phase consistency pseudo tag
CN116127345A (en) * 2022-12-23 2023-05-16 北京科技大学 Converter steelmaking process mode design method based on deep clustering generation countermeasure network
CN116127345B (en) * 2022-12-23 2023-11-14 北京科技大学 Converter steelmaking process mode design method based on deep clustering generation countermeasure network

Also Published As

Publication number Publication date
CN110263845B (en) 2023-05-02

Similar Documents

Publication Publication Date Title
CN110263845A (en) SAR image change detection based on semi-supervised confrontation depth network
WO2021134871A1 (en) Forensics method for synthesized face image based on local binary pattern and deep learning
US20200285896A1 (en) Method for person re-identification based on deep model with multi-loss fusion training strategy
CN109948425B (en) Pedestrian searching method and device for structure-aware self-attention and online instance aggregation matching
CN106778595B (en) Method for detecting abnormal behaviors in crowd based on Gaussian mixture model
CN104156734B (en) A kind of complete autonomous on-line study method based on random fern grader
CN110287849A Lightweight deep-network image object detection method suitable for Raspberry Pi
CN108537136A (en) The pedestrian's recognition methods again generated based on posture normalized image
CN109299274A (en) A kind of natural scene Method for text detection based on full convolutional neural networks
CN109934200A (en) A kind of RGB color remote sensing images cloud detection method of optic and system based on improvement M-Net
CN104484681B (en) Hyperspectral Remote Sensing Imagery Classification method based on spatial information and integrated study
CN103996047B (en) Hyperspectral image classification method based on squeezed spectra clustering ensemble
CN107133569A (en) The many granularity mask methods of monitor video based on extensive Multi-label learning
CN104504362A (en) Face detection method based on convolutional neural network
CN106096561A (en) Infrared pedestrian detection method based on image block degree of depth learning characteristic
CN109766936A (en) Image change detection method based on information transmitting and attention mechanism
CN105528794A (en) Moving object detection method based on Gaussian mixture model and superpixel segmentation
CN110598693A (en) Ship plate identification method based on fast-RCNN
CN101464950A (en) Video human face identification and retrieval method based on on-line learning and Bayesian inference
CN108492298A (en) Based on the multispectral image change detecting method for generating confrontation network
CN105427309A (en) Multiscale hierarchical processing method for extracting object-oriented high-spatial resolution remote sensing information
CN110263712A (en) A kind of coarse-fine pedestrian detection method based on region candidate
He et al. Object-oriented mangrove species classification using hyperspectral data and 3-D Siamese residual network
CN108257154A (en) Polarimetric SAR Image change detecting method based on area information and CNN
CN108256462A (en) A kind of demographic method in market monitor video

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant