CN107944483B - Multispectral image classification method based on dual-channel DCGAN and feature fusion - Google Patents

Multispectral image classification method based on dual-channel DCGAN and feature fusion

Info

Publication number
CN107944483B
CN107944483B (granted from application CN201711144187.2A)
Authority
CN
China
Prior art keywords
network
image
layer
channel
dcgan
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711144187.2A
Other languages
Chinese (zh)
Other versions
CN107944483A (en)
Inventor
焦李成
屈嵘
汶茂宁
马文萍
杨淑媛
侯彪
刘芳
陈璞花
古晶
张丹
唐旭
马晶晶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an University of Electronic Science and Technology (Xidian University)
Original Assignee
Xi'an University of Electronic Science and Technology (Xidian University)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an University of Electronic Science and Technology (Xidian University)
Priority claimed from CN201711144187.2A
Publication of CN107944483A
Application granted
Publication of CN107944483B
Status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/19Recognition using electronic means
    • G06V30/192Recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references
    • G06V30/194References adjustable by an adaptive method, e.g. learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a multispectral image classification method based on a dual-channel deep convolutional generative adversarial network (DCGAN) and feature fusion, which comprises the following specific steps: inputting a multispectral image; normalizing the image of each band of each multispectral image; obtaining a multispectral image matrix; obtaining a data set; constructing a dual-channel DCGAN model; training a dual-channel DCGAN classification model; and classifying the test data set. The method introduces a dual-channel generative adversarial network combined with feature fusion, extracts rich high-level, multi-directional and multispectral feature information, enhances the feature characterization capability, and improves the classification effect.

Description

Multispectral image classification method based on dual-channel DCGAN and feature fusion
Technical Field
The invention belongs to the technical field of image processing, and further relates to a multispectral image classification method based on a dual-channel deep convolutional generative adversarial network (DCGAN) and feature fusion in the technical field of multispectral image classification. The method can be used to classify ground objects, including water areas, fields, cities and the like, in multispectral images.
Background
A multispectral image is a kind of remote sensing image, obtained by imaging the same target in a plurality of wave bands. Multispectral images are applied ever more widely, for example in the fields of aviation, aerospace ground detection, geodetic surveying and mapping, and disaster monitoring. Image classification is an important direction in the study of multispectral images. There are many traditional classification methods for multispectral images, such as support vector machines and decision trees, but most of them require features to be artificially designed and extracted according to the characteristics of the images. In recent years, deep learning methods such as convolutional neural networks have shown powerful feature characterization capability in the field of image processing and reduce the uncertainty of artificially designed feature extraction.
Leya Kun et al., in their paper "Multispectral remote sensing image classification based on texture features and MNF transform" (Journal of Ordnance Equipment Engineering, 2017, 38(2): 113-), proposed a classification method that uses the gray-level co-occurrence matrix to extract features, applies a minimum noise fraction (MNF) transform to the extracted features, and classifies the MNF components jointly with the spectral information. Although this method can obtain good classification results, it still has shortcomings: its feature-extraction design depends on human experience, is complex and time-consuming, and the chosen feature combination is generally unsuitable for scenes with low pixel contrast.
Panzhihua University, in its patent application "Multispectral remote sensing image classification method based on tensor sparse representation and clustering" (application number 201710329412.3, publication number CN107067040A), proposed a method that first divides the multispectral remote sensing image into different groups with an unsupervised clustering algorithm; converts the multispectral images in each group from three-dimensional form into a two-dimensional matrix; performs dictionary learning on the two-dimensional matrix to obtain a dictionary for sparsely representing each group of multispectral remote sensing images, the sparse representation coefficients, and the mark of each ground-object class; trains an optimal classifier on the obtained sparse representation coefficients and marks; and classifies the pixels of the multispectral remote sensing image with the trained classifier according to their sparse representation coefficients. However, the method still has drawbacks: the calculation process is complicated, and because an unsupervised clustering method is used, the phenomena of the same object having different spectra and different objects having the same spectrum affect the classification result.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a multispectral image classification method based on a dual-channel deep convolutional generative adversarial network (DCGAN) and feature fusion.
In order to achieve the purpose, the method comprises the following specific steps:
(1) inputting a multispectral image:
inputting multispectral images of five regions captured by two different satellites, each region comprising two multispectral images: a first multispectral image comprising images of 10 bands and a second multispectral image comprising images of 9 bands;
(2) normalizing the image of each band of each multispectral image;
(3) obtaining a multispectral image matrix:
(3a) stacking the normalized images of the different bands in the first multispectral image to obtain multispectral image matrices of size W1^i × H1^i × 10 for the five regions, i = 1, 2, 3, 4, 5;
(3b) stacking the normalized images of the different bands in the second multispectral image to obtain multispectral image matrices of size W2^i × H2^i × 9 for the five regions, i = 1, 2, 3, 4, 5;
(4) acquiring a data set:
(4a) selecting pixels with class marks from the first multispectral image matrices of the first four regions, dividing each class of marked pixels in the four multispectral image matrices into image pixel blocks of size 64 × 64 × 10 using a sliding window of 64 × 64 pixels, randomly selecting 10% of the pixel blocks to form a training data set D1, and then randomly selecting 50% of the pixel blocks to form another training data set D1′;
(4b) selecting pixels with class marks from the second multispectral image matrices of the first four regions, dividing each class of marked pixels in the four multispectral image matrices into image pixel blocks of size 64 × 64 × 9 using a sliding window of 64 × 64 pixels, randomly selecting 10% of the pixel blocks to form a training data set D2, and then randomly selecting 50% of the pixel blocks to form another training data set D2′;
(4c) selecting pixels with class marks from the first multispectral image matrix of the fifth region, dividing each class of marked pixels in the multispectral image matrix into image pixel blocks of size 64 × 64 × 10 using a sliding window of 64 × 64 pixels, all the image pixel blocks forming a test data set V1;
(4d) selecting pixels with class marks from the second multispectral image matrix of the fifth region, dividing each class of marked pixels in the multispectral image matrix into image pixel blocks of size 64 × 64 × 9 using a sliding window of 64 × 64 pixels, all the image pixel blocks forming a test data set V2;
(5) constructing the dual-channel deep convolutional generative adversarial network (DCGAN) model:
(5a) constructing the first-channel DCGAN, which consists of a 6-layer generation network and a 5-layer discrimination network;
(5b) constructing the second-channel DCGAN, which consists of a 6-layer generation network and a 5-layer discrimination network;
(5c) vectorizing the feature map extracted by the discrimination network of the first channel, vectorizing the feature map extracted by the discrimination network of the second channel, and fusing the two feature vectors to form the feature fusion layer of the dual-channel DCGAN model;
(5d) connecting a Softmax layer after the feature fusion layer to obtain the dual-channel DCGAN model;
(6) training the dual-channel DCGAN classification model:
(6a) inputting the training data set D1′ into the first-channel DCGAN and training the network with an unsupervised training method;
(6b) inputting the training data set D2′ into the second-channel DCGAN and training it in the same way as the first-channel network;
(6c) inputting the training data set D1 into the discrimination network of the trained first-channel network and extracting the features S1 of D1; inputting the training data set D2 into the discrimination network of the trained second-channel network and extracting the features S2 of D2; fusing the features S1 and S2, inputting the fused features into the Softmax layer of the dual-channel DCGAN, and performing 200 iterations of supervised training to obtain the trained dual-channel DCGAN classification model;
(7) classifying the test data set:
(7a) inputting the test data set V1 into the discrimination network of the first channel of the trained dual-channel DCGAN and extracting the features C1 of V1;
(7b) inputting the test data set V2 into the discrimination network of the second channel and extracting the features C2 of V2;
(7c) fusing the features C1 and C2 and inputting the fused features into the Softmax layer of the dual-channel DCGAN to obtain the final classification result.
Compared with the prior art, the invention has the following advantages:
First, because a dual-channel deep convolutional generative adversarial network (DCGAN) model is built and the discrimination network in the model is used to extract the features of the multispectral image, the method is a self-learning feature extraction method. It overcomes the uncertainty of artificially designed feature extraction in the prior art, is not tailored to any particular image type, can extract features from arbitrary multispectral images, and therefore has wider applicability.
Second, in the dual-channel DCGAN model built by the invention, the two network channels learn the feature information of different satellites before feature fusion, so the learned feature information is rich. This overcomes the tedious feature-extraction steps and limited feature information of artificially designed features, and the extracted features carry rich high-level, multi-directional and multispectral information.
Third, both unsupervised and supervised training are adopted when training the dual-channel DCGAN classification model, so the requirement for labelled image data is low; this overcomes the uncertainty of purely unsupervised learning and improves the classification accuracy.
Drawings
FIG. 1 is a flow chart of an implementation of the present invention;
FIG. 2 is a diagram of manual labeling of images to be classified in the present invention;
fig. 3 is a diagram of the classification result of an image to be classified by using the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
The steps for the implementation of the present invention are described in detail below with reference to fig. 1.
Step 1, inputting a multispectral image.
Multispectral images of five regions captured by two different satellites, Sentinel-2 and Landsat-8, are input. The five regions are berlin, hong_kong, paris, rome and sao_paulo. Each region comprises two multispectral images: the first multispectral image comprises images of 10 bands and the second comprises images of 9 bands.
Step 2, normalizing the image of each band of each multispectral image.
The normalization processing steps are as follows:
Step 1: divide each pixel value in the image of each band of the first multispectral image by the maximum pixel value of that band's image to obtain normalized pixel values; set any normalized pixel value less than 0 to 0 and leave the rest unchanged, obtaining the normalized images of the 10 band images in the first multispectral image.
Step 2: divide each pixel value in the image of each band of the second multispectral image by the maximum pixel value of that band's image to obtain normalized pixel values; set any normalized pixel value less than 0 to 0 and leave the rest unchanged, obtaining the normalized images of the 9 band images in the second multispectral image.
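The per-band normalization described in steps 1 and 2 above can be sketched in numpy as follows (an illustrative sketch; the function name is not part of the patent):

```python
import numpy as np

def normalize_band(band):
    """Divide every pixel by the band's maximum pixel value, then set
    any normalized value below 0 to 0, leaving the rest unchanged."""
    out = band.astype(np.float64) / band.max()
    out[out < 0] = 0.0
    return out
```

Applying this function to each of the 10 (or 9) band images yields the normalized images stacked in step 3.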
Step 3, acquiring the multispectral image matrices.
The normalized images of the different bands in the first multispectral image are stacked to obtain multispectral image matrices of size W1^i × H1^i × 10 for the five regions, i = 1, 2, 3, 4, 5.
The normalized images of the different bands in the second multispectral image are stacked to obtain multispectral image matrices of size W2^i × H2^i × 9 for the five regions, i = 1, 2, 3, 4, 5.
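Stacking the normalized band images into a single W × H × C array can be done with numpy's `stack` (the spatial size below is a hypothetical stand-in for a region's W1^i and H1^i):

```python
import numpy as np

# 10 normalized single-band images of one region (hypothetical 128 x 96 size)
bands = [np.random.rand(128, 96) for _ in range(10)]
# stack along a new last axis to obtain the W x H x 10 multispectral matrix
cube = np.stack(bands, axis=-1)
```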
Step 4, acquiring the data sets.
Pixels with class marks are selected from the first multispectral image matrices of the first four regions; each class of marked pixels in the four multispectral image matrices is divided into image pixel blocks of size 64 × 64 × 10 using a sliding window of 64 × 64 pixels; 10% of the pixel blocks are randomly selected to form the training data set D1, and then 50% of the pixel blocks are randomly selected to form another training data set D1′.
Pixels with class marks are selected from the second multispectral image matrices of the first four regions; each class of marked pixels in the four multispectral image matrices is divided into image pixel blocks of size 64 × 64 × 9 using a sliding window of 64 × 64 pixels; 10% of the pixel blocks are randomly selected to form the training data set D2, and then 50% of the pixel blocks are randomly selected to form another training data set D2′.
Pixels with class marks are selected from the first multispectral image matrix of the fifth region; each class of marked pixels in the multispectral image matrix is divided into image pixel blocks of size 64 × 64 × 10 using a sliding window of 64 × 64 pixels; all the image pixel blocks form the test data set V1.
Pixels with class marks are selected from the second multispectral image matrix of the fifth region; each class of marked pixels in the multispectral image matrix is divided into image pixel blocks of size 64 × 64 × 9 using a sliding window of 64 × 64 pixels; all the image pixel blocks form the test data set V2.
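The sliding-window block extraction of step 4 can be sketched as follows; the patent does not state how border pixels are handled, so this sketch simply skips marked pixels whose 64 × 64 window would fall outside the image (an assumption):

```python
import numpy as np

def extract_patches(image, labels, patch=64):
    """Cut a (patch x patch x C) block centred on every labelled pixel.
    `labels` holds 0 for unlabelled pixels and a class id otherwise."""
    h = patch // 2
    blocks, marks = [], []
    for r, c in zip(*np.nonzero(labels)):
        # skip windows that would leave the image (border handling assumed)
        if h <= r < image.shape[0] - h and h <= c < image.shape[1] - h:
            blocks.append(image[r - h:r + h, c - h:c + h, :])
            marks.append(labels[r, c])
    return np.array(blocks), np.array(marks)
```

Randomly sampling 10% and 50% of the returned blocks would then give the D and D′ training splits described above.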
Step 5, constructing the dual-channel deep convolutional generative adversarial network (DCGAN) model.
The first-channel deep convolutional generative adversarial network (DCGAN) is constructed; it consists of a 6-layer generation network and a 5-layer discrimination network.
The structure and parameters of the 6-layer generation network are as follows:
the first layer is a noise layer with a 100-dimensional Gaussian vector as input;
the second layer is a mapping layer obtained by projecting the 100-dimensional noise vector, with size 4 × 4 × 512;
the third layer is a fractionally-strided (micro-step) convolution layer with 256 convolution kernels, a 5 × 5 kernel window and a stride of 2, outputting 256 feature maps;
the fourth layer is a fractionally-strided convolution layer with 128 convolution kernels, a 5 × 5 kernel window and a stride of 2, outputting 128 feature maps;
the fifth layer is a fractionally-strided convolution layer with 64 convolution kernels, a 5 × 5 kernel window and a stride of 2, outputting 64 feature maps;
the sixth layer is a fractionally-strided convolution layer with 10 convolution kernels, a 5 × 5 kernel window and a stride of 2, outputting 10 feature maps.
Each fractionally-strided convolution layer doubles the spatial size of the feature map.
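A quick bookkeeping sketch confirms the generator geometry described above: starting from the 4 × 4 × 512 mapping layer, each stride-2 fractionally-strided convolution doubles the spatial size, ending at a 64 × 64 × 10 output that matches the training pixel blocks:

```python
# channel counts after each fractionally-strided convolution layer
plan = [256, 128, 64, 10]
size, shapes = 4, [(4, 4, 512)]        # the 4 x 4 x 512 mapping layer
for ch in plan:
    size *= 2                          # stride-2 deconvolution doubles H and W
    shapes.append((size, size, ch))
```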
The structure and parameters of the 5-layer discrimination network are as follows:
the first layer is an input layer that receives the training data set;
the second layer is a convolution layer with 64 convolution kernels, a 5 × 5 kernel window and a stride of 2, outputting 64 feature maps;
the third layer is a convolution layer with 128 convolution kernels, a 5 × 5 kernel window and a stride of 2, outputting 128 feature maps;
the fourth layer is a convolution layer with 256 convolution kernels, a 5 × 5 kernel window and a stride of 2, outputting 256 feature maps;
the fifth layer is a convolution layer with 512 convolution kernels, a 5 × 5 kernel window and a stride of 2, outputting 512 feature maps.
Each convolution layer halves the spatial size of the feature map.
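The discriminator geometry can be checked the same way: each stride-2 convolution halves the spatial size, so a 64 × 64 input patch ends as a 4 × 4 × 512 feature map, which is later vectorized for feature fusion:

```python
# channel counts of the four stride-2 convolution layers
size, shapes = 64, []                  # 64 x 64 input pixel block
for ch in (64, 128, 256, 512):
    size //= 2                         # stride-2 convolution halves H and W
    shapes.append((size, size, ch))
```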
The second-channel deep convolutional generative adversarial network (DCGAN) is constructed; it likewise consists of a 6-layer generation network and a 5-layer discrimination network.
In the second-channel DCGAN, the first 5 layers of the generation network have the same structure and parameters as the first 5 layers of the generation network in the first-channel DCGAN; the last fractionally-strided convolution layer has 9 convolution kernels, a 5 × 5 kernel window and a stride of 2. Every layer of the discrimination network has the same structure and parameters as the corresponding layer of the discrimination network in the first-channel DCGAN.
The feature map extracted by the discrimination network of the first channel is vectorized, the feature map extracted by the discrimination network of the second channel is vectorized, and the two feature vectors are fused to form the feature fusion layer of the dual-channel DCGAN model.
A Softmax layer is connected after the feature fusion layer to obtain the dual-channel DCGAN model.
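A minimal numpy sketch of this fusion-plus-Softmax head (the `weight` and `bias` arrays stand in for trained Softmax-layer parameters, which the patent does not specify):

```python
import numpy as np

def fuse_and_classify(f1, f2, weight, bias):
    """Vectorize the two discriminators' final feature maps, concatenate
    them (the feature fusion layer), and apply a softmax classifier."""
    fused = np.concatenate([f1.ravel(), f2.ravel()])
    logits = fused @ weight + bias
    e = np.exp(logits - logits.max())  # numerically stable softmax
    return e / e.sum()
```

With two 4 × 4 × 512 discriminator feature maps, the fused vector has 2 × 8192 = 16384 entries.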
Step 6, training the dual-channel DCGAN classification model.
The training data set D1′ is input into the first-channel DCGAN, and the network is trained with an unsupervised training method.
The unsupervised training method comprises the following steps:
First, the discrimination network in the first-channel DCGAN is trained with the training data set D1′.
Second, the generation network in the first-channel DCGAN is trained without supervision from Gaussian noise; the images output by the generation network are input into the discrimination network to train the discrimination network.
Third, the discrimination network and the generation network trained in the previous steps are trained alternately and iteratively: one side is fixed while the other side's network weights are updated, and the two alternate. In this process the generation network tries to generate images that look real while the discrimination network tries to recognize whether an image is real or generated, forming a competitive adversarial game. After 500 alternating training iterations, the two sides reach a dynamic balance, and the trained first-channel network is obtained.
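The alternating scheme of the third step can be summarized as the following control-flow sketch; the stub counters stand in for real gradient updates of the DCGAN losses:

```python
def alternate_training(iterations=500):
    """One alternating iteration = fix the generator and update the
    discriminator, then fix the discriminator and update the generator."""
    d_updates = g_updates = 0
    for _ in range(iterations):
        d_updates += 1   # placeholder: discriminator step on real + generated blocks
        g_updates += 1   # placeholder: generator step to fool the discriminator
    return d_updates, g_updates
```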
The training data set D2′ is input into the second-channel DCGAN, and the second-channel network is trained in the same way as the first-channel network.
The training data set D1 is input into the discrimination network of the trained first-channel network, and the features S1 of D1 are extracted; the training data set D2 is input into the discrimination network of the trained second-channel network, and the features S2 of D2 are extracted. The features S1 and S2 are fused, the fused features are input into the Softmax layer of the dual-channel DCGAN, and 200 iterations of supervised training are performed to obtain the trained dual-channel DCGAN classification model.
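The supervised stage trains only the Softmax layer on the fused features. A plain gradient-descent sketch on the cross-entropy loss (the patent does not name its optimizer, so the update rule and learning rate are assumptions):

```python
import numpy as np

def train_softmax(features, labels, classes, epochs=200, lr=0.1):
    """Train a Softmax classifier on fixed (fused) features for `epochs`
    iterations of batch gradient descent on the cross-entropy loss."""
    w = np.zeros((features.shape[1], classes))
    b = np.zeros(classes)
    onehot = np.eye(classes)[labels]
    for _ in range(epochs):
        logits = features @ w + b
        e = np.exp(logits - logits.max(axis=1, keepdims=True))
        p = e / e.sum(axis=1, keepdims=True)
        grad = (p - onehot) / len(labels)   # d(loss)/d(logits)
        w -= lr * features.T @ grad
        b -= lr * grad.sum(axis=0)
    return w, b
```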
Step 7, classifying the test data set.
The test data set V1 is input into the discrimination network of the first channel of the trained dual-channel DCGAN, and the features C1 of V1 are extracted.
The test data set V2 is input into the discrimination network of the second channel, and the features C2 of V2 are extracted.
The features C1 and C2 are fused and input into the Softmax layer of the dual-channel DCGAN to obtain the final classification result.
The effect of the invention can be further illustrated by the following simulation experiment:
1. Simulation conditions:
The simulation was carried out on a Hewlett-Packard Z840 workstation with 8 GB of memory, in a software environment of Matlab R2014a and TensorFlow.
2. Simulation content:
the simulation experiment of the invention is that multispectral image data of four areas, namely Berlin berlin, hong Kong _ kong, Paris paris and Rorome, shot and imaged by a satellite sentinel _2 and a satellite landsat _8 are used as training data sets to train a two-channel deep convolution generation type confrontation network, and multispectral image data of an area of Sa Paulo5 is used as a test data set to classify 17 types of land features.
Fig. 2 is the ground-truth label map of the sao_paulo area. The ground-object classes include compact high-rise buildings, compact mid-rise buildings, compact low-rise buildings, open high-rise buildings, open mid-rise buildings, open low-rise buildings, large low-rise buildings, sparsely built areas, heavy industrial areas, dense forest, scattered trees, bushes and shrubs, low vegetation, bare rock, bare soil and sand, and water.
Fig. 3 shows the result of classifying the multispectral image of the sao_paulo area with the method of the present invention.
Comparing the classified pixels obtained by the method of the invention in Fig. 3 with the ground-truth labelled pixels in Fig. 2 shows that the classification result obtained by the method of the invention has high accuracy.
The simulation experiments are as follows. In Simulation 1 and Simulation 2, the multispectral image of the sao_paulo area captured by the Sentinel-2 satellite and the multispectral image of the sao_paulo area captured by the Landsat-8 satellite were each classified with the prior-art single-channel DCGAN classification method. In Simulation 3, both multispectral images of the sao_paulo area were classified with the method of the invention; the result is shown in Fig. 3. The classification accuracies obtained by the three simulations are compared in Table 1.
3. Simulation effect analysis:
table 1 shows the comparison of the classification accuracy obtained in the simulation by the three methods, and as can be seen from table 1, the present invention inputs multispectral image data obtained by two satellites for shooting into a two-channel deep convolution generation type countermeasure network to extract features, and compared with a single-channel network which processes multispectral image data obtained by a single satellite for inputting into a single-channel network, the present invention has the advantage of improving the classification accuracy.
Table 1. Classification accuracies obtained by the three methods in the simulation

Simulation method                               Classification accuracy
Classification method of the invention          62.872%
Single-channel DCGAN network (Sentinel-2 data)  55.635%
Single-channel DCGAN network (Landsat-8 data)   54.143%
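The accuracies in Table 1 are overall accuracies, i.e. the fraction of labelled test pixels whose predicted class equals the ground-truth mark; as a sketch:

```python
import numpy as np

def overall_accuracy(pred, truth):
    """Fraction of test pixels whose predicted class matches the mark."""
    pred, truth = np.asarray(pred), np.asarray(truth)
    return float((pred == truth).mean())
```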
In conclusion, the method introduces a dual-channel generative adversarial network combined with feature fusion, extracts rich high-level, multi-directional and multispectral feature information, enhances the feature characterization capability for multispectral images, and improves the classification effect.

Claims (6)

1. A multispectral image classification method based on a dual-channel deep convolutional generative adversarial network (DCGAN) and feature fusion, characterized by comprising the following steps:
(1) inputting a multispectral image:
inputting multispectral images of five regions captured by two different satellites, each region comprising two multispectral images: a first multispectral image comprising images of 10 bands and a second multispectral image comprising images of 9 bands;
(2) normalizing the image of each band of each multispectral image;
(3) obtaining a multispectral image matrix:
(3a) stacking the normalized images of the different bands in the first multispectral image to obtain multispectral image matrices of size W1^i × H1^i × 10 for the five regions, i = 1, 2, 3, 4, 5;
(3b) stacking the normalized images of the different bands in the second multispectral image to obtain multispectral image matrices of size W2^i × H2^i × 9 for the five regions, i = 1, 2, 3, 4, 5;
(4) acquiring the data sets:
(4a) selecting the pixels with class labels from the first multispectral image matrices of the first four regions, dividing each class of labelled pixels in the four multispectral image matrices into image pixel blocks of size 64 × 64 × 10 using a sliding window of 64 × 64 pixels, randomly selecting 10% of the pixel blocks to form a training data set D1, and then randomly selecting 50% of the pixel blocks to form another training data set D1′;
(4b) selecting the pixels with class labels from the second multispectral image matrices of the first four regions, dividing each class of labelled pixels in the four multispectral image matrices into image pixel blocks of size 64 × 64 × 9 using a sliding window of 64 × 64 pixels, randomly selecting 10% of the pixel blocks to form a training data set D2, and then randomly selecting 50% of the pixel blocks to form another training data set D2′;
(4c) selecting the pixels with class labels from the first multispectral image matrix of the fifth region, dividing each class of labelled pixels in the multispectral image matrix into image pixel blocks of size 64 × 64 × 10 using a sliding window of 64 × 64 pixels, all of the image pixel blocks forming a test data set V1;
(4d) selecting the pixels with class labels from the second multispectral image matrix of the fifth region, dividing each class of labelled pixels in the multispectral image matrix into image pixel blocks of size 64 × 64 × 9 using a sliding window of 64 × 64 pixels, all of the image pixel blocks forming a test data set V2;
(5) constructing the two-channel deep convolutional generative adversarial network (DCGAN) model:
(5a) constructing the first-channel DCGAN, the network consisting of a 6-layer generation network and a 5-layer discrimination network;
(5b) constructing the second-channel DCGAN, the network consisting of a 6-layer generation network and a 5-layer discrimination network;
(5c) vectorizing the feature map extracted by the discrimination network of the first channel, vectorizing the feature map extracted by the discrimination network of the second channel, and fusing the two vectorized feature vectors to form the feature fusion layer of the two-channel DCGAN model;
(5d) connecting a Softmax layer after the feature fusion layer to obtain the two-channel DCGAN model;
(6) training the two-channel DCGAN classification model:
(6a) inputting the training data set D1′ into the first-channel DCGAN and training the network with an unsupervised training method;
(6b) inputting the training data set D2′ into the second-channel DCGAN and training it in the same way as the first-channel network;
(6c) inputting the training data set D1 into the discrimination network of the trained first-channel network to extract the features S1 of D1, inputting the training data set D2 into the discrimination network of the trained second-channel network to extract the features S2 of D2, fusing the features S1 and S2, inputting the fused features into the Softmax layer of the two-channel DCGAN, and performing 200 iterations of supervised training to obtain the trained two-channel DCGAN classification model;
(7) classifying the test data sets:
(7a) inputting the test data set V1 into the discrimination network of the first channel of the trained two-channel DCGAN to extract the features C1 of V1;
(7b) inputting the test data set V2 into the discrimination network of the second channel to extract the features C2 of V2;
(7c) fusing the features C1 and C2 and inputting the fused features into the Softmax layer of the two-channel DCGAN to obtain the final classification result.
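The sliding-window block extraction of step (4) can be sketched as follows. This is one illustrative NumPy reading of the claim; the window-centring convention, the function and variable names, and the sampling details are assumptions, not the patent's disclosure.

```python
import numpy as np

def labelled_patches(img, labels, win=64, frac=0.10, seed=0):
    # Cut win x win x B blocks around labelled pixels ("pixels with class
    # marks"), then randomly keep `frac` of them (10% for D1, 50% for D1').
    rng = np.random.default_rng(seed)
    half = win // 2
    blocks = []
    ys, xs = np.nonzero(labels)                  # coordinates of labelled pixels
    for y, x in zip(ys, xs):
        # keep only windows lying fully inside the image
        if half <= y <= img.shape[0] - half and half <= x <= img.shape[1] - half:
            blocks.append(img[y - half:y + half, x - half:x + half, :])
    blocks = np.stack(blocks)                    # (n, win, win, B)
    n_keep = max(1, int(frac * len(blocks)))
    keep = rng.choice(len(blocks), size=n_keep, replace=False)
    return blocks[keep]

# Toy example: a 10x10 3-band image with a 3x3 labelled patch, win=4
img = np.arange(10 * 10 * 3, dtype=float).reshape(10, 10, 3)
lab = np.zeros((10, 10), dtype=int)
lab[4:7, 4:7] = 1                                # 9 labelled pixels
patches = labelled_patches(img, lab, win=4, frac=0.5)
```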
2. The multispectral image classification method based on a dual-channel deep convolutional generative adversarial network (DCGAN) and feature fusion according to claim 1, wherein the normalization in step (2) is performed as follows:
step 1: dividing each pixel value of the image of each band in the first multispectral image by the maximum pixel value of that band image to obtain the normalized pixel values of the band image, setting the normalized pixel values less than 0 to 0 while leaving the other pixel values unchanged, thereby obtaining the normalized images of the 10 band images in the first multispectral image;
step 2: dividing each pixel value of the image of each band in the second multispectral image by the maximum pixel value of that band image to obtain the normalized pixel values of the band image, setting the normalized pixel values less than 0 to 0 while leaving the other normalized pixel values unchanged, thereby obtaining the normalized images of the 9 band images in the second multispectral image.
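The per-band normalization of claim 2 amounts to the following sketch (assuming each band is a 2-D array of raw sensor values that may contain negatives; the function name is hypothetical):

```python
import numpy as np

def normalize_band(band):
    out = band.astype(np.float64) / band.max()   # divide by the band maximum
    out[out < 0] = 0.0                           # clip negative values to 0
    return out                                   # other values unchanged

band = np.array([[-50.0, 0.0],
                 [500.0, 1000.0]])
norm = normalize_band(band)                      # values now in [0, 1]
```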
3. The multispectral image classification method based on a dual-channel deep convolutional generative adversarial network (DCGAN) and feature fusion according to claim 1, wherein the structure and parameters of the 6-layer generation network in the first-channel DCGAN of step (5a) are as follows:
the first layer is a noise layer, with a 100-dimensional Gaussian vector as input;
the second layer is a mapping layer obtained by mapping the 100-dimensional vector of the noise layer, with a size of 4 × 4 × 512;
the third layer is a fractionally-strided (micro-step) convolutional layer with 256 convolution kernels, a kernel window size of 5 × 5 and a stride of 2, outputting 256 feature maps;
the fourth layer is a fractionally-strided convolutional layer with 128 convolution kernels, a kernel window size of 5 × 5 and a stride of 2, outputting 128 feature maps;
the fifth layer is a fractionally-strided convolutional layer with 64 convolution kernels, a kernel window size of 5 × 5 and a stride of 2, outputting 64 feature maps;
the sixth layer is a fractionally-strided convolutional layer with 10 convolution kernels, a kernel window size of 5 × 5 and a stride of 2, outputting 10 feature maps.
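The spatial sizes of the claim-3 generation network can be walked through as follows, under the assumption of TensorFlow-style 'SAME' padding, where each stride-2 fractionally-strided convolution doubles the spatial size (the convention of the original DCGAN implementation; the patent does not state the padding scheme):

```python
noise_dim = 100                        # layer 1: Gaussian noise vector
shape = (4, 4, 512)                    # layer 2: mapped and reshaped noise
for n_kernels in (256, 128, 64, 10):   # layers 3-6: 5x5 kernels, stride 2
    shape = (shape[0] * 2, shape[1] * 2, n_kernels)
# final output: one synthetic 64x64 patch with 10 spectral bands
print(shape)
```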
4. The multispectral image classification method based on a dual-channel deep convolutional generative adversarial network (DCGAN) and feature fusion according to claim 1, wherein the structure and parameters of the 5-layer discrimination network in the first-channel DCGAN of step (5a) are as follows:
the first layer is an input layer, which receives the training data set;
the second layer is a convolutional layer with 64 convolution kernels, a kernel window size of 5 × 5 and a stride of 2, outputting 64 feature maps;
the third layer is a convolutional layer with 128 convolution kernels, a kernel window size of 5 × 5 and a stride of 2, outputting 128 feature maps;
the fourth layer is a convolutional layer with 256 convolution kernels, a kernel window size of 5 × 5 and a stride of 2, outputting 256 feature maps;
the fifth layer is a convolutional layer with 512 convolution kernels, a kernel window size of 5 × 5 and a stride of 2, outputting 512 feature maps.
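The claim-4 discrimination network can be walked through the same way, again assuming 'SAME' padding so that each stride-2 5 × 5 convolution halves the spatial size (rounding up), as is common in DCGAN implementations:

```python
import math

shape = (64, 64, 10)                    # layer 1: one 64x64x10 input block
for n_kernels in (64, 128, 256, 512):   # layers 2-5: 5x5 kernels, stride 2
    shape = (math.ceil(shape[0] / 2), math.ceil(shape[1] / 2), n_kernels)
# final feature map 4x4x512; vectorized length per channel:
features = shape[0] * shape[1] * shape[2]
```

Under this assumption each channel contributes an 8192-dimensional vector, so the fused feature vector of step (5c) would be 16384-dimensional.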
5. The multispectral image classification method based on a dual-channel deep convolutional generative adversarial network (DCGAN) and feature fusion according to claim 1, wherein in step (5b) the second-channel DCGAN comprises a 6-layer generation network and a 5-layer discrimination network; the first 5 layers of the generation network have the same structure and parameters as the first 5 layers of the generation network in the first-channel DCGAN, and the last fractionally-strided convolutional layer has 9 convolution kernels, a kernel window size of 5 × 5 and a stride of 2; the structure and parameters of each layer of the discrimination network are the same as those of the discrimination network in the first-channel DCGAN.
6. The multispectral image classification method based on a dual-channel deep convolutional generative adversarial network (DCGAN) and feature fusion according to claim 1, wherein the unsupervised training method of step (6a) comprises the following steps:
step 1: unsupervised training of the discrimination network in the first-channel DCGAN with the training data set D1′;
step 2: unsupervised training of the generation network in the first-channel DCGAN with Gaussian noise, inputting the output images of the generation network into the discrimination network, and training the discrimination network;
step 3: alternately and iteratively training the discrimination network and the generation network obtained in step 2 for 500 iterations to obtain the trained first-channel network.
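The claim-6 schedule can be sketched as a skeleton. The two update functions are stubs standing in for real discriminator/generator gradient steps; only the ordering and iteration counts follow the claim, and all names are hypothetical.

```python
def train_channel(d_step, g_step, n_alt=500):
    d_step()                 # step 1: pre-train D on the real patches of D1'
    g_step()                 # step 2: train G from Gaussian noise ...
    d_step()                 # ... and train D on G's generated images
    for _ in range(n_alt):   # step 3: 500 alternating D/G iterations
        d_step()
        g_step()

# Count how often each stub is invoked, to verify the schedule.
counts = {"d": 0, "g": 0}
train_channel(lambda: counts.__setitem__("d", counts["d"] + 1),
              lambda: counts.__setitem__("g", counts["g"] + 1))
```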
CN201711144187.2A 2017-11-17 2017-11-17 Multispectral image classification method based on dual-channel DCGAN and feature fusion Active CN107944483B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711144187.2A CN107944483B (en) 2017-11-17 2017-11-17 Multispectral image classification method based on dual-channel DCGAN and feature fusion


Publications (2)

Publication Number Publication Date
CN107944483A CN107944483A (en) 2018-04-20
CN107944483B true CN107944483B (en) 2020-02-07

Family

ID=61932768

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711144187.2A Active CN107944483B (en) 2017-11-17 2017-11-17 Multispectral image classification method based on dual-channel DCGAN and feature fusion

Country Status (1)

Country Link
CN (1) CN107944483B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108765512B (en) * 2018-05-30 2022-04-12 清华大学深圳研究生院 Confrontation image generation method based on multi-level features
CN109086685A (en) * 2018-07-11 2018-12-25 国家林业局森林病虫害防治总站 Forestry biological hazards monitoring method and system based on satellite remote sensing images
CN109034224B (en) * 2018-07-16 2022-03-11 西安电子科技大学 Hyperspectral classification method based on double branch network
CN109360146A (en) * 2018-08-22 2019-02-19 国网甘肃省电力公司 The double light image Fusion Models for generating network DCGAN are fought based on depth convolution
CN109145992B (en) * 2018-08-27 2021-07-20 西安电子科技大学 Hyperspectral image classification method for cooperatively generating countermeasure network and spatial spectrum combination
CN110647927A (en) * 2019-09-18 2020-01-03 长沙理工大学 ACGAN-based image semi-supervised classification algorithm
CN111062403B (en) * 2019-12-26 2022-11-22 哈尔滨工业大学 Hyperspectral remote sensing data depth spectral feature extraction method based on one-dimensional group convolution neural network
CN117253122B (en) * 2023-11-17 2024-01-23 云南大学 Corn seed approximate variety screening method, device, equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106023154A (en) * 2016-05-09 2016-10-12 西北工业大学 Multi-temporal SAR image change detection method based on dual-channel convolutional neural network (CNN)
CN106682616A (en) * 2016-12-28 2017-05-17 南京邮电大学 Newborn-painful-expression recognition method based on dual-channel-characteristic deep learning
CN106845381A (en) * 2017-01-16 2017-06-13 西北工业大学 Sky based on binary channels convolutional neural networks composes united hyperspectral image classification method
CN106997380A (en) * 2017-03-21 2017-08-01 北京工业大学 Imaging spectrum safe retrieving method based on DCGAN depth networks
CN107273938A (en) * 2017-07-13 2017-10-20 西安电子科技大学 Multi-source Remote Sensing Images terrain classification method based on binary channels convolution ladder net
CN107292336A (en) * 2017-06-12 2017-10-24 西安电子科技大学 A kind of Classification of Polarimetric SAR Image method based on DCGAN
AU2017101166A4 (en) * 2017-08-25 2017-11-02 Lai, Haodong MR A Method For Real-Time Image Style Transfer Based On Conditional Generative Adversarial Networks


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Deep Spectral-Spatial Feature Extraction Based on DCGAN for Hyperspectral Image Retrieval;Lu Chen 等;《2017 IEEE 15th Intl Conf on Dependable,Autonomic and Secure Computing,15th Intl Conf on Pervasive Intelligence and Computing,3rd Intl Conf on Big Data Intelligence and Computing and Cyber Science and Technology Congress》;20171110;第752-759页 *
MARTA GANs:Unsupervised Representation Learning for Remote Sensing Image Classification;Daoyu Lin 等;《IEEE Geoscience and Remote Sensing Letters》;20171005;第14卷(第11期);第2092-2096页 *
Infrared action recognition method based on dual-channel feature adaptive fusion; Lü Jing et al.; Journal of Chongqing University of Posts and Telecommunications (Natural Science Edition); 20170630; Vol. 29, No. 3; pp. 389-395 *

Also Published As

Publication number Publication date
CN107944483A (en) 2018-04-20

Similar Documents

Publication Publication Date Title
CN107944483B (en) Multispectral image classification method based on dual-channel DCGAN and feature fusion
CN109993220B (en) Multi-source remote sensing image classification method based on double-path attention fusion neural network
CN108830330B (en) Multispectral image classification method based on self-adaptive feature fusion residual error network
CN110084159B (en) Hyperspectral image classification method based on combined multistage spatial spectrum information CNN
CN110334765B (en) Remote sensing image classification method based on attention mechanism multi-scale deep learning
CN110728192B (en) High-resolution remote sensing image classification method based on novel characteristic pyramid depth network
CN107832797B (en) Multispectral image classification method based on depth fusion residual error network
CN109598306B (en) Hyperspectral image classification method based on SRCM and convolutional neural network
CN110348399B (en) Hyperspectral intelligent classification method based on prototype learning mechanism and multidimensional residual error network
CN110929736B (en) Multi-feature cascading RGB-D significance target detection method
CN103955702A (en) SAR image terrain classification method based on depth RBF network
CN112308152B (en) Hyperspectral image ground object classification method based on spectrum segmentation and homogeneous region detection
CN104268593A (en) Multiple-sparse-representation face recognition method for solving small sample size problem
CN109753996B (en) Hyperspectral image classification method based on three-dimensional lightweight depth network
CN110852369B (en) Hyperspectral image classification method combining 3D/2D convolutional network and adaptive spectrum unmixing
CN115116054B (en) Multi-scale lightweight network-based pest and disease damage identification method
CN105184314B (en) Wrapper formula EO-1 hyperion band selection methods based on pixel cluster
Doi et al. The effect of focal loss in semantic segmentation of high resolution aerial image
CN113705641B (en) Hyperspectral image classification method based on rich context network
CN112200123B (en) Hyperspectral open set classification method combining dense connection network and sample distribution
CN104239902A (en) Hyper-spectral image classification method based on non-local similarity and sparse coding
CN103646256A (en) Image characteristic sparse reconstruction based image classification method
CN108256557B (en) Hyperspectral image classification method combining deep learning and neighborhood integration
CN115564996A (en) Hyperspectral remote sensing image classification method based on attention union network
CN114299398B (en) Small sample remote sensing image classification method based on self-supervision contrast learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant