CN107944483A - Multispectral image classification method based on dual-channel DCGAN and feature fusion - Google Patents

Info

Publication number
CN107944483A
CN107944483A (application CN201711144187.2A)
Authority
CN
China
Prior art keywords
image
network
DCGAN
multispectral
deep convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711144187.2A
Other languages
Chinese (zh)
Other versions
CN107944483B (en)
Inventor
焦李成
屈嵘
汶茂宁
马文萍
杨淑媛
侯彪
刘芳
陈璞花
古晶
张丹
唐旭
马晶晶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201711144187.2A priority Critical patent/CN107944483B/en
Publication of CN107944483A publication Critical patent/CN107944483A/en
Application granted granted Critical
Publication of CN107944483B publication Critical patent/CN107944483B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches, based on distances to training or reference patterns
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/10 Character recognition
    • G06V 30/19 Recognition using electronic means
    • G06V 30/192 Recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references
    • G06V 30/194 References adjustable by an adaptive method, e.g. learning

Abstract

The invention discloses a multispectral image classification method based on a dual-channel deep convolutional generative adversarial network DCGAN and feature fusion. The concrete steps are: input the multispectral images; normalize the image of each band of each multispectral image; obtain the multispectral image matrices; obtain the data sets; build the dual-channel DCGAN model; train the dual-channel DCGAN classification model; classify the test data set. The invention introduces a dual-channel generative adversarial network and combines it with feature fusion to extract multi-directional, multispectral high-level feature information, which enhances the feature representation ability and improves the classification effect.

Description

Multispectral image classification method based on dual-channel DCGAN and feature fusion
Technical field
The invention belongs to the technical field of image processing, and further relates to a multispectral image classification method based on a dual-channel deep convolutional generative adversarial network DCGAN (Deep Convolutional Generative Adversarial Networks) and feature fusion in the technical field of multispectral image classification. The invention can be used to classify the ground objects in multispectral images, including water areas, fields, cities, etc.
Background art
A multispectral image is a kind of remote sensing image, obtained by repeatedly imaging the same target in multiple wavelength bands. The applications of multispectral images are increasingly wide, for example in the fields of aeronautics and astronautics, earth mapping, ground detection and disaster detection. Image classification is an important direction in multispectral image research. There are many traditional classification methods for multispectral images, such as support vector machines and decision trees, but most of them require manually designed feature extraction based on the characteristics of the image itself. In recent years, deep learning methods such as convolutional neural networks have shown powerful feature representation ability in the image processing field, reducing the uncertainty of manually designed feature extraction.
In the paper "Multispectral image classification based on texture features and MNF transform" published by Li Ya et al. (Journal of Ordnance Equipment Engineering, 2017, 38(2): 113-117), a multispectral image classification method based on texture features and the minimum noise fraction (MNF) transform is proposed. This method uses the gray-level co-occurrence matrix for feature extraction, then applies the MNF transform to the extracted features, and classifies the MNF components together with the spectral information. Although extracting features with the gray-level co-occurrence matrix can obtain fairly good classification results, this method still has the shortcoming that its feature extraction design relies on manual experience, is complicated and time-consuming, and the combination of features is generally unsuitable for scenes with inconspicuous pixel contrast.
In the patent document "Multispectral remote sensing image classification method based on tensor sparse representation and clustering" applied for by Panzhihua University (application number 201710329412.3, publication number CN107067040A), a multispectral remote sensing image classification method based on tensor sparse representation and clustering is proposed. This method uses an unsupervised clustering algorithm to divide the multispectral remote sensing image into different groups; converts the multispectral images in each group from three-dimensional form into two-dimensional matrices; performs dictionary learning on the two-dimensional matrices to obtain a dictionary that can be used for the sparse representation of each group of multispectral remote sensing images, the sparse representation coefficients and the label of each ground object; trains on the obtained sparse representation coefficients and labels to obtain an optimal classifier; and classifies each pixel of the multispectral remote sensing image with the obtained classifier according to its sparse representation coefficients. Although this method divides the multispectral remote sensing image into different groups by clustering and then sparsely represents each group of data to obtain the final classification results, it still has the shortcoming that the calculation process is cumbersome, and because unsupervised clustering is used, the phenomena of "same object, different spectra" and "different objects, same spectrum" occur, which affects the classification results.
Summary of the invention
The object of the invention is to overcome the shortcomings of the above prior art by proposing a multispectral image classification method based on a dual-channel deep convolutional generative adversarial network (DCGAN) and feature fusion. The invention extracts features with generative adversarial networks and fuses the features extracted from the images of two satellites, so as to improve the classification accuracy.
To achieve the above object, the concrete steps of the invention are as follows:
(1) Input the multispectral images:
Input the multispectral images of five regions imaged by two different satellites; each region includes two multispectral images, where the first multispectral image contains the images of 10 bands and the second multispectral image contains the images of 9 bands;
(2) Normalize the image of each band of each multispectral image;
(3) Obtain the multispectral image matrices:
(3a) Stack the normalized band images of the first multispectral image together to obtain five regional multispectral image matrices of size W1^i × H1^i × 10, i = 1, 2, 3, 4, 5;
(3b) Stack the normalized band images of the second multispectral image together to obtain five regional multispectral image matrices of size W2^i × H2^i × 9, i = 1, 2, 3, 4, 5;
(4) Obtain the data sets:
(4a) Select the labelled pixels from the first multispectral image matrices of the first four regions; with a sliding window of 64 × 64 pixels, divide every class of labelled pixels in the four multispectral image matrices into image pixel blocks of size 64 × 64 × 10; randomly select 10% of the pixel blocks to form the training data set D1, then randomly select 50% of the pixel blocks to form another training data set D1′;
(4b) Select the labelled pixels from the second multispectral image matrices of the first four regions; with a sliding window of 64 × 64 pixels, divide every class of labelled pixels in the four multispectral image matrices into image pixel blocks of size 64 × 64 × 9; randomly select 10% of the pixel blocks to form the training data set D2, then randomly select 50% of the pixel blocks to form another training data set D2′;
(4c) Select the labelled pixels from the first multispectral image matrix of the fifth region; with a sliding window of 64 × 64 pixels, divide every class of labelled pixels in the multispectral image matrix into image pixel blocks of size 64 × 64 × 10, and compose all the image pixel blocks into the test data set V1;
(4d) Select the labelled pixels from the second multispectral image matrix of the fifth region; with a sliding window of 64 × 64 pixels, divide every class of labelled pixels in the multispectral image matrix into image pixel blocks of size 64 × 64 × 9, and compose all the image pixel blocks into the test data set V2;
(5) Build the dual-channel deep convolutional generative adversarial network DCGAN model:
(5a) Build the first-channel DCGAN; the network consists of a 6-layer generation network and a 5-layer discrimination network;
(5b) Build the second-channel DCGAN; the network likewise consists of a 6-layer generation network and a 5-layer discrimination network;
(5c) Vectorize the feature maps extracted by the discrimination network of the first channel and the feature maps extracted by the discrimination network of the second channel, and fuse the two feature vectors to compose the feature fusion layer of the dual-channel DCGAN model;
(5d) Connect a Softmax layer after the feature fusion layer to obtain the dual-channel DCGAN model;
(6) Train the dual-channel DCGAN classification model:
(6a) Input the training data set D1′ into the first-channel DCGAN and train the network with an unsupervised training method;
(6b) Input the training data set D2′ into the second-channel DCGAN and train it with the same training method as the first-channel network;
(6c) Input the training data set D1 into the discrimination network of the trained first-channel network to extract the features S1 of D1; input the training data set D2 into the discrimination network of the trained second-channel network to extract the features S2 of D2; fuse S1 and S2 and input the fused features into the Softmax layer of the dual-channel DCGAN, then perform supervised training for 200 iterations to obtain the trained dual-channel DCGAN classification model;
(7) Classify the test data set:
(7a) Input the test data set V1 into the discrimination network of the first channel of the trained dual-channel DCGAN to extract the features C1 of V1;
(7b) Input the test data set V2 into the discrimination network of the second channel to extract the features C2 of V2;
(7c) Fuse C1 and C2 and input the fused features into the Softmax layer of the dual-channel DCGAN to obtain the final classification results.
Compared with the prior art, the invention has the following advantages:
First, the dual-channel DCGAN model built in the invention extracts the features of the multispectral images with the discrimination networks in the model. This is a self-learning feature extraction method, which overcomes the shortcoming of the uncertainty of manually designed feature extraction in the prior art. The feature extraction method of the invention is not tied to a specific task and can be used for the feature extraction of arbitrary multispectral images, so it has the advantage of wider applicability.
Second, in the dual-channel network built in the invention, the different network channels learn the feature information of different satellites, and the features are then fused. The feature information learned in this way is rich, overcoming the shortcomings of cumbersome manually designed feature extraction steps and limited feature information; the extracted features have the advantage of containing multi-directional, multispectral high-level feature information.
Third, because both unsupervised and supervised training methods are used when training the dual-channel DCGAN classification model, the labelling requirement on the image data is relatively low. This overcomes the shortcoming of the uncertainty of purely unsupervised learning and improves the classification accuracy.
Brief description of the drawings
Fig. 1 is the implementation flow chart of the invention;
Fig. 2 is the manual label map of the image to be classified in the invention;
Fig. 3 is the classification result map obtained on the image to be classified with the invention.
Embodiments
The invention is further described below with reference to the drawings.
Referring to Fig. 1, the steps of the implementation of the invention are described in detail as follows.
Step 1: Input the multispectral images.
Input the multispectral images of five regions imaged by two different satellites. Satellite one is the Sentinel-2 satellite and satellite two is the Landsat-8 satellite; the five regions are berlin, hong_kong, paris, rome and sao_paulo. Each region includes two multispectral images, where the first multispectral image contains the images of 10 bands and the second multispectral image contains the images of 9 bands.
Step 2: Normalize the image of each band of each multispectral image.
The normalization steps are as follows:
First step: Divide each pixel value of each band image in the first multispectral image by the maximum pixel value of that band image to obtain the normalized pixel values; set any pixel value that is less than 0 after normalization to 0, leaving the other pixel values unchanged, to obtain the 10 normalized band images of the first multispectral image.
Second step: Divide each pixel value of each band image in the second multispectral image by the maximum pixel value of that band image to obtain the normalized pixel values; set any pixel value that is less than 0 after normalization to 0, leaving the other pixel values unchanged, to obtain the 9 normalized band images of the second multispectral image.
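The per-band normalization of step 2 can be sketched as follows. This is a minimal sketch, not code from the patent; `normalize_band` is a hypothetical helper name.

```python
import numpy as np

def normalize_band(band):
    """Normalize one band image as described in step 2: divide every pixel
    by the band's maximum pixel value, then set any value below 0 to 0."""
    band = np.asarray(band, dtype=np.float64)
    peak = band.max()
    if peak != 0:
        band = band / peak
    return np.where(band < 0, 0.0, band)

# Applied band by band, a 10-band image becomes 10 normalized 2-D arrays
# whose values lie in [0, 1].
```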
Step 3: Obtain the multispectral image matrices.
Stack the normalized band images of the first multispectral image together to obtain five regional multispectral image matrices of size W1^i × H1^i × 10, i = 1, 2, 3, 4, 5.
Stack the normalized band images of the second multispectral image together to obtain five regional multispectral image matrices of size W2^i × H2^i × 9, i = 1, 2, 3, 4, 5.
Step 4: Obtain the data sets.
Select the labelled pixels from the first multispectral image matrices of the first four regions; with a 64 × 64-pixel sliding window, divide every class of labelled pixels in the four multispectral image matrices into image pixel blocks of size 64 × 64 × 10; randomly select 10% of the pixel blocks to form the training data set D1, then randomly select 50% of the pixel blocks to form another training data set D1′.
Select the labelled pixels from the second multispectral image matrices of the first four regions; with a 64 × 64-pixel sliding window, divide every class of labelled pixels in the four multispectral image matrices into image pixel blocks of size 64 × 64 × 9; randomly select 10% of the pixel blocks to form the training data set D2, then randomly select 50% of the pixel blocks to form another training data set D2′.
Select the labelled pixels from the first multispectral image matrix of the fifth region; with a 64 × 64-pixel sliding window, divide every class of labelled pixels in the multispectral image matrix into image pixel blocks of size 64 × 64 × 10, and compose all the image pixel blocks into the test data set V1.
Select the labelled pixels from the second multispectral image matrix of the fifth region; with a 64 × 64-pixel sliding window, divide every class of labelled pixels in the multispectral image matrix into image pixel blocks of size 64 × 64 × 9, and compose all the image pixel blocks into the test data set V2.
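The block extraction and random sampling of step 4 can be sketched as follows. The patent does not state the exact windowing rule, so this sketch assumes each labelled pixel is the centre of its 64 × 64 window and that windows falling outside the image are skipped; `extract_patches` and `sample_fraction` are hypothetical helper names.

```python
import numpy as np

def extract_patches(cube, labels, patch=64):
    """Cut a band-stacked cube (H x W x C) into patch x patch x C pixel
    blocks around the labelled pixels (labels > 0).  Returns the blocks
    and their class labels."""
    H, W, _ = cube.shape
    half = patch // 2
    blocks, classes = [], []
    for r, c in zip(*np.nonzero(labels)):
        if half <= r <= H - half and half <= c <= W - half:
            blocks.append(cube[r - half:r + half, c - half:c + half, :])
            classes.append(labels[r, c])
    return np.stack(blocks), np.array(classes)

def sample_fraction(blocks, fraction, seed=0):
    """Randomly select a fraction of the pixel blocks without replacement
    (10% for D1 and 50% for D1' in the text)."""
    rng = np.random.default_rng(seed)
    n = max(1, int(round(fraction * len(blocks))))
    idx = rng.choice(len(blocks), size=n, replace=False)
    return blocks[idx]
```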
Step 5: Build the dual-channel DCGAN model.
Build the first-channel DCGAN; the network consists of a 6-layer generation network and a 5-layer discrimination network.
The structure and parameters of the 6-layer generation network are as follows:
The first layer is the noise layer; its input is a 100-dimensional Gaussian vector.
The second layer is the projection layer, which maps the 100-dimensional vector of the noise layer to a tensor of size 4 × 4 × 512.
The third layer is a fractionally-strided convolution layer with 256 convolution kernels, a kernel window size of 5 × 5 and a stride of 2, outputting 256 feature maps.
The fourth layer is a fractionally-strided convolution layer with 128 convolution kernels, a kernel window size of 5 × 5 and a stride of 2, outputting 128 feature maps.
The fifth layer is a fractionally-strided convolution layer with 64 convolution kernels, a kernel window size of 5 × 5 and a stride of 2, outputting 64 feature maps.
The sixth layer is a fractionally-strided convolution layer with 10 convolution kernels, a kernel window size of 5 × 5 and a stride of 2, outputting 10 feature maps.
Each fractionally-strided convolution layer doubles the feature map size.
The structure and parameters of the 5-layer discrimination network are as follows:
The first layer is the input layer, which takes the training data set.
The second layer is a convolution layer with 64 convolution kernels, a kernel window size of 5 × 5 and a stride of 2, outputting 64 feature maps.
The third layer is a convolution layer with 128 convolution kernels, a kernel window size of 5 × 5 and a stride of 2, outputting 128 feature maps.
The fourth layer is a convolution layer with 256 convolution kernels, a kernel window size of 5 × 5 and a stride of 2, outputting 256 feature maps.
The fifth layer is a convolution layer with 512 convolution kernels, a kernel window size of 5 × 5 and a stride of 2, outputting 512 feature maps.
Each convolution layer halves the feature map size.
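The layer dimensions described above can be checked with a small bookkeeping sketch (hypothetical helper names, not from the patent): each stride-2 fractionally-strided convolution doubles the feature-map size, and each stride-2 discriminator convolution halves it.

```python
def generator_shapes(out_bands=10):
    """Feature-map sizes of the 6-layer generation network: 100-d noise,
    a 4x4x512 projection, then four stride-2 fractionally-strided
    convolutions (256, 128, 64, out_bands kernels) that double the size."""
    shapes = [("noise", (100,)), ("projection", (4, 4, 512))]
    size = 4
    for kernels in (256, 128, 64, out_bands):
        size *= 2
        shapes.append(("frac-strided conv", (size, size, kernels)))
    return shapes

def discriminator_shapes(in_shape=(64, 64, 10)):
    """Feature-map sizes of the 5-layer discrimination network: four
    stride-2 5x5 convolutions (64, 128, 256, 512 kernels) halve the size."""
    shapes = [("input", in_shape)]
    size = in_shape[0]
    for kernels in (64, 128, 256, 512):
        size //= 2
        shapes.append(("conv", (size, size, kernels)))
    return shapes
```

With `out_bands=10` the generator ends at 64 × 64 × 10, matching the 10-band input blocks; with `out_bands=9` it matches the second channel.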
Build the second-channel DCGAN; the network likewise consists of a 6-layer generation network and a 5-layer discrimination network.
In the second-channel DCGAN, the first 5 layers of the generation network have the same structure and parameters as the first 5 layers of the first-channel generation network, and the number of convolution kernels of the last fractionally-strided convolution layer is set to 9; each layer of the discrimination network has the same structure and parameters as the corresponding layer of the first-channel discrimination network.
Vectorize the feature maps extracted by the discrimination network of the first channel and the feature maps extracted by the discrimination network of the second channel, and fuse the two feature vectors to compose the feature fusion layer of the dual-channel DCGAN model.
Connect a Softmax layer after the feature fusion layer to obtain the dual-channel DCGAN model.
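The feature fusion layer and Softmax layer amount to flattening each channel's last feature maps, concatenating them, and applying a linear-plus-softmax classifier. A minimal numeric sketch follows; the weight matrix `W` and bias `b` are hypothetical placeholders for the learned Softmax parameters.

```python
import numpy as np

def fuse_and_classify(feat1, feat2, W, b):
    """Vectorize the two channels' feature maps, concatenate them into one
    fused feature vector, and apply a Softmax layer."""
    fused = np.concatenate([np.ravel(feat1), np.ravel(feat2)])
    logits = fused @ W + b
    exp = np.exp(logits - logits.max())   # subtract max for stability
    return exp / exp.sum()

# Under the architecture above each channel ends in 4x4x512 feature maps,
# so the fused vector has 2 * 4 * 4 * 512 = 16384 dimensions, and W maps
# it to the number of ground-object classes (17 in the experiments).
```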
Step 6: Train the dual-channel DCGAN classification model.
Input the training data set D1′ into the first-channel DCGAN and train the network with an unsupervised training method.
The unsupervised training steps are as follows:
First step: Train, without supervision, the discrimination network in the first-channel DCGAN with the training data set D1′.
Second step: Train, without supervision, the generation network in the first-channel DCGAN with Gaussian noise; input the images output by the generation network into the discrimination network, and train the discrimination network again.
Third step: Train the discrimination network and the generation network obtained after the second step by alternating iteration. During training, one side is fixed while the network weights of the other side are updated, and the two sides alternate. In this process, the generation network tries to generate images that are as realistic as possible, while the discrimination network tries to distinguish real images from generated ones, so that an adversarial competition is formed. After the set 500 alternating training iterations, the two sides reach a dynamic balance, and the trained first-channel network is obtained.
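The alternating-iteration scheme of the third step can be sketched as a loop skeleton. `discriminator_step` and `generator_step` stand in for one optimizer update of the respective network while the other is held fixed; they are hypothetical callables, not code from the patent.

```python
import numpy as np

def train_adversarial(real_blocks, discriminator_step, generator_step,
                      iterations=500, seed=0):
    """Alternate between updating the discrimination network (generation
    network fixed) and the generation network (discrimination network
    fixed) for the stated 500 iterations."""
    rng = np.random.default_rng(seed)
    history = []
    for i in range(iterations):
        noise = rng.standard_normal(100)          # 100-d Gaussian input
        real = real_blocks[i % len(real_blocks)]
        d_loss = discriminator_step(real, noise)  # discriminator update
        g_loss = generator_step(noise)            # generator update
        history.append((d_loss, g_loss))
    return history
```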
Input the training data set D2′ into the second-channel DCGAN and train it with the same training method as the first-channel network, obtaining the trained second-channel network.
Input the training data set D1 into the discrimination network of the trained first-channel network to extract the features S1 of D1; input the training data set D2 into the discrimination network of the trained second-channel network to extract the features S2 of D2; fuse S1 and S2 and input the fused features into the Softmax layer of the dual-channel DCGAN, then perform supervised training for 200 iterations to obtain the trained dual-channel DCGAN classification model.
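The 200-iteration supervised stage that fits the Softmax layer on the fused features can be sketched as plain softmax-regression gradient descent. This is a simplification: the patent does not specify the optimizer, and the learning rate `lr` is a hypothetical parameter.

```python
import numpy as np

def train_softmax(features, labels, classes, iterations=200, lr=0.1):
    """Fit a Softmax layer on fused feature vectors by gradient descent
    on the cross-entropy loss."""
    n, d = features.shape
    W = np.zeros((d, classes))
    b = np.zeros(classes)
    onehot = np.eye(classes)[labels]
    for _ in range(iterations):
        logits = features @ W + b
        logits -= logits.max(axis=1, keepdims=True)
        p = np.exp(logits)
        p /= p.sum(axis=1, keepdims=True)
        grad = (p - onehot) / n                   # cross-entropy gradient
        W -= lr * features.T @ grad
        b -= lr * grad.sum(axis=0)
    return W, b
```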
Step 7: Classify the test data set.
Input the test data set V1 into the discrimination network of the first channel of the trained dual-channel DCGAN to extract the features C1 of V1.
Input the test data set V2 into the discrimination network of the second channel to extract the features C2 of V2.
Fuse C1 and C2 and input the fused features into the Softmax layer of the dual-channel DCGAN to obtain the final classification results.
The effect of the invention can be further illustrated by the following simulation experiment.
1. Simulation conditions:
The simulation of the invention was carried out in the hardware environment of a Hewlett-Packard Z840 with 8 GB of memory, and the software environment of Matlab R2014a and TensorFlow.
2. Simulation content:
In the simulation experiment of the invention, the multispectral image data of the four regions berlin, hong_kong, paris and rome, imaged by the sentinel_2 and landsat_8 satellites, are used as the training data sets to train the dual-channel DCGAN; the multispectral image data of the fifth region, sao_paulo, are used as the test data set, and 17-class ground object classification is performed.
Fig. 2 is the ground truth label map of the sao_paulo region. The ground object classes include compact high-rise buildings, compact mid-rise buildings, compact low-rise buildings, open high-rise buildings, open mid-rise buildings, open low-rise buildings, large low-rise buildings, sparsely built areas, heavy industrial areas, dense forest, scattered trees, bushes and shrubs, low vegetation, bare rock, bare soil and sand, and water.
Fig. 3 is the result map obtained by classifying the multispectral image of the sao_paulo region with the method of the invention.
Comparing the classified pixels obtained by the invention in Fig. 3 with the ground truth label pixels in Fig. 2, it can be seen that the classification accuracy obtained with the method of the invention is high.
Simulation experiments of the invention: In simulations 1 and 2, the multispectral image of the sao_paulo region imaged by the sentinel_2 satellite and the multispectral image of the sao_paulo region imaged by the landsat_8 satellite are classified with the single-channel DCGAN classification method of the prior art. In simulation 3, the multispectral image of the sao_paulo region imaged by sentinel_2 and the multispectral image of the sao_paulo region imaged by landsat_8 are classified together with the invention; the result is shown in Fig. 3, and the classification accuracies obtained by the three simulation methods are compared in Table 1.
3. Simulation effect analysis:
Table 1 compares the classification accuracies obtained by the three methods in the simulations. As can be seen from Table 1, inputting the multispectral image data captured by the two satellites into the dual-channel DCGAN for feature extraction improves the classification accuracy compared with inputting the multispectral image data of a single satellite into a single-channel network.
Table 1. Classification accuracies obtained by the three methods in the simulations

Simulation method                                  Classification accuracy
Classification method of the invention             62.872%
Single-channel DCGAN network (sentinel_2 data)     55.635%
Single-channel DCGAN network (landsat_8 data)      54.143%
In conclusion invention introduces binary channels production to resist net, binding characteristic is warm, is extracted multi-direction, more A variety of high-level characteristic information of spectrum, enhance the characteristic present ability of multispectral image, improve classifying quality.

Claims (6)

1. a kind of Classification of Multispectral Images method that net DCGAN and Fusion Features are resisted based on binary channels depth convolution production, It is characterised in that it includes following steps:
(1) multispectral image is inputted:
(1) Input the multispectral images of five regions captured by two different satellites, where each region comprises two multispectral images: the first multispectral image contains the images of 10 wave bands and the second multispectral image contains the images of 9 wave bands;
(2) Normalize the image of each wave band of each multispectral image;
(3) Obtain the multispectral image matrices:
(3a) Stack the normalized band images of the first multispectral image of each region to obtain five regional multispectral image matrices of size W1^i × H1^i × 10, i = 1, 2, 3, 4, 5;
(3b) Stack the normalized band images of the second multispectral image of each region to obtain five regional multispectral image matrices of size W2^i × H2^i × 9, i = 1, 2, 3, 4, 5;
(4) Construct the data sets:
(4a) Select the labeled pixels from the first multispectral image matrices of the first four regions; with a sliding window of 64 × 64 pixels, cut every class of labeled pixels in the four matrices into image pixel blocks of size 64 × 64 × 10; randomly select 10% of the blocks to form training data set D1, and separately select 50% of the blocks at random to form another training data set D1′;
(4b) Select the labeled pixels from the second multispectral image matrices of the first four regions; with a sliding window of 64 × 64 pixels, cut every class of labeled pixels in the four matrices into image pixel blocks of size 64 × 64 × 9; randomly select 10% of the blocks to form training data set D2, and separately select 50% of the blocks at random to form another training data set D2′;
(4c) Select the labeled pixels from the first multispectral image matrix of the fifth region; with a sliding window of 64 × 64 pixels, cut every class of labeled pixels in that matrix into image pixel blocks of size 64 × 64 × 10, and form test data set V1 from all of the blocks;
(4d) Select the labeled pixels from the second multispectral image matrix of the fifth region; with a sliding window of 64 × 64 pixels, cut every class of labeled pixels in that matrix into image pixel blocks of size 64 × 64 × 9, and form test data set V2 from all of the blocks;
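The sliding-window block extraction and random sampling of step (4) can be sketched in NumPy as follows. This is an illustrative reading of the claim, not the patented implementation; the function name `extract_patches`, the reflect padding at image borders, and the toy data are all assumptions:

```python
import numpy as np

def extract_patches(image, labels, patch=64, keep_ratio=0.1, seed=0):
    """Cut a (H, W, bands) image into patch x patch x bands blocks around
    labeled pixels (label > 0) and keep a random fraction of them."""
    rng = np.random.default_rng(seed)
    half = patch // 2
    # pad so windows near the border stay inside the array (an assumption)
    padded = np.pad(image, ((half, half), (half, half), (0, 0)), mode="reflect")
    ys, xs = np.nonzero(labels > 0)
    blocks = np.stack([padded[y:y + patch, x:x + patch, :]
                       for y, x in zip(ys, xs)])
    n_keep = max(1, int(keep_ratio * len(blocks)))
    idx = rng.choice(len(blocks), size=n_keep, replace=False)
    return blocks[idx]

# toy example: a 100 x 100 image with 10 bands and ~30 labeled pixels
img = np.random.rand(100, 100, 10).astype(np.float32)
lab = np.zeros((100, 100), dtype=np.int64)
lab[np.random.default_rng(1).integers(0, 100, 30),
    np.random.default_rng(2).integers(0, 100, 30)] = 1
D1 = extract_patches(img, lab, keep_ratio=0.1)   # ~10% sample, like D1
D1p = extract_patches(img, lab, keep_ratio=0.5)  # ~50% sample, like D1'
print(D1.shape[1:])
```

Each block is 64 × 64 × 10, matching the first-image training blocks of step (4a); using 9 bands instead gives the 64 × 64 × 9 blocks of step (4b).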
(5) Build the dual-channel deep convolutional generative adversarial network (DCGAN) model:
(5a) Build the first-channel deep convolutional generative adversarial network DCGAN, composed of a 6-layer generation network and a 5-layer discrimination network;
(5b) Build the second-channel deep convolutional generative adversarial network DCGAN, likewise composed of a 6-layer generation network and a 5-layer discrimination network;
(5c) Vectorize the feature maps extracted by the discrimination network of the first channel and the feature maps extracted by the discrimination network of the second channel, and fuse the two vectors to form the feature fusion layer of the dual-channel DCGAN model;
(5d) Connect a Softmax layer after the feature fusion layer to obtain the dual-channel DCGAN model;
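Steps (5c)–(5d) — vectorizing the two discriminators' feature maps, fusing them, and attaching a Softmax layer — can be sketched in NumPy. The 4 × 4 × 512 feature-map size follows the discriminator structure claimed later; the class count and the randomly initialized weights are hypothetical:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fuse_and_classify(f1, f2, W, b):
    """Vectorize the two discriminator feature maps, concatenate them,
    and push the fused vector through a Softmax classifier."""
    v1 = f1.reshape(f1.shape[0], -1)          # (N, 4*4*512)
    v2 = f2.reshape(f2.shape[0], -1)          # (N, 4*4*512)
    fused = np.concatenate([v1, v2], axis=1)  # feature fusion layer
    return softmax(fused @ W + b)             # Softmax layer

rng = np.random.default_rng(0)
n, classes = 4, 5                            # hypothetical class count
f1 = rng.standard_normal((n, 4, 4, 512))     # channel-1 discriminator output
f2 = rng.standard_normal((n, 4, 4, 512))     # channel-2 discriminator output
W = rng.standard_normal((2 * 4 * 4 * 512, classes)) * 0.01
b = np.zeros(classes)
p = fuse_and_classify(f1, f2, W, b)
print(p.shape)  # one probability row per input block
```

In the claimed method, W and b would be learned during the 200 supervised iterations of step (6c); here they are random placeholders so the shapes can be checked.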
(6) Train the dual-channel DCGAN classification model:
(6a) Input training data set D1′ into the first-channel DCGAN and train it with an unsupervised training method;
(6b) Input training data set D2′ into the second-channel DCGAN and train it with the same training method used for the first channel;
(6c) Input training data set D1 into the discrimination network of the trained first-channel network to extract its feature S1, and input training data set D2 into the discrimination network of the trained second-channel network to extract its feature S2; fuse feature S1 with feature S2, input the fused feature into the Softmax layer of the dual-channel DCGAN, and perform supervised training for 200 iterations to obtain the trained dual-channel DCGAN classification model;
(7) Classify the test data sets:
(7a) Input test data set V1 into the discrimination network of the first channel of the trained dual-channel DCGAN to extract its feature C1;
(7b) Input test data set V2 into the discrimination network of the second channel to extract its feature C2;
(7c) Fuse feature C1 with feature C2, and input the fused feature into the Softmax layer of the dual-channel DCGAN to obtain the final classification result.
2. The multispectral image classification method based on dual-channel DCGAN and feature fusion according to claim 1, characterized in that the normalization in step (2) proceeds as follows:
Step 1: Divide each pixel value of each wave-band image in the first multispectral image by the maximum pixel value of that band image to obtain the normalized pixel values; set normalized pixel values below 0 to 0 and leave the other pixel values unchanged, obtaining the images of the 10 normalized bands of the first multispectral image;
Step 2: Divide each pixel value of each wave-band image in the second multispectral image by the maximum pixel value of that band image to obtain the normalized pixel values; set normalized pixel values below 0 to 0 and leave the other pixel values unchanged, obtaining the images of the 9 normalized bands of the second multispectral image.
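A minimal NumPy sketch of this per-band normalization (divide by the band maximum, then set negative values to 0); the function name and the toy two-band image are illustrative:

```python
import numpy as np

def normalize_bands(image):
    """Divide each band of a (H, W, bands) image by its own maximum
    and clamp negative results to zero, as in Steps 1-2 above."""
    out = image.astype(np.float64).copy()
    for band in range(out.shape[-1]):
        m = out[..., band].max()
        if m > 0:                              # guard against empty bands
            out[..., band] /= m
        np.clip(out[..., band], 0.0, None, out=out[..., band])
    return out

# 1 x 2 pixel image with 2 bands; band 0 max is 200, band 1 max is 10
img = np.array([[[200.0, -5.0], [100.0, 10.0]]])
norm = normalize_bands(img)
print(norm)  # band maxima become 1.0, the negative value becomes 0.0
```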
3. The multispectral image classification method based on dual-channel DCGAN and feature fusion according to claim 1, characterized in that the structure and parameters of the 6-layer generation network of the first-channel DCGAN in step (5a) are as follows:
Layer 1 is the noise layer, whose input is a 100-dimensional Gaussian vector;
Layer 2 is the mapping layer, which maps the 100-dimensional vector of the noise layer to a tensor of size 4 × 4 × 512;
Layer 3 is a fractionally-strided convolutional layer with 256 convolution kernels of window size 5 × 5 and stride 2, outputting 256 feature maps;
Layer 4 is a fractionally-strided convolutional layer with 128 convolution kernels of window size 5 × 5 and stride 2, outputting 128 feature maps;
Layer 5 is a fractionally-strided convolutional layer with 64 convolution kernels of window size 5 × 5 and stride 2, outputting 64 feature maps;
Layer 6 is a fractionally-strided convolutional layer with 10 convolution kernels of window size 5 × 5 and stride 2, outputting 10 feature maps.
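Assuming "same"-style padding, each stride-2 fractionally-strided convolution doubles the spatial size, so this generation network grows the 4 × 4 × 512 tensor into a 64 × 64 × 10 output that matches the training blocks of claim 1. A small script verifying that shape progression (the padding convention is an assumption, since the claim does not state it):

```python
# Shape progression through the claimed 6-layer generation network.
layers = [
    ("noise",        None),  # 100-dimensional Gaussian vector
    ("project",      512),   # mapped and reshaped to 4 x 4 x 512
    ("deconv 5x5/2", 256),   # fractionally-strided conv, stride 2
    ("deconv 5x5/2", 128),
    ("deconv 5x5/2", 64),
    ("deconv 5x5/2", 10),    # 10 bands for the first channel (9 for the second)
]
size, shapes = 4, []
for name, channels in layers[1:]:
    if name.startswith("deconv"):
        size *= 2            # stride-2 upsampling doubles the spatial size
    shapes.append((name, size, channels))
print(shapes[-1])
```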
4. The multispectral image classification method based on dual-channel DCGAN and feature fusion according to claim 1, characterized in that the structure and parameters of the 5-layer discrimination network of the first-channel DCGAN in step (5a) are as follows:
Layer 1 is the input layer, which receives the training data set;
Layer 2 is a convolutional layer with 64 convolution kernels of window size 5 × 5 and stride 2, outputting 64 feature maps;
Layer 3 is a convolutional layer with 128 convolution kernels of window size 5 × 5 and stride 2, outputting 128 feature maps;
Layer 4 is a convolutional layer with 256 convolution kernels of window size 5 × 5 and stride 2, outputting 256 feature maps;
Layer 5 is a convolutional layer with 512 convolution kernels of window size 5 × 5 and stride 2, outputting 512 feature maps.
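Under the same assumption of "same"-style padding, each 5 × 5 stride-2 convolution halves the spatial size, so a 64 × 64 × 10 input block ends as the 4 × 4 × 512 feature map that step (5c) vectorizes for fusion:

```python
# Shape progression through the claimed 5-layer discrimination network.
size, channels = 64, 10                    # 64 x 64 x 10 input block
for out_channels in (64, 128, 256, 512):   # convolutional layers 2-5
    size //= 2                             # stride 2 halves the spatial size
    channels = out_channels
print((size, size, channels))              # feature map to be vectorized
```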
5. The multispectral image classification method based on dual-channel DCGAN and feature fusion according to claim 1, characterized in that, in the 6-layer generation network and 5-layer discrimination network of the second-channel DCGAN in step (5b), the first 5 layers of the generation network have the same structure and parameters as the first 5 layers of the generation network in the first-channel DCGAN, and the number of convolution kernels of the last fractionally-strided convolutional layer is set to 9; each layer of the discrimination network has the same structure and parameters as the corresponding layer of the discrimination network in the first-channel DCGAN.
6. The multispectral image classification method based on dual-channel DCGAN and feature fusion according to claim 1, characterized in that the unsupervised training method in step (6a) proceeds as follows:
Step 1: Use training data set D1′ to train, without supervision, the discrimination network of the first-channel DCGAN;
Step 2: Use Gaussian noise to train, without supervision, the generation network of the first-channel DCGAN, input the output images of the generation network into the discrimination network, and retrain the discrimination network;
Step 3: Alternately iterate the training of the discrimination network and the generation network of Step 2 for 500 iterations to obtain the trained first-channel network.
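The alternation in Steps 1–3 can be outlined as follows. The update functions are placeholders (a real implementation would compute adversarial losses in a deep-learning framework); only the alternating structure, the 100-dimensional Gaussian noise, and the 500 iterations come from the claims:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_discriminator(d_state, real_batch, fake_batch):
    return d_state + 1          # placeholder for one discriminator update

def train_generator(g_state, noise):
    return g_state + 1          # placeholder for one generator update

def generate(g_state, noise):
    return noise                # placeholder for the generated images

d_state, g_state = 0, 0
D1_prime = rng.standard_normal((16, 64, 64, 10))   # unlabeled training blocks
for _ in range(500):                               # 500 alternating iterations
    noise = rng.standard_normal((16, 100))         # 100-d Gaussian vectors
    fake = generate(g_state, noise)
    d_state = train_discriminator(d_state, D1_prime, fake)  # D step
    g_state = train_generator(g_state, noise)               # G step
print(d_state, g_state)
```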
CN201711144187.2A 2017-11-17 2017-11-17 Multispectral image classification method based on dual-channel DCGAN and feature fusion Active CN107944483B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711144187.2A CN107944483B (en) 2017-11-17 2017-11-17 Multispectral image classification method based on dual-channel DCGAN and feature fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711144187.2A CN107944483B (en) 2017-11-17 2017-11-17 Multispectral image classification method based on dual-channel DCGAN and feature fusion

Publications (2)

Publication Number Publication Date
CN107944483A true CN107944483A (en) 2018-04-20
CN107944483B CN107944483B (en) 2020-02-07

Family

ID=61932768

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711144187.2A Active CN107944483B (en) 2017-11-17 2017-11-17 Multispectral image classification method based on dual-channel DCGAN and feature fusion

Country Status (1)

Country Link
CN (1) CN107944483B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108765512A (en) * 2018-05-30 2018-11-06 Tsinghua University Shenzhen Graduate School Adversarial image generation method based on multi-level features
CN109034224A (en) * 2018-07-16 2018-12-18 Xidian University Hyperspectral classification method based on dual-branch networks
CN109145992A (en) * 2018-08-27 2019-01-04 Xidian University Hyperspectral image classification method combining cooperative generative adversarial networks with spatial-spectral features
CN109360146A (en) * 2018-08-22 2019-02-19 State Grid Gansu Electric Power Company Dual-light image fusion model based on the deep convolutional generative adversarial network DCGAN
CN109635702A (en) * 2018-07-11 2019-04-16 Central Station of Forest Pest Control, State Forestry Administration Forestry biological hazard monitoring method and system based on satellite remote sensing images
CN110647927A (en) * 2019-09-18 2020-01-03 Changsha University of Science and Technology ACGAN-based semi-supervised image classification algorithm
CN111062403A (en) * 2019-12-26 2020-04-24 Harbin Institute of Technology Deep spectral feature extraction method for hyperspectral remote sensing data based on one-dimensional grouped convolutional neural networks
CN117253122A (en) * 2023-11-17 2023-12-19 Yunnan University Corn seed near-variety screening method, device, equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106023154A (en) * 2016-05-09 2016-10-12 Northwestern Polytechnical University Multi-temporal SAR image change detection method based on a dual-channel convolutional neural network (CNN)
CN106682616A (en) * 2016-12-28 2017-05-17 Nanjing University of Posts and Telecommunications Newborn painful-expression recognition method based on dual-channel-feature deep learning
CN106845381A (en) * 2017-01-16 2017-06-13 Northwestern Polytechnical University Spatial-spectral hyperspectral image classification method based on dual-channel convolutional neural networks
CN106997380A (en) * 2017-03-21 2017-08-01 Beijing University of Technology Imaging-spectrum secure retrieval method based on DCGAN deep networks
CN107273938A (en) * 2017-07-13 2017-10-20 Xidian University Multi-source remote sensing image terrain classification method based on dual-channel convolutional ladder networks
CN107292336A (en) * 2017-06-12 2017-10-24 Xidian University Polarimetric SAR image classification method based on DCGAN
AU2017101166A4 (en) * 2017-08-25 2017-11-02 Lai, Haodong MR A Method For Real-Time Image Style Transfer Based On Conditional Generative Adversarial Networks


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
DAOYU LIN et al.: "MARTA GANs: Unsupervised Representation Learning for Remote Sensing Image Classification", IEEE Geoscience and Remote Sensing Letters *
LU CHEN et al.: "Deep Spectral-Spatial Feature Extraction Based on DCGAN for Hyperspectral Image Retrieval", 2017 IEEE 15th Intl Conf on Dependable, Autonomic and Secure Computing, 15th Intl Conf on Pervasive Intelligence and Computing, 3rd Intl Conf on Big Data Intelligence and Computing and Cyber Science and Technology Congress *
LYU Jing et al.: "Infrared action recognition method based on adaptive fusion of dual-channel features", Journal of Chongqing University of Posts and Telecommunications (Natural Science Edition) *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108765512A (en) * 2018-05-30 2018-11-06 Tsinghua University Shenzhen Graduate School Adversarial image generation method based on multi-level features
CN108765512B (en) * 2018-05-30 2022-04-12 Tsinghua University Shenzhen Graduate School Adversarial image generation method based on multi-level features
CN109635702A (en) * 2018-07-11 2019-04-16 Central Station of Forest Pest Control, State Forestry Administration Forestry biological hazard monitoring method and system based on satellite remote sensing images
CN109034224A (en) * 2018-07-16 2018-12-18 Xidian University Hyperspectral classification method based on dual-branch networks
CN109034224B (en) * 2018-07-16 2022-03-11 Xidian University Hyperspectral classification method based on dual-branch networks
CN109360146A (en) * 2018-08-22 2019-02-19 State Grid Gansu Electric Power Company Dual-light image fusion model based on the deep convolutional generative adversarial network DCGAN
CN109145992A (en) * 2018-08-27 2019-01-04 Xidian University Hyperspectral image classification method combining cooperative generative adversarial networks with spatial-spectral features
CN109145992B (en) * 2018-08-27 2021-07-20 Xidian University Hyperspectral image classification method combining cooperative generative adversarial networks with spatial-spectral features
CN110647927A (en) * 2019-09-18 2020-01-03 Changsha University of Science and Technology ACGAN-based semi-supervised image classification algorithm
CN111062403A (en) * 2019-12-26 2020-04-24 Harbin Institute of Technology Deep spectral feature extraction method for hyperspectral remote sensing data based on one-dimensional grouped convolutional neural networks
CN117253122A (en) * 2023-11-17 2023-12-19 Yunnan University Corn seed near-variety screening method, device, equipment and storage medium
CN117253122B (en) * 2023-11-17 2024-01-23 Yunnan University Corn seed near-variety screening method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN107944483B (en) 2020-02-07

Similar Documents

Publication Publication Date Title
CN107944483A (en) Classification of Multispectral Images method based on binary channels DCGAN and Fusion Features
Basu et al. Deepsat: a learning framework for satellite imagery
CN104077599B (en) Polarization SAR image classification method based on deep neural network
CN110334765A (en) Remote Image Classification based on the multiple dimensioned deep learning of attention mechanism
CN104517284B (en) Polarimetric SAR Image segmentation based on depth confidence net
CN108717568A (en) A kind of image characteristics extraction and training method based on Three dimensional convolution neural network
CN107832797A (en) Classification of Multispectral Images method based on depth integration residual error net
CN108830330B (en) Multispectral image classification method based on self-adaptive feature fusion residual error network
CN110097103A (en) Based on the semi-supervision image classification method for generating confrontation network
CN107392130A (en) Classification of Multispectral Images method based on threshold adaptive and convolutional neural networks
CN104123555B (en) Super-pixel polarimetric SAR land feature classification method based on sparse representation
Postadjian et al. Investigating the potential of deep neural networks for large-scale classification of very high resolution satellite images
CN109255364A (en) A kind of scene recognition method generating confrontation network based on depth convolution
CN107016405A (en) A kind of insect image classification method based on classification prediction convolutional neural networks
CN106446942A (en) Crop disease identification method based on incremental learning
CN106845418A (en) A kind of hyperspectral image classification method based on deep learning
CN104408483B (en) SAR texture image classification methods based on deep neural network
CN107229917A (en) A kind of several remote sensing image general character well-marked target detection methods clustered based on iteration
CN108133173A (en) Classification of Polarimetric SAR Image method based on semi-supervised ladder network
CN106485259B (en) A kind of image classification method based on high constraint high dispersive principal component analysis network
CN104680180B (en) Classification of Polarimetric SAR Image method based on K mean values and sparse own coding
CN107292336A (en) A kind of Classification of Polarimetric SAR Image method based on DCGAN
CN103208001A (en) Remote sensing image processing method combined with shape self-adaption neighborhood and texture feature extraction
CN107766828A (en) UAV Landing Geomorphological Classification method based on wavelet convolution neutral net
CN110516728A (en) Polarization SAR terrain classification method based on denoising convolutional neural networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant