CN113469077A - PolSAR data compression crop classification method based on NCSAE


Info

Publication number
CN113469077A
CN113469077A
Authority
CN
China
Prior art keywords
layer
encoder
polarization
convolution
crop
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110767810.XA
Other languages
Chinese (zh)
Inventor
张伟涛
王敏
郭交
楼顺天
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN202110767810.XA priority Critical patent/CN113469077A/en
Publication of CN113469077A publication Critical patent/CN113469077A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a PolSAR data compression crop classification method based on NCSAE. The method comprises the following implementation steps: (1) generating a self-encoder network training set; (2) setting the objective function E of the non-negative constraint sparse self-encoder NCSAE; (3) training the non-negative constraint sparse self-encoder; (4) compressing the data to be classified with the non-negative constraint sparse self-encoder; (5) generating a crop pixel classification network training set; (6) constructing a multi-scale feature classification network; (7) training the multi-scale feature classification network; (8) testing the multi-scale feature classification network. By using the multi-scale feature classification network, the invention overcomes the problem that the prior art can only extract a single feature from a single channel, and improves the classification accuracy of similar crops.

Description

PolSAR data compression crop classification method based on NCSAE
Technical Field
The invention belongs to the technical field of image processing, and further relates to a polarimetric synthetic aperture radar (PolSAR) data compression crop classification method based on a non-negative constraint sparse self-encoder NCSAE (Sparse Auto-Encoder with Non-negativity Constraints) in the field of remote sensing image recognition. The method can be used to identify and classify crop types in multi-temporal images acquired by polarimetric synthetic aperture radar PolSAR.
Background
Crop classification plays an important role in remote sensing monitoring of agricultural conditions and is a precondition for further monitoring crop growth and estimating crop yield. Timely and accurate acquisition of crop type, area and spatial distribution information can provide a scientific basis for the reasonable adjustment of agricultural structure. Polarimetric synthetic aperture radar PolSAR, as an active remote sensing technique in the microwave remote sensing system, can acquire PolSAR data all day and in all weather, so research on classifying crops with PolSAR data has attracted increasing attention. However, most current crop classification studies use single-phase PolSAR data, which can only capture crop growth information in one period and does not make full use of the scattering characteristics that crops exhibit in different growth periods, so high classification accuracy cannot be obtained. Existing PolSAR-based crop classification methods are mainly divided into supervised and unsupervised classification methods.
Sun Yat-sen University (Zhongshan University) proposed an unsupervised classification method for polarimetric synthetic aperture radar PolSAR images based on target scattering identification in the patent document "POLSAR image unsupervised classification method based on target scattering identification" (patent application No. 201210222987.2, filing date: 2012.06.29, publication No. CN 102799896A). The method first computes the polarization scattering entropy of the PolSAR image and the similarity parameters of surface scattering, double-bounce scattering and volume scattering, and uses these parameters to initially divide the PolSAR image into categories; it then selects the minimum antenna received power characteristic polarization of ground objects dominated by surface scattering as the antenna polarization state, computes the antenna received power of each pixel and the class center of each category, computes the polarization scattering difference measure of each pixel, assigns the pixel to the category with the smallest difference measure, and finally checks whether the termination condition is met. The shortcoming of this method is that, when measuring the differences of the polarization scattering characteristics of the PolSAR image, the computed polarization scattering characteristics are easily disturbed by noise, so the difference measure is inaccurate and a good classification result cannot be achieved.
Zhuang Z et al., in the paper "Crops Classification from Sentinel-2A Multi-spectral Remote Sensing Based on Convolutional Neural Networks" (IGARSS 2018), proposed a convolutional-neural-network-based crop classification method for multispectral remote sensing images. The method first resamples the multispectral remote sensing image acquired from Sentinel-2A, converts the resampled image into grey-scale images, and finally identifies the corresponding category of each grey-scale image with a convolutional neural network CNN to obtain the classification result. The shortcoming of this method is that the convolutional neural network can only extract features from a single channel; for crops with similar scattering characteristics, the weakly differing feature information between them cannot be extracted effectively, so the classification performance is poor when there are many crop types or the types are similar.
Disclosure of Invention
The purpose of the invention is to provide an NCSAE-based PolSAR data compression crop classification method that addresses the shortcomings of the prior art, namely that the extracted features suffer from noise interference and feature redundancy and that the prior art cannot classify similar crops.
To achieve this purpose, the idea of the invention is as follows. A non-negative constraint sparse self-encoder is constructed whose objective function adds a sparsity constraint and a non-negativity constraint to the minimum mean-square-error function; the features obtained by polarization decomposition are input into the non-negative constraint sparse self-encoder for compression, and the sparsity and non-negativity constraint terms of the objective function constrain the features so that redundant features and noise-contaminated features are removed, and the resulting compressed matrix contains no redundant features and is not affected by noise interference. The invention further constructs a multi-scale feature classification network in which the convolution layers of the trunk and the branches of the second module use different convolution kernel sizes to generate features at different scales, which makes the network better at extracting the fine difference information between the features of similar crops in the PolSAR image, thereby solving the problem of classifying similar crops.
The steps of the invention comprise:
step 1, generating a self-encoder network training set:
(1a) randomly selecting 1% of pixel points in each PolSAR image from at least two time-phase PolSAR images obtained from a topographic map of the same area at least containing three types of crops, and forming all the selected pixel points into a sample set;
(1b) preprocessing each sample in the sample set;
(1c) carrying out polarization characteristic decomposition on each time phase of the preprocessed sample to obtain the polarization characteristic of the time phase;
(1d) all phase polarization features are combined into a Q x K matrix, each row of the matrix representing one sample, each column of the matrix representing one feature, Q representing the total number of sample sets, K representing the total number of all phase polarization features,
K = \sum_{t=1}^{T} F_t
where T represents the total number of time phases, T ≥ 2, and F_t represents the total number of polarization features of the t-th time phase;
(1e) taking the matrix with the combined polarization characteristics as a training set;
step 2, setting an objective function E of the non-negative constraint sparse self-encoder NCSAE as follows:
E = \frac{1}{2Q}\sum_{q=1}^{Q}\sum_{i=1}^{I}\big(x_i(q)-z_i(q)\big)^2 + \alpha\sum_{l=2}^{L}\sum_{m=1}^{M}\sum_{n=1}^{N} f\big(w_{mn}^{l}\big) + \beta\sum_{j=1}^{J}\Big[\rho\log\frac{\rho}{\hat{\rho}_j} + (1-\rho)\log\frac{1-\rho}{1-\hat{\rho}_j}\Big]
wherein Q represents the total number of samples in the training set, Σ represents the summation operation, q represents the sequence number of a sample in the training set, I represents the total number of input-layer neurons of the self-encoder, i represents the sequence number of an input-layer neuron, x_i(q) represents the input data of the i-th input-layer neuron for the q-th sample, z_i(q) represents the output data of the i-th output-layer neuron for the q-th sample, α represents the non-negative constraint coefficient, α ≥ 0, L represents the total number of layers of the self-encoder, L ≥ 2, l represents the layer index, M represents the total number of neurons in the l-th layer, N represents the total number of neurons in the (l-1)-th layer, m and n represent the neuron indices of the l-th and (l-1)-th layers respectively, and f(·) represents the non-negative constraint function,
f(w) = \begin{cases} w^2, & w < 0 \\ 0, & w \ge 0 \end{cases}
w_{mn}^{l} represents the weight from the m-th neuron of the l-th layer to the n-th neuron of the (l-1)-th layer, β represents the sparse regularization coefficient, β ≥ 0, J represents the total number of hidden-layer neurons of the self-encoder, j represents the sequence number of a hidden-layer neuron, ρ represents the sparsity parameter, 0 ≤ ρ ≤ 1, log(·) represents the base-10 logarithm, and \hat{\rho}_j represents the average activation value of the j-th hidden-layer neuron,
\hat{\rho}_j = \frac{1}{Q}\sum_{q=1}^{Q} y_j(q)
where y_j(q) represents the output data of the j-th hidden-layer neuron for the q-th sample;
step 3, training a non-negative constraint sparse self-encoder:
inputting the training set into the non-negative constraint sparse self-encoder for unsupervised training until the objective function of the non-negative constraint sparse self-encoder converges, obtaining the trained non-negative constraint sparse self-encoder;
and 4, compressing the data to be classified by using a non-negative constraint sparse self-encoder:
preprocessing and decomposing polarization characteristics of the multi-temporal polarization synthetic aperture radar PolSAR image to be classified by adopting the same method as the steps (1b), (1c) and (1d) to obtain a sample set to be classified consisting of matrixes with combined polarization characteristics, inputting the sample set to be classified into a trained non-negative constraint sparse self-encoder for compression, and outputting the compressed matrixes;
step 5, generating a crop pixel classification network training set:
randomly selecting 1% of pixel points of each crop type in the compressed matrix, and forming a training set by all the selected pixel points;
step 6, constructing a multi-scale feature classification network:
(6a) building a first module whose structure is, in order: a first convolution layer, a first ReLU activation layer, a first batch normalization layer, a second convolution layer, a second ReLU activation layer, a second batch normalization layer, a max-pooling layer, a third convolution layer, a third ReLU activation layer and a third batch normalization layer; the convolution kernel sizes of the first, second and third convolution layers are set to 5×9, 3×64 and 3×128 in turn, the first, second and third ReLU activation layers all use the ReLU activation function, and the kernel size of the max-pooling layer is set to 2×128;
(6b) building a second module consisting of a trunk and two branches with the same structure, the trunk's structure being, in order: a first convolution layer, a first ReLU activation layer, a first batch normalization layer, a second convolution layer, a second ReLU activation layer, a second batch normalization layer, a max-pooling layer and a fully connected layer; each branch consists of a convolution layer and a fully connected layer; the convolution kernel sizes of the trunk's first and second convolution layers are set to 3×256 and 3×128 in turn, the first and second ReLU activation layers both use the ReLU activation function, and the kernel size of the max-pooling layer is set to 2×128; the convolution kernel sizes of the convolution layers of the first and second branches are set to 1×256 and 1×128 in turn; the first branch is connected in parallel with the trunk, and the second branch is connected in parallel with the path starting from the trunk's second convolution layer;
(6c) building a third module consisting of a concat layer, a fully connected layer and a softmax layer;
(6d) sequentially connecting the three modules in series to form a multi-scale feature classification network;
step 7, training a multi-scale feature classification network:
inputting the training set into a multi-scale feature classification network, and iteratively updating network parameters by adopting a gradient descent algorithm until a loss function is converged to obtain the trained multi-scale feature classification network;
step 8, testing the multi-scale feature classification network:
and inputting the compressed matrix into a trained multi-scale feature classification network, and outputting the classified crop category.
Compared with the prior art, the invention has the following advantages:
First, the non-negative constraint sparse self-encoder constructed by the invention can remove redundant features and noise-contaminated features to obtain a compressed matrix containing only effective feature information, overcoming the defect of the prior art that features extracted under noise interference lead to poor classification results, and thereby improving crop classification accuracy.
Second, the multi-scale feature classification network constructed by the invention can generate features of different scales, overcoming the defect of the prior art that only single-channel features can be extracted, which degrades classification performance on similar crops, and thereby improving the classification accuracy of similar crops.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic diagram of a multi-scale feature classification network structure according to the present invention;
FIG. 3 shows the simulation results of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples.
The implementation steps of the present invention are further described with reference to fig. 1.
Step 1, generating a training set of a self-encoder network.
In at least two time-phase polarimetric synthetic aperture radar PolSAR images obtained from the topographic map of the same area at least containing three types of crops, 1% of pixel points in each PolSAR image are randomly selected, and all the selected pixel points form a sample set.
Each PolSAR data sample in the sample set is subjected, in sequence, to format conversion, research-area cutting, polarization filtering, radiometric calibration and geometric terrain correction.
Format conversion: the TIFF-format multi-temporal PolSAR image is converted into bin-format data readable by PolSARPro.
Research-area cutting: the region containing the crop targets in the PolSARPro-readable data is taken as the research area, and the region outside the research area is removed, giving a cropped rectangular research-area image.
Polarization filtering: speckle-noise filtering is applied to the cropped research-area image with an improved Lee filter (a sketch of this step is given after the list of preprocessing operations).
Radiometric calibration: the brightness grey values of the filtered image are converted into absolute radiance with a radiance-based method, giving a radiometrically calibrated image.
Geometric terrain correction: image-value changes caused by terrain relief are removed from the radiometrically calibrated image with a semi-empirical model, giving the preprocessed image.
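For illustration only, the speckle-filtering step above can be sketched as follows. The patent specifies an improved Lee filter; the sketch below implements only the basic Lee filter on a single intensity channel, and the function name, window size and number of looks are assumptions, not values taken from the patent.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def lee_filter(intensity, window=7, looks=4):
        """Basic Lee speckle filter on one SAR intensity channel (illustrative sketch)."""
        mean = uniform_filter(intensity, size=window)            # local mean
        mean_sq = uniform_filter(intensity ** 2, size=window)    # local mean of squares
        var = np.maximum(mean_sq - mean ** 2, 0.0)               # local variance

        cu2 = 1.0 / looks                                        # squared speckle variation coefficient
        ci2 = var / np.maximum(mean ** 2, 1e-12)                 # squared image variation coefficient

        # Weight of the observed pixel relative to the local mean.
        w = np.clip(1.0 - cu2 / np.maximum(ci2, 1e-12), 0.0, 1.0)
        return mean + w * (intensity - mean)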
Polarization feature decomposition is performed on each time phase of the preprocessed samples to obtain the polarization features of that phase.
The polarization feature decomposition means that the preprocessed samples are subjected, in sequence, to Freeman decomposition, Yamaguchi decomposition, Cloude decomposition and Huynen decomposition.
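The four decompositions named above are standard polarimetric target decompositions from the literature. As a hedged illustration, only the Cloude eigenvalue decomposition is sketched below, computing entropy H, anisotropy A and the mean alpha angle from a pixel's 3×3 coherency matrix; the function name and the choice of these three quantities as the output features are illustrative assumptions, not the patent's exact feature list.

    import numpy as np

    def cloude_features(T):
        """Cloude eigen-decomposition of one pixel's 3x3 coherency matrix (sketch)."""
        eigval, eigvec = np.linalg.eigh(T)                  # Hermitian eigen-decomposition, ascending order
        eigval = np.clip(eigval[::-1], 1e-12, None)         # descending, strictly positive
        eigvec = eigvec[:, ::-1]

        p = eigval / eigval.sum()                           # pseudo-probabilities of the scattering mechanisms
        H = -np.sum(p * np.log(p) / np.log(3.0))            # polarimetric entropy (log base 3)
        A = (eigval[1] - eigval[2]) / (eigval[1] + eigval[2])   # anisotropy
        alpha_i = np.arccos(np.clip(np.abs(eigvec[0, :]), 0.0, 1.0))
        alpha = np.degrees(np.sum(p * alpha_i))             # mean alpha scattering angle
        return H, A, alpha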
All phase polarization features are combined into a Q x K matrix, each row of the matrix representing one sample, each column of the matrix representing one feature, Q representing the total number of sample sets, K representing the total number of all phase polarization features,
K = \sum_{t=1}^{T} F_t
where T represents the total number of time phases, T ≥ 2, and F_t represents the total number of polarization features of the t-th time phase.
The matrix of combined polarization features is taken as the training set.
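A minimal sketch of assembling the Q×K training matrix, assuming that the polarization features of each time phase have already been computed as an array of shape (Q, F_t); the function and variable names are illustrative.

    import numpy as np

    def build_training_matrix(per_phase_features):
        """Stack per-phase polarization features into a Q x K matrix (K = sum of all F_t)."""
        Q = per_phase_features[0].shape[0]
        assert all(f.shape[0] == Q for f in per_phase_features), "same samples in every phase"
        # Rows are samples, columns are features; phases are concatenated along the feature axis.
        return np.concatenate(per_phase_features, axis=1)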
Step 2, the objective function E of the non-negative constraint sparse self-encoder NCSAE is set as follows.
E = \frac{1}{2Q}\sum_{q=1}^{Q}\sum_{i=1}^{I}\big(x_i(q)-z_i(q)\big)^2 + \alpha\sum_{l=2}^{L}\sum_{m=1}^{M}\sum_{n=1}^{N} f\big(w_{mn}^{l}\big) + \beta\sum_{j=1}^{J}\Big[\rho\log\frac{\rho}{\hat{\rho}_j} + (1-\rho)\log\frac{1-\rho}{1-\hat{\rho}_j}\Big]
wherein Q represents the total number of samples in the training set, Σ represents the summation operation, q represents the sequence number of a sample in the training set, I represents the total number of input-layer neurons of the self-encoder, i represents the sequence number of an input-layer neuron, x_i(q) represents the input data of the i-th input-layer neuron for the q-th sample, z_i(q) represents the output data of the i-th output-layer neuron for the q-th sample, α represents the non-negative constraint coefficient, α ≥ 0, L represents the total number of layers of the self-encoder, L ≥ 2, l represents the layer index, M represents the total number of neurons in the l-th layer, N represents the total number of neurons in the (l-1)-th layer, m and n represent the neuron indices of the l-th and (l-1)-th layers respectively, and f(·) represents the non-negative constraint function,
f(w) = \begin{cases} w^2, & w < 0 \\ 0, & w \ge 0 \end{cases}
w_{mn}^{l} represents the weight from the m-th neuron of the l-th layer to the n-th neuron of the (l-1)-th layer, β represents the sparse regularization coefficient, β ≥ 0, J represents the total number of hidden-layer neurons of the self-encoder, j represents the sequence number of a hidden-layer neuron, ρ represents the sparsity parameter, 0 ≤ ρ ≤ 1, log(·) represents the base-10 logarithm, and \hat{\rho}_j represents the average activation value of the j-th hidden-layer neuron,
\hat{\rho}_j = \frac{1}{Q}\sum_{q=1}^{Q} y_j(q)
where y_j(q) represents the output data of the j-th hidden-layer neuron for the q-th sample.
The NCSAE is composed of an input layer, a hidden layer and an output layer, the number of neurons of the input layer and the output layer is set to be equal to the total number of the features after all time phase decomposition, and the number of neurons of the hidden layer is set to be 9.
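A minimal PyTorch-style sketch of this self-encoder and its objective is given below. The 9-neuron hidden layer follows the text above; the sigmoid activations, the quadratic penalty on negative weights, and the values of alpha, beta and rho are assumptions made for illustration rather than parameters stated in the patent.

    import torch
    import torch.nn as nn

    class NCSAE(nn.Module):
        """Sparse self-encoder with a non-negativity penalty on its weights (sketch)."""
        def __init__(self, n_features, n_hidden=9):
            super().__init__()
            self.encoder = nn.Linear(n_features, n_hidden)
            self.decoder = nn.Linear(n_hidden, n_features)

        def forward(self, x):
            y = torch.sigmoid(self.encoder(x))    # hidden activations in (0, 1)
            z = torch.sigmoid(self.decoder(y))    # reconstruction of the input
            return y, z

    def ncsae_objective(model, x, alpha=1e-3, beta=3.0, rho=0.05):
        """Objective E: reconstruction error + non-negativity penalty + sparsity penalty."""
        y, z = model(x)
        Q = x.shape[0]

        recon = 0.5 / Q * torch.sum((x - z) ** 2)              # mean-square reconstruction error

        nonneg = sum(torch.sum(torch.clamp(w, max=0.0) ** 2)   # penalize only negative weights
                     for w in (model.encoder.weight, model.decoder.weight))

        rho_hat = torch.clamp(y.mean(dim=0), 1e-6, 1 - 1e-6)   # average hidden activations
        kl = rho * torch.log10(rho / rho_hat) + (1 - rho) * torch.log10((1 - rho) / (1 - rho_hat))

        return recon + alpha * nonneg + beta * kl.sum()

Under these assumptions, step 3 amounts to minimizing this objective with a gradient-based optimizer until it converges, and step 4 then uses the 9 hidden activations y of every pixel as its compressed representation.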
Step 3, training the non-negative constraint sparse self-encoder.
The training set is input into the non-negative constraint sparse self-encoder for unsupervised training until the objective function of the non-negative constraint sparse self-encoder converges, obtaining the trained non-negative constraint sparse self-encoder.
Step 4, compressing the data to be classified with the non-negative constraint sparse self-encoder.
The multi-temporal PolSAR image to be classified is preprocessed and subjected to polarization feature decomposition with the same method as in step 1, yielding a to-be-classified sample set consisting of the matrix of combined polarization features; the to-be-classified sample set is input into the trained non-negative constraint sparse self-encoder for compression, and the compressed matrix is output.
Step 5, generating a crop pixel classification network training set.
1% of the pixels of each crop type in the compressed matrix are randomly selected, and all the selected pixels form the training set.
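A sketch of this per-class sampling is shown below, assuming that the compressed matrix has one row per pixel and that an integer crop label is available for every pixel; the function name and the random seed are illustrative.

    import numpy as np

    def sample_one_percent_per_class(features, labels, fraction=0.01, seed=0):
        """Randomly select `fraction` of the pixels of every crop class (sketch)."""
        rng = np.random.default_rng(seed)
        chosen = []
        for c in np.unique(labels):
            idx = np.flatnonzero(labels == c)                  # pixels of this crop class
            n_pick = max(1, int(round(fraction * idx.size)))   # at least one pixel per class
            chosen.append(rng.choice(idx, size=n_pick, replace=False))
        chosen = np.concatenate(chosen)
        return features[chosen], labels[chosen]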
Step 6, constructing the multi-scale feature classification network.
The multi-scale feature classification network constructed by the present invention is further described with reference to fig. 2.
A first module is built whose structure is, in order: a first convolution layer, a first ReLU activation layer, a first batch normalization layer, a second convolution layer, a second ReLU activation layer, a second batch normalization layer, a max-pooling layer, a third convolution layer, a third ReLU activation layer and a third batch normalization layer. The convolution kernel sizes of the first, second and third convolution layers are set to 5×9, 3×64 and 3×128 in turn, the first, second and third ReLU activation layers all use the ReLU activation function, and the kernel size of the max-pooling layer is set to 2×128.
A second module is built consisting of a trunk and two branches with the same structure. The trunk's structure is, in order: a first convolution layer, a first ReLU activation layer, a first batch normalization layer, a second convolution layer, a second ReLU activation layer, a second batch normalization layer, a max-pooling layer and a fully connected layer; each branch consists of a convolution layer and a fully connected layer. The convolution kernel sizes of the trunk's first and second convolution layers are set to 3×256 and 3×128 in turn, the first and second ReLU activation layers both use the ReLU activation function, and the kernel size of the max-pooling layer is set to 2×128. The convolution kernel sizes of the convolution layers of the first and second branches are set to 1×256 and 1×128 in turn. The first branch is connected in parallel with the trunk, and the second branch is connected in parallel with the path starting from the trunk's second convolution layer.
A third module is built consisting of a concat layer, a fully connected layer and a softmax layer.
The three modules are connected in series in sequence to form the multi-scale feature classification network.
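A hedged PyTorch sketch of the three-module layout is given below. Because the text only partially specifies tensor shapes, the kernel sizes (5 and 3 on the trunk, 1 on the branches), the channel widths (64/128/256), the 9-channel patch input, the fully-connected width and the use of small image patches around each pixel are all assumptions chosen to mirror the described structure, not exact values from the patent.

    import torch
    import torch.nn as nn

    def conv_block(cin, cout, k):
        """Convolution -> ReLU -> batch normalization, the repeating unit of modules 1 and 2."""
        return nn.Sequential(nn.Conv2d(cin, cout, kernel_size=k, padding=k // 2),
                             nn.ReLU(),
                             nn.BatchNorm2d(cout))

    class MultiScaleClassifier(nn.Module):
        """Sketch of the three-module multi-scale classification network (sizes assumed)."""
        def __init__(self, in_channels=9, n_classes=14, fc_dim=64):
            super().__init__()
            # Module 1: three conv blocks with one max-pooling layer.
            self.module1 = nn.Sequential(conv_block(in_channels, 64, 5),
                                         conv_block(64, 64, 3),
                                         nn.MaxPool2d(2),
                                         conv_block(64, 128, 3))
            # Module 2 trunk: two conv blocks, max pooling, fully connected layer.
            self.trunk_a = conv_block(128, 256, 3)
            self.trunk_b = conv_block(256, 128, 3)
            self.trunk_pool = nn.Sequential(nn.MaxPool2d(2), nn.AdaptiveAvgPool2d(1), nn.Flatten())
            self.trunk_fc = nn.Linear(128, fc_dim)
            # Module 2 branches: a 1x1 convolution and a fully connected layer each.
            # Branch 1 runs in parallel with the whole trunk; branch 2 taps the trunk
            # after its second convolution block.
            self.branch1_conv = nn.Conv2d(128, 256, kernel_size=1)
            self.branch2_conv = nn.Conv2d(128, 128, kernel_size=1)
            self.squash = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten())
            self.branch1_fc = nn.Linear(256, fc_dim)
            self.branch2_fc = nn.Linear(128, fc_dim)
            # Module 3: concat layer, fully connected layer and softmax layer.
            self.head = nn.Linear(3 * fc_dim, n_classes)

        def forward(self, x):                       # x: (batch, 9, H, W) pixel patches, H = W >= 8
            x = self.module1(x)
            a = self.trunk_a(x)
            b = self.trunk_b(a)
            trunk = self.trunk_fc(self.trunk_pool(b))
            br1 = self.branch1_fc(self.squash(self.branch1_conv(x)))
            br2 = self.branch2_fc(self.squash(self.branch2_conv(b)))
            logits = self.head(torch.cat([trunk, br1, br2], dim=1))
            return torch.softmax(logits, dim=1)

With the softmax output above, training (step 7) then minimizes the loss function defined below over the training pixels with a gradient descent optimizer until it converges.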
Step 7, training the multi-scale feature classification network.
The training set is input into the multi-scale feature classification network, and the network parameters are updated iteratively with a gradient descent algorithm until the loss function converges, giving the trained multi-scale feature classification network.
The loss function is as follows:
J_{mse} = \frac{1}{U}\sum_{u=1}^{U}\big(Y_u - \hat{Y}_u\big)^2
wherein J_mse represents the loss function, U represents the total number of pixels in the training set, u represents the sequence number of a pixel in the training set, Y_u represents the prediction label output by the multi-scale feature classification network for the u-th pixel in the training set, and \hat{Y}_u represents the real label of the u-th pixel in the training set.
Step 8, testing the multi-scale feature classification network.
The compressed matrix is input into the trained multi-scale feature classification network, and the classified crop categories are output.
The effect of the present invention is further illustrated by the following simulation experiment:
1. simulation experiment conditions are as follows:
the hardware platform of the simulation experiment of the invention: the processor is an Intel i 54590 k CPU, the main frequency is 3.3GHz, and the memory is 8 GB. .
The software platform of the simulation experiment of the invention comprises: windows 7 operating system and Matlab 2018 b.
The input data used in the simulation experiment are multi-temporal polarimetric synthetic aperture radar PolSAR data provided by the European Space Agency (ESA). The data were acquired over the town of Indian Head (103°66′87.3″W, 50°53′18.1″N) in southeastern Saskatchewan, Canada, and comprise 7 time phases and 14 crop types.
2. Simulation content and result analysis thereof:
the simulation experiment of the invention is to adopt the invention and a prior art (the compression method of the sparse self-encoder is combined with the classification method of the convolutional neural network) to respectively compress and classify the input multi-temporal polarization synthetic aperture radar PolSAR data to obtain a classification result graph and a classification error graph.
The prior-art method that combines a sparse self-encoder compression method with a convolutional neural network classification method refers to the crop classification method proposed by Guo J et al. in "Feature Dimension Reduction Using Stacked Sparse Auto-Encoders for Crop Classification with Multi-Temporal, Quad-Pol SAR Data, Remote Sens., 2020, 12, 321", referred to below as the classification method combining sparse self-encoder compression with convolutional neural network classification.
The effect of the present invention will be further described with reference to the simulation diagram of fig. 3.
Fig. 3(a) is a real crop distribution diagram of an input multi-temporal polarimetric synthetic aperture radar PolSAR image, which is 1994 × 1697 pixels in size. The simulation experiment of the invention mainly classifies 14 crops respectively, and does not classify unknown regions. FIG. 3(b) is a diagram showing the classification results of crops in an Indian Head town according to the prior art. Fig. 3(c) is a classification error diagram for classifying crops in Indian Head town by using the prior art, wherein a black part in fig. 3(c) represents a pixel point corresponding to a correctly classified crop, and a white part represents a pixel point corresponding to a wrongly classified crop. FIG. 3(d) is a graph showing the results of classification of crops in an Indian Head town using the method of the present invention. FIG. 3(e) is a plot of the classification error for crop classification in Indian Head towns using the method of the present invention.
As can be seen from Fig. 3(d) and Fig. 3(e), the classification result of the invention contains fewer misclassified crops than that of the prior art, so the classification effect of the invention is better than that of the prior-art method.
In order to verify the simulation effect of the present invention, the classification results of the two methods were evaluated using the following three evaluation indexes (classification accuracy per class, total accuracy OA, Kappa coefficient).
The overall accuracy OA, the Kappa coefficient and the per-class accuracy of the 14 crop types were calculated with the following formulas, and all results are listed in Table 1:
\mathrm{OA} = P_o = \frac{\text{number of correctly classified pixels}}{n}
\text{accuracy of class } k = \frac{\text{number of correctly classified pixels of class } k}{a_k}
\mathrm{Kappa} = \frac{P_o - P_e}{1 - P_e}
wherein P_o is the overall classification accuracy OA; suppose the number of ground-truth pixels of each class is a_1, a_2, ..., a_c, c is the total number of crop types, the number of pixels predicted for each class is b_1, b_2, ..., b_c, and the total number of pixels is n; then P_e = (a_1×b_1 + a_2×b_2 + ... + a_c×b_c)/(n×n).
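The three indexes can be computed as in the short sketch below, given the predicted and ground-truth class labels of the evaluated pixels; the function name is illustrative.

    import numpy as np

    def classification_scores(y_true, y_pred):
        """Per-class accuracy, overall accuracy OA and Kappa coefficient (sketch)."""
        classes = np.unique(y_true)
        n = y_true.size
        per_class = {c: np.mean(y_pred[y_true == c] == c) for c in classes}
        oa = np.mean(y_pred == y_true)                                     # P_o
        a = np.array([np.sum(y_true == c) for c in classes], dtype=float)  # true counts a_c
        b = np.array([np.sum(y_pred == c) for c in classes], dtype=float)  # predicted counts b_c
        pe = np.sum(a * b) / (n * n)                                       # chance agreement P_e
        kappa = (oa - pe) / (1.0 - pe)
        return per_class, oa, kappa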
TABLE 1 quantitative analysis table of classification results of the present invention and the prior art in simulation experiments
(Table 1 is provided as an image in the original publication and is not reproduced here.)
As can be seen from Table 1, the overall classification accuracy OA of the method of the invention is 99.33% and its Kappa coefficient is 0.99; both indexes are higher than those of the prior-art method, so the invention achieves higher crop classification accuracy.
The above simulation experiment shows that the non-negative constraint sparse self-encoder built by the invention can compress the redundant polarization decomposition features, and the multi-scale feature classification network built by the invention can extract multi-scale feature information, which solves the problem that the prior art can only extract single-channel feature information and therefore achieves low classification accuracy on similar crops; the invention is thus a very practical crop classification method.

Claims (5)

1. A PolSAR data compression crop classification method based on NCSAE is characterized in that a non-negative constraint sparse self-encoder NCSAE is used for carrying out data compression on polarization decomposition characteristics of multi-temporal polarization synthetic aperture radar PolSAR data, and a multi-scale characteristic classification network is used for carrying out crop classification on the compressed data; the crop classification method comprises the following steps:
step 1, generating a self-encoder network training set:
(1a) randomly selecting 1% of pixel points in each PolSAR image from at least two time-phase PolSAR images obtained from a topographic map of the same area at least containing three types of crops, and forming all the selected pixel points into a sample set;
(1b) preprocessing each sample in the sample set;
(1c) carrying out polarization characteristic decomposition on each time phase of the preprocessed sample to obtain the polarization characteristic of the time phase;
(1d) all phase polarization features are combined into a Q x K matrix, each row of the matrix representing one sample, each column of the matrix representing one feature, Q representing the total number of sample sets, K representing the total number of all phase polarization features,
K = \sum_{t=1}^{T} F_t
where T represents the total number of time phases, T ≥ 2, and F_t represents the total number of polarization features of the t-th time phase;
(1e) taking the matrix with the combined polarization characteristics as a training set;
step 2, setting an objective function E of the non-negative constraint sparse self-encoder NCSAE as follows:
E = \frac{1}{2Q}\sum_{q=1}^{Q}\sum_{i=1}^{I}\big(x_i(q)-z_i(q)\big)^2 + \alpha\sum_{l=2}^{L}\sum_{m=1}^{M}\sum_{n=1}^{N} f\big(w_{mn}^{l}\big) + \beta\sum_{j=1}^{J}\Big[\rho\log\frac{\rho}{\hat{\rho}_j} + (1-\rho)\log\frac{1-\rho}{1-\hat{\rho}_j}\Big]
wherein Q represents the total number of samples in the training set, Σ represents the summation operation, q represents the sequence number of a sample in the training set, I represents the total number of input-layer neurons of the self-encoder, i represents the sequence number of an input-layer neuron, x_i(q) represents the input data of the i-th input-layer neuron for the q-th sample, z_i(q) represents the output data of the i-th output-layer neuron for the q-th sample, α represents the non-negative constraint coefficient, α ≥ 0, L represents the total number of layers of the self-encoder, L ≥ 2, l represents the layer index, M represents the total number of neurons in the l-th layer, N represents the total number of neurons in the (l-1)-th layer, m and n represent the neuron indices of the l-th and (l-1)-th layers respectively, and f(·) represents the non-negative constraint function,
f(w) = \begin{cases} w^2, & w < 0 \\ 0, & w \ge 0 \end{cases}
w_{mn}^{l} represents the weight from the m-th neuron of the l-th layer to the n-th neuron of the (l-1)-th layer, β represents the sparse regularization coefficient, β ≥ 0, J represents the total number of hidden-layer neurons of the self-encoder, j represents the sequence number of a hidden-layer neuron, ρ represents the sparsity parameter, 0 ≤ ρ ≤ 1, log(·) represents the base-10 logarithm, and \hat{\rho}_j represents the average activation value of the j-th hidden-layer neuron,
\hat{\rho}_j = \frac{1}{Q}\sum_{q=1}^{Q} y_j(q)
where y_j(q) represents the output data of the j-th hidden-layer neuron for the q-th sample;
step 3, training a non-negative constraint sparse self-encoder:
inputting the training set into the non-negative constraint sparse self-encoder for unsupervised training until the objective function of the non-negative constraint sparse self-encoder converges, obtaining the trained non-negative constraint sparse self-encoder;
step 4, compressing the original data by using a non-negative constraint sparse self-encoder:
preprocessing and decomposing polarization characteristics of the multi-temporal polarization synthetic aperture radar PolSAR image to be classified by adopting the same method as the steps (1b), (1c) and (1d) to obtain a sample set to be classified consisting of matrixes with combined polarization characteristics, inputting the sample set to be classified into a trained non-negative constraint sparse self-encoder for compression, and outputting the compressed matrixes;
step 5, generating a crop pixel classification network training set:
randomly selecting 1% of pixel points of each crop type in the compressed matrix, and forming a training set by all the selected pixel points;
step 6, constructing a multi-scale feature classification network:
(6a) building a first module whose structure is, in order: a first convolution layer, a first ReLU activation layer, a first batch normalization layer, a second convolution layer, a second ReLU activation layer, a second batch normalization layer, a max-pooling layer, a third convolution layer, a third ReLU activation layer and a third batch normalization layer; the convolution kernel sizes of the first, second and third convolution layers are set to 5×9, 3×64 and 3×128 in turn, the first, second and third ReLU activation layers all use the ReLU activation function, and the kernel size of the max-pooling layer is set to 2×128;
(6b) building a second module consisting of a trunk and two branches with the same structure, the trunk's structure being, in order: a first convolution layer, a first ReLU activation layer, a first batch normalization layer, a second convolution layer, a second ReLU activation layer, a second batch normalization layer, a max-pooling layer and a fully connected layer; each branch consists of a convolution layer and a fully connected layer; the convolution kernel sizes of the trunk's first and second convolution layers are set to 3×256 and 3×128 in turn, the first and second ReLU activation layers both use the ReLU activation function, and the kernel size of the max-pooling layer is set to 2×128; the convolution kernel sizes of the convolution layers of the first and second branches are set to 1×256 and 1×128 in turn; the first branch is connected in parallel with the trunk, and the second branch is connected in parallel with the path starting from the trunk's second convolution layer;
(6c) building a third module consisting of a concat layer, a fully connected layer and a softmax layer;
(6d) sequentially connecting the three modules in series to form a multi-scale feature classification network;
step 7, training a multi-scale feature classification network:
inputting the training set into a multi-scale feature classification network, and iteratively updating network parameters by adopting a gradient descent algorithm until a loss function is converged to obtain the trained multi-scale feature classification network;
step 8, testing the multi-scale feature classification network:
and inputting the compressed matrix into a trained multi-scale feature classification network, and outputting the classified crop category.
2. The NCSAE-based PolSAR data compression crop classification method of claim 1, wherein the preprocessing in step (1b) comprises: performing format conversion, research-area cutting, polarization filtering, radiometric calibration and geometric terrain correction on the PolSAR data in sequence;
format conversion: converting the multi-temporal PolSAR image into format data readable by PolSARPro;
research-area cutting: taking the region containing the crop targets in the PolSARPro-readable data as the research area, and removing the region outside the research area to obtain a cropped rectangular research-area image;
polarization filtering: performing speckle-noise filtering on the cropped research-area image with an improved Lee filter;
radiometric calibration: converting the brightness grey values of the filtered image into absolute radiance with a radiance-based method to obtain a radiometrically calibrated image;
geometric terrain correction: removing image-value changes caused by terrain relief from the radiometrically calibrated image with a semi-empirical model to obtain the preprocessed image.
3. The NCSAE-based PolSAR data compression crop classification method of claim 1, wherein the polarization feature decomposition in step (1c) is performed by subjecting the preprocessed sample, in sequence, to Freeman decomposition, Yamaguchi decomposition, Cloude decomposition and Huynen decomposition.
4. The NCSAE-based PolSAR data compression crop classification method according to claim 1, wherein said non-negative constraint sparse self-encoder NCSAE in step 2 is composed of an input layer, a hidden layer and an output layer, the number of neurons in both the input layer and the output layer is set to be equal to the total number of features after all time phase decomposition, and the number of neurons in the hidden layer is set to be 9.
5. The NCSAE-based PolSAR data compression crop classification method of claim 1, wherein the loss function in step 7 is as follows:
J_{mse} = \frac{1}{U}\sum_{u=1}^{U}\big(Y_u - \hat{Y}_u\big)^2
wherein J_mse represents the loss function, U represents the total number of pixels in the training set, u represents the sequence number of a pixel in the training set, Y_u represents the prediction label output by the multi-scale feature classification network for the u-th pixel in the training set, and \hat{Y}_u represents the real label of the u-th pixel in the training set.
CN202110767810.XA 2021-07-07 2021-07-07 PolSAR data compression crop classification method based on NCSAE Pending CN113469077A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110767810.XA CN113469077A (en) 2021-07-07 2021-07-07 PolSAR data compression crop classification method based on NCSAE

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110767810.XA CN113469077A (en) 2021-07-07 2021-07-07 PolSAR data compression crop classification method based on NCSAE

Publications (1)

Publication Number Publication Date
CN113469077A true CN113469077A (en) 2021-10-01

Family

ID=77879114

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110767810.XA Pending CN113469077A (en) 2021-07-07 2021-07-07 PolSAR data compression crop classification method based on NCSAE

Country Status (1)

Country Link
CN (1) CN113469077A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105117736A (en) * 2015-08-18 2015-12-02 西安电子科技大学 Polarized SAR image classification method based on sparse depth stack network
CN106096652A (en) * 2016-06-12 2016-11-09 西安电子科技大学 Based on sparse coding and the Classification of Polarimetric SAR Image method of small echo own coding device
CN107341511A (en) * 2017-07-05 2017-11-10 西安电子科技大学 Classification of Polarimetric SAR Image method based on super-pixel Yu sparse self-encoding encoder
CN111079505A (en) * 2019-09-17 2020-04-28 西北农林科技大学 Multi-temporal PolSAR scattering characteristic dimension reduction algorithm based on stack type sparse self-coding network
CN112052754A (en) * 2020-08-24 2020-12-08 西安电子科技大学 Polarized SAR image ground feature classification method based on self-supervision characterization learning

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
李恒辉 (Li Henghui), "Dimension reduction of multi-temporal fully polarimetric SAR scattering features with a stacked sparse auto-encoding network", 《遥感学报》 (Journal of Remote Sensing), 25 November 2020 (2020-11-25) *
郭交 (Guo Jiao), "Crop classification based on fusion of Sentinel-1 and Sentinel-2 data", 《农业机械学报》 (Transactions of the Chinese Society for Agricultural Machinery), vol. 49, no. 4, 30 April 2018 (2018-04-30) *

Similar Documents

Publication Publication Date Title
CN110287869B (en) High-resolution remote sensing image crop classification method based on deep learning
CN110321963B (en) Hyperspectral image classification method based on fusion of multi-scale and multi-dimensional space spectrum features
CN110084159B (en) Hyperspectral image classification method based on combined multistage spatial spectrum information CNN
Tao et al. A deep neural network modeling framework to reduce bias in satellite precipitation products
CN107103306B (en) Winter wheat powdery mildew remote-sensing monitoring method based on wavelet analysis and support vector machines
CN102385694B (en) Hyperspectral identification method for land parcel-based crop variety
CN113095409B (en) Hyperspectral image classification method based on attention mechanism and weight sharing
CN111161362B (en) Spectral image identification method for growth state of tea tree
CN107680081B (en) Hyperspectral image unmixing method based on convolutional neural network
CN111160392A (en) Hyperspectral classification method based on wavelet width learning system
Liu et al. Maximum relevance, minimum redundancy band selection based on neighborhood rough set for hyperspectral data classification
CN112052758A (en) Hyperspectral image classification method based on attention mechanism and recurrent neural network
Kumawat et al. Time-variant satellite vegetation classification enabled by hybrid metaheuristic-based adaptive time-weighted dynamic time warping
Baidar Rice crop classification and yield estimation using multi-temporal Sentinel-2 data: a case study of terai districts of Nepal
CN116312860B (en) Agricultural product soluble solid matter prediction method based on supervised transfer learning
CN109460788B (en) Hyperspectral image classification method based on low-rank-sparse information combination network
CN117115675A (en) Cross-time-phase light-weight spatial spectrum feature fusion hyperspectral change detection method, system, equipment and medium
CN111751295A (en) Modeling method and application of wheat powdery mildew severity detection model based on imaging hyperspectral data
Qayyum et al. Optimal feature extraction technique for crop classification using aerial imagery
CN113469077A (en) PolSAR data compression crop classification method based on NCSAE
CN110807387A (en) Object classification method and system based on hyperspectral image characteristics
Li et al. Interleaved group convolutions for multitemporal multisensor crop classification
CN113887656B (en) Hyperspectral image classification method combining deep learning and sparse representation
CN112949592B (en) Hyperspectral image classification method and device and electronic equipment
Laine Crop identification with Sentinel-2 satellite imagery in Finland

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination