CN117058558A - Remote sensing image scene classification method based on evidence fusion multilayer depth convolution network - Google Patents

Remote sensing image scene classification method based on evidence fusion multilayer depth convolution network

Info

Publication number
CN117058558A
Authority
CN
China
Prior art keywords
layer
network
wavelet
remote sensing
dense
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310903456.8A
Other languages
Chinese (zh)
Inventor
宋婉莹
刘倩
刘毓琛
李志�
丛一凡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian University of Science and Technology
Original Assignee
Xian University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Science and Technology filed Critical Xian University of Science and Technology
Priority to CN202310903456.8A priority Critical patent/CN117058558A/en
Publication of CN117058558A publication Critical patent/CN117058558A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/13 Satellite images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention relates to a remote sensing image scene classification method based on an evidence-fusion multi-layer deep convolutional network, and belongs to the technical field of remote sensing image processing. A high-resolution remote sensing image is input, resized, normalized, and augmented; the preprocessed data set is input into a DenseNet-201 network, a Gabor-CNN network and a Wave-CNN network respectively, each of which outputs class probabilities; the class probabilities output by the three networks are fused according to D-S evidence theory, so as to obtain the classification result. The invention effectively fuses multi-scale texture features within the CNN, strengthens the feature expression of different image layers, achieves high classification performance and good robustness, and can be used for target detection and recognition in high-resolution remote sensing images.

Description

Remote sensing image scene classification method based on evidence fusion multilayer depth convolution network
Technical Field
The invention belongs to the technical field of remote sensing image processing, and in particular relates to a classification method for high-resolution remote sensing images, which can be used for target detection and recognition in high-resolution remote sensing images.
Background
With the development of remote sensing platforms, sensors and imaging technologies, the number of remote sensing images has increased significantly and exhibits high spatial, spectral and temporal resolution characteristics. Along with the rapid increase of the number of remote sensing images and the improvement of resolution, more and more researchers explore the problems of analysis and interpretation of the remote sensing images, and main tasks include scene classification, image retrieval, hyperspectral image classification, semantic segmentation, change detection, target detection, SAR image target recognition and the like.
High-resolution remote sensing image classification, as an important branch of remote sensing image analysis and interpretation, is widely applied in fields such as vegetation mapping, natural disaster detection, land cover type discrimination, ecological environment monitoring and urban planning. Compared with medium- and low-resolution images, high-resolution remote sensing images have more complex ground-object distributions and richer spatial structure information, which increases the differences between remote sensing images of the same type while reducing the differences between different scenes, posing a challenge to high-resolution remote sensing image classification. The classification of remote sensing images has therefore become a current research hotspot. According to the way remote sensing image information is understood, high-resolution remote sensing image classification can be divided into pixel-level remote sensing image classification and remote sensing image scene classification. High-resolution remote sensing images are characterized by diverse ground objects and complex spatial layouts, and pixel-level classification methods have difficulty extracting their semantic information; directly classifying remote sensing image scenes therefore allows the content of high-resolution remote sensing images to be interpreted rapidly and accurately.
Remote sensing image classification faces challenges such as the diversity, similarity and variability of targets and the complexity of interference factors, so the feature extraction algorithm needs to be optimized. Traditional classification methods are mainly based on extracting shallow image features such as colors, shapes and textures. These methods require manual design and extraction based on domain knowledge and experience as well as a certain grasp of image processing theory, and the design and extraction of features can be time-consuming and laborious. Moreover, research shows that classifying with only shallow features is strongly affected by image noise and change and cannot fully express the global structure and high-level semantic information of the image, so the accuracy is not high.
In recent years, Deep Learning algorithms have been widely used for feature mining and scene classification of remote sensing images. One of the most widely used models in this field is the convolutional neural network (Convolutional Neural Network, CNN), which can effectively extract features of an image from the bottom layers to the high layers and handle various image recognition and classification problems. However, deep learning algorithms still have some problems, such as the difficulty of interpreting the extracted depth features, small inter-class differences and large intra-class differences, which may degrade classification accuracy. Therefore, addressing the advantages and problems of shallow texture features and depth features in remote sensing image classification, the invention develops a study of high-resolution remote sensing image classification based on deep texture feature fusion, which is an important research subject in the remote sensing field.
Disclosure of Invention
The technical problems to be solved by the invention are as follows:
aiming at the advantages and problems of shallow texture features and depth features in remote sensing image classification, an efficient multi-layer deep texture feature fusion network (Hierarchical Deep Texture Features Fusion Convolutional Neural Network, HDTFF-CNN) is provided for high-resolution remote sensing image classification, so as to capture the complex structures and variable textures of images, effectively fuse the discriminative features of multiple receptive fields, and improve classification accuracy.
In order to solve the technical problems, the invention adopts the following technical scheme:
a remote sensing image scene classification method based on an evidence fusion multilayer depth convolution network is characterized by comprising the following steps of
Inputting the remote sensing image into a DenseNet-201 network, a Gabor-CNN network and a Wave-CNN network respectively to output class probabilities;
and fusing the class probabilities output by the three networks according to the D-S evidence theory to obtain a classification result.
The invention further adopts the technical scheme that: and preprocessing the remote sensing image before inputting the remote sensing image into the network.
The invention further adopts the technical scheme that: the preprocessing comprises the steps of adjusting the image size, normalizing and enhancing data.
The invention further adopts the technical scheme that: the DenseNet-201 network comprises 5 layers, and the processing steps are as follows:
preprocessing the input feature map by Layer1: firstly, shallow information is extracted through a 7×7 convolution layer; then, the feature map is downsampled through a 3×3 max pooling layer;
the feature map output by Layer1 is sequentially input into Layer2, Layer3, Layer4 and Layer5 to extract the deep features of the image;
the result is sequentially passed through a BN layer, an activation layer, a global average pooling layer, a Dropout layer and a Softmax layer, and class probabilities are output.
The invention further adopts the technical scheme that: the Layer2, the Layer3, the Layer4 and the Layer5 are respectively composed of 4 groups of Dense Block and 3 groups of Transition Block, the Dense Block is respectively composed of Dense Layer containing 6, 12, 48 and 32 densely connected layers, the Dense Layer uses a Dense connection structure of '1 multiplied by 1+3 multiplied by 3' overlapped convolution, and each Dense Layer acquires all Layer output characteristics in front of the Dense Layer through characteristic diagram channel splicing; the Transition Block consists of a normalization layer, a convolution layer, an activation layer and an average pooling layer, and is inserted into two adjacent Dense blocks.
The invention further adopts the technical scheme that: the Gabor-CNN network is improved on the basis of a DenseNet-201 network: firstly, replacing a part of standard convolution layers of a DenseNet network by GaborConv2D, and determining the replacement position of the GaborConv2D layer through experiments; then, adding a 3×3 standard convolution kernel, setting the step length of the layer to 2, and making the output channel number equal to the input, so as to further extract the shallow layer feature of the image and downsampling it once; finally, feature map scaling is accelerated using a 3 x 3 max pooling layer.
The invention further adopts the technical scheme that: the size of the GaborConv2D Gabor core is set to 7×7, the number of output channels of the module is set to 64, and the step size is set to 1.
The invention further adopts the technical scheme that: the Wave-CNN network is improved on the basis of a DenseNet-201 network: replacing pooling layer of DenseNet-201 network with 2D wavelet transform layer to x low frequency component LL As a result of the downsampling of the input feature map, the features are non-linearly activated by adding a BN layer and a ReLU activation layer to increase the high frequency component x LH And x HL The high-frequency detail texture component H is fused, a space attention module is added, the space matrix and the LL component are subjected to dot multiplication, and then a network layer is accessed by using a shortcut connection mode.
The invention further adopts the technical scheme that: the Wave-CNN network processing steps are as follows:
multi-level wavelet feature extraction: suppose the output of Dense Block1 is x_dense(1); wavelet transform is used to extract x_LL1_1, x_LH1_1 and x_HL1_1, and the first-level wavelet feature Wavelet_1_1 is obtained through the wavelet spatial attention module; the first-level low-frequency feature x_LL1_1 is decomposed by wavelet transform into the second-level wavelet components x_LL1_2, x_LH1_2 and x_HL1_2, which continue through the spatial attention module to obtain the second-level wavelet feature Wavelet_1_2; by analogy, the nth Dense Block performs 4-n wavelet decompositions in total;
multi-level wavelet cascade fusion: starting from Dense Block2, the output of each Dense Block is fused with the multi-level wavelet features of the preceding Dense Blocks by feature addition, and the nth Dense Block fuses the wavelet feature outputs of each of the n-1 preceding Dense Blocks; the fusion formula is as follows:
x'_dense(n) = H_n[x_dense(n), wavelet_(n-1), ..., wavelet_1], n ≥ 2
wherein x_dense(n) is the output of the nth Dense Block in the backbone model, wavelet_(n-1) is a wavelet feature, and H_n is a nonlinear transformation function consisting of batch normalization, a ReLU activation function and a convolution layer that keeps the output feature-map dimensions of the previous wavelet features consistent with x_dense(n).
The invention further adopts the technical scheme that: the classification result obtained by fusion according to the D-S evidence theory is specifically as follows:
model fusion is carried out on the class probabilities output by the three networks through the D-S synthesis formula:
m(A) = (m_1 ⊕ m_2 ⊕ m_3)(A) = (1/(1-K)) · Σ_{B∩C∩D=A} m_1(B)·m_2(C)·m_3(D), with K = Σ_{B∩C∩D=∅} m_1(B)·m_2(C)·m_3(D),
wherein ⊕ represents the fusion operation, A represents the scene category after image fusion, and B, C, D respectively represent the scene categories of the image given by the DenseNet-201, Gabor-CNN and Wave-CNN networks;
and taking the scene with the highest probability as a classification result according to the obtained probability of the fused different scenes.
The invention has the beneficial effects that:
the invention provides a remote sensing image scene classification method based on an evidence fusion multi-layer depth convolution network, which mainly solves the problems that depth features are difficult to interpret, single depth feature robustness is poor, convolution neural networks (Convolutional Neural Network, CNN) lack of multiscale detail texture feature receptive fields and the like, and the implementation scheme is as follows: inputting a high-resolution remote sensing image, adjusting the size of the image, normalizing, and executing image enhancement; the preprocessed data set is input into a DenseNet-201 network, deep features of the image are obtained through five layers, four groups of DenseBlock and three layers Translation Layer, and class probabilities are output through a softmax Layer; designing a Gabor convolutional neural network (Gabor Convolutional Neural Network, gabor-CNN) on the basis of the DenseNet-201 network, and replacing part of standard convolutional kernels of the DenseNet-201 with GaborConv2D to construct the Gabor-CNN, so that multi-scale and multi-directional texture features in an image can be extracted; a wavelet convolution neural network (Wavelet Convolutional Neural Network, wave-CNN) network is designed on the basis of a DenseNet-201 network, a maximum pooling/average pooling layer in the DenseNet-201 network is replaced by low-frequency components decomposed by wavelet transformation, and a spatial attention mechanism is combined to process the wavelet components to obtain a spatial attention map. The channel attention module is used for further strengthening the feature expression of the image, and cascade fusion is used for strengthening multi-level wavelet feature multiplexing; fusing the softmax values output by the three networks according to the D-S evidence theory, so as to obtain a classification result; the invention effectively merges multi-scale texture features in the CNN, strengthens the feature expression of different layers of the image, has high classification performance and good robustness, and can be used for target detection and identification of high-resolution remote sensing images.
Compared with the prior art, the method has the following advantages:
firstly, compared with the traditional CNN model, the invention incorporates the scale and direction analysis and interpretation capability of two-dimensional Gabor filtering, and the modulated CNN kernels can extract multi-scale and multi-directional deep texture features;
secondly, aiming at the problem that part of the features are lost when a pooling layer is used as the downsampling layer of a CNN, which also affects the spatial structure of the input feature map, the invention integrates the wavelet transform into the network, combines an attention module with the wavelet features to strengthen the feature expression of different image layers, and designs a wavelet feature fusion network to strengthen feature reuse;
thirdly, aiming at the bottleneck in the classification performance of a single depth texture feature model, the invention uses D-S evidence theory to effectively fuse depth feature sets from different independent sources; compared with other methods, it has higher classification performance and better robustness.
Drawings
The drawings are only for purposes of illustrating particular embodiments and are not to be construed as limiting the invention, like reference numerals being used to refer to like parts throughout the several views.
FIG. 1 is a functional block diagram of the present invention;
FIG. 2 is a schematic block diagram of the classification of high resolution remote sensing images using a DenseNet-201 network in accordance with the present invention;
FIG. 3 is a schematic block diagram of the classification of high resolution remote sensing images using the Gabor-CNN network of the present invention;
FIG. 4 is a schematic block diagram of the classification of high resolution remote sensing images using the Wave-CNN network of the present invention;
FIG. 5 is a confusion matrix result graph of the three data sets: (a) NWPU-RESISC45 dataset confusion matrix; (b) AID30 dataset confusion matrix; (c) PatternNet38 dataset confusion matrix.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention. In addition, technical features of the embodiments of the present invention described below may be combined with each other as long as they do not collide with each other.
The embodiment of the invention provides a remote sensing image scene classification method based on an evidence fusion multilayer deep convolution network, which comprises the following steps:
(1) Inputting a high-resolution remote sensing image, adjusting the size of the image, normalizing, and executing image enhancement;
(2) The preprocessed data set from step (1) is input into the DenseNet-201 network, and the features pass through five layers: Layer1 preprocesses the input feature map, and Layer2, Layer3, Layer4 and Layer5 then extract the deep feature information of the image. A BN layer and an activation layer follow: adding the BN layer improves the training speed of the network and avoids overfitting to a certain extent. A global average pooling layer is used to integrate the features, and a Dropout layer keeps part of the neurons out of training, which enhances the overall generalization performance of the network. The Softmax layer is used to output class probabilities and perform classification discrimination;
(3) A Gabor-CNN network is designed on the basis of the DenseNet-201 network. In the preprocessing stage, a GaborConv2D module replaces the original standard convolution kernel; the Gabor kernel size is set to 7×7, the number of output channels of the module is set to 64, and the stride is set to 1, so that the spatial feature expression of the image is not destroyed during Gabor feature extraction. A 3×3 standard convolution kernel is added after the 7×7 GaborConv layer; its stride is set to 2 and its number of output channels is equal to the input, its role being to further extract shallow features of the image and downsample it once. Finally, a 3×3 max pooling layer is used to accelerate feature map scaling. Part of the standard convolution layers in the 4 Dense Blocks of the DenseNet-201 network of step (2) are replaced with GaborConv2D modules, and the replacement positions of the GaborConv2D modules are determined through experiments. The preprocessed data set from step (1) is input into the designed Gabor-CNN network, the softmax value is output, and classification discrimination is performed;
(4) A Wave-CNN network is designed on the basis of the DenseNet-201 network. The pooling layers of the DenseNet-201 network of step (2) are replaced by 2D wavelet transform layers; the low-frequency component x_LL is used as the result of downsampling the input feature map, and the features are nonlinearly activated by adding a BN layer and a ReLU activation layer; the x_LH and x_HL components are fused into a high-frequency detail texture component H, a spatial attention module is added, the spatial weight matrix is dot-multiplied with the LL component, and the result is then connected to the network layer through a shortcut connection. The preprocessed data set from step (1) is input into the designed Wave-CNN network to output the softmax value, and classification discrimination is performed;
(5) A classification result is obtained according to D-S evidence theory:
let m_1 and m_2 be basic probability assignment functions on the same frame of discernment, whose focal elements are A_1, A_2, ..., A_n and B_1, B_2, ..., B_n respectively; the fused basic probability assignment can be expressed as:
m(A) = (m_1 ⊕ m_2)(A) = (1/(1-K)) · Σ_{A_i∩B_j=A} m_1(A_i)·m_2(B_j), with the conflict coefficient K = Σ_{A_i∩B_j=∅} m_1(A_i)·m_2(B_j);
and the scene with the highest probability is taken as the classification result according to the obtained fused probabilities of the different scenes, thereby obtaining the final classification result of the high-resolution remote sensing image.
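By way of illustration, the two-source D-S combination above can be sketched as follows for the case where every focal element is a single scene class, so that the basic probability assignments are softmax-like vectors; the function and variable names are illustrative and not part of the claimed method:

```python
import numpy as np

def dempster_combine(m1, m2, eps=1e-12):
    """Combine two basic probability assignments on the same frame of
    discernment whose focal elements are all singletons (class labels)."""
    m1 = np.asarray(m1, dtype=np.float64)
    m2 = np.asarray(m2, dtype=np.float64)
    joint = m1 * m2                    # mass where A_i and B_j agree (A_i = B_j)
    K = 1.0 - joint.sum()              # conflict: total mass of A_i ∩ B_j = ∅
    return joint / max(1.0 - K, eps)   # normalise by 1 - K

# toy example: two networks scoring the same image over 3 scene classes
m_a = [0.7, 0.2, 0.1]
m_b = [0.6, 0.3, 0.1]
print(dempster_combine(m_a, m_b))      # consensus sharpened towards class 0
```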
Referring to fig. 1, the specific steps of the present invention are as follows:
step 1, inputting a high-resolution remote sensing image, adjusting the size of the image, normalizing the image, and enhancing the image.
1.1 Inputting a high-resolution remote sensing image, and selecting three public high-resolution remote sensing image data sets of NWPU-RESISC45, AID30 and Pattern Net 38;
1.2 The three data sets are resized to fit the CNN model by:
G(x_2, y_2) = F(k_1·x_1, k_2·y_1),
where F(x_1, y_1) is the original image of size M×N; G(x_2, y_2) is the scaled image of size αM×βN; the scaling factors α, β ∈ (0, 1); k_1 = 1/α, k_2 = 1/β; and k_1·x_1, k_2·y_1 are rounded;
1.3 Normalized for each of the three data sets, obtained by:
(x - x_min)/(x_max - x_min),
where x represents the image data, and x_min, x_max respectively represent its minimum and maximum values;
1.4 Data enhancement is carried out on the three data sets in five modes of random rotation, horizontal overturn, vertical overturn, gaussian noise and contrast adjustment, so that the diversity of samples is increased, and the generalization capability of the proposed deep texture feature network is enhanced. The first three image enhancement methods are to transform in the geometric domain of the image, and are used for solving the problem of position deviation of the image in the training sample. The latter two methods belong to the transformation of pixel domains, can change the characteristic distribution of images, and can increase the diversity of training samples.
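A minimal sketch of this preprocessing step is given below, assuming TensorFlow (the framework used in the experiments of this embodiment); the target input size, noise level and contrast range are illustrative assumptions rather than values fixed by the invention:

```python
import tensorflow as tf

IMG_SIZE = 224  # assumed CNN input size; the embodiment only states that images are resized

def preprocess(image):
    """Resize to the CNN input size and min-max normalise to [0, 1]."""
    image = tf.image.resize(tf.cast(image, tf.float32), (IMG_SIZE, IMG_SIZE))
    x_min, x_max = tf.reduce_min(image), tf.reduce_max(image)
    return (image - x_min) / (x_max - x_min + 1e-8)

def augment(image):
    """The five enhancement modes: random rotation, horizontal flip,
    vertical flip, Gaussian noise and contrast adjustment."""
    image = tf.image.rot90(image, k=tf.random.uniform([], 0, 4, dtype=tf.int32))
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_flip_up_down(image)
    image = image + tf.random.normal(tf.shape(image), stddev=0.01)  # Gaussian noise
    image = tf.image.random_contrast(image, lower=0.8, upper=1.2)   # contrast adjustment
    return tf.clip_by_value(image, 0.0, 1.0)
```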
Step 2, the preprocessed data set is input into a DenseNet-201 network for training, and a softmax value is output.
The model structure of DenseNet-201 is deep, the extracted depth features are more abstract, and the response to key information is stronger; at the same time, feature reuse avoids the loss of feature information caused by an overly deep network. The core feature extraction module comprises five layers, four Dense Blocks and three Transition Blocks. Each Dense Block contains several Dense Layers; each Transition Block consists of a normalization layer, a convolution layer, an activation layer and an average pooling layer, and is inserted between adjacent Dense Blocks. The network has a small number of parameters and strong capabilities for deep feature extraction and loss propagation. The concrete implementation, with reference to FIG. 2, is as follows:
2.1 Preprocessing the input feature map by the Layer1 for the image output in the step 1: shallow information is first extracted by a 7 x 7 convolutional layer. The feature map is then downsampled by a 3 x 3 max pooling layer. At this time, the dimension of the image is reduced, and the data volume is greatly reduced;
2.2 ) The feature map output by Layer1 is then sequentially input to the other 4 layers for feature extraction. Layer2, Layer3, Layer4 and Layer5 are composed of 4 groups of Dense Blocks and 3 groups of Transition Blocks; the Dense Blocks consist of 6, 12, 48 and 32 densely connected Dense Layers respectively; each Dense Layer uses a "1×1 + 3×3" stacked-convolution dense connection structure and obtains the output features of all preceding layers through feature-map channel concatenation, realising feature reuse without damaging the information flow of the input feature map. The Transition Block consists of a normalization layer, a convolution layer, an activation layer and an average pooling layer. A Transition Block is inserted between two adjacent Dense Blocks to connect their features, and downsamples the features of the previous Dense Block through a 1×1 convolution layer and an average pooling layer with stride 2 to reduce the parameters;
2.3 ) After all the features are extracted, the final deep feature information of the image is output; a BN layer and an activation layer are added, and adding the BN layer reduces the internal covariate shift in the neural network, improves the network training speed, and avoids overfitting to a certain extent;
2.4 The global average pooling layer is used for integrating the input information, and the Dropout layer is used after the global average pooling layer, so that part of neurons do not participate in training, and the overall generalization performance of the network is enhanced;
2.5 Softmax layer is used for outputting class probability to conduct classification prediction.
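The DenseNet-201 branch of steps 2.1 )-2.5 ) can be sketched with the Keras implementation of DenseNet-201 supplying Layer1-Layer5; the input size, Dropout rate and use of ImageNet weights are assumptions made for illustration only:

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_densenet201_branch(input_shape=(224, 224, 3), num_classes=45):
    # Layer1-Layer5: 7x7 conv + 3x3 max pooling, then four Dense Blocks with
    # Transition Blocks, as provided by the DenseNet-201 backbone.
    backbone = tf.keras.applications.DenseNet201(
        include_top=False, weights="imagenet", input_shape=input_shape)
    x = backbone.output
    x = layers.BatchNormalization()(x)       # BN layer
    x = layers.Activation("relu")(x)         # activation layer
    x = layers.GlobalAveragePooling2D()(x)   # integrate the features
    x = layers.Dropout(0.5)(x)               # part of the neurons skipped in training
    outputs = layers.Dense(num_classes, activation="softmax")(x)  # class probabilities
    return tf.keras.Model(backbone.input, outputs, name="densenet201_branch")
```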
Step 3, a Gabor-CNN network is designed on the basis of the DenseNet-201 network; the preprocessed data set is input into the Gabor-CNN for training, and the softmax value is output.
Gabor transformation is a short-time Fourier transform that selects a Gaussian function as the window function. The Gabor convolution layers in the network extract multi-level features at different scales and directions in the frequency domain, adding effective features for the network to learn and improving its learning capability. GaborConv2D replaces part of the standard convolution layers of the DenseNet-201 network to construct the Gabor-CNN network, which gives stronger extraction of image texture and detail features and effectively improves the discrimination capability. The preprocessed image is input into the designed Gabor-CNN network, class probabilities are output through the Softmax layer, and classification prediction is performed.
The structure of the Gabor-CNN network is shown in fig. 3, and the specific design steps are as follows:
3.1 ) Designing the GaborConv2D module: the feature-extraction function of the standard convolution kernel is replaced by the Gabor filter. Texture features of 5 scales and 8 directions are extracted by the Gabor kernels, so there are 40 Gabor kernels in total. The Gabor feature maps are then activated by a nonlinear activation function, the ReLU function, and a superposition operation is performed on the obtained feature maps so that back-propagation can pass through the Gabor convolution layer. Finally, a learnable weight filter bank is added after the Gabor convolution layer to linearly combine the input features and obtain the final output feature map. The filter bank consists of convolution kernels of dimension C_I × C_O × 1 × 1, wherein (C_I, C_O) are the number of channels of the input feature map and the number of convolution kernels of the filter, and (1, 1) is the convolution kernel size.
The complete GaborConv2D layer is obtained by convolving the activated Gabor feature maps with a 1×1 convolution, and is defined as:
Y = Conv_1×1^n( f( Gabor_k×k^j( X ) ) )
wherein X denotes the input feature map with m channels; Gabor_k×k^j denotes the Gabor convolution layer containing j Gabor kernels of size k×k, where the value of j is determined by the directions and scales of the Gabor kernels: λ, which controls the scale, is divided into s segments, and θ, which controls the direction, is divided into d segments, so that j = s×d; f denotes the ReLU function used for nonlinear activation of the Gabor features; and Conv_1×1^n denotes a 1×1 standard convolution with n kernels, used to change the feature channel dimension and to participate in back-propagation, so that the output Y has n channels. The two-dimensional Gabor filter kernel is:
g(x, y; λ, θ, ψ, σ, γ) = exp( -(x'^2 + γ^2·y'^2) / (2σ^2) ) · cos( 2π·x'/λ + ψ ), with x' = x·cosθ + y·sinθ and y' = -x·sinθ + y·cosθ,
wherein λ is the wavelength controlling the scale, θ is the orientation, ψ is the phase offset, σ is the standard deviation of the Gaussian envelope, and γ is the spatial aspect ratio.
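A sketch of such a GaborConv2D module is shown below: a fixed bank of 5-scale, 8-direction Gabor kernels (40 in total) convolves the input, the responses are activated with ReLU, and a learnable 1×1 convolution mixes them and joins back-propagation. The concrete wavelengths, the relation σ = 0.56λ, the aspect ratio and phase, and the way the fixed bank is applied across input channels are illustrative assumptions, not values taken from the invention:

```python
import math
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

def gabor_kernel(ksize, lam, theta, sigma=None, gamma=0.5, psi=0.0):
    """Real-valued 2D Gabor kernel of size ksize x ksize (standard form assumed)."""
    sigma = sigma if sigma is not None else 0.56 * lam
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float64)
    x_t = x * math.cos(theta) + y * math.sin(theta)
    y_t = -x * math.sin(theta) + y * math.cos(theta)
    g = np.exp(-(x_t ** 2 + (gamma * y_t) ** 2) / (2 * sigma ** 2))
    return (g * np.cos(2 * math.pi * x_t / lam + psi)).astype(np.float32)

class GaborConv2D(layers.Layer):
    """Fixed Gabor kernel bank (scales x directions), ReLU, then a learnable
    1x1 convolution that mixes the Gabor responses and joins back-propagation."""
    def __init__(self, filters, ksize=7, scales=(2, 3, 4, 5, 6), directions=8, strides=1, **kw):
        super().__init__(**kw)
        self.filters, self.ksize, self.strides = filters, ksize, strides
        thetas = [d * math.pi / directions for d in range(directions)]
        bank = np.stack([gabor_kernel(ksize, lam, th) for lam in scales for th in thetas], -1)
        self.bank = bank[..., np.newaxis, :]           # shape (k, k, 1, s*d)
        self.pointwise = layers.Conv2D(filters, 1, padding="same")

    def build(self, input_shape):
        in_ch = int(input_shape[-1])
        kernel = np.tile(self.bank, (1, 1, in_ch, 1))  # replicate over input channels
        self.gabor = tf.constant(kernel, dtype=tf.float32)

    def call(self, x):
        feats = tf.nn.conv2d(x, self.gabor, strides=self.strides, padding="SAME")
        return self.pointwise(tf.nn.relu(feats))       # nonlinear activation + 1x1 conv
```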
3.2 ) Designing the Gabor-CNN network: the GaborConv2D module is used to replace the initial standard convolution kernel; the Gabor kernel size is set to 7×7, the number of output channels of the module is set to 64, and the stride is set to 1, so that the spatial feature expression of the image is not destroyed during Gabor feature extraction. A 3×3 standard convolution kernel is added after the 7×7 GaborConv layer; its stride is set to 2 and its number of output channels equals the input. Finally, a 3×3 max pooling layer is used to accelerate feature map scaling.
The Dense Layer in the Dense Block is composed of a 1×1 convolution layer for expanding the channels and a 3×3 convolution layer whose output channels are 1/4 of those of the previous layer. The 4 layers of Dense Blocks extract deep features of the image, thereby preserving its most important features. The Gabor convolution kernel is used to extract image details and texture features; in order for the Gabor features to improve the classification capability of the baseline network DenseNet to the greatest extent, the invention determines the replacement position of the GaborConv2D module through experiments, and the parameter settings of the Gabor convolution kernel are shown in Table 1:
TABLE 1
Step 4, a Wave-CNN network is designed on the basis of the DenseNet-201 network; the preprocessed data set is input into the Wave-CNN for training, and the softmax value is output.
The wavelet transform has multi-scale analysis characteristics, and using wavelet decomposition to replace the average pooling/max pooling of the DenseNet-201 network prevents excessive detail information from being lost in the pooling process. Meanwhile, considering that different wavelet components, different channels and different spatial positions of the feature map contribute differently to the model, the invention designs a wavelet attention module to strengthen the feature expression of different image layers. Finally, exploiting the multi-level decomposition property of wavelets, a wavelet feature fusion network is designed to strengthen feature reuse, thereby constructing the Wave-CNN network. The preprocessed image is input into the designed Wave-CNN network, class probabilities are output through the Softmax layer, and classification prediction is performed.
The Wave-CNN network structure is shown in FIG. 4, and the specific design steps are as follows:
4.1 ) Wavelet attention module design: the x_LH and x_HL components obtained by wavelet decomposition are feature maps containing rich texture information of the image; they are fused into a high-frequency detail texture component H through an Add layer, and a spatial attention module is then used to obtain a spatial weight matrix that assigns higher weight to texture positions, enhancing the feature expression. The spatial attention weights are multiplied with the low-frequency component x_LL on the corresponding channels, and a "shortcut connection" is used to prevent network degradation, outputting a spatial attention map. The module formula is as follows:
x_H = x_LH + x_HL
wherein BN denotes batch normalization, ⊕ denotes element-wise addition of feature-map pixels, and Conv denotes a 7×7 convolution layer with a single convolution kernel.
After the wavelet spatial attention module, an ECA channel attention module is added to further squeeze and excite the feature map while avoiding the adverse effects of channel dimensionality reduction. The channel weights obtained by the ECA module are multiplied with the wavelet spatial attention features in the channel domain to obtain the wavelet channel attention. Finally, the whole wavelet attention feature map is nonlinearly activated using a BN layer and a ReLU layer;
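The wavelet downsampling and wavelet spatial attention of step 4.1 ) can be sketched as follows, assuming a single-level Haar transform and a sigmoid-activated 7×7 convolution for the spatial weight matrix; these concrete choices, and the placement of the final BN/ReLU, are assumptions made for illustration:

```python
import tensorflow as tf
from tensorflow.keras import layers

def haar_dwt2d(x):
    """Single-level 2D Haar DWT of an NHWC feature map (sign convention assumed);
    every sub-band has half the spatial resolution of the input."""
    a = x[:, 0::2, 0::2, :]; b = x[:, 0::2, 1::2, :]
    c = x[:, 1::2, 0::2, :]; d = x[:, 1::2, 1::2, :]
    ll = (a + b + c + d) / 2.0
    lh = (-a - b + c + d) / 2.0   # horizontal detail
    hl = (-a + b - c + d) / 2.0   # vertical detail
    hh = (a - b - c + d) / 2.0
    return ll, lh, hl, hh

class WaveletSpatialAttention(layers.Layer):
    """H = x_LH + x_HL, a spatial weight map from channel avg/max pooling and a
    7x7 convolution, dot-multiplied with x_LL plus a shortcut connection."""
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.conv = layers.Conv2D(1, 7, padding="same", activation="sigmoid")
        self.bn = layers.BatchNormalization()

    def call(self, x):
        ll, lh, hl, _ = haar_dwt2d(x)
        h = lh + hl                                       # high-frequency detail texture component
        avg = tf.reduce_mean(h, axis=-1, keepdims=True)   # channel-wise average pooling
        mx = tf.reduce_max(h, axis=-1, keepdims=True)     # channel-wise max pooling
        attn = self.conv(tf.concat([avg, mx], axis=-1))   # spatial weight matrix
        out = ll * attn + ll                              # re-weight LL + shortcut connection
        return tf.nn.relu(self.bn(out))                   # BN + ReLU nonlinear activation
```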
4.2 ) Wave-CNN network design: the average pooling/max pooling layers of the DenseNet-201 network are replaced by 2D wavelet transform layers, yielding the low-frequency component x_LL and the high-frequency components x_LH, x_HL and x_HH. The low-frequency component x_LL is taken as the result of downsampling the input feature map, and the features are nonlinearly activated by adding a BN layer and a ReLU activation layer. The preprocessed input feature map first passes through the spatial attention module (the input feature map passes through an average pooling layer and a max pooling layer simultaneously, outputting two single-channel feature maps; after channel concatenation, a 1×1 convolution layer is used to obtain the spatial weight matrix). In order to integrate multi-level wavelet characteristics into the original network, the invention proposes multi-level wavelet cascade fusion, drawing on the structural characteristics of DenseNet; its aim is that the output features of each block of the network additionally receive the wavelet outputs of the previous layers, further enhancing the texture features. The specific method is as follows:
4.2.1 ) Multi-level wavelet feature extraction: suppose the output of Dense Block1 is x_dense(1); wavelet transform is used to extract x_LL1_1, x_LH1_1 and x_HL1_1, and the first-level wavelet feature Wavelet_1_1 is obtained through the wavelet spatial attention module. The first-level low-frequency feature x_LL1_1 is decomposed by wavelet transform into the second-level wavelet components x_LL1_2, x_LH1_2 and x_HL1_2, which continue through the spatial attention module to obtain the second-level wavelet feature Wavelet_1_2. By analogy, the nth Dense Block performs 4-n wavelet decompositions in total;
4.2.2 ) Multi-level wavelet cascade fusion: starting from Dense Block2, the output of each Dense Block is fused with the multi-level wavelet features of the preceding Dense Blocks by feature addition, and the nth Dense Block fuses the wavelet feature outputs of each of the n-1 preceding Dense Blocks. The fusion formula is as follows:
x'_dense(n) = H_n[x_dense(n), wavelet_(n-1), ..., wavelet_1], n ≥ 2
wherein x_dense(n) is the output of the nth Dense Block in the backbone model, wavelet_(n-1) is a wavelet feature, and H_n is a nonlinear transformation function consisting of batch normalization, a ReLU activation function and a convolution layer that keeps the output feature-map dimensions of the previous wavelet features consistent with x_dense(n).
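A sketch of the cascade fusion transform H_n is given below; it assumes that the wavelet features passed in have already been decomposed to the same spatial size as x_dense(n) (as guaranteed by the multi-level decomposition described above) and uses a 1×1 convolution to match the channel dimension, which is an illustrative choice:

```python
import tensorflow as tf
from tensorflow.keras import layers

class CascadeFusion(layers.Layer):
    """x'_dense(n) = H_n[x_dense(n), wavelet_(n-1), ..., wavelet_1]: each earlier
    wavelet feature passes through BN -> ReLU -> conv so that its channels match
    x_dense(n), then all terms are fused by feature addition."""
    def __init__(self, channels, num_wavelets, **kwargs):
        super().__init__(**kwargs)
        self.branches = [
            tf.keras.Sequential([
                layers.BatchNormalization(),
                layers.Activation("relu"),
                layers.Conv2D(channels, 1, padding="same"),
            ])
            for _ in range(num_wavelets)
        ]

    def call(self, x_dense, wavelet_feats):
        out = x_dense
        for branch, w in zip(self.branches, wavelet_feats):
            out = out + branch(w)   # feature-addition fusion
        return out
```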
Step 5, the class probabilities output by the three networks are fused according to D-S evidence theory.
5.1 ) The softmax values are obtained in the respective models by inputting image features of different dimensions, giving the class probabilities m_1, m_2 and m_3 of the DenseNet-201, Gabor-CNN and Wave-CNN networks respectively;
5.2 ) In the D-S fusion stage, the class probabilities output by each network model are taken as a set, and model fusion is then performed through the D-S synthesis formula:
m(A) = (m_1 ⊕ m_2 ⊕ m_3)(A) = (1/(1-K)) · Σ_{B∩C∩D=A} m_1(B)·m_2(C)·m_3(D), with K = Σ_{B∩C∩D=∅} m_1(B)·m_2(C)·m_3(D),
wherein ⊕ denotes the fusion operation, A, B, C, D are focal elements, A denotes the scene category after image fusion, and B, C, D respectively denote the scene categories of the image given by the DenseNet-201, Gabor-CNN and Wave-CNN networks.
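For illustration, this D-S synthesis of the three class-probability vectors can be sketched as follows, treating every scene class as a singleton focal element so that B∩C∩D = A only when B = C = D = A; the probability values are toy numbers:

```python
import numpy as np

def ds_fuse_three(m1, m2, m3, eps=1e-12):
    """D-S synthesis of the class probabilities of the three networks."""
    joint = np.asarray(m1) * np.asarray(m2) * np.asarray(m3)
    K = 1.0 - joint.sum()                  # conflict mass
    return joint / max(1.0 - K, eps)

m_densenet = np.array([0.60, 0.30, 0.10])  # DenseNet-201 softmax output
m_gabor    = np.array([0.55, 0.35, 0.10])  # Gabor-CNN softmax output
m_wave     = np.array([0.50, 0.25, 0.25])  # Wave-CNN softmax output
fused = ds_fuse_three(m_densenet, m_gabor, m_wave)
scene = int(np.argmax(fused))              # scene with the highest fused probability
```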
The effects of the present invention are further described below in conjunction with simulation experiments:
1. experimental conditions
Experimental simulation environment: TensorFlow 2.9.1 (GPU) as the compilation framework and Python 3.7 as the programming software, on a Windows 10 hardware platform configured with an 11th Gen Intel(R) Core(TM) i9-11900F @ 2.50 GHz CPU and an NVIDIA GeForce RTX 3090 GPU.
The experiment selects the following three high-resolution remote sensing image data sets:
First, the NWPU-RESISC45 dataset, which contains remote sensing images of 45 different scene categories collected from Google Earth satellite imagery; each category consists of 700 images, i.e. 31,500 remote sensing images in total. The image size is 256×256×3 and the spatial resolution is between 0.2 and 30 m/pixel.
Second, the AID30 dataset, collected from Google Earth, comprises 30 categories and 10,000 images. Each class has 220-420 images with a fixed image size of 600×600×3, and the spatial resolution varies from 0.5 to 8 m/pixel.
Third, the PatternNet38 dataset, collected from Google Earth and Google Map, comprises 38 remote sensing scene categories; each category contains 800 color images of size 256×256×3, i.e. 30,400 remote sensing images in total. The spatial resolution ranges from 0.062 to 4.693 m/pixel.
2. Experimental details
To verify the effectiveness of the method of the present invention, verification experiments were performed using the 3 data sets NWPU-RESISC45, AID30 and PatternNet38. The training set proportion of the NWPU-RESISC45 data set is set to 20%, that of the AID30 data set to 50%, and that of the PatternNet38 data set to 80%. The batch size of the neural network is set to 32. Training is divided into two stages using an SGD optimizer: in the first stage, 100 epochs are trained with the learning rate set to 0.001, and an early-stop strategy is adopted, i.e. training of this stage is stopped when the loss on the training set no longer decreases for 5 consecutive epochs. In the second stage, 100 epochs are trained with a learning-rate reduction strategy: when the training set accuracy does not increase for 5 consecutive epochs, the learning rate is reduced by a factor of 0.5, with the initial learning rate set to 0.0001. The momentum parameter of the SGD optimizer in both stages is set to 0.9.
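The two-stage training schedule can be sketched with standard Keras callbacks as follows; the monitored quantities and the commented-out fit calls are illustrative assumptions about how the schedule would be wired up, not a prescribed implementation:

```python
import tensorflow as tf

# Stage 1: fixed learning rate 0.001, early stopping on the training loss.
stage1_opt = tf.keras.optimizers.SGD(learning_rate=1e-3, momentum=0.9)
stage1_callbacks = [tf.keras.callbacks.EarlyStopping(monitor="loss", patience=5)]

# Stage 2: initial learning rate 0.0001, halved when training accuracy stalls for 5 epochs.
stage2_opt = tf.keras.optimizers.SGD(learning_rate=1e-4, momentum=0.9)
stage2_callbacks = [
    tf.keras.callbacks.ReduceLROnPlateau(monitor="accuracy", factor=0.5, patience=5)
]

# assuming `model` is one of the three branches and `train_ds` is a tf.data
# pipeline already batched with batch size 32:
# model.compile(optimizer=stage1_opt, loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, epochs=100, callbacks=stage1_callbacks)
# model.compile(optimizer=stage2_opt, loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, epochs=100, callbacks=stage2_callbacks)
```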
Experiment 1 in order to better evaluate the classification performance of the method of the present invention, the method of the present invention was compared with a plurality of excellent models, and the classification accuracy of the three data sets of NWPU-RESISC45, AID30, and PatternNet38 shown in table 5, table 6, and table 7 were compared, respectively, and a confusion matrix of the method of the present invention of the three data sets was generated, and the experimental results are shown in fig. 5.
Experiment 2, comparing the classification effect of three networks under different fusion strategies, and table 8 shows the comparison of classification accuracy before and after fusion, for better evaluation of the performance of the evidence fusion method in classification.
Experiment 3: to verify whether the addition of basic texture features and decision-level evidence fusion improves the overall classification effect, ablation experiments were performed to test model performance by comparing different combinations of the DenseNet-201, Gabor-CNN and Wave-CNN networks and D-S evidence fusion; the results are shown in Table 9.
TABLE 5
TABLE 6
TABLE 7
TABLE 8
TABLE 9
3. Analysis of experimental results
The inventive method was applied to the NWPU-RESISC45 dataset and it can be seen from table 5 that in all comparative algorithms, HDTFF-Net reached an average accuracy of 94.47% which was 0.2% higher than the next-best method. Fig. 5 (a) shows the confusion matrix for the HDTFF network on the NWPU-RESISC45 dataset. From the results of the confusion matrix we can see that the classification accuracy of 42 scenes out of 45 categories exceeds 90%. In the comparison of classification methods, the most confusing scene categories are "palace" and "church". The accuracy of the sub-optimal algorithm LSRS is 82% and 79% respectively, and in contrast, the accuracy of the two scene categories in the invention is 88% and 79% respectively. The accuracy of the church is greatly improved, which shows that the HDTFF-Net provided by the invention has higher recognition performance in every scene of the NWPU-RESISC45 data set.
The AID dataset contains 30 categories; the number of images in each category is not uniform and the intra-class differences are large, which makes classification very difficult. Table 6 shows the experimental results: the average accuracy of the HDTFF network proposed by the present invention is 97.46%. Compared with two-stream deep architectures such as LSRL, ACGLNet, Attention CNN+H-GCN and Two-stream Fusion, the classification accuracy of the algorithm is improved by 0.10%, 0.36%, 1.68% and 2.88% respectively, which proves that the proposed model fusion method achieves a remarkable improvement on a large remote sensing data set. FIG. 5 (b) shows the confusion matrix for HDTFF-Net on the AID30 dataset. From the diagonal data it can be seen that the classification accuracy of 29 of the 30 remote sensing scene categories exceeds 90%; 9 categories such as "baseball field" and "beach" reach 100%, and categories such as "airport" and "bare land" reach more than 98%. The algorithm also achieves better classification results on the scenes "resort" and "school", which are easily confused by most comparison algorithms. HDTFF-Net extracts depth features of different types, scales and directions from the remote sensing image and fuses them at the decision layer, thereby reducing the influence of the large intra-class variation in the AID dataset and improving the accuracy.
The PatternNet38 dataset has a total of 38 scene categories; the scene images within each category are uniform, and the categories vary widely. Table 7 shows the experimental comparison results, from which it can be seen that the average overall accuracy of the proposed HDTFF-Net is 99.64%, better than the other methods. The algorithm combines shallow texture features and deep texture features, improving the accuracy by 0.14%. FIG. 5 (c) shows the confusion matrix for HDTFF-Net on the PatternNet38 dataset. From the diagonal data, the accuracy of all 38 scene categories reaches more than 95%, and 26 categories are recognised completely correctly. The most easily confused category in the PatternNet38 dataset, "sparse residential", also reaches 95% accuracy. This demonstrates that the proposed HDTFF-Net can provide more discriminative features, resulting in higher classification performance.
Table 8 shows the classification accuracy of different fusion methods, and several fusion methods all obtain better classification results, which illustrates that the model fusion can combine the advantages of different sub-models, and improve the overall classification performance. Compared with the common fusion strategy, the accuracy obtained by using the D-S evidence theory is highest, and the validity of the fusion method is proved.
Table 9 shows the results of the ablation experiments, the classification capabilities of different neural network models are combined through the D-S evidence theory, and the classification accuracy is higher than that of a single model after combining different network models as can be found by analyzing the experimental results on the data sets NWPU-RESISC45, AID30 and Pattern-Net 38. After the three models are combined, the classification precision reaches the best, which proves that the depth texture feature fusion strategy based on the D-S evidence theory can further improve the performance of network classification.
From analysis, the feasibility and effectiveness of using the D-S evidence fusion multi-layer depth convolution network in high-resolution remote sensing image classification are proved, and the classification precision is obviously improved.
While the invention has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes and substitutions of equivalents may be made without departing from the spirit and scope of the invention.

Claims (10)

1. A remote sensing image scene classification method based on an evidence fusion multilayer depth convolution network, characterized by comprising the following steps:
Inputting the remote sensing image into a DenseNet-201 network, a Gabor-CNN network and a Wave-CNN network respectively to output class probabilities;
and fusing the class probabilities output by the three networks according to the D-S evidence theory to obtain a classification result.
2. The method for classifying a scene in a remote sensing image based on an evidence fusion multi-layer deep convolutional network according to claim 1, further comprising preprocessing the remote sensing image before inputting the network.
3. The remote sensing image scene classification method based on the evidence fusion multi-layer deep convolutional network according to claim 2, wherein the preprocessing comprises the steps of adjusting the image size, normalizing and enhancing the data.
4. The remote sensing image scene classification method based on the evidence fusion multi-layer deep convolutional network according to claim 1, characterized in that the DenseNet-201 network comprises 5 layers, and the processing steps are as follows:
preprocessing the input feature map by Layer1: firstly, shallow information is extracted through a 7×7 convolution layer; then, the feature map is downsampled through a 3×3 max pooling layer;
the feature map output by Layer1 is sequentially input into Layer2, Layer3, Layer4 and Layer5 to extract the deep features of the image;
the result is sequentially passed through a BN layer, an activation layer, a global average pooling layer, a Dropout layer and a Softmax layer, and class probabilities are output.
5. The remote sensing image scene classification method based on the evidence fusion multi-layer deep convolutional network according to claim 4, characterized in that: the Layer2, Layer3, Layer4 and Layer5 are composed of 4 groups of Dense Blocks and 3 groups of Transition Blocks; the Dense Blocks consist of 6, 12, 48 and 32 densely connected Dense Layers respectively; each Dense Layer uses a "1×1 + 3×3" stacked-convolution dense connection structure and obtains the output features of all preceding layers through feature-map channel concatenation; the Transition Block consists of a normalization layer, a convolution layer, an activation layer and an average pooling layer, and is inserted between two adjacent Dense Blocks.
6. The remote sensing image scene classification method based on the evidence fusion multi-layer deep convolutional network according to claim 5, characterized in that: the Gabor-CNN network is improved on the basis of the DenseNet-201 network: firstly, GaborConv2D replaces part of the standard convolution layers of the DenseNet network, and the replacement positions of the GaborConv2D layers are determined through experiments; then, a 3×3 standard convolution kernel is added with its stride set to 2 and its number of output channels equal to the input, so as to further extract shallow features of the image and downsample it once; finally, a 3×3 max pooling layer is used to accelerate feature map scaling.
7. The remote sensing image scene classification method based on the evidence fusion multi-layer deep convolutional network of claim 6, characterized in that: the Gabor kernel size of GaborConv2D is set to 7×7, the number of output channels of the module is set to 64, and the stride is set to 1.
8. The remote sensing image scene classification method based on the evidence fusion multi-layer deep convolutional network according to claim 5, characterized in that: the Wave-CNN network is improved on the basis of the DenseNet-201 network: the pooling layers of the DenseNet-201 network are replaced by 2D wavelet transform layers, the low-frequency component x_LL is taken as the result of downsampling the input feature map, and the features are nonlinearly activated by adding a BN layer and a ReLU activation layer; the high-frequency components x_LH and x_HL are fused into a high-frequency detail texture component H, a spatial attention module is added, the spatial weight matrix is dot-multiplied with the LL component, and the result is then connected to the network layer through a shortcut connection.
9. The remote sensing image scene classification method based on the evidence fusion multi-layer deep convolutional network of claim 8, characterized in that: the Wave-CNN network processing steps are as follows:
multi-level wavelet feature extraction: suppose the output of Dense Block1 is x_dense(1); wavelet transform is used to extract x_LL1_1, x_LH1_1 and x_HL1_1, and the first-level wavelet feature Wavelet_1_1 is obtained through the wavelet spatial attention module; the first-level low-frequency feature x_LL1_1 is decomposed by wavelet transform into the second-level wavelet components x_LL1_2, x_LH1_2 and x_HL1_2, which continue through the spatial attention module to obtain the second-level wavelet feature Wavelet_1_2; by analogy, the nth Dense Block performs 4-n wavelet decompositions in total;
multi-level wavelet cascade fusion: starting from Dense Block2, the output of each Dense Block is fused with the multi-level wavelet features of the preceding Dense Blocks by feature addition, and the nth Dense Block fuses the wavelet feature outputs of each of the n-1 preceding Dense Blocks; the fusion formula is as follows:
x'_dense(n) = H_n[x_dense(n), wavelet_(n-1), ..., wavelet_1], n ≥ 2
wherein x_dense(n) is the output of the nth Dense Block in the backbone model, wavelet_(n-1) is a wavelet feature, and H_n is a nonlinear transformation function consisting of batch normalization, a ReLU activation function and a convolution layer that keeps the output feature-map dimensions of the previous wavelet features consistent with x_dense(n).
10. The remote sensing image scene classification method based on the evidence fusion multi-layer deep convolutional network according to claim 1, characterized in that: the classification result obtained by fusion according to the D-S evidence theory is specifically as follows:
model fusion is carried out on the class probabilities output by the three networks through the D-S synthesis formula:
m(A) = (m_1 ⊕ m_2 ⊕ m_3)(A) = (1/(1-K)) · Σ_{B∩C∩D=A} m_1(B)·m_2(C)·m_3(D), with K = Σ_{B∩C∩D=∅} m_1(B)·m_2(C)·m_3(D),
wherein ⊕ represents the fusion operation, A represents the scene category after image fusion, and B, C, D respectively represent the scene categories of the image given by the DenseNet-201, Gabor-CNN and Wave-CNN networks;
and taking the scene with the highest probability as a classification result according to the obtained probability of the fused different scenes.
CN202310903456.8A 2023-07-22 2023-07-22 Remote sensing image scene classification method based on evidence fusion multilayer depth convolution network Pending CN117058558A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310903456.8A CN117058558A (en) 2023-07-22 2023-07-22 Remote sensing image scene classification method based on evidence fusion multilayer depth convolution network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310903456.8A CN117058558A (en) 2023-07-22 2023-07-22 Remote sensing image scene classification method based on evidence fusion multilayer depth convolution network

Publications (1)

Publication Number Publication Date
CN117058558A true CN117058558A (en) 2023-11-14

Family

ID=88659865

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310903456.8A Pending CN117058558A (en) 2023-07-22 2023-07-22 Remote sensing image scene classification method based on evidence fusion multilayer depth convolution network

Country Status (1)

Country Link
CN (1) CN117058558A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117593649A (en) * 2024-01-18 2024-02-23 中国人民解放军火箭军工程大学 Unbalanced hyperspectral image integrated classification method, unbalanced hyperspectral image integrated classification system and electronic equipment
CN117593649B (en) * 2024-01-18 2024-05-10 中国人民解放军火箭军工程大学 Unbalanced hyperspectral image integrated classification method, unbalanced hyperspectral image integrated classification system and electronic equipment
CN117636080A (en) * 2024-01-26 2024-03-01 深圳市万物云科技有限公司 Scene classification method, device, computer equipment and readable storage medium
CN117636080B (en) * 2024-01-26 2024-04-09 深圳市万物云科技有限公司 Scene classification method, device, computer equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination