CN114926694A - Hyperspectral image classification method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN114926694A
CN114926694A
Authority
CN
China
Prior art keywords
layer
feature
analysis
dimensional
spectrum
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210643592.3A
Other languages
Chinese (zh)
Inventor
周浩
张明慧
袁国武
高赟
普园媛
李鹏
王先旺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yunnan University YNU
Original Assignee
Yunnan University YNU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yunnan University YNU filed Critical Yunnan University YNU
Priority to CN202210643592.3A
Publication of CN114926694A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/10 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a hyperspectral image classification method and device, an electronic device, and a storage medium. The method comprises: acquiring a plurality of local image blocks of the same size from an initial hyperspectral image to obtain a plurality of reference image blocks; inputting each reference image block into a pre-trained three-dimensional residual multi-layer fusion network for spatial-spectral feature extraction, to obtain a plurality of two-dimensional feature maps output by the network; inputting each two-dimensional feature map into a pre-trained feature analysis network to analyse the spectral correlation information between the spectral bands corresponding to the feature maps; and performing classification based on that spectral correlation information to obtain the corresponding hyperspectral image classification result. By extracting spatial-spectral features and classifying according to the correlation information among spectral bands, the correlation among the spectral bands of the hyperspectral image can be fully mined, and classification accuracy for ground objects whose spectral features are not clearly distinguishable is significantly improved.

Description

Hyperspectral image classification method and device, electronic equipment and storage medium
Technical Field
The application relates to the field of image classification, and in particular to a hyperspectral image classification method and device, an electronic device, and a storage medium.
Background
Hyperspectral imaging is a remote-sensing imaging technology that continuously acquires information from an object or a scene on the earth's surface through contiguous spectral channels with high spectral resolution. By combining spectroscopy with imaging, it can capture many contiguous images from the visible to the near-infrared range, each image cube carrying spectral information across hundreds of bands. Hyperspectral image classification is widely used in remote-sensing applications, such as ground-object target recognition in agricultural remote sensing and map making.
Existing hyperspectral image classification models mainly extract spatial features and spectral features. A single spatial feature or a single spectral feature often cannot meet the requirement of high classification accuracy, and the correlation among the spectral bands of the hyperspectral image cannot be fully mined, so that feature extraction is insufficient and the ground-object classification accuracy is low, especially where the spatial and spectral features are not clearly distinguishable.
Disclosure of Invention
An object of the present application is to provide a hyperspectral image classification method and device, an electronic device, and a storage medium that improve the accuracy of hyperspectral image classification.
In order to achieve the above purpose, the technical solutions adopted in the embodiments of the present application are as follows:
in a first aspect, an embodiment of the present application provides a hyperspectral image classification method, where the method includes:
acquiring, from the initial hyperspectral image, a plurality of local image blocks of the same size to obtain a plurality of reference image blocks, where each reference image block comprises multiple layers of feature maps, and each layer of feature map comprises a plurality of spectral bands;
inputting each reference image block into a pre-trained three-dimensional residual multi-layer fusion network for spatial-spectral feature extraction, to obtain a plurality of two-dimensional feature maps output by the network, where each two-dimensional feature map represents the joint spatial-spectral feature of one spectral band, that is, the correlated spatial and spectral characteristics of that band;
inputting each two-dimensional feature map into a pre-trained feature analysis network to analyse the spectral correlation information between the spectral bands corresponding to the two-dimensional feature maps;
and performing classification based on the spectral correlation information between the spectral bands corresponding to the two-dimensional feature maps, to obtain the hyperspectral image classification result corresponding to the initial hyperspectral image.
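The four steps above can be sketched as a shape-flow with NumPy stand-ins. All sizes (a 9x9 patch, 30 bands, 16 classes), the band-split stand-in for the fusion network, and the correlation-matrix proxy for the feature analysis network are illustrative assumptions, not values or operations taken from the patent:

```python
import numpy as np

# Step 1: a same-size local patch around a labelled pixel (hypothetical sizes)
H, W, B = 64, 64, 30                 # image height, width, spectral bands
patch = 9                            # spatial size of each reference image block
cube = np.random.rand(H, W, B).astype(np.float32)
block = cube[:patch, :patch, :]      # one reference image block: (9, 9, 30)

# Step 2 (stand-in): a 3-D feature extractor would emit one 2-D feature map
# per spectral band; here we simply split the block along the band axis
feature_maps = [block[:, :, b] for b in range(B)]   # B maps of shape (9, 9)

# Step 3 (stand-in): the feature analysis network summarises inter-band
# correlation; a band-by-band correlation matrix is a crude proxy for that
flat = np.stack([m.ravel() for m in feature_maps])  # (B, 81)
corr = np.corrcoef(flat)                            # (B, B) band correlations

# Step 4 (stand-in): classify from the correlation information
n_classes = 16
logits = corr.ravel() @ np.random.rand(B * B, n_classes)
pred = int(np.argmax(logits))
```

The point of the sketch is only the data flow: cube, fixed-size block, per-band 2-D maps, inter-band correlation information, class prediction.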
Optionally, inputting each reference image block into the pre-trained three-dimensional residual multi-layer fusion network for spatial-spectral feature extraction, to obtain a plurality of two-dimensional feature maps output by the network, includes:
inputting each reference image block into the pre-trained three-dimensional residual multi-layer fusion network, extracting spatial-spectral features by a plurality of feature convolution layers in the network, and processing those features by the feature processing layer corresponding to each feature convolution layer to obtain two-dimensional feature vectors of multiple dimensions, where the network comprises an original convolution layer followed by the plurality of feature convolution layers arranged in sequence; the original convolution layer outputs an initial spatial-spectral feature map from the reference image block, and each feature convolution layer convolves the spatial-spectral feature map output by the preceding convolution layer and passes the convolved result to both the next convolution layer and its corresponding feature processing layer;
and fusing each two-dimensional feature vector through the fully connected layer corresponding to each feature processing layer, and reconstructing all fused two-dimensional feature vectors through a second data reconstruction layer in the network, to obtain and output the plurality of two-dimensional feature maps.
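A minimal NumPy sketch of this fusion step, assuming three feature processing layers and arbitrary toy widths (the real network's layer counts and dimensions are not given here); each hypothetical fully connected layer is represented by a random weight matrix:

```python
import numpy as np

rng = np.random.default_rng(5)

# two-dimensional feature vectors from three feature processing layers (toy sizes)
vectors = [rng.random((1, d)) for d in (32, 16, 8)]

# one fully connected layer per processing layer maps each vector to a common width
fused = [v @ rng.random((v.shape[1], 24)) for v in vectors]      # each (1, 24)

# the second data reconstruction layer stacks the fused vectors and
# reshapes them into small 2-D feature maps
stacked = np.concatenate(fused, axis=0)      # (3, 24)
feature_maps = stacked.reshape(3, 4, 6)      # three 4x6 two-dimensional maps
```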
Optionally, each feature processing layer includes: a pooling layer, a dimension-reduction layer, and a first data reconstruction layer;
extracting the spatial-spectral features by the plurality of feature convolution layers in the three-dimensional residual multi-layer fusion network, and processing them by the feature processing layer corresponding to each feature convolution layer to obtain two-dimensional feature vectors of multiple dimensions, includes:
extracting, by each feature convolution layer, features from the spatial-spectral feature map output by the preceding convolution layer, and outputting the resulting spatial-spectral feature map;
max-pooling the spatial-spectral feature map by each pooling layer to obtain a corresponding first output feature map;
reducing the dimensionality of the first output feature map by each dimension-reduction layer to obtain a corresponding second output feature map;
reconstructing the second output feature map by each first data reconstruction layer to obtain the two-dimensional feature vector of each dimension;
and obtaining the two-dimensional feature vectors of multiple dimensions from the two-dimensional feature vectors of the individual dimensions.
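The pooling, dimension-reduction, and reconstruction chain of one feature processing layer can be illustrated with NumPy on a single 2-D map (the real layers operate on 3-D spatial-spectral volumes; the 2-D case and all sizes here are simplifying assumptions):

```python
import numpy as np

def max_pool2d(x, k):
    """Non-overlapping k x k max pooling on an (H, W) map (H, W divisible by k)."""
    H, W = x.shape
    return x.reshape(H // k, k, W // k, k).max(axis=(1, 3))

rng = np.random.default_rng(0)
spatial_spectral_map = rng.random((8, 8))     # one map from a feature conv layer

pooled = max_pool2d(spatial_spectral_map, 2)  # pooling layer -> (4, 4)
W_reduce = rng.random((16, 8))                # hypothetical dimension-reduction (16 -> 8)
reduced = pooled.reshape(-1) @ W_reduce       # dimension-reduction layer -> (8,)
vector = reduced.reshape(1, -1)               # data reconstruction -> 2-D vector (1, 8)
```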
Optionally, inputting each two-dimensional feature map into the pre-trained feature analysis network to analyse the spectral correlation information between the corresponding spectral bands includes:
inputting each two-dimensional feature map into the feature analysis network, partitioning each two-dimensional feature map into blocks by a blocking layer of the network, and taking the partitioned two-dimensional feature map as a first-dimension feature map;
applying a linear transformation, by a linear processing layer of the feature analysis network, to the first-dimension feature map to obtain a second-dimension feature map;
and performing spectral analysis on the second-dimension feature map by an analysis processing layer of the feature analysis network to obtain the spectral correlation information between the spectral bands.
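The blocking and linear-transformation stages resemble the patch-partition and linear-embedding steps common in vision transformers; here is a NumPy sketch under assumed sizes (an 8x8 map, 4x4 blocks, embedding width 32):

```python
import numpy as np

def partition(fmap, p):
    """Split an (H, W) feature map into non-overlapping p x p blocks, one per row."""
    H, W = fmap.shape
    return (fmap.reshape(H // p, p, W // p, p)
                .swapaxes(1, 2)
                .reshape(-1, p * p))

rng = np.random.default_rng(1)
fmap = rng.random((8, 8))               # one two-dimensional feature map

first_dim = partition(fmap, 4)          # blocking layer: (4 blocks, 16 values each)
W_linear = rng.random((16, 32))         # hypothetical linear processing layer
second_dim = first_dim @ W_linear       # second-dimension feature map: (4, 32)
```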
Optionally, the analysis processing layer includes: a plurality of analysis processing sublayers connected in sequence;
performing, by the analysis processing layer of the feature analysis network, spectral analysis on the second-dimension feature map to obtain the spectral correlation information between the spectral bands includes:
performing spectral analysis on the second-dimension feature map by the first analysis processing sublayer in the analysis processing layer to obtain an initial analysis result;
passing the analysis results forward in sequence, each analysis processing sublayer after the first taking the previous sublayer's result as input, performing spectral analysis, and outputting its own result;
and taking the analysis result output by the last analysis processing sublayer as the spectral correlation information between the spectral bands.
Optionally, each analysis processing sublayer in the analysis processing layer contains a plurality of consecutive analysis blocks. Each analysis block comprises, connected in sequence, a first normalization layer, a window multi-head self-attention layer, a second normalization layer, a first multilayer perceptron, a third normalization layer, a sliding-window multi-head self-attention layer, a fourth normalization layer, and a second multilayer perceptron; the output of the second multilayer perceptron is the output of the analysis block. Among the analysis blocks, the output of the preceding block of any two adjacent blocks serves as the input of the following block, and the output of the last analysis block is the analysis result of the sublayer.
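This block layout, alternating window self-attention with sliding-window (shifted-window) self-attention, resembles Swin Transformer blocks. The sketch below shows only the window partitioning and the cyclic shift that implements the sliding window; the attention computation itself is omitted, and all sizes are assumptions:

```python
import numpy as np

def window_partition(x, w):
    """Split an (H, W, C) token grid into non-overlapping w x w attention windows."""
    H, W, C = x.shape
    return (x.reshape(H // w, w, W // w, w, C)
             .swapaxes(1, 2)
             .reshape(-1, w * w, C))

rng = np.random.default_rng(2)
tokens = rng.random((8, 8, 4))          # 8x8 token grid, 4 channels

# window self-attention operates independently inside each window
windows = window_partition(tokens, 4)                    # (4 windows, 16 tokens, 4)

# the sliding-window stage cyclically shifts the grid so that new windows
# straddle the old window boundaries, letting information cross them
shifted = np.roll(tokens, shift=(-2, -2), axis=(0, 1))
shifted_windows = window_partition(shifted, 4)           # (4, 16, 4)
```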
Optionally, classifying based on the spectral correlation information between the spectral bands corresponding to the two-dimensional feature maps, to obtain the hyperspectral image classification result corresponding to the initial hyperspectral image, includes:
inputting the spectral correlation information between the spectral bands into a classification model, where two fully connected layers and a Gaussian error linear unit in the classification model classify according to that spectral correlation information, to obtain the hyperspectral image classification result corresponding to the initial hyperspectral image.
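A minimal NumPy sketch of such a head (two fully connected layers with a Gaussian error linear unit between them), using the common tanh approximation of the GELU and hypothetical layer widths:

```python
import numpy as np

def gelu(x):
    """Gaussian error linear unit (tanh approximation)."""
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x ** 3)))

rng = np.random.default_rng(3)
d_in, d_hidden, n_classes = 32, 64, 16     # hypothetical sizes

W1, b1 = rng.standard_normal((d_in, d_hidden)), np.zeros(d_hidden)
W2, b2 = rng.standard_normal((d_hidden, n_classes)), np.zeros(n_classes)

corr_info = rng.standard_normal(d_in)      # spectral-correlation feature vector
logits = gelu(corr_info @ W1 + b1) @ W2 + b2
predicted_class = int(np.argmax(logits))
```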
In a second aspect, an embodiment of the present application further provides a hyperspectral image classification device, where the device includes:
an acquisition module, configured to acquire, from an initial hyperspectral image, a plurality of local image blocks of the same size to obtain a plurality of reference image blocks, where each reference image block comprises multiple layers of feature maps and each layer of feature map comprises a plurality of spectral bands;
an extraction module, configured to input each reference image block into a pre-trained three-dimensional residual multi-layer fusion network for spatial-spectral feature extraction, to obtain a plurality of two-dimensional feature maps output by the network, where each two-dimensional feature map represents the joint spatial-spectral feature of one spectral band, that is, the correlated spatial and spectral characteristics of that band;
an analysis module, configured to input each two-dimensional feature map into a pre-trained feature analysis network to analyse the spectral correlation information between the spectral bands corresponding to the two-dimensional feature maps;
and a classification module, configured to perform classification based on the spectral correlation information between the spectral bands corresponding to the two-dimensional feature maps, to obtain the hyperspectral image classification result corresponding to the initial hyperspectral image.
Optionally, the extraction module is specifically configured to:
input each reference image block into the pre-trained three-dimensional residual multi-layer fusion network, extract spatial-spectral features through a plurality of feature convolution layers in the network, and process those features through the feature processing layer corresponding to each feature convolution layer to obtain two-dimensional feature vectors of multiple dimensions, where the network comprises an original convolution layer followed by the plurality of feature convolution layers arranged in sequence; the original convolution layer outputs an initial spatial-spectral feature map from the reference image block, and each feature convolution layer convolves the spatial-spectral feature map output by the preceding convolution layer and passes the convolved result to both the next convolution layer and its corresponding feature processing layer;
and fuse the two-dimensional feature vectors through the fully connected layer corresponding to each feature processing layer, and reconstruct all fused two-dimensional feature vectors through a second data reconstruction layer in the network, to obtain and output the plurality of two-dimensional feature maps.
Optionally, each feature processing layer includes: a pooling layer, a dimension-reduction layer, and a first data reconstruction layer;
the extraction module is specifically configured to:
extract, by each feature convolution layer, features from the spatial-spectral feature map output by the preceding convolution layer, and output the resulting spatial-spectral feature map;
max-pool the spatial-spectral feature map by each pooling layer to obtain a corresponding first output feature map;
reduce the dimensionality of the first output feature map by each dimension-reduction layer to obtain a corresponding second output feature map;
reconstruct the second output feature map by each first data reconstruction layer to obtain the two-dimensional feature vector of each dimension;
and obtain the two-dimensional feature vectors of multiple dimensions from the two-dimensional feature vectors of the individual dimensions.
Optionally, the analysis module is specifically configured to:
input each two-dimensional feature map into the feature analysis network, partition each two-dimensional feature map into blocks by a blocking layer of the network, and take the partitioned two-dimensional feature map as a first-dimension feature map;
apply a linear transformation, by a linear processing layer of the feature analysis network, to the first-dimension feature map to obtain a second-dimension feature map;
and perform spectral analysis on the second-dimension feature map by an analysis processing layer of the feature analysis network to obtain the spectral correlation information between the spectral bands.
Optionally, the analysis processing layer includes: a plurality of analysis processing sublayers connected in sequence;
the analysis module is specifically configured to:
perform spectral analysis on the second-dimension feature map by the first analysis processing sublayer in the analysis processing layer to obtain an initial analysis result;
pass the analysis results forward in sequence, each analysis processing sublayer after the first taking the previous sublayer's result as input, performing spectral analysis, and outputting its own result;
and take the analysis result output by the last analysis processing sublayer as the spectral correlation information between the spectral bands.
Optionally, each analysis processing sublayer in the analysis processing layer contains a plurality of consecutive analysis blocks. Each analysis block comprises, connected in sequence, a first normalization layer, a window multi-head self-attention layer, a second normalization layer, a first multilayer perceptron, a third normalization layer, a sliding-window multi-head self-attention layer, a fourth normalization layer, and a second multilayer perceptron; the output of the second multilayer perceptron is the output of the analysis block. Among the analysis blocks, the output of the preceding block of any two adjacent blocks serves as the input of the following block, and the output of the last analysis block is the analysis result of the sublayer.
Optionally, the classification module is specifically configured to:
input the spectral correlation information between the spectral bands into a classification model, where two fully connected layers and a Gaussian error linear unit in the classification model classify according to that spectral correlation information, to obtain the hyperspectral image classification result corresponding to the initial hyperspectral image.
In a third aspect, an embodiment of the present application further provides an electronic device, including: a processor, a storage medium, and a bus, where the storage medium stores program instructions executable by the processor; when an application program runs, the processor and the storage medium communicate through the bus, and the processor executes the program instructions to perform the steps of the hyperspectral image classification method of the first aspect.
In a fourth aspect, an embodiment of the present application further provides a computer-readable storage medium storing a computer program that, when read and executed, performs the steps of the hyperspectral image classification method of the first aspect.
The beneficial effects of this application are as follows:
according to the hyperspectral image classification method, the hyperspectral image classification device, the electronic equipment and the storage medium, a plurality of local image blocks with the same size are obtained according to an initial hyperspectral image, each reference image block can comprise a plurality of layers of feature maps, and each layer of feature map can comprise a plurality of spectral wave bands; inputting each reference image block into a three-dimensional residual error multi-layer fusion network obtained by pre-training for extracting space spectrum features to obtain a plurality of two-dimensional feature maps output by the three-dimensional residual error multi-layer fusion network; inputting each two-dimensional characteristic diagram into a pre-trained characteristic analysis network to analyze the spectrum correlation information between the corresponding spectrum bands of each two-dimensional characteristic diagram; and carrying out classification processing based on the spectrum correlation information between the spectrum bands corresponding to the two-dimensional characteristic graphs to obtain a hyperspectral image classification result corresponding to the initial hyperspectral image. The spatial spectrum feature extraction is carried out through a three-dimensional residual multi-layer fusion network obtained through pre-training, the correlation information among the spectral bands is analyzed through a feature analysis network, the correlation among the spectral bands of the hyperspectral image is fully excavated, classification is carried out based on the correlation information among the spectral bands, and the classification precision of the ground features when the spectral features are not obviously distinguished can be obviously improved.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting its scope; those skilled in the art can obtain other related drawings from these drawings without inventive effort.
Fig. 1 is a schematic diagram of an exemplary scenario provided in an embodiment of the present application;
Fig. 2 is a schematic flowchart of a hyperspectral image classification method provided by an embodiment of the present application;
Fig. 3 is a schematic flowchart of a spatial-spectral feature extraction method according to an embodiment of the present application;
Fig. 4 is a schematic flowchart of another spatial-spectral feature extraction method provided in an embodiment of the present application;
Fig. 5 is a schematic flowchart of a feature analysis provided in an embodiment of the present application;
Fig. 6 is a schematic flowchart of another feature analysis provided in an embodiment of the present application;
Fig. 7 is a schematic structural diagram of an analysis processing sublayer provided in an embodiment of the present application;
Fig. 8 is a schematic diagram of an exemplary complete structure for hyperspectral image classification provided by an embodiment of the present application;
Fig. 9 is a schematic diagram of a hyperspectral image classification device according to an embodiment of the present application;
Fig. 10 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
To make the purpose, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings. It should be understood that the drawings in the present application are only for illustration and description and are not used to limit the protection scope of the application; further, the schematic drawings are not drawn to scale. The flowcharts used in this application illustrate operations implemented according to some embodiments of the present application. It should be understood that the operations of the flowcharts may be performed out of order, and steps without logical dependence may be performed in reverse order or simultaneously. One skilled in the art, under the guidance of this application, may add one or more other operations to, or remove one or more operations from, a flowchart.
In addition, the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that in the embodiments of the present application, the term "comprising" is used to indicate the presence of the features stated hereinafter, but does not exclude the addition of further features.
Fig. 1 is a schematic view of an exemplary scenario provided by an embodiment of the present application, and as shown in fig. 1, the method is applied to an electronic device, and the scenario involves the electronic device and a storage device. The electronic device may be a terminal device having a computing processing capability and a display function, such as a desktop computer or a notebook computer, or may be a server; the storage device may be a local storage in the electronic device, or may be another storage device connected to the electronic device. The electronic equipment can acquire the hyperspectral images from the storage equipment, and the acquired hyperspectral images are analyzed and processed by the method of the embodiment of the application to obtain a hyperspectral image classification result.
Fig. 2 is a flowchart illustrating a hyperspectral image classification method according to an embodiment of the application, where the execution body of the method may be the electronic device described above. As shown in fig. 2, the method includes:
S101, acquiring a plurality of local image blocks with the same size according to the initial hyperspectral image to obtain a plurality of reference image blocks.
Optionally, the initial hyperspectral image is a three-dimensional cubic data volume obtained by continuous spectral coverage of each spatial pixel of the target ground object over a plurality of bands. The three-dimensional cubic data volume may include the two-dimensional geometric space and the spectral information of the target ground object, where the spectral information is the information of each band. If the image data corresponding to each band of the hyperspectral image is regarded as one layer, the hyperspectral image may be regarded as a cube whose layers are arranged in band order.
The target ground object may include, among other things, objects such as forests, shrubs, grass, houses, roads, and streams.
Illustratively, if an initial hyperspectral image is Z ∈ R^(W×H×L), then W × H is the two-dimensional geometric spatial resolution and L is the number of spectral bands.
Optionally, because the initial hyperspectral image contains a large amount of information and a plurality of pixels, the local image blocks in the initial hyperspectral image may each be analyzed locally when the initial hyperspectral image is analyzed. Specifically, taking each pixel in the initial hyperspectral image as a center, a local image area within a preset range of that pixel may be obtained as a local image block of the initial hyperspectral image. If the preset range is, for example, a 32 × 32 square block, a plurality of local image blocks of size 32 × 32 × L can be obtained. The local image blocks obtained, all of the same size, can serve as the plurality of reference image blocks; each reference image block may include a multilayer feature map, and each layer of feature map may include a plurality of spectral bands.
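As a minimal sketch of this step, the following NumPy code (an illustration, not the patent's implementation; handling the image borders with reflection padding is an assumption, since the embodiment does not specify border handling) extracts one same-sized local block centred on every pixel:

```python
import numpy as np

def extract_patches(image, patch=32):
    """Extract one patch x patch x L local block centred on every pixel.

    `image` is W x H x L; borders are handled with reflection padding,
    which is an assumption -- the embodiment does not specify it.
    """
    W, H, L = image.shape
    half = patch // 2
    padded = np.pad(image, ((half, half), (half, half), (0, 0)), mode="reflect")
    blocks = [
        padded[i:i + patch, j:j + patch, :]
        for i in range(W) for j in range(H)
    ]
    return np.stack(blocks)  # (W*H, patch, patch, L)

Z = np.random.rand(8, 8, 10)           # toy hyperspectral cube, L = 10 bands
patches = extract_patches(Z, patch=4)  # small patch size for the toy example
```

For a real image the patch size would be 32 as in the text, yielding W·H reference image blocks of size 32 × 32 × L.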
And S102, inputting each reference image block into a three-dimensional residual error multilayer fusion network obtained through pre-training for spatial spectrum feature extraction, and obtaining a plurality of two-dimensional feature maps output by the three-dimensional residual error multilayer fusion network.
Optionally, the three-dimensional residual multilayer fusion network may include a multilayer neural network. The multilayer neural network in the pre-trained three-dimensional residual multilayer fusion network performs spatial-spectral feature extraction on each layer of feature map in each reference image block, so as to extract the shallow and deep spatial-spectral features of the initial hyperspectral image. Then each layer of feature map after spatial-spectral feature extraction is processed in a preset manner by the other neural network layers in the three-dimensional residual multilayer fusion network, and a plurality of two-dimensional feature maps are output. Each two-dimensional feature map is used for representing the spatial-spectral combined feature of one spectral band; the spatial-spectral combined feature represents the associated spatial and spectral features of that spectral band, and the spectral information includes correlation information with the other bands. If there are L spectral bands, there are L two-dimensional feature maps.
For example, if the reference image block of 32 × 32 × L size is acquired according to the step S101, there may be L two-dimensional feature maps of 32 × 32 size.
And S103, inputting the two-dimensional characteristic graphs into a characteristic analysis network obtained through pre-training to analyze the spectrum correlation information between the spectrum bands corresponding to the two-dimensional characteristic graphs.
Optionally, the pre-trained feature analysis network may include a multilayer neural network, and the multilayer neural network in the pre-trained feature analysis network is used to analyze the correlation information between the spectral bands corresponding to the two-dimensional feature maps.
The data of the spectral bands are correlated with each other and have correlation characteristics; one two-dimensional feature map characterizes the spatial-spectral combined feature of one spectral band, and a plurality of two-dimensional feature maps characterize the spatial-spectral combined features of a plurality of spectral bands.
And S104, carrying out classification processing based on the spectrum correlation information between the spectrum bands corresponding to the two-dimensional characteristic maps to obtain a hyperspectral image classification result corresponding to the initial hyperspectral image.
Optionally, for ground objects with similar spatial-spectral characteristics, for example, different objects with the same spectrum or the same object with different spectra, the correlation information between the spectral bands is an important feature for telling such ground objects apart. Different ground objects can be classified according to the correlation information between the spectral bands in a preset neural network to obtain the hyperspectral image classification result, so that different ground objects on the hyperspectral image can be displayed in different colors by class; for example, a forest can be displayed in green, shrubs in blue, lake water in white, and so on.
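A colour-coded classification map of this kind can be produced by simple palette indexing; the palette below is hypothetical, chosen only to mirror the colour examples given above:

```python
import numpy as np

# Hypothetical palette: class index -> display colour (the text only gives
# examples such as green for forest, blue for shrub, white for lake water).
palette = np.array([
    [0, 128, 0],      # 0: forest     -> green
    [0, 0, 255],      # 1: shrub      -> blue
    [255, 255, 255],  # 2: lake water -> white
], dtype=np.uint8)

labels = np.array([[0, 1], [2, 0]])  # toy per-pixel classification result
rgb = palette[labels]                # (2, 2, 3) colour-coded map
```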
In this implementation, a plurality of local image blocks with the same size are obtained according to the initial hyperspectral image, so as to obtain a plurality of reference image blocks, each reference image block may include a plurality of layers of feature maps, and each layer of feature map may include a plurality of spectral bands; inputting each reference image block into a three-dimensional residual error multi-layer fusion network obtained by pre-training for extracting space spectrum features to obtain a plurality of two-dimensional feature maps output by the three-dimensional residual error multi-layer fusion network; inputting each two-dimensional characteristic diagram into a pre-trained characteristic analysis network to analyze the spectrum correlation information between the corresponding spectrum bands of each two-dimensional characteristic diagram; and carrying out classification processing based on the spectrum correlation information between the spectrum bands corresponding to the two-dimensional characteristic graphs to obtain a hyperspectral image classification result corresponding to the initial hyperspectral image. The spatial spectrum feature extraction is carried out through a three-dimensional residual multilayer fusion network obtained through pre-training, the correlation information among the spectral bands is analyzed through a feature analysis network, the correlation among the spectral bands of the hyperspectral image can be fully mined, classification is carried out based on the correlation information among the spectral bands, and the classification precision of ground objects when the spatial spectrum feature difference is not obvious can be obviously improved.
Fig. 3 is a schematic flow chart of a method for extracting a spatial spectrum feature provided in an embodiment of the present application, and as shown in fig. 3, the step S102 of inputting each reference image block into a pre-trained three-dimensional residual error multilayer fusion network to perform spatial spectrum feature extraction, so as to obtain a plurality of two-dimensional feature maps output by the three-dimensional residual error multilayer fusion network, where the method may include:
S201, inputting each reference image block into the pre-trained three-dimensional residual multilayer fusion network, extracting spatial-spectral features by a plurality of feature convolutional layers in the three-dimensional residual multilayer fusion network, and processing the spatial-spectral features by the feature processing layers corresponding to the feature convolutional layers in the three-dimensional residual multilayer fusion network to obtain two-dimensional feature vectors of multiple dimensions.
Optionally, the three-dimensional residual multilayer fusion network may include an original convolutional layer and a plurality of feature convolutional layers arranged in sequence after the original convolutional layer. The original convolutional layer outputs an initial spatial-spectral feature map according to each reference image block, and each feature convolutional layer performs convolution processing based on the spatial-spectral feature map output by the previous convolutional layer and outputs a spatial-spectral feature map to the subsequent convolutional layer and the corresponding feature processing layer.
For example, the plurality of feature convolutional layers may include three feature convolutional layers. The original convolutional layer and the three feature convolutional layers may have 64, 128, 256 and 512 convolution kernels, respectively, the convolution kernels all being of size 3 × 3 × 1, and the convolutional layers including 3, 4, 6 and 3 residual block units, respectively. The number of convolution kernels differs between convolutional layers, and spatial-spectral feature maps of different dimensions can be output according to the different convolution kernels and residual block units, thereby realizing the extraction of spatial-spectral features.
Optionally, the original convolutional layer outputs an initial spatial-spectral feature map according to each reference image block. Specifically, the data of each reference image block is activated through an activation function in an activation layer, where the activation function may be ReLU; pooling is performed through a maximum pooling layer; each pooled reference image block is input into the original convolutional layer; spatial-spectral feature extraction is performed on each reference image block according to the convolution kernels and residual units in the original convolutional layer; and the initial spatial-spectral feature map is output.
Optionally, each feature convolutional layer performs convolution processing based on the spatial-spectral feature map output by the previous convolutional layer and outputs a spatial-spectral feature map to the next convolutional layer and the corresponding feature processing layer, where the previous convolutional layer may be the original convolutional layer or any one of the plurality of feature convolutional layers. If the previous convolutional layer of a feature convolutional layer is the original convolutional layer, the initial spatial-spectral feature map output by the original convolutional layer is input into that feature convolutional layer for convolution processing. The convolved spatial-spectral feature map is input into the next feature convolutional layer as its input, and is also input into the feature processing layer corresponding to the feature convolutional layer for processing, so as to obtain the two-dimensional feature vector of that feature convolutional layer.
For example, if the three-dimensional residual multilayer fusion network includes an original convolutional layer layer1 and three sequentially connected feature convolutional layers layer2, layer3 and layer4, the output feature maps of the convolutional layers may be represented as O1, O2, O3 and O4. O1 is the output feature map of the original convolutional layer layer1; O1 is input into layer2 to obtain the output feature map O2; O2 is input into layer3 to obtain the output feature map O3; and O3 is input into layer4 to obtain the output feature map O4. O2, O3 and O4 are respectively input into the feature processing layers corresponding to the feature convolutional layers layer2, layer3 and layer4 for processing, obtaining two-dimensional feature vectors of different dimensions, for example three two-dimensional feature vectors O23 (n×L, 128), O33 (n×L, 256) and O43 (n×L, 512), where L is the number of spectral bands and n is the batch size, taking the value 2.
S202, performing feature fusion on each two-dimensional feature vector by a full connection layer corresponding to each feature processing layer in the three-dimensional residual multi-layer fusion network, and performing data reconstruction on all the fused two-dimensional feature vectors by a second data reconstruction layer in the three-dimensional residual multi-layer fusion network to obtain and output a plurality of two-dimensional feature maps.
Optionally, the two-dimensional feature vectors output by each feature processing layer are input into the full connection layer corresponding to each feature processing layer for feature fusion, and the two-dimensional feature vectors of different dimensions after feature fusion are converted into two-dimensional feature vectors of the same dimension.
Illustratively, the two-dimensional feature vectors O23 (n×L, 128), O33 (n×L, 256) and O43 (n×L, 512) of three different dimensions from step S201 are respectively input into the fully connected layers corresponding to the feature processing layers for feature fusion, finally outputting a two-dimensional feature vector O5 (n×L, 1024).
Optionally, the fused two-dimensional feature vectors are subjected to data reconstruction to obtain a plurality of two-dimensional feature maps. Illustratively, the output two-dimensional feature vector O5 (n×L, 1024) is subjected to data reconstruction to obtain a two-dimensional feature map of dimension (n, L, 32, 32); each two-dimensional feature map can represent the spatial-spectral feature of one spectral band.
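The data reconstruction described here amounts to a plain reshape, since 32 × 32 = 1024; a sketch:

```python
import numpy as np

n, L = 2, 10                      # batch size and number of spectral bands
O5 = np.random.rand(n * L, 1024)  # fused two-dimensional feature vectors

# Data reconstruction: each 1024-dim vector becomes one 32 x 32 map,
# giving one two-dimensional feature map per band in each batch sample.
maps = O5.reshape(n, L, 32, 32)
```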
Optionally, the processing in steps S201 to S202 is performed on each of the plurality of reference image blocks, so that a plurality of two-dimensional feature maps can be obtained, and then the spatial spectral features of a plurality of spectral bands can be obtained.
In this embodiment, the spatial-spectral features are extracted through the plurality of feature convolutional layers in the three-dimensional residual multilayer fusion network, so that the extracted features include both shallow and deep spatial-spectral features. This improves the ability of the three-dimensional residual multilayer fusion network to process complex hyperspectral images and helps accurately classify ground objects whose similar features are easily confused, and the fusion of the spatial-spectral features of all layers facilitates the extraction and classification of the features of ground objects of different sizes.
Fig. 4 is a schematic flow diagram of another method for extracting a spatial spectrum feature provided in an embodiment of the present application, and as shown in fig. 4, each feature processing layer includes a pooling layer, a dimension reduction layer, and a first data reconstruction layer, in step S201, the spatial spectrum feature is extracted by multiple feature convolution layers in the three-dimensional residual multi-layer fusion network, and the spatial spectrum feature is processed by each feature processing layer corresponding to each feature convolution layer in the three-dimensional residual multi-layer fusion network to obtain a two-dimensional feature vector with multiple dimensions, which may include:
S301, performing feature extraction, by each feature convolutional layer, on the spatial-spectral feature map output by the previous convolutional layer, and outputting the extracted spatial-spectral feature map.
Wherein the previous convolutional layer can be the original convolutional layer or any characteristic convolutional layer.
For example, if the feature convolutional layers are layer2, layer3 and layer4, connected in sequence, the feature convolutional layer layer2 may perform feature extraction on the spatial-spectral feature map output by the original convolutional layer and output the extracted spatial-spectral feature map O2; the feature convolutional layer layer3 may perform feature extraction on the spatial-spectral feature map output by layer2 and output the extracted spatial-spectral feature map O3; and the feature convolutional layer layer4 may perform feature extraction on the spatial-spectral feature map output by layer3 and output the extracted spatial-spectral feature map O4. The spatial-spectral feature maps output by the feature convolutional layers after feature extraction may be represented as O2 ∈ R^(8×8×L), O3 ∈ R^(4×4×L) and O4 ∈ R^(2×2×L), respectively.
S302, performing maximum pooling on the spatial spectrum characteristic diagram by each pooling layer to obtain a corresponding first output characteristic diagram.
Optionally, each pooling layer performs maximum pooling on the spatial-spectral feature map output by its feature convolutional layer and outputs a first output feature map. For example, the pooling layer corresponding to the feature convolutional layer layer2 performs maximum pooling on the spatial-spectral feature map O2 ∈ R^(8×8×L) output by layer2 to obtain the first output feature map O21 ∈ R^(1×1×L); the pooling layer corresponding to layer3 performs maximum pooling on the spatial-spectral feature map O3 ∈ R^(4×4×L) output by layer3 to obtain the first output feature map O31 ∈ R^(1×1×L); and the pooling layer corresponding to layer4 performs maximum pooling on the spatial-spectral feature map O4 ∈ R^(2×2×L) output by layer4 to obtain the first output feature map O41 ∈ R^(1×1×L).
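The maximum pooling over the whole spatial extent of a feature map can be sketched as follows (a NumPy illustration of the shape change R^(8×8×L) → R^(1×1×L), not the patent's code):

```python
import numpy as np

L = 10
O2 = np.random.rand(8, 8, L)  # spatial-spectral feature map from layer2

# Max pooling over the entire 8 x 8 spatial extent keeps, per band,
# only the strongest response, collapsing the map to 1 x 1 x L.
O21 = O2.max(axis=(0, 1), keepdims=True)
```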
And S303, performing dimension reduction processing on the first output characteristic diagram by each dimension reduction layer to obtain a corresponding second output characteristic diagram.
Optionally, each dimension reduction layer is connected after a pooling layer, and the first output feature map output by each pooling layer may be subjected to dimension reduction processing through a scatter function to obtain the second output feature map. Illustratively, the first output feature map O21 ∈ R^(1×1×L) is processed by the dimension reduction layer to obtain the second output feature map O22 (n, 128, L); the first output feature map O31 ∈ R^(1×1×L) is processed by the dimension reduction layer to obtain the corresponding second output feature map O32 (n, 256, L); and the first output feature map O41 ∈ R^(1×1×L) is processed by the dimension reduction layer to obtain the corresponding second output feature map O42 (n, 512, L).
And S304, performing data reconstruction on the second output characteristic diagram by each first data reconstruction layer to obtain a two-dimensional characteristic vector of each dimension.
Optionally, each first data reconstruction layer is connected after a dimension reduction layer, and performs data reconstruction on the second output feature map output by that dimension reduction layer to obtain the two-dimensional feature vector of each dimension. Illustratively, the second output feature map O22 (n, 128, L) is processed by the first data reconstruction layer to obtain the two-dimensional feature vector O23 (n×L, 128) of the corresponding dimension; the second output feature map O32 (n, 256, L) is processed by the first data reconstruction layer to obtain the two-dimensional feature vector O33 (n×L, 256); and the second output feature map O42 (n, 512, L) is processed by the first data reconstruction layer to obtain the two-dimensional feature vector O43 (n×L, 512).
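The data reconstruction from an (n, C, L) second output feature map to an (n×L, C) two-dimensional feature vector amounts to a transpose followed by a reshape; a hedged sketch, where the (n, C, L) axis layout is assumed from the shapes quoted above:

```python
import numpy as np

n, C, L = 2, 128, 10
O22 = np.random.rand(n, C, L)  # second output feature map after dimension reduction

# Data reconstruction: move the band axis next to the batch axis and merge
# them, so every spectral band contributes one C-dimensional feature vector.
O23 = O22.transpose(0, 2, 1).reshape(n * L, C)
```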
S305, obtaining two-dimensional feature vectors of multiple dimensions based on the two-dimensional feature vectors of the dimensions.
Optionally, after maximum pooling, dimension reduction and data reconstruction are performed on the spatial-spectral feature map output by each feature convolutional layer, the two-dimensional feature vector corresponding to each feature convolutional layer can be obtained. The dimensions of these two-dimensional feature vectors differ, and combining them together yields the two-dimensional feature vectors of multiple dimensions.
In this embodiment, by performing maximum pooling, dimension reduction and data reconstruction on the spatial-spectral feature maps, two-dimensional feature vectors of different dimensions can be obtained, which is beneficial to the extraction and classification of small ground objects.
Fig. 5 is a schematic flow chart of feature analysis provided in an embodiment of the present application, and as shown in fig. 5, the step S103 of inputting each two-dimensional feature map into a pre-trained feature analysis network to analyze spectral correlation information between spectral bands corresponding to each two-dimensional feature map may include:
S401, inputting each two-dimensional feature map into the feature analysis network, performing blocking processing on each two-dimensional feature map by a blocking layer of the feature analysis network to obtain a blocked two-dimensional feature map, and taking the blocked two-dimensional feature map as the first-dimension feature map.
The characteristic analysis network can analyze the spectrum correlation information between the spectrum bands corresponding to the two-dimensional characteristic graphs. For more accurate analysis, each two-dimensional feature map may be subjected to blocking processing to obtain a two-dimensional feature map subjected to blocking processing. Specifically, the feature maps in units of pixels on the two-dimensional geometric space of each two-dimensional feature map may be partitioned.
The characteristic analysis network can be a Swin Transformer network model.
For example, patches of 4 × 4 pixels may be used to partition the pixel matrix corresponding to the geometric space of each two-dimensional feature map. Since each patch has 16 pixels and each pixel has L values, a feature map F1(Z;θ) ∈ R^(8×8) with a patch dimension of 16 × L may be obtained, where 8 × 8 represents the number of patches. The feature map F1(Z;θ) ∈ R^(8×8) is taken as the first-dimension feature map after the two-dimensional feature map is partitioned.
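The patch partition can be sketched with NumPy reshapes (an illustration assuming a 32 × 32 plane with L values per pixel, as in the example above):

```python
import numpy as np

L = 10
fmap = np.random.rand(32, 32, L)  # stack of two-dimensional feature maps

# Patch partition: split the 32 x 32 plane into non-overlapping 4 x 4
# patches; each of the 8 x 8 patches holds 16 pixels with L values each,
# so every patch becomes one 16*L-dimensional token.
p = 4
patches = (fmap.reshape(32 // p, p, 32 // p, p, L)
               .transpose(0, 2, 1, 3, 4)   # (8, 8, 4, 4, L)
               .reshape(8, 8, p * p * L))  # one 16*L token per patch
```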
S402, linear transformation processing is carried out by the linear processing layer of the feature analysis network based on the first-dimension feature map, and a second-dimension feature map is obtained.
Optionally, the dimension of each patch in the first-dimension feature map is subjected to linear transformation processing.
Illustratively, if the patch dimension of the first-dimension feature map F1(Z;θ) ∈ R^(8×8) is 16 × L, the dimension 16 × L is converted into d_k by linear transformation, obtaining the second-dimension feature map F2(Z;θ) ∈ R^(8×8) with patch dimension d_k.
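The linear transformation is a single learnable projection from 16 × L to d_k; a sketch with a randomly initialised weight matrix standing in for the learned one (d_k = 96 is an assumed value, not specified in the text):

```python
import numpy as np

L, d_k = 10, 96                        # d_k is the embedding dimension (assumed)
tokens = np.random.rand(8, 8, 16 * L)  # first-dimension feature map F1

# Linear embedding: a learnable (here randomly initialised) projection
# maps every 16*L-dimensional patch token to d_k dimensions.
W_embed = np.random.randn(16 * L, d_k) * 0.02
F2 = tokens @ W_embed                  # second-dimension feature map, (8, 8, d_k)
```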
And S403, performing spectral analysis on the second-dimension characteristic diagram by an analysis processing layer of the characteristic analysis network to obtain spectral correlation information among various spectral bands.
Optionally, each two-dimensional feature map may be subjected to the above blocking and linear transformation to obtain the corresponding second-dimension feature map, yielding a plurality of second-dimension feature maps. Each second-dimension feature map is input into the analysis processing layer, so that the spectral correlation information between the spectral bands can be analyzed.
Fig. 6 is a schematic flow chart of another feature analysis provided in the embodiment of the present application, and as shown in fig. 6, the analysis processing layer may include: a plurality of analysis processing sublayers connected in series. In S403, performing spectral analysis on the second-dimension feature map by the analysis processing layer of the feature analysis network to obtain spectral correlation information between spectral bands may include:
S501, performing spectral analysis on the second-dimension feature map by the first analysis processing sublayer in the analysis processing layer to obtain an initial analysis result.
Illustratively, the analysis processing layer may include 4 sequentially connected analysis processing sublayers, which may be represented as stage1, stage2, stage3 and stage4, the first analysis processing sublayer being stage1. The first analysis processing sublayer stage1 performs spectral analysis on the second-dimension feature map to obtain an initial analysis result, where the initial analysis result may include the spectral correlation information between the spectral bands.
And S502, taking the analysis result of each analysis processing sub-layer after the first analysis processing sub-layer in the analysis processing layer as input data, performing spectral analysis, and outputting the analysis result backwards.
Optionally, the spectral analysis is to analyze the correlation information between the spectral bands, and ground objects with similar spatial-spectral characteristics can be better distinguished according to the correlation information between the spectral bands.
Optionally, the objects output backwards for different analysis processing sublayers may be different, and for example, for the analysis result output by stage1, the analysis result is input into processing sublayer stage2 to continue the analysis; for the analysis result output by stage2, the analysis result is input to processing sublayer stage3 for further analysis; so as to iterate to the last processing sub-layer, such as stage4, where there are no other processing sub-layers after the processing sub-layer stage4, the analysis result of the last processing sub-layer does not need to be analyzed any more, and the analysis result is input into other network structures.
And S503, taking the analysis result output by the last analysis processing sub-layer in the analysis processing layer as the spectrum related information among the spectrum bands.
Optionally, the analysis result output by the last analysis processing sub-layer is spectrum correlation information between the spectral bands finally analyzed by the feature analysis network, and classification can be performed based on the spectrum correlation information to distinguish different features.
In this embodiment, the spectral analysis is performed by each layer of analysis processing sublayer, so that the obtained spectral correlation information between the spectra is more sufficient and accurate.
Fig. 7 is a schematic structural diagram of the analysis processing sublayers according to an embodiment of the present application. As shown in fig. 7, each analysis processing sublayer in the analysis processing layer may include a plurality of consecutive analysis blocks (Swin Transformer blocks), and each analysis block includes, connected in sequence, a first normalization layer (Layer Norm), a window multi-head self-attention layer (W-MSA), a second normalization layer, a first multilayer perceptron (MLP), a third normalization layer, a sliding window multi-head self-attention layer (SW-MSA), a fourth normalization layer, and a second multilayer perceptron.
And the output result of the second multi-layer perceptron is used as the output result of the analysis block, in the plurality of analysis blocks, the output result of the previous analysis block in two adjacent analysis blocks is used as the input data of the next analysis block, and the output result of the last analysis block in the plurality of analysis blocks is used as the analysis result of the analysis processing sublayer.
The MLP layer is composed of an input layer, a hidden layer and an output layer and is used for tensor reshaping; the Layer Norm layer standardizes the data, that is, computes the mean and variance over each sample; the W-MSA computes attention within each window, and, for better information interaction with other windows, the SW-MSA encodes each band through global context information to capture the correlation between the bands.
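The per-sample standardisation performed by the Layer Norm layer can be sketched as follows (eps is the usual small stabilising constant, an assumption here):

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    """Layer normalisation: standardise each token over its feature axis,
    i.e. compute the mean and variance per sample, as the Layer Norm layer does."""
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

x = np.random.rand(4, 96)  # four tokens with 96 features each
y = layer_norm(x)          # each row now has ~zero mean and ~unit variance
```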
Optionally, the number of analysis blocks included in each processing sublayer is an integer multiple of 2: in each pair of consecutive analysis blocks, one block uses the W-MSA and the other uses the SW-MSA.
Optionally, each analysis processing sublayer other than the first further includes a downsampling layer (patch merging). In the first analysis processing sublayer, the analysis blocks directly perform spectral analysis on the second-dimension feature map obtained through the linear transformation in step S402; for the other analysis processing sublayers, the downsampling layer in each sublayer first downsamples the feature map before the analysis blocks perform spectral analysis, and the downsampled feature map is input into the analysis blocks of that sublayer for analysis.
Optionally, for a plurality of consecutive analysis blocks in each analysis processing sublayer, an output result of a previous analysis block in two adjacent analysis blocks is used as input data of a next analysis block, an output result of the previous analysis block is an output result of a second multilayer perceptron in the analysis block, an output result of the second multilayer perceptron in a last analysis block in the analysis processing sublayer is an analysis result output by the analysis processing sublayer, and an analysis result output by the analysis processing sublayer is input data of the next analysis processing sublayer.
Optionally, for any analysis block, let the input feature of the analysis block be z^{l-1}, where l indexes the analysis blocks. The input feature is normalized by the first normalization layer, feature learning is performed by the window multilayer self-attention layer, and residual processing is applied to obtain the feature ẑ^l. After the second normalization by the second normalization layer, tensor reshaping and second residual processing are performed by the first multilayer perceptron to obtain the feature z^l. The feature z^l is then input into the third normalization layer for the third normalization, the normalized feature is input into the sliding window multi-head self-attention layer for calculation, and residual processing is applied to obtain the feature ẑ^{l+1}. The feature ẑ^{l+1} is normalized by the fourth normalization layer and input into the second multilayer perceptron for tensor reshaping, finally obtaining the feature z^{l+1}, which is the output analysis result of the analysis block. The specific formulas for computing each feature are as follows:

$$\hat{z}^{l} = \text{W-MSA}\big(\text{LN}(z^{l-1})\big) + z^{l-1} \qquad \text{(one)}$$

$$z^{l} = \text{MLP}\big(\text{LN}(\hat{z}^{l})\big) + \hat{z}^{l} \qquad \text{(two)}$$

$$\hat{z}^{l+1} = \text{SW-MSA}\big(\text{LN}(z^{l})\big) + z^{l} \qquad \text{(three)}$$

$$z^{l+1} = \text{MLP}\big(\text{LN}(\hat{z}^{l+1})\big) + \hat{z}^{l+1} \qquad \text{(four)}$$

where LN denotes layer normalization, formula (one) computes the window multilayer self-attention, formula (two) computes the feature z^l, formula (three) computes the sliding window multi-head self-attention, and formula (four) computes the tensor reshaping that yields the feature z^{l+1}.
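The residual wiring of formulas (one) through (four) can be sketched as the following minimal numpy walk-through. The layer normalization here is a plain per-row standardization, and the attention and MLP sub-layers are identity placeholders, so this illustrates only the normalize/attend/residual structure of an analysis block, not the trained layers.

```python
import numpy as np

def layer_norm(z, eps=1e-5):
    """Per-sample standardization: subtract the mean, divide by the std."""
    mu = z.mean(axis=-1, keepdims=True)
    var = z.var(axis=-1, keepdims=True)
    return (z - mu) / np.sqrt(var + eps)

def analysis_block(z_prev, w_msa, mlp1, sw_msa, mlp2):
    """One analysis block: the four equations, with sub-layers passed in."""
    z_hat = w_msa(layer_norm(z_prev)) + z_prev       # formula (one)
    z_l = mlp1(layer_norm(z_hat)) + z_hat            # formula (two)
    z_hat2 = sw_msa(layer_norm(z_l)) + z_l           # formula (three)
    z_next = mlp2(layer_norm(z_hat2)) + z_hat2       # formula (four)
    return z_next

identity = lambda z: z            # placeholder for the trained sub-layers
z0 = np.random.randn(64, 96)      # 64 tokens (an 8 x 8 windowed map), d = 96
z1 = analysis_block(z0, identity, identity, identity, identity)
```

Because every sub-layer is wrapped in a residual connection, the block preserves the token/feature shape, which is what lets the blocks be stacked consecutively within a sublayer.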
Optionally, in each analysis block, self-attention computes score values between the feature map matrix information of one spectral band and the feature map matrix information of the other spectral bands to obtain the correlation between the spectral bands. Specifically, for the two-dimensional feature map of a given spectral band, the second-dimension feature map after blocking and linear transformation is F<sub>2</sub>(Z;θ) ∈ R<sup>8×8</sup>. In the first analysis block of the first analysis processing sublayer, it may be multiplied by three randomly initialized learnable matrices W<sup>Q</sup>, W<sup>K</sup> and W<sup>V</sup> to obtain the three corresponding matrices Q, K, V ∈ R<sup>M²×d_k</sup>, where M may be 8, d_k may be 96, and Q, K, V denote the query, key and value matrices. Attention is then computed by formula (five).
$$\text{Attention}(Q, K, V) = \text{SoftMax}\left(\frac{QK^{T}}{\sqrt{d_k}} + B\right)V \qquad \text{(five)}$$

Here, to prevent large values of the dot product of Q and K from making the gradients computed through the softmax function become small, the product is divided by √(d_k); a relative position bias B ∈ R^{(2M-1)×(2M-1)} is added; and the weights computed by the softmax function are multiplied by the matrix V to obtain the attention.
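Formula (five) can be illustrated with the numpy sketch below, taking M = 8 and d_k = 96 from the text. The random Q, K, V and bias values are placeholders: in the actual model Q, K, V come from learned projections, and the bias entries added to the score matrix would be gathered from the learnable (2M−1)×(2M−1) table rather than drawn at random.

```python
import numpy as np

def softmax(s):
    e = np.exp(s - s.max(axis=-1, keepdims=True))   # subtract max for stability
    return e / e.sum(axis=-1, keepdims=True)

def window_attention(Q, K, V, B):
    """Attention(Q, K, V) = SoftMax(Q K^T / sqrt(d_k) + B) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k) + B             # scale, then add position bias
    return softmax(scores) @ V

M, d_k = 8, 96
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((M * M, d_k)) for _ in range(3))
B = rng.standard_normal((M * M, M * M)) * 0.01      # stand-in for the bias term
out = window_attention(Q, K, V, B)
```

Each row of the softmax output is a probability distribution over the M² tokens, so the result is a bias-adjusted weighted average of the rows of V.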
Optionally, for the first analysis block in an analysis processing sublayer other than the first, the feature map used to compute attention is the output of the second multilayer perceptron in the last analysis block of the previous analysis processing sublayer; for the analysis blocks after the first one within a sublayer, the feature map used to compute attention is the output of the preceding analysis block.
In this embodiment, the hierarchical analysis processing layer analyzes the spectrum more comprehensively as the processing deepens; computing self-attention within windows reduces the computational complexity; and adopting an attention mechanism to capture the dependency relationships between spectral bands improves computational efficiency and scalability.
Optionally, the performing of classification based on the spectral correlation information between the spectral bands corresponding to the two-dimensional feature maps to obtain the hyperspectral image classification result corresponding to the initial hyperspectral image may include:
optionally, inputting the spectral correlation information between the spectral bands into the classification model, where two fully connected layers and a Gaussian error linear unit in the classification model perform classification according to the spectral correlation information to obtain the hyperspectral image classification result corresponding to the initial hyperspectral image.
Optionally, between the input layer and the output layer of the multilayer perceptron, the output of the fully connected layer is transformed by the activation function in the Gaussian Error Linear Unit. Specifically, the fully connected layer may perform an affine transformation on the input data, and the Gaussian Error Linear Unit (GELU for short) then applies a nonlinear transformation; the specific calculation formula of the GELU is formula (six).
$$\text{GELU}(x) = x\,\Phi(x) \qquad \text{(six)}$$

where Φ(x) is the Gaussian cumulative distribution function and x is the input value.
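Formula (six) can be computed exactly with the standard library, since for the standard normal distribution Φ(x) = (1 + erf(x/√2)) / 2:

```python
import math

def phi(x):
    """Standard normal cumulative distribution function via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu(x):
    """Formula (six): GELU(x) = x * Phi(x)."""
    return x * phi(x)
```

For large positive x the unit passes its input through almost unchanged, and for large negative x it outputs nearly zero, giving a smooth alternative to ReLU.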
Optionally, the last fully connected layer of the multilayer perceptron may be a softmax layer, which outputs the final classification result; the specific calculation is shown in formula (seven).
$$P(Y = i \mid R, W, b) = \text{softmax}_i(WR + b) = \frac{e^{W_i R + b_i}}{\sum_{j} e^{W_j R + b_j}} \qquad \text{(seven)}$$

where R is the input vector of the fully connected layer, i is the index of the category, W is the weight of the softmax layer, b is the bias of the softmax layer, and P(Y = i | R, W, b) denotes the probability that the input vector belongs to the i-th category.
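Formula (seven) can be sketched numerically as follows. The dimensions (96 input features, 16 classes) and the random R, W, b are illustrative assumptions, not trained values from this disclosure.

```python
import numpy as np

def class_probabilities(R, W, b):
    """Formula (seven): softmax over the affine scores W R + b."""
    logits = W @ R + b
    e = np.exp(logits - logits.max())   # subtract max for numerical stability
    return e / e.sum()

rng = np.random.default_rng(1)
R = rng.standard_normal(96)             # input vector of the fully connected layer
W = rng.standard_normal((16, 96))       # one row of weights per class
b = rng.standard_normal(16)
p = class_probabilities(R, W, b)        # P(Y = i | R, W, b) for each class i
```

The predicted class is simply the index of the largest probability, e.g. `int(p.argmax())`.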
This embodiment performs classification through the multilayer perceptron, which can improve the accuracy of the classification result.
Fig. 8 is a schematic view of an exemplary complete structure of hyperspectral image classification provided by an embodiment of the application, and as shown in fig. 8, the exemplary complete structure includes a three-dimensional residual error multi-layer fusion network and a feature analysis network.
The three-dimensional residual multilayer fusion network may include an original convolutional layer layer1, three feature convolutional layers layer2, layer3 and layer4, maximum pooling layers (AdaptiveMaxPool3d) corresponding to the feature convolutional layers, a dimensionality reduction layer, a data reconstruction layer, fully connected layers (Fully Connected Layers), and a normalization layer (Batch Normalization) and a maximum pooling layer before the original convolutional layer layer1. The specific functions of each layer structure are described in the foregoing embodiments and are not repeated here.
The feature analysis network may adopt, for example, a Swin Transformer network architecture, and may include a plurality of analysis processing sublayers connected in sequence, such as stage1, stage2, stage3 and stage4 in Fig. 8, where stage1 is composed of a linear transformation layer (Linear Embedding) and a plurality of analysis blocks (Swin Transformer Block), and each of stage2, stage3 and stage4 is composed of a downsampling layer (Patch Merging) and a plurality of analysis blocks (Swin Transformer Block); the numbers of Swin Transformer blocks adopted by stage1, stage2, stage3 and stage4 are 2, 2, 6 and 2, respectively. The specific functions of these structures have been described in the foregoing embodiments and are not repeated here.
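Under the standard Swin convention that each Patch Merging step halves the spatial side and doubles the channel count (an assumption here, since the text does not state the merging ratio), the shape flow through stage1–stage4 with block depths 2, 2, 6, 2 can be sketched as:

```python
# Shape-flow sketch of the four stages in Fig. 8, starting from an 8 x 8 map
# with 96 channels (M = 8 and d_k = 96 as used earlier in the text).
def stage_shapes(h, w, c, depths=(2, 2, 6, 2)):
    """Return (height, width, channels, num_blocks) after each stage."""
    shapes = []
    for i, depth in enumerate(depths):
        if i > 0:                 # stage2..stage4 begin with patch merging
            h, w, c = h // 2, w // 2, c * 2
        shapes.append((h, w, c, depth))
    return shapes

flow = stage_shapes(8, 8, 96)
```

This hierarchy is what the text means by the spectrum being analyzed "more comprehensively as the analysis processing layer deepens": spatial resolution shrinks while the feature dimension grows stage by stage.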
Fig. 9 is a schematic diagram of an apparatus of a hyperspectral image classification method provided in an embodiment of the application, and as shown in fig. 9, the apparatus includes:
an obtaining module 601, configured to obtain a plurality of local image blocks of the same size from an initial hyperspectral image as a plurality of reference image blocks, where each reference image block includes multiple layers of feature maps and each layer of feature map includes multiple spectral bands;
an extracting module 602, configured to input each reference image block into a pre-trained three-dimensional residual multilayer fusion network for spatial-spectral feature extraction, to obtain multiple two-dimensional feature maps output by the network, where each two-dimensional feature map represents the joint spatial-spectral feature of one spectral band, the joint spatial-spectral feature characterizing the correlation features of that spectral band in space and spectrum;
an analysis module 603, configured to input each two-dimensional feature map into a pre-trained feature analysis network to analyze the spectral correlation information between the spectral bands corresponding to the two-dimensional feature maps;
a classification module 604, configured to perform classification based on the spectral correlation information between the spectral bands corresponding to the two-dimensional feature maps to obtain the hyperspectral image classification result corresponding to the initial hyperspectral image.
Optionally, the extracting module 602 is specifically configured to:
inputting each reference image block into a pre-trained three-dimensional residual multilayer fusion network, where a plurality of feature convolutional layers in the network extract spatial-spectral features and the feature processing layer corresponding to each feature convolutional layer processes the spatial-spectral features to obtain two-dimensional feature vectors of multiple dimensions; the three-dimensional residual multilayer fusion network includes an original convolutional layer and the plurality of feature convolutional layers arranged in sequence after it, the original convolutional layer outputs an initial spatial-spectral feature map according to the reference image block, and each feature convolutional layer performs convolution based on the spatial-spectral feature map output by the previous convolutional layer and outputs the convolved spatial-spectral feature map to the next convolutional layer and to its corresponding feature processing layer;
and performing feature fusion on the two-dimensional feature vectors by the fully connected layer corresponding to each feature processing layer in the three-dimensional residual multilayer fusion network, and performing data reconstruction on all the fused two-dimensional feature vectors by a second data reconstruction layer in the network to obtain and output the plurality of two-dimensional feature maps.
Optionally, each of the feature processing layers includes: the system comprises a pooling layer, a dimensionality reduction layer and a first data reconstruction layer;
the extraction module 602 is specifically configured to:
performing feature extraction by each feature convolutional layer on the spatial-spectral feature map output by the previous convolutional layer, and outputting the extracted spatial-spectral feature map;
performing maximum pooling on the spatial-spectral feature map by each pooling layer to obtain a corresponding first output feature map;
performing dimensionality reduction on the first output feature map by each dimensionality reduction layer to obtain a corresponding second output feature map;
performing data reconstruction on the second output feature map by each first data reconstruction layer to obtain the two-dimensional feature vector of each dimension;
and obtaining the two-dimensional feature vectors of the multiple dimensions based on the two-dimensional feature vectors of the respective dimensions.
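A toy numpy walk-through of one feature processing layer might look like the following. The map sizes and pooled output size are illustrative assumptions, and the pooling function is a deliberately simplified stand-in for AdaptiveMaxPool3d, not the library implementation.

```python
import numpy as np

def adaptive_max_pool3d(x, out):
    """Simplified stand-in: max-pool a (D, H, W) array to the `out` shape."""
    D, H, W = x.shape
    d, h, w = out
    pooled = np.empty(out)
    for i in range(d):
        for j in range(h):
            for k in range(w):
                pooled[i, j, k] = x[i * D // d:(i + 1) * D // d,
                                    j * H // h:(j + 1) * H // h,
                                    k * W // w:(k + 1) * W // w].max()
    return pooled

feat = np.random.rand(24, 9, 9)                    # spatial-spectral map from a conv layer
first_out = adaptive_max_pool3d(feat, (24, 4, 4))  # pooling layer output
second_out = first_out.reshape(24, 16)             # dimensionality reduction + data
                                                   # reconstruction into per-band vectors
```

The final reshape is what turns the pooled three-dimensional map into the "two-dimensional feature vectors of each dimension" that the fully connected layers then fuse.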
Optionally, the analysis module 603 is specifically configured to:
inputting each two-dimensional feature map into the feature analysis network, carrying out blocking processing on each two-dimensional feature map by a blocking layer of the feature analysis network to obtain a blocked two-dimensional feature map, and taking the blocked two-dimensional feature map as a first-dimension feature map;
performing linear transformation processing on the linear processing layer of the feature analysis network based on the first dimension feature map to obtain a second dimension feature map;
and carrying out spectral analysis on the second dimension characteristic diagram by an analysis processing layer of the characteristic analysis network to obtain spectral correlation information among various spectral bands.
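The blocking and linear transformation steps above can be sketched as follows; the 8×8 map, 2×2 patch size, and 96-dimensional embedding width are illustrative assumptions, and the random projection stands in for the trained linear processing layer.

```python
import numpy as np

rng = np.random.default_rng(2)
fmap = rng.standard_normal((8, 8))       # one band's two-dimensional feature map

# Blocking layer: split into 2 x 2 patches and flatten each patch
# (the "first-dimension feature map").
patches = fmap.reshape(4, 2, 4, 2).swapaxes(1, 2).reshape(16, 4)

# Linear processing layer: project each flattened patch to the embedding
# width (the "second-dimension feature map" fed to the analysis blocks).
W_embed = rng.standard_normal((4, 96))
tokens = patches @ W_embed               # 16 tokens of dimension 96
```

Each row of `tokens` then plays the role of one token in the windowed self-attention of the analysis processing layer.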
Optionally, the analysis processing layer includes: a plurality of analysis processing sublayers connected in sequence;
the analysis module 603 is specifically configured to:
performing spectral analysis on the second dimension characteristic diagram by a first analysis processing sublayer in the analysis processing layer to obtain an initial analysis result;
taking the analysis result of each analysis processing sub-layer after the first analysis processing sub-layer in the analysis processing layer as input data in sequence, performing spectral analysis, and outputting the analysis result backwards;
and taking an analysis result output by the last analysis processing sub-layer in the analysis processing layer as the spectrum correlation information among the spectrum bands.
Optionally, each analysis processing sublayer in the analysis processing layer includes a plurality of consecutive analysis blocks, each analysis block includes a first normalization layer, a window multilayer self-attention layer, a second normalization layer, a first multilayer perceptron, a third normalization layer, a sliding window multi-head self-attention layer, a fourth normalization layer, and a second multilayer perceptron that are sequentially connected, an output result of the second multilayer perceptron serves as an output result of the analysis block, in the plurality of analysis blocks, an output result of a previous analysis block in two adjacent analysis blocks serves as input data of a next analysis block, and an output result of a last analysis block in the plurality of analysis blocks serves as an analysis result of the analysis processing sublayer.
Optionally, the classifying module 604 is specifically configured to:
and inputting the spectral correlation information among the spectral bands into a classification model, and classifying the spectral correlation information by two full-connection layers and a Gaussian error linear unit in the classification model according to the spectral correlation information to obtain a hyperspectral image classification result corresponding to the initial hyperspectral image.
Fig. 10 is a block diagram of an electronic device 700 according to an embodiment of the present disclosure, and as shown in fig. 10, the electronic device may include: a processor 701, a memory 702.
Optionally, the apparatus may further include a bus 703, where the memory 702 is configured to store machine-readable instructions executable by the processor 701 (for example, execution instructions corresponding to the obtaining module, the extracting module, the analyzing module, and the classifying module in the apparatus in fig. 9, and the like), when the electronic device 700 runs, the processor 701 and the memory 702 communicate with each other through the bus 703, and the machine-readable instructions are executed by the processor 701 to perform the method steps in the foregoing method embodiments.
An embodiment of the present application further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, performs the method steps in the above-mentioned hyperspectral image classification method embodiment.
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working process of the system and the apparatus described above may refer to the corresponding process in the method embodiment, and is not described in detail in this application. In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical division, and there may be other divisions in actual implementation, and for example, a plurality of modules or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or modules through some communication interfaces, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk, and various media capable of storing program codes.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think of the changes or substitutions within the technical scope of the present application, and shall cover the scope of the present application.

Claims (10)

1. A hyperspectral image classification method is characterized by comprising the following steps:
acquiring a plurality of local image blocks with the same size according to the initial hyperspectral image to obtain a plurality of reference image blocks, wherein each reference image block comprises a plurality of layers of feature maps, and each layer of feature map comprises a plurality of spectral wave bands;
inputting each reference image block into a pre-trained three-dimensional residual multilayer fusion network for spatial-spectral feature extraction to obtain a plurality of two-dimensional feature maps output by the three-dimensional residual multilayer fusion network, wherein each two-dimensional feature map is used for representing a joint spatial-spectral feature of one spectral band, and the joint spatial-spectral feature is used for representing the correlation features of the spectral band in space and spectrum;
inputting each two-dimensional characteristic diagram into a characteristic analysis network obtained by pre-training to analyze the spectrum correlation information among the spectrum bands corresponding to each two-dimensional characteristic diagram;
and carrying out classification processing based on the spectrum correlation information between the spectrum bands corresponding to the two-dimensional characteristic graphs to obtain a classification result corresponding to the initial hyperspectral image.
2. The hyperspectral image classification method according to claim 1, wherein the inputting each reference image block into a pre-trained three-dimensional residual multilayer fusion network for spatial-spectral feature extraction to obtain a plurality of two-dimensional feature maps output by the three-dimensional residual multilayer fusion network comprises:
inputting each reference image block into the pre-trained three-dimensional residual multilayer fusion network, extracting spatial-spectral features by a plurality of feature convolutional layers in the network, and processing the spatial-spectral features by the feature processing layer corresponding to each feature convolutional layer to obtain two-dimensional feature vectors of multiple dimensions, wherein the three-dimensional residual multilayer fusion network comprises an original convolutional layer and the plurality of feature convolutional layers arranged in sequence after it, the original convolutional layer outputs an initial spatial-spectral feature map according to the reference image block, and each feature convolutional layer performs convolution based on the spatial-spectral feature map output by the previous convolutional layer and outputs the convolved spatial-spectral feature map to the next convolutional layer and to the corresponding feature processing layer;
and performing feature fusion on the two-dimensional feature vectors by the fully connected layer corresponding to each feature processing layer in the three-dimensional residual multilayer fusion network, and performing data reconstruction on all the fused two-dimensional feature vectors by a second data reconstruction layer in the network to obtain and output the plurality of two-dimensional feature maps.
3. The hyperspectral image classification method according to claim 2, wherein each of the feature processing layers comprises: the system comprises a pooling layer, a dimensionality reduction layer and a first data reconstruction layer;
the extracting of spatial-spectral features by a plurality of feature convolutional layers in the three-dimensional residual multilayer fusion network and the processing of the spatial-spectral features by the feature processing layer corresponding to each feature convolutional layer to obtain two-dimensional feature vectors of multiple dimensions comprises:
performing feature extraction by each feature convolutional layer on the spatial-spectral feature map output by the previous convolutional layer, and outputting the extracted spatial-spectral feature map;
performing maximum pooling on the spatial-spectral feature map by each pooling layer to obtain a corresponding first output feature map;
performing dimensionality reduction on the first output feature map by each dimensionality reduction layer to obtain a corresponding second output feature map;
performing data reconstruction on the second output feature map by each first data reconstruction layer to obtain the two-dimensional feature vector of each dimension;
and obtaining the two-dimensional feature vectors of the multiple dimensions based on the two-dimensional feature vectors of the dimensions.
4. The hyperspectral image classification method according to claim 1, wherein the step of inputting each two-dimensional feature map into a pre-trained feature analysis network to analyze the spectral correlation information between the spectral bands corresponding to each two-dimensional feature map comprises the steps of:
inputting each two-dimensional characteristic diagram into the characteristic analysis network, carrying out blocking processing on each two-dimensional characteristic diagram by a blocking layer of the characteristic analysis network to obtain a blocked two-dimensional characteristic diagram, and taking the blocked two-dimensional characteristic diagram as a first-dimension characteristic diagram;
carrying out linear transformation processing on the basis of the first dimension characteristic diagram by a linear processing layer of the characteristic analysis network to obtain a second dimension characteristic diagram;
and carrying out spectral analysis on the second dimension characteristic diagram by an analysis processing layer of the characteristic analysis network to obtain spectral correlation information among various spectral bands.
5. The hyperspectral image classification method according to claim 4, wherein the analysis processing layer comprises: a plurality of analysis processing sublayers connected in sequence;
the spectral analysis of the second dimension characteristic diagram by the analysis processing layer of the characteristic analysis network to obtain the spectral correlation information among the spectral bands comprises:
performing spectral analysis on the second dimension characteristic diagram by a first analysis processing sublayer in the analysis processing layer to obtain an initial analysis result;
taking the analysis result of each analysis processing sub-layer after the first analysis processing sub-layer in the analysis processing layer as input data in sequence, performing spectral analysis, and outputting the analysis result backwards;
and taking an analysis result output by the last analysis processing sub-layer in the analysis processing layer as the spectrum correlation information among the spectrum bands.
6. The hyperspectral image classification method according to claim 5, wherein each analysis processing sublayer in the analysis processing layers respectively comprises a plurality of continuous analysis blocks, each analysis block respectively comprises a first normalization layer, a window multilayer self-attention layer, a second normalization layer, a first multilayer perceptron, a third normalization layer, a sliding window multi-head self-attention layer, a fourth normalization layer and a second multilayer perceptron which are sequentially connected, an output result of the second multilayer perceptron is used as an output result of the analysis block, an output result of a previous analysis block in two adjacent analysis blocks in the plurality of analysis blocks is used as input data of a next analysis block, and an output result of a last analysis block in the plurality of analysis blocks is used as an analysis result of the analysis processing sublayer.
7. The hyperspectral image classification method according to any one of claims 1 to 6, wherein the performing of classification based on the spectral correlation information between the spectral bands corresponding to the two-dimensional feature maps to obtain the hyperspectral image classification result corresponding to the initial hyperspectral image comprises:
and inputting the spectral correlation information among the spectral bands into a classification model, and classifying the spectral correlation information by two full-connection layers and a Gaussian error linear unit in the classification model according to the spectral correlation information to obtain a hyperspectral image classification result corresponding to the initial hyperspectral image.
8. A hyperspectral image classification apparatus, the apparatus comprising:
the acquisition module is used for acquiring a plurality of local image blocks with the same size according to the initial hyperspectral image to obtain a plurality of reference image blocks, each reference image block comprises a plurality of layers of feature maps, and each layer of feature map comprises a plurality of spectral wave bands;
the extraction module is used for inputting each reference image block into a pre-trained three-dimensional residual multilayer fusion network for spatial-spectral feature extraction to obtain a plurality of two-dimensional feature maps output by the three-dimensional residual multilayer fusion network, wherein each two-dimensional feature map is used for representing a joint spatial-spectral feature of one spectral band, and the joint spatial-spectral feature is used for representing the spatial and spectral correlation features of the spectral band;
the analysis module is used for inputting each two-dimensional characteristic diagram into a characteristic analysis network obtained by pre-training to analyze the spectrum correlation information among the spectrum bands corresponding to each two-dimensional characteristic diagram;
and the classification module is used for performing classification processing based on the spectrum correlation information between the spectrum bands corresponding to the two-dimensional characteristic maps to obtain a hyperspectral image classification result corresponding to the initial hyperspectral image.
9. An electronic device, comprising a memory storing a computer program executable by a processor and a processor implementing the steps of the hyperspectral image classification method according to any of the claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when being executed by a processor, performs the steps of the hyperspectral image classification method according to any of the claims 1 to 7.
CN202210643592.3A 2022-06-08 2022-06-08 Hyperspectral image classification method and device, electronic equipment and storage medium Pending CN114926694A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210643592.3A CN114926694A (en) 2022-06-08 2022-06-08 Hyperspectral image classification method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210643592.3A CN114926694A (en) 2022-06-08 2022-06-08 Hyperspectral image classification method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114926694A true CN114926694A (en) 2022-08-19

Family

ID=82812440

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210643592.3A Pending CN114926694A (en) 2022-06-08 2022-06-08 Hyperspectral image classification method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114926694A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116051896A (en) * 2023-01-28 2023-05-02 西南交通大学 Hyperspectral image classification method of lightweight mixed tensor neural network
CN116051896B (en) * 2023-01-28 2023-06-20 西南交通大学 Hyperspectral image classification method of lightweight mixed tensor neural network
CN116468906A (en) * 2023-04-24 2023-07-21 中国测绘科学研究院 Hyperspectral data classification method based on space expansion convolution and spectrum expansion convolution
CN117372789A (en) * 2023-12-07 2024-01-09 北京观微科技有限公司 Image classification method and image classification device
CN117372789B (en) * 2023-12-07 2024-03-08 北京观微科技有限公司 Image classification method and image classification device

Similar Documents

Publication Publication Date Title
Xie et al. Hyperspectral image super-resolution using deep feature matrix factorization
CN108009559B (en) Hyperspectral data classification method based on space-spectrum combined information
Sun et al. Low-rank and sparse matrix decomposition-based anomaly detection for hyperspectral imagery
CN114926694A (en) Hyperspectral image classification method and device, electronic equipment and storage medium
Ortac et al. Comparative study of hyperspectral image classification by multidimensional Convolutional Neural Network approaches to improve accuracy
Ahmad et al. Multi-layer Extreme Learning Machine-based Autoencoder for Hyperspectral Image Classification.
CN113344103B (en) Hyperspectral remote sensing image ground object classification method based on hypergraph convolution neural network
CN115205590A (en) Hyperspectral image classification method based on complementary integration Transformer network
Hosseiny et al. A hyperspectral anomaly detection framework based on segmentation and convolutional neural network algorithms
Tun et al. Hyperspectral remote sensing images classification using fully convolutional neural network
Albano et al. Euclidean commute time distance embedding and its application to spectral anomaly detection
CN111242228B (en) Hyperspectral image classification method, hyperspectral image classification device, hyperspectral image classification equipment and storage medium
Hou et al. Spatial–spectral weighted and regularized tensor sparse correlation filter for object tracking in hyperspectral videos
CN115240072A (en) Hyperspectral multi-class change detection method based on multidirectional multi-scale spectrum-space residual convolution neural network
Ahmad et al. Hybrid dense network with attention mechanism for hyperspectral image classification
CN115439325A (en) Low-resolution hyperspectral image processing method and device and computer program product
CN117788296B (en) Infrared remote sensing image super-resolution reconstruction method based on heterogeneous combined depth network
Li et al. Improving model robustness for soybean iron deficiency chlorosis rating by unsupervised pre-training on unmanned aircraft system derived images
Nyasaka et al. Learning hyperspectral feature extraction and classification with resnext network
CN113408540B (en) Synthetic aperture radar image overlap area extraction method and storage medium
Küçük et al. Sparse and low-rank matrix decomposition-based method for hyperspectral anomaly detection
Nyabuga et al. [Retracted] A 3D‐2D Convolutional Neural Network and Transfer Learning for Hyperspectral Image Classification
Wang et al. Hybrid network model based on 3D convolutional neural network and scalable graph convolutional network for hyperspectral image classification
Nouri et al. Processing of Hyperion data set for detection of indicative minerals using a hybrid method in Dost-Bayli, Iran
Song et al. HDTFF-Net: Hierarchical deep texture features fusion network for high-resolution remote sensing scene classification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination