CN116704378A - Homeland mapping data classification method based on self-growing convolution neural network - Google Patents

Homeland mapping data classification method based on self-growing convolution neural network

Info

Publication number
CN116704378A
CN116704378A
Authority
CN
China
Prior art keywords
neural network
convolutional neural
network model
data
feature extraction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310489020.9A
Other languages
Chinese (zh)
Inventor
Liang Yong (梁勇)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
JINAN INSTITUTE OF SURVEY & MAPPING
Original Assignee
JINAN INSTITUTE OF SURVEY & MAPPING
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by JINAN INSTITUTE OF SURVEY & MAPPING filed Critical JINAN INSTITUTE OF SURVEY & MAPPING
Priority to CN202310489020.9A priority Critical patent/CN116704378A/en
Publication of CN116704378A publication Critical patent/CN116704378A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/17Terrestrial scenes taken from planes or by drones
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/58Extraction of image or video features relating to hyperspectral data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V10/765Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects using rules for classification or partitioning the feature space
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/194Terrestrial scenes using hyperspectral data, i.e. more or other wavelengths than RGB
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Remote Sensing (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a homeland mapping data classification method based on a self-growing convolution neural network, which comprises the following steps: a convolutional neural network model is constructed, comprising a spectral feature extraction module and a spatial feature extraction module that extract the corresponding feature information from the spectral and spatial dimensions respectively, a feature fusion module that fuses the spectral and spatial feature information, and a network output module that classifies the fused feature information. The model is trained with the labeled dataset; after each round of training, part of the unlabeled data is selected and added to the labeled dataset, and it is judged whether the model satisfies the self-growth condition. If it does, a spectral feature extraction module and a spatial feature extraction module are added to update the model, and the updated model is trained further with the new labeled dataset until a stopping condition is met, at which point the trained model is output. The data to be classified is then classified with the trained model. The invention achieves better hyperspectral image classification performance.

Description

Homeland mapping data classification method based on self-growing convolution neural network
Technical Field
The invention belongs to the technical field of data processing, and particularly relates to a homeland mapping data classification method based on a self-growing convolution neural network.
Background
With the development of imaging, sensor and aerospace technology, hyperspectral images can now be acquired in the field of homeland mapping, and the spatial and spectral information of ground objects can then be obtained from them. A hyperspectral image has many spectral bands and high spectral resolution, and can capture tens to hundreds of narrow-band spectral measurements for each ground feature, so that its spatial information and spectral information are organically combined.
The vast amount of hyperspectral data presents great opportunities as well as many challenges. With the continuous development of remote sensing technology in recent years, spatial and spectral resolution keep increasing and the data volume grows accordingly. Compared with traditional remote sensing images, hyperspectral images are high-dimensional because of their rich spectral information, and hyperspectral remote sensing images are further characterized by the unity of image and spectrum, high dimensionality, strong band correlation and serious data redundancy, which poses new challenges for hyperspectral remote sensing image classification methods. How to efficiently exploit the rich spectral and spatial information of hyperspectral remote sensing images while reducing the computational complexity caused by high-dimensional data has become a primary problem in the hyperspectral remote sensing field. In recent years, convolutional neural networks have been widely applied to the feature classification of hyperspectral images because of their strong feature extraction and classification capabilities, and have achieved good classification performance.
However, existing convolutional-neural-network methods for hyperspectral image classification share a common weakness: when extracting inter-spectral and spatial features, they lose information because the extracted features are under-utilized, or retain redundant information because too much irrelevant information is kept. Key information in the hyperspectral bands therefore cannot be fully exploited to obtain more discriminative spectral-spatial features, and a large number of hyperspectral samples are needed to train the neural network, so these classification methods perform poorly on hyperspectral images when training samples are limited.
Disclosure of Invention
In order to solve the above problems in the prior art, the invention provides a homeland mapping data classification method based on a self-growing convolution neural network. The technical problems to be solved by the invention are realized by the following technical scheme:
An embodiment of the invention provides a homeland mapping data classification method based on a self-growing convolution neural network, which comprises the following steps:
capturing homeland mapping image data with a camera carried by an unmanned aerial vehicle, wherein the homeland mapping image data comprises a small labeled dataset and a large unlabeled dataset;
constructing a convolutional neural network model comprising a basic feature extraction module (formed by a spectral feature extraction module and a spatial feature extraction module), a feature fusion module, and a network output module, wherein the spectral and spatial feature extraction modules extract the corresponding feature information from the spectral and spatial dimensions respectively, the feature fusion module fuses the spectral feature information with the spatial feature information, and the network output module classifies the fused feature information;
performing iterative training on the constructed convolutional neural network model with the labeled dataset; after each training iteration, selecting part of the unlabeled data from the unlabeled dataset and adding it to the labeled dataset, and judging whether the convolutional neural network model satisfies the self-growth condition: if it does, adding a spectral feature extraction module and a spatial feature extraction module to the basic feature extraction module to update the model and continuing to train the updated model with the new labeled dataset; if it does not, keeping the model unchanged and continuing to train it with the new labeled dataset; repeating until the iteration stop condition is met, and outputting the trained convolutional neural network model;
and classifying the captured homeland mapping image data to be classified with the trained convolutional neural network model.
In one embodiment of the present invention, the spectral feature extraction module comprises a plurality of sequentially connected 3D deformable convolution modules with a max-pooling layer between adjacent modules, wherein each 3D deformable convolution module comprises several sequentially connected 3D deformable convolution layers, each followed by an activation layer, and the activation layer behind the first 3D deformable convolution layer is connected to the last 3D deformable convolution layer to form a residual structure.
In one embodiment of the invention, the spatial feature extraction module comprises a plurality of parallel multi-scale feature extraction branches, all of which feed into a concatenation layer followed by a max-pooling layer, wherein each multi-scale feature extraction branch comprises a scale operation layer, a convolution layer, a normalization layer and an activation layer connected in sequence.
In one embodiment of the present invention, the feature fusion module includes a plurality of 2D convolution layers connected in sequence.
In one embodiment of the invention, the network output module includes a concatenation layer, an average pooling layer, a fully connected layer and a softmax classifier.
In one embodiment of the invention, selecting part of the unlabeled data from the unlabeled dataset and adding it to the labeled dataset includes: performing a consistency measurement on the unlabeled dataset with a high-confidence sample selection strategy, and selecting part of the unlabeled data from the unlabeled dataset to add to the labeled dataset.
In one embodiment of the invention, performing the consistency measurement on the unlabeled dataset with the high-confidence sample selection strategy and selecting part of the unlabeled data to add to the labeled dataset comprises:
obtaining the classification result of the current convolutional neural network model, and constructing a mapping graph and a probability matrix for the unlabeled dataset from the classification result;
defining a window of preset size, and using it to extract from the mapping graph the mapping matrix corresponding to each sample in the unlabeled dataset;
selecting high-confidence unlabeled samples from the mapping matrices with a neighborhood consistency criterion to form a candidate dataset;
and selecting high-probability samples from the candidate dataset according to the probability matrix, and adding them to the labeled dataset.
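The selection steps above can be sketched in NumPy. This is a simplified illustration, not the patented implementation: the mapping graph is taken as a 2-D array of predicted labels, the preset window is fixed at 3 × 3, the neighborhood consistency criterion is read as "every pixel in the window shares the center label", and a `top_k` cutoff plays the role of the probability-based final selection; all of these specifics are assumptions.

```python
import numpy as np

def select_high_confidence(class_map, prob_map, top_k=2):
    """Pick unlabeled pixels whose 3x3 neighborhood is label-consistent,
    then keep the top_k with the highest predicted probability."""
    h, w = class_map.shape
    candidates = []
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = class_map[i - 1:i + 2, j - 1:j + 2]  # mapping matrix
            # neighborhood consistency: every neighbor shares the center label
            if np.all(window == class_map[i, j]):
                candidates.append((prob_map[i, j], i, j))
    # keep the highest-probability candidates
    candidates.sort(reverse=True)
    return [(i, j) for _, i, j in candidates[:top_k]]

class_map = np.array([
    [1, 1, 1, 2],
    [1, 1, 1, 2],
    [1, 1, 1, 2],
    [3, 3, 3, 3],
])
prob_map = np.full((4, 4), 0.5)
prob_map[1, 1] = 0.9  # the only interior pixel with a fully consistent window
picked = select_high_confidence(class_map, prob_map, top_k=1)
```

Only pixel (1, 1) has a window whose nine labels all agree, so it is the single sample promoted to the labeled dataset here.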
In one embodiment of the present invention, determining whether the convolutional neural network model satisfies the self-growth condition includes:
constructing the overall loss function corresponding to the convolutional neural network model;
and calculating the loss value of the current convolutional neural network model according to the overall loss function and judging whether the loss value is larger than a preset loss value: if it is not, the model satisfies the self-growth condition; if it is, the model does not satisfy the self-growth condition.
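A minimal sketch of this growth decision, assuming a scalar loss value and a hand-chosen preset threshold (the patent specifies neither the threshold value nor the loss history used here):

```python
def satisfies_self_growth(current_loss: float, preset_loss: float) -> bool:
    """Mirror the rule above: grow only when the loss is NOT larger than
    the preset value, i.e. loss <= threshold -> add feature modules."""
    return current_loss <= preset_loss

# toy loss history over training iterations (illustrative values)
PRESET = 0.25
history = [0.9, 0.6, 0.3, 0.2]
grow_at = [step for step, loss in enumerate(history)
           if satisfies_self_growth(loss, PRESET)]  # iterations that trigger growth
```

With these values the model would grow only at the last recorded iteration, once the loss has fallen to the preset level.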
In one embodiment of the present invention, the overall loss function corresponding to the constructed convolutional neural network model comprises two parts during training:

L = L1 + λ·L2

where L denotes the overall loss function, L1 the cross-entropy loss function, L2 the local feature preservation function, and λ a trade-off parameter.
In one embodiment of the present invention, after part of the unlabeled data has been selected from the unlabeled dataset and added to the labeled dataset during training, the method further comprises:
performing weighted-average filtering on the hard-to-classify samples in the updated unlabeled dataset, using the unlabeled samples in their spatial neighborhood.
The invention has the beneficial effects that:
the invention provides a homeland mapping data classification method based on a self-growing convolution neural network, which is a comprehensive data classification method, and specifically comprises the following steps of: shooting homeland mapping image data by using an unmanned aerial vehicle carried camera; wherein the homeland mapping image data comprises a small number of labeled data sets and a large number of unlabeled data sets; constructing a convolutional neural network model; the convolutional neural network model comprises a basic feature extraction module formed by a spectrum feature extraction module and a space feature extraction module, a feature fusion module and a network output module, wherein the spectrum feature extraction module and the space feature extraction module are respectively used for extracting corresponding feature information from two aspects of spectrum and space, the feature fusion module is used for fusing the feature information corresponding to the spectrum and the feature information corresponding to the space, and the network output module is used for classifying the fused feature information; performing iterative training on the constructed convolutional neural network model by using the labeled data set; after each iteration training is completed, selecting part of non-tag data from the non-tag data set, adding the part of non-tag data to the tagged data set, judging whether the convolutional neural network model meets self-growth conditions or not, if so, respectively adding a spectral feature extraction module and a spatial feature extraction module in the basic feature extraction module to update the convolutional neural network model, continuing training the updated convolutional neural network model by using a new tagged data set, if not, maintaining the convolutional neural network model, continuing training the maintained convolutional neural network model by using the new tagged data set until the iteration stop 
conditions are met, and outputting a trained convolutional neural network model; and classifying the photographed homeland mapping image data to be classified by using the trained convolutional neural network model. Therefore, the convolutional neural network model constructed by the method performs feature extraction from two aspects of spectrum and space, so that the classification accuracy can be improved, and the space information contained in the hyperspectral image can be extracted more deeply; in the training process of the convolutional neural network model, part of non-tag data is selected from the non-tag data set and added into the tagged data set to form a new tagged data set after each iteration training is finished, more tagged data are used for training, the problem that the fitting of the training of the network model is over caused by less tagged data can be avoided, a better convolutional neural network model can be obtained after the training is finished, and therefore the classification performance of the convolutional neural network model is improved; in the training process of the convolutional neural network model, the convolutional neural network model is not based on a fixed convolutional neural network model any more, but is adaptively grown in the training process, the obtained convolutional neural network model can be used for extracting shallow features, and the gradually-growing convolutional neural network model can be used for extracting higher-level features, so that the convolutional neural network model structure can obtain better hyperspectral image classification performance.
The present invention will be described in further detail with reference to the accompanying drawings and examples.
Drawings
FIG. 1 is a flow chart of a classification method of homeland mapping data based on a self-growing convolution neural network provided by an embodiment of the invention;
FIG. 2 is a schematic diagram of a convolutional neural network model provided by an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a spectral feature extraction module according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a spatial feature extraction module according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a feature fusion module and a network output module according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of an exemplary process for updating a tagged data set provided by an embodiment of the invention;
FIG. 7 is a schematic diagram of a convolutional neural network model after self-growth according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to specific examples, but embodiments of the present invention are not limited thereto.
In order to realize high-precision classification of homeland mapping image data with a limited number of labeled samples, referring to fig. 1, an embodiment of the invention provides a homeland mapping data classification method based on a self-growing convolution neural network, which specifically comprises the following steps:
S10, capturing homeland mapping image data with a camera carried by an unmanned aerial vehicle; the homeland mapping image data comprises a small labeled dataset and a large unlabeled dataset.
In the embodiment of the invention a camera carried by an unmanned aerial vehicle is used to capture the homeland mapping image data, although the method is not limited to this. The captured data are high-dimensional hyperspectral images. Because the captured data volume is large and the data are redundant and correlated, most of the data are unlabeled and only a small amount is labeled, so the amount of labeled data available for the subsequent training of the neural network model is limited, which makes classification of the captured data difficult.
S20, constructing a convolutional neural network model. The model comprises a basic feature extraction module, a feature fusion module and a network output module, wherein the basic feature extraction module is composed of a spectral feature extraction module and a spatial feature extraction module; the spectral and spatial feature extraction modules extract the corresponding feature information from the spectral and spatial dimensions respectively, the feature fusion module fuses the spectral feature information with the spatial feature information, and the network output module classifies the fused feature information.
Existing neural-network-based hyperspectral image classification methods have the following shortcomings: when extracting inter-spectral and spatial features they lose information because the extracted features are under-utilized, or retain redundant information because too much irrelevant information is kept; they cannot fully exploit the key information in the hyperspectral bands to obtain more discriminative spectral-spatial features; they require a large number of hyperspectral samples to train the neural network, so the classification effect is poor when labeled samples are insufficient; and they do not pay close attention to the differences in information between different spectra.
To address these shortcomings of existing neural network models, the embodiment of the invention provides an alternative: the new convolutional neural network model shown in fig. 2 is constructed, comprising a basic feature extraction module, a feature fusion module and a network output module, wherein the basic feature extraction module consists of a spectral feature extraction module and a spatial feature extraction module. Each module is described in detail below.
The spectral feature extraction module extracts spectral feature information. As an alternative provided by the embodiment of the invention, and as shown in fig. 3, the module comprises a plurality of sequentially connected 3D deformable convolution modules with a max-pooling layer between adjacent modules. Each 3D deformable convolution module comprises several sequentially connected 3D deformable convolution layers, each followed by an activation layer; the activation layer behind the first 3D deformable convolution layer is connected to the last 3D deformable convolution layer to form a residual structure. The 3D deformable convolution blocks extract rich inter-spectral features, and by attending to and screening these features they make the extracted inter-spectral features more discriminative. This overcomes the prior-art problems that a fixed convolution kernel cannot extract enough useful information and that retaining too much irrelevant information causes redundancy during inter-spectral feature extraction, and thereby improves the classification accuracy of ground features in the hyperspectral image.
Taking three 3D deformable convolution modules as an example: the max-pooling layer between the first and second modules has a kernel size of 2 × 2 × 4 and 8 kernels, and the max-pooling layer between the second and third modules has a kernel size of 2 × 2 × 4 and 16 kernels. The 3D deformable convolution layers in each module use a kernel size of 3 × 3, and every activation layer uses the ReLU activation function.
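Under this configuration, one block of the module can be sketched in PyTorch. Core PyTorch has no 3D deformable convolution, so a plain `nn.Conv3d` stands in for it here; the channel count and input size are illustrative assumptions:

```python
import torch
import torch.nn as nn

class SpectralBlock(nn.Module):
    """One '3D deformable convolution module': stacked 3x3x3 convolutions
    with ReLU, plus a residual link from the first activation's output to
    the last convolution's output. nn.Conv3d is a stand-in for the 3D
    deformable convolution of the patent."""
    def __init__(self, channels, depth=3):
        super().__init__()
        self.first = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1), nn.ReLU())
        self.rest = nn.Sequential(*[
            layer for _ in range(depth - 1)
            for layer in (nn.Conv3d(channels, channels, kernel_size=3, padding=1),
                          nn.ReLU())])

    def forward(self, x):
        h = self.first(x)
        return self.rest(h) + h  # residual structure

x = torch.randn(1, 8, 8, 8, 16)   # batch, channels, height, width, bands
y = SpectralBlock(8)(x)           # shape-preserving thanks to padding=1
```

In the full module, a 3D max-pooling layer would sit between consecutive blocks, as described above.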
Further, the spatial feature extraction module extracts spatial feature information. As an alternative provided by the embodiment of the invention, and as shown in fig. 4, the module comprises a plurality of parallel multi-scale feature extraction branches, all of which feed into a concatenation layer followed by a max-pooling layer. Each multi-scale feature extraction branch comprises a scale operation layer, a convolution layer, a normalization layer and an activation layer connected in sequence. The module lets the convolutional neural network model attend to spatial features at different scales, overcoming the prior-art limitation of extracting the spatial features of hyperspectral images at a single scale. By attending to and screening multi-scale spatial features through the branches, it extracts more discriminative spatial features, avoids both the information loss caused by under-utilizing extracted features and the redundancy caused by retaining too much irrelevant information in prior-art spatial feature extraction, and improves the classification ability of the model during sample training.
Taking three multi-scale feature extraction branches as an example: the scale operation layer of each branch selects a neighborhood of the pixel from the image, and the convolution kernel size of the convolution layer is set to 5 × 4 in the first branch, 3 × 4 in the second branch, and 1 × 4 in the third branch; the number of convolution kernels of the convolution layer in each branch is set to 16, and the activation function of each activation layer is the ReLU function, so each of the three activation layers outputs 16 features of size 5 × 25. The concatenation layer splices these into 16 features of size 5 × 75, and the max-pooling layer then performs the pooling operation with a kernel size of 2 × 8 and 16 kernels.
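A PyTorch sketch of one possible reading of this configuration: three parallel branches whose kernel sizes realize the three scales, each with 16 kernels, batch normalization and ReLU, followed by concatenation and max pooling. 2D convolutions are used for simplicity, so the exact kernel depths above are not reproduced; all sizes are illustrative:

```python
import torch
import torch.nn as nn

class MultiScaleSpatial(nn.Module):
    """Parallel multi-scale branches -> concatenation -> max pooling."""
    def __init__(self, in_ch, out_ch=16):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(  # the scale operation is approximated by kernel size
                nn.Conv2d(in_ch, out_ch, k, padding=k // 2),
                nn.BatchNorm2d(out_ch),
                nn.ReLU())
            for k in (5, 3, 1)])
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        feats = torch.cat([b(x) for b in self.branches], dim=1)  # concatenation layer
        return self.pool(feats)

x = torch.randn(2, 4, 8, 8)       # batch, channels, height, width
y = MultiScaleSpatial(4)(x)       # 3 branches x 16 channels, spatially halved
```

Odd kernel sizes with `padding=k // 2` keep each branch's output the same spatial size, which is what allows the channel-wise concatenation.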
Further, the feature fusion module fuses the feature information corresponding to the spectrum with the feature information corresponding to space. As an alternative provided by the embodiment of the invention, the feature fusion module shown in fig. 5 comprises a plurality of sequentially connected 2D convolution layers. The module mainly addresses the problem that the many spectral bands of a hyperspectral image carry excessive inter-band redundancy: by extracting the useful inter-spectral and spatial features, it makes the convolutional neural network model attend more to the useful information in the feature information, which further improves the classification accuracy of the hyperspectral image.
Since the feature fusion module fuses the spectral and spatial feature information, one part of the 2D convolution layers processes the feature information corresponding to the spectrum, the other part processes the feature information corresponding to space, and the fusion is finally realized by weighting. For example, in the embodiment of the invention, 1 2D convolution layer is set to process the spectral feature information and 2 2D convolution layers are set to process the spatial feature information; the 3 2D convolution layers are connected in series, the kernel size of each is set to 9 × 9, and the number of kernels is set to 1.
Further, the network output module is used for classifying the fused feature information. The embodiment of the invention provides an alternative scheme, as shown in fig. 5: the network output module comprises a splicing layer, an average pooling layer, a full connection layer and a softmax classifier. Specifically, the splicing layer splices the features output by the feature fusion module, the splicing result undergoes redundancy removal through the average pooling layer, the more distinguishable spectral-spatial features are obtained through the full connection layer, and finally the classification result of the hyperspectral image is obtained through the softmax classifier.
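The splice / average-pool / fully-connected / softmax pipeline above can be sketched numerically as follows. The four land-cover classes and the random weight matrix W are hypothetical stand-ins for a trained layer:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

def network_output(features, W, b):
    """Splice -> global average pool -> fully connected -> softmax (sketch)."""
    pooled = [f.reshape(f.shape[0], -1).mean(axis=1) for f in features]
    x = np.concatenate(pooled)   # splice the pooled feature vectors
    return softmax(W @ x + b)

feats = [np.random.rand(16, 5, 5), np.random.rand(16, 5, 5)]
W = np.random.rand(4, 32)  # 4 hypothetical land-cover classes
b = np.zeros(4)
probs = network_output(feats, W, b)
```

The output is a valid probability vector over the classes, as a softmax classifier requires.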
S30, performing iterative training on the constructed convolutional neural network model by using the labeled data set; after each iteration training is completed, part of non-label data is selected from the non-label data set and added to the labeled data set, meanwhile, whether the convolutional neural network model meets the self-growth condition is judged, if so, a spectral feature extraction module and a spatial feature extraction module are respectively added in the basic feature extraction module to update the convolutional neural network model, the updated convolutional neural network model is continuously trained by using the new labeled data set, if not, the convolutional neural network model is kept, the kept convolutional neural network model is continuously trained by using the new labeled data set until the iteration stop condition is met, and the trained convolutional neural network model is output.
In order to solve the problem that it is difficult to obtain a superior neural network model based on a limited set of tagged data, embodiments of the present invention provide an alternative solution to select a portion of the non-tagged data from the non-tagged data set to be added to the tagged data set, including:
the unlabeled dataset is subjected to a consistency metric by a high confidence sample selection policy, and a portion of the unlabeled data is selected from the unlabeled dataset to be added to the labeled dataset. Specifically:
obtaining a classification result corresponding to the current convolutional neural network model, and constructing a mapping diagram and a probability matrix corresponding to the label-free data set according to the classification result; designing a preset size window, and extracting a mapping matrix corresponding to each piece of non-tag data in the non-tag data set from the mapping graph by using the preset size window; selecting high-confidence unlabeled data from the mapping matrix by using a neighborhood consistency criterion to form a data set to be added; and selecting high-probability data to be added from the data set to be added according to the probability matrix, and adding the high-probability data to the labeled data set.
Firstly, a softmax classifier is connected behind the current convolutional neural network model, and a mapping diagram of the unlabeled hyperspectral training data is constructed according to the classification result of the current convolutional neural network model, as shown in fig. 6, where different symbols in the diagram represent different classification results obtained through the current convolutional neural network model. For each sample in the unlabeled data set, the mapping matrix corresponding to that sample is extracted from the mapping diagram using a preset size window. When selecting high-quality samples to add, spatial information is taken into account: a neighborhood-based consistency criterion is adopted to judge whether a sample in the unlabeled data set can be used as data to be added. The neighborhood-based consistency criterion is specifically: a voting (highest consistency) mechanism with a preset maximum threshold T_v = N_ne is used to evaluate the confidence of each sample in the unlabeled data set, where the number of neighborhood samples carrying the same label is counted from the mapping matrix and N_ne represents the counted number of same-label neighborhood samples. A sample in the unlabeled data set whose surrounding count of same-label neighborhood samples is not less than N_ne is considered to have higher confidence and can be used as data to be added, and is therefore added to the data to be added.
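The neighborhood-consistency vote over a predicted label map can be sketched as follows. The 3 x 3 window and the vote threshold are illustrative assumptions:

```python
import numpy as np

def high_confidence_mask(label_map, win=3, t_v=5):
    """Mark pixels whose window contains >= t_v same-label neighbors."""
    h, w = label_map.shape
    r = win // 2
    padded = np.pad(label_map, r, mode="edge")  # edge padding for border pixels
    mask = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            window = padded[i:i + win, j:j + win]
            same = (window == label_map[i, j]).sum() - 1  # exclude the center
            mask[i, j] = same >= t_v
    return mask

labels = np.array([[1, 1, 1],
                   [1, 1, 1],
                   [1, 1, 2]])
mask = high_confidence_mask(labels, win=3, t_v=5)
```

The lone class-2 pixel in the corner fails the vote, while the pixels inside the homogeneous class-1 region pass.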
Then the classification result of the softmax classifier is converted into corresponding probability values, and the probability values corresponding to the classification results of the samples in the unlabeled data set form a probability matrix, which is used to measure the probability that each sample x_i in the training data belongs to a certain label class. For the data to be added belonging to the same class in the data set to be added, all the data to be added are sorted according to the probability matrix, with the formula expressed as:
T_i = f_rank(p(M(x_i) = y_i)) (1)
wherein T_i represents the rank of the probability value with which sample x_i in the unlabeled data set belongs to label class y_i. The rank value T_i can be computed from the class matrix and the probability matrix as follows: y_i is the label value of the label class to which sample x_i in the unlabeled data set belongs; M(·) represents the class matrix, used to determine whether sample x_i in the unlabeled data set belongs to label class y_i; p(·) represents the probability that sample x_i in the unlabeled data set belongs to label class y_i; and f_rank(·) represents a sorting function that sorts all samples in the unlabeled data set in descending order of probability within the label class to which they belong. Specifically, rank values are assigned in order of decreasing probability: the highest probability value receives rank value 1, the next highest receives rank value 2, and so on, so that a higher rank value represents a lower class probability value. According to this rank-value sorting result, a specific percentage of data to be added with low rank values can be selected as the final data set to be added for each class, thereby realizing the selection of high-quality data to be added; the specific percentage is set according to the actual design. Finally, all the selected data to be added are added to the labeled data set to finish updating the labeled data set.
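The per-class ranking and top-percentage selection can be sketched as follows. The class assignments, probability values, and 50% quota are illustrative:

```python
import numpy as np

def select_top_fraction(probs, classes, frac=0.5):
    """For each class, keep the indices of the top `frac` samples by probability."""
    selected = []
    for c in np.unique(classes):
        idx = np.where(classes == c)[0]
        order = idx[np.argsort(-probs[idx])]  # descending: rank value 1 first
        n_keep = max(1, int(len(order) * frac))
        selected.extend(order[:n_keep].tolist())
    return sorted(selected)

probs = np.array([0.9, 0.6, 0.8, 0.7, 0.95, 0.5])
classes = np.array([0, 0, 0, 1, 1, 1])
chosen = select_top_fraction(probs, classes, frac=0.5)
```

Within each class the highest-probability sample receives rank value 1, so only the most confident sample per class survives the 50% cut here.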
Therefore, in the training process of the embodiment of the invention, the convolutional neural network model is first trained directly using the limited labeled data set, and the data to be added are then generated based on the currently trained convolutional neural network model. Specifically, the unlabeled data set is fed into the currently trained convolutional neural network model to complete feature extraction and classification, obtaining the probability matrix corresponding to the unlabeled data set; then, using the above high-confidence sample selection strategy, unlabeled data with higher confidence are selected as the data to be added, thereby providing more usable identification information for image classification and supporting high-precision convolutional neural network model training.
The method for updating the labeled data set provided by the embodiment of the invention avoids manual labeling operations and improves the accuracy of the newly added labeled data, so that the high-confidence labeled data set in the training process provides more prior information for convolutional neural network model training; with this additional prior information, the deep learning network avoids the overfitting problem that a limited labeled data set easily causes, and the convolutional neural network model corresponding to the optimal network parameters can be trained.
In the whole convolutional neural network model training process, the overall loss function corresponding to the constructed convolutional neural network model comprises two parts, expressed as:
L = L_1 + λL_2 (2)
wherein L represents the overall loss function, L_1 represents the cross entropy loss function, which is the main loss function term of the convolutional neural network model, L_2 represents the local feature retention function, and λ represents a trade-off parameter; for example, the trade-off parameter λ takes a value of 0.001. Specifically:
The cross entropy loss optimization function is formulated as:
wherein L_1 represents the loss value between the predicted and real label vectors, d(·) represents the Euclidean distance, F_ω(·) represents the feature extraction function with parameter ω, f_l represents the features of the l-th category in the labeled data set, C is the number of categories, x_j represents one sample in the labeled data set, y_j represents the label corresponding to sample x_j, and Q represents the labeled data set.
Through the research and analysis of the inventor, the data structure can provide rich information about the attributes of the data and the relationships between samples, while the feature mapping of a traditional neural network cannot maintain the spatial structure information that is helpful for obtaining discriminative features. Local linear embedding is a nonlinear unsupervised basic manifold learning algorithm that projects data from a high-dimensional space into a low-dimensional space, so that the global nonlinear structure of the high-dimensional data is converted into a local linear structure; by reconstructing data points in the mapped low-dimensional space using the neighborhood of each sample and the local weight matrix from the high-dimensional space, the local geometric structure of the data in the original high-dimensional space is well preserved, thereby reducing data redundancy. The local feature retention function finally designed by the embodiment of the invention is expressed as the following formula:
L_2 = (1/N) Σ_{i=1}^{N} || m(x_i; θ) − Σ_{j=1}^{k} ω_ij m(x_ij; θ) ||_F^2 (5)

wherein N represents the number of samples in the labeled data set, θ represents the network parameters of the convolutional neural network, k represents the number of nearest neighbor samples, m(x_i; θ) represents the representation of sample x_i in the labeled data set in the low-dimensional feature space, m(x_ij; θ) represents the representation of the neighborhood sample x_ij in the low-dimensional feature space, ω_ij represents the contribution of the neighborhood sample x_ij to the reconstruction of sample x_i in the labeled hyperspectral training data, and ||·||_F^2 represents the square of the F-norm. By introducing equation (5), the local reconstruction relationship between each sample in the labeled data set and its neighborhood samples can be maintained in the mapping space.
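The local feature retention term, of the form L_2 = (1/N) Σ_i || m_i − Σ_j ω_ij m_ij ||², can be evaluated numerically as below. The embeddings, neighbor indices, and uniform reconstruction weights are illustrative stand-ins for the quantities learned during training:

```python
import numpy as np

def local_preservation_loss(emb, neighbors, weights):
    """emb: (N, d) low-dim features; neighbors: (N, k) indices; weights: (N, k)."""
    recon = np.einsum("nk,nkd->nd", weights, emb[neighbors])  # weighted neighbor sum
    return np.mean(np.sum((emb - recon) ** 2, axis=1))

emb = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
neighbors = np.array([[1, 2], [0, 2], [0, 1]])
weights = np.full((3, 2), 0.5)  # each sample reconstructed as the mean of its 2 neighbors
loss = local_preservation_loss(emb, neighbors, weights)
```

With these toy values the per-sample squared reconstruction errors are 0.5, 1.25, and 1.25, so the mean loss is 1.0.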
The overall loss function constructed by the embodiment of the invention can constrain the relationship between the input feature space and the mapping feature space of the network, and can optimize the network parameters of the convolutional neural network model by enriching the labeled data information and mining the unlabeled data information, thereby providing guidance for better interpreting hyperspectral images and further better realizing classification of hyperspectral images.
Further, because the embodiment of the invention provides an update strategy for the limited labeled data set, convolutional neural network model training has more labeled data available as training samples. Conventional convolutional neural network model training is limited by the limited labeled data set; with more training samples now available, a more complex convolutional neural network model can be designed to extract richer feature information for classification. The embodiment of the invention provides an alternative scheme: after the labeled data set is updated, it is first judged whether the convolutional neural network model meets the self-growth condition; if so, a spectral feature extraction module and a spatial feature extraction module are respectively added in the basic feature extraction module to update the convolutional neural network model, and the updated convolutional neural network model continues to be trained with the new labeled data set; if not, the convolutional neural network model is maintained, and the maintained convolutional neural network model continues to be trained with the new labeled data set until the iteration stop condition is met, whereupon the trained convolutional neural network model is output.
For how to judge the self-growth condition, the embodiment of the invention provides an alternative scheme for judging whether the convolutional neural network model meets the self-growth condition, which comprises the following steps:
constructing the overall loss function corresponding to the convolutional neural network model; calculating the loss value of the current convolutional neural network model according to the overall loss function, and judging whether the loss value is larger than a preset loss value: if not, the convolutional neural network model meets the self-growth condition; if so, the convolutional neural network model does not meet the self-growth condition. The preset loss value is designed according to actual conditions.
For the case of meeting the self-growth condition, the embodiment of the invention provides an alternative scheme for updating the convolutional neural network model, as shown in fig. 7: a spectral feature extraction module and a spatial feature extraction module are respectively added in the basic feature extraction module while the network structure of the other modules is kept unchanged; the added spectral feature extraction module and spatial feature extraction module have the same network parameters as the original spectral feature extraction module and spatial feature extraction module, and M represents the number of times the self-growth condition has been met over the multiple iterations of training. During the growth of the convolutional neural network model, as the number of basic feature extraction modules in the convolutional neural network model increases, the network parameters of the basic feature extraction modules increase correspondingly; if the traditional method of directly applying random initialization to the large number of parameters in the network were used, the feature extraction capability of the network would be affected to a certain extent.
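The growth step, duplicating a feature-extraction block and copying its trained parameters instead of randomly initializing the new ones, might be sketched like this. The class name and the single-conv block layout are illustrative assumptions:

```python
import copy
import torch
import torch.nn as nn

class SelfGrowingExtractor(nn.Module):
    """Chain of feature-extraction blocks that can grow during training."""
    def __init__(self):
        super().__init__()
        self.blocks = nn.ModuleList([nn.Conv2d(8, 8, 3, padding=1)])

    def grow(self):
        # The new block starts from the trained weights of the last block,
        # avoiding random initialization of the added parameters.
        self.blocks.append(copy.deepcopy(self.blocks[-1]))

    def forward(self, x):
        for block in self.blocks:
            x = block(x)
        return x

net = SelfGrowingExtractor()
net.grow()
same = torch.equal(net.blocks[0].weight, net.blocks[1].weight)
```

After `grow()`, the new block's weights equal the original block's, so the grown network starts from an informed initialization rather than a random one.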
It can be seen that the initial convolutional neural network model of the embodiment of the invention can be used for extracting shallow features, while the gradually growing convolutional neural network model can be used for extracting higher-level features, so that this convolutional neural network model structure can obtain better hyperspectral image classification performance.
It should be noted that the optimization method in the whole convolutional neural network model training process is not limited; for example, an Adam optimizer can be adopted, and its learning rate can be set to a value in the range of 0.00001-0.001 for different training data sets. The maximum number of iterations in the training iteration process depends on the actual design requirements; for example, the maximum number of iterations can be 20.
Further, an embodiment of the present invention provides an alternative, in a training process, after selecting a part of non-tag data from the non-tag data set to be added to the tagged data set, further including: and carrying out weighted average filtering processing on the non-tag data which are difficult to classify in the updated non-tag data set by utilizing the non-tag data in the adjacent space.
After each iteration of training, the labeled data set is expanded; the unlabeled data that have been added to the labeled data set are removed from the unlabeled data set, and the samples remaining in the unlabeled data set are considered samples that are difficult to classify. The embodiment of the invention provides an alternative scheme, which carries out weighted average filtering on the hard-to-classify unlabeled data in the updated unlabeled data set using the unlabeled data in the adjacent space, specifically realized as follows:
Each pixel in the hyperspectral image has spectral and spatial correlation with the pixel points adjacent to it in spatial position, and with high probability these pixels belong to the same class of ground objects. Based on this, the embodiment of the invention provides an example of the weighted average filtering operation for a pixel point x_i:
Let sample i correspond to pixel point x_i in the whole hyperspectral image, with coordinates (h_i, w_i) in the homeland mapping image to be classified. Then the square adjacent space Ω(x_i) with side length ω centered on pixel point x_i can be defined as:

Ω(x_i) = { x(h, w) : |h − h_i| ≤ (ω − 1)/2, |w − w_i| ≤ (ω − 1)/2 }
wherein the adjacent space Ω(x_i) comprises ω × ω pixel points; apart from the center pixel point x_i, the remaining ω² − 1 pixel points can be respectively denoted x_ik, k = 1, …, ω² − 1. For a pixel located at the edge of the hyperspectral image, the pixel itself is used for pixel padding.
The center pixel point x_i is reconstructed by weighted summation over the ω² − 1 neighborhood pixel points in the adjacent space, yielding the reconstructed pixel point x_i′, which can be expressed as:

x_i′ = Σ_{k=1}^{ω²−1} ν_k x_ik
wherein x_ik represents a neighborhood pixel point in the adjacent space Ω(x_i), and ν_k represents the weight in the weighted summation of the k-th pixel point x_ik in the adjacent space Ω(x_i), which can be solved using a heat kernel function:

ν_k = exp(−‖x_i − x_ik‖² / d_i) / Σ_{k′=1}^{ω²−1} exp(−‖x_i − x_ik′‖² / d_i)
wherein d_i represents the average of the squared distances between all pixels in the adjacent space Ω(x_i) and the center pixel x_i, which can be expressed as:

d_i = (1/(ω² − 1)) Σ_{k=1}^{ω²−1} ‖x_i − x_ik‖²
the weighted average filtering method adjusts the filtering window by setting the value of the parameter omega, which is essentially to measure the neighbor space omega (x i ) Middle neighboring pixel and center pixel x i Is used to reconstruct the weighted mean of the center pixels. The higher the similarity, the greater the corresponding weight thereof; differences inThe greater the sex, the less its corresponding weight. Therefore, the weighted average filtering method can effectively eliminate the interference of background points and noise points, and obtain images with smoother edges, so that guidance is provided for better interpretation of hyperspectral images, and classification of the hyperspectral images is better realized.
S40, classifying the photographed homeland mapping image data to be classified by using the trained convolutional neural network model.
In summary, the homeland mapping data classification method based on the self-growing convolution neural network provided by the embodiment of the invention is a comprehensive data classification method, specifically: homeland mapping image data are photographed using a camera carried by an unmanned aerial vehicle, wherein the homeland mapping image data comprise a small labeled data set and a large unlabeled data set; a convolutional neural network model is constructed, comprising a basic feature extraction module formed by a spectral feature extraction module and a spatial feature extraction module, a feature fusion module, and a network output module, wherein the spectral feature extraction module and the spatial feature extraction module respectively extract corresponding feature information from the two aspects of spectrum and space, the feature fusion module fuses the feature information corresponding to the spectrum with the feature information corresponding to the space, and the network output module classifies the fused feature information; the constructed convolutional neural network model is iteratively trained using the labeled data set; after each iteration of training is completed, part of the unlabeled data is selected from the unlabeled data set and added to the labeled data set, and it is judged whether the convolutional neural network model meets the self-growth condition: if so, a spectral feature extraction module and a spatial feature extraction module are respectively added in the basic feature extraction module to update the convolutional neural network model, and the updated convolutional neural network model continues to be trained with the new labeled data set; if not, the convolutional neural network model is maintained and the maintained convolutional neural network model continues to be trained with the new labeled data set until the iteration stop condition is met, whereupon the trained convolutional neural network model is output; and the photographed homeland mapping image data to be classified are classified using the trained convolutional neural network model. Therefore, the convolutional neural network model constructed by the embodiment of the invention performs feature extraction from the two aspects of spectrum and space, which can improve classification accuracy and extract more deeply the spatial information contained in the hyperspectral image. In the training process of the convolutional neural network model, after each iteration of training, part of the unlabeled data is selected from the unlabeled data set and added to the labeled data set to form a new labeled data set; training with more labeled data avoids the overfitting problem caused by training the network model with scarce labeled data, so that a better convolutional neural network model is obtained after training, thereby improving its classification performance. In addition, during training the convolutional neural network model is no longer a fixed model but grows adaptively: the initial convolutional neural network model can be used for extracting shallow features, and the gradually growing convolutional neural network model can be used for extracting higher-level features, so that this convolutional neural network model structure can obtain better hyperspectral image classification performance.
Referring to fig. 8, an embodiment of the present invention provides an electronic device, including a processor 801, a communication interface 802, a memory 803, and a communication bus 804, where the processor 801, the communication interface 802, and the memory 803 complete communication with each other through the communication bus 804;
a memory 803 for storing a computer program;
the processor 801 is configured to implement the above-described method for classifying homeland mapping data based on a self-growing convolutional neural network when executing the program stored in the memory 803.
The embodiment of the invention provides a computer readable storage medium, wherein a computer program is stored in the computer readable storage medium, and the computer program realizes the steps of the homeland mapping data classification method based on the self-growing convolutional neural network when being executed by a processor.
For the electronic device/storage medium embodiments, the description is relatively simple as it is substantially similar to the method embodiments, as relevant points are found in the partial description of the method embodiments.
In the description of the present invention, it should be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present invention, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
Although the invention is described herein in connection with various embodiments, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the specification and the drawings. In the description, the word "comprising" does not exclude other elements or steps, and the "a" or "an" does not exclude a plurality. Some measures are described in mutually different embodiments, but this does not mean that these measures cannot be combined to produce a good effect.
The foregoing is a further detailed description of the invention in connection with the preferred embodiments, and it is not intended that the invention be limited to the specific embodiments described. It will be apparent to those skilled in the art that several simple deductions or substitutions may be made without departing from the spirit of the invention, and these should be considered to be within the scope of the invention.

Claims (10)

1. A homeland mapping data classification method based on a self-growing convolution neural network is characterized by comprising the following steps:
shooting homeland mapping image data by using an unmanned aerial vehicle carried camera; wherein the homeland mapping image data comprises a small number of tagged data sets and a large number of untagged data sets;
Constructing a convolutional neural network model; the convolutional neural network model comprises a basic feature extraction module formed by a spectrum feature extraction module and a space feature extraction module, a feature fusion module and a network output module, wherein the spectrum feature extraction module and the space feature extraction module are respectively used for extracting corresponding feature information from two aspects of spectrum and space, the feature fusion module is used for fusing the feature information corresponding to the spectrum and the feature information corresponding to the space, and the network output module is used for classifying the fused feature information;
performing iterative training on the constructed convolutional neural network model by using the labeled data set; after each iteration training is completed, selecting part of non-tag data from the non-tag data set, adding the part of non-tag data to the tagged data set, judging whether the convolutional neural network model meets self-growth conditions or not, if so, respectively adding a spectral feature extraction module and a spatial feature extraction module in a basic feature extraction module to update the convolutional neural network model, continuing training the updated convolutional neural network model by using a new tagged data set, if not, maintaining the convolutional neural network model, continuing training the maintained convolutional neural network model by using the new tagged data set until the iteration stop conditions are met, and outputting a trained convolutional neural network model;
And classifying the photographed national survey image data to be classified by using the trained convolutional neural network model.
2. The homeland mapping data classification method based on a self-growing convolution neural network of claim 1, wherein the spectral feature extraction module comprises a plurality of 3D variable convolution modules connected in sequence, with a maximum pooling layer connected between adjacent 3D variable convolution modules, wherein,
each 3D variable convolution module comprises a plurality of 3D variable convolution layers which are sequentially connected, an activation layer is connected behind each 3D variable convolution layer, and the activation layer connected with the first 3D variable convolution layer and the last 3D variable convolution layer are connected to form a residual structure.
3. The method for classifying homeland mapping data based on a self-growing convolution neural network according to claim 1, wherein the spatial feature extraction module comprises a plurality of multi-scale feature extraction branches in parallel, all of which are sequentially connected with a splicing layer and a maximum pooling layer,
each multi-scale feature extraction branch comprises a scale operation layer, a convolution layer, a normalization layer and an activation layer which are connected in sequence.
4. The method for classifying homeland mapping data based on a self-growing convolutional neural network of claim 1, wherein the feature fusion module comprises a plurality of 2D convolutional layers connected in sequence.
5. The method of claim 1, wherein the network output module comprises a stitching layer, an averaging pooling layer, a full connection layer, and a softmax classifier.
6. The self-growing convolutional neural network-based homeland mapping data classification method of claim 1, wherein selecting a portion of unlabeled data from the unlabeled dataset to add to the labeled dataset comprises:
and carrying out consistency measurement on the unlabeled data set through a high confidence sample selection strategy, and selecting part of unlabeled data from the unlabeled data set to be added to the labeled data set.
7. The self-growing convolution neural network based homeland mapping data classification method of claim 6, wherein the consistency metric for the unlabeled dataset by a high confidence sample selection strategy, selecting a portion of unlabeled data from the unlabeled dataset to add to the labeled dataset, comprises:
Obtaining a classification result corresponding to a current convolutional neural network model, and constructing a mapping diagram and a probability matrix corresponding to the label-free data set according to the classification result;
designing a preset size window, and extracting a mapping matrix corresponding to each piece of non-tag data in the non-tag data set from the mapping graph by using the preset size window;
selecting high-confidence unlabeled data from the mapping matrix by using a neighborhood consistency criterion to form a data set to be added;
and selecting high-probability data to be added from the data set to be added according to the probability matrix, and adding the high-probability data to be added to the data set with the tag.
8. The method for classifying homeland mapping data based on a self-growing convolutional neural network of claim 1, wherein determining whether the convolutional neural network model satisfies the self-growing condition comprises:
constructing an overall loss function corresponding to the convolutional neural network model;
and calculating a loss value of the current convolutional neural network model according to the overall loss function, judging whether the loss value is larger than a preset loss value, if not, the convolutional neural network model meets the self-growth condition, and if so, the convolutional neural network model does not meet the self-growth condition.
9. The method for classifying homeland mapping data based on a self-growing convolutional neural network of claim 8, wherein the overall loss function corresponding to the convolutional neural network model constructed in the training process comprises two parts, expressed by the formula:
L = L₁ + λL₂
wherein L denotes the overall loss function, L₁ denotes the cross-entropy loss function, L₂ denotes the local feature preservation function, and λ denotes the trade-off parameter.
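A minimal sketch of the two-part loss of claim 9. The patent does not specify the form of the local feature preservation term L₂, so it is passed in precomputed here, and the default trade-off parameter is an illustrative assumption:

```python
import numpy as np

def cross_entropy(probs, labels):
    """L1: mean cross-entropy over labeled samples.
    probs: (n, c) predicted class probabilities; labels: (n,) int classes."""
    eps = 1e-12  # guard against log(0)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + eps))

def overall_loss(probs, labels, local_preserve_term, lam=0.1):
    """L = L1 + lambda * L2, with L2 supplied by the caller since its
    form is not given in the claim. lam=0.1 is an assumed default."""
    return cross_entropy(probs, labels) + lam * local_preserve_term
```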
10. The method of claim 1, wherein, during training, selecting a portion of unlabeled data from the unlabeled dataset to add to the labeled dataset further comprises:
performing weighted-average filtering on the hard-to-classify unlabeled data in the updated unlabeled dataset by using the unlabeled data in its spatial neighborhood.
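One possible reading of the filtering step in claim 10: replace the class-probability vector of each hard-to-classify position with a weighted average over its spatial neighborhood. The patent does not specify the weights, so uniform weights are assumed here:

```python
import numpy as np

def neighborhood_filter(prob_map, hard_mask, window=3):
    """Weighted-average filtering of hard-to-classify positions using their
    spatial neighbors' probability vectors. prob_map: (h, w, c) array;
    hard_mask: (h, w) boolean array marking hard-to-classify positions.
    Uniform weights are an assumption; the claim leaves them unspecified."""
    h, w, _ = prob_map.shape
    r = window // 2
    out = prob_map.copy()
    for i in range(h):
        for j in range(w):
            if hard_mask[i, j]:
                # clip the window at the image border
                i0, i1 = max(0, i - r), min(h, i + r + 1)
                j0, j1 = max(0, j - r), min(w, j + r + 1)
                out[i, j] = prob_map[i0:i1, j0:j1].mean(axis=(0, 1))
    return out
```

Smoothing only the flagged positions leaves confident predictions untouched while pulling ambiguous ones toward the local consensus.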
CN202310489020.9A 2023-04-25 2023-04-25 Homeland mapping data classification method based on self-growing convolution neural network Pending CN116704378A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310489020.9A CN116704378A (en) 2023-04-25 2023-04-25 Homeland mapping data classification method based on self-growing convolution neural network

Publications (1)

Publication Number Publication Date
CN116704378A true CN116704378A (en) 2023-09-05

Family

ID=87834751

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310489020.9A Pending CN116704378A (en) 2023-04-25 2023-04-25 Homeland mapping data classification method based on self-growing convolution neural network

Country Status (1)

Country Link
CN (1) CN116704378A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118089670A (en) * 2024-04-23 2024-05-28 济南市勘察测绘研究院 Geographic information visual analysis system and method based on unmanned aerial vehicle mapping

Similar Documents

Publication Publication Date Title
CN110399909B (en) Hyperspectral image classification method based on label constraint elastic network graph model
CN111191736B (en) Hyperspectral image classification method based on depth feature cross fusion
CN111191514A (en) Hyperspectral image band selection method based on deep learning
CN106295613A (en) A kind of unmanned plane target localization method and system
CN113435253B (en) Multi-source image combined urban area ground surface coverage classification method
CN110245683B (en) Residual error relation network construction method for less-sample target identification and application
CN112614119A (en) Medical image region-of-interest visualization method, device, storage medium and equipment
CN112580480B (en) Hyperspectral remote sensing image classification method and device
CN113705580A (en) Hyperspectral image classification method based on deep migration learning
CN113705641A (en) Hyperspectral image classification method based on rich context network
CN111639697B (en) Hyperspectral image classification method based on non-repeated sampling and prototype network
CN117315381B (en) Hyperspectral image classification method based on second-order biased random walk
CN109002771B (en) Remote sensing image classification method based on recurrent neural network
CN114510594A (en) Traditional pattern subgraph retrieval method based on self-attention mechanism
CN111680579A (en) Remote sensing image classification method for adaptive weight multi-view metric learning
CN113435254A (en) Sentinel second image-based farmland deep learning extraction method
CN116486251A (en) Hyperspectral image classification method based on multi-mode fusion
Guo et al. CNN‐combined graph residual network with multilevel feature fusion for hyperspectral image classification
CN116310466A (en) Small sample image classification method based on local irrelevant area screening graph neural network
CN116704378A (en) Homeland mapping data classification method based on self-growing convolution neural network
CN116503677B (en) Wetland classification information extraction method, system, electronic equipment and storage medium
Jing et al. Time series land cover classification based on semi-supervised convolutional long short-term memory neural networks
CN114202694A (en) Small sample remote sensing scene image classification method based on manifold mixed interpolation and contrast learning
CN112949771A (en) Hyperspectral remote sensing image classification method based on multi-depth multi-scale hierarchical attention fusion mechanism
CN111898579A (en) Extreme gradient lifting-based unbiased semi-supervised classification model for high-resolution remote sensing images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination