CN108830330B - Multispectral image classification method based on self-adaptive feature fusion residual error network - Google Patents

Multispectral image classification method based on self-adaptive feature fusion residual error network

Info

Publication number
CN108830330B
Authority
CN
China
Prior art keywords
layer
residual error
fusion
network
self
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810650236.8A
Other languages
Chinese (zh)
Other versions
CN108830330A (en
Inventor
焦李成
李玲玲
李阁
冯捷
张丹
尚凡华
刘园园
张梦旋
丁静怡
杨淑媛
侯彪
屈嵘
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201810650236.8A priority Critical patent/CN108830330B/en
Publication of CN108830330A publication Critical patent/CN108830330A/en
Application granted granted Critical
Publication of CN108830330B publication Critical patent/CN108830330B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a multispectral image classification method based on a self-adaptive feature fusion residual error network, which mainly solves the prior-art problems of poor generality and insufficient use of multi-level features. The method comprises the following specific steps: (1) inputting a multispectral image; (2) normalizing the multispectral image; (3) selecting training samples and test samples; (4) generating a training data set; (5) building a basic residual error network; (6) building a self-adaptive feature fusion network; (7) generating a self-adaptive feature fusion residual error network; (8) training the self-adaptive feature fusion residual error network; (9) generating a test data set; (10) classifying the test data set. The invention can adaptively fuse multi-level features, extracts features with better discriminability and richer semantic information, and has the advantages of a simple training and testing process and full utilization of the features.

Description

Multispectral image classification method based on self-adaptive feature fusion residual error network
Technical Field
The invention belongs to the technical field of image processing, and further relates to a multispectral remote sensing image classification method based on a self-adaptive feature fusion residual error network in the technical field of image classification. The method can be used for classifying the ground objects in the multispectral remote sensing image.
Background
Deep learning methods have strong feature representation capability in the field of image processing and reduce the uncertainty introduced by manually designed feature extraction. A deep learning method typically builds a deep model, uses the model to extract deep features of the multispectral remote sensing image, and classifies the data with these features; however, because the networks currently used have a large number of layers, the extracted features cannot fit the characteristics of multispectral images well.
The patent application filed by Beihang University (Beijing University of Aeronautics and Astronautics), "Multispectral remote sensing image ground feature classification method based on spectral and textural features" (application No. 201310482404.4, publication No. CN103559500A), proposes a multispectral remote sensing image ground-object classification method based on spectral and textural features. Its implementation steps are: establishing a sample library of typical ground objects, extracting and normalizing typical ground-object features, selecting block features and formulating rules, blocking the image to be classified, training a support vector machine classifier, classifying image blocks with the support vector machine, and processing boundary blocks. The method applies multi-level blocking to the image with a quadtree blocking technique and extracts the spectral and textural features of ground objects block by block, where the spectral features include the spectral values of all bands, ratios between bands, the normalized difference vegetation index, water indices, and so on, and the textural features include several statistics of the gray-level co-occurrence matrix (entropy, correlation, etc.), edge abundance, and so on. The drawback of this method is that the spectral and textural features used for ground-object feature extraction are hand-designed for the experimental data and therefore not very general; moreover, the training and testing pipeline consists of four parts, namely multi-level blocking of the image, extraction of the two kinds of features (spectral and textural), ground-object classification of image blocks, and edge-region processing, so the training and testing process is complex and reduces the efficiency of multispectral image classification.
The patent application filed by Xidian University, "Multispectral image classification method based on depth fusion residual error network" (application No. 201711144061.5, publication No. CN107832797A), proposes a multispectral image classification method based on a depth fusion residual network. Its steps are: (1) inputting a multispectral image; (2) normalizing each band image of each multispectral image containing ground-object targets; (3) obtaining a multispectral image matrix; (4) obtaining a data set; (5) building a depth fusion residual network; (6) training the depth fusion residual network; (7) classifying the test data set. When classifying the test data sets, the features of the two data sets are extracted separately with a depth residual network, and the two features are fused and then classified. The drawback of this method is that, when the features of the two data sets are extracted with the depth residual network, the semantic information contained in low-level features is ignored and fusion between low-level and high-level features is not considered, so the multi-level features in the depth residual network are not fully utilized, the extracted features have poor discriminability and robustness, and the classification accuracy suffers.
Disclosure of Invention
The invention aims to provide a multispectral image classification method based on a self-adaptive feature fusion residual error network aiming at the defects of the prior art.
The idea of realizing the purpose of the invention is to construct an adaptive feature fusion residual error network comprising a basic residual error network and an adaptive feature fusion network, extract multi-level features through the basic residual error network, and adaptively fuse the multi-level features by utilizing a fusion module in the adaptive feature fusion network, so that the extracted features have better discriminability and richer semantic information, thereby improving the classification accuracy.
The method comprises the following specific steps:
(1) inputting a multispectral image:
inputting a multispectral image containing a plurality of channels and a plurality of ground object targets;
(2) and (3) carrying out normalization processing on the multispectral image:
normalizing each pixel in each channel of the multispectral image by using a linear normalization method, and combining the normalized values of all pixels of all channels to obtain a normalized multispectral image;
(3) selecting a training sample and a test sample:
randomly selecting 4000 pixels from each ground-object class of labeled pixels in the multispectral image as training samples, and taking the remaining labeled pixels as test samples;
(4) generating a training data set:
taking each training sample as the center of a 32 × 32 matrix window, blocking the normalized multispectral image to obtain an image block corresponding to each training sample, and forming the training data set from the image blocks of all training samples;
(5) constructing a basic residual error network:
(5a) building a basic residual error network, wherein the basic residual error network sequentially comprises the following structures: input layer → convolutional layer → pooling layer → first residual block → second residual block → third residual block; wherein the plurality of residual blocks generate a multi-level feature;
(5b) the parameters of each layer are set as follows: the number of feature maps of the input layer is 3, the number of feature maps of the convolutional layer is 64, and the number of feature maps of the pooling layer is 256;
(6) constructing an adaptive feature fusion network:
(6a) building a self-adaptive feature fusion network, wherein the self-adaptive feature fusion network structure sequentially comprises the following steps: the first fusion module → the second fusion module → the convolutional layer → the pooling layer → the global pooling layer → the fully-connected layer → the softmax classifier layer; the fusion module is used for fusing two input matrixes, generating a corresponding weight for each matrix, respectively multiplying the two weights by the two matrixes for weighting, and adding the two weighted matrixes to complete the fusion process;
(6b) the parameters of each layer are set as follows: the number of feature maps of the convolutional layer is set to 256, the number of feature maps of the pooling layer to 256, and the number of feature maps of the fully connected layer to 100;
(7) generating an adaptive feature fusion residual error network:
the output of the second residual error block and the output of the third residual error block in the basic residual error network are used as the input of the first fusion module in the self-adaptive characteristic fusion network, and the output of the first fusion module in the self-adaptive characteristic fusion network and the output of the first residual error block in the basic residual error network are used as the input of the second fusion module, so that the self-adaptive characteristic fusion network adaptively fuses multi-level characteristics in the basic residual error network; the input and output of other layers in the basic residual error network and the self-adaptive feature fusion network are unchanged, and the modified basic residual error network and the self-adaptive feature fusion network form the self-adaptive feature fusion residual error network;
(8) training the adaptive feature fusion residual error net:
inputting the training data set into the self-adaptive feature fusion residual error network and training it iteratively for 20,000 iterations to obtain the trained self-adaptive feature fusion residual error network;
(9) generating a test data set:
taking each test sample as the center of a 32 × 32 matrix window, blocking the normalized multispectral image to obtain an image block corresponding to each test sample, and taking the image blocks of all test samples as the test data set;
(10) classifying the test data set:
(10a) inputting the test data set into a trained adaptive feature fusion residual error network to obtain a final classification result;
(10b) calculating the accuracy of the classification result with the accuracy calculation formula.
Compared with the prior art, the invention has the following advantages:
Firstly, the invention builds a self-adaptive feature fusion residual error network and uses it to extract features of the multispectral image. This is a self-learning feature extraction method, which overcomes the low generality and complex training and testing process caused by hand-designed features in the prior art, and therefore has the advantages of high generality and a simple training and testing process.
Secondly, the invention builds the basic residual error network and the self-adaptive feature fusion network in sequence, extracts multi-level features with the basic residual error network, and fuses them with the fusion modules of the self-adaptive feature fusion network according to the characteristics of the multi-level features extracted by the basic residual error network, thereby overcoming the insufficient utilization of multi-level features within the network in the prior art.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a simulation diagram of the present invention.
Detailed Description
The implementation steps of the present invention are described in further detail below with reference to FIG. 1.
Step 1, inputting a multispectral image.
A multispectral image containing a plurality of channels and a plurality of ground object targets is input.
Step 2, normalizing the multispectral image.
And performing normalization processing on each pixel in each channel of the multispectral image by using a linear normalization method, and combining the normalized values of all the pixels of all the channels to obtain the normalized multispectral image.
The steps of the linear normalization method are as follows.
First, a preliminary normalized value of each pixel in each channel of the multispectral image is calculated according to the following formula:
y_{i,j} = (x_{i,j} - x_{min}) / (x_{max} - x_{min})
where y_{i,j} denotes the preliminary normalized value of the j-th pixel of the i-th channel of the multispectral image, x_{i,j} denotes the value of the j-th pixel of the i-th channel, x_{min} denotes the minimum pixel value in the i-th channel, and x_{max} denotes the maximum pixel value in the i-th channel.
Second, the normalized value of each pixel in each channel of the multispectral image is calculated according to the following formula:
z_{k,l} = (y_{k,l} - y_{mean}) / y_{std}
where z_{k,l} denotes the normalized value of the l-th pixel of the k-th channel of the multispectral image, y_{k,l} denotes the preliminary normalized value of the l-th pixel of the k-th channel, y_{mean} denotes the mean of all preliminary normalized values in the k-th channel, and y_{std} denotes their standard deviation.
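The two formulas above can be sketched in code as follows (a minimal NumPy illustration, not part of the patent; the function name, the channels × height × width layout, and float64 precision are assumptions):

```python
import numpy as np

def normalize_multispectral(image):
    """Two-step linear normalization applied independently to each channel.

    image: array of shape (channels, height, width).
    """
    image = image.astype(np.float64)
    out = np.empty_like(image)
    for i, channel in enumerate(image):
        # First formula: min-max scaling within the i-th channel.
        x_min, x_max = channel.min(), channel.max()
        y = (channel - x_min) / (x_max - x_min)
        # Second formula: subtract the channel mean and divide by the standard deviation.
        out[i] = (y - y.mean()) / y.std()
    return out
```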
Step 3, selecting training samples and test samples.
For each ground-object class in the multispectral image, 4000 labeled pixels are randomly selected as training samples, and the remaining labeled pixels are taken as test samples.
Step 4, generating the training data set.
Taking each training sample as the center of a 32 × 32 matrix window, the normalized multispectral image is blocked to obtain an image block corresponding to each training sample, and the image blocks of all training samples form the training data set.
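A sketch of this blocking step is given below (assuming the normalized image is stored as a channels × height × width array, each sample is identified by its (row, column) position, and zero-padding is used near image borders; none of these details is fixed by the patent):

```python
import numpy as np

def extract_patches(norm_image, sample_positions, window=32):
    """Cut a window × window image block centered on each sample pixel.

    norm_image: array of shape (channels, height, width).
    sample_positions: iterable of (row, col) pixel coordinates.
    """
    half = window // 2
    # Zero-pad so windows centered near the border stay in bounds (assumption).
    padded = np.pad(norm_image, ((0, 0), (half, half), (half, half)))
    patches = []
    for row, col in sample_positions:
        r, s = row + half, col + half           # coordinates in the padded image
        patches.append(padded[:, r - half:r + half, s - half:s + half])
    return np.stack(patches)                    # shape: (N, channels, 32, 32)
```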
Step 5, constructing the basic residual error network.
A basic residual error network is built with the following structure in order: input layer → convolutional layer → pooling layer → first residual block → second residual block → third residual block, where the multiple residual blocks generate multi-level features.
The parameters of each layer of the basic residual error network are set as follows: the number of feature maps of the input layer is 3, the number of feature maps of the convolutional layer is 64, and the number of feature maps of the pooling layer is 256.
The first residual block, the second residual block and the third residual block have the same structure, each residual block is provided with nine layers, and the structure of the residual block is as follows: first convolution layer → first batch normalization layer → first linear activation function layer → second convolution layer → second batch normalization layer → second linear activation function layer → third convolution layer → third batch normalization layer → third linear activation function layer, wherein the first linear activation function layer is connected with the third batch normalization layer.
The parameters of each layer of each residual block are set as follows: the feature maps for the first convolutional layer are set to 64, the feature maps for the second convolutional layer to 256, and the feature maps for the third convolutional layer to 256.
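A possible PyTorch rendering of such a nine-layer residual block is sketched below (the patent was implemented in Caffe; the kernel sizes and the 1×1 projection that reconciles the 64-channel skip tensor with the 256-channel main path are assumptions, since the patent only fixes the feature-map counts):

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Nine-layer residual block of step 5: three (conv -> BN -> ReLU) groups,
    with the output of the first ReLU fed forward to the third BN stage.
    Kernel sizes and the 1x1 projection on the skip path are assumptions."""

    def __init__(self, in_channels=256):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, 64, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(64)
        self.conv2 = nn.Conv2d(64, 256, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(256)
        self.conv3 = nn.Conv2d(256, 256, kernel_size=1)
        self.bn3 = nn.BatchNorm2d(256)
        self.relu = nn.ReLU(inplace=True)
        # Projection so the 64-channel skip tensor can be added to the
        # 256-channel output of the third batch normalization layer (assumption).
        self.project = nn.Conv2d(64, 256, kernel_size=1)

    def forward(self, x):
        skip = self.relu(self.bn1(self.conv1(x)))     # output of the first ReLU
        out = self.relu(self.bn2(self.conv2(skip)))
        out = self.bn3(self.conv3(out))
        out = out + self.project(skip)                # skip connection into BN3 stage
        return self.relu(out)
```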
Step 6, constructing the self-adaptive feature fusion network.
Building a self-adaptive feature fusion network, wherein the self-adaptive feature fusion network structure sequentially comprises the following steps: first fusion module → second fusion module → convolutional layer → pooling layer → global pooling layer → fully connected layer → softmax classifier layer. The fusion module realizes fusion of two input matrixes, generates a corresponding weight for each matrix, multiplies the two weights by the two matrixes respectively for weighting, and adds the two weighted matrixes to complete the fusion process.
The parameters of each layer of the self-adaptive feature fusion network are set as follows: the number of feature maps of the convolutional layer is set to 256, the number of feature maps of the pooling layer to 256, and the number of feature maps of the fully connected layer to 100.
The first fusion module and the second fusion module have the same structure. Each fusion module is structured, in order, as: input layer → upsampling layer → pooling addition layer → first fully connected layer → second fully connected layer → sigmoid layer → weighted addition layer. The parameters of each layer of each fusion module are set as follows: the first fully connected layer has 8 feature maps and the second fully connected layer has 128 feature maps. The input layer receives two matrices of different sizes, and the upsampling layer upsamples the smaller matrix so that the two matrices have the same size. The pooling addition layer globally pools the two matrices and adds them. The vector output by the sigmoid layer and the vector obtained by subtracting it from an all-ones vector are used as weights and, together with the two matrices, are input into the weighted addition layer.
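The fusion module can be sketched as follows (again a PyTorch illustration; the bilinear upsampling mode, the activation between the two fully connected layers, and sizing the second fully connected layer to the input channel count so that the channel-wise weighting is well defined are assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionModule(nn.Module):
    """Adaptive fusion module of step 6: upsample the smaller input, globally
    pool and add the two feature maps, pass the pooled vector through two
    fully connected layers and a sigmoid to obtain weights w, then output
    w * A + (1 - w) * B."""

    def __init__(self, channels=256, hidden=8):
        super().__init__()
        self.fc1 = nn.Linear(channels, hidden)   # first fully connected layer
        self.fc2 = nn.Linear(hidden, channels)   # second fully connected layer

    def forward(self, a, b):
        # Upsampling layer: make the two feature maps the same spatial size.
        if a.shape[-2:] != b.shape[-2:]:
            if a.shape[-1] < b.shape[-1]:
                a = F.interpolate(a, size=b.shape[-2:], mode='bilinear', align_corners=False)
            else:
                b = F.interpolate(b, size=a.shape[-2:], mode='bilinear', align_corners=False)
        # Pooling addition layer: global average pooling of both inputs, then add.
        pooled = a.mean(dim=(2, 3)) + b.mean(dim=(2, 3))            # (N, channels)
        # Two fully connected layers followed by the sigmoid give the weights.
        w = torch.sigmoid(self.fc2(torch.relu(self.fc1(pooled))))
        w = w.view(w.size(0), -1, 1, 1)
        # Weighted addition layer: w weights one input, (1 - w) the other.
        return w * a + (1.0 - w) * b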
Step 7, generating the self-adaptive feature fusion residual error network.
The output of the second residual block and the output of the third residual block in the basic residual error network are taken as the inputs of the first fusion module in the self-adaptive feature fusion network, and the output of the first fusion module and the output of the first residual block in the basic residual error network are taken as the inputs of the second fusion module, so that the self-adaptive feature fusion network adaptively fuses the multi-level features of the basic residual error network. The inputs and outputs of the other layers of the basic residual error network and of the self-adaptive feature fusion network are unchanged, and the modified basic residual error network together with the self-adaptive feature fusion network forms the self-adaptive feature fusion residual error network.
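A sketch of this wiring, reusing the ResidualBlock and FusionModule sketches above (kernel sizes, pooling types, the 1×1 convolution that raises the channel count before the pooling layer, and the number of output classes are assumptions not fixed by the patent):

```python
import torch
import torch.nn as nn

class AdaptiveFusionResNet(nn.Module):
    """Forward-pass wiring of step 7: the base residual network produces
    multi-level features, the first fusion module fuses the outputs of the
    second and third residual blocks, and the second fusion module fuses
    that result with the output of the first residual block."""

    def __init__(self, in_channels=3, num_classes=7):
        super().__init__()
        # Base residual network: conv -> pool -> three residual blocks.
        self.conv = nn.Conv2d(in_channels, 64, kernel_size=3, padding=1)
        # The patent lists the pooling layer with 256 feature maps; a 1x1
        # convolution before max pooling is assumed to raise the channel count.
        self.pool = nn.Sequential(nn.Conv2d(64, 256, kernel_size=1), nn.MaxPool2d(2))
        self.res1 = ResidualBlock(256)
        self.res2 = ResidualBlock(256)
        self.res3 = ResidualBlock(256)
        # Adaptive feature fusion network.
        self.fuse1 = FusionModule(256)
        self.fuse2 = FusionModule(256)
        self.head_conv = nn.Conv2d(256, 256, kernel_size=3, padding=1)
        self.head_pool = nn.MaxPool2d(2)
        self.fc = nn.Linear(256, 100)
        self.classifier = nn.Linear(100, num_classes)   # followed by softmax

    def forward(self, x):
        x = self.pool(self.conv(x))
        f1 = self.res1(x)
        f2 = self.res2(f1)
        f3 = self.res3(f2)
        fused = self.fuse2(self.fuse1(f2, f3), f1)       # adaptive multi-level fusion
        out = self.head_pool(self.head_conv(fused))
        out = out.mean(dim=(2, 3))                       # global pooling layer
        out = torch.relu(self.fc(out))
        return self.classifier(out)                      # logits; softmax applied in the loss
```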
Step 8, training the self-adaptive feature fusion residual error network.
The training data set is input into the self-adaptive feature fusion residual error network and trained iteratively for 20,000 iterations, yielding the trained self-adaptive feature fusion residual error network.
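The patent trains the network in Caffe; a rough PyTorch equivalent of the 20,000-iteration training loop is sketched below (the optimizer, learning rate, and batch size are assumptions):

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def train(model, patches, labels, iterations=20000, batch_size=64, lr=0.001):
    """Iterative training of the self-adaptive feature fusion residual network.
    patches: float tensor (N, 3, 32, 32); labels: long tensor (N,)."""
    loader = DataLoader(TensorDataset(patches, labels), batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    criterion = nn.CrossEntropyLoss()   # the softmax classifier layer is folded into this loss
    model.train()
    step = 0
    while step < iterations:
        for x, y in loader:
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
            step += 1
            if step >= iterations:
                break
    return model
```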
Step 9, generating the test data set.
Taking each test sample as the center of a 32 × 32 matrix window, the normalized multispectral image is blocked to obtain an image block corresponding to each test sample, and the image blocks of all test samples form the test data set.
Step 10, classifying the test data set.
The test data set is input into the trained self-adaptive feature fusion residual error network to obtain the final classification result.
The accuracy of the classification result is then calculated with the accuracy calculation formula.
The effect of the present invention will be further described with reference to simulation experiments.
1. Simulation conditions are as follows:
the simulation experiment of the invention is carried out under the Intel (R) Xeon (R) E5-2630CPU with main frequency of 2.40GHz 16, hardware environment with memory of 64GB and software environment of Caffe.
2. Simulation content and result analysis:
under the simulation conditions, the method and the deep residual error network classification method in the prior art are adopted to classify partial multispectral images of the west-ampere region shot by the satellite Quickbird respectively, and the classification result is shown in figure 2. Fig. 2(a) is a real landmark signature corresponding to a part of multispectral images of the west ampere region to be classified in the simulation experiment of the present invention, and the landmark categories include 7 categories of buildings, flat land, roads, shadows, soil, trees, and water. FIG. 2(b) is a graph showing the classification results obtained by the method of the present invention. Fig. 2(c) is a diagram of a classification result obtained by using a depth residual error net classification method in the prior art.
As can be seen from a comparison of fig. 2: compared with the depth residual error net classification method in the prior art, the classification result graph obtained by the method is closer to a real ground object marking graph.
The classification accuracies of the method of the invention and of the deep residual error network classification method are calculated according to the following formula:
classification accuracy = (total number of correctly classified pixels) / (total number of pixels)
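Expressed in code, this accuracy measure is simply (assuming the predicted and true labels are stored as equal-length arrays):

```python
import numpy as np

def classification_accuracy(predicted, true):
    """Total number of correctly classified pixels / total number of pixels."""
    predicted, true = np.asarray(predicted), np.asarray(true)
    return (predicted == true).sum() / true.size
```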
The results of comparing the classification accuracy obtained by the two methods are shown in table 1.
TABLE 1 Comparison of classification accuracies of the method of the invention and the prior-art method

Method                    Overall accuracy (%)  Buildings (%)  Flat land (%)  Road (%)  Shadow (%)  Soil (%)  Tree (%)  Water (%)
Method of the invention         95.34               93.10          98.67        98.48      96.40      98.52     93.82     98.99
Prior art                       94.54               90.95          97.49        97.52      95.68      98.16     93.50     98.95
As can be seen from Table 1, compared with the prior-art deep residual error network method, the classification accuracy obtained with the method of the invention is higher overall, and the accuracy of every class is improved.
In summary, the invention provides a self-adaptive feature fusion residual error network that adaptively fuses the multi-level features within the residual error network and makes fuller use of features containing different semantic information; when features of different sizes are fused, the smaller feature map is upsampled, which better preserves its detail information, so the extracted features have better discriminability and robustness and contain richer semantic information, and a higher classification accuracy can be obtained than with the prior art.

Claims (3)

1. A multispectral image classification method based on adaptive feature fusion residual error network is characterized in that the method constructs an adaptive feature fusion residual error network, and utilizes a fusion module to adaptively fuse multilevel features; the method comprises the following specific steps:
(1) inputting a multispectral image:
inputting a multispectral image containing a plurality of channels and a plurality of ground object targets;
(2) and (3) carrying out normalization processing on the multispectral image:
normalizing each pixel in each channel of the multispectral image by using a linear normalization method, and combining the normalized values of all pixels of all channels to obtain a normalized multispectral image;
(3) selecting a training sample and a test sample:
randomly selecting 4000 pixels from each ground-object class of labeled pixels in the multispectral image as training samples, and taking the remaining labeled pixels as test samples;
(4) generating a training data set:
taking each training sample as the center of a 32 × 32 matrix window, blocking the normalized multispectral image to obtain an image block corresponding to each training sample, and forming the training data set from the image blocks of all training samples;
(5) constructing a basic residual error network:
(5a) building a basic residual error network, wherein the basic residual error network sequentially comprises the following structures: input layer → convolutional layer → pooling layer → first residual block → second residual block → third residual block; wherein the plurality of residual blocks generate a multi-level feature;
(5b) the parameters of each layer are set as follows: the number of feature maps of the input layer is 3, the number of feature maps of the convolutional layer is 64, and the number of feature maps of the pooling layer is 256;
(6) constructing an adaptive feature fusion network:
(6a) building a self-adaptive feature fusion network, wherein the self-adaptive feature fusion network structure sequentially comprises the following steps: the first fusion module → the second fusion module → the convolutional layer → the pooling layer → the global pooling layer → the fully-connected layer → the softmax classifier layer; the fusion module is used for fusing two input matrixes, generating a corresponding weight for each matrix, respectively multiplying the two weights by the two matrixes for weighting, and adding the two weighted matrixes to complete the fusion process;
the first fusion module and the second fusion module have the same structure, and each fusion module is structured, in order, as: input layer → upsampling layer → pooling addition layer → first fully connected layer → second fully connected layer → sigmoid layer → weighted addition layer; the parameters of each layer of each fusion module are set as follows: the number of feature maps of the first fully connected layer is set to 8, and the number of feature maps of the second fully connected layer is set to 128; the input layer receives two matrices of different sizes, and the upsampling layer upsamples the smaller matrix so that the two matrices have the same size; the pooling addition layer globally pools the two matrices and then adds them; the vector output by the sigmoid layer and the vector obtained by subtracting it from an all-ones vector are used as weights and, together with the two matrices, are input into the weighted addition layer;
(6b) the parameters of each layer are set as follows: the number of feature maps of the convolutional layer is set to 256, the number of feature maps of the pooling layer to 256, and the number of feature maps of the fully connected layer to 100;
(7) generating an adaptive feature fusion residual error network:
the output of the second residual error block and the output of the third residual error block in the basic residual error network are used as the input of the first fusion module in the self-adaptive characteristic fusion network, and the output of the first fusion module in the self-adaptive characteristic fusion network and the output of the first residual error block in the basic residual error network are used as the input of the second fusion module, so that the self-adaptive characteristic fusion network adaptively fuses multi-level characteristics in the basic residual error network; the input and output of other layers in the basic residual error network and the self-adaptive feature fusion network are unchanged, and the modified basic residual error network and the self-adaptive feature fusion network form the self-adaptive feature fusion residual error network;
(8) training the adaptive feature fusion residual error net:
inputting the training data set into the self-adaptive feature fusion residual error network and training it iteratively for 20,000 iterations to obtain the trained self-adaptive feature fusion residual error network;
(9) generating a test data set:
taking each test sample as the center of a 32 × 32 matrix window, blocking the normalized multispectral image to obtain an image block corresponding to each test sample, and taking the image blocks of all test samples as the test data set;
(10) classifying the test data set:
(10a) inputting the test data set into a trained adaptive feature fusion residual error network to obtain a final classification result;
(10b) calculating the accuracy of the classification result with the accuracy calculation formula.
2. The method for classifying multispectral images based on adaptive feature fusion residual error network as claimed in claim 1, wherein the linear normalization in step (2) comprises the following steps:
firstly, calculating a preliminary normalized value of each pixel in each channel of the multispectral image according to the following formula:
y_{i,j} = (x_{i,j} - x_{min}) / (x_{max} - x_{min})
where y_{i,j} denotes the preliminary normalized value of the j-th pixel of the i-th channel of the multispectral image, x_{i,j} denotes the value of the j-th pixel of the i-th channel, x_{min} denotes the minimum pixel value in the i-th channel, and x_{max} denotes the maximum pixel value in the i-th channel;
secondly, calculating the normalized value of each pixel in each channel of the multispectral image according to the following formula,
z_{k,l} = (y_{k,l} - y_{mean}) / y_{std}
where z_{k,l} denotes the normalized value of the l-th pixel of the k-th channel of the multispectral image, y_{k,l} denotes the preliminary normalized value of the l-th pixel of the k-th channel, y_{mean} denotes the mean of all preliminary normalized values in the k-th channel, and y_{std} denotes their standard deviation.
3. The method according to claim 1, wherein the first residual block, the second residual block, and the third residual block in step (5a) have the same structure, and each residual block has nine layers, which have the following structures: the first convolution layer → the first batch normalization layer → the first linear activation function layer → the second convolution layer → the second batch normalization layer → the second linear activation function layer → the third convolution layer → the third batch normalization layer → the third linear activation function layer, wherein the first linear activation function layer is connected to the third batch normalization layer; the parameters of each layer are set as follows: the feature maps for the first convolutional layer are set to 64, the feature maps for the second convolutional layer to 256, and the feature maps for the third convolutional layer to 256.
CN201810650236.8A 2018-06-22 2018-06-22 Multispectral image classification method based on self-adaptive feature fusion residual error network Active CN108830330B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810650236.8A CN108830330B (en) 2018-06-22 2018-06-22 Multispectral image classification method based on self-adaptive feature fusion residual error network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810650236.8A CN108830330B (en) 2018-06-22 2018-06-22 Multispectral image classification method based on self-adaptive feature fusion residual error network

Publications (2)

Publication Number Publication Date
CN108830330A CN108830330A (en) 2018-11-16
CN108830330B true CN108830330B (en) 2021-11-02

Family

ID=64137460

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810650236.8A Active CN108830330B (en) 2018-06-22 2018-06-22 Multispectral image classification method based on self-adaptive feature fusion residual error network

Country Status (1)

Country Link
CN (1) CN108830330B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111275066A (en) * 2018-12-05 2020-06-12 北京嘀嘀无限科技发展有限公司 Image feature fusion method and device and electronic equipment
CN109919171A (en) * 2018-12-21 2019-06-21 广东电网有限责任公司 A kind of Infrared image recognition based on wavelet neural network
CN109785302B (en) * 2018-12-27 2021-03-19 中国科学院西安光学精密机械研究所 Space-spectrum combined feature learning network and multispectral change detection method
CN110032928B (en) * 2019-02-27 2021-09-24 成都数之联科技有限公司 Satellite remote sensing image water body identification method suitable for color sensitivity
CN110503130B (en) * 2019-07-19 2021-11-30 西安邮电大学 Present survey image classification method based on feature fusion
CN110443296B (en) * 2019-07-30 2022-05-06 西北工业大学 Hyperspectral image classification-oriented data adaptive activation function learning method
CN110441312A (en) * 2019-07-30 2019-11-12 上海深视信息科技有限公司 A kind of surface defects of products detection system based on multispectral imaging
CN110651277B (en) * 2019-08-08 2023-08-01 京东方科技集团股份有限公司 Computer-implemented method, computer-implemented diagnostic method, image classification device, and computer program product
CN111191735B (en) * 2020-01-04 2023-03-24 西安电子科技大学 Convolutional neural network image classification method based on data difference and multi-scale features
CN111199214B (en) * 2020-01-04 2023-05-05 西安电子科技大学 Residual network multispectral image ground object classification method
CN113240017B (en) * 2021-05-18 2023-09-12 西安理工大学 Multispectral and panchromatic image classification method based on attention mechanism
CN114332592B (en) * 2022-03-11 2022-06-21 中国海洋大学 Ocean environment data fusion method and system based on attention mechanism


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10068171B2 (en) * 2015-11-12 2018-09-04 Conduent Business Services, Llc Multi-layer fusion in a convolutional neural network for image classification

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103049886A (en) * 2011-10-12 2013-04-17 方正国际软件(北京)有限公司 Image texture repair method and system
CN103279957A (en) * 2013-05-31 2013-09-04 北京师范大学 Method for extracting remote sensing image interesting area based on multi-scale feature fusion
CN105760488A (en) * 2016-02-17 2016-07-13 北京大学 Image expressing method and device based on multi-level feature fusion
CN105787458A (en) * 2016-03-11 2016-07-20 重庆邮电大学 Infrared behavior identification method based on adaptive fusion of artificial design feature and depth learning feature
CN106845510A (en) * 2016-11-07 2017-06-13 中国传媒大学 Chinese tradition visual culture Symbol Recognition based on depth level Fusion Features
CN107330457A (en) * 2017-06-23 2017-11-07 电子科技大学 A kind of Classification of Polarimetric SAR Image method based on multi-feature fusion
CN107657257A (en) * 2017-08-14 2018-02-02 中国矿业大学 A kind of semantic image dividing method based on multichannel convolutive neutral net
CN107944470A (en) * 2017-11-03 2018-04-20 西安电子科技大学 SAR image sorting technique based on profile ripple FCN CRF
CN108133020A (en) * 2017-12-25 2018-06-08 上海七牛信息技术有限公司 Video classification methods, device, storage medium and electronic equipment
CN108038519A (en) * 2018-01-30 2018-05-15 浙江大学 A kind of uterine neck image processing method and device based on dense feature pyramid network

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
"Squeeze-and-Excitation Networks";Jie Hu;《arXiv》;20170905;第1-11页 *
"Deep Multiscale Spectral-Spatial Feature Fusion for Hyperspectral Images Classification";Miaomiao Liang等;《IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing》;20180608;第11卷(第8期);第2911-2924页 *
"Hyperspectral Image Classification With Deep Feature Fusion Network";Weiwei Song等;《IEEE Transactions on Geoscience and Remote Sensing》;20180228;第56卷(第6期);第3173-3184页 *
"Intergrating Multilayer Features of Convolutional Neural Networks for Remote Sensing Scene Classification";Erzhu Li等;《IEEE Transactions on Geoscience and Remote Sensing》;20171010;第55卷(第10期);第5653-5665页 *
"全卷积网络多层特征融合的飞机快速检测";辛鹏等;《光学学报》;20180331;第38卷(第3期);第1-7页 *
"基于核函数粒子滤波和多特征自适应融合的目标跟踪";袁广林等;《计算机辅助设计与图形学学报》;20091231;第21卷(第12期);第1774-1784页 *
"基于自适应特征融合的均值迁移目标跟踪";汪首坤等;《北京理工大学学报》;20110731;第31卷(第7期);第803-809页 *

Also Published As

Publication number Publication date
CN108830330A (en) 2018-11-16

Similar Documents

Publication Publication Date Title
CN108830330B (en) Multispectral image classification method based on self-adaptive feature fusion residual error network
CN108985238B (en) Impervious surface extraction method and system combining deep learning and semantic probability
CN108564006B (en) Polarized SAR terrain classification method based on self-learning convolutional neural network
CN110348399B (en) Hyperspectral intelligent classification method based on prototype learning mechanism and multidimensional residual error network
CN107944483B (en) Multispectral image classification method based on dual-channel DCGAN and feature fusion
CN107832797B (en) Multispectral image classification method based on depth fusion residual error network
CN110110596B (en) Hyperspectral image feature extraction, classification model construction and classification method
CN103955702A (en) SAR image terrain classification method based on depth RBF network
CN105117736B (en) Classification of Polarimetric SAR Image method based on sparse depth heap stack network
CN110197205A (en) A kind of image-recognizing method of multiple features source residual error network
CN113705641B (en) Hyperspectral image classification method based on rich context network
CN111339862B (en) Remote sensing scene classification method and device based on channel attention mechanism
CN107239759A (en) A kind of Hi-spatial resolution remote sensing image transfer learning method based on depth characteristic
CN112232328A (en) Remote sensing image building area extraction method and device based on convolutional neural network
CN115909052A (en) Hyperspectral remote sensing image classification method based on hybrid convolutional neural network
CN110807485B (en) Method for fusing two-classification semantic segmentation maps into multi-classification semantic map based on high-resolution remote sensing image
CN112200123B (en) Hyperspectral open set classification method combining dense connection network and sample distribution
CN108256557B (en) Hyperspectral image classification method combining deep learning and neighborhood integration
CN113705580A (en) Hyperspectral image classification method based on deep migration learning
CN114943893B (en) Feature enhancement method for land coverage classification
CN110689065A (en) Hyperspectral image classification method based on flat mixed convolution neural network
CN115116054A (en) Insect pest identification method based on multi-scale lightweight network
CN115908924A (en) Multi-classifier-based small sample hyperspectral image semantic segmentation method and system
CN111738052A (en) Multi-feature fusion hyperspectral remote sensing ground object classification method based on deep learning
CN112329818B (en) Hyperspectral image non-supervision classification method based on graph convolution network embedded characterization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant