CN109754017A - Hyperspectral image classification method based on a separable three-dimensional residual network and transfer learning - Google Patents
- Publication number
- CN109754017A (application no. CN201910018924.7A)
- Authority
- CN
- China
- Prior art keywords
- training
- data
- dimensional
- classification
- layer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The present invention relates to a hyperspectral image classification method based on a separable three-dimensional residual network and transfer learning. First, a three-dimensional convolutional network with fewer parameters, suited to the characteristics of hyperspectral images, is designed. Second, a transfer strategy between hyperspectral images acquired by different sensors is combined with the three-dimensional convolutional network model, realizing high-accuracy hyperspectral image classification under small-sample conditions: deep features of the hyperspectral image are extracted autonomously and classified with high accuracy. Compared with existing deep-learning-based hyperspectral image classification methods, the network model is deeper, the accuracy is higher, and the parameter count is smaller.
Description
Technical field
The present invention relates to a hyperspectral image classification method based on a separable three-dimensional residual network and transfer learning, and belongs to the field of remote sensing image processing.
Background technique
Hyperspectral images contain both spectral and spatial information and have important applications in military and civilian fields. However, their high dimensionality, the strong correlation between bands, and spectral mixing pose great challenges for hyperspectral image classification. In recent years, with the emergence of deep learning, deep-learning-based hyperspectral image classification methods have made breakthrough progress. But deep learning models usually contain a large number of parameters and require many training samples, while labeled samples in hyperspectral images are relatively scarce; it is difficult to adequately train deep models, and overfitting easily occurs. Moreover, homogeneous hyperspectral datasets are very few, so using transfer learning between homogeneous data to solve the small-sample problem in hyperspectral images is also heavily restricted. Therefore, designing deep models that achieve high-accuracy hyperspectral image classification under small-sample conditions remains a challenging task.
The hyperspectral image classification problem is: given an image in which some pixels are labeled, predict, via an algorithm, the specific class of every pixel in the image. Traditional hyperspectral image classification methods generally use hand-crafted features such as SIFT, HOG, and PHOG to extract features from the hyperspectral image, and then classify with models such as multilayer perceptrons or support vector machines. But the design and selection of these hand-crafted features relies on expert knowledge, and it is difficult to choose a feature with general applicability.
In recent years, with the rise of deep learning, fully data-driven deep neural networks that require no prior knowledge have shown outstanding advantages in fields such as image processing and computer vision. Their applications cover high-level vision tasks such as target recognition, detection, and classification, as well as mid- and low-level image processing such as denoising, dynamic deblurring, and reconstruction. Deep learning techniques have also been introduced into hyperspectral image classification and achieve classification results substantially better than conventional methods. However, limited by the number of hyperspectral training samples, the deep learning models applied to hyperspectral image classification are relatively shallow, even though many computer-vision experiments show that effectively increasing depth is very beneficial for classification performance.
Summary of the invention
Technical problems to be solved
To avoid the shortcomings of the prior art, the present invention proposes a hyperspectral image classification method based on a separable three-dimensional residual network and transfer learning. First, a three-dimensional convolutional network with fewer parameters and suited to the characteristics of hyperspectral images is designed. Second, a transfer strategy between hyperspectral images acquired by different sensors is combined with the three-dimensional convolutional network model, realizing high-accuracy hyperspectral image classification under small-sample conditions.
Technical solution
A hyperspectral image classification method based on a separable three-dimensional residual network and transfer learning, characterized by the following steps:
Step 1, data preprocessing: apply min-max normalization to the hyperspectral image data to be processed. The min-max normalization formula is:

x' = (x - min(x)) / (max(x) - min(x))
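A minimal numpy sketch of this normalization step; a single global minimum and maximum over the whole cube are assumed here, since the patent does not say whether normalization is global or per band:

```python
import numpy as np

def minmax_normalize(cube):
    """Min-max normalize a hyperspectral cube to [0, 1] (Step 1).

    The cube is an M x N x L array. A single global minimum and maximum
    are assumed; per-band normalization would be an equally plausible
    reading of the patent.
    """
    lo, hi = cube.min(), cube.max()
    return (cube - lo) / (hi - lo)

# A small synthetic 4 x 4 x 3 cube with values 0..47
cube = np.arange(48, dtype=np.float64).reshape(4, 4, 3)
norm = minmax_normalize(cube)
print(norm.min(), norm.max())  # 0.0 1.0
```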
Step 2, data division: for the pre-training dataset, extract all labeled samples as the pre-training set. For the target dataset, extract 10-20 samples per class as the training set, with the remainder as the test set. Sample extraction works as follows: for a three-dimensional hyperspectral data cube of size M × N × L, where M and N are the image height and width and L is the number of spectral bands, a sample is drawn by taking the S × S × L data block centered on the pixel to be processed as that center pixel's sample, where S is the neighborhood size.
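The patch extraction above can be sketched in numpy as follows. The patent does not say how pixels near the image border are handled, so mirror padding is an assumption made here to give every pixel a full-size S × S × L block:

```python
import numpy as np

def extract_patch(cube, row, col, S):
    """Return the S x S x L block centered on pixel (row, col) (Step 2).

    Mirror padding for border pixels is an assumption, not stated in
    the patent. S is taken to be odd, as in the S = 27 default.
    """
    r = S // 2
    padded = np.pad(cube, ((r, r), (r, r), (0, 0)), mode="reflect")
    # After padding, original pixel (row, col) sits at (row + r, col + r),
    # so the window starting at (row, col) is centered on it.
    return padded[row:row + S, col:col + S, :]

cube = np.random.rand(10, 12, 5)      # M = 10, N = 12, L = 5
patch = extract_patch(cube, 0, 0, 5)  # a corner pixel still yields 5 x 5 x 5
print(patch.shape)                    # (5, 5, 5)
```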
Step 3, build the separable three-dimensional residual network model, consisting of a feature-extraction part and a classification part:
1) Feature extraction: the input data first passes, in order, through an asymmetric three-dimensional convolution layer, a batch normalization layer, a ReLU activation, and a pooling layer; it then passes through four separable three-dimensional residual modules of widths 32, 64, 128, and 256, which further extract deep features. The asymmetric three-dimensional convolution layer uses structurally asymmetric 3D kernels whose spectral-dimension size exceeds their spatial-dimension size; the pooling layer uses 3D max pooling.
2) Classification: this part consists of a three-dimensional adaptive average pooling layer and a fully connected layer of width 256. The adaptive average pooling layer adjusts the kernel size and stride of the pooling operation according to the input, so input data of any dimensions is reduced to a fixed-size output.
Step 4, train the network model:
1) Pre-training: pre-train the model on the pre-training set. Training data is fed in batches into the constructed separable three-dimensional residual network, the labeled classes serve as supervision, and the network parameters are trained with gradient descent until convergence, yielding the pre-trained model. In each training iteration, 10-20 samples are drawn at random without replacement from the training set as one batch; the batch is fed to the network, features are extracted and predictions computed, the cross entropy between the predictions and the ground truth is the loss function, the partial derivatives with respect to the network weights are computed, and the parameters are updated by gradient descent. One pass over the entire training set is one epoch; training runs for 60 epochs, with the learning rate set to 0.01 for the first 50 epochs and decayed to 0.001 for the last 10. The momentum term is set to 0.9 throughout.
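The pre-training schedule can be written as a small step function; epochs are numbered from 0 here, an implementation choice the patent leaves open:

```python
def pretrain_lr(epoch):
    """Learning-rate schedule for pre-training (Step 4, part 1):
    60 epochs in total, 0.01 for the first 50 and 0.001 for the
    last 10. The momentum term stays at 0.9 throughout."""
    return 0.01 if epoch < 50 else 0.001

MOMENTUM = 0.9

schedule = [pretrain_lr(e) for e in range(60)]
print(schedule[49], schedule[50])  # 0.01 0.001
```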
2) Model migration: retain the feature-extraction part of the pre-trained model and reinitialize the classification part according to the number of classes in the target dataset, obtaining the migrated model.
3) Fine-tuning: fine-tune the migrated model on the training set drawn from the target dataset. Fine-tuning likewise runs for 60 epochs; for the first 50 epochs the feature-extraction learning rate is set to 0.001 and the classifier learning rate to 0.01, and for the last 10 epochs they decay to 0.0001 and 0.001 respectively. The momentum term is set to 0.9 throughout.
Step 5, generate the classification result: using the final model, predict the class of every pixel in the target hyperspectral dataset, obtaining the classification map.
In step 2, S is set to 27.
The specific structure of the separable three-dimensional residual module in step 3 is as follows. From input to output, the right-hand trunk consists, in order, of a point-wise convolution layer, two convolution layers with kernel sizes 1 × 3 × 3 and 3 × 1 × 1 respectively, and another point-wise convolution layer. The first point-wise convolution layer and the 1 × 3 × 3 and 3 × 1 × 1 convolution layers are each followed by a batch normalization layer and a ReLU activation; the second point-wise convolution layer is followed only by a batch normalization layer. The trunk and the left-hand branch are merged by element-wise addition and passed through a ReLU activation to produce the module's output. The left-hand branch consists of an average pooling layer with window size 3 and stride 2, followed by a point-wise convolution layer.
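The parameter saving of this factorization can be checked by simple arithmetic. As a sketch, assume equal input and output widths (64 is an illustrative value; the module widths in the patent vary from 32 to 256) and compare a dense 3 × 3 × 3 convolution with the factorized trunk above:

```python
def conv3d_weights(c_in, c_out, kernel):
    """Weight count of a 3D convolution layer (biases ignored)."""
    kd, kh, kw = kernel
    return c_in * c_out * kd * kh * kw

c = 64  # illustrative channel width, not a value fixed by the patent

# A plain dense 3 x 3 x 3 convolution at this width:
dense = conv3d_weights(c, c, (3, 3, 3))

# The factorized trunk: point-wise, 1x3x3, 3x1x1, point-wise.
trunk = (conv3d_weights(c, c, (1, 1, 1))
         + conv3d_weights(c, c, (1, 3, 3))
         + conv3d_weights(c, c, (3, 1, 1))
         + conv3d_weights(c, c, (1, 1, 1)))

print(dense, trunk)  # 110592 57344
```

Even with two extra point-wise layers, the factorized trunk needs roughly half the weights of the dense kernel at the same width, which is why the network can be made deeper without inflating the parameter count.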
Beneficial effect
Targeting the characteristics of hyperspectral data, the present invention constructs a separable three-dimensional residual network and devises a transfer strategy between heterogeneous hyperspectral datasets. Combining the constructed network with the transfer strategy realizes autonomous extraction of deep hyperspectral features and high-accuracy classification under small-sample conditions. Compared with existing deep-learning-based hyperspectral image classification methods, the network model is deeper, the accuracy is higher, and the parameter count is smaller.
Brief description of the drawings
Fig. 1: flow chart of the small-sample hyperspectral image classification method based on the separable three-dimensional residual network and transfer learning.
Fig. 2: the separable three-dimensional residual network module.
Fig. 3: the separable three-dimensional residual network structure.
Specific embodiment
The invention will now be further described in conjunction with the embodiments and the drawings:
The method of the present invention comprises three parts: pre-training on a dataset acquired by the same sensor (herein called homogeneous) or by a different sensor (herein called heterologous), fine-tuning on the target dataset, and classifying the target dataset. Two distinct hyperspectral datasets are involved: 1) the dataset to be processed, called the target dataset for short; and 2) another dataset, homogeneous or heterologous with respect to the target, called the pre-training dataset for short. The method pre-trains the model on the pre-training dataset, then migrates the model to the target hyperspectral dataset, fine-tunes it with a minimal number of samples drawn from the target dataset, and finally classifies the whole target hyperspectral dataset with the fine-tuned network.
The concrete measures of this technical solution are as follows:
Step 1: data preprocessing. Apply min-max normalization to the target dataset and the pre-training dataset.
Step 2: data division. For the pre-training dataset, extract all labeled samples as the pre-training set. For the target dataset, extract 10-20 samples per class as the training set, with the remainder as the test set.
Step 3: build the network model. The network constructed by the present invention comprises two parts in sequence: 1) the feature-extraction part, composed of four separable three-dimensional residual modules of different widths; and 2) the classification part, composed of a three-dimensional adaptive average pooling layer and a fully connected layer.
Step 4: train the network model.
1) Pre-training. Train the constructed network on the pre-training dataset until convergence, obtaining the pre-trained model.
2) Model migration. Retain the feature-extraction part of the pre-trained model and reinitialize the classification part according to the number of classes in the target dataset, obtaining the migrated model.
3) Fine-tuning. Fine-tune the migrated model on the target dataset, obtaining the final model.
In both pre-training and fine-tuning, the labeled classes serve as supervision, and the network parameters are trained with gradient descent until convergence.
Step 5: generate the classification result. Using the final model, predict the class of every pixel in the target hyperspectral dataset, obtaining the classification map.
Specific embodiment
Step 1: data preprocessing. The hyperspectral data to be processed is min-max normalized; the normalization formula is:

x' = (x - min(x)) / (max(x) - min(x))
Step 2: data division. For the pre-training dataset, extract all labeled samples as the pre-training set. For the target dataset, extract 10-20 samples per class as the training set, with the remainder as the test set. Sample extraction proceeds as follows: for a three-dimensional hyperspectral data cube of size M × N × L, where M and N are the image height and width and L is the number of bands, the S × S × L block centered on the pixel to be processed is taken as that pixel's sample, where S is the neighborhood size, generally 27.
Step 3: build the network model. The network designed by the present invention contains two parts in sequence:
1) Feature extraction. The input data first passes, in order, through an asymmetric three-dimensional convolution layer, a normalization layer, an activation function, and a pooling layer. The asymmetric three-dimensional convolution layer uses structurally asymmetric 3D kernels whose spectral-dimension size exceeds their spatial-dimension size, so that this module emphasizes spectral information when processing the data; for example, the layer may use kernels with spectral extent 8 and spatial extent 3 × 3, with the layer width set to 32. In this module, normalization is batch normalization, the activation function is ReLU, and the pooling layer performs 3D max pooling. After the pooling layer, the data passes through four separable three-dimensional residual modules of widths 32, 64, 128, and 256, which further extract deep features; the specific structure is shown in Fig. 3. The structure of the separable three-dimensional residual module is shown in Fig. 2: from input to output, the right-hand trunk consists, in order, of a point-wise convolution layer, two convolution layers with kernel sizes 1 × 3 × 3 and 3 × 1 × 1 respectively, and another point-wise convolution layer. The first point-wise convolution layer and the 1 × 3 × 3 and 3 × 1 × 1 convolution layers are each followed by a batch normalization layer and a ReLU activation; the second point-wise convolution layer is followed only by a batch normalization layer. The trunk and the left-hand branch are merged by element-wise addition and passed through a ReLU activation to produce the module's output. The left-hand branch consists of an average pooling layer with window size 3 and stride 2, followed by a point-wise convolution layer.
2) Classification. This part consists of a three-dimensional adaptive average pooling layer and a fully connected layer of width 256. The adaptive average pooling layer adjusts the kernel size and stride of the pooling operation according to the input, so input of any dimensions is reduced to a fixed-size output. Consequently, when processing hyperspectral data with different spectral dimensions, the width of the fully connected layer needs no adjustment for the data. The overall network structure is shown in Fig. 3.
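A one-axis numpy sketch shows how adaptive average pooling yields a fixed-length output regardless of input length. The floor/ceil window-boundary rule below follows a common convention (e.g. PyTorch's AdaptiveAvgPool); the patent only states that the kernel size and stride adapt to the input:

```python
import numpy as np

def adaptive_avg_pool_1d(x, out_len):
    """Adaptive average pooling along one axis: window boundaries are
    derived from the input length so the output always has out_len
    entries, mirroring the 3D adaptive pooling layer described above."""
    in_len = len(x)
    out = np.empty(out_len)
    for i in range(out_len):
        start = (i * in_len) // out_len
        end = -(-((i + 1) * in_len) // out_len)  # ceiling division
        out[i] = x[start:end].mean()
    return out

# Spectral axes of different lengths both collapse to 4 values, so the
# following fully connected layer never needs resizing.
for L in (103, 224):
    y = adaptive_avg_pool_1d(np.ones(L), 4)
    print(y.shape)  # (4,)
```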
Step 4: train the network model.
1) Pre-training. Pre-train the model on the pre-training set: training data is fed in batches into the constructed separable three-dimensional residual network, the labeled classes serve as supervision, and the network parameters are trained with gradient descent until convergence. In each iteration, 10-20 samples are drawn at random without replacement from the training set as one batch; the batch is fed to the network, features are extracted and predictions computed, the cross entropy between the predictions and the ground truth is the loss function, the partial derivatives with respect to the network weights are computed, and the parameters are updated by gradient descent. One pass over the entire training set is one epoch. Training runs for 60 epochs, with the learning rate set to 0.01 for the first 50 epochs and decayed to 0.001 for the last 10; the momentum term is set to 0.9 throughout. The model whose parameters have been adjusted by this pre-training stage is herein called the pre-trained model.
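The parameter update used in pre-training can be sketched as gradient descent with a momentum term of 0.9. The classic momentum formulation is assumed here; the patent does not specify which variant its implementation uses:

```python
import numpy as np

def sgd_momentum_step(w, grad, velocity, lr=0.01, momentum=0.9):
    """One gradient-descent update with momentum: the velocity
    accumulates a decayed history of past gradients, and the weights
    move along it."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

w = np.zeros(3)
v = np.zeros(3)
grad = np.array([1.0, -2.0, 0.5])  # stand-in for dLoss/dw on one batch
w, v = sgd_momentum_step(w, grad, v)
print(w)  # first step equals -lr * grad: [-0.01  0.02 -0.005]
```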
2) Model migration. Retain the feature-extraction part of the pre-trained model and reinitialize the classification part according to the number of classes in the target dataset, obtaining the migrated model. Because the classification part contains a three-dimensional adaptive average pooling layer, its output is fixed-size for three-dimensional hyperspectral input of any dimensions; reinitializing the classification part therefore only requires adjusting the node count of the final fully connected layer so that it matches the target dataset's class count. The model obtained by this migration is herein called the migrated model.
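The migration step can be sketched with a dict of weight arrays standing in for a real network object; the key names, widths, and class counts below are illustrative assumptions, not values fixed by the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

def migrate(pretrained, n_target_classes, feat_dim=256):
    """Build the migrated model (Step 4, part 2): feature-extraction
    weights are copied unchanged from the pre-trained model, and only
    the final fully connected layer is re-created with one output node
    per target class."""
    migrated = {k: v for k, v in pretrained.items()
                if k.startswith("features.")}
    migrated["fc.weight"] = rng.normal(0.0, 0.01, (n_target_classes, feat_dim))
    migrated["fc.bias"] = np.zeros(n_target_classes)
    return migrated

pretrained = {
    "features.conv1": np.ones((32, 1, 8, 3, 3)),  # asymmetric first conv
    "fc.weight": np.ones((9, 256)),               # 9 pre-training classes
    "fc.bias": np.ones(9),
}
model = migrate(pretrained, n_target_classes=16)  # e.g. 16 target classes
print(model["fc.weight"].shape)  # (16, 256)
```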
3) Fine-tuning. Fine-tune the migrated model using the training set drawn from the target dataset. Fine-tuning likewise runs for 60 epochs: for the first 50 epochs the feature-extraction learning rate is set to 0.001 and the classifier learning rate to 0.01; for the last 10 epochs they decay to 0.0001 and 0.001 respectively. The momentum term is set to 0.9 throughout.
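The per-part fine-tuning schedule can be expressed as a small lookup; epochs are numbered from 0 here, an implementation choice the patent leaves open:

```python
def finetune_lrs(epoch):
    """Per-part learning rates during fine-tuning (Step 4, part 3):
    60 epochs, with the feature extractor at 0.001 and the classifier
    at 0.01 for the first 50 epochs, both decayed tenfold for the
    last 10."""
    if epoch < 50:
        return {"features": 0.001, "classifier": 0.01}
    return {"features": 0.0001, "classifier": 0.001}

print(finetune_lrs(0), finetune_lrs(55))
```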
After the three steps of pre-training, model migration, and fine-tuning, the final model is obtained; the detailed process is shown in Fig. 1.
Step 5: generate the classification result. Using the final model, predict the class of every pixel in the target hyperspectral dataset, obtaining the classification map.
Claims (3)
1. A hyperspectral image classification method based on a separable three-dimensional residual network and transfer learning, characterized by the following steps:
Step 1: data preprocessing: apply min-max normalization to the hyperspectral image data to be processed; the min-max normalization formula is:

x' = (x - min(x)) / (max(x) - min(x))
Step 2: data division: for the pre-training dataset, extract all labeled samples as the pre-training set; for the target dataset, extract 10-20 samples per class as the training set, with the remainder as the test set; sample extraction proceeds as follows: for a three-dimensional hyperspectral data cube of size M × N × L, where M and N are the image height and width and L is the number of bands, the S × S × L data block centered on the pixel to be processed is taken as that center pixel's sample, where S is the neighborhood size;
Step 3: build the separable three-dimensional residual network model, consisting of a feature-extraction part and a classification part:
1) feature extraction: the input data first passes, in order, through an asymmetric three-dimensional convolution layer, a batch normalization layer, a ReLU activation, and a pooling layer, then through four separable three-dimensional residual modules of widths 32, 64, 128, and 256, which further extract deep features; the asymmetric three-dimensional convolution layer uses structurally asymmetric 3D kernels whose spectral-dimension size exceeds their spatial-dimension size; the pooling layer uses 3D max pooling;
2) classification: this part consists of a three-dimensional adaptive average pooling layer and a fully connected layer of width 256; the adaptive average pooling layer adjusts the kernel size and stride of the pooling operation according to the input, so input data of any dimensions is reduced to a fixed-size output;
Step 4: train the network model:
1) pre-training: pre-train the model on the pre-training set; training data is fed in batches into the constructed separable three-dimensional residual network, the labeled classes serve as supervision, and the network parameters are trained with gradient descent until convergence, yielding the pre-trained model; in each iteration, 10-20 samples are drawn at random without replacement from the training set as one batch, the batch is fed to the network, features are extracted and predictions computed, the cross entropy between the predictions and the ground truth is the loss function, the partial derivatives with respect to the network weights are computed, and the parameters are updated by gradient descent; one pass over the entire training set is one epoch; training runs for 60 epochs, with the learning rate set to 0.01 for the first 50 epochs and decayed to 0.001 for the last 10; the momentum term is set to 0.9 throughout;
2) model migration: retain the feature-extraction part of the pre-trained model and reinitialize the classification part according to the number of classes in the target dataset, obtaining the migrated model;
3) fine-tuning: fine-tune the migrated model on the training set drawn from the target dataset; fine-tuning likewise runs for 60 epochs, with the feature-extraction learning rate set to 0.001 and the classifier learning rate set to 0.01 for the first 50 epochs, decaying to 0.0001 and 0.001 respectively for the last 10; the momentum term is set to 0.9 throughout;
Step 5: generate the classification result: using the final model, predict the class of every pixel in the target hyperspectral dataset, obtaining the classification map.
2. The hyperspectral image classification method based on a separable three-dimensional residual network and transfer learning according to claim 1, characterized in that S in step 2 is set to 27.
3. The hyperspectral image classification method based on a separable three-dimensional residual network and transfer learning according to claim 1, characterized in that the separable three-dimensional residual module in step 3 has the following specific structure: from input to output, the right-hand trunk consists, in order, of a point-wise convolution layer, two convolution layers with kernel sizes 1 × 3 × 3 and 3 × 1 × 1 respectively, and another point-wise convolution layer; the first point-wise convolution layer and the 1 × 3 × 3 and 3 × 1 × 1 convolution layers are each followed by a batch normalization layer and a ReLU activation; the second point-wise convolution layer is followed only by a batch normalization layer; the trunk and the left-hand branch are merged by element-wise addition and passed through a ReLU activation to produce the module's output; the left-hand branch consists of an average pooling layer with window size 3 and stride 2, followed by a point-wise convolution layer.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910018924.7A CN109754017B (en) | 2019-01-09 | 2019-01-09 | Hyperspectral image classification method based on separable three-dimensional residual error network and transfer learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910018924.7A CN109754017B (en) | 2019-01-09 | 2019-01-09 | Hyperspectral image classification method based on separable three-dimensional residual error network and transfer learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109754017A true CN109754017A (en) | 2019-05-14 |
CN109754017B CN109754017B (en) | 2022-05-10 |
Family
ID=66405416
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910018924.7A Active CN109754017B (en) | 2019-01-09 | 2019-01-09 | Hyperspectral image classification method based on separable three-dimensional residual error network and transfer learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109754017B (en) |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110210356A (en) * | 2019-05-24 | 2019-09-06 | 厦门美柚信息科技有限公司 | Picture discrimination method, apparatus and system |
CN110222773A (en) * | 2019-06-10 | 2019-09-10 | 西北工业大学 | Hyperspectral image small-sample classification method based on asymmetric decomposition convolutional network |
CN110334743A (en) * | 2019-06-10 | 2019-10-15 | 浙江大学 | Progressive transfer learning method based on convolutional long short-term memory network |
CN110348399A (en) * | 2019-07-15 | 2019-10-18 | 中国人民解放军国防科技大学 | Hyperspectral intelligent classification method based on prototype learning mechanism and multidimensional residual network |
CN110555446A (en) * | 2019-08-19 | 2019-12-10 | 北京工业大学 | Remote sensing image scene classification method based on multi-scale depth feature fusion and transfer learning |
CN110852227A (en) * | 2019-11-04 | 2020-02-28 | 中国科学院遥感与数字地球研究所 | Hyperspectral image deep learning classification method, device, equipment and storage medium |
CN110866552A (en) * | 2019-11-06 | 2020-03-06 | 西北工业大学 | Hyperspectral image classification method based on full convolution space propagation network |
CN111091550A (en) * | 2019-12-12 | 2020-05-01 | 创新奇智(北京)科技有限公司 | Multi-size self-adaptive PCB solder paste area detection system and detection method |
CN111259954A (en) * | 2020-01-15 | 2020-06-09 | 北京工业大学 | Hyperspectral traditional Chinese medicine tongue coating and tongue quality classification method based on D-Resnet |
CN111667019A (en) * | 2020-06-23 | 2020-09-15 | 哈尔滨工业大学 | Hyperspectral image classification method based on deformable separation convolution |
CN112317957A (en) * | 2020-10-09 | 2021-02-05 | 五邑大学 | Laser welding method, laser welding apparatus, and storage medium therefor |
CN112580581A (en) * | 2020-12-28 | 2021-03-30 | 英特灵达信息技术(深圳)有限公司 | Target detection method and device and electronic equipment |
CN112597865A (en) * | 2020-12-16 | 2021-04-02 | 燕山大学 | Intelligent identification method for edge defects of hot-rolled strip steel |
CN112766392A (en) * | 2021-01-26 | 2021-05-07 | 杭州师范大学 | Image classification method of deep learning network based on parallel asymmetric hole convolution |
CN114359544A (en) * | 2021-12-27 | 2022-04-15 | 江苏大学 | T-SAE-based Vis-NIR spectral deep transfer learning method for crop plant lead concentration |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106845381A (en) * | 2017-01-16 | 2017-06-13 | 西北工业大学 | Spatial-spectral joint hyperspectral image classification method based on dual-channel convolutional neural networks |
CN108388917A (en) * | 2018-02-26 | 2018-08-10 | 东北大学 | Hyperspectral image classification method based on an improved deep learning model |
CN108596248A (en) * | 2018-04-23 | 2018-09-28 | 上海海洋大学 | Remote sensing image classification model based on an improved deep convolutional neural network |
CN108830262A (en) * | 2018-07-25 | 2018-11-16 | 上海电力学院 | Multi-angle human face expression recognition method under natural conditions |
Non-Patent Citations (5)
Title |
---|
DONG-QING ZHANG: "clcNet: Improving the Efficiency of Convolutional Neural Network Using Channel Local Convolutions", 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition * |
FRANÇOIS CHOLLET: "Xception: Deep Learning with Depthwise Separable Convolutions", 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) * |
ZILONG ZHONG et al.: "Spectral-Spatial Residual Network for Hyperspectral Image Classification: A 3-D Deep Learning Framework", IEEE Transactions on Geoscience and Remote Sensing * |
ZHANG HAOKUI et al.: "Research status and prospects of deep learning in the field of hyperspectral image classification", Acta Automatica Sinica * |
CHEN XIAODONG et al.: "Sausage quality determination based on hyperspectral intelligent detection and support vector machine classification", China Food Additives * |
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110210356A (en) * | 2019-05-24 | 2019-09-06 | 厦门美柚信息科技有限公司 | Image discrimination method, apparatus and system |
CN110222773A (en) * | 2019-06-10 | 2019-09-10 | 西北工业大学 | Hyperspectral image small-sample classification method based on asymmetric decomposition convolutional network |
CN110334743A (en) * | 2019-06-10 | 2019-10-15 | 浙江大学 | Progressive transfer learning method based on convolutional long short-term memory network |
CN110222773B (en) * | 2019-06-10 | 2023-03-24 | 西北工业大学 | Hyperspectral image small sample classification method based on asymmetric decomposition convolution network |
CN110334743B (en) * | 2019-06-10 | 2021-05-04 | 浙江大学 | Gradual migration learning method based on convolution long-time and short-time memory network |
CN110348399B (en) * | 2019-07-15 | 2020-09-29 | 中国人民解放军国防科技大学 | Hyperspectral intelligent classification method based on prototype learning mechanism and multidimensional residual error network |
CN110348399A (en) * | 2019-07-15 | 2019-10-18 | 中国人民解放军国防科技大学 | Hyperspectral intelligent classification method based on prototype learning mechanism and multidimensional residual network |
CN110555446A (en) * | 2019-08-19 | 2019-12-10 | 北京工业大学 | Remote sensing image scene classification method based on multi-scale depth feature fusion and transfer learning |
CN110555446B (en) * | 2019-08-19 | 2023-06-02 | 北京工业大学 | Remote sensing image scene classification method based on multi-scale depth feature fusion and migration learning |
CN110852227A (en) * | 2019-11-04 | 2020-02-28 | 中国科学院遥感与数字地球研究所 | Hyperspectral image deep learning classification method, device, equipment and storage medium |
CN110866552B (en) * | 2019-11-06 | 2023-04-14 | 西北工业大学 | Hyperspectral image classification method based on full convolution space propagation network |
CN110866552A (en) * | 2019-11-06 | 2020-03-06 | 西北工业大学 | Hyperspectral image classification method based on full convolution space propagation network |
CN111091550A (en) * | 2019-12-12 | 2020-05-01 | 创新奇智(北京)科技有限公司 | Multi-size self-adaptive PCB solder paste area detection system and detection method |
CN111259954A (en) * | 2020-01-15 | 2020-06-09 | 北京工业大学 | Hyperspectral traditional Chinese medicine tongue coating and tongue quality classification method based on D-Resnet |
CN111667019A (en) * | 2020-06-23 | 2020-09-15 | 哈尔滨工业大学 | Hyperspectral image classification method based on deformable separation convolution |
CN112317957A (en) * | 2020-10-09 | 2021-02-05 | 五邑大学 | Laser welding method, laser welding apparatus, and storage medium therefor |
CN112597865A (en) * | 2020-12-16 | 2021-04-02 | 燕山大学 | Intelligent identification method for edge defects of hot-rolled strip steel |
CN112580581A (en) * | 2020-12-28 | 2021-03-30 | 英特灵达信息技术(深圳)有限公司 | Target detection method and device and electronic equipment |
CN112766392A (en) * | 2021-01-26 | 2021-05-07 | 杭州师范大学 | Image classification method of deep learning network based on parallel asymmetric hole convolution |
CN112766392B (en) * | 2021-01-26 | 2023-10-24 | 杭州师范大学 | Image classification method of deep learning network based on parallel asymmetric hole convolution |
CN114359544A (en) * | 2021-12-27 | 2022-04-15 | 江苏大学 | Vis-NIR spectrum deep migration learning method based on T-SAE crop plant lead concentration |
CN114359544B (en) * | 2021-12-27 | 2024-04-12 | 江苏大学 | Vis-NIR spectrum deep migration learning method based on T-SAE crop plant lead concentration |
Also Published As
Publication number | Publication date |
---|---|
CN109754017B (en) | 2022-05-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109754017A (en) | Hyperspectral image classification method based on separable three-dimensional residual network and transfer learning | |
CN105320965B (en) | Spatial-spectral joint hyperspectral image classification method based on deep convolutional neural networks | |
CN107909101B (en) | Semi-supervised transfer learning character recognition method and system based on convolutional neural networks | |
CN109784347A (en) | Image classification method based on multi-scale dense convolutional neural networks and a spectral attention mechanism | |
CN110378381A (en) | Object detection method, device and computer storage medium | |
CN110533084A (en) | Multi-scale target detection method based on a self-attention mechanism | |
CN106022355B (en) | Spatial-spectral joint classification method for hyperspectral images based on 3D CNN | |
CN105243670B (en) | Accurate video foreground object extraction method based on sparse and low-rank joint representation | |
CN109753996A (en) | Hyperspectral image classification method based on a D-bit lightweight quantized deep network | |
CN110033440A (en) | Biological cell counting method based on convolutional neural networks and feature fusion | |
CN103886342B (en) | Hyperspectral image classification method based on spectral and neighborhood information dictionary learning | |
CN106845418A (en) | Hyperspectral image classification method based on deep learning | |
CN109190643A (en) | Chinese herbal medicine recognition method based on convolutional neural networks, and electronic device | |
CN106910188A (en) | Detection method for airport runways in remote sensing images based on deep learning | |
CN109978041A (en) | Hyperspectral image classification method based on alternately updated convolutional neural networks | |
CN107463954B (en) | Template matching recognition method for blurred images across different spectra | |
CN110222773A (en) | Hyperspectral image small-sample classification method based on asymmetric decomposition convolutional network | |
CN110852369B (en) | Hyperspectral image classification method combining 3D/2D convolutional network and adaptive spectral unmixing | |
CN110009015A (en) | Hyperspectral small-sample classification method based on lightweight network and semi-supervised clustering | |
CN104751111B (en) | Method and system for recognizing human behavior in video | |
CN106855996B (en) | Gray-scale image coloring method and device based on convolutional neural network | |
CN116402671B (en) | Sample coding image processing method for automatic coding system | |
CN111127490A (en) | Medical image segmentation method based on cyclic residual U-Net network | |
CN105550712B (en) | Aurora image classification method based on an optimized convolutional auto-encoding network | |
CN110096976A (en) | Micro-Doppler human behavior classification method based on sparse transfer network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||