CN110472668B - Image classification method - Google Patents


Info

Publication number
CN110472668B
CN110472668B (application CN201910659388.9A)
Authority
CN
China
Prior art keywords
characteristic
channel
layer
recalibration
feature
Prior art date
Legal status
Active
Application number
CN201910659388.9A
Other languages
Chinese (zh)
Other versions
CN110472668A (en)
Inventor
张珂
郭玉荣
王新胜
苏昱坤
Current Assignee
North China Electric Power University
Original Assignee
North China Electric Power University
Priority date
Filing date
Publication date
Application filed by North China Electric Power University filed Critical North China Electric Power University
Priority to CN201910659388.9A priority Critical patent/CN110472668B/en
Publication of CN110472668A publication Critical patent/CN110472668A/en
Application granted granted Critical
Publication of CN110472668B publication Critical patent/CN110472668B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features

Abstract

The invention discloses an image classification method, specifically an image classification method based on an end-to-end dual-channel feature recalibration densely connected convolutional neural network.

Description

Image classification method
Technical Field
The invention belongs to the field of image processing, and particularly relates to an image classification method.
Background
At present, image classification is widely applied in many fields, such as video surveillance analysis, medical image recognition, and face recognition. Traditional image classification extracts features with manually designed methods and generalizes poorly. In recent years, deep learning has been successfully applied to speech recognition, natural language processing, and especially computer vision, where Deep Convolutional Neural Networks (DCNNs) have become the main research method.
Among these, the multi-stage feature recalibration densely connected convolutional neural network (MFR-DenseNet) was developed on the basis of deep convolutional neural networks. The network realizes channel feature recalibration and inter-layer feature recalibration through model ensembling and obtains good image classification results. First, a channel feature recalibration densely connected convolutional neural network (CFR-DenseNet) is constructed and trained; second, an inter-layer feature recalibration densely connected convolutional neural network (ILFR-DenseNet) is constructed, the excitation-layer parameters of the Squeeze-and-Excitation Modules (SEM) in the CFR-DenseNet are loaded into the corresponding layers of the ILFR-DenseNet and fixed, and the ILFR-DenseNet is trained; finally, in the test phase, the fully connected layer outputs of the CFR-DenseNet and ILFR-DenseNet are averaged to obtain the final prediction. MFR-DenseNet thus uses two independent network models to realize channel feature recalibration and inter-layer feature recalibration and fuse them, and requires three stages to complete: end-to-end training cannot be realized, the training procedure is cumbersome, and training and testing are time-consuming, which limits its application.
Disclosure of Invention
Based on the above technical problems of MFR-DenseNet, the present application provides an image classification method, specifically an image classification method based on an end-to-end dual-channel feature recalibration densely connected convolutional neural network. Here, end-to-end means that during training the network produces a predicted result directly from the input end (input data) to the output end; the error between the predicted result and the ground truth is back-propagated through each layer of the model, and the representation of each layer is adjusted according to this error until the model converges or the desired effect is reached. By constructing a single network model, the method completes channel feature recalibration and inter-layer feature recalibration and combines them, and the training process of the model is end-to-end.
A method of image classification, the method comprising:
establishing a dual-channel feature recalibration densely connected convolutional neural network (DFR-DenseNet) on the basis of a basic densely connected convolutional neural network framework, wherein the output feature map of each convolutional layer in the DFR-DenseNet passes through two channels that respectively complete channel feature recalibration and inter-layer feature recalibration, yielding two feature maps with the same number of channels, which are then merged;
classifying an image classification data set with the dual-channel feature recalibration densely connected convolutional neural network;
to ensure that the number of channels of each convolutional layer's output feature map after recalibration equals the number of channels before recalibration, applying a 1 × 1 convolution to the merged feature map to reduce the channel dimension, thereby completing the channel feature recalibration and inter-layer feature recalibration of the convolutional layer;
the two channels comprise a first channel and a second channel; in the first channel, the importance of each channel feature is learned automatically through training, useful features are enhanced and features useless for the current task are suppressed, modeling the channel feature correlation of a single convolutional layer's output feature map; in the second channel, the importance of each layer's features is learned automatically through training, realizing feature recalibration along the feature-layer dimension;
wherein the channel feature recalibration completed by the first channel is specifically as follows: the output feature map of each 3 × 3 convolutional layer first undergoes a "squeeze" operation, which compresses the feature map along the spatial dimensions so that the two-dimensional feature map of each channel becomes one real number; the compression of the k-th feature map X_{g,k} of the g-th layer is expressed by equation (2). The "excitation" operation consists of two fully connected (FC) layers and generates a weight for each channel feature; the excitation process is given by equation (3), where X″_{g,k} is the weight of the k-th feature map of the g-th layer, δ denotes the ReLU function, and σ denotes the Sigmoid function. Finally, the re-weighting operation multiplies each channel feature by its output weight, as shown in equation (4), realizing feature recalibration along the channel dimension.
X′_{g,k} = F_sq(X_{g,k}) = (1 / (W × H)) Σ_{i=1}^{W} Σ_{j=1}^{H} X_{g,k}(i, j)    (2)
Where W represents the width of the feature map and H represents the height of the feature map.
(X″_{g,1}, X″_{g,2}, …, X″_{g,C})
= F_ex(X′_{g,1}, X′_{g,2}, …, X′_{g,C})
= σ(g(z, W)) = σ(W_2 δ(W_1 z))    (3)
where z = (X′_{g,1}, X′_{g,2}, …, X′_{g,C}) is the vector of squeeze values.
Here W_1 represents the parameters of the first fully connected layer and W_2 the parameters of the second fully connected layer.
X_{g,k} = F_Re(·) = X_{g,k} · X″_{g,k}    (4)
The second channel completes inter-layer feature recalibration. First, a squeeze-excitation operation is applied to each layer's output feature map, with the same procedure as in channel feature recalibration, generating the squeeze values (X′_{g,1}, X′_{g,2}, …, X′_{g,C}) and weights (X″_{g,1}, X″_{g,2}, …, X″_{g,C}) of each layer's output channels. Then a second squeeze operation takes the weighted average of the channel squeeze values and the excitation weights, compressing each layer's features into one real number, as shown in equation (5), where X′_g represents the compression value of the g-th layer and characterizes the global distribution of that layer's feature map. Next, an excitation operation on the layer compression values yields the weight of each layer's features, as expressed by equation (6). Finally, each layer's features are weighted as shown in equation (7), realizing feature recalibration along the feature-layer dimension.
X′_g = F′_sq(·) = (1 / C) Σ_{k=1}^{C} X″_{g,k} · X′_{g,k}    (5)
Where C represents the number of channels per convolutional layer feature map.
(X″_1, X″_2, …, X″_{N−1})
= F′_ex(X′_1, X′_2, …, X′_{N−1}) = δ(W_3 (X′_1, X′_2, …, X′_{N−1}))    (6)
where W_3 represents the parameters of the fully connected layer.
X_g = F′_Re(·) = X_g · X″_g    (7)
The two channels respectively complete channel feature recalibration and inter-layer feature recalibration, yielding two feature maps with the same number of channels, which are then merged. To ensure that the number of channels of each convolutional layer's output feature map after recalibration equals the number before recalibration, a 1 × 1 convolution is applied to the merged feature map to reduce the channel dimension. As shown in equation (8), the feature map input to the N-th layer is:
[H[X_{1,k}, X_1], H[X_{2,k}, X_2], …, H[X_{N−1,k}, X_{N−1}]]    (8)
where H(·) represents the composite function: 1 × 1 convolution followed by the ReLU function. Merging and dimension reduction of the two kinds of feature maps preserve the effects of channel recalibration and inter-layer recalibration on the features while avoiding interference between the two recalibrations. Because the output feature map of each convolutional layer in the network passes through two channels that respectively complete channel feature recalibration and inter-layer feature recalibration, the network is named the Dual-channel Feature Recalibration DenseNet (DFR-DenseNet).
Compared with DenseNet, the present method uses a single network model and models the interdependencies among channel features and among inter-layer features, so classification accuracy is improved while the parameter count and computation of the network remain essentially unchanged. Compared with MFR-DenseNet, the method uses a single network model, so multiple models need not be accessed multiple times, direct end-to-end training is possible, the training procedure is simple, and the time consumed by training and testing is greatly reduced.
Drawings
FIG. 1 is a flow chart of a method of an embodiment of the present application;
FIG. 2 is a schematic diagram of the underlying DenseNet model;
FIG. 3 is a schematic diagram of the Dense Block structure of the basic DenseNet;
FIG. 4 is a schematic diagram of a Dense Block of a dual channel feature recalibration Dense connection convolutional neural network.
Detailed Description
In the prior art, MFR-DenseNet uses two independent network models to realize channel feature recalibration and inter-layer feature recalibration and combine them, requiring three stages to complete, so end-to-end training cannot be realized. The training process of MFR-DenseNet is divided into multiple stages; multiple models must be accessed multiple times, the procedure is cumbersome, the parameter count and computation are large, and training takes a long time. In the test stage, images pass through two independent network models to obtain the final prediction; compared with a single model, the parameter count of multiple models is doubled and testing is slow, which places high demands on the storage space and computing performance of devices in practical applications and limits deployment.
To address these problems of MFR-DenseNet, the present application provides an image classification method, specifically an image classification method based on an end-to-end dual-channel feature recalibration densely connected convolutional neural network. That is, the present application establishes one network model that realizes channel feature recalibration and inter-layer feature recalibration and combines them, and the training process of this model is end-to-end: it does not need to be completed stage by stage. Therefore, compared with DenseNet, a single network model is used and the interdependencies among channel features and among inter-layer features are modeled, so classification accuracy is improved while the parameter count and computation of the network remain essentially unchanged. Compared with MFR-DenseNet, a single network model is used, so multiple models need not be accessed multiple times, direct end-to-end training is possible, the training procedure is simple, and the time consumed by training and testing is greatly reduced.
The application provides an image classification method, which specifically comprises the following steps:
Step 101: establish a dual-channel feature recalibration densely connected convolutional neural network (DFR-DenseNet) on the basis of a basic densely connected convolutional neural network framework, wherein the output feature map of each convolutional layer in the DFR-DenseNet passes through two channels that respectively complete channel feature recalibration and inter-layer feature recalibration, yielding two feature maps with the same number of channels, which are then merged.
To ensure that the number of channels of each convolutional layer's output feature map after recalibration equals the number of channels before recalibration, a 1 × 1 convolution is applied to the merged feature map to reduce the channel dimension, completing the channel feature recalibration and inter-layer feature recalibration of the convolutional layer.
The two channels comprise a first channel and a second channel. In the first channel, the importance of each channel feature is learned automatically through training, useful features are enhanced and features useless for the current task are suppressed, modeling the channel feature correlation of a single convolutional layer's output feature map. In the second channel, the importance of each layer's features is learned automatically through training, realizing feature recalibration along the feature-layer dimension. The useless features include, for example, background features of the image.
Specifically, the method selects basic 40-layer and 64-layer densely connected convolutional neural networks as base models; both networks consist of 3 groups of Dense Blocks with the same structure. FIG. 2 is a schematic diagram of the basic DenseNet model. As shown in FIG. 2, the two kinds of Dense Blocks contain 12 and 20 3 × 3 convolutional layers respectively, and each convolutional layer outputs 12 feature maps. To ensure maximum inter-layer information flow in the convolutional neural network, the DenseNet directly connects all layers together. In each Dense Block, each layer takes the concatenated output feature maps of all preceding convolutional layers as input and passes its own output feature map to all subsequent layers; FIG. 3 is a schematic structural diagram of a Dense Block of the basic DenseNet. Thus the l-th layer receives the feature maps x_0, x_1, …, x_{l−1} of all its preceding layers as input, namely:
x_l = H_l([x_0, x_1, …, x_{l−1}])    (1)
where H_l(·) is a composite function of three consecutive operations BN-ReLU-Conv: Batch Normalization (BN), the ReLU function, and Convolution. For all layers in this model (except the pooling layers), H_l(·) adopts the BN-ReLU-Conv architecture, as shown in FIG. 1.
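As a toy sketch of the dense connectivity of equation (1): the layer function, growth rate, and array shapes below are illustrative assumptions for this sketch, not the patented configuration.

```python
import numpy as np

def make_layer(growth=12):
    """Toy stand-in for H_l (BN-ReLU-Conv): maps any (C, H, W) input
    to a (growth, H, W) output. Illustrative only."""
    def H_l(x):
        pooled = x.mean(axis=0, keepdims=True)                 # (1, H, W)
        return np.tile(np.maximum(pooled, 0.0), (growth, 1, 1))
    return H_l

def dense_block(x0, layers):
    """Dense connectivity of eq. (1): layer l takes the concatenation
    of all preceding outputs [x_0, x_1, ..., x_{l-1}] as input."""
    features = [x0]
    for H_l in layers:
        features.append(H_l(np.concatenate(features, axis=0)))
    return np.concatenate(features, axis=0)

x0 = np.random.default_rng(0).standard_normal((24, 8, 8))
out = dense_block(x0, [make_layer(12) for _ in range(12)])
print(out.shape)  # (168, 8, 8): 24 input channels + 12 layers x growth 12
```

The output channel count grows linearly with depth (24 + 12 × 12 = 168 here), which is why DenseNet-style blocks keep the growth rate small.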
FIG. 4 is a schematic diagram of Dense Block of the dual channel feature recalibration Dense connection convolutional neural network, showing a structural diagram of the feature map input to the Nth layer.
The first channel completes the channel feature recalibration of the convolutional layer. The output feature map of each 3 × 3 convolutional layer first undergoes a "squeeze" operation, which compresses the feature map along the spatial dimensions so that the two-dimensional feature map of each channel becomes one real number; the compression of the k-th feature map X_{g,k} of the g-th layer is expressed by equation (2). The "excitation" operation consists of two fully connected (FC) layers and generates a weight for each channel feature; the excitation process is given by equation (3), where X″_{g,k} is the weight of the k-th feature map of the g-th layer, δ denotes the ReLU function, and σ denotes the Sigmoid function. Finally, the re-weighting operation multiplies each channel feature by its output weight, as shown in equation (4), realizing feature recalibration along the channel dimension.
X′_{g,k} = F_sq(X_{g,k}) = (1 / (W × H)) Σ_{i=1}^{W} Σ_{j=1}^{H} X_{g,k}(i, j)    (2)
Where W represents the width of the feature map and H represents the height of the feature map.
(X″_{g,1}, X″_{g,2}, …, X″_{g,C})
= F_ex(X′_{g,1}, X′_{g,2}, …, X′_{g,C})
= σ(g(z, W)) = σ(W_2 δ(W_1 z))    (3)
where z = (X′_{g,1}, X′_{g,2}, …, X′_{g,C}) is the vector of squeeze values.
Here W_1 represents the parameters of the first fully connected layer and W_2 the parameters of the second fully connected layer.
X_{g,k} = F_Re(·) = X_{g,k} · X″_{g,k}    (4)
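The squeeze, excitation, and re-weighting steps of equations (2) to (4) for a single layer can be sketched with NumPy; the reduction ratio and weight shapes here are illustrative assumptions, not values from the patent.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_recalibrate(feat, w1, w2):
    """Channel feature recalibration (first channel) for one conv layer.

    feat: (C, H, W) output feature map X_g.
    w1:   (C // r, C) weights of the first FC layer (illustrative shape).
    w2:   (C, C // r) weights of the second FC layer.
    """
    # Squeeze (eq. 2): global average over spatial dims, one real per channel
    z = feat.mean(axis=(1, 2))                        # (C,)
    # Excitation (eq. 3): FC -> ReLU -> FC -> Sigmoid yields channel weights
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))         # (C,), each in (0, 1)
    # Re-weighting (eq. 4): scale each channel by its weight
    return feat * s[:, None, None]

rng = np.random.default_rng(1)
feat = rng.standard_normal((12, 8, 8))
w1, w2 = rng.standard_normal((3, 12)), rng.standard_normal((12, 3))
out = channel_recalibrate(feat, w1, w2)
print(out.shape)  # (12, 8, 8): shape is preserved, channels are re-scaled
```

Because the Sigmoid keeps each weight in (0, 1), recalibration can only attenuate channels, never amplify them; the network learns which channels to keep near 1.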
The second channel completes inter-layer feature recalibration. First, a squeeze-excitation operation is applied to each layer's output feature map, with the same procedure as in channel feature recalibration, generating the squeeze values (X′_{g,1}, X′_{g,2}, …, X′_{g,C}) and weights (X″_{g,1}, X″_{g,2}, …, X″_{g,C}) of each layer's output channels. Then a second squeeze operation takes the weighted average of the channel squeeze values and the excitation weights, compressing each layer's features into one real number, as shown in equation (5), where X′_g represents the compression value of the g-th layer and characterizes the global distribution of that layer's feature map. Next, an excitation operation on the layer compression values yields the weight of each layer's features, as expressed by equation (6). Finally, each layer's features are weighted as shown in equation (7), realizing feature recalibration along the feature-layer dimension.
X′_g = F′_sq(·) = (1 / C) Σ_{k=1}^{C} X″_{g,k} · X′_{g,k}    (5)
where C represents the number of channels of each convolutional layer's feature map; for example, C may be 12.
(X″_1, X″_2, …, X″_{N−1})
= F′_ex(X′_1, X′_2, …, X′_{N−1}) = δ(W_3 (X′_1, X′_2, …, X′_{N−1}))    (6)
where W_3 represents the parameters of the fully connected layer.
X_g = F′_Re(·) = X_g · X″_g    (7)
The two channels respectively complete channel feature recalibration and inter-layer feature recalibration, yielding two feature maps with the same number of channels, which are then merged. To ensure that the number of channels of each convolutional layer's output feature map after recalibration equals the number before recalibration, a 1 × 1 convolution is applied to the merged feature map to reduce the channel dimension. As shown in equation (8), the feature map input to the N-th layer is:
[H[X_{1,k}, X_1], H[X_{2,k}, X_2], …, H[X_{N−1,k}, X_{N−1}]]    (8)
where H(·) represents the composite function: 1 × 1 convolution followed by the ReLU function. Merging and dimension reduction of the two kinds of feature maps preserve the effects of channel recalibration and inter-layer recalibration on the features while avoiding interference between the two recalibrations. Because the output feature map of each convolutional layer in the network passes through two channels that respectively complete channel feature recalibration and inter-layer feature recalibration, the network is named the Dual-channel Feature Recalibration DenseNet (DFR-DenseNet).
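The merge-and-reduce step of equation (8) amounts to channel concatenation followed by a per-pixel linear map and ReLU; a minimal sketch with an assumed weight shape for the 1 × 1 convolution:

```python
import numpy as np

def merge_and_reduce(chan_feat, layer_feat, w):
    """Merge the two recalibrated maps and reduce channels back to C.

    chan_feat, layer_feat: (C, H, W) outputs of the two channels.
    w: (C, 2C) weights of the 1x1 convolution (illustrative shape).
    """
    merged = np.concatenate([chan_feat, layer_feat], axis=0)   # (2C, H, W)
    # A 1x1 convolution is a linear map over channels at every pixel
    out = np.einsum('oc,chw->ohw', w, merged)                  # (C, H, W)
    return np.maximum(out, 0.0)                                # ReLU

rng = np.random.default_rng(3)
a, b = rng.standard_normal((12, 8, 8)), rng.standard_normal((12, 8, 8))
out = merge_and_reduce(a, b, rng.standard_normal((12, 24)))
print(out.shape)  # (12, 8, 8): channel count restored after merging
```

Restoring the channel count is what lets DFR-DenseNet keep DenseNet's growth rate and parameter budget essentially unchanged.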
Step 102: classify the image classification data set CIFAR-10/100 with the dual-channel feature recalibration densely connected convolutional neural network.
Therefore, the image classification method provided by the application has the following advantages:
1. On the basis of the basic densely connected convolutional neural network, an end-to-end dual-channel feature recalibration densely connected convolutional neural network is established. The network retains the advantages of the original densely connected network: it effectively alleviates the vanishing-gradient problem, strengthens feature propagation, and supports feature reuse.
2. Addressing DenseNet's failure to fully consider channel feature correlation and inter-layer feature correlation, an end-to-end dual-channel feature recalibration densely connected convolutional neural network is established. The network simultaneously realizes channel feature recalibration and inter-layer feature recalibration of DenseNet, models the interdependencies among channel features and among inter-layer features, and automatically learns the importance of each channel feature and each layer's features through training, so that useful features are enhanced, features useless for the current task are suppressed, and the learning capability of DenseNet is improved.
3. With the dual-channel feature recalibration densely connected convolutional neural network, channel feature recalibration and inter-layer feature recalibration can be realized and combined with only one model, without completing the work in stages, and the training process of the model is end-to-end. Moreover, compared with DenseNet, a single model improves classification accuracy while keeping the parameter count and computation of the network essentially unchanged; compared with MFR-DenseNet, multiple models need not be accessed multiple times, training can be completed in one pass, the training procedure is simple, and the time consumed by training and testing is greatly reduced.
To illustrate the advantages of the proposed end-to-end dual-channel feature recalibration densely connected convolutional neural network, experiments were performed on the image classification data set CIFAR-10/100. The results are shown in Table 1. The experiments show that, compared with a basic densely connected convolutional neural network with the same number of layers, the proposed network reduces the classification error rate on CIFAR-10/100, indicating that the end-to-end dual-channel feature recalibration densely connected convolutional neural network has better learning capability than the basic densely connected convolutional neural network.
TABLE 1 results of the experiments on CIFAR-10/100 for the different models (%)
[Table 1 is provided as an image in the original publication.]
In the description, each part is described in a progressive manner, each part is emphasized to be different from other parts, and the same and similar parts among the parts are referred to each other.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (1)

1. A method of image classification, the method comprising:
establishing a dual-channel feature recalibration densely connected convolutional neural network (DFR-DenseNet) on the basis of a basic densely connected convolutional neural network framework, wherein the output feature map of each convolutional layer in the DFR-DenseNet passes through two channels that respectively complete channel feature recalibration and inter-layer feature recalibration, yielding two feature maps with the same number of channels, which are then merged;
classifying an image classification data set with the dual-channel feature recalibration densely connected convolutional neural network;
to ensure that the number of channels of each convolutional layer's output feature map after recalibration equals the number of channels before recalibration, applying a 1 × 1 convolution to the merged feature map to reduce the channel dimension, thereby completing the channel feature recalibration and inter-layer feature recalibration of the convolutional layer;
the two channels comprise a first channel and a second channel; in the first channel, the importance of each channel feature is learned automatically through training, useful features are enhanced and features useless for the current task are suppressed, modeling the channel feature correlation of a single convolutional layer's output feature map; in the second channel, the importance of each layer's features is learned automatically through training, realizing feature recalibration along the feature-layer dimension;
wherein the channel feature recalibration of the convolutional layer completed by the first channel is specifically as follows: the output feature map of each 3 × 3 convolutional layer first undergoes a "squeeze" operation, which compresses the feature map along the spatial dimensions so that the two-dimensional feature map of each channel becomes one real number; the compression of the k-th feature map X_{g,k} of the g-th layer is expressed by equation (2); the "excitation" operation consists of two fully connected (FC) layers and generates a weight for each channel feature; the excitation process is given by equation (3), where X″_{g,k} is the weight of the k-th feature map of the g-th layer, δ denotes the ReLU function, and σ denotes the Sigmoid function; finally, the re-weighting operation multiplies each channel feature by its output weight, as shown in equation (4), realizing feature recalibration along the channel dimension;
X′_{g,k} = F_sq(X_{g,k}) = (1 / (W × H)) Σ_{i=1}^{W} Σ_{j=1}^{H} X_{g,k}(i, j)    (2)
where W represents the width of the feature map and H represents the height of the feature map;
(X″_{g,1}, X″_{g,2}, …, X″_{g,C})
= F_ex(X′_{g,1}, X′_{g,2}, …, X′_{g,C})
= σ(W_2 δ(W_1 (X′_{g,1}, X′_{g,2}, …, X′_{g,C})))    (3)
where W_1 represents the parameters of the first fully connected layer and W_2 the parameters of the second fully connected layer;
X_{g,k} = F_Re(·) = X_{g,k} · X″_{g,k}    (4)
the second channel completes inter-layer feature recalibration; first, a squeeze-excitation operation is applied to each layer's output feature map, with the same procedure as in channel feature recalibration, generating the squeeze values (X′_{g,1}, X′_{g,2}, …, X′_{g,C}) and weights (X″_{g,1}, X″_{g,2}, …, X″_{g,C}) of each layer's output channels; then a second squeeze operation takes the weighted average of the channel squeeze values and the excitation weights, compressing each layer's features into one real number, as shown in equation (5), where X′_g represents the compression value of the g-th layer and characterizes the global distribution of that layer's feature map; next, an excitation operation on the layer compression values yields the weight of each layer's features, as expressed by equation (6); finally, each layer's features are weighted as shown in equation (7), realizing feature recalibration along the feature-layer dimension;
X′_g = F′_sq(·) = (1 / C) Σ_{k=1}^{C} X″_{g,k} · X′_{g,k}    (5)
where C represents the number of channels of each convolutional layer's feature map;
(X″_1, X″_2, …, X″_{N−1}) = F′_ex(X′_1, X′_2, …, X′_{N−1}) = δ(W_3 (X′_1, X′_2, …, X′_{N−1}))    (6)
where W_3 represents the parameters of the fully connected layer;
X_g = F′_Re(·) = X_g · X″_g    (7)
the two channels respectively complete channel feature recalibration and inter-layer feature recalibration, yielding two feature maps with the same number of channels, which are then merged; to ensure that the number of channels of each convolutional layer's output feature map after recalibration equals the number before recalibration, a 1 × 1 convolution is applied to the merged feature map to reduce the channel dimension; as shown in equation (8), the feature map input to the N-th layer is:
[H[X1, X̃1], H[X2, X̃2], …, H[XN-1, X̃N-1]] (8)
where H (-) represents the complex function: 1 × 1 convolution, ReLU function; by merging and dimensionality reduction of the two types of feature graphs, the influence of channel relocation and interlayer relocation on the features is kept, and the mutual influence between the two kinds of relocation is avoided; because the output characteristic diagram of each convolution layer in the network respectively completes channel characteristic recalibration and interlayer characteristic recalibration through two channels, the network is named as a Dual Feature reweigh DenseNet (DFR-DenseNet).
CN201910659388.9A 2019-07-22 2019-07-22 Image classification method Active CN110472668B (en)


Publications (2)

Publication Number Publication Date
CN110472668A CN110472668A (en) 2019-11-19
CN110472668B true CN110472668B (en) 2021-02-19



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant