CN116824245A - Image classification method based on SN-HiFuse network - Google Patents

Image classification method based on SN-HiFuse network

Info

Publication number
CN116824245A
Authority
CN
China
Prior art keywords
block
hifuse
feature
network
feature fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310746779.0A
Other languages
Chinese (zh)
Inventor
陈昱莅
张欣欣
陆铖
白佳洋
陈国萍
马苗
裴炤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shaanxi Normal University
Original Assignee
Shaanxi Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shaanxi Normal University filed Critical Shaanxi Normal University
Priority to CN202310746779.0A priority Critical patent/CN116824245A/en
Publication of CN116824245A publication Critical patent/CN116824245A/en
Pending legal-status Critical Current


Abstract

An image classification method based on an SN-HiFuse network comprises the steps of preprocessing a data set, constructing the SN-HiFuse network, training the SN-HiFuse network, saving the model, validating the SN-HiFuse network and testing the SN-HiFuse network. The invention constructs the SN-HiFuse network from the HiFuse network, the SimAM attention mechanism module and the channel attention branch of the NAM attention mechanism module, so that the SN-HiFuse network can make full use of the effective information in images to classify them accurately. Compared with existing image classification methods, the method has the advantages of high accuracy, fast classification and strong robustness, and can be used to classify images automatically by deep learning.

Description

Image classification method based on SN-HiFuse network
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to image classification.
Background
Image classification is a core task in computer vision: the aim is to assign images to different categories according to the features reflected in the image information while minimizing classification error. Deep learning has shown strong performance in image classification, extracting features through continuous training and using them to classify images. Deep learning therefore has broad research value and significance in the field of image classification.
Many deep learning methods have been applied to image classification, such as the Swin Transformer network and the HiFuse network. For images whose categories differ greatly, the Swin Transformer network classifies well; but when the differences between categories are small, even hard to distinguish with the naked eye, the HiFuse network, which fuses a Transformer with a convolutional neural network, classifies with low accuracy and needs improvement. On such low-difference images the Swin Transformer network classifies better than the HiFuse network, yet it still does not meet practical requirements.
In the technical field of image classification, an urgent technical problem to be solved at present is to provide a method that classifies images accurately.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide an image classification method based on an SN-HiFuse network with high classification accuracy and high classification speed.
The technical scheme adopted for solving the technical problems is composed of the following steps:
(1) Dataset preprocessing
Take 4999 pictures containing cell nuclei: 702 in the label A data set, 2951 in the label B data set and 1336 in the label C data set; each picture is 2000×2000 pixels.
1) The image data set pixel values are normalized to [-1, 1], and each picture is reshaped to 224×224 pixels.
2) The data set is randomly divided into a training set, a validation set and a test set in the ratio 6:2:2.
(2) Construction of SN-HiFuse network
The SN-HiFuse network is formed by connecting a local feature branch, a global feature branch and a feature fusion branch in parallel, and the output end of the feature fusion branch is connected with a classifier.
The local feature branch is formed by sequentially connecting a local feature block 1, a local feature block 2, a local feature block 3 and a local feature block 4 in series; the global feature branch is formed by sequentially connecting a global feature block 1, a global feature block 2, a global feature block 3 and a global feature block 4 in series; the feature fusion branch is formed by sequentially connecting a feature fusion block 1, a feature fusion block 2, a feature fusion block 3 and a feature fusion block 4 in series; the outputs of the local feature block 1 and the global feature block 1 are connected with the input of the feature fusion block 1, the outputs of the local feature block 2 and the global feature block 2 are connected with the input of the feature fusion block 2, the outputs of the local feature block 3 and the global feature block 3 are connected with the input of the feature fusion block 3, and the outputs of the local feature block 4 and the global feature block 4 are connected with the input of the feature fusion block 4.
The local feature block 1 is formed by serially connecting a convolution module and a SimAM attention mechanism module, and the structures of the local feature block 2, the local feature block 3 and the local feature block 4 are the same as those of the local feature block 1.
The feature fusion block 1 is formed by connecting the channel attention branch of a NAM attention mechanism module and the spatial attention branch of a CBAM attention mechanism module in parallel, and then connecting the two branches in series with a convolution module; the structures of the feature fusion block 2, the feature fusion block 3 and the feature fusion block 4 are the same as that of the feature fusion block 1.
(3) Training SN-HiFuse network
1) Determining an objective function
The objective function includes a Loss function Loss and an evaluation function AUC, and the Loss function Loss is determined according to the following formula:

Loss = −(1/m) Σ_{i=1}^{m} [ y_i·log l_θ(x_i) + (1 − y_i)·log(1 − l_θ(x_i)) ]

where m is the total number of training samples, x_i is the ith sample, y_i is the label corresponding to the ith sample, l_θ(x_i) is the output of the SN-HiFuse network, and m and i are finite positive integers.
The evaluation function AUC is determined as follows:
where P_{i+} is the probability of predicting the ith sample as a positive sample, P_{i−} is the probability of predicting the ith sample as a negative sample, i is a finite positive integer, P ∈ (0, 1], M is the number of positive samples, N is the number of negative samples, and M and N are finite positive integers.
2) Training SN-HiFuse network
Input the training set into the SN-HiFuse network for training, where the learning rate γ of the SN-HiFuse network satisfies γ ∈ [10⁻⁵, 10⁻³]; the optimizer adopts the Adam optimizer, and training iterates until the loss function of the SN-HiFuse network converges.
(4) Saving the model
In the process of training the SN-HiFuse network, the weights are continuously updated, and the corresponding parameters and weight files are saved.
(5) Validating SN-HiFuse networks
The verification set is input into the SN-HiFuse network for verification.
(6) Testing SN-HiFuse networks
The test set is input into the SN-HiFuse network for testing, and the saved parameters and weight files are loaded to obtain the image classification results.
In the step (2) of constructing the SN-HiFuse network, the convolution module of the local feature block 1 is formed by sequentially connecting in series a depthwise convolution layer with a 3×3 convolution kernel, a normalization layer, a convolution layer with a 1×1 convolution kernel and a GELU activation function layer; the structures of the convolution modules of the local feature block 2, the local feature block 3 and the local feature block 4 are the same as the structure of the convolution module of the local feature block 1.
In the step (2) of constructing the SN-HiFuse network, the SimAM attention mechanism module of the local feature block 1 is constructed from an energy function e(x_t, x_k), and the energy function is determined as follows:

e(x_t, x_k) = (y_1 − x̂_t)² + (1/(K − 1)) Σ_{k=1}^{K−1} (y_0 − x̂_k)²

where x_t and x_k are the input features of the target neuron and of the other neurons in the same channel, y_1 is the label of the positive sample, y_0 is the label of the negative sample, x̂_t = w·x_t + b is the linear transform of x_t, x̂_k = w·x_k + b is the linear transform of x_k, t is the index over the target spatial dimension, k is the index over the other spatial dimensions, K is the number of neurons on the current channel, K is a finite positive integer, and w and b are respectively the weight and bias of the linear transform.
The structure of the SimAM attention mechanism modules of the local feature block 2, the local feature block 3 and the local feature block 4 is the same as that of the SimAM attention mechanism module of the local feature block 1.
In the step (2) of constructing the SN-HiFuse network, the channel attention branch of the NAM attention mechanism module of the feature fusion block 1 is formed by connecting a batch normalization layer and a sigmoid activation function layer in series; the structures of the channel attention branches of the feature fusion block 2, the feature fusion block 3 and the feature fusion block 4 are the same as those of the channel attention branches of the feature fusion block 1.
The spatial attention branch of the CBAM attention mechanism module of the feature fusion block 1 is formed by sequentially connecting a maximum pooling layer and an average pooling layer in series with a convolution layer with a convolution kernel of 7 multiplied by 7 and a sigmoid activation function layer; the structures of the spatial attention branches of the feature fusion block 2, the feature fusion block 3 and the feature fusion block 4 are the same as those of the spatial attention branch of the feature fusion block 1.
In the step (2) of constructing the SN-HiFuse network, the convolution module of the feature fusion block 1 is formed by sequentially connecting a convolution layer with a convolution kernel of 1 multiplied by 1, an average pooling layer, a normalization layer, a convolution layer with a convolution kernel of 1 multiplied by 1 and a GELU activation function layer in series; the convolution modules of the feature fusion block 2, the feature fusion block 3 and the feature fusion block 4 have the same structure as the convolution module of the feature fusion block 1.
Because the invention adopts the HiFuse network, the SimAM attention mechanism module and the channel attention branch of the NAM attention mechanism module to construct the SN-HiFuse network, the SN-HiFuse network makes full use of the effective information in images to classify them accurately. Compared with existing image classification methods, the invention has the advantages of high classification accuracy and fast classification, and can be used to classify images automatically by deep learning.
Drawings
Fig. 1 is a flow chart of embodiment 1 of the present invention.
Fig. 2 is a schematic diagram of the structure of an SN-HiFuse network.
Fig. 3 is a schematic diagram of the structure of the local feature branch in fig. 2.
Fig. 4 is a schematic structural diagram of the feature fusion branch in fig. 2.
Detailed Description
The invention will be further described with reference to the accompanying drawings and examples, but the invention is not limited to the following examples.
Example 1
Fig. 1 shows the flowchart of this embodiment 1. As shown in fig. 1, the image classification method based on the SN-HiFuse network of this embodiment consists of the following steps:
(1) Dataset preprocessing
Take 4999 pictures containing cell nuclei: 702 in the label A data set, 2951 in the label B data set and 1336 in the label C data set; each picture is 2000×2000 pixels.
1) The image data set pixel values are normalized to [-1, 1], and each picture is reshaped to 224×224 pixels.
2) The data set is randomly divided into a training set, a validation set and a test set in the ratio 6:2:2.
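As a concrete illustration of this preprocessing step, a PyTorch sketch is given below; the ImageFolder directory layout with sub-folders A/, B/ and C/, the fixed random seed and the mean/std values used to map pixel values into [-1, 1] are illustrative assumptions, not prescriptions of the patent.

import torch
from torch.utils.data import random_split
from torchvision import transforms
from torchvision.datasets import ImageFolder

# Normalize pixel values to [-1, 1] and reshape each picture to 224x224 pixels.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),                      # [0, 255] -> [0.0, 1.0]
    transforms.Normalize(mean=[0.5, 0.5, 0.5],  # (x - 0.5) / 0.5 -> [-1.0, 1.0]
                         std=[0.5, 0.5, 0.5]),
])

# Hypothetical folder holding the nuclei pictures in sub-folders A/, B/, C/.
dataset = ImageFolder("nuclei_dataset", transform=preprocess)

# Random 6:2:2 split into training, validation and test sets.
n = len(dataset)
n_train, n_val = int(0.6 * n), int(0.2 * n)
train_set, val_set, test_set = random_split(
    dataset, [n_train, n_val, n - n_train - n_val],
    generator=torch.Generator().manual_seed(0))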
(2) Construction of SN-HiFuse network
In fig. 2, the SN-HiFuse network of this embodiment is formed by connecting a local feature branch, a global feature branch and a feature fusion branch in parallel, where the output end of the feature fusion branch is connected to a classifier.
The local feature branch of the embodiment is formed by sequentially connecting a local feature block 1, a local feature block 2, a local feature block 3 and a local feature block 4 in series; the global feature branch is formed by sequentially connecting a global feature block 1, a global feature block 2, a global feature block 3 and a global feature block 4 in series; the feature fusion branch is formed by sequentially connecting a feature fusion block 1, a feature fusion block 2, a feature fusion block 3 and a feature fusion block 4 in series; the outputs of the local feature block 1 and the global feature block 1 are connected with the input of the feature fusion block 1, the outputs of the local feature block 2 and the global feature block 2 are connected with the input of the feature fusion block 2, the outputs of the local feature block 3 and the global feature block 3 are connected with the input of the feature fusion block 3, and the outputs of the local feature block 4 and the global feature block 4 are connected with the input of the feature fusion block 4.
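A minimal structural sketch of this wiring is given below, assuming hypothetical local_blocks, global_blocks and fusion_blocks modules whose internals follow later in this embodiment; passing the previous fusion output into the next fusion block, and a classifier that includes its own pooling and flattening, are assumed details.

import torch.nn as nn

class SNHiFuse(nn.Module):
    # Skeleton of the three parallel branches described above.
    def __init__(self, local_blocks, global_blocks, fusion_blocks, classifier):
        super().__init__()
        self.local_blocks = nn.ModuleList(local_blocks)    # local feature blocks 1-4
        self.global_blocks = nn.ModuleList(global_blocks)  # global feature blocks 1-4
        self.fusion_blocks = nn.ModuleList(fusion_blocks)  # feature fusion blocks 1-4
        self.classifier = classifier  # assumed to pool/flatten before its linear layer

    def forward(self, x):
        l, g, f = x, x, None
        for local_b, global_b, fusion_b in zip(
                self.local_blocks, self.global_blocks, self.fusion_blocks):
            l = local_b(l)          # stage output of the local branch
            g = global_b(g)         # stage output of the global branch
            f = fusion_b(l, g, f)   # fusion block takes both stage outputs plus the
                                    # previous fusion output (None at stage 1)
        return self.classifier(f)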
In fig. 3, the local feature block 1 of the present embodiment is formed by a convolution module and a SimAM attention mechanism module in series, and the structures of the local feature block 2, the local feature block 3, and the local feature block 4 are the same as those of the local feature block 1.
The convolution module of the local feature block 1 in this embodiment is formed by sequentially connecting in series a depthwise convolution layer with a 3×3 convolution kernel, a normalization layer, a convolution layer with a 1×1 convolution kernel and a GELU activation function layer. The convolution modules of the local feature blocks 2, 3 and 4 have the same structure as the convolution module of the local feature block 1.
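A minimal sketch of this convolution module, assuming a channel width dim and that the unnamed normalization layer is BatchNorm (the patent says only "normalization layer"):

import torch.nn as nn

def local_conv_module(dim: int) -> nn.Sequential:
    # Depthwise 3x3 conv -> normalization -> pointwise 1x1 conv -> GELU.
    return nn.Sequential(
        nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim),  # depthwise 3x3
        nn.BatchNorm2d(dim),  # normalization layer (type assumed)
        nn.Conv2d(dim, dim, kernel_size=1),                         # pointwise 1x1
        nn.GELU(),
    )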
The SimAM attention mechanism module of the local feature block 1 of this embodiment is constructed from an energy function e(x_t, x_k), and the energy function is determined as follows:

e(x_t, x_k) = (y_1 − x̂_t)² + (1/(K − 1)) Σ_{k=1}^{K−1} (y_0 − x̂_k)²

where x_t and x_k are the input features of the target neuron and of the other neurons in the same channel, y_1 is the label of the positive sample, y_0 is the label of the negative sample, x̂_t = w·x_t + b is the linear transform of x_t, x̂_k = w·x_k + b is the linear transform of x_k, t is the index over the target spatial dimension, k is the index over the other spatial dimensions, K is the number of neurons on the current channel, K is a finite positive integer, and w and b are respectively the weight and bias of the linear transform.
The structure of the SimAM attention mechanism modules of the local feature block 2, the local feature block 3 and the local feature block 4 is the same as that of the SimAM attention mechanism module of the local feature block 1.
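Since the SimAM energy function admits a closed-form minimum (Yang et al., ICML 2021), the module is usually implemented without any explicit per-neuron optimization; the sketch below follows that published closed form, with the regularization constant e_lambda as an assumed default rather than a value taken from the patent.

import torch
import torch.nn as nn

class SimAM(nn.Module):
    # Parameter-free SimAM attention using the closed-form energy minimum.
    def __init__(self, e_lambda: float = 1e-4):
        super().__init__()
        self.e_lambda = e_lambda

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        n = h * w - 1  # the K - 1 "other" neurons on each channel
        # Squared deviation of every neuron from its channel mean.
        d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)
        # Channel variance estimated over the other neurons.
        v = d.sum(dim=(2, 3), keepdim=True) / n
        # Inverse of the minimal energy: lower energy = more distinctive neuron.
        e_inv = d / (4 * (v + self.e_lambda)) + 0.5
        return x * torch.sigmoid(e_inv)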
In fig. 4, the feature fusion block 1 of the present embodiment is formed by connecting a channel attention branch of the NAM attention mechanism module and a spatial attention branch of the CBAM attention mechanism module in parallel, and then connecting the two branches in series with a convolution module, where the structures of the feature fusion block 2, the feature fusion block 3, and the feature fusion block 4 are the same as those of the feature fusion block 1.
The channel attention branch of the NAM attention mechanism module of the feature fusion block 1 of the embodiment is formed by series connection of a batch normalization layer and a sigmoid activation function layer; the structures of the channel attention branches of the feature fusion block 2, the feature fusion block 3 and the feature fusion block 4 are the same as those of the channel attention branches of the feature fusion block 1.
The spatial attention branch of the CBAM attention mechanism module of the feature fusion block 1 is formed by sequentially connecting a maximum pooling layer and an average pooling layer in series with a convolution layer with a 7×7 convolution kernel and a sigmoid activation function layer; the structures of the spatial attention branches of the feature fusion block 2, the feature fusion block 3 and the feature fusion block 4 are the same as that of the spatial attention branch of the feature fusion block 1.
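Both attention branches can be sketched following the published NAM (Liu et al., 2021) and CBAM (Woo et al., 2018) designs; reusing the normalized BatchNorm scale factors as channel weights is the NAM formulation and is assumed to carry over unchanged here.

import torch
import torch.nn as nn

class NAMChannelAttention(nn.Module):
    # NAM channel attention branch: batch normalization + sigmoid, with the
    # BN scale factors reused as per-channel importance weights.
    def __init__(self, channels: int):
        super().__init__()
        self.bn = nn.BatchNorm2d(channels, affine=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = x
        x = self.bn(x)
        weight = self.bn.weight.abs() / self.bn.weight.abs().sum()
        x = x * weight.view(1, -1, 1, 1)
        return torch.sigmoid(x) * residual

class CBAMSpatialAttention(nn.Module):
    # CBAM spatial attention branch: channel-wise max and average pooling,
    # a 7x7 convolution over the two pooled maps, then a sigmoid gate.
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        max_map, _ = x.max(dim=1, keepdim=True)   # maximum pooling over channels
        avg_map = x.mean(dim=1, keepdim=True)     # average pooling over channels
        attn = torch.sigmoid(self.conv(torch.cat([max_map, avg_map], dim=1)))
        return x * attn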
The convolution module of the feature fusion block 1 of the embodiment is composed of a convolution layer with a convolution kernel of 1×1, an average pooling layer, a normalization layer, a convolution layer with a convolution kernel of 1×1, and a GELU activation function layer which are sequentially connected in series; the convolution modules of the feature fusion block 2, the feature fusion block 3 and the feature fusion block 4 have the same structure as the convolution module of the feature fusion block 1.
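Assembling the pieces above gives the following sketch of a feature fusion block; merging the local, global and previous-fusion inputs by summation, merging the two parallel attention branches by summation, and the 3x3 stride-1 pooling window of the conv module are all assumptions made for illustration.

import torch.nn as nn

def fusion_conv_module(dim: int) -> nn.Sequential:
    # 1x1 conv -> average pooling -> normalization -> 1x1 conv -> GELU.
    return nn.Sequential(
        nn.Conv2d(dim, dim, kernel_size=1),
        nn.AvgPool2d(kernel_size=3, stride=1, padding=1),  # window size assumed
        nn.BatchNorm2d(dim),  # normalization layer (type assumed)
        nn.Conv2d(dim, dim, kernel_size=1),
        nn.GELU(),
    )

class FusionBlock(nn.Module):
    # Parallel NAM channel branch and CBAM spatial branch (see the previous
    # sketch), followed in series by the convolution module above.
    def __init__(self, dim: int):
        super().__init__()
        self.channel_branch = NAMChannelAttention(dim)
        self.spatial_branch = CBAMSpatialAttention()
        self.conv = fusion_conv_module(dim)

    def forward(self, local_feat, global_feat, prev=None):
        x = local_feat + global_feat        # merge branch inputs (assumed additive)
        if prev is not None:
            x = x + prev                    # series link from the previous fusion block
        x = self.channel_branch(x) + self.spatial_branch(x)  # parallel branches
        return self.conv(x)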
The invention adopts the HiFuse network, the SimAM attention mechanism module and the channel attention branch of the NAM attention mechanism module to construct the SN-HiFuse network, which can make full use of the effective information in images to classify them accurately.
(3) Training SN-HiFuse network
1) Determining an objective function
The objective function includes a Loss function Loss and an evaluation function AUC, and the Loss function Loss is determined according to the following formula:

Loss = −(1/m) Σ_{i=1}^{m} [ y_i·log l_θ(x_i) + (1 − y_i)·log(1 − l_θ(x_i)) ]

where m is the total number of training samples, x_i is the ith sample, y_i is the label corresponding to the ith sample, l_θ(x_i) is the output of the SN-HiFuse network, and m and i are finite positive integers.
The evaluation function AUC is determined as follows:
where P_{i+} is the probability of predicting the ith sample as a positive sample, P_{i−} is the probability of predicting the ith sample as a negative sample, i is a finite positive integer, and P ∈ (0, 1] (in this embodiment, P takes the value 0.5); M is the number of positive samples, N is the number of negative samples, and M and N are finite positive integers.
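Because the image of the AUC formula is not reproduced in this text, the sketch below computes the textbook pairwise AUC from the positive-sample and negative-sample prediction probabilities; how the parameter P enters the patented formula cannot be recovered here, so it is left out.

import torch

def pairwise_auc(pos_scores: torch.Tensor, neg_scores: torch.Tensor) -> float:
    # Textbook pairwise AUC over M positive and N negative samples: the
    # fraction of (positive, negative) pairs in which the positive sample
    # receives the higher score, counting ties as one half.
    diff = pos_scores.unsqueeze(1) - neg_scores.unsqueeze(0)  # M x N pair matrix
    wins = (diff > 0).float().sum()
    ties = (diff == 0).float().sum()
    return ((wins + 0.5 * ties) / diff.numel()).item()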
2) Training SN-HiFuse network
Input the training set into the SN-HiFuse network for training, where the learning rate γ of the SN-HiFuse network satisfies γ ∈ [10⁻⁵, 10⁻³]; in this embodiment γ takes the value 10⁻⁴. The optimizer adopts the Adam optimizer, and training iterates until the loss function of the SN-HiFuse network converges.
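A minimal training loop consistent with this step — Adam at the embodiment's learning rate of 10⁻⁴, iterating until the loss stops decreasing — can be sketched as below; the batch size, the epoch cap and the use of CrossEntropyLoss over the three labels A/B/C are assumptions.

import torch
from torch.utils.data import DataLoader

def train(model, train_set, epochs=100, lr=1e-4, device="cuda"):
    model = model.to(device)
    loader = DataLoader(train_set, batch_size=32, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)  # Adam, gamma = 1e-4
    criterion = torch.nn.CrossEntropyLoss()  # loss choice assumed for 3 classes
    for epoch in range(epochs):
        total = 0.0
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
            total += loss.item()
        print(f"epoch {epoch}: mean loss {total / len(loader):.4f}")
    return model, optimizer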
(4) Saving the model
In the process of training the SN-HiFuse network, the weights are continuously updated, and the corresponding parameters and weight files are saved.
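For instance, this step can be realized with a checkpoint dump after (or periodically during) the training loop sketched above; the file name and the choice to store the optimizer state alongside the weights are illustrative assumptions.

import torch

# Continues the training sketch above: model and optimizer come from train().
torch.save({"model_state": model.state_dict(),
            "optimizer_state": optimizer.state_dict()},
           "sn_hifuse_checkpoint.pt")  # hypothetical file name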
(5) Validating SN-HiFuse networks
The verification set is input into the SN-HiFuse network for verification.
(6) Testing SN-HiFuse networks
The test set is input into the SN-HiFuse network for testing, and the saved parameters and weight files are loaded to obtain the image classification results.
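An inference sketch for this step, loading the checkpoint saved above (the file name matches the earlier assumption):

import torch
from torch.utils.data import DataLoader

@torch.no_grad()
def test(model, test_set, checkpoint_path="sn_hifuse_checkpoint.pt", device="cuda"):
    # Load the saved parameters and weight file, then classify the test set.
    state = torch.load(checkpoint_path, map_location=device)
    model.load_state_dict(state["model_state"])
    model.to(device).eval()
    predictions = []
    for x, _ in DataLoader(test_set, batch_size=32):
        logits = model(x.to(device))
        predictions.append(logits.argmax(dim=1).cpu())  # predicted class per image
    return torch.cat(predictions)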
This completes the image classification method based on the SN-HiFuse network.
Example 2
The image classification method based on the SN-HiFuse network in the embodiment comprises the following steps:
(1) Dataset preprocessing
This step is the same as in example 1.
(2) Construction of SN-HiFuse network
This step is the same as in example 1.
(3) Training SN-HiFuse network
1) Determining an objective function
The objective function includes a Loss function Loss and an evaluation function AUC, the Loss function Loss being the same as in example 1. The evaluation function AUC is determined as follows:
where P_{i+} is the probability of predicting the ith sample as a positive sample, P_{i−} is the probability of predicting the ith sample as a negative sample, i is a finite positive integer, and P ∈ (0, 1] (in this embodiment, P takes the value 0.1); M is the number of positive samples, N is the number of negative samples, and M and N are finite positive integers.
2) Training SN-HiFuse network
Input the training set into the SN-HiFuse network for training, where the learning rate γ of the SN-HiFuse network satisfies γ ∈ [10⁻⁵, 10⁻³]; in this embodiment γ takes the value 10⁻⁵. The optimizer adopts the Adam optimizer, and training iterates until the loss function of the SN-HiFuse network converges.
The other steps are the same as in Example 1. This completes the image classification method based on the SN-HiFuse network.
Example 3
The image classification method based on the SN-HiFuse network in the embodiment comprises the following steps:
(1) Dataset preprocessing
This step is the same as in example 1.
(2) Construction of SN-HiFuse network
This step is the same as in example 1.
(3) Training SN-HiFuse network
1) Determining an objective function
The objective function includes a Loss function Loss and an evaluation function AUC, the Loss function Loss being the same as in example 1.
The evaluation function AUC is determined as follows:
where P_{i+} is the probability of predicting the ith sample as a positive sample, P_{i−} is the probability of predicting the ith sample as a negative sample, i is a finite positive integer, and P ∈ (0, 1] (in this embodiment, P takes the value 1); M is the number of positive samples, N is the number of negative samples, and M and N are finite positive integers.
2) Training SN-HiFuse network
Input the training set into the SN-HiFuse network for training, where the learning rate γ of the SN-HiFuse network satisfies γ ∈ [10⁻⁵, 10⁻³]; in this embodiment γ takes the value 10⁻³. The optimizer adopts the Adam optimizer, and training iterates until the loss function of the SN-HiFuse network converges.
The other steps are the same as in Example 1. This completes the image classification method based on the SN-HiFuse network.
In order to verify the beneficial effects of the invention, comparative simulation experiments were carried out with the SN-HiFuse-network-based image classification method of Example 1 of the invention, the Swin Transformer method and the HiFuse method. The experimental conditions are as follows:
the same test set is tested by each trained model, the accuracy of the model is tested by using an evaluation code, and the evaluation function AUC is used as the quality of the evaluation method, wherein the larger the AUC value of the evaluation function is, the better the method is.
The results of the evaluation function AUC are shown in table 1.
Table 1 Evaluation function AUC values of Example 1 and the comparative methods

Test method                Evaluation function AUC
Example 1                  0.8235
HiFuse method              0.7982
Swin Transformer method    0.805
As can be seen from Table 1, the evaluation function AUC value of the method of Example 1 is 0.8235, that of the HiFuse method is 0.7982, and that of the Swin Transformer method is 0.805. The AUC value of the method of Example 1 is thus 2.53 percentage points higher than that of the HiFuse method and 1.85 percentage points higher than that of the Swin Transformer method.

Claims (5)

1. An image classification method based on an SN-HiFuse network is characterized by comprising the following steps:
(1) Dataset preprocessing
Taking 4999 pictures containing cell nuclei, 702 pictures of a tag A data set, 2951 pictures of a tag B data set and 1336 pictures of a tag C data set, wherein the size of the pictures is 2000 multiplied by 2000 pixels;
1) Normalizing the image dataset pixel values to [ -1,1], reshaping the picture into a picture of 224 x 224 pixels in size;
2) The data set is randomly divided into a training set, a validation set and a test set in the ratio 6:2:2;
(2) Construction of SN-HiFuse network
The SN-HiFuse network is formed by connecting a local feature branch, a global feature branch and a feature fusion branch in parallel, and connecting the output end of the feature fusion branch with a classifier;
the local feature branch is formed by sequentially connecting a local feature block 1, a local feature block 2, a local feature block 3 and a local feature block 4 in series; the global feature branch is formed by sequentially connecting a global feature block 1, a global feature block 2, a global feature block 3 and a global feature block 4 in series; the feature fusion branch is formed by sequentially connecting a feature fusion block 1, a feature fusion block 2, a feature fusion block 3 and a feature fusion block 4 in series; the outputs of the local feature block 1 and the global feature block 1 are connected with the input of the feature fusion block 1, the outputs of the local feature block 2 and the global feature block 2 are connected with the input of the feature fusion block 2, the outputs of the local feature block 3 and the global feature block 3 are connected with the input of the feature fusion block 3, and the outputs of the local feature block 4 and the global feature block 4 are connected with the input of the feature fusion block 4;
the local feature block 1 is formed by serially connecting a convolution module and a SimAM attention mechanism module, and the structures of the local feature block 2, the local feature block 3 and the local feature block 4 are the same as those of the local feature block 1;
the feature fusion block 1 is formed by connecting the channel attention branch of a NAM attention mechanism module and the spatial attention branch of a CBAM attention mechanism module in parallel, and then connecting the two branches in series with a convolution module; the structures of the feature fusion block 2, the feature fusion block 3 and the feature fusion block 4 are the same as that of the feature fusion block 1;
(3) Training SN-HiFuse network
1) Determining an objective function
The objective function includes a Loss function Loss and an evaluation function AUC, and the Loss function Loss is determined according to the following formula:

Loss = −(1/m) Σ_{i=1}^{m} [ y_i·log l_θ(x_i) + (1 − y_i)·log(1 − l_θ(x_i)) ]

where m is the total number of training samples, x_i is the ith sample, y_i is the label corresponding to the ith sample, l_θ(x_i) is the output of the SN-HiFuse network, and m and i are finite positive integers;
the evaluation function AUC is determined as follows:
where P_{i+} is the probability of predicting the ith sample as a positive sample, P_{i−} is the probability of predicting the ith sample as a negative sample, i is a finite positive integer, P ∈ (0, 1], M is the number of positive samples, N is the number of negative samples, and M and N are finite positive integers;
2) Training SN-HiFuse network
Inputting the training set into the SN-HiFuse network for training, wherein the learning rate γ of the SN-HiFuse network satisfies γ ∈ [10⁻⁵, 10⁻³]; the optimizer adopts the Adam optimizer, and training iterates until the loss function of the SN-HiFuse network converges;
(4) Saving the model
In the process of training the SN-HiFuse network, continuously updating the weight and storing corresponding parameters and weight files;
(5) Validating SN-HiFuse networks
Inputting the verification set into an SN-HiFuse network for verification;
(6) Testing SN-HiFuse networks
And inputting the test set into the SN-HiFuse network for testing, and loading the saved parameters and weight files to obtain an image classification result.
2. The SN-HiFuse network-based image classification method of claim 1, wherein: in the step (2) of constructing the SN-HiFuse network, the convolution module of the local feature block 1 is formed by sequentially connecting in series a depthwise convolution layer with a 3×3 convolution kernel, a normalization layer, a convolution layer with a 1×1 convolution kernel and a GELU activation function layer; the structures of the convolution modules of the local feature block 2, the local feature block 3 and the local feature block 4 are the same as the structure of the convolution module of the local feature block 1.
3. The SN-HiFuse network-based image classification method of claim 1, wherein: in the step (2) of constructing the SN-HiFuse network, the SimAM attention mechanism module of the local feature block 1 is constructed from an energy function e(x_t, x_k), and the energy function is determined as follows:

e(x_t, x_k) = (y_1 − x̂_t)² + (1/(K − 1)) Σ_{k=1}^{K−1} (y_0 − x̂_k)²

where x_t and x_k are the input features of the target neuron and of the other neurons in the same channel, y_1 is the label of the positive sample, y_0 is the label of the negative sample, x̂_t = w·x_t + b is the linear transform of x_t, x̂_k = w·x_k + b is the linear transform of x_k, t is the index over the target spatial dimension, k is the index over the other spatial dimensions, K is the number of neurons on the current channel, K is a finite positive integer, and w and b are respectively the weight and bias of the linear transform;
the structure of the SimAM attention mechanism modules of the local feature block 2, the local feature block 3 and the local feature block 4 is the same as that of the SimAM attention mechanism module of the local feature block 1.
4. The SN-HiFuse network-based image classification method of claim 1, wherein: in the step (2) of constructing an SN-HiFuse network, a channel attention branch of the NAM attention mechanism module of the feature fusion block 1 is formed by connecting a batch normalization layer and a sigmoid activation function layer in series; the structures of the channel attention branches of the feature fusion block 2, the feature fusion block 3 and the feature fusion block 4 are the same as the structure of the channel attention branch of the feature fusion block 1;
the spatial attention branch of the CBAM attention mechanism module of the feature fusion block 1 is formed by sequentially connecting a maximum pooling layer and an average pooling layer in series with a convolution layer with a 7×7 convolution kernel and a sigmoid activation function layer; the structures of the spatial attention branches of the feature fusion block 2, the feature fusion block 3 and the feature fusion block 4 are the same as that of the spatial attention branch of the feature fusion block 1.
5. The SN-HiFuse network-based image classification method of claim 1, wherein: in the step (2) of constructing the SN-HiFuse network, the convolution module of the feature fusion block 1 is formed by sequentially connecting in series a convolution layer with a 1×1 convolution kernel, an average pooling layer, a normalization layer, a convolution layer with a 1×1 convolution kernel and a GELU activation function layer; the convolution modules of the feature fusion block 2, the feature fusion block 3 and the feature fusion block 4 have the same structure as the convolution module of the feature fusion block 1.
CN202310746779.0A 2023-06-25 2023-06-25 Image classification method based on SN-HiFuse network Pending CN116824245A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310746779.0A CN116824245A (en) 2023-06-25 2023-06-25 Image classification method based on SN-HiFuse network


Publications (1)

Publication Number Publication Date
CN116824245A true CN116824245A (en) 2023-09-29

Family

ID=88116040

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310746779.0A Pending CN116824245A (en) 2023-06-25 2023-06-25 Image classification method based on SN-HiFuse network

Country Status (1)

Country Link
CN (1) CN116824245A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination