CN116188867A - Multi-label electrocardiograph image classification method based on attention-enhancing network - Google Patents

Multi-label electrocardiograph image classification method based on attention-enhancing network

Info

Publication number
CN116188867A
Authority
CN
China
Prior art keywords
attention
convolution
feature
label
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310206939.2A
Other languages
Chinese (zh)
Other versions
CN116188867B (en)
Inventor
王英龙
徐国璇
舒明雷
朱亮
单珂
刘照阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qilu University of Technology
Shandong Institute of Artificial Intelligence
Original Assignee
Qilu University of Technology
Shandong Institute of Artificial Intelligence
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qilu University of Technology and Shandong Institute of Artificial Intelligence
Priority to CN202310206939.2A
Publication of CN116188867A
Application granted
Publication of CN116188867B
Legal status: Active
Anticipated expiration

Classifications

    • G06V 10/764 Image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06V 10/7715 Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; mappings, e.g. subspace methods
    • G06V 10/806 Fusion, i.e. combining data from various sources at the sensor, preprocessing, feature extraction or classification level, of extracted features
    • G06V 10/82 Image or video recognition or understanding using neural networks
    • Y02A 90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

A multi-label electrocardiograph image classification method based on an attention-enhancing network builds a model that classifies multi-label electrocardiograph images, mines the context information of the various electrocardiograph image category features, fully captures the correlation among feature channels, and exploits the correlation information among category labels, so that multi-label electrocardiograph images are classified effectively and classification accuracy and precision are improved.

Description

Multi-label electrocardiograph image classification method based on attention-enhancing network
Technical Field
The invention relates to the technical field of electrocardiograph image classification, in particular to a multi-label electrocardiograph image classification method based on an attention-enhancing network.
Background
In practical application environments, electrocardiograph images mostly appear in multi-label form, so research on multi-label electrocardiograph classification has high application value and feasibility, which has prompted researchers to keep exploring this direction. Although research on multi-label electrocardiographic image classification has achieved high accuracy, most efforts have focused on how to adequately extract image features from the signals while neglecting the correlation between labels in multi-label electrocardiographic images.
Disclosure of Invention
In order to overcome the defects of the technology, the invention provides a method capable of mining the context information of various characteristics from the data set so as to enhance the input characteristics of the current electrocardiographic image.
The technical scheme adopted for overcoming the technical problems is as follows:
a multi-labeled electrocardiographic image classification method based on an attention-enhancing network, comprising the steps of:
a) Acquiring a multi-label electrocardiograph image N, wherein the label of the multi-label electrocardiograph image N is S, S = {S_1, S_2, ..., S_i, ..., S_m}, S_i is the i-th label, i ∈ {1, ..., m}, and m is the number of categories of objects in the electrocardiographic image;
b) Constructing a feature mining module, and inputting the multi-label electrocardiograph image N into the feature mining module to obtain a multi-label electrocardiographic feature map X_RS;
c) Inputting the multi-label electrocardiographic feature map X_RS into a fully connected layer to obtain the multi-label electrocardiograph category feature vector X′_RS;
d) Splicing the multi-label electrocardiographic feature map X_RS with the multi-label electrocardiograph category feature vector X′_RS to obtain the fusion feature X_r;
e) Constructing a convolution attention enhancement module CAAB, and inputting the fusion feature X_r into the convolution attention enhancement module CAAB to obtain the attention feature vector n;
f) Splicing the attention feature vector n with the multi-label electrocardiographic feature map X_RS to obtain the fusion feature X_r′;
g) Inputting the fusion feature X_r′ into the convolution attention enhancement module CAAB to obtain the attention feature vector p;
h) Splicing the attention feature vector p with the multi-label electrocardiographic feature map X_RS to obtain the fusion feature X_p;
i) Constructing a channel correlation module, and inputting the multi-label electrocardiographic feature map X_RS into the channel correlation module to obtain the feature X′_RS2;
j) Splicing and fusing the feature X′_RS2 with the multi-label electrocardiographic feature map X_RS to obtain the correlation feature X_n;
k) Constructing a classification module, inputting the fusion feature X_p and the correlation feature X_n into the classification module, and outputting the classification result of the multi-label electrocardiograph image.
Further, the feature mining module in step b) is composed of a ResNet-34 network; the multi-label electrocardiograph image N is input into the ResNet-34 network, which outputs the multi-label electrocardiographic feature map X_RS, X_RS ∈ R^(C×H×W), where R is real space, C is the number of channels of the feature map, H is the height of the feature map, and W is the width of the feature map.
Further, step e) comprises the steps of:
e-1) The convolution attention enhancement module CAAB consists of a first convolution layer, a second convolution layer, a third convolution layer, a fourth convolution layer, a ReLU activation function layer and an SE-Net network;
e-2) The fusion feature X_r is input into the first convolution layer of the convolution attention enhancement module CAAB, which outputs the attention feature X_r^1;
e-3) The fusion feature X_r is input into the second convolution layer of the convolution attention enhancement module CAAB, which outputs the attention feature X_r^2;
e-4) The fusion feature X_r is input into the third convolution layer of the convolution attention enhancement module CAAB, which outputs the attention feature X_r^3;
e-5) The fusion feature X_r is input into the fourth convolution layer of the convolution attention enhancement module CAAB, which outputs the attention feature X_r^4;
e-6) The attention features X_r^1, X_r^2, X_r^3 and X_r^4 are added and input into the ReLU activation function layer of the convolution attention enhancement module CAAB, which outputs the attention feature X′_r;
e-7) The attention feature X′_r is input into the SE-Net network of the convolution attention enhancement module CAAB, which outputs the attention feature vector n.
Preferably, in e-1), the convolution kernel size of the first convolution layer is 1×25, the number of convolution kernels is 32, the convolution kernel size of the second convolution layer is 1×15, the number of convolution kernels is 32, the convolution kernel size of the third convolution layer is 1×7, the number of convolution kernels is 32, the convolution kernel size of the fourth convolution layer is 1×3, and the number of convolution kernels is 32.
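The CAAB structure described in e-1) through e-7), with the kernel sizes given above, can be sketched as four parallel 1×k convolutions whose outputs are summed, then passed through ReLU and a squeeze-and-excitation block. The same-padding needed to make the four branch outputs addable, the SE reduction ratio r = 8, and all class names are assumptions not stated in the text.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Standard squeeze-and-excitation block, standing in for the SE-Net
    stage of CAAB; the reduction ratio r = 8 is an assumption."""
    def __init__(self, channels, r=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // r), nn.ReLU(),
            nn.Linear(channels // r, channels), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))   # squeeze: global average pool
        return x * w[:, :, None, None]    # excite: reweight channels

class CAAB(nn.Module):
    """Sketch of the convolution attention enhancement module: four parallel
    1xk convolutions (k = 25, 15, 7, 3; 32 kernels each, per the patent),
    summed, then ReLU, then an SE block."""
    def __init__(self, in_channels):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_channels, 32, kernel_size=(1, k), padding=(0, k // 2))
            for k in (25, 15, 7, 3))
        self.relu = nn.ReLU()
        self.se = SEBlock(32)

    def forward(self, x_r):
        s = sum(b(x_r) for b in self.branches)  # X_r^1 + X_r^2 + X_r^3 + X_r^4
        return self.se(self.relu(s))            # attention feature vector n

out = CAAB(in_channels=16)(torch.randn(2, 16, 1, 100))
```

The odd kernel widths with padding k // 2 keep all four branch outputs the same width, so the element-wise addition in e-6) is well defined.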
Further, step g) comprises the steps of:
g-1) The fusion feature X_r′ is input into the first convolution layer of the convolution attention enhancement module CAAB, which outputs the attention feature X_r′^1;
g-2) The fusion feature X_r′ is input into the second convolution layer of the convolution attention enhancement module CAAB, which outputs the attention feature X_r′^2;
g-3) The fusion feature X_r′ is input into the third convolution layer of the convolution attention enhancement module CAAB, which outputs the attention feature X_r′^3;
g-4) The fusion feature X_r′ is input into the fourth convolution layer of the convolution attention enhancement module CAAB, which outputs the attention feature X_r′^4;
g-5) The attention features X_r′^1, X_r′^2, X_r′^3 and X_r′^4 are added and input into the ReLU activation function layer of the convolution attention enhancement module CAAB, which outputs the attention feature X′_r′;
g-6) The attention feature X′_r′ is input into the SE-Net network of the convolution attention enhancement module CAAB, which outputs the attention feature vector p.
Further, step i) comprises the steps of:
further, the i-1) channel correlation module is composed of a global maximum pooling layer, a full connection layer, a first ReLU activation function layer and a second ReLU activation function layer;
i-2) mapping multi-labeled electrocardiographic signatures X RS Input into a global maximization layer of a channel correlation module, and output to obtain a feature map X RS2 ,X RS2 ∈R C×1×1
i-3) mapping feature patterns X in the channel dimension RS2 Dividing into o groups to obtain features
Figure BDA0004111221130000045
I e {1,., o } for the i-th feature;
i-4) characterization of the characteristics
Figure BDA0004111221130000046
Respectively and sequentially inputting the channel characteristics into a full-connection layer and a first ReLU activation function layer of a channel correlation module, and outputting the converted channel characteristics
Figure BDA0004111221130000047
For the ith transformed channel feature, +.>
Figure BDA0004111221130000048
i-5) characterization of
Figure BDA0004111221130000049
And transformed channel characteristics->
Figure BDA00041112211300000410
Phase splicing operation to obtain the i-th channel correlation characteristic +.>
Figure BDA00041112211300000411
Figure BDA00041112211300000412
i-6) characterizing all channel correlations
Figure BDA00041112211300000413
Adding and inputting to a second ReLU activation function layer of the channel correlation module, and outputting to obtain a characteristic X' RS2 ,X′ RS2 ∈R C×1×1
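A minimal sketch of the channel correlation module follows. The group count o and the exact form of the splice in i-5) are not fully recoverable from the text; with the assumed o = 2, concatenating each C/2-channel group with its FC-transformed copy yields a C-channel correlation feature, which matches the stated output shape X′_RS2 ∈ R^(C×1×1). All names here are illustrative.

```python
import torch
import torch.nn as nn

class ChannelCorrelation(nn.Module):
    """Sketch of the channel correlation module: global max pool, split into
    o channel groups, FC + ReLU per group, splice each group with its
    transform, add the groups, second ReLU. o = 2 is an assumption."""
    def __init__(self, channels, o=2):
        super().__init__()
        self.o = o
        g = channels // o
        self.fc = nn.ModuleList(nn.Sequential(nn.Linear(g, g), nn.ReLU())
                                for _ in range(o))

    def forward(self, x_rs):                        # x_rs: (B, C, H, W)
        x = x_rs.amax(dim=(2, 3))                   # global max pool -> (B, C)
        groups = x.chunk(self.o, dim=1)             # split channels into o groups
        corr = [torch.cat([g, fc(g)], dim=1)        # splice group with transform
                for g, fc in zip(groups, self.fc)]
        out = torch.relu(torch.stack(corr).sum(0))  # add groups, second ReLU
        return out[:, :, None, None]                # X'_RS2 in R^(C x 1 x 1)

x_rs2_prime = ChannelCorrelation(channels=8)(torch.randn(2, 8, 4, 4))
```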
Further, step k) comprises the steps of:
k-1) The classification module consists of a fusion unit and a fully connected layer;
k-2) The fusion feature X_p and the correlation feature X_n are input into the fusion unit of the classification module for feature fusion, which outputs the final feature X;
k-3) The final feature X is input into the fully connected layer of the classification module to obtain the classification result of the multi-label electrocardiograph image.
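The classification module can be sketched as follows, assuming the fusion unit is a simple concatenation (the patent does not specify its internals) and that, as is usual for multi-label classification, a sigmoid gives one independent score per label. The class name, dimensions, and sigmoid are assumptions.

```python
import torch
import torch.nn as nn

class ClassificationHead(nn.Module):
    """Sketch of the classification module: a fusion unit (assumed here to be
    concatenation) followed by a fully connected layer producing one
    per-label probability via sigmoid."""
    def __init__(self, fused_dim, num_labels):
        super().__init__()
        self.fc = nn.Linear(fused_dim, num_labels)

    def forward(self, x_p, x_n):
        x = torch.cat([x_p.flatten(1), x_n.flatten(1)], dim=1)  # fusion unit
        return torch.sigmoid(self.fc(x))                        # per-label scores

head = ClassificationHead(fused_dim=16 + 8, num_labels=5)
scores = head(torch.randn(2, 16), torch.randn(2, 8, 1, 1))
```

Thresholding each sigmoid output (e.g. at 0.5) would yield the multi-hot prediction vector S.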
The beneficial effects of the invention are as follows: the multi-label electrocardiograph classification method based on an attention-enhancing network constructs a model that classifies multi-label electrocardiograph images, mines the context information of the various electrocardiograph image category features, fully captures the correlation among feature channels, and exploits the correlation information among category labels, so that multi-label electrocardiograph images are classified effectively and classification accuracy and precision are improved.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Detailed Description
The invention is further described with reference to fig. 1.
A multi-labeled electrocardiographic image classification method based on an attention-enhancing network, comprising the steps of:
a) Acquiring a multi-label electrocardiograph image N, wherein the label of the multi-label electrocardiograph image N is S, S = {S_1, S_2, ..., S_i, ..., S_m}, S_i is the i-th label, i ∈ {1, ..., m}, and m is the number of categories of objects in the electrocardiographic image; if the multi-label electrocardiograph image N has label i, S_i equals 1, and otherwise 0.
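The multi-hot label encoding in step a) can be illustrated with a tiny example; the values m = 5 and the present labels {0, 3} are hypothetical.

```python
# Hypothetical example: m = 5 diagnostic classes, and an ECG image that
# carries labels 0 and 3. S_i = 1 iff label i is present, otherwise 0.
m = 5
present = {0, 3}
S = [1 if i in present else 0 for i in range(m)]
# S == [1, 0, 0, 1, 0]
```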
b) Constructing a feature mining module, and inputting the multi-label electrocardiograph image N into the feature mining module to obtain a multi-label electrocardiographic feature map X_RS.
c) Inputting the multi-label electrocardiographic feature map X_RS into a fully connected layer to obtain the multi-label electrocardiograph category feature vector X′_RS.
d) Splicing and fusing the multi-label electrocardiographic feature map X_RS with the multi-label electrocardiograph category feature vector X′_RS; the currently input electrocardiographic features are used to mine information related to the category feature vector X′_RS, extracting context information beyond the single electrocardiograph image, to obtain the fusion feature X_r.
e) Constructing a convolution attention enhancement module CAAB, and inputting the fusion feature X_r into the convolution attention enhancement module CAAB to obtain the attention feature vector n. The convolution attention enhancement module CAAB extracts context information between images from the multi-label electrocardiograph images and directly correlates the data labels, so as to enhance the current input feature X_RS.
f) Splicing the attention feature vector n with the multi-label electrocardiographic feature map X_RS by feature fusion to obtain the fusion feature X_r′.
g) Inputting the fusion feature X_r′ into the convolution attention enhancement module CAAB to obtain the attention feature vector p.
h) Splicing the attention feature vector p with the multi-label electrocardiographic feature map X_RS to obtain the fusion feature X_p.
i) Constructing a channel correlation module, and inputting the multi-label electrocardiographic feature map X_RS into the channel correlation module to obtain the feature X′_RS2.
j) Splicing and fusing the feature X′_RS2 with the multi-label electrocardiographic feature map X_RS to obtain the correlation feature X_n.
k) Constructing a classification module, inputting the fusion feature X_p and the correlation feature X_n into the classification module, and outputting the classification result of the multi-label electrocardiograph image.
The constructed model classifies multi-label electrocardiograph images by mining the context information of the various electrocardiograph category features, fully capturing the correlation among feature channels, and exploiting the correlation information among category labels, thereby classifying multi-label electrocardiograph images effectively and improving classification accuracy and precision.
Example 1:
The feature mining module in step b) is composed of a ResNet-34 network; the multi-label electrocardiograph image N is input into the ResNet-34 network, which outputs the multi-label electrocardiographic feature map X_RS, X_RS ∈ R^(C×H×W), where R is real space, C is the number of channels of the feature map, H is the height of the feature map, and W is the width of the feature map.
Example 2:
step e) comprises the steps of:
e-1) The convolution attention enhancement module CAAB consists of a first convolution layer, a second convolution layer, a third convolution layer, a fourth convolution layer, a ReLU activation function layer and an SE-Net network.
e-2) The fusion feature X_r is input into the first convolution layer of the convolution attention enhancement module CAAB, which outputs the attention feature X_r^1.
e-3) The fusion feature X_r is input into the second convolution layer of the convolution attention enhancement module CAAB, which outputs the attention feature X_r^2.
e-4) The fusion feature X_r is input into the third convolution layer of the convolution attention enhancement module CAAB, which outputs the attention feature X_r^3.
e-5) The fusion feature X_r is input into the fourth convolution layer of the convolution attention enhancement module CAAB, which outputs the attention feature X_r^4.
e-6) The attention features X_r^1, X_r^2, X_r^3 and X_r^4 are added and input into the ReLU activation function layer of the convolution attention enhancement module CAAB, which outputs the attention feature X′_r.
e-7) The attention feature X′_r is input into the SE-Net network of the convolution attention enhancement module CAAB, which outputs the attention feature vector n.
Example 3:
e-1) The convolution kernel size of the first convolution layer is 1×25 and its number of convolution kernels is 32; the second convolution layer is 1×15 with 32 convolution kernels; the third convolution layer is 1×7 with 32 convolution kernels; and the fourth convolution layer is 1×3 with 32 convolution kernels.
Example 4:
step g) comprises the steps of:
g-1) The fusion feature X_r′ is input into the first convolution layer of the convolution attention enhancement module CAAB, which outputs the attention feature X_r′^1.
g-2) The fusion feature X_r′ is input into the second convolution layer of the convolution attention enhancement module CAAB, which outputs the attention feature X_r′^2.
g-3) The fusion feature X_r′ is input into the third convolution layer of the convolution attention enhancement module CAAB, which outputs the attention feature X_r′^3.
g-4) The fusion feature X_r′ is input into the fourth convolution layer of the convolution attention enhancement module CAAB, which outputs the attention feature X_r′^4.
g-5) The attention features X_r′^1, X_r′^2, X_r′^3 and X_r′^4 are added and input into the ReLU activation function layer of the convolution attention enhancement module CAAB, which outputs the attention feature X′_r′.
g-6) The attention feature X′_r′ is input into the SE-Net network of the convolution attention enhancement module CAAB, which outputs the attention feature vector p.
Example 5:
step i) comprises the steps of:
i-1) The channel correlation module consists of a global max pooling layer, a fully connected layer, a first ReLU activation function layer and a second ReLU activation function layer.
i-2) The multi-label electrocardiograph image contains multiple objects with specific correlations between them, and the channel correlation module can effectively enhance the correlation between the channel feature maps. The feature map X_RS finally extracted by the feature mining module is input into the global max pooling layer of the channel correlation module, which compresses X_RS and outputs a feature map X_RS2 with higher feature information density, X_RS2 ∈ R^(C×1×1).
i-3) To build correlations between the different channel features of the multi-label electrocardiograph images, the feature map X_RS2 is divided into o groups along the channel dimension, giving the features X_RS2^i, i ∈ {1, ..., o}, X_RS2^i being the i-th feature.
i-4) Each feature X_RS2^i is respectively and sequentially input into the fully connected layer and the first ReLU activation function layer of the channel correlation module, which output the transformed channel features X̃_RS2^i, X̃_RS2^i being the i-th transformed channel feature.
i-5) The feature X_RS2^i is spliced and fused with the transformed channel feature X̃_RS2^i to obtain the i-th channel correlation feature X̂_RS2^i.
i-6) All channel correlation features X̂_RS2^i are added and input into the second ReLU activation function layer of the channel correlation module, which outputs the feature X′_RS2, X′_RS2 ∈ R^(C×1×1).
Example 6:
step k) comprises the steps of:
k-1) The classification module consists of a fusion unit and a fully connected layer.
k-2) The fusion feature X_p and the correlation feature X_n are input into the fusion unit of the classification module for feature fusion, which outputs the final feature X.
k-3) The final feature X is input into the fully connected layer of the classification module to obtain the classification result of the multi-label electrocardiograph image.
Finally, it should be noted that the foregoing description covers only preferred embodiments of the present invention, and the invention is not limited thereto. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described therein or substitute equivalents for some of the technical features. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall be included in its scope of protection.

Claims (7)

1. A multi-label electrocardiographic image classification method based on an attention-enhancing network, comprising the steps of:
a) Acquiring a multi-label electrocardiograph image N, wherein the label of the multi-label electrocardiograph image N is S, S = {S_1, S_2, ..., S_i, ..., S_m}, S_i is the i-th label, i ∈ {1, ..., m}, and m is the number of categories of objects in the electrocardiographic image;
b) Constructing a feature mining module, and inputting the multi-label electrocardiograph image N into the feature mining module to obtain a multi-label electrocardiographic feature map X_RS;
c) Inputting the multi-label electrocardiographic feature map X_RS into a fully connected layer to obtain the multi-label electrocardiograph category feature vector X′_RS;
d) Splicing the multi-label electrocardiographic feature map X_RS with the multi-label electrocardiograph category feature vector X′_RS to obtain the fusion feature X_r;
e) Constructing a convolution attention enhancement module CAAB, and inputting the fusion feature X_r into the convolution attention enhancement module CAAB to obtain the attention feature vector n;
f) Splicing the attention feature vector n with the multi-label electrocardiographic feature map X_RS to obtain the fusion feature X_r′;
g) Inputting the fusion feature X_r′ into the convolution attention enhancement module CAAB to obtain the attention feature vector p;
h) Splicing the attention feature vector p with the multi-label electrocardiographic feature map X_RS to obtain the fusion feature X_p;
i) Constructing a channel correlation module, and inputting the multi-label electrocardiographic feature map X_RS into the channel correlation module to obtain the feature X′_RS2;
j) Splicing the feature X′_RS2 with the multi-label electrocardiographic feature map X_RS to obtain the correlation feature X_n;
k) Constructing a classification module, inputting the fusion feature X_p and the correlation feature X_n into the classification module, and outputting the classification result of the multi-label electrocardiograph image.
2. The attention-enhancing network-based multi-label electrocardiographic image classification method according to claim 1, wherein: the feature mining module in step b) is composed of a ResNet-34 network; the multi-label electrocardiograph image N is input into the ResNet-34 network, which outputs the multi-label electrocardiographic feature map X_RS, X_RS ∈ R^(C×H×W), where R is real space, C is the number of channels of the feature map, H is the height of the feature map, and W is the width of the feature map.
3. The attention-enhancing network-based multi-labeled electrocardiographic image classification method according to claim 1 wherein step e) comprises the steps of:
e-1) The convolution attention enhancement module CAAB consists of a first convolution layer, a second convolution layer, a third convolution layer, a fourth convolution layer, a ReLU activation function layer and an SE-Net network;
e-2) The fusion feature X_r is input into the first convolution layer of the convolution attention enhancement module CAAB, which outputs the attention feature X_r^1;
e-3) The fusion feature X_r is input into the second convolution layer of the convolution attention enhancement module CAAB, which outputs the attention feature X_r^2;
e-4) The fusion feature X_r is input into the third convolution layer of the convolution attention enhancement module CAAB, which outputs the attention feature X_r^3;
e-5) The fusion feature X_r is input into the fourth convolution layer of the convolution attention enhancement module CAAB, which outputs the attention feature X_r^4;
e-6) The attention features X_r^1, X_r^2, X_r^3 and X_r^4 are added and input into the ReLU activation function layer of the convolution attention enhancement module CAAB, which outputs the attention feature X′_r;
e-7) The attention feature X′_r is input into the SE-Net network of the convolution attention enhancement module CAAB, which outputs the attention feature vector n.
4. The multi-label electrocardiographic image classification method based on an attention-enhancing network according to claim 3, wherein: in e-1), the convolution kernel size of the first convolution layer is 1×25 and its number of convolution kernels is 32; the second convolution layer is 1×15 with 32 convolution kernels; the third convolution layer is 1×7 with 32 convolution kernels; and the fourth convolution layer is 1×3 with 32 convolution kernels.
5. A multi-labeled electrocardiographic image classification method based on attention-enhancing networks according to claim 3, wherein step g) comprises the steps of:
g-1) The fusion feature X_r′ is input into the first convolution layer of the convolution attention enhancement module CAAB, which outputs the attention feature X_r′^1;
g-2) The fusion feature X_r′ is input into the second convolution layer of the convolution attention enhancement module CAAB, which outputs the attention feature X_r′^2;
g-3) The fusion feature X_r′ is input into the third convolution layer of the convolution attention enhancement module CAAB, which outputs the attention feature X_r′^3;
g-4) The fusion feature X_r′ is input into the fourth convolution layer of the convolution attention enhancement module CAAB, which outputs the attention feature X_r′^4;
g-5) The attention features X_r′^1, X_r′^2, X_r′^3 and X_r′^4 are added and input into the ReLU activation function layer of the convolution attention enhancement module CAAB, which outputs the attention feature X′_r′;
g-6) The attention feature X′_r′ is input into the SE-Net network of the convolution attention enhancement module CAAB, which outputs the attention feature vector p.
6. The multi-label electrocardiograph image classification method based on an attention-enhancing network according to claim 3, wherein step i) comprises the following steps:
i-1) the channel correlation module consists of a global max pooling layer, a fully connected layer, a first ReLU activation function layer and a second ReLU activation function layer;
i-2) inputting the multi-label electrocardiograph feature map X_RS into the global max pooling layer of the channel correlation module, and outputting the feature map X_RS2, X_RS2 ∈ R^(C×1×1);
i-3) dividing the feature map X_RS2 into o groups along the channel dimension to obtain the features X_RS2^1, …, X_RS2^o, where X_RS2^i, i ∈ {1, …, o}, is the i-th feature;
i-4) inputting the features X_RS2^1, …, X_RS2^o respectively and sequentially into the fully connected layer and the first ReLU activation function layer of the channel correlation module, and outputting the transformed channel features, where X̃_RS2^i is the i-th transformed channel feature;
i-5) splicing the feature X_RS2^i with the transformed channel feature X̃_RS2^i to obtain the i-th channel correlation feature Y_i;
i-6) adding all channel correlation features Y_1, …, Y_o, inputting the sum into the second ReLU activation function layer of the channel correlation module, and outputting the feature X'_RS2, X'_RS2 ∈ R^(C×1×1).
7. The attention-enhancing network-based multi-label electrocardiograph image classification method according to claim 1, wherein step k) comprises the following steps:
k-1) the classification module consists of a fusion unit and a fully connected layer;
k-2) inputting the fused feature X_p and the correlation feature X_n into the fusion unit of the classification module for feature fusion, and outputting the final feature X;
k-3) inputting the final feature X into the fully connected layer of the classification module to obtain the multi-label electrocardiograph image classification result.
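Claim 7 specifies only a fusion unit followed by one fully connected layer. A minimal sketch is given below; treating the fusion unit as vector concatenation and using a sigmoid to produce independent per-label probabilities (the usual choice for multi-label output) are both assumptions, since the claim names neither the fusion operation nor the output activation.

```python
import numpy as np

def classify(x_p, x_n, w, b):
    """Classification-module sketch: fuse the two feature vectors
    (fusion op assumed: concatenation), apply one fully connected
    layer, and map logits to per-label probabilities with a sigmoid
    (activation assumed, not stated in the claim)."""
    x = np.concatenate([x_p, x_n])        # fusion unit -> final feature X
    logits = w @ x + b                    # fully connected layer
    return 1.0 / (1.0 + np.exp(-logits))  # independent label probabilities

rng = np.random.default_rng(2)
x_p, x_n = rng.standard_normal(32), rng.standard_normal(32)
w, b = rng.standard_normal((5, 64)) * 0.1, np.zeros(5)  # 5 labels, illustrative
probs = classify(x_p, x_n, w, b)
print(probs.shape)                        # (5,)
```

With a sigmoid head, each label is thresholded independently (e.g. at 0.5), which is what allows one electrocardiograph image to receive several diagnostic labels at once.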
CN202310206939.2A 2023-03-07 2023-03-07 Multi-label electrocardiograph image classification method based on attention-enhancing network Active CN116188867B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310206939.2A CN116188867B (en) 2023-03-07 2023-03-07 Multi-label electrocardiograph image classification method based on attention-enhancing network

Publications (2)

Publication Number Publication Date
CN116188867A true CN116188867A (en) 2023-05-30
CN116188867B CN116188867B (en) 2023-10-31

Family

ID=86452105

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310206939.2A Active CN116188867B (en) 2023-03-07 2023-03-07 Multi-label electrocardiograph image classification method based on attention-enhancing network

Country Status (1)

Country Link
CN (1) CN116188867B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109165674A (en) * 2018-07-19 2019-01-08 南京富士通南大软件技术有限公司 A kind of certificate photo classification method based on multi-tag depth convolutional network
EP3654248A1 (en) * 2018-11-19 2020-05-20 Siemens Aktiengesellschaft Verification of classification decisions in convolutional neural networks
US20200237246A1 (en) * 2017-11-27 2020-07-30 Lepu Medical Technology (Bejing) Co., Ltd. Automatic recognition and classification method for electrocardiogram heartbeat based on artificial intelligence
CN113222055A (en) * 2021-05-28 2021-08-06 新疆爱华盈通信息技术有限公司 Image classification method and device, electronic equipment and storage medium
CN113947161A (en) * 2021-10-28 2022-01-18 广东工业大学 Attention mechanism-based multi-label text classification method and system
CN114612681A (en) * 2022-01-30 2022-06-10 西北大学 GCN-based multi-label image classification method, model construction method and device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
SHUHONG WANG ET AL: "Multiscale Residual Network Based on Channel Spatial Attention Mechanism for Multilabel ECG Classification", HTTPS://DOI.ORG/10.1155/2021/6630643 *
XIAOYUN XIE ET AL: "Multilabel 12-Lead ECG Classification Based on Leadwise Grouping Multibranch Network", IEEE *
XUE LIXIA: "Multi-label image classification fusing attention mechanism and semantic relevance", Opto-Electronic Engineering, vol. 46, no. 9 *

Also Published As

Publication number Publication date
CN116188867B (en) 2023-10-31

Similar Documents

Publication Publication Date Title
Yang et al. Clustered object detection in aerial images
CN109685067B (en) Image semantic segmentation method based on region and depth residual error network
Tang et al. RGBT salient object detection: Benchmark and a novel cooperative ranking approach
CN114202672A (en) Small target detection method based on attention mechanism
CN109858333B (en) Image processing method, image processing device, electronic equipment and computer readable medium
KR101191223B1 (en) Method, apparatus and computer-readable recording medium by for retrieving image
CN110781350A (en) Pedestrian retrieval method and system oriented to full-picture monitoring scene
CN113780229A (en) Text recognition method and device
CN110765882A (en) Video tag determination method, device, server and storage medium
CN113434716A (en) Cross-modal information retrieval method and device
CN113869361A (en) Model training method, target detection method and related device
CN112861970A (en) Fine-grained image classification method based on feature fusion
CN111460223A (en) Short video single-label classification method based on multi-mode feature fusion of deep network
Shen et al. Unsupervised multiview distributed hashing for large-scale retrieval
US9595113B2 (en) Image transmission system, image processing apparatus, image storage apparatus, and control methods thereof
CN116188867B (en) Multi-label electrocardiograph image classification method based on attention-enhancing network
Sun et al. Road and car extraction using UAV images via efficient dual contextual parsing network
CN110516640B (en) Vehicle re-identification method based on feature pyramid joint representation
CN113221977A (en) Small sample semantic segmentation method based on anti-aliasing semantic reconstruction
CN113920127B (en) Training data set independent single-sample image segmentation method and system
CN115984547A (en) Target detection model, training method and system, and target detection method and system
CN115578599A (en) Polarized SAR image classification method based on superpixel-hypergraph feature enhancement network
CN115564044A (en) Graph neural network convolution pooling method, device, system and storage medium
Yu et al. Social group suggestion from user image collections
JP2015158739A (en) Image sorting device, image classification method, and image classification program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant