CN116188867A - Multi-label electrocardiograph image classification method based on attention-enhancing network - Google Patents
- Publication number
- CN116188867A (application CN202310206939.2A)
- Authority
- CN
- China
- Prior art keywords
- attention
- convolution
- feature
- label
- layer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/7715—Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
A multi-label electrocardiograph image classification method based on an attention-enhancing network builds a model that classifies multi-label electrocardiograph images, mines the context information of the various electrocardiograph image category features, fully captures the correlation among feature channels, and exploits the correlation information among category labels, so that multi-label electrocardiograph images are classified effectively and classification accuracy and precision are improved.
Description
Technical Field
The invention relates to the technical field of electrocardiograph image classification, in particular to a multi-label electrocardiograph image classification method based on an attention-enhancing network.
Background
In practical application environments, electrocardiograph images mostly occur in multi-label form, and multi-label electrocardiograph classification research has extremely high application value and feasibility, prompting researchers to keep exploring this direction. Although research into multi-label electrocardiograph image classification has achieved a high degree of accuracy, most efforts have focused on how to adequately extract image features from the signals while neglecting the correlation between labels in multi-label electrocardiograph images.
Disclosure of Invention
In order to overcome the defects of the technology, the invention provides a method capable of mining the context information of various characteristics from the data set so as to enhance the input characteristics of the current electrocardiographic image.
The technical scheme adopted for overcoming the technical problems is as follows:
a multi-labeled electrocardiographic image classification method based on an attention-enhancing network, comprising the steps of:
a) Acquiring a multi-label electrocardiograph image N, wherein the label of the multi-label electrocardiograph image N is S, S = {S_1, S_2, ..., S_i, ..., S_m}, S_i is the i-th label, i ∈ {1, ..., m}, and m is the number of categories of objects in the electrocardiograph image;
b) Constructing a feature mining module, and inputting the multi-label electrocardiograph image N into the feature mining module to obtain a multi-label electrocardiograph feature map X_RS;
c) Inputting the multi-label electrocardiograph feature map X_RS into a full connection layer to obtain a multi-label electrocardiograph category feature vector X'_RS;
d) Splicing the multi-label electrocardiograph feature map X_RS with the multi-label electrocardiograph category feature vector X'_RS to obtain a fusion feature X_r;
e) Constructing a convolution attention enhancement module CAAB, and inputting the fusion feature X_r into the convolution attention enhancement module CAAB to obtain an attention feature vector n;
f) Splicing the attention feature vector n with the multi-label electrocardiograph feature map X_RS to obtain a fusion feature X_r';
g) Inputting the fusion feature X_r' into the convolution attention enhancement module CAAB to obtain an attention feature vector p;
h) Splicing the attention feature vector p with the multi-label electrocardiograph feature map X_RS to obtain a fusion feature X_p;
i) Constructing a channel correlation module, and inputting the multi-label electrocardiograph feature map X_RS into the channel correlation module to obtain a feature X'_RS2;
j) Splicing and fusing the feature X'_RS2 with the multi-label electrocardiograph feature map X_RS to obtain a correlation feature X_n;
k) Constructing a classification module, inputting the fusion feature X_p and the correlation feature X_n into the classification module, and outputting the classification result of the multi-label electrocardiograph image.
Further, the feature mining module in step b) consists of a ResNet-34 network; the multi-label electrocardiograph image N is input into the ResNet-34 network, and the multi-label electrocardiograph feature map X_RS is output, X_RS ∈ R^{C×H×W}, where R is real space, C is the number of channels of the feature map, H is the height of the feature map, and W is the width of the feature map.
Further, step e) comprises the steps of:
e-1) The convolution attention enhancement module CAAB consists of a first convolution layer, a second convolution layer, a third convolution layer, a fourth convolution layer, a ReLU activation function layer, and an SE-Net network;
e-2) the fusion feature X_r is input into the first convolution layer of the convolution attention enhancement module CAAB, and the attention feature X_r^1 is output;
e-3) the fusion feature X_r is input into the second convolution layer of the convolution attention enhancement module CAAB, and the attention feature X_r^2 is output;
e-4) the fusion feature X_r is input into the third convolution layer of the convolution attention enhancement module CAAB, and the attention feature X_r^3 is output;
e-5) the fusion feature X_r is input into the fourth convolution layer of the convolution attention enhancement module CAAB, and the attention feature X_r^4 is output;
e-6) the attention features X_r^1, X_r^2, X_r^3, and X_r^4 are added and input into the ReLU activation function layer of the convolution attention enhancement module CAAB, and the attention feature X'_r is output;
e-7) the attention feature X'_r is input into the SE-Net network of the convolution attention enhancement module CAAB, and the attention feature vector n is output.
Preferably, in e-1), the convolution kernel size of the first convolution layer is 1×25, the number of convolution kernels is 32, the convolution kernel size of the second convolution layer is 1×15, the number of convolution kernels is 32, the convolution kernel size of the third convolution layer is 1×7, the number of convolution kernels is 32, the convolution kernel size of the fourth convolution layer is 1×3, and the number of convolution kernels is 32.
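Steps e-1) to e-7), together with the preferred kernel sizes above, can be sketched in plain NumPy. This is a minimal illustration, not the patented implementation: all weights are random stand-ins for trained parameters, and the 'same' padding and the squeeze-and-excitation reduction ratio of 4 are assumptions.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def conv_bank(x, kernel_sizes=(25, 15, 7, 3), n_kernels=32, rng=None):
    """Four parallel 1-D convolutions ('same' padding) whose outputs are summed,
    mirroring steps e-2) to e-6). x has shape (channels, length)."""
    rng = rng or np.random.default_rng(0)
    c_in, length = x.shape
    out = np.zeros((n_kernels, length))
    for k in kernel_sizes:
        w = rng.standard_normal((n_kernels, c_in, k)) * 0.01
        xp = np.pad(x, ((0, 0), (k // 2, k - 1 - k // 2)))  # 'same' padding
        for o in range(n_kernels):
            for t in range(length):
                out[o, t] += np.sum(w[o] * xp[:, t:t + k])
    return out

def se_net(x, reduction=4, rng=None):
    """Squeeze-and-Excitation block of step e-7): global average pool per channel,
    two FC layers, sigmoid gate, channel-wise rescaling."""
    rng = rng or np.random.default_rng(1)
    c = x.shape[0]
    z = x.mean(axis=1)                                   # squeeze -> (C,)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    gate = 1.0 / (1.0 + np.exp(-(w2 @ relu(w1 @ z))))    # excitation in (0, 1)
    return x * gate[:, None]

def caab(x_fused):
    """CAAB sketch: summed convolution bank -> ReLU -> SE-Net."""
    return se_net(relu(conv_bank(x_fused)))
```

For example, an 8-channel fused feature of length 16 yields a 32-channel attention feature of the same length.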
Further, step g) comprises the steps of:
g-1) the fusion feature X_r' is input into the first convolution layer of the convolution attention enhancement module CAAB, and the attention feature X_r'^1 is output;
g-2) the fusion feature X_r' is input into the second convolution layer of the convolution attention enhancement module CAAB, and the attention feature X_r'^2 is output;
g-3) the fusion feature X_r' is input into the third convolution layer of the convolution attention enhancement module CAAB, and the attention feature X_r'^3 is output;
g-4) the fusion feature X_r' is input into the fourth convolution layer of the convolution attention enhancement module CAAB, and the attention feature X_r'^4 is output;
g-5) the attention features X_r'^1, X_r'^2, X_r'^3, and X_r'^4 are added and input into the ReLU activation function layer of the convolution attention enhancement module CAAB, and the attention feature X'_r' is output;
g-6) the attention feature X'_r' is input into the SE-Net network of the convolution attention enhancement module CAAB, and the attention feature vector p is output.
Further, step i) comprises the steps of:
i-1) The channel correlation module consists of a global maximum pooling layer, a full connection layer, a first ReLU activation function layer, and a second ReLU activation function layer;
i-2) the multi-label electrocardiograph feature map X_RS is input into the global maximum pooling layer of the channel correlation module, and the feature map X_RS2 is output, X_RS2 ∈ R^{C×1×1};
i-3) the feature map X_RS2 is divided into o groups in the channel dimension to obtain features X_RS2^i, i ∈ {1, ..., o}, where X_RS2^i is the i-th feature;
i-4) each feature X_RS2^i is input in turn into the full connection layer and the first ReLU activation function layer of the channel correlation module, and the transformed channel feature T^i is output, where T^i is the i-th transformed channel feature;
i-5) the feature X_RS2^i and the transformed channel feature T^i are spliced to obtain the i-th channel correlation feature U^i;
i-6) all the channel correlation features U^i are added and input into the second ReLU activation function layer of the channel correlation module, and the feature X'_RS2 is output, X'_RS2 ∈ R^{C×1×1}.
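Under one reading of steps i-1) to i-6) — chosen here because with o = 2 groups, concatenating each group with its transformed copy keeps the stated C×1×1 output shape — the module can be sketched as follows. The group count, the shared full connection weights, and the random initialisation are all assumptions, not taken from the source.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def channel_correlation(x_rs, o=2, rng=None):
    """Channel-correlation module sketch. x_rs: feature map of shape (C, H, W)."""
    rng = rng or np.random.default_rng(0)
    c = x_rs.shape[0]
    # i-2) global maximum pooling over the spatial dimensions -> (C,)
    z = x_rs.reshape(c, -1).max(axis=1)
    # i-3) split the channel dimension into o equal groups
    groups = np.split(z, o)
    g = c // o
    w = rng.standard_normal((g, g)) * 0.1        # shared FC weights (stand-in)
    corr = []
    for gi in groups:
        ti = relu(w @ gi)                        # i-4) FC + first ReLU
        corr.append(np.concatenate([gi, ti]))    # i-5) splice group with transform
    # i-6) add all channel correlation features, second ReLU -> (C,) when o == 2
    return relu(np.sum(corr, axis=0))
```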
Further, step k) comprises the steps of:
k-1) The classification module consists of a fusion unit and a full connection layer;
k-2) the fusion feature X_p and the correlation feature X_n are input into the fusion unit of the classification module for feature fusion, and the final feature X is output;
k-3) the final feature X is input into the full connection layer of the classification module to obtain the classification result of the multi-label electrocardiograph image.
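Steps k-1) to k-3) amount to concatenation followed by a single fully connected layer. In this sketch the per-label sigmoid and the 0.5 decision threshold are added as assumptions — standard practice for multi-label heads, but not stated in the text — and the weights are random stand-ins.

```python
import numpy as np

def classify(x_p, x_n, m, rng=None):
    """Classification module sketch: fuse X_p and X_n by concatenation,
    then one fully connected layer producing one score per label."""
    rng = rng or np.random.default_rng(0)
    x = np.concatenate([np.ravel(x_p), np.ravel(x_n)])   # k-2) fusion unit
    w = rng.standard_normal((m, x.size)) * 0.01          # k-3) FC weights (stand-in)
    scores = 1.0 / (1.0 + np.exp(-(w @ x)))              # sigmoid per label (assumed)
    return (scores >= 0.5).astype(int), scores           # predicted labels + scores
```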
The beneficial effects of the invention are as follows: the multi-label electrocardiograph classification method based on the attention-enhancing network constructs a model that classifies multi-label electrocardiograph images, mines the context information of the various electrocardiograph image category features, fully captures the correlation among feature channels, and exploits the correlation information among category labels, so that the multi-label electrocardiograph images are classified effectively and classification accuracy and precision are improved.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Detailed Description
The invention is further described with reference to fig. 1.
A multi-labeled electrocardiographic image classification method based on an attention-enhancing network, comprising the steps of:
a) A multi-label electrocardiograph image N is acquired, wherein the label of the multi-label electrocardiograph image N is S, S = {S_1, S_2, ..., S_i, ..., S_m}, S_i is the i-th label, i ∈ {1, ..., m}, and m is the number of categories of objects in the electrocardiograph image; if the multi-label electrocardiograph image N carries label i, S_i equals 1, and otherwise S_i equals 0.
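The labelling rule of step a) — S_i = 1 when label i is present, 0 otherwise — can be written as a two-line helper (`present` and `label_vector` are illustrative names, not from the source):

```python
def label_vector(present, m):
    """present: set of 1-based label indices observed in image N; m: class count.
    Returns the multi-label vector S with S_i = 1 iff label i is present."""
    return [1 if i in present else 0 for i in range(1, m + 1)]
```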
b) A feature mining module is constructed, and the multi-label electrocardiograph image N is input into it to obtain the multi-label electrocardiograph feature map X_RS.
c) The multi-label electrocardiograph feature map X_RS is input into a full connection layer to obtain the multi-label electrocardiograph category feature vector X'_RS.
d) The multi-label electrocardiograph feature map X_RS is fused and spliced with the multi-label electrocardiograph category feature vector X'_RS; the currently input electrocardiograph features are used to mine information related to the category feature vector X'_RS, and context information beyond the single electrocardiograph image is extracted to obtain the fusion feature X_r.
e) A convolution attention enhancement module CAAB is constructed, and the fusion feature X_r is input into it to obtain the attention feature vector n. The convolution attention enhancement module CAAB extracts context information between images from the multi-label electrocardiograph images and directly correlates the data labels so as to enhance the current input feature X_RS.
f) The attention feature vector n is fused and spliced with the multi-label electrocardiograph feature map X_RS to obtain the fusion feature X_r'.
g) The fusion feature X_r' is input into the convolution attention enhancement module CAAB to obtain the attention feature vector p.
h) The attention feature vector p is spliced with the multi-label electrocardiograph feature map X_RS to obtain the fusion feature X_p.
i) A channel correlation module is constructed, and the multi-label electrocardiograph feature map X_RS is input into it to obtain the feature X'_RS2.
j) The feature X'_RS2 is fused and spliced with the multi-label electrocardiograph feature map X_RS to obtain the correlation feature X_n.
k) A classification module is constructed; the fusion feature X_p and the correlation feature X_n are input into it, and the classification result of the multi-label electrocardiograph image is output.
In this way, a model that classifies multi-label electrocardiographs is built: context information of the various electrocardiograph category features is mined, the correlation among feature channels is fully captured, and the correlation information among category labels is exploited, so that multi-label electrocardiographs are classified effectively and classification accuracy and precision are improved.
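Taken together, steps b) to k) compose into the following data-flow skeleton. Every function is a random-weight stand-in with invented names and arbitrary small dimensions; it is meant only to make the splicing steps d), f), h), and j) concrete, not to reproduce the trained network.

```python
import numpy as np

rng = np.random.default_rng(0)

def backbone(img):
    """Stand-in for the ResNet-34 feature miner of step b)."""
    return rng.standard_normal((8, 16))           # X_RS, flattened to (C, H*W)

def fc(x, out_dim):
    """Stand-in fully connected layer with fresh random weights."""
    w = rng.standard_normal((out_dim, x.size)) * 0.01
    return w @ np.ravel(x)

def caab(x):
    """Stand-in for the convolution attention enhancement module of steps e)/g)."""
    return np.maximum(fc(x, 8), 0.0)

def channel_corr(x):
    """Stand-in for the channel correlation module of step i)."""
    return np.maximum(fc(x, 8), 0.0)

def pipeline(img, m=5):
    x_rs = backbone(img)                                        # b)
    x_cls = fc(x_rs, 8)                                         # c) category feature vector
    x_r = np.concatenate([np.ravel(x_rs), x_cls])               # d) splice
    n = caab(x_r)                                               # e)
    x_r2 = np.concatenate([n, np.ravel(x_rs)])                  # f) splice
    p = caab(x_r2)                                              # g)
    x_p = np.concatenate([p, np.ravel(x_rs)])                   # h) splice
    x_n = np.concatenate([channel_corr(x_rs), np.ravel(x_rs)])  # i)-j)
    scores = fc(np.concatenate([x_p, x_n]), m)                  # k) fusion + FC
    return 1.0 / (1.0 + np.exp(-scores))                        # one score per label
```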
Example 1:
The feature mining module in step b) consists of a ResNet-34 network; the multi-label electrocardiograph image N is input into the ResNet-34 network, and the multi-label electrocardiograph feature map X_RS is output, X_RS ∈ R^{C×H×W}, where R is real space, C is the number of channels of the feature map, H is the height of the feature map, and W is the width of the feature map.
Example 2:
step e) comprises the steps of:
e-1) The convolution attention enhancement module CAAB consists of a first convolution layer, a second convolution layer, a third convolution layer, a fourth convolution layer, a ReLU activation function layer, and an SE-Net network.
e-2) The fusion feature X_r is input into the first convolution layer of the convolution attention enhancement module CAAB, and the attention feature X_r^1 is output.
e-3) The fusion feature X_r is input into the second convolution layer of the convolution attention enhancement module CAAB, and the attention feature X_r^2 is output.
e-4) The fusion feature X_r is input into the third convolution layer of the convolution attention enhancement module CAAB, and the attention feature X_r^3 is output.
e-5) The fusion feature X_r is input into the fourth convolution layer of the convolution attention enhancement module CAAB, and the attention feature X_r^4 is output.
e-6) The attention features X_r^1, X_r^2, X_r^3, and X_r^4 are added and input into the ReLU activation function layer of the convolution attention enhancement module CAAB, and the attention feature X'_r is output.
e-7) The attention feature X'_r is input into the SE-Net network of the convolution attention enhancement module CAAB, and the attention feature vector n is output.
Example 3:
e-1) The convolution kernel size of the first convolution layer is 1×25 and the number of convolution kernels is 32; the convolution kernel size of the second convolution layer is 1×15 and the number of convolution kernels is 32; the convolution kernel size of the third convolution layer is 1×7 and the number of convolution kernels is 32; the convolution kernel size of the fourth convolution layer is 1×3 and the number of convolution kernels is 32.
Example 4:
step g) comprises the steps of:
g-1) The fusion feature X_r' is input into the first convolution layer of the convolution attention enhancement module CAAB, and the attention feature X_r'^1 is output.
g-2) The fusion feature X_r' is input into the second convolution layer of the convolution attention enhancement module CAAB, and the attention feature X_r'^2 is output.
g-3) The fusion feature X_r' is input into the third convolution layer of the convolution attention enhancement module CAAB, and the attention feature X_r'^3 is output.
g-4) The fusion feature X_r' is input into the fourth convolution layer of the convolution attention enhancement module CAAB, and the attention feature X_r'^4 is output.
g-5) The attention features X_r'^1, X_r'^2, X_r'^3, and X_r'^4 are added and input into the ReLU activation function layer of the convolution attention enhancement module CAAB, and the attention feature X'_r' is output.
g-6) The attention feature X'_r' is input into the SE-Net network of the convolution attention enhancement module CAAB, and the attention feature vector p is output.
Example 5:
step i) comprises the steps of:
i-1) The channel correlation module consists of a global maximum pooling layer, a full connection layer, a first ReLU activation function layer, and a second ReLU activation function layer.
i-2) The multi-label electrocardiograph contains multiple objects with specific correlations between them, and the channel correlation module can effectively enhance the correlation between the channel feature maps. The multi-label electrocardiograph feature map X_RS finally extracted by the feature mining module is input into the global maximum pooling layer of the channel correlation module, which compresses X_RS and outputs a feature map X_RS2 with a higher density of feature information, X_RS2 ∈ R^{C×1×1}.
i-3) To build correlations between the different channel features of the multi-label electrocardiograph images, the feature map X_RS2 is divided into o groups in the channel dimension to obtain features X_RS2^i, i ∈ {1, ..., o}, where X_RS2^i is the i-th feature.
i-4) Each feature X_RS2^i is input in turn into the full connection layer and the first ReLU activation function layer of the channel correlation module, and the transformed channel feature T^i is output, where T^i is the i-th transformed channel feature.
i-5) The feature X_RS2^i and the transformed channel feature T^i are spliced and fused to obtain the i-th channel correlation feature U^i.
i-6) All the channel correlation features U^i are added and input into the second ReLU activation function layer of the channel correlation module, and the feature X'_RS2 is output, X'_RS2 ∈ R^{C×1×1}.
Example 6:
step k) comprises the steps of:
the k-1) classification module is composed of a fusion unit and a full connection layer.
k-2) fusing features X p And correlation feature X n And inputting the characteristics into a fusion unit of the classification module for characteristic fusion, and outputting to obtain a final characteristic X.
k-3) inputting the final feature X into a full-connection layer of the classification module to obtain a classification result of the multi-label electrocardiograph image.
Finally, it should be noted that the foregoing description covers only preferred embodiments of the present invention, and the invention is not limited thereto. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described therein or make equivalent replacements of some of the technical features. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.
Claims (7)
1. A multi-tag electrocardiographic image classification method based on an attention-enhancing network, comprising the steps of:
a) Acquiring a multi-label electrocardiograph image N, wherein the label of the multi-label electrocardiograph image N is S, S = {S_1, S_2, ..., S_i, ..., S_m}, S_i is the i-th label, i ∈ {1, ..., m}, and m is the number of categories of objects in the electrocardiograph image;
b) constructing a feature mining module, and inputting the multi-label electrocardiograph image N into the feature mining module to obtain a multi-label electrocardiograph feature map X_RS;
c) inputting the multi-label electrocardiograph feature map X_RS into a full connection layer to obtain a multi-label electrocardiograph category feature vector X'_RS;
d) splicing the multi-label electrocardiograph feature map X_RS with the multi-label electrocardiograph category feature vector X'_RS to obtain a fusion feature X_r;
e) constructing a convolution attention enhancement module CAAB, and inputting the fusion feature X_r into the convolution attention enhancement module CAAB to obtain an attention feature vector n;
f) splicing the attention feature vector n with the multi-label electrocardiograph feature map X_RS to obtain a fusion feature X_r';
g) inputting the fusion feature X_r' into the convolution attention enhancement module CAAB to obtain an attention feature vector p;
h) splicing the attention feature vector p with the multi-label electrocardiograph feature map X_RS to obtain a fusion feature X_p;
i) constructing a channel correlation module, and inputting the multi-label electrocardiograph feature map X_RS into the channel correlation module to obtain a feature X'_RS2;
j) splicing the feature X'_RS2 with the multi-label electrocardiograph feature map X_RS to obtain a correlation feature X_n;
k) constructing a classification module, inputting the fusion feature X_p and the correlation feature X_n into the classification module, and outputting the classification result of the multi-label electrocardiograph image.
2. The attention-enhancing network-based multi-label electrocardiograph classification method according to claim 1, wherein the feature mining module in step b) consists of a ResNet-34 network; the multi-label electrocardiograph image N is input into the ResNet-34 network, and the multi-label electrocardiograph feature map X_RS is output, X_RS ∈ R^{C×H×W}, where R is real space, C is the number of channels of the feature map, H is the height of the feature map, and W is the width of the feature map.
3. The attention-enhancing network-based multi-labeled electrocardiographic image classification method according to claim 1 wherein step e) comprises the steps of:
e-1) The convolution attention enhancement module CAAB consists of a first convolution layer, a second convolution layer, a third convolution layer, a fourth convolution layer, a ReLU activation function layer, and an SE-Net network;
e-2) the fusion feature X_r is input into the first convolution layer of the convolution attention enhancement module CAAB, and the attention feature X_r^1 is output;
e-3) the fusion feature X_r is input into the second convolution layer of the convolution attention enhancement module CAAB, and the attention feature X_r^2 is output;
e-4) the fusion feature X_r is input into the third convolution layer of the convolution attention enhancement module CAAB, and the attention feature X_r^3 is output;
e-5) the fusion feature X_r is input into the fourth convolution layer of the convolution attention enhancement module CAAB, and the attention feature X_r^4 is output;
e-6) the attention features X_r^1, X_r^2, X_r^3, and X_r^4 are added and input into the ReLU activation function layer of the convolution attention enhancement module CAAB, and the attention feature X'_r is output;
e-7) the attention feature X'_r is input into the SE-Net network of the convolution attention enhancement module CAAB, and the attention feature vector n is output.
4. The multi-label electrocardiograph image classification method based on an attention-enhancing network according to claim 3, wherein in e-1) the convolution kernel size of the first convolution layer is 1×25 and the number of convolution kernels is 32; the convolution kernel size of the second convolution layer is 1×15 and the number of convolution kernels is 32; the convolution kernel size of the third convolution layer is 1×7 and the number of convolution kernels is 32; and the convolution kernel size of the fourth convolution layer is 1×3 and the number of convolution kernels is 32.
5. A multi-labeled electrocardiographic image classification method based on attention-enhancing networks according to claim 3, wherein step g) comprises the steps of:
g-1) The fusion feature X_r' is input into the first convolution layer of the convolution attention enhancement module CAAB, and the attention feature X_r'^1 is output;
g-2) the fusion feature X_r' is input into the second convolution layer of the convolution attention enhancement module CAAB, and the attention feature X_r'^2 is output;
g-3) the fusion feature X_r' is input into the third convolution layer of the convolution attention enhancement module CAAB, and the attention feature X_r'^3 is output;
g-4) the fusion feature X_r' is input into the fourth convolution layer of the convolution attention enhancement module CAAB, and the attention feature X_r'^4 is output;
g-5) the attention features X_r'^1, X_r'^2, X_r'^3, and X_r'^4 are added and input into the ReLU activation function layer of the convolution attention enhancement module CAAB, and the attention feature X'_r' is output;
g-6) the attention feature X'_r' is input into the SE-Net network of the convolution attention enhancement module CAAB, and the attention feature vector p is output.
6. A multi-labeled electrocardiographic image classification method based on attention-enhancing networks according to claim 3, wherein step i) comprises the steps of:
i-1) The channel correlation module consists of a global maximum pooling layer, a full connection layer, a first ReLU activation function layer, and a second ReLU activation function layer;
i-2) the multi-label electrocardiograph feature map X_RS is input into the global maximum pooling layer of the channel correlation module, and the feature map X_RS2 is output, X_RS2 ∈ R^{C×1×1};
i-3) the feature map X_RS2 is divided into o groups in the channel dimension to obtain features X_RS2^i, i ∈ {1, ..., o}, where X_RS2^i is the i-th feature;
i-4) each feature X_RS2^i is input in turn into the full connection layer and the first ReLU activation function layer of the channel correlation module, and the transformed channel feature T^i is output, where T^i is the i-th transformed channel feature;
i-5) the feature X_RS2^i and the transformed channel feature T^i are spliced to obtain the i-th channel correlation feature U^i;
i-6) all the channel correlation features U^i are added and input into the second ReLU activation function layer of the channel correlation module, and the feature X'_RS2 is output, X'_RS2 ∈ R^{C×1×1}.
7. The multi-label electrocardiograph image classification method based on an attention-enhancing network according to claim 1, wherein step k) comprises the steps of:
k-1) the classification module consists of a fusion unit and a fully connected layer;
k-2) inputting the fused feature X_p and the correlation feature X_n into the fusion unit of the classification module for feature fusion to obtain the final feature X as output;
k-3) inputting the final feature X into the fully connected layer of the classification module to obtain the classification result of the multi-label electrocardiograph image.
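The classification module of steps k-1) through k-3) reduces to feature fusion followed by one fully connected layer. The sketch below assumes concatenation for the fusion unit and a sigmoid-plus-threshold readout, which is the usual choice for multi-label classification; neither detail is specified in the claim, so both are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def classify(x_p, x_n, w_fc, threshold=0.5):
    """Sketch of the classification module: fusion unit + fully connected layer."""
    # k-2: fuse the two feature vectors into the final feature X
    # (concatenation assumed for the fusion unit)
    x = np.concatenate([x_p, x_n])
    # k-3: the fully connected layer maps X to one score per label;
    # sigmoid + threshold yields an independent yes/no per label
    scores = sigmoid(w_fc @ x)
    return (scores >= threshold).astype(int)

rng = np.random.default_rng(2)
d_p, d_n, n_labels = 16, 16, 5                     # assumed toy dimensions
x_p = rng.standard_normal(d_p)
x_n = rng.standard_normal(d_n)
w_fc = rng.standard_normal((n_labels, d_p + d_n)) * 0.1
labels = classify(x_p, x_n, w_fc)
print(labels.shape)
```

Unlike softmax single-label classification, each sigmoid output is thresholded independently, so several ECG abnormality labels can be active for the same image.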
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310206939.2A CN116188867B (en) | 2023-03-07 | 2023-03-07 | Multi-label electrocardiograph image classification method based on attention-enhancing network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116188867A true CN116188867A (en) | 2023-05-30 |
CN116188867B CN116188867B (en) | 2023-10-31 |
Family
ID=86452105
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310206939.2A Active CN116188867B (en) | 2023-03-07 | 2023-03-07 | Multi-label electrocardiograph image classification method based on attention-enhancing network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116188867B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109165674A (en) * | 2018-07-19 | 2019-01-08 | 南京富士通南大软件技术有限公司 | Certificate photo classification method based on a multi-label deep convolutional network |
EP3654248A1 (en) * | 2018-11-19 | 2020-05-20 | Siemens Aktiengesellschaft | Verification of classification decisions in convolutional neural networks |
US20200237246A1 (en) * | 2017-11-27 | 2020-07-30 | Lepu Medical Technology (Bejing) Co., Ltd. | Automatic recognition and classification method for electrocardiogram heartbeat based on artificial intelligence |
CN113222055A (en) * | 2021-05-28 | 2021-08-06 | 新疆爱华盈通信息技术有限公司 | Image classification method and device, electronic equipment and storage medium |
CN113947161A (en) * | 2021-10-28 | 2022-01-18 | 广东工业大学 | Attention mechanism-based multi-label text classification method and system |
CN114612681A (en) * | 2022-01-30 | 2022-06-10 | 西北大学 | GCN-based multi-label image classification method, model construction method and device |
Non-Patent Citations (3)
Title |
---|
SHUHONG WANG et al.: "Multiscale Residual Network Based on Channel Spatial Attention Mechanism for Multilabel ECG Classification", https://doi.org/10.1155/2021/6630643 * |
XIAOYUN XIE et al.: "Multilabel 12-Lead ECG Classification Based on Leadwise Grouping Multibranch Network", IEEE * |
XUE LIXIA: "Multi-label image classification fusing attention mechanism and semantic relatedness", Opto-Electronic Engineering (光电工程), vol. 46, no. 9 * |
Also Published As
Publication number | Publication date |
---|---|
CN116188867B (en) | 2023-10-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Yang et al. | Clustered object detection in aerial images | |
CN109685067B (en) | Image semantic segmentation method based on region and depth residual error network | |
Tang et al. | RGBT salient object detection: Benchmark and a novel cooperative ranking approach | |
CN114202672A (en) | Small target detection method based on attention mechanism | |
CN109858333B (en) | Image processing method, image processing device, electronic equipment and computer readable medium | |
KR101191223B1 (en) | Method, apparatus and computer-readable recording medium for retrieving image | |
CN110781350A (en) | Pedestrian retrieval method and system oriented to full-picture monitoring scene | |
CN113780229A (en) | Text recognition method and device | |
CN110765882A (en) | Video tag determination method, device, server and storage medium | |
CN113434716A (en) | Cross-modal information retrieval method and device | |
CN113869361A (en) | Model training method, target detection method and related device | |
CN112861970A (en) | Fine-grained image classification method based on feature fusion | |
CN111460223A (en) | Short video single-label classification method based on multi-mode feature fusion of deep network | |
Shen et al. | Unsupervised multiview distributed hashing for large-scale retrieval | |
US9595113B2 (en) | Image transmission system, image processing apparatus, image storage apparatus, and control methods thereof | |
CN116188867B (en) | Multi-label electrocardiograph image classification method based on attention-enhancing network | |
Sun et al. | Road and car extraction using UAV images via efficient dual contextual parsing network | |
CN110516640B (en) | Vehicle re-identification method based on feature pyramid joint representation | |
CN113221977A (en) | Small sample semantic segmentation method based on anti-aliasing semantic reconstruction | |
CN113920127B (en) | Training data set independent single-sample image segmentation method and system | |
CN115984547A (en) | Target detection model, training method and system, and target detection method and system | |
CN115578599A (en) | Polarized SAR image classification method based on superpixel-hypergraph feature enhancement network | |
CN115564044A (en) | Graph neural network convolution pooling method, device, system and storage medium | |
Yu et al. | Social group suggestion from user image collections | |
JP2015158739A (en) | Image sorting device, image classification method, and image classification program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||