CN115358270A - Electrocardiogram classification method based on multi-task MTEF-NET - Google Patents
- Publication number: CN115358270A (application CN202210996094.7A)
- Authority: CN (China)
- Legal status: Granted
Classifications
- A61B5/318 — Heart-related electrical modalities, e.g. electrocardiography [ECG]
- A61B5/7203 — Signal processing specially adapted for physiological signals, for noise prevention, reduction or removal
- A61B5/7235 — Details of waveform analysis
- A61B5/7264 — Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267 — Classification involving training the classification device
- G06N3/02, G06N3/08 — Neural networks; learning methods
- Y02A90/10 — Information and communication technologies [ICT] supporting adaptation to climate change
Abstract
By setting up two tasks, the method builds a multi-task neural network and uses joint multi-task learning to improve the performance of each task in parallel, thereby raising the accuracy of overall ECG signal classification. Different weight proportions are assigned to the different tasks, giving the model the ability to balance differences between data samples and alleviating the data-imbalance problem. The established MTEF-NET network model can be scaled like a compound network: by jointly adjusting the depth, the width and the resolution of the input picture, the neural network needs fewer parameters, which reduces model complexity while improving classification accuracy. By constructing the CBAM-MBConv module, the method addresses the loss of channel and spatial features that occurs after ECG signals are converted into two-dimensional images.
Description
Technical Field
The invention relates to an electrocardiogram (ECG) classification method based on a multi-task MTEF-NET network.
Background
The electrocardiosignal (ECG signal) is an important electrical signal that records cardiac activity: it registers changes in the cardiac waveform, directly reflects changes in the heartbeat, and exhibits a certain regularity. An electrocardiogram is composed of a series of ECG signals and reflects the physiological activity of different heartbeats; since each waveband of a heartbeat has a different meaning, correctly classifying ECG signals is extremely important.
With the rise of deep learning, feature extraction and classification can be performed with neural networks. However, such models are usually single-task: they focus on one objective and neglect other information that could help optimize ECG classification. In addition, because of imbalance among the data, a single-task network model tends to over-fit one class of signals. Moreover, when a traditional neural network is scaled to improve performance, only the depth, the width, or the input resolution is changed in isolation, so the model easily saturates and its advantages are hard to exploit. Finally, after a traditional neural network converts one-dimensional ECG signals into two-dimensional images, the channel and spatial characteristics of the signals are often ignored.
Disclosure of Invention
In order to overcome these shortcomings, the invention provides a method that builds a brand-new CBAM-MBConv module and, from it, constructs an MTEF-NET network model for ECG classification with improved accuracy.
The technical scheme adopted by the invention to overcome the above technical problems is as follows:
An ECG classification method based on a multi-task MTEF-NET comprises the following steps:
a) Obtain the original ECG signals X = {x_1, x_2, ..., x_i, ..., x_n} from the MIT-BIH arrhythmia database, where x_i is the i-th original ECG signal, i ∈ {1, ..., n}, and n is the total number of original ECG signals.
b) Preprocess the original ECG signals X to remove their noise, obtaining clean ECG signals U = {u_1, u_2, ..., u_i, ..., u_n}, where u_i is the i-th denoised ECG signal, i ∈ {1, ..., n}.
c) Convert the clean ECG signals U into a two-dimensional image data set Y = {y_1, y_2, ..., y_i, ..., y_n} by continuous wavelet transformation, where y_i is the i-th two-dimensional image, i ∈ {1, ..., n}; each two-dimensional image y_i has height H, width W, and C channels, i.e. dimension H × W × C.
d) Divide the two-dimensional image data set Y into two equal parts of 50% each. The first divided data set, Y_1 = {y1_1, y1_2, ..., y1_i, ...}, is used to detect whether an ECG signal is normal, where y1_i is its i-th two-dimensional image, i ∈ {1, ..., n/2}. The wavelet-transformed images in the second divided data set are mirrored or inverted to obtain a data set Y_2 = {y2_1, y2_2, ..., y2_j, ...} for judging the specific type of an ECG signal, where y2_j is its j-th two-dimensional image, j ∈ {1, ..., n/2}.
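The mirror and inversion processing of step d) can be sketched with NumPy array flips; the function names below are illustrative, not from the patent:

```python
import numpy as np

def mirror(image):
    """Horizontal mirror of an H x W x C scalogram image."""
    return image[:, ::-1, :]

def invert(image):
    """Vertical inversion (upside-down flip) of an H x W x C image."""
    return image[::-1, :, :]

# toy 2 x 3 x 1 "image" standing in for a wavelet scalogram
img = np.arange(6).reshape(2, 3, 1)
mirrored = mirror(img)
inverted = invert(img)
```

Both operations preserve the image dimensions, so the augmented second half of the data set keeps the same H × W × C shape as the first half.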
e) Establish an MTEF-NET network model, which sequentially comprises a convolutional layer, a CBAM-MBConv1 module, a first through fifteenth CBAM-MBConv6 module, and a global pooling layer.
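The layer sequence of step e) can be written down as a configuration list. This is a sketch: the tuple format is illustrative, kernel sizes and expansion ratios are taken from the per-module descriptions later in the text, and the kernel size of the fifteenth CBAM-MBConv6 module is not given in this excerpt, so it is left as `None`:

```python
# (module name, kernel size of the second convolution layer, expansion ratio)
mtef_net_stages = (
    [("Conv3x3", 3, None), ("CBAM-MBConv1", 3, 1)]
    + [("CBAM-MBConv6", 3, 6)] * 2   # first, second
    + [("CBAM-MBConv6", 5, 6)] * 2   # third, fourth
    + [("CBAM-MBConv6", 3, 6)] * 3   # fifth to seventh
    + [("CBAM-MBConv6", 5, 6)] * 7   # eighth to fourteenth
    + [("CBAM-MBConv6", None, 6)]    # fifteenth (kernel size not in this excerpt)
    + [("GlobalPooling", None, None)]
)
```

Counting the entries confirms the stack of step e): one stem convolution, one CBAM-MBConv1, fifteen CBAM-MBConv6 modules, and one global pooling layer.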
f) Input each two-dimensional image of data set Y_1 into the MTEF-NET network model, and output a feature map set T_1 = {t1_1, t1_2, ..., t1_i, ...}, where t1_i is the feature map of the i-th two-dimensional image, i ∈ {1, ..., n/2}.
g) Input each two-dimensional image of data set Y_2 into the MTEF-NET network model, and output a feature map set T_2 = {t2_1, t2_2, ..., t2_j, ...}, where t2_j is the feature map of the j-th two-dimensional image, j ∈ {1, ..., n/2}.
h) Input the feature map set T_1 into the global pooling layer of the MTEF-NET network model, and output the ECG classification results Y_a = {y_a1, ..., y_ai, ...}, where y_ai is the i-th result, i ∈ {1, ..., n/2}. Input the feature map set T_2 into the global pooling layer of the MTEF-NET network model, and output the ECG classification results Y_b = {y_b1, ..., y_bj, ...}, where y_bj is the j-th result, j ∈ {1, ..., n/2}.
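The multi-task weighting described in the abstract — different weight proportions for the normal/abnormal task and the type task — can be sketched as a weighted sum of two per-task losses. The weight values and probabilities below are placeholders, not from the patent:

```python
import numpy as np

def cross_entropy(probs, label):
    """Negative log-likelihood of the true class label."""
    return -np.log(probs[label])

# toy predicted class probabilities from the two task heads
p_task_a = np.array([0.7, 0.3])            # task 1: normal vs. abnormal
p_task_b = np.array([0.1, 0.6, 0.2, 0.1])  # task 2: specific signal type

w_a, w_b = 0.4, 0.6  # placeholder task weights (would be tuned for imbalance)
loss = w_a * cross_entropy(p_task_a, 0) + w_b * cross_entropy(p_task_b, 1)
```

Raising the weight of the task whose classes are under-represented makes its gradient contribution larger, which is one common way such a weighted joint loss counteracts data imbalance.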
In a further aspect of the present invention, the noise in the original ECG signals X = {x_1, x_2, ..., x_i, ..., x_n} is removed in step b) with two median filters, yielding the clean ECG signals U = {u_1, u_2, ..., u_i, ..., u_n}; the width of the first median filter is 300 ms and the width of the second median filter is 600 ms.
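A sketch of the two-stage median filtering with SciPy, assuming the MIT-BIH sampling rate of 360 Hz (so 300 ms ≈ 109 samples and 600 ms ≈ 217 samples; `medfilt` requires odd kernel sizes). Subtracting the twice-filtered estimate from the raw signal is the usual way cascaded median filters remove baseline wander; the patent does not spell out this final subtraction, so it is an assumption here:

```python
import numpy as np
from scipy.signal import medfilt

FS = 360  # MIT-BIH sampling rate in Hz (assumption for this sketch)

def remove_baseline(x, fs=FS):
    """Estimate baseline wander with cascaded 300 ms / 600 ms median
    filters and subtract it from the raw ECG signal."""
    k1 = int(0.3 * fs) | 1   # 300 ms window, forced odd -> 109
    k2 = int(0.6 * fs) | 1   # 600 ms window, forced odd -> 217
    baseline = medfilt(medfilt(x, k1), k2)
    return x - baseline

# a slow drift plus a fast 8 Hz "beat": the filter targets the drift
t = np.linspace(0, 2, 2 * FS)
x = 0.5 * t + np.sin(2 * np.pi * 8 * t)
clean = remove_baseline(x)
```

The 300 ms window suppresses QRS complexes and the 600 ms window suppresses T waves, so what remains after both passes approximates the baseline alone.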
Further, in step c) the i-th two-dimensional image y_i is calculated by the continuous wavelet transform formula
y_i(a, b) = (1/√a) ∫ u_i(t) φ*((t − b)/a) dt,
where a is the transformation scale, b is the translation factor, φ(·) is the mother wavelet, and φ* denotes its complex conjugate.
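A minimal NumPy discretization of this formula. The Mexican-hat (Ricker) function is used as an illustrative mother wavelet φ, since the patent does not specify which mother wavelet is employed; for a real-valued wavelet the conjugate φ* equals φ itself:

```python
import numpy as np

def mexican_hat(t):
    """Illustrative real-valued mother wavelet (Ricker wavelet)."""
    return (1 - t**2) * np.exp(-t**2 / 2)

def cwt(u, scales, dt=1.0):
    """Discretize y_i(a, b) = (1/sqrt(a)) * integral of u(t) * phi((t - b)/a) dt,
    evaluated at every translation b; one row per scale a."""
    t = np.arange(len(u)) * dt
    rows = []
    for a in scales:
        row = [np.sum(u * mexican_hat((t - b) / a)) * dt / np.sqrt(a)
               for b in t]
        rows.append(row)
    return np.array(rows)  # shape: (len(scales), len(u))

signal = np.sin(2 * np.pi * np.arange(128) / 16.0)
image = cwt(signal, scales=[2, 4, 8])
```

Stacking the rows over many scales yields the two-dimensional scalogram image that steps c) and d) feed into the network.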
Further, step f) comprises the steps of:
f-1) Input the two-dimensional image y1_i into the MTEF-NET network model; it first enters a convolution layer with a 3 × 3 convolution kernel, whose output is a feature map, denoted F_0.
f-2) The CBAM-MBConv1 module sequentially comprises a first convolution layer with a 1 × 1 convolution kernel, a second convolution layer with a 3 × 3 convolution kernel and an expansion ratio of 1, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with a 7 × 7 convolution kernel, a sigmoid activation function layer, and a fourth convolution layer with a 1 × 1 convolution kernel. The feature map output by the convolution layer of step f-1), denoted F_0 here, is input sequentially to the first and second convolution layers, which output a feature map P_1. The feature map P_1 is input sequentially to the first global maximum pooling layer and the first global average pooling layer, which output a feature map Q_1. The feature map Q_1 is input sequentially to the second global maximum pooling layer, the second global average pooling layer, the third convolution layer and the sigmoid activation function layer, which output a spatial attention feature map S_1. The feature map S_1 is input to the fourth convolution layer, which outputs a feature map R_1. The feature maps R_1 and F_0 are added element by element to obtain the output feature map F_1 of the module.
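The pooling-based attention steps inside the CBAM-MBConv modules can be sketched as follows. This is a simplified NumPy illustration of the data flow only: the learned shared MLP of CBAM channel attention and the learned 7 × 7 convolution weights are replaced by fixed stand-ins, so the weights do not match any trained model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(f):
    """Global max- and average-pool over H, W, combine, and rescale
    each channel (the learned shared MLP is omitted)."""
    mx = f.max(axis=(1, 2))     # (C,)
    av = f.mean(axis=(1, 2))    # (C,)
    return f * sigmoid(mx + av)[:, None, None]

def spatial_attention(f, k=7):
    """Channel-wise max and mean -> (2, H, W), then a k x k convolution
    (fixed averaging kernel as a stand-in) and a sigmoid rescaling."""
    stacked = np.stack([f.max(axis=0), f.mean(axis=0)])  # (2, H, W)
    pad = k // 2
    p = np.pad(stacked, ((0, 0), (pad, pad), (pad, pad)))
    H, W = f.shape[1:]
    conv = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            conv[i, j] = p[:, i:i + k, j:j + k].mean()
    return f * sigmoid(conv)[None, :, :]

feat = np.random.default_rng(0).normal(size=(8, 14, 14))
out = spatial_attention(channel_attention(feat))
```

Both attention steps preserve the C × H × W shape of the feature map, which is what allows the element-by-element residual addition at the end of each module.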
f-3) to f-16) The first through fourteenth CBAM-MBConv6 modules all have the same internal structure as the CBAM-MBConv1 module of step f-2) — a first convolution layer with a 1 × 1 convolution kernel, a second convolution layer, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with a 7 × 7 convolution kernel, a sigmoid activation function layer, and a fourth convolution layer with a 1 × 1 convolution kernel — except that the expansion ratio of the second convolution layer is 6 and its kernel size varies per module. In the k-th of these modules, the feature map output by the preceding module, denoted F_k−1 here, is input sequentially to the first and second convolution layers; the result is input sequentially to the first global maximum pooling layer and the first global average pooling layer; that output is input sequentially to the second global maximum pooling layer, the second global average pooling layer, the third convolution layer and the sigmoid activation function layer to obtain a spatial attention feature map; the spatial attention feature map is input to the fourth convolution layer; and the fourth convolution layer's output is added element by element to the module input to obtain the output feature map F_k. The kernel size of the second convolution layer is 3 × 3 in the first, second, fifth, sixth and seventh CBAM-MBConv6 modules (steps f-3, f-4, f-7, f-8, f-9) and 5 × 5 in the third, fourth, and eighth through fourteenth CBAM-MBConv6 modules (steps f-5, f-6, f-10 to f-16).
f-17) a fifteenth CBAM-MBConv6 module sequentially comprises a first convolution layer with convolution kernel size of 1 x 1, a second convolution layer with convolution kernel size of 3 x 3 and expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7, a sigmoid activation function layer and a fourth convolution layer with convolution kernel size of 1 x 1, and a feature map is formed by the first convolution layer with convolution kernel size of 1 x 1, the second convolution layer, the first global maximum pooling layer and the second global average pooling layerSequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer and then outputting the data to obtain a dimension ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension ofSpatial attention feature map ofFeature map of spatial attentionInput to the fourth convolution layer and output with a resulting dimension ofCharacteristic diagram ofWill feature mapAnd characteristic diagramElement by element addition to a dimension ofCharacteristic diagram ofFurther, step g) comprises the following steps:
g-1) the two-dimensional image is input into the MTEF-NET network model and first enters the convolution layer with a 3 × 3 kernel, which outputs a feature map.
g-2) the feature map from g-1) is processed by the CBAM-MBConv1 module: it is passed through the module's first and second convolution layers; the result is passed through the first global max-pooling and first global average-pooling layers; that output is passed through the second global max-pooling layer, the second global average-pooling layer, the 7 × 7 third convolution layer, and the sigmoid activation function layer to give a spatial attention feature map; the spatial attention feature map is input to the fourth convolution layer; and the result is added element-wise to an earlier feature map to give the output feature map.
g-3) through g-17) the output feature map of each preceding step is processed in exactly the same manner by the first through fifteenth CBAM-MBConv6 modules in turn, the output feature map of each step serving as the input to the next.
Further, step h) is performed by the formula
calculating the i-th result y_ai, where H_ai is the height of the i-th result y_ai, W_ai its width, and C_ai its number of channels; and by the corresponding formula the j-th result y_bj is calculated, where H_bj is the height of the j-th result y_bj, W_bj its width, and C_bj its number of channels.
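The pooling formulas for y_ai and y_bj are not rendered in this text; given that step h) feeds the feature maps to a global pooling layer, they presumably describe global average pooling over the spatial axes. A minimal sketch under that assumption:

```python
import numpy as np

def global_average_pool(fmap: np.ndarray) -> np.ndarray:
    """Average an (H, W, C) feature map over its spatial axes, producing one
    value per channel: y[c] = (1 / (H * W)) * sum over h, w of fmap[h, w, c]."""
    return fmap.mean(axis=(0, 1))

# Example: a 7 x 7 x 4 feature map pools down to a length-4 vector.
fmap = np.ones((7, 7, 4))
pooled = global_average_pool(fmap)
print(pooled.shape)  # (4,)
```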
Further, the method comprises the following steps after step h):
i) By the formula
the loss function L is calculated, where λ and β are both weights, λ + β = 1, and softmax(·) is the softmax activation function;
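The exact loss formula is not rendered in this text; a common form consistent with the stated constraints (two task weights λ and β with λ + β = 1, softmax outputs) is a weighted sum of per-task cross-entropies. A minimal sketch under that assumption:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))  # subtract max for stability
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, labels):
    """Mean negative log-probability of the true class."""
    p = softmax(logits)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def multitask_loss(logits_a, labels_a, logits_b, labels_b, lam=0.5, beta=0.5):
    assert abs(lam + beta - 1.0) < 1e-9  # the patent requires lambda + beta = 1
    return lam * cross_entropy(logits_a, labels_a) + beta * cross_entropy(logits_b, labels_b)

# Toy example: both task heads predict their true class confidently.
logits_a, labels_a = np.array([[5.0, -5.0]]), np.array([0])
logits_b, labels_b = np.array([[-5.0, 5.0]]), np.array([1])
loss = multitask_loss(logits_a, labels_a, logits_b, labels_b)
```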
j) The parameters of the MTEF-NET network model in step e) are updated with the loss function L through the Adam optimization function, and after 100 training iterations the trained model and its parameters are saved.
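For reference, a single Adam parameter update of the kind applied in step j) can be sketched as follows (standard Adam with bias-corrected moment estimates; the hyper-parameter values are the usual defaults, not taken from the patent):

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=1e-2, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient (m) and its
    square (v), bias-corrected by step count t, drive the parameter update."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v

# Toy usage: minimize f(w) = w^2 (gradient 2w) for 100 steps.
w, m, v = 1.0, 0.0, 0.0
for t in range(1, 101):
    w, m, v = adam_step(w, 2.0 * w, m, v, t)
```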
The invention has the following beneficial effects: by setting two tasks, a multi-task neural network is constructed, and multi-task joint learning improves the performance of each task in parallel, raising the accuracy of the overall ECG classification. Assigning different weight proportions to the different tasks gives the model the ability to balance differences between data samples, alleviating the data-imbalance problem. The established MTEF-NET network model supports the quantitative scaling of a complex network: by jointly adjusting the depth, the width, and the resolution of the input picture, the neural network needs fewer parameters, which reduces model complexity while improving classification accuracy. The CBAM-MBConv module addresses the loss of channel and spatial features that occurs after the ECG signals are converted into two-dimensional images.
Drawings
FIG. 1 is a diagram of the MTEF-NET network architecture of the present invention;
FIG. 2 is a flow chart of the method of the present invention;
FIG. 3 is a block diagram of the CBAM-MBConv module of the present invention.
Detailed Description
The invention will be further explained with reference to fig. 1, fig. 2 and fig. 3.
An electrocardiogram classification method based on the multi-task MTEF-NET comprises the following steps:
a) Obtain the original ECG signals X = {x_1, x_2, ..., x_i, ..., x_n} from the MIT-BIH arrhythmia database, where x_i is the i-th original ECG signal, i ∈ {1, ..., n}, and n is the total number of original ECG signals.
b) Preprocess the raw ECG signal X. ECG signals are usually subject to various noise disturbances, such as baseline wander, electromyographic interference, and power-line interference, which make it difficult to extract useful information from the raw signal. Noise reduction is therefore required before the classification task: the noise in the original ECG signal X is removed to obtain a clean ECG signal U = {u_1, u_2, ..., u_i, ..., u_n}, where u_i is the i-th denoised ECG signal, i ∈ {1, ..., n}.
c) Convert the clean ECG signal U into a two-dimensional image data set Y = {y_1, y_2, ..., y_i, ..., y_n} by continuous wavelet transform to facilitate feature extraction, where y_i is the i-th two-dimensional image, i ∈ {1, ..., n}; the two-dimensional image y_i has height H, width W, C channels, and dimension H × W × C.
d) Reconstruct the data set, turning the single-task data set into a multi-task data set. Specifically, the two-dimensional image data set Y is divided into two equal halves (50% / 50%). The first half, the task-one (coarse-grained branch) data set Y_1, is used to detect whether an ECG signal is normal; it is the wavelet-transformed image data without any change. The second half, the task-two (fine-grained branch) data set Y_2, is used to determine the specific type of the ECG signal; it is obtained by mirroring or inverting the wavelet-transformed images. The two classification tasks share the same neural network (the MTEF-NET network model) for feature extraction, and each task's class prediction is finally output from the global average pooling layer.
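A minimal sketch of this reconstruction step. It assumes "mirroring" means a horizontal flip and "inverting" a vertical flip (the patent could also intend intensity inversion), chosen at random per image:

```python
import numpy as np

def build_multitask_sets(Y: np.ndarray, rng=None):
    """Split the image stack (N, H, W) 50/50: task one keeps its half
    unchanged; each image of task two's half is mirrored or inverted."""
    if rng is None:
        rng = np.random.default_rng(0)
    half = len(Y) // 2
    Y1 = Y[:half]  # coarse-grained branch: normal vs. abnormal
    Y2 = np.stack([img[:, ::-1] if rng.random() < 0.5 else img[::-1, :]
                   for img in Y[half:]])  # fine-grained branch: specific type
    return Y1, Y2

Y = np.arange(4 * 8 * 8, dtype=float).reshape(4, 8, 8)
Y1, Y2 = build_multitask_sets(Y)
```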
e) Establish the MTEF-NET network model, which sequentially comprises a convolutional layer, a CBAM-MBConv1 module, first through fifteenth CBAM-MBConv6 modules, and a global pooling layer.
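The module order of step e) can be written out schematically as a plain layer list (names here are illustrative labels, not identifiers from the patent):

```python
# Schematic module order for the MTEF-NET backbone of step e): one stem
# convolution, one CBAM-MBConv1 block, fifteen CBAM-MBConv6 blocks, and a
# global pooling head shared by both task branches.
MTEF_NET = (["conv_3x3_stem"]
            + ["CBAM-MBConv1"]
            + [f"CBAM-MBConv6_{i}" for i in range(1, 16)]
            + ["global_pool"])
print(len(MTEF_NET))  # 18 entries
```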
f) Each two-dimensional image in data set Y_1 is input into the MTEF-NET network model, which outputs a set of feature maps, one feature map per two-dimensional image.
g) Each two-dimensional image in data set Y_2 is input into the MTEF-NET network model, which outputs a set of feature maps, one feature map per two-dimensional image.
h) The feature map set from task one is input into the global pooling layer of the MTEF-NET network model, which outputs the ECG classification results, where y_ai is the i-th result; the feature map set from task two is likewise input into the global pooling layer, which outputs the ECG classification results, where y_bj is the j-th result.
Example 1:
in step b), the noise in the original ECG signal X = {x_1, x_2, ..., x_i, ..., x_n} is removed with two median filters to obtain the clean ECG signal U = {u_1, u_2, ..., u_i, ..., u_n}; the width of the first median filter is 300 ms and the width of the second median filter is 600 ms.
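A common use of such a filter pair is to estimate the baseline with the two cascaded median filters and subtract it from the raw signal; the sketch below assumes that scheme and the MIT-BIH sampling rate of 360 Hz (both assumptions, not stated in the patent):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

FS = 360  # MIT-BIH sampling rate in Hz (assumption)

def running_median(x, width):
    """Sliding-window median with edge padding; median filters need odd windows."""
    w = width if width % 2 == 1 else width + 1
    xp = np.pad(x, w // 2, mode="edge")
    return np.median(sliding_window_view(xp, w), axis=-1)

def remove_baseline(x, fs=FS):
    """Cascade a 300 ms and a 600 ms median filter to estimate the baseline
    wander, then subtract that estimate from the raw signal."""
    stage1 = running_median(x, int(0.3 * fs))
    baseline = running_median(stage1, int(0.6 * fs))
    return x - baseline

# Example: a pure linear drift is a baseline and should be removed.
t = np.arange(0, 3, 1 / FS)   # three seconds of samples
cleaned = remove_baseline(0.5 * t)
```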
Example 2:
in step c), the i-th two-dimensional image y_i is obtained by calculation with the continuous-wavelet-transform formula, where a is the transformation scale, b is the translation factor, and φ(·) is the mother wavelet.
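The formula itself is not rendered in this text. With a the transformation scale, b the translation factor, and φ(·) the mother wavelet, the standard continuous wavelet transform these variables describe (a reconstruction, writing u_i(t) for the i-th clean ECG signal) is:

```latex
W_{u_i}(a, b) = \frac{1}{\sqrt{a}} \int_{-\infty}^{+\infty} u_i(t)\, \varphi^{*}\!\left(\frac{t - b}{a}\right) \mathrm{d}t
```

The scalogram |W(a, b)| evaluated over a grid of scales a and translations b then yields the two-dimensional image y_i.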
Example 3:
step f) comprises the following steps:
f-1) the two-dimensional image is input into the MTEF-NET network model and first enters the convolution layer with a 3 × 3 kernel, which outputs a feature map.
f-2) the CBAM-MBConv1 module consists, in order, of a first convolution layer with a 1 × 1 kernel, a second convolution layer with a 3 × 3 kernel and an expansion ratio of 1, a first global max-pooling layer, a first global average-pooling layer, a second global max-pooling layer, a second global average-pooling layer, a third convolution layer with a 7 × 7 kernel, a sigmoid activation function layer, and a fourth convolution layer with a 1 × 1 kernel. The feature map from f-1) is passed through the first and second convolution layers, where the first convolution layer raises the dimensionality and the second expands the channels. The result is passed through the first global max-pooling and first global average-pooling layers, which use channel attention to extract channel features. That output is passed through the second global max-pooling layer, the second global average-pooling layer, the 7 × 7 third convolution layer, and the sigmoid activation function layer to give a spatial attention feature map. The spatial attention feature map is input to the fourth convolution layer for dimensionality reduction, and the result is added element-wise to an earlier feature map to give the module's output feature map.
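The channel- and spatial-attention steps above can be sketched numerically. This is a minimal NumPy illustration of the two attention stages only; the learned 7 × 7 convolution, the shared MLP, and all convolution weights of the real module are omitted, so it is not the patented implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x):
    """x: (C, H, W). Global max- and average-pool each channel, combine the
    two descriptors, squash with a sigmoid, and rescale the channels."""
    w = sigmoid(x.max(axis=(1, 2)) + x.mean(axis=(1, 2)))
    return x * w[:, None, None]

def spatial_attention(x):
    """x: (C, H, W). Max- and average-pool across channels, combine, squash
    with a sigmoid, and rescale every spatial position."""
    s = sigmoid(x.max(axis=0) + x.mean(axis=0))
    return x * s[None, :, :]

x = np.random.default_rng(0).normal(size=(8, 16, 16))
y = spatial_attention(channel_attention(x)) + x  # element-wise residual add
```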
f-3) the first CBAM-MBConv6 module sequentially comprises a first convolution layer with convolution kernel size of 1 x 1, a second convolution layer with convolution kernel size of 3 x 3 and expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7, a sigmoid activation function layer and a fourth convolution layer with convolution kernel size of 1 x 1, and the feature map is formed by the first convolution layer with convolution kernel size of 1 x 1, the second convolution layer, the third convolution layer and the fourth convolution layerSequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension ofCharacteristic diagram ofPerforming dimension increasing operation on the first convolution layer, performing channel expansion on the second convolution layer, and forming a feature mapThe data are sequentially input into a first global maximum pooling layer and a first global average pooling layer, channel features are extracted by using channel attention, and then the data are output to obtain the dimension ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension ofSpatial attention feature map of (1)Feature map of spatial attentionInputting the data into the fourth convolution layer for dimensionality reduction, and outputting the data to obtain a dimensionality ofCharacteristic diagram ofWill feature mapAnd characteristic diagramAdd element by element to get dimension ofCharacteristic diagram of
f-4) the second CBAM-MBConv6 module consists, in order, of a first convolution layer with a 1 × 1 kernel, a second convolution layer with a 3 × 3 kernel and an expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with a 7 × 7 kernel, a sigmoid activation function layer, and a fourth convolution layer with a 1 × 1 kernel. The input feature map is passed sequentially through the first and second convolution layers, the first convolution layer raising the dimensionality and the second performing channel expansion, to produce an expanded feature map. The expanded feature map is passed sequentially through the first global maximum pooling layer and the first global average pooling layer, where channel attention extracts channel features, to produce a channel-attention feature map. The channel-attention feature map is then passed sequentially through the second global maximum pooling layer, the second global average pooling layer, the third convolution layer with a 7 × 7 kernel, and the sigmoid activation function layer to produce a spatial-attention feature map. The spatial-attention feature map is input into the fourth convolution layer for dimensionality reduction, and the resulting feature map is added element-wise to the module's input feature map to produce the module's output feature map.
f-5) the third CBAM-MBConv6 module consists, in order, of a first convolution layer with a 1 × 1 kernel, a second convolution layer with a 5 × 5 kernel and an expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with a 7 × 7 kernel, a sigmoid activation function layer, and a fourth convolution layer with a 1 × 1 kernel. The input feature map is passed sequentially through the first and second convolution layers, the first convolution layer raising the dimensionality and the second performing channel expansion, to produce an expanded feature map. The expanded feature map is passed sequentially through the first global maximum pooling layer and the first global average pooling layer, where channel attention extracts channel features, to produce a channel-attention feature map. The channel-attention feature map is then passed sequentially through the second global maximum pooling layer, the second global average pooling layer, the third convolution layer with a 7 × 7 kernel, and the sigmoid activation function layer to produce a spatial-attention feature map. The spatial-attention feature map is input into the fourth convolution layer for dimensionality reduction, and the resulting feature map is added element-wise to the module's input feature map to produce the module's output feature map.
f-6) the fourth CBAM-MBConv6 module consists, in order, of a first convolution layer with a 1 × 1 kernel, a second convolution layer with a 5 × 5 kernel and an expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with a 7 × 7 kernel, a sigmoid activation function layer, and a fourth convolution layer with a 1 × 1 kernel. The input feature map is passed sequentially through the first and second convolution layers, the first convolution layer raising the dimensionality and the second performing channel expansion, to produce an expanded feature map. The expanded feature map is passed sequentially through the first global maximum pooling layer and the first global average pooling layer, where channel attention extracts channel features, to produce a channel-attention feature map. The channel-attention feature map is then passed sequentially through the second global maximum pooling layer, the second global average pooling layer, the third convolution layer with a 7 × 7 kernel, and the sigmoid activation function layer to produce a spatial-attention feature map. The spatial-attention feature map is input into the fourth convolution layer for dimensionality reduction, and the resulting feature map is added element-wise to the module's input feature map to produce the module's output feature map.
f-7) the fifth CBAM-MBConv6 module consists, in order, of a first convolution layer with a 1 × 1 kernel, a second convolution layer with a 3 × 3 kernel and an expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with a 7 × 7 kernel, a sigmoid activation function layer, and a fourth convolution layer with a 1 × 1 kernel. The input feature map is passed sequentially through the first and second convolution layers, the first convolution layer raising the dimensionality and the second performing channel expansion, to produce an expanded feature map. The expanded feature map is passed sequentially through the first global maximum pooling layer and the first global average pooling layer, where channel attention extracts channel features, to produce a channel-attention feature map. The channel-attention feature map is then passed sequentially through the second global maximum pooling layer, the second global average pooling layer, the third convolution layer with a 7 × 7 kernel, and the sigmoid activation function layer to produce a spatial-attention feature map. The spatial-attention feature map is input into the fourth convolution layer for dimensionality reduction, and the resulting feature map is added element-wise to the module's input feature map to produce the module's output feature map.
f-8) the sixth CBAM-MBConv6 module consists, in order, of a first convolution layer with a 1 × 1 kernel, a second convolution layer with a 3 × 3 kernel and an expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with a 7 × 7 kernel, a sigmoid activation function layer, and a fourth convolution layer with a 1 × 1 kernel. The input feature map is passed sequentially through the first and second convolution layers, the first convolution layer raising the dimensionality and the second performing channel expansion, to produce an expanded feature map. The expanded feature map is passed sequentially through the first global maximum pooling layer and the first global average pooling layer, where channel attention extracts channel features, to produce a channel-attention feature map. The channel-attention feature map is then passed sequentially through the second global maximum pooling layer, the second global average pooling layer, the third convolution layer with a 7 × 7 kernel, and the sigmoid activation function layer to produce a spatial-attention feature map. The spatial-attention feature map is input into the fourth convolution layer for dimensionality reduction, and the resulting feature map is added element-wise to the module's input feature map to produce the module's output feature map.
f-9) the seventh CBAM-MBConv6 module consists, in order, of a first convolution layer with a 1 × 1 kernel, a second convolution layer with a 3 × 3 kernel and an expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with a 7 × 7 kernel, a sigmoid activation function layer, and a fourth convolution layer with a 1 × 1 kernel. The input feature map is passed sequentially through the first and second convolution layers, the first convolution layer raising the dimensionality and the second performing channel expansion, to produce an expanded feature map. The expanded feature map is passed sequentially through the first global maximum pooling layer and the first global average pooling layer, where channel attention extracts channel features, to produce a channel-attention feature map. The channel-attention feature map is then passed sequentially through the second global maximum pooling layer, the second global average pooling layer, the third convolution layer with a 7 × 7 kernel, and the sigmoid activation function layer to produce a spatial-attention feature map. The spatial-attention feature map is input into the fourth convolution layer for dimensionality reduction, and the resulting feature map is added element-wise to the module's input feature map to produce the module's output feature map.
f-10) the eighth CBAM-MBConv6 module consists, in order, of a first convolution layer with a 1 × 1 kernel, a second convolution layer with a 5 × 5 kernel and an expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with a 7 × 7 kernel, a sigmoid activation function layer, and a fourth convolution layer with a 1 × 1 kernel. The input feature map is passed sequentially through the first and second convolution layers, the first convolution layer raising the dimensionality and the second performing channel expansion, to produce an expanded feature map. The expanded feature map is passed sequentially through the first global maximum pooling layer and the first global average pooling layer, where channel attention extracts channel features, to produce a channel-attention feature map. The channel-attention feature map is then passed sequentially through the second global maximum pooling layer, the second global average pooling layer, the third convolution layer with a 7 × 7 kernel, and the sigmoid activation function layer to produce a spatial-attention feature map. The spatial-attention feature map is input into the fourth convolution layer for dimensionality reduction, and the resulting feature map is added element-wise to the module's input feature map to produce the module's output feature map.
f-11) the ninth CBAM-MBConv6 module consists, in order, of a first convolution layer with a 1 × 1 kernel, a second convolution layer with a 5 × 5 kernel and an expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with a 7 × 7 kernel, a sigmoid activation function layer, and a fourth convolution layer with a 1 × 1 kernel. The input feature map is passed sequentially through the first and second convolution layers, the first convolution layer raising the dimensionality and the second performing channel expansion, to produce an expanded feature map. The expanded feature map is passed sequentially through the first global maximum pooling layer and the first global average pooling layer, where channel attention extracts channel features, to produce a channel-attention feature map. The channel-attention feature map is then passed sequentially through the second global maximum pooling layer, the second global average pooling layer, the third convolution layer with a 7 × 7 kernel, and the sigmoid activation function layer to produce a spatial-attention feature map. The spatial-attention feature map is input into the fourth convolution layer for dimensionality reduction, and the resulting feature map is added element-wise to the module's input feature map to produce the module's output feature map.
f-12) the tenth CBAM-MBConv6 module consists, in order, of a first convolution layer with a 1 × 1 kernel, a second convolution layer with a 5 × 5 kernel and an expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with a 7 × 7 kernel, a sigmoid activation function layer, and a fourth convolution layer with a 1 × 1 kernel. The input feature map is passed sequentially through the first and second convolution layers, the first convolution layer raising the dimensionality and the second performing channel expansion, to produce an expanded feature map. The expanded feature map is passed sequentially through the first global maximum pooling layer and the first global average pooling layer, where channel attention extracts channel features, to produce a channel-attention feature map. The channel-attention feature map is then passed sequentially through the second global maximum pooling layer, the second global average pooling layer, the third convolution layer with a 7 × 7 kernel, and the sigmoid activation function layer to produce a spatial-attention feature map. The spatial-attention feature map is input into the fourth convolution layer for dimensionality reduction, and the resulting feature map is added element-wise to the module's input feature map to produce the module's output feature map.
f-13) the eleventh CBAM-MBConv6 module consists, in order, of a first convolution layer with a 1 × 1 kernel, a second convolution layer with a 5 × 5 kernel and an expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with a 7 × 7 kernel, a sigmoid activation function layer, and a fourth convolution layer with a 1 × 1 kernel. The input feature map is passed sequentially through the first and second convolution layers, the first convolution layer raising the dimensionality and the second performing channel expansion, to produce an expanded feature map. The expanded feature map is passed sequentially through the first global maximum pooling layer and the first global average pooling layer, where channel attention extracts channel features, to produce a channel-attention feature map. The channel-attention feature map is then passed sequentially through the second global maximum pooling layer, the second global average pooling layer, the third convolution layer with a 7 × 7 kernel, and the sigmoid activation function layer to produce a spatial-attention feature map. The spatial-attention feature map is input into the fourth convolution layer for dimensionality reduction, and the resulting feature map is added element-wise to the module's input feature map to produce the module's output feature map.
f-14) the twelfth CBAM-MBConv6 module consists, in order, of a first convolution layer with a 1 × 1 kernel, a second convolution layer with a 5 × 5 kernel and an expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with a 7 × 7 kernel, a sigmoid activation function layer, and a fourth convolution layer with a 1 × 1 kernel. The input feature map is passed sequentially through the first and second convolution layers, the first convolution layer raising the dimensionality and the second performing channel expansion, to produce an expanded feature map. The expanded feature map is passed sequentially through the first global maximum pooling layer and the first global average pooling layer, where channel attention extracts channel features, to produce a channel-attention feature map. The channel-attention feature map is then passed sequentially through the second global maximum pooling layer, the second global average pooling layer, the third convolution layer with a 7 × 7 kernel, and the sigmoid activation function layer to produce a spatial-attention feature map. The spatial-attention feature map is input into the fourth convolution layer for dimensionality reduction, and the resulting feature map is added element-wise to the module's input feature map to produce the module's output feature map.
f-15) the thirteenth CBAM-MBConv6 module consists, in order, of a first convolution layer with a 1 × 1 kernel, a second convolution layer with a 5 × 5 kernel and an expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with a 7 × 7 kernel, a sigmoid activation function layer, and a fourth convolution layer with a 1 × 1 kernel. The input feature map is passed sequentially through the first and second convolution layers, the first convolution layer raising the dimensionality and the second performing channel expansion, to produce an expanded feature map. The expanded feature map is passed sequentially through the first global maximum pooling layer and the first global average pooling layer, where channel attention extracts channel features, to produce a channel-attention feature map. The channel-attention feature map is then passed sequentially through the second global maximum pooling layer, the second global average pooling layer, the third convolution layer with a 7 × 7 kernel, and the sigmoid activation function layer to produce a spatial-attention feature map. The spatial-attention feature map is input into the fourth convolution layer for dimensionality reduction, and the resulting feature map is added element-wise to the module's input feature map to produce the module's output feature map.
f-16) the fourteenth CBAM-MBConv6 module consists, in order, of a first convolution layer with a 1 × 1 kernel, a second convolution layer with a 5 × 5 kernel and an expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with a 7 × 7 kernel, a sigmoid activation function layer, and a fourth convolution layer with a 1 × 1 kernel. The input feature map is passed sequentially through the first and second convolution layers, the first convolution layer raising the dimensionality and the second performing channel expansion, to produce an expanded feature map. The expanded feature map is passed sequentially through the first global maximum pooling layer and the first global average pooling layer, where channel attention extracts channel features, to produce a channel-attention feature map. The channel-attention feature map is then passed sequentially through the second global maximum pooling layer, the second global average pooling layer, the third convolution layer with a 7 × 7 kernel, and the sigmoid activation function layer to produce a spatial-attention feature map. The spatial-attention feature map is input into the fourth convolution layer for dimensionality reduction, and the resulting feature map is added element-wise to the module's input feature map to produce the module's output feature map.
f-17) the fifteenth CBAM-MBConv6 module consists, in order, of a first convolution layer with a 1 × 1 kernel, a second convolution layer with a 3 × 3 kernel and an expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with a 7 × 7 kernel, a sigmoid activation function layer, and a fourth convolution layer with a 1 × 1 kernel. The input feature map is passed sequentially through the first and second convolution layers, the first convolution layer raising the dimensionality and the second performing channel expansion, to produce an expanded feature map. The expanded feature map is passed sequentially through the first global maximum pooling layer and the first global average pooling layer, where channel attention extracts channel features, to produce a channel-attention feature map. The channel-attention feature map is then passed sequentially through the second global maximum pooling layer, the second global average pooling layer, the third convolution layer with a 7 × 7 kernel, and the sigmoid activation function layer to produce a spatial-attention feature map. The spatial-attention feature map is input into the fourth convolution layer for dimensionality reduction, and the resulting feature map is added element-wise to the module's input feature map to produce the module's output feature map.
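Steps f-2) through f-17) all follow the same pattern: a 1 × 1 expansion convolution, attention, a 1 × 1 projection convolution, and an element-wise residual addition. A minimal NumPy sketch of one such block is given below; it is illustrative only, assuming random weights and small shapes, and omitting the depthwise 3 × 3/5 × 5 convolution, normalization, and the learned attention parameters of the real module.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -60.0, 60.0)))

def conv1x1(x, w):
    # a 1x1 convolution is per-pixel channel mixing; w has shape (C_out, C_in)
    return np.einsum("oc,chw->ohw", w, x)

def cbam_mbconv(x, expand_ratio=6):
    """Simplified CBAM-MBConv block: the first 1x1 conv raises the channel
    dimension, CBAM applies channel then spatial attention, the fourth 1x1
    conv reduces the dimension, and the input is added back element-wise."""
    C = x.shape[0]
    mid = C * expand_ratio
    h = conv1x1(x, rng.standard_normal((mid, C)) * 0.1)    # channel expansion
    c = sigmoid(h.max(axis=(1, 2)) + h.mean(axis=(1, 2)))  # channel attention
    h = h * c[:, None, None]
    s = sigmoid(h.max(axis=0) + h.mean(axis=0))            # spatial attention
    h = h * s[None, :, :]
    h = conv1x1(h, rng.standard_normal((C, mid)) * 0.1)    # dimensionality reduction
    return x + h                                           # element-wise residual add

x = rng.standard_normal((4, 8, 8))
y = cbam_mbconv(x)
print(y.shape)
```

The residual addition explains why each module's output feature map has the same dimensionality as its input: the expansion is undone by the fourth convolution layer before the element-wise sum.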
Example 4:
Step g) comprises the following steps:
g-1) the two-dimensional image is input into the MTEF-NET network model and passed through a convolution layer with a 3 × 3 kernel, which outputs a feature map.
g-2) the feature map is passed sequentially through the first and second convolution layers, the first convolution layer raising the dimensionality and the second performing channel expansion, to produce an expanded feature map. The expanded feature map is passed sequentially through the first global maximum pooling layer and the first global average pooling layer, where channel attention extracts channel features, to produce a channel-attention feature map. The channel-attention feature map is then passed sequentially through the second global maximum pooling layer, the second global average pooling layer, the third convolution layer with a 7 × 7 kernel, and the sigmoid activation function layer to produce a spatial-attention feature map. The spatial-attention feature map is input into the fourth convolution layer for dimensionality reduction, and the resulting feature map is added element-wise to the input feature map to produce the output feature map.
g-3) the feature map produced by the previous step is passed sequentially through the first and second convolution layers, the first convolution layer raising the dimensionality and the second performing channel expansion, to produce an expanded feature map. The expanded feature map is passed sequentially through the first global maximum pooling layer and the first global average pooling layer, where channel attention extracts channel features, to produce a channel-attention feature map. The channel-attention feature map is then passed sequentially through the second global maximum pooling layer, the second global average pooling layer, the third convolution layer with a 7 × 7 kernel, and the sigmoid activation function layer to produce a spatial-attention feature map. The spatial-attention feature map is input into the fourth convolution layer for dimensionality reduction, and the resulting feature map is added element-wise to the input feature map to produce the output feature map.
g-4) the feature map produced by the previous step is passed sequentially through the first and second convolution layers, the first convolution layer raising the dimensionality and the second performing channel expansion, to produce an expanded feature map. The expanded feature map is passed sequentially through the first global maximum pooling layer and the first global average pooling layer, where channel attention extracts channel features, to produce a channel-attention feature map. The channel-attention feature map is then passed sequentially through the second global maximum pooling layer, the second global average pooling layer, the third convolution layer with a 7 × 7 kernel, and the sigmoid activation function layer to produce a spatial-attention feature map. The spatial-attention feature map is input into the fourth convolution layer for dimensionality reduction, and the resulting feature map is added element-wise to the input feature map to produce the output feature map.
g-5) feature mapSequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension ofCharacteristic diagram ofPerforming dimensionality increase operation on the first convolution layer, performing channel expansion on the second convolution layer, and generating a characteristic diagramSequentially inputting the data to a first global maximum pooling layer and a first global average pooling layer, extracting channel characteristics by using channel attention, and outputtingTo obtain a dimension ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension ofSpatial attention feature map ofFeature map of spatial attentionInputting the data into the fourth convolution layer for dimensionality reduction, and outputting the data to obtain a dimensionality ofCharacteristic diagram ofWill feature mapAnd characteristic diagramAdd element by element to get dimension ofCharacteristic diagram of
g-6) feature mapSequentially inputting the first convolution layer and the second convolution layer and outputting the first convolution layer and the second convolution layer to obtain a dimension ofCharacteristic diagram ofPerforming dimension increasing operation on the first convolution layer, performing channel expansion on the second convolution layer, and forming a feature mapSequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer, extracting channel characteristics by using channel attention, and outputting the channel characteristics to obtain a dimensionality ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension ofSpatial attention feature map of (1)Spatial attention feature mapInputting the data into the fourth convolution layer for dimensionality reduction, and outputting the data to obtain a dimensionality ofCharacteristic diagram ofWill feature mapAnd characteristic diagramAdd element by element to get dimension ofCharacteristic diagram of
g-7) feature mapSequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension ofCharacteristic diagram ofPerforming dimension increasing operation on the first convolution layer, performing channel expansion on the second convolution layer, and forming a feature mapThe data are sequentially input into a first global maximum pooling layer and a first global average pooling layer, channel features are extracted by using channel attention, and then the data are output to obtain the dimension ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension ofSpatial attention feature map y of j,29 Will pay attention to the spatial feature map y j,29 Inputting the data into a fourth convolution layer for dimensionality reduction, and outputting the data to obtain a dimensionality ofCharacteristic diagram ofWill feature mapAnd characteristic diagramAdd element by element to get dimension ofCharacteristic diagram of
g-8) feature mapSequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension ofCharacteristic diagram ofPerforming dimensionality increase operation on the first convolution layer, performing channel expansion on the second convolution layer, and generating a characteristic diagramSequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer, extracting channel characteristics by using channel attention, and outputting the channel characteristics to obtain a dimensionality ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension ofSpatial attention feature map ofSpatial attention feature mapInputting the data into the fourth convolution layer for dimensionality reduction, and outputting the data to obtain a dimensionality ofCharacteristic diagram ofWill feature mapAnd characteristic diagramAdd element by element to get dimension ofCharacteristic diagram of
g-9) feature mapsSequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension ofCharacteristic diagram ofPerforming dimension increasing operation on the first convolution layer, performing channel expansion on the second convolution layer, and forming a feature mapSequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer, extracting channel characteristics by using channel attention, and outputting the channel characteristics to obtain a dimensionality ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension ofSpatial attention feature map ofSpatial attention feature mapIs input intoPerforming dimensionality reduction in the fourth convolution layer, outputting to obtain dimensionality ofCharacteristic diagram ofWill feature mapAnd characteristic diagramElement by element addition to a dimension ofCharacteristic diagram of
g-10) feature mapSequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension ofCharacteristic diagram ofPerforming dimensionality increase operation on the first convolution layer, performing channel expansion on the second convolution layer, and generating a characteristic diagramSequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer, extracting channel characteristics by using channel attention, and outputting the channel characteristics to obtain a dimensionality ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain the data with dimensionality ofSpatial attention feature map of (1)Feature map of spatial attentionInputting the data into a fourth convolution layer for dimensionality reduction, and outputting the data to obtain a dimensionality ofCharacteristic diagram ofWill feature mapAnd characteristic diagramAdd element by element to get dimension ofCharacteristic diagram of
g-11) feature mapSequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension ofCharacteristic diagram ofPerforming dimension increasing operation on the first convolution layer, performing channel expansion on the second convolution layer, and forming a feature mapThe data are sequentially input into a first global maximum pooling layer and a first global average pooling layer, channel features are extracted by using channel attention, and then the data are output to obtain the dimension ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension ofSpatial attention feature map of (1)Spatial attention feature mapInputting the data into the fourth convolution layer for dimensionality reduction, and outputting the data to obtain a dimensionality ofCharacteristic diagram ofWill feature mapAnd characteristic diagramAdd element by element to get dimension ofCharacteristic diagram of
g-12) feature mapSequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension ofCharacteristic diagram ofPerforming dimension increasing operation on the first convolution layer, performing channel expansion on the second convolution layer, and forming a feature mapThe data are sequentially input into a first global maximum pooling layer and a first global average pooling layer, channel features are extracted by using channel attention, and then the data are output to obtain the dimension ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain the data with dimensionality ofSpatial attention feature map ofFeature map of spatial attentionInputting the data into a fourth convolution layer for dimensionality reduction, and outputting the data to obtain a dimensionality ofCharacteristic diagram ofWill feature mapAnd characteristic diagramAdd element by element to get dimension ofCharacteristic diagram of
g-13) feature mapSequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension ofCharacteristic diagram ofPerforming dimension increasing operation on the first convolution layer, performing channel expansion on the second convolution layer, and forming a feature mapSequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer, extracting channel characteristics by using channel attention, and outputting the channel characteristics to obtain a dimensionality ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain the data with dimensionality ofSpatial attention feature map ofSpatial attention feature mapInputting the data into a fourth convolution layer for dimensionality reduction, and outputting the data to obtain a dimensionality ofCharacteristic diagram ofWill feature mapAnd characteristic diagramAdd element by element to get dimension ofCharacteristic diagram of
g-14) feature mapSequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension ofCharacteristic diagram ofPerforming dimension increasing operation on the first convolution layer, performing channel expansion on the second convolution layer, and forming a feature mapSequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer, extracting channel characteristics by using channel attention, and outputting the channel characteristics to obtain a dimensionality ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain the data with dimensionality ofSpatial attention feature map ofFeature map of spatial attentionInputting the data into the fourth convolution layer for dimensionality reduction, and outputting the data to obtain a dimensionality ofCharacteristic diagram ofWill feature mapAnd characteristic diagramAdd element by element to get dimension ofCharacteristic diagram of
g-15) mapping the characteristicsSequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension ofCharacteristic diagram ofPerforming dimension increasing operation on the first convolution layer, performing channel expansion on the second convolution layer, and forming a feature mapThe data are sequentially input into a first global maximum pooling layer and a first global average pooling layer, channel features are extracted by using channel attention, and then the data are output to obtain the dimension ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension ofSpatial attention feature map ofSpatial attention feature mapInputting the data into the fourth convolution layer for dimensionality reduction, and outputting the data to obtain a dimensionality ofCharacteristic diagram ofWill feature mapAnd characteristic diagramAdd element by element to get dimension ofCharacteristic diagram of
g-16) mapping the characteristicsSequentially inputting the first convolution layer and the second convolution layer and outputting the first convolution layer and the second convolution layer to obtain a dimension ofCharacteristic diagram ofPerforming dimension increasing operation on the first convolution layer, performing channel expansion on the second convolution layer, and forming a feature mapThe data are sequentially input into a first global maximum pooling layer and a first global average pooling layer, channel features are extracted by using channel attention, and then the data are output to obtain the dimension ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain the data with dimensionality ofSpatial attention feature map ofFeature map of spatial attentionInputting the data into the fourth convolution layer for dimensionality reduction, and outputting the data to obtain a dimensionality ofCharacteristic diagram ofWill feature mapAnd characteristic diagramElement by element addition to a dimension ofCharacteristic diagram of
g-17) mapping the characteristicsSequentially inputting the first convolution layer and the second convolution layer and outputting the first convolution layer and the second convolution layer to obtain a dimension ofCharacteristic diagram ofPerforming dimensionality increase operation on the first convolution layer, performing channel expansion on the second convolution layer, and generating a characteristic diagramThe data are sequentially input into a first global maximum pooling layer and a first global average pooling layer, channel features are extracted by using channel attention, and then the data are output to obtain the dimension ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension ofSpatial attention feature map ofFeature map of spatial attentionInputting the data into a fourth convolution layer for dimensionality reduction, and outputting the data to obtain a dimensionality ofCharacteristic diagram ofWill feature mapAnd characteristic diagramElement by element addition to a dimension ofCharacteristic diagram of
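Steps g-3) through g-17) all repeat the same CBAM-MBConv pattern: expansion convolutions, channel attention via paired global poolings, spatial attention via a 7×7 convolution with a sigmoid gate, a projection convolution for dimensionality reduction, and an element-wise add. A minimal NumPy sketch of one such step follows; the shapes, the identity stand-ins for the convolutions, and the gating arithmetic are illustrative assumptions, not the patent's trained layers.

```python
import numpy as np

def cbam_mbconv_block(x, expand=6):
    """Sketch of one CBAM-MBConv step: channel expansion, channel attention,
    spatial attention, projection back to the input width, and an
    element-wise residual add. Convolutions are replaced by identity-like
    placeholders so the data flow, not the learned weights, is shown."""
    C, H, W = x.shape

    # stand-in for the 1x1 expansion conv: replicate channels by the ratio
    expanded = np.repeat(x, expand, axis=0)                # (C*expand, H, W)

    # channel attention: global max + average pooling over space, sigmoid gate
    ch_max = expanded.max(axis=(1, 2))
    ch_avg = expanded.mean(axis=(1, 2))
    ch_gate = 1.0 / (1.0 + np.exp(-(ch_max + ch_avg)))     # (C*expand,)
    y = expanded * ch_gate[:, None, None]

    # spatial attention: max + average pooling over channels, sigmoid gate
    # (standing in for the 7x7 conv + sigmoid of the patent)
    sp_max = y.max(axis=0)
    sp_avg = y.mean(axis=0)
    sp_gate = 1.0 / (1.0 + np.exp(-(sp_max + sp_avg)))     # (H, W)
    y = y * sp_gate[None, :, :]

    # stand-in for the 1x1 projection conv: average expanded channels back to C
    projected = y.reshape(C, expand, H, W).mean(axis=1)    # (C, H, W)

    # element-wise add, as at the end of each g-step
    return x + projected

out = cbam_mbconv_block(np.random.randn(8, 16, 16))
```

The block keeps the input shape, which is what lets fifteen such modules be chained as in steps g-3) through g-17).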
Example 5:
The final feature map Y 81 obtained from feature extraction with the CBAM-MBConv modules consists of the task-one feature map and the task-two feature map, and is input into the global pooling layer for the final output. The global pooling layer is modified into two output branches, the coarse-grained branch being denoted Y a and the fine-grained branch Y b . In step h), the i-th result y ai is calculated by the stated formula, where H ai is the height of the i-th result y ai , W ai its width, and C ai its number of channels;
the j-th result y bj is calculated by the corresponding formula, where H bj is the height of the j-th result y bj , W bj its width, and C bj its number of channels.
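The two branch formulas reduce each final feature map to one value per channel by averaging over its height and width, i.e. multiplying the spatial sum by 1/(H·W). A small NumPy sketch, with illustrative channel counts and spatial sizes:

```python
import numpy as np

def global_avg_pool(feature_map):
    """Global average pooling of one (C, H, W) feature map:
    each output channel is (1 / (H * W)) * sum over h and w."""
    return feature_map.mean(axis=(1, 2))

# hypothetical final task feature maps; the (C, H, W) shapes are illustrative
Y_task1 = np.random.randn(4, 7, 7)   # feeds the coarse-grained branch Y a
Y_task2 = np.random.randn(6, 7, 7)   # feeds the fine-grained branch Y b

y_a = global_avg_pool(Y_task1)       # coarse-grained results y ai
y_b = global_avg_pool(Y_task2)       # fine-grained results y bj
```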
Example 6:
The model is optimized through a loss function. Because the invention consists of two tasks, a coarse-grained branch and a fine-grained branch, optimization cannot rely on the loss function of a single task alone; the loss function must be adapted so that it suits a multi-task neural network. The method therefore further comprises, after step h), performing the following steps:
i) Calculating the loss function L by the stated formula, where λ and β are both weights satisfying λ + β = 1, and softmax(·) is the softmax activation function.
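The weighted combination of the two branch losses can be sketched as follows. The per-task terms are assumed here to be softmax cross-entropies, since the patent's exact formula image is not reproduced in the text; only the λ·L_a + β·L_b structure with λ + β = 1 is taken from the description.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(logits, label):
    """Cross-entropy of one sample against an integer class label."""
    return -np.log(softmax(logits)[label] + 1e-12)

def multitask_loss(logits_a, label_a, logits_b, label_b, lam=0.5):
    """L = lam * L_a + beta * L_b with beta = 1 - lam, so lam + beta = 1.
    logits_a: coarse-grained branch (normal / abnormal),
    logits_b: fine-grained branch (specific arrhythmia type)."""
    beta = 1.0 - lam
    return lam * cross_entropy(logits_a, label_a) \
         + beta * cross_entropy(logits_b, label_b)

L = multitask_loss(np.array([2.0, 0.5]), 0,
                   np.array([0.1, 0.3, 1.2, 0.0, 0.2]), 2,
                   lam=0.6)
```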
j) Updating the parameters of the MTEF-NET network model of step e) with the loss function L through the Adam optimizer, and saving the trained model and its parameters after 100 training rounds.
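A single Adam update, as invoked repeatedly in step j), can be sketched in NumPy. The learning rate and the toy objective w² are illustrative; only the update rule itself is standard Adam.

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam parameter update. m and v are the running first- and
    second-moment estimates; t is the 1-based step counter used for
    bias correction."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v

# minimise the toy loss f(w) = w^2 (gradient 2w) for 100 steps,
# mirroring the 100 training rounds of step j)
w, m, v = np.array(3.0), 0.0, 0.0
for t in range(1, 101):
    w, m, v = adam_step(w, 2 * w, m, v, t, lr=0.1)
```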
Finally, it should be noted that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that changes may be made in the embodiments and/or equivalents thereof without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (7)
1. An electrocardiogram classification method based on multi-task MTEF-NET, characterized by comprising the following steps:
a) Obtaining the original electrocardiosignals X = {x 1 , x 2 , …, x i , …, x n } from the MIT-BIH arrhythmia database, where x i is the i-th original electrocardiosignal, i ∈ {1, …, n}, and n is the total number of original electrocardiosignals;
b) Preprocessing the original electrocardiosignals X to remove their noise, obtaining the clean electrocardiosignals U = {u 1 , u 2 , …, u i , …, u n }, where u i is the i-th clean electrocardiosignal, i ∈ {1, …, n};
c) Converting the clean electrocardiosignals U into a two-dimensional image data set Y = {y 1 , y 2 , …, y i , …, y n } by means of the continuous wavelet transform, where y i is the i-th two-dimensional image, i ∈ {1, …, n}, and the two-dimensional image y i has height H, width W, channel number C, and dimension H × W × C;
d) Dividing the two-dimensional image data set Y in equal proportion into two halves of 50% each, where the first divided data set Y 1 is used for detecting whether the electrocardiosignals are normal; in the second divided data set, the wavelet-transformed images are subjected to mirror or inversion processing to obtain the data set Y 2 used for judging the specific type of the electrocardiosignals;
e) Establishing the MTEF-NET network model, which sequentially consists of a convolution layer, a CBAM-MBConv1 module, a first CBAM-MBConv6 module, a second CBAM-MBConv6 module, a third CBAM-MBConv6 module, a fourth CBAM-MBConv6 module, a fifth CBAM-MBConv6 module, a sixth CBAM-MBConv6 module, a seventh CBAM-MBConv6 module, an eighth CBAM-MBConv6 module, a ninth CBAM-MBConv6 module, a tenth CBAM-MBConv6 module, an eleventh CBAM-MBConv6 module, a twelfth CBAM-MBConv6 module, a thirteenth CBAM-MBConv6 module, a fourteenth CBAM-MBConv6 module, a fifteenth CBAM-MBConv6 module, and a global pooling layer;
f) Inputting each two-dimensional image of data set Y 1 into the MTEF-NET network model and outputting a feature map set for task one;
g) Inputting each two-dimensional image of data set Y 2 into the MTEF-NET network model and outputting a feature map set for task two;
h) Inputting the task-one feature map set into the global pooling layer of the MTEF-NET network model and outputting the electrocardiosignal classification results, where y ai is the i-th result; inputting the task-two feature map set into the global pooling layer of the MTEF-NET network model and outputting the electrocardiosignal classification results, where y bj is the j-th result.
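The mirror/inversion augmentation of step d) can be illustrated on an (H, W, C) array. That "mirror" and "inversion" mean horizontal and vertical flips is an assumption, since the claim does not specify the axes.

```python
import numpy as np

def augment(image, mode):
    """Mirror / inversion augmentation for the fine-grained data set Y 2,
    applied to the wavelet-transformed (H, W, C) images.
    "mirror" flips left-right, "invert" flips top-bottom (assumed axes)."""
    if mode == "mirror":
        return image[:, ::-1, :]
    if mode == "invert":
        return image[::-1, :, :]
    raise ValueError(f"unknown augmentation mode: {mode}")

# tiny 2x3 single-channel example so the flips are easy to check by eye
img = np.arange(6).reshape(2, 3, 1)
mirrored = augment(img, "mirror")
inverted = augment(img, "invert")
```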
2. The multi-task MTEF-NET-based electrocardiogram classification method according to claim 1, characterized in that: in step b), the noise in the original electrocardiosignals X = {x 1 , x 2 , …, x i , …, x n } is removed with two median filters to obtain the clean electrocardiosignals U = {u 1 , u 2 , …, u i , …, u n }, the first median filter having a width of 300 ms and the second median filter a width of 600 ms.
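Claim 2 describes the classic two-stage median-filter baseline-wander removal: the 300 ms filter suppresses QRS complexes and P waves, the 600 ms filter then estimates the drifting baseline, which is subtracted from the signal. A NumPy sketch, assuming the MIT-BIH sampling rate of 360 Hz (an assumption; the claim states only the window widths):

```python
import numpy as np

def median_filter(signal, width):
    """Sliding-window median with edge padding (width forced odd by caller)."""
    pad = width // 2
    padded = np.pad(signal, pad, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, width)
    return np.median(windows, axis=1)

def remove_baseline(signal, fs=360):
    """Two-stage median filtering: ~300 ms then ~600 ms windows estimate
    the baseline wander, which is subtracted from the raw signal."""
    w1 = int(0.3 * fs) | 1        # ~300 ms window, forced to odd length
    w2 = int(0.6 * fs) | 1        # ~600 ms window, forced to odd length
    baseline = median_filter(median_filter(signal, w1), w2)
    return signal - baseline

# a synthetic signal with a constant 0.5 offset standing in for wander
clean = remove_baseline(np.sin(np.linspace(0, 4 * np.pi, 1024)) + 0.5)
```

A constant-offset input is returned as all zeros, which is the defining property of baseline removal.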
3. The multi-task MTEF-NET-based electrocardiogram classification method according to claim 1, characterized in that: in step c), the i-th two-dimensional image y i is obtained by calculation with the stated formula, where a is the transformation scale, b is the translation factor, and φ(·) is the mother wavelet.
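The claimed transform can be discretised directly from the standard continuous-wavelet form W(a, b) = a^(-1/2) · Σ_t u(t) · φ((t − b)/a) for a sampled signal u. The Mexican-hat mother wavelet below is an example choice, since claim 3 leaves φ unspecified.

```python
import numpy as np

def cwt_coefficient(u, a, b, mother_wavelet):
    """One continuous-wavelet-transform coefficient of a sampled signal u:
    W(a, b) = (1 / sqrt(a)) * sum_t u[t] * phi((t - b) / a),
    with scale a and translation b; phi is real-valued here."""
    t = np.arange(len(u))
    return (1.0 / np.sqrt(a)) * np.sum(u * mother_wavelet((t - b) / a))

def mexican_hat(t):
    # an example mother wavelet (an assumption; the patent does not name one)
    return (1 - t ** 2) * np.exp(-t ** 2 / 2)

# a coarse scalogram of a sine: rows are scales a, columns translations b
u = np.sin(np.linspace(0, 2 * np.pi, 128))
scalogram = np.array([[cwt_coefficient(u, a, b, mexican_hat)
                       for b in range(0, 128, 8)]
                      for a in (2.0, 4.0, 8.0)])
```

Such a scale-by-translation grid of coefficients is what becomes the two-dimensional image y i fed to the network.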
4. The multi-task MTEF-NET-based electrocardiogram classification method according to claim 1, wherein step f) comprises the steps of:
f-1) The two-dimensional image is input into the MTEF-NET network model; it is first input into the convolution layer with a 3×3 convolution kernel and output to obtain a feature map;
f-2) The CBAM-MBConv1 module sequentially consists of a first convolution layer with a 1×1 convolution kernel, a second convolution layer with a 3×3 convolution kernel and an expansion ratio of 1, a first global max-pooling layer, a first global average-pooling layer, a second global max-pooling layer, a second global average-pooling layer, a third convolution layer with a 7×7 convolution kernel, a sigmoid activation function layer, and a fourth convolution layer with a 1×1 convolution kernel. The feature map is input sequentially into the first convolution layer and the second convolution layer and output to obtain a feature map; this feature map is input sequentially into the first global max-pooling layer and the first global average-pooling layer and output to obtain a feature map; that feature map is input sequentially into the second global max-pooling layer, the second global average-pooling layer, the third convolution layer with a 7×7 convolution kernel, and the sigmoid activation function layer, and output to obtain a spatial attention feature map; the spatial attention feature map is input into the fourth convolution layer and output to obtain a feature map, which is added element-wise to the preceding feature map to obtain the output feature map of this module;
f-3) The first CBAM-MBConv6 module sequentially consists of a first convolution layer with a 1×1 convolution kernel, a second convolution layer with a 3×3 convolution kernel and an expansion ratio of 6, a first global max-pooling layer, a first global average-pooling layer, a second global max-pooling layer, a second global average-pooling layer, a third convolution layer with a 7×7 convolution kernel, a sigmoid activation function layer, and a fourth convolution layer with a 1×1 convolution kernel. The feature map is input sequentially into the first convolution layer and the second convolution layer and output to obtain a feature map; this feature map is input sequentially into the first global max-pooling layer and the first global average-pooling layer and output to obtain a feature map; that feature map is input sequentially into the second global max-pooling layer, the second global average-pooling layer, the third convolution layer with a 7×7 convolution kernel, and the sigmoid activation function layer, and output to obtain a spatial attention feature map; the spatial attention feature map is input into the fourth convolution layer and output to obtain a feature map, which is added element-wise to the preceding feature map to obtain the output feature map of this module;
f-4) The second CBAM-MBConv6 module sequentially consists of a first convolution layer with a 1×1 convolution kernel, a second convolution layer with a 3×3 convolution kernel and an expansion ratio of 6, a first global max-pooling layer, a first global average-pooling layer, a second global max-pooling layer, a second global average-pooling layer, a third convolution layer with a 7×7 convolution kernel, a sigmoid activation function layer, and a fourth convolution layer with a 1×1 convolution kernel. The feature map is input sequentially into the first convolution layer and the second convolution layer and output to obtain a feature map; this feature map is input sequentially into the first global max-pooling layer and the first global average-pooling layer and output to obtain a feature map; that feature map is input sequentially into the second global max-pooling layer, the second global average-pooling layer, the third convolution layer with a 7×7 convolution kernel, and the sigmoid activation function layer, and output to obtain a spatial attention feature map; the spatial attention feature map is input into the fourth convolution layer and output to obtain a feature map, which is added element-wise to the preceding feature map to obtain the output feature map of this module;
f-5) The third CBAM-MBConv6 module sequentially consists of a first convolution layer with a 1×1 convolution kernel, a second convolution layer with a 5×5 convolution kernel and an expansion ratio of 6, a first global max-pooling layer, a first global average-pooling layer, a second global max-pooling layer, a second global average-pooling layer, a third convolution layer with a 7×7 convolution kernel, a sigmoid activation function layer, and a fourth convolution layer with a 1×1 convolution kernel. The feature map is input sequentially into the first convolution layer and the second convolution layer and output to obtain a feature map; this feature map is input sequentially into the first global max-pooling layer and the first global average-pooling layer and output to obtain a feature map; that feature map is input sequentially into the second global max-pooling layer, the second global average-pooling layer, the third convolution layer with a 7×7 convolution kernel, and the sigmoid activation function layer, and output to obtain a spatial attention feature map; the spatial attention feature map is input into the fourth convolution layer and output to obtain a feature map, which is added element-wise to the preceding feature map to obtain the output feature map of this module;
f-6) The fourth CBAM-MBConv6 module sequentially consists of a first convolution layer with a 1×1 convolution kernel, a second convolution layer with a 5×5 convolution kernel and an expansion ratio of 6, a first global max-pooling layer, a first global average-pooling layer, a second global max-pooling layer, a second global average-pooling layer, a third convolution layer with a 7×7 convolution kernel, a sigmoid activation function layer, and a fourth convolution layer with a 1×1 convolution kernel. The feature map is input sequentially into the first convolution layer and the second convolution layer and output to obtain a feature map; this feature map is input sequentially into the first global max-pooling layer and the first global average-pooling layer and output to obtain a feature map; that feature map is input sequentially into the second global max-pooling layer, the second global average-pooling layer, the third convolution layer with a 7×7 convolution kernel, and the sigmoid activation function layer, and output to obtain a spatial attention feature map; the spatial attention feature map is input into the fourth convolution layer and output to obtain a feature map, which is added element-wise to the preceding feature map to obtain the output feature map of this module;
f-7) The fifth CBAM-MBConv6 module sequentially consists of a first convolution layer with a 1×1 convolution kernel, a second convolution layer with a 3×3 convolution kernel and an expansion ratio of 6, a first global max-pooling layer, a first global average-pooling layer, a second global max-pooling layer, a second global average-pooling layer, a third convolution layer with a 7×7 convolution kernel, a sigmoid activation function layer, and a fourth convolution layer with a 1×1 convolution kernel. The feature map is input sequentially into the first convolution layer and the second convolution layer and output to obtain a feature map; this feature map is input sequentially into the first global max-pooling layer and the first global average-pooling layer and output to obtain a feature map; that feature map is input sequentially into the second global max-pooling layer, the second global average-pooling layer, the third convolution layer with a 7×7 convolution kernel, and the sigmoid activation function layer, and output to obtain a spatial attention feature map; the spatial attention feature map is input into the fourth convolution layer and output to obtain a feature map, which is added element-wise to the preceding feature map to obtain the output feature map of this module;
f-8) The sixth CBAM-MBConv6 module sequentially consists of a first convolution layer with a 1×1 convolution kernel, a second convolution layer with a 3×3 convolution kernel and an expansion ratio of 6, a first global max-pooling layer, a first global average-pooling layer, a second global max-pooling layer, a second global average-pooling layer, a third convolution layer with a 7×7 convolution kernel, a sigmoid activation function layer, and a fourth convolution layer with a 1×1 convolution kernel. The feature map is input sequentially into the first convolution layer and the second convolution layer and output to obtain a feature map; this feature map is input sequentially into the first global max-pooling layer and the first global average-pooling layer and output to obtain a feature map; that feature map is input sequentially into the second global max-pooling layer, the second global average-pooling layer, the third convolution layer with a 7×7 convolution kernel, and the sigmoid activation function layer, and output to obtain a spatial attention feature map; the spatial attention feature map is input into the fourth convolution layer and output to obtain a feature map, which is added element-wise to the preceding feature map to obtain the output feature map of this module;
f-9) The seventh CBAM-MBConv6 module sequentially consists of a first convolution layer with a 1×1 convolution kernel, a second convolution layer with a 3×3 convolution kernel and an expansion ratio of 6, a first global max-pooling layer, a first global average-pooling layer, a second global max-pooling layer, a second global average-pooling layer, a third convolution layer with a 7×7 convolution kernel, a sigmoid activation function layer, and a fourth convolution layer with a 1×1 convolution kernel. The feature map is input sequentially into the first convolution layer and the second convolution layer and output to obtain a feature map; this feature map is input sequentially into the first global max-pooling layer and the first global average-pooling layer and output to obtain a feature map; that feature map is input sequentially into the second global max-pooling layer, the second global average-pooling layer, the third convolution layer with a 7×7 convolution kernel, and the sigmoid activation function layer, and output to obtain a spatial attention feature map; the spatial attention feature map is input into the fourth convolution layer and output to obtain a feature map, which is added element-wise to the preceding feature map to obtain the output feature map of this module;
f-10) the eighth CBAM-MBConv6 module sequentially comprises a first convolution layer with convolution kernel size of 1 x 1, a second convolution layer with convolution kernel size of 5 x 5 and expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7, a sigmoid activation function layer and a fourth convolution layer with convolution kernel size of 1 x 1, and the feature map is formed by the first convolution layer with convolution kernel size of 1 x 1, the second convolution layer, the third convolution layer and the fourth convolution layerSequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer and then outputting the data to obtain a dimensionality ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain the data with dimensionality ofSpatial attention feature map ofSpatial attention feature mapInput to the fourth convolution layer and output with a resulting dimension ofCharacteristic diagram ofWill feature mapAnd characteristic diagramAdd element by element to get dimension ofCharacteristic diagram of
f-11) the ninth CBAM-MBConv6 module is composed of, in order, a first convolution layer with convolution kernel size of 1 × 1, a second convolution layer with convolution kernel size of 5 × 5 and expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 × 7, a sigmoid activation function layer, and a fourth convolution layer with convolution kernel size of 1 × 1, and the feature map is formed by combining the first convolution layer with convolution kernel size of 1 × 1, the second convolution layer, the first global maximum pooling layer, the second global average pooling layer, the third convolution layer with convolution kernel size of 7 × 7, the sigmoid activation function layer, and the fourth convolution layer with convolution kernel size of 1 × 1Sequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer and then outputting the data to obtain a dimension ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain the data with dimensionality ofSpatial attention feature map of (1)Feature map of spatial attentionInput into the fourth convolution layer and output to a bit density ofCharacteristic diagram ofWill feature mapAnd characteristic diagramAdd element by element to get dimension ofCharacteristic diagram of
f-12) a tenth CBAM-MBConv6 module sequentially comprises a first convolution layer with convolution kernel size of 1 x 1, a second convolution layer with convolution kernel size of 5 x 5 and expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7, a sigmoid activation function layer and a fourth convolution layer with convolution kernel size of 1 x 1, and a feature map is formed by the first convolution layer with convolution kernel size of 1 x 1, the second convolution layer, the first global maximum pooling layer and the second global average pooling layerSequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer and then outputting the data to obtain a dimension ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension ofSpatial attention feature map of (1)Feature map of spatial attentionInput to the fourth convolution layer and output with a dimensionality ofCharacteristic diagram ofWill feature mapAnd characteristic diagramElement by element addition to a dimension ofCharacteristic diagram of
f-13) the eleventh CBAM-MBConv6 module is composed of, in order, a first convolution layer with convolution kernel size of 1 × 1, a second convolution layer with convolution kernel size of 5 × 5 and expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 × 7, a sigmoid activation function layer, and a fourth convolution layer with convolution kernel size of 1 × 1, and the feature map is formed by combining the first convolution layer with convolution kernel size of 1 × 1, the second convolution layer, the first global maximum pooling layer, the second global average pooling layer, the third convolution layer with convolution kernel size of 7 × 7, the sigmoid activation function layer, and the fourth convolution layer with convolution kernel size of 1 × 1Sequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer and then outputting the data to obtain a dimensionality ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension ofSpatial attention feature map ofFeature map of spatial attentionInput to the fourth convolution layer and output with a resulting dimension ofCharacteristic diagram ofWill feature mapAnd characteristic diagramAdd element by element to get dimension ofCharacteristic diagram of
f-14) the twelfth CBAM-MBConv6 module sequentially comprises a first convolution layer with convolution kernel size of 1 x 1, a second convolution layer with convolution kernel size of 5 x 5 and expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7, a sigmoid activation function layer and a fourth convolution layer with convolution kernel size of 1 x 1, and the feature map is formed by the first convolution layer with convolution kernel size of 1 x 1, the second convolution layer, the third convolution layer and the fourth convolution layerSequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer and then outputting the data to obtain a dimension ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension ofSpatial attention feature map ofSpatial attention feature mapInput to the fourth convolution layer and output with a dimensionality ofCharacteristic diagram ofWill feature mapAnd characteristic diagramAdd element by element to get dimension ofCharacteristic diagram of
f-15) the thirteenth CBAM-MBConv6 module sequentially comprises a first convolution layer with convolution kernel size of 1 x 1, a second convolution layer with convolution kernel size of 5 x 5 and expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer and a second global maximum pooling layerA second global average pooling layer, a third convolution layer with convolution kernel size of 7 × 7, a sigmoid activation function layer, and a fourth convolution layer with convolution kernel size of 1 × 1, and forming the feature mapSequentially inputting the first convolution layer and the second convolution layer and outputting the first convolution layer and the second convolution layer to obtain a dimension ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer and then outputting the data to obtain a dimension ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension ofSpatial attention feature map ofSpatial attention feature mapInput to the fourth convolution layer and output with a dimensionality ofCharacteristic diagram ofWill feature mapAnd characteristic diagramAdd element by element to get dimension ofCharacteristic diagram of
f-16) a fourteenth CBAM-MBConv6 module sequentially comprises a first convolution layer with convolution kernel size of 1 x 1, a second convolution layer with convolution kernel size of 5 x 5 and expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7, a sigmoid activation function layer and a fourth convolution layer with convolution kernel size of 1 x 1, and the feature map is formed by the first convolution layer with convolution kernel size of 1 x 1, the second convolution layer, the third convolution layer and the fourth convolution layerSequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer and then outputting the data to obtain a dimensionality ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain the data with dimensionality ofSpatial attention feature map ofSpatial attention feature mapInput to the fourth convolution layer and output with a dimensionality ofCharacteristic diagram ofWill feature mapAnd characteristic diagramAdd element by element to get dimension ofCharacteristic diagram of
f-17) a fifteenth CBAM-MBConv6 module sequentially comprises a first convolution layer with convolution kernel size of 1 x 1, a second convolution layer with convolution kernel size of 3 x 3 and expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7, a sigmoid activation function layer and a fourth convolution layer with convolution kernel size of 1 x 1, and a feature map is formed by the first convolution layer with convolution kernel size of 1 x 1, the second convolution layer, the first global maximum pooling layer and the second global average pooling layerSequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer and then outputting the data to obtain a dimension ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension ofSpatial attention feature map of (1)Spatial attention feature mapInput to the fourth convolution layer and output with a dimensionality ofCharacteristic diagram ofWill feature mapAnd characteristic diagramElement by element addition to a dimension ofCharacteristic diagram of
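The pooling/convolution/sigmoid pipeline in each module is a CBAM-style spatial-attention mechanism. The sketch below is a minimal NumPy illustration of that idea only, not the patented implementation: the 7×7 kernel weights are placeholders (a trained model would learn them), and the max/average pooled maps are fused by simple addition as a stand-in for whatever fusion the original uses.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conv2d_same(x, kernel):
    """Naive 'same'-padded 2-D convolution of a single-channel map (illustrative only)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    H, W = x.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * kernel)
    return out

def spatial_attention(feat, kernel):
    """CBAM-style spatial attention on a (C, H, W) feature map.

    Max- and average-pool across the channel axis, convolve the pooled map
    with a 7x7 kernel, and squash with a sigmoid to get per-position weights.
    """
    max_pool = feat.max(axis=0)      # (H, W)
    avg_pool = feat.mean(axis=0)     # (H, W)
    pooled = max_pool + avg_pool     # simplified fusion of the two pooled maps
    return sigmoid(conv2d_same(pooled, kernel))  # (H, W), values in (0, 1)

# Toy input: 4 channels on an 8x8 grid, with a fixed 7x7 averaging kernel.
rng = np.random.default_rng(0)
feat = rng.standard_normal((4, 8, 8))
kernel = np.full((7, 7), 1.0 / 49.0)
attn = spatial_attention(feat, kernel)
out = feat + feat * attn[None, :, :]  # attention-weighted residual output
```

The residual addition at the end mirrors the element-wise addition that closes each module description above.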
5. The multi-task MTEF-NET-based electrocardiogram classification method according to claim 4, wherein step g) comprises the steps of:
g-1) two-dimensional imagingInputting the data into an MTEF-NET network model, entering a convolution layer with a convolution kernel size of 3 x 3, and outputting the convolution layer with the obtained dimension of 3 x 3Characteristic diagram of
g-2) mapping the characteristicsSequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer and then outputting the data to obtain a dimension ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain the data with dimensionality ofSpatial attention feature map ofFeature map of spatial attentionInput to the fourth convolution layer and output with a dimensionality ofCharacteristic diagram ofWill feature mapAnd characteristic diagramAdd element by element to get dimension ofCharacteristic diagram of
g-3) feature mapSequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer and then outputting the data to obtain a dimensionality ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension ofSpatial attention feature map ofFeature map of spatial attentionInput to the fourth convolution layer and output with a dimensionality ofCharacteristic diagram ofWill feature mapAnd characteristic diagramElement by element addition to a dimension ofCharacteristic diagram of
g-4) feature mapSequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer and then outputting the data to obtain a dimensionality ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension ofSpatial attention feature map of (1)Spatial attention feature mapInput to the fourth convolution layer and output with a dimensionality ofCharacteristic diagram ofWill feature mapAnd characteristic diagramElement by element addition to a dimension ofCharacteristic diagram of
g-5) feature mapSequentially inputting the first convolution layer and the second convolution layer and outputting the first convolution layer and the second convolution layer to obtain a dimension ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer and then outputting the data to obtain a dimension ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain the data with dimensionality ofSpatial attention feature map of (1)Feature map of spatial attentionInput to the fourth convolution layer and output with a dimensionality ofCharacteristic diagram ofWill feature mapAnd characteristic diagramAdd element by element to get dimension ofCharacteristic diagram of
g-6) feature mapSequentially inputting into the first convolution layer and the second convolution layer and outputtingTo dimension ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer and then outputting the data to obtain a dimension ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension ofSpatial attention feature map ofSpatial attention feature mapInput to the fourth convolution layer and output with a resulting dimension ofCharacteristic diagram ofWill feature mapAnd characteristic diagramAdd element by element to get dimension ofCharacteristic diagram of
g-7) feature mapSequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer and then outputting the data to obtain a dimensionality ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and sigmoid activationThe dimensionality obtained by outputting the function layer isSpatial attention feature map y of j,29 Will pay attention to the spatial feature map y j,29 Input to the fourth convolution layer and output with a resulting dimension ofCharacteristic diagram ofWill feature mapAnd characteristic diagramElement by element addition to a dimension ofCharacteristic diagram of
g-8) feature mapSequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension ofCharacteristic diagram ofWill feature mapSequentially input into the first global maximum pooling layer and the second global maximum pooling layerA global average pooling layer is then output with the dimension ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain the data with dimensionality ofSpatial attention feature map of (1)Spatial attention feature mapInput to the fourth convolution layer and output with a dimensionality ofCharacteristic diagram ofWill feature mapAnd characteristic diagramElement by element addition to a dimension ofCharacteristic diagram of
g-9) feature mapSequentially inputting the first convolution layer and the second convolution layer and outputting the first convolution layer and the second convolution layer to obtain a dimension ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer and then outputting the data to obtain a dimension ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension ofSpatial attention feature map of (1)Spatial attention feature mapInput to the fourth convolution layer and output with a resulting dimension ofCharacteristic diagram ofWill feature mapAnd characteristic diagramAdd element by element to get dimension ofCharacteristic diagram of
g-10) will feature mapSequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer and then outputting the data to obtain a dimension ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension ofSpatial attention feature map of (1)Feature map of spatial attentionInput to the fourth convolution layer and output with a resulting dimension ofCharacteristic diagram ofWill feature mapAnd characteristic diagramAdd element by element to get dimension ofCharacteristic diagram of
g-11) mapping the characteristicsSequentially inputting the first convolution layer and the second convolution layer and outputting the first convolution layer and the second convolution layer to obtain a dimension ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer and then outputting the data to obtain a dimensionality ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain the data with dimensionality ofSpatial attention feature map ofFeature map of spatial attentionInput to the fourth convolution layer and output with a resulting dimension ofCharacteristic diagram ofWill feature mapAnd characteristic diagramAdd element by element to get dimension ofCharacteristic diagram of
g-12) feature mapSequentially inputting the first convolution layer and the second convolution layer and outputting the first convolution layer and the second convolution layer to obtain a dimension ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer and then outputting the data to obtain a dimension ofCharacteristic diagram ofWill feature mapInput in sequenceOutputting to a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer to obtain a dimensionality ofSpatial attention feature map ofSpatial attention feature mapInput to the fourth convolution layer and output with a resulting dimension ofCharacteristic diagram ofWill feature mapAnd characteristic diagramElement by element addition to a dimension ofCharacteristic diagram of
g-13) mapping the characteristicsSequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer and then outputting the data to obtain a dimension ofCharacteristic diagram ofWill feature mapSequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension ofSpatial attention feature map ofSpatial attention feature mapInput to the fourth convolution layer and output with a resulting dimension ofCharacteristic diagram ofWill feature mapAnd characteristic diagramAdd element by element to get dimension ofCharacteristic diagram of
g-14) inputting the feature map sequentially into the first convolution layer and the second convolution layer to output a feature map; inputting that feature map sequentially into the first global max pooling layer and the first global average pooling layer to output a pooled feature map; inputting the pooled feature map sequentially into the second global max pooling layer, the second global average pooling layer, a third convolution layer with a convolution kernel size of 7 × 7, and a sigmoid activation function layer to output a spatial attention feature map; inputting the spatial attention feature map into the fourth convolution layer to output a feature map; and adding that feature map and the step's input feature map element by element to obtain the output feature map of this step;
g-15) inputting the feature map sequentially into the first convolution layer and the second convolution layer to output a feature map; inputting that feature map sequentially into the first global max pooling layer and the first global average pooling layer to output a pooled feature map; inputting the pooled feature map sequentially into the second global max pooling layer, the second global average pooling layer, a third convolution layer with a convolution kernel size of 7 × 7, and a sigmoid activation function layer to output a spatial attention feature map; inputting the spatial attention feature map into the fourth convolution layer to output a feature map; and adding that feature map and the step's input feature map element by element to obtain the output feature map of this step;
g-16) inputting the feature map sequentially into the first convolution layer and the second convolution layer to output a feature map; inputting that feature map sequentially into the first global max pooling layer and the first global average pooling layer to output a pooled feature map; inputting the pooled feature map sequentially into the second global max pooling layer, the second global average pooling layer, a third convolution layer with a convolution kernel size of 7 × 7, and a sigmoid activation function layer to output a spatial attention feature map; inputting the spatial attention feature map into the fourth convolution layer to output a feature map; and adding that feature map and the step's input feature map element by element to obtain the output feature map of this step;
g-17) inputting the feature map sequentially into the first convolution layer and the second convolution layer to output a feature map; inputting that feature map sequentially into the first global max pooling layer and the first global average pooling layer to output a pooled feature map; inputting the pooled feature map sequentially into the second global max pooling layer, the second global average pooling layer, a third convolution layer with a convolution kernel size of 7 × 7, and a sigmoid activation function layer to output a spatial attention feature map; inputting the spatial attention feature map into the fourth convolution layer to output a feature map; and adding that feature map and the step's input feature map element by element to obtain the output feature map of this step.
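The block repeated in steps g-14 through g-17 is a CBAM-style spatial attention unit: channel-wise max and average pooling, a 7 × 7 convolution, and a sigmoid gate, followed by a residual addition. The following is a minimal NumPy sketch of the attention-map computation only; the function name, the random kernel, and the omission of the patent's first, second, and fourth convolution layers are all illustrative assumptions, not the claimed implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def spatial_attention(x, kernel):
    """Sketch of a CBAM-style spatial attention map.

    x      : feature map of shape (H, W, C)
    kernel : 7x7 convolution weights of shape (7, 7, 2)
             (stands in for the claimed third convolution layer)
    Returns an (H, W, 1) attention map with values in (0, 1).
    """
    # Channel-wise max and average pooling -> (H, W, 2)
    pooled = np.stack([x.max(axis=-1), x.mean(axis=-1)], axis=-1)
    H, W, _ = pooled.shape
    # Same-size 7x7 convolution via explicit padding and loops
    p = np.pad(pooled, ((3, 3), (3, 3), (0, 0)), mode="constant")
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(p[i:i + 7, j:j + 7, :] * kernel)
    # Sigmoid gate, kept as a trailing singleton channel
    return sigmoid(out)[..., None]
```

In the claim, the sigmoid output is further passed through a fourth convolution layer and added element by element to the block's input, which this sketch does not reproduce.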
6. The multi-task MTEF-NET-based electrocardiogram classification method according to claim 1, characterized in that:
in step h), the ith result y ai is calculated by a formula in which H ai is the height of the ith result y ai , W ai is its width, and C ai is its number of channels, and the jth result y bj is calculated by a formula in which H bj is the height of the jth result y bj , W bj is its width, and C bj is its number of channels.
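The formulas of claim 6 did not survive extraction. Since each is defined only in terms of a result's height H, width W, and channel count C, one plausible reading is a global average over all elements of the result tensor; the sketch below encodes that assumption and is not the patent's exact definition:

```python
import numpy as np

def global_mean(result):
    # Assumed reconstruction of the claim-6 formula: average a
    # result tensor of shape (H, W, C) over height, width, and
    # channels. The original formula images were lost, so this is
    # an illustrative guess, not the claimed computation.
    H, W, C = result.shape
    return float(result.sum() / (H * W * C))
```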
7. The multi-task MTEF-NET-based electrocardiographic classification method according to claim 1, further comprising the following steps after step h):
i) calculating the loss function L by a formula in which λ and β are both weights with λ + β = 1, and softmax(·) is the softmax activation function;
j) updating the parameters of the MTEF-NET network model in step e) with the loss function L through the Adam optimization function, and saving the trained model and its parameters after 100 training epochs.
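Steps i) and j) describe a weighted two-task loss (λ + β = 1, with softmax outputs) minimized with Adam for 100 epochs. The exact formula for L was lost in extraction; the sketch below assumes each task term is a softmax cross-entropy, which is one common form of such a loss and may differ from the patent's:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax along the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def multitask_loss(logits_a, labels_a, logits_b, labels_b,
                   lam=0.5, beta=0.5):
    """Weighted sum of two cross-entropy terms, lam + beta = 1.

    An assumed form of the patent's loss L; the task names and the
    cross-entropy choice are illustrative, not taken from the claim.
    """
    assert abs(lam + beta - 1.0) < 1e-9

    def cross_entropy(probs, labels):
        picked = probs[np.arange(len(labels)), labels]
        return -float(np.mean(np.log(picked + 1e-12)))

    return (lam * cross_entropy(softmax(logits_a), labels_a)
            + beta * cross_entropy(softmax(logits_b), labels_b))
```

In training, this scalar would be fed to an Adam optimizer (e.g. via an autograd framework) for the 100 epochs the claim specifies.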
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210996094.7A CN115358270B (en) | 2022-08-19 | 2022-08-19 | Electrocardiogram classification method based on multitasking MTEF-NET |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115358270A true CN115358270A (en) | 2022-11-18 |
CN115358270B CN115358270B (en) | 2023-06-20 |
Family
ID=84002094
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110638430A (en) * | 2019-10-23 | 2020-01-03 | 苏州大学 | Multi-task cascade neural network ECG signal arrhythmia disease classification model and method |
CN112674780A (en) * | 2020-12-23 | 2021-04-20 | 山东省人工智能研究院 | Automatic atrial fibrillation signal detection method in electrocardiogram abnormal signals |
WO2022073452A1 (en) * | 2020-10-07 | 2022-04-14 | 武汉大学 | Hyperspectral remote sensing image classification method based on self-attention context network |
CN114781445A (en) * | 2022-04-11 | 2022-07-22 | 山东省人工智能研究院 | Deep neural network electrocardiosignal noise reduction method based on interpretability |
Non-Patent Citations (3)
Title |
---|
MURAT CANAYAZ: "C+EffxNet: A novel hybrid approach for COVID-19 diagnosis on CT images based on CBAM and EfficientNet", Elsevier, pages 1-10 *
XIAOYUN XIE ET AL: "A multi-stage denoising framework for ambulatory ECG signal based on domain knowledge and motion artifact detection", Elsevier, pages 103-116 *
ZHENZHEN MAO ET AL: "Multi-views reinforced LSTM for video-based action recognition", Journal of Electronic Imaging, vol. 30, no. 5, page 053021 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110236543B (en) | Alzheimer disease multi-classification diagnosis system based on deep learning | |
Milligan | A study of the beta-flexible clustering method | |
CN112633195B (en) | Myocardial infarction recognition and classification method based on frequency domain features and deep learning | |
CN110307983B (en) | CNN-Bagging-based unmanned aerial vehicle bearing fault diagnosis method | |
CN107944490A (en) | A kind of image classification method based on half multi-modal fusion feature reduction frame | |
CN113888412B (en) | Image super-resolution reconstruction method for diabetic retinopathy classification | |
CN111274525A (en) | Tensor data recovery method based on multi-linear augmented Lagrange multiplier method | |
CN115689008A (en) | CNN-BilSTM short-term photovoltaic power prediction method and system based on ensemble empirical mode decomposition | |
CN112634214A (en) | Brain network classification method combining node attributes and multilevel topology | |
CN114648048B (en) | Electrocardiosignal noise reduction method based on variational self-coding and PixelCNN model | |
CN116913504A (en) | Self-supervision multi-view knowledge distillation method for single-lead arrhythmia diagnosis | |
CN115358270A (en) | Electrocardiogram classification method based on multi-task MTEF-NET | |
CN116597167B (en) | Permanent magnet synchronous motor small sample demagnetization fault diagnosis method, storage medium and system | |
CN117172294A (en) | Method, system, equipment and storage medium for constructing sparse brain network | |
CN110327034B (en) | Tachycardia electrocardiogram screening method based on depth feature fusion network | |
Chen et al. | Automated sleep staging via parallel frequency-cut attention | |
CN115239674B (en) | Computer angiography imaging synthesis method based on multi-scale discrimination | |
CN115759186A (en) | Six-class motor imagery electroencephalogram signal classification method based on convolutional neural network | |
CN115363594A (en) | Real-time heart disease screening method based on recurrent neural network | |
Xiaoai et al. | An overview of disease prediction based on graph convolutional neural network | |
CN114757911A (en) | Magnetic resonance image auxiliary processing system based on graph neural network and contrast learning | |
Yang et al. | Tensor-based Complex-valued Graph Neural Network for Dynamic Coupling Multimodal brain Networks | |
CN112132790B (en) | DAC-GAN model construction method and application thereof in mammary gland MR image | |
CN112085718B (en) | NAFLD ultrasonic video diagnosis system based on twin attention network | |
Huang et al. | FNSAM: Image super-resolution using a feedback network with self-attention mechanism |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||