CN115358270A - Electrocardiogram classification method based on multi-task MTEF-NET - Google Patents


Info

Publication number: CN115358270A
Authority: CN (China)
Prior art keywords: layer, convolution, feature map, data
Legal status: Granted
Application number: CN202210996094.7A
Other languages: Chinese (zh)
Other versions: CN115358270B
Inventor
舒明雷
耿全成
刘辉
谢小云
刘照阳
高天雷
Current assignee: Qilu University of Technology; Shandong Institute of Artificial Intelligence
Original assignee: Qilu University of Technology; Shandong Institute of Artificial Intelligence
Application filed by Qilu University of Technology and Shandong Institute of Artificial Intelligence
Priority to CN202210996094.7A
Published as CN115358270A; granted and published as CN115358270B
Legal status: Active

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/24: Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316: Modalities, i.e. specific diagnostic methods
    • A61B5/318: Heart-related electrical modalities, e.g. electrocardiography [ECG]
    • A61B5/72: Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7203: Signal processing specially adapted for noise prevention, reduction or removal
    • A61B5/7235: Details of waveform analysis
    • A61B5/7264: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267: Classification of physiological signals or data involving training the classification device
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The method sets up two tasks and establishes a multi-task neural network, using joint multi-task learning to improve the performance of each task in parallel and thereby improve the overall accuracy of electrocardiosignal classification. Different weight proportions are assigned to the different tasks, so the model can balance differences between data samples and alleviate the data-imbalance problem. The established MTEF-NET network model supports the compound scaling of a complex network: by jointly adjusting the depth, the width, and the resolution of the input image, the neural network needs only a small number of network parameters, which reduces model complexity while improving classification accuracy. By constructing the CBAM-MBConv module, the loss of channel and spatial features that occurs after electrocardiosignals are converted into two-dimensional images can be addressed.
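The task weighting described above can be sketched as a weighted sum of per-task cross-entropy losses. The weight values and class counts below are illustrative assumptions, not values taken from the patent:

```python
import numpy as np

def cross_entropy(probs, label):
    """Negative log-likelihood of the true class."""
    return -np.log(probs[label])

def multitask_loss(probs_a, label_a, probs_b, label_b, w_a=0.4, w_b=0.6):
    """Weighted sum of the two task losses; weights are illustrative."""
    return w_a * cross_entropy(probs_a, label_a) + w_b * cross_entropy(probs_b, label_b)

# Task 1: normal vs. abnormal; Task 2: specific signal type (4 toy classes)
loss = multitask_loss(np.array([0.9, 0.1]), 0,
                      np.array([0.1, 0.7, 0.1, 0.1]), 1)
```

Giving the harder or rarer task a larger weight is one way to counteract class imbalance, since its errors then contribute more to the gradient.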

Description

Electrocardiogram classification method based on multi-task MTEF-NET
Technical Field
The invention relates to an electrocardiogram classification method based on the multi-task MTEF-NET.
Background
The electrocardiogram (ECG) signal is an important electrical signal for recording cardiac activity. It records changes in the heart's waveform, intuitively reflects changes in the heartbeat, and has a certain regularity. An electrocardiogram is composed of a series of electrocardiosignals and reflects the physiological activity of different heartbeats; since each wave band of a heartbeat has a different meaning, correctly classifying electrocardiosignals is extremely important.
With the rise of deep learning, feature extraction and classification can be performed with neural networks. However, these models are usually single-task: they focus on a single objective and ignore other information that could help optimize electrocardiosignal classification. In addition, because of imbalance among the data, a single-task network model usually fits one class of signals excessively, producing overfitting. Moreover, when a traditional neural network is tuned for better performance, only one of the network depth, the network width, or the input resolution is changed at a time, so the model easily saturates and its advantages are hard to exploit. Finally, after a traditional neural network converts one-dimensional electrocardiosignals into two-dimensional images, the channel and spatial characteristics of the electrocardiosignals are often ignored.
Disclosure of Invention
In order to overcome the above defects, the invention provides a method that constructs a brand-new CBAM-MBConv module and uses it to build an MTEF-NET network model for electrocardiogram classification, improving classification accuracy.
The technical scheme adopted by the invention to overcome the above technical problems is as follows:
An electrocardiogram classification method based on the multi-task MTEF-NET comprises the following steps:
a) Obtain the original electrocardiosignals X = {x1, x2, …, xi, …, xn} from the MIT-BIH arrhythmia database, where xi is the ith original electrocardiosignal, i ∈ {1, …, n}, and n is the total number of original electrocardiosignals.
b) Preprocess the original electrocardiosignals X to remove their noise, obtaining the clean electrocardiosignals U = {u1, u2, …, ui, …, un}, where ui is the ith clean electrocardiosignal, i ∈ {1, …, n}.
c) Convert the clean electrocardiosignals U into a two-dimensional image data set Y = {y1, y2, …, yi, …, yn} by continuous wavelet transform, where yi is the ith two-dimensional image, i ∈ {1, …, n}; each two-dimensional image yi has height H, width W, and C channels, i.e. dimension H × W × C.
d) Divide the two-dimensional image data set Y in equal proportion into two halves of 50% each. The first divided data set, Y1, is used to detect whether an electrocardiosignal is normal; its ith element is the ith two-dimensional image. In the second divided data set, the wavelet-transformed images are mirrored or inverted to obtain the data set Y2, used to judge the specific type of an electrocardiosignal; its jth element is the jth two-dimensional image.
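The equal split and the mirror/inversion augmentation of step d) can be sketched as follows. The array layout (images stacked on a leading axis) and the use of both a horizontal mirror and a vertical inversion are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
Y = rng.random((100, 64, 64, 3))      # n two-dimensional images, H x W x C

# Equal-proportion 50/50 split
half = len(Y) // 2
Y1 = Y[:half]                         # task 1: normal vs. abnormal
Y2 = Y[half:]                         # task 2: specific signal type

# Augment the second half with mirrored and inverted copies
mirrored = Y2[:, :, ::-1, :]          # horizontal mirror (flip width axis)
inverted = Y2[:, ::-1, :, :]          # vertical inversion (flip height axis)
Y2_augmented = np.concatenate([Y2, mirrored, inverted], axis=0)
```

The augmented copies triple the second data set without collecting new signals, which helps the rarer arrhythmia classes.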
e) Establish the MTEF-NET network model, which consists, in order, of a convolutional layer, a CBAM-MBConv1 module, fifteen CBAM-MBConv6 modules (first through fifteenth), and a global pooling layer.
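The module sequence of step e) can be written down as a plain configuration list. The kernel sizes and expansion ratios below come from steps f-2) through f-11); the remaining CBAM-MBConv6 modules are not detailed in this excerpt, so they are omitted here:

```python
# (module name, kernel size of the expansion convolution, expansion ratio)
# Stem conv plus the first ten blocks, as described in steps f-1) to f-11);
# the remaining CBAM-MBConv6 modules and the global pooling layer follow.
mtef_net_stages = [
    ("conv3x3", 3, None),
    ("CBAM-MBConv1", 3, 1),
    ("CBAM-MBConv6", 3, 6),   # 1st
    ("CBAM-MBConv6", 3, 6),   # 2nd
    ("CBAM-MBConv6", 5, 6),   # 3rd
    ("CBAM-MBConv6", 5, 6),   # 4th
    ("CBAM-MBConv6", 3, 6),   # 5th
    ("CBAM-MBConv6", 3, 6),   # 6th
    ("CBAM-MBConv6", 3, 6),   # 7th
    ("CBAM-MBConv6", 5, 6),   # 8th
    ("CBAM-MBConv6", 5, 6),   # 9th
]
```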
f) Input each two-dimensional image of the data set Y1 into the MTEF-NET network model and output the corresponding feature map set, whose ith element is the feature map of the ith two-dimensional image.
g) Input each two-dimensional image of the data set Y2 into the MTEF-NET network model and output the corresponding feature map set, whose jth element is the feature map of the jth two-dimensional image.
h) Input the feature map set obtained from Y1 in step f) into the global pooling layer of the MTEF-NET network model and output the electrocardiosignal classification results, where yai is the ith result. Input the feature map set obtained from Y2 in step g) into the global pooling layer of the MTEF-NET network model and output the electrocardiosignal classification results, where ybj is the jth result.
in a further aspect of the present invention, removing the original electrocardiosignal X = { X) by using two median filters in step b) 1 ,x 2 ,....,x i ,...,x n The noise in the device, a clean electrocardiosignal U = { U =, = 1 ,u 2 ,....,u i ,...,u n The width of the first median filter is 300ms and the width of the second median filter is 600ms.
Further, in step c) the ith two-dimensional image yi is calculated by the formula

yi(a, b) = (1/√a) ∫ ui(t) φ((t − b)/a) dt,

where a is the transformation scale, b is the translation factor, and φ(·) is the mother wavelet.
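The continuous wavelet transform above can be discretized directly. A minimal sketch, using the Mexican-hat wavelet as an illustrative choice of φ (the patent does not fix the mother wavelet in this excerpt):

```python
import numpy as np

def mexican_hat(t):
    """Mexican-hat (Ricker) mother wavelet, an illustrative choice of phi."""
    return (1 - t**2) * np.exp(-t**2 / 2)

def cwt(signal, scales, fs=360):
    """Discretized W(a, b) = (1/sqrt(a)) * sum_t u(t) * phi((t - b) / a)."""
    n = len(signal)
    t = np.arange(n) / fs
    out = np.empty((len(scales), n))
    for k, a in enumerate(scales):
        for j in range(n):
            b = t[j]  # translate the wavelet to sample j
            out[k, j] = np.sum(signal * mexican_hat((t - b) / a)) / np.sqrt(a)
    return out  # rows = scales, columns = translations: a 2-D "image"

sig = np.sin(2 * np.pi * 5 * np.arange(256) / 360)
image = cwt(sig, scales=[0.01, 0.02, 0.04, 0.08])
```

Stacking the coefficients over scales is what turns the one-dimensional signal into the H × W image fed to the network.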
Further, step f) comprises the steps of:
f-1) Input the two-dimensional image into the MTEF-NET network model, where it first enters the convolutional layer with kernel size 3 × 3; this layer outputs a feature map.
f-2) The CBAM-MBConv1 module consists, in order, of a first convolution layer with kernel size 1 × 1, a second convolution layer with kernel size 3 × 3 and expansion ratio 1, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with kernel size 7 × 7, a sigmoid activation function layer, and a fourth convolution layer with kernel size 1 × 1. The feature map output in step f-1) is input sequentially to the first and second convolution layers, which output an expanded feature map. The expanded feature map is input sequentially to the first global maximum pooling layer and the first global average pooling layer, which output a channel-refined feature map. The channel-refined feature map is input sequentially to the second global maximum pooling layer, the second global average pooling layer, the third convolution layer with kernel size 7 × 7, and the sigmoid activation function layer, which output a spatial attention feature map. The spatial attention feature map is input to the fourth convolution layer, which outputs a projected feature map. The projected feature map and the module's input feature map are added element by element to obtain the output feature map of the CBAM-MBConv1 module.
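The attention path of the module above can be illustrated in numpy. This is a simplified sketch of CBAM-style channel and spatial attention (the shared MLP of channel attention and the learned 7 × 7 kernel are replaced with untrained stand-ins), not the patent's exact computation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(f):
    """Global max + average pooling over H, W give per-channel weights."""
    mx = f.max(axis=(0, 1))            # (C,)
    av = f.mean(axis=(0, 1))           # (C,)
    w = sigmoid(mx + av)               # shared MLP omitted for brevity
    return f * w                       # reweight channels

def spatial_attention(f, k=7):
    """Channel-wise max + average pooling, 7x7 conv, then a sigmoid map."""
    s = np.stack([f.max(axis=2), f.mean(axis=2)], axis=2)   # (H, W, 2)
    h, w, _ = s.shape
    pad = k // 2
    sp = np.pad(s, ((pad, pad), (pad, pad), (0, 0)))
    kern = np.full((k, k, 2), 1.0 / (k * k * 2))            # untrained stand-in weights
    att = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            att[i, j] = np.sum(sp[i:i + k, j:j + k, :] * kern)
    return f * sigmoid(att)[..., None]                      # broadcast (H, W, 1)

x = np.random.default_rng(1).random((16, 16, 8))
y = spatial_attention(channel_attention(x))
```

The spatial branch pools across channels down to a two-channel map, convolves it with a 7 × 7 kernel, and applies a sigmoid, matching the layer order listed above.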
f-3) The first CBAM-MBConv6 module consists, in order, of a first convolution layer with kernel size 1 × 1, a second convolution layer with kernel size 3 × 3 and expansion ratio 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with kernel size 7 × 7, a sigmoid activation function layer, and a fourth convolution layer with kernel size 1 × 1. The feature map output by the CBAM-MBConv1 module is processed as in step f-2): the first and second convolution layers output an expanded feature map; the first global maximum and average pooling layers output a channel-refined feature map; the second global maximum and average pooling layers, the third convolution layer, and the sigmoid activation function layer output a spatial attention feature map; the fourth convolution layer outputs a projected feature map; and the projected feature map is added element by element to the module's input feature map to obtain the output feature map of the first CBAM-MBConv6 module.
f-4) The second CBAM-MBConv6 module has the same structure: a first 1 × 1 convolution layer, a second convolution layer with kernel size 3 × 3 and expansion ratio 6, first global maximum and average pooling layers, second global maximum and average pooling layers, a third 7 × 7 convolution layer, a sigmoid activation function layer, and a fourth 1 × 1 convolution layer. The output feature map of the first CBAM-MBConv6 module passes through these layers in the same order, and the projected feature map from the fourth convolution layer is added element by element to the module's input feature map to obtain the output feature map of the second CBAM-MBConv6 module.
f-5) The third CBAM-MBConv6 module is structured in the same way except that its second convolution layer has kernel size 5 × 5 and expansion ratio 6: a first 1 × 1 convolution layer, the 5 × 5 second convolution layer, first global maximum and average pooling layers, second global maximum and average pooling layers, a third 7 × 7 convolution layer, a sigmoid activation function layer, and a fourth 1 × 1 convolution layer. The output feature map of the second CBAM-MBConv6 module passes through these layers in the same order, and the projected feature map is added element by element to the module's input feature map to obtain the output feature map of the third CBAM-MBConv6 module.
f-6) The fourth CBAM-MBConv6 module likewise has a second convolution layer with kernel size 5 × 5 and expansion ratio 6, with the remaining layers unchanged: a first 1 × 1 convolution layer, first and second pairs of global maximum and average pooling layers, a third 7 × 7 convolution layer, a sigmoid activation function layer, and a fourth 1 × 1 convolution layer. The output feature map of the third CBAM-MBConv6 module passes through these layers in the same order, and the projected feature map is added element by element to the module's input feature map to obtain the output feature map of the fourth CBAM-MBConv6 module.
f-7) The fifth CBAM-MBConv6 module returns to a second convolution layer with kernel size 3 × 3 and expansion ratio 6, with the remaining layers unchanged: a first 1 × 1 convolution layer, first and second pairs of global maximum and average pooling layers, a third 7 × 7 convolution layer, a sigmoid activation function layer, and a fourth 1 × 1 convolution layer. The output feature map of the fourth CBAM-MBConv6 module passes through these layers in the same order, and the projected feature map is added element by element to the module's input feature map to obtain the output feature map of the fifth CBAM-MBConv6 module.
f-8) The sixth CBAM-MBConv6 module has the same structure, with a second convolution layer of kernel size 3 × 3 and expansion ratio 6. The output feature map of the fifth CBAM-MBConv6 module passes through its layers in the same order, and the projected feature map is added element by element to the module's input feature map to obtain the output feature map of the sixth CBAM-MBConv6 module.
f-9) The seventh CBAM-MBConv6 module has the same structure, with a second convolution layer of kernel size 3 × 3 and expansion ratio 6. The output feature map of the sixth CBAM-MBConv6 module passes through its layers in the same order, and the projected feature map is added element by element to the module's input feature map to obtain the output feature map of the seventh CBAM-MBConv6 module.
f-10) The eighth CBAM-MBConv6 module consists, in order, of a first convolution layer with a 1×1 kernel, a second convolution layer with a 5×5 kernel and an expansion ratio of 6, a first global max-pooling layer, a first global average-pooling layer, a second global max-pooling layer, a second global average-pooling layer, a third convolution layer with a 7×7 kernel, a sigmoid activation function layer, and a fourth convolution layer with a 1×1 kernel. The feature map
Figure BDA00038055281000000633
is input sequentially to the first and second convolution layers; the output is a feature map of dimension
Figure BDA00038055281000000634
, denoted
Figure BDA00038055281000000635
. The feature map
Figure BDA00038055281000000636
is input sequentially to the first global max-pooling layer and the first global average-pooling layer; the output is a feature map of dimension
Figure BDA00038055281000000637
, denoted
Figure BDA00038055281000000638
. The feature map
Figure BDA00038055281000000639
is input sequentially to the second global max-pooling layer, the second global average-pooling layer, the third convolution layer (7×7 kernel), and the sigmoid activation function layer; the output is a spatial attention feature map of dimension
Figure BDA00038055281000000640
, denoted
Figure BDA00038055281000000641
. The spatial attention feature map
Figure BDA0003805528100000071
is input to the fourth convolution layer; the output is a feature map of dimension
Figure BDA0003805528100000072
, denoted
Figure BDA0003805528100000073
. Finally, the feature map
Figure BDA0003805528100000074
and the feature map
Figure BDA0003805528100000075
are added element by element to obtain a feature map of dimension
Figure BDA0003805528100000076
, denoted
Figure BDA0003805528100000077
.
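The per-module data flow described above (expansion convolutions, channel-wise pooling, a 7×7 spatial-attention convolution with sigmoid gating, and a residual element-by-element addition) follows the CBAM spatial-attention pattern. The NumPy sketch below is illustrative only: the patent's exact tensor dimensions and weights appear solely as formula images, so the shapes, the 'same' zero padding, and the single-output-channel attention convolution are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conv2d_same(x, w):
    """Naive 2-D convolution with 'same' zero padding.
    x: (C, H, W) input, w: (C, k, k) kernel -> (H, W) single-channel output."""
    c, h, width = x.shape
    k = w.shape[-1]
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    out = np.zeros((h, width))
    for i in range(h):
        for j in range(width):
            out[i, j] = np.sum(xp[:, i:i + k, j:j + k] * w)
    return out

def cbam_spatial_attention(x, w7):
    """CBAM-style spatial attention: channel-wise max and mean maps are
    stacked, passed through a 7x7 convolution and a sigmoid, and the
    resulting (H, W) mask re-weights every channel of x."""
    pooled = np.stack([x.max(axis=0), x.mean(axis=0)])   # (2, H, W)
    mask = sigmoid(conv2d_same(pooled, w7))              # (H, W)
    return x * mask                                      # broadcast over C

# Toy block in the spirit of one CBAM-MBConv6 module (shapes are
# illustrative; the patent's actual dimensions are given only as images).
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16, 16))       # (C, H, W) feature map
w7 = rng.standard_normal((2, 7, 7)) * 0.1  # 7x7 spatial-attention kernel
y = cbam_spatial_attention(x, w7) + x      # attention output + residual add
print(y.shape)                             # (8, 16, 16)
```

The residual addition on the last line corresponds to the patent's "added element by element" step, which lets each module refine rather than replace its input.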
f-11) The ninth CBAM-MBConv6 module consists, in order, of a first convolution layer with a 1×1 kernel, a second convolution layer with a 5×5 kernel and an expansion ratio of 6, a first global max-pooling layer, a first global average-pooling layer, a second global max-pooling layer, a second global average-pooling layer, a third convolution layer with a 7×7 kernel, a sigmoid activation function layer, and a fourth convolution layer with a 1×1 kernel. The feature map
Figure BDA0003805528100000078
is input sequentially to the first and second convolution layers; the output is a feature map of dimension
Figure BDA0003805528100000079
, denoted
Figure BDA00038055281000000710
. The feature map
Figure BDA00038055281000000711
is input sequentially to the first global max-pooling layer and the first global average-pooling layer; the output is a feature map of dimension
Figure BDA00038055281000000712
, denoted
Figure BDA00038055281000000713
. The feature map
Figure BDA00038055281000000714
is input sequentially to the second global max-pooling layer, the second global average-pooling layer, the third convolution layer (7×7 kernel), and the sigmoid activation function layer; the output is a spatial attention feature map of dimension
Figure BDA00038055281000000715
, denoted
Figure BDA00038055281000000716
. The spatial attention feature map
Figure BDA00038055281000000717
is input to the fourth convolution layer; the output is a feature map of dimension
Figure BDA00038055281000000718
, denoted
Figure BDA00038055281000000719
. Finally, the feature map
Figure BDA00038055281000000720
and the feature map
Figure BDA00038055281000000721
are added element by element to obtain a feature map of dimension
Figure BDA00038055281000000722
, denoted
Figure BDA00038055281000000723
.
f-12) The tenth CBAM-MBConv6 module consists, in order, of a first convolution layer with a 1×1 kernel, a second convolution layer with a 5×5 kernel and an expansion ratio of 6, a first global max-pooling layer, a first global average-pooling layer, a second global max-pooling layer, a second global average-pooling layer, a third convolution layer with a 7×7 kernel, a sigmoid activation function layer, and a fourth convolution layer with a 1×1 kernel. The feature map
Figure BDA00038055281000000724
is input sequentially to the first and second convolution layers; the output is a feature map of dimension
Figure BDA00038055281000000725
, denoted
Figure BDA00038055281000000726
. The feature map
Figure BDA00038055281000000727
is input sequentially to the first global max-pooling layer and the first global average-pooling layer; the output is a feature map of dimension
Figure BDA00038055281000000728
, denoted
Figure BDA00038055281000000729
. The feature map
Figure BDA00038055281000000730
is input sequentially to the second global max-pooling layer, the second global average-pooling layer, the third convolution layer (7×7 kernel), and the sigmoid activation function layer; the output is a spatial attention feature map of dimension
Figure BDA00038055281000000731
, denoted
Figure BDA00038055281000000732
. The spatial attention feature map
Figure BDA00038055281000000733
is input to the fourth convolution layer; the output is a feature map of dimension
Figure BDA00038055281000000734
, denoted
Figure BDA00038055281000000735
. Finally, the feature map
Figure BDA00038055281000000736
and the feature map
Figure BDA00038055281000000737
are added element by element to obtain a feature map of dimension
Figure BDA00038055281000000738
, denoted
Figure BDA00038055281000000739
.
f-13) The eleventh CBAM-MBConv6 module consists, in order, of a first convolution layer with a 1×1 kernel, a second convolution layer with a 5×5 kernel and an expansion ratio of 6, a first global max-pooling layer, a first global average-pooling layer, a second global max-pooling layer, a second global average-pooling layer, a third convolution layer with a 7×7 kernel, a sigmoid activation function layer, and a fourth convolution layer with a 1×1 kernel. The feature map
Figure BDA0003805528100000081
is input sequentially to the first and second convolution layers; the output is a feature map of dimension
Figure BDA0003805528100000082
, denoted
Figure BDA0003805528100000083
. The feature map
Figure BDA0003805528100000084
is input sequentially to the first global max-pooling layer and the first global average-pooling layer; the output is a feature map of dimension
Figure BDA0003805528100000085
, denoted
Figure BDA0003805528100000086
. The feature map
Figure BDA0003805528100000087
is input sequentially to the second global max-pooling layer, the second global average-pooling layer, the third convolution layer (7×7 kernel), and the sigmoid activation function layer; the output is a spatial attention feature map of dimension
Figure BDA0003805528100000088
, denoted
Figure BDA0003805528100000089
. The spatial attention feature map
Figure BDA00038055281000000810
is input to the fourth convolution layer; the output is a feature map of dimension
Figure BDA00038055281000000811
, denoted
Figure BDA00038055281000000812
. Finally, the feature map
Figure BDA00038055281000000813
and the feature map
Figure BDA00038055281000000814
are added element by element to obtain a feature map of dimension
Figure BDA00038055281000000815
, denoted
Figure BDA00038055281000000816
.
f-14) The twelfth CBAM-MBConv6 module consists, in order, of a first convolution layer with a 1×1 kernel, a second convolution layer with a 5×5 kernel and an expansion ratio of 6, a first global max-pooling layer, a first global average-pooling layer, a second global max-pooling layer, a second global average-pooling layer, a third convolution layer with a 7×7 kernel, a sigmoid activation function layer, and a fourth convolution layer with a 1×1 kernel. The feature map
Figure BDA00038055281000000817
is input sequentially to the first and second convolution layers; the output is a feature map of dimension
Figure BDA00038055281000000818
, denoted
Figure BDA00038055281000000819
. The feature map
Figure BDA00038055281000000820
is input sequentially to the first global max-pooling layer and the first global average-pooling layer; the output is a feature map of dimension
Figure BDA00038055281000000821
, denoted
Figure BDA00038055281000000822
. The feature map
Figure BDA00038055281000000823
is input sequentially to the second global max-pooling layer, the second global average-pooling layer, the third convolution layer (7×7 kernel), and the sigmoid activation function layer; the output is a spatial attention feature map of dimension
Figure BDA00038055281000000824
, denoted
Figure BDA00038055281000000825
. The spatial attention feature map
Figure BDA00038055281000000826
is input to the fourth convolution layer; the output is a feature map of dimension
Figure BDA00038055281000000827
, denoted
Figure BDA00038055281000000828
. Finally, the feature map
Figure BDA00038055281000000829
and the feature map
Figure BDA00038055281000000830
are added element by element to obtain a feature map of dimension
Figure BDA00038055281000000831
, denoted
Figure BDA00038055281000000832
.
f-15) The thirteenth CBAM-MBConv6 module consists, in order, of a first convolution layer with a 1×1 kernel, a second convolution layer with a 5×5 kernel and an expansion ratio of 6, a first global max-pooling layer, a first global average-pooling layer, a second global max-pooling layer, a second global average-pooling layer, a third convolution layer with a 7×7 kernel, a sigmoid activation function layer, and a fourth convolution layer with a 1×1 kernel. The feature map
Figure BDA00038055281000000833
is input sequentially to the first and second convolution layers; the output is a feature map of dimension
Figure BDA00038055281000000834
, denoted
Figure BDA00038055281000000835
. The feature map
Figure BDA00038055281000000836
is input sequentially to the first global max-pooling layer and the first global average-pooling layer; the output is a feature map of dimension
Figure BDA00038055281000000837
, denoted
Figure BDA00038055281000000838
. The feature map
Figure BDA00038055281000000839
is input sequentially to the second global max-pooling layer, the second global average-pooling layer, the third convolution layer (7×7 kernel), and the sigmoid activation function layer; the output is a spatial attention feature map of dimension
Figure BDA00038055281000000840
, denoted
Figure BDA00038055281000000841
. The spatial attention feature map
Figure BDA0003805528100000091
is input to the fourth convolution layer; the output is a feature map of dimension
Figure BDA0003805528100000092
, denoted
Figure BDA0003805528100000093
. Finally, the feature map
Figure BDA0003805528100000094
and the feature map
Figure BDA00038055281000000942
are added element by element to obtain a feature map of dimension
Figure BDA0003805528100000095
, denoted
Figure BDA0003805528100000096
.
f-16) The fourteenth CBAM-MBConv6 module consists, in order, of a first convolution layer with a 1×1 kernel, a second convolution layer with a 5×5 kernel and an expansion ratio of 6, a first global max-pooling layer, a first global average-pooling layer, a second global max-pooling layer, a second global average-pooling layer, a third convolution layer with a 7×7 kernel, a sigmoid activation function layer, and a fourth convolution layer with a 1×1 kernel. The feature map
Figure BDA0003805528100000097
is input sequentially to the first and second convolution layers; the output is a feature map of dimension
Figure BDA0003805528100000098
, denoted
Figure BDA0003805528100000099
. The feature map
Figure BDA00038055281000000910
is input sequentially to the first global max-pooling layer and the first global average-pooling layer; the output is a feature map of dimension
Figure BDA00038055281000000911
, denoted
Figure BDA00038055281000000912
. The feature map
Figure BDA00038055281000000913
is input sequentially to the second global max-pooling layer, the second global average-pooling layer, the third convolution layer (7×7 kernel), and the sigmoid activation function layer; the output is a spatial attention feature map of dimension
Figure BDA00038055281000000914
, denoted
Figure BDA00038055281000000915
. The spatial attention feature map
Figure BDA00038055281000000916
is input to the fourth convolution layer; the output is a feature map of dimension
Figure BDA00038055281000000917
, denoted
Figure BDA00038055281000000918
. Finally, the feature map
Figure BDA00038055281000000919
and the feature map
Figure BDA00038055281000000920
are added element by element to obtain a feature map of dimension
Figure BDA00038055281000000921
, denoted
Figure BDA00038055281000000922
.
f-17) The fifteenth CBAM-MBConv6 module consists, in order, of a first convolution layer with a 1×1 kernel, a second convolution layer with a 3×3 kernel and an expansion ratio of 6, a first global max-pooling layer, a first global average-pooling layer, a second global max-pooling layer, a second global average-pooling layer, a third convolution layer with a 7×7 kernel, a sigmoid activation function layer, and a fourth convolution layer with a 1×1 kernel. The feature map
Figure BDA00038055281000000923
is input sequentially to the first and second convolution layers; the output is a feature map of dimension
Figure BDA00038055281000000924
, denoted
Figure BDA00038055281000000925
. The feature map
Figure BDA00038055281000000926
is input sequentially to the first global max-pooling layer and the first global average-pooling layer; the output is a feature map of dimension
Figure BDA00038055281000000927
, denoted
Figure BDA00038055281000000928
. The feature map
Figure BDA00038055281000000929
is input sequentially to the second global max-pooling layer, the second global average-pooling layer, the third convolution layer (7×7 kernel), and the sigmoid activation function layer; the output is a spatial attention feature map of dimension
Figure BDA00038055281000000930
, denoted
Figure BDA00038055281000000931
. The spatial attention feature map
Figure BDA00038055281000000932
is input to the fourth convolution layer; the output is a feature map of dimension
Figure BDA00038055281000000933
, denoted
Figure BDA00038055281000000934
. Finally, the feature map
Figure BDA00038055281000000935
and the feature map
Figure BDA00038055281000000936
are added element by element to obtain a feature map of dimension
Figure BDA00038055281000000937
, denoted
Figure BDA00038055281000000938
.
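Steps f-10) through f-17) describe the eighth to fifteenth CBAM-MBConv6 modules, which share one layer layout and differ only in the second convolution's kernel size (5×5 for the eighth through fourteenth, 3×3 for the fifteenth; expansion ratio 6 throughout). Repeated structure like this is often expressed as a configuration list; the sketch below is a hypothetical rendering of that table, not code from the patent.

```python
# Configuration of the eighth to fifteenth CBAM-MBConv6 modules
# (steps f-10 .. f-17).  Each entry: (module index, kernel size of the
# second convolution, expansion ratio).  Only the fifteenth module
# switches from a 5x5 to a 3x3 kernel.
modules = [(i, 5 if i < 15 else 3, 6) for i in range(8, 16)]

for idx, kernel, expand in modules:
    print(f"CBAM-MBConv6 #{idx}: conv1 1x1 -> conv2 {kernel}x{kernel} "
          f"(expansion {expand}) -> CBAM pooling/attention -> conv4 1x1 "
          f"-> residual add")
```

Driving module construction from such a table keeps the eight otherwise-identical definitions in one place.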
Further, step g) comprises the following steps:
g-1) The two-dimensional image
Figure BDA00038055281000000939
is input to the MTEF-NET network model, where it first enters a convolution layer with a 3×3 kernel; the output is a feature map of dimension
Figure BDA00038055281000000940
, denoted
Figure BDA00038055281000000941
.
g-2) The feature map
Figure BDA0003805528100000101
is input sequentially to the first and second convolution layers; the output is a feature map of dimension
Figure BDA0003805528100000102
, denoted
Figure BDA0003805528100000103
. The feature map
Figure BDA0003805528100000104
is input sequentially to the first global max-pooling layer and the first global average-pooling layer; the output is a feature map of dimension
Figure BDA0003805528100000105
, denoted
Figure BDA0003805528100000106
. The feature map
Figure BDA0003805528100000107
is input sequentially to the second global max-pooling layer, the second global average-pooling layer, the third convolution layer (7×7 kernel), and the sigmoid activation function layer; the output is a spatial attention feature map of dimension
Figure BDA0003805528100000108
, denoted
Figure BDA0003805528100000109
. The spatial attention feature map
Figure BDA00038055281000001010
is input to the fourth convolution layer; the output is a feature map of dimension
Figure BDA00038055281000001011
, denoted
Figure BDA00038055281000001012
. The feature map
Figure BDA00038055281000001013
and the feature map
Figure BDA00038055281000001014
are added element by element to obtain a feature map of dimension
Figure BDA00038055281000001015
, denoted
Figure BDA00038055281000001016
.
g-3) The feature map
Figure BDA00038055281000001017
is input sequentially to the first and second convolution layers; the output is a feature map of dimension
Figure BDA00038055281000001018
, denoted
Figure BDA00038055281000001019
. The feature map
Figure BDA00038055281000001020
is input sequentially to the first global max-pooling layer and the first global average-pooling layer; the output is a feature map of dimension
Figure BDA00038055281000001021
, denoted
Figure BDA00038055281000001022
. The feature map
Figure BDA00038055281000001023
is input sequentially to the second global max-pooling layer, the second global average-pooling layer, the third convolution layer (7×7 kernel), and the sigmoid activation function layer; the output is a spatial attention feature map of dimension
Figure BDA00038055281000001024
, denoted
Figure BDA00038055281000001025
. The spatial attention feature map
Figure BDA00038055281000001026
is input to the fourth convolution layer; the output is a feature map of dimension
Figure BDA00038055281000001027
, denoted
Figure BDA00038055281000001028
. The feature map
Figure BDA00038055281000001029
and the feature map
Figure BDA00038055281000001030
are added element by element to obtain a feature map of dimension
Figure BDA00038055281000001031
, denoted
Figure BDA00038055281000001032
.
g-4) The feature map
Figure BDA00038055281000001033
is input sequentially to the first and second convolution layers; the output is a feature map of dimension
Figure BDA00038055281000001034
, denoted
Figure BDA00038055281000001035
. The feature map
Figure BDA00038055281000001036
is input sequentially to the first global max-pooling layer and the first global average-pooling layer; the output is a feature map of dimension
Figure BDA00038055281000001037
, denoted
Figure BDA00038055281000001038
. The feature map
Figure BDA00038055281000001039
is input sequentially to the second global max-pooling layer, the second global average-pooling layer, the third convolution layer (7×7 kernel), and the sigmoid activation function layer; the output is a spatial attention feature map of dimension
Figure BDA00038055281000001040
, denoted
Figure BDA00038055281000001041
. The spatial attention feature map
Figure BDA00038055281000001042
is input to the fourth convolution layer; the output is a feature map of dimension
Figure BDA00038055281000001043
, denoted
Figure BDA00038055281000001044
. The feature map
Figure BDA00038055281000001045
and the feature map
Figure BDA00038055281000001046
are added element by element to obtain a feature map of dimension
Figure BDA00038055281000001047
, denoted
Figure BDA00038055281000001048
.
g-5) The feature map
Figure BDA00038055281000001049
is input sequentially to the first and second convolution layers; the output is a feature map of dimension
Figure BDA00038055281000001050
, denoted
Figure BDA00038055281000001051
. The feature map
Figure BDA00038055281000001052
is input sequentially to the first global max-pooling layer and the first global average-pooling layer; the output is a feature map of dimension
Figure BDA0003805528100000111
, denoted
Figure BDA0003805528100000112
. The feature map
Figure BDA0003805528100000113
is input sequentially to the second global max-pooling layer, the second global average-pooling layer, the third convolution layer (7×7 kernel), and the sigmoid activation function layer; the output is a spatial attention feature map of dimension
Figure BDA0003805528100000114
, denoted
Figure BDA0003805528100000115
. The spatial attention feature map
Figure BDA0003805528100000116
is input to the fourth convolution layer; the output is a feature map of dimension
Figure BDA0003805528100000117
, denoted
Figure BDA0003805528100000118
. The feature map
Figure BDA0003805528100000119
and the feature map
Figure BDA00038055281000001110
are added element by element to obtain a feature map of dimension
Figure BDA00038055281000001111
, denoted
Figure BDA00038055281000001112
.
g-6) The feature map
Figure BDA00038055281000001113
is input sequentially to the first and second convolution layers; the output is a feature map of dimension
Figure BDA00038055281000001114
, denoted
Figure BDA00038055281000001115
. The feature map
Figure BDA00038055281000001116
is input sequentially to the first global max-pooling layer and the first global average-pooling layer; the output is a feature map of dimension
Figure BDA00038055281000001117
, denoted
Figure BDA00038055281000001118
. The feature map
Figure BDA00038055281000001119
is input sequentially to the second global max-pooling layer, the second global average-pooling layer, the third convolution layer (7×7 kernel), and the sigmoid activation function layer; the output is a spatial attention feature map of dimension
Figure BDA00038055281000001120
, denoted
Figure BDA00038055281000001121
. The spatial attention feature map
Figure BDA00038055281000001122
is input to the fourth convolution layer; the output is a feature map of dimension
Figure BDA00038055281000001123
, denoted
Figure BDA00038055281000001124
. The feature map
Figure BDA00038055281000001125
and the feature map
Figure BDA00038055281000001126
are added element by element to obtain a feature map of dimension
Figure BDA00038055281000001127
, denoted
Figure BDA00038055281000001128
.
g-7) The feature map
Figure BDA00038055281000001129
is input sequentially to the first and second convolution layers; the output is a feature map of dimension
Figure BDA00038055281000001130
, denoted
Figure BDA00038055281000001131
. The feature map
Figure BDA00038055281000001132
is input sequentially to the first global max-pooling layer and the first global average-pooling layer; the output is a feature map of dimension
Figure BDA00038055281000001133
, denoted
Figure BDA00038055281000001134
. The feature map
Figure BDA00038055281000001135
is input sequentially to the second global max-pooling layer, the second global average-pooling layer, the third convolution layer (7×7 kernel), and the sigmoid activation function layer; the output is a spatial attention feature map y_{j,29} of dimension
Figure BDA00038055281000001136
. The spatial attention feature map y_{j,29} is input to the fourth convolution layer; the output is a feature map of dimension
Figure BDA00038055281000001137
, denoted
Figure BDA00038055281000001138
. The feature map
Figure BDA00038055281000001139
and the feature map
Figure BDA00038055281000001140
are added element by element to obtain a feature map of dimension
Figure BDA00038055281000001141
, denoted
Figure BDA00038055281000001142
.
g-8) The feature map
Figure BDA00038055281000001143
is input sequentially to the first and second convolution layers; the output is a feature map of dimension
Figure BDA00038055281000001144
, denoted
Figure BDA00038055281000001145
. The feature map
Figure BDA00038055281000001146
is input sequentially to the first global max-pooling layer and the first global average-pooling layer; the output is a feature map of dimension
Figure BDA00038055281000001147
, denoted
Figure BDA00038055281000001148
. The feature map
Figure BDA00038055281000001149
is input sequentially to the second global max-pooling layer, the second global average-pooling layer, the third convolution layer (7×7 kernel), and the sigmoid activation function layer; the output is a spatial attention feature map of dimension
Figure BDA00038055281000001150
, denoted
Figure BDA00038055281000001151
. The spatial attention feature map
Figure BDA00038055281000001152
is input to the fourth convolution layer; the output is a feature map of dimension
Figure BDA0003805528100000121
, denoted
Figure BDA0003805528100000122
. The feature map
Figure BDA0003805528100000123
and the feature map
Figure BDA0003805528100000124
are added element by element to obtain a feature map of dimension
Figure BDA0003805528100000125
, denoted
Figure BDA0003805528100000126
.
g-9) The feature map
Figure BDA0003805528100000127
is input sequentially to the first and second convolution layers; the output is a feature map of dimension
Figure BDA0003805528100000128
, denoted
Figure BDA0003805528100000129
. The feature map
Figure BDA00038055281000001210
is input sequentially to the first global max-pooling layer and the first global average-pooling layer; the output is a feature map of dimension
Figure BDA00038055281000001211
, denoted
Figure BDA00038055281000001212
. The feature map
Figure BDA00038055281000001213
is input sequentially to the second global max-pooling layer, the second global average-pooling layer, the third convolution layer (7×7 kernel), and the sigmoid activation function layer; the output is a spatial attention feature map of dimension
Figure BDA00038055281000001214
, denoted
Figure BDA00038055281000001215
. The spatial attention feature map
Figure BDA00038055281000001216
is input to the fourth convolution layer; the output is a feature map of dimension
Figure BDA00038055281000001217
, denoted
Figure BDA00038055281000001218
. The feature map
Figure BDA00038055281000001219
and the feature map
Figure BDA00038055281000001220
are added element by element to obtain a feature map of dimension
Figure BDA00038055281000001221
, denoted
Figure BDA00038055281000001222
.
g-10) The feature map
Figure BDA00038055281000001223
is input sequentially to the first and second convolution layers; the output is a feature map of dimension
Figure BDA00038055281000001224
, denoted
Figure BDA00038055281000001225
. The feature map
Figure BDA00038055281000001226
is input sequentially to the first global max-pooling layer and the first global average-pooling layer; the output is a feature map of dimension
Figure BDA00038055281000001227
, denoted
Figure BDA00038055281000001228
. The feature map
Figure BDA00038055281000001229
is input sequentially to the second global max-pooling layer, the second global average-pooling layer, the third convolution layer (7×7 kernel), and the sigmoid activation function layer; the output is a spatial attention feature map of dimension
Figure BDA00038055281000001230
, denoted
Figure BDA00038055281000001231
. The spatial attention feature map
Figure BDA00038055281000001232
is input to the fourth convolution layer; the output is a feature map of dimension
Figure BDA00038055281000001233
, denoted
Figure BDA00038055281000001234
. The feature map
Figure BDA00038055281000001235
and the feature map
Figure BDA00038055281000001236
are added element by element to obtain a feature map of dimension
Figure BDA00038055281000001237
, denoted
Figure BDA00038055281000001238
.
g-11) The feature map
Figure BDA00038055281000001239
is input sequentially to the first and second convolution layers; the output is a feature map of dimension
Figure BDA00038055281000001240
, denoted
Figure BDA00038055281000001241
. The feature map
Figure BDA00038055281000001242
is input sequentially to the first global max-pooling layer and the first global average-pooling layer; the output is a feature map of dimension
Figure BDA00038055281000001243
, denoted
Figure BDA00038055281000001244
. The feature map
Figure BDA00038055281000001245
is input sequentially to the second global max-pooling layer, the second global average-pooling layer, the third convolution layer (7×7 kernel), and the sigmoid activation function layer; the output is a spatial attention feature map of dimension
Figure BDA00038055281000001246
, denoted
Figure BDA00038055281000001247
. The spatial attention feature map
Figure BDA00038055281000001248
is input to the fourth convolution layer; the output is a feature map of dimension
Figure BDA00038055281000001249
, denoted
Figure BDA00038055281000001250
. The feature map
Figure BDA00038055281000001251
and the feature map
Figure BDA00038055281000001252
are added element by element to obtain a feature map of dimension
Figure BDA00038055281000001253
, denoted
Figure BDA00038055281000001254
.
g-12) The feature map
Figure BDA0003805528100000131
is input sequentially to the first and second convolution layers; the output is a feature map of dimension
Figure BDA0003805528100000132
, denoted
Figure BDA0003805528100000133
. The feature map
Figure BDA0003805528100000134
is input sequentially to the first global max-pooling layer and the first global average-pooling layer; the output is a feature map of dimension
Figure BDA0003805528100000135
, denoted
Figure BDA0003805528100000136
. The feature map
Figure BDA0003805528100000137
is input sequentially to the second global max-pooling layer, the second global average-pooling layer, the third convolution layer (7×7 kernel), and the sigmoid activation function layer; the output is a spatial attention feature map of dimension
Figure BDA0003805528100000138
, denoted
Figure BDA0003805528100000139
. The spatial attention feature map
Figure BDA00038055281000001310
is input to the fourth convolution layer; the output is a feature map of dimension
Figure BDA00038055281000001311
, denoted
Figure BDA00038055281000001312
. The feature map
Figure BDA00038055281000001313
and the feature map
Figure BDA00038055281000001314
are added element by element to obtain a feature map of dimension
Figure BDA00038055281000001315
, denoted
Figure BDA00038055281000001316
.
g-13) The feature map (formula image) is input sequentially to the first and second convolution layers, yielding a feature map (formula image). That map is input sequentially to the first global max-pooling layer and the first global average-pooling layer, yielding a feature map (formula image), and then to the second global max-pooling layer, the second global average-pooling layer, the third convolution layer with a 7 × 7 kernel, and the sigmoid activation function layer, yielding a spatial attention feature map (formula image). The spatial attention feature map is input to the fourth convolution layer, and its output is added element-wise to the earlier feature map, yielding the output feature map (formula image).
g-14) The feature map (formula image) is input sequentially to the first and second convolution layers, yielding a feature map (formula image). That map is input sequentially to the first global max-pooling layer and the first global average-pooling layer, yielding a feature map (formula image), and then to the second global max-pooling layer, the second global average-pooling layer, the third convolution layer with a 7 × 7 kernel, and the sigmoid activation function layer, yielding a spatial attention feature map (formula image). The spatial attention feature map is input to the fourth convolution layer, and its output is added element-wise to the earlier feature map, yielding the output feature map (formula image).
g-15) The feature map (formula image) is input sequentially to the first and second convolution layers, yielding a feature map (formula image). That map is input sequentially to the first global max-pooling layer and the first global average-pooling layer, yielding a feature map (formula image), and then to the second global max-pooling layer, the second global average-pooling layer, the third convolution layer with a 7 × 7 kernel, and the sigmoid activation function layer, yielding a spatial attention feature map (formula image). The spatial attention feature map is input to the fourth convolution layer, and its output is added element-wise to the earlier feature map, yielding the output feature map (formula image).
g-16) The feature map (formula image) is input sequentially to the first and second convolution layers, yielding a feature map (formula image). That map is input sequentially to the first global max-pooling layer and the first global average-pooling layer, yielding a feature map (formula image), and then to the second global max-pooling layer, the second global average-pooling layer, the third convolution layer with a 7 × 7 kernel, and the sigmoid activation function layer, yielding a spatial attention feature map (formula image). The spatial attention feature map is input to the fourth convolution layer, and its output is added element-wise to the earlier feature map, yielding the output feature map (formula image).
g-17) The feature map (formula image) is input sequentially to the first and second convolution layers, yielding a feature map (formula image). That map is input sequentially to the first global max-pooling layer and the first global average-pooling layer, yielding a feature map (formula image), and then to the second global max-pooling layer, the second global average-pooling layer, the third convolution layer with a 7 × 7 kernel, and the sigmoid activation function layer, yielding a spatial attention feature map (formula image). The spatial attention feature map is input to the fourth convolution layer, and its output is added element-wise to the earlier feature map, yielding the output feature map (formula image).
Further, in step h) the i-th result y_ai is calculated by the formula (formula image), where H_ai is the height of y_ai, W_ai is the width of y_ai, and C_ai is the number of channels of y_ai; the j-th result y_bj is calculated by the formula (formula image), where H_bj is the height of y_bj, W_bj is the width of y_bj, and C_bj is the number of channels of y_bj.
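The step-h) formulas survive only as images. On the common reading that the global pooling layer averages each C_ai-channel feature map over its H_ai × W_ai spatial grid, a minimal numpy sketch (an assumption, not the patent's exact formula) is:

```python
import numpy as np

def global_average_pool(f):
    # f: (H, W, C) feature map -> (C,) vector, averaging over H and W.
    H, W, C = f.shape
    return f.reshape(H * W, C).mean(axis=0)

# Channel c of this toy map is constant and equal to c, so the pooled
# vector should be [0, 1, 2].
f = np.ones((4, 5, 3)) * np.arange(3)
y = global_average_pool(f)
```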
Further, the method comprises the following steps after step h):
i) The loss function L is calculated by the formula (formula image), where λ and β are weights with λ + β = 1 and softmax(·) is the softmax activation function;
j) The parameters of the MTEF-NET network model of step e) are updated with the loss function L through the Adam optimizer, and the trained model and its parameters are saved after 100 training epochs.
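The exact form of L in step i) was lost with its formula image; a standard weighted-cross-entropy reading of "λ and β are both weights, λ + β = 1, softmax(·) is the softmax activation" can be sketched as follows (the values λ = 0.4, β = 0.6 are purely illustrative):

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(p, y):
    # Mean negative log-likelihood of the true classes y.
    return -np.mean(np.log(p[np.arange(len(y)), y] + 1e-12))

def multitask_loss(logits_a, ya, logits_b, yb, lam=0.4, beta=0.6):
    # Weighted sum over the coarse- and fine-grained branches, lam + beta = 1.
    assert abs(lam + beta - 1.0) < 1e-9
    return (lam * cross_entropy(softmax(logits_a), ya)
            + beta * cross_entropy(softmax(logits_b), yb))

# Confident, correct logits on both branches give a near-zero loss.
la, ya = np.array([[10.0, 0.0], [0.0, 10.0]]), np.array([0, 1])
lb, yb = np.array([[10.0, 0.0, 0.0]]), np.array([0])
L = multitask_loss(la, ya, lb, yb)
```

In a training loop, this scalar would be minimized by Adam as described in step j).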
The beneficial effects of the invention are as follows. A multi-task neural network is built around two tasks, and multi-task joint learning improves the performance of each task in parallel, raising the accuracy of the overall ECG classification. Assigning different weight proportions to the tasks gives the model the ability to balance differences between data samples, which mitigates the data-imbalance problem. The MTEF-NET network model supports the quantitative scaling of a complex network: by jointly adjusting the depth, the width, and the resolution of the input picture, the network needs fewer parameters, which reduces model complexity while improving classification accuracy. The CBAM-MBConv module addresses the loss of channel and spatial features that occurs after the ECG signals are converted into two-dimensional images.
Drawings
FIG. 1 is a diagram of the MTEF-NET network architecture of the present invention;
FIG. 2 is a flow chart of the method of the present invention;
FIG. 3 is a block diagram of the CBAM-MBConv module of the present invention.
Detailed Description
The invention will be further explained with reference to fig. 1, fig. 2 and fig. 3.
An electrocardiogram (ECG) classification method based on the multi-task MTEF-NET comprises the following steps:
a) Obtain the original ECG signals X = {x_1, x_2, …, x_i, …, x_n} from the MIT-BIH arrhythmia database, where x_i is the i-th original ECG signal, i ∈ {1, …, n}, and n is the total number of original ECG signals.
b) Preprocess the raw ECG signals X. ECG signals are usually subject to various noise disturbances, such as baseline wander, electromyographic interference, and power-line interference, which make it difficult to extract useful information from the raw signal. Noise reduction is therefore required before the classification task: the noise in the original ECG signals X is removed to obtain the clean ECG signals U = {u_1, u_2, …, u_i, …, u_n}, where u_i is the i-th clean ECG signal, i ∈ {1, …, n}.
c) Use the continuous wavelet transform to convert the clean ECG signals U into a two-dimensional image data set Y = {y_1, y_2, …, y_i, …, y_n} to facilitate feature extraction, where y_i is the i-th two-dimensional image, i ∈ {1, …, n}; each two-dimensional image y_i has height H, width W, and C channels, i.e. dimension H × W × C.
d) Reconstruct the data set, turning the single-task data set into a multi-task data set. Specifically, the two-dimensional image data set Y is divided into two equal halves (50% / 50%). The first half forms the data set Y^1 (formula image) for detecting whether an ECG signal is normal, where (formula image) is its i-th two-dimensional image. In the second half, the wavelet-transformed images are mirrored or inverted to obtain the data set Y^2 (formula image) for determining the specific type of an ECG signal, where (formula image) is its j-th two-dimensional image. The data set Y^1 of task one (the coarse-grained branch) is used to detect whether an ECG signal is normal; the data set Y^2 of task two (the fine-grained branch) is used to determine the specific type of an ECG signal. Y^1 consists of the wavelet-transformed images without any change, while Y^2 consists of the wavelet-transformed images after mirroring or inversion. The two classification tasks share the same neural network (the MTEF-NET network model) for feature extraction, and the class of each task is finally output by the global average-pooling layer.
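The 50/50 split with mirror-or-inversion augmentation on the second half can be sketched as follows; which axis "mirror" and "inversion" act on is an assumption here, as the patent does not specify it:

```python
import numpy as np

def build_multitask_sets(Y, seed=0):
    # Split the scalogram set 50/50: task one keeps its images as-is,
    # task two mirrors (width flip) or inverts (height flip) each image.
    rng = np.random.default_rng(seed)
    n = len(Y) // 2
    Y1 = np.asarray(Y[:n])                 # coarse-grained branch
    Y2 = np.asarray([img[:, ::-1] if rng.random() < 0.5 else img[::-1, :]
                     for img in Y[n:]])    # fine-grained branch
    return Y1, Y2

Y = np.arange(4 * 8 * 8, dtype=float).reshape(4, 8, 8)
Y1, Y2 = build_multitask_sets(Y)
```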
e) Establishing a MTEF-NET network model, wherein the MTEF-NET network model sequentially comprises a convolutional layer, a CBAM-MBConv1 module, a first CBAM-MBConv6 module, a second CBAM-MBConv6 module, a third CBAM-MBConv6 module, a fourth CBAM-MBConv6 module, a fifth CBAM-MBConv6 module, a sixth CBAM-MBConv6 module, a seventh CBAM-MBConv6 module, an eighth CBAM-MBConv6 module, a ninth CBAM-MBConv6 module, a tenth CBAM-MBConv6 module, an eleventh CBAM-MBConv6 module, a twelfth CBAM-MBConv6 module, a thirteenth CBAM-MBConv6 module, a fourteenth CBAM-MBConv6 module, a fifteenth CBAM-MBConv6 module and a global pooling layer.
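Read as a flat layer list, the step-e) backbone is one stem convolution, one CBAM-MBConv1 block, fifteen CBAM-MBConv6 blocks, and a global pooling layer. As a sanity check it can be written down directly (kernel sizes for blocks not detailed in this excerpt are left as None):

```python
# Layer sequence of the MTEF-NET backbone as stated in step e).
MTEF_NET_LAYERS = (
    [("conv", 3)]                                           # 3x3 stem conv
    + [("CBAM-MBConv1", 3)]                                 # expansion ratio 1
    + [("CBAM-MBConv6_%d" % k, None) for k in range(1, 16)] # fifteen blocks
    + [("global_pool", None)]
)
```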
f) Each two-dimensional image in the data set Y^1 is input to the MTEF-NET network model, which outputs a feature map set (formula image), where (formula image) denotes the i-th feature map, of dimension (formula image).
g) Each two-dimensional image in the data set Y^2 is input to the MTEF-NET network model, which outputs a feature map set (formula image), where (formula image) denotes the j-th feature map, of dimension (formula image).
h) The feature map set (formula image) is input to the global pooling layer of the MTEF-NET network model, and the ECG classification results (formula image) are output, where y_ai is the i-th result (formula image). The feature map set (formula image) is likewise input to the global pooling layer of the MTEF-NET network model, and the ECG classification results (formula image) are output, where y_bj is the j-th result (formula image).
Example 1:
In step b), two median filters are used to remove the noise in the original ECG signals X = {x_1, x_2, …, x_i, …, x_n} and obtain the clean ECG signals U = {u_1, u_2, …, u_i, …, u_n}; the width of the first median filter is 300 ms and the width of the second is 600 ms.
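A common way to use two such cascaded median filters is to estimate the baseline and subtract it; a minimal numpy sketch, assuming the MIT-BIH sampling rate of 360 Hz (so 300 ms ≈ 109 samples and 600 ms ≈ 217 samples, forced odd):

```python
import numpy as np

def medfilt1(x, w):
    # Simple 1-D median filter with edge padding; w must be odd.
    pad = w // 2
    win = np.lib.stride_tricks.sliding_window_view(
        np.pad(x, pad, mode="edge"), w)
    return np.median(win, axis=1)

def remove_baseline(x, fs=360):
    # Cascade a 300 ms and a 600 ms median filter to estimate the
    # baseline wander, then subtract the estimate from the signal.
    w1, w2 = int(0.3 * fs) | 1, int(0.6 * fs) | 1
    return x - medfilt1(medfilt1(x, w1), w2)

fs = 360
t = np.arange(5 * fs) / fs
drift = 0.5 * np.sin(2 * np.pi * 0.1 * t)   # pure slow baseline wander
residual = remove_baseline(drift, fs)        # should be close to zero
```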
Example 2:
In step c), the i-th two-dimensional image y_i is calculated by the continuous wavelet transform y_i(a, b) = (1/√a) ∫ u_i(t) φ((t − b)/a) dt, where a is the transformation scale, b is the translation factor, and φ(·) is the mother wavelet.
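Evaluating that transform over a grid of scales a and translations b yields the two-dimensional scalogram image. A numpy-only sketch, using a Mexican-hat mother wavelet as an illustrative choice (the patent does not name φ):

```python
import numpy as np

def ricker(n, a):
    # Discretised Mexican-hat mother wavelet phi((t - b)/a) / sqrt(a).
    t = (np.arange(n) - (n - 1) / 2) / a
    return (1 - t**2) * np.exp(-t**2 / 2) / np.sqrt(a)

def cwt_scalogram(u, scales, n=101):
    # One row per scale a; the convolution slides the translation b.
    return np.array([np.convolve(u, ricker(n, a), mode="same")
                     for a in scales])

fs = 360
t = np.arange(fs) / fs
u = np.sin(2 * np.pi * 8 * t)                     # toy 1-second signal
img = cwt_scalogram(u, scales=np.arange(1, 33))   # 32 x 360 image
```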
Example 3:
step f) comprises the following steps:
f-1) The two-dimensional image (formula image) is input to the MTEF-NET network model; it is first input to the convolution layer with a 3 × 3 kernel, which outputs a feature map (formula image) of dimension (formula image).
f-2) The CBAM-MBConv1 module comprises, in order, a first convolution layer with a 1 × 1 kernel, a second convolution layer with a 3 × 3 kernel and an expansion ratio of 1, a first global max-pooling layer, a first global average-pooling layer, a second global max-pooling layer, a second global average-pooling layer, a third convolution layer with a 7 × 7 kernel, a sigmoid activation function layer, and a fourth convolution layer with a 1 × 1 kernel. The feature map (formula image) is input sequentially to the first and second convolution layers, yielding a feature map (formula image); the first convolution layer raises the dimensionality and the second expands the channels. That map is input sequentially to the first global max-pooling layer and the first global average-pooling layer, which use channel attention to extract channel features, yielding a feature map (formula image); it then passes through the second global max-pooling layer, the second global average-pooling layer, the third convolution layer with a 7 × 7 kernel, and the sigmoid activation function layer, yielding a spatial attention feature map (formula image). The spatial attention feature map is input to the fourth convolution layer for dimensionality reduction, and its output is added element-wise to the earlier feature map, yielding the output feature map (formula image).
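The channel- and spatial-attention paths of the CBAM-MBConv block can be sketched in numpy; the learned layers are replaced by fixed operations here (an assumption for illustration, not the patent's trained convolutions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x):
    # x: (C, H, W). Channel attention from global max- and average-pooling;
    # the learned layers between pooling and gating are omitted.
    w = sigmoid(x.max(axis=(1, 2)) + x.mean(axis=(1, 2)))   # (C,)
    return x * w[:, None, None]

def spatial_attention(x, k=7):
    # Channel-wise max and mean give a spatial descriptor; a fixed k x k
    # averaging filter stands in for the learned 7 x 7 third convolution.
    desc = x.max(axis=0) + x.mean(axis=0)                   # (H, W)
    win = np.lib.stride_tricks.sliding_window_view(
        np.pad(desc, k // 2, mode="edge"), (k, k))
    return x * sigmoid(win.mean(axis=(2, 3)))[None, :, :]

x = np.random.default_rng(0).normal(size=(4, 16, 16))
y = spatial_attention(channel_attention(x))                 # same shape as x
```

Both gates lie in (0, 1), so the block rescales features without changing the map's shape, matching the element-wise residual addition described above.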
f-3) The first CBAM-MBConv6 module comprises, in order, a first convolution layer with a 1 × 1 kernel, a second convolution layer with a 3 × 3 kernel and an expansion ratio of 6, a first global max-pooling layer, a first global average-pooling layer, a second global max-pooling layer, a second global average-pooling layer, a third convolution layer with a 7 × 7 kernel, a sigmoid activation function layer, and a fourth convolution layer with a 1 × 1 kernel. The feature map (formula image) is input sequentially to the first and second convolution layers, yielding a feature map (formula image); the first convolution layer raises the dimensionality and the second expands the channels. That map is input sequentially to the first global max-pooling layer and the first global average-pooling layer, which use channel attention to extract channel features, yielding a feature map (formula image); it then passes through the second global max-pooling layer, the second global average-pooling layer, the third convolution layer with a 7 × 7 kernel, and the sigmoid activation function layer, yielding a spatial attention feature map (formula image). The spatial attention feature map is input to the fourth convolution layer for dimensionality reduction, and its output is added element-wise to the earlier feature map, yielding the output feature map (formula image).
f-4) The second CBAM-MBConv6 module comprises, in order, a first convolution layer with a 1 × 1 kernel, a second convolution layer with a 3 × 3 kernel and an expansion ratio of 6, a first global max-pooling layer, a first global average-pooling layer, a second global max-pooling layer, a second global average-pooling layer, a third convolution layer with a 7 × 7 kernel, a sigmoid activation function layer, and a fourth convolution layer with a 1 × 1 kernel. The feature map (formula image) is input sequentially to the first and second convolution layers, yielding a feature map (formula image); the first convolution layer raises the dimensionality and the second expands the channels. That map is input sequentially to the first global max-pooling layer and the first global average-pooling layer, which use channel attention to extract channel features, yielding a feature map (formula image); it then passes through the second global max-pooling layer, the second global average-pooling layer, the third convolution layer with a 7 × 7 kernel, and the sigmoid activation function layer, yielding a spatial attention feature map (formula image). The spatial attention feature map is input to the fourth convolution layer for dimensionality reduction, and its output is added element-wise to the earlier feature map, yielding the output feature map (formula image).
f-5) The third CBAM-MBConv6 module comprises, in order, a first convolution layer with a 1 × 1 kernel, a second convolution layer with a 5 × 5 kernel and an expansion ratio of 6, a first global max-pooling layer, a first global average-pooling layer, a second global max-pooling layer, a second global average-pooling layer, a third convolution layer with a 7 × 7 kernel, a sigmoid activation function layer, and a fourth convolution layer with a 1 × 1 kernel. The feature map (formula image) is input sequentially to the first and second convolution layers, yielding a feature map (formula image); the first convolution layer raises the dimensionality and the second expands the channels. That map is input sequentially to the first global max-pooling layer and the first global average-pooling layer, which use channel attention to extract channel features, yielding a feature map (formula image); it then passes through the second global max-pooling layer, the second global average-pooling layer, the third convolution layer with a 7 × 7 kernel, and the sigmoid activation function layer, yielding a spatial attention feature map (formula image). The spatial attention feature map is input to the fourth convolution layer for dimensionality reduction, and its output is added element-wise to the earlier feature map, yielding the output feature map (formula image).
f-6) The fourth CBAM-MBConv6 module comprises, in order, a first convolution layer with a 1 × 1 kernel, a second convolution layer with a 5 × 5 kernel and an expansion ratio of 6, a first global max-pooling layer, a first global average-pooling layer, a second global max-pooling layer, a second global average-pooling layer, a third convolution layer with a 7 × 7 kernel, a sigmoid activation function layer, and a fourth convolution layer with a 1 × 1 kernel. The feature map (formula image) is input sequentially to the first and second convolution layers, yielding a feature map (formula image); the first convolution layer raises the dimensionality and the second expands the channels. That map is input sequentially to the first global max-pooling layer and the first global average-pooling layer, which use channel attention to extract channel features, yielding a feature map (formula image); it then passes through the second global max-pooling layer, the second global average-pooling layer, the third convolution layer with a 7 × 7 kernel, and the sigmoid activation function layer, yielding a spatial attention feature map (formula image). The spatial attention feature map is input to the fourth convolution layer for dimensionality reduction, and its output is added element-wise to the earlier feature map, yielding the output feature map (formula image).
f-7) The fifth CBAM-MBConv6 module comprises, in order, a first convolution layer with a 1 × 1 kernel, a second convolution layer with a 3 × 3 kernel and an expansion ratio of 6, a first global max-pooling layer, a first global average-pooling layer, a second global max-pooling layer, a second global average-pooling layer, a third convolution layer with a 7 × 7 kernel, a sigmoid activation function layer, and a fourth convolution layer with a 1 × 1 kernel. The feature map (formula image) is input sequentially to the first and second convolution layers, yielding a feature map (formula image); the first convolution layer raises the dimensionality and the second expands the channels. That map is input sequentially to the first global max-pooling layer and the first global average-pooling layer, which use channel attention to extract channel features, yielding a feature map (formula image); it then passes through the second global max-pooling layer, the second global average-pooling layer, the third convolution layer with a 7 × 7 kernel, and the sigmoid activation function layer, yielding a spatial attention feature map (formula image). The spatial attention feature map is input to the fourth convolution layer for dimensionality reduction, and its output is added element-wise to the earlier feature map, yielding the output feature map (formula image).
f-8) a sixth CBAM-MBConv6 module sequentially comprises a first convolution layer with convolution kernel size of 1 x 1, a second convolution layer with convolution kernel size of 3 x 3 and expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7, a sigmoid activation function layer and a fourth convolution layer with convolution kernel size of 1 x 1, and a feature map is formed by the first convolution layer, the second convolution layer, the first global average pooling layer, the second global maximum pooling layer, the second global average pooling layer, the third convolution layer with convolution kernel size of 7 x 7, the sigmoid activation function layer and the fourth convolution layer with convolution kernel size of 1 x 1
Figure BDA00038055281000002110
Sequentially input into the first convolution layer and the second convolution layer and output to obtain a dimension of
Figure BDA00038055281000002111
Characteristic diagram of
Figure BDA00038055281000002112
Performing dimensionality increase operation on the first convolution layer, performing channel expansion on the second convolution layer, and generating a characteristic diagram
Figure BDA00038055281000002113
The data are sequentially input into a first global maximum pooling layer and a first global average pooling layer, channel features are extracted by using channel attention, and then the data are output to obtain the dimension of
Figure BDA00038055281000002114
Characteristic diagram of
Figure BDA00038055281000002115
Will feature map
Figure BDA00038055281000002116
Sequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension of
Figure BDA00038055281000002117
Spatial attention feature map of
Figure BDA00038055281000002118
Feature map of spatial attention
Figure BDA00038055281000002119
Inputting the data into a fourth convolution layer for dimensionality reduction, and outputting the data to obtain a dimensionality of
Figure BDA00038055281000002120
Characteristic diagram of
Figure BDA00038055281000002121
Will feature map
Figure BDA00038055281000002122
And characteristic diagram
Figure BDA00038055281000002123
Element by element addition to a dimension of
Figure BDA00038055281000002124
Characteristic diagram of
Figure BDA00038055281000002125
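The spatial-attention step (second global maximum pooling, second global average pooling, a 7 x 7 convolution, and a sigmoid activation) can be sketched as below. The fixed averaging kernel is a hypothetical stand-in for the learned 7 x 7 convolution weights, which the text does not specify.

```python
import numpy as np

def spatial_attention(x, kernel=None):
    """Spatial-attention sketch for x of shape (C, H, W).

    Channel-wise max and mean maps are stacked, passed through a single
    7x7 convolution with 'same' padding, and a sigmoid turns the result
    into an (H, W) gate that is applied to every channel.
    """
    c, h, w = x.shape
    max_map = x.max(axis=0)                    # (H, W) max over channels
    mean_map = x.mean(axis=0)                  # (H, W) mean over channels
    stacked = np.stack([max_map, mean_map])    # (2, H, W)
    if kernel is None:
        kernel = np.full((2, 7, 7), 1.0 / (2 * 49))  # assumed averaging weights
    pad = 3                                    # 'same' padding for a 7x7 kernel
    padded = np.pad(stacked, ((0, 0), (pad, pad), (pad, pad)))
    conv = np.zeros((h, w))
    for i in range(h):                         # naive 7x7 convolution
        for j in range(w):
            conv[i, j] = np.sum(padded[:, i:i + 7, j:j + 7] * kernel)
    gate = 1.0 / (1.0 + np.exp(-conv))         # sigmoid activation, in (0, 1)
    return x * gate[None, :, :]                # same (C, H, W) shape as input

x = np.random.randn(4, 10, 10)
y = spatial_attention(x)
print(y.shape)  # (4, 10, 10)
```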
f-9) the seventh CBAM-MBConv6 module sequentially comprises a first convolution layer with convolution kernel size of 1 x 1, a second convolution layer with convolution kernel size of 3 x 3 and expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7, a sigmoid activation function layer and a fourth convolution layer with convolution kernel size of 1 x 1; the feature map
Figure BDA00038055281000002126
Sequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension of
Figure BDA00038055281000002127
Characteristic diagram of
Figure BDA00038055281000002128
Performing dimension increasing operation on the first convolution layer, performing channel expansion on the second convolution layer, and forming a feature map
Figure BDA00038055281000002129
The data are sequentially input into a first global maximum pooling layer and a first global average pooling layer, channel features are extracted by using channel attention, and then the data are output to obtain the dimension of
Figure BDA00038055281000002130
Characteristic diagram of
Figure BDA00038055281000002131
Will feature map
Figure BDA00038055281000002132
Sequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension of
Figure BDA00038055281000002133
Spatial attention feature map of (1)
Figure BDA00038055281000002134
Spatial attention feature map
Figure BDA00038055281000002135
Inputting the data into a fourth convolution layer for dimensionality reduction, and outputting the data to obtain a dimensionality of
Figure BDA00038055281000002136
Characteristic diagram of
Figure BDA00038055281000002137
Will feature map
Figure BDA00038055281000002138
And characteristic diagram
Figure BDA00038055281000002139
Element by element addition to a dimension of
Figure BDA00038055281000002140
Characteristic diagram of
Figure BDA00038055281000002141
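The "dimension increasing" first convolution layer (expansion ratio 6) and the "dimensionality reduction" fourth convolution layer change only the channel count; the depthwise middle convolution may stride down the spatial size. The bookkeeping can be checked as follows, with the channel counts (40 in, 40 out, 28 x 28 spatial) chosen purely for illustration, not taken from the patent.

```python
def mbconv6_shapes(c_in, h, w, c_out, stride=1, expand=6):
    """Track tensor dimensions through an MBConv6-style block.

    The 1x1 expansion multiplies channels by the expansion ratio (6),
    the depthwise convolution keeps channels but may stride H and W down,
    and the final 1x1 projection reduces channels to c_out. The residual
    (element-by-element) addition is only valid when input and output
    dimensions match exactly.
    """
    expanded = (c_in * expand, h, w)                     # after 1x1 expansion
    spatial = (c_in * expand, h // stride, w // stride)  # after depthwise conv
    projected = (c_out, h // stride, w // stride)        # after 1x1 projection
    residual_ok = (c_in, h, w) == projected
    return expanded, spatial, projected, residual_ok

exp, spa, proj, ok = mbconv6_shapes(c_in=40, h=28, w=28, c_out=40)
print(exp, spa, proj, ok)  # (240, 28, 28) (240, 28, 28) (40, 28, 28) True
```

With stride 1 and matching channel counts the residual addition is valid, which is why the text can add the projected feature map to the module input element by element.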
f-10) the eighth CBAM-MBConv6 module sequentially comprises a first convolution layer with convolution kernel size of 1 x 1, a second convolution layer with convolution kernel size of 5 x 5 and expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7, a sigmoid activation function layer and a fourth convolution layer with convolution kernel size of 1 x 1; the feature map
Figure BDA0003805528100000221
Sequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension of
Figure BDA0003805528100000222
Characteristic diagram of
Figure BDA0003805528100000223
Performing dimension increasing operation on the first convolution layer, performing channel expansion on the second convolution layer, and forming a feature map
Figure BDA0003805528100000224
The data are sequentially input into a first global maximum pooling layer and a first global average pooling layer, channel features are extracted by using channel attention, and then the data are output to obtain the dimension of
Figure BDA0003805528100000225
Characteristic diagram of
Figure BDA0003805528100000226
Will feature map
Figure BDA0003805528100000227
Sequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension of
Figure BDA0003805528100000228
Spatial attention feature map of
Figure BDA0003805528100000229
Spatial attention feature map
Figure BDA00038055281000002210
Inputting the data into the fourth convolution layer for dimensionality reduction, and outputting the data to obtain a dimensionality of
Figure BDA00038055281000002211
Characteristic diagram of
Figure BDA00038055281000002212
Will feature map
Figure BDA00038055281000002213
And characteristic diagram
Figure BDA00038055281000002214
Element by element addition to a dimension of
Figure BDA00038055281000002215
Characteristic diagram of
Figure BDA00038055281000002216
f-11) the ninth CBAM-MBConv6 module sequentially comprises a first convolution layer with convolution kernel size of 1 x 1, a second convolution layer with convolution kernel size of 5 x 5 and expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7, a sigmoid activation function layer and a fourth convolution layer with convolution kernel size of 1 x 1; the feature map
Figure BDA00038055281000002217
Sequentially input into the first convolution layer and the second convolution layer and output to obtain a dimension of
Figure BDA00038055281000002218
Characteristic diagram of
Figure BDA00038055281000002219
Performing dimension increasing operation on the first convolution layer, performing channel expansion on the second convolution layer, and forming a feature map
Figure BDA00038055281000002220
The data are sequentially input into a first global maximum pooling layer and a first global average pooling layer, channel features are extracted by using channel attention, and then the data are output to obtain the dimension of
Figure BDA00038055281000002221
Characteristic diagram of
Figure BDA00038055281000002222
Will feature map
Figure BDA00038055281000002223
Sequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension of
Figure BDA00038055281000002224
Spatial attention feature map of
Figure BDA00038055281000002225
Feature map of spatial attention
Figure BDA00038055281000002226
Inputting the data into the fourth convolution layer for dimensionality reduction, and outputting the data to obtain a dimensionality of
Figure BDA00038055281000002227
Characteristic diagram of
Figure BDA00038055281000002228
Will feature map
Figure BDA00038055281000002229
And characteristic diagram
Figure BDA00038055281000002230
Add element by element to get dimension of
Figure BDA00038055281000002231
Characteristic diagram of
Figure BDA00038055281000002232
f-12) the tenth CBAM-MBConv6 module sequentially comprises a first convolution layer with convolution kernel size of 1 x 1, a second convolution layer with convolution kernel size of 5 x 5 and expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7, a sigmoid activation function layer and a fourth convolution layer with convolution kernel size of 1 x 1; the feature map
Figure BDA00038055281000002233
Sequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension of
Figure BDA0003805528100000231
Characteristic diagram of
Figure BDA0003805528100000232
Performing dimension increasing operation on the first convolution layer, performing channel expansion on the second convolution layer, and forming a feature map
Figure BDA0003805528100000233
Sequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer, extracting channel characteristics by using channel attention, and outputting the channel characteristics to obtain a dimensionality of
Figure BDA0003805528100000234
Characteristic diagram of
Figure BDA0003805528100000235
Will feature map
Figure BDA0003805528100000236
Sequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension of
Figure BDA0003805528100000237
Spatial attention feature map of (1)
Figure BDA0003805528100000238
Feature map of spatial attention
Figure BDA0003805528100000239
Inputting the data into the fourth convolution layer for dimensionality reduction, and outputting the data to obtain a dimensionality of
Figure BDA00038055281000002310
Characteristic diagram of
Figure BDA00038055281000002311
Will feature map
Figure BDA00038055281000002312
And characteristic diagram
Figure BDA00038055281000002313
Element by element addition to a dimension of
Figure BDA00038055281000002314
Characteristic diagram of
Figure BDA00038055281000002315
f-13) the eleventh CBAM-MBConv6 module sequentially comprises a first convolution layer with convolution kernel size of 1 x 1, a second convolution layer with convolution kernel size of 5 x 5 and expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7, a sigmoid activation function layer and a fourth convolution layer with convolution kernel size of 1 x 1; the feature map
Figure BDA00038055281000002316
Sequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension of
Figure BDA00038055281000002317
Characteristic diagram of
Figure BDA00038055281000002318
Performing dimensionality increase operation on the first convolution layer, performing channel expansion on the second convolution layer, and generating a characteristic diagram
Figure BDA00038055281000002319
Sequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer, extracting channel characteristics by using channel attention, and outputting the channel characteristics to obtain a dimensionality of
Figure BDA00038055281000002320
Characteristic diagram of
Figure BDA00038055281000002321
Will feature map
Figure BDA00038055281000002322
Sequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain the dimension of
Figure BDA00038055281000002323
Spatial attention feature map of
Figure BDA00038055281000002324
Spatial attention feature map
Figure BDA00038055281000002325
Inputting the data into the fourth convolution layer for dimensionality reduction, and outputting the data to obtain a dimensionality of
Figure BDA00038055281000002326
Characteristic diagram of
Figure BDA00038055281000002327
Will feature map
Figure BDA00038055281000002328
And characteristic diagram
Figure BDA00038055281000002329
Element by element addition to a dimension of
Figure BDA00038055281000002330
Characteristic diagram of
Figure BDA00038055281000002331
f-14) the twelfth CBAM-MBConv6 module sequentially comprises a first convolution layer with convolution kernel size of 1 x 1, a second convolution layer with convolution kernel size of 5 x 5 and expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7, a sigmoid activation function layer and a fourth convolution layer with convolution kernel size of 1 x 1; the feature map
Figure BDA00038055281000002332
Sequentially input into the first convolution layer and the second convolution layer and output to obtain a dimension of
Figure BDA00038055281000002333
Characteristic diagram of
Figure BDA00038055281000002334
Performing dimension increasing operation on the first convolution layer, performing channel expansion on the second convolution layer, and forming a feature map
Figure BDA00038055281000002335
The data are sequentially input into a first global maximum pooling layer and a first global average pooling layer, channel features are extracted by using channel attention, and then the data are output to obtain the dimension of
Figure BDA00038055281000002336
Characteristic diagram of
Figure BDA00038055281000002337
Will feature map
Figure BDA00038055281000002338
Sequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain the data with dimensionality of
Figure BDA0003805528100000241
Spatial attention feature map of
Figure BDA0003805528100000242
Feature map of spatial attention
Figure BDA0003805528100000243
Inputting the data into the fourth convolution layer for dimensionality reduction, and outputting the data to obtain a dimensionality of
Figure BDA0003805528100000244
Characteristic diagram of
Figure BDA0003805528100000245
Will feature map
Figure BDA0003805528100000246
And characteristic diagram
Figure BDA0003805528100000247
Element by element addition to a dimension of
Figure BDA0003805528100000248
Characteristic diagram of
Figure BDA0003805528100000249
f-15) the thirteenth CBAM-MBConv6 module sequentially comprises a first convolution layer with convolution kernel size of 1 x 1, a second convolution layer with convolution kernel size of 5 x 5 and expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7, a sigmoid activation function layer and a fourth convolution layer with convolution kernel size of 1 x 1; the feature map
Figure BDA00038055281000002410
Sequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension of
Figure BDA00038055281000002411
Characteristic diagram of
Figure BDA00038055281000002412
Performing dimensionality increase operation on the first convolution layer, performing channel expansion on the second convolution layer, and generating a characteristic diagram
Figure BDA00038055281000002413
The data are sequentially input into a first global maximum pooling layer and a first global average pooling layer, channel features are extracted by using channel attention, and then the data are output to obtain the dimension of
Figure BDA00038055281000002414
Characteristic diagram of
Figure BDA00038055281000002415
Will feature map
Figure BDA00038055281000002416
Sequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension of
Figure BDA00038055281000002417
Spatial attention feature map of
Figure BDA00038055281000002418
Spatial attention feature map
Figure BDA00038055281000002419
Inputting the data into the fourth convolution layer for dimensionality reduction, and outputting the data to obtain a dimensionality of
Figure BDA00038055281000002420
Characteristic diagram of
Figure BDA00038055281000002421
Will feature map
Figure BDA00038055281000002422
And characteristic diagram
Figure BDA00038055281000002423
Add element by element to get dimension of
Figure BDA00038055281000002424
Characteristic diagram of
Figure BDA00038055281000002425
f-16) the fourteenth CBAM-MBConv6 module sequentially comprises a first convolution layer with convolution kernel size of 1 x 1, a second convolution layer with convolution kernel size of 5 x 5 and expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7, a sigmoid activation function layer and a fourth convolution layer with convolution kernel size of 1 x 1; the feature map
Figure BDA00038055281000002426
Sequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension of
Figure BDA00038055281000002427
Characteristic diagram of
Figure BDA00038055281000002428
Performing dimensionality increase operation on the first convolution layer, performing channel expansion on the second convolution layer, and generating a characteristic diagram
Figure BDA00038055281000002429
Sequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer, extracting channel characteristics by using channel attention, and outputting the channel characteristics to obtain a dimensionality of
Figure BDA00038055281000002430
Characteristic diagram of
Figure BDA00038055281000002431
Will feature map
Figure BDA00038055281000002432
Sequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain the data with dimensionality of
Figure BDA00038055281000002433
Spatial attention feature map of (1)
Figure BDA00038055281000002434
Feature map of spatial attention
Figure BDA00038055281000002435
Inputting the data into the fourth convolution layer for dimensionality reduction, and outputting the data to obtain a dimensionality of
Figure BDA00038055281000002436
Characteristic diagram of
Figure BDA00038055281000002437
Will feature map
Figure BDA0003805528100000251
And characteristic diagram
Figure BDA0003805528100000252
Add element by element to get dimension of
Figure BDA0003805528100000253
Characteristic diagram of
Figure BDA0003805528100000254
f-17) the fifteenth CBAM-MBConv6 module sequentially comprises a first convolution layer with convolution kernel size of 1 x 1, a second convolution layer with convolution kernel size of 3 x 3 and expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7, a sigmoid activation function layer and a fourth convolution layer with convolution kernel size of 1 x 1; the feature map
Figure BDA0003805528100000255
Sequentially input into the first convolution layer and the second convolution layer and output to obtain a dimension of
Figure BDA0003805528100000256
Characteristic diagram of
Figure BDA0003805528100000257
Performing dimension increasing operation on the first convolution layer, performing channel expansion on the second convolution layer, and forming a feature map
Figure BDA0003805528100000258
Sequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer, extracting channel characteristics by using channel attention, and outputting the channel characteristics to obtain a dimensionality of
Figure BDA0003805528100000259
Characteristic diagram of
Figure BDA00038055281000002510
Will feature map
Figure BDA00038055281000002511
Sequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain the data with dimensionality of
Figure BDA00038055281000002512
Spatial attention feature map of
Figure BDA00038055281000002513
Feature map of spatial attention
Figure BDA00038055281000002514
Inputting the data into a fourth convolution layer for dimensionality reduction, and outputting the data to obtain a dimensionality of
Figure BDA00038055281000002515
Characteristic diagram of
Figure BDA00038055281000002516
Will feature map
Figure BDA00038055281000002517
And characteristic diagram
Figure BDA00038055281000002518
Element by element addition to a dimension of
Figure BDA00038055281000002519
Characteristic diagram of
Figure BDA00038055281000002520
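Putting the pieces of one CBAM-MBConv6 module together, a schematic forward pass is sketched below. The random weight matrices, the omitted depthwise convolution, and the simplified attention gates are all assumptions made for brevity; the sketch only mirrors the order of operations in the text: 1 x 1 expansion, channel attention, spatial attention, 1 x 1 projection, and a residual addition when shapes allow.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1x1(x, w):
    """A 1x1 convolution is a per-pixel channel mix: (C_out, C_in) @ (C_in, H*W)."""
    c, h, wdt = x.shape
    return (w @ x.reshape(c, -1)).reshape(w.shape[0], h, wdt)

def cbam_mbconv6(x, c_out, expand=6):
    """Sketch of one CBAM-MBConv6 block on x of shape (C, H, W)."""
    c_in, h, w = x.shape
    w_up = rng.standard_normal((c_in * expand, c_in)) * 0.1     # stand-in weights
    w_down = rng.standard_normal((c_out, c_in * expand)) * 0.1  # stand-in weights
    t = conv1x1(x, w_up)                                  # expansion: channels x6
    flat = t.reshape(len(t), -1)
    gate_c = 1 / (1 + np.exp(-(flat.max(1) + flat.mean(1))))
    t = t * gate_c[:, None, None]                         # channel attention
    gate_s = 1 / (1 + np.exp(-(t.max(0) + t.mean(0)) / 2))
    t = t * gate_s[None, :, :]                            # spatial attention
    out = conv1x1(t, w_down)                              # projection to c_out
    if out.shape == x.shape:
        out = out + x                                     # residual addition
    return out

x = np.random.randn(8, 6, 6)
y = cbam_mbconv6(x, c_out=8)
print(y.shape)  # (8, 6, 6)
```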
Example 4:
the step g) comprises the following steps:
g-1) the two-dimensional image
Figure BDA00038055281000002521
Is input into the MTEF-NET network model, enters a convolution layer with convolution kernel size of 3 x 3, and is output to obtain a dimension of
Figure BDA00038055281000002522
Characteristic diagram of
Figure BDA00038055281000002523
g-2) mapping the characteristics
Figure BDA00038055281000002524
Sequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension of
Figure BDA00038055281000002525
Characteristic diagram of
Figure BDA00038055281000002526
Performing dimension increasing operation on the first convolution layer, performing channel expansion on the second convolution layer, and forming a feature map
Figure BDA00038055281000002527
The data are sequentially input into a first global maximum pooling layer and a first global average pooling layer, channel features are extracted by using channel attention, and then the data are output to obtain the dimension of
Figure BDA00038055281000002528
Characteristic diagram of
Figure BDA00038055281000002529
Will feature map
Figure BDA00038055281000002530
Sequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension of
Figure BDA00038055281000002531
Spatial attention feature map of
Figure BDA00038055281000002532
Feature map of spatial attention
Figure BDA00038055281000002533
Inputting the data into the fourth convolution layer for dimensionality reduction, and outputting the data to obtain a dimensionality of
Figure BDA00038055281000002534
Characteristic diagram of
Figure BDA00038055281000002535
Will feature map
Figure BDA00038055281000002536
And characteristic diagram
Figure BDA00038055281000002537
Add element by element to get dimension of
Figure BDA00038055281000002538
Characteristic diagram of
Figure BDA00038055281000002539
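Steps g-1) onward describe a single forward pass: a stem convolution followed by the chain of CBAM-MBConv modules applied in sequence. That control flow can be sketched as below; the two-stage chain, the stride-2 downsampling, and the channel-duplication stand-in are illustrative assumptions, since real blocks carry learned weights.

```python
import numpy as np

def forward_pass(x, blocks):
    """Schematic MTEF-NET forward pass: each block consumes the previous
    block's feature map, exactly as in steps g-2) through g-7)."""
    for block in blocks:
        x = block(x)
    return x

def make_stage(c_out):
    """Hypothetical stage: halve H and W, grow channels to c_out."""
    def stage(x):
        c, h, w = x.shape
        pooled = x[:, ::2, ::2]                       # stride-2 downsampling
        return np.repeat(pooled, c_out // c, axis=0)  # channel increase (stand-in)
    return stage

x = np.random.randn(3, 32, 32)
y = forward_pass(x, [make_stage(6), make_stage(12)])
print(y.shape)  # (12, 8, 8)
```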
g-3) feature map
Figure BDA0003805528100000261
Sequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension of
Figure BDA0003805528100000262
Characteristic diagram of
Figure BDA0003805528100000263
Performing dimension increasing operation on the first convolution layer, performing channel expansion on the second convolution layer, and forming a feature map
Figure BDA0003805528100000264
The data are sequentially input into a first global maximum pooling layer and a first global average pooling layer, channel features are extracted by using channel attention, and then the data are output to obtain the dimension of
Figure BDA0003805528100000265
Characteristic diagram of
Figure BDA0003805528100000266
Will feature map
Figure BDA0003805528100000267
Sequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension of
Figure BDA0003805528100000268
Spatial attention feature map of
Figure BDA0003805528100000269
Feature map of spatial attention
Figure BDA00038055281000002610
Inputting the data into a fourth convolution layer for dimensionality reduction, and outputting the data to obtain a dimensionality of
Figure BDA00038055281000002611
Characteristic diagram of
Figure BDA00038055281000002612
Will feature map
Figure BDA00038055281000002613
And characteristic diagram
Figure BDA00038055281000002614
Add element by element to get dimension of
Figure BDA00038055281000002615
Characteristic diagram of
Figure BDA00038055281000002616
g-4) feature map
Figure BDA00038055281000002617
Sequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension of
Figure BDA00038055281000002618
Characteristic diagram of
Figure BDA00038055281000002619
Performing dimension increasing operation on the first convolution layer, performing channel expansion on the second convolution layer, and forming a feature map
Figure BDA00038055281000002620
The data are sequentially input into a first global maximum pooling layer and a first global average pooling layer, channel features are extracted by using channel attention, and then the data are output to obtain the dimension of
Figure BDA00038055281000002621
Characteristic diagram of
Figure BDA00038055281000002622
Will feature map
Figure BDA00038055281000002623
Sequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain the data with dimensionality of
Figure BDA00038055281000002624
Spatial attention feature map of (1)
Figure BDA00038055281000002625
Spatial attention feature map
Figure BDA00038055281000002626
Inputting the data into the fourth convolution layer for dimensionality reduction, and outputting the data to obtain a dimensionality of
Figure BDA00038055281000002627
Characteristic diagram of
Figure BDA00038055281000002628
Will feature map
Figure BDA00038055281000002629
And characteristic diagram
Figure BDA00038055281000002630
Add element by element to get dimension of
Figure BDA00038055281000002631
Characteristic diagram of
Figure BDA00038055281000002632
g-5) feature map
Figure BDA00038055281000002633
Sequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension of
Figure BDA00038055281000002634
Characteristic diagram of
Figure BDA00038055281000002635
Performing dimensionality increase operation on the first convolution layer, performing channel expansion on the second convolution layer, and generating a characteristic diagram
Figure BDA00038055281000002636
Sequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer, extracting channel characteristics by using channel attention, and outputting to obtain a dimension of
Figure BDA00038055281000002637
Characteristic diagram of
Figure BDA00038055281000002638
Will feature map
Figure BDA00038055281000002639
Sequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension of
Figure BDA00038055281000002640
Spatial attention feature map of
Figure BDA00038055281000002641
Feature map of spatial attention
Figure BDA00038055281000002642
Inputting the data into the fourth convolution layer for dimensionality reduction, and outputting the data to obtain a dimensionality of
Figure BDA00038055281000002643
Characteristic diagram of
Figure BDA00038055281000002644
Will feature map
Figure BDA00038055281000002645
And characteristic diagram
Figure BDA00038055281000002646
Add element by element to get dimension of
Figure BDA00038055281000002647
Characteristic diagram of
Figure BDA00038055281000002648
g-6) feature map
Figure BDA0003805528100000271
Sequentially input into the first convolution layer and the second convolution layer and output to obtain a dimension of
Figure BDA0003805528100000272
Characteristic diagram of
Figure BDA0003805528100000273
Performing dimension increasing operation on the first convolution layer, performing channel expansion on the second convolution layer, and forming a feature map
Figure BDA0003805528100000274
Sequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer, extracting channel characteristics by using channel attention, and outputting the channel characteristics to obtain a dimensionality of
Figure BDA0003805528100000275
Characteristic diagram of
Figure BDA0003805528100000276
Will feature map
Figure BDA0003805528100000277
Sequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension of
Figure BDA0003805528100000278
Spatial attention feature map of (1)
Figure BDA0003805528100000279
Spatial attention feature map
Figure BDA00038055281000002710
Inputting the data into the fourth convolution layer for dimensionality reduction, and outputting the data to obtain a dimensionality of
Figure BDA00038055281000002711
Characteristic diagram of
Figure BDA00038055281000002712
Will feature map
Figure BDA00038055281000002713
And characteristic diagram
Figure BDA00038055281000002714
Add element by element to get dimension of
Figure BDA00038055281000002715
Characteristic diagram of
Figure BDA00038055281000002716
g-7) feature map
Figure BDA00038055281000002717
Sequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension of
Figure BDA00038055281000002718
Characteristic diagram of
Figure BDA00038055281000002719
Performing dimension increasing operation on the first convolution layer, performing channel expansion on the second convolution layer, and forming a feature map
Figure BDA00038055281000002720
The data are sequentially input into a first global maximum pooling layer and a first global average pooling layer, channel features are extracted by using channel attention, and then the data are output to obtain the dimension of
Figure BDA00038055281000002721
Characteristic diagram of
Figure BDA00038055281000002722
Will feature map
Figure BDA00038055281000002723
Sequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension of
Figure BDA00038055281000002724
Spatial attention feature map y_j,29; the spatial attention feature map y_j,29 is input into the fourth convolution layer for dimensionality reduction, and output to obtain a dimension of
Figure BDA00038055281000002725
Characteristic diagram of
Figure BDA00038055281000002726
Will feature map
Figure BDA00038055281000002727
And characteristic diagram
Figure BDA00038055281000002728
Add element by element to get dimension of
Figure BDA00038055281000002729
Characteristic diagram of
Figure BDA00038055281000002730
g-8) feature map
Figure BDA00038055281000002731
Sequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension of
Figure BDA00038055281000002732
Characteristic diagram of
Figure BDA00038055281000002733
Performing dimensionality increase operation on the first convolution layer, performing channel expansion on the second convolution layer, and generating a characteristic diagram
Figure BDA00038055281000002734
Sequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer, extracting channel characteristics by using channel attention, and outputting the channel characteristics to obtain a dimensionality of
Figure BDA00038055281000002735
Characteristic diagram of
Figure BDA00038055281000002736
Will feature map
Figure BDA00038055281000002737
Sequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension of
Figure BDA00038055281000002738
Spatial attention feature map of
Figure BDA00038055281000002739
Spatial attention feature map
Figure BDA00038055281000002740
Inputting the data into the fourth convolution layer for dimensionality reduction, and outputting the data to obtain a dimensionality of
Figure BDA00038055281000002741
Characteristic diagram of
Figure BDA00038055281000002742
Will feature map
Figure BDA00038055281000002743
And characteristic diagram
Figure BDA00038055281000002744
Add element by element to get dimension of
Figure BDA00038055281000002745
Characteristic diagram of
Figure BDA00038055281000002746
g-9) feature maps
Figure BDA0003805528100000281
Sequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension of
Figure BDA0003805528100000282
Characteristic diagram of
Figure BDA0003805528100000283
Performing dimension increasing operation on the first convolution layer, performing channel expansion on the second convolution layer, and forming a feature map
Figure BDA0003805528100000284
Sequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer, extracting channel characteristics by using channel attention, and outputting the channel characteristics to obtain a dimensionality of
Figure BDA0003805528100000285
Characteristic diagram of
Figure BDA0003805528100000286
Will feature map
Figure BDA0003805528100000287
Sequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension of
Figure BDA0003805528100000288
Spatial attention feature map of
Figure BDA0003805528100000289
Spatial attention feature map
Figure BDA00038055281000002810
Is input into the fourth convolution layer for dimensionality reduction, and output to obtain a dimension of
Figure BDA00038055281000002811
Characteristic diagram of
Figure BDA00038055281000002812
Will feature map
Figure BDA00038055281000002813
And characteristic diagram
Figure BDA00038055281000002814
Element by element addition to a dimension of
Figure BDA00038055281000002815
Characteristic diagram of
Figure BDA00038055281000002816
g-10) feature map
Figure BDA00038055281000002817
Sequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension of
Figure BDA00038055281000002818
Characteristic diagram of
Figure BDA00038055281000002819
Performing dimensionality increase operation on the first convolution layer, performing channel expansion on the second convolution layer, and generating a characteristic diagram
Figure BDA00038055281000002820
Sequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer, extracting channel characteristics by using channel attention, and outputting the channel characteristics to obtain a dimensionality of
Figure BDA00038055281000002821
Characteristic diagram of
Figure BDA00038055281000002822
Will feature map
Figure BDA00038055281000002823
Sequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain the data with dimensionality of
Figure BDA00038055281000002824
Spatial attention feature map of (1)
Figure BDA00038055281000002825
Feature map of spatial attention
Figure BDA00038055281000002826
Inputting the data into a fourth convolution layer for dimensionality reduction, and outputting the data to obtain a dimensionality of
Figure BDA00038055281000002827
Characteristic diagram of
Figure BDA00038055281000002828
Will feature map
Figure BDA00038055281000002829
And characteristic diagram
Figure BDA00038055281000002830
Add element by element to get dimension of
Figure BDA00038055281000002831
Characteristic diagram of
Figure BDA00038055281000002832
g-11) feature map
Figure BDA00038055281000002833
Sequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension of
Figure BDA00038055281000002834
Characteristic diagram of
Figure BDA00038055281000002835
Performing dimension increasing operation on the first convolution layer, performing channel expansion on the second convolution layer, and forming a feature map
Figure BDA00038055281000002836
The data are sequentially input into a first global maximum pooling layer and a first global average pooling layer, channel features are extracted by using channel attention, and then the data are output to obtain the dimension of
Figure BDA00038055281000002837
Characteristic diagram of
Figure BDA00038055281000002838
Will feature map
Figure BDA00038055281000002839
Sequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension of
Figure BDA00038055281000002840
Spatial attention feature map of (1)
Figure BDA00038055281000002841
Spatial attention feature map
Figure BDA00038055281000002842
Inputting the data into the fourth convolution layer for dimensionality reduction, and outputting the data to obtain a dimensionality of
Figure BDA00038055281000002843
Characteristic diagram of
Figure BDA00038055281000002844
Will feature map
Figure BDA00038055281000002845
And characteristic diagram
Figure BDA00038055281000002846
Add element by element to get dimension of
Figure BDA00038055281000002847
Characteristic diagram of
Figure BDA00038055281000002848
g-12) feature map
Figure BDA0003805528100000291
Sequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension of
Figure BDA0003805528100000292
Characteristic diagram of
Figure BDA0003805528100000293
Performing dimension increasing operation on the first convolution layer, performing channel expansion on the second convolution layer, and forming a feature map
Figure BDA0003805528100000294
The data are sequentially input into a first global maximum pooling layer and a first global average pooling layer, channel features are extracted by using channel attention, and then the data are output to obtain the dimension of
Figure BDA0003805528100000295
Characteristic diagram of
Figure BDA0003805528100000296
Will feature map
Figure BDA0003805528100000297
Sequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain the data with dimensionality of
Figure BDA0003805528100000298
Spatial attention feature map of
Figure BDA0003805528100000299
Feature map of spatial attention
Figure BDA00038055281000002910
Inputting the data into a fourth convolution layer for dimensionality reduction, and outputting the data to obtain a dimensionality of
Figure BDA00038055281000002911
Characteristic diagram of
Figure BDA00038055281000002912
Will feature map
Figure BDA00038055281000002913
And characteristic diagram
Figure BDA00038055281000002914
Add element by element to get dimension of
Figure BDA00038055281000002915
Characteristic diagram of
Figure BDA00038055281000002916
g-13) feature map
Figure BDA00038055281000002917
Sequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension of
Figure BDA00038055281000002918
Characteristic diagram of
Figure BDA00038055281000002919
Performing dimension increasing operation on the first convolution layer, performing channel expansion on the second convolution layer, and forming a feature map
Figure BDA00038055281000002920
Sequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer, extracting channel characteristics by using channel attention, and outputting the channel characteristics to obtain a dimensionality of
Figure BDA00038055281000002921
Characteristic diagram of
Figure BDA00038055281000002922
Will feature map
Figure BDA00038055281000002923
Sequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain the data with dimensionality of
Figure BDA00038055281000002924
Spatial attention feature map of
Figure BDA00038055281000002925
Spatial attention feature map
Figure BDA00038055281000002926
Inputting the data into a fourth convolution layer for dimensionality reduction, and outputting the data to obtain a dimensionality of
Figure BDA00038055281000002927
Characteristic diagram of
Figure BDA00038055281000002928
Will feature map
Figure BDA00038055281000002929
And characteristic diagram
Figure BDA00038055281000002930
Add element by element to get dimension of
Figure BDA00038055281000002931
Characteristic diagram of
Figure BDA00038055281000002932
g-14) feature map
Figure BDA00038055281000002933
Sequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension of
Figure BDA00038055281000002934
Characteristic diagram of
Figure BDA00038055281000002935
Performing dimension increasing operation on the first convolution layer, performing channel expansion on the second convolution layer, and forming a feature map
Figure BDA00038055281000002936
Sequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer, extracting channel characteristics by using channel attention, and outputting the channel characteristics to obtain a dimensionality of
Figure BDA00038055281000002937
Characteristic diagram of
Figure BDA00038055281000002938
Will feature map
Figure BDA00038055281000002939
Sequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain the data with dimensionality of
Figure BDA00038055281000002940
Spatial attention feature map of
Figure BDA00038055281000002941
Feature map of spatial attention
Figure BDA00038055281000002942
Inputting the data into the fourth convolution layer for dimensionality reduction, and outputting the data to obtain a dimensionality of
Figure BDA00038055281000002943
Characteristic diagram of
Figure BDA00038055281000002944
Will feature map
Figure BDA00038055281000002945
And characteristic diagram
Figure BDA00038055281000002946
Add element by element to get dimension of
Figure BDA00038055281000002947
Characteristic diagram of
Figure BDA00038055281000002948
g-15) mapping the characteristics
Figure BDA0003805528100000301
Sequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension of
Figure BDA0003805528100000302
Characteristic diagram of
Figure BDA0003805528100000303
Performing dimension increasing operation on the first convolution layer, performing channel expansion on the second convolution layer, and forming a feature map
Figure BDA0003805528100000304
The data are sequentially input into a first global maximum pooling layer and a first global average pooling layer, channel features are extracted by using channel attention, and then the data are output to obtain the dimension of
Figure BDA0003805528100000305
Characteristic diagram of
Figure BDA0003805528100000306
Will feature map
Figure BDA0003805528100000307
Sequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension of
Figure BDA0003805528100000308
Spatial attention feature map of
Figure BDA0003805528100000309
Spatial attention feature map
Figure BDA00038055281000003010
Inputting the data into the fourth convolution layer for dimensionality reduction, and outputting the data to obtain a dimensionality of
Figure BDA00038055281000003011
Characteristic diagram of
Figure BDA00038055281000003012
Will feature map
Figure BDA00038055281000003013
And characteristic diagram
Figure BDA00038055281000003014
Add element by element to get dimension of
Figure BDA00038055281000003015
Characteristic diagram of
Figure BDA00038055281000003016
g-16) mapping the characteristics
Figure BDA00038055281000003017
Sequentially inputting the first convolution layer and the second convolution layer and outputting the first convolution layer and the second convolution layer to obtain a dimension of
Figure BDA00038055281000003018
Characteristic diagram of
Figure BDA00038055281000003019
Performing dimension increasing operation on the first convolution layer, performing channel expansion on the second convolution layer, and forming a feature map
Figure BDA00038055281000003020
The data are sequentially input into a first global maximum pooling layer and a first global average pooling layer, channel features are extracted by using channel attention, and then the data are output to obtain the dimension of
Figure BDA00038055281000003021
Characteristic diagram of
Figure BDA00038055281000003022
Will feature map
Figure BDA00038055281000003023
Sequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain the data with dimensionality of
Figure BDA00038055281000003024
Spatial attention feature map of
Figure BDA00038055281000003025
Feature map of spatial attention
Figure BDA00038055281000003026
Inputting the data into the fourth convolution layer for dimensionality reduction, and outputting the data to obtain a dimensionality of
Figure BDA00038055281000003027
Characteristic diagram of
Figure BDA00038055281000003028
Will feature map
Figure BDA00038055281000003029
And characteristic diagram
Figure BDA00038055281000003030
Element by element addition to a dimension of
Figure BDA00038055281000003031
Characteristic diagram of
Figure BDA00038055281000003032
g-17) mapping the characteristics
Figure BDA00038055281000003033
Sequentially inputting the first convolution layer and the second convolution layer and outputting the first convolution layer and the second convolution layer to obtain a dimension of
Figure BDA00038055281000003034
Characteristic diagram of
Figure BDA00038055281000003035
Performing dimensionality increase operation on the first convolution layer, performing channel expansion on the second convolution layer, and generating a characteristic diagram
Figure BDA00038055281000003036
The data are sequentially input into a first global maximum pooling layer and a first global average pooling layer, channel features are extracted by using channel attention, and then the data are output to obtain the dimension of
Figure BDA00038055281000003037
Characteristic diagram of
Figure BDA00038055281000003038
Will feature map
Figure BDA00038055281000003039
Sequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension of
Figure BDA00038055281000003040
Spatial attention feature map of
Figure BDA00038055281000003041
Feature map of spatial attention
Figure BDA00038055281000003042
Inputting the data into a fourth convolution layer for dimensionality reduction, and outputting the data to obtain a dimensionality of
Figure BDA00038055281000003043
Characteristic diagram of
Figure BDA00038055281000003044
Will feature map
Figure BDA00038055281000003045
And characteristic diagram
Figure BDA00038055281000003046
Element by element addition to a dimension of
Figure BDA00038055281000003047
Characteristic diagram of
Figure BDA00038055281000003048
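Steps g-5) through g-17) all repeat one CBAM-MBConv6 pattern: a 1 x 1 expansion convolution, a second convolution for channel expansion, channel attention built from global max and average pooling, spatial attention built from a 7 x 7 convolution with a sigmoid, a 1 x 1 projection back down, and an element-wise residual addition. The NumPy sketch below shows only that data flow; the function names are hypothetical, and the learned convolutions are replaced by fixed stand-ins (channel repetition/averaging for the 1 x 1 expand/project, a box filter for the 7 x 7 convolution), so it is an illustration of the block structure, not the trained network.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x):
    # x: (C, H, W). Global max pooling and global average pooling over the
    # spatial dims give two (C,) descriptors; their sigmoid sum re-weights channels.
    max_desc = x.max(axis=(1, 2))
    avg_desc = x.mean(axis=(1, 2))
    return x * sigmoid(max_desc + avg_desc)[:, None, None]

def spatial_attention(x, kernel=7):
    # Max and average pooling across the channel axis give two (H, W) maps;
    # a kernel x kernel filter (a box filter standing in for the learned
    # 7 x 7 convolution) plus sigmoid yields the spatial attention map.
    summed = x.max(axis=0) + x.mean(axis=0)
    pad = kernel // 2
    padded = np.pad(summed, pad, mode="edge")
    conv = np.zeros_like(summed)
    H, W = summed.shape
    for i in range(H):
        for j in range(W):
            conv[i, j] = padded[i:i + kernel, j:j + kernel].mean()
    return x * sigmoid(conv)[None, :, :]

def cbam_mbconv(x, expand=6):
    # Expansion ("dimension increase") -> channel attention -> spatial
    # attention -> projection ("dimension reduction") -> residual add.
    expanded = np.repeat(x, expand, axis=0)          # stand-in for the expand conv
    attended = spatial_attention(channel_attention(expanded))
    projected = attended.reshape(x.shape[0], expand, *x.shape[1:]).mean(axis=1)
    return x + projected                             # element-wise addition

x = np.random.default_rng(0).random((4, 8, 8))       # toy (C, H, W) feature map
out = cbam_mbconv(x)
```

Because of the residual addition and the 1 x 1 projection, the output has the same dimension as the input, which is why the blocks can be stacked fifteen times in the MTEF-NET backbone.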
Example 5:
The final feature map Y_81, obtained through feature extraction with the CBAM-MBConv modules, is composed of the task-one feature map
Figure BDA0003805528100000311
and the task-two feature map
Figure BDA0003805528100000312
It is input to the global pooling layer for the final output; the global pooling layer is modified into two output branches, the coarse-grained branch denoted Y_a and the fine-grained branch denoted Y_b. In step h) specifically, the formula
Figure BDA0003805528100000313
is used to calculate the ith result y_ai, where H_ai is the height of the ith result y_ai, W_ai is its width, and C_ai is its number of channels, and the formula
Figure BDA0003805528100000314
is used to calculate the jth result y_bj, where H_bj is the height of the jth result y_bj, W_bj is its width, and C_bj is its number of channels.
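The per-branch formulas (hidden in the image placeholders) average each feature map over its H x W spatial grid, i.e. global average pooling applied separately to the coarse-grained and fine-grained branches. A small NumPy illustration, with made-up shapes standing in for the actual task-one and task-two feature maps:

```python
import numpy as np

def global_avg_pool(feats):
    # feats: (N, C, H, W) -> (N, C). Each output value is the mean of one
    # channel over its H x W grid, matching the branch averaging in step h).
    return feats.mean(axis=(2, 3))

# Hypothetical branch inputs (shapes are assumptions, not from the patent).
coarse_feats = np.full((2, 128, 7, 7), 0.5)   # task-one (coarse-grained) features
fine_feats = np.full((2, 128, 7, 7), 0.25)    # task-two (fine-grained) features

Y_a = global_avg_pool(coarse_feats)           # coarse-grained branch output
Y_b = global_avg_pool(fine_feats)             # fine-grained branch output
```

Pooling each branch independently lets the two tasks share the backbone features while producing separate classification vectors.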
Example 6:
The model is optimized through a loss function. Because the invention comprises two tasks, a coarse-grained branch and a fine-grained branch, optimization cannot be performed with a single-task loss function alone; the loss function must be improved to suit a multi-task neural network. Therefore, after step h) the method further comprises performing the following steps:
i) By the formula
Figure BDA0003805528100000315
And calculating to obtain the loss function L, where λ and β are weights satisfying λ + β = 1, and softmax(·) is the softmax activation function.
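The exact loss formula is hidden in the image placeholder above; assuming each branch contributes a softmax cross-entropy term (consistent with the softmax(·) mentioned in the text), a weighted multi-task loss with λ + β = 1 might be sketched as follows. All function names here are hypothetical:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the class axis.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(logits, labels):
    # Mean negative log-likelihood of the true classes.
    p = softmax(logits)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def multitask_loss(logits_a, labels_a, logits_b, labels_b, lam=0.6):
    # L = lambda * L_coarse + beta * L_fine, with lambda + beta = 1 enforced.
    beta = 1.0 - lam
    return (lam * cross_entropy(logits_a, labels_a)
            + beta * cross_entropy(logits_b, labels_b))

# Toy logits: 2-class coarse task (normal/abnormal), 5-class fine task.
logits_a = np.array([[2.0, 0.5], [0.1, 1.5]])
labels_a = np.array([0, 1])
logits_b = np.array([[1.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 2.0, 0.0, 0.0]])
labels_b = np.array([0, 2])
L = multitask_loss(logits_a, labels_a, logits_b, labels_b, lam=0.6)
```

Setting lam=1.0 recovers the single-task coarse-grained loss, which shows why a fixed single-task loss cannot train both branches.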
j) Updating the parameters of the MTEF-NET network model of step e) with the Adam optimization function using the loss function L, and saving the trained model and its parameters after 100 rounds of training.
Finally, it should be noted that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that changes may be made in the embodiments and/or equivalents thereof without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (7)

1. An electrocardio classification method based on a multi-task MTEF-NET is characterized by comprising the following steps:
a) Obtaining the original electrocardiosignals X = {x_1, x_2, ..., x_i, ..., x_n} from the MIT-BIH arrhythmia database, where x_i is the ith original electrocardiosignal, i ∈ {1, ..., n}, and n is the total number of original electrocardiosignals;
b) Preprocessing the original electrocardiosignals X to remove noise, obtaining the clean electrocardiosignals U = {u_1, u_2, ..., u_i, ..., u_n}, where u_i is the ith clean electrocardiosignal, i ∈ {1, ..., n};
c) Converting the clean electrocardiosignals U into a two-dimensional image data set Y = {y_1, y_2, ..., y_i, ..., y_n} by continuous wavelet transformation, where y_i is the ith two-dimensional image, i ∈ {1, ..., n}, and the two-dimensional image y_i has height H, width W, channel number C, and dimension H × W × C;
d) Dividing the two-dimensional image data set Y in equal proportion into two halves of 50% each, where the first data set, used for detecting whether the electrocardiosignals are normal, is
Figure FDA0003805528090000011
Wherein
Figure FDA0003805528090000012
For the ith two-dimensional image,
Figure FDA0003805528090000013
The wavelet-transformed images in the second divided data set are mirrored or inverted to obtain a data set for judging the specific type of the electrocardiosignals
Figure FDA0003805528090000014
Wherein
Figure FDA0003805528090000015
For the j-th two-dimensional image,
Figure FDA0003805528090000016
e) Establishing an MTEF-NET network model, wherein the MTEF-NET network model sequentially comprises a convolutional layer, a CBAM-MBConv1 module, a first CBAM-MBConv6 module, a second CBAM-MBConv6 module, a third CBAM-MBConv6 module, a fourth CBAM-MBConv6 module, a fifth CBAM-MBConv6 module, a sixth CBAM-MBConv6 module, a seventh CBAM-MBConv6 module, an eighth CBAM-MBConv6 module, a ninth CBAM-MBConv6 module, a tenth CBAM-MBConv6 module, an eleventh CBAM-MBConv6 module, a twelfth CBAM-MBConv6 module, a thirteenth CBAM-MBConv6 module, a fourteenth CBAM-MBConv6 module, a fifteenth CBAM-MBConv6 module and a global pooling layer;
f) Each two-dimensional image of the data set Y_1 is input into the MTEF-NET network model and output to obtain a feature map set
Figure FDA0003805528090000021
Wherein
Figure FDA0003805528090000022
For the (i) th two-dimensional image,
Figure FDA0003805528090000023
two-dimensional image
Figure FDA0003805528090000024
Has the dimension of
Figure FDA0003805528090000025
g) Each two-dimensional image of the data set Y_2 is input into the MTEF-NET network model and output to obtain a feature map set
Figure FDA0003805528090000026
Wherein
Figure FDA0003805528090000027
For the (j) th two-dimensional image,
Figure FDA0003805528090000028
two-dimensional image
Figure FDA0003805528090000029
Has a dimension of
Figure FDA00038055280900000210
h) Feature map set
Figure FDA00038055280900000211
Is input into the global pooling layer of the MTEF-NET network model, and output to obtain the electrocardiosignal classification results
Figure FDA00038055280900000212
Where y_ai is the ith result,
Figure FDA00038055280900000213
feature map set
Figure FDA00038055280900000214
Is input into the global pooling layer of the MTEF-NET network model, and output to obtain the electrocardiosignal classification results
Figure FDA00038055280900000215
Where y_bj is the jth result,
Figure FDA00038055280900000216
2. the multi-task MTEF-NET-based electrocardiogram classification method according to claim 1, characterized in that: removing the original electrocardiosignal X = { X) by using two median filters in step b) 1 ,x 2 ,....,x i ,...,x n The noise in the device, a clean electrocardiosignal U = { U =, = 1 ,u 2 ,....,u i ,...,u n A first median filter of 300ms width and a second median filter of 600ms width.
3. The multi-task MTEF-NET-based electrocardiogram classification method according to claim 1, characterized in that: in step c) by the formula
Figure FDA0003805528090000031
The ith two-dimensional image y_i is obtained through calculation, where a is the transformation scale, b is the translation factor, and φ(·) is the mother wavelet.
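The formula of claim 3 (hidden in the image placeholder) is presumably the standard continuous wavelet transform, y(a, b) = (1/√a) Σ_t x(t) φ((t − b)/a), which turns a 1-D signal into a 2-D time-scale image. A NumPy sketch is given below; the Morlet mother wavelet, the scale range, and the sampling rate are assumptions, since the claim names only a, b, and φ:

```python
import numpy as np

def morlet(t, scale):
    # Real-valued Morlet mother wavelet phi((t - b)/a) / sqrt(a).
    u = t / scale
    return np.cos(5.0 * u) * np.exp(-0.5 * u * u) / np.sqrt(scale)

def cwt_image(signal, scales):
    # One row per scale a: the signal correlated with the scaled wavelet over
    # all translations b, producing a 2-D (scales x time) scalogram image.
    out = np.empty((len(scales), len(signal)))
    for k, a in enumerate(scales):
        half = int(5 * a)                      # truncate the wavelet support
        t = np.arange(-half, half + 1)
        out[k] = np.convolve(signal, morlet(t, a), mode="same")
    return out

fs = 360                                       # MIT-BIH sampling rate (assumed)
t = np.arange(fs) / fs                         # one second of signal
beat = np.sin(2 * np.pi * 8 * t)               # toy stand-in for one heartbeat
image = cwt_image(beat, np.arange(1, 33))      # 32 scales -> (32, 360) image
```

The resulting 2-D array is what would then be rendered as the H × W × C image y_i fed to the convolutional network.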
4. The multi-task MTEF-NET-based electrocardiogram classification method according to claim 1, wherein step f) comprises the steps of:
f-1) two-dimensional images
Figure FDA0003805528090000032
Is input into the MTEF-NET network model, passed into the convolution layer with convolution kernel size of 3 x 3, and output to obtain a dimension of
Figure FDA0003805528090000033
Characteristic diagram of
Figure FDA0003805528090000034
f-2) the CBAM-MBConv1 module sequentially comprises a first convolution layer with convolution kernel size of 1 x 1, a second convolution layer with convolution kernel size of 3 x 3 and expansion ratio of 1, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7, a sigmoid activation function layer, and a fourth convolution layer with convolution kernel size of 1 x 1; the feature map
Figure FDA0003805528090000035
Sequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension of
Figure FDA0003805528090000036
Characteristic diagram of
Figure FDA0003805528090000037
Will feature map
Figure FDA0003805528090000038
Sequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer and then outputting the data to obtain a dimension of
Figure FDA0003805528090000039
Characteristic diagram of
Figure FDA00038055280900000310
Will feature map
Figure FDA00038055280900000312
Sequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension of
Figure FDA00038055280900000313
Spatial attention feature map of
Figure FDA00038055280900000314
Feature map of spatial attention
Figure FDA00038055280900000315
Input to the fourth convolution layer and output with a resulting dimension of
Figure FDA00038055280900000316
Characteristic diagram of
Figure FDA00038055280900000317
Will feature map
Figure FDA00038055280900000318
And characteristic diagram
Figure FDA00038055280900000319
Add element by element to get dimension of
Figure FDA00038055280900000320
Characteristic diagram of
Figure FDA00038055280900000321
f-3) the first CBAM-MBConv6 module sequentially comprises a first convolution layer with convolution kernel size of 1 x 1, a second convolution layer with convolution kernel size of 3 x 3 and expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7, a sigmoid activation function layer and a fourth convolution layer with convolution kernel size of 1 x 1; the feature map
Figure FDA00038055280900000322
Sequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension of
Figure FDA00038055280900000323
Characteristic diagram of
Figure FDA00038055280900000324
Will feature map
Figure FDA00038055280900000325
Sequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer and then outputting the data to obtain a dimension of
Figure FDA0003805528090000041
Characteristic diagram of
Figure FDA0003805528090000042
Will feature map
Figure FDA0003805528090000043
Sequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension of
Figure FDA0003805528090000044
Spatial attention feature map of
Figure FDA0003805528090000045
Feature map of spatial attention
Figure FDA0003805528090000046
Input to the fourth convolution layer and output with a resulting dimension of
Figure FDA0003805528090000047
Characteristic diagram of
Figure FDA0003805528090000048
Will feature map
Figure FDA0003805528090000049
And characteristic diagram
Figure FDA00038055280900000410
Add element by element to get dimension of
Figure FDA00038055280900000411
Characteristic diagram of
Figure FDA00038055280900000412
f-4) the second CBAM-MBConv6 module sequentially comprises a first convolution layer with convolution kernel size of 1 x 1, a second convolution layer with convolution kernel size of 3 x 3 and expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7, a sigmoid activation function layer and a fourth convolution layer with convolution kernel size of 1 x 1; the feature map
Figure FDA00038055280900000413
Sequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension of
Figure FDA00038055280900000414
Characteristic diagram of
Figure FDA00038055280900000415
Will feature map
Figure FDA00038055280900000416
Sequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer and then outputting the data to obtain a dimension of
Figure FDA00038055280900000417
Characteristic diagram of
Figure FDA00038055280900000418
Will feature map
Figure FDA00038055280900000419
Sequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension of
Figure FDA00038055280900000420
Spatial attention feature map of
Figure FDA00038055280900000421
Feature map of spatial attention
Figure FDA00038055280900000422
Input to the fourth convolution layer and output with a resulting dimension of
Figure FDA00038055280900000423
Characteristic diagram of
Figure FDA00038055280900000424
Will feature map
Figure FDA00038055280900000425
And characteristic diagram
Figure FDA00038055280900000426
Add element by element to get dimension of
Figure FDA00038055280900000427
Characteristic diagram of
Figure FDA00038055280900000428
f-5) The third CBAM-MBConv6 module consists, in order, of a first convolution layer with a convolution kernel size of 1 x 1, a second convolution layer with a convolution kernel size of 5 x 5 and an expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with a convolution kernel size of 7 x 7, a sigmoid activation function layer, and a fourth convolution layer with a convolution kernel size of 1 x 1. The feature map [Figure FDA00038055280900000429] is input sequentially to the first and second convolution layers to obtain a feature map [Figure FDA00038055280900000431] of dimension [Figure FDA00038055280900000430]; the feature map [Figure FDA00038055280900000432] is input sequentially to the first global maximum pooling layer and the first global average pooling layer to obtain a feature map [Figure FDA0003805528090000052] of dimension [Figure FDA0003805528090000051]; the feature map [Figure FDA0003805528090000053] is input sequentially to the second global maximum pooling layer, the second global average pooling layer, the third convolution layer, and the sigmoid activation function layer to obtain a spatial-attention feature map [Figure FDA0003805528090000055] of dimension [Figure FDA0003805528090000054]; the spatial-attention feature map [Figure FDA0003805528090000056] is input to the fourth convolution layer to obtain a feature map [Figure FDA0003805528090000058] of dimension [Figure FDA0003805528090000057]; and the feature maps [Figure FDA0003805528090000059] and [Figure FDA00038055280900000510] are added element-wise to obtain a feature map [Figure FDA00038055280900000512] of dimension [Figure FDA00038055280900000511].
f-6) The fourth CBAM-MBConv6 module consists, in order, of a first convolution layer with a convolution kernel size of 1 x 1, a second convolution layer with a convolution kernel size of 5 x 5 and an expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with a convolution kernel size of 7 x 7, a sigmoid activation function layer, and a fourth convolution layer with a convolution kernel size of 1 x 1. The feature map [Figure FDA00038055280900000513] is input sequentially to the first and second convolution layers to obtain a feature map [Figure FDA00038055280900000515] of dimension [Figure FDA00038055280900000514]; the feature map [Figure FDA00038055280900000516] is input sequentially to the first global maximum pooling layer and the first global average pooling layer to obtain a feature map [Figure FDA00038055280900000518] of dimension [Figure FDA00038055280900000517]; the feature map [Figure FDA00038055280900000519] is input sequentially to the second global maximum pooling layer, the second global average pooling layer, the third convolution layer, and the sigmoid activation function layer to obtain a spatial-attention feature map [Figure FDA00038055280900000521] of dimension [Figure FDA00038055280900000520]; the spatial-attention feature map [Figure FDA00038055280900000522] is input to the fourth convolution layer to obtain a feature map [Figure FDA00038055280900000524] of dimension [Figure FDA00038055280900000523]; and the feature maps [Figure FDA00038055280900000525] and [Figure FDA00038055280900000526] are added element-wise to obtain a feature map [Figure FDA00038055280900000528] of dimension [Figure FDA00038055280900000527].
f-7) The fifth CBAM-MBConv6 module consists, in order, of a first convolution layer with a convolution kernel size of 1 x 1, a second convolution layer with a convolution kernel size of 3 x 3 and an expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with a convolution kernel size of 7 x 7, a sigmoid activation function layer, and a fourth convolution layer with a convolution kernel size of 1 x 1. The feature map [Figure FDA00038055280900000529] is input sequentially to the first and second convolution layers to obtain a feature map [Figure FDA00038055280900000531] of dimension [Figure FDA00038055280900000530]; the feature map [Figure FDA00038055280900000532] is input sequentially to the first global maximum pooling layer and the first global average pooling layer to obtain a feature map [Figure FDA0003805528090000062] of dimension [Figure FDA0003805528090000061]; the feature map [Figure FDA0003805528090000063] is input sequentially to the second global maximum pooling layer, the second global average pooling layer, the third convolution layer, and the sigmoid activation function layer to obtain a spatial-attention feature map [Figure FDA0003805528090000065] of dimension [Figure FDA0003805528090000064]; the spatial-attention feature map [Figure FDA0003805528090000066] is input to the fourth convolution layer to obtain a feature map [Figure FDA0003805528090000068] of dimension [Figure FDA0003805528090000067]; and the feature maps [Figure FDA0003805528090000069] and [Figure FDA00038055280900000610] are added element-wise to obtain a feature map [Figure FDA00038055280900000612] of dimension [Figure FDA00038055280900000611].
f-8) The sixth CBAM-MBConv6 module consists, in order, of a first convolution layer with a convolution kernel size of 1 x 1, a second convolution layer with a convolution kernel size of 3 x 3 and an expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with a convolution kernel size of 7 x 7, a sigmoid activation function layer, and a fourth convolution layer with a convolution kernel size of 1 x 1. The feature map [Figure FDA00038055280900000613] is input sequentially to the first and second convolution layers to obtain a feature map [Figure FDA00038055280900000615] of dimension [Figure FDA00038055280900000614]; the feature map [Figure FDA00038055280900000616] is input sequentially to the first global maximum pooling layer and the first global average pooling layer to obtain a feature map [Figure FDA00038055280900000618] of dimension [Figure FDA00038055280900000617]; the feature map [Figure FDA00038055280900000619] is input sequentially to the second global maximum pooling layer, the second global average pooling layer, the third convolution layer, and the sigmoid activation function layer to obtain a spatial-attention feature map [Figure FDA00038055280900000621] of dimension [Figure FDA00038055280900000620]; the spatial-attention feature map [Figure FDA00038055280900000622] is input to the fourth convolution layer to obtain a feature map [Figure FDA00038055280900000624] of dimension [Figure FDA00038055280900000623]; and the feature maps [Figure FDA00038055280900000625] and [Figure FDA00038055280900000626] are added element-wise to obtain a feature map [Figure FDA00038055280900000628] of dimension [Figure FDA00038055280900000627].
f-9) The seventh CBAM-MBConv6 module consists, in order, of a first convolution layer with a convolution kernel size of 1 x 1, a second convolution layer with a convolution kernel size of 3 x 3 and an expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with a convolution kernel size of 7 x 7, a sigmoid activation function layer, and a fourth convolution layer with a convolution kernel size of 1 x 1. The feature map [Figure FDA00038055280900000629] is input sequentially to the first and second convolution layers to obtain a feature map [Figure FDA00038055280900000631] of dimension [Figure FDA00038055280900000630]; the feature map [Figure FDA00038055280900000632] is input sequentially to the first global maximum pooling layer and the first global average pooling layer to obtain a feature map [Figure FDA0003805528090000072] of dimension [Figure FDA0003805528090000071]; the feature map [Figure FDA0003805528090000073] is input sequentially to the second global maximum pooling layer, the second global average pooling layer, the third convolution layer, and the sigmoid activation function layer to obtain a spatial-attention feature map [Figure FDA0003805528090000075] of dimension [Figure FDA0003805528090000074]; the spatial-attention feature map [Figure FDA0003805528090000076] is input to the fourth convolution layer to obtain a feature map [Figure FDA0003805528090000078] of dimension [Figure FDA0003805528090000077]; and the feature maps [Figure FDA0003805528090000079] and [Figure FDA00038055280900000710] are added element-wise to obtain a feature map [Figure FDA00038055280900000712] of dimension [Figure FDA00038055280900000711].
f-10) The eighth CBAM-MBConv6 module consists, in order, of a first convolution layer with a convolution kernel size of 1 x 1, a second convolution layer with a convolution kernel size of 5 x 5 and an expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with a convolution kernel size of 7 x 7, a sigmoid activation function layer, and a fourth convolution layer with a convolution kernel size of 1 x 1. The feature map [Figure FDA00038055280900000713] is input sequentially to the first and second convolution layers to obtain a feature map [Figure FDA00038055280900000715] of dimension [Figure FDA00038055280900000714]; the feature map [Figure FDA00038055280900000716] is input sequentially to the first global maximum pooling layer and the first global average pooling layer to obtain a feature map [Figure FDA00038055280900000718] of dimension [Figure FDA00038055280900000717]; the feature map [Figure FDA00038055280900000719] is input sequentially to the second global maximum pooling layer, the second global average pooling layer, the third convolution layer, and the sigmoid activation function layer to obtain a spatial-attention feature map [Figure FDA00038055280900000721] of dimension [Figure FDA00038055280900000720]; the spatial-attention feature map [Figure FDA00038055280900000722] is input to the fourth convolution layer to obtain a feature map [Figure FDA00038055280900000724] of dimension [Figure FDA00038055280900000723]; and the feature maps [Figure FDA00038055280900000725] and [Figure FDA00038055280900000726] are added element-wise to obtain a feature map [Figure FDA00038055280900000728] of dimension [Figure FDA00038055280900000727].
f-11) The ninth CBAM-MBConv6 module consists, in order, of a first convolution layer with a convolution kernel size of 1 x 1, a second convolution layer with a convolution kernel size of 5 x 5 and an expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with a convolution kernel size of 7 x 7, a sigmoid activation function layer, and a fourth convolution layer with a convolution kernel size of 1 x 1. The feature map [Figure FDA00038055280900000729] is input sequentially to the first and second convolution layers to obtain a feature map [Figure FDA00038055280900000731] of dimension [Figure FDA00038055280900000730]; the feature map [Figure FDA00038055280900000732] is input sequentially to the first global maximum pooling layer and the first global average pooling layer to obtain a feature map [Figure FDA0003805528090000082] of dimension [Figure FDA0003805528090000081]; the feature map [Figure FDA0003805528090000083] is input sequentially to the second global maximum pooling layer, the second global average pooling layer, the third convolution layer, and the sigmoid activation function layer to obtain a spatial-attention feature map [Figure FDA0003805528090000085] of dimension [Figure FDA0003805528090000084]; the spatial-attention feature map [Figure FDA0003805528090000086] is input to the fourth convolution layer to obtain a feature map [Figure FDA0003805528090000089] of dimension [Figure FDA0003805528090000088]; and the feature maps [Figure FDA00038055280900000810] and [Figure FDA00038055280900000811] are added element-wise to obtain a feature map [Figure FDA00038055280900000813] of dimension [Figure FDA00038055280900000812].
f-12) The tenth CBAM-MBConv6 module consists, in order, of a first convolution layer with a convolution kernel size of 1 x 1, a second convolution layer with a convolution kernel size of 5 x 5 and an expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with a convolution kernel size of 7 x 7, a sigmoid activation function layer, and a fourth convolution layer with a convolution kernel size of 1 x 1. The feature map [Figure FDA00038055280900000814] is input sequentially to the first and second convolution layers to obtain a feature map [Figure FDA00038055280900000816] of dimension [Figure FDA00038055280900000815]; the feature map [Figure FDA00038055280900000817] is input sequentially to the first global maximum pooling layer and the first global average pooling layer to obtain a feature map [Figure FDA00038055280900000819] of dimension [Figure FDA00038055280900000818]; the feature map [Figure FDA00038055280900000820] is input sequentially to the second global maximum pooling layer, the second global average pooling layer, the third convolution layer, and the sigmoid activation function layer to obtain a spatial-attention feature map [Figure FDA00038055280900000822] of dimension [Figure FDA00038055280900000821]; the spatial-attention feature map [Figure FDA00038055280900000823] is input to the fourth convolution layer to obtain a feature map [Figure FDA00038055280900000825] of dimension [Figure FDA00038055280900000824]; and the feature maps [Figure FDA00038055280900000826] and [Figure FDA00038055280900000827] are added element-wise to obtain a feature map [Figure FDA00038055280900000829] of dimension [Figure FDA00038055280900000828].
f-13) The eleventh CBAM-MBConv6 module consists, in order, of a first convolution layer with a convolution kernel size of 1 x 1, a second convolution layer with a convolution kernel size of 5 x 5 and an expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with a convolution kernel size of 7 x 7, a sigmoid activation function layer, and a fourth convolution layer with a convolution kernel size of 1 x 1. The feature map [Figure FDA00038055280900000830] is input sequentially to the first and second convolution layers to obtain a feature map [Figure FDA00038055280900000832] of dimension [Figure FDA00038055280900000831]; the feature map [Figure FDA00038055280900000833] is input sequentially to the first global maximum pooling layer and the first global average pooling layer to obtain a feature map [Figure FDA0003805528090000092] of dimension [Figure FDA0003805528090000091]; the feature map [Figure FDA0003805528090000093] is input sequentially to the second global maximum pooling layer, the second global average pooling layer, the third convolution layer, and the sigmoid activation function layer to obtain a spatial-attention feature map [Figure FDA0003805528090000095] of dimension [Figure FDA0003805528090000094]; the spatial-attention feature map [Figure FDA0003805528090000096] is input to the fourth convolution layer to obtain a feature map [Figure FDA0003805528090000098] of dimension [Figure FDA0003805528090000097]; and the feature maps [Figure FDA0003805528090000099] and [Figure FDA00038055280900000910] are added element-wise to obtain a feature map [Figure FDA00038055280900000912] of dimension [Figure FDA00038055280900000911].
f-14) The twelfth CBAM-MBConv6 module consists, in order, of a first convolution layer with a convolution kernel size of 1 x 1, a second convolution layer with a convolution kernel size of 5 x 5 and an expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with a convolution kernel size of 7 x 7, a sigmoid activation function layer, and a fourth convolution layer with a convolution kernel size of 1 x 1. The feature map [Figure FDA00038055280900000913] is input sequentially to the first and second convolution layers to obtain a feature map [Figure FDA00038055280900000915] of dimension [Figure FDA00038055280900000914]; the feature map [Figure FDA00038055280900000916] is input sequentially to the first global maximum pooling layer and the first global average pooling layer to obtain a feature map [Figure FDA00038055280900000918] of dimension [Figure FDA00038055280900000917]; the feature map [Figure FDA00038055280900000919] is input sequentially to the second global maximum pooling layer, the second global average pooling layer, the third convolution layer, and the sigmoid activation function layer to obtain a spatial-attention feature map [Figure FDA00038055280900000921] of dimension [Figure FDA00038055280900000920]; the spatial-attention feature map [Figure FDA00038055280900000922] is input to the fourth convolution layer to obtain a feature map [Figure FDA00038055280900000924] of dimension [Figure FDA00038055280900000923]; and the feature maps [Figure FDA00038055280900000925] and [Figure FDA00038055280900000926] are added element-wise to obtain a feature map [Figure FDA00038055280900000928] of dimension [Figure FDA00038055280900000927].
f-15) The thirteenth CBAM-MBConv6 module consists, in order, of a first convolution layer with a convolution kernel size of 1 x 1, a second convolution layer with a convolution kernel size of 5 x 5 and an expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with a convolution kernel size of 7 x 7, a sigmoid activation function layer, and a fourth convolution layer with a convolution kernel size of 1 x 1. The feature map [Figure FDA00038055280900000929] is input sequentially to the first and second convolution layers to obtain a feature map [Figure FDA00038055280900000931] of dimension [Figure FDA00038055280900000930]; the feature map [Figure FDA00038055280900000932] is input sequentially to the first global maximum pooling layer and the first global average pooling layer to obtain a feature map [Figure FDA0003805528090000102] of dimension [Figure FDA0003805528090000101]; the feature map [Figure FDA0003805528090000103] is input sequentially to the second global maximum pooling layer, the second global average pooling layer, the third convolution layer, and the sigmoid activation function layer to obtain a spatial-attention feature map [Figure FDA0003805528090000105] of dimension [Figure FDA0003805528090000104]; the spatial-attention feature map [Figure FDA0003805528090000106] is input to the fourth convolution layer to obtain a feature map [Figure FDA0003805528090000108] of dimension [Figure FDA0003805528090000107]; and the feature maps [Figure FDA0003805528090000109] and [Figure FDA00038055280900001010] are added element-wise to obtain a feature map [Figure FDA00038055280900001012] of dimension [Figure FDA00038055280900001011].
f-16) The fourteenth CBAM-MBConv6 module consists, in order, of a first convolution layer with a convolution kernel size of 1 x 1, a second convolution layer with a convolution kernel size of 5 x 5 and an expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with a convolution kernel size of 7 x 7, a sigmoid activation function layer, and a fourth convolution layer with a convolution kernel size of 1 x 1. The feature map [Figure FDA00038055280900001013] is input sequentially to the first and second convolution layers to obtain a feature map [Figure FDA00038055280900001015] of dimension [Figure FDA00038055280900001014]; the feature map [Figure FDA00038055280900001016] is input sequentially to the first global maximum pooling layer and the first global average pooling layer to obtain a feature map [Figure FDA00038055280900001018] of dimension [Figure FDA00038055280900001017]; the feature map [Figure FDA00038055280900001019] is input sequentially to the second global maximum pooling layer, the second global average pooling layer, the third convolution layer, and the sigmoid activation function layer to obtain a spatial-attention feature map [Figure FDA00038055280900001021] of dimension [Figure FDA00038055280900001020]; the spatial-attention feature map [Figure FDA00038055280900001022] is input to the fourth convolution layer to obtain a feature map [Figure FDA00038055280900001024] of dimension [Figure FDA00038055280900001023]; and the feature maps [Figure FDA00038055280900001025] and [Figure FDA00038055280900001026] are added element-wise to obtain a feature map [Figure FDA00038055280900001028] of dimension [Figure FDA00038055280900001027].
f-17) The fifteenth CBAM-MBConv6 module consists, in order, of a first convolution layer with a convolution kernel size of 1 x 1, a second convolution layer with a convolution kernel size of 3 x 3 and an expansion ratio of 6, a first global maximum pooling layer, a first global average pooling layer, a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with a convolution kernel size of 7 x 7, a sigmoid activation function layer, and a fourth convolution layer with a convolution kernel size of 1 x 1. The feature map [Figure FDA00038055280900001029] is input sequentially to the first and second convolution layers to obtain a feature map [Figure FDA00038055280900001031] of dimension [Figure FDA00038055280900001030]; the feature map [Figure FDA00038055280900001032] is input sequentially to the first global maximum pooling layer and the first global average pooling layer to obtain a feature map [Figure FDA0003805528090000112] of dimension [Figure FDA0003805528090000111]; the feature map [Figure FDA0003805528090000113] is input sequentially to the second global maximum pooling layer, the second global average pooling layer, the third convolution layer, and the sigmoid activation function layer to obtain a spatial-attention feature map [Figure FDA0003805528090000115] of dimension [Figure FDA0003805528090000114]; the spatial-attention feature map [Figure FDA0003805528090000116] is input to the fourth convolution layer to obtain a feature map [Figure FDA0003805528090000118] of dimension [Figure FDA0003805528090000117]; and the feature maps [Figure FDA0003805528090000119] and [Figure FDA00038055280900001110] are added element-wise to obtain a feature map [Figure FDA00038055280900001112] of dimension [Figure FDA00038055280900001111].
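Taken together, each CBAM-MBConv6 module above is an MBConv block (1 x 1 expansion convolution with ratio 6, a depthwise k x k convolution, and a 1 x 1 projection) with CBAM-style channel and spatial attention applied before an element-wise residual addition. The following compact NumPy sketch shows one such block under that reading; the parameter names, the channel-attention MLP and its reduction ratio, and the random weights are illustrative assumptions rather than details stated in the patent:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pw_conv(x, w):
    # 1x1 (pointwise) convolution: (C_in,H,W) with (C_out,C_in) -> (C_out,H,W)
    return np.einsum('oc,chw->ohw', w, x)

def dw_conv(x, w):
    # depthwise kxk convolution, stride 1, 'same' padding
    c, h, wd = x.shape
    k = w.shape[-1]
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    out = np.empty_like(x)
    for i in range(h):
        for j in range(wd):
            out[:, i, j] = np.sum(xp[:, i:i + k, j:j + k] * w, axis=(1, 2))
    return out

def channel_attention(x, w1, w2):
    # global max- and average-pooling over H,W, a shared MLP, sigmoid gate (C,)
    mx, av = x.max(axis=(1, 2)), x.mean(axis=(1, 2))
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)
    return sigmoid(mlp(mx) + mlp(av))

def spatial_attention(x, w7):
    # channel-wise max/mean pooling, 7x7 conv (padding 3), sigmoid -> (H,W) map
    pooled = np.stack([x.max(axis=0), x.mean(axis=0)])
    _, h, wd = x.shape
    xp = np.pad(pooled, ((0, 0), (3, 3), (3, 3)))
    logits = np.array([[np.sum(xp[:, i:i + 7, j:j + 7] * w7)
                        for j in range(wd)] for i in range(h)])
    return sigmoid(logits)

def cbam_mbconv6(x, p):
    h = pw_conv(x, p['expand'])    # 1x1 expand: C -> 6C (expansion ratio 6)
    h = dw_conv(h, p['dw'])        # depthwise kxk convolution
    h = h * channel_attention(h, p['mlp1'], p['mlp2'])[:, None, None]
    h = h * spatial_attention(h, p['w7'])
    h = pw_conv(h, p['project'])   # 1x1 project: 6C -> C
    return x + h                   # element-wise residual addition

rng = np.random.default_rng(1)
C = 4
p = {'expand':  0.1 * rng.standard_normal((6 * C, C)),
     'dw':      0.1 * rng.standard_normal((6 * C, 3, 3)),
     'mlp1':    0.1 * rng.standard_normal((C, 6 * C)),
     'mlp2':    0.1 * rng.standard_normal((6 * C, C)),
     'w7':      0.1 * rng.standard_normal((2, 7, 7)),
     'project': 0.1 * rng.standard_normal((C, 6 * C))}
x = rng.standard_normal((C, 8, 8))
y = cbam_mbconv6(x, p)
print(y.shape)  # (4, 8, 8): projection restores C, so the residual add is valid
```

The projection back to C channels is what keeps the input and output dimensions equal, which the claims rely on when adding the two feature maps element by element.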
5. The multi-task MTEF-NET-based electrocardiogram classification method according to claim 4, wherein step g) comprises the steps of:
g-1) The two-dimensional image [Figure FDA00038055280900001113] is input to the MTEF-NET network model and passed through a convolution layer with a convolution kernel size of 3 x 3, outputting a feature map [Figure FDA00038055280900001115] of dimension [Figure FDA00038055280900001114].
g-2) The feature map [Figure FDA00038055280900001116] is input sequentially to the first and second convolution layers to obtain a feature map [Figure FDA00038055280900001118] of dimension [Figure FDA00038055280900001117]; the feature map [Figure FDA00038055280900001119] is input sequentially to the first global maximum pooling layer and the first global average pooling layer to obtain a feature map [Figure FDA00038055280900001121] of dimension [Figure FDA00038055280900001120]; the feature map [Figure FDA00038055280900001122] is input sequentially to the second global maximum pooling layer, the second global average pooling layer, the third convolution layer with a convolution kernel size of 7 x 7, and the sigmoid activation function layer to obtain a spatial-attention feature map [Figure FDA00038055280900001124] of dimension [Figure FDA00038055280900001123]; the spatial-attention feature map [Figure FDA00038055280900001125] is input to the fourth convolution layer to obtain a feature map [Figure FDA00038055280900001127] of dimension [Figure FDA00038055280900001126]; and the feature maps [Figure FDA00038055280900001128] and [Figure FDA00038055280900001129] are added element-wise to obtain a feature map [Figure FDA00038055280900001131] of dimension [Figure FDA00038055280900001130].
g-3) feature map
Figure FDA00038055280900001132
Sequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension of
Figure FDA00038055280900001133
Characteristic diagram of
Figure FDA00038055280900001134
Will feature map
Figure FDA00038055280900001135
Sequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer and then outputting the data to obtain a dimensionality of
Figure FDA00038055280900001136
Characteristic diagram of
Figure FDA00038055280900001137
Will feature map
Figure FDA00038055280900001138
Sequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension of
Figure FDA0003805528090000121
Spatial attention feature map of
Figure FDA0003805528090000122
Feature map of spatial attention
Figure FDA0003805528090000123
Input to the fourth convolution layer and output with a dimensionality of
Figure FDA0003805528090000124
Characteristic diagram of
Figure FDA0003805528090000125
Will feature map
Figure FDA0003805528090000126
And characteristic diagram
Figure FDA0003805528090000127
Element by element addition to a dimension of
Figure FDA0003805528090000128
Characteristic diagram of
Figure FDA0003805528090000129
g-4) feature map
Figure FDA00038055280900001210
Sequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension of
Figure FDA00038055280900001211
Characteristic diagram of
Figure FDA00038055280900001212
Will feature map
Figure FDA00038055280900001213
Sequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer and then outputting the data to obtain a dimensionality of
Figure FDA00038055280900001214
Characteristic diagram of
Figure FDA00038055280900001215
Will feature map
Figure FDA00038055280900001216
Sequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension of
Figure FDA00038055280900001217
Spatial attention feature map of
Figure FDA00038055280900001218
Spatial attention feature map
Figure FDA00038055280900001219
Input to the fourth convolution layer and output with a dimensionality of
Figure FDA00038055280900001220
Characteristic diagram of
Figure FDA00038055280900001221
Will feature map
Figure FDA00038055280900001222
And characteristic diagram
Figure FDA00038055280900001223
Element by element addition to a dimension of
Figure FDA00038055280900001224
Characteristic diagram of
Figure FDA00038055280900001225
g-5) feature map
Figure FDA00038055280900001226
Sequentially inputting the first convolution layer and the second convolution layer and outputting the first convolution layer and the second convolution layer to obtain a dimension of
Figure FDA00038055280900001227
Characteristic diagram of
Figure FDA00038055280900001228
Will feature map
Figure FDA00038055280900001229
Sequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer and then outputting the data to obtain a dimension of
Figure FDA00038055280900001230
Characteristic diagram of
Figure FDA00038055280900001231
Will feature map
Figure FDA00038055280900001232
Sequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain the data with dimensionality of
Figure FDA00038055280900001233
Spatial attention feature map of
Figure FDA00038055280900001234
Feature map of spatial attention
Figure FDA00038055280900001235
Input to the fourth convolution layer and output with a dimensionality of
Figure FDA00038055280900001236
Characteristic diagram of
Figure FDA00038055280900001237
Will feature map
Figure FDA00038055280900001238
And characteristic diagram
Figure FDA00038055280900001239
Add element by element to get dimension of
Figure FDA00038055280900001240
Characteristic diagram of
Figure FDA00038055280900001241
g-6) feature map
Figure FDA00038055280900001242
Sequentially inputting into the first convolution layer and the second convolution layer and outputting to obtain a dimension of
Figure FDA00038055280900001243
Characteristic diagram of
Figure FDA00038055280900001244
Will feature map
Figure FDA00038055280900001245
Sequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer and then outputting the data to obtain a dimension of
Figure FDA0003805528090000131
Characteristic diagram of
Figure FDA0003805528090000132
Will feature map
Figure FDA0003805528090000133
Sequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension of
Figure FDA0003805528090000134
Spatial attention feature map of
Figure FDA0003805528090000135
Spatial attention feature map
Figure FDA0003805528090000136
Input to the fourth convolution layer and output with a resulting dimension of
Figure FDA0003805528090000137
Characteristic diagram of
Figure FDA0003805528090000138
Will feature map
Figure FDA0003805528090000139
And characteristic diagram
Figure FDA00038055280900001310
Add element by element to get dimension of
Figure FDA00038055280900001311
Characteristic diagram of
Figure FDA00038055280900001312
g-7) feature map
Figure FDA00038055280900001313
Sequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension of
Figure FDA00038055280900001314
Characteristic diagram of
Figure FDA00038055280900001315
Will feature map
Figure FDA00038055280900001316
Sequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer and then outputting the data to obtain a dimensionality of
Figure FDA00038055280900001317
Characteristic diagram of
Figure FDA00038055280900001318
Will feature map
Figure FDA00038055280900001319
Sequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension of
Figure FDA00038055280900001320
Spatial attention feature map y_{j,29}, inputting the spatial attention feature map y_{j,29} to the fourth convolution layer and outputting with a resulting dimension of
Figure FDA00038055280900001321
Characteristic diagram of
Figure FDA00038055280900001322
Will feature map
Figure FDA00038055280900001323
And characteristic diagram
Figure FDA00038055280900001324
Element by element addition to a dimension of
Figure FDA00038055280900001325
Characteristic diagram of
Figure FDA00038055280900001326
g-8) feature map
Figure FDA00038055280900001327
Sequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension of
Figure FDA00038055280900001328
Characteristic diagram of
Figure FDA00038055280900001329
Will feature map
Figure FDA00038055280900001330
Sequentially inputting the data into the first global maximum pooling layer and the first global average pooling layer and then outputting the data to obtain a dimension of
Figure FDA00038055280900001331
Characteristic diagram of
Figure FDA00038055280900001332
Will feature map
Figure FDA00038055280900001333
Sequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain the data with dimensionality of
Figure FDA00038055280900001334
Spatial attention feature map of
Figure FDA00038055280900001335
Spatial attention feature map
Figure FDA00038055280900001336
Input to the fourth convolution layer and output with a dimensionality of
Figure FDA00038055280900001337
Characteristic diagram of
Figure FDA00038055280900001338
Will feature map
Figure FDA00038055280900001339
And characteristic diagram
Figure FDA00038055280900001340
Element by element addition to a dimension of
Figure FDA00038055280900001341
Characteristic diagram of
Figure FDA00038055280900001342
g-9) feature map
Figure FDA0003805528090000141
Sequentially inputting the first convolution layer and the second convolution layer and outputting the first convolution layer and the second convolution layer to obtain a dimension of
Figure FDA0003805528090000142
Characteristic diagram of
Figure FDA0003805528090000143
Will feature map
Figure FDA0003805528090000144
Sequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer and then outputting the data to obtain a dimension of
Figure FDA0003805528090000145
Characteristic diagram of
Figure FDA0003805528090000146
Will feature map
Figure FDA0003805528090000147
Sequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension of
Figure FDA0003805528090000148
Spatial attention feature map of
Figure FDA0003805528090000149
Spatial attention feature map
Figure FDA00038055280900001410
Input to the fourth convolution layer and output with a resulting dimension of
Figure FDA00038055280900001411
Characteristic diagram of
Figure FDA00038055280900001412
Will feature map
Figure FDA00038055280900001413
And characteristic diagram
Figure FDA00038055280900001414
Add element by element to get dimension of
Figure FDA00038055280900001415
Characteristic diagram of
Figure FDA00038055280900001416
g-10) will feature map
Figure FDA00038055280900001417
Sequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension of
Figure FDA00038055280900001418
Characteristic diagram of
Figure FDA00038055280900001419
Will feature map
Figure FDA00038055280900001420
Sequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer and then outputting the data to obtain a dimension of
Figure FDA00038055280900001421
Characteristic diagram of
Figure FDA00038055280900001422
Will feature map
Figure FDA00038055280900001423
Sequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension of
Figure FDA00038055280900001424
Spatial attention feature map of
Figure FDA00038055280900001425
Feature map of spatial attention
Figure FDA00038055280900001426
Input to the fourth convolution layer and output with a resulting dimension of
Figure FDA00038055280900001427
Characteristic diagram of
Figure FDA00038055280900001428
Will feature map
Figure FDA00038055280900001429
And characteristic diagram
Figure FDA00038055280900001430
Add element by element to get dimension of
Figure FDA00038055280900001431
Characteristic diagram of
Figure FDA00038055280900001432
g-11) mapping the characteristics
Figure FDA00038055280900001433
Sequentially inputting the first convolution layer and the second convolution layer and outputting the first convolution layer and the second convolution layer to obtain a dimension of
Figure FDA00038055280900001434
Characteristic diagram of
Figure FDA00038055280900001435
Will feature map
Figure FDA00038055280900001436
Sequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer and then outputting the data to obtain a dimensionality of
Figure FDA00038055280900001437
Characteristic diagram of
Figure FDA00038055280900001438
Will feature map
Figure FDA00038055280900001439
Sequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain the data with dimensionality of
Figure FDA00038055280900001440
Spatial attention feature map of
Figure FDA00038055280900001441
Feature map of spatial attention
Figure FDA00038055280900001442
Input to the fourth convolution layer and output with a resulting dimension of
Figure FDA0003805528090000151
Characteristic diagram of
Figure FDA0003805528090000152
Will feature map
Figure FDA0003805528090000153
And characteristic diagram
Figure FDA0003805528090000154
Add element by element to get dimension of
Figure FDA0003805528090000155
Characteristic diagram of
Figure FDA0003805528090000156
g-12) feature map
Figure FDA0003805528090000157
Sequentially inputting the first convolution layer and the second convolution layer and outputting the first convolution layer and the second convolution layer to obtain a dimension of
Figure FDA0003805528090000158
Characteristic diagram of
Figure FDA0003805528090000159
Will feature map
Figure FDA00038055280900001510
Sequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer and then outputting the data to obtain a dimension of
Figure FDA00038055280900001511
Characteristic diagram of
Figure FDA00038055280900001512
Will feature map
Figure FDA00038055280900001513
Sequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension of
Figure FDA00038055280900001514
Spatial attention feature map of
Figure FDA00038055280900001515
Spatial attention feature map
Figure FDA00038055280900001516
Input to the fourth convolution layer and output with a resulting dimension of
Figure FDA00038055280900001517
Characteristic diagram of
Figure FDA00038055280900001518
Will feature map
Figure FDA00038055280900001519
And characteristic diagram
Figure FDA00038055280900001520
Element by element addition to a dimension of
Figure FDA00038055280900001521
Characteristic diagram of
Figure FDA00038055280900001522
g-13) mapping the characteristics
Figure FDA00038055280900001523
Sequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension of
Figure FDA00038055280900001524
Characteristic diagram of
Figure FDA00038055280900001525
Will feature map
Figure FDA00038055280900001526
Sequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer and then outputting the data to obtain a dimension of
Figure FDA00038055280900001527
Characteristic diagram of
Figure FDA00038055280900001528
Will feature map
Figure FDA00038055280900001529
Sequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension of
Figure FDA00038055280900001530
Spatial attention feature map of
Figure FDA00038055280900001531
Spatial attention feature map
Figure FDA00038055280900001532
Input to the fourth convolution layer and output with a resulting dimension of
Figure FDA00038055280900001533
Characteristic diagram of
Figure FDA00038055280900001534
Will feature map
Figure FDA00038055280900001535
And characteristic diagram
Figure FDA00038055280900001536
Add element by element to get dimension of
Figure FDA00038055280900001537
Characteristic diagram of
Figure FDA00038055280900001538
g-14) mapping the characteristics
Figure FDA00038055280900001539
Sequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension of
Figure FDA00038055280900001540
Characteristic diagram of
Figure FDA00038055280900001541
Will feature map
Figure FDA00038055280900001542
Sequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer and then outputting the data to obtain a dimensionality of
Figure FDA00038055280900001543
Characteristic diagram of
Figure FDA00038055280900001544
Will feature map
Figure FDA00038055280900001545
Sequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain the data with dimensionality of
Figure FDA0003805528090000161
Spatial attention feature map of
Figure FDA0003805528090000162
Feature map of spatial attention
Figure FDA0003805528090000163
Input to the fourth convolution layer and output with a dimensionality of
Figure FDA0003805528090000164
Characteristic diagram of
Figure FDA0003805528090000165
Will feature map
Figure FDA0003805528090000166
And characteristic diagram
Figure FDA0003805528090000167
Element by element addition to a dimension of
Figure FDA0003805528090000168
Characteristic diagram of
Figure FDA0003805528090000169
g-15) feature map
Figure FDA00038055280900001610
Sequentially inputting the first convolution layer and the second convolution layer and outputting the first convolution layer and the second convolution layer to obtain a dimension of
Figure FDA00038055280900001611
Characteristic diagram of
Figure FDA00038055280900001612
Will feature map
Figure FDA00038055280900001613
Sequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer and then outputting the data to obtain a dimension of
Figure FDA00038055280900001614
Characteristic diagram of
Figure FDA00038055280900001615
Will feature map
Figure FDA00038055280900001616
Sequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension of
Figure FDA00038055280900001617
Spatial attention feature map of
Figure FDA00038055280900001618
Spatial attention feature map
Figure FDA00038055280900001619
Input to the fourth convolution layer and output with a dimensionality of
Figure FDA00038055280900001620
Characteristic diagram of
Figure FDA00038055280900001621
Will feature map
Figure FDA00038055280900001622
And characteristic diagram
Figure FDA00038055280900001623
Add element by element to get dimension of
Figure FDA00038055280900001624
Characteristic diagram of
Figure FDA00038055280900001625
g-16) feature map
Figure FDA00038055280900001626
Sequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension of
Figure FDA00038055280900001627
Characteristic diagram of
Figure FDA00038055280900001628
Will feature map
Figure FDA00038055280900001629
Sequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer and then outputting the data to obtain a dimensionality of
Figure FDA00038055280900001630
Characteristic diagram of
Figure FDA00038055280900001631
Will feature map
Figure FDA00038055280900001632
Sequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension of
Figure FDA00038055280900001633
Spatial attention feature map of
Figure FDA00038055280900001634
Spatial attention feature map
Figure FDA00038055280900001635
Input to the fourth convolution layer and output with a resulting dimension of
Figure FDA00038055280900001636
Characteristic diagram of
Figure FDA00038055280900001637
Will feature map
Figure FDA00038055280900001638
And characteristic diagram
Figure FDA00038055280900001639
Add element by element to get dimension of
Figure FDA00038055280900001640
Characteristic diagram of
Figure FDA00038055280900001641
g-17) feature map
Figure FDA00038055280900001642
Sequentially input into the first convolution layer and the second convolution layer and output to obtain the dimension of
Figure FDA00038055280900001643
Characteristic diagram of
Figure FDA00038055280900001644
Will feature map
Figure FDA00038055280900001645
Sequentially inputting the data into a first global maximum pooling layer and a first global average pooling layer and then outputting the data to obtain a dimensionality of
Figure FDA0003805528090000171
Characteristic diagram of
Figure FDA0003805528090000172
Will feature map
Figure FDA0003805528090000173
Sequentially inputting the data into a second global maximum pooling layer, a second global average pooling layer, a third convolution layer with convolution kernel size of 7 x 7 and a sigmoid activation function layer, and outputting the data to obtain a dimension of
Figure FDA0003805528090000174
Spatial attention feature map of
Figure FDA0003805528090000175
Feature map of spatial attention
Figure FDA0003805528090000176
Input to the fourth convolution layer and output with a resulting dimension of
Figure FDA0003805528090000177
Characteristic diagram of
Figure FDA0003805528090000178
Will feature map
Figure FDA0003805528090000179
And characteristic diagram
Figure FDA00038055280900001710
Element by element addition to a dimension of
Figure FDA00038055280900001711
Characteristic diagram of
Figure FDA00038055280900001712
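Steps g-2) through g-17) apply the same attention block sixteen times in sequence to the stem output of step g-1), so the forward pass reduces to a loop. A sketch, with `blocks` standing in for sixteen identical block callables (a hypothetical name; the patent does not name this function):

```python
def mtef_forward(x, blocks):
    """Pass the stem feature map x through the sixteen identical
    attention blocks of steps g-2) .. g-17), in order."""
    for block in blocks:
        x = block(x)
    return x
```

Applying sixteen copies of a trivial `+1` block to 0 returns 16, which confirms the sequential composition.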
6. The multi-task MTEF-NET-based electrocardiogram classification method according to claim 1, characterized in that:
in step h) the ith result y_{ai} is calculated by the formula
Figure FDA00038055280900001713
where H_{ai} is the height of the ith result y_{ai}, W_{ai} is the width of the ith result y_{ai}, and C_{ai} is the number of channels of the ith result y_{ai}; the jth result y_{bj} is calculated by the formula
Figure FDA00038055280900001714
where H_{bj} is the height of the jth result y_{bj}, W_{bj} is the width of the jth result y_{bj}, and C_{bj} is the number of channels of the jth result y_{bj}.
7. The multi-task MTEF-NET-based electrocardiographic classification method according to claim 1, further comprising the following steps after step h):
i) By the formula
Figure FDA0003805528090000181
Calculating to obtain a loss function L, wherein λ and β are both weights, λ + β = 1, and softmax(·) is the softmax activation function;
j) Updating the parameters of the MTEF-NET network model in step e) with the loss function L through the Adam optimization function, and saving the trained model and parameters after 100 training iterations.
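The loss of step i) and the Adam update of step j) can be sketched as follows. The exact form of L is only available as a figure image, so the weighted combination below (two per-task loss terms mixed with λ + β = 1) and the textbook Adam step are assumptions, not the patent's verbatim formulas:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def combined_loss(loss_a, loss_b, lam, beta):
    """Weighted sum of the two task losses; the claim requires lam + beta = 1."""
    assert abs(lam + beta - 1.0) < 1e-9
    return lam * loss_a + beta * loss_b

def adam_step(param, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter; returns (new_param, m, v)."""
    m = b1 * m + (1 - b1) * grad            # first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2       # second-moment estimate
    m_hat = m / (1 - b1 ** t)               # bias correction
    v_hat = v / (1 - b2 ** t)
    return param - lr * m_hat / (np.sqrt(v_hat) + eps), m, v
```

In a training loop one would compute the two task losses, combine them with `combined_loss`, and apply `adam_step` to every parameter for 100 iterations before saving the model.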
CN202210996094.7A 2022-08-19 2022-08-19 Electrocardiogram classification method based on multitasking MTEF-NET Active CN115358270B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210996094.7A CN115358270B (en) 2022-08-19 2022-08-19 Electrocardiogram classification method based on multitasking MTEF-NET


Publications (2)

Publication Number Publication Date
CN115358270A true CN115358270A (en) 2022-11-18
CN115358270B CN115358270B (en) 2023-06-20

Family

ID=84002094

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210996094.7A Active CN115358270B (en) 2022-08-19 2022-08-19 Electrocardiogram classification method based on multitasking MTEF-NET

Country Status (1)

Country Link
CN (1) CN115358270B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110638430A (en) * 2019-10-23 2020-01-03 苏州大学 Multi-task cascade neural network ECG signal arrhythmia disease classification model and method
CN112674780A (en) * 2020-12-23 2021-04-20 山东省人工智能研究院 Automatic atrial fibrillation signal detection method in electrocardiogram abnormal signals
WO2022073452A1 (en) * 2020-10-07 2022-04-14 武汉大学 Hyperspectral remote sensing image classification method based on self-attention context network
CN114781445A (en) * 2022-04-11 2022-07-22 山东省人工智能研究院 Deep neural network electrocardiosignal noise reduction method based on interpretability


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
MURAT CANAYAZ: "C + EffxNet: A novel hybrid approach for COVID-19 diagnosis on CT images based on CBAM and EfficientNet", 《ELSEVIER》, pages 1 - 10 *
XIAOYUN XIE ET AL: "A multi-stage denoising framework for ambulatory ECG signal based on domain knowledge and motion artifact detection", 《ELSEVIER》, pages 103 - 116 *
ZHENZHEN MAO ET AL: "Multi-views reinforced LSTM for video-based action recognition", 《JOURNAL OF ELECTRONIC IMAGING》, vol. 30, no. 5, pages 053021 - 1 *

Also Published As

Publication number Publication date
CN115358270B (en) 2023-06-20


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant