CN117281528A - Multi-lead pulse signal intelligent identification method and system based on deep learning - Google Patents


Info

Publication number
CN117281528A
Authority
CN
China
Prior art keywords
lead
signal
network
pulse signal
deep learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311584793.1A
Other languages
Chinese (zh)
Inventor
孙启玉
高亚欣
王停停
刘肖
刘晓芳
刘玉峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Fengshi Information Technology Co ltd
Original Assignee
Shandong Fengshi Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Fengshi Information Technology Co ltd filed Critical Shandong Fengshi Information Technology Co ltd
Priority to CN202311584793.1A priority Critical patent/CN117281528A/en
Publication of CN117281528A publication Critical patent/CN117281528A/en
Pending legal-status Critical Current


Classifications

    • A61B 5/346 — Analysis of electrocardiograms
    • A61B 5/7264 — Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • G06F 18/15 — Statistical pre-processing, e.g. techniques for normalisation or restoring missing data
    • G06F 18/2131 — Feature extraction based on a transform domain processing, e.g. wavelet transform
    • G06F 18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/217 — Validation; Performance evaluation; Active pattern learning techniques
    • G06F 18/2415 — Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F 18/253 — Fusion techniques of extracted features
    • G06N 3/0442 — Recurrent networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • G06N 3/045 — Combinations of networks
    • G06N 3/0464 — Convolutional networks [CNN, ConvNet]
    • G06N 3/047 — Probabilistic or stochastic networks
    • G06N 3/048 — Activation functions
    • G06N 3/084 — Backpropagation, e.g. using gradient descent
    • G06F 2123/02 — Data types in the time domain, e.g. time-series data


Abstract

The invention relates to a multi-lead pulse signal intelligent recognition method and system based on deep learning, and belongs to the technical field of pulse signal recognition. A multi-lead sliced pulse signal is input into a multi-lead pulse signal feature extraction network branch consisting of a convolutional neural network, a long short-term memory network and an attention learning module to obtain signal data features; a time-frequency diagram obtained by wavelet transformation of the single-lead pulse signal is input into a time-frequency diagram feature extraction network branch consisting of a feature extraction network and a feature enhancement module to obtain image features. The different-modality signal features learned by the two branches are fused and classified to obtain a classification prediction result; the network model is trained and optimized to save the optimal parameters, and the data to be tested are input into the optimal model to obtain the recognition result. By fusing the features of the two branches, a more accurate and comprehensive feature representation is obtained; the fusion process fully exploits the characteristics of the different modal signals, making the final features more representative.

Description

Multi-lead pulse signal intelligent identification method and system based on deep learning
Technical Field
The invention relates to a multi-lead pulse signal intelligent recognition method and system based on deep learning, and belongs to the technical field of pulse signal recognition.
Background
With the rapid development of deep learning technology, artificial intelligence has been widely used in the field of medical diagnosis. Especially in the field of pulse diagnosis of traditional Chinese medicine, along with the continuous progress and popularization of pulse diagnosis instruments of traditional Chinese medicine, more and more pulse wave signal data can be acquired. These pulse wave signals are not only tools for medical diagnostics, but are also valuable sources of information. The signals contain physiological information about important organs such as heart, liver, lung and the like of the patient, and the information has important significance for intelligent medical diagnosis. Traditional Chinese medicine pulse-taking methods generally rely on experience and observation of doctors, but these methods have subjectivity and certain limitations. The intervention of modern technology enables us to analyze and understand pulse wave signals more objectively, thereby improving the accuracy and reliability of pulse diagnosis in traditional Chinese medicine.
Existing deep-learning-based multi-lead pulse detection methods can automatically extract features at each moment of a waveform, but their accuracy still needs to be improved in terms of extracting key feature information. For example, patent CN 109063552 A discloses a multi-lead electrocardiosignal classification method comprising the following steps: S1, processing multi-lead electrocardiosignals through a multi-branch convolutional residual neural network to extract the signal features of each multi-lead electrocardiosignal; S2, fusing the extracted signal features of the multi-lead electrocardiosignals; S3, classifying the fused multi-lead electrocardiosignals based on a Softmax function. However, this method only adopts a simple neural network to extract signal features; it lacks feature learning on data of different modalities and lacks fusion of different types of information. Patent CN 113951893 A discloses a multi-lead electrocardiosignal feature point extraction method combining deep learning and electrophysiological knowledge: first, a multi-lead electrocardiosignal acquisition module collects 12-lead electrocardiosignals; second, a feature point extraction module extracts the morphological features of each heartbeat and the strong time-sequence correlation features of the sampling moments through a convolutional neural network (CNN) and a long short-term memory network (LSTM) based on a U-net framework, strengthens the finer per-moment waveform features by fusing low-level and high-level information, and extracts feature points by a fixed-threshold method; finally, a feature point position correction module further improves feature point extraction accuracy through a multi-lead interaction method based on electrophysiological knowledge and a dynamic threshold adaptive adjustment strategy. This method extracts features of different lead signals but lacks learning of the interaction between different leads.
Therefore, how to design a multi-lead pulse signal intelligent recognition method based on deep learning is still a challenging problem.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a multi-lead pulse signal intelligent identification method based on deep learning, which obtains more accurate and comprehensive characteristic representation by fusing the characteristics of double branches, thereby ensuring comprehensiveness and accuracy.
The technical scheme adopted by the invention is as follows:
the intelligent multi-lead pulse signal identification method based on deep learning comprises the following steps:
s1, acquiring pulse signals of N leadsX N Preprocessing each original signal data, filtering and segmenting the signal data to obtain a multi-lead slice pulse signalM N
S2, slicing the multi-lead pulse signalsM N Inputting multi-lead pulse signal characteristic extraction network branches consisting of a convolutional neural network, a long and short time memory network (LSTM) and an attention learning module to extract comprehensive information to obtain signal data characteristics F3:
firstly, feature extraction is carried out through a convolutional neural network, local features and modes of pulse signals are captured, structural information of the pulse signals is learned, then, time correlation and sequence features of single-lead pulse signals are learned through a long-short-term memory network, and finally, correlation among multiple-lead pulse signals is learned through a attention mechanical learning module;
s3, inputting a time-frequency diagram obtained by wavelet transformation and conversion of the single-lead pulse signal into a time-frequency diagram feature extraction network and a feature enhancement module, and extracting more explanatory key information by a branch of the time-frequency diagram feature extraction network to obtain image features FP3:
firstly, extracting time domain and frequency domain features through a feature extraction network, and then removing noise and enhancing edges and features of signals through a feature enhancement module;
s4, fusing the signal characteristics of the two branches in different modes, namely fusing the signal data characteristic F3 and the image characteristic FP3 to obtain a final characteristic FD, and classifying to obtain a classification prediction result;
s5, training and optimizing the network model to store optimal parameters, and inputting data to be tested into the optimal model to obtain a recognition result.
In the method, the number N of lead pulse signals in step S1 is selected from 9 to 45; preferably, N is 27.
The attention learning module described in step S2 first computes a correlation score between each pair of leads by additive attention, namely:

s(q, k) = ReLU(W_q·q + W_k·k)

wherein W_q and W_k are learned weight matrices, ReLU is the rectified linear unit activation function, q is the query vector, and k is the key vector;
the correlation scores are then normalized by a softmax function to obtain the attention weight γ_i of each lead, namely:

γ_i = exp(s(q, k_i)) / Σ_{j=1}^{N} exp(s(q, k_j))

where N represents the number of input elements, and k_i represents the i-th input feature vector;
finally, the attention weights are multiplied with the features F2 output by the long short-term memory network, and the features of each channel are weighted-averaged to obtain the global multi-lead representation:

F3 = Σ_{i=1}^{N} γ_i·v_i

wherein v_i represents the feature value of the i-th input element, and γ_i represents the attention weight of each lead.
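As a minimal NumPy sketch of this additive-attention weighting (the dimensions, the per-lead scalar score obtained by summing the activated vector, and all variable names are illustrative assumptions, not the patent's exact configuration):

```python
import numpy as np

def additive_attention(q, K, V, W_q, W_k):
    """Score each lead with ReLU(W_q q + W_k k_i), softmax-normalise the
    scores into weights gamma_i, then weight-average the per-lead
    features v_i into one global representation."""
    # Correlation score per lead (reduced to a scalar by summation; assumption)
    scores = np.array([np.sum(np.maximum(W_q @ q + W_k @ k, 0.0)) for k in K])
    # Softmax normalisation -> attention weights gamma_i
    e = np.exp(scores - scores.max())
    gamma = e / e.sum()
    # Weighted average of the lead features -> global multi-lead feature
    return gamma, (gamma[:, None] * V).sum(axis=0)

rng = np.random.default_rng(0)
d = 4                                # hypothetical feature dimension
q = rng.normal(size=d)               # query vector
K = rng.normal(size=(3, d))          # key vectors for 3 leads
V = rng.normal(size=(3, d))          # per-lead features (the F2 values)
W_q = rng.normal(size=(d, d))
W_k = rng.normal(size=(d, d))
gamma, f3 = additive_attention(q, K, V, W_q, W_k)
```

The weights gamma sum to one, so the output stays on the same scale as the per-lead features.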
The feature extraction network in step S3 comprises three convolution layers: the first consists of a convolution, batch normalization, a ReLU activation function and a max-pooling (Max-Pool) operation; the second and third each consist of a convolution, batch normalization and a ReLU activation function. Through this series of convolution operations, normalizations and activation functions, signal features at different levels and scales, from local edges and textures to global patterns and structures, are learned, yielding a more abstract, higher-level feature representation FP1.
In step S3, the feature enhancement module adopts a SENet (Squeeze-and-Excitation Networks) network and a soft-threshold operation. The SENet network automatically learns the importance weight of each feature channel, adaptively adjusts the contribution of each channel and strengthens the perception of key features; the soft-threshold operation decides whether to retain or suppress each data point by comparing it with the corresponding threshold, and can be expressed as:

S(x) = sign(x)·max(|x| - λ, 0)

wherein x is the original signal, λ is the soft-threshold parameter, and S(x) is the processed signal.
In step S4, the final feature FD is obtained by effectively fusing the two features with a concat operation; the fused feature is input into a fully connected layer and a softmax function layer, and the classification prediction result is output.
In step S5, the multi-lead sliced pulse signals M_N are divided into a training set and a test set at a ratio of 4:1, with the training set also serving as the validation set. The training stage uses a cross-entropy loss function with an Adam optimizer for 200 iteration rounds, a StepLR learning-rate adjustment strategy that updates the learning rate every 40 rounds, and validation on the validation set every 2 rounds; the network model weights are saved with the best validation result as the reference.
Another object of the present invention is to provide a multi-lead pulse signal intelligent recognition system based on deep learning, which includes a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the multi-lead pulse signal intelligent recognition method based on deep learning as described above when executing the program.
The beneficial effects of the invention are as follows:
the method of the invention carries out feature extraction and intelligent identification on pulse signals of different modes in a parallel double-branch mode, and the multi-lead pulse signal feature extraction network uses a convolutional neural network, a short-time memory network and an attention mechanics learning module to extract structural information of the pulse signals, which can capture complex structures, time correlation and correlation among different lead pulse signals. This branch provides rich information for the whole method, which helps to understand the characteristics of the pulse signal more accurately. The time-frequency diagram feature extraction network utilizes a feature extraction module and a feature enhancement module, focuses on extracting local features of pulse signals in time and frequency, can learn and capture different features such as edges and textures in the pulse signals, enhances the features through soft threshold operation, improves the quality of the signals, and is favorable for extracting key information of the pulse signals from noise by the aid of the branches, so that the signals are more interpretable. By fusing the characteristics of the double branches, more accurate and comprehensive characteristic representation is obtained, and the characteristic of different mode signals is fully utilized in the fusion process, so that the final characteristic is more representative.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of a network model according to the present invention;
FIG. 3 is a graph comparing the performance of different numbers of leads according to the present invention.
Detailed Description
The invention will be further illustrated with reference to specific examples.
Example 1: the intelligent multi-lead pulse signal recognition method based on deep learning comprises the following steps (as shown in figure 1):
S1, acquiring pulse signals of N leads X_N, preprocessing each piece of raw signal data, and filtering and segmenting the signal data to obtain multi-lead sliced pulse signals M_N:
First, the N-lead original pulse signals X_N are acquired from a biosensor or medical device; the leads are usually placed on the patient's wrist to record the signals of the activity of different parts of the body. Since the original pulse signal may be affected by power-line interference and motion artifacts, signal preprocessing is required to reduce noise and enhance signal quality; low-pass filtering is applied to remove high-frequency noise. The pulse signal is usually continuous time-series data but often contains useless information, so the signal must be segmented into meaningful fragments: according to the position of the R peak, the processed signal is cut into multiple equal-length, overlapping time-window slices, yielding the multi-lead sliced pulse signals M_N, which have been preprocessed and have a low noise level.
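The filtering and slicing step can be sketched as follows (a hedged illustration: the sampling rate, cutoff, window length and the use of simple peak detection to stand in for R-peak location are all assumptions, not values from the patent):

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def preprocess_and_slice(x, fs=500, cutoff=40.0, win=256):
    """Low-pass filter a raw pulse signal and cut it into equal-length,
    possibly overlapping slices centred on detected beats."""
    # 4th-order Butterworth low-pass filter, zero-phase via filtfilt
    b, a = butter(4, cutoff / (fs / 2), btype="low")
    x = filtfilt(b, a, x)
    # Crude beat locations as a stand-in for R-peak detection
    peaks, _ = find_peaks(x, distance=fs // 2)
    slices = [x[p - win // 2: p + win // 2]
              for p in peaks if win // 2 <= p <= len(x) - win // 2]
    return np.stack(slices) if slices else np.empty((0, win))

# Synthetic 1.2 Hz pulse-like test signal with a little noise
fs = 500
t = np.arange(0, 10, 1 / fs)
sig = np.sin(2 * np.pi * 1.2 * t) \
    + 0.05 * np.random.default_rng(0).normal(size=t.size)
slices = preprocess_and_slice(sig, fs=fs)
```

Each row of `slices` is one fixed-length window, ready to be stacked across leads into M_N.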
S2, inputting the multi-lead sliced pulse signals M_N into the multi-lead pulse signal feature extraction network branch consisting of a convolutional neural network, a long short-term memory network (LSTM) and an attention learning module to extract comprehensive information, obtaining signal data features F3:
First, feature extraction is performed on the pulse signals M_N by the convolutional neural network, which captures the local features and patterns of the pulse signals, learns their structural information and better extracts the frequency-domain characteristics and local structures in the signals, producing a feature map F1. The feature map F1 is then fed into a long short-term memory network (LSTM) to process the temporal correlation and sequence characteristics of the pulse signals, learning the time dependence of each single-lead pulse signal and recognizing and analyzing the periodic and sequential patterns in the signals, producing a feature map F2. F2 then enters the attention learning module, which learns the correlation among the multi-lead pulse signals; this module enables a more comprehensive understanding of the correlation of the whole signal, a more accurate analysis of the information interaction and synergy between different leads, and the extraction of global features and relations, yielding the more comprehensive multi-lead signal data features F3.
The pulse signals M_N are represented as H×1×N, where H is the length of the time-sliced signal, 1 denotes the number of signal channels (a single-lead signal here), and N represents the time resolution of the signal, i.e. the number of time points. They are input into the convolutional neural network, which captures the local features and structural information of the signals and produces a feature map F1 of dimension H_cnn×C×N_cnn, where H_cnn represents the time step of the convolutional output, C the number of output channels of the convolutional layers, and N_cnn the time resolution of the CNN output. The feature map F1 is fed into a multi-layer unidirectional LSTM network for learning, which effectively captures and processes the long-term dependencies in the signal data and better learns its temporal dynamics, producing a feature map F2 of dimension H_lstm×L×N_lstm, where H_lstm represents the time step of the LSTM output, L the number of hidden states in the LSTM, and N_lstm the time resolution of the LSTM output. The feature map F2 is fed into the attention learning module, which mainly learns the correlation between different leads in order to better capture the important information of the multi-lead pulse signal. First, the correlation score between each pair of leads is computed through additive attention, namely:
s(q, k) = ReLU(W_q·q + W_k·k)

wherein W_q and W_k are learned weight matrices, ReLU is the rectified linear unit activation function, q is the query vector, and k is the key vector.
The correlation scores are then normalized by a softmax function to obtain the attention weight γ_i of each lead, namely:

γ_i = exp(s(q, k_i)) / Σ_{j=1}^{N} exp(s(q, k_j))

where N represents the number of input elements, and k_i represents the i-th input feature vector.
These weights represent the importance of each lead in the current context. Finally, the attention weights are multiplied with the feature F2 and the features of each channel are weighted-averaged to obtain the global multi-lead representation:

F3 = Σ_{i=1}^{N} γ_i·v_i

wherein v_i represents the feature value of the i-th input element, and γ_i represents the attention weight of each lead.
Finally, a comprehensive representation F3 of the multi-lead pulse signal is obtained, with dimension H_final×D×N_final, where H_final represents the final output time step (the final time dimension of the whole network), D represents the number of leads, i.e. the number of different signal channels, and N_final represents the time resolution of the final output. The attention mechanism makes the model automatically focus on important information in the input data while ignoring unimportant information, which helps improve the model's understanding and processing of the data.
The comprehensive representation F3 of the multi-lead pulse signal contains the frequency-domain characteristics of the signal (via the CNN), the time-series characteristics (via the LSTM) and the correlation among the leads (via the attention module). This representation allows a fuller understanding of the course of the pulse signal's variation, including the interactions between different leads as well as the dynamic changes in the time and frequency domains.
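The CNN, LSTM and attention stages of this branch can be sketched in PyTorch as follows. This is a minimal stand-in, not the patent's actual network: every layer size, kernel width and the last-step LSTM readout are hypothetical choices for illustration.

```python
import torch
import torch.nn as nn

class SignalBranch(nn.Module):
    """Sketch of the signal branch: per-lead 1-D CNN, LSTM over time,
    then attention pooling across leads into one global feature."""
    def __init__(self, hidden=32):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 16, 7, padding=3), nn.BatchNorm1d(16), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(16, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)   # additive-attention scorer

    def forward(self, x):                   # x: (batch, leads, time)
        b, n, t = x.shape
        h = self.cnn(x.reshape(b * n, 1, t))         # local patterns (F1)
        h, _ = self.lstm(h.transpose(1, 2))          # temporal features (F2)
        h = h[:, -1].reshape(b, n, -1)               # last step per lead
        gamma = torch.softmax(self.score(h), dim=1)  # lead weights
        return (gamma * h).sum(dim=1)                # global feature (F3)

torch.manual_seed(0)
x = torch.randn(2, 27, 64)   # 2 samples, 27 leads, 64 time points (toy sizes)
f3 = SignalBranch()(x)
```

The softmax over the lead axis plays the role of the additive attention above; swapping in the explicit query/key scorer would not change the shapes.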
S3, inputting the time-frequency diagram obtained by wavelet transformation of the single-lead pulse signal into the time-frequency diagram feature extraction network branch consisting of a feature extraction network and a feature enhancement module, extracting more interpretable key information to obtain image features FP3:
Data of different modalities are selected for feature learning: different modalities can provide different types of information, which increases the diversity of the features, and may contain complementary information, so that the shortcomings of one modality can be compensated by another. The time-frequency diagram obtained by wavelet transformation of the single-lead pulse signal is used as input; the wavelet transform decomposes the signal into sub-signals of different scales and frequencies so as to capture the local characteristics of the signal in time and frequency. The pulse signal X is converted into a time-frequency diagram P: using the Wavelet Toolbox in MATLAB, the signal is decomposed into components of different scales and frequencies, the time-frequency-domain coefficients are generated, and the time-frequency diagram P is computed, where the horizontal axis represents time, the vertical axis represents frequency, and the color or brightness of the image represents the intensity or energy distribution of the signal. The wavelet transform provides an efficient mapping between the time and frequency domains while taking both the time-domain and frequency-domain characteristics of the signal into account, thereby improving the performance of signal analysis and processing.
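The patent uses MATLAB's Wavelet Toolbox for this conversion; as a rough self-contained stand-in, a continuous wavelet transform with a Morlet wavelet can be hand-rolled in NumPy (the wavelet parameter w0, the scale range and the test tone are all assumptions for illustration):

```python
import numpy as np

def morlet_scalogram(x, scales, w0=6.0):
    """Minimal continuous wavelet transform: correlate the signal with
    scaled Morlet wavelets and return the magnitude map
    (rows = scales, columns = time), i.e. a time-frequency diagram."""
    n = len(x)
    out = np.empty((len(scales), n))
    t = np.arange(-n // 2, n // 2)
    for i, s in enumerate(scales):
        # Morlet mother wavelet stretched to scale s, L2-ish normalised
        psi = np.exp(1j * w0 * t / s) * np.exp(-(t / s) ** 2 / 2)
        psi /= np.sqrt(s)
        # Convolution with the reversed conjugate = correlation
        out[i] = np.abs(np.convolve(x, np.conj(psi)[::-1], mode="same"))
    return out

fs = 250
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 10 * t)          # 10 Hz test tone
P = morlet_scalogram(x, scales=np.arange(2, 32))
```

For a pure 10 Hz tone the energy in `P` concentrates near the scale whose centre frequency matches 10 Hz (around scale 24 with w0 = 6 at fs = 250), which is exactly the localisation the time-frequency branch exploits.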
The time-frequency diagram P is then input into the feature extraction network for feature extraction (see figure 2). The network consists of three convolution layers: the first comprises a convolution, batch normalization, a ReLU activation function and a max-pooling (Max-Pool) operation; the second and third each comprise a convolution, batch normalization and a ReLU activation function. Through this series of convolution operations, normalizations and activation functions, signal features at different levels and scales, from local edges and textures to global patterns and structures, are learned, yielding a more abstract, higher-level feature representation FP1 of dimension C×H_cnn×W_cnn, where C denotes the number of channels, H_cnn the height of the convolutional output and W_cnn its width; the convolution kernel size is k = C, equal to the channel number C of the input feature map. The feature FP1 then passes through a global average pooling (GAP) operation, which helps capture global context information and reduces the features to a fixed-size representation, and an absolute-value operation, which helps remove noise or abrupt oscillations in the signal.
The resulting features are input into the feature enhancement module, which adopts a SENet network and a soft-threshold operation. The SENet network automatically learns the importance weight of each feature channel, adaptively adjusts the contribution of each channel and strengthens the perception of key features. The soft-threshold operation decides whether to retain or suppress each data point (pixel, sampling point, etc.) by comparing it with the corresponding threshold: data points larger than the threshold are retained while smaller ones are suppressed, which helps remove noise and enhance the edges and features of the signal. It can be expressed as:

S(x) = sign(x)·max(|x| - λ, 0)

wherein x is the original signal, λ is the soft-threshold parameter, and S(x) is the processed signal.
The feature enhancement module uses the channel weight information learned by SENet to automatically generate a set of adaptive thresholds. These thresholds are dynamically adjusted according to the characteristics and content of the signal, so as to better handle noise in the signal, enhance key features and suppress irrelevant information. The final output is the more accurate feature FP3, of dimension C×D×1, where C is the number of channels and D represents the dimension of the feature. The feature enhancement module markedly improves the quality of the signal data and the strength of the relevant features.
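A minimal NumPy sketch of this enhancement step follows: the soft threshold is the standard shrinkage formula from the text, while the SE-style gating (squeeze by averaging, two small dense layers, sigmoid weights feeding per-channel thresholds) uses hypothetical weight shapes of my own choosing, not the patent's:

```python
import numpy as np

def soft_threshold(x, lam):
    """Soft-thresholding S(x) = sign(x) * max(|x| - lam, 0):
    values with |x| <= lam are suppressed to exactly zero."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def se_soft_threshold(feat, w1, w2):
    """SE-style channel gating driving adaptive thresholds:
    squeeze (channel-wise average magnitude) -> ReLU dense -> sigmoid
    weights, used to set one soft threshold per channel."""
    z = np.abs(feat).mean(axis=1)               # squeeze: (C,)
    a = np.maximum(w1 @ z, 0.0)                 # excitation, ReLU
    alpha = 1.0 / (1.0 + np.exp(-(w2 @ a)))     # sigmoid weights: (C,)
    lam = (alpha * z)[:, None]                  # adaptive per-channel thresholds
    return soft_threshold(feat, lam)

rng = np.random.default_rng(0)
feat = rng.normal(size=(4, 16))                 # (channels C=4, points D=16)
w1 = rng.normal(size=(2, 4)) * 0.5              # squeeze-excite weights (toy)
w2 = rng.normal(size=(4, 2)) * 0.5
vals = soft_threshold(np.array([-2.0, -0.5, 0.5, 2.0]), 1.0)
out = se_soft_threshold(feat, w1, w2)
```

By construction the output magnitudes never exceed the input magnitudes, which is the denoising behaviour the module relies on.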
S4, fusing the signal features of the two branches in different modalities, namely fusing the signal data feature F3 and the image feature FP3 to obtain a final feature FD, and classifying to obtain a classification prediction result:
the signal data characteristic F3 learned by the multi-lead pulse signal characteristic extraction network and the image characteristic FP3 learned by the time-frequency image characteristic extraction network are fused, the fusion multi-mode data can capture more abundant characteristics, and the final characteristic FD is obtained by effectively fusing concat operationInputting the fused characteristics into the full connectionIn the interface layer and the softmax function layer, the classification prediction result is output, namely the final recognition result Class is obtained
S5, training and optimizing a network model to store optimal parameters, and inputting data to be tested into the optimal model to obtain a recognition result:
slicing pulse signals with multiple leadsM N The data set is divided into a training set and a testing set according to the proportion of 4:1, meanwhile, the training set is used as a verification set, an Adam optimizer is used for a cross entropy loss function in the training stage, 200 rounds of iteration are carried out, a StepLR learning rate adjustment strategy is used, learning rate updating operation is carried out every 40 rounds, verification is carried out every 2 rounds of verification set, and the optimal verification result is used as a reference to store network model weights for subsequent testing.
In the result analysis, five indexes, namely sensitivity, specificity, misdiagnosis rate, missed diagnosis rate and accuracy, are used to evaluate the performance of the model. The formulas can be expressed as:

Sensitivity = TP/(TP + FN)
Specificity = TN/(TN + FP)
Misdiagnosis rate = FP/(TN + FP)
Missed diagnosis rate = FN/(TP + FN)
Accuracy = (TP + TN)/(TP + TN + FP + FN)

wherein TP denotes the number of true positives, TN the number of true negatives, FP the number of false positives, and FN the number of false negatives.
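The five evaluation indexes can be computed directly from the confusion-matrix counts; this is a standard restatement with symbols matching the text:

```python
def metrics(tp, tn, fp, fn):
    """Return the five indexes from true/false positive/negative counts."""
    sensitivity = tp / (tp + fn)              # true positive rate
    specificity = tn / (tn + fp)              # true negative rate
    misdiagnosis = fp / (tn + fp)             # 1 - specificity
    missed_diagnosis = fn / (tp + fn)         # 1 - sensitivity
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, misdiagnosis, missed_diagnosis, accuracy

result = metrics(80, 90, 10, 20)  # illustrative counts
```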
Meanwhile, the setting of the lead number N is compared in terms of performance, as shown in FIG. 3: performance experiments under different lead numbers were carried out to select the optimal model. Experiments were performed with 9, 18, 27, 36 and 45 leads respectively, and the results in FIG. 3 show that the method of the invention performs best when the number of leads is 27. In particular, the specificity and accuracy at 27 leads are significantly higher than with the other lead numbers, and the misdiagnosis rate is lower than in the other cases. This shows that the model of the invention can better identify and classify pulse signals at 27 leads.
Example 2: the multi-lead pulse signal intelligent recognition system based on deep learning comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor realizes the multi-lead pulse signal intelligent recognition method based on deep learning as described in the embodiment 1 when executing the program.
The foregoing is a detailed description of the invention with reference to specific embodiments, and the scope of the invention is not limited thereto.

Claims (8)

1. The intelligent multi-lead pulse signal identification method based on deep learning is characterized by comprising the following steps:
s1, acquiring pulse signals X_N of N leads, preprocessing each original signal, and filtering and segmenting the signal data to obtain multi-lead slice pulse signals M_N;
s2, inputting the multi-lead slice pulse signals M_N into a multi-lead pulse-signal feature extraction network branch consisting of a convolutional neural network, a long short-term memory network and an attention learning module, and extracting comprehensive information to obtain a signal data feature F3:
firstly, feature extraction is carried out through the convolutional neural network to capture the local features and patterns of the pulse signals and learn their structural information; then the temporal correlation and sequence features of the single-lead pulse signals are learned through the long short-term memory network; finally, the correlation among the multi-lead pulse signals is learned through the attention learning module;
s3, inputting the time-frequency diagram obtained from the single-lead pulse signal by wavelet transformation into a time-frequency diagram feature extraction network and a feature enhancement module, the time-frequency diagram feature extraction network branch extracting more interpretable key information to obtain an image feature FP3:
firstly, extracting time domain and frequency domain features through a feature extraction network, and then removing noise and enhancing edges and features of signals through a feature enhancement module;
s4, fusing the signal characteristics of the two branches in different modes, namely fusing the signal data characteristic F3 and the image characteristic FP3 to obtain a final characteristic FD, and classifying to obtain a classification prediction result;
s5, training and optimizing the network model to store optimal parameters, and inputting data to be tested into the optimal model to obtain a recognition result.
2. The intelligent recognition method of multi-lead pulse signals based on deep learning according to claim 1, wherein N in step S1 is selected from 9 to 45.
3. The intelligent recognition method of multi-lead pulse signals based on deep learning according to claim 2, wherein N in step S1 is 27.
4. The intelligent recognition method of multi-lead pulse signals based on deep learning according to claim 1, wherein the attention learning module in step S2 first calculates the correlation score between each pair of leads by additive attention, namely:

s(q, k) = ReLU(W_q·q + W_k·k)

wherein W_q and W_k are learned weight matrices, ReLU is the rectified linear unit activation function, q is the query vector, and k is the key vector;
the relevance scores are then normalized by a softmax function to obtain the attention weight γ for each lead, namely:
where N represents the number of input elements,k i representing all input feature vectors;
finally, the attention weights are multiplied with the features output by the long short-term memory network, and the features of each channel are weighted-averaged to obtain a global multi-lead representation:

z = Σ_{i=1}^{N} γ_i·v_i

wherein v_i represents the feature value of the i-th input element and γ_i represents the attention weight of each lead.
5. The intelligent recognition method of multiple lead pulse signals based on deep learning according to claim 1, wherein the feature extraction network in step S3 mainly comprises three convolution layers, the first convolution layer mainly comprises one convolution, one batch normalization, one ReLU activation function and a maximum pooling operation, and the second and third convolution layers respectively comprise one convolution, one batch normalization and one ReLU activation function.
6. The intelligent recognition method of multi-lead pulse signals based on deep learning according to claim 1, wherein in step S3 the feature enhancement module adopts a SENet network and a soft-threshold operation, the SENet network automatically learns the importance weight of each feature channel, adaptively adjusts the contribution of each channel and strengthens the perception of key features, and the soft-threshold operation decides whether each data point is retained or suppressed by comparing it with a corresponding threshold, which can be expressed as:

S(x) = sign(x) · max(|x| − λ, 0)

wherein x is the original signal, λ is the soft-threshold parameter, and S(x) is the processed signal.
7. The intelligent recognition method of multi-lead pulse signals based on deep learning according to claim 1, wherein in step S4 the final feature FD is obtained by effective fusion through a concat operation, the fused feature is input into a fully connected layer and a softmax function layer, and the classification prediction result is output.
8. A multi-lead pulse signal intelligent recognition system based on deep learning, comprising a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor realizes the multi-lead pulse signal intelligent recognition method based on deep learning as claimed in any one of claims 1-7 when executing the program.
CN202311584793.1A 2023-11-27 2023-11-27 Multi-lead pulse signal intelligent identification method and system based on deep learning Pending CN117281528A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311584793.1A CN117281528A (en) 2023-11-27 2023-11-27 Multi-lead pulse signal intelligent identification method and system based on deep learning


Publications (1)

Publication Number Publication Date
CN117281528A true CN117281528A (en) 2023-12-26

Family

ID=89258949


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108388348A (en) * 2018-03-19 2018-08-10 浙江大学 A kind of electromyography signal gesture identification method based on deep learning and attention mechanism
CN112597814A (en) * 2020-12-04 2021-04-02 南通大学 Improved Openpos classroom multi-person abnormal behavior and mask wearing detection method
CN112702294A (en) * 2021-03-24 2021-04-23 四川大学 Modulation recognition method for multi-level feature extraction based on deep learning
CN113768515A (en) * 2021-09-17 2021-12-10 重庆邮电大学 Electrocardiosignal classification method based on deep convolutional neural network
CN113901893A (en) * 2021-09-22 2022-01-07 西安交通大学 Electrocardiosignal identification and classification method based on multiple cascade deep neural network
CN114155397A (en) * 2021-11-29 2022-03-08 中国船舶重工集团公司第七0九研究所 Small sample image classification method and system
CN115238731A (en) * 2022-06-13 2022-10-25 重庆邮电大学 Emotion identification method based on convolution recurrent neural network and multi-head self-attention
CN116226724A (en) * 2023-03-28 2023-06-06 清华大学深圳国际研究生院 Non-invasive coronary heart disease detection system
CN116361688A (en) * 2023-03-20 2023-06-30 重庆大学 Multi-mode feature fusion model construction method for automatic classification of electrocardiographic rhythms



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination