CN114886404B - Electronic equipment, device and storage medium - Google Patents
- Publication number
- CN114886404B (application CN202210818321.7A / CN202210818321A)
- Authority
- CN
- China
- Prior art keywords
- feature vector
- vector
- fusion
- fused
- feature
- Prior art date
- Legal status
- Active
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/02—Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
- A61B5/024—Detecting, measuring or recording pulse rate or heart rate
- A61B5/0245—Detecting, measuring or recording pulse rate or heart rate by using sensing means generating electric signals, i.e. ECG signals
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
Abstract
The application discloses electronic equipment, a device and a storage medium, and relates to the technical field of medical signal processing. In the application, the waveform characteristics of the rhythm data of the target object in a set time range are extracted to obtain a corresponding original characteristic vector set; then, respectively obtaining a corresponding first feature vector set and a corresponding second feature vector set according to two preset vector element sampling modes; further, fusing the original feature vector set, the first feature vector set and the second feature vector set to obtain a corresponding fused feature vector set, and determining the classification weight of each fused feature vector based on the element correlation degree among the vector elements contained in each fused feature vector in the fused feature vector set; finally, the rhythm type of the rhythm data is determined based on each fusion feature vector and the classification weight corresponding to each fusion feature vector. In this way, the accuracy of classification of the heart rhythm data is improved.
Description
Technical Field
The present application relates to the field of medical signal processing technologies, and in particular, to an electronic device, an apparatus, and a storage medium.
Background
In recent years, with the continuous improvement of living standards, people have paid more and more attention to their health. Among various diseases, heart disease is not only a common disease type but also poses a great threat to people's life and health.
An Electrocardiogram (ECG) can objectively reflect the physiological condition and working state of various parts of the heart, and is an important means and main basis for diagnosing arrhythmia diseases; however, interpreting an ECG still requires experienced medical staff to accurately diagnose the type of arrhythmia. Therefore, using intelligent medical equipment and related algorithms to monitor the heart-beating state of a patient in time and to classify the heart rhythm automatically is of strong practical significance.
In the prior art, in order to realize automatic classification of heart rhythm, after the electrocardiosignals contained in an electrocardiogram are obtained, normalization processing is performed on the electrocardiosignals to obtain corresponding heart rhythm data; on this basis, feature extraction is performed on the heart rhythm data by a heart rhythm data classification module constructed from a Convolutional Neural Network (CNN) model and an encoding-decoding model to obtain corresponding heart rhythm feature information, and the classification of the heart rhythm data is completed according to the obtained heart rhythm feature information.
However, when the heart rhythm data are classified with the above classification module, the fused-feature extraction capability of the convolutional neural network model and the encoding-decoding model is insufficient, so the classification of the heart rhythm data is not accurate.
Therefore, in the above manner, the accuracy of classification of the heart rhythm data is low.
Disclosure of Invention
The embodiment of the application provides an electronic device, an apparatus and a storage medium, which are used for improving the accuracy of classification of heart rhythm data.
In a first aspect, an embodiment of the present application provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor executes the computer program to implement the following method for classifying cardiac rhythm data:
acquiring rhythm data of a target object in a set time range, and performing feature extraction on waveform features of the rhythm data to obtain a corresponding original feature vector set;
respectively performing feature compression on each original feature vector contained in the original feature vector set according to two preset vector element sampling modes to obtain a corresponding first feature vector set and a corresponding second feature vector set;
fusing the original feature vector set, the first feature vector set and the second feature vector set to obtain a corresponding fused feature vector set;
determining the classification weight of each fusion feature vector based on the element correlation degree between vector elements contained in each fusion feature vector in the fusion feature vector set;
and determining the rhythm type of the rhythm data based on the fusion characteristic vectors and the classification weights corresponding to the fusion characteristic vectors.
In a second aspect, an embodiment of the present application further provides a cardiac rhythm data classification apparatus, including:
the acquisition module is used for acquiring the rhythm data of the target object within a set time range, and extracting the characteristics of the waveform characteristics of the rhythm data to obtain a corresponding original characteristic vector set;
the compression module is used for performing feature compression on each original feature vector contained in the original feature vector set according to two preset vector element sampling modes to obtain a corresponding first feature vector set and a corresponding second feature vector set;
the fusion module is used for fusing the original feature vector set, the first feature vector set and the second feature vector set to obtain a corresponding fused feature vector set;
the configuration module is used for determining the classification weight of each fusion characteristic vector based on the element correlation degree between the vector elements contained in each fusion characteristic vector in the fusion characteristic vector set;
and the identification module is used for determining the rhythm type of the rhythm data based on each fusion feature vector and the classification weight corresponding to each fusion feature vector.
In a possible embodiment, when feature compression is performed on each original feature vector included in an original feature vector set according to two preset vector element sampling manners, to obtain a corresponding first feature vector set and a corresponding second feature vector set, the compression module is specifically configured to:
based on two vector element sampling modes, performing global average pooling on each original feature vector to obtain a first semantic information set, and performing global maximum pooling on each original feature vector to obtain a second semantic information set;
a first set of feature vectors is generated based on the first set of semantic information, and a second set of feature vectors is generated based on the second set of semantic information.
In a possible embodiment, when determining the element correlation degree between the vector elements contained in each fused feature vector in the fused feature vector set, the configuration module is specifically configured to:
for each fused feature vector, the following operations are respectively performed:
acquiring each vector element contained in one fused feature vector, and performing semantic extraction on the element name of each vector element respectively to obtain semantic information of each element name;
for each element name, the following operations are respectively performed:
respectively comparing semantic similarity of semantic information of one element name with semantic information of other element names to obtain at least one semantic similarity;
and respectively determining the element correlation degree between the vector element corresponding to one element name and other vector elements based on the obtained at least one semantic similarity.
In a possible embodiment, when determining the classification weight of each fused feature vector based on the element correlation between the vector elements included in each fused feature vector in the fused feature vector set, the configuration module is specifically configured to:
for each fused feature vector, the following operations are respectively performed:
respectively obtaining each vector element contained in one fusion feature vector and at least one element correlation degree corresponding to each vector element; each element relevance characterizes: a degree of association between the respective vector element and one of the other vector elements;
respectively obtaining sub-classification weights respectively corresponding to the vector elements based on at least one element correlation degree respectively corresponding to the vector elements;
and obtaining a classification weight of the fused feature vector based on the obtained sub-classification weights and the respective element values of the vector elements.
In a possible embodiment, when determining the rhythm type of the rhythm data based on the respective fusion feature vectors and their respective corresponding classification weights, the identification module is specifically configured to:
respectively determining classification weights corresponding to the fusion feature vectors and fusion feature weight intervals to which the fusion feature vectors belong;
and determining the rhythm type of the rhythm data based on the obtained fusion characteristic weight intervals and the corresponding relation between the preset fusion characteristic weight intervals and the rhythm type.
In a third aspect, an electronic device is provided, comprising a processor and a memory, wherein the memory has program code stored thereon which, when executed by the processor, causes the processor to perform the steps of the method for classifying cardiac rhythm data of the first aspect.
In a fourth aspect, a computer program product is provided, which when invoked by a computer causes the computer to perform the steps of the method of classifying cardiac rhythm data according to the first aspect.
The beneficial effects of this application are as follows:
in the electronic device provided by the embodiment of the application, the rhythm data of a target object in a set time range is obtained, and the waveform characteristics of the rhythm data are subjected to characteristic extraction to obtain a corresponding original characteristic vector set; then, respectively performing feature compression on each original feature vector contained in the original feature vector set according to two preset vector element sampling modes to obtain a corresponding first feature vector set and a corresponding second feature vector set; further, fusing the original feature vector set, the first feature vector set and the second feature vector set to obtain a corresponding fused feature vector set, and determining the classification weight of each fused feature vector based on the element correlation degree among the vector elements contained in each fused feature vector in the fused feature vector set; finally, the rhythm type of the rhythm data is determined based on the fusion characteristic vectors and the classification weights corresponding to the fusion characteristic vectors.
By adopting the method, the original feature vector set, the first feature vector set and the second feature vector set are fused to obtain the corresponding fused feature vector set, so that the classification weight of each fused feature vector is determined based on the element correlation degree between the vector elements contained in each fused feature vector in the fused feature vector set, and the rhythm type of the rhythm data is determined based on each fused feature vector and the corresponding classification weight thereof, thereby avoiding the technical defect that the classification of the rhythm data is inaccurate due to insufficient feature extraction capability of the fusion of a convolutional neural network model and a coding-decoding model in the prior art, and improving the accuracy of the classification of the rhythm data.
Furthermore, other features and advantages of the present application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the present application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings required to be used in the description of the embodiments will be briefly introduced below, and it is apparent that the drawings in the description below are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings may be obtained according to the drawings without inventive labor. In the drawings:
FIG. 1 illustrates an alternative schematic diagram of a system architecture to which embodiments of the present application are applicable;
fig. 2 is a schematic diagram illustrating an overall implementation flow of a method for classifying cardiac rhythm data according to an embodiment of the present application;
fig. 3 is a flowchart illustrating an implementation of a method of classifying cardiac rhythm data according to an embodiment of the present application;
FIG. 4 is a schematic diagram illustrating a structure of a convolutional neural network model provided by an embodiment of the present application;
fig. 5 illustrates a logic diagram for obtaining a first feature vector set and a second feature vector set according to an embodiment of the present application;
FIG. 6 is a schematic diagram illustrating a structure of a sequence-to-sequence network model provided by an embodiment of the present application;
fig. 7 is a schematic diagram illustrating a specific application scenario for obtaining element relevancy according to an embodiment of the present application;
FIG. 8 is a flow chart illustrating an implementation of a method for determining classification weights for fused feature vectors according to an embodiment of the present application;
fig. 9 is a schematic diagram illustrating a specific application scenario for determining a rhythm class of rhythm data according to an embodiment of the present application;
fig. 10 is a schematic diagram illustrating a specific application scenario based on fig. 3 according to an embodiment of the present application;
fig. 11 is a schematic structural diagram schematically illustrating a cardiac rhythm data classification apparatus provided by an embodiment of the present application;
fig. 12 schematically illustrates a structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments, but not all embodiments, of the technical solutions of the present application. All other embodiments obtained by a person skilled in the art without any inventive step based on the embodiments described in the present application are within the scope of the protection of the present application.
It should be noted that "a plurality" is understood as "at least two" in the description of the present application. "And/or" describes the association relationship of the associated objects and means that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. "A is connected with B" may mean: A is directly connected with B, or A is connected with B through C. In addition, in the description of the present application, the terms "first," "second," and the like are used for descriptive purposes only and are not intended to indicate or imply relative importance or order.
In addition, in the technical scheme of the application, the data acquisition, transmission, use and the like all meet the requirements of relevant national laws and regulations.
The following briefly introduces the design concept of the embodiments of the present application:
in recent years, cardiovascular diseases have become one of the important diseases threatening human health, and the prevalence and mortality rate thereof increase year by year, wherein most of the cardiovascular diseases usually occur along with arrhythmia, i.e. arrhythmia is an important cause of heart disease and sudden cardiac death.
Because the ECG can objectively reflect the physiological condition and working state of various parts of the heart, it can serve as an important means and main basis for diagnosing arrhythmia diseases; however, interpreting an ECG still requires experienced medical staff to accurately diagnose the type of arrhythmia. Even experienced medical staff may misjudge heart rhythm data; moreover, the amount of ECG data is huge while medical staff are limited, and unavoidable fatigue and other factors further aggravate the misjudgment of heart rhythm data.
In addition, in order to relieve the working pressure of medical staff, various related technologies for automatically classifying heart rhythm data have appeared; however, the existing automatic classification technologies have insufficient capability of extracting features from the heart rhythm data. Therefore, a method for classifying cardiac rhythm data that improves the feature extraction capability is needed, thereby improving the accuracy of cardiac rhythm data classification.
In view of this, an embodiment of the present application provides a method for classifying cardiac rhythm data, which is performed by an electronic device, and specifically includes: acquiring rhythm data of a target object in a set time range, and performing feature extraction on waveform features of the rhythm data to obtain a corresponding original feature vector set; then, respectively performing feature compression on each original feature vector contained in the original feature vector set according to two preset vector element sampling modes to obtain a corresponding first feature vector set and a corresponding second feature vector set; further, fusing the original feature vector set, the first feature vector set and the second feature vector set to obtain a corresponding fused feature vector set, and determining the classification weight of each fused feature vector based on the element correlation degree among the vector elements contained in each fused feature vector in the fused feature vector set; finally, determining the rhythm type of the rhythm data based on each fusion feature vector and the classification weight corresponding to each fusion feature vector; furthermore, for convenience of description and understanding, in the embodiments of the present application, the electronic device may be a server that performs a method of classifying cardiac rhythm data.
In particular, preferred embodiments of the present application will be described below with reference to the accompanying drawings of the specification, it being understood that the preferred embodiments described herein are merely for illustrating and explaining the present application, and are not intended to limit the present application, and that the features of the embodiments and examples of the present application may be combined with each other without conflict.
Referring to fig. 1, a schematic diagram of a system architecture provided in an embodiment of the present application is shown, where the system architecture includes: server 101, terminal equipment 102 and data acquisition equipment 103. The server 101 and the terminal device 102 may perform information interaction through a communication network, where the communication mode adopted by the communication network may include: a wireless communication mode and a wired communication mode; the terminal device 102 is connected to the data acquisition device 103, and the heart rhythm data acquired by the data acquisition device 103 is sent to the terminal device 102.
Illustratively, the server 101 may communicate with the terminal device 102 by accessing a network via a cellular mobile communication technology, including, for example, the fifth-Generation mobile network (5G) technology.
Optionally, the server 101 may access a network through a short-range Wireless communication mode, for example, including a Wireless Fidelity (Wi-Fi) technology, to communicate with the terminal device 102.
In the embodiment of the present application, the number of the above-mentioned devices is not limited at all, as shown in fig. 1, only the server 101, the terminal device 102, and the data acquisition device 103 are taken as examples for description, and the above-mentioned devices and their respective functions are briefly introduced below.
The server 101 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a Network service, cloud communication, a middleware service, a domain name service, a security service, a Content Delivery Network (CDN), a big data and artificial intelligence platform, and the like.
It is worth proposing that, in the embodiment of the present application, the server 101 is configured to obtain the cardiac rhythm data of the target object within a set time range, and perform feature extraction on the waveform features of the cardiac rhythm data to obtain a corresponding original feature vector set; then, respectively performing feature compression on each original feature vector contained in the original feature vector set according to two preset vector element sampling modes to obtain a corresponding first feature vector set and a corresponding second feature vector set; further, fusing the original feature vector set, the first feature vector set and the second feature vector set to obtain a corresponding fused feature vector set, and determining the classification weight of each fused feature vector based on the element correlation degree among the vector elements contained in each fused feature vector in the fused feature vector set; finally, the rhythm type of the rhythm data is determined based on the fusion characteristic vectors and the classification weights corresponding to the fusion characteristic vectors.
For example, referring to fig. 2, which is a flowchart of the overall implementation of the method for classifying cardiac rhythm data according to the embodiment of the present application, the server may determine the cardiac rhythm type of the cardiac rhythm data of the target object, acquired within a set time range, based on a pre-trained convolutional neural network model and a sequence-to-sequence network model, where the sequence-to-sequence network model may also be referred to as an encoding-decoding model. In addition, in the embodiment of the present application, the heart rhythm data can be classified into five rhythm classes: class N (normal or bundle branch block beats), class S (supraventricular abnormal beats), class V (ventricular abnormal beats), class F (fusion beats), and class Q (unclassified beats).
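For illustration only (not part of the patent text), these five rhythm classes can be represented as a simple mapping; the label strings below merely paraphrase the class descriptions given above.

```python
# Hypothetical mapping of the five rhythm classes described above.
RHYTHM_CLASSES = {
    "N": "normal or bundle branch block beats",
    "S": "supraventricular abnormal beats",
    "V": "ventricular abnormal beats",
    "F": "fusion beats",
    "Q": "unclassified beats",
}

print(RHYTHM_CLASSES["N"])
```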
It should be noted that, when the server executes the above method for classifying heart rhythm data, it extracts features with a convolutional neural network with channel attention and then sends the features to the sequence-to-sequence network model for classification; by introducing this dual-attention mechanism, the feature extraction capability for the heart rhythm data is improved.
Terminal device 102 is a device that can provide voice and/or data connectivity to a user, and includes: a hand-held terminal device, a vehicle-mounted terminal device, etc. having a wireless connection function.
Illustratively, the terminal device 102 includes, but is not limited to: the Mobile terminal Device comprises a Mobile phone, a tablet computer, a notebook computer, a palm computer, a Mobile Internet Device (MID), a wearable Device, a Virtual Reality (VR) Device, an Augmented Reality (AR) Device, a wireless terminal Device in industrial control, a wireless terminal Device in unmanned driving, a wireless terminal Device in a smart grid, a wireless terminal Device in transportation safety, a wireless terminal Device in a smart city, a wireless terminal Device in a smart home, and the like.
In addition, the terminal device 102 may have an associated client installed thereon, where the client may be software (e.g., APP, browser, short video software, etc.), or may be a web page, an applet, or the like. In an embodiment of the application, the heart rate data may be sent by the terminal device 102 to the server 101.
The data acquisition device 103 is a device for acquiring data and recording information; it can convert acquired signals into analog electric signals through corresponding sensors, and further convert the analog electric signals into digital signals to be stored for preprocessing. Data acquisition devices include handheld data acquisition equipment with a wireless connection function, head-mounted data acquisition equipment, fixed data acquisition equipment, and the like.
Illustratively, the data acquisition device 103 may be: a batch data collector, an industrial data collector, a Radio Frequency Identification (RFID) data collector, other data collection devices (mobile phones, tablet computers, etc.) with a data collection function, a data collection card, etc.
It should be noted that, in the embodiment of the present application, the data acquisition device 103 is described by taking an electrocardiograph as an example, where the electrocardiograph is capable of acquiring the heart rhythm data of the target object in real time and uploading the acquired heart rhythm data to the terminal device 102.
The method for classifying cardiac rhythm data provided by the exemplary embodiments of the present application will be described below with reference to the above system architecture and the accompanying drawings, it should be noted that the above system architecture is only shown for the convenience of understanding the spirit and principles of the present application, and the embodiments of the present application are not limited in this respect.
Referring to fig. 3, which is a flowchart illustrating an implementation of a method for classifying cardiac rhythm data according to an embodiment of the present application, an implementation subject takes a server as an example, and the method is implemented as follows:
s301: acquiring the rhythm data of the target object in a set time range, and extracting the characteristics of the waveform characteristics of the rhythm data to obtain a corresponding original characteristic vector set.
Specifically, when step S301 is executed, after acquiring the cardiac rhythm data of the target object acquired by the data acquisition device within the set time range, the server may perform feature extraction on the waveform features of the cardiac rhythm data based on the convolutional neural network model with channel attention, so as to obtain a corresponding original feature vector set.
As shown in fig. 4, the convolutional neural network model mainly includes two modules: a CMC module and an ML module. The CMC module is used for convolution calculation and local feature extraction, and the ML module is used for changing the spatial dimension of the original feature vector set and stabilizing the training process; it should be noted that the original feature vector set can be presented as a corresponding original feature map.
Further, the CMC module is mainly composed of three parts: convolution, an activation function, and channel attention. Based on this structure, the CMC module first performs feature extraction on the waveform features of the acquired heart rhythm data through one-dimensional convolution to obtain the original feature vector set, where after each convolution operation a corresponding activation function is applied to enhance the nonlinear expression capability of the convolutional neural network model; then, the channel attention module completes the feature compression of each original feature vector contained in the original feature vector set, that is, the local feature extraction capability is improved.
It should be noted that, because the Mish activation function has the advantages of being unbounded above, infinitely continuously differentiable, smooth, and the like, it allows information to flow into the convolutional neural network better and enhances the accuracy and generalization of the convolutional neural network model; therefore, in the embodiment of the present application, the activation function is the Mish activation function, whose specific expression is as follows:

Mish(x) = x · tanh(ln(1 + e^x))

where x represents the input and tanh(·) is a common activation function whose output mean is 0, so convergence is fast and the number of iterations can be reduced; the specific expression of the tanh activation function is as follows:

tanh(x) = (e^x − e^(−x)) / (e^x + e^(−x))

where x represents the input.
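A minimal Python sketch of the two activation functions above, assuming the activation described is the standard Mish function, x · tanh(softplus(x)); it is included only to make the expressions concrete.

```python
import math

def tanh(x: float) -> float:
    # tanh(x) = (e^x - e^-x) / (e^x + e^-x); its output mean is 0, as noted above.
    return (math.exp(x) - math.exp(-x)) / (math.exp(x) + math.exp(-x))

def mish(x: float) -> float:
    # Mish(x) = x * tanh(ln(1 + e^x)); smooth and unbounded above.
    return x * tanh(math.log1p(math.exp(x)))

if __name__ == "__main__":
    for v in (-2.0, 0.0, 2.0):
        print(v, round(mish(v), 4))
```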
Furthermore, the ML module is mainly composed of two parts: maximum pooling and layer normalization. Based on this structure, the ML module avoids the technical defect that the overly long fused feature vectors produced after channel attention processing would increase the calculation amount and training time of the sequence-to-sequence network model; it reduces the spatial size while increasing the number of feature channels of the fused feature vector set, thereby improving the training efficiency of the convolutional neural network model. In addition, applying layer normalization while the spatial size of the fused feature vector set is changed can stabilize the training process and accelerate the convergence of the convolutional neural network model.
Obviously, based on the convolutional neural network model, two downsampling operations are performed on the original feature vector set corresponding to the heart rhythm data.
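The following PyTorch-style sketch gives one possible reading of the CMC and ML modules described above (one-dimensional convolution + Mish activation + channel attention, then max pooling + layer normalization); the kernel sizes, channel counts, and segment length are assumptions, and an identity layer stands in for the channel attention module, which is sketched separately after step S303.

```python
# Hypothetical PyTorch sketch of the CMC and ML blocks described above.
# Kernel sizes, channel counts and the attention placeholder are assumptions.
import torch
import torch.nn as nn

class CMCBlock(nn.Module):
    """Conv1d -> Mish activation -> channel attention (local feature extraction)."""
    def __init__(self, in_ch: int, out_ch: int, kernel: int = 7):
        super().__init__()
        self.conv = nn.Conv1d(in_ch, out_ch, kernel, padding=kernel // 2)
        self.act = nn.Mish()
        # The channel attention module is sketched separately after S303;
        # an identity stands in for it here to keep this block self-contained.
        self.channel_attention = nn.Identity()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.channel_attention(self.act(self.conv(x)))

class MLBlock(nn.Module):
    """Max pooling -> layer normalization (downsampling and training stabilization)."""
    def __init__(self, length_after_pool: int):
        super().__init__()
        self.pool = nn.MaxPool1d(kernel_size=2)
        self.norm = nn.LayerNorm(length_after_pool)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.norm(self.pool(x))

if __name__ == "__main__":
    x = torch.randn(1, 1, 256)                      # a single-lead rhythm segment (assumed length)
    backbone = nn.Sequential(CMCBlock(1, 16), MLBlock(128),
                             CMCBlock(16, 32), MLBlock(64))   # two downsampling stages
    print(backbone(x).shape)                        # torch.Size([1, 32, 64])
```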
S302: and respectively performing feature compression on each original feature vector contained in the original feature vector set according to two preset vector element sampling modes to obtain a corresponding first feature vector set and a corresponding second feature vector set.
Specifically, as shown in fig. 5, after the server obtains the original feature vector set, based on the two vector element sampling manners, the server performs global average pooling on each original feature vector to obtain a first semantic information set, performs global maximum pooling on each original feature vector to obtain a second semantic information set, thereby generating the first feature vector set based on the obtained first semantic information set, and generating the second feature vector set based on the obtained second semantic information set.
Illustratively, through the channel attention module in the convolutional neural network model, the server first takes an input original feature vector set (e.g., an original feature map), whose dimensions can be expressed as C × H × W, where C represents the number of channels, H represents the height, and W represents the width; then, global average pooling and global maximum pooling are respectively applied along the channel axis to collect the spatial information in the original feature map, generating two different channel semantic information sets, namely a first semantic information set and a second semantic information set; further, the first semantic information set and the second semantic information set are respectively sent to a multilayer perceptron to generate a first channel feature map (the first feature vector set) and a second channel feature map (the second feature vector set).
Here, global average pooling means: if the input feature map has size H1 × W1 × C1, the features on all channels are average-pooled; the pooling window is set to the spatial size H1 × W1 of the input feature map, and the average element of the whole feature map is output by downsampling, with output size 1 × 1 × C1. Global maximum pooling means: if the input feature map has size H2 × W2 × C2, the features on all channels are max-pooled; the pooling window is set to the spatial size H2 × W2 of the input feature map, and the maximum element of the whole feature map is output by downsampling, with output size 1 × 1 × C2.
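A small sketch of the two pooling operations just described, showing that each compresses an H × W feature map per channel down to a single value, i.e., a 1 × 1 × C output; the tensor sizes are arbitrary examples.

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 8, 4, 6)                     # (batch, C=8, H=4, W=6), sizes chosen arbitrarily
avg = F.adaptive_avg_pool2d(x, output_size=1)    # global average pooling -> (1, 8, 1, 1)
mx = F.adaptive_max_pool2d(x, output_size=1)     # global maximum pooling -> (1, 8, 1, 1)
print(avg.shape, mx.shape)
```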
It should be noted that, because the number of channels of each original feature contained in the original feature map is not large, only two fully connected layers with the same number of feature channels are used in the multilayer perceptron, and a Mish activation function is applied after each fully connected layer, so that the convolutional neural network model can learn more complex nonlinear relationships.
S303: and fusing the original feature vector set, the first feature vector set and the second feature vector set to obtain a corresponding fused feature vector set.
Specifically, when step S303 is executed, after the server obtains the first feature vector set and the second feature vector set, the server fuses the original feature vector set, the first feature vector set, and the second feature vector set based on a preset feature vector fusion method, so as to obtain a corresponding fusion feature vector set, so as to facilitate a subsequent sequence to the sequence network model to perform global feature extraction.
Illustratively, based on the multilayer perceptron of the channel attention module in the convolutional neural network model, the server adds the first feature vector set corresponding to the first semantic information set and the second feature vector set corresponding to the second semantic information set, normalizes the summed feature vector set by applying a Sigmoid activation function to obtain a third feature vector set, and then fuses the third feature vector set with the original feature vector set through a dot-product operation to obtain the corresponding fused feature vector set. The specific process of obtaining the third feature vector set and the fused feature vector set can be expressed as:

M = σ(W(F_GAV(X)) + W(F_MAX(X)))

x̃_c = M · x_c

where X represents the original feature vector set, M represents the third feature vector set obtained through channel attention enhancement, σ represents the Sigmoid activation function (which redistributes its input to the interval [0, 1], so it can be used as the output layer of a two-class model and its output can represent a probability), W represents the multilayer perceptron, F_GAV represents global average pooling, F_MAX represents global maximum pooling, x_c represents an original feature vector in the original feature vector set X, and x̃_c represents the corresponding fused feature vector after channel attention enhancement. The specific expression of the Sigmoid activation function is as follows:

σ(x) = 1 / (1 + e^(−x))

where x represents the input.
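The following sketch illustrates the channel attention computation as reconstructed above: a shared multilayer perceptron applied to the globally average-pooled and max-pooled descriptors, a Sigmoid producing the third feature vector set, and a per-channel product with the original features. It is a plausible reading of the description operating on one-dimensional features, not a verbatim implementation of the patent.

```python
import torch
import torch.nn as nn

class ChannelAttention1d(nn.Module):
    """sigma(MLP(GAP(X)) + MLP(GMP(X))) scaled onto X, as described above (assumed reading)."""
    def __init__(self, channels: int):
        super().__init__()
        # Two fully connected layers with the same number of feature channels,
        # each followed by a Mish activation, as stated in the description.
        self.mlp = nn.Sequential(nn.Linear(channels, channels), nn.Mish(),
                                 nn.Linear(channels, channels), nn.Mish())

    def forward(self, x: torch.Tensor) -> torch.Tensor:    # x: (batch, C, L)
        avg = x.mean(dim=-1)                                # global average pooling -> (batch, C)
        mx, _ = x.max(dim=-1)                               # global maximum pooling -> (batch, C)
        attn = torch.sigmoid(self.mlp(avg) + self.mlp(mx))  # third feature vector set in [0, 1]
        return x * attn.unsqueeze(-1)                       # fuse with the original features

if __name__ == "__main__":
    x = torch.randn(2, 16, 64)
    print(ChannelAttention1d(16)(x).shape)                  # torch.Size([2, 16, 64])
```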
S304: and determining the classification weight of each fused feature vector based on the element correlation between the vector elements contained in each fused feature vector in the fused feature vector set.
Specifically, in step S304, after obtaining the fused feature vector set based on the convolutional neural network model, the server may determine the classification weight of each fused feature vector based on the element correlation between the vector elements included in each fused feature vector in the fused feature vector set according to the sequence-to-sequence network model.
Before introducing the method for determining the classification weight of each fused feature vector, the sequence-to-sequence network model is briefly described. As shown in fig. 6, the sequence-to-sequence model is an encoding-decoding model based on a recurrent neural network; "sequence to sequence" refers to the conversion from a sequence A to a sequence B. The input sequence is sent to an encoder, which compresses it into a vector of a specified length; this process is called encoding. The decoder takes the last state of the encoder as its initialization state and as its input, and restores the vector sent by the encoder into a sequence; this process is called decoding. Finally, the rhythm type of the rhythm data can be determined based on the output of the decoder.
It should be noted that a Recurrent Neural Network (RNN) is usually used as the basic unit of the encoder and the decoder; however, in a deep neural network the RNN suffers from short-term memory, and if the sequence is too long it is difficult to carry information from earlier time steps to later ones, which eventually causes some layers to stop updating their parameters. In addition, during back propagation the RNN is more prone to gradient vanishing or explosion; therefore, in the embodiment of the present application, a Long Short-Term Memory network (LSTM) is used instead of the RNN as the basic unit of the encoder and the decoder in the sequence-to-sequence network model.
The LSTM is an improved recurrent neural network that can solve the problem that the RNN cannot model long-distance dependence. Compared with the RNN, which has only one propagated state h_t, the LSTM propagates two states: the cell state c_t and the hidden state h_t. The LSTM removes information from or adds information to the cell state through gating structures, where a gate is a way of selectively passing information and consists of a Sigmoid activation function with point-wise multiplication; the LSTM contains three gate layers: the forget gate, the input gate, and the output gate.
Optionally, the encoder and the decoder process the sequence data in the order from left to right, but a Bidirectional Long-Short Term Memory network (Bi-LSTM) may also be used, so that the sequence-to-sequence network model may update parameters from two directions, and not only can utilize past information, but also can capture subsequent information; therefore, since the Bi-LSTM can better utilize the context information of the data, and generally has better performance than the standard LSTM, in the embodiment of the present application, the Bi-LSTM can be used as the basic unit of the encoder and the decoder.
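A compact sketch of an encoder-decoder built from LSTM units with a bidirectional encoder, in the spirit of the discussion above; the hidden size, the way the decoder consumes the encoder outputs, and the classification head are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class Seq2SeqClassifier(nn.Module):
    """Bi-LSTM encoder + LSTM decoder over the fused feature sequence (assumed layout)."""
    def __init__(self, feat_dim: int, hidden: int = 64, num_classes: int = 5):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.decoder = nn.LSTM(2 * hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:    # x: (batch, seq_len, feat_dim)
        enc_out, _ = self.encoder(x)                        # (batch, seq_len, 2*hidden)
        dec_out, _ = self.decoder(enc_out)                  # decoder consumes encoder outputs
        return self.head(dec_out[:, -1, :])                 # class logits for N/S/V/F/Q

if __name__ == "__main__":
    fused = torch.randn(2, 64, 32)                          # batch of fused feature sequences
    print(Seq2SeqClassifier(feat_dim=32)(fused).shape)      # torch.Size([2, 5])
```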
Next, based on the sequence-to-sequence network model, a fused feature vector set may be obtained, and the element correlation between the vector elements included in each fused feature vector may be obtained, as shown in fig. 7, for each fused feature vector in the fused feature vector set, the following operations are performed: acquiring each vector element contained in one fused feature vector, and performing semantic extraction on the element name of each vector element respectively to obtain semantic information of each element name; then, for each element name, the following operations are performed: and comparing the semantic similarity of the semantic information of one element name with the semantic information of other element names to obtain at least one semantic similarity, and determining the element correlation between the vector element corresponding to one element name and other vector elements based on the obtained at least one semantic similarity.
Exemplarily, taking one fused feature vector fus.feat.v1 as an example, it is assumed that the fused feature vector fus.feat.v1 contains 5 vector elements, i.e. the fused feature vector fus.feat.v1 is expressed as: (vect.ele.1, vect.ele.2, vect.ele.3, vect.ele.4, and vect.ele.5), wherein each vector element can be represented by a corresponding Key and a value, the Key represents an element name of the vector element, the value represents an element value of the vector element, and the server obtains each vector element and its corresponding element similarity based on the element correlation obtaining method as shown in table 1:
TABLE 1
Based on the above table, the server may determine, according to the at least one semantic similarity between the semantic information of one element name and the semantic information of the other element names, the element correlation between the vector element corresponding to that element name and the other vector elements. For example, taking vector element Vect.Ele.1 as an example, the semantic similarities between the element name of Vect.Ele.1 and the element names of Vect.Ele.2, Vect.Ele.3, Vect.Ele.4, and Vect.Ele.5 are, in order: 88%, 76%, 98%, and 83%; based on these four semantic similarities, it can be confirmed that the element correlations between vector element Vect.Ele.1 and vector elements Vect.Ele.2, Vect.Ele.3, Vect.Ele.4, and Vect.Ele.5 are: 0.88, 0.76, 0.98, and 0.83, respectively.
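A small sketch of the step just illustrated, assuming the element correlation is simply the semantic similarity expressed as a fraction (as the 88% → 0.88 example suggests).

```python
# Semantic similarities between the element name of Vect.Ele.1 and the other element names,
# taken from the example above.
similarities_pct = {"Vect.Ele.2": 88, "Vect.Ele.3": 76, "Vect.Ele.4": 98, "Vect.Ele.5": 83}

# Assumed mapping: element correlation = semantic similarity / 100.
correlations = {name: pct / 100 for name, pct in similarities_pct.items()}
print(correlations)   # {'Vect.Ele.2': 0.88, 'Vect.Ele.3': 0.76, 'Vect.Ele.4': 0.98, 'Vect.Ele.5': 0.83}
```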
Further, after obtaining the element correlation degree between the vector elements included in each fused feature vector in the fused feature vector set, the server determines the classification weight of each fused feature vector based on the element correlation degree between the vector elements included in each fused feature vector in the fused feature vector set, and for each fused feature vector, as shown in fig. 8, it is an implementation flowchart of a method for determining the classification weight of the fused feature vector provided in the embodiment of the present application, and a specific implementation flow of the method is as follows:
s3041: and respectively obtaining each vector element contained in one fused feature vector and at least one element correlation degree corresponding to each vector element.
Specifically, when step S3041 is executed, after the server obtains, based on the above-mentioned method for obtaining element relevancy, the at least one element relevancy corresponding to each vector element in the fused feature vector set, it can obtain, according to the category information or identification information of one fused feature vector, each vector element contained in that fused feature vector and the at least one element relevancy corresponding to each vector element, where each element relevancy characterizes the degree of association between the respective vector element and one of the other vector elements.
S3042: and respectively obtaining the sub-classification weight corresponding to each vector element based on at least one element correlation degree corresponding to each vector element.
Specifically, when step S3042 is executed, after the server obtains the at least one element relevancy corresponding to each vector element, it obtains the sub-classification weight corresponding to each vector element based on the at least one element relevancy corresponding to that vector element. In the calculation formula of the sub-classification weight, w_i represents the sub-classification weight of vector element i, r_ij represents the element correlation between vector element i and the j-th vector element, and n represents the number of other vector elements corresponding to vector element i.
For example, still taking vector element Vect.Ele.1 as an example, its element correlations with vector elements Vect.Ele.2, Vect.Ele.3, Vect.Ele.4, and Vect.Ele.5 are, in order, 0.88, 0.76, 0.98, and 0.83; therefore, based on the calculation formula of the sub-classification weight, the sub-classification weight of vector element Vect.Ele.1 can be obtained.
S3043: and obtaining a classification weight of the fused feature vector based on the obtained sub-classification weights and the respective element values of the vector elements.
Specifically, in step S3043, after obtaining the sub-classification weights corresponding to the respective vector elements, the server may obtain the classification weight of the above-mentioned fused feature vector based on the obtained sub-classification weights, the element values of the respective vector elements, and a preset classification-weight calculation formula for the fused feature vector. In this formula, W represents the classification weight of the fused feature vector, w_i represents the sub-classification weight of vector element i, v_i represents the element value of vector element i, and m represents the number of vector elements contained in the fused feature vector.
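Because the weight formulas themselves are not reproduced in this text, the sketch below shows one plausible reading that is consistent with the variable descriptions above: the sub-classification weight w_i taken as the mean of the element correlations r_ij of vector element i, and the classification weight W taken as the sum of sub-classification weights multiplied by element values v_i. Both choices are assumptions, not the patent's exact formulas.

```python
from typing import Dict, List

def sub_classification_weight(correlations: List[float]) -> float:
    # Assumed reading: w_i = (1 / n) * sum_j r_ij over the n other vector elements.
    return sum(correlations) / len(correlations)

def classification_weight(elements: Dict[str, float],
                          correlations: Dict[str, List[float]]) -> float:
    # Assumed reading: W = sum_i w_i * v_i over the vector elements of one fused feature vector.
    return sum(sub_classification_weight(correlations[name]) * value
               for name, value in elements.items())

if __name__ == "__main__":
    # Vect.Ele.1 uses the correlations from the example above; the other
    # element values and correlations here are made-up placeholders.
    elements = {"Vect.Ele.1": 0.5, "Vect.Ele.2": 0.2}
    correlations = {"Vect.Ele.1": [0.88, 0.76, 0.98, 0.83],
                    "Vect.Ele.2": [0.88, 0.70, 0.65, 0.90]}
    print(round(classification_weight(elements, correlations), 3))   # 0.588
```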
Obviously, by adopting the above method, the situation in a traditional sequence-to-sequence network model in which all contents of the sequence share the same importance is effectively avoided, which is unreasonable in practical tasks: first, converting the input into word vectors loses part of the information, and in a long sequence the importance of different word vectors differs greatly. After the attention mechanism is introduced, by re-weighting the sequence contents with probabilities, the weight information obtained through attention can be redistributed to each piece of information; that is, the originally identical intermediate semantics A can be replaced by A_i, which changes continuously according to the current content, so that the model can learn the more important contents. In addition, the attention mechanism does not need to change the original sequence-to-sequence network model structure; by adding only a few additional parameters and little extra calculation, it improves the global feature extraction and fusion capability and effectively improves the accuracy of heart rhythm data classification.
S305: and determining the rhythm type of the rhythm data based on the fusion characteristic vectors and the classification weights corresponding to the fusion characteristic vectors.
Specifically, referring to fig. 9, in step S305, after determining the classification weight of each fusion feature vector, the server may determine the classification weight corresponding to each fusion feature vector and the fusion feature weight interval to which each fusion feature vector belongs according to a preset fusion feature weight interval; then, the rhythm type of the rhythm data is determined based on the obtained fusion characteristic weight intervals and the corresponding relation between the preset fusion characteristic weight intervals and the rhythm type.
For example, taking two fusion feature vectors as an example, each fusion feature weight interval and its corresponding rhythm class are shown in table 2:
TABLE 2
It should be noted that, in the above table, the fused feature vector fus.feat.v1 may be divided into two fused feature weight intervals according to a preset fused feature weight interval division rule, and the fused feature vector fus.feat.v2 may be divided into three fused feature weight intervals according to a preset fused feature weight interval division rule; further, after determining the classification weight and the fusion feature weight interval of the fusion feature vector fus.feat.v1 and the fusion feature vector fus.feat.v2, the server may determine the rhythm type of the corresponding rhythm data.
For example, if the classification weight of the fused feature vector fus.feat.v1 belongs to fused feature weight interval 1 and the classification weight of the fused feature vector fus.feat.v2 belongs to fused feature weight interval 3, it can be determined that the above-mentioned cardiac rhythm data belongs to the class N cardiac rhythm.
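Since Table 2 is not reproduced here, the following sketch only illustrates the mechanism: each fused feature vector's classification weight is mapped to the weight interval containing it, and the resulting combination of intervals is looked up in a preset correspondence table. All interval boundaries and table entries below are invented placeholders, chosen so that the example above (weights 0.85 and 0.76 falling into intervals 1 and 3) yields the class N rhythm.

```python
from typing import Dict, Tuple

# Hypothetical weight intervals per fused feature vector (boundaries are placeholders).
INTERVALS = {
    "fus.feat.v1": [("interval 1", 0.0, 0.9), ("interval 2", 0.9, 1.0)],
    "fus.feat.v2": [("interval 1", 0.0, 0.3), ("interval 2", 0.3, 0.6), ("interval 3", 0.6, 1.0)],
}

# Hypothetical correspondence between interval combinations and rhythm classes.
CLASS_TABLE: Dict[Tuple[str, ...], str] = {
    ("interval 1", "interval 3"): "N",
}

def interval_of(vector_name: str, weight: float) -> str:
    for name, lo, hi in INTERVALS[vector_name]:
        if lo <= weight < hi:
            return name
    return INTERVALS[vector_name][-1][0]

def rhythm_class(weights: Dict[str, float]) -> str:
    key = tuple(interval_of(name, w) for name, w in weights.items())
    return CLASS_TABLE.get(key, "Q")    # fall back to the unclassified class

print(rhythm_class({"fus.feat.v1": 0.85, "fus.feat.v2": 0.76}))   # N
```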
Based on the above-mentioned heart rhythm data classification method, fig. 10 shows a schematic diagram of a specific application scenario of the method provided in the embodiment of the present application. The server acquires the heart rhythm data arrhythmia.data of a target object (e.g., Person A) within a set time range (2022.06.25) and performs feature extraction on the waveform features of the heart rhythm data to obtain a corresponding original feature vector set origin.Eig.Set; then, feature compression is respectively performed on each original feature vector contained in the original feature vector set origin.Eig.Set according to two preset vector element sampling modes (sampling.Meth1 and sampling.Meth2) to obtain a corresponding first feature vector set Fri.Eig.Set and a second feature vector set Sec.Eig.Set; further, the original feature vector set origin.Eig.Set, the first feature vector set Fri.Eig.Set, and the second feature vector set Sec.Eig.Set are fused to obtain a corresponding fused feature vector set fus.Eig.Set, so that, based on the element correlations between the vector elements contained in each fused feature vector (e.g., fus.feat.v1 and fus.feat.v2) in the fused feature vector set fus.Eig.Set (e.g., the element correlations ele.core1 and ele.core2 corresponding to the fused feature vector fus.feat.v1, and the element correlations ele.core3 and ele.core4 corresponding to the fused feature vector fus.feat.v2), the classification weights of the fused feature vectors are determined to be, in order, 0.85 and 0.76; finally, the rhythm class of the heart rhythm data arrhythmia.data is determined to be a class N rhythm based on the respective fused feature vectors (fus.feat.v1 and fus.feat.v2) and their respective corresponding classification weights (0.85 and 0.76).
In summary, in the method for classifying cardiac rhythm data executed by the electronic device according to the embodiment of the present application, cardiac rhythm data of a target object in a set time range is obtained, and feature extraction is performed on waveform features of the cardiac rhythm data to obtain a corresponding original feature vector set; then, respectively performing feature compression on each original feature vector contained in the original feature vector set according to two preset vector element sampling modes to obtain a corresponding first feature vector set and a corresponding second feature vector set; further, fusing the original feature vector set, the first feature vector set and the second feature vector set to obtain a corresponding fused feature vector set, and determining the classification weight of each fused feature vector based on the element correlation degree between the vector elements contained in each fused feature vector in the fused feature vector set; finally, the rhythm type of the rhythm data is determined based on the fusion characteristic vectors and the classification weights corresponding to the fusion characteristic vectors.
By adopting this method, the original feature vector set, the first feature vector set, and the second feature vector set are fused to obtain the corresponding fused feature vector set; the classification weight of each fused feature vector is then determined based on the element correlations between the vector elements contained in that fused feature vector, and the rhythm type of the rhythm data is determined based on each fused feature vector and its corresponding classification weight. This avoids the technical defect in the prior art whereby fusing a convolutional neural network model with an encoding-decoding model provides insufficient feature extraction capability and therefore classifies rhythm data inaccurately, and it improves the accuracy of rhythm data classification.
It should be noted that, for the method for classifying rhythm data provided in the embodiment of the present application, the server measures how well the rhythm type of the rhythm data is determined in terms of the accuracy, sensitivity, specificity, and precision of the recognition results; that is, the method for classifying rhythm data provided in the embodiment of the present application improves the accuracy, sensitivity, specificity, and precision of rhythm data classification.
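These four metrics can be computed per rhythm class (one-versus-rest) from a binary confusion matrix, as in the following sketch; the counts are illustrative and are not results of the application:

```python
# Standard definitions of the four metrics named above, from a binary confusion matrix.
def metrics(tp, fp, tn, fn):
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0   # recall / true positive rate
    specificity = tn / (tn + fp) if (tn + fp) else 0.0   # true negative rate
    precision   = tp / (tp + fp) if (tp + fp) else 0.0   # positive predictive value
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "precision": precision}

print(metrics(tp=90, fp=5, tn=880, fn=10))
```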
Further, based on the same technical concept, the embodiment of the present application also provides a heart rhythm data classification device, which is used for implementing the flow of the above-mentioned heart rhythm data classification method of the embodiment of the present application. Referring to fig. 11, the apparatus for classifying cardiac rhythm data includes: an acquisition module 1101, a compression module 1102, a fusion module 1103, a configuration module 1104, and an identification module 1105, wherein:
the acquisition module 1101 is configured to acquire cardiac rhythm data of a target object within a set time range, and perform feature extraction on waveform features of the cardiac rhythm data to obtain a corresponding original feature vector set;
a compression module 1102, configured to perform feature compression on each original feature vector included in an original feature vector set according to two preset vector element sampling manners, respectively, to obtain a corresponding first feature vector set and a second feature vector set;
a fusion module 1103, configured to fuse the original feature vector set, the first feature vector set, and the second feature vector set to obtain a corresponding fusion feature vector set;
a configuration module 1104, configured to determine respective classification weights of the respective fusion feature vectors based on element correlations between vector elements included in the respective fusion feature vectors in the set of fusion feature vectors;
an identifying module 1105, configured to determine a rhythm type of the rhythm data based on each fusion feature vector and its respective corresponding classification weight.
In a possible embodiment, when feature compression is performed on each original feature vector included in an original feature vector set according to two preset vector element sampling manners, to obtain a corresponding first feature vector set and second feature vector set, the compression module 1102 is specifically configured to:
based on two vector element sampling modes, performing global average pooling on each original feature vector to obtain a first semantic information set, and performing global maximum pooling on each original feature vector to obtain a second semantic information set;
a first set of feature vectors is generated based on the first set of semantic information, and a second set of feature vectors is generated based on the second set of semantic information.
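A minimal sketch of this compression step is given below, assuming each original feature vector is held as a two-dimensional (channels × time) array produced by the waveform feature extraction; the array shapes and the direct use of the pooled values as the feature vector sets are assumptions, not requirements of the embodiment:

```python
# Global average pooling over time yields the first semantic information set,
# global max pooling the second; shapes are illustrative assumptions.
import numpy as np

def compress_feature_set(origin_set):
    first_semantic  = [fmap.mean(axis=1) for fmap in origin_set]  # global average pooling
    second_semantic = [fmap.max(axis=1)  for fmap in origin_set]  # global max pooling
    # here the first/second feature vector sets are taken directly from the
    # semantic information sets; a learned projection could be inserted instead
    return first_semantic, second_semantic

origin_eig_set = [np.random.rand(8, 250) for _ in range(2)]  # 8 channels, 250 samples each
fri_eig_set, sec_eig_set = compress_feature_set(origin_eig_set)
print(fri_eig_set[0].shape, sec_eig_set[0].shape)  # (8,) (8,)
```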
In a possible embodiment, when determining the element correlation degree between the vector elements contained in each fused feature vector in the fused feature vector set, the configuration module 1104 is specifically configured to:
for each fused feature vector, the following operations are respectively performed:
acquiring each vector element contained in one fused feature vector, and performing semantic extraction on the element name of each vector element respectively to obtain semantic information of each element name;
for each element name, the following operations are respectively performed:
respectively comparing semantic similarity of semantic information of one element name with semantic information of other element names to obtain at least one semantic similarity;
and respectively determining the element correlation degree between the vector element corresponding to one element name and other vector elements based on the obtained at least one semantic similarity.
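A minimal sketch of this flow follows, with a toy bag-of-characters embedding standing in for the semantic extraction (which is not fixed here) and cosine similarity as the comparison; the element names are hypothetical:

```python
# Embed each element name, compare names pairwise, and read the similarities
# off as element correlations. The embedding and the names are assumptions.
import numpy as np

def embed_name(name):
    vec = np.zeros(26)
    for ch in name.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def element_correlations(element_names):
    """For each element name, the correlations to every other element name."""
    emb = {name: embed_name(name) for name in element_names}
    return {n: {m: cosine(emb[n], emb[m]) for m in element_names if m != n}
            for n in element_names}

print(element_correlations(["rr_interval_mean", "rr_interval_std", "qrs_width"]))
```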
In a possible embodiment, when determining the classification weight of each fused feature vector based on the element correlation between the vector elements included in each fused feature vector in the fused feature vector set, the configuration module 1104 is specifically configured to:
for each fused feature vector, the following operations are respectively performed:
respectively obtaining each vector element contained in one fusion feature vector and at least one element correlation degree corresponding to each vector element; each element relevance characterizes: a degree of association between the respective vector element and one of the other vector elements;
respectively obtaining sub-classification weights respectively corresponding to the vector elements based on at least one element correlation degree respectively corresponding to the vector elements;
and obtaining a classification weight of the fusion feature vector based on the obtained sub-classification weights and the respective element values of the vector elements.
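A minimal sketch of this weighting step follows; the mean, softmax, weighted-sum, and sigmoid choices are assumptions, since only the overall dependency (element correlations to sub-classification weights, then sub-classification weights plus element values to a classification weight) is fixed by the description:

```python
# Sub-classification weights from element correlations, then one classification
# weight per fused feature vector; the specific formulas are assumptions.
import numpy as np

def sub_classification_weights(correlations_per_element):
    raw = np.array([np.mean(c) for c in correlations_per_element])
    exp = np.exp(raw - raw.max())
    return exp / exp.sum()                        # softmax-normalised sub-weights

def classification_weight(element_values, correlations_per_element):
    w = sub_classification_weights(correlations_per_element)
    score = float(np.dot(w, element_values))      # combine sub-weights with element values
    return 1.0 / (1.0 + np.exp(-score))           # squash to (0, 1)

values = np.array([0.9, 1.4, 0.6])                # element values of one fused feature vector
corrs = [[0.8, 0.2], [0.8, 0.5], [0.2, 0.5]]      # one correlation list per vector element
print(classification_weight(values, corrs))
```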
In a possible embodiment, when determining the rhythm type of the rhythm data based on each fused feature vector and its corresponding classification weight, the identifying module 1105 is specifically configured to:
respectively determining classification weights corresponding to the fusion feature vectors and fusion feature weight intervals to which the fusion feature vectors belong;
and determining the rhythm type of the rhythm data based on the obtained fusion characteristic weight intervals and the corresponding relation between the preset fusion characteristic weight intervals and the rhythm type.
Based on the same technical concept, the embodiment of the present application further provides an electronic device, and the electronic device can implement the flow of the method for classifying cardiac rhythm data provided by the above embodiment of the present application. In one embodiment, the electronic device may be a server, a terminal device, or other electronic device. As shown in fig. 12, the electronic device may include:
at least one processor 1201 and a memory 1202 connected to the at least one processor 1201. In this embodiment, the specific connection medium between the processor 1201 and the memory 1202 is not limited; fig. 12 takes the case in which the processor 1201 and the memory 1202 are connected by a bus 1200 as an example. The bus 1200 is shown by a thick line in fig. 12; the connection manner between other components is merely illustrative and is not limited thereto. The bus 1200 may be divided into an address bus, a data bus, a control bus, and so on; for ease of illustration, only one thick line is shown in fig. 12, but this does not mean that there is only one bus or only one type of bus. Alternatively, the processor 1201 may also be referred to as a controller; the name is not limited.
In an embodiment of the present application, the memory 1202 stores instructions executable by the at least one processor 1201, and the at least one processor 1201 may perform one of the cardiac rhythm data classification methods discussed above by executing the instructions stored in the memory 1202. The processor 1201 may implement the functions of the respective modules in the apparatus shown in fig. 11.
The processor 1201 is a control center of the apparatus, and may connect various parts of the entire control device by using various interfaces and lines, and perform various functions and process data of the apparatus by operating or executing instructions stored in the memory 1202 and calling data stored in the memory 1202, thereby performing overall monitoring of the apparatus.
In one possible design, the processor 1201 may include one or more processing units, and the processor 1201 may integrate an application processor, which primarily handles operating systems, user interfaces, application programs, and the like, and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 1201. In some embodiments, the processor 1201 and the memory 1202 may be implemented on the same chip, or in some embodiments, they may be implemented separately on separate chips.
The processor 1201 may be a general-purpose processor, such as a CPU, digital signal processor, application specific integrated circuit, field programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or the like, that may implement or perform the methods, steps, and logic blocks disclosed in embodiments of the present application. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of a method for classifying cardiac rhythm data disclosed in connection with the embodiments of the present application may be directly implemented by a hardware processor, or may be implemented by a combination of hardware and software modules in a processor.
The memory 1202, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules. The memory 1202 may include at least one type of storage medium, for example a flash memory, a hard disk, a multimedia card, a card-type memory, a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Programmable Read-Only Memory (PROM), a Read-Only Memory (ROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a magnetic memory, a magnetic disk, an optical disk, and so on. The memory 1202 may also be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 1202 in the embodiments of the present application may also be circuitry or any other device capable of performing a storage function, for storing program instructions and/or data.
The processor 1201 is programmed to incorporate code corresponding to a method for classifying cardiac rhythm data as described in the previous embodiments into the chip, so that the chip can perform the steps of a method for classifying cardiac rhythm data as shown in fig. 3. How the processor 1201 is programmed is well known to those skilled in the art and will not be described in detail herein.
Based on the same inventive concept, the present application also provides a storage medium storing computer instructions, which when executed on a computer, cause the computer to perform a method for classifying cardiac rhythm data as discussed above.
In some possible embodiments, the present application provides that the various aspects of a method of classifying cardiac rhythm data may also be embodied in the form of a program product comprising program code for causing the control apparatus to perform the steps of a method of classifying cardiac rhythm data according to various exemplary embodiments of the present application described above in this specification, when the program product is run on an apparatus.
It should be noted that although several units or sub-units of the apparatus are mentioned in the above detailed description, such division is merely exemplary and not mandatory. Indeed, according to the embodiments of the present application, the features and functions of two or more units described above may be embodied in one unit. Conversely, the features and functions of one unit described above may be further divided and embodied by a plurality of units.
Further, while the operations of the methods of the present application are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a server, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++, or the like, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user computing device, partly on the user equipment, as a stand-alone software package, partly on the user computing device and partly on a remote computing device, or entirely on the remote computing device or server.
In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.
Claims (11)
1. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements a method for classifying cardiac rhythm data comprising:
acquiring rhythm data of a target object in a set time range, and performing feature extraction on waveform features of the rhythm data to obtain a corresponding original feature vector set;
respectively performing feature compression on each original feature vector contained in the original feature vector set according to two preset vector element sampling modes to obtain a corresponding first feature vector set and a corresponding second feature vector set;
fusing the original feature vector set, the first feature vector set and the second feature vector set to obtain corresponding fused feature vector sets;
determining the classification weight of each fused feature vector based on the element correlation degree between the vector elements contained in each fused feature vector in the fused feature vector set;
and determining the rhythm type of the rhythm data based on the fusion characteristic vectors and the classification weights corresponding to the fusion characteristic vectors.
2. The electronic device according to claim 1, wherein the performing feature compression on each original feature vector included in the original feature vector set according to two preset vector element sampling manners to obtain a corresponding first feature vector set and a second feature vector set comprises:
based on the two vector element sampling modes, performing global average pooling on each original feature vector to obtain a first semantic information set, and performing global maximum pooling on each original feature vector to obtain a second semantic information set;
generating the first set of feature vectors based on the first set of semantic information, and generating the second set of feature vectors based on the second set of semantic information.
3. The electronic device of claim 1, wherein determining the element correlation degree between the vector elements contained in each fused feature vector in the fused feature vector set comprises:
for each fused feature vector, respectively performing the following operations:
acquiring each vector element contained in one fused feature vector, and performing semantic extraction on the element name of each vector element respectively to obtain semantic information of each element name;
for each element name, respectively performing the following operations:
respectively comparing semantic similarity of semantic information of one element name with semantic information of other element names to obtain at least one semantic similarity;
and respectively determining the element correlation degree between the vector element corresponding to the element name and other vector elements based on the obtained at least one semantic similarity.
4. The electronic device of any of claims 1-3, wherein determining the classification weight for each respective fused feature vector in the set of fused feature vectors based on an element correlation between vector elements included in the respective fused feature vector comprises:
for each fused feature vector, respectively performing the following operations:
respectively obtaining each vector element contained in one fusion feature vector and at least one element correlation degree corresponding to each vector element; each element relevance characterizes: a degree of association between the respective vector element and one of the other vector elements;
respectively obtaining sub-classification weights respectively corresponding to the vector elements based on at least one element correlation degree respectively corresponding to the vector elements;
and obtaining the classification weight of the fusion feature vector based on the obtained sub-classification weights and the respective element values of the vector elements.
5. The electronic device of any one of claims 1-3, wherein the determining a rhythm class of the rhythm data based on the respective fused feature vectors and their respective corresponding classification weights comprises:
respectively determining the classification weight corresponding to each fusion feature vector and the fusion feature weight interval to which each fusion feature vector belongs;
and determining the rhythm type of the rhythm data based on the obtained fusion characteristic weight intervals and the corresponding relation between the preset fusion characteristic weight intervals and the rhythm type.
6. A cardiac rhythm data classification apparatus, comprising:
the acquisition module is used for acquiring the heart rhythm data of a target object within a set time range, and extracting the characteristics of the waveform characteristics of the heart rhythm data to obtain a corresponding original characteristic vector set;
the compression module is used for performing feature compression on each original feature vector contained in the original feature vector set according to two preset vector element sampling modes to obtain a corresponding first feature vector set and a corresponding second feature vector set;
the fusion module is used for fusing the original feature vector set, the first feature vector set and the second feature vector set to obtain a corresponding fusion feature vector set;
the configuration module is used for determining the classification weight of each fused feature vector based on the element correlation degree between the vector elements contained in each fused feature vector in the fused feature vector set;
and the identification module is used for determining the rhythm type of the rhythm data based on each fusion feature vector and the corresponding classification weight.
7. The apparatus according to claim 6, wherein when the feature compression is performed on each original feature vector included in the original feature vector set according to two preset vector element sampling manners, respectively, to obtain a corresponding first feature vector set and a corresponding second feature vector set, the compression module is specifically configured to:
based on the two vector element sampling modes, performing global average pooling on each original feature vector to obtain a first semantic information set, and performing global maximum pooling on each original feature vector to obtain a second semantic information set;
generating the first feature vector set based on the first semantic information set, and generating the second feature vector set based on the second semantic information set.
8. The apparatus according to claim 6, wherein, when determining the element correlation degree between the vector elements contained in each fused feature vector in the fused feature vector set, the configuration module is specifically configured to:
for each fused feature vector, respectively performing the following operations:
acquiring each vector element contained in one fused feature vector, and performing semantic extraction on the element name of each vector element respectively to obtain semantic information of each element name;
for each element name, the following operations are respectively executed:
respectively comparing semantic similarity of semantic information of one element name with semantic information of other element names to obtain at least one semantic similarity;
and respectively determining the element correlation degree between the vector element corresponding to the element name and other vector elements based on the obtained at least one semantic similarity.
9. The apparatus according to any of claims 6-8, wherein, in said determining the classification weight of each respective fused feature vector based on the element correlation between the vector elements included in each respective fused feature vector in the set of fused feature vectors, the configuration module is specifically configured to:
for each fused feature vector, respectively performing the following operations:
respectively obtaining each vector element contained in one fusion feature vector and at least one element correlation degree corresponding to each vector element; each element relevance characterizes: a degree of association between the respective vector element and one of the other vector elements;
respectively obtaining sub-classification weights corresponding to the vector elements based on at least one element correlation degree corresponding to the vector elements;
and obtaining the classification weight of the fusion feature vector based on the obtained sub-classification weights and the respective element values of the vector elements.
10. The apparatus of any one of claims 6-8, wherein, in the determining the rhythm category of the rhythm data based on the respective fused feature vectors and their respective corresponding classification weights, the identification module is specifically configured to:
respectively determining the classification weight corresponding to each fusion feature vector and the fusion feature weight interval to which each fusion feature vector belongs;
and determining the rhythm type of the rhythm data based on the obtained fusion characteristic weight intervals and the corresponding relation between the preset fusion characteristic weight intervals and the rhythm type.
11. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method for classifying cardiac rhythm data performed by the electronic device according to any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210818321.7A CN114886404B (en) | 2022-07-13 | 2022-07-13 | Electronic equipment, device and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210818321.7A CN114886404B (en) | 2022-07-13 | 2022-07-13 | Electronic equipment, device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114886404A CN114886404A (en) | 2022-08-12 |
CN114886404B true CN114886404B (en) | 2022-10-28 |
Family
ID=82729538
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210818321.7A Active CN114886404B (en) | 2022-07-13 | 2022-07-13 | Electronic equipment, device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114886404B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115063795B (en) * | 2022-08-17 | 2023-01-24 | 西南民族大学 | Urinary sediment classification detection method and device, electronic equipment and storage medium |
CN115211866B (en) * | 2022-09-07 | 2022-12-27 | 西南民族大学 | Arrhythmia classification method and system and electronic equipment |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5967995A (en) * | 1998-04-28 | 1999-10-19 | University Of Pittsburgh Of The Commonwealth System Of Higher Education | System for prediction of life-threatening cardiac arrhythmias |
EP1416425A1 (en) * | 2002-11-04 | 2004-05-06 | Samsung Electronics Co., Ltd. | System and method for detecting a face |
CN104921722A (en) * | 2015-07-15 | 2015-09-23 | 西南民族大学 | Double-lead combined with electrocardiogram QRS wave detection method |
CN106377247A (en) * | 2016-09-10 | 2017-02-08 | 天津大学 | Feature selection-based arrhythmia classification method |
CN108470156A (en) * | 2018-03-06 | 2018-08-31 | 南京邮电大学 | A kind of cardiechema signals classifying identification method |
CN109271912A (en) * | 2018-09-05 | 2019-01-25 | 中国电子科技集团公司第三研究所 | Video classification methods, device, electronic equipment and storage medium |
CN110751131A (en) * | 2019-11-16 | 2020-02-04 | 李汭傧 | Arrhythmia detection device |
CN111460956A (en) * | 2020-03-26 | 2020-07-28 | 山东科技大学 | Unbalanced electrocardiogram sample classification method based on data enhancement and loss weighting |
CN111528832A (en) * | 2020-05-28 | 2020-08-14 | 四川大学华西医院 | Arrhythmia classification method and validity verification method thereof |
CN111626114A (en) * | 2020-04-20 | 2020-09-04 | 哈尔滨工业大学 | Electrocardiosignal arrhythmia classification system based on convolutional neural network |
CN111755120A (en) * | 2020-06-29 | 2020-10-09 | 西南民族大学 | Cognitive impairment prediction method based on edge intelligence and multimode perception |
CN113011504A (en) * | 2021-03-23 | 2021-06-22 | 华南理工大学 | Virtual reality scene emotion recognition method based on visual angle weight and feature fusion |
CN113408086A (en) * | 2021-05-24 | 2021-09-17 | 中国能源建设集团山西省电力勘测设计院有限公司 | Analytical calculation method for self-inductance value of air-core reactor |
CN114010202A (en) * | 2021-09-18 | 2022-02-08 | 苏州无双医疗设备有限公司 | Method for classifying heart rhythms of implantable heart rhythm management device and distinguishing ventricular rate from supraventricular rate |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7917196B2 (en) * | 2005-05-09 | 2011-03-29 | Cardiac Pacemakers, Inc. | Arrhythmia discrimination using electrocardiograms sensed from multiple implanted electrodes |
US7430446B2 (en) * | 2005-01-20 | 2008-09-30 | Cardiac Pacemakers, Inc. | Methods and apparatuses for cardiac arrhythmia classification using morphology stability |
US7697766B2 (en) * | 2005-03-17 | 2010-04-13 | Delphi Technologies, Inc. | System and method to determine awareness |
CN104055522A (en) * | 2014-07-01 | 2014-09-24 | 清华大学深圳研究生院 | Electrocardiosignal identity recognition method under arrhythmia condition |
TWI732489B (en) * | 2020-03-17 | 2021-07-01 | 國防醫學院 | Method and system for quickly detecting abnormal concentration of potassium ion in blood from electrocardiogram |
CN111557659B (en) * | 2020-05-22 | 2023-04-28 | 郑州大学 | Arrhythmia classification method based on multi-feature fusion and Stacking-DWKNN algorithm |
US11730415B2 (en) * | 2020-06-10 | 2023-08-22 | Implicity | Method and system for analyzing heart rhythms |
CN111657926B (en) * | 2020-07-08 | 2021-04-23 | 中国科学技术大学 | Arrhythmia classification method based on multi-lead information fusion |
CN114652322B (en) * | 2022-03-24 | 2024-10-11 | 中南大学 | Electrocardiosignal classification method and system based on multi-domain feature learning |
Non-Patent Citations (18)
Title |
---|
A deep residual inception network with channel attention modules for multi-label cardiac abnormality detection from reduced-lead ECG; Srivastava, Apoorva; Pratiher, Sawon; Alam, Sazedul; Physiological Measurement; 2022-06-30; Vol. 43, No. 6; full text *
Automated ECG classification using a non-local convolutional blo.; Wang, JK *
Hybrid feature selection using component co-occurrence based feature relevance measurement; Wang, Youwei; Feng, Lizhou; Expert Systems with Applications; 2018-07-15; Vol. 102; full text *
Inter-patient automated arrhythmia classification: A new approach of weight capsule and sequence to sequence combination; Li, Xiangkui; Zhang, Jian; Chen, Wei; Computer Methods and Programs in Biomedicine; 2022-02-28; Vol. 214; full text *
Qiao, X; Zhang, H; Computer Methods and Programs in Biomedicine; 2021; Vol. 203 *
A novel dual-lead arrhythmia classification system based on CNN and LSTM (一种基于CNN和LSTM的新型双导联心律失常分类系统); 汪虹; 万方; 2020-08-03; full text *
An indoor depth estimation method based on an improved convolutional neural network (一种改进的卷积神经网络的室内深度估计方法); 梁煜, 张金铭, 张为; 天津大学学报; 2020-06-05; Vol. 53, No. 8; full text *
Research on ECG-based arrhythmia feature extraction and classification algorithms (基于ECG的心律失常特征提取及分类算法的研究); 王艳; 万方; 2018-12-18; full text *
Research on text classification algorithms based on support vector machines (基于支持向量机的文本分类算法研究); 秦玉平; 万方; 2009-04-30; full text *
Feature representation and classification of ECG data based on deep-learning quantized weights (基于深度学习量化权重的心电数据特征表达与分类); 张亚彬; 万方; 2021-10-08; full text *
An arrhythmia classification model based on knowledge distillation (基于知识蒸馏的心律失常分类模型); 张逸, 周莉, 陈杰; 电子设计工程; 2022-05-12; Vol. 30, No. 8; full text *
A single-image rain removal network with a multi-scale encoder-decoder structure (多尺度编码-解码结构的单幅图像去雨网络); 张学锋, 李金晶, 储岳中, 汤亚玲; 华中科技大学学报(自然科学版); 2021-04-25; Vol. 49, No. 7; full text *
Research on the application of cyclic spectrum analysis to arrhythmia classification (循环谱分析在心律失常分类中的应用研究); 褚晶辉, 卢莉莉, 吕卫, 李喆; 计算机科学与探索; 2017-12-31; Vol. 11, No. 11; full text *
Analysis of early electrocardiographic features of fulminant myocarditis in adults (成人暴发性心肌炎早期心电图特点分析); 兰永昊, 郑梅, 马旃; 国际心血管病杂志; 2018-12-31; Vol. 45, No. 5; full text *
Differentiation between non-conducted atrial premature bigeminy and slow rhythms (房性期前收缩未下传二联律与慢节律的鉴别); 李秀兰, 郝玉红, 向伟; 临床荟萃; 2003-12-31; No. 11; full text *
A voltage sag type calculation method considering the Pearson correlation coefficient and Chebyshev distance (考虑皮尔逊相关系数和切比雪夫距离的电压暂降类型计算方法); 杜培, 林焱, 张伟骏; 电工电能新技术; 2021-04-06; Vol. 40, No. 3; full text *
Research on recognition of common arrhythmias by fusing traditional and deep features of single-lead electrocardiograms (融合单导联心电图传统与深度特征的常见心律失常识别方法研究); 李全池, 黄鑫, 罗成思, 黄惠泉, 饶妮妮; 中国生物医学工程学报; 2022-02-20; Vol. 41, No. 1; full text *
Recognition of ECG rhythm and morphology abnormalities combining deep learning and decision mechanisms (融合深度学习和决策机制的心电节律和形态异常识别); 沈腾飞; 万方; 2021-09-07; full text *
Also Published As
Publication number | Publication date |
---|---|
CN114886404A (en) | 2022-08-12 |
Similar Documents
Publication | Title
---|---
CN114886404B (en) | Electronic equipment, device and storage medium | |
CN111950596B (en) | Training method for neural network and related equipment | |
CN111666477A (en) | Data processing method and device, intelligent equipment and medium | |
CN111931002B (en) | Matching method and related equipment | |
CN113191479B (en) | Method, system, node and storage medium for joint learning | |
WO2021098534A1 (en) | Similarity determining method and device, network training method and device, search method and device, and electronic device and storage medium | |
CN113707323B (en) | Disease prediction method, device, equipment and medium based on machine learning | |
CN115512005A (en) | Data processing method and device | |
CN113627422A (en) | Image classification method and related equipment thereof | |
CN115238909A (en) | Data value evaluation method based on federal learning and related equipment thereof | |
CN112529149A (en) | Data processing method and related device | |
WO2024179485A1 (en) | Image processing method and related device thereof | |
CN113658655B (en) | Physical examination recommendation method, device, storage medium and equipment | |
CN111160049A (en) | Text translation method, device, machine translation system and storage medium | |
WO2024114659A1 (en) | Summary generation method and related device | |
CN113693611A (en) | Machine learning-based electrocardiogram data classification method and device | |
CN117056589A (en) | Article recommendation method and related equipment thereof | |
CN116798623A (en) | Sleep evaluation method, device, equipment and storage medium based on artificial intelligence | |
CN115063795B (en) | Urinary sediment classification detection method and device, electronic equipment and storage medium | |
CN116259311A (en) | Voice processing method and related equipment thereof | |
CN116362301A (en) | Model quantization method and related equipment | |
Yang et al. | Superimposed semantic communication for iot-based real-time ecg monitoring | |
CN113643141B (en) | Method, device, equipment and storage medium for generating interpretation conclusion report | |
CN117746047A (en) | Image processing method and related equipment thereof | |
CN113408539B (en) | Data identification method, device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |