TW202343477A - Method for osa severity detection using recording-based electrocardiography signal - Google Patents
- Publication number
- TW202343477A (application TW111116341A)
- Authority
- TW
- Taiwan
- Prior art keywords
- osa
- ecg
- layer
- model
- input
- Prior art date
Description
The present invention relates to a method for detecting obstructive sleep apnea (OSA), and in particular to a method that takes a whole recording-based electrocardiography (ECG) signal as input and directly detects and outputs an apnea-hypopnea index (AHI) value indicating the severity of OSA.
Please refer to Figure 1. Current deep-learning techniques all take segment-based signals as model input and provide only a two-class recognition model for obstructive sleep apnea (OSA), i.e., normal or abnormal (apnea). An ECG signal 1 is cut into K ECG segment signals 2, each of which is fed into the two-class recognition model 3, yielding a normal/abnormal recognition result 4 for each segment. OSA severity is then computed 5, and the recognition result is displayed 6.
The OSA severity computation in Figure 1 can be illustrated by example. Suppose a person sleeps for 8 hours and the entire 8-hour sleep period is used for detection (8 hours is only an example; the sleep duration is not restricted). Assume: the total duration of one recording is 8 hours; the duration of one segment is T = 60 seconds; K = 480 segments are cut out (total duration = K × T = 28,800 seconds = 8 hours); and the model identifies L = 200 segments as abnormal. The OSA severity index, the apnea-hypopnea index (AHI), is then AHI = L / (K × T / 3600), i.e., the average number of abnormal events per hour. Severity is classified as: normal, AHI < 5; mild, 5 ≤ AHI < 15; moderate, 15 ≤ AHI < 30; severe, AHI ≥ 30. In this example, AHI = 200 / 8 = 25, a moderate case.
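The arithmetic above can be sketched in a few lines of Python. The threshold boundaries follow the severity cut-offs quoted in the text; the function names are illustrative, not part of the patent:

```python
def compute_ahi(abnormal_segments: int, total_segments: int, segment_seconds: float) -> float:
    """Average number of abnormal events per hour: AHI = L / (K * T / 3600)."""
    hours = total_segments * segment_seconds / 3600.0
    return abnormal_segments / hours

def classify_severity(ahi: float) -> str:
    """Map an AHI value to the four severity classes used in the text."""
    if ahi < 5:
        return "normal"
    if ahi < 15:
        return "mild"
    if ahi < 30:
        return "moderate"
    return "severe"

ahi = compute_ahi(abnormal_segments=200, total_segments=480, segment_seconds=60)
print(ahi, classify_severity(ahi))  # 25.0 moderate
```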
This approach has three drawbacks. First, the recognition process is cumbersome. Second, the accuracy of the recognition model is affected by the chosen segment duration. Third, labeling the segment dataset used to train the recognition model is laborious, consuming considerable manpower and time.
The purpose of the present invention is to provide a method for detecting obstructive sleep apnea (OSA) that takes a whole ECG recording as input. The invention is described as follows.
First, an obstructive sleep apnea (OSA) severity detection model is established.
A public electrocardiography (ECG) dataset is used as training data and fed into the OSA severity detection model for training, producing a trained model.
A whole ECG recording is then input into the model, which directly displays the AHI value and the corresponding OSA severity class (normal, mild, moderate, or severe).
Specifically, the whole ECG recording input into the model is processed by a convolutional-neural-network-based feature map extraction layer, a global average pooling layer, a fully connected layer, and an output layer, yielding the AHI value and the corresponding four-class OSA severity result.
1: ECG signal
2: K ECG segment signals
3: two-class OSA recognition model
4: recognition results of the K segments
5: computation of OSA severity
6: display of recognition results
21: OSA severity detection model
22: display of detection results
311, 312, 313, 321, 323, 324, 325, 331, 332, 334, 335: convolutional layers
314: pooling layer
322, 333: addition
340: global average pooling layer
350: fully connected layer
360: output layer
41: public ECG dataset
42: establish OSA severity detection model
43: use ECG training set
44: train model
45: convergence check
46: model training completed
51: subject
52: wearable device
53: display of diagnosis results
Figure 1 is a schematic diagram of the current approach, in which segment-based signal recognition serves as model input and a two-class recognition model is provided for obstructive sleep apnea (OSA).
Figure 2 is a schematic diagram of the present invention, which takes a whole ECG recording as input and directly detects and outputs the AHI value of OSA together with the corresponding severity classification result.
Figure 3 details an embodiment of the architecture of the OSA severity detection model 21 of Figure 2.
Figure 4 is a schematic diagram of how the present invention trains and generates the OSA severity detection model.
Figure 5 is a schematic diagram of a subject wearing a wearable device that measures the ECG signal for OSA diagnosis according to the present invention.
Figure 2 illustrates that the present invention takes a whole ECG recording 1 as input and directly detects and outputs the AHI value of OSA together with the corresponding four-class severity result 22.
The present invention provides an OSA severity detection model 21 that directly outputs the AHI value and the corresponding OSA severity result, i.e., one of the four classes normal, mild, moderate, or severe. A whole ECG recording 1 (e.g., 8 hours, though the duration is not restricted) is input into the model for recognition, and the detection result 22 is displayed directly.
Figure 3 details an embodiment of the architecture of the OSA severity detection model 21 of Figure 2. The architecture comprises four parts: the CNN-based feature map extraction layers 311-335, a global average pooling layer 340, a fully connected layer 350, and an output layer 360. The input to the model of this patent is a whole ECG recording 1; the input duration is unrestricted and the signal may be of any length, whereas the input duration of the existing method (the two-class OSA recognition model) is fixed.
CNN-based feature map extraction layers 311-335: the feature map extraction layers use a convolutional neural network (CNN) to extract feature maps from the input signal. CNNs are the most widely used deep-learning method; their distinguishing strength is that they automatically learn, through model training, feature representations of the input signal, called feature maps, which are then used for the recognition task, effectively improving recognition accuracy. A CNN is composed of convolution layers, activation functions, and pooling layers; by repeatedly connecting these components in series or in parallel across multiple levels, CNNs of widely varying architectures can be constructed. In this embodiment, convolutional layers 311-313 and pooling layer 314 form the first level of feature map extraction; convolutional layer 321, addition 322, and convolutional layers 323, 324, and 325 form the second level; and convolutional layers 331 and 332, addition 333, and convolutional layers 334 and 335 form the third level.
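As a minimal illustration of the basic building block named above, a strided 1-D convolution over a single channel can be sketched in plain Python. This is a toy sketch of the operation only, not the patent's implementation, which applies many kernels per layer across 128 or more channels:

```python
def conv1d(signal, kernel, stride=1):
    """Valid-mode strided 1-D convolution (single input channel, single output channel)."""
    k = len(kernel)
    out = []
    for start in range(0, len(signal) - k + 1, stride):
        window = signal[start:start + k]
        out.append(sum(w * x for w, x in zip(kernel, window)))
    return out

# A size-2 kernel with stride 1 over a short ramp:
print(conv1d([1, 2, 3, 4], kernel=[1, 1], stride=1))  # [3, 5, 7]
# Stride 2 roughly halves the output length, as in downsampling layers:
print(conv1d([1, 2, 3, 4, 5, 6], kernel=[1, 1], stride=2))  # [3, 7, 11]
```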
Global average pooling layer 340: global average pooling computes the mean of each feature map and uses it as the layer's output. This converts inputs of different lengths into outputs of the same length; in other words, it is what allows the input signal of the model of this patent to be of arbitrary length.
Fully connected layer 350: integrates the highly abstract features obtained above and passes them to the output layer 360.
Output layer 360: applies the Rectified Linear Unit (ReLU) activation function to output a value greater than or equal to zero; this value is the AHI.
In this embodiment, all convolutional layers use one-dimensional (1-D) convolution kernels. Convolutional layer 311 uses 32 kernels of size 20, denoted (32, 20). Convolutional layer 312 uses (64, 20) kernels; layer 313 uses (128, 5); layer 321 uses (128, 3); layer 323 uses (128, 3); layer 324 uses (64, 1); layer 325 uses (128, 3); layer 331 uses (128, 3); layer 332 uses (128, 3); layer 334 uses (128, 3); and layer 335 uses (64, 1).
In this embodiment, if the input ECG signal 1 is a 6-hour recording sampled at 100 Hz, i.e., 2,160,000 samples, the convolution in layer 311 transforms it into 32 feature maps of length 108,000, a two-dimensional array denoted 108,000 × 32. Convolutional layer 312 then convolves the output of layer 311 to produce a 5,400 × 64 feature map, which layer 313 convolves into a 1,080 × 128 feature map. Pooling layer 314 applies max pooling with a sliding window of size 2 to the output of layer 313, producing a 540 × 128 feature map.
Next, convolutional layer 321 convolves the output of pooling layer 314 to produce a 540 × 128 feature map, which layer 323 then convolves into another 540 × 128 feature map. The output of pooling layer 314 and the output of layer 323 are summed at addition 322, yielding a fused 540 × 128 feature map. Convolutional layer 324 convolves the output of addition 322 into a 540 × 64 feature map, and layer 325 convolves that into a 540 × 128 feature map.
Convolutional layer 331 then convolves the output of layer 325 to produce a 540 × 128 feature map; the subsequent convolutions in layers 332 and 334 preserve the same 540 × 128 dimensions. The output of layer 325 and the output of layer 334 are summed at addition 333, yielding a fused 540 × 128 feature map. Finally, convolutional layer 335 convolves the output of addition 333 into a 540 × 64 feature map.
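The length reductions in the walkthrough above are consistent with integer division of the signal length by each layer's downsampling factor. The strides are not stated explicitly in the text, so the factors used below (20, 20, and 5 for layers 311-313, and 2 for pooling layer 314) are inferred from the quoted shapes; the sketch only checks the arithmetic:

```python
def downsample_chain(length, factors):
    """Apply successive downsampling factors and record each resulting length."""
    lengths = [length]
    for f in factors:
        length //= f
        lengths.append(length)
    return lengths

# 6 h at 100 Hz = 2,160,000 samples; inferred factors for layers 311-313 and pool 314:
print(downsample_chain(6 * 3600 * 100, [20, 20, 5, 2]))
# [2160000, 108000, 5400, 1080, 540]
```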
In this embodiment, the global average pooling layer 340 then applies global average pooling to the feature maps output by layer 335, obtaining 64 feature-map averages. Notably, the global average pooling layer converts input data of different lengths into outputs of the same length, which is what allows the input signal of the model of this patent to be of arbitrary length. The features output by layer 340 are connected to a fully connected layer 350 with 16 neurons, which in turn connects to an output layer 360 with a single neuron. The output layer 360 applies the ReLU activation function to output a value greater than or equal to zero; this value is the AHI. Finally, the detection display 22 shows the AHI value and the corresponding OSA severity, one of the four classes normal, mild, moderate, or severe.
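The length-independence property of global average pooling described above can be demonstrated with a small plain-Python sketch. The feature values and the 3-channel width are made up for illustration; the embodiment uses 64 channels:

```python
def global_average_pooling(feature_maps):
    """Reduce each feature map (one list per channel) to its mean: one value per channel."""
    return [sum(fm) / len(fm) for fm in feature_maps]

short_input = [[1.0, 3.0], [2.0, 2.0], [0.0, 4.0]]     # 3 channels, length 2
long_input = [[1.0] * 540, [2.0] * 540, [3.0] * 540]   # 3 channels, length 540

# Different input lengths, identical output length (one value per channel):
print(global_average_pooling(short_input))       # [2.0, 2.0, 2.0]
print(len(global_average_pooling(long_input)))   # 3
```

This is why the model can accept recordings of any duration: however long the feature maps coming out of layer 335 are, the pooled vector always has exactly one entry per channel.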
Figure 4 illustrates how the present invention trains and generates the model. First, a published ECG dataset 41 is obtained and the OSA severity detection model is established 42. The ECG training set 43 is fed into the OSA severity detection model for training 44. During training, the model is checked for convergence 45; if the model has converged, training ends, otherwise training continues. The trained model 46 can then be used to perform OSA diagnosis.
Wearable and portable devices that measure ECG signals are increasingly common today, making OSA diagnosis by ECG analysis very convenient and allowing users to self-test at home. Figure 5 shows a subject 51 wearing a wearable device 52 that measures the ECG signal; the acquired ECG signal is input into the OSA severity detection model 21 established by the present invention, OSA diagnosis is performed, and the AHI value together with the corresponding OSA severity (normal, mild, moderate, or severe) is displayed directly as the diagnosis result 53.
The spirit and scope of the present invention are defined by the following claims and are not limited to the embodiments described above.
Claims (3)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
TW111116341A TW202343477A (en) | 2022-04-29 | 2022-04-29 | Method for osa severity detection using recording-based electrocardiography signal |
Publications (1)
Publication Number | Publication Date |
---|---|
TW202343477A true TW202343477A (en) | 2023-11-01 |
Family
ID=89720616