TWI653027B - Algorithmic method for extracting human pulse rate from compressed video data of a human face - Google Patents


Info

Publication number
TWI653027B
TWI653027B TW107120251A
Authority
TW
Taiwan
Prior art keywords
signal
frequency
sub
overlapping
heart rate
Prior art date
Application number
TW107120251A
Other languages
Chinese (zh)
Other versions
TW202000124A (en)
Inventor
林俊良
趙昶辰
陳偉海
Original Assignee
國立中興大學
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 國立中興大學 filed Critical 國立中興大學
Priority to TW107120251A priority Critical patent/TWI653027B/en
Application granted granted Critical
Publication of TWI653027B publication Critical patent/TWI653027B/en
Publication of TW202000124A publication Critical patent/TW202000124A/en


Landscapes

  • Image Analysis (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The present invention comprises a first overlapping segmentation step, a first processing step, a first overlap-add step, a second overlapping segmentation step, a second processing step, and a second overlap-add step. Through these steps, the physiological signal is extracted by a single-channel signal-separation method. The green channel, which is least affected by the compression algorithm, is selected for processing, and the signal is then refined by singular spectrum analysis, two-times-relationship screening, and frequency-mask screening to obtain the final heart rate signal. The invention therefore offers a novel heart rate extraction algorithm for compressed facial video, a wide range of applications, and a single-channel signal-separation method that greatly reduces the amount of video data to be transmitted.

Description

Heart rate extraction method for compressed facial video

The present invention relates to a heart rate extraction method for compressed facial video, and in particular to one that combines a novel heart rate extraction algorithm for compressed facial video, a wide range of applications, and a single-channel signal-separation method that greatly reduces the amount of video data to be transmitted.

In 2008, Wim Verkruysse and colleagues showed that heart rate waveforms can be detected with natural light and a consumer-grade camera, so that physiological information can be analyzed remotely.

In 2010, Ming-Zher Poh and colleagues applied blind source separation to the color signals in video to extract the human heart rate signal.

In 2012, Hao-Yu Wu and colleagues proposed the Eulerian video magnification algorithm, which amplifies the subtle skin-color changes in a video stream so that they become visible to the human eye.

In 2013, Guha Balakrishnan and colleagues showed that the rhythmic beating of the heart causes subtle head motion, and that heart rate can be measured by detecting this motion in video.

In 2015, City University of Hong Kong and Eindhoven University of Technology in the Netherlands each developed measurement techniques that place fewer constraints on a non-exercising subject; for example, a reliable heart rate can still be obtained while the subject turns or shakes the head.

In 2016, Tulyakov and colleagues proposed a self-adaptive matrix completion method that, while estimating the heart rate signal, automatically identifies which regions contain it and uses those regions for heart rate detection.

All of the above methods take uncompressed video as input and use three-channel signal separation to isolate the heart rate signal from the raw signal. When applied to compressed video, none of them can reliably recover accurate heart rate information, because the compression algorithm strongly distorts the heart rate signal.

In 2016, Hanfland and colleagues compressed raw videos and compared them with the originals; their results show that the heart rate signal still exists in compressed video, but its overall quality is greatly degraded.

In 2017, McDuff and colleagues compressed raw video to various bit rates with two codecs (x264 and x265) and showed that video compression markedly lowers the signal-to-noise ratio of the heart rate signal.

In addition, current remote photoplethysmography (rPPG) methods operate on uncompressed video, i.e. the heart rate information hidden in the face is extracted from uncompressed image data. One negative consequence is storage: the data volume of uncompressed video is enormous; for example, one minute of 640*480 video at 30 fps requires about 1.7 GB of storage, and such demands inevitably waste resources. Another problem is that uncompressed video simply cannot be transmitted over long distances, which severely limits the applicability of rPPG technology: current network capacity cannot support real-time transmission of uncompressed video, so existing rPPG techniques cannot be used where real-time long-distance video transmission is required.
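The storage figure above can be sanity-checked with a quick calculation (a sketch only; it assumes 8-bit RGB frames with no container overhead):

```python
# Rough size of one minute of uncompressed 640x480, 30 fps, 8-bit RGB video.
width, height = 640, 480
bytes_per_pixel = 3                      # one byte each for R, G, B
fps, seconds = 30, 60

total_bytes = width * height * bytes_per_pixel * fps * seconds
print(f"{total_bytes / 1e9:.2f} GB")     # ~1.66 GB, on the order of the ~1.7 GB cited above
```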

In short, long-distance video transmission today almost always involves compression to reduce the amount of data transmitted. The four most common codecs are x264, x265, vp8, and vp9. From compressed video, however, existing methods can hardly obtain accurate heart rate information.

In view of this, a technique that overcomes the above shortcomings must be developed.

The object of the present invention is to provide a heart rate extraction method for compressed facial video that combines a novel heart rate extraction algorithm for compressed facial video, a wide range of applications, and a single-channel signal-separation method that greatly reduces the amount of video data to be transmitted. In particular, the problem addressed by the present invention is that uncompressed video data cannot be transmitted over long distances.

The technical means for solving the above problem is a heart rate extraction method for compressed facial video comprising the following steps: 1. a first overlapping segmentation step; 2. a first processing step, comprising [a] a preprocessing step, [b] a band-pass filtering step, [c] a first singular spectrum analysis step, [d] a two-times-relationship screening step, and [e] a reconstruction step; 3. a first overlap-add step; 4. a second overlapping segmentation step; 5. a second processing step, comprising [f] a frequency-mask construction step, [g] a second singular spectrum analysis step, and [h] a frequency-mask screening step; 6. a second overlap-add step.

The invention is described in detail below with reference to the following embodiments and the drawings:

10‧‧‧Video device
20‧‧‧Network connection device
S1‧‧‧First overlapping segmentation step
S2‧‧‧First processing step
S21‧‧‧Preprocessing step
S22‧‧‧Band-pass filtering step
S23‧‧‧First singular spectrum analysis step
S24‧‧‧Two-times-relationship screening step
S25‧‧‧Reconstruction step
S3‧‧‧First overlap-add step
S4‧‧‧Second overlapping segmentation step
S5‧‧‧Second processing step
S51‧‧‧Frequency-mask construction step
S52‧‧‧Second singular spectrum analysis step
S53‧‧‧Frequency-mask screening step
S6‧‧‧Second overlap-add step
M‧‧‧Compressed facial video
M1‧‧‧Short video segment
T1, T2‧‧‧Time lengths
T3‧‧‧Step time
K‧‧‧Original frame
P‧‧‧Face region
G0(t)‧‧‧Original signal
G1(t)‧‧‧First signal
G1m‧‧‧First subsequence
G1f‧‧‧First main frequency
G2(t)‧‧‧Second signal
G22‧‧‧Second signal after overlap-add
G3(t)‧‧‧Third signal
G3j‧‧‧Center frequency
G3ju‧‧‧Upper pass frequency
G3jd‧‧‧Lower pass frequency
G3m‧‧‧Second subsequence
G3f‧‧‧Second main frequency
S34‧‧‧Frequency-mask screening step
G4(t)‧‧‧Fourth signal
G44‧‧‧Fourth signal after overlap-add
L1, LA‧‧‧First curve
L2, LB‧‧‧Second curve
L3, LC‧‧‧Third curve
L4, LD‧‧‧Fourth curve

Fig. 1 is a flow chart of the method of the present invention
Fig. 2 is a schematic diagram of the present invention
Fig. 3 is a schematic diagram of the first overlapping segmentation process of the present invention
Fig. 4 is a schematic diagram showing that each short video segment of the present invention contains N original frames
Figs. 5A and 5B are schematic diagrams of face-region tracking according to the present invention
Fig. 6 is a schematic diagram of the original signal of the present invention
Figs. 7 and 8 are schematic diagrams of the original signal of the present invention before and after band-pass filtering, respectively
Fig. 9 is a schematic diagram of the first signal of the present invention
Fig. 10 is a schematic diagram of the first singular spectrum analysis step of the present invention
Fig. 11 is a schematic diagram of the second signal of the present invention
Fig. 12 is a schematic diagram of the first overlap-add process of the present invention
Fig. 13 is a schematic diagram of the second overlapping segmentation process of the present invention
Fig. 14 is a schematic diagram of the frequency-mask construction process of the present invention
Fig. 15 is a schematic diagram of the second singular spectrum analysis process of the present invention
Fig. 16 is a schematic diagram of the second overlap-add process of the present invention
Fig. 17A shows a static video of the present invention after the band-pass filtering step
Fig. 17B shows the result of Fig. 17A after the first singular spectrum analysis and two-times-relationship screening steps
Fig. 17C shows the result of Fig. 17B after the second singular spectrum analysis and frequency-mask screening steps
Fig. 17D is a comparison of Figs. 17A, 17B, and 17C
Fig. 18A shows a dynamic video of the present invention after the band-pass filtering step
Fig. 18B shows the result of Fig. 18A after the first singular spectrum analysis and two-times-relationship screening steps
Fig. 18C shows the result of Fig. 18B after the second singular spectrum analysis and frequency-mask screening steps
Fig. 18D is a comparison of the final heart rate signal extracted by the present invention after Figs. 18A, 18B, and 18C

Referring to Figs. 1 and 2, the present invention is a heart rate extraction method for compressed facial video, comprising the following steps:

1. First overlapping segmentation step S1: referring to Fig. 3, a compressed facial video M produced by video compression is obtained; its duration is T1. The video M is segmented, with overlap, into a plurality of short segments M1, each of duration T2. Overlapping segmentation is defined as follows: starting from the beginning of the compressed facial video M, the first segment M1 is extracted; after every step time T3, another segment M1 is extracted; this is repeated until the end of the video M, yielding a plurality of segments M1. For example, if the compressed facial video M is 600 seconds long (the duration T1), each segment M1 is 3 seconds long (the duration T2), and the step time T3 is 1.5 seconds, the video is segmented into 399 overlapping segments M1.
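The overlapping segmentation can be sketched as follows (illustrative only; `overlap_cut` is a hypothetical helper and assumes the video has already been decoded into a frame sequence):

```python
def overlap_cut(frames, fps, window_s=3.0, step_s=1.5):
    """Cut a frame sequence into overlapping windows of window_s seconds,
    advancing by step_s seconds (e.g. a 600 s clip at these defaults -> 399 windows)."""
    win = int(round(window_s * fps))
    step = int(round(step_s * fps))
    return [frames[start:start + win]
            for start in range(0, len(frames) - win + 1, step)]
```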

2. First processing step S2: the following steps are carried out for each segment M1 in turn:

[a] Preprocessing step S21: each segment M1 contains N original frames K (see Fig. 4). Face-region tracking (see Figs. 5A and 5B; this is a known technique) is applied to each original frame K to obtain a face region P of X by Y pixels. The green values of the X-by-Y pixels are averaged to obtain a scalar, defined as the per-frame average green value. Repeating this for all frames gives N average green values, from which an original signal G0(t), t = 1 to N, is formed (see Fig. 6; the vertical axis GV is the average green value, GV standing for Green Value, and G0(t) is the signal curve formed by the N per-frame average green values over time).
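A sketch of the per-frame averaging in step [a]; the face detector itself is not specified by the patent, so `face_boxes` is assumed to come from any tracker that returns one (x, y, w, h) box per frame:

```python
import numpy as np

def green_trace(frames, face_boxes):
    """Build G0(t): the mean green value of the tracked face region in each frame."""
    g0 = []
    for frame, (x, y, w, h) in zip(frames, face_boxes):
        roi_green = frame[y:y + h, x:x + w, 1]   # channel 1 is green in both RGB and BGR
        g0.append(float(roi_green.mean()))
    return np.asarray(g0)
```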

[b] Band-pass filtering step S22: the original signal G0(t) is band-pass filtered, retaining frequencies between 0.8 Hz and 2.0 Hz (see Figs. 7 and 8), to obtain a first signal G1(t), t = 1 to N.
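One possible realisation of the 0.8-2.0 Hz band-pass (the patent does not prescribe a filter design; a zero-phase Butterworth filter is assumed here):

```python
from scipy.signal import butter, filtfilt

def bandpass(g0, fps, low=0.8, high=2.0, order=4):
    """Keep only the 0.8-2.0 Hz (48-120 bpm) band of the green trace G0(t)."""
    nyq = fps / 2.0
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, g0)            # forward-backward filtering avoids phase shift
```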

[c] First singular spectrum analysis (SSA) step S23: the first signal G1(t) is decomposed into a plurality of first subsequences G1m (see Fig. 10), and a fast Fourier transform is applied to every first subsequence G1m to obtain its spectrum; the frequency with the largest amplitude in the spectrum of each first subsequence G1m is taken as its first main frequency G1f. Singular spectrum analysis is a known technique and can be carried out with commonly available software (for example MATLAB), so its details are not repeated here.
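Assuming an SSA routine such as the `ssa_decompose` sketch given after the SSA summary near the end of this description, the first main frequency of each subsequence can be read off its FFT as follows (illustrative only):

```python
import numpy as np

def main_frequency(seq, fps):
    """Return the frequency (Hz) with the largest FFT magnitude in a subsequence."""
    spectrum = np.abs(np.fft.rfft(seq))
    spectrum[0] = 0.0                        # ignore the DC term
    freqs = np.fft.rfftfreq(len(seq), d=1.0 / fps)
    return freqs[int(np.argmax(spectrum))]
```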

[d] Two-times-relationship screening step S24: the first subsequences G1m are compared pairwise. If the two first main frequencies G1f of a pair are in a two-times relationship, both are retained; otherwise both are discarded. If no pair satisfies the two-times relationship, all first subsequences G1m are retained. At least two first retained subsequences are thereby obtained.
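The pairwise screening of step [d] might look like the sketch below; the tolerance `tol` is an assumption, since the patent does not state how exact the two-times relationship must be, and `main_frequency` is the helper sketched above:

```python
def double_relation_screen(subseqs, fps, tol=0.05):
    """Keep subsequences whose main frequencies form an approximate 1:2 pair;
    if no pair qualifies, keep every subsequence, as the patent specifies."""
    freqs = [main_frequency(s, fps) for s in subseqs]
    keep = set()
    for i in range(len(subseqs)):
        for j in range(i + 1, len(subseqs)):
            lo, hi = sorted((freqs[i], freqs[j]))
            if lo > 0 and abs(hi / lo - 2.0) < tol:
                keep.update((i, j))
    if not keep:
        keep = set(range(len(subseqs)))      # no two-times pair found: retain all
    return [subseqs[i] for i in sorted(keep)]
```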

[e] Reconstruction step S25: all first retained subsequences from the previous step are recombined to obtain a second signal G2(t), t = 1 to N (see Fig. 11); its horizontal axis is time, its vertical axis is green-value intensity, and it has the duration T2.

3. First overlap-add step S3: referring to Fig. 12, the second signals G2(t) obtained from the first processing step S2 are combined with the standard raised-cosine-window overlap-add technique to obtain an overlap-added second signal G22 of duration T1. The overlap-add technique is defined as overlapping adjacent second signals G2(t) by the step time T3 and adding them. That is, each short segment (e.g. the second signal G2(t)) is multiplied by a raised-cosine (Hanning) window and the windowed results are summed to obtain the processed long signal (the overlap-added second signal G22). In practice this can be done with commonly available software (for example MATLAB), so the details are not repeated here.
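A sketch of the Hanning-window overlap-add; the normalisation by the summed window is an assumption added to keep the amplitude flat, and `step` is the step time T3 expressed in samples:

```python
import numpy as np

def overlap_add(segments, step, total_len):
    """Splice equal-length processed windows back into one signal of total_len samples."""
    win = len(segments[0])
    window = np.hanning(win)                 # raised-cosine (Hanning) window
    out = np.zeros(total_len)
    norm = np.zeros(total_len)
    for k, seg in enumerate(segments):
        start = k * step
        out[start:start + win] += seg * window
        norm[start:start + win] += window
    norm[norm == 0] = 1.0                    # leave uncovered edge samples untouched
    return out / norm
```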

4. Second overlapping segmentation step S4: referring to Fig. 13, the overlap-added second signal G22 of duration T1 is segmented, with overlap, into a plurality of short signals, each of duration T2. Overlapping segmentation is defined as follows: starting from the beginning of the overlap-added second signal G22, one short signal is extracted; after every step time T3, another short signal is extracted; this is repeated until the end of G22, yielding a plurality of short signals, each defined as a third signal G3(t), t = 1 to N.

5. Second processing step S5: the following steps are carried out for each third signal G3(t) in turn:

[f] Frequency-mask construction step S51: the frequency with the largest amplitude in the third signal G3(t) is taken as its center frequency G3j (see Fig. 14); an upper pass frequency G3ju and a lower pass frequency G3jd are set around the center frequency G3j, defining a frequency-mask range.
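Step [f] reduces to a few lines; the half-width `delta_hz` of the mask is an assumed parameter, since the patent leaves the upper and lower pass frequencies to the implementer:

```python
def frequency_mask(g3, fps, delta_hz=0.2):
    """Return (lower, upper) pass frequencies centred on the dominant frequency of G3(t)."""
    centre = main_frequency(g3, fps)         # reuse the FFT helper sketched earlier
    return centre - delta_hz, centre + delta_hz
```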

[g] Second singular spectrum analysis (SSA) step S52: the third signal G3(t) is decomposed into a plurality of second subsequences G3m (see Fig. 15), and a fast Fourier transform is applied to every second subsequence G3m to obtain its spectrum; the frequency with the largest amplitude in the spectrum of each second subsequence G3m is taken as its second main frequency G3f.

[h] Frequency-mask screening step S53: only those second subsequences G3m whose second main frequency G3f lies within the frequency-mask range are retained; at least one second retained subsequence is thus obtained, which becomes a fourth signal G4(t), t = 1 to N, whose horizontal axis is time and whose vertical axis is green-value intensity. If no second retained subsequence is obtained, the fourth signal G4(t) is simply equal to the third signal G3(t).
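Steps [g] and [h] can then be combined as below, reusing the hypothetical `ssa_decompose`, `main_frequency`, and `frequency_mask` helpers; the fall-back to G3(t) when nothing passes the mask follows the patent:

```python
import numpy as np

def mask_screen(g3, fps, delta_hz=0.2):
    """Keep only SSA subsequences of G3(t) whose main frequency lies inside the mask."""
    low, high = frequency_mask(g3, fps, delta_hz)
    kept = [s for s in ssa_decompose(g3)
            if low <= main_frequency(s, fps) <= high]
    if not kept:
        return g3                            # nothing inside the mask: pass G3(t) through
    return np.sum(kept, axis=0)              # G4(t): sum of the retained subsequences
```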

6. Second overlap-add step S6: referring to Fig. 16, the fourth signals G4(t) obtained from the second processing step S5 are combined with the standard raised-cosine-window overlap-add technique to obtain an overlap-added fourth signal G44 of duration T1, which is the final heart rate signal.

In practice, in the step of obtaining the original images (step S1), two video devices 10 and a network connection device 20 are provided; the two video devices 10 communicate by video through the network connection device 20.

The compressed facial video is compressed with one of x264, x265, vp8, or vp9.

Singular spectrum analysis (SSA), mentioned above, is a known technique and is summarized as follows. Input: a vector of length N, x = [x_1, x_2, ..., x_N]^T.

Step 1: Hankel matrix embedding, i.e. x is converted into an L-by-K trajectory matrix X whose (i, j) entry is X_{ij} = x_{i+j-1}, so that each column is a length-L window of x, where K = N − L + 1; L and K (the numbers of rows and columns of the matrix) are chosen by the user.

Step 2: singular value decomposition (SVD) of the matrix X: X = Σ_{i=1}^{r} σ_i u_i v_i^T, where σ_i are the singular values of X, u_i and v_i are the corresponding singular vectors, and r is the rank of X. Letting X_i = σ_i u_i v_i^T be the rank-one submatrices, X = X_1 + X_2 + ... + X_r.

Step 3: reconstruction by diagonal averaging. This step is somewhat involved (see the appendix for details); it converts the matrix sum above into a sum of vectors: x = y_1 + y_2 + ... + y_r.

Each y_i is called a reconstructed component (RC).

Step 4: from the y_i, select the RCs that meet the requirements to obtain the final time series.

For example, in our algorithm x is the time series obtained from the green channel over one time window; if the time window (the duration T2) is 3 s and the frame rate is 30 fps, the length of x is N = 90. L is typically set to half of N, i.e. 45, giving K = N − L + 1 = 46. After the SVD, r is at most the smaller of L and K, i.e. r = 45. In step 4 there are therefore up to r = 45 components y_i; we may consider only the first 20 to speed up the computation.
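The four SSA steps summarised above map onto a short NumPy routine (a compact sketch: Hankel embedding, SVD, and diagonal averaging; the grouping of step four is left to the caller, and the cap of 20 components mirrors the speed-up mentioned above):

```python
import numpy as np

def ssa_decompose(x, L=None, max_components=20):
    """Decompose a 1-D series x into rank-one reconstructed components (RCs)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    L = L or N // 2                          # window length, e.g. 45 for N = 90
    K = N - L + 1                            # e.g. 46
    X = np.column_stack([x[i:i + L] for i in range(K)])   # L x K trajectory (Hankel) matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    rcs = []
    for i in range(min(len(s), max_components)):
        Xi = s[i] * np.outer(U[:, i], Vt[i])  # rank-one term sigma_i * u_i * v_i^T
        # Diagonal averaging: average each anti-diagonal of Xi back into one sample of y_i.
        rc = np.array([Xi[::-1, :].diagonal(k).mean() for k in range(-L + 1, K)])
        rcs.append(rc)
    return rcs
```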

The key point of the present invention is that it fully accounts for the effect of compression on physiological signals in video and extracts the physiological signal with a single-channel signal-separation method. The channel least affected by the compression algorithm (the G channel) is selected for processing; that is, only the green values of the X-by-Y pixels of the face region are averaged. The benefit is that the effect of the compression algorithm on the heart rate signal is largely avoided, so the heart rate signal can be extracted accurately and stably from compressed video. In addition, singular spectrum analysis (SSA) exploits the frequency structure of the heart rate signal to identify it reliably within a noisy mixed signal, and the frequency mask then filters out further noise to yield a more accurate heart rate signal. By extracting the heart rate signal from the mixed signal with these three techniques, the present invention largely guarantees the accuracy of the heart rate signal and effectively avoids the influence of the compression algorithm on its estimation.

For the experimental results of the present invention, see Figs. 17A to 18D.

For a static video (compressed with vp8 at a bit rate of 100 kb/s), Fig. 17A shows the frequency-domain waveform after processing only up to the band-pass filtering step (filter); Fig. 17B shows the result after additionally applying the first singular spectrum analysis and two-times-relationship screening (filter+SSA); and Fig. 17C shows the result after further applying the second singular spectrum analysis and frequency-mask screening (filter+SSA+refine, i.e. the present method). Fig. 17D compares these three results with the actual heart rate signal in the time domain (the first curve L1, second curve L2, third curve L3, and fourth curve L4 represent the actual heart rate, band-pass filtering, band-pass filtering+SSA, and band-pass filtering+SSA+frequency mask, respectively); the result of the present method is closest to the actual heart rate signal.

Likewise, for a dynamic video (compressed with x264 at a bit rate of 584 kb/s), Fig. 18A shows the frequency-domain waveform after processing only up to the band-pass filtering step (filter); Fig. 18B shows the result after additionally applying the first singular spectrum analysis and two-times-relationship screening (filter+SSA); and Fig. 18C shows the result after further applying the second singular spectrum analysis and frequency-mask screening (filter+SSA+refine, i.e. the present method). Fig. 18D compares these three results with the actual heart rate signal in the time domain (the first curve LA, second curve LB, third curve LC, and fourth curve LD represent the actual heart rate, band-pass filtering, band-pass filtering+SSA, and band-pass filtering+SSA+frequency mask, respectively); again, the result of the present method is closest to the actual heart rate signal.

The advantages and effects of the present invention are as follows:

[1] The heart rate extraction algorithm for compressed facial video is novel. The invention uses a distinctive combination of singular spectrum analysis, two-times-relationship screening, and frequency-mask screening to extract the human heart rate signal from compressed video data, a technical approach not seen before. The heart rate extraction algorithm for compressed facial video of the present invention is therefore novel.

[2] Wide range of applications. The invention can be used wherever video compression is required; for example, in telemedicine, a patient's video data is compressed and transmitted to a hospital for further analysis, and in mobile applications, video shot by the user is transmitted over a wireless network to the cloud for heart rate measurement and analysis. The technique can thus be applied to telemedicine, home care, and physical-fitness training, and in particular improves the home-care capabilities of telemedicine. The range of applications is therefore wide.

[3] The single-channel signal-separation method greatly reduces the amount of video data to be transmitted. The invention extracts the physiological signal with a single-channel signal-separation method and selects the channel least affected by the compression algorithm (the G channel) for processing; that is, only the green values of the X-by-Y pixels of the face region are averaged, which greatly reduces the amount of video data to be transmitted.

The above merely illustrates the present invention in detail by way of preferred embodiments; any simple modification or variation of these embodiments remains within the spirit and scope of the present invention.

Claims (3)

1. A heart rate extraction method for a compressed facial video, comprising the following steps:
1) a first overlapping segmentation step: obtaining a compressed facial video produced by video compression and segmenting it, with overlap, into a plurality of short video segments, wherein overlapping segmentation is defined as extracting the first short segment from the beginning of the compressed facial video and extracting another short segment after every step time, repeating until the end of the compressed facial video, thereby obtaining a plurality of short segments;
2) a first processing step, carried out for each short segment in turn:
[a] a preprocessing step: each short segment contains N original frames; face-region tracking is applied to each original frame to obtain a face region of X by Y pixels; the green values of the X-by-Y pixels are averaged to obtain a scalar, defined as the per-frame average green value; repeating this gives N average green values, from which an original signal G0(t), t = 1 to N, is obtained;
[b] a band-pass filtering step: the original signal G0(t) is band-pass filtered, retaining frequencies between 0.8 Hz and 2.0 Hz, to obtain a first signal G1(t), t = 1 to N;
[c] a first singular spectrum analysis step: the first signal G1(t) is decomposed into a plurality of first subsequences, a fast Fourier transform is applied to all first subsequences to obtain their spectra, and the frequency with the largest amplitude in the spectrum of each first subsequence is taken as its first main frequency;
[d] a two-times-relationship screening step: the first subsequences are compared pairwise; if the two first main frequencies of a pair are in a two-times relationship, both are retained, otherwise both are discarded; if no pair satisfies the two-times relationship, all first subsequences are retained; at least two first retained subsequences are thereby obtained;
[e] a reconstruction step: all first retained subsequences from the previous step are recombined into a second signal G2(t), t = 1 to N, whose horizontal axis is time and whose vertical axis is green-value intensity and which has said time length;
3) a first overlap-add step: the second signals G2(t) obtained from the first processing step are combined with the raised-cosine-window overlap-add technique to obtain an overlap-added second signal having said time length, wherein the overlap-add technique is defined as overlapping adjacent second signals G2(t) by said step time and adding them;
4) a second overlapping segmentation step: the overlap-added second signal is segmented, with overlap, into a plurality of short signals each having said time length, wherein overlapping segmentation is defined as extracting one short signal from the beginning of the overlap-added second signal and extracting another short signal after every step time, repeating until the end of the overlap-added second signal, thereby obtaining a plurality of short signals, each defined as a third signal G3(t), t = 1 to N;
5) a second processing step, carried out for each third signal G3(t) in turn:
[f] a frequency-mask construction step: the frequency with the largest amplitude in the third signal G3(t) is taken as its center frequency, and an upper pass frequency and a lower pass frequency are set around the center frequency to obtain a frequency-mask range;
[g] a second singular spectrum analysis step: the third signal G3(t) is decomposed into a plurality of second subsequences, a fast Fourier transform is applied to all second subsequences to obtain their spectra, and the frequency with the largest amplitude in the spectrum of each second subsequence is taken as its second main frequency;
[h] a frequency-mask screening step: only those second subsequences whose second main frequency lies within the frequency-mask range are retained, so that at least one second retained subsequence is obtained, which becomes a fourth signal G4(t), t = 1 to N, whose horizontal axis is time and whose vertical axis is green-value intensity; if no second retained subsequence is obtained, the fourth signal G4(t) is simply equal to the third signal G3(t);
6) a second overlap-add step: the fourth signals G4(t) obtained from the second processing step are combined with the raised-cosine-window overlap-add technique to obtain an overlap-added fourth signal having said time length, which is the final heart rate signal.
2. The heart rate extraction method for a compressed facial video of claim 1, wherein, in the step of obtaining the original images, two video devices and a network connection device are provided, the two video devices communicating by video through the network connection device.
3. The heart rate extraction method for a compressed facial video of claim 1, wherein the compressed facial video is compressed with one of x264, x265, vp8, and vp9.
TW107120251A 2018-06-12 2018-06-12 Algorithmic method for extracting human pulse rate from compressed video data of a human face TWI653027B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW107120251A TWI653027B (en) 2018-06-12 2018-06-12 Algorithmic method for extracting human pulse rate from compressed video data of a human face

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW107120251A TWI653027B (en) 2018-06-12 2018-06-12 Algorithmic method for extracting human pulse rate from compressed video data of a human face

Publications (2)

Publication Number Publication Date
TWI653027B true TWI653027B (en) 2019-03-11
TW202000124A TW202000124A (en) 2020-01-01

Family

ID=66590731

Family Applications (1)

Application Number Title Priority Date Filing Date
TW107120251A TWI653027B (en) 2018-06-12 2018-06-12 Algorithmic method for extracting human pulse rate from compressed video data of a human face

Country Status (1)

Country Link
TW (1) TWI653027B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103054569A (en) 2012-12-20 2013-04-24 Tcl集团股份有限公司 Method, device and handhold device for measuring human body heart rate based on visible image
CN105678780A (en) 2016-01-14 2016-06-15 合肥工业大学智能制造技术研究院 Video heart rate detection method removing interference of ambient light variation
US20170367590A1 (en) 2016-06-24 2017-12-28 Universita' degli Studi di Trento (University of Trento) Self-adaptive matrix completion for heart rate estimation from face videos under realistic conditions


Also Published As

Publication number Publication date
TW202000124A (en) 2020-01-01

Similar Documents

Publication Publication Date Title
Gideon et al. The way to my heart is through contrastive learning: Remote photoplethysmography from unlabelled video
US9855012B2 (en) Method and system for noise cleaning of photoplethysmogram signals
Zhao et al. A novel framework for remote photoplethysmography pulse extraction on compressed videos
RU2568776C2 (en) Methods and systems for providing combination of media data and metadata
CN113408508B (en) Transformer-based non-contact heart rate measurement method
CN110969124A (en) Two-dimensional human body posture estimation method and system based on lightweight multi-branch network
CN109793506A (en) A kind of contactless radial artery Wave shape extracting method
AU2013302623A1 (en) Real-time physiological characteristic detection based on reflected components of light
Chen et al. Eliminating physiological information from facial videos
Wang et al. VitaSi: A real-time contactless vital signs estimation system
TWI653027B (en) Algorithmic method for extracting human pulse rate from compressed video data of a human face
JP7044171B2 (en) Pulse wave calculation device, pulse wave calculation method and pulse wave calculation program
US10492678B2 (en) Image capturing apparatus, image processing apparatus and image processing method for secure processing of biological information
WO2020003910A1 (en) Heartbeat detection device, heartbeat detection method, and program
JP6506957B2 (en) Objective image quality evaluation device and program
Dautov et al. On the effect of face detection on heart rate estimation in videoplethysmography
Javaid et al. Video colour variation detection and motion magnification to observe subtle changes
CN114463784A (en) Multi-person rope skipping analysis method based on video-audio multi-mode deep learning
US7744218B1 (en) Pupil position acquisition system, method therefor, and device containing computer software
Choe et al. Improving video-based resting heart rate estimation: A comparison of two methods
US20170294193A1 (en) Determining when a subject is speaking by analyzing a respiratory signal obtained from a video
CN113693573A (en) Video-based non-contact multi-physiological-parameter monitoring system and method
WO2017051415A1 (en) A system and method for remotely obtaining physiological parameter of a subject
JP6585623B2 (en) Biological information measuring device, biological information measuring method, and biological information measuring program
CN116681700B (en) Method, device and readable storage medium for evaluating heart rate and heart rate variability of user