CN113657345B - A non-contact heart rate variability feature extraction method based on real application scenarios - Google Patents
A non-contact heart rate variability feature extraction method based on real application scenarios
- Publication number
- CN113657345B (application CN202111011120.8A)
- Authority
- CN
- China
- Prior art keywords
- face
- heart rate
- feature extraction
- image
- rate variability
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/08—Feature extraction
- G06F2218/10—Feature extraction by analysing the shape of a waveform, e.g. extracting parameters relating to peaks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/02—Preprocessing
- G06F2218/04—Denoising
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Theoretical Computer Science (AREA)
- Artificial Intelligence (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Measuring And Recording Apparatus For Diagnosis (AREA)
- Measuring Pulse, Heart Rate, Blood Pressure Or Blood Flow (AREA)
Abstract
Description
Technical field
The present invention relates to the technical fields of computer vision and signal processing, and in particular to a non-contact heart rate variability feature extraction method.
Background
Heart rate variability (HRV) is an important indicator for evaluating autonomic nervous activity and the intrinsic dynamics of the heart. Heart rate variability arises from the regulation of heart rate by the autonomic nervous system: the interaction between the sympathetic and parasympathetic nervous systems causes periodic changes in heart rate. HRV is widely used in research on heart disease, mental illness, emotion recognition, and related fields.
HRV features are mainly extracted from electrocardiogram (ECG) signals collected by contact methods. In real application scenarios, obtaining HRV features by contact measurement requires the subject to cooperate actively and wear acquisition equipment, which is inconvenient in practice. In particular, when HRV features are used for emotion recognition, the human contact involved in contact acquisition itself affects the HRV features extracted from the subject.
Non-contact heart rate detection methods mainly include laser Doppler, microwave or millimeter-wave Doppler radar, and thermal imaging. The equipment for these technologies is generally expensive, however, and long-term use can affect the human body, so they are not suitable for widespread practical use. In recent years, imaging photoplethysmography (IPPG), developed from photoplethysmography (PPG), has become able to obtain human heart rate information fairly accurately, which makes non-contact heart rate variability measurement based on this principle possible.
The general IPPG pipeline for extracting HRV features is: capture a video containing a face with a camera, run face detection on the video to delineate the face region, select a specific region within the face as the region of interest (ROI), separate the ROI into color channels, compute the single-channel or multi-channel pixel mean of each frame, and apply signal processing to the differences between these means to obtain the heart rate. In real application scenarios, however, directly applying this pipeline to extract HRV features has two problems: face detection must be performed frame by frame, so extraction is too slow for practical use; and environmental factors such as subject motion and varying illumination degrade the accuracy of HRV feature extraction, which the pipeline above does not handle well.
Summary of the invention
To solve the above problems, the present invention addresses two issues: the slow speed of non-contact HRV feature extraction, and the strong influence of varying illumination and subject motion on its accuracy. Based on the IPPG principle, it combines image processing, signal processing, and feature extraction techniques into a non-contact heart rate variability feature extraction method for real application scenarios whose extracted features are consistent with those obtained by contact measurement.
To increase extraction speed, the proposed method combines face detection and face tracking to obtain the face region: face detection locates the face, which is then tracked; during tracking, the face position is re-located by face detection at fixed intervals to prevent tracking drift, and face correction is continuously applied to offsets caused by motion to prevent incomplete extraction of the face region. This increases speed without harming the accuracy of face-region acquisition. In addition, image capture from the camera and image processing run in two separate threads that share a queue, further improving extraction speed.
To reduce the influence of varying illumination on the extraction results, the invention combines channel separation, adaptive skin detection, and EEMD filtering. The face-region image is first converted to the LUV color space, the L channel reflecting luminance changes is separated out, and the U-channel image reflecting chrominance changes is retained. The face-region image is also converted to the YCrCb color space, where a threshold is determined adaptively from the luminance component Y together with the Cb component under different lighting conditions, and skin detection with this threshold yields the skin portion of the face region. The skin-detection result is ANDed with the U-channel image to obtain the raw heart rate signal, which initially reduces the influence of illumination changes; the EEMD method is then applied to denoise the raw signal, further reducing that influence.
To reduce the influence of subject motion on the extraction results, the invention proposes a peak-point extraction strategy that can correct peaks affected by motion. Signal peaks are first computed; then, for each extracted peak, the slope between adjacent peaks and the distance between peaks are checked against threshold ranges to decide whether the peak extraction was distorted by motion. An abnormal peak is corrected by averaging the positions of the normal peaks preceding it, yielding relatively accurate signal peaks and reducing the impact of motion on feature extraction.
To achieve the above objectives, the proposed acquisition procedure is:
Step 1. Capture images containing a face:
The subject faces the camera, and face images are captured at the camera's fixed frame rate of 30 FPS. At least 30 s of continuous face images must be captured to extract relatively accurate heart rate variability features.
The storage of the captured face images is carried out in a separate sub-thread; that is, image capture and the reading of images for processing run simultaneously in two threads that share an image queue.
Step 2. Obtain the facial region of the image:
The facial region is extracted by combining face detection, using the libfacedetection open-source face detection library, with face tracking, using the KLT (Kanade-Lucas-Tomasi) method: face detection locates the face, which is then tracked; during tracking, the face position is re-located by face detection at a fixed interval of 10 s before tracking continues.
During face tracking, a minimum bounding rectangle is determined from the four vertices of the tracked face region, and the face affected by motion is corrected using the rectangle's center point and deflection angle, preventing incomplete extraction of the facial region.
Step 3. Eulerian magnification:
The Eulerian magnification method is used to amplify skin-color changes in the face region, enhancing the parts of the face image related to physiological signals.
Step 4. Channel separation:
The Eulerian-magnified face image obtained in Step 3 is converted from the RGB color space to the LUV color space so that the L channel, which reflects luminance changes, can be separated out; the U channel, which reflects chrominance changes, is used to extract the raw heart rate signal.
Step 5. Adaptive-threshold skin detection:
Adaptive-threshold skin detection is proposed: the threshold is determined adaptively from the luminance component Y together with the Cb component under different lighting conditions. Pixels within the threshold range are skin pixels and are set to 255 (white); all others are set to 0 (black). The skin-detection image is ANDed with the U channel to remove non-skin areas, yielding a U-channel face image with non-skin areas removed.
Step 6. Source signal extraction:
Within one round of HRV feature extraction, the pixel mean is computed for every U-channel face image (with non-skin areas removed) obtained in Step 5, giving a series of mean values; standardizing this series yields the raw heart rate signal.
Step 7. EEMD denoising:
The EEMD (Ensemble Empirical Mode Decomposition) method is used to denoise the raw heart rate signal, further reducing the influence of differing lighting conditions.
Step 8. Five-point smoothing:
A five-point moving-average smoothing filter is applied to remove the high-frequency noise still present in the signal.
Step 9. Peak-point extraction and correction:
Signal peaks are computed; by setting threshold ranges for the distance and slope between adjacent peaks, peaks affected by motion are identified and corrected, yielding relatively accurate signal peaks.
Step 10. HRV feature extraction:
The corrected peaks obtained in Step 9 are used to compute the RR intervals and R-point times, from which 27 HRV features are extracted, covering time-domain, frequency-domain, and nonlinear features.
The time-domain features include: max, min, mean, median, SDNN, RMSSD, hr-mean, hr-sd, NN40, pNN40, and HRVti.
The frequency-domain features are obtained by spectral analysis using the Lomb-Scargle periodogram and include: aVLF, aLF, aHF, aTotal, pVLF, pLF, pHF, nLF, nHF, LFHF, peakVLF, peakLF, and peakHF.
The nonlinear features include: SD1, SD2, and SD1/SD2.
The advantages and positive effects of the present invention are:
The present invention provides a non-contact heart rate variability feature extraction method for real application scenarios. Specifically, the extraction method incorporates a strategy for increasing the speed of non-contact HRV feature extraction, together with strategies for overcoming the effects of motion and differing lighting conditions, thereby improving extraction accuracy. In real application scenarios, practical functions such as automatic switching between subjects, automatic detection, and automatic computation of HRV features can be implemented on top of the present invention, which gives it strong practical significance.
Brief description of the drawings
Figure 1 is a flow chart of the technical solution of the present invention.
Figure 2 is a flow chart of the strategy for increasing feature extraction speed by combining face detection and face tracking to obtain the face region.
Figure 3 is a flow chart of the strategy combining channel separation, adaptive skin detection, and EEMD filtering to reduce the influence of differing lighting conditions.
Figure 4 is a flow chart of the peak-point extraction strategy that can correct peaks affected by motion.
Figure 5 shows the rotation coordinate system used for face correction.
Detailed description of embodiments
The present invention is further described below with reference to the accompanying drawings.
The present invention provides a non-contact heart rate variability feature extraction method for real application scenarios that can extract HRV features quickly and fairly accurately. It can be used for emotion recognition to reflect a subject's level of psychological stress, for example in real scenarios such as assisting customs staff in screening suspicious travelers.
Referring to Figure 2, the invention increases feature extraction speed by combining face detection and face tracking to obtain the face region. Face detection locates the face, which is then tracked; during tracking, the face position is re-located by face detection at fixed intervals to prevent tracking drift, and face correction is continuously applied to offsets caused by motion to prevent incomplete extraction of the face region, ensuring higher speed without harming the accuracy of face-region acquisition. In addition, image capture from the camera and image processing run in two separate threads that share a queue, further improving extraction speed.
Referring to Figure 3, the invention combines channel separation, adaptive skin detection, and EEMD filtering to reduce the influence of differing lighting conditions. The face-region image is first converted to the LUV color space, the L channel reflecting luminance changes is separated out, and the U-channel image reflecting chrominance changes is retained. The face-region image is also converted to the YCrCb color space, where a threshold is determined adaptively from the luminance component Y together with the Cb component to detect the skin portion of the face region. The skin-detection result is ANDed with the U-channel image to obtain the raw heart rate signal, initially reducing the influence of differing lighting conditions; the EEMD method is then applied to denoise the source signal, further reducing that influence.
Referring to Figure 4, the invention proposes a peak-point extraction strategy that can correct peaks affected by motion. Signal peaks are first computed; then, for each extracted peak, the slope between adjacent peaks and the distance between peaks are checked against threshold ranges to decide whether the peak extraction was distorted by motion. An abnormal peak is corrected by averaging the positions of the normal peaks preceding it, yielding relatively accurate signal peaks and reducing the impact of motion on feature extraction.
Referring to Figure 1, the non-contact heart rate variability feature extraction method for real application scenarios that combines the above strategies consists of four main parts, namely data acquisition, image processing, signal processing, and HRV feature extraction, which can be subdivided into ten steps: capturing images containing a face, obtaining the facial region of the image, Eulerian magnification, channel separation, adaptive-threshold skin detection, source signal extraction, EEMD denoising, five-point smoothing, peak-point extraction and correction, and HRV feature extraction.
The specific steps are as follows:
Step 1. Capture images containing a face:
The subject faces the camera, and face images are captured at the camera's fixed frame rate. Most ordinary USB cameras on the market run at 30 FPS, so the invention assumes face images are captured at 30 FPS. At least 30 s of continuous face images must be captured to extract relatively accurate heart rate variability features.
Because the invention targets real application scenarios, recording a face video first and then extracting HRV features from it, as in the general pipeline, does not meet the requirements. When applied, the invention therefore runs three threads simultaneously: one capturing face images, one processing face images, and one performing signal processing and feature extraction.
Since capture and processing are split across threads, with one thread capturing face images and another processing them, a shared space is needed so that one thread stores the captured face images in order and the other reads them in order for processing. Because both operations must be sequential and fast, a queue is the most suitable data structure for this shared space.
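A minimal sketch of this producer/consumer split is given below, using Python's standard threading and queue modules together with OpenCV; the camera index, queue size, and the process_frame placeholder are illustrative assumptions rather than part of the patent.

```python
import queue
import threading

import cv2

frame_queue = queue.Queue(maxsize=300)          # shared buffer between the two threads

def capture_frames(stop_event):
    cap = cv2.VideoCapture(0)                   # ordinary USB camera, assumed ~30 FPS
    while not stop_event.is_set():
        ok, frame = cap.read()
        if ok:
            frame_queue.put(frame)              # frames are enqueued in arrival order
    cap.release()

def process_frames(stop_event):
    while not stop_event.is_set() or not frame_queue.empty():
        try:
            frame = frame_queue.get(timeout=1.0)
        except queue.Empty:
            continue
        process_frame(frame)                    # placeholder for Steps 2-10 below

stop = threading.Event()
threading.Thread(target=capture_frames, args=(stop,), daemon=True).start()
threading.Thread(target=process_frames, args=(stop,), daemon=True).start()
```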
Step 2. Obtain the facial region of the image:
The traditional non-contact HRV feature extraction pipeline is slow because it performs face detection on every frame. Although this reliably extracts the face region, it is slow and inefficient. Facial region extraction is crucial in the overall HRV feature extraction pipeline, and because it operates on images, it accounts for most of the program's running time compared with purely numerical computations such as signal processing and emotion classification; optimizing this stage therefore greatly improves the overall speed of the system.
Starting from this problem, the invention extracts the facial region by combining face detection, using the libfacedetection open-source face detection library, with face tracking, using the KLT (Kanade-Lucas-Tomasi) method. This approach is fast, extracts the facial region stably, and does not interfere with other functions (for example, key functions such as automatic switching between subjects that must be considered in real application scenarios). In this approach, face detection provides the tracking template for face tracking, and to prevent the tracked position from drifting away from the face, face detection is re-run at fixed intervals during tracking to re-locate the face before tracking continues.
In practical scenarios, motion must also be considered: with face detection plus face tracking alone, motion can lead to incomplete extraction of the facial region, so face correction is needed. Traditional face correction usually extracts facial landmarks, locates the pupils, and corrects the face using the deflection angle of the line connecting the two pupils. Although this corrects the face position effectively, it is slow, and placing it in a non-contact HRV feature extraction pipeline would seriously slow down extraction. The invention therefore determines a minimum bounding rectangle from the four vertices of the tracked face region and performs the correction using the rectangle's center point and deflection angle. Referring to Figure 5, the deflection angle is defined in a coordinate system: when the rectangle tilts to the right, the angle lies in the first quadrant and is measured from the positive x-axis as 0 degrees; when it tilts to the left, the angle is determined in the second quadrant and measured from the positive y-axis as 0 degrees. If the angle is between 0 and 45 degrees, the image is rotated clockwise by that angle; if it is between 45 and 90 degrees, the image is rotated counter-clockwise by 90 minus that angle. An affine transformation matrix is constructed from the rectangle's center point and deflection angle, the whole image is transformed by this matrix, and the image is then re-cropped according to the rectangle's center, width, and height, giving the corrected face. Although this method is less accurate than landmark-based correction, it is fast and its results meet the application requirements.
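The following is a hedged sketch of the detect/track/correct loop. OpenCV's Haar cascade stands in for the libfacedetection detector named above, calcOpticalFlowPyrLK provides the KLT tracking, and the minimum-area-rectangle correction is simplified; the frames variable is assumed to come from the shared queue of Step 1, angle-sign conventions depend on the OpenCV version, and all parameters are illustrative.

```python
import cv2
import numpy as np

detector = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
REDETECT_EVERY = 300                                    # roughly 10 s at 30 FPS

def detect_points(gray):
    faces = detector.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    pts = cv2.goodFeaturesToTrack(gray[y:y+h, x:x+w], 100, 0.01, 5)
    return None if pts is None else (pts + np.float32([[x, y]]))

def corrected_face(frame, pts):
    # Minimum bounding rectangle of the tracked region; rotate around its center,
    # then crop by its center, width, and height, as described above.
    (cx, cy), (w, h), angle = cv2.minAreaRect(pts)
    if angle > 45:
        angle -= 90                                     # rotate the other way by (90 - angle)
    M = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
    rot = cv2.warpAffine(frame, M, (frame.shape[1], frame.shape[0]))
    x0, y0 = max(int(cx - w / 2), 0), max(int(cy - h / 2), 0)
    return rot[y0:y0 + int(h), x0:x0 + int(w)]

prev_gray, pts = None, None
for i, frame in enumerate(frames):                      # frames read from the shared queue
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if pts is None or i % REDETECT_EVERY == 0:
        pts = detect_points(gray)                       # (re)initialize tracking from detection
    elif prev_gray is not None:
        new_pts, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        pts = new_pts[st.flatten() == 1].reshape(-1, 1, 2)
    prev_gray = gray
    if pts is not None and len(pts) >= 4:
        face_roi = corrected_face(frame, pts)           # facial region used by the later steps
```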
Step 3. Eulerian magnification:
In the present invention, the Eulerian magnification method is applied to amplify skin-color changes in the face region, enhancing the parts of the face image related to physiological signals.
In the Eulerian magnification method, the number of spatial decomposition levels is 6, the temporal filtering band is 1-2 Hz, and the image amplification factor is 200.
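A much-simplified sketch of an Eulerian-style magnification with the parameters stated above (6 pyramid levels, a 1-2 Hz temporal band, amplification factor 200) is shown below: it downsamples each frame, band-passes the downsampled stack along time with an ideal FFT filter, amplifies the result, and adds it back. The full method magnifies every pyramid level separately; this condensed single-level variant, the fixed 30 FPS assumption, and the need to reduce the pyramid depth for small face crops are assumptions, not the patent's implementation.

```python
import cv2
import numpy as np

def magnify(frames, levels=6, low=1.0, high=2.0, fps=30.0, alpha=200.0):
    small = []
    for f in frames:
        g = f.astype(np.float32)
        for _ in range(levels):                     # spatial decomposition (Gaussian downsampling)
            g = cv2.pyrDown(g)
        small.append(g)
    stack = np.stack(small, axis=0)                 # shape (T, h, w, 3)

    # Ideal temporal band-pass filter over the 1-2 Hz band via FFT along the time axis
    spec = np.fft.rfft(stack, axis=0)
    freqs = np.fft.rfftfreq(stack.shape[0], d=1.0 / fps)
    spec[(freqs < low) | (freqs > high)] = 0
    band = np.fft.irfft(spec, n=stack.shape[0], axis=0)

    out = []
    for i, f in enumerate(frames):                  # amplify and add back at full resolution
        boost = (band[i] * alpha).astype(np.float32)
        for _ in range(levels):
            boost = cv2.pyrUp(boost)
        boost = cv2.resize(boost, (f.shape[1], f.shape[0]))
        out.append(np.clip(f.astype(np.float32) + boost, 0, 255).astype(np.uint8))
    return out
```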
Step 4. Channel separation:
To reduce the influence of differing lighting conditions on HRV feature extraction, the Eulerian-magnified face image obtained in Step 3 is converted from the RGB color space to the LUV color space so that the L channel, which reflects luminance changes, can be separated out; the U channel, which reflects chrominance changes, is used to extract the raw heart rate signal.
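In OpenCV this step is a single conversion followed by a split (a sketch; OpenCV images are BGR by default, hence the COLOR_BGR2LUV code, and face_roi is assumed to be the magnified face region from Step 3):

```python
import cv2

luv = cv2.cvtColor(face_roi, cv2.COLOR_BGR2LUV)    # magnified face region from Step 3
L, U, V = cv2.split(luv)                           # discard L (luminance); keep U (chrominance)
```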
Step 5. Adaptive-threshold skin detection:
The face-region image obtained by face detection and face tracking still contains non-skin parts, which would affect the accuracy of HRV feature extraction, so non-skin areas of the face must be filtered out. Because lighting conditions differ across real application scenarios, skin detection with a single fixed threshold performs poorly; the invention therefore uses an adaptive threshold, determined from the luminance component Y together with the Cb component under different lighting conditions. Pixels within the threshold range are skin pixels and are set to 255 (white); all others are set to 0 (black). The skin-detection image is ANDed with the U channel to remove non-skin areas, yielding a U-channel face image with non-skin areas removed.
The dynamic configuration rules in the YCrCb color space are defined as follows:
θ3 = 6; θ4 = -8
if (Y ≤ 128): θ1 = 6; θ2 = 12
A pixel is a skin pixel if its Cr value satisfies the following conditions:
Cr ≥ -2(Cb + 24); Cr ≥ -(Cb + 17);
Cr ≥ -4(Cb + 32); Cr ≥ 2.5(Cb + θ1);
Cr ≥ θ3; Cr ≥ 0.5(θ4 - Cb);
where Y is the luminance component, Cb is the blue chrominance component, Cr is the red chrominance component, and θ1 to θ4 are intermediate variables.
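A sketch of these rules follows. The excerpt does not give the θ1/θ2 values for Y > 128, does not use θ2 in the listed conditions, and does not state whether Cr and Cb are the raw 8-bit values or the zero-centered (value - 128) ones, so the sketch assumes the Y ≤ 128 values and zero-centered chrominance.

```python
import cv2
import numpy as np

def skin_mask(face_bgr):
    ycrcb = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    cr = ycrcb[..., 1] - 128.0                    # assumption: zero-centered chrominance
    cb = ycrcb[..., 2] - 128.0
    theta1, theta3, theta4 = 6.0, 6.0, -8.0       # Y <= 128 values; the Y > 128 branch is not given here
    ok = ((cr >= -2 * (cb + 24)) & (cr >= -(cb + 17)) &
          (cr >= -4 * (cb + 32)) & (cr >= 2.5 * (cb + theta1)) &
          (cr >= theta3) & (cr >= 0.5 * (theta4 - cb)))
    return np.where(ok, 255, 0).astype(np.uint8)  # skin pixels white, everything else black

mask = skin_mask(face_roi)
u_skin = cv2.bitwise_and(U, U, mask=mask)         # U channel with non-skin pixels zeroed
```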
Step 6. Source signal extraction:
In the present invention, within one round of HRV feature extraction, the pixel mean is computed for every U-channel face image (with non-skin areas removed) obtained in Step 5, giving a series of mean values; standardizing this series yields the raw heart rate signal.
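A minimal sketch of this step, assuming u_skin_frames and mask_frames hold the per-frame U-channel images and skin masks from Step 5 for the current window:

```python
import numpy as np

# One sample per frame: the mean over skin pixels of the U channel, then z-score standardization.
means = np.array([u[m > 0].mean() for u, m in zip(u_skin_frames, mask_frames)])
raw_signal = (means - means.mean()) / means.std()
```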
Step 7. EEMD denoising:
In real application scenarios, differing lighting conditions have a large influence on HRV feature extraction; experimental results show that the lower the illuminance, the larger the error in the extracted HRV features. To reduce the error that differing lighting conditions cause in HRV extraction, the invention applies the EEMD (Ensemble Empirical Mode Decomposition) method for noise reduction. EEMD adaptively obtains IMF components of different resolutions at each scale; the instantaneous frequency is computed via the Hilbert transform and used to distinguish noise-dominated IMF components from signal-dominated ones. The noise-dominated components are discarded and the signal-dominated components are kept to reconstruct the signal, reducing the impact of differing lighting conditions on HRV feature extraction.
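A hedged sketch using the PyEMD package (distributed on PyPI as EMD-signal) and SciPy's Hilbert transform is shown below; the excerpt does not state the exact frequency criterion for separating noise-dominated from signal-dominated IMFs, so the 0.7-3.0 Hz cardiac band used here is an assumption.

```python
import numpy as np
from PyEMD import EEMD
from scipy.signal import hilbert

fps = 30.0
imfs = EEMD().eemd(raw_signal)                         # ensemble empirical mode decomposition

kept = []
for imf in imfs:
    phase = np.unwrap(np.angle(hilbert(imf)))
    inst_freq = np.diff(phase) * fps / (2.0 * np.pi)   # instantaneous frequency in Hz
    if 0.7 <= np.median(np.abs(inst_freq)) <= 3.0:     # plausibly cardiac: keep (signal-dominated)
        kept.append(imf)

denoised = np.sum(kept, axis=0) if kept else raw_signal
```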
Step 8. Five-point smoothing:
The five-point moving-average filter is a low-pass filter that effectively removes the high-frequency noise still present in the signal. It is applied to further filter the signal produced by the EEMD filtering in Step 7, making the signal smoother.
The i-th new value of the five-point moving average is computed as y(i) = [f(i-2) + f(i-1) + f(i) + f(i+1) + f(i+2)] / 5, for 3 ≤ i ≤ N-2,
where N is the signal length, f(j) is the signal value within the five-point sliding window, and y(i) is the new signal value determined by the five-point moving average.
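A direct vectorized sketch of this filter, in which the first and last two samples are simply left unchanged:

```python
import numpy as np

def five_point_smooth(x):
    # y(i) = (x[i-2] + x[i-1] + x[i] + x[i+1] + x[i+2]) / 5 for the interior samples
    y = x.astype(float).copy()
    y[2:-2] = (x[:-4] + x[1:-3] + x[2:-2] + x[3:-1] + x[4:]) / 5.0
    return y

smooth_signal = five_point_smooth(denoised)
```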
Step 9. Peak-point extraction and correction:
When the subject's head moves, the heart-rate-variability curve jitters, which makes peak extraction from the curve inaccurate. The invention therefore proposes a peak-point extraction strategy that can correct peaks affected by motion: signal peaks are first computed from the smoothed signal obtained after Step 8, and then the slope between adjacent peaks and the distance between peaks are checked against threshold ranges to decide whether a peak extraction was distorted by motion. A peak identified as abnormal is corrected by averaging the positions of the normal peaks preceding it; after all abnormal peaks have been corrected, relatively accurate signal peaks are obtained.
In the present invention, whether the slope between adjacent peak points is within the threshold range is determined by the following condition:
where h_i denotes the height of the i-th peak point (i = 1, 2, ..., n) and t_i denotes the time corresponding to the i-th peak point; a peak point that does not satisfy the condition is an abnormal peak point.
In the present invention, whether the distance between adjacent peak points is within the threshold range is determined by the following condition:
60/(HR + 14) ≤ t_i - t_{i-1} ≤ 60/(HR - 14), (i = 2, 3, ..., n)
where t_i denotes the time corresponding to the i-th peak point (i = 1, 2, ..., n) and HR denotes the mean heart rate; a peak point that does not satisfy this condition is an abnormal peak point. The mean heart rate HR is computed as HR = 60 × count / t_all,
where t_all is the total detection duration and count is the total number of peak points detected.
A detected abnormal peak point is corrected according to the following formula:
t_F_new = t_{F-1} + [(t_{F-1} - t_{F-2}) + ... + (t_2 - t_1)] / (F - 2)
where t_F_new is the corrected result and (t_{F-1}, ..., t_1) are the normal or already-corrected peak points preceding the abnormal peak point being corrected.
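A sketch of the distance check and the correction is given below using SciPy's peak finder; the slope condition is omitted because its threshold is not given in this excerpt, and the peak-detection parameters are illustrative.

```python
import numpy as np
from scipy.signal import find_peaks

fps = 30.0
peaks, _ = find_peaks(smooth_signal)                       # candidate peak indices
t = list(peaks / fps)                                      # peak times in seconds
HR = 60.0 * len(t) / (len(smooth_signal) / fps)            # mean heart rate, beats per minute

lo, hi = 60.0 / (HR + 14), 60.0 / (HR - 14)                # allowed RR-interval range
for i in range(2, len(t)):
    if not (lo <= t[i] - t[i - 1] <= hi):
        # t_F_new = t_{F-1} + mean of the earlier (normal or already corrected) RR intervals
        t[i] = t[i - 1] + (t[i - 1] - t[0]) / (i - 1)

rr_intervals = np.diff(t)                                  # corrected RR intervals in seconds
```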
Step 10. HRV feature extraction:
The corrected peak points obtained in Step 9 are used to compute the RR intervals and R-point times, from which 27 HRV features are extracted, covering time-domain, frequency-domain, and nonlinear features.
The time-domain features include: max, min, mean, median, SDNN, RMSSD, hr-mean, hr-sd, NN40, pNN40, and HRVti.
The frequency-domain features are obtained by spectral analysis using the Lomb-Scargle periodogram and include: aVLF, aLF, aHF, aTotal, pVLF, pLF, pHF, nLF, nHF, LFHF, peakVLF, peakLF, and peakHF.
The nonlinear features include: SD1, SD2, and SD1/SD2.
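A sketch computing a representative subset of these features from the corrected RR intervals follows; the VLF/LF/HF band edges used (0.003-0.04, 0.04-0.15, 0.15-0.4 Hz) are the conventional ones and an assumption here, and SciPy's lombscargle expects angular frequencies.

```python
import numpy as np
from scipy.signal import lombscargle

rr_ms = rr_intervals * 1000.0                      # RR intervals in milliseconds
r_times = np.cumsum(rr_intervals)                  # R-point times in seconds

# Time-domain examples
sdnn = np.std(rr_ms, ddof=1)
rmssd = np.sqrt(np.mean(np.diff(rr_ms) ** 2))
nn40 = int(np.sum(np.abs(np.diff(rr_ms)) > 40))
pnn40 = 100.0 * nn40 / len(np.diff(rr_ms))

# Frequency domain via the Lomb-Scargle periodogram
freqs = np.linspace(0.003, 0.4, 400)
power = lombscargle(r_times, rr_ms - rr_ms.mean(), 2.0 * np.pi * freqs)
lf_band = (freqs >= 0.04) & (freqs < 0.15)
hf_band = (freqs >= 0.15) & (freqs <= 0.4)
aLF = np.trapz(power[lf_band], freqs[lf_band])
aHF = np.trapz(power[hf_band], freqs[hf_band])
LFHF = aLF / aHF

# Nonlinear (Poincare plot) examples
d = np.diff(rr_ms)
SD1 = np.sqrt(np.var(d, ddof=1) / 2.0)
SD2 = np.sqrt(max(2.0 * np.var(rr_ms, ddof=1) - np.var(d, ddof=1) / 2.0, 0.0))
```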
The above are specific embodiments of the present invention.
Claims (9)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111011120.8A CN113657345B (en) | 2021-08-31 | 2021-08-31 | A non-contact heart rate variability feature extraction method based on real application scenarios |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111011120.8A CN113657345B (en) | 2021-08-31 | 2021-08-31 | A non-contact heart rate variability feature extraction method based on real application scenarios |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113657345A CN113657345A (en) | 2021-11-16 |
CN113657345B true CN113657345B (en) | 2023-09-15 |
Family
ID=78493317
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111011120.8A Active CN113657345B (en) | 2021-08-31 | 2021-08-31 | A non-contact heart rate variability feature extraction method based on real application scenarios |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113657345B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114403838A (en) * | 2022-01-24 | 2022-04-29 | Foshan University of Science and Technology | A portable remote heart rate detection device and method based on Raspberry Pi |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7869631B2 (en) * | 2006-12-11 | 2011-01-11 | Arcsoft, Inc. | Automatic skin color model face detection and mean-shift face tracking |
-
2021
- 2021-08-31 CN CN202111011120.8A patent/CN113657345B/en active Active
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102217931A (en) * | 2011-06-09 | 2011-10-19 | 李红锦 | Method and device for acquiring heart rate variation characteristic parameter |
EP2745770A1 (en) * | 2012-12-18 | 2014-06-25 | Werner Wittling | Method and device for determining the variability of a creature's heart rate |
CN103824420A (en) * | 2013-12-26 | 2014-05-28 | 苏州清研微视电子科技有限公司 | Fatigue driving identification system based on heart rate variability non-contact measuring |
CN104127194A (en) * | 2014-07-14 | 2014-11-05 | 华南理工大学 | Depression evaluating system and method based on heart rate variability analytical method |
CN106333658A (en) * | 2016-09-12 | 2017-01-18 | 吉林大学 | Photoelectric volume pulse wave detector and photoelectric volume pulse wave detection method |
CN109044322A (en) * | 2018-08-29 | 2018-12-21 | 北京航空航天大学 | A kind of contactless heart rate variability measurement method |
CN111429345A (en) * | 2020-03-03 | 2020-07-17 | 贵阳像树岭科技有限公司 | Method for visually calculating heart rate and heart rate variability with ultra-low power consumption |
CN111714144A (en) * | 2020-07-24 | 2020-09-29 | 长春理工大学 | Mental stress analysis method based on video non-contact measurement |
CN112200099A (en) * | 2020-10-14 | 2021-01-08 | 浙江大学山东工业技术研究院 | A video-based dynamic heart rate detection method |
CN112656393A (en) * | 2020-12-08 | 2021-04-16 | 山东中科先进技术研究院有限公司 | Heart rate variability detection method and system |
Non-Patent Citations (1)
Title |
---|
R-wave detection method for non-invasive fetal ECG signals; Tian Wenlong; Dai Min; Journal of Tianjin University of Technology (Issue 02); full text *
Also Published As
Publication number | Publication date |
---|---|
CN113657345A (en) | 2021-11-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Song et al. | New insights on super-high resolution for video-based heart rate estimation with a semi-blind source separation method | |
US11229372B2 (en) | Systems and methods for computer monitoring of remote photoplethysmography based on chromaticity in a converted color space | |
Po et al. | Block-based adaptive ROI for remote photoplethysmography | |
CN111938622B (en) | Heart rate detection method, device and system and readable storage medium | |
Mannapperuma et al. | Performance limits of ICA-based heart rate identification techniques in imaging photoplethysmography | |
CN106778695A (en) | A kind of many people's examing heartbeat fastly methods based on video | |
US11583198B2 (en) | Computer-implemented method and system for contact photoplethysmography (PPG) | |
Ernst et al. | Optimal color channel combination across skin tones for remote heart rate measurement in camera-based photoplethysmography | |
Bai et al. | Real-time robust noncontact heart rate monitoring with a camera | |
CN113657345B (en) | A non-contact heart rate variability feature extraction method based on real application scenarios | |
Nikolaiev et al. | Non-contact video-based remote photoplethysmography for human stress detection | |
Tabei et al. | A novel diversity method for smartphone camera-based heart rhythm signals in the presence of motion and noise artifacts | |
Gupta et al. | A motion and illumination resistant non-contact method using undercomplete independent component analysis and Levenberg-Marquardt algorithm | |
Attivissimo et al. | Performance evaluation of image processing algorithms for eye blinking detection | |
Zheng et al. | Remote measurement of heart rate from facial video in different scenarios | |
Chen et al. | Camera-based heart rate estimation for hospitalized newborns in the presence of motion artifacts | |
Gao et al. | Region of interest analysis using delaunay triangulation for facial video-based heart rate estimation | |
Mehta et al. | Heart rate estimation from RGB facial videos using robust face demarcation and VMD | |
Zou et al. | Non-contact real-time heart rate measurement algorithm based on PPG-standard deviation | |
Gong et al. | Heart rate estimation in driver monitoring system using quality-guided spectrum peak screening | |
Wang et al. | KLT algorithm for non-contact heart rate detection based on image photoplethysmography | |
Zheng et al. | Hand-over-face occlusion and distance adaptive heart rate detection based on imaging photoplethysmography and pixel distance in online learning | |
Geng et al. | Motion resistant facial video based heart rate estimation method using head-mounted camera | |
Hu et al. | Study on Real-Time Heart Rate Detection Based on Multi-People. | |
Ben Salah et al. | Contactless heart rate estimation from facial video using skin detection and multi-resolution analysis |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||