CN116889388A - Intelligent detection system and method based on rPPG technology - Google Patents

Intelligent detection system and method based on rPPG technology

Info

Publication number
CN116889388A
CN116889388A CN202311161515.5A
Authority
CN
China
Prior art keywords
signal
rppg
unit
channel
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311161515.5A
Other languages
Chinese (zh)
Other versions
CN116889388B (en)
Inventor
孙运杰
嵇晓强
李贵文
隋雅茹
王美娇
饶治
郝颢
陶雪
马艳蓉
曹国华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changchun Gauss Vision Technology Co ltd
Changchun University of Science and Technology
Original Assignee
Changchun Gauss Vision Technology Co ltd
Changchun University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changchun Gauss Vision Technology Co ltd, Changchun University of Science and Technology filed Critical Changchun Gauss Vision Technology Co ltd
Priority to CN202311161515.5A priority Critical patent/CN116889388B/en
Publication of CN116889388A publication Critical patent/CN116889388A/en
Application granted granted Critical
Publication of CN116889388B publication Critical patent/CN116889388B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/02 Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B5/021 Measuring pressure in heart or blood vessels
    • A61B5/02108 Measuring pressure in heart or blood vessels from analysis of pulse wave characteristics
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Public Health (AREA)
  • Pathology (AREA)
  • Cardiology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Physiology (AREA)
  • Veterinary Medicine (AREA)
  • Vascular Medicine (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of remote photoplethysmography (rPPG), and in particular to an intelligent detection system and method based on the rPPG technology.

Description

Intelligent detection system and method based on rPPG technology
Technical Field
The invention relates to the technical field of remote photoplethysmography (rPPG), and in particular to an intelligent detection system and method based on the rPPG technology.
Background
Remote photoplethysmography (rPPG) is a non-contact optical detection technology that extracts the pulse-wave signal of the human body from video. The blood volume in the vessels changes periodically with the contraction and relaxation of the heart, and hemoglobin at different blood volumes absorbs light differently, which changes the regularity of the light reflected from the skin surface; the original pulse wave can therefore be recovered from video captured by a video imaging device using suitable signal processing. The change in blood volume relates directly to the lateral pressure on the vessel walls: as the blood volume increases, the lateral pressure of the blood on the vessels also increases. In the prior art, the related measurements must be carried out by trained professionals, and there are no conditions for home monitoring or daily use.
Disclosure of Invention
The invention aims to provide an intelligent detection system and method based on the rPPG technology that solve the problems noted in the background art. The invention provides the following technical scheme:
an intelligent detection method based on an rPPG technology, the method comprising the following steps:
s1, acquiring palm position information of a user in real time through a camera, dividing a region of interest (ROI) by combining the acquired palm position information of the user, extracting an image G channel signal in a divided region, and preprocessing an average pixel value in the G channel signal as an original rPPG signal;
s2, carrying out targeted single-period segmentation on the rPPG signals after pretreatment, carrying out primary screening on segmentation results by combining with human standard pulse rates, and carrying out graph-based detection processing by combining with the screening results;
s3, extracting characteristic values in the rPPG signal after the base detection processing based on the analysis result of the S2, and carrying out network parameter training through an Adma optimizer to construct a two-channel characteristic fusion data prediction model;
s4, acquiring current intensive care unit patient information in real time, carrying out data training on the information through a two-channel feature fusion data prediction model, obtaining a trained two-channel feature fusion data prediction model, inputting a preprocessed signal into the trained two-channel feature fusion data prediction model for prediction, and further obtaining a feedback result.
Further, the method in S1 includes the following steps:
step 1001, acquiring video of the user's palm in real time through a camera, storing each frame of the video, and recording the frames as a set A,
A = (A_1, A_2, A_3, ..., A_n),
where A_n denotes the n-th frame image and n denotes the total number of frames in the acquired video;
step 1002, randomly extracting one frame of image to divide the ROI, wherein 21 key points of the hand area in the n-th frame image are marked by an image recognition technology,
taking the center of the junction between the palm and the wrist as the first key point, marked key point 0; taking key point 0 as the origin and, with the origin as reference, constructing a first plane rectangular coordinate system at unit-length intervals; marking the corresponding key points of the hand area in the n-th frame image in the first plane rectangular coordinate system and numbering them in order;
step 1003, calculating, for each key point, the slope of the line segment it forms with the origin and the distance between the corresponding key point and the origin in the first plane rectangular coordinate system, and combining the results for the corresponding key points into a set B,
B = {[(X_{A(n)}^{0,1}, D_{A(n)}^{0,1}), (X_{A(n)}^{0,2}, D_{A(n)}^{0,2}), ..., (X_{A(n)}^{0,20}, D_{A(n)}^{0,20})]},
where X_{A(n)}^{0,20} denotes the slope of the line segment formed by key point 20 and key point 0, and D_{A(n)}^{0,20} denotes the distance between key point 20 and key point 0 in the first plane rectangular coordinate system,
with X_{A(n)}^{0,20} = (y_20 - y_0)/(x_20 - x_0) and D_{A(n)}^{0,20} = [(y_20 - y_0)^2 + (x_20 - x_0)^2]^{1/2}.
Step 1004, repeating steps 1002 to 1003 to obtain, for each frame of the acquired video, the slope of the line segment formed by each key point and the origin and the distance between the corresponding key point and the origin in the first plane rectangular coordinate system, then sequentially computing the deviation of the key-point positions in each frame from the standard positions, recorded as a set C,
C = (C_1, C_2, C_3, ..., C_n),
where C_n denotes the deviation of the key-point positions in the n-th frame image from the standard positions,
C_n = α·Σ_{a=1}^{20} |X_{A(n)}^{0,a} - X_a^{standard}|/20 + β·Σ_{a=1}^{20} |D_{A(n)}^{0,a} - D_a^{standard}|/20,
where α and β are proportionality coefficients preset in the database, X_{A(n)}^{0,a} is the slope of the line segment formed by key point a and key point 0 in the n-th frame image, X_a^{standard} is the database-preset standard slope for the segment between key point a and key point 0, D_{A(n)}^{0,a} is the distance between key point a and key point 0 in the first plane rectangular coordinate system, and D_a^{standard} is the database-preset standard distance between key point a and key point 0 in the first plane rectangular coordinate system;
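The per-frame deviation C_n above can be sketched as follows. This is an illustrative implementation only: the function name `frame_deviation` and the placeholder values for the proportionality coefficients and the standard slopes/distances (which the patent takes as database presets) are assumptions, not part of the original disclosure.

```python
import math

ALPHA, BETA = 0.5, 0.5  # proportionality coefficients; database presets in the patent


def slope(p, origin):
    # slope of the line segment from key point 0 (the origin) to key point p
    return (p[1] - origin[1]) / (p[0] - origin[0])


def distance(p, origin):
    # Euclidean distance from key point 0 to key point p
    return math.hypot(p[0] - origin[0], p[1] - origin[1])


def frame_deviation(keypoints, std_slopes, std_dists):
    """C_n: weighted mean slope deviation plus mean distance deviation.

    keypoints:  list of 21 (x, y) points, index 0 being the wrist key point.
    std_slopes: 20 database-preset standard slopes for key points 1..20.
    std_dists:  20 database-preset standard distances for key points 1..20.
    """
    origin = keypoints[0]
    slopes = [slope(p, origin) for p in keypoints[1:]]
    dists = [distance(p, origin) for p in keypoints[1:]]
    s_dev = sum(abs(s - t) for s, t in zip(slopes, std_slopes)) / 20
    d_dev = sum(abs(d - t) for d, t in zip(dists, std_dists)) / 20
    return ALPHA * s_dev + BETA * d_dev
```

The frame whose C_n is smallest is then selected as the optimal image in step 1005.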
step 1005, taking the image corresponding to the minimum deviation in the set C as the optimal image of the currently acquired video, matching the optimal image against the images in the Hand Landmark model, and locating in the optimal image the 21 hand-region key points of the matched model. The Hand Landmark model was trained by Google on 30K real-world hand images and locates the 21 key points of the hand region; hand-landmark judgment is performed for every acquired frame. Because the palm region is rich in blood vessels, the palm ROI is delimited from the coordinates of the key points, so that accurate ROI positioning is achieved even when the hand moves, minimizing external detection interference;
in step 1006, since blood tissue absorbs more light than other tissue, and the color of an opaque object is determined by the color of the reflected light, blood reflects red light and absorbs green light, so the color change of the green channel in the video collected by the camera is most pronounced. Because rPPG performs signal processing on changes in reflected light, the G channel is retained: the G-channel pixel mean in the optimal image is read and used as the original rPPG signal. The contour of the hand region is separated from the background by the Canny edge detection algorithm to obtain a background ROI, and the rPPG signal of the hand ROI is corrected through the brightness change of the background ROI, where the brightness change of the background ROI is computed as
I(t) = [Σ_{c=1}^{w} Σ_{d=1}^{h} G_v(c, d, t)]/s,
where I(t) denotes the change of illumination with time t, G_v(c, d, t) denotes the green-channel value at time t of the pixel at abscissa c and ordinate d (in pixel pitches) within the background ROI, w denotes the pixel width of the background ROI, h the pixel height, and s the total number of pixels in the background ROI. A brightness-change curve is fitted by a ninth-order polynomial to the discrete brightness-versus-time points and subtracted from the original rPPG signal to eliminate illumination interference. Furthermore, to reduce noise introduced when the original rPPG signal is acquired, the signal is denoised using ensemble empirical mode decomposition; and to provide well-located start and end points for single-period segmentation and reduce errors caused by insufficient sampling precision, the rPPG waveform is interpolated with a cubic spline function from the 30 Hz sampling rate up to 300 Hz.
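A minimal numpy sketch of the background-brightness correction and resampling in step 1006. The function name and input layout are assumptions; the detrend uses the patent's ninth-order polynomial fit, but `np.interp` (linear) stands in for the patent's cubic spline purely to keep the sketch dependency-free.

```python
import numpy as np


def correct_and_resample(raw_rppg, bg_frames, fs=30, fs_up=300):
    """Background-brightness correction and upsampling of the raw rPPG signal.

    raw_rppg:  1-D array, G-channel mean of the hand ROI per frame.
    bg_frames: list of 2-D arrays, green channel of the background ROI per
               frame (hypothetical input layout for this sketch).
    """
    t = np.arange(len(raw_rppg)) / fs
    # I(t): mean green value of the background ROI at each time step
    illum = np.array([f.sum() / f.size for f in bg_frames])
    # fit a ninth-order polynomial to the brightness curve and subtract it
    trend = np.polyval(np.polyfit(t, illum, 9), t)
    corrected = raw_rppg - trend
    # the patent uses cubic-spline interpolation to 300 Hz; linear interp
    # is used here only as a dependency-free stand-in
    t_up = np.arange(0, t[-1], 1.0 / fs_up)
    return np.interp(t_up, t, corrected)
```

EEMD denoising is omitted from the sketch, as it requires a dedicated decomposition library.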
According to the invention, video of the user's palm is collected in real time and the key-point positions in the video are analyzed; each frame is extracted and matched against the images in the Hand Landmark model to accurately locate the 21 key points of the hand region; the contour of the hand region is separated from the background by the Canny edge detection algorithm to obtain a background ROI; the rPPG signal of the hand ROI is corrected through the brightness change of the background ROI; and a data reference is provided for subsequent data prediction.
Further, the method in S2 includes the following steps:
step 2001, acquiring a preprocessed rPPG signal, constructing a second plane rectangular coordinate system by taking a point o1 as an origin, time as an abscissa and amplitude as an ordinate, and mapping the preprocessed rPPG signal into the second plane rectangular coordinate system;
step 2002, marking all zero crossing points in the descending period of the preprocessed rPPG signal in a second plane rectangular coordinate system, sequentially combining two adjacent marking points, taking two adjacent marking points in any combination as interval endpoints, extracting the minimum value in the rPPG signal in the corresponding interval as a segmentation point, and carrying out rPPG signal segmentation;
step 2003, looping step 2002 divides the preprocessed rPPG signal into a number of single-period signal waveforms, which are first screened against the standard human pulse-rate range and then, combining the preliminary screening results, calibrated in sequence. Because the rPPG signal is easily corrupted by noise during capture, the single-period waveforms require correlation screening. On one hand, considering that the normal human pulse-rate range is 60-100 beats per minute and the resampled frequency is 300 Hz, only single-period waveforms with at least 120 and at most 300 sampling points are retained. On the other hand, since each person's pulse rate is unique and individual pulse-rate ranges differ, box-plot-based detection is applied after the preliminary screening is completed; the retention interval of a single-period waveform is:
P_calibration = {[Q_1 - k(Q_3 - Q_1) - r][Q_3 + k(Q_3 - Q_1) - r]}·ξ·G_abnormal,
where Q_1 is the lower quartile of the sampling-point counts of all single-period waveforms, Q_3 is the upper quartile, k is an anomaly coefficient (k = 3 marks extreme anomalies, k = 1.5 moderate anomalies), r is the number of sampling points, ξ is a proportionality coefficient preset in the database, and G_abnormal is the total number of abnormal sampling points in the single-period waveform. If [Q_1 - k(Q_3 - Q_1) - r] < 0 and [Q_3 + k(Q_3 - Q_1) - r] > 0, then [Q_1 - k(Q_3 - Q_1) - r][Q_3 + k(Q_3 - Q_1) - r] = 1, and 0 otherwise; the number of abnormal sampling points in the single-period waveform is obtained by differencing its sampling points against the standard human pulse-rate range and counting the negative values;
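Steps 2002-2003 can be sketched as below. This is an illustrative reading, not the patent's exact procedure: segmentation cuts at the minimum between adjacent falling-edge zero crossings, and screening keeps waveforms of 120-300 samples whose sample counts also fall inside the box-plot fence [Q_1 - k(Q_3 - Q_1), Q_3 + k(Q_3 - Q_1)]. Function names are assumptions.

```python
import numpy as np


def split_periods(sig):
    """Cut the preprocessed rPPG signal at the minimum between each pair of
    adjacent falling-edge zero crossings (step 2002)."""
    falling = [i for i in range(1, len(sig)) if sig[i - 1] > 0 >= sig[i]]
    cuts = [a + int(np.argmin(sig[a:b])) for a, b in zip(falling, falling[1:])]
    return [sig[a:b] for a, b in zip(cuts, cuts[1:])]


def screen_periods(periods, min_pts=120, max_pts=300, k=1.5):
    """Pulse-rate screening plus box-plot rejection on the sampling-point
    counts (step 2003); k = 1.5 marks moderate, k = 3 extreme anomalies."""
    kept = [p for p in periods if min_pts <= len(p) <= max_pts]
    if len(kept) < 4:          # too few waveforms for meaningful quartiles
        return kept
    counts = np.array([len(p) for p in kept])
    q1, q3 = np.percentile(counts, [25, 75])
    lo, hi = q1 - k * (q3 - q1), q3 + k * (q3 - q1)
    # retain only waveforms whose length lies inside the box-plot fence
    return [p for p, r in zip(kept, counts) if lo <= r <= hi]
```

For a steady pulse, every retained period has a similar sample count, so the box-plot fence mainly removes motion-corrupted outlier periods.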
According to the invention, the signal is divided into single periods, the number of sampling points of each single-period waveform is used as the screening basis, and the screening results provide a data reference for subsequent model training and data detection.
Further, the method in S3 includes the following steps:
step 3001, training the network parameters through an Adam optimizer, taking the rectified linear unit (ReLU) as the activation function, the root-mean-square error as the loss function, and the mean absolute error as the evaluation index, with the learning rate set to δ; the preprocessed rPPG signals are shuffled in sequence and divided into a training set and a test set;
step 3002, sending the signals input to the model into the FNN multi-layer-perceptron branch for feature extraction; the FNN consists entirely of fully connected layers and combines the input features through a multi-layer structure to mine strong correlations between the input features and the data;
The invention sets an input layer for receiving data and nine hidden layers for mining and extracting deep features from the input signal; the final output layer comprises 160 nodes, and the resulting vector of signal features is sent to the feature fusion module;
step 3003, sending the signals input to the model into the CNN convolutional-neural-network branch for multi-view feature extraction. With the AlexNet network as the backbone, the network structure comprises 9 layers in total: the first 8 consist of convolution and pooling layers, the convolution layers extracting features from the signal and the pooling layers performing max pooling to reduce the feature-map size and the computational complexity; the last layer is a flatten layer, which flattens the feature vector into one-dimensional data sent to the feature fusion module;
step 3004, fusing the output features of the branches of steps 3002 and 3003, and predicting data through two fully connected layers.
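A minimal numpy sketch of the dual-channel architecture of steps 3002-3004, under stated simplifications: three dense layers stand in for the nine hidden layers of the FNN branch, two conv/pool stages for the eight AlexNet-style layers of the CNN branch, and all weights are random and untrained, so this shows the data flow only, not the patent's trained model.

```python
import numpy as np

rng = np.random.default_rng(0)


def relu(x):
    return np.maximum(x, 0.0)


def dense(x, in_dim, out_dim):
    # fully connected layer with (untrained) random weights
    W = rng.normal(0, 0.1, (in_dim, out_dim))
    return relu(x @ W)


def conv1d(x, kernel):
    # valid 1-D convolution over the signal
    k = len(kernel)
    return np.array([x[i:i + k] @ kernel for i in range(len(x) - k + 1)])


def maxpool(x, size=2):
    n = len(x) // size
    return x[:n * size].reshape(n, size).max(axis=1)


def predict(signal):
    # FNN branch: fully connected layers mine correlations in the raw signal
    f = signal
    for _ in range(3):                       # stand-in for nine hidden layers
        f = dense(f, f.shape[0], 160)        # final FNN layer has 160 nodes
    # CNN branch: convolution + max pooling extract local waveform features
    c = signal
    for _ in range(2):                       # stand-in for the conv/pool stack
        c = maxpool(relu(conv1d(c, rng.normal(0, 0.1, 5))))
    c = c.ravel()                            # flatten layer
    # feature fusion followed by two fully connected layers for prediction
    fused = np.concatenate([f, c])
    h = dense(fused, fused.shape[0], 32)
    out = h @ rng.normal(0, 0.1, (32, 1))
    return float(out[0])
```

In the patent the fused prediction would be trained end-to-end with Adam against an RMSE loss, as described in step 3001.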
Further, the method in S4 collects current information of the patient in the intensive care unit in real time, performs data training on the information through a dual-channel feature fusion data prediction model, obtains a trained dual-channel feature fusion data prediction model, inputs a preprocessed signal into the trained dual-channel feature fusion data prediction model for prediction, and further obtains a feedback result.
An intelligent detection system based on the rPPG technology, the system comprising the following modules:
information data preprocessing module: the information data preprocessing module is used for acquiring palm position information of a user in real time through a camera, carrying out ROI region division by combining the acquired palm position information, extracting the image G-channel signal in the divided region, and taking the average pixel value of the G channel as the original rPPG signal for preprocessing;
the rPPG signal segmentation and waveform selection module: the rPPG signal segmentation and waveform selection module is used for carrying out targeted single-period segmentation of the preprocessed rPPG signal, carrying out single-period signal segmentation according to the segmentation results, carrying out rPPG waveform screening on the segmentation results, and carrying out box-plot-based detection processing by combining the screening results;
the two-channel characteristic fusion data prediction module is as follows: the dual-channel feature fusion data prediction module is used for constructing a dual-channel feature fusion data prediction model by combining the analysis results of the rPPG signal segmentation and waveform selection module;
and the rPPG signal characteristic extraction and data prediction module is as follows: the rPPG signal characteristic extraction and data prediction module is used for training by inputting training data into the two-channel characteristic fusion data prediction model, obtaining a trained two-channel characteristic fusion data prediction model, inputting a preprocessed signal into the trained two-channel characteristic fusion data prediction model for prediction, and further obtaining a feedback result.
Further, the information data preprocessing module comprises an image acquisition unit, an ROI region dividing unit, a channel data extraction unit and an rPPG signal preprocessing unit:
the image acquisition unit is used for acquiring the moving video of the palm part of the user in real time through the camera and extracting each frame of image in the acquired video;
the ROI region dividing unit is used for judging hand landmarks of each acquired frame of image by combining the analysis result of the image acquisition unit, and positioning 21 key points of the hand region according to the judgment result;
the channel data extraction unit is used for carrying out RGB three-color channel acquisition by combining the analysis result of the ROI region dividing unit, reserving a G channel, and generating an original rPPG signal by calculating the average pixel value of the G channel;
the rPPG signal preprocessing unit is used for separating the hand-region contour from the background through the Canny edge detection algorithm to obtain a background ROI, and correcting the rPPG signal of the hand ROI through the change of the brightness of the background ROI.
Further, the rPPG signal segmentation and waveform selection module includes a single-period segmentation unit, an rPPG waveform screening unit, and a box-plot detection processing unit:
the single-period segmentation unit is used for extracting all zero crossing points in the descending period in the original rPPG signal by combining the analysis result of the information data preprocessing module, taking the minimum value between two adjacent zero crossing points as the starting point of one period, and carrying out segmentation processing on the original rPPG signal;
the rPPG waveform screening unit is used for acquiring sampling point data according to the normal pulse rate range value of the human body and carrying out preliminary screening on the analysis result of the single-period segmentation unit according to the acquired result;
the box-plot detection unit is used for carrying out box-plot detection by combining the analysis result of the rPPG waveform screening unit, taking the number of sampling points of each single-period waveform as the screening basis to judge whether each preliminarily screened single-period waveform is retained.
Further, the two-channel feature fusion data prediction module comprises a two-channel feature fusion unit and a training two-channel feature fusion unit:
the dual-channel feature fusion unit is used for combining the analysis result of the rPPG signal segmentation and waveform selection module, sending the rPPG signal into the FNN multi-layer perceptron branch for feature extraction, and combining the extracted features through a multi-layer structure;
the training double-channel feature fusion unit is used for combining the analysis result of the rPPG signal segmentation and waveform selection module, and sending the rPPG signal to the CNN convolutional neural network module for multi-view feature extraction.
Further, the rPPG signal feature extraction and data prediction module includes a feature fusion unit and a data prediction unit:
the feature fusion unit is used for fusing the analysis results of the two-channel feature fusion unit and the training two-channel feature fusion unit;
the data prediction unit is used for carrying out data prediction by combining the analysis result of the feature fusion unit.
The invention provides a non-contact measurement method based on rPPG that is non-invasive, portable, and universal: the subtle color changes of the skin surface are captured through the camera, making measurement convenient. Video of the palm area is collected through the rear camera of an ordinary smartphone for data prediction, so a user can monitor detection data in daily life without special equipment or the support of a medical institution. Because no body-surface contact is required during measurement, interference to the patient is reduced, and the method is better suited to special use environments, making it more applicable and convenient.
Drawings
FIG. 1 is a schematic flow chart of an intelligent detection method based on rPPG technology;
FIG. 2 is a schematic diagram of an intelligent detection system based on rPPG technology according to the present invention;
fig. 3 is a schematic diagram of the location information of the key points of the hand region in the intelligent detection method based on the rPPG technology;
FIG. 4 is a schematic diagram of a dual-channel feature fusion neural network of an intelligent detection method based on rPPG technology;
fig. 5 is a schematic diagram of a data prediction result in the intelligent detection method based on the rPPG technology.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1: referring to fig. 1, in this embodiment:
an intelligent detection method based on an rPPG technology, the method comprising the following steps:
s1, acquiring palm position information of a user in real time through a camera, dividing a region of interest (ROI) by combining the acquired palm position information of the user, extracting an image G channel signal in a divided region, and preprocessing an average pixel value in the G channel signal as an original rPPG signal;
the method in S1 comprises the following steps:
step 1001, acquiring video of the user's palm in real time through a camera, storing each frame of the video, and recording the frames as a set A,
A = (A_1, A_2, A_3, ..., A_n),
where A_n denotes the n-th frame image and n denotes the total number of frames in the acquired video;
step 1002, randomly extracting one frame of image to divide the ROI, wherein 21 key points in the hand area in the nth frame of image are marked by an image recognition technology (as shown in figure 3),
taking the center of the junction between the palm and the wrist as the first key point, marked key point 0; taking key point 0 as the origin and, with the origin as reference, constructing a first plane rectangular coordinate system at unit-length intervals; marking the corresponding key points of the hand area in the n-th frame image in the first plane rectangular coordinate system and numbering them in order;
step 1003, calculating, for each key point, the slope of the line segment it forms with the origin and the distance between the corresponding key point and the origin in the first plane rectangular coordinate system, and combining the results for the corresponding key points into a set B,
B = {[(X_{A(n)}^{0,1}, D_{A(n)}^{0,1}), (X_{A(n)}^{0,2}, D_{A(n)}^{0,2}), ..., (X_{A(n)}^{0,20}, D_{A(n)}^{0,20})]},
where X_{A(n)}^{0,20} denotes the slope of the line segment formed by key point 20 and key point 0, and D_{A(n)}^{0,20} denotes the distance between key point 20 and key point 0 in the first plane rectangular coordinate system,
with X_{A(n)}^{0,20} = (y_20 - y_0)/(x_20 - x_0) and D_{A(n)}^{0,20} = [(y_20 - y_0)^2 + (x_20 - x_0)^2]^{1/2}.
Step 1004, repeating steps 1002 to 1003 to obtain, for each frame of the acquired video, the slope of the line segment formed by each key point and the origin and the distance between the corresponding key point and the origin in the first plane rectangular coordinate system, then sequentially computing the deviation of the key-point positions in each frame from the standard positions, recorded as a set C,
C = (C_1, C_2, C_3, ..., C_n),
where C_n denotes the deviation of the key-point positions in the n-th frame image from the standard positions,
C_n = α·Σ_{a=1}^{20} |X_{A(n)}^{0,a} - X_a^{standard}|/20 + β·Σ_{a=1}^{20} |D_{A(n)}^{0,a} - D_a^{standard}|/20,
where α and β are proportionality coefficients preset in the database, X_{A(n)}^{0,a} is the slope of the line segment formed by key point a and key point 0 in the n-th frame image, X_a^{standard} is the database-preset standard slope for the segment between key point a and key point 0, D_{A(n)}^{0,a} is the distance between key point a and key point 0 in the first plane rectangular coordinate system, and D_a^{standard} is the database-preset standard distance between key point a and key point 0 in the first plane rectangular coordinate system;
step 1005, using an image corresponding to the minimum value of the difference condition in the set C as an optimal image in the current acquired video, matching the optimal image with an image in a Hand Landmark model, and positioning 21 key points corresponding to a Hand region in the model in the matching result in the optimal image;
step 1006, reading the G-channel pixel mean in the optimal image and using it as the original rPPG signal, separating the hand-region contour from the background with the Canny edge detection algorithm to obtain a background ROI, and correcting the rPPG signal of the hand ROI through the change of the brightness of the background ROI, where the brightness change of the background ROI is computed as
I(t) = [Σ_{c=1}^{w} Σ_{d=1}^{h} G_v(c, d, t)]/s,
where I(t) denotes the change of illumination intensity with time t, G_v(c, d, t) denotes the green-channel value at time t of the pixel at abscissa c and ordinate d (in pixel pitches) within the background ROI, w denotes the pixel width of the background ROI, h the pixel height, and s the total number of pixels in the background ROI.
S2, performing targeted single-period segmentation on the preprocessed rPPG signal, performing preliminary screening on the segmentation results against the standard human pulse rate, and performing base detection processing on the screening results;
the method in S2 comprises the steps of:
step 2001, acquiring a preprocessed rPPG signal, constructing a second plane rectangular coordinate system by taking a point o1 as an origin, time as an abscissa and amplitude as an ordinate, and mapping the preprocessed rPPG signal into the second plane rectangular coordinate system;
step 2002, marking in the second planar rectangular coordinate system all zero crossing points in the descending periods of the preprocessed rPPG signal, pairing adjacent marked points in sequence, taking the two marked points of each pair as interval endpoints, and extracting the minimum value of the rPPG signal within each interval as a segmentation point for rPPG signal segmentation;
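The segmentation of step 2002 can be sketched as follows (illustrative only; a zero-mean signal and NumPy are assumed): falling-edge zero crossings bound each search interval, and the minimum inside each interval becomes a segmentation point.

```python
import numpy as np

def segment_single_periods(sig):
    """Split a zero-mean rPPG signal into single-period segments.

    Falling-edge zero crossings (sign change + to -) bound each
    search interval; the minimum inside each interval is taken as a
    segmentation point, as described in step 2002.
    """
    sig = np.asarray(sig, dtype=float)
    # indices i where sig[i] >= 0 and sig[i+1] < 0 (descending crossings)
    falling = np.where((sig[:-1] >= 0) & (sig[1:] < 0))[0]
    cuts = []
    for a, b in zip(falling[:-1], falling[1:]):
        cuts.append(a + int(np.argmin(sig[a:b + 1])))
    # slice the signal at the segmentation points
    return [sig[i:j] for i, j in zip(cuts[:-1], cuts[1:])]
```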
step 2003, repeating step 2002 in a loop to divide the preprocessed rPPG signal into a plurality of single-period signal waveforms, performing preliminary screening on the single-period signal waveforms against the standard human pulse rate range, and, based on the preliminary screening result, sequentially performing waveform calibration on the single-period signal waveforms by calculation, with the expression:
P_calibration = {[Q_1 - k(Q_3 - Q_1) - r][Q_3 + k(Q_3 - Q_1) - r]}·ξ·G_abnormal,
wherein Q_1 represents the lower quartile of the sampling-point data of all single-period waveforms, Q_3 represents the upper quartile of the sampling-point data of all single-period waveforms, k represents an anomaly coefficient, which is a database preset value, r represents the number of sampling points, ξ represents a proportionality coefficient, which is a database preset value, and G_abnormal represents the total number of abnormal sampling points in the single-period signal waveform; if [Q_1 - k(Q_3 - Q_1) - r] < 0 and [Q_3 + k(Q_3 - Q_1) - r] > 0, then [Q_1 - k(Q_3 - Q_1) - r][Q_3 + k(Q_3 - Q_1) - r] = 1, and 0 otherwise.
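A minimal sketch of the calibration expression above, implemented literally as written (the sampling count r is compared against the quartile fences); the defaults k = 1.5 and ξ = 1.0 stand in for the database preset anomaly and proportionality coefficients:

```python
import numpy as np

def waveform_calibration(waveform, k=1.5, xi=1.0):
    """Quartile-based waveform check from step 2003.

    k and xi stand in for the database preset anomaly and
    proportionality coefficients (illustrative defaults). Returns the
    calibration score P: nonzero only when the sampling count r falls
    inside the fence range [Q1 - k*IQR, Q3 + k*IQR].
    """
    wf = np.asarray(waveform, dtype=float)
    r = wf.size                        # number of sampling points
    q1, q3 = np.percentile(wf, [25, 75])
    iqr = q3 - q1
    low, high = q1 - k * iqr, q3 + k * iqr
    inside = 1 if (low - r < 0 and high - r > 0) else 0
    g_abn = int(np.sum((wf < low) | (wf > high)))  # abnormal samples
    return inside * xi * g_abn
```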
S3, extracting characteristic values from the rPPG signal after base detection processing based on the analysis result of S2, and training network parameters through an Adam optimizer to construct a two-channel feature fusion data prediction model;
the method in S3 comprises the following steps:
step 3001, training network parameters through an Adam optimizer, taking a linear rectification unit as the activation function, root mean square error as the loss function, and mean absolute error as the evaluation index, setting the learning rate to 0.001, shuffling the order of the rPPG signals obtained after data preprocessing, and dividing them into a training set and a test set;
step 3002, sending the signals of the input model into a branch of the FNN multi-layer perceptron for feature extraction;
step 3003, sending the signals of the input model to a CNN convolutional neural network module for multi-view feature extraction;
step 3004, fusing the output features of the branches of step 3002 and step 3003, and predicting data through two fully connected layers.
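The loss function, evaluation index, and data split named in step 3001 can be sketched as follows (NumPy-based; the 80/20 split ratio and the random seed are assumptions):

```python
import numpy as np

def rmse(pred, target):
    """Root mean square error: the loss function named in step 3001."""
    return float(np.sqrt(np.mean((np.asarray(pred, float) - np.asarray(target, float)) ** 2)))

def mae(pred, target):
    """Mean absolute error: the evaluation index named in step 3001."""
    return float(np.mean(np.abs(np.asarray(pred, float) - np.asarray(target, float))))

def shuffle_split(signals, train_ratio=0.8, seed=0):
    """Shuffle the rPPG samples and split into training and test sets
    (the 80/20 ratio and the fixed seed are illustrative assumptions)."""
    idx = np.random.default_rng(seed).permutation(len(signals))
    cut = int(train_ratio * len(signals))
    return [signals[i] for i in idx[:cut]], [signals[i] for i in idx[cut:]]
```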
S4, acquiring current intensive care unit patient information in real time, carrying out data training on the information through a two-channel feature fusion data prediction model, obtaining a trained two-channel feature fusion data prediction model, inputting a preprocessed signal into the trained two-channel feature fusion data prediction model for prediction, and further obtaining a feedback result.
The method in S4 collects current intensive care unit patient information in real time, performs data training on the information through the two-channel feature fusion data prediction model to obtain a trained two-channel feature fusion data prediction model, and inputs the preprocessed signal into the trained two-channel feature fusion data prediction model for prediction, thereby obtaining a feedback result.
In this embodiment: an intelligent detection system (shown in fig. 2) based on rPPG technology is disclosed, which is used for realizing the specific scheme content of the method.
Example 2: when constructing and training the two-channel feature fusion neural network, an Adam optimizer is used to train the network parameters, a linear rectification unit is used as the activation function, root mean square error as the loss function, and mean absolute error as the evaluation index; the learning rate is set to 0.001, and the rPPG signals obtained after data preprocessing are shuffled and divided into training and test sets for subsequent training and prediction of the model (shown in fig. 4). The two-channel feature fusion neural network is specifically constructed as follows:
step S4.1: an input layer is set to receive data, and the signal input to the model is sent to the FNN multi-layer perceptron branch for feature extraction; the FNN consists entirely of fully connected layers and combines the input features through a multi-layer structure to mine the strong correlation between the input features and blood pressure; nine hidden layers mine and extract the deep features in the input signal, the final output layer contains 160 nodes, and the vector containing the signal features is finally sent to the feature fusion module;
step S4.2: the signal input to the model is simultaneously sent to the CNN convolutional neural network module for multi-view feature extraction; this module takes an AlexNet-style network as its backbone, and the network structure contains 9 layers in total, the first 8 consisting of convolutional layers, which extract features from the signal, and pooling layers, which perform max pooling to reduce the feature map size and the computational complexity; the last layer is a flatten layer, which flattens the feature vector into one-dimensional data and sends it to the feature fusion module;
step S4.3: finally, the output features of the two branches are fused and blood pressure is predicted through two fully connected layers; training data is input into the two-channel feature fusion blood pressure prediction model for training to obtain a trained two-channel feature fusion blood pressure prediction model; data extracted from the MIMIC intensive care database, which records intensive care unit patient information including blood pressure and pulse wave signals, is sent into the model for training, and the predicted diastolic and systolic pressures show good agreement with the true values (shown in fig. 5).
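As an illustrative sketch only (not the patented implementation), the two-branch architecture of steps S4.1 to S4.3 could be assembled in PyTorch as follows. The FNN depth (nine hidden layers, 160-node output), the AlexNet-style convolution/pooling stack ending in a flatten layer, and the two fully connected fusion layers follow the description; the layer widths, kernel sizes, and the 256-sample input length are assumptions:

```python
import torch
import torch.nn as nn

class DualChannelBPNet(nn.Module):
    """Sketch of the two-branch blood-pressure model from steps S4.1-S4.3."""

    def __init__(self, sig_len=256):
        super().__init__()
        # FNN branch: nine hidden fully connected layers, 160-d output
        widths = [sig_len] + [128] * 9 + [160]
        fnn = []
        for a, b in zip(widths[:-1], widths[1:]):
            fnn += [nn.Linear(a, b), nn.ReLU()]
        self.fnn = nn.Sequential(*fnn)
        # CNN branch: 1-D conv/pool stack ending in a flatten layer
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 16, 7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, 5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 32, 3, padding=1), nn.ReLU(), nn.MaxPool1d(2),
            nn.Flatten(),
        )
        cnn_out = 32 * (sig_len // 8)  # length halved by each of 3 pools
        # fusion: two fully connected layers predicting SBP and DBP
        self.head = nn.Sequential(
            nn.Linear(160 + cnn_out, 64), nn.ReLU(), nn.Linear(64, 2),
        )

    def forward(self, x):              # x: (batch, sig_len)
        f1 = self.fnn(x)
        f2 = self.cnn(x.unsqueeze(1))  # add channel dim for Conv1d
        return self.head(torch.cat([f1, f2], dim=1))
```

The two outputs correspond to systolic and diastolic pressure; training with Adam, RMSE loss, and an MAE metric follows the configuration of step 3001.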
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Finally, it should be noted that: the foregoing description is only a preferred embodiment of the present invention, and the present invention is not limited thereto, but it is to be understood that modifications and equivalents of some of the technical features described in the foregoing embodiments may be made by those skilled in the art, although the present invention has been described in detail with reference to the foregoing embodiments. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. An intelligent detection method based on rPPG technology, characterized by comprising the following steps:
s1, acquiring palm position information of a user in real time through a camera, dividing a region of interest (ROI) by combining the acquired palm position information of the user, extracting an image G channel signal in a divided region, and preprocessing an average pixel value in the G channel signal as an original rPPG signal;
s2, performing targeted single-period segmentation on the preprocessed rPPG signal, performing preliminary screening on the segmentation results against the standard human pulse rate, and performing base detection processing on the screening results;
s3, extracting characteristic values from the rPPG signal after base detection processing based on the analysis result of S2, and training network parameters through an Adam optimizer to construct a two-channel feature fusion data prediction model;
s4, acquiring current intensive care unit patient information in real time, carrying out data training on the information through a two-channel feature fusion data prediction model, obtaining a trained two-channel feature fusion data prediction model, inputting a preprocessed signal into the trained two-channel feature fusion data prediction model for prediction, and further obtaining a feedback result.
2. The method for intelligent detection based on rPPG technology according to claim 1, wherein the method in S1 comprises the steps of:
step 1001, acquiring the active video of the palm part of the user in real time through a camera, storing each frame of image in the video, recording as a set A,
A=(A 1 ,A 2 ,A 3 ,...,A n ),
wherein A is n Representing an nth frame image, wherein n represents the total frame number of the acquired video;
step 1002, randomly extracting one frame of image to divide the ROI, wherein 21 key points in the hand area in the nth frame of image are marked by an image recognition technology,
taking the center of the junction between the palm and the wrist as a first key point, marking as a key point 0, taking the key point 0 as an original point, taking the original point as a reference point, constructing a first plane rectangular coordinate system at intervals of unit length, marking corresponding key points in a hand area in an nth frame of image in the first plane rectangular coordinate system, and carrying out digital marking according to the sequence;
step 1003, respectively calculating the line segment slope values formed by each key point and the origin point and the distance values between the corresponding key point and the origin point in the first plane rectangular coordinate system, combining the analysis results of the corresponding key point and the origin point to generate a set B,
B = {[(X^{A(n)}_{0,1}, D^{A(n)}_{0,1}), (X^{A(n)}_{0,2}, D^{A(n)}_{0,2}), ..., (X^{A(n)}_{0,20}, D^{A(n)}_{0,20})]},
wherein X^{A(n)}_{0,20} represents the slope of the line segment formed by the key point numbered 20 and the key point numbered 0, and D^{A(n)}_{0,20} represents the distance between the key point numbered 20 and the key point numbered 0 in the first planar rectangular coordinate system,
wherein X^{A(n)}_{0,20} = (y_20 - y_0)/(x_20 - x_0) and D^{A(n)}_{0,20} = [(y_20 - y_0)^2 + (x_20 - x_0)^2]^{1/2};
Step 1004, repeating the steps 1002 to 1003 to obtain a line segment slope value formed by the corresponding key point and the origin point of each frame image in the acquired video and a distance value between the corresponding key point and the origin point in the first plane rectangular coordinate system, sequentially calculating the difference condition between the position of the key point in each frame image and the standard position, and marking the difference condition as a set C,
C=(C 1 ,C 2 ,C 3 ,...,C n ),
wherein C is n Representing the difference between the positions of the key points in the nth frame image relative to the standard positions,
wherein C_n = α·Σ_{a=1}^{20} |X^{A(n)}_{0,a} - X^{standard}_{a}|/20 + β·Σ_{a=1}^{20} |D^{A(n)}_{0,a} - D^{standard}_{a}|/20,
wherein α and β both represent proportionality coefficients, which are database preset values; X^{A(n)}_{0,a} represents the slope of the line segment formed by the key point numbered a and the key point numbered 0 in the n-th frame image; X^{standard}_{a} represents the standard slope value of that line segment, a database preset value; D^{A(n)}_{0,a} represents the distance between the key point numbered a and the key point numbered 0 in the first planar rectangular coordinate system; D^{standard}_{a} represents the corresponding standard distance value, also a database preset value;
step 1005, taking the image corresponding to the minimum difference value in the set C as the optimal image of the currently acquired video, matching the optimal image against the image in the Hand Landmark model, and locating in the optimal image the 21 key points of the hand region given by the model in the matching result;
step 1006, reading the G channel pixel mean value in the optimal image and using it as the original rPPG signal, separating the hand region contour from the background by the Canny edge detection algorithm to obtain a background ROI, and correcting the rPPG signal of the hand ROI by the change in brightness of the background ROI, wherein the brightness change of the background ROI is calculated as I(t) = [Σ_{c=1}^{w} Σ_{d=1}^{h} G_v(c,d,t)]/s,
wherein I(t) represents the variation of the illumination intensity with time t, G_v(c,d,t) represents the green-channel value at time t of the background ROI pixel with abscissa c and ordinate d (in pixels), w represents the pixel width of the background ROI, h represents the pixel height of the background ROI, and s represents the total number of pixels in the background ROI.
3. The method for intelligent detection based on rPPG technology according to claim 2, wherein the method in S2 comprises the steps of:
step 2001, acquiring a preprocessed rPPG signal, constructing a second plane rectangular coordinate system by taking a point o1 as an origin, time as an abscissa and amplitude as an ordinate, and mapping the preprocessed rPPG signal into the second plane rectangular coordinate system;
step 2002, marking in the second planar rectangular coordinate system all zero crossing points in the descending periods of the preprocessed rPPG signal, pairing adjacent marked points in sequence, taking the two marked points of each pair as interval endpoints, and extracting the minimum value of the rPPG signal within each interval as a segmentation point for rPPG signal segmentation;
step 2003, repeating step 2002 in a loop to divide the preprocessed rPPG signal into a plurality of single-period signal waveforms, performing preliminary screening on the single-period signal waveforms against the standard human pulse rate range, and, based on the preliminary screening result, sequentially performing waveform calibration on the single-period signal waveforms by calculation, with the expression:
P_calibration = {[Q_1 - k(Q_3 - Q_1) - r][Q_3 + k(Q_3 - Q_1) - r]}·ξ·G_abnormal,
wherein Q_1 represents the lower quartile of the sampling-point data of all single-period waveforms, Q_3 represents the upper quartile of the sampling-point data of all single-period waveforms, k represents an anomaly coefficient, which is a database preset value, r represents the number of sampling points, ξ represents a proportionality coefficient, which is a database preset value, and G_abnormal represents the total number of abnormal sampling points in the single-period signal waveform; if [Q_1 - k(Q_3 - Q_1) - r] < 0 and [Q_3 + k(Q_3 - Q_1) - r] > 0, then [Q_1 - k(Q_3 - Q_1) - r][Q_3 + k(Q_3 - Q_1) - r] = 1, and 0 otherwise.
4. The method for intelligent detection based on rPPG technology according to claim 3, wherein the method in S3 comprises the steps of:
step 3001, training network parameters through an Adam optimizer, taking a linear rectification unit as the activation function, root mean square error as the loss function, and mean absolute error as the evaluation index, setting the learning rate to δ, shuffling the order of the rPPG signals obtained after data preprocessing, and dividing them into a training set and a test set;
step 3002, sending the signals of the input model into a branch of the FNN multi-layer perceptron for feature extraction;
step 3003, sending the signals of the input model to a CNN convolutional neural network module for multi-view feature extraction;
step 3004, fusing the output features of the branches of step 3002 and step 3003, and predicting data through two full-connection layers.
5. The method for intelligent detection based on rPPG technology according to claim 4, wherein the method in S4 collects current intensive care unit patient information in real time, performs data training on the information through the two-channel feature fusion data prediction model to obtain a trained two-channel feature fusion data prediction model, and inputs the preprocessed signal into the trained two-channel feature fusion data prediction model for prediction, thereby obtaining a feedback result.
6. An intelligent detection system based on rPPG technology, characterized by comprising the following modules:
information data preprocessing module: the information data preprocessing module is used for acquiring palm position information of a user in real time through a camera, carrying out ROI region division by combining the acquired palm position information of the user, extracting an image G channel signal in the divided region, and preprocessing an average pixel value in the G channel signal as an original rPPG signal;
the rPPG signal segmentation and waveform selection module: the rPPG signal segmentation and waveform selection module is used for carrying out targeted monocycle segmentation on the preprocessed rPPG signal, carrying out monocycle signal segmentation according to segmentation results, carrying out rPPG waveform screening on the segmentation results, and carrying out base detection processing by combining the screening results;
the two-channel characteristic fusion data prediction module is as follows: the dual-channel feature fusion data prediction module is used for constructing a dual-channel feature fusion data prediction model by combining the analysis results of the rPPG signal segmentation and waveform selection module;
and the rPPG signal characteristic extraction and data prediction module is as follows: the rPPG signal characteristic extraction and data prediction module is used for training by inputting training data into the two-channel characteristic fusion data prediction model, obtaining a trained two-channel characteristic fusion data prediction model, inputting a preprocessed signal into the trained two-channel characteristic fusion data prediction model for prediction, and further obtaining a feedback result.
7. The rPPG technology-based intelligent detection system of claim 6, wherein the information data preprocessing module comprises an image acquisition unit, an ROI region division unit, a channel data extraction unit, and an rPPG signal preprocessing unit:
the image acquisition unit is used for acquiring the moving video of the palm part of the user in real time through the camera and extracting each frame of image in the acquired video;
the ROI region dividing unit is used for judging hand landmarks of each acquired frame of image by combining the analysis result of the image acquisition unit, and positioning 21 key points of the hand region according to the judgment result;
the channel data extraction unit is used for carrying out RGB three-color channel acquisition by combining the analysis result of the ROI region dividing unit, reserving a G channel, and generating an original rPPG signal by calculating the average pixel value of the G channel;
the rPPG signal preprocessing unit is used for separating the hand region contour from the background through the Canny edge detection algorithm to obtain a background ROI, and correcting the rPPG signal of the hand ROI through the change in brightness of the background ROI.
8. The rPPG technology-based intelligent detection system of claim 7, wherein the rPPG signal segmentation and waveform selection module comprises a single-period segmentation unit, an rPPG waveform screening unit, and a base detection unit:
the single-period segmentation unit is used for extracting all zero crossing points in the descending period in the original rPPG signal by combining the analysis result of the information data preprocessing module, taking the minimum value between two adjacent zero crossing points as the starting point of one period, and carrying out segmentation processing on the original rPPG signal;
the rPPG waveform screening unit is used for acquiring sampling point data according to the normal pulse rate range value of the human body and carrying out preliminary screening on the analysis result of the single-period segmentation unit according to the acquired result;
the base detection unit is used for carrying out base detection by combining the analysis result of the rPPG waveform screening unit, and judging whether the preliminarily screened monocycle waveform is reserved or not by taking the number of sampling points of each monocycle waveform as a dividing basis.
9. The rPPG technology-based intelligent detection system of claim 8, wherein the two-channel feature fusion data prediction module comprises a two-channel feature fusion unit and a training two-channel feature fusion unit:
the dual-channel feature fusion unit is used for combining the analysis result of the rPPG signal segmentation and waveform selection module, sending the rPPG signal into the FNN multi-layer perceptron branch for feature extraction, and combining the extracted features through a multi-layer structure;
the training double-channel feature fusion unit is used for combining the analysis result of the rPPG signal segmentation and waveform selection module, and sending the rPPG signal to the CNN convolutional neural network module for multi-view feature extraction.
10. The rPPG technology-based intelligent detection system of claim 9, wherein the rPPG signal feature extraction and data prediction module comprises a feature fusion unit and a data prediction unit:
the feature fusion unit is used for fusing the analysis results of the two-channel feature fusion unit and the training two-channel feature fusion unit;
the data prediction unit is used for carrying out data prediction by combining the analysis result of the feature fusion unit.
CN202311161515.5A 2023-09-11 2023-09-11 Intelligent detection system and method based on rPPG technology Active CN116889388B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311161515.5A CN116889388B (en) 2023-09-11 2023-09-11 Intelligent detection system and method based on rPPG technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311161515.5A CN116889388B (en) 2023-09-11 2023-09-11 Intelligent detection system and method based on rPPG technology

Publications (2)

Publication Number Publication Date
CN116889388A true CN116889388A (en) 2023-10-17
CN116889388B CN116889388B (en) 2023-11-17

Family

ID=88315256

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311161515.5A Active CN116889388B (en) 2023-09-11 2023-09-11 Intelligent detection system and method based on rPPG technology

Country Status (1)

Country Link
CN (1) CN116889388B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5746698A (en) * 1995-09-28 1998-05-05 Nederlandse Organisatie Voor Toegepast-Natuurwetenschappelijk Onderzoek Tno Method and device for determining brachial arterial pressure wave on the basis of nonivasively measured finger blood pressure wave
US20140148664A1 (en) * 2012-02-13 2014-05-29 Marina Borisovna Girina Device and method for assessing regional blood circulation
CN103932686A (en) * 2014-04-22 2014-07-23 北京印刷学院 Method and device for extracting pulse condition signal
US9307928B1 (en) * 2010-03-30 2016-04-12 Masimo Corporation Plethysmographic respiration processor
US20180042486A1 (en) * 2015-03-30 2018-02-15 Tohoku University Biological information measuring apparatus and biological information measuring method
CN109036552A (en) * 2018-07-19 2018-12-18 上海中医药大学 Tcm diagnosis terminal and its storage medium
US20200064444A1 (en) * 2015-07-17 2020-02-27 Origin Wireless, Inc. Method, apparatus, and system for human identification based on human radio biometric information
CN111714144A (en) * 2020-07-24 2020-09-29 长春理工大学 Mental stress analysis method based on video non-contact measurement
CN111839489A (en) * 2020-05-26 2020-10-30 合肥工业大学 Non-contact physiological and psychological health detection system
WO2021184620A1 (en) * 2020-03-19 2021-09-23 南京昊眼晶睛智能科技有限公司 Camera-based non-contact heart rate and body temperature measurement method
CN113556972A (en) * 2019-02-13 2021-10-26 Viavi科技有限公司 Baseline correction and heartbeat curve extraction
CN114366090A (en) * 2022-01-13 2022-04-19 湖南龙罡智能科技有限公司 Blood component detection method integrating multiple measurement mechanisms
WO2023141404A2 (en) * 2022-01-20 2023-07-27 Jeffrey Thomas Loh Photoplethysmography-based blood pressure monitoring device


Non-Patent Citations (2)

Title
WEIMIN WU, YUNJIE SUN, ZHE LIN, ET AL: "A New LCL-Filter With In-Series Parallel Resonant Circuit for Single-Phase Grid-Tied Inverter", IEEE Transactions on Industrial Electronics, vol. 61, no. 9, XP011543653, DOI: 10.1109/TIE.2013.2293703 *
LI Binglin et al.: "Heart rate measurement based on image photoplethysmography", Journal of Changchun University of Science and Technology (Natural Science Edition), vol. 45, no. 3 *

Also Published As

Publication number Publication date
CN116889388B (en) 2023-11-17

Similar Documents

Publication Publication Date Title
Fan et al. Robust blood pressure estimation using an RGB camera
CN103908236B (en) A kind of automatic blood pressure measurement system
CN112914527B (en) Arterial blood pressure signal acquisition method based on pulse wave photoplethysmography
CN111728602A (en) Non-contact blood pressure measuring device based on PPG
CN113017630B (en) Visual perception emotion recognition method
CN106236049A (en) Blood pressure measuring method based on video image
CN112001122B (en) Non-contact physiological signal measurement method based on end-to-end generation countermeasure network
CN106793962A (en) Method and apparatus for continuously estimating human blood-pressure using video image
Casado et al. Face2PPG: An unsupervised pipeline for blood volume pulse extraction from faces
Gudi et al. Efficient real-time camera based estimation of heart rate and its variability
Hsu et al. A deep learning framework for heart rate estimation from facial videos
CN116109818B (en) Traditional Chinese medicine pulse condition distinguishing system, method and device based on facial video
CN116012916A (en) Remote photoplethysmograph signal and heart rate detection model construction method and detection method
Bousefsaf et al. iPPG 2 cPPG: reconstructing contact from imaging photoplethysmographic signals using U-Net architectures
CN114557685B (en) Non-contact type exercise robust heart rate measurement method and measurement device
Wang et al. Cuff-less blood pressure estimation via small convolutional neural networks
CN116889388B (en) Intelligent detection system and method based on rPPG technology
CN113456042A (en) Non-contact facial blood pressure measuring method based on 3D CNN
CN116758619A (en) Facial video-based emotion classification method, system, storage medium and equipment
Ben Salah et al. Contactless heart rate estimation from facial video using skin detection and multi-resolution analysis
Hansen et al. Real-time estimation of heart rate in situations characterized by dynamic illumination using remote photoplethysmography
Kuang et al. Remote photoplethysmography signals enhancement based on generative adversarial networks
Suriani et al. Non-contact Facial based Vital Sign Estimation using Convolutional Neural Network Approach
CN114246570A (en) Near-infrared heart rate detection method with peak signal-to-noise ratio and Pearson correlation coefficient fused
KR20220123376A (en) Methods and systems for determining cardiovascular parameters

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant