GB2605401A - System and method of estimating vital signs of user using artificial intelligence - Google Patents


Info

Publication number
GB2605401A
Authority
GB
United Kingdom
Prior art keywords
user
video
face
vital signs
estimating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
GB2104538.0A
Other versions
GB202104538D0 (en)
Inventor
Sehgal Nikhil
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vastmindz Ai Ltd
Original Assignee
Vastmindz Ai Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vastmindz Ai Ltd filed Critical Vastmindz Ai Ltd
Priority to GB2104538.0A priority Critical patent/GB2605401A/en
Publication of GB202104538D0 publication Critical patent/GB202104538D0/en
Publication of GB2605401A publication Critical patent/GB2605401A/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/0059Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B5/0077Devices for viewing the surface of the body, e.g. camera, magnifying lens
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/02Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B5/0205Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/02Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B5/024Detecting, measuring or recording pulse rate or heart rate
    • A61B5/02416Detecting, measuring or recording pulse rate or heart rate using photoplethysmograph signals, e.g. generated by infrared radiation
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/145Measuring characteristics of blood in vivo, e.g. gas concentration, pH value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid, cerebral tissue
    • A61B5/1455Measuring characteristics of blood in vivo, e.g. gas concentration, pH value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid, cerebral tissue using optical sensors, e.g. spectral photometrical oximeters
    • A61B5/14551Measuring characteristics of blood in vivo, e.g. gas concentration, pH value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid, cerebral tissue using optical sensors, e.g. spectral photometrical oximeters for measuring blood gases
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235Details of waveform analysis
    • A61B5/7264Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/50Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H40/00ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/60ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
    • G16H40/67ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for remote operation
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/30ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/08Detecting, measuring or recording devices for evaluating the respiratory organs
    • A61B5/0816Measuring devices for examining respiratory frequency
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30076Plethysmography
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30101Blood vessel; Artery; Vein; Vascular
    • G06T2207/30104Vascular flow; Blood flow; Perfusion

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • Biomedical Technology (AREA)
  • Pathology (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Physiology (AREA)
  • Surgery (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Cardiology (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • Artificial Intelligence (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Human Computer Interaction (AREA)
  • Pulmonology (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Optics & Photonics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Psychiatry (AREA)

Abstract

Disclosed is a system and processor-implemented method of estimating one or more vital signs of a user based on a video of the user by using artificial intelligence. It comprises extracting a plurality of face frames and one or more time stamps from the video of the user, determining at least one region of interest (ROI) in the plurality of face frames using a histogram of oriented gradients (HOG) and extracting a plurality of health indicator signals by detecting volumetric changes in a peripheral blood circulation in the at least one ROI based on a remote photoplethysmography (RPPG). One or more vital signs of the user based on the plurality of health indicator signals and the one or more time stamps may then be estimated.

Description

SYSTEM AND METHOD OF ESTIMATING VITAL SIGNS OF USER USING
ARTIFICIAL INTELLIGENCE
TECHNICAL FIELD
The present disclosure generally relates to remote medical diagnosis. More particularly, the present disclosure relates to systems and methods for estimating one or more vital signs of a user based on a video of the user by using artificial intelligence.
BACKGROUND
Currently, due to the COVID-19 pandemic, patients with COVID-19 typically experience symptoms such as fever, cough and shortness of breath, all of which can be quantitatively measured through physiological signs. Numerous medical research organizations have shown that an abnormally high pulse rate (greater than 100 beats per minute) or respiratory rate (greater than 30 respirations per minute), as well as an abnormally low oxygen saturation level (less than 94 percent), are consistent with features present in patients with severe viral infections. Providing an easy and effective way to measure these features on the go can therefore support wellbeing during this difficult time. Remote health consultations via phone or video call have become common due to the pandemic; however, existing remote health consultation techniques fail to measure health and wellness objectively and instead rely on subjective information or the completion of lengthy questionnaires that aim to give a diagnosis based on symptoms.
Moreover, vital signs such as heart rate, blood pressure and respiration rate are typically measured using equipment such as chest strap transmitters, strapless heart rate monitors and the like.
However, such equipment is not particularly accurate, is susceptible to noise and does not provide much detail. Additionally, such equipment cannot provide results instantly, within a few seconds. Also, conventional techniques for measuring vital signs require close access and direct physical contact with the body of a human subject, typically with the arm of the subject. This contact requires that the subject is compliant and aware that a measurement, such as a blood pressure measurement, is underway.
Therefore, in light of the foregoing discussion, there is a need to overcome the aforementioned drawbacks of existing techniques by providing a method and a system of estimating one or more vital signs of a user based on a video of the user by using artificial intelligence.
SUMMARY
The present disclosure seeks to provide a method of estimating one or more vital signs of a user based on a video of the user by using artificial intelligence. The present disclosure also seeks to provide a system of estimating one or more vital signs of a user based on a video of the user by using artificial intelligence. An aim of the present disclosure is to provide a solution that at least partially overcomes the problems encountered in the prior art by providing a non-invasive technique of estimating one or more vital signs of the user from the video of the user based on a remote photoplethysmography (RPPG) using artificial intelligence (AI) that facilitates fast and accurate remote estimation of the vital signs of the user.
In one aspect, an embodiment of the present disclosure provides a system for estimating one or more vital signs of a user based on a video of the user using artificial intelligence, the system comprising: - a video capture device associated with a computing device for capturing the video of the user; - a memory operatively coupled to the video capture device and configured to store a set of modules and the video of the user; and - a processor that executes the set of modules for estimating the one or more vital signs of the user based on the video of the user using artificial intelligence, the modules comprising: - a face frame extraction module for extracting a plurality of face frames and one or more time stamps from the video of the user; - a ROI determination module for determining at least one region of interest (ROI) in the plurality of face frames using a histogram of oriented gradients; - a signal extraction module for extracting a plurality of health indicator signals by detecting volumetric changes in a peripheral blood circulation in the at least one ROI based on a remote photoplethysmography (RPPG); and - a vital sign estimation module for estimating the one or more vital signs of the user based on the plurality of health indicator signals and the one or more time stamps.
In another aspect, the present disclosure provides a processor-implemented method of estimating one or more vital signs of a user based on a video of the user by using artificial intelligence, said method comprising: - extracting a plurality of face frames and one or more time stamps from the video of the user; - determining at least one region of interest (ROI) in the plurality of face frames using a histogram of oriented gradients (HOG); - extracting a plurality of health indicator signals by detecting volumetric changes in a peripheral blood circulation in the at least one ROI based on a remote photoplethysmography (RPPG); and - estimating the one or more vital signs of the user based on the plurality of health indicator signals and the one or more time stamps.
Embodiments of the present disclosure substantially eliminate or at least partially address the aforementioned problems in the prior art, and provide a non-invasive technique of estimating one or more vital signs of the user from the video of the user based on a remote photoplethysmography (RPPG) using artificial intelligence (AI) that facilitates fast and accurate remote estimation of the vital signs of the user.
Additional aspects, advantages, features and objects of the present disclosure would be made apparent from the drawings and the detailed description of the illustrative embodiments construed in conjunction with the appended claims that follow.
It will be appreciated that features of the present disclosure are susceptible to being combined in various combinations without departing from the scope of the present disclosure as defined by the appended claims.
BRIEF DESCRIPTION OF THE DRAWINGS
The summary above, as well as the following detailed description of illustrative embodiments, is better understood when read in conjunction with the appended drawings. For the purpose of illustrating the present disclosure, exemplary constructions of the disclosure are shown in the drawings. However, the present disclosure is not limited to specific methods and instrumentalities disclosed herein. Moreover, those skilled in the art will understand that the drawings are not to scale. Wherever possible, like elements have been indicated by identical numbers.
Embodiments of the present disclosure will now be described, by way of example only, with reference to the following diagrams wherein:
FIG. 1 illustrates steps of a processor-implemented method of estimating one or more vital signs of a user based on a video of the user by using artificial intelligence, in accordance with an embodiment of the present disclosure;
FIG. 2A depicts a schematic illustration of a system for determining one or more vital signs of a user based on a video of the user by using artificial intelligence, in accordance with an embodiment of the present disclosure;
FIG. 2B depicts a data flow across an RPPG library block scheme generated with a Visual Studio code map, in accordance with an embodiment;
FIG. 3 illustrates an exemplary user interface view depicting one or more face landmarks along with the estimated one or more vital signs rendered to a user via a user interface of the user device, in accordance with an exemplary scenario; and
FIG. 4 depicts another exemplary user interface view rendered to a user via a user interface of a user device, such as a mobile device, in accordance with another exemplary scenario.
In the accompanying drawings, an underlined number is employed to represent an item over which the underlined number is positioned or an item to which the underlined number is adjacent. A non-underlined number relates to an item identified by a line linking the non-underlined number to the item. When a number is non-underlined and accompanied by an associated arrow, the non-underlined number is used to identify a general item at which the arrow is pointing.
DETAILED DESCRIPTION OF EMBODIMENTS
The following detailed description illustrates embodiments of the present disclosure and ways in which they can be implemented. Although some modes of carrying out the present disclosure have been disclosed, those skilled in the art would recognize that other embodiments for carrying out or practicing the present disclosure are also possible.
In one aspect, an embodiment of the present disclosure provides a system for estimating one or more vital signs of a user based on a video of the user using artificial intelligence, the system comprising: - a video capture device associated with a computing device for capturing the video of the user; - a memory operatively coupled to the video capture device and configured to store a set of modules and the video of the user; and - a processor that executes the set of modules for estimating the one or more vital signs of the user based on the video of the user using artificial intelligence, the modules comprising: - a face frame extraction module for extracting a plurality of face frames and one or more time stamps from the video of the user; - a ROI determination module for determining at least one region of interest (ROI) in the plurality of face frames using a histogram of oriented gradients; - a signal extraction module for extracting a plurality of health indicator signals by detecting volumetric changes in a peripheral blood circulation in the at least one ROI based on a remote photoplethysmography (RPPG); and - a vital sign estimation module for estimating the one or more vital signs of the user based on the plurality of health indicator signals and the one or more time stamps.
In another aspect, the present disclosure provides a method of estimating one or more vital signs of a user based on a video of the user by using artificial intelligence, said method comprising: - extracting a plurality of face frames and one or more time stamps from the video of the user; - determining at least one region of interest (ROI) in the plurality of face frames using a histogram of oriented gradients (HOG); - extracting a plurality of health indicator signals by detecting volumetric changes in a peripheral blood circulation in the at least one ROI based on a remote photoplethysmography (RPPG); and - estimating the one or more vital signs of the user based on the plurality of health indicator signals and the one or more time stamps.
The present disclosure provides a processor-implemented method and system for estimating one or more vital signs of a user based on a video of the user by using artificial intelligence. In various embodiments, a plurality of health indicator signals are extracted from the video by detecting volumetric changes in a peripheral blood circulation in at least one region of interest based on a remote photoplethysmography (RPPG), and one or more vital signs are estimated based on the plurality of health indicator signals. The processor-implemented method of the present disclosure provides a non-invasive, accurate and fast technique of estimating the one or more vital signs of the user based on RPPG. Further, the processor-implemented method of the present disclosure provides an easy-to-use, entirely contactless, fast and cost-effective remote health and wellness solution that uses artificial intelligence to enable the user to keep track of their vital signs and relay the information associated with the vital signs to various patient clinical data management systems. Moreover, the processor-implemented method and system of the present disclosure integrate seamlessly with existing infrastructure, facilitating enhanced transparency at existing touchpoints, such as check-in pods or passport control, by implementing user-friendly health and wellness measures. Additionally, the processor-implemented method and system of the present disclosure provide users (such as, for example, air passengers and crew members) with the ability to conduct a quick and easy health awareness check from anywhere using a video capture device.
Also, the processor-implemented method and system of the present disclosure enable measuring the vital signs of a user at any time, including while on a call, and sending the data directly to a virtual physician or doctor during a consultation, without the need to rely on subjective information or to complete lengthy questionnaires that aim to give a diagnosis based on symptoms. Moreover, the processor-implemented method and system of the present disclosure facilitate the improvement of health and wellness remotely, support the wellness of an entire workforce remotely, make health and wellbeing assessments more accessible worldwide and increase self-awareness of potential health issues. Therefore, the present disclosure enables expedited identification of disease-related symptoms, such as those of COVID-19 or other pandemic diseases, and screening of individuals thereafter. Such expedited identification of symptoms allows authorities to actively monitor and control the spread of a pandemic, thereby allowing a better response to the pandemic. Furthermore, with faster screening times, persons infected with such diseases can be restricted from boarding airplanes or passenger ships, thereby allowing a reduction in CO2 emissions.
The method comprises extracting a plurality of face frames and one or more time stamps from the video of the user. In an embodiment, the video includes a real-time video of a face of the user captured, for instance, through a mobile phone of the user. In an embodiment, the plurality of face frames are detected using an OpenCV Haar classifier. Notably, the time stamps are used to improve the accuracy of image processing, to correct frames-per-second (FPS) problems and to improve the accuracy of the library.
Optionally, the video of the user is received from a video capture device and the video is analyzed for detecting the plurality of face frames and the one or more time stamps in the video. Examples of the video capture device include, but are not limited to, a camera, a video camera, a camcorder, a camera associated with a mobile device, and the like. In an embodiment, one or more predetermined configuration parameters are used to determine the plurality of face frames by detecting a face in the video.
The method comprises determining at least one region of interest (ROI) in the plurality of face frames using a histogram of oriented gradients (HOG). As used herein, the term "HOG" refers to a feature descriptor used in computer vision and image processing for the purpose of object detection. Notably, the HOG technique counts occurrences of gradient orientation in localized portions of an image. HOG is similar to edge orientation histograms, scale-invariant feature transform descriptors and shape contexts, but differs in that it is computed on a dense grid of uniformly spaced cells and uses overlapping local contrast normalization for improved accuracy. The HOG operates on local cells and is therefore largely invariant to geometric and photometric transformations; HOG is thus particularly suited for human detection in images.
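As a toy illustration of the descriptor (not the disclosed implementation), the per-cell orientation histogramming at the heart of HOG can be sketched in NumPy. The cell size, bin count and synthetic edge image are illustrative assumptions, and block normalization is omitted:

```python
import numpy as np

def hog_cells(image, cell=8, bins=9):
    """Magnitude-weighted gradient-orientation histograms per cell."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    # Unsigned orientation in [0, 180) degrees, as is conventional for HOG.
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    h, w = image.shape
    ny, nx = h // cell, w // cell
    hist = np.zeros((ny, nx, bins))
    for cy in range(ny):
        for cx in range(nx):
            sl = np.s_[cy * cell:(cy + 1) * cell, cx * cell:(cx + 1) * cell]
            b = (ang[sl] / (180.0 / bins)).astype(int) % bins
            # Vote each pixel's gradient magnitude into its orientation bin.
            np.add.at(hist[cy, cx], b.ravel(), mag[sl].ravel())
    return hist

# A vertical edge: gradients point horizontally, so bin 0 (~0 deg) dominates.
img = np.zeros((32, 32))
img[:, 16:] = 255.0
H = hog_cells(img)
```

Real HOG pipelines (e.g. the detector popularized by Dalal and Triggs, and used in dlib's face detector) additionally normalize overlapping blocks of cells, which is what gives the descriptor its robustness to illumination changes.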
Optionally, determining the at least one ROI comprises generating at least one cropped face image; extracting, using the HOG, one or more HOG features from the at least one cropped face image, wherein the one or more HOG features comprise HOG descriptors for object recognition by a machine learning model; and determining at least one face landmark, a corrected face location and the at least one ROI in the corrected face location, by the machine learning model using the one or more HOG features. In an embodiment, a local binary feature algorithm is used to localize the at least one ROI.
The method comprises extracting a plurality of health indicator signals by detecting volumetric changes in a peripheral blood circulation in the at least one ROI based on a remote photoplethysmography (RPPG). The term "RPPG" as used herein refers to a simple optical technique used to detect volumetric changes in blood in peripheral circulation. RPPG is a low-cost and non-invasive method that makes measurements at the surface of the skin by detecting volumetric changes in a peripheral blood circulation based on analyzing the at least one ROI. In an embodiment, subtle changes in light absorption from the skin are measured using the RPPG technology and the plurality of health indicator signals are extracted based on the measured subtle changes in light absorption. In an embodiment, one or more pulse color changes in the skin of the user are detected using a multi-wave RGB camera and RPPG. The pulse color changes are encoded within the changes in pixel values of the video for extracting the health indicator signals. The RPPG technique has the benefit of being a low-cost, simple and portable technology.
Optionally, extracting the plurality of health indicator signals comprises determining the volumetric changes in blood in peripheral circulation in the at least one ROI, and generating the plurality of health indicator signals comprising a plurality of blue-green-red (BGR) channel signals by processing at least one of: the corrected face location, the at least one ROI, and the plurality of face landmarks based on the volumetric changes in blood in peripheral circulation.
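A minimal sketch of this extraction step on synthetic data: each face-ROI crop is spatially averaged per colour channel, so a faint pulse modulation of the green channel survives as a one-dimensional trace. The frame contents, pulse frequency and amplitudes below are assumptions for illustration only:

```python
import numpy as np

fps, seconds, pulse_hz = 30.0, 10, 1.2
t = np.arange(int(fps * seconds)) / fps

# Synthetic stack of 20x20 face-ROI crops (B, G, R channels) whose green
# channel carries a small pulse-driven oscillation, mimicking the
# blood-volume changes RPPG measures.
frames = np.full((t.size, 20, 20, 3), 120.0)
frames[..., 1] += 2.0 * np.sin(2 * np.pi * pulse_hz * t)[:, None, None]

# Spatial averaging collapses each ROI crop to one 3-channel sample,
# giving per-channel health indicator traces over time.
bgr_signal = frames.reshape(t.size, -1, 3).mean(axis=1)   # shape (N, 3)
green = bgr_signal[:, 1] - bgr_signal[:, 1].mean()        # detrended pulse
```

Spatial averaging is what makes the tiny pulse signal recoverable: per-pixel sensor noise is largely independent, so averaging over the ROI suppresses it while preserving the common pulse component.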
The method comprises estimating the one or more vital signs of the user based on the plurality of health indicator signals and the one or more time stamps. As used herein, the term "vital signs" refers to a group of important medical signs that indicate the status of the vital (life-sustaining) functions of a body. Notably, vital sign measurements are taken to help assess the general physical health of a person, give clues to possible diseases and show progress toward recovery. The normal ranges for the vital signs of a person vary with age, weight, gender and overall health.
Optionally, the one or more vital signs comprises at least one of: a heart rate, a respiration rate, a stress level, an oxygen saturation and a blood pressure.
Optionally, for estimating the one or more vital signs, the health indicator signals are interpolated with the one or more time stamps. A batch is executed with balancing parameters using the health indicator signals to generate a fast Fourier transform (FFT) spectrum. A signal-to-noise ratio (SNR) is estimated based on the FFT spectrum to generate an estimated SNR. A spectrum with the highest SNR is selected based on the estimated SNR, and at least one balanced signal is generated from the spectrum with the highest SNR. One or more peaks are detected in the at least one balanced signal, and one or more weak peaks are filtered from among the one or more peaks to generate a filtered signal. The one or more vital signs are estimated based on the filtered signal. It will be appreciated that one or more features are extracted from the one or more peaks in the filtered signal, and a value of at least one of: a blood pressure, a heart rate, a respiration rate, a stress level and an oxygen saturation is predicted based on the one or more features, for estimating the one or more vital signs. In an embodiment, a pre-trained regression tree model is used for estimating the one or more vital signs based on the filtered signal. In an embodiment, the at least one face landmark along with the estimated one or more vital signs are rendered to the user via a user interface of the user device.
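A hedged sketch of the frequency-domain part of this chain (interpolation onto the time stamps, FFT, band-limited peak pick); the SNR-based spectrum selection, weak-peak filtering and regression-tree stage are omitted, and the band limits and 1.2 Hz test signal are illustrative assumptions:

```python
import numpy as np

def heart_rate_bpm(signal, time_stamps, band=(0.7, 3.0)):
    """Estimate heart rate from a pulse trace and its time stamps."""
    # Resample onto a uniform grid so FFT bins map cleanly to frequencies.
    fs = (len(time_stamps) - 1) / (time_stamps[-1] - time_stamps[0])
    uniform_t = np.linspace(time_stamps[0], time_stamps[-1],
                            len(time_stamps))
    x = np.interp(uniform_t, time_stamps, signal)
    x = x - x.mean()                      # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    # Keep only physiologically plausible heart-rate frequencies.
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return 60.0 * freqs[mask][np.argmax(spectrum[mask])]

ts = np.arange(0, 10, 1 / 30.0)           # 30 fps time stamps, 10 s window
pulse = np.sin(2 * np.pi * 1.2 * ts)      # synthetic 1.2 Hz pulse trace
bpm = heart_rate_bpm(pulse, ts)           # ≈ 72 beats per minute
```

The interpolation step is the reason the time stamps matter: consumer cameras drop and jitter frames, and an FFT of an unevenly sampled trace would smear the spectral peak the estimator relies on.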
The present disclosure also relates to the system as described above. Various embodiments and variants disclosed above apply mutatis mutandis to the system.
The system of the present disclosure estimates one or more vital signs of a user from a video of the user using artificial intelligence, by detecting volumetric changes in peripheral blood circulation in at least one region of interest based on remote photoplethysmography (RPPG). This provides a non-invasive, accurate and fast technique of estimating the one or more vital signs of the user. Further, the system provides an easy-to-use, entirely contactless and cost-effective remote health and wellness solution that uses artificial intelligence to enable anyone to keep track of their vital signs and relay the information associated with the vital signs to various patient clinical data management systems. Moreover, the system integrates seamlessly with existing infrastructure, providing enhanced transparency at existing touchpoints, such as check-in pods or passport control, by implementing user-friendly health and wellness measures. Additionally, the system gives users (such as, for example, air passengers and crew members) the ability to conduct a quick and easy health awareness check from anywhere using just a mobile device. The system also enables measuring vital signs of a user at any time, including while on a call, and sending the data directly to a virtual physician or doctor during a consultation, without the need to rely on subjective information or complete lengthy questionnaires that aim to give a diagnosis based on symptoms. Additionally, the system facilitates improving health and wellness remotely, supports the wellness of an entire workforce remotely, makes health and wellbeing assessments more accessible worldwide and increases self-awareness of potential health issues.
Moreover, the system of the present disclosure provides auditable data records (e.g., corporate level data infrastructure) to keep clear records, provides a secure data platform (e.g., authorized for personal medical records) and can be provided in the form of an app (application) that can be downloaded quickly, safely and remotely onto the devices of the users.
The system comprises a video capture device associated with a computing device for capturing the video of the user. The term "video capture device" refers to a device configured to capture a video of a user and can include, for example, a video camera, a camcorder, a camera of a mobile device, and the like. Examples of the computing device include, but are not limited to, a mobile phone, a laptop, a desktop, a tablet computer, and the like. The system also comprises a memory operatively coupled to the video capture device and configured to store a set of modules and the video of the user, and a processor that executes the set of modules for estimating the one or more vital signs of the user based on the video of the user using artificial intelligence. The set of modules comprises a face frame extraction module, a ROI determination module, a signal extraction module, and a vital sign estimation module.
The face frame extraction module is configured to extract a plurality of face frames and one or more time stamps from the video of the user. The ROI determination module is configured to determine at least one region of interest (ROI) in the plurality of face frames using a histogram of oriented gradients. The signal extraction module is configured to extract a plurality of health indicator signals by detecting volumetric changes in a peripheral blood circulation in the at least one ROI based on a remote photoplethysmography (RPPG). The vital sign estimation module is configured to estimate the one or more vital signs of the user based on the plurality of health indicator signals and the one or more time stamps. The one or more vital signs comprises at least one of: a heart rate, a respiration rate, a stress level, an oxygen saturation and a blood pressure.
Optionally, the face frame extraction module is further configured to receive the video of the user from the video capture device and analyze the video for detecting the plurality of face frames and the one or more time stamps in the video.
Optionally, the ROI determination module is further configured to a) crop the plurality of face frames for generating at least one cropped face image, b) extract, using the histogram of oriented gradients (HOG), one or more HOG features from the at least one cropped face image, wherein the one or more HOG features comprises HOG descriptors for object recognition, and c) determine at least one face landmark, a corrected face location and the at least one ROI in the corrected face location using the one or more HOG features.
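To make the HOG step concrete, the following minimal numpy sketch computes the orientation histogram of a single HOG cell. A production pipeline would normally use an existing implementation (for example, dlib's HOG-based frontal face detector or OpenCV's HOGDescriptor); the function below is only a hand-rolled illustration, and its name and parameters are assumptions.

```python
import numpy as np

def hog_cell_histogram(patch, n_bins=9):
    """Orientation histogram for one HOG cell: gradient magnitudes
    accumulated into unsigned-orientation bins over [0, 180) degrees,
    then L2-normalised."""
    patch = np.asarray(patch, dtype=float)
    gy, gx = np.gradient(patch)          # row (y) and column (x) gradients
    magnitude = np.hypot(gx, gy)
    # Unsigned orientation, as in the classic HOG formulation.
    orientation = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    bins = (orientation / (180.0 / n_bins)).astype(int) % n_bins
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), magnitude.ravel())
    # Normalisation makes the descriptor robust to illumination changes.
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist
```

A patch containing a vertical intensity edge produces a histogram dominated by the 0-degree (horizontal-gradient) bin; a full HOG descriptor concatenates such histograms over a grid of cells.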
Optionally, the signal extraction module is further configured to a) determine the volumetric changes in blood in peripheral circulation in the at least one ROI and b) generate the plurality of health indicator signals comprising a plurality of blue-green-red (BGR) channel signals by processing at least one of: the corrected face location, the at least one ROI, and the one or more face landmarks based on the volumetric changes in blood in peripheral circulation.
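Extracting the BGR channel signals amounts to spatially averaging each colour channel of the ROI in every frame, optionally followed by detrending to remove slow illumination drift. The sketch below assumes the ROI has already been cropped into a numpy array in OpenCV's BGR channel order; the function names are illustrative, not the patent's.

```python
import numpy as np

def extract_bgr_signals(roi_frames):
    """Spatially average each colour channel of the ROI in every frame.
    roi_frames: (n_frames, height, width, 3) array in BGR channel order
    (OpenCV's convention). Returns an (n_frames, 3) array holding one
    blue, green and red time series."""
    return np.asarray(roi_frames, dtype=float).mean(axis=(1, 2))

def detrend(signal, window=30):
    """Subtract a moving-average baseline to remove slow illumination
    drift before spectral analysis."""
    kernel = np.ones(window) / window
    return signal - np.convolve(signal, kernel, mode="same")
```

In practice the green channel usually carries the strongest pulsatile component, which is why many RPPG methods weight it most heavily.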
Optionally, the vital sign estimation module is further configured to a) interpolate the health indicator signals with the one or more timestamps, b) execute a batch with balancing parameters using the health indicator signals to generate a fast Fourier transform (FFT) spectrum, c) estimate a signal-to-noise ratio (SNR) based on the FFT spectrum to generate an estimated SNR, d) select a spectrum with a highest SNR based on the estimated SNR and generate at least one balanced signal with the spectrum with the highest SNR, e) detect one or more peaks in the at least one balanced signal and filter one or more weak peaks from among the one or more peaks to generate a filtered signal, and f) estimate the one or more vital signs based on the filtered signal.
Optionally, the vital sign estimation module is further configured to extract one or more features from the one or more peaks in the filtered signal and predict a value of at least one of: a blood pressure, a heart rate, a respiration rate, a stress level, and an oxygen saturation based on the one or more features.
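The feature-extraction step over the detected peaks can be illustrated as follows. The patent uses a pre-trained regression tree for the actual prediction; as a stand-in here, heart rate is derived directly from the mean inter-beat interval, and the feature names are assumptions made for the example.

```python
import numpy as np

def peak_features(peak_times, peak_amplitudes):
    """Simple features from detected pulse peaks: mean inter-beat
    interval (IBI), IBI variability, and mean peak amplitude."""
    intervals = np.diff(np.asarray(peak_times, dtype=float))
    return {
        "mean_ibi": float(intervals.mean()),
        "ibi_std": float(intervals.std()),
        "mean_amplitude": float(np.mean(peak_amplitudes)),
    }

def heart_rate_from_features(features):
    """Heart rate in BPM from the mean inter-beat interval."""
    return 60.0 / features["mean_ibi"]
```

A regression tree (or any other regressor) would consume a vector of such features to predict blood pressure, oxygen saturation or stress level, for which no simple closed-form relation exists.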
The present disclosure further provides a computer program product comprising a non-transitory computer-readable storage medium having computer-readable instructions stored thereon, the computer-readable instructions being executable by a computerized device comprising processing hardware to execute the method as described above.
The present disclosure further provides a method of determining one or more vital health indicators of a user based on a video of the user by using artificial intelligence, said method comprising: - extracting one or more face frames in the video of the user; - determining one or more face landmarks from the one or more face frames for extracting one or more health trigger indicators; - measuring subtle changes in a light absorption in the skin of the user from the one or more face landmarks for extracting a plurality of physiological signals; and - converting the plurality of physiological signals into the one or more vital health indicators based on a remote photoplethysmography (RPPG) technique.
DETAILED DESCRIPTION OF THE DRAWINGS
Referring to FIGS. 1 to 4, FIG. 1 illustrates steps of a processor-implemented method of estimating one or more vital signs of a user based on a video of the user by using artificial intelligence, in accordance with an embodiment of the present disclosure. At step 102, a plurality of face frames and one or more time stamps are extracted from the video of the user. At step 104, at least one region of interest (ROI) is determined in the plurality of face frames using a histogram of oriented gradients (HOG). At step 106, a plurality of health indicator signals are extracted by detecting volumetric changes in a peripheral blood circulation in the at least one ROI based on a remote photoplethysmography (RPPG). At step 108, one or more vital signs of the user are estimated based on the plurality of health indicator signals and the one or more time stamps.
The steps 102, 104, 106, and 108, are only illustrative and other alternatives can also be provided where one or more steps are added, one or more steps are removed, or one or more steps are provided in a different sequence without departing from the scope of the claims herein.
Referring to FIG. 2A, FIG. 2A depicts a schematic illustration of a system 200 for determining one or more vital signs of a user based on a video of the user by using artificial intelligence, in accordance with an embodiment of the present disclosure. The system 200 comprises a video capture device 202 associated with a computing device for capturing the video of the user, a memory 204 operatively coupled to the video capture device 202 and configured to store a set of modules and the video of the user, and a processor 206 that executes the set of modules for estimating the one or more vital signs of the user based on the video of the user using artificial intelligence. The set of modules comprises a face frame extraction module 208, a ROI determination module 210, a signal extraction module 212, and a vital sign estimation module 214. The face frame extraction module 208 is configured to extract a plurality of face frames and one or more time stamps from the video of the user. The ROI determination module 210 is configured to determine at least one region of interest (ROI) in the plurality of face frames using a histogram of oriented gradients. The signal extraction module 212 is configured to extract a plurality of health indicator signals by detecting volumetric changes in a peripheral blood circulation in the at least one ROI based on a remote photoplethysmography (RPPG). The vital sign estimation module 214 is configured to estimate the one or more vital signs of the user based on the plurality of health indicator signals and the one or more time stamps. The one or more vital signs comprises at least one of: a heart rate, a respiration rate, a stress level, an oxygen saturation and a blood pressure.
Optionally, the face frame extraction module 208 is further configured to receive the video of the user from the video capture device 202 and analyze the video for detecting the plurality of face frames and the one or more time stamps in the video.
Optionally, the ROI determination module 210 is further configured to a) crop the plurality of face frames for generating at least one cropped face image, b) extract, using the histogram of oriented gradients (HOG), one or more HOG features from the at least one cropped face image, wherein the one or more HOG features comprises HOG descriptors for object recognition, and c) determine at least one face landmark, a corrected face location and the at least one ROI in the corrected face location using the one or more HOG features.
Optionally, the signal extraction module 212 is further configured to a) determine the volumetric changes in blood in peripheral circulation in the at least one ROI and b) generate the plurality of health indicator signals comprising a plurality of blue-green-red (BGR) channel signals by processing at least one of: the corrected face location, the at least one ROI, and the one or more face landmarks based on the volumetric changes in blood in peripheral circulation.
Optionally, the vital sign estimation module 214 is further configured to a) interpolate the health indicator signals with the one or more timestamps, b) execute a batch with balancing parameters using the health indicator signals to generate a fast Fourier transform (FFT) spectrum, c) estimate a signal-to-noise ratio (SNR) based on the FFT spectrum to generate an estimated SNR, d) select a spectrum with a highest SNR based on the estimated SNR and generate at least one balanced signal with the spectrum with the highest SNR, e) detect one or more peaks in the at least one balanced signal and filter one or more weak peaks from among the one or more peaks to generate a filtered signal, and f) estimate the one or more vital signs based on the filtered signal.
Optionally, the vital sign estimation module 214 is further configured to extract one or more features from the one or more peaks in the filtered signal and predict a value of at least one of: a blood pressure, a heart rate, a respiration rate, a stress level, and an oxygen saturation based on the one or more features.
In an embodiment, the at least one face landmark along with the estimated one or more vital signs are rendered to the user via a user interface of the user device.
It may be understood by a person skilled in the art that the FIG. 2A is merely an example for sake of clarity, which should not unduly limit the scope of the claims herein. The person skilled in the art will recognize many variations, alternatives, and modifications of embodiments of the present disclosure.
Referring to FIG. 2B, FIG. 2B depicts a data flow across an RPPG library block scheme 216 generated with a visual studio code map, in accordance with an embodiment.
Referring to FIG. 3, FIG. 3 illustrates an exemplary user interface view 300 depicting one or more face landmarks along with the estimated one or more vital signs rendered to a user via a user interface of the user device, in accordance with an exemplary scenario. As illustrated in FIG. 3, the user interface view 300 includes the face landmarks 302, 304, and 306 marked on the face of the user, an output frame with a current frame number 308 in the top left corner of the user interface view 300, and the estimated vital signs 310, including a signal-to-noise ratio (SNR) of 24, an oxygen saturation (SpO2) of 97, a respiration rate (RR) of 19, a stress level of 0, a blood pressure (BP) of 0, and a library progress (prog) of 352. Additionally, the user interface view 300 also includes a library status 312 at the top, an FPS rate of 6 in the top right corner 314, a tracked face 316 of the user, the ROIs 318 for signal extraction, a heart rate (BPM) of 72 320, an extracted signal 322 in the bottom left corner 324, detected signal peaks 326, and a simulated electrocardiogram (ECG) signal 328.
Referring to FIG. 4, FIG. 4 depicts another exemplary user interface view 400 rendered to a user via a user interface of a user device 402, such as, for example, a mobile device, in accordance with another exemplary scenario. The user interface view 400 depicts the value of the vital signs 404 estimated using the system of the present technology in an exemplary scenario.
It has to be noted that all devices, modules, and means described in the present application could be implemented in software or hardware elements, or any combination thereof. All steps which are performed by the various entities described in the present application, as well as the functionalities described as being performed by the various entities, are intended to mean that the respective entity is adapted to or configured to perform the respective steps and functionalities. Even if, in the following description of specific embodiments, a specific functionality or step to be performed by external entities is not reflected in the description of a specific detailed element of that entity which performs that specific step or functionality, it should be clear to a skilled person that these methods and functionalities can be implemented in respective software or hardware elements, or any combination thereof. It will be appreciated that features of the present disclosure are susceptible to being combined in various combinations without departing from the scope of the present disclosure as defined by the appended claims.
Modifications to embodiments of the present disclosure described in the foregoing are possible without departing from the scope of the present disclosure as defined by the accompanying claims. Expressions such as "including", "comprising", "incorporating", "have", "is" that are used to describe and claim the present disclosure are intended to be construed in a non-exclusive manner, namely allowing for items, components or elements not explicitly described also to be present. Reference to the singular is also to be construed to relate to the plural.

Claims (15)

  1. A system (200) for estimating one or more vital signs of a user based on a video of the user using artificial intelligence, the system (200) comprising: - a video capture device (202) associated with a computing device for capturing the video of the user; - a memory (204) operatively coupled to the video capture device and configured to store a set of modules and the video of the user; and - a processor (206) that executes the set of modules for estimating the one or more vital signs of the user based on the video of the user using artificial intelligence, the modules comprising: - a face frame extraction module (208) for extracting a plurality of face frames and one or more time stamps from the video of the user; - a ROI determination module (210) for determining at least one region of interest (ROI) in the plurality of face frames using a histogram of oriented gradients; - a signal extraction module (212) for extracting a plurality of health indicator signals by detecting volumetric changes in a peripheral blood circulation in the at least one ROI based on a remote photoplethysmography (RPPG); and - a vital sign estimation module (214) for estimating the one or more vital signs of the user based on the plurality of health indicator signals and the one or more time stamps.
  2. The system of claim 1, wherein the one or more vital signs comprises at least one of: a heart rate, a respiration rate, a stress level, an oxygen saturation and a blood pressure.
  3. The system of claim 1 or 2, wherein the face frame extraction module (208) is further configured to: - receive the video of the user from the video capture device; and - analyze the video for detecting the plurality of face frames and the one or more time stamps in the video.
  4. The system of any of the preceding claims, wherein the ROI determination module (210) is further configured to: - crop the plurality of face frames for generating at least one cropped face image; - extract, using the histogram of oriented gradients (HOG), one or more HOG features from the at least one cropped face image, wherein the one or more HOG features comprises HOG descriptors for object recognition; and - determine at least one face landmark, a corrected face location and the at least one ROI in the corrected face location using the one or more HOG features.
  5. The system of any of the preceding claims, wherein the signal extraction module (212) is further configured to: - determine the volumetric changes in blood in peripheral circulation in the at least one ROI; and - generate the plurality of health indicator signals comprising a plurality of blue-green-red (BGR) channel signals by processing at least one of: the corrected face location, the at least one ROI, and the one or more face landmarks based on the volumetric changes in blood in peripheral circulation.
  6. The system of any of the preceding claims, wherein the vital sign estimation module (214) is further configured to: - interpolate the health indicator signals with the one or more timestamps; - execute a batch with balancing parameters using the health indicator signals to generate a fast Fourier transform (FFT) spectrum; - estimate a signal-to-noise ratio (SNR) based on the FFT spectrum to generate an estimated SNR; - select a spectrum with a highest SNR based on the estimated SNR and generate at least one balanced signal with the spectrum with the highest SNR; - detect one or more peaks in the at least one balanced signal and filter one or more weak peaks from among the one or more peaks to generate a filtered signal; and - estimate the one or more vital signs based on the filtered signal.
  7. The system of claim 6, wherein the vital sign estimation module (214) is further configured to: - extract one or more features from the one or more peaks in the filtered signal; and - predict a value of at least one of: a blood pressure, a heart rate, a respiration rate, a stress level, and an oxygen saturation based on the one or more features.
  8. A processor-implemented method of estimating one or more vital signs of a user based on a video of the user by using artificial intelligence, said method comprising: - extracting (102) a plurality of face frames and one or more time stamps from the video of the user; - determining (104) at least one region of interest (ROI) in the plurality of face frames using a histogram of oriented gradients (HOG); - extracting (106) a plurality of health indicator signals by detecting volumetric changes in a peripheral blood circulation in the at least one ROI based on a remote photoplethysmography (RPPG); and - estimating (108) the one or more vital signs of the user based on the plurality of health indicator signals and the one or more time stamps.
  9. The processor-implemented method of claim 8, wherein the one or more vital signs comprises at least one of: a heart rate, a respiration rate, a stress level, an oxygen saturation and a blood pressure.
  10. The processor-implemented method of claim 8 or 9, wherein extracting the plurality of face frames comprises: - receiving the video of the user from a video capture device; and - analyzing the video for detecting the plurality of face frames and the one or more time stamps in the video.
  11. The processor-implemented method of any of the claims 8-10, wherein determining the at least one ROI from the plurality of face frames comprises: - generating at least one cropped face image; - extracting, using the histogram of oriented gradients (HOG), one or more HOG features from the at least one cropped face image, wherein the one or more HOG features comprises HOG descriptors for object recognition by a machine learning model; and - determining at least one face landmark, a corrected face location and the at least one ROI in the corrected face location, by the machine learning model using the one or more HOG features.
  12. The processor-implemented method of any of the claims 8-11, wherein extracting the plurality of health indicator signals comprises: - determining the volumetric changes in blood in peripheral circulation in the at least one ROI; and - generating the plurality of health indicator signals comprising a plurality of blue-green-red (BGR) channel signals by processing at least one of: the corrected face location, the at least one ROI, and the plurality of face landmarks based on the volumetric changes in blood in peripheral circulation.
  13. The processor-implemented method of any of the claims 8-12, wherein estimating the one or more vital signs based on the plurality of health indicator signals and the one or more time stamps comprises: - interpolating the health indicator signals with the one or more timestamps; - executing a batch with balancing parameters using the health indicator signals to generate a fast Fourier transform (FFT) spectrum; - estimating a signal-to-noise ratio (SNR) based on the FFT spectrum to generate an estimated SNR; - selecting a spectrum with a highest SNR based on the estimated SNR and generating at least one balanced signal with the spectrum with the highest SNR; - detecting one or more peaks in the at least one balanced signal and filtering one or more weak peaks from among the one or more peaks to generate a filtered signal; and - estimating the one or more vital signs based on the filtered signal.
  14. The processor-implemented method of claim 13, wherein estimating the one or more vital signs based on the filtered signal comprises: - extracting one or more features from the one or more peaks in the filtered signal; and - predicting a value of at least one of: a blood pressure, a heart rate, a respiration rate, a stress level, and an oxygen saturation based on the one or more features.
  15. A computer program product comprising a non-transitory computer-readable storage medium having computer-readable instructions stored thereon, the computer-readable instructions being executable by a computerized device comprising processing hardware to execute a method as claimed in any one of claims 8 to 14.
GB2104538.0A 2021-03-30 2021-03-30 System and method of estimating vital signs of user using artificial intelligence Pending GB2605401A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB2104538.0A GB2605401A (en) 2021-03-30 2021-03-30 System and method of estimating vital signs of user using artificial intelligence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB2104538.0A GB2605401A (en) 2021-03-30 2021-03-30 System and method of estimating vital signs of user using artificial intelligence

Publications (2)

Publication Number Publication Date
GB202104538D0 GB202104538D0 (en) 2021-05-12
GB2605401A true GB2605401A (en) 2022-10-05

Family

ID=75783663

Family Applications (1)

Application Number Title Priority Date Filing Date
GB2104538.0A Pending GB2605401A (en) 2021-03-30 2021-03-30 System and method of estimating vital signs of user using artificial intelligence

Country Status (1)

Country Link
GB (1) GB2605401A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013058060A (en) * 2011-09-08 2013-03-28 Dainippon Printing Co Ltd Person attribute estimation device, person attribute estimation method and program
US20150154229A1 (en) * 2013-11-29 2015-06-04 Canon Kabushiki Kaisha Scalable attribute-driven image retrieval and re-ranking
US20170238860A1 (en) * 2010-06-07 2017-08-24 Affectiva, Inc. Mental state mood analysis using heart rate collection based on video imagery
US20200085312A1 (en) * 2015-06-14 2020-03-19 Facense Ltd. Utilizing correlations between PPG signals and iPPG signals to improve detection of physiological responses
CN111127511A (en) * 2018-12-18 2020-05-08 玄云子智能科技(深圳)有限责任公司 Non-contact heart rate monitoring method
US20200245873A1 (en) * 2015-06-14 2020-08-06 Facense Ltd. Detecting respiratory tract infection based on changes in coughing sounds

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170238860A1 (en) * 2010-06-07 2017-08-24 Affectiva, Inc. Mental state mood analysis using heart rate collection based on video imagery
JP2013058060A (en) * 2011-09-08 2013-03-28 Dainippon Printing Co Ltd Person attribute estimation device, person attribute estimation method and program
US20150154229A1 (en) * 2013-11-29 2015-06-04 Canon Kabushiki Kaisha Scalable attribute-driven image retrieval and re-ranking
US20200085312A1 (en) * 2015-06-14 2020-03-19 Facense Ltd. Utilizing correlations between PPG signals and iPPG signals to improve detection of physiological responses
US20200245873A1 (en) * 2015-06-14 2020-08-06 Facense Ltd. Detecting respiratory tract infection based on changes in coughing sounds
CN111127511A (en) * 2018-12-18 2020-05-08 玄云子智能科技(深圳)有限责任公司 Non-contact heart rate monitoring method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
CHEN JUNKAI ET AL: "Facial Expression Recognition Based on Facial Components Detection and HOG Features", SCIENTIFIC COOPERATIONS INTERNATIONAL WORKSHOPS ON ELECTRICAL AND COMPUTER ENGINEERING SUBFIELDS, 1 August 2014 (2014-08-01), XP055853901, Retrieved from the Internet <URL:https://www.researchgate.net/profile/Junkai-Chen/publication/304630361_Facial_Expression_Recognition_Based_on_Facial_Components_Detection_and_HOG_Features/links/5775c59f08aead7ba070027c/Facial-Expression-Recognition-Based-on-Facial-Components-Detection-and-HOG-Features.pdf> [retrieved on 20211022] *
EUGENE LEE ET AL: "Meta-rPPG: Remote Heart Rate Estimation Using a Transductive Meta-Learner", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 14 July 2020 (2020-07-14), XP081720769 *
PASQUADIBISCEGLIE VINCENZO ET AL: "A personal healthcare system for contact-less estimation of cardiovascular parameters", 2018 AEIT INTERNATIONAL ANNUAL CONFERENCE, AEIT, 3 October 2018 (2018-10-03), pages 1 - 6, XP033474964, DOI: 10.23919/AEIT.2018.8577458 *
YU WENYING ET AL: "Emotion Recognition from Facial Expressions and Contactless Heart Rate Using Knowledge Graph", 2020 IEEE INTERNATIONAL CONFERENCE ON KNOWLEDGE GRAPH (ICKG), IEEE, 9 August 2020 (2020-08-09), pages 64 - 69, XP033823360, DOI: 10.1109/ICBK50248.2020.00019 *

Also Published As

Publication number Publication date
GB202104538D0 (en) 2021-05-12

Similar Documents

Publication Publication Date Title
Sanyal et al. Algorithms for monitoring heart rate and respiratory rate from the video of a user’s face
Li et al. The obf database: A large face video database for remote physiological signal measurement and atrial fibrillation detection
US10004410B2 (en) System and methods for measuring physiological parameters
JP6371837B2 (en) Devices and methods for obtaining vital signs of subjects
KR101738278B1 (en) Emotion recognition method based on image
US10984914B2 (en) CPR assistance device and a method for determining patient chest compression depth
Feng et al. Motion artifacts suppression for remote imaging photoplethysmography
Basu et al. Infrared imaging based hyperventilation monitoring through respiration rate estimation
US20230233091A1 (en) Systems and Methods for Measuring Vital Signs Using Multimodal Health Sensing Platforms
Alkali et al. Facial tracking in thermal images for real-time noncontact respiration rate monitoring
Alnaggar et al. Video-based real-time monitoring for heart rate and respiration rate
US20220270344A1 (en) Multimodal diagnosis system, method and apparatus
US20230293113A1 (en) System and method of estimating vital signs of user using artificial intelligence
Hessler et al. A non-contact method for extracting heart and respiration rates
GB2605401A (en) System and method of estimating vital signs of user using artificial intelligence
AV et al. Non-contact heart rate monitoring using machine learning
Sacramento et al. A real-time software to the acquisition of heart rate and photoplethysmography signal using two region of interest simultaneously via webcam
Lee et al. Video-based bio-signal measurements for a mobile healthcare system
Rivest-Hénault et al. Quasi real-time contactless physiological sensing using consumer-grade cameras
Lee et al. Smartphone-based heart-rate measurement using facial images and a spatiotemporal alpha-trimmed mean filter
Panigrahi et al. Video-based HR measurement using adaptive facial regions with multiple color spaces
Hassan et al. Machine learning approach for predicting COVID-19 suspect using non-contact vital signs monitoring system by RGB camera
Jalil et al. E-Health monitoring using camera: Measurement of vital parameters in a noisy environment
Sturekova et al. Non-Contact Detection of Vital Parameters with Optoelectronic Measurements under Stress in Education Process
Nithyaa et al. Contactless measurement of heart rate from live video and comparison with standard method