US9185353B2 - Removing environment factors from signals generated from video images captured for biomedical measurements - Google Patents


Info

Publication number
US9185353B2
Authority
US
United States
Prior art keywords
signal
subject
signals
source
channel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US13/401,207
Other versions
US20130215244A1
Inventor
Lalit Keshav MESTHA
Beilei Xu
Current Assignee
Xerox Corp
Original Assignee
Xerox Corp
Priority date
Filing date
Publication date
Application filed by Xerox Corp filed Critical Xerox Corp
Priority to US13/401,207
Assigned to XEROX CORPORATION. Assignors: MESTHA, LALIT KESHAV; XU, BEILEI
Publication of US20130215244A1
Application granted granted Critical
Publication of US9185353B2

Classifications

    • H04N 7/18: Closed circuit television systems, i.e. systems in which the signal is not broadcast
    • G06T 5/002: Denoising; Smoothing
    • G06T 5/003: Deblurring; Sharpening
    • G06T 7/194: Segmentation; Edge detection involving foreground-background segmentation
    • G06T 2207/10016: Video; Image sequence
    • G06T 2207/10024: Color image
    • G06T 2207/10048: Infrared image
    • G06T 2207/20056: Discrete and fast Fourier transform [DFT, FFT]
    • G06T 2207/20076: Probabilistic image processing
    • G06T 2207/20144
    • G06T 2207/30088: Skin; Dermal

Abstract

What is disclosed is a system and method for automatically removing undesirable periodic or random background noise, introduced by the video camera, ambient illuminator, and other unknown electromagnetic sources, from heart rate measurement signals to improve the overall reliability of biomedical measurements. In one embodiment, time-varying video images of a subject of interest, acquired over at least one imaging channel, are received. The video images are then segmented into a first region comprising a localized area where plethysmographic signals of the subject can be registered and a second region comprising a localized area of the environment where the plethysmographic signals cannot be registered. Both regions are exposed to the same environmental factors. The segmented video signals are pre-processed and the processed signals are subtracted from each other to generate an environmentally compensated signal. The environmentally compensated signal is then communicated to a computer system.

Description

TECHNICAL FIELD
The present invention is directed to systems and methods for removing undesirable signals and background noise from signals generated from video images captured using an RGB camera or an infrared (IR) camera, for improved accuracy and reliability of biomedical measurements derived from those images.
BACKGROUND
Current electrocardiographic (ECG) systems require the patient to be located in close proximity to the ECG machine, which obtains measurements via electrodes attached to the skin. The adhesive electrodes can cause skin irritation, infection, discomfort, and other problems for the patient. This is especially problematic for newborns with sensitive skin. Methods for non-contact cardiac pulse measurement based on imaging patients with RGB and/or multi-spectral infrared (IR) cameras have arisen in this art. By recording video images of a region of exposed skin where concentrations of blood vessels exist, small changes in the blood pulsating inside those vessels are registered as blood volume signals on detector arrays. These signals can comprise a mixture of patient plethysmographic signals (i.e., blood volume signals) along with other artifacts from the environment. The detector arrays may also register involuntary and voluntary bodily motions and muscle fluctuations. Biomedical signals can be corrupted by fluctuations in the illumination source, electronic power line noise, periodic signals manifested by camera auto calibration, and the like. Unwanted signals are difficult to separate from desired signals when they have frequency components within the bandwidth of the human heart rate. Therefore, a need exists to automatically compensate video images to enhance the signal quality required during estimation.
Accordingly, what is needed in this art are sophisticated systems and methods for removing undesirable periodic signals and random background noise from video images obtained from an RGB camera or an infrared (IR) camera, for improved accuracy and reliability of the biomedical measurements obtained from those captured signals.
INCORPORATED REFERENCES
The following U.S. patents, U.S. patent applications, and Publications are incorporated herein in their entirety by reference.
  • “Estimating Cardiac Pulse Recovery From Multi-Channel Source Data Via Constrained Source Separation”, U.S. Pat. No. 8,617,081.
  • “Filtering Source Video Data Via Independent Component Selection”, U.S. Pat. No. 8,600,213.
  • “Blind Signal Separation: Statistical Principles”, Jean-François Cardoso, Proceedings of the IEEE, Vol. 86, No. 10, pp. 2009-2025, (October 1998).
  • “Independent Component Analysis: Algorithms And Applications”, Aapo Hyvärinen and Erkki Oja, Neural Networks, 13(4-5), pp. 411-430, (2000).
  • “Infrared Thermal Imaging: Fundamentals, Research and Applications”, Michael Vollmer and Klaus-Peter Möllmann, Wiley-VCH, 1st Ed. (2010), ISBN-13: 978-3527407170.
BRIEF SUMMARY
What is disclosed is a system and method for removing undesirable periodic signals and random background noise from signals generated from video images captured from an RGB or infrared (IR) camera, for improved accuracy and reliability of biomedical measurements.
One embodiment of the present system and method for removing environmental factors from video images captured by a non-contact imaging system involves the following. First, video images are captured of a subject of interest. The video comprises time-varying source video images acquired over at least one imaging channel. The acquired source signal can be any combination of: NIR signals, RGB signals, multi-spectral signals, and hyperspectral signals. The video images are segmented into two regions of interest, i.e., a first region being a localized area where plethysmographic signals of the subject can be registered, and a second region being a localized area of the environment where plethysmographic signals cannot be registered. Both of these regions have been exposed to the same environmental factors, including undesirable periodic signals and random background noise. The segmented video images for each of the first and second regions of interest are pre-processed by performing various image pre-processing steps to generate time-series signals, followed by source separation using blind source separation or constrained source separation. The pre-processed signals corresponding to each of the imaging channels are subtracted to generate corresponding environmentally compensated signals. The environmentally compensated signals are then communicated to a computing system to extract plethysmographic signals.
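The subtraction at the heart of this method can be illustrated with a small synthetic sketch (a hypothetical numpy stand-in, not the patented implementation): both regions see the same environmental disturbance, so subtracting the background-region signal cancels it while the plethysmographic component survives.

```python
import numpy as np

fps = 30.0                              # assumed camera frame rate
t = np.arange(0, 20, 1.0 / fps)         # 20 seconds of frames

pulse = 0.5 * np.sin(2 * np.pi * (56 / 60.0) * t)   # plethysmographic signal, ~56 bpm
noise = 1.0 * np.sin(2 * np.pi * 2.0 * t)           # shared environmental disturbance

subject_region = pulse + noise          # first ROI: pulse plus environment
background_region = noise               # second ROI: same environment, no pulse

compensated = subject_region - background_region    # environmentally compensated signal
```

Here the cancellation is exact because the synthetic noise is identical in both regions; with real video the residual depends on how uniformly the environmental factors affect the two regions.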
Many features and advantages of the above-described method will become readily apparent from the following detailed description and accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
The foregoing and other features and advantages of the subject matter disclosed herein will be made apparent from the following detailed description taken in conjunction with the accompanying drawings, in which:
FIG. 1 shows a video image captured of a subject of interest;
FIG. 2 shows the image of FIG. 1 with a first localized area 201 identified where the subject's plethysmographic signals are registered, and a second localized area 202 where those signals cannot be registered, i.e., a background area of the image;
FIG. 3 is a flow diagram of one example embodiment of the present method for removing environmental factors from signals generated from video images captured by a non-contact imaging system in a remote sensing environment;
FIG. 4 is a block diagram of an example networked video image processing system wherein various aspects of the present method as described with respect to the flow diagram of FIG. 3 are implemented;
FIG. 5A plots the spectral content of the second imaging channel (post-ICA) of the imaging system used to acquire the source video images, isolating the signal components present in the segmented video images of the localized area 201 of FIG. 2;
FIG. 5B plots the spectral content of the second imaging channel (post-ICA) of the imaging system used to acquire the source video images, isolating the signal components in the segmented source video images of the localized area 202 of FIG. 2 where the subject's plethysmographic signal cannot be registered;
FIGS. 6A-C show the power spectral density of the signals acquired for all three imaging channels before performing signal compensation; and
FIGS. 7A-C show the power spectral density for the signals of the three imaging channels after compensating the source video image according to the teachings hereof.
DETAILED DESCRIPTION
What is disclosed is a system and method for removing undesirable periodic signals and random background noise from video images obtained from an RGB or multi-spectral IR camera, for improved accuracy and reliability of the biomedical measurements obtained from those captured signals.
It should be understood that one of ordinary skill in this art would be readily familiar with advanced mathematical techniques involving matrix methods, independent component analysis, and data projection. One of ordinary skill would be familiar with the texts “Independent Component Analysis”, Wiley-Interscience, 1st Ed. (2001), ISBN-13: 978-0471405405, and “Independent Component Analysis: Principles and Practice”, Cambridge University Press, 1st Ed. (2001), ISBN-13: 978-0521792981, which are incorporated herein in their entirety by reference.
Non-Limiting Definitions
A “subject of interest”, as used herein, refers to a subject capable of registering a plethysmographic signal. FIG. 1 shows an example image 100 of a video taken of a subject of interest 102 for processing in accordance with the teachings hereof. Use of the terms “human”, “person”, or “patient” herein for explanatory purposes is not to be viewed as limiting the scope of the appended claims solely to human beings. The present method applies equally to other biological subjects capable of registering a plethysmographic signal in a captured video image, such as mammals, birds, fish, reptiles, and certain insects.
A “plethysmographic signal” is a signal which contains meaningful data as to a physiological change in pulsating blood volume or volumetric pressure of the localized area of the subject intended to be analyzed. Pulmonary plethysmography measures the volume in the subject's lungs, i.e., lung volume. Plethysmography of the limbs helps determine circulatory capacity. Penile plethysmography measures changes in blood flow in the penis. Whole-body plethysmography helps practitioners measure a variety of parameters in their patients.
An “imaging sensor” is a device for capturing source video data over one or more channels of a subject of interest. The imaging sensor may be a device with a high frame rate and high spatial resolution such as, for example, a monochrome camera for capturing black/white video images, or a color camera for capturing color video images. The imaging sensor may be a spectral sensor such as a multi-spectral or hyperspectral system. Spectral sensors are devices which have relatively low frame rates and low spatial resolution but high spectral resolution. The imaging sensor may be a hybrid device capable of operating in a conventional video mode with high frame rate and high spatial resolution, and a spectral mode with low frame rates but high spectral resolution. Imaging sensors comprising standard video cameras and those comprising spectral sensors are readily available from many vendors in various streams of commerce.
A “source video image” is the time varying video image acquired using an imaging sensor. A source video image can be any combination of: NIR images, RGB images, RGB and NIR images, multi-spectral images, and hyperspectral video images.
A “time-series signal” is a time-varying signal obtained from the 2D video images by transforming them to 1D during pre-processing.
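As a minimal illustration of this 2D-to-1D transformation, assuming spatial averaging of each frame (one common choice, consistent with the channel averaging of step 306 below):

```python
import numpy as np

rng = np.random.default_rng(0)
frames = rng.random((100, 64, 64))        # hypothetical 100-frame single-channel video

# Collapse each 2D frame to one scalar sample: mean over the spatial dimensions
time_series = frames.mean(axis=(1, 2))    # 1D time-series signal, one value per frame
```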
“Segmenting the video image” means identifying, in the video images, a first region of interest comprising a localized area where the subject's plethysmographic signals can be registered (area 201 of image 200 of FIG. 2) and identifying a second region of interest comprising a localized area where the subject's plethysmographic signals cannot be registered (area 202). The localized areas do not have to be the same size, but both areas in the image need to have been exposed to the same environmental factors. Environmental factors include fluctuations in illumination source, electronic power line noise, periodic signals induced by the imaging system, and the like, which induce undesirable periodic signals and random background noise in the video. As discussed in the background section hereof, undesirable signals and background noise are difficult to separate from desired signals of interest when the undesirable signals have frequency components that are within the bandwidth of the frequency of the subject's plethysmographic signals intended to be accurately acquired for biomedical measurements. The teachings hereof are directed to processing video images such that the quality of the desired signals of interest is enhanced to improve the accuracy of the biomedical measurements derived therefrom.
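A crude sketch of such segmentation, using a hypothetical RGB threshold to separate a skin-like region from the background; the threshold values are illustrative assumptions, not values from this disclosure, and real systems may also use material and spatial features:

```python
import numpy as np

def segment_regions(rgb_frame, skin_min=(90, 40, 20), skin_max=(255, 180, 150)):
    # Boolean mask of pixels falling inside an assumed skin-tone RGB box
    lo, hi = np.asarray(skin_min), np.asarray(skin_max)
    skin_mask = np.all((rgb_frame >= lo) & (rgb_frame <= hi), axis=-1)
    return skin_mask, ~skin_mask   # (ROI where pulse registers, background ROI)

# Toy frame: top half "skin-like", bottom half dark background
frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[:2] = (150, 90, 70)
skin, background = segment_regions(frame)
```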
“Subtracting the pre-processed signals” means subtracting the signals generated from pre-processed video images of the segmented region which does not contain the subject's plethysmographic signals from the signals generated from pre-processed video images which do contain the subject's plethysmographic signals (corresponding to each of the imaging channels used to acquire the source signals) to generate, for each channel, an environmentally compensated signal.
“Independent Component Analysis” (ICA) is a decomposition method for uncovering independent source signal components from a set of observations that are composed of linear mixtures of the underlying sources, called the “independent components” of the observed data. ICA defines a generative model for the observed multivariate data, which is typically given as a large database of samples. In the model, the data variables are assumed to be linear mixtures of some unknown latent variables, and the mixing system is also unknown. The latent variables are assumed non-Gaussian and mutually independent, and they are called the independent components of the observed data. These independent components, also called sources or factors, can be found by ICA. ICA is superficially related to principal component analysis and factor analysis, but it is a much more powerful technique, capable of finding the underlying factors or sources when these classic methods fail completely. The data analyzed by ICA could originate from many different kinds of application fields, including digital images, databases, and psychometric measurements. In many cases, the measurements are given as a set of parallel signals or time-series. ICA is one form of blind source separation.
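A brief sketch of ICA recovering two sources from linear mixtures, using scikit-learn's FastICA (an assumed third-party implementation; the source waveforms and mixing matrix are illustrative):

```python
import numpy as np
from sklearn.decomposition import FastICA  # assumes scikit-learn is available

t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * np.pi * 1.0 * t)              # stand-in for a pulse-like source
s2 = np.sign(np.sin(2 * np.pi * 3.0 * t))     # stand-in for a periodic noise source
S = np.column_stack([s1, s2])

A = np.array([[1.0, 0.5],                     # unknown "mixing" matrix
              [0.5, 1.0]])
X = S @ A.T                                    # the observed, mixed signals

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(X)               # estimated independent components
```

Note that the recovered columns match the true sources only up to order, sign, and scale; this is the sorting and phase ambiguity that step 306 below says must be resolved before subtraction.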
“Blind Source Separation” (BSS) is a technique for the recovery of unobserved source signals from a set of observed mixed signals without any prior information being known about the “mixing” process.
A “remote sensing environment” refers to non-contact, non-invasive sensing, i.e., the imaging sensor does not physically contact the subject being sensed. The environment may be any settings such as, for example, a hospital, ambulance, medical office, and the like.
Flow Diagram of One Example Embodiment
Reference is now being made to FIG. 3 which is a flow diagram of one example embodiment of the present method for removing environmental factors from signals generated from video images captured by a non-contact imaging system in a remote sensing environment. Flow processing begins at step 300 and immediately proceeds to step 302.
At step 302, receive video images captured of the subject of interest using an imaging sensor. The video images are preprocessed to compensate for motion blur, slow illumination variation induced color inconsistency, and any geometric distortion.
At step 304, segment the video images into at least two regions of interest: a first region of interest comprising a localized area where the subject's plethysmographic signals can be registered (such as the localized area of exposed skin 201 of FIG. 2, identified based on color, material, spatial features, and the like), and a second region of interest comprising the surrounding background environment where the subject's plethysmographic signals cannot be registered (such as localized background area 202 of FIG. 2). In advance of segmenting the video, the source signal may be processed to compensate for motion induced blur, imaging blur, and/or slow illuminant variation. This compensation is preferably carried out in the time domain before performing a Fourier transform.
At step 306, pre-process the video images for each of the first and second regions of interest. Pre-processing includes source separation using blind source separation and/or constrained source separation. In various embodiments, pre-processing includes performing, for each of the first and second regions, the following steps: 1) average the value of all pixels in the channel to obtain a channel average per image frame; 2) compute a global channel average and a global standard deviation; 3) subtract the channel average from the global channel average to produce a resulting signal; 4) divide the resulting signal by the standard deviation to obtain a zero-mean, unit-variance time-series signal; 5) normalize the time-series signal; 6) band-pass filter the normalized time-series signal to remove undesirable frequencies below and above the expected frequencies of the subject; and 7) perform signal whitening. A Fourier transform (or any other spectral analysis technique, such as an auto-regression model) may be performed on the source signal to remove periodic noise in advance of performing the subtraction of step 308. It is to be noted that a sorting or phase problem may arise while processing each region with a blind source separation method such as independent component analysis; this should be resolved prior to subtraction of the source-separated signals. Alternatively, if sorting or phase problems persist, source separation can be carried out on the subtracted signals after performing signal whitening.
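Steps 1-6 above might be sketched as follows for a single imaging channel (a hedged illustration using numpy/scipy; the 0.75-4 Hz pass band, roughly 45-240 bpm, is an assumed cardiac range, the normalization scheme is a guess since none is specified, and whitening, step 7, is omitted because it operates across channels):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_channel(frames, fps, band=(0.75, 4.0)):
    # 1) average all pixels to obtain a channel average per image frame
    channel_avg = frames.mean(axis=(1, 2))
    # 2) global channel average and global standard deviation
    mu, sigma = channel_avg.mean(), channel_avg.std()
    # 3-4) zero-mean, unit-variance time-series signal
    z = (channel_avg - mu) / sigma
    # 5) normalize (here: rescale to [-1, 1])
    z = z / np.abs(z).max()
    # 6) band-pass filter around the expected cardiac frequencies
    nyq = fps / 2.0
    b, a = butter(3, [band[0] / nyq, band[1] / nyq], btype="band")
    return filtfilt(b, a, z)

# Hypothetical single-channel frames: a weak ~60 bpm pulse plus pixel noise
fps = 30.0
rng = np.random.default_rng(0)
t = np.arange(300) / fps
pulse = 0.02 * np.sin(2 * np.pi * 1.0 * t)
frames = 0.5 + pulse[:, None, None] + 0.001 * rng.standard_normal((300, 8, 8))
signal_1d = preprocess_channel(frames, fps)
```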
At step 308, subtract the pre-processed signal of the region containing the localized area of the surrounding background environment from the pre-processed signal of the region containing the subject's plethysmographic signals. This generates an environmentally compensated signal, for each channel.
At step 310, communicate the environmentally compensated signal to a computer system. In this embodiment, further processing stops.
In another embodiment, a set of reference signals having a frequency range which approximates the frequency range of the subject's cardiac pulse is generated. Then, using the reference signals, a constrained source separation is performed on the subtracted signals to obtain an estimated source signal with a minimum error. The minimum error is achieved by adjusting the phase of the estimated source signal and calculating a difference between the two waveforms. A cardiac frequency of the subject is then estimated based upon the frequency at which the minimum error was achieved. One or more aspects of the reference signal can be modified by changing its frequency, amplitude, phase, or waveform, where the waveform is a sine wave, a square wave, a user defined shape such as that obtained from an ECG signal, or a cardiac pulse waveform derived from the subject.
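A simplified stand-in for this reference-signal search can be sketched as follows: sweep candidate reference frequencies and keep the one whose fitted sinusoid leaves the smallest residual error. Fitting amplitude and phase by least squares replaces the true constrained source separation, and the search range is an assumption, so this is an illustration of the idea, not the patented algorithm.

```python
import numpy as np

def estimate_cardiac_frequency(signal, fps, f_grid=None):
    # Candidate reference frequencies, ~42-180 bpm (assumed search range)
    if f_grid is None:
        f_grid = np.arange(0.7, 3.0, 0.01)
    t = np.arange(len(signal)) / fps
    best_f, best_err = None, np.inf
    for f in f_grid:
        # Fitting a*sin + b*cos by least squares adjusts amplitude and phase
        basis = np.column_stack([np.sin(2 * np.pi * f * t),
                                 np.cos(2 * np.pi * f * t)])
        coef, *_ = np.linalg.lstsq(basis, signal, rcond=None)
        err = np.sum((signal - basis @ coef) ** 2)   # waveform difference
        if err < best_err:
            best_f, best_err = f, err
    return best_f

# Noisy test signal with a true cardiac frequency of 56 bpm
fps = 30.0
t = np.arange(0, 20, 1.0 / fps)
rng = np.random.default_rng(1)
sig = np.sin(2 * np.pi * (56 / 60.0) * t + 0.4) + 0.2 * rng.standard_normal(t.size)
f_est = estimate_cardiac_frequency(sig, fps)
```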
Example Signal Processing System
Reference is now being made to FIG. 4 which is a block diagram of an example networked video image processing system wherein various aspects of the present method as described with respect to the flow diagram of FIG. 3 are implemented.
In FIG. 4, imaging sensor 402 acquires source video images of a subject of interest in the sensor's field of view 403 over at least one imaging channel. The source video images are communicated to Video Image Processing System 404, wherein various aspects of the present method are performed. The example image processing system is shown comprising a Buffer 406 for buffering frames of the source video image for processing. Buffer 406 may further store data, formulas, and other mathematical representations as are necessary to process the source video images in accordance with the teachings hereof. Image Stabilizer Module 408 is provided to process the images to compensate for motion induced blur, imaging blur, slow illuminant variation, and the like. Video Image Processor 410 segments the video images into signals of a first localized area (such as localized area 201 of FIG. 2) where the subject's plethysmographic signals can be registered, and a second localized area (such as localized area 202 of FIG. 2) where the subject's plethysmographic signals cannot be registered. One or more frames of the source video captured of the subject of interest may be displayed on a display device such that the user or operator can select any of the first and second localized areas using, for example, a rubber-band box generated by a mouse-over operation. Source video images associated with each of the identified localized areas, for each of the acquiring channels, are provided to Video Image Pre-Processor 412, which receives the segmented source images of each localized area, pre-processes them, converts them to time-series signals, and identifies the components of those signals by performing source separation, using blind source separation or constrained source separation, on the signals of each of the segmented regions for each imaging channel used to acquire those source images.
If the sorting and phase problem cannot be fully resolved, source separation is performed after Signal Comparator 418. Various signal components may be stored to and retrieved from storage device 416 using communication pathways not shown. Signal Comparator 418 receives the pre-processed signals for each region and each channel and subtracts the two signals from each other. The result of the subtraction is an environmentally compensated signal 420. Communication Link 422 receives signal 420 and provides the environmentally compensated signal to one or more remote devices via Transmission Antenna 424. Communication Link 422 further provides the environmentally compensated signal 420 to computer system 428. Data is transferred between devices in a network in the form of signals which may be in any combination of electrical, electromagnetic, optical, or other forms. Such signals are transmitted via wire, cable, fiber optic, phone line, cellular link, RF, satellite, or any other medium known in the art.
In the embodiment shown, computer system 428 comprises a workstation. Networked workstation 428 includes a hard drive (internal to computer case 442) which reads/writes to computer readable media 440 such as a floppy disk, optical disk, CD-ROM, DVD, magnetic tape, etc. Case 442 also houses a motherboard with a processor and memory, a network card, a graphics card, and other software and hardware. The workstation includes a user interface which comprises display 432, such as a CRT, LCD, or touch screen, mouse 436, and keyboard 434. It should be appreciated that the workstation has an operating system and other specialized software configured to display a variety of numeric values, text, scroll bars, and pull-down menus with user selectable options for entering, selecting, or modifying information displayed on display device 432. Various portions of the source video signals captured by sensor 402 may be communicated to workstation 428 for processing. It should be appreciated that some or all of the functionality performed by any of the modules and processing units of the signal processing system 404 can be performed, in whole or in part, by workstation 428. Workstation 428 is in communication with network 430 via a communications interface (not shown). A user or technician may use the keyboard 434 and mouse 436 to identify regions of interest, set parameters, select images for processing, view results, and the like. Any of these may be stored to storage device 438 or written to computer media 440 such as, for example, a CD-ROM, using a read/write device located in computer case 442. Any of the modules and processing units of FIG. 4 can be placed in communication with device 416 and may store/retrieve therefrom data, variables, records, parameters, functions, and machine readable/executable program instructions required to perform their intended functions.
Moreover, each of the modules of system 404 may be placed in communication with one or more devices over network 430. Although shown as a desktop computer, it should be appreciated that computer system 428 can be any of a laptop, mainframe, server, or a special purpose computer such as an ASIC, circuit board, dedicated processor, or the like.
Performance Results
Tests were performed using a 3-channel RGB video camera which produces source video images containing camera-induced noise. Attention is directed to FIG. 5A, which plots the power spectral density of the second channel from the segmented source video images of the localized area 201 of FIG. 2. Two dominant components are present: a first dominant signal component (at 501) comprising the subject's plethysmographic signal (at approximately 56 beats per minute (bpm)), and a second dominant component (at 502) comprising the undesirable camera-induced noise centered about 120 bpm. FIG. 5B plots the power spectral density of the same second channel (post-ICA) from the segmented source video images of the localized area 202 of FIG. 2, where the subject's plethysmographic signal cannot be registered. Notice that the subject's plethysmographic signal (around 56 bpm) does not appear, indicating the absence of that signal in the background environment, but that the localized area 202 does contain the undesirable signal (at 503) centered about 120 bpm.
FIGS. 6A-C show the power spectral density of the signals acquired for all three imaging channels before performing the signal compensation disclosed herein. FIGS. 7A-C show the power spectral density for the signals of the three imaging channels after compensating the source video signal according to the teachings hereof. As shown, the undesirable camera-induced noise around 120 bpm (present in each of the three imaging channels) has been effectively eliminated, while the subject's plethysmographic signal (dominant in the second channel) is largely retained. These results clearly demonstrate the viability of the teachings disclosed herein. Moreover, various random signals on either side of the subject's plethysmographic signal have been reduced as well.
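The power spectral density analysis behind these figures can be reproduced on synthetic data with scipy's periodogram; the ~56 bpm component and the peak-picking below are stand-ins for the measured signals, not the experimental data itself.

```python
import numpy as np
from scipy.signal import periodogram

fps = 30.0
t = np.arange(0, 30, 1.0 / fps)
rng = np.random.default_rng(0)
# Synthetic compensated signal: ~56 bpm pulse plus residual broadband noise
sig = np.sin(2 * np.pi * (56 / 60.0) * t) + 0.1 * rng.standard_normal(t.size)

freqs, psd = periodogram(sig, fs=fps)      # power spectral density, frequencies in Hz
peak_bpm = 60.0 * freqs[np.argmax(psd)]    # dominant component, in beats per minute
```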
Various Embodiments
It should also be appreciated that various modules may designate one or more components which may, in turn, comprise software and/or hardware designed to perform the intended function. A plurality of modules may collectively perform a single function. Each module may have a specialized processor capable of executing machine readable program instructions. A module may comprise a single piece of hardware such as an ASIC, electronic circuit, or special purpose processor. A plurality of modules may be executed by either a single special purpose computer system or a plurality of special purpose computer systems operating in parallel. Connections between modules include both physical and logical connections. Modules may further include one or more software/hardware modules which may further comprise an operating system, drivers, device controllers, and other apparatuses some or all of which may be connected via a network. It is also contemplated that one or more aspects of the present method may be implemented on a dedicated computer system and may also be practiced in distributed computing environments where tasks are performed by remote devices that are linked through a network. The teachings hereof can be implemented in hardware or software using any known or later developed systems, structures, devices, and/or software by those skilled in the applicable art without undue experimentation from the functional description provided herein with a general knowledge of the relevant arts.
One or more aspects of the methods described herein are intended to be incorporated in an article of manufacture, including one or more computer program products, having computer usable or machine readable media. For purposes hereof, a computer usable or machine readable media is, for example, a floppy disk, a hard-drive, memory, CD-ROM, DVD, tape, cassette, or other digital or analog media, or the like, which is capable of having embodied thereon a computer readable program, one or more logical instructions, or other machine executable codes or commands that implement and facilitate the function, capability, and methodologies described herein. Furthermore, the article of manufacture may be included on at least one storage media readable by a machine architecture or image processing system embodying executable program instructions capable of performing the methodology described in the flow diagrams. The article of manufacture may be included as part of an operating system, a plug-in, or may be shipped, sold, leased, or otherwise provided separately, either alone or as part of an add-on, update, upgrade, or product suite.
It will be appreciated that various of the above-disclosed and other features and functions, or alternatives thereof, may be desirably combined into many other different systems or applications. Various presently unforeseen or unanticipated alternatives, modifications, variations, or improvements therein may become apparent and/or subsequently made by those skilled in the art, which are also intended to be encompassed by the following claims. Accordingly, the embodiments set forth above are considered to be illustrative and not limiting. Various changes to the above-described embodiments may be made without departing from the spirit and scope of the invention. The teachings of any printed publications, including patents and patent applications, are each separately hereby incorporated by reference in their entirety.

Claims (25)

What is claimed is:
1. A method for removing environmental factors from signals generated from video images captured by a non-contact imaging system in a remote sensing environment, the method comprising:
receiving video images captured of a subject of interest, said video comprising a time varying source video image acquired over at least one imaging channel;
segmenting said video images into a first region of interest comprising a localized area where plethysmographic signals of said subject can be registered and a second region of interest comprising a background area, both of said first and second regions being exposed to the same environmental factors;
pre-processing said video images to generate a time-series signal for each of said first and second regions;
subtracting said time-series signal corresponding to each of said first and second regions to obtain an environmentally compensated signal; and
communicating said environmentally compensated signal to a computer system.
2. The method of claim 1, wherein pre-processing said video images is performed using at least one of: a source separation with blind source separation, and a constrained source separation.
3. The method of claim 2, wherein said pre-processing further comprises, for each channel:
computing an average value of all pixels acquired with this channel to obtain a channel average per image frame;
computing a global channel average and a global standard deviation from said computed averages for this channel;
subtracting said channel average from said global channel average to produce a resulting signal;
dividing said resulting signal by said standard deviation to obtain a zero-mean unit variance time-series signal for said region;
normalizing said time-series signal; and
band-pass filtering said normalized time-series signal to remove frequencies that are above and below expected frequencies of the plethysmographic signals of said subject.
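The per-channel pre-processing steps recited in claim 3 can be sketched as follows. This is a simplified illustration, not the claimed implementation: the 0.75-4 Hz pass band (45-240 bpm) is an assumed range of expected pulse frequencies, and band-pass filtering is done here by zeroing FFT bins rather than with a designed filter:

```python
import numpy as np

def preprocess_channel(frames, fs, f_lo=0.75, f_hi=4.0):
    """Sketch of the claimed per-channel steps: frame averaging,
    global zero-mean / unit-variance normalization, and band-pass
    filtering around expected pulse frequencies."""
    # Channel average per image frame (frames: n_frames x H x W).
    channel_avg = frames.reshape(frames.shape[0], -1).mean(axis=1)
    # Global channel average and standard deviation over the capture.
    mu, sigma = channel_avg.mean(), channel_avg.std()
    z = (channel_avg - mu) / sigma  # zero-mean, unit-variance signal
    # Band-pass filter by zeroing FFT bins outside the pulse band.
    spectrum = np.fft.rfft(z)
    freqs = np.fft.rfftfreq(z.size, 1 / fs)
    spectrum[(freqs < f_lo) | (freqs > f_hi)] = 0.0
    return np.fft.irfft(spectrum, n=z.size)
```

Zeroing bins outside the band removes both the DC level and frequencies above and below the expected plethysmographic range, leaving a zero-mean time-series signal confined to plausible pulse frequencies.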
4. The method of claim 1, wherein said acquired source video image comprises any combination of: NIR images, RGB images, RGB and NIR images, multi-spectral images, and hyperspectral video images.
5. The method of claim 1, wherein, in advance of segmenting said video images, further comprising compensating for any of: a motion induced blur, an imaging blur, and slow illuminant variation.
6. The method of claim 5, wherein said compensation is carried out in a time-domain before performing a Fourier transform such that both a periodic and a non-periodic background signal can be reduced.
7. The method of claim 1, further comprising performing a Fourier Transform on said source signal to remove periodic noise in advance of said subtraction.
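Removal of periodic noise via a Fourier transform, as recited in claim 7, can be sketched as a frequency-domain notch. This assumes the noise frequency (e.g., a camera-induced tone near 2 Hz / 120 bpm) is known or has been identified from the background region's spectrum; the notch width is an illustrative parameter:

```python
import numpy as np

def remove_periodic_noise(signal, fs, noise_hz, width_hz=0.1):
    """Zero out a narrow band around a known periodic noise frequency
    in the Fourier domain, then invert back to the time domain."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(signal.size, 1 / fs)
    spectrum[np.abs(freqs - noise_hz) < width_hz] = 0.0
    return np.fft.irfft(spectrum, n=signal.size)
```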
8. The method of claim 1, further comprising:
generating a set of reference signals having a frequency range which approximates a frequency range of said subject's cardiac pulse;
performing, using said reference signal, a constrained source separation on said source data to obtain an estimated source signal with a minimum error; and
estimating a cardiac frequency of said subject based upon a frequency at which said minimum error was achieved.
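The minimum-error frequency selection of claims 8-9 can be illustrated with a simplified single-channel sweep. A real implementation would perform constrained source separation (constrained ICA) on multi-channel source data; the least-squares fit below is only a stand-in that shows how adjusting the reference's phase (absorbed here by the sin/cos basis) and picking the frequency with minimum residual yields a cardiac frequency estimate. The 40-180 bpm range and 2 bpm step are assumed values:

```python
import numpy as np

def estimate_cardiac_bpm(signal, fs, bpm_range=range(40, 181, 2)):
    """For each candidate reference frequency, fit a phase-adjusted
    sinusoidal reference to the signal and keep the frequency that
    achieves the minimum residual error."""
    t = np.arange(signal.size) / fs
    best_bpm, best_err = None, np.inf
    for bpm in bpm_range:
        f = bpm / 60.0
        # A sin/cos basis absorbs the unknown phase of the reference.
        basis = np.column_stack([np.sin(2 * np.pi * f * t),
                                 np.cos(2 * np.pi * f * t)])
        coef, *_ = np.linalg.lstsq(basis, signal, rcond=None)
        err = np.sum((signal - basis @ coef) ** 2)
        if err < best_err:
            best_bpm, best_err = bpm, err
    return best_bpm
```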
9. The method of claim 8, wherein said minimum error was achieved by adjusting phase of the estimated source signal and calculating a difference between two waveforms.
10. The method of claim 8, further comprising changing at least one aspect of said reference signal by changing any of: a frequency, an amplitude, a phase, and a wave form of said reference signal.
11. The method of claim 10, wherein said wave form comprises any of: a sine wave, a square wave, a user defined shape such as that obtained from an ECG signal, and a cardiac pulse wave form derived from said subject.
12. A system for removing environmental factors from signals generated from video images captured by a non-contact imaging system in a remote sensing environment, the system comprising:
an imaging sensor for acquiring a time varying source video image acquired over at least one imaging channel; and
a processor in communication with said imaging sensor and a memory, said processor executing machine readable instructions for performing:
receiving video images captured of a subject of interest using said sensor;
segmenting said video images into a first region of interest comprising a localized area where plethysmographic signals of said subject can be registered and a second region of interest comprising a background area, both of said first and second regions being exposed to the same environmental factors;
pre-processing said video images to generate a time-series signal for each of said first and second regions;
subtracting said time-series signal corresponding to each of said first and second regions to obtain an environmentally compensated signal; and
communicating said environmentally compensated signal to a computer system.
13. The system of claim 12, wherein pre-processing said video images is performed using at least one of: a source separation with blind source separation, and a constrained source separation.
14. The system of claim 13, wherein said pre-processing further comprises, for each channel:
computing an average value of all pixels acquired with this channel to obtain a channel average per image frame;
computing a global channel average and a global standard deviation from said computed averages for this channel;
subtracting said channel average from said global channel average to produce a resulting signal;
dividing said resulting signal by said standard deviation to obtain a zero-mean unit variance time-series signal for said region;
normalizing said time-series signal; and
band-pass filtering said normalized time-series signal to remove frequencies that are above and below expected frequencies of the plethysmographic signals of said subject.
15. The system of claim 12, wherein said acquired source video image comprises any combination of: NIR images, RGB images, RGB and NIR images, multi-spectral images, and hyperspectral video images.
16. The system of claim 12, wherein, in advance of segmenting said video images, further comprising compensating for any of: a motion induced blur, an imaging blur, and slow illuminant variation.
17. The system of claim 16, wherein said compensation is carried out in a time-domain before performing a Fourier transform such that both a periodic and a non-periodic background signal can be reduced.
18. The system of claim 12, further comprising performing a Fourier Transform on said source signal to remove periodic noise in advance of said subtraction.
19. The system of claim 12, further comprising:
generating a set of reference signals having a frequency range which approximates a frequency range of said subject's cardiac pulse;
performing, using said reference signal, a constrained source separation on said source data to obtain an estimated source signal with a minimum error; and
estimating a cardiac frequency of said subject based upon a frequency at which said minimum error was achieved.
20. The system of claim 19, wherein said minimum error was achieved by adjusting phase of the estimated source signal and calculating a difference between two waveforms.
21. The system of claim 19, further comprising changing at least one aspect of said reference signal by changing any of: a frequency, an amplitude, a phase, and a wave form of said reference signal.
22. The system of claim 21, wherein said wave form comprises any of: a sine wave, a square wave, a user defined shape such as that obtained from an ECG signal, and a cardiac pulse wave form derived from said subject.
23. A computer implemented method for removing environmental factors from signals generated from video images captured by a non-contact imaging system in a remote sensing environment, the method comprising:
receiving video images captured of a subject of interest, said video comprising a time varying source video image acquired over at least one imaging channel, said acquired source video image comprising any combination of: NIR images, RGB images, RGB and NIR images, multi-spectral images, and hyperspectral video images;
segmenting said video images into a first region of interest comprising a localized area where plethysmographic signals of said subject can be registered and a second region of interest comprising a background area, both of said first and second regions being exposed to the same environmental factors;
pre-processing said video images to generate a normalized time-series signal for each of said first and second regions, said pre-processing including performing at least one of: a source separation with blind source separation, and a constrained source separation;
subtracting said normalized time-series signal corresponding to each of said first and second regions to obtain an environmentally compensated signal; and
communicating said environmentally compensated signal to a computer system.
24. The computer implemented method of claim 23, wherein said pre-processing further comprises, for each channel:
computing an average value of all pixels acquired with this channel to obtain a channel average per image frame;
computing a global channel average and a global standard deviation from said computed averages for this channel;
subtracting said channel average from said global channel average to produce a resulting signal;
dividing said resulting signal by said standard deviation to obtain a zero-mean unit variance time-series signal for said region;
normalizing said time-series signal; and
band-pass filtering said normalized time-series signal to remove frequencies that are above and below expected frequencies of the plethysmographic signals of said subject.
25. The computer implemented method of claim 23, further comprising:
generating a set of reference signals having a frequency range which approximates a frequency range of said subject's cardiac pulse;
performing, using said reference signal, a constrained source separation on said source data to obtain an estimated source signal with a minimum error, said minimum error being achieved by adjusting phase of the estimated source signal and calculating a difference between two waveforms; and
estimating a cardiac frequency of said subject based upon a frequency at which said minimum error was achieved.
US13/401,207 2012-02-21 2012-02-21 Removing environment factors from signals generated from video images captured for biomedical measurements Active 2034-08-20 US9185353B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/401,207 US9185353B2 (en) 2012-02-21 2012-02-21 Removing environment factors from signals generated from video images captured for biomedical measurements


Publications (2)

Publication Number Publication Date
US20130215244A1 US20130215244A1 (en) 2013-08-22
US9185353B2 true US9185353B2 (en) 2015-11-10

Family

ID=48981976

Family Applications (1)

Application Number Title Priority Date Filing Date
US13/401,207 Active 2034-08-20 US9185353B2 (en) 2012-02-21 2012-02-21 Removing environment factors from signals generated from video images captured for biomedical measurements

Country Status (1)

Country Link
US (1) US9185353B2 (en)


Families Citing this family (50)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB201114406D0 (en) * 2011-08-22 2011-10-05 Isis Innovation Remote monitoring of vital signs
US9351649B2 (en) 2012-02-21 2016-05-31 Xerox Corporation System and method for determining video-based pulse transit time with time-series signals
CN104434078A (en) * 2013-09-13 2015-03-25 施乐公司 System and method for determining video-based pulse transit time with time-series signals
US9839360B2 (en) * 2012-05-11 2017-12-12 Optica, Inc. Systems, methods, and apparatuses for monitoring end stage renal disease
US9141868B2 (en) 2012-06-26 2015-09-22 Xerox Corporation Contemporaneously reconstructing images captured of a scene illuminated with unstructured and structured illumination sources
KR20140041106A (en) * 2012-09-27 2014-04-04 에스엔유 프리시젼 주식회사 Image processing method and image processing apparatus using time axis low pass filter
EP2936432B1 (en) * 2012-12-21 2019-08-07 Koninklijke Philips N.V. System and method for extracting physiological information from remotely detected electromagnetic radiation
US10546210B2 (en) 2014-02-17 2020-01-28 Mobileye Vision Technologies Ltd. Topology preserving intensity binning on reduced resolution grid of adaptive weighted cells
US9615050B2 (en) 2014-02-17 2017-04-04 Mobileye Vision Technologies Ltd. Topology preserving intensity binning on reduced resolution grid of adaptive weighted cells
US9336594B2 (en) 2014-03-07 2016-05-10 Xerox Corporation Cardiac pulse rate estimation from source video data
US9320440B2 (en) 2014-04-01 2016-04-26 Xerox Corporation Discriminating between atrial fibrillation and sinus rhythm in physiological signals obtained from video
US9245338B2 (en) 2014-05-19 2016-01-26 Xerox Corporation Increasing accuracy of a physiological signal obtained from a video of a subject
US9924896B2 (en) 2014-06-23 2018-03-27 Koninklijke Philips N.V. Device, system and method for determining the concentration of a substance in the blood of a subject
US10660533B2 (en) * 2014-09-30 2020-05-26 Rapsodo Pte. Ltd. Remote heart rate monitoring based on imaging for moving subjects
US10076250B2 (en) 2015-06-14 2018-09-18 Facense Ltd. Detecting physiological responses based on multispectral data from head-mounted cameras
US10799122B2 (en) 2015-06-14 2020-10-13 Facense Ltd. Utilizing correlations between PPG signals and iPPG signals to improve detection of physiological responses
US10151636B2 (en) 2015-06-14 2018-12-11 Facense Ltd. Eyeglasses having inward-facing and outward-facing thermal cameras
US10045726B2 (en) 2015-06-14 2018-08-14 Facense Ltd. Selecting a stressor based on thermal measurements of the face
US10136852B2 (en) 2015-06-14 2018-11-27 Facense Ltd. Detecting an allergic reaction from nasal temperatures
US11103139B2 (en) 2015-06-14 2021-08-31 Facense Ltd. Detecting fever from video images and a baseline
US10064559B2 (en) 2015-06-14 2018-09-04 Facense Ltd. Identification of the dominant nostril using thermal measurements
US10130308B2 (en) 2015-06-14 2018-11-20 Facense Ltd. Calculating respiratory parameters from thermal measurements
US10085685B2 (en) 2015-06-14 2018-10-02 Facense Ltd. Selecting triggers of an allergic reaction based on nasal temperatures
US10092232B2 (en) 2015-06-14 2018-10-09 Facense Ltd. User state selection based on the shape of the exhale stream
US10299717B2 (en) 2015-06-14 2019-05-28 Facense Ltd. Detecting stress based on thermal measurements of the face
US10130261B2 (en) 2015-06-14 2018-11-20 Facense Ltd. Detecting physiological responses while taking into account consumption of confounding substances
US10523852B2 (en) 2015-06-14 2019-12-31 Facense Ltd. Wearable inward-facing camera utilizing the Scheimpflug principle
US9968264B2 (en) 2015-06-14 2018-05-15 Facense Ltd. Detecting physiological responses based on thermal asymmetry of the face
US9867546B2 (en) 2015-06-14 2018-01-16 Facense Ltd. Wearable device for taking symmetric thermal measurements
US11064892B2 (en) 2015-06-14 2021-07-20 Facense Ltd. Detecting a transient ischemic attack using photoplethysmogram signals
US10667697B2 (en) 2015-06-14 2020-06-02 Facense Ltd. Identification of posture-related syncope using head-mounted sensors
US10791938B2 (en) 2015-06-14 2020-10-06 Facense Ltd. Smartglasses for detecting congestive heart failure
US10638938B1 (en) 2015-06-14 2020-05-05 Facense Ltd. Eyeglasses to detect abnormal medical events including stroke and migraine
US10216981B2 (en) * 2015-06-14 2019-02-26 Facense Ltd. Eyeglasses that measure facial skin color changes
US11154203B2 (en) 2015-06-14 2021-10-26 Facense Ltd. Detecting fever from images and temperatures
US10376163B1 (en) 2015-06-14 2019-08-13 Facense Ltd. Blood pressure from inward-facing head-mounted cameras
US10349887B1 (en) 2015-06-14 2019-07-16 Facense Ltd. Blood pressure measuring smartglasses
US10080861B2 (en) 2015-06-14 2018-09-25 Facense Ltd. Breathing biofeedback eyeglasses
US10076270B2 (en) 2015-06-14 2018-09-18 Facense Ltd. Detecting physiological responses while accounting for touching the face
US10045737B2 (en) 2015-06-14 2018-08-14 Facense Ltd. Clip-on device with inward-facing cameras
US10154810B2 (en) 2015-06-14 2018-12-18 Facense Ltd. Security system that detects atypical behavior
US11103140B2 (en) 2015-06-14 2021-08-31 Facense Ltd. Monitoring blood sugar level with a comfortable head-mounted device
US10130299B2 (en) 2015-06-14 2018-11-20 Facense Ltd. Neurofeedback eyeglasses
US10045699B2 (en) 2015-06-14 2018-08-14 Facense Ltd. Determining a state of a user based on thermal measurements of the forehead
US10159411B2 (en) 2015-06-14 2018-12-25 Facense Ltd. Detecting irregular physiological responses during exposure to sensitive data
US9697599B2 (en) 2015-06-17 2017-07-04 Xerox Corporation Determining a respiratory pattern from a video of a subject
KR101777472B1 (en) * 2015-07-01 2017-09-12 순천향대학교 산학협력단 A method for estimating respiratory and heart rate using dual cameras on a smart phone
US10113913B2 (en) 2015-10-03 2018-10-30 Facense Ltd. Systems for collecting thermal measurements of the face
US10136856B2 (en) 2016-06-27 2018-11-27 Facense Ltd. Wearable respiration measurements system
CN111374647A (en) * 2018-12-29 2020-07-07 中兴通讯股份有限公司 Method and device for detecting pulse wave and electronic equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100198027A1 (en) * 2007-05-02 2010-08-05 Barry Dixon Non-invasive measurement of blood oxygen saturation
US20110251493A1 (en) * 2010-03-22 2011-10-13 Massachusetts Institute Of Technology Method and system for measurement of physiological parameters


Non-Patent Citations (10)

* Cited by examiner, † Cited by third party
Title
Dalal et al. "Histograms of Oriented Gradients for Human Detection", Proceedings of the Conference on Computer Vision and Pattern Recognition, San Diego, California, USA, pp. 886-893, (2005).
Hyvarinen, et al., "Independent Component Analysis: Algorithms and Applications", Neural Networks Research Centre, Helsinki University of Technology, Finland, Neural Networks, vol. 13, No. 4-5, pp. 411-430, 2000.
J. Lee, et al., "Temporally constrained ICA-based foetal ECG separation", Electronics Letters, Oct. 13, 2005, vol. 41, No. 21.
Jean-Francois Cardoso, "Blind signal separation: statistical principles", Proceedings of the IEEE, vol. 86, No. 10, pp. 2009-2025, Oct. 1998.
Mestha et al., "Estimating Cardiac Pulse Recovery From Multi-Channel Source Data Via Constrained Source Separation", U.S. Appl. No. 13/247,683, filed Sep. 28, 2011.
Mestha et al., "Filtering Source Video Data Via Independent Component Selection", U.S. Appl. No. 13/281,975, filed Oct. 26, 2011.
Poh, et al., "Non-contact, automated cardiac pulse measurements using video imaging and blind source separation", Optics Express, vol. 18, No. 10, p. 10762, May 10, 2010.
Takano, et al., "Heart rate measurement based on a time-lapse image", Medical Engineering & Physics 29 (2007), pp. 853-857, www.sciencedirect.com.
Wei Lu et al., "Approach and Applications of Constrained ICA", IEEE Transactions on Neural Networks, vol. 16, No. 1, Jan. 2005.
Wei Lu, et al., "Constrained Independent Component Analysis", School of Computer Engineering, Nanyang Technological University, Singapore 639798.

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140376789A1 (en) * 2013-06-21 2014-12-25 Xerox Corporation Compensating for motion induced artifacts in a physiological signal extracted from multiple videos
US20140376788A1 (en) * 2013-06-21 2014-12-25 Xerox Corporation Compensating for motion induced artifacts in a physiological signal extracted from a single video
US9436984B2 (en) * 2013-06-21 2016-09-06 Xerox Corporation Compensating for motion induced artifacts in a physiological signal extracted from a single video
US9443289B2 (en) * 2013-06-21 2016-09-13 Xerox Corporation Compensating for motion induced artifacts in a physiological signal extracted from multiple videos
US20150131879A1 (en) * 2013-11-14 2015-05-14 Industrial Technology Research Institute Apparatus based on image for detecting heart rate activity and method thereof
US9364157B2 (en) * 2013-11-14 2016-06-14 Industrial Technology Research Institute Apparatus based on image for detecting heart rate activity and method thereof
US11164596B2 (en) 2016-02-25 2021-11-02 Samsung Electronics Co., Ltd. Sensor assisted evaluation of health and rehabilitation
US10172517B2 (en) 2016-02-25 2019-01-08 Samsung Electronics Co., Ltd Image-analysis for assessing heart failure
US10362998B2 (en) 2016-02-25 2019-07-30 Samsung Electronics Co., Ltd. Sensor-based detection of changes in health and ventilation threshold
US10420514B2 (en) 2016-02-25 2019-09-24 Samsung Electronics Co., Ltd. Detection of chronotropic incompetence
US10939834B2 (en) 2017-05-01 2021-03-09 Samsung Electronics Company, Ltd. Determining cardiovascular features using camera-based sensing
CN107341837B (en) * 2017-06-26 2020-07-10 华中师范大学 Grid-vector data conversion and continuous scale expression method based on image pyramid
CN107341837A (en) * 2017-06-26 2017-11-10 华中师范大学 Grid and vector data conversion and continuous yardstick expression based on image pyramid

Also Published As

Publication number Publication date
US20130215244A1 (en) 2013-08-22

Similar Documents

Publication Publication Date Title
US9185353B2 (en) Removing environment factors from signals generated from video images captured for biomedical measurements
JP6371837B2 (en) Devices and methods for obtaining vital signs of subjects
JP6813285B2 (en) Determining a breathing pattern from an image of a subject
US9036877B2 (en) Continuous cardiac pulse rate estimation from multi-channel source video data with mid-point stitching
US8838209B2 (en) Deriving arterial pulse transit time from a source video image
US8617081B2 (en) Estimating cardiac pulse recovery from multi-channel source data via constrained source separation
US9336594B2 (en) Cardiac pulse rate estimation from source video data
US8897522B2 (en) Processing a video for vascular pattern detection and cardiac function analysis
US9504426B2 (en) Using an adaptive band-pass filter to compensate for motion induced artifacts in a physiological signal extracted from video
Blackford et al. Effects of frame rate and image resolution on pulse rate measured using multiple camera imaging photoplethysmography
US9351649B2 (en) System and method for determining video-based pulse transit time with time-series signals
CN109977858B (en) Heart rate detection method and device based on image analysis
US9483837B2 (en) Compensating for motion during real-time batch processing of video for physiological function assessment
Gibson et al. Non-contact heart and respiratory rate monitoring of preterm infants based on a computer vision system: A method comparison study
US20150313502A1 (en) Determining arterial pulse wave transit time from vpg and ecg/ekg signals
Sinhal et al. An overview of remote photoplethysmography methods for vital sign monitoring
EP2848193A1 (en) System and method for determining video-based pulse transit time with time-series signals
Luguern et al. Wavelet variance maximization: a contactless respiration rate estimation method based on remote photoplethysmography
US20200367773A1 (en) Device, system and method for determining a physiological parameter of a subject
US20200178809A1 (en) Device, system and method for determining a physiological parameter of a subject
Luguern et al. Remote photoplethysmography combining color channels with SNR maximization for respiratory rate assessment
Spicher et al. Heart rate monitoring in ultra-high-field MRI using frequency information obtained from video signals of the human skin compared to electrocardiography and pulse oximetry
Ruminski The accuracy of pulse rate estimation from the sequence of face images
Zhou et al. Non-contact detection of human heart rate with Kinect
Lohani et al. Extraction of vital signs using real time video analysis for neonatal monitoring

Legal Events

Date Code Title Description
AS Assignment

Owner name: XEROX CORPORATION, CONNECTICUT

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MESTHA, LALIT KESHAV;XU, BEILEI;REEL/FRAME:027737/0167

Effective date: 20120220

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4