WO2019137886A1 - Capturing subject data in dynamic environments using mobile cameras - Google Patents

Capturing subject data in dynamic environments using mobile cameras

Info

Publication number
WO2019137886A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
configuration
image data
subject
analyzing
Application number
PCT/EP2019/050282
Other languages
French (fr)
Inventor
Mladen Milosevic
Cornelis Conradus Adrianus Maria Van Zon
Hans-Aloys Wischmann
Original Assignee
Koninklijke Philips N.V.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Koninklijke Philips N.V. filed Critical Koninklijke Philips N.V.
Publication of WO2019137886A1 publication Critical patent/WO2019137886A1/en

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H40/00 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/20 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the management or administration of healthcare resources or facilities, e.g. managing hospital staff or surgery rooms
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/0059 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B5/0077 Devices for viewing the surface of the body, e.g. camera, magnifying lens
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/02 Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B5/021 Measuring pressure in heart or blood vessels
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/02 Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B5/024 Detecting, measuring or recording pulse rate or heart rate
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/08 Detecting, measuring or recording devices for evaluating the respiratory organs
    • A61B5/0816 Measuring devices for examining respiratory frequency
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/145 Measuring characteristics of blood in vivo, e.g. gas concentration, pH value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid, cerebral tissue
    • A61B5/1455 Measuring characteristics of blood in vivo, e.g. gas concentration, pH value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid, cerebral tissue using optical sensors, e.g. spectral photometrical oximeters

Definitions

  • Various embodiments described herein are directed generally to health care. More particularly, but not exclusively, various methods and apparatus disclosed herein relate to capturing subject data in dynamic environments using mobile cameras.
  • a triage nurse assesses the condition of the subject and assigns a score which determines the priority of care given to the subject. In some cases this score may acknowledge the fact that a given subject may be able to wait without detrimental effects.
  • This triage score may be based on many data points, with the subject's vital signs playing an important role. In order to catch deterioration while subjects await treatment, vitals should ideally be captured periodically, but in practice this rarely happens. This is also true in some of the non-emergency-department scenarios mentioned previously, in which medical personnel are typically overwhelmed by the number and/or severity of casualties.
  • a mobile device equipped with a camera, such as a portable vital sign acquisition camera, a mobile telephone, a phablet/tablet computer, a standalone (networked) digital camera, a wearable device (e.g., smart glasses, smart watch, a digital camera strapped to medical personnel, etc.), and so forth, may be carried by medical personnel that are tasked with monitoring subjects awaiting treatment.
  • These mobile devices may be used to identify and/or extract various information from subjects, such as vital signs, blushing, symptoms (e.g., coughing), and so forth, in order to facilitate input of symptoms, observations, and other information obtained, for example, during triage.
  • a configuration of the mobile device’s camera may be altered based on processing of image data it captures, e.g., to improve detection of various information.
  • a frame rate of the camera may be adjusted based on a prediction— made using one or more image frames that depict a subject— of a period in time at which the subject is likely to exhibit peak blushing. This may be done in order to capture images at an elevated frame rate (e.g., switching from a normal frame rate of approximately 10-15 frames per second for most cameras, to its maximum frame rate of at least 60 frames per second) during the predicted blushing.
  • the higher frame rate may be determined by the temporal resolution and/or accuracy requirements for relevant vital parameters in general, and in particular for heartrate/respiratory rate detection by the duration of the “blushing” that must never fall between two consecutive frames.
  • a region of interest (“ROI”) to be captured by the camera may be determined based on image processing of image data.
  • Other camera settings/parameters, such as pixel binning, lighting settings, shutter speed, zoom, resolution, color settings, etc., may also be adjusted.
  • a frame rate of the camera may be decreased and/or a resolution may be increased in order to capture image data that is usable to identify a subject.
  • a camera-equipped mobile device may be relatively resource-constrained, e.g., it may have a limited battery life, processor speed, memory, etc. Accordingly, in various embodiments, image processing and/or camera configuration selection may be delegated to one or more remote computing devices with more resources. For example, in some embodiments, image data captured by the mobile device's camera may be transmitted to one or more remote computing devices, e.g., operating a so-called "cloud" computing environment, to be processed/analyzed.
  • the remote computing device(s) may perform the image processing/analysis and, based on the processing/analysis, select a configuration for the mobile device's camera, e.g., to capture a targeted piece of information (e.g., a particular vital sign, the subject's identity, etc.).
  • a method may include: obtaining first image data generated by a mobile device equipped with a camera; analyzing the received first image data; selecting, based on the analyzing, at least one configuration for the camera; providing the mobile device with data indicative of the selected at least one configuration; obtaining second image data generated by the mobile device after provision of the data indicative of the selected at least one configuration, wherein the second image data captures a subject; and processing at least the second image data to extract at least one vital sign of the subject.
  • the at least one configuration for the camera may include a color setting that is selected based on the analyzing. In various embodiments, the at least one configuration for the camera may include a lighting setting that is selected based on the analyzing. In various embodiments, the at least one configuration for the camera may include a region of interest within a field of view of the camera that is selected based on the analyzing. In various embodiments, the at least one configuration for the camera may include a frame rate that is selected based on the analyzing. In various embodiments, the selected frame rate may include an elevated frame rate of at least approximately sixty frames per second. In various embodiments, the elevated frame rate may cause the camera to utilize binning.
  • the first image data may also capture the subject.
  • selecting the at least one configuration may include: predicting at least a point in time associated with peak blushing of the subject; and selecting, as the at least one configuration, an instruction for the camera to perform burst-mode capture during the at least one point in time.
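A minimal sketch of how such a prediction might be turned into a burst-mode request, assuming blush peaks have already been detected in a few preview frames; the function name, the default blush duration, and the scheduling heuristic are illustrative and not taken from the disclosure.

```python
import numpy as np

def plan_burst_capture(peak_times_s, blush_duration_s=0.10, samples_per_blush=2.0):
    """Predict the next blush peak from previously observed peaks and choose a
    burst frame rate high enough that a blush cannot fall between two frames."""
    intervals = np.diff(peak_times_s)
    mean_interval = float(np.mean(intervals))       # estimated inter-beat interval (s)
    next_peak_s = peak_times_s[-1] + mean_interval  # predicted time of peak blushing
    required_fps = samples_per_blush / blush_duration_s
    burst_fps = max(60.0, required_fps)             # at least ~60 fps, per the disclosure
    window_s = (next_peak_s - mean_interval / 4, next_peak_s + mean_interval / 4)
    return {"burst_fps": burst_fps, "window_s": window_s}

# Example: peaks observed roughly 0.8 s apart (a resting pulse of ~75 bpm).
print(plan_burst_capture([0.0, 0.81, 1.59, 2.40]))
```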
  • FIG. 1 schematically illustrates an example environment in which selected aspects of the present disclosure may be implemented, in accordance with various embodiments.
  • FIG. 2 schematically illustrates one example of an exchange that may occur between a camera-equipped mobile device and an image processing system to implement selected aspects of the present disclosure, in accordance with various embodiments.
  • FIG. 3 depicts an example method for performing selected aspects of the present disclosure, in accordance with various embodiments.
  • FIG. 4 schematically depicts an example computer system architecture, in accordance with various embodiments.
  • Referring to FIG. 1, an environment in which selected aspects of the present disclosure may be implemented is depicted.
  • Various components are depicted with lines and other connectors to indicate that these components are, at the very least, in communication with each other, or "communicatively coupled."
  • Various types of communication technologies may be employed by the various components to communicate with each other, depending on whether they are implemented on a single computing system, across multiple computing systems via one or more computing networks, and so forth.
  • components that are in network communication with each other may employ various wired and/or wireless technologies, including but not limited to Wi-Fi, Bluetooth, Ethernet, cellular communication, satellite communication, optical communication, and so forth.
  • a mobile device 102 that is carried by medical personnel may be equipped with a camera 104.
  • Mobile device 102 may take various forms, such as a body-mounted camera (e.g., similar to those used sometimes by law enforcement), smart glasses, a smart watch, a mobile phone, a digital camera, a tablet computer, and so forth.
  • mobile device 102 and camera 104 may be integral in a single unit. However, they may also be separate.
  • camera 104 may be a standalone camera, e.g., a so-called "action camera" strapped to a triage nurse, that is in Bluetooth communication with a mobile phone, tablet, etc., carried by the triage nurse.
  • camera 104 is strapped to the forehead of a triage nurse 105.
  • mobile device 102 may capture image data that may include, for instance, one or more image frames.
  • One or more of these image frames may capture one or more subjects 103, such as subjects in an emergency department waiting room, in a remote disaster area, in a conflict zone, etc.
  • Image data may come in various forms, such as color data (e.g., RGB), black and white, thermal imaging data (e.g., infrared), three-dimensional data, so-called 2.5 dimensional data, etc.
  • camera 104 may include a variety of settings and/or parameters, often referred to herein collectively as its "configuration."
  • One or more of these settings and/or parameters may be adjustable, e.g., by the user and/or automatically, e.g., by way of one or more commands received at an application programming interface ("API") of camera 104.
  • These settings and/or parameters may include but are not limited to frame rate, color settings, lighting settings, shutter speed, zoom, resolution, pixel "binning", aspect ratio, a region of interest ("ROI"), and any other setting and/or parameter known to be associated with cameras generally.
  • mobile device 102 may be relatively resource-constrained. This may be particularly true in scenarios in which mobile device 102 is carried by medical personnel in the field, e.g., in a conflict zone, disaster area, remote pandemic outbreak, etc. Accordingly, in various embodiments, various aspects of techniques described herein may be performed remotely from mobile device 102, e.g., by “cloud-based” components that have, at least relative to mobile device 102, access to virtually limitless resources. For example, in some embodiments, mobile device 102 may be configured to provide image data captured by camera 104 to a remote image processing system 106, so that image processing system 106 can perform techniques described herein.
  • image processing system 106 may be configured to obtain image data provided by one or more mobile devices 102, process/analyze the image data, and based on the processing/analyzing, select various configurations to be implemented by the mobile devices 102— and more particularly, by their respective cameras 104— so that various targeted information about subjects 103 can be determined. Once the camera(s) 104 are properly configured, image processing system 106 may be further configured to process subsequent image data to identify and/or determine various targeted information about subjects, such as their identities, vital signs, symptoms, etc. In various examples described herein, image processing system 106 performs the bulk of the processing. However, this is not meant to be limiting. In various embodiments, selected aspects of the present disclosure may be performed by other components of Fig. 1. For example, various aspects of the present disclosure may be performed, for instance, using logic onboard mobile device 102.
  • image processing system 106 may be implemented by one or more computing devices in network communication with each other that may be organized, for instance, as an abstract "cloud-based" computing environment.
  • image processing system 106 may include an identification module 108 and a vital sign extraction module 110.
  • one or more of modules 108-110 may be combined into a single module, distributed in whole or in part elsewhere, and/or omitted.
  • Identification module 108 may be configured to process image data captured by camera 104 to determine a depicted subject's identity. Identification module 108 may employ a variety of different techniques to identify subjects. In some embodiments, one or more facial recognition techniques may be employed by identification module 108, including but not limited to three-dimensional recognition, principal component analysis using eigenfaces, linear discriminant analysis, elastic bunch graph matching using the Fisherface algorithm, a hidden Markov model, a multilinear subspace learning using tensor representation, trained machine learning models (e.g., convolutional neural networks, recurrent neural networks), a neuronal motivated dynamic link matching, skin texture analysis, and so forth. In other embodiments, more conventional identification techniques may be employed, such as radio-frequency identification ("RFID") badges or tags inserted into clothing, bar codes, quick response ("QR") codes, etc.
  • Vital signs extraction module 110 may be configured to detect various vital signs based on image data captured by camera 104. These vital signs may include, but are not limited to, blood pressure, pulse (or heart) rate, skin color, respiratory rate, SpO2, temperature, posture, sweat levels, and so forth.
  • camera 104 may be equipped to perform so-called "contactless methods" to acquire vital signs and/or extract physiological information from subject 103. Non-limiting examples of such cameras are described in United States Patent Application Publication Nos. 20140192177A1, 20140139656A1, 20140148663A1, 20140253709A1, 20140235976A1, and U.S. Patent No. US9125606B2, which are incorporated herein by reference for all purposes.
  • triage application 112, which is communicatively coupled (e.g., via one or more computing networks) with a personal health record index 114.
  • triage application 112 may include software executed by one or more processors of one or more client devices operated by triage personnel 105, such as mobile device 102 and/or a separate computing device (e.g., laptop, smart phone, tablet, etc.) that is carried by triage personnel 105.
  • mobile device 102 may be communicatively coupled with camera 104, e.g., using Bluetooth or other similar technologies.
  • Triage application 112 may be operable by medical personnel such as triage staff to input information about subjects, such as identifying information, demographic information (e.g., age, gender), weight, symptoms, observations, lab results, vital signs measured by the medical personnel, etc.
  • triage application 112 may be operable by medical personnel to retrieve, view, and/or alter electronic health records ("EHRs") associated with subjects.
  • an EHR associated with a subject in personal health record index 114 may include one or more reference digital images of the subject.
  • reference digital images associated with subjects may be stored in a separate identity database 115, e.g., when techniques described herein are deployed in the field, in which case personal health record index 114 may not be readily available.
  • identity database 115 may be implemented on a portable computing system (e.g., contained in a triage vehicle or distributed among devices carried by medical personnel) and may be populated on-the-fly with information about subjects encountered when medical personnel enter a crisis situation.
  • identity database 115 may in some embodiments operate as a temporary subject health record index that may or may not eventually be imported into personal health record index 114. In some embodiments in which personal health record index 114 is not immediately available (e.g., in remote crisis scenarios), a direct link may be established between various other components, such as between triage application 112 and an emergency severity scoring module 116 (described below). In some embodiments, identity database 115 may be prepopulated with data imported from personal health record index 114, e.g., if a population of patients to be visited are already known.
  • reference information associated with subjects may also be stored, such as reference features extracted from digital images of the subjects, reduced dimensionality and/or feature-rich embeddings that are usable to match subjects depicted in incoming image data, etc.
  • a machine learning model, e.g., used by identification module 108, may be trained to process, as input, image data depicting a subject, and generate output that includes an embedded feature vector that maps the input image data to a reduced dimensionality embedding space.
  • new image data depicting a subject is received (e.g., in the field, in a waiting room), that new image data may be similarly embedded.
  • One or more nearest neighbor embeddings may be determined, e.g., using Euclidean distance, cosine similarity, etc., and an identity associated with the nearest neighbor(s) may be determined to be the identity of the subject depicted in the new image data.
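A minimal sketch of the embedding-and-nearest-neighbor matching described above, assuming embeddings have already been produced by some trained model; cosine similarity and the threshold value are illustrative choices, not requirements of the disclosure.

```python
import numpy as np

def identify_subject(query_embedding, reference_embeddings, identities, threshold=0.6):
    """Match an embedding of an incoming image against reference embeddings
    (e.g., from an identity database) using cosine similarity."""
    refs = np.asarray(reference_embeddings, dtype=float)
    q = np.asarray(query_embedding, dtype=float)
    sims = refs @ q / (np.linalg.norm(refs, axis=1) * np.linalg.norm(q) + 1e-9)
    best = int(np.argmax(sims))
    if sims[best] < threshold:
        return None  # unknown subject: a new record could be created in the index
    return identities[best], float(sims[best])

# Toy 3-dimensional "embeddings"; a real model would emit hundreds of dimensions.
known = [[0.9, 0.1, 0.0], [0.0, 0.8, 0.6]]
print(identify_subject([0.85, 0.15, 0.05], known, ["subject-A", "subject-B"]))
```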
  • identification module 108 may apply one or more of the previously-mentioned facial recognition techniques with the image(s) associated with subjects in personal health record index 114 and/or identity database 115 as input to identify a subject depicted in image data captured by camera 104.
  • various types of machine learning models (e.g., convolutional neural networks, recurrent neural networks, support vector machines, etc.) may be trained using conventional techniques (e.g., with labeled images depicting known subjects as training data) to generate output such as the previously-described embeddings.
  • a new record (e.g., EHR) may be created in index 114/115, e.g., with an assigned unique identifier, so that the subject can be identified later, and so that the subject's EHR can be populated by triage personnel and/or using techniques described herein.
  • An emergency severity scoring module 116 may be communicatively coupled with personal health record index 114 and, e.g., via a communication interface 118, computing devices operated by medical personnel that may include the triage medical personnel 105 and/or other medical personnel 120, such as clinicians located remotely from a disaster area in which disclosed techniques are being applied.
  • emergency severity scoring module 116 may be implemented as part of the same cloud-based infrastructure as image processing system 106, or it may be separate therefrom.
  • Emergency severity scoring module 116 may be configured to assign severity scores to subjects, e.g., encountered in a crisis situation and/or in an emergency department waiting room. These severity scores may be used for a variety of purposes, such as determining how often to recheck subjects, which order to treat subjects (i.e. priority of care), and so forth.
  • Severity scores may be determined based on various subject-related inputs, such as vital signs, demographics, subject waiting times, and/or other subject information.
  • Various clinical decision support (“CDS”) algorithms may be employed to calculate severity scores.
  • a subject queue may be maintained for subjects in an area being monitored, such as an emergency department waiting room and/or a crisis situation.
  • the subject queue may dictate an order in which subjects are periodically monitored (e.g., using techniques described herein) and/or treated.
  • the subject queue may be ordered and/or reordered based on ongoing monitoring of subjects, e.g., using techniques described herein.
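A minimal sketch of how a severity score might be computed from extracted vital signs and waiting time and then used to reorder the subject queue; the thresholds and weights below are placeholders, not a validated clinical decision support algorithm.

```python
from dataclasses import dataclass, field

@dataclass
class Subject:
    subject_id: str
    heart_rate_bpm: float
    respiratory_rate: float   # breaths per minute
    spo2_percent: float
    waiting_min: float = 0.0
    score: float = field(default=0.0, init=False)

def severity_score(s: Subject) -> float:
    """Toy rule-based score: higher means the subject should be rechecked or treated sooner."""
    score = 0.0
    if s.heart_rate_bpm > 110 or s.heart_rate_bpm < 50:
        score += 2.0
    if s.respiratory_rate > 24:
        score += 2.0
    if s.spo2_percent < 92:
        score += 3.0
    score += min(s.waiting_min / 30.0, 2.0)  # long waits gradually raise priority
    return score

def reorder_queue(subjects):
    for s in subjects:
        s.score = severity_score(s)
    return sorted(subjects, key=lambda s: s.score, reverse=True)

queue = [Subject("A", 72, 14, 98, waiting_min=45), Subject("B", 118, 26, 90, waiting_min=5)]
print([s.subject_id for s in reorder_queue(queue)])  # ['B', 'A']
```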
  • Communication interface 118 may be a standalone computing system and/or a module incorporated into other computing systems (e.g., mobile device 102 or a cloud-based system) described herein.
  • communication interface 118 may be configured to deliver alerts and/or reports to relevant parties (e.g., 105, 120) through suitable communication channels, such as wireless transmission, wired transmission, etc. These alerts and/or reports may be delivered in various forms, such as emails, text messages, smartphone applications, web pages, etc.
  • vital signs captured by vital signs extraction module 110 and/or subject identities determined by identification module 108 may be used for various purposes, in addition to or instead of for populating EHRs in personal health record index 114.
  • captured vital signs and/or subject identities may be provided to triage application 112, e.g., to automatically populate appropriate fields. This may free potentially overwhelmed medical personnel operating mobile device 102 to perform other tasks, ask questions, etc.
  • a subject's identity determined by identification module 108 and/or the subject's vital signs extracted by vital signs extraction module 110 may be accompanied by confidence measures. These confidence measures may be calculated, e.g., by module 108 and/or 110, taking into account various factors, such as patient and/or medical personnel movement, occlusion of subjects, etc., that may impact a reliability of the identity/vital signs. Depending on these confidence measures, in various embodiments, subject identities and/or vital signs may be ignored, accepted, tentatively accepted (e.g., which may cause an attempt to be made to corroborate), and so forth.
  • image processing system 106 may select, and provide to mobile device 102, configurations to be implemented by camera 104 in real time.
  • mobile device 102/camera 104 may, at step A, initially capture relatively high resolution image(s), e.g., at a relatively low frame rate, and provide these to image processing system 106.
  • These initial images may be used, e.g., by identification module 108 of image processing system 106, for patient identification.
  • image processing system 106 may or may not return the patient’s identity (e.g., to be imported into input fields of triage application 112), along with a configuration to be implemented by camera 104.
  • the configuration calls for an increased frame rate.
  • image processing system 106 may predict at least a point in time associated with peak blushing of the subject captured in the image data transmitted at step A.
  • image processing system 106 may select, as the configuration to be implemented by camera 104, one or more instructions for camera 104 to perform burst-mode capture during the predicted point in time.
  • Consequently, at step C of Fig. 2, camera 104 may initiate a burst-mode capture during the predicted point in time, and may return the image data to image processing system 106.
  • the higher frame rate image data is represented visually by a sequence of higher temporal density image frames provided by camera 104 (e.g., via mobile device 102) to image processing system 106.
  • camera 104 may employ maximum binning— relatively high frame rate at relatively low resolution— to capture the subject’s heartrate and/or respiratory rate with increased accuracy. Some cameras may have a relatively low ceiling with regard to maximum frame rate, and so binning can be helpful to achieve a higher frame rate.
  • the configuration provided to camera 104 by image processing system 106 may request uneven temporal spacing of image frames, e.g., with a higher temporal density of frames requested at moments in time when a heartbeat is predicted. From this sequence of high (temporal) density image frames, image processing system 106, e.g., by way of vital signs extraction module 110, may extract various vital signs, such as the subject's pulse (heartrate), respiratory rate, SpO2, etc.
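A minimal sketch of building such an unevenly spaced capture schedule, assuming heartbeat times have already been predicted remotely; the parameter names and the window width are illustrative.

```python
import numpy as np

def frame_schedule(start_s, end_s, beat_times_s, base_fps=10, burst_fps=60, half_width_s=0.15):
    """Return frame timestamps that are sparse overall but dense around each
    predicted heartbeat, reducing bandwidth and battery use between beats."""
    sparse = np.arange(start_s, end_s, 1.0 / base_fps)
    dense = np.concatenate([
        np.arange(b - half_width_s, b + half_width_s, 1.0 / burst_fps)
        for b in beat_times_s
    ])
    schedule = np.unique(np.concatenate([sparse, dense]))
    return schedule[(schedule >= start_s) & (schedule < end_s)]

# Two predicted beats inside a one-second window.
print(frame_schedule(0.0, 1.0, [0.3, 0.8]).size)
```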
  • image processing system 106 may transmit another configuration to camera 104 that identifies a particular region of interest to capture.
  • image processing system 106 may provide specific Cartesian coordinates, and/or a reference point (e.g., the subject’s eyes) that should be used as a point of focus.
  • camera 104 may pan, tilt, and/or zoom in on the requested region/reference feature and acquire additional images.
  • camera 104 may provide these ROI image frames to image processing system 106 for additional processing (e.g., vital sign acquisition).
  • the ROI of the camera may be altered, e.g., by image processing system 106, to capture one subject (or a portion thereof) and exclude the other subject.
  • the ROI may then be switched to the other subject, so that vital sign(s) and/or other information may be captured from both subjects.
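A minimal sketch of how an ROI request could be represented and applied, assuming a simple crop is used when hardware pan/tilt/zoom is unavailable; the class and field names are illustrative, not drawn from the disclosure.

```python
from dataclasses import dataclass
from typing import Optional

import numpy as np

@dataclass
class RoiConfig:
    """ROI request: explicit pixel coordinates and, optionally, a named reference
    feature (e.g., 'eyes') the camera should center and zoom on."""
    x: int
    y: int
    width: int
    height: int
    reference_feature: Optional[str] = None

def apply_roi(frame: np.ndarray, roi: RoiConfig) -> np.ndarray:
    """Crop an H x W x C frame to the requested ROI as a post-processing fallback."""
    return frame[roi.y:roi.y + roi.height, roi.x:roi.x + roi.width]

# Example: isolate one subject's face region, excluding a second subject in frame.
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
print(apply_roi(frame, RoiConfig(x=600, y=200, width=320, height=320)).shape)  # (320, 320, 3)
```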
  • Burst-mode and ROI capture are just two non-limiting examples of parameters that may be included in a configuration for camera 104.
  • additional parameters may include parameters that are selected to enable camera 104 to capture (or avoid capturing) a subject sneezing or coughing.
  • subjects may manifest various physical signs before coughing or sneezing, such as taking a deep breath, closing their eyes, covering their mouth/nose, etc.
  • image processing system 106 may send a configuration that transitions camera 104 to a burst-mode that captures the subject’s sneeze/cough at higher resolution, and/or stops capturing image data for some predetermined amount of time until it is predicted that the subject’s sneeze/cough will have completed. Additionally or alternatively, in some embodiments, image processing system 106 may provide camera 104 with a configuration that requests a relatively low resolution temperature readout (assuming camera 104 is capable of thermal imaging). Other parameters may include lighting and/or color settings, which may be deliberately and/or automatically altered based on the lighting of the environment.
  • a nighttime crisis or a crisis that occurs in a dark interior may require different lighting settings than a well-lit emergency department and/or daytime scenario.
  • a light source, typically an LED "flash" of the camera or mobile device, may be triggered in synchrony with the frame capture, thereby increasing battery life of the camera or mobile device compared to manually turning on the "LED" flash.
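A minimal sketch of the synchronized-flash idea: pulse the LED only while each frame is exposed instead of leaving the torch on. The camera and LED objects stand in for platform-specific APIs and are assumptions, not part of the disclosure.

```python
import time

class StubLed:
    def on(self): pass
    def off(self): pass

class StubCamera:
    exposure_time_s = 0.005
    def capture_frame(self): return b"frame-bytes"

def capture_with_synchronized_flash(camera, led, n_frames, fps):
    """Light the scene only during exposure; keep the LED off between frames to save battery."""
    period_s = 1.0 / fps
    frames = []
    for _ in range(n_frames):
        led.on()
        frames.append(camera.capture_frame())
        led.off()
        time.sleep(max(period_s - camera.exposure_time_s, 0.0))
    return frames

print(len(capture_with_synchronized_flash(StubCamera(), StubLed(), n_frames=3, fps=30)))
```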
  • Fig. 3 depicts an example method 300 for practicing selected aspects of the present disclosure, in accordance with various embodiments.
  • the operations of the flow chart are described with reference to a system that performs the operations.
  • This system may include various components of various computer systems, including image processing system 106 and/or its various constituent modules.
  • while operations of method 300 are shown in a particular order, this is not meant to be limiting. One or more operations may be reordered, omitted or added.
  • the system may obtain first image data generated by a mobile device (e.g., 102) equipped with a camera, such as camera 104.
  • the camera 104 itself may comprise the mobile device 102 insofar as the onboard electronics are those that are typically associated with a digital camera.
  • the digital camera may include a communication interface, such as a Wi-Fi or Bluetooth interface, that enables the digital camera to communicate with another computing device, such as a smart phone or tablet carried by medical personnel.
  • camera 104 may be integral with a device such as a smart phone, smart watch, and/or tablet such that collectively, camera 104 and that device comprise mobile device 102.
  • the camera may be mounted to the medical personnel’s body, e.g., worn as smart glasses, strapped to their head/chest/torso (e.g., a bodycam), etc.
  • Image data captured by the camera which depicts a subject (e.g., being interacted with by medical personnel), may be provided to image processing system 106.
  • the system may analyze the received first image data.
  • This analysis may include the various types of facial recognition described above, which may be employed to determine an identity of the subject depicted in the image data. Additionally or alternatively, the analysis may include image processing that is performed to extract one or more vital signs of the depicted subject.
  • the system may select at least one configuration for the camera. For example, image processing system 106 may determine that a particular ROI of the image data obtained at block 302 depicts a portion of interest, and may request that the camera pan, tilt, and/or zoom in on this area (or perform post-processing that crops unneeded areas of the image data). Additionally or alternatively, based on the analysis at block 304, in some embodiments the system may predict, at block 308, a point or period in time at which the depicted subject will likely be undergoing maximum blushing. It is at this point/period in time that heartrate, respiratory rate, and/or other vital signs can be obtained with optimized accuracy. At block 310, the system may select, as the configuration to be provided to the camera, an instruction to perform burst-mode capture during the at least one point/period in time predicted at block 308.
  • the system may provide the mobile device with data indicative of the selected at least one configuration.
  • image processing system 106 may transmit, to mobile device 102, data indicative of one or more settings and/or parameters to be implemented by camera 104.
  • the settings and/or parameters may be accompanied with temporal data that indicates when the settings/parameters are to be implemented, such as the burst-mode being used during a point in time during which the subject is most likely to be blushing.
  • the system may obtain second image data generated by the mobile device after provision of the data indicative of the selected at least one configuration.
  • the second image data may capture the same subject but may have been captured pursuant to the configuration provided to the mobile device at block 312.
  • the system, e.g., by way of vital sign extraction module 110, may process the second image data to extract at least one vital sign of the subject.
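A minimal sketch of how the blocks of method 300 could be orchestrated on the image-processing-system side, with the actual analysis, configuration selection, and extraction steps passed in as callables; the wiring below is purely illustrative.

```python
def run_method_300(receive_image_data, send_configuration, analyze,
                   select_configuration, extract_vitals):
    """Orchestrate blocks 302-316 with pluggable implementations."""
    first = receive_image_data()               # block 302: obtain first image data
    analysis = analyze(first)                  # block 304: analyze
    config = select_configuration(analysis)    # blocks 306-310: select configuration
    send_configuration(config)                 # block 312: provide configuration
    second = receive_image_data()              # block 314: obtain second image data
    return extract_vitals(second, config)      # block 316: extract vital sign(s)

# Toy wiring with stand-in callables.
frames = iter([b"low-rate frames", b"burst frames"])
print(run_method_300(
    receive_image_data=lambda: next(frames),
    send_configuration=lambda cfg: None,
    analyze=lambda img: {"subject_detected": True},
    select_configuration=lambda a: {"frame_rate": 60, "binning": 2},
    extract_vitals=lambda img, cfg: {"heart_rate_bpm": 76},
))
```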
  • FIG. 4 is a block diagram of an example computer system 410.
  • Computer system 410 typically includes at least one processor 414 which communicates with a number of peripheral devices via bus subsystem 412.
  • peripheral devices may include a storage subsystem 424, including, for example, a memory subsystem 425 and a file storage subsystem 426, user interface output devices 420, user interface input devices 422, and a network interface subsystem 416.
  • Network interface subsystem 416 provides an interface to outside networks and is coupled to corresponding interface devices in other computer systems.
  • User interface input devices 422 may include a keyboard, pointing devices such as a mouse, trackball, touchpad, or graphics tablet, a scanner, a touchscreen incorporated into the display, audio input devices such as voice recognition systems, microphones, and/or other types of input devices.
  • use of the term "input device" is intended to include all possible types of devices and ways to input information into computer system 410 or onto a communication network.
  • User interface output devices 420 may include a display subsystem, a printer, a fax machine, or non-visual displays such as audio output devices.
  • the display subsystem may include a cathode ray tube (CRT), a flat-panel device such as a liquid crystal display (LCD), a projection device, or some other mechanism for creating a visible image.
  • the display subsystem may also provide non-visual display such as via audio output devices.
  • use of the term "output device" is intended to include all possible types of devices and ways to output information from computer system 410 to the subject or to another machine or computer system.
  • Storage subsystem 424 stores programming and data constructs that provide the functionality of some or all of the modules/engines described herein.
  • the storage subsystem 424 may include the logic to perform selected aspects of method 300, and/or to implement one or more components depicted in the various figures.
  • Memory 425 used in the storage subsystem 424 can include a number of memories including a main random access memory (RAM) 430 for storage of instructions and data during program execution and a read only memory (ROM) 432 in which fixed instructions are stored.
  • a file storage subsystem 426 can provide persistent storage for program and data files, and may include a solid state drive, hard disk drive, a CD-ROM drive, an optical drive, or removable media cartridges. Modules implementing the functionality of certain implementations may be stored by file storage subsystem 426 in the storage subsystem 424, or in other machines accessible by the processor(s) 414.
  • Bus subsystem 412 provides a mechanism for letting the various components and subsystems of computer system 410 communicate with each other as intended. Although bus subsystem 412 is shown schematically as a single bus, alternative implementations of the bus subsystem may use multiple busses.
  • Computer system 410 can be of varying types including a workstation, server, computing cluster, blade server, server farm, smart phone, smart watch, smart glasses, set top box, tablet computer, laptop, or any other data processing system or computing device. Due to the ever-changing nature of computers and networks, the description of computer system 410 depicted in Fig. 4 is intended only as a specific example for purposes of illustrating some implementations. Many other configurations of computer system 410 are possible having more or fewer components than the computer system depicted in Fig. 4.
  • inventive embodiments are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, inventive embodiments may be practiced otherwise than as specifically described and claimed.
  • inventive embodiments of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein.
  • a reference to "A and/or B", when used in conjunction with open-ended language such as "comprising" can refer, in one embodiment, to A only (optionally including elements other than B); in another embodiment, to B only (optionally including elements other than A); in yet another embodiment, to both A and B (optionally including other elements); etc.
  • "or" should be understood to have the same meaning as "and/or" as defined above.
  • "or" or "and/or" shall be interpreted as being inclusive, i.e., the inclusion of at least one, but also including more than one, of a number or list of elements, and, optionally, additional unlisted items. Only terms clearly indicated to the contrary, such as "only one of" or "exactly one of," or, when used in the claims, "consisting of," will refer to the inclusion of exactly one element of a number or list of elements.
  • the phrase "at least one," in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements.
  • This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase "at least one" refers, whether related or unrelated to those elements specifically identified.
  • “at least one of A and B” can refer, in one embodiment, to at least one, optionally including more than one, A, with no B present (and optionally including elements other than B); in another embodiment, to at least one, optionally including more than one, B, with no A present (and optionally including elements other than A); in yet another embodiment, to at least one, optionally including more than one, A, and at least one, optionally including more than one, B (and optionally including other elements); etc.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • General Business, Economics & Management (AREA)
  • Biomedical Technology (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

Techniques described herein relate generally to capturing subject data in dynamic environments using mobile cameras. In various embodiments, first image data generated by a mobile device (102) equipped with a camera (104) may be obtained (302) and analyzed (304). Based on the analysis, at least one configuration may be selected (306) for the camera. The mobile device may be provided (312) with data indicative of the selected at least one configuration. Second image data generated by the mobile device after provision of the data indicative of the selected at least one configuration may be obtained (314). The second image data may capture a subject. The second image data may be processed (316) to extract at least one vital sign of the subject.

Description

CAPTURING SUBJECT DATA IN DYNAMIC ENVIRONMENTS USING
MOBILE CAMERAS
Technical Field
[0001] Various embodiments described herein are directed generally to health care. More particularly, but not exclusively, various methods and apparatus disclosed herein relate to capturing subject data in dynamic environments using mobile cameras.
Background
[0002] In situations in which patients (or "subjects") await treatment, the wait times may vary considerably, and in some cases subjects' conditions may deteriorate while waiting. This is true in often-crowded emergency department waiting rooms, as well as in various other scenarios, e.g., in remote settings, in combat, on cruise ships (e.g., norovirus outbreaks), during disasters, pandemics, mass casualty events, etc. Accordingly, it is important to monitor the subjects for an initial triage and thereafter while they await treatment. This can be difficult in potentially chaotic situations in which medical personnel are highly stressed, often overworked and/or lack resources. For example, it may be difficult for medical personnel to both manually detect vital signs, symptoms, and other relevant information, and to correctly assign this information to the correct subjects.
[0003] For example, upon each subject's arrival at the emergency department, a triage nurse assesses the condition of the subject and assigns a score which determines the priority of care given to the subject. In some cases this score may acknowledge the fact that a given subject may be able to wait without detrimental effects. This triage score may be based on many data points, with the subject's vital signs playing an important role. In order to catch deterioration while subjects await treatment, vitals should ideally be captured periodically, but in practice this rarely happens. This is also true in some of the non-emergency-department scenarios mentioned previously, in which medical personnel are typically overwhelmed by the number and/or severity of casualties.
Summary
[0004] The present disclosure is directed to methods and apparatus for capturing subject data, particularly physiological data, in dynamic environments using mobile cameras. In various embodiments, a mobile device equipped with a camera, such as a portable vital sign acquisition camera, a mobile telephone, a phablet/tablet computer, a standalone (networked) digital camera, a wearable device (e.g., smart glasses, smart watch, a digital camera strapped to medical personnel, etc.), and so forth, may be carried by medical personnel that are tasked with monitoring subjects awaiting treatment. These mobile devices may be used to identify and/or extract various information from subjects, such as vital signs, blushing, symptoms (e.g., coughing), and so forth, in order to facilitate input of symptoms, observations, and other information obtained, for example, during triage.
[0005] In some embodiments, a configuration of the mobile device's camera (e.g., its settings and/or parameters) may be altered based on processing of image data it captures, e.g., to improve detection of various information. For example, a frame rate of the camera may be adjusted based on a prediction— made using one or more image frames that depict a subject— of a period in time at which the subject is likely to exhibit peak blushing. This may be done in order to capture images at an elevated frame rate (e.g., switching from a normal frame rate of approximately 10-15 frames per second for most cameras, to its maximum frame rate of at least 60 frames per second) during the predicted blushing. In some embodiments, the higher frame rate may be determined by the temporal resolution and/or accuracy requirements for relevant vital parameters in general, and in particular for heartrate/respiratory rate detection by the duration of the "blushing" that must never fall between two consecutive frames. As another example, a region of interest ("ROI") to be captured by the camera may be determined based on image processing of image data. Other camera settings/parameters, such as pixel binning, lighting settings, shutter speed, zoom, resolution, color settings, etc., may also be adjusted. In some embodiments, a frame rate of the camera may be decreased and/or a resolution may be increased in order to capture image data that is usable to identify a subject.
[0006] A camera-equipped mobile device may be relatively resource-constrained, e.g., it may have a limited battery life, processor speed, memory, etc. Accordingly, in various embodiments, image processing and/or camera configuration selection may be delegated to one or more remote computing devices with more resources. For example, in some embodiments, image data captured by the mobile device's camera may be transmitted to one or more remote computing devices, e.g., operating a so-called "cloud" computing environment, to be processed/analyzed. In various embodiments, the remote computing device(s) may perform the image processing/analysis and, based on the processing/analysis, select a configuration for the mobile device's camera, e.g., to capture a targeted piece of information (e.g., a particular vital sign, the subject's identity, etc.).
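A minimal sketch of the mobile-device side of this exchange, assuming the transport (Wi-Fi, cellular, etc.) is wrapped behind two callables; the function and parameter names are illustrative, not defined by the disclosure.

```python
def capture_and_delegate(camera, upload_frame, fetch_configuration):
    """Send a first frame for remote analysis, apply the configuration selected
    remotely, then send frames captured under that configuration."""
    upload_frame(camera.capture_frame())              # first image data
    config = fetch_configuration()                    # e.g., {"frame_rate": 60, "burst_frames": 30}
    camera.apply(config)                              # adjust frame rate, ROI, binning, ...
    for _ in range(config.get("burst_frames", 1)):    # second image data
        upload_frame(camera.capture_frame())

class StubCamera:
    def capture_frame(self): return b"jpeg-bytes"
    def apply(self, config): self.config = config

capture_and_delegate(StubCamera(), upload_frame=lambda f: None,
                     fetch_configuration=lambda: {"frame_rate": 60, "burst_frames": 3})
```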
[0007] Generally, in one aspect, a method may include: obtaining first image data generated by a mobile device equipped with a camera; analyzing the received first image data; selecting, based on the analyzing, at least one configuration for the camera; providing the mobile device with data indicative of the selected at least one configuration; obtaining second image data generated by the mobile device after provision of the data indicative of the selected at least one configuration, wherein the second image data captures a subject; and processing at least the second image data to extract at least one vital sign of the subject.
[0008] In various embodiments, the at least one configuration for the camera may include a color setting that is selected based on the analyzing. In various embodiments, the at least one configuration for the camera may include a lighting setting that is selected based on the analyzing. In various embodiments, the at least one configuration for the camera may include a region of interest within a field of view of the camera that is selected based on the analyzing. In various embodiments, the at least one configuration for the camera may include a frame rate that is selected based on the analyzing. In various embodiments, the selected frame rate may include an elevated frame rate of at least approximately sixty frames per second. In various embodiments, the elevated frame rate may cause the camera to utilize binning.
[0009] In various embodiments, the first image data may also capture the subject. In various embodiments, selecting the at least one configuration may include: predicting at least a point in time associated with peak blushing of the subject; and selecting, as the at least one configuration, an instruction for the camera to perform burst-mode capture during the at least one point in time.
[0010] It should be appreciated that all combinations of the foregoing concepts and additional concepts discussed in greater detail below (provided such concepts are not mutually inconsistent) are contemplated as being part of the inventive subject matter disclosed herein. In particular, all combinations of claimed subject matter appearing at the end of this disclosure are contemplated as being part of the inventive subject matter disclosed herein. It should also be appreciated that terminology explicitly employed herein that also may appear in any disclosure incorporated by reference should be accorded a meaning most consistent with the particular concepts disclosed herein.
Brief Description of the Drawings
[0011] In the drawings, like reference characters generally refer to the same parts throughout the different views. Also, the drawings are not necessarily to scale, emphasis instead generally being placed upon illustrating various principles of the embodiments described herein.
[0012] Fig. 1 schematically illustrates an example environment in which selected aspects of the present disclosure may be implemented, in accordance with various embodiments.
[0013] Fig. 2 schematically illustrates one example of an exchange that may occur between a camera-equipped mobile device and an image processing system to implement selected aspects of the present disclosure, in accordance with various embodiments.
[0014] Fig. 3 depicts an example method for performing selected aspects of the present disclosure, in accordance with various embodiments.
[0015] Fig. 4 schematically depicts an example computer system architecture, in accordance with various embodiments.
Detailed Description
[0016] In situations in which patients (or "subjects") await treatment, the wait times may vary considerably, and in some cases subjects' conditions may deteriorate while waiting. This is true in often-crowded emergency department waiting rooms, as well as in various other scenarios, e.g., in remote settings, in combat, on cruise ships (e.g., norovirus outbreaks), during disasters, pandemics, mass casualty events, etc. Accordingly, it is important to monitor the subjects for an initial triage and thereafter while they await treatment. This can be difficult in potentially chaotic situations in which medical personnel are highly stressed, often overworked and/or lack resources. For example, it may be difficult for medical personnel to both manually detect vital signs, symptoms, and other relevant information, and to correctly assign this information to the correct subjects. In view of the foregoing, various embodiments and implementations of the present disclosure are directed to capturing subject data in dynamic environments using mobile cameras.
[0017] Referring to Fig. 1, an environment in which selected aspects of the present disclosure may be implemented is depicted. Various components are depicted with lines and other connectors to indicate that these components are, at the very least, in communication with each other, or "communicatively coupled." Various types of communication technologies may be employed by the various components to communicate with each other, depending on whether they are implemented on a single computing system, across multiple computing systems via one or more computing networks, and so forth. For example, components that are in network communication with each other may employ various wired and/or wireless technologies, including but not limited to Wi-Fi, Bluetooth, Ethernet, cellular communication, satellite communication, optical communication, and so forth.
[0018] A mobile device 102 that is carried by medical personnel (not depicted) may be equipped with a camera 104. Mobile device 102 may take various forms, such as a body-mounted camera (e.g., similar to those used sometimes by law enforcement), smart glasses, a smart watch, a mobile phone, a digital camera, a tablet computer, and so forth. In some embodiments it may be preferable (though not required) that mobile device 102 be passively mounted to the body of medical personnel, leaving their hands free for other purposes. Thus, in various embodiments, mobile device 102 and camera 104 may be integral in a single unit. However, they may also be separate. For example, camera 104 may be a standalone camera, e.g., a so-called "action camera" strapped to a triage nurse, that is in Bluetooth communication with a mobile phone, tablet, etc., carried by the triage nurse. In Fig. 1, as shown by the dashed lines, camera 104 is strapped to the forehead of a triage nurse 105.
[0019] In various embodiments, mobile device 102, e.g., by way of camera 104, may capture image data that may include, for instance, one or more image frames. One or more of these image frames may capture one or more subjects 103, such as subjects in an emergency department waiting room, in a remote disaster area, in a conflict zone, etc. Image data may come in various forms, such as color data (e.g., RGB), black and white, thermal imaging data (e.g., infrared), three-dimensional data, so-called 2.5 dimensional data, etc. In various embodiments, camera 104 may include a variety of settings and/or parameters, often referred to herein collectively as its "configuration." One or more of these settings and/or parameters may be adjustable, e.g., by the user and/or automatically, e.g., by way of one or more commands received at an application programming interface ("API") of camera 104. These settings and/or parameters may include but are not limited to frame rate, color settings, lighting settings, shutter speed, zoom, resolution, pixel "binning", aspect ratio, a region of interest ("ROI"), and any other setting and/or parameter known to be associated with cameras generally.
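A minimal sketch of representing such a "configuration" as a structured object that a remote system can serialize and a device can translate into commands for the camera's API; the field names are illustrative, not drawn from the disclosure.

```python
from dataclasses import dataclass, asdict
from typing import Optional, Tuple

@dataclass
class CameraConfiguration:
    frame_rate: Optional[float] = None                 # frames per second
    resolution: Optional[Tuple[int, int]] = None       # width, height
    pixel_binning: Optional[int] = None                # e.g., 2 means 2x2 binning
    shutter_speed_s: Optional[float] = None
    zoom: Optional[float] = None
    color_setting: Optional[str] = None                # e.g., "raw", "srgb"
    lighting_setting: Optional[str] = None             # e.g., "flash_synchronized"
    roi: Optional[Tuple[int, int, int, int]] = None    # x, y, width, height

def to_api_commands(config: CameraConfiguration) -> dict:
    """Keep only the fields that were actually set, ready to send to a camera API."""
    return {k: v for k, v in asdict(config).items() if v is not None}

print(to_api_commands(CameraConfiguration(frame_rate=60, pixel_binning=2)))
```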
[0020] As mentioned previously, in various embodiments, mobile device 102 may be relatively resource-constrained. This may be particularly true in scenarios in which mobile device 102 is carried by medical personnel in the field, e.g., in a conflict zone, disaster area, remote pandemic outbreak, etc. Accordingly, in various embodiments, various aspects of techniques described herein may be performed remotely from mobile device 102, e.g., by “cloud-based” components that have, at least relative to mobile device 102, access to virtually limitless resources. For example, in some embodiments, mobile device 102 may be configured to provide image data captured by camera 104 to a remote image processing system 106, so that image processing system 106 can perform techniques described herein.
[0021] Generally speaking, image processing system 106 may be configured to obtain image data provided by one or more mobile devices 102, process/analyze the image data, and based on the processing/analyzing, select various configurations to be implemented by the mobile devices 102— and more particularly, by their respective cameras 104— so that various targeted information about subjects 103 can be determined. Once the camera(s) 104 are properly configured, image processing system 106 may be further configured to process subsequent image data to identify and/or determine various targeted information about subjects, such as their identities, vital signs, symptoms, etc. In various examples described herein, image processing system 106 performs the bulk of the processing. However, this is not meant to be limiting. In various embodiments, selected aspects of the present disclosure may be performed by other components of Fig. 1. For example, various aspects of the present disclosure may be performed, for instance, using logic onboard mobile device 102.
[0022] In some embodiments, image processing system 106 may be implemented by one or more computing devices in network communication with each other that may be organized, for instance, as an abstract "cloud-based" computing environment. In various embodiments, image processing system 106 may include an identification module 108 and a vital sign extraction module 110. In other embodiments, one or more of modules 108-110 may be combined into a single module, distributed in whole or in part elsewhere, and/or omitted.
[0023] Identification module 108 may be configured to process image data captured by camera 104 to determine a depicted subject's identity. Identification module 108 may employ a variety of different techniques to identify subjects. In some embodiments, one or more facial recognition techniques may be employed by identification module 108, including but not limited to three-dimensional recognition, principal component analysis using eigenfaces, linear discriminant analysis, elastic bunch graph matching, the Fisherface algorithm, hidden Markov models, multilinear subspace learning using tensor representation, trained machine learning models (e.g., convolutional neural networks, recurrent neural networks), neuronal motivated dynamic link matching, skin texture analysis, and so forth. In other embodiments, more conventional identification techniques may be employed, such as radio-frequency identification ("RFID") badges or tags inserted into clothing, bar codes, quick response ("QR") codes, etc.
[0024] Vital signs extraction module 110 may be configured to detect various vital signs based on image data captured by camera 104. These vital signs may include, but are not limited to, blood pressure, pulse (or heart) rate, skin color, respiratory rate, SpO2, temperature, posture, sweat levels, and so forth. In some embodiments, camera 104 may be equipped to perform so-called "contactless methods" to acquire vital signs and/or extract physiological information from subject 103. Non-limiting examples of such cameras are described in United States Patent Application Publication Nos. 20140192177A1, 20140139656A1, 20140148663A1, 20140253709A1, 20140235976A1, and U.S. Patent No. US9125606B2, which are incorporated herein by reference for all purposes.
[0025] Also depicted in Fig. 1 integral with mobile device 102 is a triage application 112, which is communicatively coupled (e.g., via one or more computing networks) with a personal health record index 114. In various embodiments, triage application 112 may include software executed by one or more processors of one or more client devices operated by triage personnel 105, such as mobile device 102 and/or a separate computing device (e.g., laptop, smart phone, tablet, etc.) that is carried by triage personnel 105. In some embodiments in which camera 104 is a standalone camera, e.g., mounted to the body of triage personnel 105, mobile device 102 may be communicatively coupled with camera 104, e.g., using Bluetooth or other similar technologies.
[0026] Triage application 112 may be operable by medical personnel such as triage staff to input information about subjects, such as identifying information, demographic information (e.g., age, gender), weight, symptoms, observations, lab results, vital signs measured by the medical personnel, etc. In some embodiments, triage application 112 may be operable by medical personnel to retrieve, view, and/or alter electronic health records ("EHRs") associated with subjects. These EHRs or other (e.g., retrospective) information about a subject may be contained, for instance, in personal health record index 114, which may be, for instance, part of a conventional hospital information system ("HIS").
[0027] In order for identification module 108 to identify subjects, in some embodiments, it may rely on reference data about subjects that was obtained previously. Accordingly, in various embodiments, an EHR associated with a subject in personal health record index 114 may include one or more reference digital images of the subject. Additionally or alternatively, in some embodiments, reference digital images associated with subjects may be stored in a separate identity database 115, e.g., when techniques described herein are deployed in the field, in which case personal health record index 114 may not be readily available. For example, identity database 115 may be implemented on a portable computing system (e.g., contained in a triage vehicle or distributed among devices carried by medical personnel) and may be populated on-the-fly with information about subjects encountered when medical personnel enter a crisis situation. Thus, identity database 115 may in some embodiments operate as a temporary subject health record index that may or may not eventually be imported into personal health record index 114. In some embodiments in which personal health record index 114 is not immediately available (e.g., in remote crisis scenarios), a direct link may be established between various other components, such as between triage application 112 and an emergency severity scoring module 116 (described below). In some embodiments, identity database 115 may be prepopulated with data imported from personal health record index 114, e.g., if a population of patients to be visited is already known.
[0028] In addition to or instead of reference digital images, other reference information associated with subjects may also be stored, such as reference features extracted from digital images of the subjects, reduced dimensionality and/or feature-rich embeddings that are usable to match subjects depicted in incoming image data, etc. For example, in some embodiments, a machine learning model, e.g., used by identification module 108, may be trained to process, as input, image data depicting a subject, and generate output that includes an embedded feature vector that maps the input image data to a reduced dimensionality embedding space. When new image data depicting a subject is received (e.g., in the field, in a waiting room), that new image data may be similarly embedded. One or more nearest neighbor embeddings may be determined, e.g., using Euclidean distance, cosine similarity, etc., and an identity associated with the nearest neighbor(s) may be determined to be the identity of the subject depicted in the new image data.
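The sketch below illustrates the nearest-neighbor lookup just described, assuming a separate (unspecified) model has already mapped each face image to a fixed-length embedding vector; the function name, the similarity threshold, and the choice of cosine similarity are illustrative assumptions, not requirements of the disclosure.

```python
# Illustrative nearest-neighbor identity lookup over precomputed face embeddings.
# The embedding model itself is assumed; threshold and names are hypothetical.
import numpy as np

def identify_subject(query_embedding: np.ndarray,
                     reference_embeddings: dict,
                     min_similarity: float = 0.8):
    """Return (subject_id, similarity) of the closest reference embedding, or
    (None, best_similarity) if nothing is close enough (possibly a new subject)."""
    best_id, best_sim = None, -1.0
    q = query_embedding / np.linalg.norm(query_embedding)
    for subject_id, ref in reference_embeddings.items():
        r = ref / np.linalg.norm(ref)
        sim = float(np.dot(q, r))  # cosine similarity of unit vectors
        if sim > best_sim:
            best_id, best_sim = subject_id, sim
    if best_sim < min_similarity:
        return None, best_sim  # caller may create a new record with a new identifier
    return best_id, best_sim
```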
[0029] In some such embodiments, identification module 108 may apply one or more of the previously-mentioned facial recognition techniques with the image(s) associated with subjects in personal health record index 114 and/or identity database 115 as input to identify a subject depicted in image data captured by camera 104. In some embodiments, various types of machine learning models (e.g., convolutional neural networks, recurrent neural networks, support vector machines, etc.) may be trained using conventional techniques (e.g., with labeled images depicting known subjects as training data) to generate output, such as the previously-described embeddings, that is usable to associate a previously-captured image of a subject with a subject depicted in image data captured by camera 104. If the captured subject is not found in personal health record index 114/identity database 115, the subject may be new. Accordingly, a new record (e.g., EHR) may be created in index 114/115, e.g., with an assigned unique identifier, so that the subject can be identified later, and so that the subject's EHR can be populated by triage personnel and/or using techniques described herein.
[0030] An emergency severity scoring module 116 may be communicatively coupled with personal health record index 114 and, e.g., via a communication interface 118, computing devices operated by medical personnel that may include the triage medical personnel 105 and/or other medical personnel 120, such as clinicians located remotely from a disaster area in which disclosed techniques are being applied. In some embodiments, emergency severity scoring module 116 may be implemented as part of the same cloud-based infrastructure as image processing system 106, or it may be separate therefrom.
[0031] Emergency severity scoring module 116 may be configured to assign severity scores to subjects, e.g., encountered in a crisis situation and/or in an emergency department waiting room. These severity scores may be used for a variety of purposes, such as determining how often to recheck subjects, the order in which to treat subjects (i.e., priority of care), and so forth. Severity scores may be determined based on various subject-related inputs, such as vital signs, demographics, subject waiting times, and/or other subject information. Various clinical decision support ("CDS") algorithms may be employed to calculate severity scores. In some embodiments, a subject queue may be maintained for subjects in an area being monitored, such as an emergency department waiting room and/or a crisis situation. The subject queue may dictate an order in which subjects are periodically monitored (e.g., using techniques described herein) and/or treated. The subject queue may be ordered and/or reordered based on ongoing monitoring of subjects, e.g., using techniques described herein.
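A minimal sketch of such a severity-ordered subject queue follows; how each severity score is computed by a CDS algorithm is assumed, and the function and subject identifiers are hypothetical.

```python
# Illustrative subject queue ordered by severity score (higher = more urgent);
# how each score is computed by a CDS algorithm is assumed here.
def order_subject_queue(severity_scores: dict) -> list:
    """severity_scores maps subject_id -> latest severity score.
    Returns subject ids, most severe first, i.e., the monitoring/treatment order."""
    return sorted(severity_scores, key=severity_scores.get, reverse=True)

scores = {"subject-04": 7.9, "subject-17": 3.2, "subject-09": 5.5}
print(order_subject_queue(scores))  # ['subject-04', 'subject-09', 'subject-17']

# Ongoing monitoring updates a score, and re-sorting reorders the queue.
scores["subject-17"] = 8.6
print(order_subject_queue(scores))  # ['subject-17', 'subject-04', 'subject-09']
```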
[0032] Communication interface 118 may be a standalone computing system and/or a module incorporated into other computing systems (e.g., mobile device 102 or a cloud-based system) described herein. In some embodiments, communication interface 118 may be configured to deliver alerts and/or reports to relevant parties (e.g., 105, 120) through suitable communication channels, such as wireless transmission, wired transmission, etc. These alerts and/or reports may be delivered in various forms, such as emails, text messages, smartphone applications, web pages, etc.

[0033] In various embodiments, vital signs captured by vital signs extraction module 110 and/or subject identities determined by identification module 108 may be used for various purposes, in addition to or instead of populating EHRs in personal health record index 114. For example, in some embodiments, captured vital signs and/or subject identities may be provided to triage application 112, e.g., to automatically populate appropriate fields. This may free potentially overwhelmed medical personnel operating mobile device 102 to perform other tasks, ask questions, etc. In some embodiments, a subject's identity determined by identification module 108 and/or the subject's vital signs extracted by vital signs extraction module 110 may be accompanied by confidence measures. These confidence measures may be calculated, e.g., by module 108 and/or 110, taking into account various factors, such as patient and/or medical personnel movement, occlusion of subjects, etc., that may impact the reliability of the identity/vital signs. Depending on these confidence measures, in various embodiments, subject identities and/or vital signs may be ignored, accepted, tentatively accepted (e.g., which may trigger an attempt at corroboration), and so forth.
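One way such confidence-based gating might be implemented is sketched below; the thresholds and the three-way outcome labels are illustrative assumptions rather than part of the disclosed system.

```python
# Hypothetical confidence gating for a camera-derived identity or vital sign.
def handle_measurement(value, confidence: float,
                       accept_at: float = 0.9, tentative_at: float = 0.6):
    """Return a disposition for the measurement based on its confidence measure."""
    if confidence >= accept_at:
        return ("accept", value)
    if confidence >= tentative_at:
        # Tentatively accept; a re-measurement or manual check may corroborate it.
        return ("tentative", value)
    return ("ignore", None)

print(handle_measurement(72, confidence=0.95))  # ('accept', 72)
print(handle_measurement(72, confidence=0.40))  # ('ignore', None)
```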
[0034] Of particular relevance to the present disclosure is the ability of image processing system 106 to select, and provide to mobile device 102, configurations to be implemented by camera 104 in real time. As an example, and referring to Fig. 2, when triage personnel 105 initially approaches a subject 103, mobile device 102/camera 104 may, at step A, capture relatively high resolution image(s), e.g., at a relatively low frame rate, and provide these to image processing system 106. These initial images may be used, e.g., by identification module 108 of image processing system 106, for patient identification.
[0035] Based on the results of patient identification, at step B, image processing system 106 may or may not return the patient's identity (e.g., to be imported into input fields of triage application 112), along with a configuration to be implemented by camera 104. In this example, the configuration calls for an increased frame rate. For example, image processing system 106 may predict at least a point in time associated with peak blushing of the subject captured in the image data transmitted at step A. In various embodiments, image processing system 106 may select, as the configuration to be implemented by camera 104, one or more instructions for camera 104 to perform burst-mode capture during the predicted point in time.

[0036] Consequently, at step C of Fig. 2, camera 104 may initiate a burst-mode capture during the predicted point in time, and may return the image data to image processing system 106. In Fig. 2, the higher frame rate image data is represented visually by a sequence of higher temporal density image frames provided by camera 104 (e.g., via mobile device 102) to image processing system 106. In some embodiments, camera 104 may employ maximum binning (relatively high frame rate at relatively low resolution) to capture the subject's heartrate and/or respiratory rate with increased accuracy. Some cameras may have a relatively low ceiling with regard to maximum frame rate, and so binning can be helpful to achieve a higher frame rate. In some implementations, the configuration provided to camera 104 by image processing system 106 may request uneven temporal spacing of image frames, e.g., with a higher temporal density of frame requests at moments in time when a heartbeat is predicted. From this sequence of high (temporal) density image frames, image processing system 106, e.g., by way of vital signs extraction module 110, may extract various vital signs, such as the subject's pulse (heartrate), respiratory rate, SpO2, etc.
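A minimal sketch of one way a pulse rate could be extracted from such a high-frame-rate burst is shown below: average the green channel over a skin region per frame, then take the dominant frequency in a plausible heart-rate band. Real camera-based vital sign extraction is considerably more involved; the function, band limits, and channel choice here are illustrative assumptions.

```python
# Illustrative pulse-rate estimate from a burst of ROI frames (rough sketch only).
import numpy as np

def estimate_pulse_bpm(frames: np.ndarray, fps: float) -> float:
    """frames: array of shape (n_frames, height, width, 3), RGB, cropped to skin."""
    signal = frames[..., 1].mean(axis=(1, 2))   # mean green value per frame
    signal = signal - signal.mean()             # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.5)      # roughly 42-210 beats per minute
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return float(peak_freq * 60.0)

# e.g., for ~10 seconds of heavily binned frames captured at 120 fps:
# bpm = estimate_pulse_bpm(burst_frames, fps=120.0)
```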
[0037] Now, suppose it is determined, e.g., by image processing system 106, that a particular region of interest (“ROI”) of the subject, such as the subject’s face (which may be the only portion of the subject not covered by clothing), should be captured. At step D of Fig. 2, image processing system 106 may transmit another configuration to camera 104 that identifies a particular region of interest to capture. For example, image processing system 106 may provide specific Cartesian coordinates, and/or a reference point (e.g., the subject’s eyes) that should be used as a point of focus. In various embodiments, camera 104 may pan, tilt, and/or zoom in on the requested region/reference feature and acquire additional images. At step E of Fig. 2, camera 104 may provide these ROI image frames to image processing system 106 for additional processing (e.g., vital sign acquisition). In some embodiments in which multiple subjects are depicted in an image frame, the ROI of the camera may be altered, e.g., by image processing system 106, to capture one subject (or a portion thereof) and exclude the other subject. In some cases, the ROI may then be switched to the other subject, so that vital sign(s) and/or other information may be captured from both subjects.
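By way of illustration only, the helper below turns a detected face bounding box into an ROI parameter with a margin, so that small subject or camera motion stays within the requested region; the face detection step itself and the ROI format are assumptions.

```python
# Hypothetical helper: expand a detected face box into an ROI for the camera.
def roi_from_face_box(x: int, y: int, w: int, h: int,
                      frame_w: int, frame_h: int, margin: float = 0.25):
    """Return (x, y, width, height), clamped to the frame, with a safety margin."""
    dx, dy = int(w * margin), int(h * margin)
    x0, y0 = max(0, x - dx), max(0, y - dy)
    x1, y1 = min(frame_w, x + w + dx), min(frame_h, y + h + dy)
    return (x0, y0, x1 - x0, y1 - y0)

print(roi_from_face_box(600, 200, 200, 240, frame_w=1920, frame_h=1080))
# (550, 140, 300, 360)
```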
[0038] Burst-mode and ROI capture are just two non-limiting examples of parameters that may be included in a configuration for camera 104. In some embodiments, additional parameters may include parameters that are selected to enable camera 104 to capture (or avoid capturing) a subject sneezing or coughing. For example, subjects may manifest various physical signs before coughing or sneezing, such as taking a deep breath, closing their eyes, covering their mouth/nose, etc. When these sneeze/cough precursors are detected by image processing system 106, it may send a configuration that transitions camera 104 to a burst-mode that captures the subject's sneeze/cough at higher resolution, and/or stops capturing image data for some predetermined amount of time until it is predicted that the subject's sneeze/cough will have completed. Additionally or alternatively, in some embodiments, image processing system 106 may provide camera 104 with a configuration that requests a relatively low resolution temperature readout (assuming camera 104 is capable of thermal imaging). Other parameters may include lighting and/or color settings, which may be deliberately and/or automatically altered based on the lighting of the environment. For example, a nighttime crisis or a crisis that occurs in a dark interior may require different lighting settings than a well-lit emergency department and/or daytime scenario. In some such scenarios, a light source (typically an LED "flash") of the camera or mobile device may be triggered in synchrony with the frame capture, thereby increasing battery life of the camera or mobile device compared to manually turning on the LED flash.
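The sketch below shows, purely as an illustration, how detected events of this kind might be mapped to configuration changes; the event labels and parameter values are hypothetical.

```python
# Hypothetical event-driven configuration selection (labels/values are illustrative).
def select_configuration(event: str) -> dict:
    if event == "sneeze_precursor":
        # Capture the event itself at a high frame rate and full resolution.
        return {"frame_rate": 120.0, "resolution": (1920, 1080)}
    if event == "low_light":
        # Fire the LED only during exposure rather than leaving it on.
        return {"lighting": "flash_synced"}
    if event == "thermal_check":
        # A coarse temperature readout does not need high spatial resolution.
        return {"color_mode": "thermal", "resolution": (160, 120)}
    return {}  # no configuration change needed
```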
[0039] Fig. 3 depicts an example method 300 for practicing selected aspects of the present disclosure, in accordance with various embodiments. For convenience, the operations of the flow chart are described with reference to a system that performs the operations. This system may include various components of various computer systems, including image processing system 106 and/or its various constituent modules. Moreover, while operations of method 300 are shown in a particular order, this is not meant to be limiting. One or more operations may be reordered, omitted or added.
[0040] At block 302, the system may obtain first image data generated by a mobile device (e.g., 102) equipped with a camera, such as camera 104. As noted above, in various embodiments, the camera 104 itself may comprise the mobile device 102 insofar as the onboard electronics are those that are typically associated with a digital camera. In some such embodiments, the digital camera may include a communication interface, such as a Wi-Fi or Bluetooth interface, that enables the digital camera to communicate with another computing device, such as a smart phone or tablet carried by medical personnel. In other embodiments, camera 104 may be integral with a device such as a smart phone, smart watch, and/or tablet such that collectively, camera 104 and that device comprise mobile device 102. In various embodiments, the camera may be mounted to the medical personnel's body, e.g., worn as smart glasses, strapped to their head/chest/torso (e.g., a bodycam), etc. Image data captured by the camera, which depicts a subject (e.g., being interacted with by medical personnel), may be provided to image processing system 106.
[0041] At block 304, the system, e.g., by way of image processing system 106, may analyze the received first image data. This analysis may include the various types of facial recognition described above, which may be employed to determine an identity of the subject depicted in the image data. Additionally or alternatively, the analysis may include image processing that is performed to extract one or more vital signs of the depicted subject.
[0042] Based on the analyzing, at block 306, the system may select at least one configuration for the camera. For example, image processing system 106 may determine that a particular ROI of the image data obtained at block 302 depicts a portion of interest, and may request that the camera pan, tilt, and/or zoom in on this area (or perform post-processing that crops unneeded areas of the image data). Additionally or alternatively, based on the analysis at block 304, in some embodiments the system may predict, at block 308, a point or period in time at which the depicted subject will likely be undergoing maximum blushing. It is at this point/period in time that heartrate, respiratory rate, and/or other vital signs can be obtained with optimized accuracy. At block 310, the system may select, as the configuration to be provided to the camera, an instruction to perform burst-mode capture during the at least one point/period in time predicted at block 308.
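As a rough sketch of the prediction in block 308, upcoming peak-blushing times can be extrapolated from the timing of recently observed pulse peaks, assuming an approximately steady heart rate over the prediction horizon; the peak-picking step that produces those timestamps is assumed.

```python
# Illustrative forecast of upcoming pulse peaks so burst-mode capture can be
# scheduled around them; assumes a roughly constant beat-to-beat interval.
import numpy as np

def predict_peak_times(peak_times: np.ndarray, horizon: int = 5) -> np.ndarray:
    """peak_times: timestamps (seconds) of recently observed pulse peaks.
    Returns the next `horizon` predicted peak times."""
    mean_interval = float(np.mean(np.diff(peak_times)))
    return float(peak_times[-1]) + mean_interval * np.arange(1, horizon + 1)

# Peaks observed at t = 0.0, 0.8, 1.6 s imply the next peaks near 2.4, 3.2, 4.0 s.
print(predict_peak_times(np.array([0.0, 0.8, 1.6]), horizon=3))  # [2.4 3.2 4. ]
```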
[0043] At block 312, the system may provide the mobile device with data indicative of the selected at least one configuration. For example, image processing system 106 may transmit, to mobile device 102, data indicative of one or more settings and/or parameters to be implemented by camera 104. In some embodiments, the settings and/or parameters may be accompanied by temporal data that indicates when the settings/parameters are to be implemented, such as the burst-mode being used during a point in time at which the subject is most likely to be blushing.
[0044] At block 314, the system may obtain second image data generated by the mobile device after provision of the data indicative of the selected at least one configuration. In various embodiments, the second image data may capture the same subject but may have been captured pursuant to the configuration provided to the mobile device at block 312. At block 316, the system, e.g., by way of vital sign extraction module 110, may process the second image data to extract at least one vital sign of the subject.
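Taken together, blocks 302-316 amount to the following high-level loop; every function in this sketch stands in for a component described above and is hypothetical rather than an actual API of the disclosed system.

```python
# High-level sketch of method 300 (blocks 302-316); all callables are stand-ins.
def run_method_300(mobile_device, image_processing_system):
    first_images = mobile_device.capture()                             # block 302
    analysis = image_processing_system.analyze(first_images)           # block 304
    config = image_processing_system.select_configuration(analysis)    # blocks 306-310
    mobile_device.apply_configuration(config)                          # block 312
    second_images = mobile_device.capture()                            # block 314
    return image_processing_system.extract_vital_signs(second_images)  # block 316
```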
[0045] Fig. 4 is a block diagram of an example computer system 410. Computer system 410 typically includes at least one processor 414 which communicates with a number of peripheral devices via bus subsystem 412. These peripheral devices may include a storage subsystem 424, including, for example, a memory subsystem 425 and a file storage subsystem 426, user interface output devices 420, user interface input devices 422, and a network interface subsystem 416. The input and output devices allow user interaction with computer system 410. Network interface subsystem 416 provides an interface to outside networks and is coupled to corresponding interface devices in other computer systems.
[0046] User interface input devices 422 may include a keyboard, pointing devices such as a mouse, trackball, touchpad, or graphics tablet, a scanner, a touchscreen incorporated into the display, audio input devices such as voice recognition systems, microphones, and/or other types of input devices. In general, use of the term "input device" is intended to include all possible types of devices and ways to input information into computer system 410 or onto a communication network.
[0047] User interface output devices 420 may include a display subsystem, a printer, a fax machine, or non-visual displays such as audio output devices. The display subsystem may include a cathode ray tube (CRT), a flat-panel device such as a liquid crystal display (LCD), a projection device, or some other mechanism for creating a visible image. The display subsystem may also provide non-visual display such as via audio output devices. In general, use of the term "output device" is intended to include all possible types of devices and ways to output information from computer system 410 to the subject or to another machine or computer system.
[0048] Storage subsystem 424 stores programming and data constructs that provide the functionality of some or all of the modules/engines described herein. For example, the storage subsystem 424 may include the logic to perform selected aspects of method 300, and/or to implement one or more components depicted in the various figures. Memory 425 used in the storage subsystem 424 can include a number of memories including a main random access memory (RAM) 430 for storage of instructions and data during program execution and a read only memory (ROM) 432 in which fixed instructions are stored. A file storage subsystem 426 can provide persistent storage for program and data files, and may include a solid state drive, a hard disk drive, a CD-ROM drive, an optical drive, or removable media cartridges. Modules implementing the functionality of certain implementations may be stored by file storage subsystem 426 in the storage subsystem 424, or in other machines accessible by the processor(s) 414.
[0049] Bus subsystem 412 provides a mechanism for letting the various components and subsystems of computer system 410 communicate with each other as intended. Although bus subsystem 412 is shown schematically as a single bus, alternative implementations of the bus subsystem may use multiple busses.
[0050] Computer system 410 can be of varying types including a workstation, server, computing cluster, blade server, server farm, smart phone, smart watch, smart glasses, set top box, tablet computer, laptop, or any other data processing system or computing device. Due to the ever-changing nature of computers and networks, the description of computer system 410 depicted in Fig. 4 is intended only as a specific example for purposes of illustrating some implementations. Many other configurations of computer system 410 are possible having more or fewer components than the computer system depicted in Fig. 4.
[0051] While several inventive embodiments have been described and illustrated herein, those of ordinary skill in the art will readily envision a variety of other means and/or structures for performing the function and/or obtaining the results and/or one or more of the advantages described herein, and each of such variations and/or modifications is deemed to be within the scope of the inventive embodiments described herein. More generally, those skilled in the art will readily appreciate that all parameters, dimensions, materials, and configurations described herein are meant to be exemplary and that the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the inventive teachings is/are used. Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, many equivalents to the specific inventive embodiments described herein. It is, therefore, to be understood that the foregoing embodiments are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, inventive embodiments may be practiced otherwise than as specifically described and claimed. Inventive embodiments of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein. In addition, any combination of two or more such features, systems, articles, materials, kits, and/or methods, if such features, systems, articles, materials, kits, and/or methods are not mutually inconsistent, is included within the inventive scope of the present disclosure.
[0052] All definitions, as defined and used herein, should be understood to control over dictionary definitions, definitions in documents incorporated by reference, and/or ordinary meanings of the defined terms.
[0053] The indefinite articles "a" and "an," as used herein in the specification and in the claims, unless clearly indicated to the contrary, should be understood to mean "at least one."
[0054] The phrase "and/or," as used herein in the specification and in the claims, should be understood to mean "either or both" of the elements so conjoined, i.e., elements that are conjunctively present in some cases and disjunctively present in other cases. Multiple elements listed with "and/or" should be construed in the same fashion, i.e., "one or more" of the elements so conjoined. Other elements may optionally be present other than the elements specifically identified by the "and/or" clause, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, a reference to "A and/or B", when used in conjunction with open-ended language such as "comprising," can refer, in one embodiment, to A only (optionally including elements other than B); in another embodiment, to B only (optionally including elements other than A); in yet another embodiment, to both A and B (optionally including other elements); etc.
[0055] As used herein in the specification and in the claims, "or" should be understood to have the same meaning as "and/or" as defined above. For example, when separating items in a list, "or" or "and/or" shall be interpreted as being inclusive, i.e., the inclusion of at least one, but also including more than one, of a number or list of elements, and, optionally, additional unlisted items. Only terms clearly indicated to the contrary, such as "only one of" or "exactly one of," or, when used in the claims, "consisting of," will refer to the inclusion of exactly one element of a number or list of elements. In general, the term "or" as used herein shall only be interpreted as indicating exclusive alternatives (i.e., "one or the other but not both") when preceded by terms of exclusivity, such as "either," "one of," "only one of," or "exactly one of." "Consisting essentially of," when used in the claims, shall have its ordinary meaning as used in the field of patent law.
[0056] As used herein in the specification and in the claims, the phrase "at least one," in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements. This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase "at least one" refers, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, "at least one of A and B" (or, equivalently, "at least one of A or B," or, equivalently, "at least one of A and/or B") can refer, in one embodiment, to at least one, optionally including more than one, A, with no B present (and optionally including elements other than B); in another embodiment, to at least one, optionally including more than one, B, with no A present (and optionally including elements other than A); in yet another embodiment, to at least one, optionally including more than one, A, and at least one, optionally including more than one, B (and optionally including other elements); etc.
[0057] It should also be understood that, unless clearly indicated to the contrary, in any methods claimed herein that include more than one step or act, the order of the steps or acts of the method is not necessarily limited to the order in which the steps or acts of the method are recited.
[0058] In the claims, as well as in the specification above, all transitional phrases such as "comprising," "including," "carrying," "having," "containing," "involving," "holding," "composed of," and the like are to be understood to be open-ended, i.e., to mean including but not limited to. Only the transitional phrases "consisting of" and "consisting essentially of" shall be closed or semi-closed transitional phrases, respectively, as set forth in the United States Patent Office Manual of Patent Examining Procedures, Section 2111.03. It should be understood that certain expressions and reference signs used in the claims pursuant to Rule 6.2(b) of the Patent Cooperation Treaty ("PCT") do not limit the scope.

Claims

CLAIMS
What is claimed is:
1. A method implemented using one or more processors, comprising:
obtaining (302) first image data generated by a mobile device (102) equipped with a camera (104);
analyzing (304) the received first image data;
selecting (306), based on the analyzing, at least one configuration for the camera;
providing (312) the mobile device with data indicative of the selected at least one configuration;
obtaining (314) second image data generated by the mobile device after provision of the data indicative of the selected at least one configuration, wherein the second image data captures a subject; and
processing (316) at least the second image data to extract at least one vital sign of the subject.
2. The method of claim 1, wherein the at least one configuration for the camera comprises a color setting that is selected based on the analyzing.
3. The method of claim 1, wherein the at least one configuration for the camera comprises a lighting setting that is selected based on the analyzing.
4. The method of claim 1, wherein the at least one configuration for the camera comprises a region of interest within a field of view of the camera that is selected based on the analyzing.
5. The method of claim 1, wherein the at least one configuration for the camera comprises a frame rate that is selected based on the analyzing.
6. The method of claim 5, wherein the selected frame rate comprises an elevated frame rate of at least approximately sixty frames per second.
7. The method of claim 6, wherein the elevated frame rate causes the camera to utilize binning.
8. The method of claim 1, wherein the first image data also captures the subject.
9. The method of claim 8, wherein selecting the at least one configuration comprises:
predicting (308) at least a point in time associated with peak blushing of the subject; and selecting (310), as the at least one configuration, an instruction for the camera to perform burst-mode capture during the at least one point in time.
10. A system comprising one or more processors and memory operably coupled with the one or more processors, wherein the memory stores instructions that, in response to execution of the instructions by one or more processors, cause the one or more processors to perform the following operations:
obtaining (302) first image data generated by a mobile device (102) equipped with a camera (104);
analyzing (304) the received first image data;
selecting (306), based on the analyzing, at least one configuration for the camera;
providing (312) the mobile device with data indicative of the selected at least one configuration;
obtaining (314) second image data generated by the mobile device after provision of the data indicative of the selected at least one configuration, wherein the second image data captures a subject; and
processing (316) at least the second image data to extract at least one vital sign of the subject.
11. The system of claim 10, wherein the at least one configuration for the camera comprises a color setting that is selected based on the analyzing.
12. The system of claim 10, wherein the at least one configuration for the camera comprises a lighting setting that is selected based on the analyzing.
13. The system of claim 10, wherein the at least one configuration for the camera comprises a region of interest within a field of view of the camera that is selected based on the analyzing.
14. The system of claim 10, wherein the at least one configuration for the camera comprises a frame rate that is selected based on the analyzing.
15. The system of claim 14, wherein the selected frame rate comprises an elevated frame rate of at least approximately sixty frames per second.
16. The system of claim 15, wherein the elevated frame rate causes the camera to utilize binning.
17. The system of claim 10, wherein the first image data also captures the subject.
18. The system of claim 17, wherein selecting the at least one configuration comprises:
predicting (308) at least a point in time associated with peak blushing of the subject; and selecting (310), as the at least one configuration, an instruction for the camera to perform burst-mode capture during the at least one point in time.
19. At least one non-transitory computer-readable medium comprising instructions that, in response to execution of the instructions by one or more processors, cause the one or more processors to perform the following operations:
obtaining (302) first image data generated by a mobile device (102) equipped with a camera (104);
analyzing (304) the received first image data;
selecting (306), based on the analyzing, at least one configuration for the camera;
providing (312) the mobile device with data indicative of the selected at least one configuration;
obtaining (314) second image data generated by the mobile device after provision of the data indicative of the selected at least one configuration, wherein the second image data captures a subject; and
processing (316) at least the second image data to extract at least one vital sign of the subject.
20. The at least one non-transitory computer-readable medium of claim 19, wherein the at least one configuration for the camera comprises a color setting that is selected based on the analyzing.
PCT/EP2019/050282 2018-01-10 2019-01-08 Capturing subject data in dynamic environments using mobile cameras WO2019137886A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862615566P 2018-01-10 2018-01-10
US62/615,566 2018-01-10

Publications (1)

Publication Number Publication Date
WO2019137886A1 true WO2019137886A1 (en) 2019-07-18

Family

ID=65010791

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2019/050282 WO2019137886A1 (en) 2018-01-10 2019-01-08 Capturing subject data in dynamic environments using mobile cameras

Country Status (1)

Country Link
WO (1) WO2019137886A1 (en)


Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140139656A1 (en) 2011-08-01 2014-05-22 Koninklijke Philips N.V. Device and method for obtaining and processing measurement readings of a living being
US20140192177A1 (en) 2011-09-02 2014-07-10 Koninklijke Philips N.V. Camera for generating a biometrical signal of a living being
WO2013156908A1 (en) * 2012-04-17 2013-10-24 Koninklijke Philips N.V. Device and method for obtaining vital sign information of a living being
US20140148663A1 (en) 2012-11-23 2014-05-29 Koninklijke Philips Electronics N.V. Device and method for extracting physiological information
US20140235976A1 (en) 2013-02-15 2014-08-21 Koninklijke Philips N. V. System and method for determining a vital sign of a subject
US20140253709A1 (en) 2013-03-06 2014-09-11 Koninklijke Philips N.V. System and method for determining vital sign information
US9125606B2 (en) 2013-03-13 2015-09-08 Koninklijke Philips N.V. Device and method for determining the blood oxygen saturation of a subject
US20160317041A1 (en) * 2013-12-19 2016-11-03 The Board Of Trustees Of The University Of Illinois System and methods for measuring physiological parameters
US20170238805A1 (en) * 2016-02-19 2017-08-24 Covidien Lp Systems and methods for video-based monitoring of vital signs
WO2018083074A1 (en) * 2016-11-03 2018-05-11 Koninklijke Philips N.V. Automatic pan-tilt-zoom adjustment to improve vital sign acquisition

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102019215307A1 (en) * 2019-10-07 2021-04-08 Volkswagen Aktiengesellschaft System for the detection and therapy support of Parkinson's disease in a vehicle occupant for use in a vehicle

Similar Documents

Publication Publication Date Title
US20220036055A1 (en) Person identification systems and methods
US10832035B2 (en) Subject identification systems and methods
US11295150B2 (en) Subject identification systems and methods
CN111344713B (en) Camera and image calibration for object recognition
JP2023171650A (en) Systems and methods for identifying persons and/or identifying and quantifying pain, fatigue, mood and intent with protection of privacy
WO2019105218A1 (en) Recognition method and device for image feature, storage medium and electronic device
US10997397B2 (en) Patient identification systems and methods
Zhou et al. Tackling mental health by integrating unobtrusive multimodal sensing
CN110458101B (en) Criminal personnel sign monitoring method and equipment based on combination of video and equipment
CN111149104B (en) Apparatus, method and computer readable storage medium for biometric identification
US20170004288A1 (en) Interactive and multimedia medical report system and method thereof
JP7299923B2 (en) Personal identification system and method
WO2019137886A1 (en) Capturing subject data in dynamic environments using mobile cameras
US20230238144A1 (en) Stroke examination system, stroke examination method, and recording medium
Gutstein et al. Optical flow, positioning, and eye coordination: automating the annotation of physician-patient interactions
Serbaya An Internet of Things (IoT) Based Image Process Screening to Prevent COVID-19 in Public Gatherings
Brenes-Vega et al. Design of Algorithms for People Segmentation Using in Scenes for Non-Invasive Monitoring of Vital Signs Through Video

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19700218

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19700218

Country of ref document: EP

Kind code of ref document: A1