WO2021185872A1 - Risk assessment for a candidate medical condition - Google Patents


Info

Publication number
WO2021185872A1
Authority
WO
WIPO (PCT)
Prior art keywords
person
data
image data
communication device
mobile communication
Application number
PCT/EP2021/056737
Other languages
French (fr)
Inventor
Ajintha PATHMANATHAN
Ashwin JAINARAYANAN
Original Assignee
Cliniq Inc.
Virusiq Ltd
Application filed by Cliniq Inc., Virusiq Ltd filed Critical Cliniq Inc.
Publication of WO2021185872A1 publication Critical patent/WO2021185872A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/0059Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B5/0077Devices for viewing the surface of the body, e.g. camera, magnifying lens
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/01Measuring temperature of body parts ; Diagnostic temperature sensing, e.g. for malignant or inflamed tissue
    • A61B5/015By temperature mapping of body part
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7271Specific aspects of physiological measurement analysis
    • A61B5/7275Determining trends in physiological measurement data; Predicting development of a medical condition based on physiological measurements, e.g. determining a risk factor
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/0059Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B5/0075Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence by spectroscopy, i.e. measuring spectra, e.g. Raman spectroscopy, infrared absorption spectroscopy
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Definitions

  • the present invention relates to the assessment of risk of a person having a candidate medical condition, including but not limited to when there is an epidemic or pandemic of an infectious disease.
  • the invention has particular, but not exclusive, relevance for risk assessment during large-scale infectious disease events such as the COVID-19 coronavirus pandemic of 2020.
  • Infectious disease epidemics or pandemics, such as seasonal influenza or the COVID-19 pandemic of 2020, result in a sharp increase in demand for resources such as medical supplies, space in hospitals, and the time of medical professionals.
  • attending a screening or medical facility such as a doctor’s surgery or hospital for diagnosis and/or treatment can increase the risk of the infectious disease spreading.
  • risk factors can be assessed without the involvement of such professionals, for example by means of a patient answering questions relating to his or her recent activities and any perceived symptoms.
  • Such data can be collected without the person needing to attend a medical facility.
  • a person’s temperature as measured for example using a mouth thermometer or an ear thermometer, can be a useful indicator of whether that person might have an infectious disease. Temperature can also be raised due to physiological hormonal changes or pathological inflammatory processes.
  • a computer-implemented method comprising: obtaining, from a camera of a mobile communication device, image data representing a portion of a person’s body, at least part of the image data being dependent on infrared radiation emitted by the portion of the person’s body; processing the at least part of the image data to determine temperature data indicative of a temperature of the portion of the person’s body; processing the determined temperature data to determine risk data indicative of whether the person is symptomatic with a candidate medical condition; and outputting, via a user interface, information depending on the determined risk data.
  • a mobile communication device comprising: a camera arranged to capture image data representing a portion of a person’s body, the camera comprising an imaging system arranged to focus incident radiation on an array of detector elements configured to generate the image data, the imaging system being configured such that the generated image data includes a component dependent on infrared radiation within the incident radiation; a user interface; and a software application.
  • the software application is arranged to cause the mobile communication device to process at least part of the image data, at least part of the image data including the component dependent on infrared radiation, to determine temperature data indicative of a temperature of the person’s body.
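The two aspects described above can be sketched as a small processing pipeline. The following Python sketch is purely illustrative: the function names, the linear intensity-to-temperature mapping, and the fever threshold are assumptions for the example, not values or methods disclosed by the application.

```python
def estimate_temperature(red_channel_mean: float) -> float:
    """Map the mean intensity of the infrared-dependent image component
    (assumed normalized to [0, 1]) to an indicative skin temperature in
    degrees Celsius. The linear mapping is a placeholder, not a
    calibrated model."""
    return 30.0 + 10.0 * red_channel_mean

def risk_from_temperature(temp_c: float, threshold_c: float = 37.8) -> str:
    """Very simple rule: flag a possible fever above an assumed threshold."""
    return "high" if temp_c >= threshold_c else "low"

def assess(red_channel_mean: float) -> str:
    """Run both stages: image-derived temperature, then risk data."""
    temp = estimate_temperature(red_channel_mean)
    return risk_from_temperature(temp)
```

A real implementation would replace the first function with a trained model, as described in the detailed description below.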
  • Figure 1 shows schematically a system for determining a risk of a person being symptomatic with a candidate medical condition in accordance with examples
  • Figure 2 shows components of a camera of a mobile communication device arranged to capture image data in accordance with examples
  • Figure 3 is a flow diagram showing a method of determining a risk of a person being symptomatic with a candidate medical condition in accordance with examples.
  • Figure 1 shows an example of a system arranged to perform methods in accordance with the present disclosure.
  • the system includes a mobile communication device 102 and a data processing system 104.
  • the mobile communication device 102 and the data processing system 104 are arranged to communicate with each other via the internet, either using a cellular data protocol via an access network and packet switched network, and/or using a wireless communication protocol such as Wi-Fi.
  • the data processing system 104 is a cloud-based system hosted by one or more servers, and the mobile communication device 102 is a smartphone.
  • a smartphone is a mobile telephone that, in addition to being arranged to perform conventional audio communications, has processing circuitry capable of executing downloaded software applications, commonly referred to as apps.
  • the methods described herein could be performed using a different type of mobile or fixed communication device, for example a laptop computer, a desktop computer or a tablet computer.
  • the data processing system 104 is shown as being remote from the mobile communication device 102, in other examples some or all of the functions of the data processing system 104 are instead performed locally at a communication device.
  • the mobile communication device 102 includes a camera 106 and a microphone 108.
  • the mobile communication device 102 includes various other components not shown in Figure 1, for example an aerial, a power supply, processing circuitry, memory circuitry, and a user interface.
  • the user interface in this example includes a touch-screen display.
  • the mobile communication device 102 is provisioned with an app capable of controlling the camera 106, the microphone 108, and the user interface when performing methods as described hereafter.
  • the data processing system 104 is arranged to process data received from the mobile communication device 102, including image data captured using the camera 106, and optionally including further data such as audio data recorded using the microphone 108 and personal data obtained via the user interface of the mobile communication device 102.
  • the data processing system 104 includes memory (not shown) for storing the received data along with algorithm data required for performing various computer-implemented methods including a temperature determination method and a risk determination method. As will be explained in more detail hereafter, the data processing system 104 is arranged to determine temperature data from an image of a person 110 captured using the camera 106 of the mobile communication device 102.
  • FIG. 2 shows components of the camera 106 of the mobile communication device 102.
  • the camera includes an imaging system 202 which directs incident radiation onto a detector array 204, which is a charge-coupled device arranged to convert the incident radiation into electrical signals.
  • the imaging system 202 in this example includes a lens 206 and a filter 208.
  • the filter 208 has the purpose of filtering out electromagnetic radiation with wavelengths outside of the visible part of the electromagnetic spectrum, in particular infrared radiation. Detector arrays commonly used in cameras are able to detect radiation over a range of wavelengths extending beyond the upper wavelength limit of visible light.
  • the wavelength range for visible light is approximately 400nm to 700nm, and detector arrays are typically able to detect radiation in a range of approximately 400nm to 1100nm. It is known that infrared radiation within the range of 700nm to 1100nm can have an adverse effect on image quality, and therefore cameras generally include one or more filters to filter out such radiation. In high-specification cameras, the filter or filters included to filter out infrared radiation are generally of a high quality and have a high thickness, and as a result are very effective at filtering out infrared radiation.
  • the filter 208 reduces the intensity of infrared radiation reaching the detector array 204, without having any significant effect on the intensity of visible light reaching the detector array 204.
  • the intensity of infrared radiation relative to the intensity of visible light is thus reduced, but a certain proportion of the infrared radiation still reaches the detector array 204.
  • Infrared radiation emitted by an object depends on the temperature of that object, and therefore contains information relevant to the determination of the temperature of that object.
  • although in this example the camera 106 includes a filter, in other examples a camera of a mobile communication device may not include a filter.
  • the methods described herein are applicable in any case where an imaging system of a camera is configured such that image data generated by the camera includes a component dependent on infrared radiation.
  • the camera 106 further includes a digitizer 210, which converts the output of the detector array 204 to a digital image.
  • the digital image in this example is encoded using a red green blue (RGB) color model, though in other examples a digital image could use a different color model, for example cyan, magenta, yellow, key (CMYK) or YUV.
  • the infrared radiation which reaches the detector array 204 contributes to the red component of the resultant digital image.
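The idea that the red component carries the infrared contribution can be illustrated with a toy array computation. The image values below are invented; the ratio of the red channel to the purely visible channels is one plausible infrared-sensitive signal, not a method specified by the application.

```python
import numpy as np

# Hypothetical 4x4 RGB image, values in [0, 1], layout H x W x 3.
rgb = np.zeros((4, 4, 3))
rgb[..., 0] = 0.8   # red channel, which includes the infrared contribution
rgb[..., 1] = 0.3   # green
rgb[..., 2] = 0.3   # blue

red = rgb[..., 0]
visible_proxy = rgb[..., 1:].mean(axis=-1)   # green/blue carry no IR here

# Ratio of the IR-dependent channel to the purely visible channels;
# the epsilon guards against division by zero in dark pixels.
ir_ratio = red / np.maximum(visible_proxy, 1e-6)
```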
  • Figure 3 shows an example of a method performed by the system of Figure 1 for determining whether the person 110 is symptomatic with a candidate medical condition.
  • the method is used to determine whether the person 110 is symptomatic with the COVID-19 coronavirus, as opposed to another disease or an unrelated physiological process.
  • the app on the mobile communication device 102 prompts, at 302, a user of the mobile communication device 102 to point the camera 106 towards the face of the person 110.
  • the prompt is made via the user interface of the mobile communication device 102, and could include, for example, a visual prompt on the screen of the mobile communication device 102, and/or could include an audio prompt through a speaker of the mobile communication device 102.
  • the app further provides information to the user on how to obtain a suitable photograph, for example including information on lighting conditions, how to position and orient the camera 106, and so on.
  • the app prompts the user to take a photograph of the person 110 in darkness, ensuring that any image captured by the camera 106 is indicative of a relatively high ratio of infrared radiation to visible light.
  • the app activates, at 304, the camera 106 of the mobile communication device 102.
  • the app activates the camera 106 automatically when a predetermined amount of time has elapsed after the prompt. In other examples, the activating of the camera is dependent on user input.
  • the mobile communication device 102 captures, at 306, an image of the face of the person 110, using the camera 106.
  • the image is encoded using the RGB color model, and at least the red component of the image is dependent on infrared radiation incident on the camera 106.
  • the mobile communication device 102 sends, at 308, the image to the data processing system 104, along with additional data for processing alongside the image data.
  • the additional data sent to the data processing system 104 includes personal data indicative of answers to one or more predetermined questions relating to the person 110, for example questions regarding symptoms perceived by the person 110, places recently visited by the person, and questions relevant to genetic susceptibility, for example characteristics such as race, eye color, and so on.
  • the app on the mobile communication device 102 is arranged to request answers to these questions via the user interface of the mobile communication device 102.
  • although in the present example the camera 106 is activated by the app on the mobile communication device 102, the app could alternatively be given access to images stored on the mobile communication device 102 that were previously captured using the camera 106.
  • the data processing system 104 receives the image data and the additional data at 310, and processes, at 312, the image data to determine temperature data indicative of a temperature of the face of the person 110.
  • all three color components of the image are first processed using face detection/image segmentation to generate masked image data consisting of a portion of the image containing the face, with surrounding pixel values set to a fixed value, for example zero.
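The masking step described above can be sketched in a few lines. The image and mask here are synthetic; in the described system the mask would come from a face detection/segmentation network.

```python
import numpy as np

# Toy 6x6x3 image and a binary face mask (True inside the detected face).
image = np.random.default_rng(0).random((6, 6, 3))
mask = np.zeros((6, 6), dtype=bool)
mask[2:5, 2:5] = True   # pretend the segmentation network found a face here

# Set all pixels outside the face region to a fixed value (zero),
# producing the masked image data described above.
masked = image * mask[..., None]
```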
  • image segmentation is performed using a deep CNN architecture such as that described in the article "Mask R-CNN" by Kaiming He et al., arXiv:1703.06870, trained to recognize human faces.
  • the masked image data is processed using a further CNN to determine the temperature data.
  • all three color components of the masked image data are processed using the further CNN.
  • only the red component of the masked image data is processed by the CNN.
  • the red component includes a contribution from the infrared part of the spectrum, and is therefore directly relevant for the determination of the temperature of the face.
  • the contribution of the infrared radiation is combined with a contribution from red visible light, and it is not possible to isolate the infrared contribution for the purpose of determining temperature.
  • the temperature data must be inferred from a signature in the image data, depending on, for example, the variation of the intensity of the red component across the face of the person 110.
  • the visible components of the masked image data can provide contextual information relevant to the determination of temperature.
  • the variation of the ratio of the red component of the image data to other components of the image data may be useful for determining temperature.
  • the CNN is trained to extract the relevant information from the masked image data, or part of the masked image data, as will be described in more detail hereafter.
  • the CNN includes multiple convolutional layers and pooling layers, followed by several fully connected layers arranged to output the temperature data.
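The convolutional/pooling/fully-connected structure described above can be illustrated with a minimal single-channel forward pass in plain NumPy. The kernel, weights, and the 36.0 offset are arbitrary placeholders, not a trained model.

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D correlation of a single-channel image x with kernel k."""
    kh, kw = k.shape
    h, w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

def relu(x):
    return np.maximum(x, 0)

rng = np.random.default_rng(1)
image = rng.random((8, 8))            # e.g. the red channel of the masked image
kernel = rng.standard_normal((3, 3))  # one untrained convolutional filter

features = max_pool(relu(conv2d(image, kernel)))    # 3x3 feature map
w_fc = rng.standard_normal(features.size)           # "fully connected" weights
temperature = 36.0 + float(features.ravel() @ w_fc) # scalar temperature output
```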
  • Other network architectures are possible without departing from the scope of the invention.
  • alternative models can be used for determining temperature data, for example support vector machines, logistic regression models, and so on.
  • prior to the system being deployed, the chosen model is trained using supervised learning with labelled training data including images of faces labelled with known temperatures measured via one or more alternative methods, such as with a thermometer or an infrared camera.
  • the training may be performed using any suitable techniques, including for example transfer learning and/or augmentation of training data by translating or rotating the training images such that the trained model is insensitive to the location and orientation of faces.
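The augmentation by translation and rotation mentioned above can be sketched as follows; the 90-degree rotations and one-pixel shifts are one simple choice of augmentation, assumed for illustration.

```python
import numpy as np

def augment(image):
    """Generate rotated and translated copies of a training image so that
    the trained model is less sensitive to face location and orientation."""
    copies = [np.rot90(image, k) for k in range(4)]              # 0/90/180/270 deg
    copies += [np.roll(image, shift, axis=(0, 1))
               for shift in [(1, 0), (0, 1), (-1, 0), (0, -1)]]  # 1-pixel shifts
    return copies

image = np.arange(16.0).reshape(4, 4)
augmented = augment(image)
```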
  • the method of extracting masked image data prior to processing by the further CNN has the advantage that portions of the image not containing the person’s face, which may be extraneous to the determination of temperature data, are excluded from the processing by the further CNN.
  • masked image data is not used.
  • an image is processed using image segmentation or face detection, and the output of the image segmentation or face detection is processed together with a portion of the image data dependent on infrared radiation, for example using a CNN.
  • there is no explicit segmentation/face detection stage and instead the image data, or one or more components of the image data, is processed using a single trained CNN to generate the temperature data.
  • the temperature data determined at 312 is indicative of an absolute or relative temperature of the face of the person 110. Such information may be a relevant indicator as to whether the person 110 is likely to be currently infected with a virus.
  • more detailed temperature data is determined, for example including a spatial temperature profile including an array of values each being indicative of a temperature at a location corresponding to one or more pixels of an image.
  • a spatial temperature profile of a person’s face may provide further information relevant to whether the person is symptomatic with a candidate medical condition, and/or may be used to discriminate between different conditions such as different infectious diseases.
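A spatial temperature profile of the kind described above can be summarized with simple array statistics. The per-pixel values and the 37.5 C cut-off below are illustrative assumptions.

```python
import numpy as np

# Hypothetical per-pixel temperature map (degrees C) over a 4x4 face crop.
profile = np.full((4, 4), 36.5)
profile[1:3, 1:3] = 38.2        # a warmer patch, e.g. around the forehead

mean_temp = float(profile.mean())
max_temp = float(profile.max())
hot_fraction = float((profile > 37.5).mean())   # share of pixels above 37.5 C
```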
  • multiple images are obtained over a period of time, for example one or more images each day for several days, and the determined temperature data is indicative of a temperature or spatial temperature profile for each image.
  • the temperature data is indicative of a variation of temperature with time, from which a temperature trend can be ascertained.
  • the temperature trend may be useful for example in distinguishing an onset of an infectious disease from other sources of temperature variation, such as physiological hormonal changes or pathological inflammatory processes.
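One simple way to ascertain such a temperature trend is a linear fit over daily readings; the readings and the 0.2 C/day cut-off below are assumptions for the sketch.

```python
import numpy as np

days = np.array([0, 1, 2, 3, 4], dtype=float)
temps = np.array([36.6, 36.7, 37.1, 37.6, 38.1])   # illustrative daily readings

# np.polyfit returns coefficients highest degree first: [slope, intercept].
slope, intercept = np.polyfit(days, temps, deg=1)

# A clearly positive slope may indicate disease onset rather than
# normal day-to-day fluctuation; the threshold is an assumed cut-off.
rising = slope > 0.2
```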
  • the data processing system 104 processes, at 314, the determined temperature data, along with the additional data received at 310, to determine risk data indicative of whether the person 110 is symptomatic of the candidate medical condition.
  • the risk data is a single number indicative of how likely it is that the person 110 is symptomatic with COVID-19.
  • risk data is indicative of whether the person is symptomatic with a specific disease
  • risk data is indicative of whether the person is symptomatic with any of a broader class of medical conditions, for example viral infections, as opposed to another type of condition such as an unrelated inflammatory process.
  • risk data provides information as to which of a set of candidate medical conditions the person is likely to be symptomatic with.
  • the risk data may include multiple values corresponding to risks associated with different candidate conditions, such as different infectious diseases, physiological hormonal changes or pathological inflammatory processes.
  • the risk data is determined on the basis of predetermined rules. For example, if the personal data indicates that the person 110 has recently travelled to a region where a high number of cases of COVID-19 have been detected, and the temperature data indicates that the temperature of the person’s face is above a predetermined threshold value, the risk data may indicate that there is a high risk that the person 110 is currently infected with the virus. On the other hand, even with the same personal data, if the temperature data indicates that the temperature of the person’s face is below the threshold value, the risk data may indicate that there is a moderate or low risk that the person is infected with the virus.
  • the rules may be dependent on temperature trends as determined from multiple images captured over a period of time as described above.
  • the rules can be updated either manually by an operator of the data processing system 104, or automatically for example as further cases of infection are detected or further information about a disease is ascertained.
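The rule-based determination described above can be sketched directly; the threshold and the specific rules are assumptions modelled on the travel/temperature example, not values from the application.

```python
def risk_level(traveled_to_hotspot: bool, temp_c: float,
               fever_threshold_c: float = 37.8) -> str:
    """Illustrative predetermined rules combining personal data
    (recent travel) with image-derived temperature data."""
    fever = temp_c >= fever_threshold_c
    if traveled_to_hotspot and fever:
        return "high"
    if traveled_to_hotspot or fever:
        return "moderate"
    return "low"
```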
  • the processing of image data, personal data, and optionally additional data (see below) to determine risk data is performed using a machine learning model.
  • the personal data can be encoded as an array of numbers and provided to a neural network or other model, along with the temperature data, to determine the risk data.
  • a model can be trained using supervised learning with training data collected from patients determined to be symptomatic with different medical conditions.
  • Such a model can be trained to recognize different medical conditions, for example different infectious diseases, and to provide risk data indicative of the likelihood of a person having each of these conditions.
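Encoding the personal data as an array of numbers and feeding it to a model, as described above, might look like the following. The feature names, weights, and bias are placeholders; a deployed model would learn its parameters from labelled training data.

```python
import math

def encode(personal):
    """Encode answers to the predetermined questions as a numeric vector.
    The feature set is purely illustrative."""
    return [
        1.0 if personal["recent_travel_to_hotspot"] else 0.0,
        1.0 if personal["reports_cough"] else 0.0,
        personal["temperature_c"] - 37.0,     # centred temperature feature
    ]

def risk_score(features, weights=(1.2, 0.8, 1.5), bias=-1.0):
    """Logistic model mapping the feature vector to a risk in (0, 1)."""
    z = bias + sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

score = risk_score(encode({
    "recent_travel_to_hotspot": True,
    "reports_cough": True,
    "temperature_c": 38.4,
}))
```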
  • risk data is not a substitute for a medical diagnosis by a qualified professional, but may be useful in deciding whether or not a person should attend a medical facility, and/or whether further steps should be taken to mitigate risks to that person or others.
  • the data processing system 104 sends, at 316, the determined risk data to the mobile communication device 102.
  • the mobile communication device 102 outputs, at 320, information depending on the risk data via the user interface of the mobile communication device 102.
  • the information provides a recommendation as to whether the person 110 should seek medical attention, self-isolate, or perform any other steps to mitigate any danger to the person 110 or other people.
  • the data processing system 104 processes personal data and image data to determine risk data indicative of whether the person is symptomatic with a candidate medical condition.
  • the app on the mobile device 102 can further be configured to record audio data using the microphone 108 of the mobile device 102, representing a vocal output of the person, for example speech or a cough.
  • the app is configured to activate the microphone 108 and to prompt the person 110 to speak or cough in range of the microphone 108.
  • the recorded audio data is then provided to the data processing system 104 along with the personal data and the image data and used in determining the risk data at 314. Audio data representing a vocal output of a person may contain additional information relevant to the determination of whether the person 110 is symptomatic with a given medical condition.
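Simple acoustic features of the kind a model might draw on from such audio data can be computed directly from the waveform. The synthetic tone below stands in for recorded microphone samples, and the two features shown (RMS energy and zero-crossing count) are common generic audio features, not ones specified by the application.

```python
import math

# Synthetic 1 kHz tone sampled at 8 kHz for 0.1 s; a real app would use
# the recorded microphone samples of speech or a cough instead.
sample_rate = 8000
samples = [math.sin(2 * math.pi * 1000 * n / sample_rate) for n in range(800)]

# Root-mean-square energy of the signal.
rms = math.sqrt(sum(s * s for s in samples) / len(samples))

# Number of sign changes between consecutive samples.
zero_crossings = sum(
    1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0)
)
```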
  • the above embodiments are to be understood as illustrative examples of the invention. Further embodiments of the invention are envisaged.
  • the methods described herein could be performed using images of only a specific part of a person’s face, or any other suitable part of the person’s symptomatic anatomy.
  • other types of data are collected and processed in addition to, or as an alternative to, the data discussed above, for example genetic data collected using a genetic testing kit, which may be relevant to genetic susceptibility to certain medical conditions.
  • the genetic data may be collected by a service provider and stored in a server. The genetic data may then be obtained from the server by the data processing system arranged to determine the risk data, for example via an application programming interface (API).
  • further biometric data is collected such as heart rate data indicative of heart rate and/or heart rate variability.
  • Heart rate data can be collected, for example, manually by a person measuring his or her pulse, or using a wearable device such as a watch.
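Heart rate and heart rate variability can be derived from a series of beat-to-beat (RR) intervals, for example as exported from a wearable device. The interval values below are invented, and RMSSD is used here as one widely known variability metric.

```python
import math

# RR intervals in milliseconds, e.g. exported from a wearable device.
rr_ms = [810, 790, 805, 820, 800, 795]

# Mean heart rate in beats per minute.
heart_rate_bpm = 60000 / (sum(rr_ms) / len(rr_ms))

# RMSSD: root mean square of successive RR-interval differences,
# a common measure of heart rate variability.
diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
```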
  • one or more types of data are collected over a period of time, for example over a period of several days, and trends in data are used to determine risk data.
  • a computer-implemented diagnostic tool is provided.
  • the diagnostic tool is operable to process at least one of: image data representing a portion of a person’s body, at least part of the image data being dependent on infrared radiation emitted by the portion of the person’s body; audio data representing a vocal output of the person; and personal data indicative of answers to one or more predetermined questions, to determine risk data indicative of whether the person is symptomatic with a candidate medical condition.
  • the diagnostic tool is arranged to output information indicative of the determined risk data.
  • the diagnostic tool is provided as an app on a mobile communication device, for example a smartphone.
  • the diagnostic tool is implemented using a cloud-based system.
  • an app for a mobile communication device which is arranged to process image data captured using a camera of the mobile communication device and representing a part of a person’s body, the image data including a component dependent on infrared radiation, to determine temperature data indicative of a temperature of the person’s body.
  • the app is configured to output information in accordance with a predetermined temperature metric.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Pathology (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biophysics (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Physiology (AREA)
  • Psychiatry (AREA)
  • Signal Processing (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

A computer-implemented method includes: obtaining, from a camera of a mobile communication device, image data representing a portion of a person's body, at least part of the image data being dependent on infrared radiation emitted by the portion of the person's body; processing the at least part of the image data to determine temperature data indicative of a temperature of the portion of the person's body; processing the determined temperature data to determine risk data indicative of whether the person is symptomatic with a candidate medical condition; and outputting, via a user interface, information depending on the determined risk data.

Description

RISK ASSESSMENT FOR A CANDIDATE MEDICAL CONDITION
BACKGROUND OF THE INVENTION
Field of the Invention
[0001] The present invention relates to the assessment of risk of a person having a candidate medical condition, including but not limited to when there is an epidemic or pandemic of an infectious disease. The invention has particular, but not exclusive, relevance for risk assessment during large-scale infectious disease events such as the COVTD-19 coronavirus pandemic of 2020.
Description of the Related Technology
[0002] Infectious disease epidemics or pandemics, such as seasonal influenza or the COVTD-19 pandemic of 2020, result in a sharp increase in demand for resources such as medical supplies, space in hospitals, and the time of medical professionals. When a person has an infectious disease, attending a screening or medical facility such as a doctor’s surgery or hospital for diagnosis and/or treatment can increase the risk of the infectious disease spreading. Although diagnosing a patient as being infected by a specific infection usually requires blood tests which must be administered by a qualified professional, in some cases risk factors can be assessed without the involvement of such professionals, for example by means of a patient answering questions relating to his or her recent activities and any perceived symptoms. Such data can be collected without the person needing to attend a medical facility. It is well known that a person’s temperature, as measured for example using a mouth thermometer or an ear thermometer, can be a useful indicator of whether that person might have an infectious disease. Temperature can also be raised due to physiological hormonal changes or pathological inflammatory processes.
SUMMARY
[0003] According to a first aspect, there is provided a computer-implemented method comprising: obtaining, from a camera of a mobile communication device, image data representing a portion of a person’s body, at least part of the image data being dependent on infrared radiation emitted by the portion of the person’s body; processing the at least part of the image data to determine temperature data indicative of a temperature of the portion of the person’s body; processing the determined temperature data to determine risk data indicative of whether the person is symptomatic with a candidate medical condition; and outputting, via a user interface, information depending on the determined risk data.
[0004] According to a second aspect, there is provided a mobile communication device comprising: a camera arranged to capture image data representing a portion of a person’s body, the camera comprising an imaging system arranged to focus incident radiation on an array of detector elements configured to generate the image data, the imaging system being configured such that the generated image data includes a component dependent on infrared radiation within the incident radiation; a user interface; and a software application. The software application is arranged to cause the mobile communication device to process at least part of the image data, at least part of the image data including the component dependent on infrared radiation, to determine temperature data indicative of a temperature of the person’s body.
BRIEF DESCRIPTION OF THE DRAWINGS
[0005] Figure 1 shows schematically a system for determining a risk of a person being symptomatic with a candidate medical condition in accordance with examples;
[0006] Figure 2 shows components of a camera of a mobile communication device arranged to capture image data in accordance with examples;
[0007] Figure 3 is a flow diagram showing a method of determining a risk of a person being symptomatic with a candidate medical condition in accordance with examples.
DETAILED DESCRIPTION OF CERTAIN INVENTIVE EMBODIMENTS
[0008] Figure 1 shows an example of a system arranged to perform methods in accordance with the present disclosure. The system includes a mobile communication device 102 and a data processing system 104. The mobile communication device 102 and the data processing system 104 are arranged to communicate with each other via the internet, using a cellular data protocol via an access network and packet switched network, and/or using a wireless communication protocol such as Wi-Fi. In this example, the data processing system 104 is a cloud-based system hosted by one or more servers, and the mobile communication device 102 is a smartphone. A smartphone is a mobile telephone that, in addition to being arranged to perform conventional audio communications, has processing circuitry capable of executing downloaded software applications, commonly referred to as apps. In other examples, the methods described herein could be performed using a different type of mobile or fixed communication device, for example a laptop computer, a desktop computer or a tablet computer. Although in the present example the data processing system 104 is shown as being remote from the mobile communication device 102, in other examples some or all of the functions of the data processing system 104 are instead performed locally at a communication device.
[0009] The mobile communication device 102 includes a camera 106 and a microphone 108. The mobile communication device 102 includes various other components not shown in Figure 1, for example an aerial, a power supply, processing circuitry, memory circuitry, and a user interface. The user interface in this example includes a touch-screen display. The mobile communication device 102 is provisioned with an app capable of controlling the camera 106, the microphone 108, and the user interface when performing methods as described hereafter.
[0010] The data processing system 104 is arranged to process data received from the mobile communication device 102, including image data captured using the camera 106, and optionally including further data such as audio data recorded using the microphone 108 and personal data obtained via the user interface of the mobile communication device 102. The data processing system 104 includes memory (not shown) for storing the received data along with algorithm data required for performing various computer-implemented methods including a temperature determination method and a risk determination method. As will be explained in more detail hereafter, the data processing system 104 is arranged to determine temperature data from an image of a person 110 captured using the camera 106 of the mobile communication device 102.
[0011] Figure 2 shows components of the camera 106 of the mobile communication device 102. The camera includes an imaging system 202 which directs incident radiation onto a detector array 204, which is a charge-coupled device arranged to convert the incident radiation into electrical signals. The imaging system 202 in this example includes a lens 206 and a filter 208. The filter 208 has the purpose of filtering out electromagnetic radiation with wavelengths outside of the visible part of the electromagnetic spectrum, in particular infrared radiation. Detector arrays commonly used in cameras are able to detect radiation over a range of wavelengths extending beyond the upper wavelength limit of visible light. The wavelength range for visible light is approximately 400 nm to 700 nm, and detector arrays are typically able to detect radiation in a range of approximately 400 nm to 1100 nm. It is known that infrared radiation within the range of 700 nm to 1100 nm can have an adverse effect on image quality, and therefore cameras generally include one or more filters to filter out such radiation. In high-specification cameras, the filter or filters included to filter out infrared radiation are generally of a high quality and have a high thickness, and as a result are very effective at filtering out infrared radiation. On the other hand, cameras typically included in mobile communication devices such as smartphones, tablet computers, and laptop computers, are more cheaply produced and generally have lower quality and/or thinner filters, which are less effective at filtering out infrared radiation. As illustrated by the thickness of the arrows in Figure 2, the filter 208 reduces the intensity of infrared radiation reaching the detector array 204, without having any significant effect on the intensity of visible light reaching the detector array 204.
The intensity of infrared radiation relative to the intensity of visible light is thus reduced, but a certain proportion of the infrared radiation still reaches the detector array 204. The infrared radiation emitted by an object depends on the temperature of that object, and therefore contains information relevant to the determination of the temperature of that object. The presence of information relevant to the determination of temperature is a surprising consequence of the lower quality of filter used in cameras of mobile communication devices. It is noted that, although in the present example the camera 106 includes a filter, in other examples a camera of a mobile communication device may not include a filter. The methods described herein are applicable in any case where an imaging system of a camera is configured such that image data generated by the camera includes a component dependent on infrared radiation.
[0012] The camera 106 further includes a digitizer 210, which converts the output of the detector array 204 to a digital image. The digital image in this example is encoded using a red green blue (RGB) color model, though in other examples a digital image could use a different color model, for example cyan, magenta, yellow, key (CMYK) or YUV. In this example, the infrared radiation which reaches the detector array 204 contributes to the red component of the resultant digital image.
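As an illustration of the arrangement described above, the following minimal Python sketch shows how the red component, which carries the infrared-dependent contribution, could be separated from an RGB image. The nested-list pixel layout and all pixel values are illustrative assumptions, not any particular camera API:

```python
def red_channel(rgb_image):
    """Extract the red channel from an RGB image given as rows of (R, G, B) tuples.

    In the arrangement described above, infrared radiation reaching the
    detector array contributes to this red component.
    """
    return [[pixel[0] for pixel in row] for row in rgb_image]


# A tiny 2x2 example image (pixel values are illustrative only).
image = [[(200, 90, 80), (180, 85, 75)],
         [(190, 88, 78), (210, 95, 82)]]
print(red_channel(image))  # [[200, 180], [190, 210]]
```

In practice the red component would be one plane of the digitized image produced by the digitizer 210; the sketch simply makes the channel separation explicit.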
[0013] Figure 3 shows an example of a method performed by the system of Figure 1 for determining whether the person 110 is symptomatic with a candidate medical condition. In this example, the method is used to determine whether the person 110 is symptomatic with the COVID-19 coronavirus, as opposed to another disease or an unrelated physiological process. The app on the mobile communication device 102 prompts, at 302, a user of the mobile communication device 102 to point the camera 106 towards the face of the person 110. The prompt is made via the user interface of the mobile communication device 102, and could include, for example, a visual prompt on the screen of the mobile communication device 102, and/or could include an audio prompt through a speaker of the mobile communication device 102. The app further provides information to the user on how to obtain a suitable photograph, for example including information on lighting conditions, how to position and orient the camera 106, and so on. In one configuration, the app prompts the user to take a photograph of the person 110 in darkness, ensuring that any image captured by the camera 106 is indicative of a relatively high ratio of infrared radiation to visible light.
[0014] The app activates, at 304, the camera 106 of the mobile communication device 102. In the present example, the app activates the camera 106 automatically when a predetermined amount of time has elapsed after the prompt. In other examples, the activating of the camera is dependent on user input.
[0015] The mobile communication device 102 captures, at 306, an image of the face of the person 110, using the camera 106. As mentioned above, in the present example the image is encoded using the RGB color model, and at least the red component of the image is dependent on infrared radiation incident on the camera 106. The mobile communication device 102 sends, at 308, the image to the data processing system 104, along with additional data for processing alongside the image data. In the present example, the additional data sent to the data processing system 104 includes personal data indicative of answers to one or more predetermined questions relating to the person 110, for example questions regarding symptoms perceived by the person 110, places recently visited by the person, and questions relevant to genetic susceptibility, such as race, eye color, and so on. In this example, the app on the mobile communication device 102 is arranged to request answers to these questions via the user interface of the mobile communication device 102. Although in the present example the camera 106 is activated by the app on the mobile communication device 102, the app could alternatively be given access to images stored on the mobile communication device 102 that were previously captured using the camera 106.
[0016] The data processing system 104 receives the image data and the additional data at 310, and processes, at 312, the image data to determine temperature data indicative of a temperature of the face of the person 110. In the present example, all three color components of the image are first processed using face detection/image segmentation to generate masked image data consisting of a portion of the image containing the face, with surrounding pixel values set to a fixed value, for example zero. In the present example, image segmentation is performed using a deep CNN architecture such as that described in the article “Mask R-CNN” by Kaiming He et al., arXiv:1703.06870, trained to recognize human faces. Other methods for image segmentation are known and compatible with the present disclosure, for example methods using cascading classifiers based on descriptors such as Haar-like features or SURF descriptors. Such methods may involve lower memory and/or processing resources and may therefore be more suitable for cases where the face detection/segmentation is performed locally on a mobile communication device.
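A minimal sketch of the masking step described above, assuming a binary face mask has already been produced by a segmentation model (the mask, single-channel pixel values, and fill value are all illustrative assumptions):

```python
def apply_face_mask(image, mask, fill=0):
    """Return masked image data: pixels inside the face mask are kept and
    surrounding pixel values are set to a fixed value (here zero), as
    described for the segmentation stage above.

    `image` and `mask` are equally sized nested lists; `mask` holds booleans
    with True marking face pixels.
    """
    return [[pix if inside else fill
             for pix, inside in zip(img_row, mask_row)]
            for img_row, mask_row in zip(image, mask)]


# Illustrative 2x3 single-channel image and face mask.
image = [[10, 20, 30],
         [40, 50, 60]]
mask = [[False, True, True],
        [False, True, False]]
print(apply_face_mask(image, mask))  # [[0, 20, 30], [0, 50, 0]]
```

For an RGB image the same operation would simply be applied per color component.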
[0017] In the present example, the masked image data is processed using a further CNN to determine the temperature data. In the present example all three color components of the masked image data are processed using the further CNN. In another example, only the red component of the masked image data is processed by the CNN. As explained above, the red component includes a contribution from the infrared part of the spectrum, and is therefore directly relevant for the determination of the temperature of the face. However, the contribution of the infrared radiation is combined with a contribution from red visible light, and it is not possible to isolate the infrared contribution for the purpose of determining temperature. Instead, the temperature data must be inferred from a signature in the image data, depending on, for example, the variation of the intensity of the red component across the face of the person 110. It is noted that whilst not directly related to the temperature of the face, the visible components of the masked image data can provide contextual information relevant to the determination of temperature. For example, the variation of the ratio of the red component of the image data to other components of the image data may be useful for determining temperature. In any case, the CNN is trained to extract the relevant information from the masked image data, or part of the masked image data, as will be described in more detail hereafter.
[0018] In the present example, the CNN includes multiple convolutional layers and pooling layers, followed by several fully connected layers arranged to output the temperature data. Other network architectures are possible without departing from the scope of the invention. In other examples, alternative models can be used for determining temperature data, for example support vector machines, logistic regression models, and so on. Prior to the system being deployed, the chosen model is trained using supervised learning with labelled training data including images of faces labelled with known temperatures measured via one or more alternative methods such as with a thermometer or an infrared camera. The training may be performed using any suitable techniques, including for example transfer learning and/or augmentation of training data by translating or rotating the training images such that the trained model is insensitive to the location and orientation of faces.
[0019] The method of extracting masked image data prior to processing by the further CNN has the advantage that portions of the image not containing the person’s face, which may be extraneous to the determination of temperature data, are excluded from the processing by the further CNN. However, in other examples, masked image data is not used. In one example, an image is processed using image segmentation or face detection, and the output of the image segmentation or face detection is processed together with a portion of the image data dependent on infrared radiation, for example using a CNN. In other examples, there is no explicit segmentation/face detection stage, and instead the image data, or one or more components of the image data, is processed using a single trained CNN to generate the temperature data.
[0020] In the present example, the temperature data determined at 312 is indicative of an absolute or relative temperature of the face of the person 110. Such information may be a relevant indicator as to whether the person 110 is likely to be currently infected with a virus. In other examples, more detailed temperature data is determined, for example including a spatial temperature profile including an array of values each being indicative of a temperature at a location corresponding to one or more pixels of an image. A spatial temperature profile of a person’s face may provide further information relevant to whether the person is symptomatic with a candidate medical condition, and/or may be used to discriminate between different conditions such as different infectious diseases. In some examples, multiple images are obtained over a period of time, for example one or more images each day for several days, and the determined temperature data is indicative of a temperature or spatial temperature profile for each image. In this way, the temperature data is indicative of a variation of temperature with time, from which a temperature trend can be ascertained. The temperature trend may be useful for example in distinguishing an onset of an infectious disease from other sources of temperature variation, such as physiological hormonal changes or pathological inflammatory processes.
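The temperature trend mentioned above can be illustrated with a short sketch: a least-squares slope fitted to (day, temperature) samples, where a sustained positive slope may help distinguish the onset of an infectious disease from transient variation. The sample readings are illustrative assumptions:

```python
def temperature_trend(samples):
    """Least-squares slope (degrees per day) of (day, temperature) samples.

    A sustained positive slope over several days may indicate an onset of
    fever, as opposed to other sources of temperature variation.
    """
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_y = sum(y for _, y in samples) / n
    num = sum((t - mean_t) * (y - mean_y) for t, y in samples)
    den = sum((t - mean_t) ** 2 for t, _ in samples)
    return num / den


# One illustrative reading per day over four days.
readings = [(0, 36.8), (1, 37.0), (2, 37.4), (3, 37.9)]
print(round(temperature_trend(readings), 2))  # 0.37 (degrees per day)
```

The same computation applies unchanged whether the per-image temperature data is a single value, as here, or a summary statistic of a spatial temperature profile.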
[0021] The data processing system 104 processes, at 314, the determined temperature data, along with the additional data received at 310, to determine risk data indicative of whether the person 110 is symptomatic of the candidate medical condition. In the present example, the risk data is a single number indicative of how likely it is that the person 110 is symptomatic with COVID-19. Although in this example the risk data is indicative of whether the person is symptomatic with a specific disease, in other examples risk data is indicative of whether the person is symptomatic with any of a broader class of medical conditions, for example viral infections, as opposed to another type of condition such as an unrelated inflammatory process. In other examples, risk data provides information as to which of a set of candidate medical conditions the person is likely to be symptomatic with. In such examples, the risk data may include multiple values corresponding to risks associated with different candidate conditions, such as different infectious diseases, physiological hormonal changes or pathological inflammatory processes.
[0022] In the present example, the risk data is determined on the basis of predetermined rules. For example, if the personal data indicates that the person 110 has recently travelled to a region where a high number of cases of COVID-19 have been detected, and the temperature data indicates that the temperature of the person’s face is above a predetermined threshold value, the risk data may indicate that there is a high risk that the person 110 is currently infected with the virus. On the other hand, even with the same personal data, if the temperature data indicates that the temperature of the person’s face is below the threshold value, the risk data may indicate that there is a moderate or low risk that the person is infected with the virus. In other examples, the rules may be dependent on temperature trends as determined from multiple images captured over a period of time as described above. The rules can be updated either manually by an operator of the data processing system 104, or automatically for example as further cases of infection are detected or further information about a disease is ascertained.
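A hypothetical sketch of such predetermined rules, combining a temperature threshold with travel history to grade risk. The threshold value and the three-level grading are illustrative assumptions, not values from the source:

```python
FEVER_THRESHOLD_C = 37.8  # illustrative threshold, not specified in the source


def risk_level(temperature_c, visited_high_risk_region):
    """Grade risk from determined temperature data and personal data.

    Both risk factors present -> high; exactly one -> moderate; none -> low.
    """
    factors = int(visited_high_risk_region) + int(temperature_c >= FEVER_THRESHOLD_C)
    if factors == 2:
        return "high"
    if factors == 1:
        return "moderate"
    return "low"


print(risk_level(38.2, True))   # high
print(risk_level(36.9, True))   # moderate
print(risk_level(36.9, False))  # low
```

A deployed rule set would naturally include many more factors (symptoms, temperature trend, and so on), and, as noted above, could be updated as further information about a disease is ascertained.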
[0023] In other examples, the processing of image data, personal data, and optionally additional data (see below) to determine risk data is performed using a machine learning model. For example, the personal data can be encoded as an array of numbers and provided to a neural network or other model, along with the temperature data, to determine the risk data. Such a model can be trained using supervised learning with training data collected from patients determined to be symptomatic with different medical conditions. Such a model can be trained to recognize different medical conditions, for example different infectious diseases, and to provide risk data indicative of the likelihood of a person having each of these conditions. Such risk data is not a substitute for a medical diagnosis by a qualified professional, but may be useful in deciding whether or not a person should attend a medical facility, and/or whether further steps should be taken to mitigate risks to that person or others.
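The encoding of personal data as an array of numbers, mentioned above, can be sketched as follows for yes/no questionnaire answers. The question names are hypothetical examples, and a real model input would also concatenate the temperature data:

```python
def encode_answers(answers, questions):
    """Encode yes/no questionnaire answers as a numeric array suitable as
    input to a neural network or other model, alongside the temperature data.

    Unanswered questions default to 0.0 (treated as "no").
    """
    return [1.0 if answers.get(q) else 0.0 for q in questions]


# Hypothetical question identifiers for illustration only.
questions = ["cough", "recent_travel", "loss_of_smell"]
print(encode_answers({"cough": True, "loss_of_smell": True}, questions))
# [1.0, 0.0, 1.0]
```

Categorical answers (for example, eye color) would similarly be one-hot encoded rather than mapped to a single 0/1 value.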
[0024] The data processing system 104 sends, at 316, the determined risk data to the mobile communication device 102. The mobile communication device 102 outputs, at 320, information depending on the risk data via the user interface of the mobile communication device 102. In this example, the information provides a recommendation as to whether the person 110 should seek medical attention, self-isolate, or perform any other steps to mitigate any danger to the person 110 or other people.
[0025] In the example described above, the data processing system 104 processes personal data and image data to determine risk data indicative of whether the person is symptomatic with a candidate medical condition. The app on the mobile device 102 can further be configured to record audio data using the microphone 108 of the mobile device 102, representing a vocal output of the person, for example speech or a cough. In such a configuration, the app is configured to activate the microphone 108 and to prompt the person 110 to speak or cough in range of the microphone 108. The recorded audio data is then provided to the data processing system 104 along with the personal data and the image data and used in determining the risk data at 314. Audio data representing a vocal output of a person may contain additional information relevant to the determination of whether the person 110 is symptomatic with a given medical condition.
[0026] The above embodiments are to be understood as illustrative examples of the invention. Further embodiments of the invention are envisaged. For example, the methods described herein could be performed using images of only a specific part of a person’s face, or any other suitable part of the person’s symptomatic anatomy. In other examples, other types of data are collected and processed in addition to, or as an alternative to, the data discussed above, for example genetic data collected using a genetic testing kit, which may be relevant to genetic susceptibility to certain medical conditions. In such examples, the genetic data may be collected by a service provider and stored in a server. The genetic data may then be obtained from the server by the data processing system arranged to determine the risk data, for example via an application programming interface (API). In other examples, further biometric data is collected such as heart rate data indicative of heart rate and/or heart rate variability. Heart rate data can be collected, for example, manually by a person measuring his or her pulse, or using a wearable device such as a watch. In some examples, one or more types of data are collected over a period of time, for example over a period of several days, and trends in data are used to determine risk data.
[0027] In examples, a computer-implemented diagnostic tool is provided. The diagnostic tool is operable to process at least one of: image data representing a portion of a person’s body, at least part of the image data being dependent on infrared radiation emitted by the portion of the person’s body; audio data representing a vocal output of the person; and personal data indicative of answers to one or more predetermined questions, to determine risk data indicative of whether the person is symptomatic with a candidate medical condition. The diagnostic tool is arranged to output information indicative of the determined risk data. In examples, the diagnostic tool is provided as an app on a mobile communication device, for example a smartphone. In other examples, the diagnostic tool is implemented using a cloud-based system.
[0028] In examples, an app for a mobile communication device is provided which is arranged to process image data captured using a camera of the mobile communication device and representing a part of a person’s body, the image data including a component dependent on infrared radiation, to determine temperature data indicative of a temperature of the person’s body. In one example, the app is configured to output information in accordance with a predetermined temperature metric.
[0029] It is to be understood that any feature described in relation to any one embodiment may be used alone, or in combination with other features described, and may also be used in combination with one or more features of any other of the embodiments, or any combination of any other of the embodiments. Furthermore, equivalents and modifications not described above may also be employed without departing from the scope of the invention, which is defined in the accompanying claims.

Claims

WHAT IS CLAIMED IS:
1. A computer-implemented method comprising: obtaining, from a camera of a mobile communication device, image data representing a portion of a person’s body, at least part of the image data being dependent on infrared radiation emitted by the portion of the person’s body; processing the at least part of the image data to determine temperature data indicative of a temperature of the portion of the person’s body; processing the determined temperature data to determine risk data indicative of whether the person is symptomatic with a candidate medical condition; and outputting, via a user interface, information depending on the determined risk data.
2. The method of claim 1, wherein: the image data comprises a plurality of color components including a red color component, the red color component being dependent on the infrared radiation emitted by the person’s body; and the at least part of the image data comprises the red color component of the image data.
3. The method of claim 2, wherein the image data is RGB image data.
4. The method of claim 1, comprising detecting a location of the portion of the person’s body in the image data, wherein the processing of the at least part of the image data is in dependence on the detected location of the portion of the person’s body in the image data.
5. The method of claim 4, wherein processing the at least part of the image data comprises: generating, using image segmentation, masked image data corresponding to the location of the portion of the person’s body; and processing the masked image data to determine the temperature data.
6. The method of claim 1, wherein the portion of the person’s body is at least part of the person’s face.
7. The method of claim 1, wherein obtaining the image data comprises: prompting, via the user interface, a user of the mobile communication device to face the camera toward the portion of the person’s body; activating the camera of the mobile communication device; and capturing the image data using the camera of the mobile communication device.
8. The method of claim 1, wherein: the image data comprises a plurality of images representing the portion of the person’s body at a plurality of times; and the determined temperature data is indicative of a respective temperature of the portion of the person’s body at each of the plurality of times.
9. The method of claim 1, comprising obtaining audio data representing a vocal output of the person, wherein determining of the risk data further comprises processing the obtained audio data.
10. The method of claim 9, wherein obtaining the audio data comprises: activating a microphone of the mobile communication device; prompting, via the user interface, the person to speak or cough in range of the microphone of the mobile communication device; and recording the audio data using the microphone of the mobile communication device.
11. The method of claim 1, comprising obtaining personal data indicative of answers to one or more predetermined questions relating to the person, wherein the determining of the risk data further comprises processing the obtained personal data.
12. The method of claim 11, wherein obtaining the personal data comprises: requesting, via the user interface, a user of the mobile communication device to provide user input indicative of answers to the one or more predetermined questions; receiving the requested user input via the user interface; and generating the personal data in dependence on the received user input.
13. The method of claim 11, wherein the personal data is processed with the temperature data such that the risk data is indicative of whether the person is symptomatic with a specific infectious disease.
14. The method of claim 13, wherein the specific infectious disease is the COVID-19 coronavirus.
15. A mobile communication device comprising: a camera arranged to capture image data representing a portion of a person’s body, the camera comprising an imaging system arranged to focus incident radiation on an array of detector elements configured to generate the image data, the imaging system being configured such that the generated image data includes a component dependent on infrared radiation within the incident radiation; a user interface; and a software application arranged to cause the mobile communication device to process at least part of the image data, the at least part of the image data including the component dependent on infrared radiation, to determine temperature data indicative of a temperature of the person’s body.
16. The mobile communication device of claim 15, wherein the software application is further arranged to cause the mobile communication device to: process the determined temperature data to determine risk data indicative of whether the person is symptomatic with a candidate medical condition; and output, via the user interface, information depending on the determined risk data.
17. The mobile communication device of claim 16, wherein the software application is further arranged to cause the mobile communication device to output, via the user interface, information dependent on the determined temperature data in accordance with a given temperature metric.
18. The mobile communication device of claim 15, being a smartphone or tablet computer.
19. The mobile communication device of claim 15, comprising a microphone arranged to record audio data representing a vocal output of the person, wherein the software application is arranged to cause the mobile communication device to determine risk data further in dependence on the recorded audio data.
20. A non-transient storage medium comprising machine-readable instructions, which, when executed by processing circuitry of a computing system, cause the computing system to: process image data captured using a camera of a mobile communication device and representing a portion of a person’s body, at least part of the image data being dependent on infrared radiation emitted by the portion of the person’s body, to determine temperature data indicative of a temperature of the portion of the person’s body; process the determined temperature data to determine risk data indicative of whether the person is symptomatic with a candidate medical condition; and output, for display via a user interface, information depending on the determined risk data.
PCT/EP2021/056737 2020-03-16 2021-03-16 Risk assessment for a candidate medical condition WO2021185872A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202062990234P 2020-03-16 2020-03-16
US62/990,234 2020-03-16

Publications (1)

Publication Number Publication Date
WO2021185872A1 true WO2021185872A1 (en) 2021-09-23

Family

ID=75426556


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060193498A1 (en) * 2005-02-25 2006-08-31 Jason Hartlove System and method for detecting thermal anomalies
US20190216333A1 (en) * 2018-01-12 2019-07-18 Futurewei Technologies, Inc. Thermal face image use for health estimation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
K. LAI ET AL: "Multi-spectral facial biometrics in access control", 2014 IEEE SYMPOSIUM ON COMPUTATIONAL INTELLIGENCE IN BIOMETRICS AND IDENTITY MANAGEMENT (CIBIM), 1 December 2014 (2014-12-01), pages 102 - 109, XP055260466, ISBN: 978-1-4799-4533-7, DOI: 10.1109/CIBIM.2014.7015450 *
KAIMING HE ET AL.: "Mask R-CNN", arXiv:1703.06870


Legal Events

Code Description
121  Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 21716967; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122  Ep: PCT application non-entry in European phase (Ref document number: 21716967; Country of ref document: EP; Kind code of ref document: A1)