US12219297B2 - Image processing system and method - Google Patents
- Publication number: US12219297B2 (application US17/690,588)
- Authority: US (United States)
- Prior art keywords: user, colour, image data, image, regions
- Legal status: Active, expires (an assumption, not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T11/10
- H04N9/643 — Hue control means, e.g. flesh tone control
- G06T11/00 — 2D [Two Dimensional] image generation
- G06T7/174 — Segmentation; Edge detection involving the use of two or more images
- G06T7/90 — Determination of colour characteristics
- G06T7/97 — Determining parameters from multiple pictures
- G06V10/22 — Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
- G06V10/56 — Extraction of image or video features relating to colour
- G06V10/70 — Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V40/15 — Biometric patterns based on physiological signals, e.g. heartbeat, blood flow
- G06V40/16 — Human faces, e.g. facial parts, sketches or expressions
- G06T2207/10016 — Video; Image sequence
- G06T2207/10024 — Color image
- G06T2207/20081 — Training; Learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/20092 — Interactive image processing based on input by user
- G06T2207/30048 — Heart; Cardiac
- G06T2207/30076 — Plethysmography
- G06T2207/30088 — Skin; Dermal
- G06T2207/30101 — Blood vessel; Artery; Vein; Vascular
- G06T2207/30201 — Face
Definitions
- the present invention relates to an image processing system and method.
- Photo-plethysmography uses an LED to illuminate the skin, with the quantity of transmitted/reflected light being monitored to identify cardiac cycles (which, over time, indicates a heart rate of the user) or other characteristics of the user's cardiac activity.
- Photo-plethysmography has been commonly used in devices such as fitness bands, which are particularly suited for such an implementation as this technology requires attaching a dedicated light source and optical sensor to a part of a user's body, typically a finger or wrist, to measure their cardiovascular activity.
- Transdermal optical imaging (TOI) has been developed to use images captured by widely-available digital cameras, such as the camera of a smart-phone, to detect the small colour variations of a user's skin. These colour changes are indicative of the blood flow beneath the skin of the user, and therefore analysis of these images can enable measurement of the user's cardiovascular activity.
- One method for performing TOI uses machine learning to detect the regions in video images where the colour signal from the re-emitted light indicates concentrations of haemoglobin, as haemoglobin concentrations have distinct colour signatures based on the colour of the light re-emitted. Independent component analysis may then be used on these detected regions to extract the user's cardiovascular information.
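The underlying premise, that a periodic colour variation in a skin region encodes cardiovascular activity, can be illustrated with a minimal sketch. The synthetic green-channel trace, frame rate and amplitudes below are illustrative assumptions, and the simple frequency-peak approach stands in for the machine learning and independent component analysis used by actual TOI systems:

```python
import numpy as np

# Synthetic stand-in for TOI input: the mean green-channel value of one
# skin region across 10 seconds of 30 fps video, carrying a small
# pulse-driven oscillation at 1.2 Hz (72 beats per minute) plus noise.
fps = 30.0
t = np.arange(0, 10, 1 / fps)
heart_rate_hz = 1.2
region_green = 120.0 + 0.5 * np.sin(2 * np.pi * heart_rate_hz * t)
region_green += 0.05 * np.random.default_rng(0).normal(size=t.size)

# Locate the dominant frequency of the detrended trace.
spectrum = np.abs(np.fft.rfft(region_green - region_green.mean()))
freqs = np.fft.rfftfreq(t.size, d=1 / fps)
estimated_bpm = 60.0 * freqs[np.argmax(spectrum)]
print(round(estimated_bpm))  # ≈ 72
```

This also illustrates why obscuring or reshaping the temporal colour pattern, as described below, defeats such a measurement.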
- an image processing system is provided in claim 1.
- an image processing method is provided in claim 14.
- FIG. 1 schematically illustrates an image processing system
- FIG. 2 schematically illustrates an example of a frame of image data
- FIG. 3 schematically illustrates an example image processing method.
- a company may interview prospective candidates to hire for a job opening.
- the company may decide to use TOI to monitor a prospective candidate's cardiovascular activity, potentially without the consent or knowledge of the prospective candidate.
- the company could then examine the prospective candidate's cardiovascular activity to identify the candidate's level of stress or nervousness during the interview.
- It may be desirable to provide an image processing system that can modify captured image data so as to obscure the biomarker signals of a user in image data that can be used for TOI, such as the temporal pattern of the colour change of the user's exposed skin. Additionally, it may also be desirable to provide an image processing system that can modify captured image data so as to cause detected biomarker signals to indicate a predetermined cardiovascular activity.
- an image processing system for modifying one or more images of a user to change a cardiovascular activity measurable from the one or more images
- the device comprises an image receiving section 110 configured to receive image data comprising a plurality of image frames; a region selection section 120 configured to select one or more regions where the colour of the skin of the user changes temporally in the image data, wherein the colour changes are indicative of the cardiovascular activity of the user; and a modifying section 130 configured to modify the colour change of the one or more selected regions in one or more image frames to create modified image data.
- the image receiving section 110 may receive image data that are being captured live, such as a live video call, or image data that have been captured previously, such as a pre-recorded lecture or instructional video, in order to modify the image data prior to the image data being shared for example.
- FIG. 2 illustrates an example image frame 200 that comprises an image of a user 210 .
- the region selection section 120 may select one or more regions 220 where the colour of the skin of the user changes temporally in the image data, wherein the colour changes are indicative of the cardiovascular activity of the user.
- the regions 220 shown in FIG. 2 are only a selection of exemplary regions in which the colour of the skin of the user may change temporally.
- the selected regions may vary in number such that there may be more or fewer regions, and these regions may be located in other positions.
- FIG. 2 shows an image of the user where only the skin of the user's face is visible in the image data
- the image data may alternatively, or additionally, comprise images of the user's hand or chest (or indeed any other area) where the skin of the user is visible, for example. Therefore, the region selection section may select one or more regions where the colour of the skin of the user changes temporally in image data of the user's hand or chest where corresponding parts of the skin of the user are visible, either instead of or in addition to selecting regions of the user's face.
- the one or more selected regions where the colour of the skin of the user changes temporally in the image data may be one or more predetermined regions. These predetermined regions may be monitored/modified specifically, rather than (or in addition to) performing a continuous detection process to identify which regions should be subjected to image processing techniques in accordance with embodiments of the present disclosure.
- the predetermined regions may be determined using any suitable process, including any one or more of those described below.
- One or more of the predetermined regions may be selected using a calibration process that is performed upon start-up, or during a user's first use of the system (for example).
- This calibration may use TOI techniques to predetermine one or more regions of the user's exposed skin that may be used to detect the user's cardiovascular activity, and the location of the one or more predetermined regions on the user's exposed skin may be stored as a part of a user profile.
- Suitable regions detected in this manner may be those that demonstrate an above-threshold magnitude or frequency of biomarker signals, for example, and/or biomarker signals associated with particular cardiovascular activity.
- the region selection section 120 may then select one or more regions by detecting one or more locations where the skin of the user is visible in the image data, and selecting one or more of those locations that correspond to the predetermined regions as identified in the user's profile.
- the one or more predetermined regions may be selected by a user prior to, or whilst, using the image processing system.
- the one or more predetermined regions may be generated by a plurality of users and provided to a server or a central database so as to provide a library of one or more predetermined regions that may be used for TOI techniques.
- developers may indicate one or more regions in such a library rather than relying upon user uploads (or only user uploads).
- In the case of a library comprising a number of different users' predetermined regions, it may be considered advantageous to generate a number of representative predetermined regions by averaging or otherwise combining different users' predetermined regions. These averages or combinations may be generated in any suitable fashion for generating an improved dataset for use by specific users or groups of users.
- This library may be used in place of a user profile in identifying predetermined regions, or it may be used as a source of data from which the user profile can be updated—for instance, a selection of predetermined regions may be added to a user profile in dependence upon suitability for that user.
- Suitability may vary in dependence upon a number of factors, such as common peripherals used (for instance, a user of a full-immersion HMD may not have a visible forehead) and/or demographic information such as age (for instance, older users may require different predetermined regions if wrinkles impact detection of biomarker signals).
- Such embodiments are examples of embodiments in which the region selection section 120 is configured to select one or more regions in accordance with predetermined regions in which colour changes are likely to be observed. That is to say that rather than selecting regions in which colour changes are observed, regions may be selected based upon an expected or likely observation of colour changes over time.
- the region selection section may compare the colour changes between the temporally adjacent frames in order to select one or more regions 220 where the colour of the skin of the user changes temporally in the image data.
- Such a process may be performed in the initial stages of the image modification process, effectively as a live calibration, or may be performed throughout the image modification process. This is therefore an alternative (or additional) approach to the use of predetermined regions.
- the region selection section 120 may compare a predetermined number of frames corresponding to a period of time to select one or more regions 220 where the colour of the skin of the user changes temporally in the image data. For example, thirty frames would correspond to one second if the image data are recorded at thirty frames per second. These frames may be compared to one another (effectively a monitoring performed for a fixed amount of time), or the average of the frames may be compared to the average of another set of frames corresponding to a different time period of the same duration to identify changes. Any other suitable comparisons to identify a change in colour over time may also be considered.
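The comparison of averaged frame batches described above can be sketched as follows; the frame size, drift magnitude and threshold are illustrative assumptions rather than values from the description:

```python
import numpy as np

# Two seconds of 30 fps "video", modelled as 8x8 single-channel frames.
# A 2x2 patch drifts in colour between the first and second seconds.
rng = np.random.default_rng(1)
fps = 30
frames = np.full((2 * fps, 8, 8), 100.0)
frames[fps:, 2:4, 2:4] += 2.0
frames += 0.1 * rng.normal(size=frames.shape)

window_a = frames[:fps].mean(axis=0)         # average of the first second
window_b = frames[fps:].mean(axis=0)         # average of the second second
changed = np.abs(window_b - window_a) > 1.0  # candidate region mask
print(int(changed.sum()))  # 4: the pixels of the drifting patch
```

Averaging each window before comparing also suppresses per-frame noise, which is one reason to compare batch averages rather than individual frames.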
- the region selection section 120 may instead sample every other frame in order to reduce the processing requirements relative to considering every frame for a particular time period (for instance, sampling fifteen frames a second in a thirty frames per second embodiment). Sampling every other frame is provided only as one example of the number of frames that may be sampled. For example the region selection section may sample two frames out of three. Any other appropriate sampling scheme may be considered—for example, based upon user preference or technological considerations.
- the region selection section 120 may select one or more regions where the colour of the skin of the user changes temporally within a predetermined number of frames. For example, if the image data are configured at thirty frames per second, the region selection section may select one or more regions where the colour of the skin of the user changes in a thirty-frame period, which would correspond to one second, or fifteen frames, which would correspond to half a second. These numbers are provided as an example and other appropriate numbers of frames may be used. Additionally, the image data may be configured for other frame rates, such as sixty frames per second for example, with the selected number of frames being modified in a corresponding fashion.
- the region selection section 120 may select one or more regions where the colour of the skin of the user changes temporally relative to a temporal colour change averaged across all of the skin (or a portion thereof, such as the face) of the user visible in the image data. That is to say that an average colour change for the identified skin may be discounted when identifying colour changes for individual regions of the user's skin.
- This may allow the region selection section to differentiate between temporal colour changes that may occur from changes in illumination and temporal colour changes that are indicative of the user's cardiovascular activity.
- this can lead to a more reliable determination of appropriate regions, as a selection of a region based upon observed colour changes indicative of environmental lighting changes may be avoided in many cases.
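A minimal sketch of discounting the global colour change, assuming per-region mean colours have already been extracted (all numbers illustrative): a uniform illumination shift affects every region equally and so cancels, while a region whose colour changes with the user's pulse stands out:

```python
import numpy as np

# Mean colour of four skin regions in two consecutive frames. Between the
# frames the room lighting brightens every region by 3, while region 2
# additionally changes by 2 due to the user's pulse.
prev = np.array([100.0, 100.0, 100.0, 100.0])
curr = np.array([103.0, 103.0, 105.0, 103.0])

raw_change = curr - prev
# Discount the change averaged across all visible skin: the shared
# illumination component cancels, leaving only pulse-driven variation.
relative_change = raw_change - raw_change.mean()
print(relative_change)  # region 2 stands out; the others sit below zero
```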
- the modifying section 130 may modify the colour change of the one or more selected regions in one or more image frames by modifying the colour value of one or more of the pixels in each of one or more of the regions.
- "modify the colour change" is considered to refer to the performing of a process that causes a detected colour change (as detected from a comparison of a plurality of image frames) to be different in the modified images from that which would be detected from the captured (that is, unmodified) images.
- the modification may be implemented on a per-region and/or per-pixel basis as appropriate. Any appropriate colour space may be used, for example RGB or YUV.
- the modifying section may shift the colour value of the pixels in a colour space towards the colour value of the pixels in the corresponding region in a reference image frame. That is to say that the colour value of pixels may be modified so that the difference between the colour values of corresponding pixels (or pixel regions) in an image frame and in the reference image frame is reduced.
- the reference image frame may be the immediately preceding image frame, for example.
- the reference image frame used as the colour reference for the modification of an image frame may not be the immediately preceding image frame.
- A single image frame (say, image frame n) may be used as a reference image frame for two or more following image frames (n+1, n+2, for example). This can reduce the frequency with which a colour reference is updated, thereby reducing the amount of processing.
- the modifying section may use the immediately preceding frame (the first image frame) as the reference image frame for modifying the second image frame.
- the third image frame is the next frame to be modified by the modifying section, and the modifying section may also use the first image frame as the reference image frame for modifying the third frame.
- the modifying section may then continue to use the first image frame as the reference image frame for modifying later image frames until a predetermined number of image frames have been modified with the first image frame as the reference image frame. After the predetermined number of image frames have been modified, another image frame may then be selected as the reference image frame.
- the modifying section will use the first image frame as the reference image frame up to and including the modification of the sixth image frame.
- the modifying section modifies the seventh image frame
- the modifying section will use the image frame immediately preceding the seventh image frame as the reference image frame; in this case, the sixth image frame.
- the modifying section will then continue to use the sixth image frame as the reference image frame for the modification of the next four image frames (the eighth image frame to the eleventh image frame) before another frame is selected as the reference image frame.
- any appropriate separation between the image frame to be modified and the reference image frame may be used.
- the separation may be five image frames when the reference image frame is selected (i.e. image frame two could be selected as the reference image frame for the modification of the seventh image frame).
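The reference-frame schedule described above (one reference held for a fixed number of subsequent frames) can be sketched as a small indexing function; `hold` is an assumed parameter name, and indices are 0-based so that frames 1 to 5 reference frame 0, frames 6 to 10 reference frame 5, and so on:

```python
def reference_index(frame_index: int, hold: int = 5) -> int:
    """Return the reference frame for a given frame, holding each
    reference for `hold` subsequent frames (0-based indices; `hold`
    is an assumed parameter name)."""
    if frame_index == 0:
        return 0  # the first frame has no predecessor
    return ((frame_index - 1) // hold) * hold

# Frames 1-5 reference frame 0; frames 6-10 reference frame 5; and so on.
print([reference_index(i) for i in range(1, 12)])
# [0, 0, 0, 0, 0, 5, 5, 5, 5, 5, 10]
```

A larger `hold` value updates the colour reference less often, trading responsiveness for reduced processing, matching the motivation given above.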
- the predetermined number of frames may be selected based on the cardiovascular activity that is desired to be indicated in the modified image data, and/or based upon the cardiovascular activity that is desired to be concealed. For instance, if a user wishes to indicate that they are calm then the predetermined number of frames may be increased, as the amount of cardiovascular activity (and therefore colour change activity) would be expected to be lower. However, in the alternative case in which a user wishes to appear angry, the predetermined number of frames may be lowered in accordance with the increased cardiovascular activity that would be expected to be indicated. The same considerations may also apply when selecting a predetermined number of frames based upon the cardiovascular activity that is being concealed.
- the modifying section 130 may use a modified version of an image frame that has already been modified by the modifying section as the reference image frame.
- the modifying section may use an unmodified version of an image frame for modifying subsequent frames, even when a modified version of the image frame has been output in the modified image data.
- the determined change may be proportional to the difference between an average of the colour value of the pixels in a region of the image frame to be modified and an average of the colour value of the pixels in the corresponding region of the reference image frame.
- the determined colour change may be a percentage of the identified difference in averages. Therefore, when the difference between these two average colour values is large, the determined change will also be large, and when the difference between the two colour values is small, the determined change will also be small.
- the determined change may be chosen so that the average colour value of the pixels in the region of the modified image frame is an average of the colour value of the pixels in both the region of the image frame to be modified and the corresponding region of the reference image frame.
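As a sketch, the proportional shift towards the reference frame can be written as a simple blend; `alpha` is an assumed name for the proportion, and `alpha = 0.5` gives the mid-point average mentioned above:

```python
import numpy as np

# Shift a region's pixel colours a fraction `alpha` of the way towards the
# colours of the corresponding region in the reference frame. The applied
# change is proportional to the difference, so large differences produce
# large corrections, and alpha = 0.5 yields the average of the two regions.
def shift_towards_reference(region, reference_region, alpha=0.5):
    region = np.asarray(region, dtype=float)
    reference_region = np.asarray(reference_region, dtype=float)
    return region + alpha * (reference_region - region)

frame_region = np.array([110.0, 114.0])
reference_region = np.array([100.0, 100.0])
print(shift_towards_reference(frame_region, reference_region))  # [105. 107.]
```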
- the modifying section 130 may modify the colour change of the one or more selected regions in the one or more image frames in the image data to create modified image data in any of a variety of ways depending upon the desired result.
- the modifying section may modify the colour change of the one or more selected regions in the image data to create modified image data, in which a temporal pattern of the colour change of the one or more selected regions in the modified image data is indicative of a predetermined cardiovascular activity representative of a cardiovascular activity of a human.
- the temporal pattern may be predetermined based on a database of temporal patterns of the colour change of one or more regions in unmodified image data of a plurality of different users. The temporal pattern may also be determined based on previously captured image data of the user, for example.
- the TOI techniques would generate results that show a predetermined cardiovascular activity, but the predetermined cardiovascular activity may not be the actual cardiovascular activity of the user in the image data.
- This type of modification to the image data could advantageously allow a user to not only protect their privacy, but also allow the user to display a cardiovascular activity of their choice. For example, the user may select a predetermined cardiovascular activity corresponding to a relaxed person, and that would result in the generation of an image that would be identified by TOI techniques as showing the user being relaxed.
- the modifying section may modify the colour change of the one or more selected regions in the image data to create modified image data where the temporal pattern of the colour change of the one or more selected regions in the modified image data is not indicative of any particular cardiovascular activity.
- This type of modification to the image data could reduce the information about the user that is able to be obtained using TOI techniques on the modified image data; for instance, concealing the emotional state of the user.
- the modifying section 130 may modify the colour of the one or more regions by randomly perturbing the colour value within a predetermined range for each region in each image frame in the plurality of image frames.
- the predetermined range may be selected based on the range of colour variation in the one or more regions within a set time period, for example, or may be defined as a variable based upon human perception of the colour changes.
- the perturbation of the colour may be small enough to be imperceptible to a human viewer whilst also preventing TOI techniques from determining the cardiovascular activity of the user.
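A sketch of the random perturbation, with an assumed range of ±1 colour value per pixel, small enough to be plausibly imperceptible while disturbing the fine temporal pattern:

```python
import numpy as np

# Perturb each pixel of a selected region by a random amount drawn from a
# predetermined range (here an assumed +/-1 colour value), clipping to the
# valid 8-bit range. Applied per frame, this masks the small pulse-driven
# colour oscillation without visibly altering the image.
rng = np.random.default_rng(42)

def perturb_region(region, max_delta=1.0):
    region = np.asarray(region, dtype=float)
    noise = rng.uniform(-max_delta, max_delta, size=region.shape)
    return np.clip(region + noise, 0.0, 255.0)

region = np.full((4, 4), 128.0)
perturbed = perturb_region(region)
print(bool(np.abs(perturbed - region).max() <= 1.0))  # True
```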
- the modifying section 130 may modify the average colour change of the pixels in each of the one or more selected regions to a rolling average of the colour of each respective pixel in the region over a predetermined number of frames. For example, this rolling average would smooth the temporal pattern of the colour change for each of the one or more selected regions. Therefore, TOI may be unable to accurately measure the cardiovascular activity of the user, as the level of detail of the temporal pattern of the colour change would be reduced.
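The rolling-average smoothing can be sketched for a single pixel's colour trace; the window size is an illustrative choice:

```python
import numpy as np

# Smooth one pixel's colour trace with a rolling average over `window`
# frames (window size is an illustrative choice). The fine oscillation a
# TOI system would analyse is largely flattened out.
def rolling_average(trace, window=5):
    trace = np.asarray(trace, dtype=float)
    kernel = np.ones(window) / window
    # 'valid' keeps only positions with a full window of frames
    return np.convolve(trace, kernel, mode="valid")

pulse = [100, 104, 100, 96, 100, 104, 100, 96, 100, 104]
smoothed = rolling_average(pulse)
print(smoothed.max() - smoothed.min())  # far smaller than the original swing of 8
```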
- the temporal pattern of the colour change may be selected by the user.
- the user may select a temporal pattern of the colour change from a list of potential temporal patterns of the colour change.
- the list of temporal patterns of the colour change may include, for example, temporal patterns that are indicative of a desired cardiovascular activity, which may be based on previously captured image data of the user; and one or more temporal patterns that are similar to a temporal pattern of one or more other participants in a video call. Therefore, a user could select an option that would cause the image processing system to generate modified image data in which the cardiovascular activity measurable from the image data could mimic a cardiovascular activity of another participant in a video call.
- Allowing a user to select the temporal pattern of the colour change could advantageously enable a user to be able to choose between whether, and which, predetermined cardiovascular activity may be measured from the modified image data using TOI.
- the operation of the modifying section 130 may be activated in response to one or more predetermined trigger conditions.
- the predetermined trigger condition may be based upon an input provided by a user; a detection of one or more temporal patterns of the colour change in the image data that may indicate a certain cardiovascular activity of the user; or a change in the number of one or more regions selected by the region selection section.
- the modifying section 130 may be activated to disguise this temporal pattern of the colour change. This activation may be for a predetermined period of time, until the end of the video call (or recording), or until the detected pattern has returned to a preferred state (such as indicating that the user is calm), or indeed any other period.
- each image frame is modified
- only a subset of the image frames may be modified to achieve a desired effect.
- a user's cardiovascular activity could be sufficiently concealed by modifying only every other image frame (or any other distribution of frames), as this may prevent the colour change from being measured as corresponding to a particular pattern. That is to say that not every image frame must have a modified colour change in order for a cardiovascular activity to be obscured from TOI methods.
- the colour change applied by the modifying section 130 may be determined by a machine learning model.
- the machine learning model is trained to modify the image data in order to control a measurement of the cardiovascular activity of the user by an imaging system configured to detect the cardiovascular activity of the user indicated in the image data.
- TOI techniques typically use a trained machine learning model to measure a user's cardiovascular activity from image data comprising image frames of the user; an aim of the present disclosure may therefore be considered to be causing such a model to generate an incorrect output for a captured image by applying an appropriate modification.
- a generative adversarial network may be utilised in which a generative network is trained to generate outputs (from input images) that generate particular results when provided to one or more existing TOI models (which serve as the adversarial model in the GAN). These particular results may be a desired classification (such as ‘relaxed’) or a failure to classify, for instance. Based upon the success or failure of particular modifications, the generative model may be refined as appropriate.
- the machine learning model may be trained based on a database of image data for a plurality of users, or a database of image data for an individual user. Additionally, preliminary training may be based on a database of image data for a plurality of users and the model may then be further calibrated for a user based on images of the user. The calibration may require specific camera angles or lighting when capturing images of the user, although in some embodiments this may not be necessary.
- an adversarial machine learning model may be used to modify the image data in order to control the result generated by the TOI machine learning model.
- any appropriate adversarial learning model may be used; for example, fast gradient sign or projected gradient descent methods.
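As an illustration of the fast gradient sign idea, consider a toy linear "TOI model": for a linear score the gradient with respect to the input is just the weight vector, so an adversarial perturbation of size `eps` is `eps * sign(w)`. A real system would backpropagate through a trained network; everything here is an assumption for illustration:

```python
import numpy as np

# Toy linear "TOI model": score(x) = w . x over flattened region colours,
# with a higher score read as, say, "stressed". For a linear model the
# gradient of the score with respect to the input is w itself, so the
# fast-gradient-sign step is eps * sign(w) in the chosen direction.
rng = np.random.default_rng(0)
w = rng.normal(size=16)   # assumed model weights
x = rng.normal(size=16)   # flattened colours of a selected region

def score(v):
    return float(w @ v)

eps = 0.1
x_adv = x - eps * np.sign(w)    # step against the gradient to lower the score
print(score(x_adv) < score(x))  # True: the measured "activity" is shifted
```

The per-pixel change is bounded by `eps`, which is what allows such modifications to steer a model's output while remaining visually small.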
- the machine learning model may modify the image so that TOI techniques cannot extract a user's cardiovascular activity from the image data.
- the machine learning model may be trained to generate a modification to the image data that causes the measurement of the cardiovascular activity of the user, generated by the imaging system configured to detect the cardiovascular activity of the user indicated in the image data, to correspond to a predetermined cardiovascular activity.
- a user will not only be able to disguise their cardiovascular activity, but also be able to modify the image data to show a different cardiovascular activity, when TOI is used to measure the user's cardiovascular activity from the modified image data.
- varying light levels and/or colours within the environment may have an effect on the TOI process. It may therefore be advantageous in some embodiments to perform a calibration process, either initially during a setup process or as an ongoing process alongside the image modification process, so as to determine the light levels within the environment and any fluctuations that may occur.
- image processing techniques may be performed to identify a colour change that affects the whole of the captured image, or a particular region of the image (such as the user's face), that may be indicative of an environmental lighting change.
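One way to make this distinction concrete is to compare the mean colour shift over the whole frame with the shift over a face region between consecutive frames. The frame sizes, the face mask, and the 0.05 threshold below are illustrative assumptions:

```python
import numpy as np

def lighting_change(prev, curr, face_mask, thresh=0.05):
    # mean colour shift over the whole frame vs. over the face region only
    global_shift = np.abs(curr.mean(axis=(0, 1)) - prev.mean(axis=(0, 1)))
    face_shift = np.abs(curr[face_mask].mean(axis=0) - prev[face_mask].mean(axis=0))
    # a shift seen across the whole frame suggests environmental lighting;
    # a shift confined to the face suggests a physiological colour change
    if np.any(global_shift > thresh):
        return "global"
    if np.any(face_shift > thresh):
        return "face-local"
    return "none"

h, w = 32, 32
prev = np.full((h, w, 3), 0.5)          # flat grey reference frame
mask = np.zeros((h, w), dtype=bool)
mask[8:24, 8:24] = True                 # hypothetical face region

brighter = prev + 0.1                   # whole frame brightens
assert lighting_change(prev, brighter, mask) == "global"

flushed = prev.copy()
flushed[mask] += np.array([0.1, 0.0, 0.0])  # red shift on the face only
assert lighting_change(prev, flushed, mask) == "face-local"
```

A calibration process could run this comparison continuously, so that environmental changes are excluded from the image modification step.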
- a user may be wearing an HMD or the like (for instance, headphones or a microphone) that can be provided with an indicator light or a predetermined marker.
- This light or marker may be used as a reference for such a calibration, as the appearance of these elements would be expected to be both known and constant over time; in addition to this, such elements may also be located close to the user's face and so provide useful information.
- the image modification system comprises a wearable component that comprises a predetermined marker and/or a light source.
- a camera may comprise one or more components of the image processing system. A user would therefore be able to purchase a camera with the advantageous features of the image processing system of the present description; image data of individuals captured by such a camera would have increased privacy in comparison to image data captured with a camera that does not comprise the image processing system, irrespective of the video call platform that is used. Additionally, if a camera comprises the image processing system, it may reduce the processing load on the device operating the video call platform or prevent data collection by the video call platform itself.
- an image processing method comprising the steps of: receiving 310 image data comprising a plurality of image frames; selecting 320 one or more regions where the colour of the skin of the user changes temporally in the image data, wherein the colour changes are indicative of the cardiovascular activity of the user; and modifying 330 the colour change of the one or more selected regions in the one or more image frames to create modified image data.
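A minimal sketch of these three steps, using per-pixel temporal standard deviation to select regions and replacement with the temporal mean as the simplest possible modification, might look as follows. The threshold and the suppression strategy are illustrative assumptions rather than the claimed implementation:

```python
import numpy as np

def modify(frames, thresh=0.01):
    # step 1: receive image data as a stack of frames, shape (T, H, W, 3)
    frames = np.asarray(frames, dtype=float)
    mean = frames.mean(axis=0)                    # per-pixel temporal mean
    # step 2: select regions whose colour changes temporally
    variation = frames.std(axis=0).max(axis=-1)   # strongest channel std per pixel
    selected = variation > thresh
    # step 3: modify the colour change in the selected regions
    out = frames.copy()
    out[:, selected] = mean[selected]             # flatten the temporal variation
    return out, selected

T, H, W = 8, 4, 4
frames = np.full((T, H, W, 3), 0.5)
t = np.arange(T)
frames[:, 1, 1, 0] += 0.02 * np.sin(2 * np.pi * t / T)  # pulse-like red flicker

modified, selected = modify(frames)
assert selected[1, 1] and selected.sum() == 1   # only the flickering pixel selected
assert np.allclose(modified.std(axis=0), 0.0)   # temporal colour change removed
```

A practical system would restrict the selection to skin pixels and apply a subtler modification (or a learned one, as described above) so that the output remains visually natural.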
- An image processing system for modifying one or more images of a user to change a cardiovascular activity that is measurable from the one or more images, the system comprising:
- modifying section is operable to apply a colour change generated by a machine learning model that is trained to control a measurement of the cardiovascular activity of the user, the measurement being generated by an imaging system configured to detect the cardiovascular activity of the user indicated in the image data.
- region selection section is configured to select one or more regions in accordance with predetermined regions in which colour changes are likely to be observed.
- region selection section is configured to select one or more regions where the colour of the skin of the user changes temporally relative to a temporal colour change averaged across all of the skin of the user visible in the image data.
- region selection section is configured to select one or more regions where the colour of the skin of the user changes temporally over a predetermined number of frames.
- a camera comprising the image processing system of any of the preceding clauses.
- An image processing method for modifying one or more images of a user to change a cardiovascular activity that is measurable from the one or more images comprising the steps of:
- a computer program comprising computer executable instructions adapted to cause a computer system to perform the method of clause 14.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Evolutionary Computation (AREA)
- Biophysics (AREA)
- Computing Systems (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Signal Processing (AREA)
- Life Sciences & Earth Sciences (AREA)
- Databases & Information Systems (AREA)
- Cardiology (AREA)
- Heart & Thoracic Surgery (AREA)
- Physiology (AREA)
- Artificial Intelligence (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Image Analysis (AREA)
Abstract
Description
- An image processing system for modifying one or more images of a user to change a cardiovascular activity that is measurable from the one or more images, the system comprising:
- an image receiving section configured to receive image data comprising a plurality of image frames;
- a region selection section configured to select one or more regions where the colour of the skin of the user changes temporally in the image data, wherein the colour changes are indicative of the cardiovascular activity of the user; and
- a modifying section configured to modify the colour change of the one or more selected regions in one or more image frames to create modified image data.
- An image processing method for modifying one or more images of a user to change a cardiovascular activity that is measurable from the one or more images, the method comprising the steps of:
- receiving image data comprising a plurality of image frames;
- selecting one or more regions where the colour of the skin of the user changes temporally in the image data, wherein the colour changes are indicative of the cardiovascular activity of the user; and
- modifying the colour change of the one or more selected regions in the one or more image frames to create modified image data.
Claims (14)
Applications Claiming Priority (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB2103825.2 | 2021-03-19 | ||
| GB2103825.2A GB2604913B (en) | 2021-03-19 | 2021-03-19 | Image processing system and method |
| GB2103825 | 2021-03-19 |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20220303514A1 US20220303514A1 (en) | 2022-09-22 |
| US12219297B2 true US12219297B2 (en) | 2025-02-04 |
Family
ID=75539689
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US17/690,588 Active 2042-07-21 US12219297B2 (en) | 2021-03-19 | 2022-03-09 | Image processing system and method |
Country Status (3)
| Country | Link |
|---|---|
| US (1) | US12219297B2 (en) |
| EP (1) | EP4060617A1 (en) |
| GB (1) | GB2604913B (en) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20220327718A1 (en) * | 2021-04-13 | 2022-10-13 | Qualcomm Incorporated | Techniques for enhancing slow motion recording |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20140161421A1 (en) | 2012-12-07 | 2014-06-12 | Intel Corporation | Physiological Cue Processing |
| US20170238842A1 (en) * | 2016-02-19 | 2017-08-24 | Covidien Lp | Systems and methods for video-based monitoring of vital signs |
| US20190012531A1 (en) * | 2017-07-06 | 2019-01-10 | Wisconsin Alumni Research Foundation | Movement monitoring system |
| US20190046056A1 (en) * | 2017-08-10 | 2019-02-14 | VVVital Patent Holdings Limited | Multi-Vital Sign Detector in an Electronic Medical Records System |
| US20190197368A1 (en) * | 2017-12-21 | 2019-06-27 | International Business Machines Corporation | Adapting a Generative Adversarial Network to New Data Sources for Image Classification |
- 2021-03-19: GB application GB2103825.2A filed; granted as GB2604913B (status: active)
- 2022-02-16: EP application EP22157063.3 filed; published as EP4060617A1 (status: pending)
- 2022-03-09: US application 17/690,588 filed; granted as US12219297B2 (status: active)
Non-Patent Citations (5)
| Title |
|---|
| Combined search and examination report for corresponding GB Application No. GB2103825.2, 13 pages, dated Jan. 6, 2022. |
| Communication Pursuant to Article 94(3)EPC for corresponding EP Application No. 22157063.3, 5 pages, dated Mar. 7, 2024. |
| Extended European Search Report for corresponding EP Application No. 22157063.3, 11 pages, dated Jul. 5, 2022. |
| Shamsabadi AS, Sanchez-Matilla R, Cavallaro A., "ColorFool: Semantic Adversarial Colorization," Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1151-1160, 2020. * |
| Tsou Yun-Yun, et al., "Multi-task Learning for Simultaneous Video Generation and Remote Photoplethysmography Estimation" 15th Asian Conference on Computer Vision, vol. 12626, 16 pages, Nov. 30-Dec. 4, 2020. |
Also Published As
| Publication number | Publication date |
|---|---|
| GB2604913B (en) | 2024-08-28 |
| GB2604913A (en) | 2022-09-21 |
| US20220303514A1 (en) | 2022-09-22 |
| EP4060617A1 (en) | 2022-09-21 |
| GB202103825D0 (en) | 2021-05-05 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11883173B2 (en) | Roadside impairment sensor | |
| JP6545658B2 (en) | Estimating bilirubin levels | |
| Kotowski et al. | Validation of Emotiv EPOC+ for extracting ERP correlates of emotional face processing | |
| JP6899989B2 (en) | Emotion estimation device and emotion estimation method | |
| Guo et al. | Face in profile view reduces perceived facial expression intensity: an eye-tracking study | |
| US20110040191A1 (en) | Stress detection device and methods of use thereof | |
| EP3466324A1 (en) | Skin diagnostic device and skin diagnostic method | |
| Röhrbein et al. | How does image noise affect actual and predicted human gaze allocation in assessing image quality? | |
| US20130096397A1 (en) | Sensitivity evaluation system, sensitivity evaluation method, and program | |
| JPWO2012150657A1 (en) | Concentration presence / absence estimation device and content evaluation device | |
| Moon et al. | Perceptual experience analysis for tone-mapped HDR videos based on EEG and peripheral physiological signals | |
| JP2015179062A (en) | Freshness information output method, freshness information output device, control program | |
| US20180242898A1 (en) | Viewing state detection device, viewing state detection system and viewing state detection method | |
| JP2015229040A (en) | Emotion analysis system, emotion analysis method, and emotion analysis program | |
| US20160029938A1 (en) | Diagnosis supporting device, diagnosis supporting method, and computer-readable recording medium | |
| US20200042090A1 (en) | Information processing device, information processing method, and program | |
| US12219297B2 (en) | Image processing system and method | |
| Vuori et al. | Can eye movements be quantitatively applied to image quality studies? | |
| KR20210028200A (en) | How to assess a child's risk of neurodevelopmental disorders | |
| US20240074683A1 (en) | System and method of predicting a neuropsychological state of a user | |
| Zafar et al. | Visual methods for determining ambient illumination conditions when viewing medical images in mobile display devices | |
| KR20180061629A (en) | Evaluation method for skin condition using image and evaluation apparatus for skin condition using image | |
| CN115315217B (en) | Cognitive Impairment Diagnostic Devices and Recording Media for Cognitive Impairment Diagnostic Procedures | |
| US20240382125A1 (en) | Information processing system, information processing method and computer program product | |
| Das et al. | Detecting inner emotions from video based heart rate sensing |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| AS | Assignment |
Owner name: SONY INTERACTIVE ENTERTAINMENT INC., JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HUME, OLIVER;REEL/FRAME:059211/0479 Effective date: 20220225 |
|
| FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
| AS | Assignment |
Owner name: SONY INTERACTIVE ENTERTAINMENT INC., JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BRADLEY, TIMOTHY;REEL/FRAME:059305/0095 Effective date: 20220318 |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
| AS | Assignment |
Owner name: SONY INTERACTIVE ENTERTAINMENT INC., JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SONY INTERACTIVE ENTERTAINMENT EUROPE LIMITED;REEL/FRAME:059761/0698 Effective date: 20220425 Owner name: SONY INTERACTIVE ENTERTAINMENT EUROPE LIMITED, GREAT BRITAIN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CAPPELLO, FABIO;REEL/FRAME:059820/0812 Effective date: 20160506 Owner name: SONY INTERACTIVE ENTERTAINMENT EUROPE LIMITED, GREAT BRITAIN Free format text: ASSIGNMENT OF ASSIGNOR'S INTEREST;ASSIGNOR:CAPPELLO, FABIO;REEL/FRAME:059820/0812 Effective date: 20160506 Owner name: SONY INTERACTIVE ENTERTAINMENT INC., JAPAN Free format text: ASSIGNMENT OF ASSIGNOR'S INTEREST;ASSIGNOR:SONY INTERACTIVE ENTERTAINMENT EUROPE LIMITED;REEL/FRAME:059761/0698 Effective date: 20220425 |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: ADVISORY ACTION MAILED |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
|
| STCF | Information on status: patent grant |
Free format text: PATENTED CASE |