US10742904B2 - Multispectral image processing system for face detection

Info

Publication number: US10742904B2
Authority: US (United States)
Prior art keywords: face, pixels, NIR light, sub, image
Legal status: Active, expires (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: US15/990,519
Other versions: US20190364229A1 (en)
Inventors: Piotr Stec, Petronel Bigioi
Current assignee: Fotonation Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Fotonation Ireland Ltd
Application filed by Fotonation Ireland Ltd
Priority to US15/990,519
Assigned to FOTONATION LIMITED (assignment of assignors interest; see document for details). Assignors: STEC, PIOTR; BIGIOI, PETRONEL
Priority to EP19162767.8A
Priority to CN201910443371.XA
Publication of US20190364229A1
Application granted
Publication of US10742904B2
Status: Active (expiration adjusted)

Classifications

    • H04N5/332
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/162Detection; Localisation; Normalisation using pixel segmentation or colour matching
    • G06K9/00255
    • G06K9/00604
    • G06K9/00617
    • G06K9/2027
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/12Details of acquisition arrangements; Constructional details thereof
    • G06V10/14Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/143Sensing or illuminating at different wavelengths
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/12Details of acquisition arrangements; Constructional details thereof
    • G06V10/14Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/147Details of sensors, e.g. sensor lenses
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/166Detection; Localisation; Normalisation using acquisition arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/19Sensors therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/193Preprocessing; Feature extraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/197Matching; Classification
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/10Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
    • H04N23/11Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths for generating image signals from visible and infrared light wavelengths
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/56Cameras or camera modules comprising electronic image sensors; Control thereof provided with illuminating means
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/50Control of the SSIS exposure
    • H04N25/53Control of the integration time
    • H04N25/531Control of the integration time by controlling rolling shutters in CMOS SSIS
    • H04N5/2256
    • H04N5/3532
    • G06K2009/00939
    • G06K9/00845
    • G06K9/00885
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/59Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/15Biometric patterns based on physiological signals, e.g. heartbeat, blood flow

Definitions

  • the CPU 24 or another dedicated unit of the system 10 is capable of adjusting the image acquisition settings used in the first working mode for acquiring a properly exposed image of the same face in the different image planes provided by the sub-pixels I8 and the sub-pixels I9.
  • a properly exposed image of the tracked face can be formed in the image plane acquired from the sub-pixels I8, concurrently with a properly exposed image of the same face in the image plane acquired from the sub-pixels I9.
  • a properly exposed image of the tracked face can be formed in the image plane acquired from the sub-pixels I9, concurrently with a properly exposed image of the face in the image plane acquired from the sub-pixels I8.
  • the relative exposure level between the NIR sensor channels provided by the sub-pixels I8 and the sub-pixels I9 can be controlled by using additional filters attenuating or amplifying the wavelengths of interest.
  • the CPU 24 can further adjust the image acquisition settings for the channel provided by the sub-pixels R for properly capturing the tracked face, especially in view of the distance of the face from the system 10.
  • a lower intensity LED-emitted red light 106 and/or a lower gain of the sub-pixels R and/or a lower integration time can be set for a closer face, while a higher intensity LED-emitted red light 106 and/or a higher gain of the sub-pixels R and/or a higher integration time can be set for a farther face.
  • when returning to the first working mode, the CPU 24 or other dedicated unit is capable of restoring the image acquisition settings for the first working mode.
  • the system 10 further comprises a dedicated unit 30 for monitoring the heart pulse rate.
  • This functionality could equally be implemented in software executed by the CPU 24 .
  • the unit 30 can access the stored image data from the detector 18 operating in the second working mode, so as to have available, for each one of a sequence of image acquisition periods, the illumination signals I8(t), I9(t) and V(t) of the monitored face acquired from the three sensor channels, from which the differential signals d1(t), d2(t) and d3(t) between pairs of the acquired signals can be calculated.
  • the dilation and contraction of the blood vessels caused by the heart rhythm will cause different illumination variations of the monitored face at the different bands of the sensor channels provided by the sub-pixels I8, I9 and R.
  • each of d1(t), d2(t) and d3(t) contains a non-zero component indicative of an illumination variation of the monitored face due to the pulse rate, while components within each of the signals I8(t), I9(t) and V(t) which are due to other factors tend to mutually cancel in the calculated d1(t), d2(t) and d3(t).
  • the differential signals d1(t), d2(t) and d3(t) provide a more reliable measurement for monitoring the pulse rate than just tracking illumination changes in one wavelength image acquisition channel. Furthermore, the pulse rate will be correlated across all the differential signals d1(t), d2(t) and d3(t), while the noise will be random, which further increases the measurement accuracy.
  • using NIR light also improves the measurement, because IR light penetrates deeper into the skin and therefore permits better visualisation of the blood vessels than visible light alone. Furthermore, IR light is especially suitable for monitoring the pulse rate of a vehicle driver because, in contrast with visible light, it can substantially pass through sunglasses.
  • the role of the signal V(t) acquired from the sub-pixels R is mainly supportive, especially given that the measurement conditions can change, causing overexposure or underexposure of the images acquired through the sensor channels. If the images from one of the channels are overexposed or underexposed, the remaining two channels can be used for properly performing pulse detection.
  • frequency detection algorithms can be applied to the differential signals d1(t), d2(t) and d3(t) for monitoring the pulse rate, which can result in determining whether the pulse rate satisfies critical threshold levels or in calculating pulse rate values.
  • for example, auto- and cross-correlation methods can be used, or other signal frequency detection methods, e.g. involving Fourier transformations; a minimal sketch of this frequency-based approach is given after this list.
  • alternatively, the heart pulse monitoring based on the differential signals d1(t), d2(t) and d3(t) can be performed using artificial neural network processing.
  • the lens assembly 12 can be configured to filter and split incident radiation into spectral bands separately focused on respective different regions of sub-pixels on the same sensor or group of sensors, which can be used for multispectral image acquisition.
  • An example of such an arrangement, employing a plurality of lens barrels, is disclosed in European patent application No. EP3066690 (Ref: 10006-0035-EP-01).
  • the MSFA filtering functionality can be implemented by configuring groups of sub-pixels of the image sensor itself, for example, through suitable choice of materials, to be selectively and differently sensitive to respective different bands of incoming radiation.
  • the sensor 14 can comprise more than two groups of differently NIR-sensitive sub-pixels to properly acquire faces at more than two levels of depth within the imaged scene, such as in the case of a vehicle having more than two rows of occupants.
  • the NIR illumination source 16 can comprise a single device capable of emitting different wavelengths including at least the emission band 103 matching the filter passband 101 associated with the sub-pixels I8 and the emission band 104 matching the filter passband 102 associated with the sub-pixels I9, e.g. a laser or flash source.
  • the relative intensity of such bands within the emitted light can be controlled using spectral filters or masks included in, or arranged close to, the emission opening of the light source.
  • sub-pixels R can be replaced by sub-pixels sensitive to a different visible wavelength band, e.g. a green light band, or just by sub-pixels sensitive to white light. Nevertheless, it will be appreciated that the presence of sub-pixels for providing a visible light sensor channel is optional.
  • although the system 10 is configured to switch between the first and second working modes, it can be appreciated that the functionalities associated with these working modes can be implemented separately in dedicated image processing systems.
  • a system specifically dedicated to pulse rate monitoring can differ from the disclosed system 10 at least in that the number of sub-pixels I8 and I9 and/or the illumination intensities of the matching NIR lights can be the same.
  • the image processing functionalities of the disclosed system 10 can be implemented in the same processing unit or in a bank of processing units. Especially in the case of a vehicle DMS application, such image processing functionalities can be usefully implemented in the kind of multi-processor engine disclosed in U.S. provisional patent application No. 62/592,665 (Ref: FN-618), the disclosure of which is incorporated by reference.
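
As an illustration of the frequency-based detection referenced above, here is a minimal Python sketch of the differential approach. It is an assumption-laden illustration, not the patent's implementation: the function name, sampling-rate handling and pulse band limits are invented for the example, and the inputs are taken to be per-frame mean illumination values of the tracked face region in the I8, I9 and R channel planes.

```python
import numpy as np

def estimate_pulse_bpm(i8, i9, v, fps, lo_bpm=40.0, hi_bpm=180.0):
    """Estimate the pulse rate from per-frame face illumination signals.

    i8, i9, v -- 1-D arrays, one sample per acquisition period.
    fps       -- acquisition rate in frames per second.
    """
    # Differential signals: common-mode variations (ambient light changes,
    # subject motion) affect the channels similarly and tend to cancel.
    d1 = i8 - i9
    d2 = i8 - v
    d3 = i9 - v

    freqs = np.fft.rfftfreq(len(d1), d=1.0 / fps)
    band = (freqs >= lo_bpm / 60.0) & (freqs <= hi_bpm / 60.0)

    # The pulse component is correlated across d1, d2 and d3 while the
    # noise is random, so summing the magnitude spectra reinforces the
    # pulse peak relative to the noise floor.
    spectrum = sum(np.abs(np.fft.rfft(d - d.mean())) for d in (d1, d2, d3))
    return 60.0 * freqs[band][np.argmax(spectrum[band])]
```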

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Signal Processing (AREA)
  • Ophthalmology & Optometry (AREA)
  • Vascular Medicine (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Image Processing (AREA)
  • Measuring Pulse, Heart Rate, Blood Pressure Or Blood Flow (AREA)

Abstract

An image processing system comprises at least one image sensor comprising a plurality of sub-pixels, and configured to provide a first image plane from a group of first sub-pixels selectively sensitive to a first NIR light band and a second image plane from a group of second sub-pixels selectively sensitive to a second NIR light band. An NIR light source is capable of separately emitting first NIR light corresponding to the first NIR light band and second NIR light corresponding to the second NIR light band. The system can be configured to operate according to at least a first working mode where a face detector is configured to detect at least a first face in the first image plane and a second face in the second image plane at a spatially non-coincident location to the first face.

Description

FIELD
The present invention relates to a multispectral image processing system for face detection and applications based on such face detection.
BACKGROUND
Face detection and tracking in real-time is well known in image processing, for example as described in European Patent No. EP2052347 (Ref: FN-143). These techniques enable one or more face regions within a scene being imaged to be readily delineated and to allow for subsequent image processing based on this information.
Such image processing can include face recognition which attempts to identify individuals being imaged; auto-focussing by bringing a detected and/or selected face region into focus; or defect detection and/or correction of the face region(s).
Concerning individual identification based on face features, A. K. Jain, A. Ross, and S. Prabhakar, “An introduction to biometric recognition,” IEEE Trans. Circuits Syst. Video Technol., vol. 14, 2004 discloses that the iris of the eye is a near-ideal biometric. An image of an iris is typically best acquired in a dedicated imaging system that uses infra-red (IR) illumination, usually near infra-red (NIR) light above 700 nm.
The iris regions are typically extracted from identified eye regions and a more detailed analysis may be performed to confirm whether a valid iris pattern is detectable. For example, J. Daugman, “New methods in iris recognition,” IEEE Trans. Syst. Man. Cybern. B. Cybern., vol. 37, pp. 1167-1175, 2007 discloses a range of additional refinements which can be utilized to determine the exact shape of the iris and the eye pupil. It is also common practice to transform the iris from a polar to a rectangular co-ordinate system, although this is not necessary.
Detecting and tracking eyes or iris regions can also be used for determining gaze or a person's condition, such as fatigue or other health condition, which is especially useful in driver monitoring systems (DMS) integrated in vehicles.
Separately, most cameras and smartphones can identify specific patterns, such as ‘eye-blink’ and ‘smile’, in real-time tracked faces, and the timing of main image acquisition can be adjusted to ensure subjects within a scene are in focus, not blinking, or smiling, as disclosed in WO2007/106117 (Ref: FN-149).
A common problem when capturing images within a scene is limited system dynamic range when acquiring differently illuminated subjects. In particular, regions of acquired images corresponding to bright regions of a scene tend to be overexposed, while regions of acquired images corresponding to dark regions of a scene tend to be underexposed.
This problem can particularly affect the acquisition, with active illumination, of faces within a scene extending over a significant depth of field, such as faces of occupants disposed at different rows within a vehicle being imaged from a camera located towards the front of the vehicle, for example, near a rear-view mirror. In particular, if the exposure is set for acquiring properly exposed images of faces near to the camera (which are more illuminated by a light source), the acquired images of faces distant from the camera (which are less illuminated by the light source) tend to be underexposed. Vice versa, if the exposure is set for acquiring properly exposed images of the distant faces, the images of the nearer faces tend to be overexposed.
A known solution to acquire an image with high dynamic range (HDR) is to capture a sequence of consecutive images of the same scene at different exposure levels, for example, by varying the exposure time at which each image is acquired, wherein shorter exposure times are used to properly capture bright scene regions and longer exposure times are used to properly capture dark scene regions. The acquired images can then be combined to create a single image in which the various regions of the scene are properly exposed.
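For illustration, a minimal sketch of this bracketed-exposure fusion follows; the weighting scheme and function names are assumptions made for the example, not details taken from the patent.

```python
import numpy as np

def merge_exposures(frames, exposure_times):
    """Merge differently exposed frames of a static scene.

    frames         -- list of float arrays scaled to [0, 1].
    exposure_times -- matching list of exposure times in seconds.
    """
    acc = np.zeros_like(frames[0])
    weights = np.zeros_like(frames[0])
    for img, t in zip(frames, exposure_times):
        # Trust mid-range pixels; down-weight under/over-exposed ones.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        acc += w * img / t          # normalise to a common radiance scale
        weights += w
    return acc / np.maximum(weights, 1e-6)
```

As the next paragraph notes, this only works when the scene holds still between exposures.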
This solution can be applied quite satisfactorily to scenes with static subjects, such as landscapes, but it is impractical for capturing faces which are relatively close to the camera and which can move between consecutive image acquisitions, causing artefacts when an image of the scene is constructed. It should also be noted that it is not possible to acquire such sequences of variably exposed images using rolling shutter techniques.
From “High Dynamic Range Image Sensors,” by Abbas El Gamal, Stanford University, ISSCC′02 (http://cafe.stanford.edu/˜abbas/group/papers and pub/isscc02_tutorial.pdf) it is further known to use an HDR CMOS image sensor with spatially varying pixel sensitivity. In particular, an array of neutral density (ND) filters is deposited on the image sensor so that, in a single captured image of a scene, sensor pixels associated with darker filters can be used to acquire bright regions of the scene and sensor pixels associated with lighter filters can be used to acquire dark regions of the scene. However, this document is not concerned with face acquisition and detection across an extensive depth of field within a scene, especially when using active IR illumination.
SUMMARY
According to aspects of the present invention, there are provided image processing systems according to claims 1, 15 and 16.
Embodiments of these systems are based on employing an image sensor with multiple groups of sub-pixels, each configured to substantially simultaneously acquire, at different NIR light bands and with different sensitivity, image planes for a given scene. The image sensor is employed in cooperation with at least one active NIR illumination source capable of emitting NIR light matching the NIR light bands of the respective groups of sub-pixels.
Notably, in embodiments of the system according to claims 1 and 15, using a larger and more sensitive group of sub-pixels in cooperation with a matching higher intensity emitted NIR light allows for the acquisition of properly exposed images of faces farther from the system, which, due to their greater distance from the NIR illumination source, would otherwise tend to be poorly illuminated and therefore underexposed. Conversely, using a less sensitive group of fewer sub-pixels in cooperation with a matching lower intensity emitted NIR light allows for the concurrent acquisition of properly exposed images of faces closer to the system, which, due to their proximity to the NIR illumination source, would otherwise tend to be over-illuminated and therefore overexposed.
As such, this combination of features allows a natural balance of the exposure levels for properly acquiring faces (i.e. with a required level of face detail) at different depths into an imaged scene, thus achieving a greater acquisition dynamic range than employing a typical single wavelength image processing system.
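As a rough illustration of why this balancing works, note that the illumination an active source delivers to a face falls off approximately with the square of the distance. The back-of-envelope sketch below uses purely assumed distances and ratios (none of these numbers come from the patent) to show how sensor sensitivity and LED intensity can jointly compensate for that falloff.

```python
# Illustrative only: distances and the power ratio are assumptions.
d_near, d_far = 0.6, 1.8            # metres from the illumination source
falloff = (d_far / d_near) ** 2     # ~9x less light reaches the far face

qe_gain = 2.0                       # I8 sub-pixels ~2x as sensitive as I9
led_gain = falloff / qe_gain        # remaining gap closed by LED power

print(f"far-face light deficit: {falloff:.1f}x")
print(f"covered by sensitivity ({qe_gain:.1f}x) and LED power ({led_gain:.1f}x)")
```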
These embodiments are advantageously capable of operating in a first working mode for detecting faces at different depths within a scene with an increased accuracy, but they can also provide advantages when operating according to a second working mode for heart pulse monitoring.
Indeed, images of a given detected face, which can be acquired from the differently sensitive groups of sub-pixels, can be used to analyse a differential signal indicative of a difference in illumination between the acquired face images over a sequence of acquisition periods.
In this differential monitoring approach, superficial illumination variations of the monitored face due to factors unrelated to the heart pulse rate, such as a change in environmental illumination or motion of the monitored person, will affect the face images acquired from the differently sensitive groups of sub-pixels in the same way and will therefore tend to be mutually cancelled in the monitored differential signal.
In some embodiments, the system can switch from the first working mode to the second working mode subject to an adjustment of the acquisition settings for acquiring properly exposed images of the same face from the differently sensitive groups of sub-pixels.
Embodiments of the system of claim 16 can implement the differential pulse rate monitoring separately from the capability to properly acquire images of faces at different scene depths. As such, these embodiments can be provided without features specifically provided for compensating for the exposure levels of faces at different distances from the system, such as having a different number of sub-pixels in the differently sensitive groups of sub-pixels and/or emitting the corresponding matching NIR lights with a different intensity.
According to other aspects of the present invention, there is provided a portable electronic device or a vehicle occupant monitoring system including a system according to the above aspects.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments of the invention will now be described, by way of example, with reference to the accompanying drawings, in which:
FIG. 1 shows an image processing system according to an embodiment of the present invention;
FIG. 2 shows a multispectral filter array image sensor according to an embodiment of the present invention, having three groups of sub-pixels associated with respective different band-pass filters; and
FIG. 3 shows the frequency responses of the band-pass filters associated with the sub-pixel groups of the sensor illustrated in FIG. 2, as well as the matching LED emission bands and the relative pixel sensitivity.
DESCRIPTION OF THE EMBODIMENTS
Referring now to FIG. 1 there is shown an image processing system 10 according to an embodiment of the present invention.
The system 10, which may be integrated within a portable device, for example, a camera, a smartphone, a tablet or the like, or be integrated into a vehicle safety system such as a driver monitoring system (DMS), comprises at least one central processing unit (CPU) 24, which typically runs operating system software as well as general purpose application software. For example, in a portable device, CPU 24 can run camera applications, browser, messaging, e-mail or other apps. In a vehicle safety system, CPU 24 can run camera applications dedicated to monitoring the status of the driver or other occupants, especially for determining the health or attentiveness of an occupant.
The operating system may be set so that users must authenticate themselves to unlock the system and to gain access to applications installed on the system; or individual applications running on the system may require a user to authenticate themselves before they gain access to sensitive information.
The system 10 further comprises at least one NIR illumination source 16 and, in some embodiments, at least one visible light illumination source 17 capable of actively illuminating a scene in front of the system 10, and a lens assembly 12 capable of focusing the light reflected from the illuminated scene onto an image sensor 14.
In the embodiment, the sensor 14 is a multispectral filter array image sensor capable of substantially simultaneously acquiring multiband image planes of the illuminated scene during an image acquisition period. An overview of using multispectral filter arrays for simultaneously acquiring multiband visible and NIR images, as well as other multispectral imaging acquisition techniques, is provided for example in P. Lapray et al., “Multispectral Filter Arrays: Recent Advances and Practical Implementation,” Sensors 2014, 14(11), 21626-21659.
With reference to FIG. 2, the sensor 14 includes an array of pixels 100 and is formed by a multispectral filter array (MSFA) mounted on, or disposed close to, a conventional CMOS image sensor, so as to filter incoming light before it reaches the CMOS sensor.
The MSFA is patterned so as to resemble a typical Bayer pattern arrangement and comprises, for each sensor pixel 100: a spectral band-pass filter for passing a lower NIR light band centred around a wavelength of 875 nm to sub-pixels I8, a spectral band-pass filter for passing a higher NIR light band centred around a wavelength of 950 nm to sub-pixels I9, and a spectral filter for passing a visible bright Red light band centred around a wavelength of 700 nm to sub-pixels R.
Also, with reference to FIG. 3, each pixel 100 includes: two sub-pixels I8 which are sensitive to the NIR light band passed by a filter with a passband 101, one sub-pixel I9 which is sensitive to the NIR light band passed by a corresponding filter with a passband 102, and one sub-pixel R which is sensitive to the Red light band passed by a corresponding filter with a passband 105. Thus, the ratio between the number of sub-pixels I8 and sub-pixels I9 (as well as between the number of sub-pixels I8 and sub-pixels R) within the array of pixels 100 is 2:1.
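For illustration, if one assumes a Bayer-like 2×2 tile per pixel 100 (the exact tile layout is not specified here, so the arrangement below is a hypothetical example), the three channel planes can be separated from the raw mosaic with simple strided slicing:

```python
import numpy as np

# Assumed 2x2 tile per pixel 100 (layout is illustrative):
#     I8  I9
#     R   I8
def split_planes(raw):
    """raw: 2-D array holding the mosaiced sensor readout."""
    i8_a = raw[0::2, 0::2]   # first I8 sub-pixel of each tile
    i9   = raw[0::2, 1::2]   # I9 sub-pixel
    r    = raw[1::2, 0::2]   # R sub-pixel
    i8_b = raw[1::2, 1::2]   # second I8 sub-pixel
    # I8 contributes twice as many samples as I9 or R, matching the
    # 2:1 sub-pixel ratio described above.
    return (i8_a, i8_b), i9, r
```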
The groups of sub-pixels I8, sub-pixels I9 and sub-pixels R provide three sensor channels for substantially simultaneously acquiring, during each image acquisition period, three respective image planes of the illuminated scene at different wavelength bands.
As known from, for example, P. Lapray et al. referenced above, the quantum efficiency (QE) of the CMOS sensor substantially decreases as the wavelength of incident radiation increases.
As illustrated in FIG. 3, such a QE variation 107 results in a sensitivity of the sub-pixels I8 to the incident NIR light band passed at the filter passband 101 being substantially double the sensitivity of the sub-pixels I9 to the incident NIR light band passed at the filter passband 102.
The relative sensitivity of the sub-pixels I8 and I9 can be further controlled by combining the respective band-pass filters 101, 102 with additional absorption filters, for example disposed within the lens assembly 12 or between the lens assembly 12 and the sensor 14. The sensitivity of the sub-pixels R can be controlled relative to the sensitivity of the sub-pixels I8 and I9 by using ND filters to match the range of the NIR pixels.
The lens assembly 12 is capable of focusing radiation towards the sensor 14 at all the passbands 101, 102, 105. For example, the lens of the assembly 12 can be of an apochromatic type tuned to the pass-bands 101, 102, 105. This results in properly focused images of the illuminated scene on the sensor plane for all the wavelengths of interest.
With reference back to FIG. 3, the NIR illumination source 16 comprises at least a first LED, or an array of first LEDs, having a NIR emission band 103 centered around the 875 nm wavelength and matching the passband 101, and a second LED, or an array of second LEDs, having a NIR emission band 104 centred around the 950 nm wavelength and matching the passband 102.
In the embodiment, the passbands 101 and 102 are narrower and sharper than the corresponding LED emission bands 103 and 104. In this way, although the LED emission bands 103 and 104 partially overlap at their edges, the LED-emitted NIR light passed by the respective filter and directed to the sub-pixels I8 is properly separated from the LED-emitted NIR light passed by the filter and directed to the sub-pixels I9. This results in a cross-talk reduction between the sensor channels provided by the sub-pixels I8 and I9.
Nevertheless, in the context of the present disclosure, “to match” encompasses all cases where the LED emission bands 103, 104 include wavelengths falling within the passbands 101, 102, including the case where the emission bands 103, 104 substantially coincide with or are narrower than the respective passbands 101, 102.
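To illustrate numerically why narrow passbands suppress cross-talk even when the emission bands overlap at their edges, the following sketch estimates the leakage of each LED into the other channel's filter. The centre wavelengths come from the text above, but the Gaussian emission shapes, FWHMs and filter widths are assumed values chosen for the example.

```python
import numpy as np

wl = np.linspace(800.0, 1050.0, 2501)      # wavelength grid in nm

def gaussian_emission(centre_nm, fwhm_nm):
    sigma = fwhm_nm / 2.355
    return np.exp(-0.5 * ((wl - centre_nm) / sigma) ** 2)

def box_passband(centre_nm, width_nm):
    half = width_nm / 2.0
    return ((wl >= centre_nm - half) & (wl <= centre_nm + half)).astype(float)

led_103 = gaussian_emission(875.0, 40.0)   # emission band 103 (assumed FWHM)
led_104 = gaussian_emission(950.0, 40.0)   # emission band 104 (assumed FWHM)
pass_101 = box_passband(875.0, 25.0)       # passband 101 (assumed width)
pass_102 = box_passband(950.0, 25.0)       # passband 102 (assumed width)

# Uniform wavelength grid, so plain sums stand in for integrals.
leak_104_into_i8 = (led_104 * pass_101).sum() / led_104.sum()
leak_103_into_i9 = (led_103 * pass_102).sum() / led_103.sum()
print(f"LED 104 light passed by filter 101: {leak_104_into_i8:.4%}")
print(f"LED 103 light passed by filter 102: {leak_103_into_i9:.4%}")
```

With these assumed widths the cross-channel leakage is negligible, which is the effect the narrow, sharp passbands are intended to achieve.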
The first and second LEDs can be driven at different power levels by dedicated circuitry controllable by the CPU 24, in such a way that the intensity of the emitted NIR light 103 is higher than the intensity of the emitted NIR light 104. As such, the first and second LEDs can be separately controlled with less leakage and less overlap between their emission bands 103, 104, thus further improving the cross-talk reduction between channels.
The visible light illumination source 17 includes an LED, or an array of LEDs, having a bright Red light emission band 106 centred around the 700 nm wavelength and matching the passband 105. Preferably, the passband 105 is narrower and sharper than the corresponding LED emission band 106.
With reference now back to FIG. 1, image data acquired, at each acquisition period, from the multiband channels provided by the sub-pixels I8, I9 and R can be written into a system memory 22 across the system bus 26 as required either by applications being executed by the CPU 24 or other dedicated processing blocks which have access to the image sensor 14 and/or memory 22.
In the embodiment, the system 10 further comprises a dedicated face detector 18 for identifying a face region from the image planes acquired through the multiband sensor channels. This functionality could equally be implemented in software executed by the CPU 24. The data for the identified faces may also be stored in the system memory 22 and/or other memories such as secure memory or databases belonging to or separate from the system 10.
In particular, the system 10 is especially adapted to operate face detection according to a first working mode, where the detector 18 is used for detecting and tracking faces at different depths within the illuminated scene using the NIR sensor channels provided by the sub-pixels I8 and the sub-pixels I9.
Indeed, the lower sensitivity and number of sub-pixels I9 and the lower intensity level of the emitted NIR light 104 cooperate to acquire properly exposed images of faces near to the system 10, such as the face of a vehicle driver and/or faces of occupants beside the driver, while the sub-pixels I8 will be mostly unaffected by the NIR light 104.
Concurrently, the higher sensitivity and number of sub-pixels I8 and the higher intensity level of the emitted NIR light 103 cooperate to acquire properly exposed images of faces distant from the system 10, such as faces of vehicle occupants behind the driver, while the sub-pixels I9 will be mostly unaffected by the NIR light 103.
Notably, having a larger number of sub-pixels I8 with respect to the number of sub-pixels I9 improves the resolution of the acquired images of distant faces, which appear smaller and whose reflected NIR light must travel a greater distance to reach the system 10 than that of near faces.
Ultimately, this combination of features enables the acquisition of both distant and near faces with a required level of detail, so that they can be accurately identified by the detector 18.
For example, the detector 18 (or a pre-processor) can implement a dedicated de-mosaicing (de-bayering) module for reconstructing a full image from the image planes acquired from the sensor IR channels. This module is aware of the different sensitivities and the different NIR bands 101, 102 associated with the sub-pixels I8 and the sub-pixels I9, and it can rely on a minimum maintained level of cross-correlation between the channels. It will be appreciated that, in obtaining de-mosaiced images for near faces, the image components acquired in the NIR band 101 can even be ignored. This functionality could equally be implemented in software executed by the CPU 24 or another dedicated unit within the system 10.
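A highly simplified sketch of such sensitivity-aware reconstruction is given below, building on the hypothetical split_planes() layout sketched earlier. Nearest-neighbour upsampling and the fixed 2:1 sensitivity ratio are illustrative assumptions; the patent does not disclose its de-mosaicing module at this level of detail.

```python
import numpy as np

SENSITIVITY_RATIO = 2.0  # I8 vs I9, per the QE discussion above (assumed fixed)

def upsample2x(plane):
    """Nearest-neighbour upsampling; a real pipeline would interpolate."""
    return plane.repeat(2, axis=0).repeat(2, axis=1)

def reconstruct_nir(i8_pair, i9):
    """Build co-registered, sensitivity-normalised full-resolution planes."""
    i8_a, i8_b = i8_pair
    i8_full = (upsample2x(i8_a) + upsample2x(i8_b)) / 2.0
    i9_full = upsample2x(i9) * SENSITIVITY_RATIO  # bring onto a common scale
    return i8_full, i9_full
```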
Other image signal processing techniques, such as gamma correction and dynamic range compression, can be customised for the sensor 14 and applied to the acquired images, taking the properties of the sensor 14 into account so as to properly render faces at different distances.
In the embodiment, the detector 18 is further configured for identifying, within the detected distant and/or near faces, one or more eye regions and/or iris regions. This functionality could equally be implemented in software executed by the CPU 24. The data for the identified one or more iris regions may also be stored in the system memory 22 and/or other memories belonging to or separate from the system 10.
As such, the identified iris regions can be used as an input for a biometric authentication unit (BAU) 20. Preferably, the BAU 20 is configured for extracting an iris code from the received identified iris regions, and it may store this code in the memory 22 and/or other memories or databases belonging to or separate from the system 10. Further, the BAU 20 is preferably configured to compare the received one or more iris regions with reference iris region(s) associated with one or more predetermined subjects (such as an owner of a vehicle and members of his family), which can be stored in memory 22, within secure memory in the BAU 20 or in any location accessible to the BAU 20.
An exemplary way for performing iris code extraction and comparison between iris regions is disclosed in WO2011/124512 (Ref: FN-458) and this involves a comparison between two image templates using a master mask to select corresponding codes from the templates. The master mask excludes blocks from the matching process and/or weights blocks according to their known or expected reliability.
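For illustration only, a common way to compare two binary iris codes under such a mask is a masked fractional Hamming distance, sketched below in Python; the array representation and the threshold value are assumptions of this sketch, and the block weighting described in WO2011/124512 is not reproduced here.

import numpy as np

def masked_hamming_distance(code_a: np.ndarray, code_b: np.ndarray,
                            master_mask: np.ndarray) -> float:
    """Fractional Hamming distance between two binary iris codes, counting
    only the bit positions that the master mask marks as reliable."""
    valid = master_mask.astype(bool)
    disagreements = np.logical_xor(code_a, code_b) & valid
    return float(disagreements.sum()) / max(int(valid.sum()), 1)

# A match is typically declared when the distance falls below a tuned
# threshold; 0.32 is an illustrative figure, not taken from the disclosure.
# is_match = masked_hamming_distance(probe, reference, master_mask) < 0.32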
Other applications can benefit from the improved face detection provided by the system 10 operating in the first working mode, among them fatigue detection, gaze detection, facial emotion detection, face auto-focusing, and defect detection and/or correction.
The system of FIG. 1 can also operate according to a second working mode for heart pulse detection. As is known, the dilation and contraction of the blood vessels in rhythm with the heart cause a periodic variation in the colour of illuminated skin. Pulse detection applications typically monitor this periodic colour variation in skin portions of a tracked detected face, using only one visible colour acquisition channel.
An example of such an application is the Webcam Pulse Detector, which works in cooperation with a PC webcam (https://lifehacker.com/the-webcam-pulse-detector-shows-your-life-signs-using-y-1704207849). Using visible light for pulse detection is often unreliable, and detectability may vary with skin colour. Furthermore, the reliability of the pulse detection is low because detected changes in colour can be affected by superficial skin colour changes due to factors unrelated to the pulse rate, such as environmental illumination and motion of the monitored person. For example, the skin shade varies across the face, and if the person moves, those variations will exceed the variations due to the pulse rate, making the detection unreliable. Eulerian Video Magnification can be applied to amplify the periodic colour variation visible in the face over consecutive frames of a video sequence (http://people.csail.mit.edu/mrub/vidmag/). Alternatively, neural network (NN) processing can improve the accuracy of camera-based pulse detection during natural (i.e. not controlled) human-computer interaction, as disclosed in "A Machine Learning Approach to Improve Contactless Heart Rate Monitoring Using a Webcam" by Hamad Monkaresi et al. Here, visible light is used, and the acquired visual signal is not used directly for pulse detection but is first processed by independent component analysis.
In the second working mode of the present embodiment, the detector 18 is used for detecting and tracking a face of a person using the NIR sensor channels provided by the sub-pixels I8 and the sub-pixels I9, as well as the red light channel provided by the sub-pixels R as a support.
The switching from the first working mode to the second working mode can occur periodically or be triggered by a specific command/event.
When switching from the first to the second working mode, the CPU 24 or another dedicated unit of the system 10 adjusts the image acquisition settings used in the first working mode so as to acquire a properly exposed image of the same face in each of the image planes provided by the sub-pixels I8 and the sub-pixels I9.
In particular, when the face tracked for pulse monitoring is near to the system 10 (such as in the case where the face belongs to a vehicle driver or an occupant beside the driver), one or a combination of the following adjustments is performed:
    • decrease the intensity of the LED-emitted NIR light 103;
    • decrease the gain of the sub-pixels I8;
    • decrease the integration (exposure) time.
Note that using a different integration time for the sub-pixels I8, I9 and R is typically not possible with a rolling shutter exposure.
In this way, a properly exposed image of the tracked face can be formed in the image plane acquired from the sub-pixels I8, concurrently with a properly exposed image of the same face in the image plane acquired from the sub-pixels I9.
When the face tracked for pulse monitoring is far from the system 10 (such as in the case where the face belongs to a vehicle occupant behind the driver), one or a combination of the following adjustments is performed:
    • increase the intensity of the LED-emitted NIR light 104;
    • increase the gain of the sub-pixels I9;
    • increase the integration time.
In this way, a properly exposed image of the tracked face can be formed in the image plane acquired from the sub-pixels I9, concurrently with a properly exposed image of the face in the image plane acquired from the sub-pixels I8.
Additionally or alternatively, the relative exposure level between the NIR sensor channels provided by the sub-pixels I8 and the sub-pixels I9 can be controlled by using additional filters attenuating or amplifying the wavelengths of interest.
The CPU 24 can further adjust the image acquisition settings for the channel provided by the sub-pixels R so as to properly capture the tracked face, especially in view of the distance of the face from the system 10. In particular, a lower intensity of the LED-emitted red light 106 and/or a lower gain of the sub-pixels R and/or a lower integration time can be set for a closer face, while a higher intensity of the LED-emitted red light 106 and/or a higher gain of the sub-pixels R and/or a higher integration time can be set for a farther face.
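For illustration only, the following non-limiting Python sketch summarises the adjustments above as a pure function over a settings record; the AcquisitionSettings fields and the scale factors are assumptions of this sketch, as the present disclosure does not specify a control API or particular values.

from dataclasses import dataclass, replace

@dataclass
class AcquisitionSettings:
    nir_103_intensity: float    # LED intensity for the band matching sub-pixels I8
    nir_104_intensity: float    # LED intensity for the band matching sub-pixels I9
    gain_i8: float
    gain_i9: float
    integration_time_ms: float  # shared by all channels under a rolling shutter

def enter_pulse_mode(s: AcquisitionSettings, face_is_near: bool) -> AcquisitionSettings:
    """Re-balance the NIR channels so the tracked face is properly exposed in
    both the I8 and I9 image planes (second working mode)."""
    if face_is_near:
        # Near face: tame the more sensitive, more numerous I8 channel.
        return replace(s, nir_103_intensity=s.nir_103_intensity * 0.5,
                       gain_i8=s.gain_i8 * 0.5,
                       integration_time_ms=s.integration_time_ms * 0.8)
    # Far face: boost the less sensitive I9 channel.
    return replace(s, nir_104_intensity=s.nir_104_intensity * 2.0,
                   gain_i9=s.gain_i9 * 2.0,
                   integration_time_ms=s.integration_time_ms * 1.25)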
When returning to the first working mode, the CPU 24 or other dedicated unit restores the image acquisition settings of the first working mode.
With reference now to FIG. 1, the system 10 further comprises a dedicated unit 30 for monitoring the heart pulse rate. This functionality could equally be implemented in software executed by the CPU 24.
In particular, the unit 30 can access the stored image data from the detector 18 operating in the second working mode, so as to have available for each one of a sequence of image acquisition periods:
    • the image data of the monitored face from the sub-pixels I8;
    • the image data of the monitored face from the sub-pixels I9; and
    • the image data of the monitored face from the sub-pixels R.
Thus, when all three sensor channels are available, the unit 30 can track the evolution over time of the following three differential signals:
d1(t) = |V(t) − I8(t)|
d2(t) = |V(t) − I9(t)|
d3(t) = |I9(t) − I8(t)|
wherein I8(t) is a time signal indicative of the illumination of the monitored face as acquired over time from the sub-pixels I8, I9(t) is a time signal indicative of the illumination of the monitored face as acquired over time from the sub-pixels I9, and V(t) is a time signal indicative of the illumination of the monitored face as acquired over time from the sub-pixels R.
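For illustration only, the following non-limiting Python sketch shows how the unit 30 might form these signals; reducing each image plane to a scalar by taking the mean over the tracked face box is an assumption of this sketch, as the present disclosure leaves the exact reduction open.

import numpy as np

def face_illumination(plane: np.ndarray, box: tuple) -> float:
    """Mean illumination of the tracked face region in one image plane;
    box = (top, bottom, left, right) as reported by the face detector."""
    t, b, l, r = box
    return float(plane[t:b, l:r].mean())

def differential_signals(i8, i9, v):
    """d1(t), d2(t), d3(t) per the formulas above, given per-period samples of
    I8(t), I9(t) and V(t) collected over a sequence of acquisition periods."""
    i8, i9, v = map(np.asarray, (i8, i9, v))
    return np.abs(v - i8), np.abs(v - i9), np.abs(i9 - i8)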
Since the skin penetration depth depends on the wavelength of the incident radiation, the dilation and contraction of the blood vessels caused by the heart rhythm will cause different illumination variations of the monitored face in the different bands of the sensor channels provided by the sub-pixels I8, I9 and R. On the other hand, superficial illumination changes due to other factors, e.g. a variation in the environmental illumination or face motion, tend to affect these multiband channels in substantially the same way.
Thus, each of d1(t), d2(t) and d3(t) contains a non-zero component indicative of an illumination variation of the monitored face due to the pulse rate, while components within each of the signals I8(t), I9(t) and V(t) which are due to other factors tend to mutually cancel in the calculated d1(t), d2(t) and d3(t).
As such, the differential signals d1(t), d2(t) and d3(t) provide a more reliable measurement for monitoring the pulse rate than tracking illumination changes in a single-wavelength acquisition channel. Furthermore, the pulse rate will be correlated across all the differential signals d1(t), d2(t) and d3(t), while the noise will be random, which further increases the measurement accuracy.
Using NIR light also improves the measurement, because IR light penetrates the skin more deeply and therefore permits better visualization of the blood vessels than visible light alone. Furthermore, IR light is especially suitable for monitoring the pulse rate of a vehicle driver because, in contrast with visible light, it can substantially pass through sunglasses.
The role of the red channel signal V(t) is mainly supportive, especially in view of the fact that the measurement conditions can change, causing overexposure or underexposure of the images acquired through the sensor channels. In case the images from one of the channels are overexposed or underexposed, the two remaining channels can be used to properly perform pulse detection.
Frequency detection algorithms can be applied to the differential signals d1(t), d2(t) and d3(t) for monitoring the pulse rate; this can result in determining whether the pulse rate satisfies critical threshold levels, or in a calculation of the pulse rate values. For example, auto- and cross-correlation methods can be used, or other signal frequency detection methods, e.g. involving Fourier transformations.
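For illustration only, the following non-limiting Python sketch applies one such Fourier-based frequency detector to a differential signal; the fixed frame rate and the 40-180 bpm physiological band are assumptions of this sketch.

import numpy as np

def estimate_pulse_bpm(sig, fps: float, lo_bpm: float = 40.0,
                       hi_bpm: float = 180.0) -> float:
    """Estimate the pulse rate from one differential signal by locating the
    dominant spectral peak inside a plausible heart-rate band."""
    sig = np.asarray(sig, dtype=float)
    sig = sig - sig.mean()                        # remove the DC component
    spectrum = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(sig.size, d=1.0 / fps)
    band = (freqs >= lo_bpm / 60.0) & (freqs <= hi_bpm / 60.0)
    return float(freqs[band][np.argmax(spectrum[band])] * 60.0)

# Since the pulse is correlated across d1, d2 and d3 while noise is not, the
# three estimates can be fused, e.g.:
# bpm = np.median([estimate_pulse_bpm(d, fps) for d in (d1, d2, d3)])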
Alternatively, heart pulse monitoring based on the differential signals d1(t), d2(t) and d3(t) can be performed using artificial neural network processing.
In variants of the above described embodiment, instead of using a MSFA mounted on the CMOS sensor, the lens assembly 12 can be configured to filter and split incident radiation into spectral bands separately focused on respective different regions of sub-pixels on a same sensor or group of sensors, which can be used for multispectral image acquisition. An example of such an arrangement, employing a plurality of lens barrels, is disclosed in European patent application No. EP3066690 (Ref: 10006-0035-EP-01).
Alternatively, the MSFA filtering functionality can be implemented by configuring groups of sub-pixels of the image sensor itself, for example, through suitable choice of materials, to be selectively and differently sensitive to respective different bands of incoming radiation.
In other variants of the above described embodiment, the sensor 14 can comprise more than two groups of differently NIR-sensitive pixels to properly acquire faces at more than two levels of depth within the imaged scene, such as in the case of a vehicle having more than two rows of occupants.
In other variants of the disclosed embodiment, the NIR illumination source 16 can comprise a single device capable of emitting different wavelengths including at least the emission band 103 matching the filter passband 101 associated with the sub-pixels I8 and the emission band 104 matching the filter passband 102 associated with the sub-pixels I9, e.g. a laser or flash source. The relative intensity of such bands within the emitted light can be controlled using spectral filters or masks included in or arranged close to the emission opening of the light source.
In other variants of the disclosed embodiment, sub-pixels R can be replaced by sub-pixels sensitive to a different visible wavelength band, e.g. a green light band, or just by sub-pixels sensitive to white light. Nevertheless, it will be appreciated that the presence of sub-pixels for providing a visible light sensor channel is optional.
Although in the disclosed embodiment the system 10 is configured to switch between the first and second working modes, it can be appreciated that the functionalities associated with these working modes can be implemented separately in dedicated/separated image processing systems.
In this case, a system specifically dedicated to pulse rate monitoring can differ from the disclosed system 10 at least in that the number of sub-pixels I8 and I9 and/or the illumination intensities of the matching NIR lights can be the same.
It can be further appreciated that the image processing functionalities of the disclosed system 10 can be implemented in the same processing unit, or in a bank of processing units. Especially in the case of application in a vehicle DMS, such image processing functionalities can be usefully implemented in the kind of multi-processor engine disclosed in U.S. provisional patent application No. 62/592,665 (Ref: FN-618), the disclosure of which is incorporated by reference.

Claims (19)

The invention claimed is:
1. An image processing system comprising:
at least one image sensor comprising a plurality of pixels, each pixel comprising a plurality of sub-pixels, and configured to provide, during an image acquisition period, a first image plane from a group of first sub-pixels selectively sensitive to a first NIR light band and a second image plane from a group of second sub-pixels selectively sensitive to a second NIR light band, wherein the sensitivity of the first sub-pixels to the first NIR light is greater than the sensitivity of the second sub-pixels to the second NIR light band, and the number of first sub-pixels is greater than the number of second sub-pixels;
at least one NIR light source capable of separately emitting first NIR light corresponding to the first NIR light band and second NIR light corresponding to the second NIR light band, the first NIR light having a higher intensity than the second NIR light; and
a face detector configured to detect at least a first face from the first image plane and a second face from the second image plane, respectively, wherein the first face is a face of a first person and the second face is a face of a second person, different from the first person.
2. The image processing system of claim 1, comprising first spectral band-pass filters configured to pass light within the first NIR light band towards the first sub-pixels and second spectral band-pass filters configured to pass light within the second NIR light band towards the second sub-pixels.
3. The image processing system of claim 2, wherein said image sensor comprises a multispectral filter array including the first and second band-pass filters.
4. The image processing system of claim 1, wherein the first NIR light band and the second NIR light band are centered around a first wavelength and a second wavelength, respectively, the first wavelength being lower than the second wavelength.
5. The image processing system of claim 4, wherein the first wavelength is 875 nm and the second wavelength is 950 nm.
6. The image processing system of claim 1, wherein the at least one light source comprises at least one first LED configured to generate the first NIR light and at least one second LED configured to generate the second NIR light.
7. The image processing system of claim 1, further comprising an iris detector configured to detect at least one iris within the detected first face and to detect at least one iris within the detected second face.
8. The image processing system of claim 7, further comprising a biometric authentication unit configured to identify one or more subjects based on the at least one iris detected within the first face or the at least one iris detected within the second face.
9. The image processing system of claim 1, wherein the system is configured to operate in a first working mode where the face detector is configured to detect the first face and the second face at spatially non-coincident locations within said first and second image planes respectively.
10. The image processing system of claim 9, being configured, when switching from the first working mode to a second working mode, to adjust:
at least one of the intensity of the first NIR light, the gain of the first sub-pixels and the image acquisition period, in such a way that the first face is properly exposed within the first image plane; or
at least one of the intensity of the second NIR light, the gain of the second sub-pixels and the image acquisition period, in such a way that the second face is properly exposed within the second image plane.
11. The image processing system of claim 1, wherein the system is configured to operate in a second working mode where the face detector is configured to detect the first face and the second face at a spatially coincident location within said first and second image planes.
12. The image processing system of claim 11, further comprising a heart rate pulse monitoring unit configured to analyse at least a first differential signal indicative of a difference in illumination between the detected first and second faces over a sequence of image acquisition periods.
13. The image processing system of claim 12, wherein the at least one image sensor is further configured to acquire, at each image acquisition period, a third image plane from a group of third sub-pixels of the array of pixels, the third sub-pixels being selectively sensitive to a visible light band,
the system further comprising at least one visible light source capable of emitting light corresponding to the visible light band,
the face detector being configured to detect a face within the third image plane at a location coincident with one of the first and second faces, and
the heart rate pulse monitoring unit is further configured to analyse at least a second differential signal indicative of a difference in illumination between the detected third face and at least one of the detected first and second faces, over said sequence of image acquisition periods.
14. The image processing system of claim 1 wherein said at least one image sensor is arranged to operate in a rolling shutter mode where each of said image planes is acquired for a common integration time.
15. One of a portable electronic device or a vehicle occupant monitoring system including the image processing system according to claim 1.
16. An image processing system comprising:
at least one image sensor comprising a plurality of pixels, each pixel comprising a plurality of sub-pixels, and configured to provide, during an image acquisition period, a first image plane from a group of first sub-pixels selectively sensitive to a first NIR light band and a second image plane from a group of second sub-pixels selectively sensitive to a second NIR light band, wherein the sensitivity of the first sub-pixels to the first NIR light is greater than the sensitivity of the second sub-pixels to the second NIR light band, and the number of first sub-pixels is greater than the number of second sub-pixels;
at least one NIR light source capable of separately emitting first NIR light corresponding to the first NIR light band and second NIR light corresponding to the second NIR light band, the first NIR light having a higher intensity than the second NIR light; and
a face detector;
the system being configured to operate according to at least a first working mode where the face detector is configured to detect at least a first face in the first image plane and a second face in the second image plane at a spatially non-coincident location to the first face, wherein the first face is a face of a first person and the second face is a face of a second person, different from the first person.
17. One of a portable electronic device or a vehicle occupant monitoring system including the image processing system according to claim 16.
18. An image processing system comprising:
at least one image sensor comprising a plurality of pixels, each pixel comprising a plurality of sub-pixels, and configured to provide, during an image acquisition period, a first image plane from a group of first sub-pixels selectively sensitive to a first NIR light band and a second image plane from a group of second sub-pixels selectively sensitive to a second NIR light band;
at least one NIR light source capable of separately emitting first NIR light corresponding to the first NIR light band and second NIR light corresponding to the second NIR light band, the first NIR light having a higher intensity than the second NIR light;
a face detector configured to detect a first face from the first image plane and a second face from the second image plane at a spatially coincident location, wherein the first face is a face of a first person and the second face is a face of a second person, different from the first person; and
a heart rate pulse monitoring unit configured to analyse at least a differential signal indicative of a difference in illumination between the detected first and second faces over a sequence of image acquisition periods.
19. One of a portable electronic device or a vehicle occupant monitoring system including the image processing system according to claim 18.

Priority Applications (3)

Application Number Priority Date Filing Date Title
US15/990,519 US10742904B2 (en) 2018-05-25 2018-05-25 Multispectral image processing system for face detection
EP19162767.8A EP3572975B1 (en) 2018-05-25 2019-03-14 A multispectral image processing system for face detection
CN201910443371.XA CN110532849A (en) 2018-05-25 2019-05-24 Multi-spectral image processing system for face detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US15/990,519 US10742904B2 (en) 2018-05-25 2018-05-25 Multispectral image processing system for face detection

Publications (2)

Publication Number Publication Date
US20190364229A1 US20190364229A1 (en) 2019-11-28
US10742904B2 true US10742904B2 (en) 2020-08-11

Family

ID=65812170

Family Applications (1)

Application Number Title Priority Date Filing Date
US15/990,519 Active 2038-06-09 US10742904B2 (en) 2018-05-25 2018-05-25 Multispectral image processing system for face detection

Country Status (3)

Country Link
US (1) US10742904B2 (en)
EP (1) EP3572975B1 (en)
CN (1) CN110532849A (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220152989A1 (en) * 2019-03-29 2022-05-19 Sekisui Chemical Co., Ltd. Laminated glass and vehicle system
US11046327B2 (en) 2019-04-09 2021-06-29 Fotonation Limited System for performing eye detection and/or tracking
US11157761B2 (en) * 2019-10-22 2021-10-26 Emza Visual Sense Ltd. IR/Visible image camera with dual mode, active-passive-illumination, triggered by masked sensor to reduce power consumption
CN113542529B (en) * 2020-04-21 2024-03-12 安霸国际有限合伙企业 940NM LED flash synchronization for DMS and OMS
CN112507930B (en) * 2020-12-16 2023-06-20 华南理工大学 Method for improving human face video heart rate detection by utilizing illumination equalization method
GB2609914A (en) * 2021-08-12 2023-02-22 Continental Automotive Gmbh A monitoring system and method for identifying objects
CN113842128B (en) * 2021-09-29 2023-09-26 北京清智图灵科技有限公司 Non-contact heart rate detection device based on multiple filtering and mixed amplification


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160019421A1 (en) * 2014-07-15 2016-01-21 Qualcomm Incorporated Multispectral eye analysis for identity authentication
WO2016020147A1 (en) * 2014-08-08 2016-02-11 Fotonation Limited An optical system for an image acquisition device

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070147811A1 (en) * 2005-12-26 2007-06-28 Funai Electric Co., Ltd. Compound-eye imaging device
WO2007106117A2 (en) 2006-02-24 2007-09-20 Fotonation Vision Limited Method and apparatus for selective rejection of digital images
EP2052347B1 (en) 2006-08-11 2011-04-13 Tessera Technologies Ireland Limited Real-time face tracking in a digital image acquisition device
US8345936B2 (en) * 2008-05-09 2013-01-01 Noblis, Inc. Multispectral iris fusion for enhancement and interoperability
WO2011124512A2 (en) 2010-04-09 2011-10-13 Donald Martin Monro Image template masking
US20130228687A1 (en) * 2010-09-17 2013-09-05 Centre National De La Recherche Scientifique-Cnrs Spectral band-pass filter having high selectivity and controlled polarization
US20130329101A1 (en) * 2012-06-07 2013-12-12 Industry-Academic Cooperation, Yonsei University Camera system with multi-spectral filter array and image processing method thereof
US20140153823A1 (en) * 2012-11-30 2014-06-05 Industry-Academic Cooperation Foundation, Yonsei University Method and apparatus for processing image
EP3066690A1 (en) 2013-11-07 2016-09-14 Pelican Imaging Corporation Methods of manufacturing array camera modules incorporating independently aligned lens stacks
US20150304535A1 (en) * 2014-02-21 2015-10-22 Samsung Electronics Co., Ltd. Multi-band biometric camera system having iris color recognition
US20160092731A1 (en) * 2014-08-08 2016-03-31 Fotonation Limited Optical system for an image acquisition device
US20170344793A1 (en) * 2014-10-22 2017-11-30 Veridium Ip Limited Systems and methods for performing iris identification and verification using mobile devices
US20170330025A1 (en) * 2016-05-16 2017-11-16 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and non-transitory computer-readable storage medium
US20170366761A1 (en) * 2016-06-17 2017-12-21 Fotonation Limited Iris image acquisition system

Non-Patent Citations (15)

* Cited by examiner, † Cited by third party
Title
"Face Recognition Across the Imaging Spectrum", 12 February 2016, SPRINGER, article THIRIMACHOS BOURLAI: "Face Recognition Across the Imaging Spectrum - preface and index", pages: v - ix, XP055628016, DOI: 10.1007/978-3-319-28501-6
A. K. Jain, A. Ross, and S. Prabhakar, "An introduction to biometric recognition," IEEE Trans. Circuits Syst. Video Technol., vol. 14, 2004, 17 pages.
Abbas El Gamal"High Dynamic Range Image Sensors," Stanford University, ISSCC'02 (www.cafe.stanford.edu/˜abbas/group/papers_and_pub/isscc02_tutorial.pdf) 62 pages.
Bigioi, P., U.S. Appl. No. 62/592,665 titled "Peripheral processing device", filed Nov. 30, 2017, 30 pages.
European Patent Office, "Extended European Search Report" dated Oct. 11, 2019 in EP Patent Application No. 19162767.8 filed Mar. 14, 2019 and titled "A Multispectral Image Processing System for Face Detection", 10 pages.
Hamad Monkaresi et al., "A Machine Learning Approach to Improve Contactless Heart Rate Monitoring Using a Webcam".
J. Daugman, "New methods in iris recognition," IEEE Trans. Syst. Man. Cybern. B. Cybern., vol. 37, pp. 1167-1175, 2007.
Nayar S K et al., "High Dynamic Range Imaging: Spatially Varying Pixel Exposures", Proceedings 2000 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2000), Hilton Head Island, SC, Jun. 13-15, 2000, Los Alamitos, CA: IEEE Comp. Soc., pp. 472-479, XP002236923, ISBN: 978-0-7803-6527-8.
P. Lapray et. al, "Multispectral Filter Arrays: Recent Advances and Practical Implementation," Sensors 2014, 14(11), 21626-21659.
Pereira Manuela, et al., "Automatic face recognition in HDR imaging", Proceedings of SPIE, vol. 9138, May 15, 2014, pp. 913804-1 to 913804-10, XP060030332, DOI: 10.1117/12.2054539, ISBN: 978-1-62841-730-2.
Thirimachos Bourlai: "Face Recognition Across the Imaging Spectrum-preface and index" in: "Face Recognition Across the Imaging Spectrum", Feb. 12, 2016 (Feb. 12, 2016), Springer, XP055628016, pp. v-ix, DOI: 10.1007/978-3-319-28501-6.
www.lifehacker.com/the-webcam-pulse-detector-shows-your-life-signs-using-y-1704207849, 2 pages.
www.people.csail.mit.edu/mrub/vidmag, 3 pages.

Also Published As

Publication number Publication date
US20190364229A1 (en) 2019-11-28
CN110532849A (en) 2019-12-03
EP3572975A1 (en) 2019-11-27
EP3572975B1 (en) 2024-01-24
EP3572975C0 (en) 2024-01-24

Similar Documents

Publication Publication Date Title
US10742904B2 (en) Multispectral image processing system for face detection
US20210334526A1 (en) Living body detection device, living body detection method, and recording medium
US9152850B2 (en) Authentication apparatus, authentication method, and program
JP5145555B2 (en) Pupil detection method
US10452910B2 (en) Method of avoiding biometrically identifying a subject within an image
CN109661668B (en) Image processing method and system for iris recognition
JP5018653B2 (en) Image identification device
WO2005002441A1 (en) Organism eye judgment method and organism eye judgment device
KR20070038536A (en) Method and system for reducing artifacts in image detection
KR20180134280A (en) Apparatus and method of face recognition verifying liveness based on 3d depth information and ir information
JP6381654B2 (en) Gaze detection device
CN111132599B (en) Image acquisition with reduced reflections
KR20220052828A (en) Biometric authentication apparatus and biometric authentication method
JP3848953B2 (en) Living body eye determination method and living body eye determination device
JP4527088B2 (en) Living body eye determination method and living body eye determination device
KR101635602B1 (en) Method and apparatus for iris scanning
KR20220059417A (en) Compact system and method for iris recognition
JP2018101289A (en) Biometric authentication device, biometric authentication system, biometric authentication program, and biometric authentication method
JP6819653B2 (en) Detection device
CN109426762B (en) Biological recognition system, method and biological recognition terminal
David et al. Robust Iris Image Recognition System Using Normalization Process and Neural Network Techniques
KR101487801B1 (en) Method for detecting sleepiness
JP2021047929A (en) Information processor
CA2801610A1 (en) Device for producing images of irises of the eyes

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: FOTONATION LIMITED, IRELAND

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:STEC, PIOTR;BIGIOI, PETRONEL;SIGNING DATES FROM 20180622 TO 20180627;REEL/FRAME:046406/0192

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4